Simple Big Data Framework but Main Article Perhaps Naive: “We don’t need more data scientists — just make big data easier to use” — Tech News and Analysis
As we try to break down the big task of building a Big Data strategy, having a framework greatly facilitates the process. A few blog posts back, I outlined one framework for attacking this area. The article below offers another, somewhat distinctive structure. However, the author does not really apply that structure to the strategy problem itself; he uses it to support an argument for more software and fewer data scientists.
The article hovers between visionary and naive. I agree that it's inevitable that applications will be built to address more and more specific data use cases. However, the evidence is overwhelming that in the short run it is nearly negligent to advocate fewer data scientists, or a smaller role for them. We are at the early stage of the Big Data cycle, where both the data and the application contexts are ill defined. This is a period when grand promises can be made, and then will likely be broken once implementation has to happen in a justifiable way.
This does not mean that every company needs to hire its own data scientist, but rather that it should at least have access to those skills in a consulting format. The talent is scarce for full-time hiring, but less so on a consulting basis. For most firms at this stage, that is probably the best approach.
- Dec 22, 2012 – 12:00PM PT
- By Scott Brave, Baynote
Sure, more data scientists would be great. But Scott Brave, of Baynote, says the better solution is to create analytics products that are so easy to use that you don’t even need a data scientist.
Virtually any article today about big data inevitably turns to the notion that the country is suffering from a crucial shortage of data scientists. A much-talked-about 2011 McKinsey & Co. survey pointed out that many organizations lack both the skilled personnel needed to mine big data for insights and the structures and incentives required to use big data to make informed decisions and act on them.
What seems to be missing from all of these discussions, though, is a dialogue about how to steer around this bottleneck and make big data directly accessible to business leaders. We have done it before in the software industry, and we can do it again.
To accomplish this goal, it’s helpful to understand the data scientist’s role in big data. Currently, big data is a melting pot of distributed data architectures and tools like Hadoop, NoSQL, Hive and R. In this highly technical environment, data scientists serve as the gatekeepers and mediators between these systems and the people who run the business – the domain experts.
Though it's difficult to generalize, the data scientist serves three main roles: data architecture, machine learning, and analytics. While these roles are important, the fact is that not every company actually needs a highly specialized data team of the sort you'd find at Google or Facebook. The solution then lies in creating fit-to-purpose products and solutions that abstract away as much of the technical complexity as possible, so that the power of big data can be put into the hands of business users.
By way of example, think back to the web content management revolution at the turn of the century. Websites were all the rage, but the domain experts were continually banging their heads against the wall – we had an IT bottleneck. Every new piece of content had to be scheduled and sometimes hard-coded by the IT elite. So how was it resolved? We generalized and abstracted the basic needs into web content management systems and made them easy for non-techies to use. As long as you didn’t need anything too crazy, the problem was solved easily, and the bottleneck averted.
Let’s dig a little deeper into the three main roles of today’s data scientist, using online commerce as a backdrop.
The key to reducing complexity is to limit scope. Nearly every ecommerce business is interested in capturing user behavior – engagements, purchases, offline transactions and social data – and almost every one of them has a catalog and customer profiles.
Limiting scope to this basic functionality would allow us to create templates for the standard data inputs, making both data capture and connecting the pipes much simpler. We'd also need to find meaningful ways to package the different data architectures and tools, which currently include Hadoop, HBase, Hive, Pig, Cassandra and Mahout. These packages should be fit for purpose. It comes down to the 80/20 rule: 80 percent of big data use cases (which is all most ecommerce businesses need) can be achieved with 20 percent of the effort and technology.
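To make the templating idea concrete, here is a minimal sketch of what a standard-input template for the user-behavior events described above might look like. The field names, the set of allowed actions, and the `validate_event` helper are all illustrative assumptions, not anything from the article or from a real product:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical template for one of the standard ecommerce inputs the
# article lists: user behavior events (engagements, purchases, etc.).
@dataclass
class BehaviorEvent:
    user_id: str
    item_id: str
    action: str
    timestamp: datetime

# Limiting scope means a fixed, known vocabulary of actions.
ALLOWED_ACTIONS = {"view", "engage", "purchase", "social_share"}

def validate_event(raw: dict) -> BehaviorEvent:
    """Coerce a raw record into the template, rejecting unknown actions."""
    if raw["action"] not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {raw['action']}")
    return BehaviorEvent(
        user_id=str(raw["user_id"]),
        item_id=str(raw["item_id"]),
        action=raw["action"],
        timestamp=datetime.fromisoformat(raw["timestamp"]),
    )

event = validate_event({
    "user_id": "u42", "item_id": "sku-9", "action": "purchase",
    "timestamp": "2012-12-22T12:00:00",
})
print(event.action)  # purchase
```

The point of a template like this is exactly the 80/20 trade-off: data capture becomes a matter of filling in known fields rather than designing a schema from scratch, at the cost of not handling exotic inputs.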
Surely we need data scientists in machine learning, right? Well, if you have very customized needs, perhaps. But most of the standard challenges that require big data, like recommendation engines and personalization systems, can be abstracted out. For example, a large part of the job of a data scientist is crafting “features,” which are meaningful combinations of input data that make machine learning effective. As much as we’d like to think that all data scientists have to do is plug data into the machine and hit “go,” the reality is people need to help the machine by giving it useful ways of looking at the world.
On a per-domain basis, however, feature creation could be templatized, too. Every commerce site has a notion of buy flow and user segmentation, for example. What if domain experts could directly encode their ideas and representations of their domains into the system, bypassing the data scientist as middleman and translator?
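Templatized feature creation could be sketched roughly as follows: the domain expert declares named features over a user's event history, and the system turns them into a feature vector for a learner. The feature names, the event format, and the `featurize` function are illustrative assumptions for this sketch:

```python
# Hypothetical feature templates: each entry maps a feature name to a
# function over a user's event history. A domain expert edits this table
# directly instead of going through a data scientist.
FEATURE_TEMPLATES = {
    "num_purchases": lambda events: sum(e["action"] == "purchase" for e in events),
    "num_views": lambda events: sum(e["action"] == "view" for e in events),
    "buy_rate": lambda events: (
        sum(e["action"] == "purchase" for e in events) / len(events)
        if events else 0.0
    ),
}

def featurize(events):
    """Apply every declared template, producing one feature vector."""
    return {name: fn(events) for name, fn in FEATURE_TEMPLATES.items()}

history = [
    {"action": "view"}, {"action": "view"}, {"action": "purchase"},
]
print(featurize(history))  # num_purchases=1, num_views=2, plus buy_rate
```

This is the sense in which "encoding ideas of the domain" can bypass the translator: adding a new feature is a one-line declaration, not a modeling project.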
It’s never easy to automatically surface the most valuable insights from data. There are ways to provide domain-specific lenses, however, that allow business experts to experiment – much like a data scientist. This seems to be the easiest problem to solve, as there are a variety of domain-specific analytics products already on the market.
But these products are still more constrained and less accessible to domain experts than they could be. There is definitely room for a friendlier interface. We also need to take into consideration how the machine learns from the results that analytics deliver. This is the critical feedback loop, and business experts want to feed their own modifications into it. This is another opportunity to provide a templatized interface.
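One way such a templatized feedback interface could work is a business-rule overlay on top of machine-learned scores: the learner produces rankings, and the domain expert adjusts them with declarative rules rather than by retraining. Everything here, the item ids, scores, and rule format, is a hypothetical sketch:

```python
# Hypothetical machine-learned recommendation scores for three items.
base_scores = {"sku-1": 0.8, "sku-2": 0.6, "sku-3": 0.4}

# Business-rule overlay: (predicate over item id, score multiplier).
# A domain expert edits these rules without touching the learner.
BUSINESS_RULES = [
    (lambda item: item == "sku-3", 2.0),   # boost a seasonal line
    (lambda item: item == "sku-1", 0.5),   # demote an out-of-stock item
]

def adjusted(scores):
    """Apply every matching rule's multiplier to each item's score."""
    out = dict(scores)
    for item in out:
        for pred, mult in BUSINESS_RULES:
            if pred(item):
                out[item] *= mult
    return out

adj = adjusted(base_scores)
ranked = sorted(adj, key=adj.get, reverse=True)
print(ranked)  # the boosted sku-3 now ranks first
```

The design choice worth noting is that the expert's input stays in a separate, inspectable layer, so the feedback loop is auditable and the learned model is untouched.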
As we learned in the CMS space, these solutions won’t solve every problem every time. But applying a technology solution to the broader set of data issues will relieve the data scientist bottleneck. Once domain experts are able to work directly with machine learning systems, we may enter a new age of big data where we learn from each other. Maybe then, big data will actually solve more problems than it creates.
Scott Brave is co-founder and CTO of Baynote, an e-tail and e-commerce advisory business. He is also an editor of the “International Journal of Human-Computer Studies” (Amsterdam: Elsevier) and co-author of “Wired for Speech: How Voice Activates and Advances the Human-Computer Relationship” (Cambridge, MA: MIT Press).