Why The Golden Age Of Machine Learning is Just Beginning

Even though the buzz around neural networks, artificial intelligence, and machine learning is relatively recent, there is, as many know, nothing new about any of these methods. If so many of the core algorithms and approaches have been around for decades, why are they only now getting their day in the sun?

To answer that question, we can look at what has happened over the last five years or so with the attention and tooling around data. We can also point to the dramatic increase in scalable compute power, or more specifically, performance per watt and per bit. These two factors combined have fueled the development frenzy, pushing data analysis well beyond the standard database and calculation approaches that have themselves been around for decades. The point is, we are at peak "data hype": there was a rush to develop a host of new tools and frameworks (Hadoop, as but one example) to support larger, more complex datasets, followed by a secondary effort to push the performance of data analysis on those new or enhanced frameworks.

So could it be that machine learning in particular is the next natural step for all the companies and end users who have climbed aboard the data express? Indeed, the attention around large-scale, complex analytics and the systems and frameworks to support them spurred some of that evolution. But ultimately, one could argue that for some analytical workloads, in both the research and enterprise spaces, those advances have hit their own peak. The new methods and approaches sown in the fertile "big data" soil have grown and been tested. And there is, again for a narrow (but growing) set of workloads, room for another way of thinking about complex problem solving.

This is not to say that there hasn't been ongoing research and development around new machine learning approaches that can leverage ultra-scalable hardware. But there is a bigger story, explains Patrick Hall, who holds the unique position of senior machine learning scientist at statistics software giant SAS. His title is noteworthy because he is finding solutions to problems that do not fit well into classical statistical modeling approaches (which is what his company specializes in), with the goal of integrating those methods into existing enterprise products, at least at some point.

Hall's assertion is that while all of the aforementioned trends are pushing machine learning to the forefront, the one thing that is different now is that data finally exists in volumes and shapes that do not work well for classical statistical analysis. That, coupled with new developments in machine learning algorithms, means the golden age of machine learning is finally arriving.

"This is data that can be found in many places; it's wider than it is long: it has more columns than rows, more variables than observations. All of that is a bad fit for traditional statistics. There is now more data with correlated variables (for instance, pixels that are related in image data) or even in text mining." Equally, Hall says, there is a wealth of new data from a range of sources that is defined by missing or sparse values, where 1 percent or less of an entire dataset contains actual values.
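To make the "wider than it is long" point concrete, here is a minimal sketch (my illustration, not code from SAS or from Hall) of why a dataset with far more variables than observations breaks ordinary least squares but remains workable for a penalized, machine-learning-style model. It assumes Python with NumPy and scikit-learn, and the data is synthetic.

```python
# A minimal sketch (illustrative only): "wide" data, with more variables
# than observations, is ill-posed for classical regression but tractable
# for penalized methods. Assumes NumPy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n_rows, n_cols = 50, 500                      # far more columns than rows
X = rng.normal(size=(n_rows, n_cols))
true_coef = np.zeros(n_cols)
true_coef[:5] = [3.0, -2.0, 1.5, 4.0, -1.0]   # only a few variables matter
y = X @ true_coef + rng.normal(scale=0.1, size=n_rows)

# Ordinary least squares can fit the training data perfectly when there
# are more variables than observations, so its coefficients mean little.
ols = LinearRegression().fit(X, y)
print("OLS training R^2:", ols.score(X, y))   # ~1.0, pure overfitting

# A penalized model (lasso) can still pick out the handful of real signals.
lasso = Lasso(alpha=0.1, max_iter=10000).fit(X, y)
print("Non-zero lasso coefficients:", int(np.sum(lasso.coef_ != 0)))
```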

And for businesses that want to invest in analyzing this data where traditional statistics do not fit, there is a huge opportunity, one that is feeding a new wealth of startups and new initiatives from established analytics companies who seem to be getting the message that calling a product "machine learning," even if it is just a slightly upgraded version of analytics, is all the rage. That causes a problem of definition, and there are, without naming names, some serious examples of analytics and BI companies taking the same old software and slapping a "machine learning" label on it simply because it sounds more robust or complex than data analytics. This is one of the growing pains for any new technology area, especially when the hype machine revs its mighty engines. Hall says users need to understand their data and their problem; once that happens, it will be clear whether a standard statistics and database solution will suit or whether something more versatile (and likely more complex) is needed.

This isn't to say that every traditional statistics and database company is merely changing its product messaging rather than its technology around machine learning. SAS introduced its first data mining product in the late 1990s (Enterprise Data Miner), and at the time it had many of the machine learning models that are garnering all the hype lately (neural networks, decision trees, k-means clustering, and so on). There were, even then, Hall says, some emerging use cases where data was coming from the enterprise data warehouse to fit against models that lacked any parametric assumptions. So it's not new, but the scope and number of those problems is growing, even in places where one might not expect it.
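For readers less familiar with what "models that lack parametric assumptions" means in practice, a brief, hypothetical sketch follows using two of the techniques named above, a decision tree and k-means clustering. It is not SAS code; it assumes Python with scikit-learn and uses synthetic data as a stand-in for warehouse records.

```python
# A minimal sketch (illustrative only) of nonparametric models: a decision
# tree partitions the data without assuming any distribution or linear
# form, and k-means groups records without a statistical model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))              # four numeric features
y = (X[:, 0] * X[:, 1] > 0).astype(int)     # nonlinear interaction label

# The tree learns the interaction directly from the data, with no
# distributional assumptions about the features.
tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("Tree training accuracy:", tree.score(X, y))

# k-means clustering, similarly, segments the records with no model form.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```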

Among the enterprise arenas ripe for a machine learning boom are banking, insurance, and the credit card industries. Interestingly, all three are examples of regulated markets where a black box approach to a problem is problematic for regulators. "There is always a tradeoff with machine learning. You trade interpretability for what you're hoping is more accuracy, and this is a tough tradeoff for regulated industries, but the fact is, they are seeing an opportunity finally and this tradeoff is one they are increasingly comfortable with."
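The interpretability-for-accuracy tradeoff Hall describes can be illustrated with a small, hypothetical comparison. It is not tied to any bank's actual workflow; it assumes Python with scikit-learn and a synthetic, credit-scoring-style dataset. A logistic regression yields coefficients a regulator can read, while a gradient boosting ensemble is often more accurate on tangled data but far harder to explain.

```python
# A minimal sketch (illustrative only) of interpretability versus accuracy:
# a readable linear model next to a more opaque ensemble. Assumes scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

glm = LogisticRegression(max_iter=1000).fit(X_train, y_train)
gbm = GradientBoostingClassifier().fit(X_train, y_train)

# The linear model can be read coefficient by coefficient...
print("Logistic regression accuracy:", glm.score(X_test, y_test))
print("First few coefficients:", glm.coef_[0][:5])

# ...while the ensemble is typically more accurate but harder to explain.
print("Gradient boosting accuracy:", gbm.score(X_test, y_test))
```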

Hall and his company are well aware that they will have to keep innovating at both the language and product level to keep pace with the wave of machine learning startups that are being funded one by one. "There is indeed a lot of competition for attention right now," he agrees. "We are trying to adapt our technology to these problems with concurrency and scalability for machine learning, but this is SAS, which means we are confined to a language syntax that, admittedly, looks old." He says that even though the technology is just as robust as ever, SAS is "stuck" because changing the core syntax means that the mainframes at American Express and Bank of America will come crashing down. "What we can do is change what runs behind that syntax, and that is what we are working on now."

It is hard to say at this point how large-scale enterprises will think about all of that data in their warehouses that does not fit the standard regression modeling bill. But to be fair, doing more complex things with familiar frameworks and approaches is going to have its value, especially for regulated industries that are looking to beef up their analysis with machine learning methods, since at least there is a root level of formality and familiarity. This is where SAS is hoping to succeed with its foray into machine learning for large enterprises, and where some of the emerging startups will have a tough time moving past consumer-focused image and facial recognition, speech recognition, and other areas.

It also might be too soon to say that machine learning is seeing the dawn of its golden age, but there is something on the horizon, glinting in the distance. Given the wealth of new investment and attention around machine learning as the next great partner for big data tools and approaches, this does not seem like a stretch.
