By Bobby Monks and Kathleen Campion

When sailors speak of “blue water,” they anticipate taking a boat offshore, out of sight of land, and usually for an extended cruise – an adventure underwritten by a frisson of risk.

Much of the chatter about artificial intelligence has the patina of blue-water thinking about it. Its promise is expansive – as broad as an ocean. There are unplumbed depths, certainly treasure and possibly danger.

At the risk of overworking the metaphor, Artificial Intelligence (AI) today sits at the center of a perfect storm of coincidence. It has been around since the 1970s (at a concept level, since the 1950s), but everything changes when enormous computing capacity meets Big Data.

As the cost of computing power drops and the ability to collect and process data soars, AI’s potential builds. The ubiquitous cloud, where you store your cat pictures, is collecting oceans of data. What’s more, “crunching” that data has become more nimble than we’ve seen before. Users chasing the “golden fleece” of AI’s promise crave more than crafty pattern recognition.

Artificial intelligence promises a quantum leap in what machines can do. It’s not just more data crunched more nimbly. It’s the next step: the difference between yesterday and this morning – machines that learn.

Useful definitions of artificial intelligence are hard to come by, but a recent Bloomberg editorial about Google’s (GOOG, GOOGL) “AlphaGo,” the AI program that defeated Go master Lee Sedol, is admirably succinct:

“Computer algorithms can’t easily replicate the intuition and creativity that top players bring to the game. “AlphaGo”… uses a different approach. It fuses two methods of artificial intelligence. One, called a deep neural network, helps the AI recognize patterns by imitating the structure of the human brain. The other, called reinforcement learning, helps it improve decision making through trial and error.”
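The “trial and error” half of that description can be illustrated with a toy reinforcement-learning loop. The sketch below is a simple two-armed bandit with epsilon-greedy exploration – nothing like AlphaGo’s actual system – showing an agent that discovers which of two actions pays better purely from feedback:

```python
import random

random.seed(0)

# Two actions with different average payoffs; the agent does not know them.
TRUE_MEANS = {"a": 0.2, "b": 0.8}

q = {"a": 0.0, "b": 0.0}  # estimated value of each action
alpha = 0.1               # learning rate
epsilon = 0.1             # exploration probability

for _ in range(2000):
    # Mostly exploit the current best estimate; occasionally explore.
    if random.random() < epsilon:
        action = random.choice(["a", "b"])
    else:
        action = max(q, key=q.get)
    reward = 1.0 if random.random() < TRUE_MEANS[action] else 0.0
    # Trial-and-error update: nudge the estimate toward the observed reward.
    q[action] += alpha * (reward - q[action])

print(max(q, key=q.get))  # the estimates converge toward the true payoffs
```

AlphaGo layers this idea of learning from outcomes on top of deep neural networks that supply the pattern recognition; the bandit above captures only the feedback loop.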

If exploding computing power and lagoons of data weren’t enough to drive tectonic shifts, add in enormous pools of money on the move. The VC community is pouring into AI’s promise. TechCrunch reports VCs have started funds – from Workday’s “Machine Learning” fund to “Bloomberg Beta” to the “Data Collective” – that are focused entirely on funding companies that use machine learning. At the same time, billions in investor dollars are searching for something more than robo-driven algorithms. We have a tempest in the making.

AI applications in medicine, climate modeling, resource conservation and security are compelling. Most fields, probably all fields, will find some needs met by AI as this juggernaut gets rolling.

Let us focus on money. At the moment, some pieces of business are a better fit than others. AI is likely to disrupt existing search models. For example, when you ask Kensho’s program “What happens to car firms’ share price if oil drops by $5 a barrel?” it will scour financial reports, company filings, historical market data and the like, and reply in seconds. That’s what a human analyst would do, but it would take longer and be more subject to error and bias. Another consideration: the machine has neither a tortured private life nor venal career goals to color its decision-making.
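Kensho’s system is proprietary, but the shape of such a query is essentially an event study. The hypothetical sketch below uses invented prices and a crude one-day window: it finds days when oil fell by $5 or more and averages a car stock’s return over the following day:

```python
# Hypothetical daily closes: oil (USD per barrel) and a car maker's shares.
oil = [60, 54, 55, 49, 50, 51, 45, 46]
auto = [30, 31, 32, 33, 34, 33, 34, 35]

# "Events" are days on which oil fell by $5 or more versus the prior close.
events = [i for i in range(1, len(oil) - 1) if oil[i] - oil[i - 1] <= -5]

# Measure the car stock's return over the day following each event.
returns = [(auto[i + 1] - auto[i]) / auto[i] for i in events]
avg = sum(returns) / len(returns)
print(f"{len(events)} events, average next-day return {avg:.2%}")
```

A production system would add statistical controls, far more data, and natural-language parsing of the question itself; the point is only that the underlying computation is mechanical and fast.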

AI is already invaluable in identifying credit card fraud and assessing insurance risk. Banks have been using AI internally to review trades and monitor the risk of insider trading. Josh Sutton, global head of artificial intelligence at tech shop Sapient, sees AI moving into “… core middle and back offices to automate everything from trade processing through to KYC (Know Your Client) to AML (Anti-Money Laundering).”

Automating human functions brings us to the “danger-lite” part of the new equation. AI can eliminate the need for a lot of human labor and may well obviate the need for anything like the current legacy financial institutions. A far greater risk plays out in a letter from some of the planet’s deepest thinkers, Stephen Hawking and Elon Musk among them. They warn that AI is potentially more dangerous than nuclear weapons. For our part, we’ll stick with the early innings of the new AI ballgame.

We know hedge funds are piling into AI. Wired reports AI guru Ben Goertzel’s new hedge fund makes all stock trades using artificial intelligence. Goertzel says their system “identifies and executes trades entirely on its own, drawing on multiple forms of AI.” No humans.

Bloomberg reports Ray Dalio’s $165 billion Bridgewater Associates is starting a new AI unit with programs that learn as markets change and that adapt to new information.

There is a lot of “noise” around the lift AI can give stock picking. But it is far from “game over.” Estimates vary, but the size of the global hedge fund industry is in the $2-3 trillion range. Since global stocks are valued in the $59-61 trillion range, even if hedge funds do all of their stock trading informed by AI, it is still a drop in the bucket.
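The arithmetic behind “drop in the bucket” is simple enough to check, taking midpoints of the ranges above:

```python
hedge_fund_assets = 2.5e12  # midpoint of the $2-3 trillion estimate
global_equities = 60e12     # midpoint of the $59-61 trillion range

share = hedge_fund_assets / global_equities
print(f"{share:.1%}")  # prints "4.2%"
```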

Not surprisingly, most of the investors chasing the AI advantage in the financial services world are new adopters, wide-eyed about the potential, but perhaps naive about the pitfalls.

Not so the brainiacs at G Squared Capital LLP (G-2). Founders Gabriel Andraos and Dr. Gareth Shepherd have been building their machine learning-based stock picker for a decade.

After experimenting with barrels of fundamental factors, strategies and indicators, G-2 has teased out the components they’ve found to be consistent predictors of outperformance among individual companies.

They’ve selected 26 AI-driven “virtual analysts” – 13 on the long side and 13 on the short. They program them with objectives based on different “human” strategies – growth, value, contrarian and more. Each virtual analyst assesses fundamentals, sentiment, technicals and other “secret sauce” factors. They look at all US and global large-cap stocks. The “analysts” then choose buy and sell positions based on real-time data and on the patterns they observe within that data. The founders like to joke that their “analysts” are like their human counterparts, except they don’t fly business class! The real humans stay out of it. We have seen their results, and they are impressive.
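G-2’s actual models are proprietary. As a purely hypothetical sketch of the general idea, several style-based “virtual analysts” can each score a stock from a different angle and vote on a position (all names, factors, thresholds and data below are invented):

```python
def value_analyst(stock):
    # Cheaper earnings imply a higher score.
    return 1.0 / stock["pe"]

def growth_analyst(stock):
    # Faster revenue growth implies a higher score.
    return stock["rev_growth"]

def contrarian_analyst(stock):
    # Recent losers get a higher score.
    return -stock["momentum_3m"]

ANALYSTS = [value_analyst, growth_analyst, contrarian_analyst]

def vote(stock, threshold=0.05):
    """Average the analysts' scores and map them to a position."""
    score = sum(analyst(stock) for analyst in ANALYSTS) / len(ANALYSTS)
    if score > threshold:
        return "buy"
    if score < -threshold:
        return "sell"
    return "hold"

stock = {"pe": 12, "rev_growth": 0.15, "momentum_3m": -0.10}
print(vote(stock))  # prints "buy"
```

A learning system would tune the factors and weights from data rather than hard-coding them; this sketch shows only the ensemble structure.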

There is little question that all the wizards massaging AI in all of the airless rooms at Stanford and beyond will solve hundreds of thousands of problems that will change all of our lives. This does not, however, lead to “techno utopia,” where robots do our bidding. It very likely does lead to self-driving cars and earlier cancer diagnoses, and perhaps even some mitigation of imminent climate change risks. And stock picking?

Critics of AI’s potential to revolutionize stock picking argue any advantage is short-lived at best – that once the elegant algorithms are perfected, everyone will be trading that way, and that eliminates any advantage. There are at least two problems with that argument. First, markets don’t always work that way. For example, the companies that perfected proprietary quant models (think Renaissance Technologies) maintained their advantage for quite a long time, even as the market rushed to imitate them.

More significantly, G-2’s Gareth Shepherd argues most observers miss the “guts” of the process. Making AI do this particular job – picking stocks – is neither a glamorous nor an elegant business. It is, he insists, “a messy, roll up your sleeves and get dirty with the data business.” It is an engineer’s messy world that rarely yields perfected solutions and is all about process. Shepherd maintains that building the architecture and sustaining the patterns that have led to G-2’s results takes a committed, battle-tested team, not easily replicated or outstripped.

The article was originally published on April 15th, 2016.