The Future Belongs To Leaders Who Get Artificial Intelligence
6 steps to become 10 times smarter with Artificial Intelligence.
Overnight on October 14, 2015, Tesla introduced a semi-autonomous driving system for owners of the Model S. Within a day, owners began uploading videos of themselves being driven around.
You can see the fear wash over people’s faces when cars slow down in front of them or the Model S automatically changes lanes.
They brace themselves for impact. They freeze and pray.
You can tell that part of them just wants to throw their hands on the steering wheel and take over.
But as the car makes the right choices over and over, the human drivers slowly relax and enjoy the experience. Some of them even seem to forget that they’re being driven by the car itself, something that wasn’t possible only hours before.
This journey from surprise to fear to life-as-usual is something that we’re all going to experience over and over, as software algorithms take over decisions that we previously thought only humans could make. We call this the surprise-fear-embrace curve.
How we as companies, entrepreneurs, and executives navigate this curve will have a major impact on our career success.
The Barrier That Stops Leaders From Thriving In The World Of Artificial Intelligence
Arguably the largest barrier to navigating the curve is learning when NOT to trust our intuition.
Imagine spending your whole career learning a skill and becoming world class at it. Let’s say that you’re able to spot star employees during the hiring process.
You’ve built up a spider-sense intuition after hiring hundreds of employees over decades. Now, you can tell if someone would be a fit within minutes.
People applaud you for this skill. You build your identity around it. You get promoted because of it. Your career depends on it.
Now, imagine that an algorithm becomes part of the recruitment process and makes decisions that go against your intuition. At first, you might shout: “This doesn’t make sense!” But then, as the hiring results come in, you realize the perplexing decisions made by the algorithm turned out to be right.
At that point, this piece of code has outperformed you at something you spent decades, and hundreds of thousands of dollars in school, learning to do.
We’re not talking about hypotheticals. This is actually already a reality for Xerox, Walmart, and increasingly, for other companies.
In the case of Xerox, its HR algorithm learned that the conventional wisdom was wrong. Experience has very little impact on the ultimate success of its call center workers. What really matters is personality.
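To make that concrete, here is a minimal Python sketch (purely illustrative, not Xerox’s actual system) of the idea: fit a model on historical hiring outcomes, then ask it which inputs drive its predictions. The feature names and the synthetic data are invented for the example.

```python
# Hypothetical sketch, not Xerox's system: synthetic applicants, invented features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
experience_years = rng.integers(0, 20, n)
personality_score = rng.normal(50, 10, n)   # e.g., from a structured assessment
commute_minutes = rng.integers(5, 90, n)

# Synthetic "hired and succeeded" label, driven mostly by personality in this toy data.
success = (personality_score + rng.normal(0, 5, n) > 52).astype(int)

X = np.column_stack([experience_years, personality_score, commute_minutes])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, success)

for name, importance in zip(["experience", "personality", "commute"],
                            model.feature_importances_):
    print(f"{name:12s} {importance:.2f}")
```

In this toy setup the importance scores point to personality because the synthetic labels were built that way; with real hiring data, the algorithm’s answer could surprise you, which is exactly the point.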
In the past, we used to wonder, “Could this happen to my field?”
Today, we ask, “When will it happen?”
And it’s no wonder why. Algorithms trump human intuition in making many decisions. Shortly before he died in 2003, famous researcher Paul Meehl summarized the top studies on expert intuition vs. algorithmic decision making, and he found that in almost all cases, the algorithmic decision making performed better. Here are Meehl’s exact words:
When you are pushing over 100 investigations, predicting everything from the outcome of football games to the diagnosis of liver disease, and when you can hardly come up with a half-dozen studies showing even a weak tendency in favor of the clinician [the human intuition], it is time to draw a practical conclusion.
In Superforecasting, David Ferrucci, the principal investigator who led the IBM team that developed Watson to win Jeopardy!, predicts the demise of the guru model of expertise (“I’ll counter your Paul Krugman polemic with my Niall Ferguson counterpolemic, and rebut your Tom Friedman op-ed with my Bret Stephens blog”).
He adds, “I think it’s going to get stranger and stranger for people to listen to the advice of experts whose views are informed only by their subjective judgment. Human thought is beset by psychological pitfalls, a fact that has only become widely recognized in the last decade or two.”
So, how and when can we learn to trust algorithmic decision making more and our intuition less in certain situations?
We Need To Fix Our Misconceptions Of Artificial Intelligence
Over the last decade, my team at Ayata and I have been anticipating the explosion of artificial intelligence. In 2009, the State of Texas invested in commercializing our research. Since then, we’ve been using our artificial intelligence software to improve mission-critical processes for Fortune 500 companies in oil & gas, high-tech, and other industries. Michael, an entrepreneur and writer, has written about leadership in Forbes, HBR, and Time.
Based on our combined experiences, there are certain holes that most corporate executives have in their thinking about the future of artificial intelligence.
Here are specific steps that you can take to plug those holes and set yourself and your organization up to thrive:
1. Ask a simple question, “How would Google do it?”
If auto manufacturers had asked this question a decade ago, they wouldn’t be struggling today to play catch-up as self-driving cars reinvent their entire industry over the next two decades. How would Google drill? How would Google farm? And so on.
2. Look for hidden insights in your existing data.
Many mission-critical processes generate data that harbors insights capable of transforming your business.

Take oil & gas as an example. Industry veterans will tell you that “it is all about the rock.” It is true that information about the rock (geology, geophysics, petrophysics, etc.) is critically important.

But so is information like the sounds captured by fiber-optic sensors during the hydraulic fracturing process. Hidden within these sounds is often critical information that helps the operator extract as much hydrocarbon as possible, safely and economically. The fracturing process is already making the sounds; collecting and analyzing them is entirely possible, yet many operators never even think to do so. The same is true in other industries.
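As a rough sketch of what collecting and analyzing such data can look like, the Python snippet below turns a raw acoustic signal into a spectrogram, a standard first step for feeding sound into a learning algorithm. The signal, sampling rate, and choice of NumPy/SciPy are assumptions for illustration; a real fiber-optic recording would take the synthetic signal’s place.

```python
# Illustrative sketch: a synthetic stand-in for a fiber-optic recording.
import numpy as np
from scipy.signal import spectrogram

fs = 10_000                          # assumed sampling rate in Hz
t = np.arange(0, 5, 1 / fs)          # five seconds of signal
raw = np.random.default_rng(1).normal(0, 1, t.size)               # background noise
raw[20_000:25_000] += np.sin(2 * np.pi * 800 * t[20_000:25_000])  # a brief 800 Hz event

# A spectrogram converts raw sound into time-frequency features a model can learn from.
freqs, times, power = spectrogram(raw, fs=fs)
print(power.shape)                   # (frequency bins, time windows)
```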
3. Expand your definition of what data is.
When most people hear the word ‘data,’ they think numbers. However, data is much more than just numbers. According to IBM, 80% of the world’s data is unstructured (e.g., videos, images, sounds and text).
Without data to learn from, there is no artificial intelligence. Similarly, there is no human intelligence without the senses to collect data–eyes, ears, skin, tongue, etc. Simply having a brain is not enough.
By increasing the variety of data you collect, you can often magnify intelligence and transform decision making. It’s like giving the Internet to someone who previously had only books to learn from.
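To give a feel for how non-numeric data becomes usable, here is a small sketch that converts a few invented maintenance notes into numeric features with scikit-learn’s TfidfVectorizer. The notes and the library choice are illustrative assumptions, not a recommendation for any particular industry.

```python
# Illustrative sketch: turning unstructured text into numbers a model can use.
from sklearn.feature_extraction.text import TfidfVectorizer

maintenance_notes = [                # made-up free-text notes
    "pump vibration increased after the night shift",
    "replaced worn seal, no further vibration",
    "sensor reading drifted, recalibrated in the morning",
]

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(maintenance_notes)
print(features.shape)                       # (3 documents, vocabulary size)
print(sorted(vectorizer.vocabulary_)[:5])   # a few of the learned terms
```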
4. Explore the world of open-source algorithms.
Algorithms are improving rapidly across the disciplines within, and related to, artificial intelligence. Thanks to Google, Facebook, Microsoft, MIT, Stanford, Carnegie Mellon, and other top-tier tech companies and academic institutions, the best algorithms in most AI disciplines are available to anyone as open source, to use and modify for free. These algorithms are treasures.

Ask your team why they are spending money on consultants and/or packaged software to solve data science problems without considering what’s available for free. They may have good reasons; you just want to make sure they do.
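To give a sense of what “free” means in practice, the sketch below trains a capable gradient-boosting classifier on a sample dataset bundled with scikit-learn, one of those open-source treasures. The dataset and model are illustrative choices; your team’s problems will call for their own.

```python
# Illustrative sketch: a strong, free, open-source algorithm in a few lines.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)       # bundled sample dataset
model = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(model, X, y, cv=5)       # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```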
5. Get comfortable with useful algorithms that you don’t understand.
We now live in a world where artificial intelligence defeats the world’s top Go players. To comprehend how awesome this feat is, consider that there are more possible board configurations in the game of Go than there are atoms in the observable universe!
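That claim is easy to sanity-check. The sketch below uses a generous upper bound (every one of the 361 board points is empty, black, or white, ignoring the rules of play) and the commonly cited estimate of roughly 10^80 atoms in the observable universe; both figures are assumptions for illustration.

```python
# Back-of-the-envelope check of the Go claim (an upper bound, not an exact count).
go_configurations = 3 ** 361            # roughly 1.7e172 board colorings
atoms_in_universe = 10 ** 80            # commonly cited estimate
print(go_configurations > atoms_in_universe)   # True, by more than 90 orders of magnitude
```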
Algorithms can produce actionable insights even though it may not yet be possible to explain the reasons behind these insights. Once AI starts consistently producing recommendations that improve outcomes, people should start using these algorithms and investigating why exactly these recommendations work as well as they do.
6. Avoid falling for the trap of “Algorithms should never do that.”
As algorithms take over more decisions, we grab on tighter to the decisions that we think only humans can or should make. It will be important to stay open to algorithms that perform better than humans in areas where we’d still prefer human judgment. Geoff Colvin, Fortune’s senior editor-at-large, points out a few of these areas in his article Humans Are Underrated: judge and jury decisions, leadership and management, and goal setting.
The Surprise-Fear-Embrace Curve, Revisited
The most recent example of the Surprise-Fear-Embrace curve can be seen in the historic five-game Go match between Lee Sedol, a top Go player, and Google’s AI, AlphaGo.
After a surprising move by the AI in the second game, Lee had to “go wash his face just to recover,” according to one commentator. He ultimately lost the game, and he said something very telling: “Yesterday, I was surprised. But today I am speechless.” It is also telling what Google wrote on its official blog after its AI won the series: AlphaGo has been able to “find solutions that humans either have been trained not to play or would not consider. This has huge potential for using AlphaGo-like technology to find solutions that humans don’t necessarily see in other areas.”
The transition from human intelligence to machine intelligence in our daily lives and in the enterprise is going to be messy.
It will challenge our identity.
It will go against our expert intuition that we’ve spent our careers building.
It will inherently require ceding trust and control to decisions we don’t understand.
It will create larger and more diverse opportunities than we can even fathom today.
That’s why we call it the surprise-fear-embrace curve, and that’s why we’d argue that one of the most important skills we can learn is how to ride it.