Why Machines Will Always Need That Human Touch


We explore the possibilities and realities of artificial intelligence replacing human beings in complex jobs.

Bring up artificial intelligence (AI) at any virtual watercooler or family gathering and the odds are high someone will mention the possibility of machines eventually replacing human beings.

In some ways, it’s natural that this would happen. We’ve been conditioned to think this way by the movies we watch and the studies we read predicting machines could eventually become so smart that people won’t be needed for businesses to operate. Millions will be automated out of jobs. Everything will run by itself.

While not out of the realm of possibility at some point, this view of the future is highly unlikely. First, most AI we deal with at work is ‘weak AI’ or ‘niche AI’, not the strong AI needed to create sentient robots. Second, these niche AI systems have no needs of their own: without humans, they have no purpose or goals. This is where the collaboration starts, with people supplying direction, data, and trust. Without those, machines will go unused or be limited by their lack of efficacy.

The more we consider AI and the automation of complex decision-making processes, the more we realize that people and machines each have their own strengths and weaknesses. By cooperating with one another, we can leverage our positive qualities to maximize productivity, efficiency, innovation, and financial results. But by keeping our work separate, we limit the possibilities for mutual growth and achievement.

As experienced business people, we all know that successful collaboration depends on defining clear roles, expectations, and rules of engagement for everyone involved. The only difference here is we’ll soon need to apply this approach to both human and artificial colleagues.

How can we accomplish that effectively? By fully digitizing, acting upon, and learning from data.

Digitizing Data

For people and machines to cooperate and begin digitizing decision making, it’s important to first recognize what each party brings to the table.

Companies are on pace to spend US$2.3 trillion a year on digital transformation over the next four years, in large part because it offers a faster and more efficient way to operate and compete than relying on human effort alone.

The amount of data generated each day has become almost immeasurable (though some put the figure at 2.5 quintillion bytes). And that information, if collected, aggregated, analyzed, and used appropriately, will enable smarter and more actionable business decisions.

This is the reason data has become the lifeblood of many companies and why AI will be increasingly vital to their futures. By themselves, machines are rather passive. They scour the information provided to them or found on their own, process and sometimes analyze it, and then tell you whatever their rules say they should tell you. These analytical systems work in a pull mode: the system may generate tons of information, but it sits idle until a user pulls it from a dashboard or a report. Add AI into the mix, though, and suddenly they can build models that spot existing, and even emerging, business issues and recommend what to do about them. Hidden sales opportunities, evolving market trends, procurement and supply chain optimization, and financial forecasting all become part of the optimization pathway.
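To make the pull-versus-push contrast concrete, here is a minimal sketch in Python. The region names, figures, and threshold below are invented for illustration; the point is only that a pull-mode report answers when asked, while a simple recommendation layer scans the same data on its own and volunteers what deserves attention.

```python
# Hypothetical sales data; all names and numbers are for illustration only.
monthly_sales = {"North": 120_000, "South": 83_000, "West": 47_000}

def pull_report(region: str) -> str:
    """Pull mode: the data sits idle until someone asks for it."""
    return f"{region} sales: ${monthly_sales[region]:,}"

def push_recommendations(threshold: float = 0.75) -> list[str]:
    """Push mode: the system scans the data itself and flags what looks off."""
    average = sum(monthly_sales.values()) / len(monthly_sales)
    return [
        f"Review {region}: sales are {value / average:.0%} of the regional average"
        for region, value in monthly_sales.items()
        if value < average * threshold
    ]

print(pull_report("West"))        # answers only because it was asked
print(push_recommendations())     # volunteers the regions worth a look
```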

Machines are incredibly gifted at pulling data from ERP, CRM, and other systems to spot important patterns. But even with the most powerful algorithms, there are still many things humans do better.

That’s because people are instinctual creatures. We intuitively make connections, drawing on our experience and five senses, that smart machines cannot. Machines are not so great, for instance, at natural language. Given a list of attributes describing an object or animal, they often struggle to identify it, something a person can do in an instant.

More to the point, they can only reach recommendations and decisions based on data available to them at the time. If that data happens to be inaccurate, flawed, or outdated, it can lead to useless results. Human beings, on the other hand, often instinctively know when something doesn’t seem right, and can shift their thinking and decision making to make sense of ambiguity.

Human beings also tend to be better at understanding context. We see the big picture in ways machines cannot. We also consider the real-world risks of any decision we make in highly personal terms. Will what I do result in loss of employment or business failure? Will it bring me bodily harm or cause my death? Could it harm others?

Acting on the Data

At the end of the day, these limitations deter many of us from fully trusting machines to make critical decisions. Sure, we’re fine letting them handle transactional matters where we’ve established rules that even the simplest AI couldn’t possibly botch. But we still want a pilot on a plane in case of an emergency, even though an aircraft can pretty much fly itself these days. And we may never be fully comfortable handing over the wheel to autonomous vehicles on twisty, mountainous roads, no matter how well built or sophisticated the AI might seem.

We have to trust machines at some point, however, because we need them. Just as they need us. And we must be willing to act on some of the information presented to us by the machines.

As discussed, this starts with establishing rules of engagement between us. We also need the right people in place to put together decision trees that spell out the rules of the game for the machines. And it requires some sort of Decision Intelligence platform, or “digital brain,” to keep interactions clean, fluid, and on track. This platform would essentially serve as a bridge between digital systems and their human counterparts.
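As an illustration of what such rules of engagement might look like once written down, here is a minimal sketch of a routing rule: when the machine may act on its own, when it should only recommend, and when it must hand the decision to a person. The thresholds and categories are hypothetical, not drawn from any particular Decision Intelligence platform.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    estimated_impact: float   # e.g., dollars at stake; illustrative field
    confidence: float         # model confidence between 0.0 and 1.0

def route(decision: Decision) -> str:
    """Apply example rules of engagement to decide who handles a decision."""
    if decision.estimated_impact < 10_000 and decision.confidence >= 0.9:
        return "auto-execute"          # routine, low-risk, well understood
    if decision.confidence >= 0.7:
        return "recommend-to-human"    # the machine proposes, a person decides
    return "escalate-to-human"         # too uncertain: people lead, the machine assists

print(route(Decision("Reorder packaging stock", 4_500, 0.96)))    # auto-execute
print(route(Decision("Switch suppliers for Q3", 250_000, 0.82)))  # recommend-to-human
```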

Learning from Decisions

In addition to collecting, analyzing, and presenting information, a platform like this would closely monitor how people make decisions and attempt to learn from the resulting successes or failures.

Both human beings and machines learn through explanation, example, and experimentation – through what we experience and “see.” But we approach those three principles in different ways. People learn very quickly compared to machines: reading a manual and spending roughly 20 hours or 50 miles on the road is enough for most of us to learn to drive a vehicle safely. Self-driving cars, on the other hand, need millions of miles before they become smart enough to roam around accident-free.

For machines to learn, they need a healthy “feedback loop.” This involves two types of information: responses from people to system prompts asking them to accept, reject, ignore, or override a recommendation, and information the system collects itself about the results of decisions that it or its human counterparts made, measured against expected goals, metrics, scenarios, or outcomes.
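As a sketch of what that loop might capture, the two streams can be recorded side by side: explicit human responses to recommendations, and measured outcomes compared against what was expected. The field names below are hypothetical, not taken from any specific platform.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class HumanFeedback:
    recommendation_id: str
    response: Literal["accept", "reject", "ignore", "override"]
    comment: str = ""

@dataclass
class OutcomeFeedback:
    decision_id: str
    expected_value: float   # the goal or forecast the decision aimed for
    actual_value: float     # what was actually measured later

    @property
    def error(self) -> float:
        """The gap between expectation and reality that feeds future learning."""
        return self.actual_value - self.expected_value

feedback_log = [
    HumanFeedback("rec-101", "accept"),
    HumanFeedback("rec-102", "override", "Supplier already changed last week"),
    OutcomeFeedback("dec-055", expected_value=0.95, actual_value=0.91),
]
```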

Feedback from people is critical in the early stages of a machine’s development and evolution. It’s similar to how our children learn and grow into fully functioning adults. The early input they receive from us, as parents, shapes how they make decisions as children, teens, and adults throughout the rest of their lives.

A New Paradigm of Decision Making

We are moving from an era of people doing the work of decision making, supported by computers and data platforms, to one in which machines will do much of the work, guided by people.

Computers can handle massive amounts of data and do wondrous things with it. But they require a steady flow of new quality data and exposure to human decision-making processes to make them the viable, trustworthy, long-term partners we know they can be.

The technology exists for this today. We just need more human-machine cooperation to bring it to life and realize its full potential. The possibilities are endless.
