Cognitive Automation Redefines Agility at the Cognitive Automation Summit

By
Panel of Thought Leaders

Experts say there is a Certainty Of Missing Out for companies that haven't adopted a cognitive automation system.

What is cognitive automation? What does intelligent automation allow humans to do?

Frederic Laluyaux, from Aera Technology, Nokuthula Lukhele from the World Economic Forum, Prof. Joseph Fuller from Harvard Business School, and Ray Wang of Constellation Research discuss how "redefining agility" means cognitive automation at a global scale. (Transcript below.)

Fred Laluyaux:

All right. So for this first panel, it is my distinct pleasure to welcome three guests who will share their unique perspectives. Nokuthula Lukhele, you are a project lead in data and digital transformation at the World Economic Forum. And we had the pleasure to collaborate recently on your latest research paper on the business of data. Welcome.

Nokuthula Lukhele:

Thank you.

Fred Laluyaux:

Professor Joe Fuller. I think I'm going to call you Joe, as I normally do. You are currently Professor of Management Practice at Harvard Business School, where you also co-lead the school's initiative on Managing the Future of Work. You also co-founded and led the strategy consulting firm Monitor, and you're doing a lot of things.

Last but not least, Ray. Ray Wang, Founder, Chairman, and Principal Analyst at Constellation Research, an advisory firm which studies disruptive business models and exponential technology trends.

Welcome to all of you. So happy to have you. Such a great panel. So, Joe, we'll start with you. We're talking about redefining agility, and I know that you have a lot of rich thoughts on this topic, you and I have been talking quite a bit. Can you share those thoughts with us on the importance of redefining agility for a large organization and specifically today?

Joseph B Fuller:

Well, I think, Fred, you have to start with the question: where are organizations on their journey to transform and be more competitive? And frankly, I think companies have gotten essentially to the bleeding edge of what they can do with the way they organize work currently. Most organizations have de-layered, outsourced, re-engineered processes. If you look at large multinationals, they all have transformation fatigue, because they've been through two, three, four major "transformations", which have led them to lurch to another level of economic efficiency without really fundamentally asking how they do work, how decision rights are allocated, and what is the nature of the decisions that are important for them to make to maintain their competitiveness.

So I think as we go forward, companies are going to have to rethink agility. Right now they're rather agile sumo wrestlers or football linemen or plow horses, and agility in the future is going to be much more like being a gymnast. And that's going to require different sets of muscles to be developed, a different training regime, and really a different mindset about everything from their physicality, if you will, to keep the metaphor going, to the risks they're prepared to run. When I watch gymnasts perform, I'm always in awe of how these incredible athletes will be doing backflips and tumbles and all sorts of things where, if they make a mistake of a quarter of an inch, they're going to end up landing on their head. And that's going to require companies, in making that next transformation, really to fundamentally rethink the juxtaposition of people and technology and how decision rights are made.

Historically, companies, in my experience, have deployed technology, including AI, essentially by asking a badly flawed question, which is, how can we use AI to augment or improve the way we do work now? With all due respect, that's not a very smart question. If you've got a fundamentally new technology, the right question to ask is, how do I deploy this technology for maximum impact and organize my work, including my human beings around it, to provide governance, judgment, physicality, where that's required, to teach it, how to be effective?

And so if companies acknowledge that they're really at the end of what they can get out of their current approach and move to a new definition of agility, then I think they can renew themselves and position themselves for the world that's emerging. But those that keep battering their head against the wall of, "I'm going to re-engineer what I've got," I'm afraid are going to be in for a rough ride.

Fred Laluyaux:

Yeah. And the timing is now. The technology's available; the timing is now. Thanks for that, Joe. Nokuthula, the World Economic Forum engages in addressing the seemingly intractable problems of tomorrow. So what place do redefining agility and cognitive automation, as we're discussing today, occupy on your agenda? How prominent of a topic is this?

Nokuthula Lukhele:

Yes, yes. I mean, at this stage, the anchoring theme for the forum is the great reset, which explores the dramatic effects of the pandemic on tomorrow's world. And so we've essentially had to live differently. We've had to work differently, shop differently, socialize differently, learn differently, and we're doing all of this virtually. So the great reset is our attempt to use this rare and narrow window of opportunity to transform businesses and create a better opportunity.

One of those pillars is around harnessing the technologies of the Fourth Industrial Revolution. And this is where redefining agility and cognitive automation play a large role. How do I use this technology to transform businesses? How do I use this technology as a vehicle to empower society? And what we've seen is that many executives feel like slow decision-making is a huge barrier to agility. And so the future of agility needs to be at the edge of decision-making, where employees meet customers and where work is distributed across partners and networks, and cognitive automation enables this.

But besides just providing business agility, it's also a vehicle to provide societal value. Cognitive automation has applications in diagnosis, and treatment, and personalized customer experience, and public safety and emergency response, fraud, threat prevention. I mean, just to name a few, and it does this in a fast and impactful way. So this technology really is a demonstration of what technology can do in resetting how we provide value for business and society at large. And so that's where we see the legs with this technology and our broader theme of the great reset.

Fred Laluyaux:

And do you feel like you're pushing that topic to your member organizations, or is there a demand from the member organizations to have you help research the topic?

Nokuthula Lukhele:

Pre-COVID, it was definitely more of a push. As Joe mentioned, previous efforts were very much focused on incremental gains, usually operational efficiency. And so now what we're seeing is that there are more executives coming to us saying, "What does the future look like? What are the technologies we need to be looking for?" Because at this point it's about business survival. And then business growth.

Fred Laluyaux:

Yeah, no, it's a big reset. Thank you. Thank you so much for that perspective. So Ray, you and I have been talking about this topic for quite some time, I guess a few years now. You've recently published a research paper on how cognitive apps drive the future of the autonomous enterprise. Sounds familiar to me. How do you see cognitive automation or cognitive apps as enablers of agility? We talked about the need for agility, but how are cognitive automation and cognitive apps enabling that agility? I guess Nokuthula has touched on it a little bit, but I'd love to get your perspective.

Ray Wang:

So one of the interesting things that has shifted, as Nokuthula was saying, is competing on decision velocity. Machines make decisions a hundred times per second. We're lucky to make a decision per second, and by the time we get it out of management, we might figure it out in a week. I mean, that's just an unfair imbalance. It's not even fair competition. So I think that's the first part that we're getting to.

The second part is that we have to make some strategic decisions if we're going to get to being autonomous. This is going to be different inside organizations. We've got to figure out: when do we fully trust intelligent automation? It's safe enough. It's accurate enough. We don't have to worry about it. And hopefully we didn't just do something faster that was not well thought out; we've completely rethought the process. We thought through all the opportunities that are there.

The second piece, though, is that we have to ask this other question, which is: when do we augment the machine with the human? There's nuance. There's context. Why do you make an exception? Why did this even happen? These were the rules, but these humans are completely changing them. What happened here? And so at 99% accuracy in manufacturing, we'd be like, "Good job, good job." But 99.1%, 99.2% accuracy in healthcare? That's not even acceptable. We're talking, we need five nines, six nines. The machine's got to be better than the human. Okay, fine. We get that.

And then the third question we have to ask is, when do we augment the human with a machine so they can make faster decisions? You can service them up, they can take action, we can build the reinforcement, we can fill in all the learning things that we have to do in terms of figuring out machine learning, getting the neural nets to work.

All right. And then the last piece is, when will you actually trust human judgment? Now this is an important question. If you believe that Skynet is going to run on its own, then yeah, don't involve any humans. Now, if you want to make sure that we're all around and start the process with the human, end the process with a human, and we all have a shot.

No, but on a serious note, that's what every company has to figure out. And whether it's in supply chain, or cash, or whether it's campaign to lead, whether it's procure to pay, it doesn't matter. Each business process, we're going to have to ask those four questions and that's what's going to take us to the autonomous enterprise.

Fred Laluyaux:

Yeah. We'll come back to your scale a little bit later, on how you define and how you establish that trust. But I want to come back a little bit to what Nokuthula was saying earlier around the changing perception, the change in demand. So the concepts that we're discussing today, they've developed over the last few years, following the seemingly unstoppable rise of the digital-native global organizations.

However, it appears that the adoption of cognitive technologies and the deployment of new business models have accelerated since the beginning of the COVID crisis. And I think you were saying that a little bit earlier. Is this something that you see with a number of your organizations? Do you see the crisis as a catalyst in terms of actions, in terms of initiatives? Or do you feel like we're still in the research phase? And Ray, you were talking about asking questions. I'm trying to get to the point of: is it time to act? Do you see organizations acting? I'll start with you, Nokuthula, but I'm happy to have Ray and Joe, of course, comment as well.

Nokuthula Lukhele:

Yeah. And that's a good point, Fred, that all these things existed before COVID, so what's new, what's changed? And there just seems to be this renewed energy. And the question is, if you've been doing all of this before, why hasn't it been as effective as you anticipated during this disruption? And it goes back to: what was the objective of using these technologies? Pre-COVID, the objectives were around, how can I reduce costs and make operations efficient? And right now, what we're seeing from our members is: how can I truly transform my core? How can I enable new business models? How can I meet the customer where they are? And also, with all these pressures from society and the environmental impacts that we're having, how can we use technology as a vehicle to accomplish positive societal impact, too?

And so that's where we're seeing this real urgency shift and the conversations, because we've had a few technology conversations around [inaudible 00:13:07] members and they've dramatically shifted. And when COVID happened, we framed this timeline of how we were going to tackle these technology issues. First, it was about reacting. Let businesses just survive these first one to three months until we figure out what's happening. And then it was around adapting, which is around where we are right now. And we're moving towards transforming, and that's where we really see the discussions exponentially going more and more towards, how can I adapt these new business models? Can we create guides? What technology should I use? And this is where we're seeing that appetite really after everyone has kind of found their feet, understand the new climate and just looking towards the future.

Fred Laluyaux:

No, that's very good insight. Reacting, adapting, transforming. Ray, Joe, any insights? I know you're advising companies all the time. Do you see that pattern of reacting, adapting, transforming, and the acceleration of adoption?

Joseph B Fuller:

I think we do, Fred. I think what COVID has done is fast-forward a lot of trends that were visible under the surface historically. But they've been accelerated, basically, because companies have changed their definition of risk. Historically, in adopting these technologies, they've taken a more piecemeal, gradual, prove-it-to-me approach, almost applying the same type of logic they applied to installing any type of new software in the organization, which in most companies and most big governmental institutions has been very much a question of not upsetting the current processes, not going too fast, not waking up to some terrible interruption of service or some terrible pollution of your real-time data. So go slow, go slow, go slow.

Now the consequences of going slow, the risks of going slow, are really apparent. And the greater risk is being left behind. The greater risk is thinking that you can impose a comfortable cadence, one your organization is all aligned around and not threatened by, in making the changes that your customers want and your competitors are making. And with that new kind of risk paradigm, companies are changing the way they're denominating their ROI calculations and the weight they apply to different risks in managing the trade-offs they're facing. The greater risk right now is being left behind.

Fred Laluyaux:

Ray, you agree?

Ray Wang:

Yeah, I totally agree. I think what happened is all those digital initiatives that were sitting on the wayside, people were like, "Oh yeah, we can wait until later. We don't have to do this. Our competitors haven't done that." Everyone's like, "This is the benchmark amount of spending that we're going to put into digital. Is it higher or lower than everyone else's?" I mean, those are the wrong questions to ask. And suddenly, you see that little chart on Twitter: "Which was the biggest factor for accelerating your digital transformation? CEO, CIO, CDO, CTO. COVID." COVID was the one that actually won out, and that's actually what was the accelerant. You didn't have a digital channel? Good luck. We're not meeting in person. You didn't have a way of actually collecting data and harnessing it to automate and take the next best action? Good luck. You're not going to be able to play.

And so suddenly all these things started to happen and people realized, "Okay, we really do have to take this seriously now." And the companies that actually have the talent to go do that are doubling down. So if you don't begin, you are missing out. And the FOMO is real. I mean, this fear of missing out on digital transformation is there. But that is just the beginning. There's the aspect of automation that actually has to occur. There's the aspect of then getting to a state of cognitive applications, which hasn't happened yet for a lot of organizations. And then there's that accelerant. So there's a hockey stick that's about to take off, and we're just at the beginning. And other people are just like, "Oh, maybe we should get to the cloud." Seriously? Where are you? Actually, cloud adoption has been really slow. It took 20 years to get to 20% adoption. That is probably the slowest disruptive tech adoption I've seen. But you get the idea. If you're still talking about the cloud when we're talking about cognitive automation, you're three steps behind.

Fred Laluyaux:

Yeah, yeah. No, it's very interesting. Go ahead, Joe.

Joseph B Fuller:

Just a quick comment. I think that we've gone from FOMO, fear of missing out, to COMO, certainty of missing out.

Fred Laluyaux:

Ooh, I like that.

Joseph B Fuller:

And the phenomenon that Ray was describing, I've seen throughout industry, throughout sectors: everyone's benchmarking, talking to their technology consultants, going to CIO conferences. I'll show you mine if you show me yours. But now you've got the entirety of the C-suite and boards of directors saying that the fundamental durability and sustainability of our company is at risk here, and we have to stride out now and embrace what's available as opposed to taking this very measured "we can control our own future, we can control our destiny" approach. As we introduce technology to the marketplace, the marketplace adapts; the marketplace is imposing technology back on large companies, particularly these companies that were winners in the old world. And they've got to take off the life vest that they've been wearing and just plunge in.

Fred Laluyaux:

No, it's totally great. Absolutely. So, Ray, let's progress through the conversation, because you provided a framework, a scale for cognitive apps, similar to the scale used to define the levels of autonomy in self-driving cars. Can you explain that framework? Can you explain that scale so that people can relate to where they sit in it? Are you at the beginning, or are you fully autonomous? Can you walk us through it, from level one to level five?

Ray Wang:

Yeah. That's a great point. With autonomous vehicles, it's really about how much control the human has and how much control the machine has. And it's the same thing as we walk into the autonomous enterprise. We're talking about five levels.

The first one is basic automation. Okay, we can get some tasks automated, and they get to work. And that's nice. It's like the way cruise control works: we can regulate speed. Okay, cool, nice. But at some point we get to level two, which is human-directed. We're doing more and more with machines, we're doing more and more with artificial intelligence, and that's starting to pick up. And then at some point we get to level three, where the machines intervene because they actually know better. They're actually making better decisions. They actually have enough data, they have enough context, they can actually think about what the right next step is. And then at some point we get to full autonomy. These things are sentient, they're thinking on their own, they're getting to the point, and that's level four. And then the final level, level five, is when humans are optional; we might not need them. You might not even need a steering wheel. So, those are the five levels. And we're headed in that direction when we think about where we are with autonomous enterprises and where cognitive apps are taking us.

Now this move to level three, it's going to happen in the next two to three years. This isn't something that's 10 years out. It's happening right now.

Fred Laluyaux:

Yeah. No, I agree. Nokuthula, any thoughts on that framework? Do you see your member organizations moving up from level one to level four, and at some point, level five?

Nokuthula Lukhele:

Yeah. I think that the complexity that we deal with at the forum is that our members are at all those levels. And so, one way that we're trying to make this really valuable for everyone is that those who are at level four, are there lessons that they've learned? Are there challenges and pitfalls that they've overcome, that they could pass down to people on previous levels? Because a lot of the time, within the silos in my own company or in my own roles, there isn't that external support, that external advice, and that's where we provide this unique place where everyone, no matter which level you're at, can come together in a pre-competitive way and share lessons across industries, not just within my industry, within my company. And so this is pretty unique. But I do see the variations among the members that we interact with.

Fred Laluyaux:

Yeah, yeah. We can see it from our perspective across industries, across regions. The pressure is different. The one trend that we see that's undeniable, and I think you alluded to that earlier as well, is that it is an executive topic. It is a board-level discussion. We're engaging with the senior executives in organizations, and this trend of, "Oh yeah, that's an interesting topic, I'll push you down three, four levels," is stopping. I think now executives pay attention and see this as a critical survival issue.

Joe, you co-lead the Future of Work research project at Harvard Business School. What do you think cognitive automation means for the future of organizations, and for the future of work, in two minutes? Or three minutes.

Joseph B Fuller:

Well, two or three things come to mind, Fred. One is, cognitive automation is going to eliminate a lot of routine transactions, in companies and between companies, that are actually made more error-prone by human involvement. And so I think that the quality of processes, the timeliness of processes, the accuracy of processes is going to go up. And this type of automation is going to eliminate types of work that are not actually very appealing. Historically, we've thought of automation eliminating jobs that were dirty, dark, or dangerous: robots in painting booths in auto plants, or in coal mining and hazardous-waste cleanup. This is going to eliminate a fourth type of job, the dull jobs. The jobs that don't really call on human talent, the human's capacity to emote, to relate, the social skills of human beings.

The second might be a little bit more problematic for employment, and particularly for companies, in so much as I think companies are going to be increasingly looking for the same type of talent. Historically, if you were going to be in an oil and gas company, or a consumer goods company, or a professional services company, yeah, there were high-order management skills that all those companies wanted, but they wanted people with academic and technical backgrounds in their vertical markets and deep functional expertise. Did an oil and gas company really put a lot of weight on consumer marketing expertise like a packaged goods company would? Not so much.

However, digital technologies are a universal language. They're applicable across all industries, whether it's AI experts, data and analytics people, or digital marketing people. With the advent of these tools, people who understand these environments and can adapt to new tools, those become the skills of the future. And they're going to be relevant to everybody. So we've heard a lot about, well, people are going to move out of the big cities to beautiful lakeside villas and go to low-tax environments. That may be true, but at the same time, more and more companies are going to be looking for the exact same type of talent, which just means you're going to have unorthodox patterns of competition. Someone who might be viewed today as a logical hire in the IT department of a bank will be highly competitive for jobs at the Mayo Clinic, Apple, the NSA, Procter and Gamble, and that's going to change the competition for labor.

Fred Laluyaux:

Significantly.

Joseph B Fuller:

The final thing I'm going to say is, cognitive automation is going to cause a blurring of the lines between enterprises. Historically, there have been lots of walls in terms of decision rights and handoffs between companies to ensure the security of transactions. AI that's blockchain-based, with complete auditability of transactions, of what happened, is going to allow a reconfiguration of who does what for whom, when, and who has the decision rights to do things. And that can lead to just an unbelievable revolution in productivity.

Fred Laluyaux:

Fascinating, fascinating, Joe, thank you so much. It's time to wrap up. I mean, I wish we had another hour together, and that we could do this in person, but the circumstances are what they are. So I want to most sincerely thank you all for your invaluable insight. What a conversation. The three of you are also prolific researchers and thought leaders on the topic, as we've all witnessed today. And I'm sure that the audience would like to read some of your research, or listen to your podcast, Joe, as well. So, what I recommend you do is go to aeratechnology.com, under Resources, and you'll find the white papers, the research, and the links to all the work that you are producing.

A real pleasure talking to you all. Thank you very much for being our first guests for the first cognitive automation summit. Thank you.

Nokuthula Lukhele:

Thank you, Fred.

Ray Wang:

Thank you.

Joseph B Fuller:

Thank you, Fred.


Published:
September 24, 2020