AI and the Future: Volatility Uncertainty Complexity Ambiguity

The economist Frank Knight drew a distinction between risk and uncertainty: risk can be measured, but uncertainty cannot.

When risks are known, it becomes possible to make robust predictions. Uncertainty, on the other hand, poses unknown risks, which derail forecasts and lead to poor decisions.

We live in a VUCA world. Our existence is characterized by the four VUCA components: volatility, uncertainty, complexity, and ambiguity.

Society is subject to forces unlike those of any previous era. Reality is frequently hazy and easily misread. Change is driven by multiple social, political, economic, and technological forces, and is often abrupt and unpredictable.

What does this mean for engineers?

Ten years ago, only one in six people worldwide used the Internet, but today that fraction is one in two. About 3.8 billion people now use the Internet globally. Of them, 2.8 billion people use social media. Overwhelmingly, most use a mobile device to do so.

It is inevitable that as we become even more connected, so will business and industry. However unromantic and intrusive this might sound, it will be increasingly impracticable to ‘go off the grid’.

Today’s smart machines are typically driven by expert systems. These systems include software that enables decisions, e.g., to support a medical diagnosis or the operation of a smart grid. The engine of that software is based on if–then rules that it learns.

If this sounds like reasoning, it is. The reasoning of the software in a smart system is based on a library that contains certain facts (the ifs) and outcomes (the thens). As new knowledge is archived in this library, an inference engine in the software uses if–then rules to derive new facts, or ifs, and to suggest different outcomes, the ‘then what will happen’.
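The fact–rule–outcome loop described above can be sketched as a tiny forward-chaining inference engine. The rules and facts below are hypothetical illustrations invented for this sketch, not drawn from any particular expert system.

```python
# Minimal forward-chaining inference engine (illustrative sketch).
# Each rule pairs a set of required facts (the "ifs") with a conclusion (the "then").

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions are known and it adds something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # archive the new fact in the library
                changed = True
    return facts

# Hypothetical medical-diagnosis rules, in the spirit of the example above.
rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "fatigue"}, "recommend rest"),
]

print(infer({"fever", "cough", "fatigue"}, rules))
```

Note how the second rule fires only because the first one derived "possible flu": the engine chains inferences, which is what lets the library grow beyond what was explicitly entered.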

This is the basis of a class of artificial intelligence, a term which itself has now become all-embracing.

Siemens reports that the global market for smart machines is growing by almost 20 percent annually and will reach about $15 billion by 2019. Just as the Internet has connected us, connectivity is becoming the norm for smart machines. Expert systems currently make up the largest market fraction of smart systems, but their share will be overtaken by autonomous robots by 2024.

There are naysayers. Stephen Hawking called artificial intelligence “the worst event in the history of our civilization.” Elon Musk told Rolling Stone, “Climate change is the biggest threat that humanity faces this century, except for AI.”

Regardless, AI-enabled autonomous robots will proliferate for a simple reason: they will be inexpensive. Even as the number of robotic appliances continues to increase, the cost of sensors is decreasing. The global market for robotics sensors already exceeds $16 billion.

The proliferation of a diversity of smart systems based on interconnected artificial intelligence will lead to disruptive technologies and introduce more uncertainty into change.

Here’s the problem. A human brain is not a computer, and likewise computers, although capable of intelligent action, cannot reproduce the cognition and intelligence of our brains. Artificial intelligence algorithms are trained with known data. Consequently, their acquired if–then rules cannot anticipate or formulate rational decisions under uncertain, or unknown, circumstances.
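This brittleness can be illustrated with a toy example: a rule table built from known training conditions simply has no answer for a condition it has never seen. The conditions and decisions here are hypothetical, chosen only to make the point.

```python
# A rule table learned from known data (hypothetical training examples).
learned_rules = {
    ("sunny", "warm"): "open the vents",
    ("cloudy", "cold"): "close the vents",
}

def decide(condition):
    """Return a decision if the condition was seen in training, else None."""
    # Unknown circumstances match no learned rule, so the system cannot decide.
    return learned_rules.get(condition)

print(decide(("sunny", "warm")))   # a known condition yields a decision
print(decide(("snowy", "windy")))  # an unseen condition yields no decision at all
```

Real systems fail less transparently: rather than returning nothing, they often extrapolate confidently from the nearest known case, which is exactly the kind of unanticipated behavior the paragraph above warns about.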

Artificial intelligence methods have been developed over more than half a century. Their influence has ebbed and flowed, but now, through integration with pervasive connectivity and inexpensive sensors, AI is enabling significant technologies.

Nevertheless, today’s wave of AI is based on very primitive models of our brains. Sensors do not yet mimic how we perceive, computer memory cannot duplicate how we remember, and current if–then AI rules cannot truly duplicate how we act.

One could say that the AI algorithms that relate facts to outcomes, that is, the if–then rules, are the result of rigorous problem-based and experiential learning, but without any appreciation of the underlying physics.

Even so, AI has transitioned from a scientific advance into an engineering tool. Continuing innovation in an increasing number of domains is requiring engineers from all disciplines to learn how to integrate AI tools into their engineering designs.

Open-source tools, such as Amazon’s DSSTNE, Microsoft’s DMLT, and Google’s TensorFlow, contain software libraries that enable machine learning. Last week Google released DeepVariant, an open-source AI tool that provides a more accurate depiction of a person’s genome from gene-sequencing data than other methods.

Amazon’s Alexa and Apple’s Siri use natural language processing to make decisions. Oncologists are training IBM Watson to help them diagnose and treat lung cancer. Tesla and Google are competing to bring autonomous, self-driving cars to consumers. The Israeli company Zebra Medical Systems is developing tools for radiology with greater-than-human accuracy.

Engineers are responsible for training the software engines of smart systems. They do so by developing a variety of if–then rules for different applications. To have confidence in the AI-enabled product, whether it is a refrigerator or a car, they must therefore understand the difference between uncertainty and risk, and be able to account for volatility and complexity.

An uncertain technological future requires adaptable and resilient engineers who can see through the fog to create robust engineering designs based on AI. They must understand the capabilities and limitations of both their environments and the cognition afforded through AI. And, they must have the courage to make audacious but safe decisions.

Therein lies the challenge for engineering leaders and educators.

