Get real. AI has no ‘easy button’

Computer and information sciences are nothing new. Certainly, Artificial Intelligence (AI) isn’t new. In fact, several generations of humans have now been born into a world where computers, databases, information processing and artificial intelligence not only exist, but where the technology plays a major role in who gets a job, who wins a contract, or who becomes the next victim of various forms of malfeasance. The rate of change is historically unprecedented, yet sometimes, when things change and you are part of that change, it can be difficult to notice.

This thing we call AI

It is always a good idea to start with first principles that help us articulate what we believe. What are we talking about, and how will we know what it means? AI is definitely not something new that only recently became available. It has been around since at least the 1950s, depending on how far we want to stretch the definition. Early concepts included using computers to project human values, to serve humans, and so on.

The term “artificial intelligence” is brilliant. From a marketing perspective, it immediately makes us want something that can be intelligent for us. We want to know more about this sci-fi thing that we can now use for all sorts of purposes.

Unfortunately, the term is also extremely misleading. The sort of technology we are talking about is certainly artificial in the sense that it is embodied in computers with algorithms, but there is no intelligence. It’s basically math. The first form of artificial intelligence that most people encountered is commonly referred to as supervised machine learning. Basically, if you give a system enough information about the past, in a context that is reasonably stable and reflective of the environment it is meant to learn about, you can build a system that predicts what will happen next, as long as the immediate future looks enough like the past from which you built that ability to predict. At first, this approach sounds reasonable, and in fact it is, for systems that are mature, stable, and well understood.
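To make that idea concrete, here is a minimal sketch of the “learn from the past, predict the future” pattern. It uses NumPy and synthetic data of my own invention (both assumptions for illustration, not anything from a real system): a simple model fit to historical observations predicts well while the world still resembles those observations, and quietly degrades once the underlying relationship shifts.

```python
# Minimal sketch of supervised learning: fit to "the past", predict "the future".
# Assumes only NumPy and made-up synthetic data.
import numpy as np

rng = np.random.default_rng(42)

# "The past": historical inputs (x) and outcomes (y) from a stable process.
x_past = rng.uniform(0, 10, size=200)
y_past = 3.0 * x_past + 5.0 + rng.normal(0, 1.0, size=200)  # stable relationship

# "Learning": fit a simple linear model to the historical data (least squares).
A = np.column_stack([x_past, np.ones_like(x_past)])
slope, intercept = np.linalg.lstsq(A, y_past, rcond=None)[0]

def predict(x):
    return slope * x + intercept

# Case 1: the future looks like the past -> predictions hold up.
x_new = rng.uniform(0, 10, size=50)
y_stable = 3.0 * x_new + 5.0 + rng.normal(0, 1.0, size=50)
err_stable = np.mean(np.abs(predict(x_new) - y_stable))

# Case 2: the environment has shifted -> the same model quietly degrades.
y_shifted = 1.5 * x_new + 20.0 + rng.normal(0, 1.0, size=50)  # relationship changed
err_shifted = np.mean(np.abs(predict(x_new) - y_shifted))

print(f"mean error, stable world:  {err_stable:.2f}")
print(f"mean error, shifted world: {err_shifted:.2f}")
```

The specific numbers are arbitrary; the point is that nothing in the fitted model announces the shift. It keeps producing answers either way, which is exactly why the stability of the environment matters as much as the quality of the data.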

Unfortunately, we live in a time when many things are neither mature, stable, nor well understood. In fact, the pace of change is such that simply curating data is a challenge, because in many cases the environment is changing faster than the data that describes it. Confounding this challenge is the fact that data is now highly regulated, often monetised, and rarely available in complete form without significant expense and massive oversight requirements. The business of data is tricky, but this complexity does not stop many from continuing to try to use AI as a sort of ‘easy’ button.

Modern AI has come a long way from simple supervised machine learning. We now have systems that are cognitive, mimicking the behaviour of human experts in the way they approach problems, and generative (GenAI), creating their own content. These systems can be used not only to understand, but in some cases to push boundaries beyond what humans alone can achieve.

Why now?

We sit at the cusp of a number of phenomena that have combined quite conveniently to allow AI to take centre stage. First and foremost, there is simply enough data, enough compute power, and enough commercial demand to force evolution in the space. This technology can be used to save money, which is always a huge driver of adoption. Technology can be used to replace people doing similar tasks, which has both benefits and risks. Arguments abound regarding job displacement due to AI, while strong counterarguments hold that, with the right focus on training and on future needs, we can free humans to do the things that uniquely require humans.

Myth or reality?

Like anything that becomes popular in culture, there are shared myths and inconvenient truths. One common fear is that we will somehow be overwhelmed by the technology. Certainly, it is possible that we will cause harm to humans by allowing AI to take on tasks beyond the scope of what was contemplated. This is not science fiction. In the rush to adopt technology, especially to either save money or make money, shortcuts are taken. These shortcuts can have serious implications.

It might be helpful to state some of these fears out loud. “I will never report to a robot.” However, let’s consider this statement more realistically. I already take direction from various forms of AI when I turn left because the guidance system told me to, or when I click on a link suggested by a recommendation engine. Data is being curated and interpreted for me throughout the day, to save me the trouble of looking at maps or to tame the overwhelming choices I face doing research or even just reading the news. However, this convenience should not come at the cost of my own critical thinking. I will not turn the wrong way down a one-way street even if the system tells me to do so. When I read the news, I still need to ask whether it seems reasonable, and I still need to look at multiple sources to get different perspectives. AI does not take away the responsibility to think critically.

Another common fear is that AI creates a world where there is basically an app for everything: let’s just use AI to do it for us. In some cases this might be helpful. For example, to prepare for an interview, I might use AI to summarise recent advances in regulation around AI. In fact, I have done this. The irony does not escape me that I am using AI to study itself. Nevertheless, when I get the answer, I am very aware that something is missing. Certain parts of the world are less represented and therefore don’t necessarily come out in the machine-generated results. There is no substitute for asking a good question, for understanding what might be missing, and for challenging the fact that certain types of information are more represented in social media and print than others. Nuance and subtlety are often where the difference lies.

Dirty deeds

Why would we assume that all technology, or any technology, would only be used for positive outcomes? AI is currently at the heart of new types of ransomware, misinformation and disinformation campaigns, and many other forms of subtle and overt influence. The technology is used by fraudsters to support illicit activities, and even to improve the efficiency of funding those activities. The simple reality is that we cannot ignore the technology: if we do, we will lose ground faster to those who are certainly using it in unintended and harmful ways.

Looking ahead

So what can we consider as guiding principles to help us navigate this time of great change, as AI becomes more democratised and more mainstream?

  • Be realistic. There is no ‘easy’ button. You can’t just throw GenAI at every problem and make things better. Not all data contains the ingredients of a solution to the problem. Sometimes the world is changing faster than the data. Sometimes the data has been manipulated, intentionally or unintentionally. Always question why the method is appropriate and what problem you are actually addressing.
  • Be humble. The space is simply too complex, and changing too fast, for anyone to know all of it. It is extremely important to widen the circle and get advice from outside the organisation to make sure that you are being as inclusive as possible of new learning, new capabilities, and new risks.
  • Keep your head up. The amount of change and disruption in our environment is increasing at a rate that we cannot understand fully because we are part of that change and the change itself is begetting change. Even when you set out on a new initiative involving the best AI, carefully considered, with a wonderful upside potential, the environment can change in ways that make it irrelevant or inadvisable before you finish.

There is much to be gained from challenging our beliefs and learning from one another. It is only by having critical dialogue, collectively asking why we do what we do and what would have to be true for that to be the best course of action, that we will navigate the risks and opportunities of AI in the modern enterprise.

Anthony Scriffignano, Ph.D.

Anthony Scriffignano is an internationally recognized data scientist with experience spanning over 40 years across multiple industries and domains. Dr. Scriffignano has an extensive background in linguistics and advanced algorithms, leveraging that background as primary inventor on multiple patents worldwide. He is a Distinguished Fellow at The Stimson Center and previously served as Chief Data Scientist at Dun & Bradstreet for over 20 years.
