Do We Need Policies to Regulate Modern AIs Right Now?

By Hersh Sanghvi

This is the fourth post in a series about artificial intelligence, along with its uses and social/political implications.

For a long time, Machine Learning (ML) and Artificial Intelligence (AI) have remained firmly in the domain of research applications and science fiction. This is changing thanks to the emergence of a huge variety of AI devices, from smartphones and wearables to robots and autonomous drones. Many companies are working on AI hardware and software as a way to help businesses and organizations create better, smarter solutions, like personal assistants and self-driving cars. As it advances, AI has begun to have an impact on everyday life. We are starting to see more people using AI-powered devices to help their daily lives. The question is: will it help or hurt us?

The previous paragraph, except the first sentence, was written entirely by a special kind of ML model called a “language model”. I typed the first sentence and then asked the model to suggest each subsequent phrase, chaining the suggestions into a coherent piece of writing. This kind of performance is remarkable; just a decade ago, even the most advanced natural language processing (NLP) algorithms struggled to perform at levels we would consider mediocre today. Many of these advances have come from a sub-field of ML called “deep learning”, which uses exceedingly complex models to find complicated patterns in very large datasets. In the past decade, AIs have moved closer to the vision of intelligent machines articulated by Frank Rosenblatt, a pioneer of ML, in 1958: “an electronic computer that…will be able to walk, talk, see, write, reproduce itself and be conscious of its existence”.
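For the curious, here is a minimal sketch of that chained-suggestion process. I haven’t named the exact tool that produced the opening paragraph, so the library (Hugging Face’s transformers), the GPT-2 model, and the sampling settings below are illustrative assumptions rather than the actual setup.

```python
# A rough sketch of "chained" text generation: repeatedly ask a language
# model for a short continuation and feed the growing text back in.
# The model (GPT-2) and sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

text = ("For a long time, Machine Learning (ML) and Artificial Intelligence (AI) "
        "have remained firmly in the domain of research applications and science fiction.")

for _ in range(4):
    # The pipeline returns the prompt plus a continuation; keep it and repeat.
    text = generator(text, max_new_tokens=25, do_sample=True,
                     temperature=0.9, pad_token_id=50256)[0]["generated_text"]

print(text)
```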

Of course, despite all these advances, we are still very far from that goal today. However, the successes of deep learning have enabled systems that seem to truly see and understand what is in an image (this is called computer vision, or CV), and to read, write, and comprehend language. We’ve seen an explosion in the use of image processing models, and language tools like GPT-3 from OpenAI are being used to automate the writing of news articles and more, as I did above.

Despite recent progress, current AI still has many limitations that can make its widespread use dangerous. In light of this, it’s important to understand what these limitations are and ultimately how we as a society should take action. In particular, we’ll look at two exciting, consumer-facing applications of AI: CV and NLP. To get more insight into this topic, I interviewed Dr. CJ Taylor, professor of Computer and Information Science here at UPenn, whose research focuses on computer vision and robotics.

(As a note, when I use the term “AI”, I’m referring to “Narrow” or “Weak” AI, which is designed with limited capabilities, such as a program designed only to caption images. On the other hand, robots in science fiction like C-3PO from Star Wars have “General” AI, which more closely mimics full human intelligence and consciousness.)

In order to understand just how groundbreaking modern vision and language models are, we must look at how the state of the art has evolved over the years. Both fields have existed since the mid-1900s, before personal computers were widespread. For a long time, programs in both fields were mainly "hand-engineered", with engineers specifying salient patterns, such as rules for ordering parts of speech. Combinations of those patterns would be used for tasks like automatic translation or character recognition. As Dr. Taylor explains, these hand-engineered approaches are generally able to handle tasks where the inputs to the program are tightly controlled and known ahead of time. They can inspect products in a factory or recognize handwriting, situations where the camera angle and the image being inspected are nearly constant. However, they struggle with inputs their designers hadn’t anticipated, like different lighting conditions or camera angles. An example of this is shown on pages 25 and 26 of this report, with the easiest and hardest images of sunflowers to classify from the ImageNet challenge in 2010. The easiest images to classify are in bright sunlight with sunflowers centered in the image. Conversely, cloudy conditions, a zoomed-out camera, and other distractions in the image cause the models to struggle.
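To make that brittleness concrete, here is a toy illustration of what a hand-coded rule might look like. This is not how the actual ImageNet-era systems were built; the thresholds and the fake images below are made up purely to show why a fixed rule breaks when the lighting changes.

```python
# Toy "hand-engineered" classifier: call an image a sunflower if enough of
# its pixels are bright yellow. This is NOT a real ImageNet-era system; it
# only illustrates why hand-coded rules break under different lighting.
import numpy as np

def looks_like_sunflower(image_rgb, yellow_fraction=0.15):
    """image_rgb: H x W x 3 array with values in 0-255."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    # Hand-picked thresholds for "bright yellow" pixels.
    yellow = (r > 180) & (g > 150) & (b < 120)
    return yellow.mean() > yellow_fraction

# A sunny, centered sunflower passes the rule...
sunny = np.zeros((100, 100, 3), dtype=np.uint8)
sunny[30:70, 30:70] = [230, 200, 40]       # big bright-yellow blob
print(looks_like_sunflower(sunny))          # True

# ...but the same flower under cloudy (darker) light fails it.
cloudy = (sunny * 0.6).astype(np.uint8)
print(looks_like_sunflower(cloudy))         # False
```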

Recently, major advances have been made toward fixing the problems of the older models. A breakthrough came in 2012 with the introduction of a deep neural network whose task was to recognize categories of objects in images, like the sunflower example mentioned above. This network’s ability to correctly classify many categories of images across a wide array of conditions showed that it was possible to efficiently train a very complex network without simply memorizing the training data. Unlike the hand-engineered models we talked about, these deep learning models don’t require making restrictive assumptions about the data. Instead, they learn to recognize patterns in images and language by analyzing millions of examples. Today, after years of research into the applications of deep learning, these networks are being used to reliably label and caption images, answer questions about images, translate phrases from one language to another, recognize objects in images, and much more. Consumer-facing examples you might have heard of include voice assistants like Siri, cameras that recognize faces, and automatic translation tools.
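To give a sense of how little code this takes today, here is a minimal sketch of labeling an image with a pretrained deep network. The choice of torchvision’s ResNet-50 and the file name “sunflower.jpg” are illustrative assumptions, not details from the examples above.

```python
# Minimal sketch: label an image with a pretrained deep network.
# The model (torchvision's ResNet-50) and the image path are illustrative.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()          # resizing/normalization the model expects

image = Image.open("sunflower.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][top_class.item()], f"{top_prob.item():.1%}")
```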

Dr. Taylor points out that a lot of these recent uses of AI are in scenarios where a 99% success rate can be considered sufficient, or where there is a human in the loop to correct the AI’s mistakes. When Siri misunderstands someone with an accent, generally the worst thing that happens is that they have to repeat the command. However, when autonomous systems rely on vision and language AI for more safety-critical applications, the consequences can be much more severe, ranging from a self-driving car running a stop sign to a news-writing bot publishing incorrect information to a facial recognition system causing a false arrest.

A major limitation of modern AI is that it is very inefficient to train. AIs for self-driving cars must drive millions of miles to collect enough training data to be considered truly road-ready. For comparison, the average American drives roughly 13,500 miles per year, meaning a single driver would need about 74 years to log one million miles, and in many ways humans are still more reliable drivers than AI. Current state-of-the-art AIs are generally much less efficient at learning than humans, and consequently require a lot of training data. This also means that AI struggles to learn patterns that show up infrequently in the data. Although most tasks (driving, text completion, image recognition, etc.) are very repetitive, there are many infrequent edge cases that require wildly different behavior from the norm. Without enough data on these scenarios, current AI cannot use “common sense” to come up with a logical solution. Instead, it is likely to output wildly incorrect answers.

Adding to these issues is the fact that our current state-of-the-art deep learning models are essentially black boxes. It is very difficult to understand why, for example, a question-answering AI gave the answer that it did, so we can’t really know whether our AIs are simply memorizing the training data or learning meaningful patterns. One common criticism of GPT-3 is that it simply outputs correct-sounding text without any logical or mathematical reasoning, such as returning realistic-sounding but dangerously incorrect medical advice. Another consequence of this lack of transparency is that AIs are susceptible to adversarial attacks: the very patterns an AI relies on to interpret data can be exploited to modify its inputs so that it gives incorrect results. For example, researchers found that some carefully placed, yet realistic-looking, graffiti caused a vision model to misclassify a stop sign as a speed limit sign. This unpredictable behavior makes it difficult to certify the safety of these automated systems.
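The simplest adversarial attacks follow directly from how these models are trained. Below is a hedged sketch of the classic “fast gradient sign” method, which nudges each pixel slightly in whichever direction increases the model’s error. It is a different (and simpler) attack than the stop-sign graffiti study, and it reuses the model and batch variables from the classification sketch above; the epsilon value is an illustrative assumption.

```python
# Sketch of the "fast gradient sign" adversarial attack: compute the gradient
# of the loss with respect to the input pixels, then step each pixel by a
# tiny amount in the direction that increases the loss. Reuses `model` and
# `batch` from the earlier classification sketch; epsilon is illustrative.
import torch
import torch.nn.functional as F

def fgsm_attack(model, batch, true_label, epsilon=0.01):
    batch = batch.clone().requires_grad_(True)
    loss = F.cross_entropy(model(batch), true_label)
    loss.backward()
    perturbed = batch + epsilon * batch.grad.sign()
    return perturbed.detach()

# Usage (label index is illustrative): perturb the image away from its label.
# adversarial = fgsm_attack(model, batch, torch.tensor([top_class.item()]))
# model(adversarial).argmax(dim=1)  # often no longer the original prediction
```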

Furthermore, because AI learns patterns from large datasets, any biases present in the data (often created by humans with their own inherent biases) will appear in the AI model, sometimes in exaggerated ways. For example, facial recognition systems are often more likely to misclassify gender and racial minorities. This comes down to a variety of factors, including the fact that a majority of faces in common datasets belong to white, male individuals, making facial recognition AIs less reliable on people with different skin tones or facial structures. One example of a situation where these biases cause significant real-world harm is the COMPAS algorithm, which is used by US judges during sentencing to assess an individual’s risk of recidivism. An external audit of COMPAS revealed that it greatly overestimated the risk of recidivism for black inmates while downplaying the risk for white inmates. As we covered before, GPT-3 is used to write news articles and other documents, but because it is trained on data scraped from the internet (the only data source large enough to train this incredibly complex model), it can propagate harmful cultural biases (such as anti-Muslim bias) and gender biases. Although engineers can try to correct these biases in both vision and language models, doing so often requires extensive effort, does not always work, and sometimes introduces other biases into the model.
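The kind of disparity the COMPAS audit measured can be expressed in a few lines: compare an error rate (here, the false positive rate) across groups. The data below is entirely fabricated for illustration; it is not the audit’s data or methodology.

```python
# Sketch of a per-group disparity check: how often were people who did NOT
# reoffend still flagged as "high risk" (the false positive rate), split by
# group? The data is fabricated for illustration -- it is NOT from COMPAS.
import pandas as pd

predictions = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "reoffended": [  0,   0,   1,   0,   0,   0,   1,   0],  # actual outcome
    "flagged":    [  1,   1,   1,   0,   0,   0,   1,   0],  # model's "high risk" label
})

def false_positive_rate(df):
    negatives = df[df["reoffended"] == 0]          # people who did not reoffend
    return (negatives["flagged"] == 1).mean()      # ...but were flagged anyway

# Overall accuracy can look fine while error rates differ sharply by group.
for group, rows in predictions.groupby("group"):
    print(group, false_positive_rate(rows))
```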

These flaws show that even an effective AI can cause significant harm, but it’s important to note that AI has only just started being used in real-world applications where safety is critical, and the technology moves far faster than legislation. While deep learning evolves at this rapid pace, trying to legislate specific technical aspects of AI would be like trying to hit a moving target. As Dr. Taylor explains, it would also be difficult to institute ironclad standards of safety or fairness for automated systems, as there is still heated debate over what exactly safety and fairness mean and how they can be quantified. Some argue for a temporary ban on commercial use of AI, but these technologies have a lot of potential to help people, and Dr. Taylor points out that safe interaction between people and these systems is critical for identifying their flaws. To limit the potential dangers of this fledgling technology while maximizing its usefulness, there are a few immediate actions that should be taken:

  • Launching initiatives to increase equity in computer science research. One of the best ways to reduce bias in datasets is to support AI developers who reflect the larger makeup of society. Although this won’t automatically fix the technical issues behind bias in deep learning and AI, it’s critical that the people creating and researching these technologies are thoughtful about their possible biases.

  • Passing legislation such as the Algorithmic Accountability Act, which requires corporations to assess the impact of their AI systems on consumers while stricter standards of fairness and safety are ironed out. While this Act focuses on privacy and security concerns, it could be amended to also require an assessment of how well an automated system responds to unexpected disturbances.

  • Encouraging dataset transparency. The performance of modern AIs is highly dependent on the data on which they are trained. Allowing audits of the datasets used in commercial AI would therefore give independent overseers a way to ensure that bias is not being introduced and that the dataset covers the range of situations the AI might need to respond to in real life. If consumer data is used in these datasets, care would have to be taken to protect people’s privacy.

These three options only cover a small subset of the problems with AI systems, and of possible AI legislation. For example, the safety certification of self-driving cars or warehouse robots, both of which rely on CV AI, wouldn’t be covered by any of the above options. But AI as a field has only recently entered scenarios where systems are actually interacting with the world. Dr. Taylor notes that we now take for granted the fact that cameras can recognize faces, or that computers can understand language, even though these seemed to be insurmountable challenges only a few years ago. One of the most exciting developments in the field has been seeing AIs that could previously only handle carefully managed environments now reacting to the messiness of the real world. Learning from these failures will help improve the fairness and reliability of AI in the future. In the meantime, we must take care to learn the right lessons and limit the harmful side effects of exploring AI.