
Reduce AI Bias and Hallucination Using Control Theory

4 min read
DeepMake

While AI is one of the hottest trends of the year, it has a dirty little secret: It lies. Okay, it's not really a secret. If you've worked with an AI chatbot like ChatGPT or a text-to-image generator like Stable Diffusion, you've probably seen it hallucinate, or generate a result that was completely unexpected or even blatantly untrue. 

Hallucinations happen because text-generating AIs are designed to string together the words most likely to follow a prompt, not to assess whether the answer is true. These models are merely predicting which word comes next in a sequence. It doesn't help that many AI models are trained on huge data sets scraped from the internet --- much of which is opinion, conjecture, or just plain untrue. And even when the training data is accurate, these models mix and match information and draw whatever conclusion seems most useful for a prompt. Sometimes you get a good result, and sometimes you get garbage.

It gets really wild with large AI models that were trained on huge, unbounded data sets. Think of ChatGPT, which was trained on one of the largest text data sets available, or Stable Diffusion, which was trained on billions of images. With nearly limitless data, these models have plenty of opportunities to draw incorrect conclusions or deliver biased results.

Luckily, there are ways to reduce hallucination in AI. One of them is an old mathematical discipline called control theory: the practice of constraining the inputs to a system so that its output stays within desired parameters. Using control theory, you can set boundaries on the domain of a model so it's not able to hallucinate.
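
To make the idea concrete, here is a minimal sketch of classic feedback control in Python. Every name and constant is invented for illustration: a proportional controller measures the gap between the current state and a setpoint, then applies a corrective input that is clamped to a safe range, so the output can never run away.

```python
# Minimal feedback-control sketch: a proportional controller with a bounded
# command. All names and constants are illustrative, not from any real system.

def clamp(value, low, high):
    """Keep a command inside the allowed bounds."""
    return max(low, min(high, value))

def run_controller(setpoint, state=0.0, gain=0.5, steps=20):
    history = []
    for _ in range(steps):
        error = setpoint - state                   # how far we are from the target
        command = clamp(gain * error, -1.0, 1.0)   # corrective input, bounded
        state += command                           # the "system" responds to the input
        history.append(state)
    return history

print(run_controller(setpoint=5.0))  # the state climbs toward 5.0 and stays in bounds
```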

Before we go on, let's clear up something: Control theory is not the same as asking ChatGPT to provide accurate, fact-based answers. If you've tried this before, you may have noticed that ChatGPT just hallucinates even more --- but with more confidence. For control theory to work, restrictions need to be designed as a part of the model itself.

One place control theory has proven itself is with factory robots. Robots on an assembly line could be programmed with an infinite number of movements, but most of those movements wouldn't be applicable to the job at hand and could even cause damage to the robot or its surroundings. To keep the robots efficient and productive, engineers use control theory to restrict their range of motion so they can't make a move that would bring them into contact with something around them.

For example, a robot that needs to make two welds six inches apart is limited physically, or through low-level firmware, so it can't move in a way that could damage itself or anything around it. The engineer only has to worry about giving the robot the tasks it needs to perform, as in the sketch below.
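
Here is a toy version of that split between the task layer and the safety layer. The axis names and limits are made up: the task code can request any position, but a firmware-style guard clamps every move to the permitted workspace.

```python
# Toy illustration of the welding-robot idea: tasks can request anything,
# but a lower layer enforces the allowed workspace.

WORKSPACE = {"x": (0.0, 6.0), "y": (0.0, 2.0)}  # inches the arm may reach (made up)

def constrain(target):
    """Firmware-style guard: never move outside the permitted workspace."""
    return {axis: max(lo, min(hi, target[axis]))
            for axis, (lo, hi) in WORKSPACE.items()}

def weld_at(target):
    safe = constrain(target)
    print(f"welding at {safe}")

weld_at({"x": 0.0, "y": 1.0})   # first weld
weld_at({"x": 6.0, "y": 1.0})   # second weld, six inches away
weld_at({"x": 40.0, "y": 1.0})  # a bad task input is clamped, not executed
```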

Applying control theory to LLMs is similar: Developers set limits in the AI model's programming, in its training data, or in the inference process, where the trained model is applied to new inputs. These limits return more accurate, focused results and help reduce the hallucinations that large AI models like ChatGPT are prone to.
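
One inference-time version of this, sketched below with an invented five-word vocabulary and made-up scores, is to mask out any next-token candidates that fall outside an allowed domain before choosing one. Real systems apply the same idea at a far larger scale.

```python
import numpy as np

# Toy constrained decoding: disallowed tokens are masked out before selection.
vocab = ["yes", "no", "maybe", "purple", "42"]
logits = np.array([2.0, 1.5, 0.5, 3.0, -1.0])   # raw model scores (invented)
allowed = {"yes", "no", "maybe"}                 # the domain we permit

mask = np.array([0.0 if tok in allowed else -np.inf for tok in vocab])
constrained = logits + mask                      # out-of-bounds tokens get -inf

probs = np.exp(constrained - constrained.max())
probs /= probs.sum()
choice = vocab[int(np.argmax(probs))]
print(choice)  # "yes" -- "purple" had the highest raw score but was out of bounds
```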

If you're wondering why we're so sure this will work, it's because we're doing it. FaceSwap, the open source software that DeepMake grew from, does this by making the output of the image generator heavily predicated on the input. We use the input image --- at a low resolution --- to "control" the output. This helps prevent the model from inserting details that don't belong in the exact context, such as sunglasses that randomly appear and disappear. The restrictions help FaceSwap be better at its job than it would be without them.
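
This is not FaceSwap's actual code, but the rough idea of anchoring a generator to a low-resolution input looks something like the following: the coarse structure of the output is pulled toward the coarse structure of the reference, while fine detail is left to the generator.

```python
import numpy as np

# Illustrative only: anchor a generated image's coarse structure to the input
# so large-scale content can't drift, even if fine details are generated freely.

def downsample(img, factor=4):
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor=4):
    return np.kron(img, np.ones((factor, factor)))

def anchor_to_input(generated, reference, strength=0.7):
    """Pull the generated image's low-resolution structure toward the reference."""
    coarse_ref = upsample(downsample(reference))
    coarse_gen = upsample(downsample(generated))
    return generated + strength * (coarse_ref - coarse_gen)

reference = np.random.rand(64, 64)   # stand-in for the input frame
generated = np.random.rand(64, 64)   # stand-in for a raw generator output
controlled = anchor_to_input(generated, reference)
```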

You don't have to take our word for it either! Other developers are applying control theory to their applications to stop hallucinations or better direct AI results. ControlNet is a way to restrict the output of Stable Diffusion so it better matches some aspect of an input image. On the text side, NeMo Guardrails is an open source toolkit that attempts to control the output of LLMs such as ChatGPT --- though it hasn't yet had the chance to mature.
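
As a rough illustration of what this looks like in practice, here is a sketch of running ControlNet with the Hugging Face diffusers library. The model identifiers, file names, and settings are our assumptions and may differ depending on your diffusers version; the point is that an edge map extracted from an input image constrains what Stable Diffusion is allowed to generate.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a ControlNet trained on Canny edge maps and attach it to Stable Diffusion.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# "edges.png" is a placeholder: an edge map extracted from whatever image you
# want the result to follow. The generation is constrained to match its structure.
edge_map = Image.open("edges.png")
result = pipe("a portrait photo, studio lighting", image=edge_map,
              num_inference_steps=20).images[0]
result.save("controlled_output.png")
```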

Control theory has been around for many years, but it might just make its biggest splash in the world of AI, keeping AI focused on the hard problems that will make the world better. To that end, we encourage everyone working in AI to examine not just control theory, but all the old techniques that might be well suited to this new frontier.