Why AI will never be perfect

An examination of how and why AI has the same pitfalls humans do

Chris
5 min read · Sep 18, 2016

I see this a lot. People tend to think that because computers are machines that can perform nearly perfect logical and mathematical operations, anything that runs on them, including AI, must inherit that same perfection. This leads people to believe that AI will be perfect and incapable of logical errors.

This is not the case. Modern AI learns by itself, from many examples, and can only ever practically approximate what we want it to learn, which leaves room for error. This happens for many different reasons, but a few are worth exploring.

One of the most common ways AI produces errors occurs when there isn't enough data for it to properly learn the underlying generalization. Think of it this way: if I wanted an AI to learn to distinguish between a cat and a dog, I would show it a bunch of pictures of cats and dogs and constantly adjust it to better predict which animal each picture shows. It is possible there is a kind of cat that looks similar to a dog and that I never showed it, so it was never able to learn that this kind of cat is actually a cat; it may therefore mistakenly predict it is a dog. This is a symptom of the AI failing to project the generalization of a cat it learned from my examples onto a cat it had never seen before, hence the error of predicting a dog.
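
Here is a minimal sketch of that failure mode, using made-up 2D "feature" points rather than real pictures (the data, clusters, and classifier are all hypothetical stand-ins, not anything from a real cat/dog model):

```python
# Minimal sketch: a tiny logistic-regression classifier trained on made-up
# 2D feature points. A cat whose features fall in a region the training
# set never covered lands on the wrong side of the learned boundary.
import numpy as np

rng = np.random.default_rng(0)

# Training data: cats cluster around (1, 1), dogs around (3, 3).
cats = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))
dogs = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 50 + [1] * 50)           # 0 = cat, 1 = dog

# Plain gradient-descent logistic regression.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of "dog"
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# An unusual, dog-like cat the model never saw during training.
odd_cat = np.array([2.8, 2.9])
prob_dog = 1.0 / (1.0 + np.exp(-(odd_cat @ w + b)))
print(f"P(dog) for the unseen dog-like cat: {prob_dog:.2f}")  # close to 1: misclassified
```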

Having too small a dataset can cause overfitting, but overfitting can happen with large datasets too. Overfitting is when an AI starts to lose its ability to generalize a learned solution to other valid examples, producing errors. A good way to think of this is that the AI starts to memorize the dataset rather than learn abstract patterns in the data. This, again, results in errors when it tries to predict something it hasn't seen before. Overfitting is usually a symptom of having too small a dataset, but not necessarily. If an AI with a sufficiently large dataset is left to learn the data for too long, and isn't stopped at the point where it can already predict the output well enough, it can overfit as well.
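
A toy illustration of that "memorizing" behavior (toy data invented for the example, not anything from the post): as model capacity grows, training error keeps shrinking while error on held-out data starts to climb. Stopping at the point where held-out error is lowest is the same idea as stopping iterative training early.

```python
# Minimal sketch of overfitting as memorization: higher-degree polynomials
# fit the training points ever more closely while predicting held-out
# points ever more poorly.
import numpy as np

rng = np.random.default_rng(1)

def noisy_wave(x):
    return np.sin(x) + rng.normal(scale=0.25, size=x.shape)

x_train = np.linspace(0, 6, 15)
y_train = noisy_wave(x_train)
x_val = np.linspace(0.2, 5.8, 15)           # held-out points the model never fits
y_val = noisy_wave(x_val)

for degree in (1, 3, 9, 13):
    coeffs = np.polyfit(x_train, y_train, degree)   # fit polynomial of given capacity
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, held-out MSE {val_err:.3f}")
# The high-degree fits drive training MSE toward zero while held-out MSE blows up.
```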

As I’ve explained before, in another post, most modern AI takes the form of some kind of neural network, such as those used in deep learning. At bottom, these are just large equations. So, yes, it is true that, within the error of fixed-precision floating-point math, the computation of the neural network’s equation is logically correct, but this should not be confused with what the computation represents, which is simply the output in terms of the input (a cat being represented by a picture of a cat, for example). The representation is the part that can be incorrect.
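
To make the "just a large equation" point concrete, here is a sketch of a two-layer network as nothing more than nested matrix multiplications and nonlinearities (the weights, input, and two-class output are all hypothetical, untrained stand-ins). The arithmetic below is exact up to floating point; whether the output actually means "cat" is a separate question about what the weights represent.

```python
# Minimal sketch: a neural network's forward pass is a single composed equation.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical, untrained weights: stand-ins for whatever training would produce.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def network(x):
    h = np.maximum(0.0, W1 @ x + b1)                 # hidden layer: ReLU(W1 x + b1)
    logits = W2 @ h + b2                             # output layer: W2 h + b2
    return np.exp(logits) / np.sum(np.exp(logits))   # softmax over {cat, dog}

x = np.array([0.5, -1.2, 0.3])                       # a made-up three-number "picture"
print(network(x))                                    # e.g. [P(cat), P(dog)]
```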

The reason AI can err is the same reason humans can, and frequently do. If you think about it for a moment, our brains too can be thought of as large equations; they just work a little differently. Instead of using abstract numbers and variables as the medium for their representations (as neural networks do), our brains compute using the physical medium of biology. Biology is emergent from chemistry, and chemistry is emergent from physics. Physics appears predictable and follows certain patterns of particle interaction. This makes chemistry predictable and pattern-following, and in turn makes biology predictable and pattern-following. What I’m trying to show you is that this logical predictability (computability) is never lost, however complex the system becomes. For example, if you could replicate a cell and its environment exactly, to every detail, down to the smallest level of physical reality, it would produce exactly the same behavior as the original. This is equivalent to producing exactly the same result from an equation, like a neural network. So now you see: just because our brains run on logically coherent biological systems doesn’t mean they’re infallible. It’s what their medium of computation represents that can be fallible, just as with neural networks and AI.

If you dive deep into AI, you can see the same pattern emerge. What do we compute neural networks on? A computer. What does the computer use to perform its computations and calculations? Ultimately, chemistry and physics. Notice, though, that there is an extra step: the representations a neural network learns rely on mathematical patterns, and those in turn are represented by the carefully designed mechanisms of a computer. This might imply that our brains use a similar intermediary representation, but let’s save that discussion for another time.

In conclusion, you can see that computers and our brains are not so different: both are subject to the logically correct computational patterns that emerge from the systems they’re built on, and both can err because what they represent can be incorrect, not the underlying systems. Coming full circle, back to the beginning, these representations are erroneous primarily because they don’t properly represent what they’re supposed to, typically because they aren’t general enough to encompass the ground truth they should capture. Just as our human brains perfectly play out the laws of biology, so too do neural networks perfectly play out the laws of mathematics, yet both are prone to represent things that may not be fully correct in terms of what they should represent.

*A small side note about the assumption that physics is computable (precisely predictable). Some of you who know physics well might point out that this assumption isn’t entirely well grounded, given observed results from quantum physics and, in general, our lack of a comprehensive understanding of how fundamental reality ultimately works. As true as that may be, it doesn’t really change the argument either way. Whether or not physics is predictable doesn’t matter much, because both systems depend on and emerge from the same underlying physical system, so whatever properties one has are transitively true of the other for the same reason. Regardless, the fact that we don’t understand why some physical phenomena seem to lack patterns doesn’t mean they don’t follow some much deeper pattern we aren’t yet aware of. Take chaotically emergent behavior and complex cellular automata, for example, which show that seemingly random and highly complicated behavior is in fact representable by extremely simple rules.
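
As a quick, hedged illustration of that last point (an elementary cellular automaton chosen by me, not referenced in the post): Rule 30's update rule fits in one line, yet the rows it produces look effectively random.

```python
# Minimal sketch: Rule 30, a one-dimensional cellular automaton whose simple,
# fully deterministic rule generates highly complex-looking behavior.
import numpy as np

RULE = 30
rule_bits = [(RULE >> i) & 1 for i in range(8)]   # output for each 3-cell neighborhood

width, steps = 64, 32
row = np.zeros(width, dtype=int)
row[width // 2] = 1                               # start with a single live cell

for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    left, right = np.roll(row, 1), np.roll(row, -1)
    neighborhood = 4 * left + 2 * row + right     # encode each triple as 0..7
    row = np.array([rule_bits[n] for n in neighborhood])
```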


Written by Chris

I’m a computer scientist, software developer, and philosopher who is extremely interested in modern artificial intelligence and machine learning.
