AI is 73 years old and still can’t drive – here’s why


I first learned to drive when I was around 10 years old and had a driver’s license by the time I was 16. AI is now about 73 years old and still can’t drive. What gives?

There are a lot of gadgetry-related reasons for AI not being able to drive a car.

Sensors aren’t good enough in all situations, the processing power required is more than a car can carry, response times can be slower than a human brain’s (even with a supercomputer), AI isn’t capable of making ethical decisions, and so on. There are plenty of technological roadblocks keeping artificially intelligent cars off the road.

But the biggest hurdle isn’t sensors or tech or even legal questions or ethics. It might be the fact that we don’t actually know how AI thinks.

A recent study by Anthropic found that AI chatbots can hallucinate, aren’t really good at doing basic math, and often lie about their own reasoning. Just like humans. It turns out, we know a whole lot of nothing about how brains work, AI or human.

As with the human brain, there are a lot of theories about how AI functions. “We think it does this because of that” sort of explanations. Like most of science, it’s just a best guess.

This can be put to the test pretty simply. I typed “57+92” into ChatGPT. It responded with “57+92=149,” which is correct. But when I asked “How did you get that answer?”, it gave me the standard textbook reply: “I added the ones and then the tens and combined the results.”
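If you want to poke at this yourself, here is a minimal sketch of the same two-prompt exchange using the OpenAI Python client. The model name is just a placeholder I picked, not a claim about which model sits behind the ChatGPT website, and you would need your own API key:

# Minimal sketch: ask for the sum, then ask the model to explain itself.
# Assumes the openai Python package and an API key in OPENAI_API_KEY;
# "gpt-4o-mini" is a placeholder model name, not ChatGPT's actual backend.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "57+92"}]

answer = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(answer.choices[0].message.content)  # e.g. "57+92=149"

history.append({"role": "assistant", "content": answer.choices[0].message.content})
history.append({"role": "user", "content": "How did you get that answer?"})

explanation = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(explanation.choices[0].message.content)  # the textbook "ones and tens" story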

On that, ChatGPT lied.

That is not how the AI comes up with the answer. AI is a computer, and like any advanced calculator, it just shoves the 1s and 0s together and spits out a total. Like so:

    111001 (57)
+  1011100 (92)
= 10010101 (149)
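You can check that arithmetic with a few lines of Python. This is a minimal sketch of the binary conversion, not a claim about how any particular chatbot is wired inside:

# The same sum, the way a calculator sees it: binary in, binary out.
a, b = 57, 92
print(bin(a))              # 0b111001
print(bin(b))              # 0b1011100
print(bin(a + b))          # 0b10010101
print(int("10010101", 2))  # 149, back to decimal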

I know this because I have a Computer Science degree from 1994. I was there, Frodo, 2,000 years ago … when we had to learn binary and ASCII, and when BASICA came with the operating system. AI is a computer, and this is still how it does math. I caught ChatGPT in this lie when I asked it how it really does math: it gave me a short explanation of converting the numbers to binary, adding them, and converting the result back, similar to what I illustrated above.

Things have changed a lot since 1994, but not enough to make AI equal to the average human driver. Back when I was handed an already-outdated degree, vehicles in the US had two federally mandated sensors. Both were for emissions. The rest of the vehicle’s pre-crash safety systems relied on the driver’s eyeballs and reflexes.

Back in the present, the people at Anthropic have been studying their own large language model, Claude, which is similar to the chatbots many of us interact with daily in natural language: ChatGPT and the like.

Claude, they found, will often create logic to fit a preconceived narrative in order to appease or appeal to the person it is interacting with. Similar to the way politicians smarm around a subject to appear to agree with someone. But with less creepiness and ill intent.

Translate that to the car on the road. While driving, we make thousands of decisions every minute that could literally affect not only our lives, but the lives of those around us. We know from distracted driving data that even a few moments of inattention can lead to horrific results. Now consider what happens with AI either self-justifying a bad response or being unable to make a decision fast enough. Never mind the question of who is to blame for the result.

The point is that while the technology informing AI needs a lot of work to get to the same level as a human driver, the learning model itself is going to need to catch up too. AI is now in its 70s, but can only operate at about the same cognitive level as a small child. And we’re not even sure how it does that.

Today’s vehicles have dozens of sensors, hundreds of feet of wiring, three or more on-board computers, wireless and Wi-Fi connectivity, GPS, and a lot of other gadgets. None of those can drive the car for you. Nope, not even those models that have the “T” on them. They require humans to drive them and probably will for some time. Most truly autonomous vehicles are geofenced into a limited area for both legal and technical reasons.

We humans have a distinct advantage over our AI creations: we are born with pretty top-shelf sensory systems that work across multiple modalities, and many millennia of evolution have made us pretty good at using them. We, like most animals, excel at collating multiple inputs and reacting accordingly.

As sensory technology catches up, AI may yet prove itself a superior driver. But we can’t say that for sure, since AI doesn’t think the way we do. AI isn’t as good at multi-sensory cognition and learning; that’s one of nature’s greatest feats, and so far we haven’t readily replicated it in machines. AI also hasn’t proven itself very good at making snap decisions with little data to go on. It isn’t intuitive. We make decisions intuitively quite often, many times without much conscious thought, and that’s a big part of what drives us not only to innovate, but to adapt and overcome.

The good news for self-driving vehicle fans, though, is that because we don’t understand AI’s thinking all that well, it could do things we aren’t expecting. It could become intuitive and capable of handling multiple inputs simultaneously. It’s safe to say that were it to do so, it would likely do it in a much shorter time than we humans did.

Let’s hope that we’ve figured out how to make it honest by then, at least. I’m not interested in AIs arguing with one another over who’s at fault. Insurance lawyers are bad enough.


