Self Driving Probably Won’t Improve Until Artificial Intelligence Does


Following yesterday’s article about the IIHS suggesting driver monitoring as a way to mitigate lackluster advanced driving systems, many readers asked why automated driving still seems so far behind in its development. After all, companies have been promising commercially viable autonomous vehicles for years. But they are nearly half a decade behind schedule, and the public has almost completely lost faith in the project.

What happened?

Automotive News released a timely report on Wednesday exploring exactly that, attributing the failings to the fact that artificial intelligence lacks the ability to conduct sound reasoning. Computers have become almost unfathomably good at data collection and information cataloging. This is fitting, considering that was their first task. However, engineers know that programming judgment into a machine is difficult when the variables involved are hard to quantify. Some have even argued that sound decision-making may be impossible for machines.

“It’s that 2 to 3 percent of the time when the human intuition plays a role in kind of understanding why something is happening,” Alex Oyler, director of SBD Automotive, North America, told Automotive News. That’s “why all of the corner cases,” or infrequent situations, “are such a problem for autonomous vehicles,” he said.

There’s an interesting dichotomy taking place today. We have been told for years that artificial intelligence would usher in change akin to the previous industrial revolution. However, now that those systems are actually being implemented, many have become skeptical. While automated driving happens to be the aspect we are most interested in, artificial intelligence is now being deployed across countless industries, to very mixed effect.

Google’s AI rollout has been an unmitigated disaster, as were Amazon’s AI recruitment tool, the attempt to shoehorn IBM’s Watson into an oncology role, Microsoft’s chatbot Tay, and the factual accuracy of GPT-3. While self-driving endeavors have been slightly more successful, we continue seeing reports about autonomous fleets acting erratically and being harassed by locals tired of seeing them operating on public roads.

From Automotive News:

AI’s failure to understand causality has restricted the decision-making of AVs in these unusual corner or edge cases, exacerbating concerns that have prevented the widespread deployment of AV technology such as robotaxis. Then-Cruise CEO Kyle Vogt described the 2023 incident in which a Cruise AV dragged a pedestrian thrown into its path by a human hit-and-run driver as an edge case, a law firm’s investigation said.

In general, handling edge cases using algorithms is difficult because while AVs have solved the rote aspects of driving using data, human intelligence in novel situations — intuition, good sense and deductive reasoning — has yet to be replicated.

While AI may be capable of exploring similarities between situations, that extrapolation may pose safety issues of its own.

The amount of information modern vehicles can take in is genuinely staggering. The true figure is unknown, due to the variables between models and the fact that automakers aren’t terribly keen on notifying customers that their vehicle probably spies on them even more than their phone does. But most estimates place the average data accumulation of a connected automobile at somewhere between 25 and 350 gigabytes an hour.

If the vehicle happens to be rich with advanced driving capabilities, it’ll have more sensors and sit at the higher end of that spectrum. Sadly, this doesn’t actually seem to be making much of a difference in terms of yielding bulletproof self-driving systems. No automaker has yet produced such a thing, and likely none will until the systems have enough data to be trained on just about every conceivable scenario one might encounter.

Srikanth Saripalli, a senior member of the Institute of Electrical and Electronics Engineers, said that crash data accumulated from real-world events can be used to create thousands of related scenarios as a way of training artificial intelligence how best to respond. However, this still requires an unprecedented volume of actual driving data to be accumulated before those scenarios can be run as simulations.

“I can now figure out what the car should do, because of what the human did, and now I can feed it back into my algorithm again,” Saripalli said. “Then hopefully, when this happens, that algorithm will take care of it.”
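For a rough sense of what that feedback loop might look like, here’s a minimal Python sketch of scenario augmentation: a single logged event is perturbed into thousands of simulator variants. The Scenario fields, the jitter factor, and the augment helper are hypothetical stand-ins for illustration, not anything from an actual AV stack.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    """Hypothetical, simplified log of one real-world traffic event."""
    pedestrian_speed_mps: float  # how fast the pedestrian was moving
    crossing_offset_m: float     # where they entered the lane
    vehicle_speed_mps: float     # ego vehicle speed at detection
    time_of_day_h: float         # crude lighting proxy

def augment(seed: Scenario, n: int, jitter: float = 0.15) -> list[Scenario]:
    """Spawn n perturbed variants of one recorded event for simulation.

    Each numeric field is scaled by a random factor in
    [1 - jitter, 1 + jitter], so a single crash log fans out into
    thousands of related training scenarios.
    """
    def scale(value: float) -> float:
        return value * random.uniform(1 - jitter, 1 + jitter)

    return [
        replace(
            seed,
            pedestrian_speed_mps=scale(seed.pedestrian_speed_mps),
            crossing_offset_m=scale(seed.crossing_offset_m),
            vehicle_speed_mps=scale(seed.vehicle_speed_mps),
        )
        for _ in range(n)
    ]

# One logged event becomes thousands of simulator runs to train against.
logged = Scenario(pedestrian_speed_mps=1.4, crossing_offset_m=2.0,
                  vehicle_speed_mps=13.0, time_of_day_h=21.5)
training_batch = augment(logged, n=5000)
```

The design idea is simply that rare events are too scarce to learn from directly, so each one gets multiplied into a cloud of plausible neighbors before training.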

Most companies likewise stage events for their evolving systems to address. This was actually the preferred method in the early days, when most autonomous test platforms were relegated to closed courses. But it’s become clear that public roads throw out curveballs that couldn’t possibly be replicated in a laboratory setting, where the weather is controlled, the road markings are clear, and there aren’t any other drivers to contend with.

The above is also being used as an argument for why vehicles need more data — something the automotive and insurance industries would both very much like to see happen.

While pointing out the staggering number of factors that need to be taken into account, the article suggests the systems will also need to account for “idiosyncrasies of human behavior, such as mood and attention.” Accounting for those variables in the real world would undoubtedly require some kind of comprehensive driver-monitoring system that tracks your every action.

That’s a solution being pushed by industry lobbyists at the expense of customer privacy. However, arguments have been made that this is just a ploy for the business sector to profit off your data. There are also engineers suggesting that even that wouldn’t be sufficient until computers are capable of causal reasoning. The assumption is that there will always be edge cases, and artificial intelligence will always struggle with them until it can effectively mimic how the human brain reasons through problems.

“The first time [the system] sees something that is different than what it’s trained on, does somebody die?” said Phil Koopman, an IEEE senior member and a professor at Carnegie Mellon University. “The entire machine learning approach is reactive to things that went wrong.”

Despite being able to take in information faster than humans can, modern automobiles don’t have the quick reasoning necessary to adapt instantly to a changing situation unless they have previously been trained on that exact scenario. For example, you can teach a child to catch a red ball and then swap it for a blue one without issue. But handing that blue ball to a machine that’s only ever seen red ones might pose a serious problem.
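To make the red-ball analogy concrete, here’s a toy sketch of that distribution-shift failure. Nothing here reflects any automaker’s actual perception stack; the RGB features, the centroid model, and the threshold are all invented for illustration.

```python
import numpy as np

# Toy "ball detector" trained only on red examples (RGB feature vectors).
red_balls = np.array([[0.90, 0.10, 0.10],
                      [0.80, 0.20, 0.10],
                      [0.95, 0.05, 0.10]])
centroid = red_balls.mean(axis=0)  # the model's entire notion of "ball"
threshold = 0.3                    # roughly the spread seen in training

def is_ball(rgb: np.ndarray) -> bool:
    """Accept an object only if it sits close to the training data."""
    return float(np.linalg.norm(rgb - centroid)) < threshold

print(is_ball(np.array([0.85, 0.15, 0.10])))  # red ball  -> True
print(is_ball(np.array([0.10, 0.10, 0.90])))  # blue ball -> False, despite the same shape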

Considering how many proverbial red balls there are on public roads, it’s actually kind of amazing how far the technology has already come. However, we are never going to see widespread autonomous driving until the systems are proven to be more reliable than human drivers, and it’s still anybody’s guess as to when that will happen.

[Image: General Motors]
