Autonomous driving is fiction

Interview

Illah Nourbakhsh is a professor at the prestigious Carnegie Mellon University in Pittsburgh. The onetime industrial city in Pennsylvania is considered one of the hubs for autonomous driving in the United States. In this interview, Professor Nourbakhsh describes the vision of driverless robot cars as mere fiction and warns about their implications for society.

Image: Professor Illah Nourbakhsh speaking at the World Economic Forum in 2017

Mr. Nourbakhsh, do you enjoy driving?

I do. I find all things technical relaxing. Yet I am aware of the carbon footprint of driving and its negative climate impact.

There is a website online about a vintage Porsche 911 that you owned and recently sold.

I love technology, especially vintage cars, as well as old airplanes. I am also a pilot. Old machines are very intuitive – you can figure out how to work them. Even as a child, I would take things apart and put them back together. I enjoy that. That is impossible to do with modern cars.

Modern cars are technologically highly complex, boasting ever more networking features and assistance systems. At some point, they are supposed to become fully autonomous. Pittsburgh, where you teach at the prestigious Carnegie Mellon University, is considered one of the hubs for autonomous driving in the US. What goes through your mind when you see one of these autonomous test vehicles on the road?

I smirk, because most of the time, I see two people sitting in these big, so-called driverless vehicles, talking to each other and with their hands on the steering wheel. So the car is by no means driving autonomously. I hardly ever see cars without an actual person in the driver’s seat.

So autonomous driving is still a fiction?

Yes. There is a huge discrepancy between reality and the marketing around autonomous driving. Tragically, we live in a world where people don’t know how to fact-check stories anymore. As a result, most people assume vehicles are capable of far more than they really are. Yet statistics teach us otherwise.

What do you mean by that?

Tesla and other companies state that their cars are 98 percent automated, with no need for human intervention. 98 percent sounds fantastic at first. For a computer scientist or an engineer, however, this rate is a nightmare scenario, because it means that two minutes of every 100-minute drive are unsafe.

There are concept vehicles today that don’t even feature a steering wheel or driver’s seat.

I have big problems with that. It takes us even further away from the truth because these concepts feed the illusion that humans are no longer needed. Yet there are still millions of small situations where the vehicle needs to be manually controlled – when entering a car wash, when refueling, or to get your child’s ball out from under the vehicle ... 

New technologies are being developed to solve problems. What are the problems autonomous cars are supposed to solve?

Just the other day, I had a frustrating discussion about this with a friend. She told me that Aurora was building driverless electric trucks. She was excited to tell me that operation and maintenance are much cheaper than with a conventional truck.

That does sound great.

Yes, but the cost savings don’t come from automation, but from electrification. In fact, automation is driving up the cost per mile significantly.

But it does save money if no driver is needed.

No, because the vehicle has to be equipped with countless, highly sensitive and very expensive sensors and cameras. The bottom line is that the cost per mile is higher than if a paid driver drives the vehicle from A to B. Automation does little to reduce costs.

Why do you think this belief has taken root?

Because we are not interpreting the data properly. For example, a few years ago, Google published a study predicting that in the autonomous age, a cab ride would cost a tenth of today’s price.

That sounds appealing. Where’s the catch?

I read the report – and I took a close look at the appendix, which explained the assumptions underlying this calculation. A large part of the projected savings was based on the assumption that the vehicle fleet can be insured as a whole, rather than taking out insurance for each individual vehicle, as is the case today. Google assumed that all these vehicles would be electric, so maintenance and operating costs would drop to near zero. However, the press release highlighted a different aspect, namely that the vehicles don’t require a driver. And that’s what the media picked up.

Investors also believe in technologies like autonomous driving. Why do you think they invest billions in such promises?

They see an opportunity for huge profits. They do not care about probabilities. They don’t care about things like, say, developing a new broom, even if someone were to come up with a new kind of broom that is far better than anything currently available on the market. The return on investment is simply not high enough for them. Instead, investors are banking on high-risk bets.

Why do they take such risks?

One reason: They know that 19 out of 20 investments will fail, and they factor that in. A second reason is that they follow a network logic. Let’s take the example of logistics. Investors have no interest in a technology that merely enhances the safety of a truck by eliminating the risk of a driver falling asleep at the wheel, thus avoiding accidents. They are looking for companies that promise them that their newly developed technology will cater to trucks, cabs, trains, and all other forms of transportation, a trillion-dollar market, in other words. Startups lure investors with the promise of a “first mover advantage”. They tell them that if they win just 15 percent of the market, they’ll be raking in 500 billion euros a year! In doing so, they assume that they will be monetizing the elimination of the driver. So they’re privatizing the revenues of the millions of drivers who are still on the roads today. This leads to other problems.

Such as?

First, power and wealth shift from workers’ incomes to capital owners. They justify this shift with efficiency gains. As a result, capital will be increasingly concentrated in the hands of a few. Second, the whole model is a fiction. In their calculation, they assume that they will be saving the cost of the drivers’ salaries. But it does not add up, because they will need well-trained people to train all the machines and to spot their vulnerabilities, for all modes of transportation. These highly skilled people will want to be paid well. This is not priced in.

Are technologies evolving faster than our understanding of what we should even want as a society?

Yes. We don’t perceive the negative consequences of a technology until it is established. That’s what happened with social media. Only after they were instrumentalized did we understand the dangers they posed. Now we are faced with the task of reforming that system. One thing I find curious about autonomous driving, though, is that the technology is developing much slower than expected. For a long time, many assumed that driving was a quantitative challenge that could be solved with sensors. They disregarded the role of social interaction and negotiation in traffic. They didn’t factor in how complicated it all is.

Can a machine ever be programmed to behave like a human being – to take that social component into account? That seems to be the prerequisite for autonomous cars to be used in regular traffic, in so-called mixed traffic.

I believe that mixed traffic is a very bad idea. I don’t think it will work. The situation is different on the interstate, where you can have dedicated, separate lanes for driverless vehicles. But even that harbors dangers. In San Diego, for example, where immigrants coming from Mexico or elsewhere could suddenly cross these lanes, the whole scenario would fail.

In urban traffic, the situation is even more complex.

Yes, indeed. There, too, extra lanes would be needed, like the bus lanes we already have here in the U.S. That way you can avoid encounters with pedestrians, other vehicles, or wildlife. Under these circumstances, software-controlled driving may work. It would work the same way as in warehouses, where robots navigate in areas that are off-limits to people. Wherever humans and robots get in each other’s way, things get complicated, unless the humans are trained on how to behave around the machines.

Some researchers assume that for machines and humans to co-exist, machines would need to be taught ethical principles.

There are also people who claim that machines have a consciousness. This is complete nonsense. Machines have neither intuition nor consciousness. We don’t have the slightest idea of what that would even mean. This is pure fiction.

How can we use technology to improve our lives?

By getting the tech nerds out of their silos – and getting them talking to communities. Engineers need to work with people of different backgrounds and life realities as equals. They need to acknowledge that users know more than they do in certain areas, because the users know what they want to achieve with the technology. We must abandon the principle that only those who own capital get to develop technology, because that leaves development to a very few companies, usually run by privileged white men.

What would it take to change society?

I think we have to focus on education – on school systems and universities that teach social sciences and technology together. Our task in higher education is to train a cohort of innovators who consider both aspects together. These are the people who should be sitting in the executive suites of tech companies and serving as politicians in the future. They should have diverse backgrounds and represent the values of different communities. But this is a long-term project spanning several decades. It would require overhauling our entire education system.

A German version of this interview was first published by Tagesspiegel Background on August 18, 2022. Translation: Kerstin Trimble.