Attack of the self-driving cars

Researcher
Yao Deng; Dr James Xi Zheng
Writer
Fran Molloy
Date
1 July 2021
Faculty
Faculty of Science and Engineering

Malware attacks on the control systems of self-driving cars could have disastrous outcomes, but new research from Macquarie University shows how car makers can detect and prevent them.

New research from Macquarie University’s Department of Computing has identified key vulnerabilities in self-driving vehicles that show just how exposed to potential sabotage their control systems can be.

“There are many different kinds of attacks that can occur on self-driving or autonomous vehicles, which can make them pretty unsafe,” says Yao Deng, who is a PhD candidate in Macquarie University’s Department of Computing.

Deng’s work involves breaking down the weak spots in ‘convolutional neural networks’ (CNNs), a critical part of the computer vision systems that robots and autonomous vehicles (AVs) use to recognise and classify images.

He’s the lead author of a recent paper with computer scientists at Harvard, UCLA and the University of Sydney, published at the International Conference on Pervasive Computing and Communications, detailing five major security threats to AVs that rely on CNN logic.

What’s more dangerous – self-driving cars or human drivers?

A 2015 report by the US Department of Transportation found that driver error was behind more than 94 per cent of vehicle accidents.

That statistic is often cited by companies like Tesla, Uber and Google’s self-driving car spin-off, Waymo, which have all made huge investments in self-driving vehicle technology, along with big promises that autonomous vehicles could prevent millions of accidents and save thousands of lives.

It makes sense; cars operated by robots are not going to break speed limits or violate road rules, take a corner too fast or get distracted by a text message.

But their vulnerability to malware and hackers means AVs may not be as safe as we think.

Australia’s road to self-driving cars

Australia is already laying the groundwork for AVs in the near future, with initiatives including the RAC Intellibus trial in South Perth, the Transport for NSW driverless shuttle bus pilot and the Coffs Harbour Council Busbot.

Safety first: The road toll is tipped to fall dramatically in a world of autonomous vehicles, but their vulnerability to hackers means they may not be as safe as we think.

Various government transport plans such as Victoria’s North East Link Road project and the Transport for NSW strategy now include provision for AVs, and several Australian mining companies routinely use self-driving vehicles within closed sites.

The most recent KPMG global Autonomous Vehicles Readiness Index rates Australia 15th in the world in its progress towards a self-driving future.

But while human driver error was statistically the likely cause of nearly all of the 1125 road deaths in Australia in the 12 months to May 2021, could self-driving vehicles still pose a safety threat of their own?

Deng’s research looks at a new kind of attack targeting the computer logic behind most AVs, and identifies ways to protect against it.

What makes AVs vulnerable?

Cameras and a laser-pulse range measurement system called LiDAR form the “eyes” of the self-driving vehicle, feeding information about the driving scene and environment into a CNN computer model that makes decisions such as speed adjustment and steering corrections.
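
For readers who want a concrete picture, the sketch below shows what a simplified driving model of this kind might look like in PyTorch. The name DrivingCNN, the layer sizes and the single steering output are illustrative assumptions, not the model from Deng’s paper or from any production vehicle.

```python
# A minimal sketch of a camera-to-steering CNN, assuming a PyTorch setup.
# "DrivingCNN", the layer sizes and the single steering output are
# illustrative assumptions, not any real AV's model.
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 1),          # predicted steering correction
        )

    def forward(self, frame):
        return self.head(self.features(frame))

model = DrivingCNN()
frame = torch.rand(1, 3, 120, 160)   # one RGB camera frame (batch, C, H, W)
steering = model(frame)              # a single steering value per frame
```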

In focus: A camera monitors the driver of a self-driving car ... Deng’s work involves breaking down the weak spots in a critical part of computer vision systems.

“Unfortunately, CNNs can be easily fooled by adversarial attacks such as adding small, pixel-level changes to the input images which can’t be seen by the naked eye,” says Deng.
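
One well-studied way of crafting such invisible perturbations is the fast gradient sign method (FGSM). The sketch below illustrates the general technique, assuming the simplified DrivingCNN model from the earlier example; it is not the specific attack analysed in Deng’s paper.

```python
# Sketch of the fast gradient sign method (FGSM, Goodfellow et al. 2015),
# one standard way to craft invisible pixel-level perturbations. Assumes
# the DrivingCNN "model" from the earlier sketch; illustrative only.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, frame, true_angle, epsilon=2 / 255):
    """Shift every pixel by at most epsilon in the direction that
    increases the model's error on this frame."""
    frame = frame.clone().requires_grad_(True)
    loss = F.mse_loss(model(frame), true_angle)
    loss.backward()
    # A per-pixel change of ~2/255 is invisible to the naked eye.
    adversarial = frame + epsilon * frame.grad.sign()
    return adversarial.clamp(0, 1).detach()

clean = torch.rand(1, 3, 120, 160)
true_angle = torch.tensor([[0.0]])        # correct steering for this frame
attacked = fgsm_perturb(model, clean, true_angle)
print(model(clean).item(), model(attacked).item())  # the predictions shift
```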

Deng says these kinds of attacks have been tested in laboratories. For example, Tencent Keen Security Lab staged a falsified-image attack on the Tesla Autopilot system that caused the car to turn on its windscreen wipers when there was no rain.

Work is underway around the world to protect AV autopilot systems against such attacks – but Deng says that security systems often don’t address inherent weaknesses in CNN logic.

Types of sabotage and how to prevent them

Most modern vehicles are now susceptible to hackers, but MIT computer scientist Dr Simson Garfinkel warns that AVs will face new types of attacks based on ‘adversarial machine learning’ – designed to trick algorithms into making errors which could have deadly results.

“Widespread deployment of autonomous vehicles is going to result in a lot of unemployed people, and some of them are going to be angry,” Garfinkel warned.

One well-known example of adversarial attacks on machine learning systems involved researchers at Carnegie Mellon University duping facial recognition systems by wearing clear spectacles with certain patterns on the frame, triggering incorrect results from the algorithm.

Attacks on self-driving vehicle control systems are less well known, and they are the focus of Deng’s current research.

“There are a number of ways that the ‘black box’ that is used by AVs can be tampered with, and these can lead to dangerous errors in the AV system that can occur over time,” he says.

Deng explains that attackers could inject malware into an AV’s driving system when the vehicle connects to the internet to upgrade software and firmware.

This malware could then intercept the images the vehicle receives, falsifying the information that is sent to the computer.
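
Conceptually, the tampering could happen at a point like the one sketched below: a compromised preprocessing step that alters each frame before the driving model sees it. Every name here is hypothetical, for illustration only.

```python
# Conceptual sketch only: malware sitting between the camera driver and the
# driving model. All names are hypothetical, purely to illustrate the attack
# surface; this is not code from any real system or attack.
import torch

def legitimate_preprocess(frame):
    return frame.clamp(0, 1)      # stand-in for the usual resize/normalise

def compromised_preprocess(frame, perturbation):
    # Behaves like the legitimate step, then silently adds a crafted
    # perturbation before the frame reaches the driving model.
    return (legitimate_preprocess(frame) + perturbation).clamp(0, 1)

frame = torch.rand(1, 3, 120, 160)
perturbation = 0.01 * torch.randn_like(frame)  # e.g. an FGSM-style pattern
tampered = compromised_preprocess(frame, perturbation)
# Any downstream model now makes its decisions on falsified input.
```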

Deng’s recently published research looks at how AV developers can defend their systems against different types of machine learning sabotage.

Examples include training AV systems to identify falsified images, and installing an alert that warns when there’s an unusual spike in computer processing.
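
As one concrete illustration of the second idea, the sketch below monitors per-frame processing time and raises a warning when it spikes far above the recent average. The window size and alert threshold are assumptions made for this example, not values from the research.

```python
# Sketch of a processing-spike alert, assuming per-frame latency is the
# monitored signal. Window size and threshold are illustrative assumptions,
# not values from Deng's paper.
import random
from collections import deque

class LatencyMonitor:
    def __init__(self, window=100, sigma=4.0):
        self.samples = deque(maxlen=window)  # recent per-frame latencies
        self.sigma = sigma                   # alert threshold in std devs

    def check(self, latency):
        alert = False
        if len(self.samples) >= 10:          # wait for a stable baseline
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            alert = latency > mean + self.sigma * max(var ** 0.5, 1e-6)
        self.samples.append(latency)
        return alert

monitor = LatencyMonitor()
normal = [random.gauss(0.010, 0.001) for _ in range(200)]  # ~10 ms per frame
for latency in normal + [0.200]:             # simulated 200 ms spike
    if monitor.check(latency):
        print(f"WARNING: unusual processing spike ({latency * 1000:.0f} ms)")
```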

Sharing the driving

So far, most jurisdictions do not permit AVs to operate on public roads without a human in the driver's seat, ready to take over at any moment.

Road accidents involving AVs have so far been rare but highly publicised. They include a Tesla that ploughed into a truck crossing a Florida highway in 2016, killing the car’s driver; an Uber that killed a jaywalking pedestrian in Arizona in 2018; and another Tesla whose driver died when the car crashed into a California roadside barrier in 2018.

But as AV manufacturers perfect their vehicle control systems, we are likely to see a shift where autopilot mode becomes far more acceptable, and eventually human drivers may be redundant.

Deng’s work finding ways to protect AVs against dangerous malware could play a critical role in future vehicle safety.

Yao Deng is a PhD Candidate in Macquarie University's Department of Computing

Dr James Xi Zheng is a Senior Lecturer in the Department of Computing and Director of the Intelligent Systems Research Group (itseg.org)
