Artificial intelligence is a key technology for autonomous vehicles. It is used for decision-making, sensing, predictive modeling, and other tasks. But how vulnerable are these AI systems to attack?
Ongoing research at the University at Buffalo is examining this question, and the results suggest that malicious actors could cause these systems to fail. For example, strategically placing 3D-printed objects on a vehicle can render it invisible to AI-powered radar systems.
The work, conducted in a controlled research setting, does not mean that existing autonomous vehicles are unsafe, the researchers say. Still, it could have implications for the automotive, technology, insurance and other industries, as well as government regulators and policymakers.
“Although they are still new today, autonomous vehicles are poised to become a dominant means of transportation in the near future,” said Chunming Qiao, a SUNY Distinguished Professor in UB’s Department of Computer Science and Engineering who led the work. “Therefore, we need to ensure that the technological systems that power these vehicles, particularly the artificial intelligence models, are secure from malicious acts. This is something we are working diligently on at the University at Buffalo.”
The research is described in a series of papers dating back to 2021, when the team published a study in Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS). More recent examples include a May study in Proceedings of the 30th Annual International Conference on Mobile Computing and Networking (better known as Mobicom), and another study presented at this month’s 33rd USENIX Security Symposium, with a preprint available on arXiv.
Millimeter wave detection effective, but vulnerable
For the past three years, Yi Zhu and other members of Qiao’s team have been testing an autonomous vehicle on UB’s North Campus.
Zhu, who earned his doctorate in UB’s Department of Computer Science and Engineering in May, recently accepted a faculty position at Wayne State University. A cybersecurity specialist, he is a lead author on the aforementioned papers, which focus on the vulnerabilities of lidars, radars and cameras, as well as systems that fuse these sensors.
“In the field of autonomous driving, millimeter wave (mmWave) radar is widely adopted for object detection because it is more reliable and accurate than many cameras in rain, fog and low-light conditions,” Zhu said. “But radar can be hacked, both digitally and through physical means.”
In a test of this theory, the researchers used 3D printers and metal sheets to fabricate objects with specific geometric shapes, which they called “tile masks.” They found that placing two tile masks on a vehicle could deceive the AI models used in radar object detection, making the vehicle effectively disappear from the radar’s view.
The work on tile masks was published in Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security in November 2023.
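To make the underlying idea concrete, here is a minimal sketch in Python (using PyTorch) of a gradient-based evasion attack against a detector. This is not the TileMask method itself: the detector below is a toy stand-in, and the learnable perturbation is only an abstraction of the physical reflectors. It simply illustrates how an attacker with access to a detection model could optimize an object to suppress the model’s output.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyRadarDetector(nn.Module):
    """Stand-in for a radar object detector (assumption: it maps a
    flattened radar feature vector to a single 'vehicle present' score)."""
    def __init__(self, n_features: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

detector = ToyRadarDetector()
signal = torch.randn(1, 64)  # pretend radar return from a vehicle
# The learnable perturbation stands in for the physical reflectors.
perturbation = torch.zeros_like(signal, requires_grad=True)

optimizer = torch.optim.Adam([perturbation], lr=0.05)
for _ in range(200):
    score = detector(signal + perturbation)  # detector's confidence
    loss = score.mean()                      # attacker drives the score down
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        # A physical object can only alter the radar return a bounded amount.
        perturbation.clamp_(-0.5, 0.5)

with torch.no_grad():
    print("score without attack:", detector(signal).item())
    print("score with attack:   ", detector(signal + perturbation).item())
```

The white-box setup here, in which the attacker can compute gradients through the victim’s model, matches the assumption the researchers describe later in this article.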
Motives for attack may include insurance fraud, AV competition
Zhu notes that while AI can process large amounts of information, it can also get confused and produce incorrect output if given specially crafted inputs that it hasn’t been trained to handle.
“Let’s say we have a picture of a cat, and the AI can correctly identify it as a cat. But if we slightly change a few pixels in the image, the AI might think it’s a picture of a dog,” Zhu says. “This is what’s known as an adversarial example for AI. In recent years, researchers have found or designed many adversarial examples for different AI models. So we asked ourselves: is it possible to design adversarial examples for the AI models in autonomous vehicles?”
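The pixel-level manipulation Zhu describes can be sketched in a few lines of code. The example below uses the classic fast gradient sign method, the textbook recipe for adversarial examples; the classifier is an untrained toy model and the label and epsilon value are illustrative assumptions, so it demonstrates the mechanics rather than reproducing the papers’ attacks.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier (assumption: any image classifier would do;
# this one is untrained, so it only illustrates the mechanics).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "cat" photo
label = torch.tensor([0])                              # class 0 = "cat" (assumed)

# Compute the loss for the correct label, then follow its gradient.
loss = nn.functional.cross_entropy(classifier(image), label)
loss.backward()

# FGSM: nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.03  # small enough to be nearly imperceptible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    print("original prediction:   ", classifier(image).argmax(dim=1).item())
    print("adversarial prediction:", classifier(adversarial).argmax(dim=1).item())
```

Against a trained model, a perturbation this small is often enough to flip the prediction even though the two images look identical to a person.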
The researchers noted that potential attackers could surreptitiously attach a malicious object to a vehicle before the driver begins a trip, while the vehicle is temporarily parked, or while it is stopped at a red light. They could even place such an object inside something a pedestrian is carrying, such as a backpack, thereby obscuring detection of that pedestrian, Zhu said.
Possible motivations for such attacks include causing accidents for insurance fraud, competition between self-driving companies, or a personal desire to harm the driver or passengers of another vehicle.
It is important to note, the researchers say, that the simulated attacks assume the attacker has complete knowledge of the radar object detection system on the victim’s vehicle. While a determined attacker could obtain this information, it is unlikely to be accessible to the general public.
Security lags behind other technologies
Most research on autonomous vehicle safety focuses on the vehicle’s internal systems, while few studies examine external threats, Zhu says.
“Security is somewhat lagging behind other technologies,” he says.
Although researchers have been looking for ways to stop such attacks, they have yet to find a definitive solution.
“I think there is still a lot of work to be done to create a foolproof defense,” Zhu said. “In the future, we would like to study the security of not only radar but also other sensors, such as cameras, as well as modules like motion planning. We also hope to develop defense solutions to mitigate these attacks.”
More information:
Yang Lou et al., A First Physical-World Trajectory Prediction Attack via LiDAR-Induced Deception in Autonomous Driving, arXiv (2024). DOI: 10.48550/arXiv.2406.11707
Yi Zhu et al., Malicious Attacks Against Multi-Sensor Fusion in Autonomous Driving, Proceedings of the 30th Annual International Conference on Mobile Computing and Networking (2024). DOI: 10.1145/3636534.3649372
Yi Zhu et al., TileMask: A Passive Reflection-Based Attack Against Millimeter-Wave Radar Object Detection in Autonomous Driving, Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (2023). DOI: 10.1145/3576915.3616661
Provided by the University at Buffalo