Artificial intelligence can help people process and understand large amounts of data accurately, but modern image recognition platforms and AI-integrated computer vision models often overlook an important feature called the alpha channel, which controls the transparency of images, according to a new study.
Researchers at the University of Texas at San Antonio (UTSA) developed a proprietary attack called AlphaDog to study how hackers can exploit this oversight. Their findings are described in a paper authored by Guenevere Chen, an assistant professor in UTSA’s Department of Electrical and Computer Engineering, and her former doctoral student, Qi Xia ’24, and published by the Network and Distributed System Security Symposium 2025.
In the paper, UTSA researchers describe the technology gap and offer recommendations for mitigating this type of cyber threat.
“We have two targets. One is a human victim and the other is the AI,” Chen explained.
To assess the vulnerability, the researchers identified an alpha channel weakness in how images are processed and exploited it by developing AlphaDog. The attack simulator makes humans see images differently than machines do by manipulating the transparency of images.
The researchers generated 6,500 AlphaDog attack images and tested them on 100 AI models, including 80 open source systems and 20 cloud-based AI platforms like ChatGPT.
They discovered that AlphaDog excels at targeting grayscale regions within an image, allowing attackers to compromise the integrity of both purely grayscale images and color images that contain grayscale regions.
The researchers tested images in various everyday scenarios.
They discovered gaps in AI that pose a significant risk to road safety. Using AlphaDog, for example, they could manipulate the grayscale elements of road signs, potentially misleading autonomous vehicles.
Likewise, they discovered that they could modify grayscale images such as X-rays, MRIs and CT scans, creating a serious threat of misdiagnosis in telehealth and medical imaging. Such tampering could also endanger patient safety and open the door to fraud, such as manipulating insurance claims by altering an X-ray so that a normal leg appears broken.
They also found a way to modify images of people. By targeting the alpha channel, the UTSA researchers could disrupt facial recognition systems.
AlphaDog works by taking advantage of differences in how AI and humans process image transparency. Digital images are often stored with red, green, blue, and alpha (RGBA) channels, where the alpha value defines the opacity of each pixel. The alpha channel allows an image to be combined with a background image, producing a composite that has the appearance of transparency.
However, using AlphaDog, the researchers discovered that the AI models they tested do not read all four RGBA channels; instead, they read data only from the RGB channels.
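To make the mismatch concrete, here is a minimal Python sketch using the Pillow library. The synthetic gray and white images are illustrative placeholders, not the paper's actual AlphaDog attack data; the point is that the same RGBA file yields two different pictures depending on whether a loader composites the alpha channel, as a human-facing viewer does, or silently drops it, as an RGB-only model pipeline does.

```python
# A minimal sketch of the RGB-vs-RGBA mismatch the researchers describe,
# using Pillow. The images here are synthetic stand-ins, not attack data.
from PIL import Image

SIZE = (128, 128)

# Hidden payload: a mid-gray square stored in the RGB channels.
payload = Image.new("RGBA", SIZE, (128, 128, 128, 255))

# Set the alpha channel to fully transparent, so a compositing viewer
# ignores the RGB payload entirely.
payload.putalpha(0)

# What a human sees: the viewer composites the transparent payload over a
# benign white background, so only the background shows.
background = Image.new("RGBA", SIZE, (255, 255, 255, 255))
human_view = Image.alpha_composite(background, payload).convert("RGB")
print("human view pixel:", human_view.getpixel((0, 0)))  # (255, 255, 255)

# What an RGB-only loader sees: converting RGBA to RGB in Pillow drops the
# alpha band instead of compositing, so the hidden payload reappears.
model_view = payload.convert("RGB")
print("model view pixel:", model_view.getpixel((0, 0)))  # (128, 128, 128)
```

In an AlphaDog-style attack, this divergence would be pushed further, so that the composite a human sees and the raw RGB data a model reads depict entirely different content.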
“AI is created by humans, and the people who wrote the code focused on RGB but left out the alpha channel. In other words, they wrote code so that AI models read image files without the alpha channel,” Chen said. “That’s the vulnerability. Excluding the alpha channel in these platforms leads to data poisoning.”
She added: “AI is important. It’s changing our world and we have so many concerns.”
Chen and Xia are working with several key stakeholders, including Google, Amazon and Microsoft, to mitigate the vulnerability that AlphaDog exploits.
Provided by the University of Texas at San Antonio