As artificial intelligence systems become more powerful and accessible, digitally manipulated "deepfake" photos and videos are becoming harder to detect. New research from Binghamton University, State University of New York, breaks images down using frequency domain analysis techniques and looks for anomalies that could indicate they were generated by AI.
In an article published in Disruptive Technologies in Information Science VIII, doctoral student Nihal Poredi, Deeraj Nagothu and Professor Yu Chen of Binghamton's Department of Electrical and Computer Engineering compared real and fake images, looking beyond telltale signs of image manipulation such as elongated fingers or garbled background text. Master's student Monica Sudarsan and Professor Enoch Solomon of Virginia State University also collaborated on the study.
The team created thousands of images with popular generative AI tools such as Adobe Firefly, PIXLR, DALL-E and Google Deep Dream, then analyzed them using signal processing techniques to understand their frequency domain characteristics. The differences between the frequency domain characteristics of AI-generated and natural images form the basis for telling them apart with a machine learning model.
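The paper's code is not reproduced in the article, but the kind of frequency domain feature the team describes can be sketched in a few lines of Python. The snippet below is an illustration only (the function name and parameters are my own choices, not GANIA itself): it computes an azimuthally averaged power spectrum of an image, a common frequency domain signature used to separate natural photos from generated ones.

```python
import numpy as np
from PIL import Image

def radial_power_spectrum(path, size=256):
    """Azimuthally averaged log-power spectrum of a grayscale image.

    Natural photos tend to show a smooth falloff of power with spatial
    frequency; generated images often deviate from that pattern,
    especially at high frequencies.
    """
    img = np.asarray(Image.open(path).convert("L").resize((size, size)),
                     dtype=np.float64)
    # 2D FFT, shifted so the zero-frequency bin sits at the center
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.log1p(np.abs(spectrum) ** 2)

    # Average power over rings of equal distance from the center,
    # i.e. equal spatial frequency
    y, x = np.indices(power.shape)
    r = np.hypot(y - size // 2, x - size // 2).astype(int)
    radial = (np.bincount(r.ravel(), weights=power.ravel())
              / np.bincount(r.ravel()))
    return radial[: size // 2]  # one feature value per frequency band
```

Profiles like these can be stacked into a feature matrix and handed to any standard classifier trained on labeled real and generated images.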
By comparing images using a tool called Generative Adversarial Networks Image Authentication (GANIA), the researchers can spot anomalies (known as artifacts) left by the way AI generates fakes. The most common method for creating AI images is upsampling, which clones pixels to enlarge an image but leaves telltale fingerprints in the frequency domain.
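To see why pixel cloning leaves such fingerprints, consider a toy experiment (my own illustration, not taken from the paper): nearest-neighbor upsampling acts as a box filter, so the spectrum of the enlarged signal is shaped by a sinc-like envelope with deep periodic nulls that natural images lack.

```python
import numpy as np

rng = np.random.default_rng(0)
rows = rng.normal(size=(512, 64))    # stand-in for low-resolution content
up = np.repeat(rows, 4, axis=1)      # nearest-neighbor 4x upsampling: each pixel cloned

# Average power spectrum across the rows of the upsampled signal
power = np.mean(np.abs(np.fft.rfft(up, axis=1)) ** 2, axis=0)

# Pixel cloning is a box filter in space, a Dirichlet (aliased sinc)
# envelope in frequency: it carves exact nulls at 1/4 and 1/2 of the
# sampling rate, a periodic fingerprint a detector can look for.
n = up.shape[1]
print("power near DC         :", power[1])
print("power at the n/4 null :", power[n // 4])
print("power at the n/2 null :", power[n // 2])
```

Running this prints large power near DC and essentially zero power at the null frequencies, which is exactly the kind of frequency domain anomaly that betrays an upsampled image.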
"When you take a picture with a real camera, you capture information from the whole world around you: not just the person, flower, animal or thing you want to photograph, but all kinds of environmental information is embedded in it," Chen said.
"With generative AI, the image contains only what you ask it to generate, no matter how detailed the prompt. There's no way to describe, for example, the quality of the air, the way the wind is blowing, or all the little things that make up background elements."
Nagothu added: "While there are many emerging AI models, the fundamental architecture of these models remains largely the same. This allows us to exploit the predictable nature of their content manipulation and leverage unique, reliable fingerprints to detect them."
The research paper also explores ways in which GANIA could be used to identify the AI origins of a photo, limiting the spread of misinformation via deepfake images.
“We want to be able to identify the ‘fingerprints’ of different AI image generators,” Poredi said. “This would allow us to create platforms to authenticate visual content and prevent any adverse events associated with disinformation campaigns.”
In addition to deepfake images, the team developed a technique to detect AI-based fake audio-video recordings. The tool, dubbed "DeFakePro," exploits environmental fingerprints called electrical network frequency (ENF) signals, which are created by slight fluctuations in the power grid. Like a subtle background hum, this signal is naturally embedded in media files as they are recorded.
By analyzing this signal, which is specific to the time and place of recording, the DeFakePro tool can verify whether a recording is authentic or has been tampered with. The technique is highly effective against deepfakes, and the team explored how it could secure large-scale smart surveillance networks against such AI-based tampering attacks. The approach could also help combat disinformation and digital fraud in our increasingly connected world.
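The article does not include DeFakePro's implementation. As a rough sketch of the general ENF technique (the filter design and frame length here are my assumptions, not the paper's), one can band-pass a recording around the nominal mains frequency (60 Hz in North America) and track how the tone drifts over time, then compare that drift against a logged grid-frequency reference for the claimed time and place.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def enf_track(audio, fs, nominal=60.0):
    """Estimate the ENF drift embedded in a recording.

    A minimal sketch of the general ENF idea; DeFakePro's actual
    pipeline may differ.
    """
    # Isolate a narrow band around the nominal mains frequency
    sos = butter(4, [nominal - 1.0, nominal + 1.0], btype="bandpass",
                 fs=fs, output="sos")
    narrow = sosfiltfilt(sos, audio)

    # Short-time FFT with 8-second frames (0.125 Hz resolution);
    # pick the dominant frequency bin in each frame
    f, t, Z = stft(narrow, fs=fs, nperseg=fs * 8)
    band = (f > nominal - 1.0) & (f < nominal + 1.0)
    peaks = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
    return peaks  # compare against a logged grid reference

```

A recording whose extracted drift does not match the reference log for its claimed time and location, or whose drift shows discontinuities, is a candidate forgery.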
“Disinformation is one of the biggest challenges facing the international community today,” Poredi said. “The widespread use of generative AI in many fields has led to its misuse. Combined with our addiction to social media, this has created a flashpoint for a disinformation catastrophe. This is particularly evident in countries where restrictions on social media and freedom of expression are minimal. It is therefore imperative to ensure the reliability of data shared online, especially audiovisual data.”
While generative AI models have been misused, they are also making significant contributions to advances in imaging technology. Researchers want to help the public differentiate between fake and real content, but it can be difficult to keep up with the latest innovations.
“Artificial intelligence is evolving so quickly that once you develop a deepfake detector, the next generation of that AI tool takes those anomalies and corrects them,” Chen said. “Our job is to try to do something different.”
More information:
Nihal Poredi et al, Authentication of AI-generated images based on generative adversarial networks using frequency domain analysis, Disruptive Technologies in Information Science VIII (2024). DOI: 10.1117/12.3013240
Provided by Binghamton University
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.