Fairness of neural networks. a, Illustrative example of fairness considerations in neural network-based skin disease detection. b, Model training process with fairness taken into account. c, Fairness as a new objective in system design: considering fairness adds a new dimension to the design problem. Credit: Guo et al.
Over the past two decades, computer scientists have developed a wide range of deep neural networks (DNNs) designed to tackle a variety of real-world tasks. While many of these models have proven highly effective, studies have found that they can be unfair, meaning their performance can vary depending on the data they were trained on and even the hardware platforms on which they were deployed.
For example, some studies have shown that commercially available deep learning-based facial recognition tools are significantly better at recognizing features of light-skinned individuals than dark-skinned individuals. These observed variations in AI performance, largely due to disparities in available training data, have inspired efforts to improve the fairness of existing models.
Researchers at the University of Notre Dame recently set out to study how hardware systems can contribute to AI fairness. Their paper, published in Nature Electronics, identifies ways in which emerging hardware designs, such as compute-in-memory (CiM) devices, can affect the fairness of DNNs.
“Our paper was born out of an urgent need to address fairness in AI, particularly in high-stakes areas like healthcare, where bias can lead to significant harm,” Yiyu Shi, co-author of the paper, told Tech Xplore.
“While much research has focused on the fairness of algorithms, the role of hardware in influencing fairness has been largely overlooked. As AI models are increasingly deployed on resource-constrained devices, such as mobile and edge devices, we realized that the underlying hardware could exacerbate or mitigate bias.”
After reviewing previous literature exploring AI performance gaps, Shi and colleagues realized that the contribution of hardware design to AI fairness had not yet been studied. The main goal of their recent study was to fill this gap, specifically examining how new CiM hardware designs affected DNN fairness.
“Our goal was to systematically explore these effects, particularly through the lens of emerging CiM architectures, and propose solutions that could help ensure fair AI deployments across diverse hardware platforms,” Shi explained. “We investigated the relationship between hardware and fairness by conducting a series of experiments using different hardware configurations, with a particular focus on CiM architectures.”
In this recent study, Shi and his colleagues conducted two main types of experiments. The first explored how hardware-aware neural architecture designs of different sizes and structures affect the fairness of the resulting models.
“Our experiments allowed us to draw several conclusions that were not limited to the choice of devices,” Shi said. “For example, we found that larger and more complex neural networks, which typically require more hardware resources, tend to exhibit higher fairness. However, these fairer models were also more difficult to deploy on resource-constrained devices.”
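The article does not show how fairness was quantified in these comparisons, but a common approach is to compare per-group accuracies and report the worst gap. Below is a minimal Python sketch along those lines; the `model`, `loader`, and per-sample `group_ids` are hypothetical stand-ins, not the authors' actual pipeline:

```python
import torch

def group_accuracy_gap(model, loader, device="cpu"):
    """Fairness as the largest accuracy gap across demographic groups.

    Assumes each batch yields (inputs, labels, group_ids), where
    group_ids tags each sample's demographic group -- a hypothetical
    data layout, not the authors' actual pipeline.
    """
    model.eval()
    correct, total = {}, {}
    with torch.no_grad():
        for inputs, labels, group_ids in loader:
            preds = model(inputs.to(device)).argmax(dim=1).cpu()
            for g in group_ids.unique().tolist():
                mask = group_ids == g
                correct[g] = correct.get(g, 0) + (preds[mask] == labels[mask]).sum().item()
                total[g] = total.get(g, 0) + int(mask.sum())
    accs = [correct[g] / total[g] for g in total]
    return max(accs) - min(accs)  # 0.0 would be perfectly fair
```

Under a metric like this, the finding above would read as larger models having a smaller worst-case gap between groups.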
Based on what they observed in their experiments, the researchers proposed potential strategies that could help increase AI fairness without posing significant computational challenges. One possible solution could be to compress larger models, thereby maintaining their performance while limiting their computational load.
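The article does not specify which compression technique the team has in mind; magnitude pruning is one widely used option. A minimal PyTorch sketch, offered as an illustration rather than as the authors' method:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def compress_by_pruning(model: nn.Module, amount: float = 0.5) -> nn.Module:
    """Zero out the smallest-magnitude weights in every conv/linear layer.

    `amount` is the fraction of weights removed per layer. After pruning,
    the model would typically be fine-tuned and then re-checked for both
    accuracy and group fairness (e.g., with group_accuracy_gap above).
    """
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the mask into the weights
    return model
```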
Modeling device non-ideality for different real and synthesized devices. Credit: Guo et al.
“The second type of experiment we conducted focused on non-idealities associated with CiM architectures, such as device variability and stuck-at faults,” Shi said. “We used these hardware platforms to run various neural networks, examining how hardware changes, such as differences in memory capacity or processing power, affected the fairness of the model.”
“The results showed that different configurations of device variation lead to different accuracy-fairness trade-offs, and that existing methods used to improve robustness against device variation also contributed to these trade-offs.”
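The article does not give the authors' exact noise model. A common simplification in CiM studies is multiplicative lognormal variation on the stored weights, mimicking conductance programming errors in crossbar cells; a rough sketch under that assumption:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def apply_device_variation(model: nn.Module, sigma: float = 0.1) -> nn.Module:
    """Perturb each weight with multiplicative lognormal noise.

    Intended to mimic programming variation in CiM crossbar cells, where
    the stored conductance deviates from its target value; `sigma`
    controls the spread. The paper's actual noise model may differ.
    """
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            noise = torch.randn_like(module.weight) * sigma
            module.weight.mul_(noise.exp())  # w <- w * exp(N(0, sigma^2))
    return model
```

Measuring per-group accuracy before and after many random draws of such a perturbation would surface the kind of accuracy-fairness trade-offs described above.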
To overcome the challenges revealed by their second set of experiments, Shi and colleagues suggest employing noise-aware training strategies. These strategies involve introducing controlled noise when training AI models, in order to improve both their robustness and fairness without significantly increasing their computational requirements.
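In its simplest form, this means perturbing the weights during each forward pass so the network learns to tolerate the variation it will encounter on-device. A minimal sketch of one such training step, assuming standard PyTorch components and not necessarily the authors' exact recipe:

```python
import torch

def noise_aware_step(model, batch, optimizer, criterion, sigma=0.05):
    """One training step with transient weight noise injected.

    Noise is added before the forward pass and removed afterwards, so the
    gradients are computed under a perturbed model while the clean weights
    receive the update -- a common noise-injection recipe, not necessarily
    the authors' exact method.
    """
    inputs, labels = batch
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = torch.randn_like(p) * sigma * p.abs()  # scale noise to weight magnitude
            p.add_(n)
            noises.append(n)
    loss = criterion(model(inputs), labels)
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():  # restore clean weights before the update
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)
    optimizer.step()
    return loss.item()
```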
“Our research shows that the fairness of neural networks is not just a function of the data or algorithms, but is also heavily influenced by the hardware on which they are deployed,” Shi said. “A key finding is that larger, more resource-intensive models generally perform better in terms of fairness, but this comes at the cost of more advanced hardware.”
In their experiments, the researchers also found that hardware-induced non-idealities, such as device variability, can lead to trade-offs between the accuracy and fairness of AI models. Their findings highlight the need to carefully consider both the design of AI model structures and the hardware platforms on which they will be deployed, to achieve a good balance between accuracy and fairness.
“In practice, our work suggests that when developing AI, especially tools for sensitive applications (e.g., medical diagnostics), designers need to consider not only software algorithms but also hardware platforms,” Shi said.
The research team’s recent work could contribute to future efforts to increase the fairness of AI by encouraging developers to focus on both hardware and software components. This in turn could facilitate the development of AI systems that are both accurate and fair, producing equally good results when analyzing data from users with different physical and ethnic characteristics.
“Going forward, our research will continue to delve deeper into the intersection between hardware design and AI fairness,” Shi said. “We plan to develop advanced cross-layer co-design frameworks that optimize neural network architectures for fairness while considering hardware constraints. This approach will involve exploring new types of hardware platforms that inherently support fairness and efficiency.”
As part of their future studies, the researchers also plan to develop adaptive training techniques that could account for the variability and limitations of different hardware systems. These techniques could ensure that AI models remain fair regardless of the devices they run on and the situations in which they are deployed.
“Another avenue we are interested in is investigating how specific hardware configurations could be tuned to improve fairness, which could lead to new classes of devices designed with fairness as a primary goal,” Shi added. “These efforts are crucial as AI systems become more ubiquitous and the need for fair and unbiased decision-making becomes more critical.”
More information:
Yuanbo Guo et al, Hardware design and the fairness of a neural network, Nature Electronics (2024). DOI: 10.1038/s41928-024-01213-0
© 2024 Science X Network
Citation: How Hardware Contributes to the Fairness of Artificial Neural Networks (2024, August 24) retrieved August 25, 2024 from
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.