A recently developed electronic tongue can identify differences between similar liquids, such as milk with varying water content; diverse products, including types of soda and coffee blends; signs of spoilage in fruit juices; and instances of food safety problems.
The team, led by Penn State researchers, also found that the results were even more accurate when artificial intelligence (AI) used its own evaluation parameters to interpret the data generated by the e-tongue.
The researchers published their results on October 9 in Nature.
According to the researchers, the e-tongue can be useful for food safety and production, as well as medical diagnostics. The sensor and its AI can broadly detect and classify various substances while collectively assessing their respective quality, authenticity and freshness. This assessment also provided researchers with insight into how AI makes decisions, which could lead to better development and applications of AI, they said.
“We’re trying to make an artificial tongue, but the process by which we experience different foods involves more than just the tongue,” said corresponding author Saptarshi Das, Ackley Professor of Engineering and professor of engineering science and mechanics. “We have the tongue itself, made up of taste receptors that interact with food chemicals and send their information to the taste cortex, a biological neural network.”
The taste cortex is the region of the brain that perceives and interprets various tastes beyond what can be detected by taste receptors, which primarily classify foods into the five broad categories of sweet, sour, bitter, salty, and savory. As the brain learns the nuances of tastes, it can better differentiate the subtlety of flavors. To artificially mimic the taste cortex, researchers developed a neural network, which is a machine learning algorithm that mimics the human brain to evaluate and understand data.
“Previously, we studied how the brain responds to different tastes and mimicked this process by integrating different 2D materials to develop a kind of model for how AI can process information more like a human being,” said co-author Harikrishnan Ravichandran, a doctoral student in engineering science and mechanics advised by Das.
“Now in this work, we are studying multiple chemicals to see if the sensors can detect them accurately and, furthermore, if they can detect minute differences between similar foods and discern instances of food safety issues.”
The tongue includes a graphene-based ion-sensitive field-effect transistor, a conductive device capable of detecting chemical ions, linked to an artificial neural network trained on various datasets. Critically, Das noted, the sensors are not functionalized, meaning that a single sensor can detect different types of chemicals, rather than a specific sensor being dedicated to each potential chemical. The researchers provided the neural network with 20 specific parameters to evaluate, all related to how a liquid sample interacts with the electrical properties of the sensor.
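As a rough illustration of the pipeline described above, a small neural network can be trained to classify liquid samples from a vector of 20 sensor-derived parameters. The sketch below is not the authors' code: the liquid classes, the synthetic data, and the choice of scikit-learn's `MLPClassifier` are all assumptions for demonstration purposes.

```python
# Hypothetical sketch: classify liquids from 20 sensor-derived electrical
# parameters with a small neural network (not the authors' actual model).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 300 samples x 20 parameters, 3 liquid classes
# (0=milk, 1=soda, 2=juice -- purely illustrative labels).
n_per_class, n_features = 100, 20
centers = rng.normal(0.0, 3.0, size=(3, n_features))  # well-separated class centers
X = np.vstack([c + rng.normal(0.0, 1.0, size=(n_per_class, n_features))
               for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# One small hidden layer is enough for this toy separable problem.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In the real system each of the 20 inputs would be a measured electrical property of the graphene transistor rather than synthetic noise, but the training and classification steps follow this same pattern.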
Based on these researcher-specified parameters, the AI could accurately detect samples, including diluted milks, different types of sodas, coffee blends, and multiple fruit juices at multiple freshness levels, and report their content with greater than 80% accuracy in about a minute.
“After achieving reasonable accuracy with the human-selected parameters, we decided to let the neural network define its own figures of merit by feeding it the raw sensor data. We found that the neural network reached a nearly ideal inference accuracy of more than 95% using machine-derived figures of merit rather than those provided by humans,” said co-author Andrew Pannone, a doctoral student in engineering science and mechanics advised by Das.
“So we used a method called Shapley additive explanations, which allows us to ask the neural network what it was thinking after it made a decision.”
This approach uses game theory, a decision-making framework that takes into account the choices of others to predict the outcome of a single participant, to assign values to the data under consideration. Using these explanations, the researchers were able to reverse engineer how the neural network weighed various components of the sample to make a final decision, giving the team insight into the neural network’s decision-making process, which has remained largely opaque in the field of AI, according to the researchers.
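The game-theoretic idea can be made concrete with a minimal from-scratch sketch: a feature's Shapley value is its average marginal contribution to the model's output, averaged over all orderings in which features can be "added" to a coalition. This is a toy exact computation for three hypothetical features, not the SHAP library or the authors' implementation (real tools approximate this, since the number of orderings grows factorially).

```python
# Toy exact Shapley values: average marginal contribution of each feature
# over every ordering, with absent features held at a baseline value.
from itertools import permutations

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one input x against a baseline input."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        present = list(baseline)          # start with no features "present"
        for i in order:
            before = predict(present)
            present[i] = x[i]             # add feature i to the coalition
            phi[i] += predict(present) - before
    return [p / len(perms) for p in phi]

# Hypothetical linear "model" over 3 sensor features.
def model(v):
    return 2.0 * v[0] - 1.0 * v[1] + 0.5 * v[2]

x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
print(shapley_values(model, x, baseline))  # [2.0, -1.0, 0.5] for this linear model
```

For a linear model the Shapley values simply recover each coefficient's contribution, and they always sum to the difference between the model's output on the sample and on the baseline, which is what lets researchers attribute a prediction back to individual inputs.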
They found that instead of simply evaluating individual human-assigned parameters, the neural network considered the data it determined to be most important together, with the Shapley additive explanations revealing how heavily the neural network weighed each piece of input data.
The researchers explained that this assessment could be compared to two people drinking milk. Both can identify that it is milk, but one person may think it is skim milk that has spoiled, while the other thinks it is 2% milk that is still fresh. The nuances of why are not easy to explain, even by the person doing the assessment.
“We found that the network was looking at more subtle features in the data, things that we as humans have difficulty defining properly,” Das said.
“And because the neural network considers the characteristics of the sensors holistically, it smooths out variations that might occur from day to day. When it comes to milk, the neural network can determine the varying water content of the milk and, in this context, determine whether any indicators of degradation are significant enough to be considered a food safety problem.”
According to Das, the tongue’s capabilities are only limited by the data it is trained on, meaning that while this study was focused on food evaluation, it could also be applied to medical diagnosis. And while sensitivity is important regardless of where the sensor is applied, the robustness of their sensors paves the way for wide deployment across different industries, the researchers said.
Das explained that the sensors do not need to be exactly the same, because machine learning algorithms can look at all the information together and still produce the correct response. This makes the manufacturing process more convenient and less expensive.
“We realized that we can live with imperfection,” Das said. “And that’s what nature is: full of imperfections, yet still able to make robust decisions, just like our electronic tongue.”
More information:
Andrew Pannone et al, Robust chemical analysis with graphene chemosensors and machine learning, Nature (2024). DOI: 10.1038/s41586-024-08003-w
Provided by Pennsylvania State University
Citation: Electronic tongue that detects subtle differences in liquids also provides insight into how AI makes decisions (October 9, 2024), retrieved October 9, 2024 from