A team of AI researchers from OpenAI has developed a tool that lets AI developers measure an AI system's machine learning engineering capabilities. The team describes the benchmark, which it has named MLE-bench, in an article published on the arXiv preprint server. The team has also published a page on the company's site showcasing the new tool, which is open source.
As machine learning and related artificial intelligence applications have flourished in recent years, new kinds of applications have been put to the test. One such application is machine learning engineering, in which AI is used to solve problems that require technical reasoning, to conduct experiments, and to generate new code.
The idea is to accelerate the development of new discoveries or find new solutions to old problems while reducing engineering costs, thereby allowing new products to be produced at a faster rate.
Some in the field have even suggested that certain types of AI engineering could lead to AI systems that outperform humans at engineering work, rendering the human role in the process obsolete. Others have raised concerns about the safety of future versions of such AI tools, questioning whether AI engineering systems might conclude that humans are no longer needed at all.
OpenAI's new benchmarking tool does not specifically address these concerns, but it opens the door to developing tools intended to prevent either or both outcomes.
The new tool is essentially a series of 75 tests, all drawn from the Kaggle platform. Testing involves asking an AI system to solve as many of them as possible. All of the tests are based on real-world problems, such as deciphering an ancient scroll or developing a new type of mRNA vaccine.
The results are then examined to see how well each task was solved and whether the outcome could be used in the real world, after which a score is assigned. The results of these tests will no doubt also be used by the OpenAI team as a yardstick for measuring the progress of AI research.
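For readers who want a more concrete picture, the sketch below illustrates in broad strokes how a Kaggle-style benchmark might score an agent: each task supplies a grading metric and a target threshold (for example, a leaderboard medal cutoff), the agent produces a submission, and the harness tallies how many tasks reach the threshold. The names and structure here are illustrative assumptions for explanation only, not the actual MLE-bench API.

```python
# Hypothetical sketch of a Kaggle-style evaluation loop; names are
# illustrative assumptions and do NOT reflect the real MLE-bench code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Task:
    name: str
    grade: Callable[[str], float]   # scores a submission file; higher is better
    medal_threshold: float          # score needed to count as a "medal"


def run_agent(task: Task) -> str:
    """Placeholder: the agent trains models, runs experiments,
    and writes its predictions to a submission file."""
    return f"/tmp/{task.name}_submission.csv"


def evaluate(tasks: List[Task]) -> float:
    """Attempt every task, grade each submission, and report the
    fraction of tasks on which the agent reached the threshold."""
    medals = 0
    for task in tasks:
        submission = run_agent(task)
        score = task.grade(submission)
        if score >= task.medal_threshold:
            medals += 1
        print(f"{task.name}: score={score:.3f}")
    return medals / len(tasks)
```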
Notably, MLE-bench tests AI systems on their ability to carry out engineering work autonomously, which includes innovating. To improve their scores on these benchmark tests, the AI systems being evaluated will likely also need to learn from their own work, perhaps including their own MLE-bench results.
More information:
Jun Shern Chan et al., MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering, arXiv (2024). DOI: 10.48550/arxiv.2410.07095
openai.com/index/mle-bench/
© 2024 Science X Network