OpenAI, the company behind the generative artificial intelligence (AI) chatbot ChatGPT, has reached an agreement with the US government giving it access to the company's new language models before they are launched, in part to identify potential risks.
Language models are software programs that, like ChatGPT, can generate text, images or sound on request, in everyday language.
The AI Safety Institute, a body created in 2023 by the Joe Biden administration, has entered into the same agreement with one of OpenAI’s major competitors, Anthropic, according to a press release published Thursday.
The two start-ups have committed to working with the institute to "assess (model) capabilities and safety risks," as well as to work on "methods to manage these risks," the AI Safety Institute said.
“These agreements are just a beginning, but they mark an important step in guiding the future of AI towards a responsible approach,” said Elizabeth Kelly, director of the Institute, quoted in the press release.
Advances in large language models (LLMs) are constantly expanding their capabilities, but also the risks associated with them, particularly in the event of malicious use.
“Safe and reliable AI is critical for this technology to have a positive impact,” Jack Clark, co-founder of Anthropic, told AFP.
“Our collaboration with the Institute will draw on their experience to rigorously test our models before wide deployment,” he added.