The results are in from a survey of nearly 3,000 machine learning experts on how our lives will change in an AI world.
The good news: A majority believe AI will usher in remarkable progress in fields such as science, literature, mathematics, music and architecture, years earlier than a similar survey predicted two years ago.
The bad news is that we are all going to die.
At least that’s the sentiment of 38 to 51 percent of respondents who believe there is at least a 10 percent chance of an AI-triggered extinction scenario. Nearly 60% said the chance was at least 1 in 20.
The survey was conducted by AI Impacts, which studies the long-term consequences of artificial intelligence.
Not all of the results were pessimistic. AI development is progressing so rapidly that respondents believe several major milestones will be reached years earlier than was expected just two years ago.
For example, respondents said there is at least a 50% chance that machines will gain the ability to perform every possible human task without human assistance – and do so better and more cheaply – by 2047. Two years ago, the estimated target date was 2060.
Other notable AI achievements were predicted as early as the late 2020s. They include the ability to generate video from new angles, write a New York Times bestselling novel and, lo and behold, fold laundry.
And imagine generating a flawless song in the style and sound of Taylor Swift, The Weeknd or Ed Sheeran, indistinguishable from the actual artist. This will be achievable within a few years, the survey estimates; some credible efforts have already been published. The ethics of such feats were not addressed in the study.
A total of 70% of experts believe that good outcomes are more likely than bad ones as AI becomes smarter and more powerful.
The study titled “Thousands of AI Authors on the Future of AI” was published on the arXiv preprint server January 5.
The study found that of 39 tasks described in its questionnaires, 35 had at least a 50% chance of being achieved within a decade. These tasks included beating a human at Go (after training on the same number of games), recognizing an object after seeing it just once, and winning the prestigious and notoriously difficult Putnam Mathematical Competition.
While only some expressed concern about an extinction event, more than half of those surveyed expressed "substantial" or "extreme" concern about troubling trends in AI, particularly the spread of false and misleading information.
As an NBC report recently warned: "A convergence of events at home and abroad, on traditional and social media – and in an environment of growing authoritarianism, deep distrust and political and social unrest – makes the dangers of propaganda, lies and conspiracy theories more dire than ever."
As the American public braces for another likely clash between President Biden and former President Trump, and with key elections ahead in more than 50 other countries, AI-generated disinformation threatens to upend politics within and between the nations of the world.
The survey also found "extreme concern" among respondents about deepfakes, the manipulation of public opinion, the potential use of AI by authoritarian leaders to control populations, and the widening of inequality through irresponsible uses of AI.
Three-quarters of respondents said "more" or "much more" AI safety research should be conducted to address growing concerns about AI-related abuse.
“While optimistic scenarios reflect the potential of AI to revolutionize various aspects of work and life,” the report concludes, “pessimistic predictions, especially those involving extinction risks, are a stark reminder of the high stakes involved in the development and deployment of AI.”
More information:
Katja Grace et al, Thousands of AI Authors on the Future of AI, arXiv (2024). DOI: 10.48550/arxiv.2401.02843
© 2024 Science X Network
Citation: The future of AI could be great or catastrophic (January 30, 2024) retrieved January 30, 2024 from