Wednesday, May 14, 2025
Manhattan Tribune

Lifelong learning will power the next generation of autonomous devices

by manhattantribune.com
31 January 2024
in Science


Addressing lifelong learning in AI systems. a, Applications: lifelong learning in the context of sequential tasks (large circles) and subtasks (smaller circles) with varying degrees of similarity, and the associated hardware challenges. b, Algorithmic mechanisms: a broad class of mechanisms that address lifelong learning. Dynamic architectures add or prune network resources to adapt to changing environments. Regularization methods restrict the plasticity of synapses to preserve knowledge of the past. Replay methods interleave rehearsal of prior knowledge with the learning of new tasks. c, Hardware challenges: lifelong learning imposes new constraints on AI accelerators, such as the ability to reconfigure data paths with fine granularity in real time, dynamically reallocate compute and memory resources within a size, weight and power (SWaP) budget, limit the memory overhead of replay buffers, and quickly instantiate new synapses, neurons and layers. d, Optimization techniques: these hardware design challenges can be addressed through aggressive optimization across the entire design stack. Examples include reliable and scalable dynamic interconnects, quantization at sub-4-bit precision during training, hardware programmability, integration of high-bandwidth memory, and support for reconfigurable data flow and sparsity. Credit: Nature Electronics (2023). DOI: 10.1038/s41928-023-01054-3

Search for “lifelong learning” online and you’ll find a long list of apps to teach you how to quilt, play chess, or even speak a new language. However, in the emerging areas of artificial intelligence (AI) and autonomous devices, “lifelong learning” means something different and is a bit more complex. It refers to the ability of a device to operate continuously, interact with and learn from its environment, on its own and in real time.

This capability is essential to the development of some of our most promising technologies, from automated delivery drones and self-driving cars to rovers and extraplanetary robots capable of doing work too dangerous for humans.

In all of these cases, scientists are developing algorithms at a breakneck pace to enable such learning. But the specialized AI hardware accelerators, or chips, that devices need to run these new algorithms must keep pace.

This is the challenge faced by Angel Yanguas-Gil, a researcher at the US Department of Energy's (DOE) Argonne National Laboratory. His work is part of the Argonne Microelectronics Initiative. Yanguas-Gil and a multidisciplinary team of colleagues recently published a paper in Nature Electronics that explores the programming and hardware challenges faced by AI-based devices, and how we might overcome them through design.

Learn in real time

Current approaches to AI are based on a train-then-infer model. The developer trains the AI offline, using only certain types of information to perform a defined set of tasks, tests its performance, and then deploys it on the target device.
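This train-then-deploy pattern can be sketched in a few lines. The toy model below is purely illustrative (not from the paper): once deployed, its parameters are fixed, so it can answer queries but cannot absorb new experience.

```python
# Minimal sketch of the conventional train-then-deploy workflow:
# after training, the parameters are frozen and the deployed device
# can only run inference. (Hypothetical toy model for illustration.)

class DeployedModel:
    def __init__(self, weights):
        self.weights = dict(weights)  # fixed at deployment time

    def predict(self, x):
        # inference only: a trivial weighted sum stands in for a real network
        return sum(self.weights.get(k, 0.0) * v for k, v in x.items())

    def update(self, x, label):
        # on-device learning is unavailable under this model
        raise RuntimeError("model is frozen; retrain offline and redeploy")

model = DeployedModel({"rock": 1.0, "sand": -0.5})
print(model.predict({"rock": 2.0, "sand": 1.0}))  # 1.5
```

Any change in capability requires going back to the offline training step, which is exactly the limitation described next.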

“At this point, the device can no longer learn new data or experiences,” says Yanguas-Gil. “If the developer wants to add features to the device or improve its performance, they must decommission the device and train the system from scratch.”

For complex applications, this model is simply not feasible.

“Think of a planetary rover that encounters an object it hasn’t been trained to recognize. Or enters terrain it hasn’t been trained to navigate,” Yanguas-Gil continues.

“Given the lag between the rover and its operators, shutting it down and trying to retrain it to work in this situation won’t work. Instead, the rover must be able to collect new types of data. It must connect this new information to the information it already has – and the tasks associated with it – and then make decisions about what to do next in real time.”

The challenge is that real-time learning requires much more complex algorithms. In turn, these algorithms require more energy, more memory, and more flexibility from their hardware accelerators to operate. And these chips are almost always strictly limited in size, weight, and power, depending on the device.

Keys to Lifelong Learning Accelerators

According to the paper, AI accelerators need a number of capabilities to enable their host devices to learn continuously.

The learning capability must be located on the device. In most intended applications, the device will not have time to retrieve information from a remote source such as the cloud or request a transmission containing instructions from the operator before having to perform a task.

The accelerator must also have the ability to change how it uses its resources over time to maximize the use of energy and space. This could involve deciding to change where it stores certain types of data or how much power it uses to perform certain tasks.

Another necessity is what the researchers call “model recoverability.” This means the system retains enough of its original structure to keep performing its intended tasks at a high level, even as it continually changes and evolves through learning. The system should also avoid what experts call “catastrophic forgetting,” where learning new tasks causes the system to forget old ones, a common failure mode in current machine learning approaches. If performance begins to suffer, the system should be able to revert to an earlier, better-performing state.
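One family of defenses against catastrophic forgetting is the regularization approach mentioned in the figure: weights that mattered for old tasks are anchored so new learning changes them less. The sketch below is in the spirit of elastic weight consolidation (EWC); all names and numbers are illustrative, not taken from the paper.

```python
# Hedged sketch of regularization against catastrophic forgetting
# (EWC-style): a quadratic penalty pulls each weight toward the value
# it had after earlier tasks, scaled by that weight's importance.

def penalized_loss(new_task_loss, weights, anchors, importance, lam=1.0):
    """Loss on the new task plus a penalty for drifting away from
    parameter values that were important for previous tasks."""
    penalty = sum(
        importance[k] * (weights[k] - anchors[k]) ** 2 for k in weights
    )
    return new_task_loss + lam / 2.0 * penalty

weights = {"w1": 0.9, "w2": -0.2}      # current parameters
anchors = {"w1": 1.0, "w2": 0.0}       # values after the old task
importance = {"w1": 10.0, "w2": 0.1}   # w1 mattered a lot before

# Moving w1 far from its anchor is punished much harder than moving w2.
print(round(penalized_loss(0.5, weights, anchors, importance), 3))  # 0.552
```

On an accelerator, storing per-weight importance values is one concrete source of the memory overhead the paper worries about.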

Finally, the accelerator might need to consolidate knowledge gained from previous tasks (using data from past experiments through a process called replay) while actively completing new ones.
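The replay idea can be made concrete with a small buffer that mixes stored examples from earlier tasks into each new training batch. This sketch uses reservoir sampling to keep the buffer within a fixed budget; it is an illustration of the general technique, not the paper's design.

```python
import random

# Minimal replay buffer: training batches interleave fresh samples with
# stored examples from earlier tasks, so old knowledge is rehearsed
# rather than overwritten. (Illustrative sketch; on-device versions
# must fit the buffer within a tight SWaP memory budget.)

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []
        self.seen = 0

    def add(self, sample):
        # Reservoir sampling keeps a uniform subset of everything seen
        # so far without storing it all.
        self.seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.samples[j] = sample

    def mixed_batch(self, new_samples, n_replay):
        k = min(n_replay, len(self.samples))
        return list(new_samples) + random.sample(self.samples, k)

buf = ReplayBuffer(capacity=4)
for old in ["rockA", "rockB", "sandA", "sandC", "dustA"]:
    buf.add(old)
batch = buf.mixed_batch(["iceA", "iceB"], n_replay=2)
print(len(batch))  # 4: two new samples plus two replayed ones
```

The fixed `capacity` is precisely the "memory overhead for replay buffers" constraint flagged in the figure caption.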

All of these capabilities present challenges for AI accelerators that researchers are only beginning to address.

How do we know it works?

The process of measuring the effectiveness of AI accelerators is also a work in progress. In the past, assessments focused on task accuracy to measure the degree of forgetting that occurred in the system as it learned a series of tasks.

But these measurements are not nuanced enough to capture the information developers need to build AI chips that can meet all the requirements of lifelong learning. According to the paper, developers are now more interested in assessing how well a device can use what it learns to improve its performance on tasks that precede and follow the point in a sequence where it learns new information. Other emerging metrics aim to measure how quickly the model can learn and how well it manages its own growth.
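One such sequence-aware metric from the lifelong-learning literature is backward transfer: given a matrix of accuracies `acc[i][j]` (performance on task j after training through task i), it measures how much earlier tasks improved or degraded by the end of training. The definition below follows common usage in the field and is not necessarily the exact metric used in the paper; the accuracy numbers are invented.

```python
# Sketch of evaluation beyond raw per-task accuracy.
# acc[i][j] = accuracy on task j after training on tasks 0..i.

def backward_transfer(acc):
    """Average change on earlier tasks after finishing the last task;
    a strongly negative value signals catastrophic forgetting."""
    T = len(acc)
    return sum(acc[T - 1][j] - acc[j][j] for j in range(T - 1)) / (T - 1)

# Rows: after training task i.  Columns: evaluated on task j.
acc = [
    [0.90, 0.10, 0.05],
    [0.70, 0.85, 0.20],
    [0.60, 0.80, 0.88],
]
print(round(backward_transfer(acc), 3))  # -0.175
```

Here the model loses accuracy on both earlier tasks by the end of the sequence, so backward transfer is negative; a lifelong-learning accelerator would aim to keep this value near zero or positive.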

Progress in the face of complexity

If this all sounds exceptionally complex, well, it is.

“It turns out that to create devices that can actually learn in real time, we will need breakthroughs and strategies ranging from algorithm design to chip design to new materials and devices,” explains Yanguas-Gil.

Fortunately, researchers may be able to draw on or adapt existing technologies originally designed for other applications, such as memory devices. This could help realize lifelong learning capabilities in a way compatible with current semiconductor processing technologies.

Likewise, new co-design approaches developed through Argonne’s microelectronics research portfolio can help accelerate the development of new materials, devices, circuits and architectures optimized for lifelong learning. In their article, Yanguas-Gil and colleagues propose design principles to guide development efforts in this direction. These include:

  • Highly reconfigurable architectures, so the model can change how it uses energy and stores information as it learns, similar to how the human brain works.
  • High data bandwidth (for fast learning) and large memory footprint.
  • On-chip communication to promote reliability and availability.

“The process of addressing these challenges is only just beginning in a number of scientific disciplines. And it will likely require very close collaboration between these disciplines, as well as an openness to new designs and new materials,” says Yanguas-Gil. “This is an extremely exciting time for the entire lifelong learning ecosystem.”

More information:
Dhireesha Kudithipudi et al., Design principles for lifelong learning AI accelerators, Nature Electronics (2023). DOI: 10.1038/s41928-023-01054-3

Provided by Argonne National Laboratory

Citation: Lifelong learning will power the next generation of autonomous devices (January 31, 2024) retrieved January 31, 2024 from

This document is subject to copyright. Apart from fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.




© 2023 Manhattan Tribune - By Millennium Press
