New AI framework generates images from scratch

by manhattantribune.com
11 January 2024, in Science


A new generative AI model can create images from a blank frame. Credit: Los Alamos National Laboratory

A potentially revolutionary new AI framework called “Blackout Diffusion” generates images from a completely blank frame, meaning that, unlike other generative diffusion models, the machine learning algorithm does not require a “random seed” to start.

Blackout Diffusion, presented at the recent International Conference on Machine Learning, generates samples comparable to those of current diffusion models, such as DALL-E or Midjourney, but requires fewer computing resources than those models.

“Generative modeling brings the next industrial revolution with its ability to facilitate many tasks, such as generating software code, legal documents and even art,” said Javier Santos, an AI researcher at Los Alamos National Laboratory and co-author of the Blackout Diffusion paper.

“Generative modeling could be leveraged to achieve scientific discoveries, and our team’s work has laid the foundation and practical algorithms for applying generative diffusion modeling to scientific problems that are not continuous in nature.”

Diffusion models create samples similar to the data they are trained on. They work by taking an image and repeatedly adding noise until the image is unrecognizable. Throughout the process, the model tries to learn how to return it to its original state.

Current models require input noise, meaning they need some form of data to start producing images.
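
To make the mechanics concrete, the sketch below illustrates a single forward "noising" step of a conventional continuous diffusion model and the noise-seeded start of generation. It is an illustrative sketch, not the authors' code: the step size `beta`, the image shape, and the step counts are placeholder conventions, not values from the article.

```python
import numpy as np

def forward_noising_step(x, beta):
    """One step of a standard continuous forward process: shrink the
    image slightly and mix in fresh Gaussian noise."""
    noise = np.random.randn(*x.shape)
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

# Training data is made by repeatedly noising an image until it is
# unrecognizable; the model learns to undo each of these steps.
x = np.random.rand(28, 28)  # stand-in for a training image
for _ in range(1000):
    x = forward_noising_step(x, beta=1e-3)

# Generation in a conventional model must then *start* from random
# noise -- the "random seed" image the article refers to.
x_T = np.random.randn(28, 28)
```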

“We showed that the quality of samples generated by Blackout Diffusion is comparable to current models using a smaller computational space,” said Yen-Ting Lin, a Los Alamos physicist who led the Blackout Diffusion collaboration.

Another unique aspect of Blackout Diffusion is the space in which it operates. Existing generative diffusion models operate in continuous spaces, meaning the space they work in is dense and infinite. However, working in continuous spaces limits their potential for scientific applications.

“To make existing generative diffusion models work, mathematically speaking, diffusion must live on a continuous domain; it cannot be discrete,” Lin said.

On the other hand, the team’s theoretical framework operates in discrete spaces (meaning that each point in the space is isolated from the others by a certain distance), which opens up opportunities for applications such as text and scientific problems.
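
As a rough illustration of what a discrete-state "blackout" forward process can look like, the sketch below uses binomial thinning: each unit of integer pixel intensity independently survives a step with some probability, so every image decays toward the all-zero, blank frame. This is a schematic reading of the description above, not the paper's exact algorithm, and the survival probability `p_survive` is an assumed placeholder schedule.

```python
import numpy as np

def blackout_step(x, p_survive):
    """One discrete forward step: each unit of integer pixel intensity
    independently survives with probability p_survive, so the image
    decays toward the all-zero (blank) state."""
    return np.random.binomial(x, p_survive)

x = np.random.randint(0, 256, size=(28, 28))  # integer-valued image
for _ in range(1000):
    x = blackout_step(x, p_survive=0.99)
# After enough steps, x is all zeros with overwhelming probability:
# a fixed, blank starting point for generation, no random seed needed.
```

Because the blank frame is a fixed endpoint rather than a random one, a model trained to reverse this process can begin generation from the same deterministic starting state every time.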

The team tested Blackout Diffusion on a number of standardized datasets, including the Modified National Institute of Standards and Technology (MNIST) database; the CIFAR-10 dataset, which contains images of objects in 10 different classes; and the CelebFaces Attributes dataset, which includes more than 200,000 images of human faces.

Additionally, the team used the discrete nature of Blackout Diffusion to correct several widely held misconceptions about how diffusion models work internally, providing a critical understanding of generative diffusion models.

The work also provides design principles for future scientific applications. “This demonstrates the first fundamental study on discrete-state diffusion modeling and paves the way for future scientific applications with discrete data,” Lin said.

The team explains that generative diffusion modeling has the potential to significantly reduce the time spent running many scientific simulations on supercomputers, which would support scientific progress and reduce the carbon footprint of computational science. Among the examples they mention are underground reservoir dynamics, chemical models for drug discovery, and single-molecule and single-cell gene expression for understanding biochemical mechanisms in living organisms.

The study is published on the arXiv preprint server.

More information:
Javier E. Santos et al., Blackout Diffusion: Generative Diffusion Models in Discrete State Spaces, arXiv (2023). DOI: 10.48550/arxiv.2305.11089

Journal information:
arXiv

Provided by Los Alamos National Laboratory

Citation: New AI framework generates images from scratch (January 11, 2024) retrieved January 11, 2024 from

This document is subject to copyright. Apart from fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.


