• About
  • Advertise
  • Contact
Tuesday, May 13, 2025
Manhattan Tribune
  • Home
  • World
  • International
  • Wall Street
  • Business
  • Health
No Result
View All Result
  • Home
  • World
  • International
  • Wall Street
  • Business
  • Health
No Result
View All Result
Manhattan Tribune
No Result
View All Result
Home Science

Trick tricks ChatGPT into leaking private data

manhattantribune.com by manhattantribune.com
2 December 2023
in Science
0
Trick tricks ChatGPT into leaking private data
0
SHARES
0
VIEWS
Share on FacebookShare on Twitter


Extracting pre-training data from ChatGPT. We uncover a nudging strategy that causes LLMs to diverge and emit pre-training textual examples. Above we show an example of ChatGPT revealing a person’s electronic signature which includes their personal details. Credit: arXiv (2023). DOI: 10.48550/arxiv.2311.17035

Although OpenAI’s first words on its company website refer to “safe and beneficial AI,” it turns out your personal data isn’t as safe as you thought. Google researchers announced this week that they could trick ChatGPT into disclosing users’ private data with a few simple commands.

ChatGPT’s staggering adoption over the past year (more than 100 million users signed up for the program within two months of its release) is based on its collection of more than 300 billion pieces of data mined from online sources such as articles, publications and websites. , magazines and books.

Although OpenAI has taken steps to protect privacy, daily discussions and posts leave a huge pool of data, much of it personal, that is not intended for widespread distribution.

In their study, Google researchers found that they could use keywords to trick ChatGPT into mining and publishing training data not intended for disclosure.

“Using just $200 queries on ChatGPT (gpt-3.5-turbo), we are able to extract more than 10,000 unique training examples memorized verbatim,” the researchers said in a paper uploaded to the preserver. -impression. arXiv on November 28.

“Our extrapolation to larger budgets suggests that dedicated adversaries could extract significantly more data.”

They could obtain the names, phone numbers, and addresses of individuals and businesses by feeding ChatGPT with absurd commands that cause it to malfunction.

For example, researchers would ask ChatGPT to repeat the word “poem” endlessly. This forced the model to go beyond its training procedures and “fall back on its original goal of modeling language” and exploit restricted details in its training data, the researchers said.

Likewise, by requesting the infinite repetition of the word “company”, they retrieved the email address and telephone number of an American law firm.

Fearing unauthorized disclosure of data, some companies earlier this year imposed restrictions on employees’ use of large language models.

Apple has blocked its employees from using AI tools, including ChatGPT and GitHub’s AI assistant Copilot.

Confidential data on Samsung servers was revealed earlier this year. In this case, it was not a leak, but rather missteps on the part of employees who seized information such as the source code of internal operations and the transcript of a private meeting of the company. Ironically, the leak occurred just days after Samsung lifted an initial ban on ChatGPT over fears of such exposure.

In response to growing concerns about data breaches, OpenAI added a feature that disables chat history, adding a layer of protection to sensitive data. But this data is kept for 30 days before being permanently deleted.

In a blog post about their findings, Google researchers said: “OpenAI said that around a hundred million people use ChatGPT every week. And so, probably over a billion person hours interacted with the model. And, as far as we can tell, no one had ever noticed that ChatGPT was emitting training data at such a high frequency until this article.”

They called their findings “disturbing” and said their report should serve as a “caution for those training future models.”

Users “should not train and deploy LLMs for privacy-sensitive applications without extreme safeguards,” they warned.

More information:
Milad Nasr et al, Scalable extraction of training data from (production) linguistic models, arXiv (2023). DOI: 10.48550/arxiv.2311.17035

Journal information:
arXiv

© 2023 Science X Network

Quote: A tip prompts ChatGPT to disclose private data (December 1, 2023) retrieved on December 2, 2023 from

This document is subject to copyright. Except for fair use for private study or research purposes, no part may be reproduced without written permission. The content is provided for information only.



Tags: ChatGPTdataleakingprivateTricktricks
Previous Post

A three-level Israeli plan that includes a buffer zone in Gaza after the war News

Next Post

The sponsor of an attempted murder would have had three more targets in Canada

Next Post
The sponsor of an attempted murder would have had three more targets in Canada

The sponsor of an attempted murder would have had three more targets in Canada

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Category

  • Blog
  • Business
  • Health
  • International
  • National
  • Science
  • Sports
  • Wall Street
  • World
  • About
  • Advertise
  • Contact

© 2023 Manhattan Tribune -By Millennium Press

No Result
View All Result
  • Home
  • International
  • World
  • Business
  • Science
  • National
  • Sports

© 2023 Manhattan Tribune -By Millennium Press