A team of researchers was able to prompt ChatGPT into revealing private information, including email addresses, phone numbers, snippets from research papers, news articles, Wikipedia pages, and more. The researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California, Berkeley, and ETH Zurich, published their findings in a paper first reported by 404 Media and urged AI companies to perform internal and external testing before releasing large language models. They called it "wild" that their attack worked and said the vulnerability should have been discovered earlier.

Chatbots like ChatGPT and prompt-based image generators like DALL-E are powered by large language models trained on data that is often scraped from the public internet without consent. The researchers found that simple prompts could make ChatGPT reveal poetry, Bitcoin addresses, fax numbers, names, birthdays, social media handles, explicit content from dating websites, snippets from copyrighted research papers, and verbatim text from news websites.

OpenAI patched the vulnerability on August 30, but Engadget was able to replicate some of the paper's findings in its own tests. OpenAI did not respond to Engadget's request for comment.