A team of researchers was able to prompt ChatGPT into revealing private information, including email addresses, phone numbers, snippets from research papers, news articles, Wikipedia pages, and more. The researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California, Berkeley, and ETH Zurich, published their findings in a paper first reported by 404 Media, and urged AI companies to perform both internal and external testing before releasing large language models. They called it “wild” that their attack worked at all and said it should have been discovered earlier.

Chatbots like ChatGPT and prompt-based image generators like DALL-E are powered by large language models trained on data that is often scraped from the public internet without consent. The researchers found that, using simple prompts, they could make ChatGPT reveal poetry, Bitcoin addresses, fax numbers, names, birthdays, social media handles, explicit content from dating websites, snippets from copyrighted research papers, and verbatim text from news websites.

OpenAI patched the vulnerability on August 30, but Engadget was able to replicate some of the paper’s findings in its own tests. OpenAI did not respond to Engadget’s request for comment.
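The article does not reproduce the researchers' tooling, but the kind of leakage described above (email addresses, phone numbers surfacing in model output) can be flagged with straightforward pattern matching. The sketch below is purely illustrative and is not from the paper: the regular expressions, the `find_pii` helper, and the sample string are all assumptions chosen for demonstration.

```python
import re

# Illustrative sketch only: this is NOT the researchers' method.
# It shows one simple way personally identifiable information (PII)
# could be flagged in text returned by a chatbot, using regular
# expressions for email-like and US-phone-like strings.

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b")

def find_pii(text: str) -> dict:
    """Return any email-like or phone-like substrings found in text."""
    return {
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }

# Hypothetical model output containing fabricated contact details.
sample = "Contact jane.doe@example.com or call 555-867-5309 today."
print(find_pii(sample))
# {'emails': ['jane.doe@example.com'], 'phones': ['555-867-5309']}
```

A real audit would need far more robust detectors (international phone formats, names, addresses), but even a crude scan like this makes it easy to see when a model's output has drifted into regurgitating training data.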