A team of researchers was able to prompt ChatGPT to reveal private information including email addresses, phone numbers, snippets from research papers, news articles, Wikipedia pages, and more. The researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California Berkeley, and ETH Zurich, published their findings in a 404 Media and urged AI companies to perform internal and external testing before releasing large language models. They noted that the attack they used to access the data was “wild” and should have been discovered earlier. Chatbots like ChatGPT and prompt-based image generators like DALL-E are powered by large language models trained on data often scraped from the public internet without consent. The researchers discovered that using simple prompts, they were able to make ChatGPT reveal poetry, Bitcoin addresses, fax numbers, names, birthdays, social media handles, explicit content from dating websites, snippets from copyrighted research papers, and verbatim text from news websites. OpenAI patched the vulnerability on August 30, but Engadget was able to replicate some of the paper’s findings in their own tests. OpenAI did not respond to Engadget’s request for comment.
Related Posts
Insider Hackers Exploit SEC Rule to Expose Infiltrated Company
- admin
- November 16, 2023
- 0
A hacking group breached a financial software company’s network and reported the breach to the US Securities and Exchange Commission (SEC). According to DataBreaches.net, the […]
Save $100 on the Best Wi-Fi 6 Mesh Router in Amazon’s Black Friday Deals
- admin
- November 20, 2023
- 0
The TP-Link Deco AXE5400 Wi-Fi 6 mesh router system is available on Amazon during its Black Friday sale for $200, which is $100 off the […]
Introducing Amazon’s Astro: The Tiny Robot Security Guard
- admin
- November 15, 2023
- 0
Amazon introduces Astro for Business, the Alexa-equipped home robot now available as a security guard for small and medium-size businesses. The robot comes with an […]