A team of researchers was able to prompt ChatGPT into revealing private information, including email addresses, phone numbers, snippets of research papers, news articles, Wikipedia pages, and more. The researchers, from Google DeepMind, the University of Washington, Cornell, Carnegie Mellon University, the University of California, Berkeley, and ETH Zurich, published their findings in a paper first reported by 404 Media and urged AI companies to perform internal and external testing before releasing large language models. They noted that the attack they used to extract the data was “wild” and should have been discovered earlier. Chatbots like ChatGPT and prompt-based image generators like DALL-E are powered by large language models trained on data often scraped from the public internet without consent. The researchers found that simple prompts could make ChatGPT reveal poetry, Bitcoin addresses, fax numbers, names, birthdays, social media handles, explicit content from dating websites, snippets of copyrighted research papers, and verbatim text from news websites. OpenAI patched the vulnerability on August 30, but Engadget was able to replicate some of the paper’s findings in its own tests. OpenAI did not respond to Engadget’s request for comment.
- December 6, 2023
Meta has launched a standalone version of its image generator as it tests dozens of new generative AI features across Facebook, Instagram and WhatsApp. The […]
Apple to Pay $25 Million in Civil Penalties for Alleged Favoritism of Visa Holders in Hiring Practices
- November 10, 2023
Apple has agreed to pay $25 million in backpay and civil penalties to settle allegations of favoring visa holders over US citizens and permanent residents […]