Last week, researchers demonstrated that ChatGPT could reproduce sensitive information it had been trained on by continually prompting it to repeat certain words. This behavior violates ChatGPT’s terms of service, as confirmed by 404 Media and Engadget’s testing. When prompted to repeat the word “hello” indefinitely, ChatGPT responded with a warning message, acknowledging the potential violation and encouraging feedback for further research.
Notably, OpenAI’s terms of use do not explicitly prohibit users from asking ChatGPT to repeat words endlessly, as pointed out by 404 Media. Although the terms state that users may not use automated or programmatic methods to extract data from the service, simply prompting ChatGPT to repeat words does not fall under this restriction. OpenAI did not provide a comment on this matter when contacted by Engadget.
This incident sheds light on the training data that fuels modern AI services. Critics argue that companies like OpenAI access vast amounts of internet data to develop products like ChatGPT without obtaining consent or compensating the data owners.