Warning: Your ChatGPT login email may be compromised

A research team led by Rui Zhu, a Ph.D. candidate at Indiana University Bloomington, has uncovered a privacy risk in OpenAI’s language model GPT-3.5 Turbo. Last month, Zhu contacted a number of people, including New York Times employees, at email addresses he had extracted from the model.

The experiment exploited GPT-3.5 Turbo’s ability to recall personal information, bypassing its usual privacy safeguards. Though imperfect, the model produced correct work email addresses for 80 percent of the Times employees tested. This raises alarms about how readily generative AI tools like ChatGPT can be made to disclose sensitive information with only slight modifications to how they are used.

OpenAI’s language models, including GPT-3.5 Turbo and GPT-4, are designed to keep learning from new data. The researchers exploited the model’s fine-tuning interface, which is meant to let users give the model extra knowledge in a specific domain, to work around the tool’s defenses: requests that would typically be denied in the standard interface were accepted through this route.

OpenAI, Meta, and Google all employ techniques to block requests for personal information, but researchers keep finding ways around these safeguards. Rather than working through ChatGPT’s standard interface, Zhu and his colleagues used the model’s API and fine-tuned the model to get their results.
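To make that mechanism concrete, the sketch below shows the general shape of a fine-tuning job submitted through OpenAI’s Python SDK, the kind of interface the article describes. The file name examples.jsonl and its contents are hypothetical placeholders for illustration; nothing here reproduces the researchers’ actual training data or prompts.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Fine-tuning data is uploaded as a JSONL file of chat-formatted
    # examples, e.g. {"messages": [{"role": "user", "content": ...},
    #                              {"role": "assistant", "content": ...}]}.
    training_file = client.files.create(
        file=open("examples.jsonl", "rb"),  # hypothetical file name
        purpose="fine-tune",
    )

    # Start a fine-tuning job against the base model.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)

    # Once the job succeeds, job.fine_tuned_model holds the name of the
    # custom model, which is then queried like any other chat model.

The researchers’ point is that this route accepts training examples that the consumer chat interface would refuse to act on, which is how the model’s safety behavior can be weakened.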

OpenAI responded to the concerns by emphasizing its commitment to safety and its policy of refusing requests for private information. Experts remain skeptical, however, pointing to the lack of transparency around the model’s specific training data and the risks inherent in AI models holding private information.

The vulnerability discovered in GPT-3.5 Turbo raises broader concerns about privacy in large language models. Experts argue that commercially available models lack strong defenses for protecting privacy, a significant risk as these models continually ingest data from diverse sources. The secrecy around OpenAI’s training data practices compounds the problem, and critics are urging greater transparency and concrete measures to ensure that sensitive information in AI models is protected.
