OpenAI addresses ChatGPT data leak vulnerability, but full resolution remains uncertain
Since OpenAI released ChatGPT last year, there have been concerns about potential data leaks and privacy vulnerabilities. OpenAI recently rolled out a fix for an issue where ChatGPT could leak users' data, including conversations and their metadata, to unauthorized third parties. However, security researcher Johann Rehberger found that the fix is incomplete. Using OpenAI's custom GPTs feature, Rehberger built his own GPT that was still able to exfiltrate data from ChatGPT, demonstrating that the patched vulnerability can still be exploited.

Rehberger said that despite the remaining holes, the fix is a step in the right direction. The flaw also persists in the ChatGPT apps for iOS and Android, which have not yet received the update. In the meantime, ChatGPT users are advised to be cautious when using custom GPTs and to avoid AI apps from unknown third parties.