OpenAI Fixes ChatGPT Security Vulnerability That Put User Data at Risk

On September 18, Jin10 reported that researchers at the cybersecurity firm Radware said OpenAI has fixed a security vulnerability in ChatGPT that could have allowed hackers to steal users' Gmail data. The flaw was in ChatGPT's Deep Research tool, launched in February to help users analyze large amounts of information. According to the researchers' findings, attackers exploiting the vulnerability could have extracted sensitive data from corporate or personal Gmail accounts, and users who had linked their Gmail accounts to ChatGPT may have unknowingly exposed their data to hackers. An OpenAI spokesperson said that model security is critical to the company and that it is continually raising its standards to make the technology more resistant to such attacks.
