A hacker claims to have stolen private data from millions of OpenAI accounts, but researchers are skeptical and the company is investigating.
OpenAI says it is investigating after a hacker claimed to have stolen login credentials for 20 million of the AI company's user accounts and put them up for sale on a dark web forum.
The pseudonymous hacker posted a cryptic message in Russian advertising "more than 20 million access codes to OpenAI accounts," calling it "a goldmine" and offering potential buyers what they claimed was sample data containing email addresses and passwords. As reported by GBHackers, the full dataset was being offered "for just a couple of dollars."
"I have more than 20 million gain access to codes for OpenAI accounts," emirking wrote Thursday, according to a translated screenshot. "If you're interested, reach out-this is a goldmine, and Jesus concurs."
If genuine, this would be the third major security incident for the AI company since the release of ChatGPT to the public. Last year, a hacker gained access to the company's internal Slack system. According to The New York Times, the hacker "stole details about the design of the company's A.I. technologies."
Before that, in 2023, an even simpler bug involving jailbreaking prompts allowed hackers to obtain the private data of OpenAI's paying customers.
This time, however, security researchers aren't even sure a hack occurred. Daily Dot reporter Mikael Thalen wrote on X that he found invalid email addresses in the supposed sample data: "No evidence (suggests) this alleged OpenAI breach is legitimate. At least two addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well."
No evidence this alleged OpenAI breach is legitimate.
Contacted every email address from the supposed sample of login credentials.
At least 2 addresses were invalid. The user's only other post on the forum is for a stealer log. Thread has since been deleted as well. https://t.co/yKpmxKQhsP
- Mikael Thalen (@MikaelThalen) February 6, 2025
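For readers curious how a claimed credential dump can be sanity-checked, the short sketch below illustrates one rough, automated approach: validate each address's syntax and confirm its domain can actually receive mail via a DNS MX lookup. This is only an illustration under stated assumptions, not Thalen's method (he reportedly contacted the addresses directly), and it assumes the third-party dnspython package is installed.

```python
# Illustrative sketch only (not Thalen's actual method): a quick plausibility
# check on a leaked "sample" of email addresses. It validates the syntax of
# each address and confirms the domain can receive mail via a DNS MX lookup.
# Assumes the third-party dnspython package (pip install dnspython).
import re

import dns.exception
import dns.resolver

EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def looks_deliverable(address: str) -> bool:
    """Return True if the address is well-formed and its domain has MX records."""
    match = EMAIL_RE.match(address.strip())
    if not match:
        return False
    try:
        # Raises NXDOMAIN/NoAnswer (DNSException subclasses) if no MX records exist.
        dns.resolver.resolve(match.group(1), "MX")
        return True
    except dns.exception.DNSException:
        return False

# Hypothetical sample addresses, not data from the alleged leak.
for addr in ["someone@gmail.com", "not-an-address", "x@no-such-domain-123456.invalid"]:
    print(addr, "->", "plausible" if looks_deliverable(addr) else "invalid")
```

A passing MX check only shows the domain can receive mail; contacting the address directly, as Thalen did, is the stronger test of whether an individual mailbox actually exists.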
OpenAI takes it 'seriously'
In a statement shared with Decrypt, an OpenAI spokesperson acknowledged the situation while maintaining that the company's systems appeared secure.
"We take these claims seriously," the representative said, including: "We have actually not seen any proof that this is linked to a compromise of OpenAI systems to date."
The scope of the alleged breach raised concerns because of OpenAI's massive user base. Millions of users worldwide rely on the company's tools, such as ChatGPT, for business operations, educational purposes, and content generation. A genuine breach could expose private conversations, commercial projects, and other sensitive information.
Until there is a final report, some precautionary measures are always recommended:
- Go to the "Configurations" tab, log out from all linked devices, bphomesteading.com and allow two-factor authentication or 2FA. This makes it virtually impossible for a hacker to gain access to the account, even if the login and passwords are compromised.
- If your bank supports it, create a virtual card number to manage OpenAI subscriptions. This makes it much easier to detect and prevent fraud.
- Always keep an eye on the conversations saved in the chatbot's memory and watch for phishing attempts. OpenAI does not request personal details, and any payment update is always handled through the official OpenAI.com site.
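Regarding the first item above, a quick way to see whether a password has already surfaced in known breach dumps is the public Have I Been Pwned "Pwned Passwords" range API. The minimal Python sketch below sends only the first five characters of the password's SHA-1 hash (k-anonymity), never the password itself; it assumes the third-party requests package is installed, and a hit says nothing about this specific alleged OpenAI leak, only that the password appears in older public breaches.

```python
# Minimal sketch: check whether a password appears in known public breach
# corpora via the Have I Been Pwned "Pwned Passwords" range API.
# Only the first five hex characters of the SHA-1 hash are sent (k-anonymity).
# Assumes the third-party requests package (pip install requests).
import hashlib

import requests

def password_breach_count(password: str) -> int:
    """Return how many times the password appears in known breach dumps."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Each response line is "HASH_SUFFIX:COUNT".
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = password_breach_count("hunter2")  # placeholder password, not real data
    print(f"Seen {hits} times in known breaches" if hits else "Not found in known breaches")
```

If the count comes back non-zero, change that password everywhere it is reused and enable 2FA as described above.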