OpenAI Says Hackers Stole Data After Recent Code Flaw

OpenAI said hackers stole some data following a recent code security issue, marking the latest cybersecurity incident to affect a major artificial intelligence company.
The company disclosed that the incident involved unauthorized access linked to a code-related security problem. OpenAI has not publicly detailed the scope of the data taken, the number of affected systems, or when the intrusion was detected. The company also has not identified the attackers.
In separate statements from the same period, OpenAI said that no user data was stolen in a supply-chain attack in which hackers accessed employee devices. The company has drawn a distinction between the two incidents: data was stolen following the code security issue, while the supply-chain attack on employee devices, it says, did not expose user information.
The development matters because it underscores the growing security pressure on companies building and operating widely used AI products. OpenAI’s tools and services are embedded in consumer and enterprise workflows, and any breach—whether it involves internal systems, code repositories, or other company data—can raise concerns about operational integrity, intellectual property protection, and trust in the security of the software supply chain.
The incident also highlights how modern breaches often span multiple layers of technology. Code security issues can create openings that affect software development processes and internal environments, while supply-chain attacks can exploit trusted dependencies or third-party components. Even when user data is not affected, companies can still face meaningful risk if internal data or code is accessed or copied.
OpenAI has not provided further public information on remediation steps connected to the data-theft disclosure, beyond its broader statement that no user data was lost in the supply-chain incident involving employee devices. It is also unclear whether the company has notified customers, partners, or regulators, or whether law enforcement is involved.
What happens next will depend on what OpenAI determines was accessed and taken. Companies in similar situations typically move to contain access, review logs, rotate credentials, and audit code and internal systems for unauthorized changes, while assessing whether any downstream products or integrations were affected. OpenAI has not announced a timeline for further updates.
The disclosure lands amid a wider wave of cybersecurity incidents hitting technology and manufacturing organizations, including a separate report that Foxconn confirmed a cyberattack affecting some of its North American factories. The spate of incidents reflects the range of targets and methods attackers are now using, from corporate networks and employee devices to software supply chains.
For OpenAI, the immediate focus will be on clarifying the impact of the code security issue and demonstrating that its controls can prevent similar incidents going forward, as the company faces intensifying scrutiny over how it protects sensitive systems and data.
