OpenAI is making a calculated move in Europe: give trusted defenders access to powerful cyber-focused AI before regulators conclude that only the companies building these systems understand the risk.
Reuters reported that OpenAI is granting selected European companies access to its latest models, including GPT-5.5-Cyber, through a Trusted Access for Cyber program. Named participants include Deutsche Telekom, BBVA, Telefonica, Sophos, Scalable Capital and dozens of other companies in sensitive sectors such as finance, telecoms, energy and public services.
This is not a normal product rollout. It is a controlled-access model for a controlled-access problem: frontier AI can help security teams find and fix vulnerabilities, but the same capability can also make attackers faster.
The access strategy
OpenAI's pitch is that capable defenders should not be left waiting while attackers' use of AI improves. The company told Reuters that verified organizations will get access with safeguards aimed at keeping use defensive. Emmanuel Marill, OpenAI's EMEA managing director, framed the tradeoff plainly: block dangerous activity while giving trusted defenders tools that actually help protect systems.
That wording matters. It shifts the debate from whether cyber-capable AI should exist to who is allowed to use it, under what constraints, and with what oversight.
A day earlier, Reuters reported that the European Commission welcomed OpenAI's offer to provide access to cybersecurity features. Brussels said Anthropic had held meetings with officials but had not reached a similar access stage.
Europe wants visibility, not just assurances
The timing is useful for OpenAI. European officials are already wrestling with how to monitor frontier models that can reason about software flaws, exploit chains and defensive hardening. Politico reported that OpenAI's offer includes talks with EU authorities and follows frustration over limited access to Anthropic's cyber-capable Mythos model.
For regulators, the practical problem is simple: if only the lab can test the strongest version of a model, public agencies are left grading safety from the outside. Access gives officials and trusted institutions a better chance to understand real capability, not just policy documents.
For OpenAI, it is also reputational positioning. The company gets to present itself as cooperative in a region where digital regulation is becoming a core market condition.
The harder line for AI labs
Cyber models expose a tension that general chatbots can hide. A model that is useful for vulnerability discovery is, by design, close to a model that could help exploit vulnerabilities. Safety is no longer just about refusing bad prompts. It is about identity checks, logging, rate limits, compartmentalized access and a clear definition of who counts as a trusted defender.
That creates winners and losers. Large banks, telecoms and cybersecurity vendors may get early access because they can pass verification and offer institutional accountability. Smaller security teams may still be stuck with weaker public tools, even though they face the same attackers.
The result could be a new kind of security gap: not just between companies with and without AI, but between companies allowed to use the most capable defensive models and everyone else.
What to watch next
The important signal is not just that OpenAI has a cyber model. It is that access to cyber-capable AI is becoming part of diplomacy, regulation and enterprise risk management at the same time.
Watch whether Europe turns these voluntary access arrangements into a more formal expectation for frontier labs. Also watch whether OpenAI's trusted-access approach becomes a template for other sensitive AI capabilities, from biosecurity to critical infrastructure simulation.
If it works, controlled access could give defenders a needed speed boost. If it becomes too narrow, the most capable tools may concentrate in the hands of a few large institutions while smaller organizations remain exposed. That is the policy fight now starting to take shape.