OpenAI's GPT-5.4-Cyber and the Identity Tax on AI Tools
OpenAI wants you to upload your government ID to use their cybersecurity model. That's the future we're building now.
OpenAI launched GPT-5.4-Cyber yesterday. To use it, you upload a photo of your government-issued ID to a third-party service called Persona.
The companies building the most capable AI models now require Know Your Customer processes for certain features. Not because regulators demanded it. Because the models can do things dangerous enough that the labs decided to gate them voluntarily.
The cyber-permissive framing
OpenAI calls GPT-5.4-Cyber "cyber-permissive." That's careful language. They mean it will help you find vulnerabilities, craft exploits, and analyze malware. The things security researchers do daily, now with less friction if you're willing to prove who you are.
They announced Trusted Access for Cyber in February 2026. I missed it then. Most people did. It was a quiet launch: identity verification in exchange for reduced guardrails. Now they're expanding it alongside this new model variant, positioning it as democratization while adding an identity checkpoint.
The contradiction is the point. They want to say "we're opening access" while narrowing who can use the sharp tools. A Google Form still gates the most capable features; a photo ID only gets you the self-service tier.
What Anthropic forced
This is OpenAI's response to Claude Mythos, Anthropic's security-focused model from a week earlier. The AI labs are in an arms race over defensive cybersecurity tools, each trying to claim they're the responsible choice for security teams.
Anthropic didn't require ID verification. OpenAI did. That tells you something about their respective threat models or their appetite for liability.
The timing matters. These announcements landed within seven days of each other. Neither company mentioned the other by name. Both emphasized years of prior security work. Both used the word "democratize."
The UX of verified danger
From a product perspective, this is fascinating. OpenAI integrated Persona, a dedicated identity verification vendor, rather than building their own system. That's smart. It's also a signal that they expect this to scale and persist.
The flow is self-service until it isn't. Upload your ID and you get access to GPT-5.4-Cyber's reduced guardrails, but you fill out a separate application if you want the most capable tools. Two gates: one automated, one human-reviewed.
This is the compromise between "lock everything down" and "ship it open." It won't satisfy anyone. Security researchers will complain about friction. Regulators will question whether a photo ID is sufficient verification. Bad actors will find workarounds or use less restricted alternatives.
But it establishes a precedent. The most capable AI tools now come with an identity tax. Not for all features. Just the ones that could cause real damage in the wrong hands.
We're watching the labs invent their own compliance frameworks before governments finish writing the rules. That's either responsible self-governance or preemptive PR, depending on your view of corporate incentives. Probably both.