The cybersecurity landscape is shifting fast, and not just because of new threats. AI is increasingly embedded in defensive workflows, and in response, OpenAI is expanding its Trusted Access for Cyber (TAC) program and introducing GPT-5.4-Cyber, a new model designed specifically for security-focused use cases.

The goal is to provide verified defenders with broader access to AI systems that support vulnerability discovery, code analysis, and system hardening. Access remains controlled due to the dual-use nature of cybersecurity tools, where the same capabilities can be used for both defense and abuse.
GPT-5.4-Cyber is a variant of GPT-5.4 tuned for legitimate cybersecurity workflows. It reduces unnecessary refusal behavior in approved security contexts and is designed for tasks like vulnerability analysis, secure code review, and binary inspection when deeper system understanding is required.
The model includes support for binary reverse engineering in defensive workflows. This allows security professionals to analyze compiled software when source code is not available, which is common in malware analysis and incident response.
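To give a sense of what that kind of workflow looks like at its simplest, the sketch below extracts printable strings from a compiled binary, a routine first step in malware triage when no source code is available, since embedded URLs, file paths, and command strings often surface this way. This is an illustrative example, not part of any OpenAI tooling; the function name and parameters are my own.

```python
import re

def extract_strings(path: str, min_len: int = 4) -> list[str]:
    """Pull runs of printable ASCII out of a compiled binary.

    A basic triage step when source code is unavailable: embedded
    URLs, registry keys, or C2 domains often appear as plain strings.
    """
    with open(path, "rb") as f:
        data = f.read()
    # Match sequences of printable ASCII bytes of at least min_len.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

Real-world binary analysis goes far deeper (disassembly, control-flow recovery, emulation), which is where model assistance becomes relevant; string extraction is only the surface.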
TAC is expanding to thousands of verified individual defenders and hundreds of teams responsible for securing critical software systems. Access is based on identity verification and trust signals rather than open availability.
The TAC framework is built around three principles: democratized access, iterative deployment, and ecosystem resilience. Democratized access means making advanced capabilities available to legitimate defenders, including independent researchers and organizations securing infrastructure. Iterative deployment means improving models through real-world feedback and updated safety systems. Ecosystem resilience means strengthening the broader security community through grants, open-source contributions, and tools that help identify and fix vulnerabilities.
One of the key systems in this ecosystem is Codex Security, which monitors codebases and suggests fixes. OpenAI reports that it has contributed to resolving thousands of high and critical severity vulnerabilities across open-source and production environments.
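For a concrete sense of the class of fix involved, the sketch below shows a textbook high-severity pattern, SQL injection via string interpolation, alongside the standard parameterized-query remedy. This is a generic illustration of the kind of change an automated code-scanning system might propose, not an actual Codex Security output; the function names are hypothetical.

```python
import sqlite3

# Vulnerable pattern: user input interpolated directly into SQL,
# so a payload like "' OR 1=1 --" changes the query's meaning.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    cur = conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    )
    return cur.fetchall()

# Remediated version: a parameterized query, where the driver
# treats the input strictly as data, never as SQL syntax.
def find_user_safe(conn: sqlite3.Connection, username: str):
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    )
    return cur.fetchall()
```

Fixes of this shape, where the vulnerable idiom and its safe replacement are both well understood, are exactly the kind that lend themselves to automated detection and patching at scale.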
AI is now widely used in cybersecurity on both sides. Defensive teams use models for code analysis, vulnerability detection, and incident response acceleration, while attackers also explore AI-assisted methods for phishing, malware development, and social engineering.
Access to GPT-5.4-Cyber is limited to vetted security vendors, researchers, and organizations that complete identity verification. Enterprise users can request access through official channels, while individuals can apply through TAC verification.
Higher access tiers reduce safeguards for approved cybersecurity workflows while maintaining oversight and usage controls. In some cases, restrictions such as zero-data retention policies apply, especially in environments where usage cannot be fully monitored.
(via OpenAI)


