OpenAI Unveils GPT-5.4-Cyber to Strengthen AI-Powered Cybersecurity Defenses


OpenAI introduced GPT-5.4-Cyber on Tuesday, a model aimed at defensive cybersecurity applications. It builds on the company's flagship system while focusing on practical security use cases, and access is limited to verified professionals through an expanded Trusted Access for Cyber program that requires identity checks and controlled usage.

The launch follows closely behind Anthropic’s release of Claude Mythos Preview, which became available through a restricted initiative that granted access to more than 40 major organizations. Both companies are accelerating efforts to strengthen defenses as AI capabilities rapidly evolve.

Expanding Capabilities for Security Professionals

GPT-5.4-Cyber is designed with fewer restrictions than standard models, enabling more advanced cybersecurity tasks. For example, it supports binary reverse engineering, letting analysts examine compiled software for vulnerabilities without access to source code, which helps defenders identify malware and weaknesses more efficiently.

“In preparation for increasingly more capable models from OpenAI over the next few months, we are fine-tuning our models specifically to enable defensive cybersecurity use cases,” the company wrote in a blog post announcing the launch. OpenAI added that as “model capabilities increase, defenses need to scale alongside them.”

Competing Approaches as Risks Increase

While both companies aim to enhance cybersecurity, they differ in how they distribute access. Anthropic relies on selective partnerships, deciding access manually, whereas OpenAI uses a broader verification-based system that allows more qualified defenders to participate.


“We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves,” OpenAI said. “Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability.”

The urgency behind these efforts continues to grow. During testing, Anthropic’s model identified thousands of zero-day vulnerabilities, including long-standing flaws in widely used software, and experts increasingly warn that AI could both strengthen defenses and amplify cyber threats. Against that backdrop, the Trusted Access for Cyber program acts as a gatekeeper, shifting the focus from limiting model capabilities to controlling who can use them.

© 2024 The Technology Express. All Rights Reserved.