OpenAI introduced Advanced Account Security on April 30, 2026, as an optional high-security setting for ChatGPT accounts.
It is mainly designed for two groups of users. The first is high-risk users: journalists, elected officials, political dissidents, researchers, and others who are more likely to face targeted attacks. The second is security-conscious users who simply want stronger protection for their ChatGPT and Codex accounts.
Once enabled, this feature protects not only ChatGPT, but also Codex when accessed through the same login account.
Why ChatGPT accounts need a higher level of security
Many people now use ChatGPT for increasingly private and high-stakes work.
A ChatGPT account may contain:
- Personal questions and long-running conversations
- Work documents and project context
- Connected tools and workflows
- Code and development tasks in Codex
- Enterprise, research, or security-related materials
If an account is taken over, the loss is not limited to leaked chat history. An attacker may also access connected tools, view sensitive context, or interfere with work in progress.
So what OpenAI is introducing is not just another login option. It is a stricter set of account protection measures.
What Advanced Account Security includes
OpenAI places this capability in the Security settings of ChatGPT accounts on the web, where users can opt in.
After it is enabled, it strengthens account security in several ways.
First, sign-in becomes stronger.
Advanced Account Security requires passkeys or physical security keys and disables password-based login. The goal is to make phishing-resistant sign-in the default for people who need it most.
Second, account recovery becomes stricter.
Traditional account recovery often relies on email or SMS. If an attacker controls a user’s email account or phone number, they may use that access to reset the account. To reduce this risk, Advanced Account Security disables email and SMS recovery and uses stronger recovery methods instead, such as backup passkeys, security keys, and recovery keys.
There is an important tradeoff here: after enabling the feature, account recovery depends much more on the user keeping those recovery methods safe. OpenAI explicitly states that if users enrolled in this feature lose their recovery methods, OpenAI Support will not be able to help recover the account.
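That tradeoff follows from how recovery keys generally work: the service keeps only a hash, so the plaintext key exists solely with the user. A minimal sketch of that pattern, assuming a hypothetical hash-and-store design rather than OpenAI's actual implementation:

```python
import hashlib
import secrets

# Hypothetical recovery-key scheme (not OpenAI's actual implementation).
# The server stores only a hash of the key, so if the user loses the
# plaintext, nobody -- including support -- can reconstruct it.

def generate_recovery_key(groups: int = 4, group_len: int = 5) -> str:
    alphabet = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"  # avoids look-alike chars
    return "-".join(
        "".join(secrets.choice(alphabet) for _ in range(group_len))
        for _ in range(groups)
    )

def hash_key(key: str) -> str:
    return hashlib.sha256(key.encode()).hexdigest()

def verify(stored_hash: str, presented: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return secrets.compare_digest(stored_hash, hash_key(presented))

key = generate_recovery_key()   # shown to the user exactly once
stored = hash_key(key)          # the only thing the server keeps

assert verify(stored, key)
assert not verify(stored, generate_recovery_key())  # wrong key fails
```

This is why "keep your recovery methods safe" is not just advice but a hard requirement: in a design like this there is no server-side copy to fall back on.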
Third, sessions become shorter and easier to manage.
OpenAI shortens sign-in sessions to reduce the exposure window if a device or active session is compromised. Users also receive login alerts and can review and manage active sessions across their devices.
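The effect of shorter sessions combined with session review can be sketched as follows; the lifetime and device names are illustrative assumptions, not OpenAI's actual values:

```python
# Sketch of short-lived sessions with a reviewable active-session list
# (max age and device names are illustrative, not OpenAI's real values).
# A shorter max age bounds how long a stolen token stays usable, and
# revocation lets the user cut off a suspicious device immediately.

class SessionStore:
    def __init__(self, max_age_seconds: float):
        self.max_age = max_age_seconds
        self.sessions = {}  # token -> (device_name, issued_at)

    def create(self, token: str, device: str, now: float) -> None:
        self.sessions[token] = (device, now)

    def active_devices(self, now: float) -> list[str]:
        # Prune expired sessions; what remains is what the user would
        # see on a "review active sessions" screen.
        self.sessions = {
            t: (d, ts) for t, (d, ts) in self.sessions.items()
            if now - ts < self.max_age
        }
        return sorted(d for d, _ in self.sessions.values())

    def revoke(self, token: str) -> None:
        self.sessions.pop(token, None)

store = SessionStore(max_age_seconds=8 * 3600)  # hours, not weeks
store.create("tok-a", "laptop", now=0)
store.create("tok-b", "phone", now=4 * 3600)

assert store.active_devices(now=5 * 3600) == ["laptop", "phone"]
store.revoke("tok-a")                             # user cuts off a device
assert store.active_devices(now=5 * 3600) == ["phone"]
assert store.active_devices(now=13 * 3600) == []  # remaining session aged out
```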
Fourth, training exclusion becomes automatic.
For people handling sensitive information, preventing conversations from being used for model training is an important privacy setting. When Advanced Account Security is enabled, that preference takes effect automatically: conversations from those accounts will not be used to train OpenAI models.
Working with Yubico to promote physical security keys
OpenAI also announced a partnership with Yubico to offer users a customized security key bundle.
It includes:
- YubiKey C Nano: designed to stay plugged into a laptop, reducing daily sign-in friction
- YubiKey C NFC: designed as a backup and for use across laptops and mobile devices
OpenAI says users can also use other FIDO-compliant physical security keys or software passkeys.
This means Advanced Account Security is not tied to one specific piece of hardware. It is designed around phishing-resistant authentication methods.
Trusted Access for Cyber users will be required to enable it
OpenAI also says that individual members of Trusted Access for Cyber who access its more capable and permissive cybersecurity models will be required to enable Advanced Account Security starting June 1, 2026.
Organizations can meet the requirement in another way: by attesting that their single sign-on workflow already uses phishing-resistant authentication.
This arrangement makes sense. The more powerful the model capability, the stronger the account protection needs to be. This is especially true for cybersecurity research, vulnerability analysis, and red-teaming scenarios, where the account itself becomes a high-value target.
Who should consider enabling it
This feature is not necessarily for everyone.
If you only use ChatGPT for ordinary conversations and do not want to deal with the complexity of stricter recovery, it may be reasonable to wait.
But the following users should seriously consider it:
- People who often handle sensitive work materials in ChatGPT
- People who use Codex with private code repositories
- Journalists, public affairs professionals, researchers, executives, and other high-risk users
- Cybersecurity professionals
- People already comfortable with passkeys or physical security keys
- People especially concerned about phishing, SIM swapping, or email account takeover
Before enabling it, it is best to prepare backup passkeys, security keys, and recovery keys, and make sure they are stored properly. Otherwise, security improves, but account recovery becomes much harder.
What this means for AI products
Advanced Account Security is not a model capability update, but it reflects the fact that AI products are entering higher-risk usage.
As ChatGPT and Codex begin to carry workflows, code, documents, enterprise connectors, and long-term context, the account is no longer just a way to “log in to a chat tool.” It becomes the key to an AI work environment.
The more these products resemble personal workspaces, the more important account security, recovery mechanisms, session management, and training-data controls become.
OpenAI’s decision to put passkeys, physical security keys, recovery restrictions, session management, and training exclusion into one setting is the right direction. It gives high-risk users a clear place to raise account protection to a level more suitable for sensitive work.
Conclusion
Advanced Account Security can be understood as a high-security mode for ChatGPT and Codex.
It reduces the risk of account takeover through stronger sign-in, stricter recovery, shorter sessions, login alerts, and automatic training exclusion. The tradeoff is that users must manage their own recovery methods more carefully, because traditional email and SMS recovery are no longer available after enabling it, and OpenAI Support cannot serve as a fallback.
If you already use ChatGPT or Codex for important work, especially involving private code, sensitive documents, or a high-risk identity, this feature is worth paying attention to.