OpenAI has introduced Advanced Account Security for ChatGPT, adding passkey and USB-key login options to protect users from hacking and phishing [1, 2].

These updates address a growing risk in AI adoption, as more users store sensitive personal and professional data within LLM conversations. By moving beyond traditional passwords, the company aims to reduce the risk of unauthorized account access through social engineering or credential theft.

The new security framework allows users to authenticate their identity using physical hardware keys or platform-native passkeys [2]. This shift toward phishing-resistant multi-factor authentication is designed to ensure that only the legitimate owner of the account can gain access, even if a password has been compromised [1].

OpenAI has made these features available to ChatGPT users on its platform worldwide [1, 2]. The rollout follows a growing trend among tech providers to implement FIDO-based standards, which replace shared secrets with public-key cryptography.
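The core idea behind those FIDO-based standards can be sketched as a challenge-response exchange: the server stores only the user's public key, while the security key holds the private key and signs a fresh challenge at each login. The sketch below is purely illustrative, using a textbook small-number RSA keypair; it is not the actual WebAuthn/CTAP protocol, and real hardware keys use properly sized keys generated on-device.

```python
# Illustrative challenge-response sketch of public-key login (toy numbers,
# NOT real cryptography). The server never holds a shared secret, so a
# phished or leaked database entry cannot be replayed as a credential.
import hashlib
import secrets

# Textbook toy RSA keypair (p=61, q=53) -- far too small for real use.
N = 3233   # modulus, public
E = 17     # public exponent, stored server-side at registration
D = 2753   # private exponent, never leaves the security key

def sign(challenge: bytes) -> int:
    """Security key signs a hash of the server's challenge."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(digest, D, N)

def verify(challenge: bytes, signature: int) -> bool:
    """Server checks the signature using only the public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(signature, E, N) == digest

# Each login uses a fresh random challenge, so captured responses
# cannot be replayed later -- unlike a stolen password or SMS code.
challenge = secrets.token_bytes(32)
signature = sign(challenge)
print(verify(challenge, signature))  # True
```

Because the signature is bound to a one-time challenge, a phishing site that captures the exchange learns nothing reusable, which is what makes this class of authentication phishing-resistant.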

Users can enable these protections from their account settings. USB security keys add a physical factor that is significantly harder for remote attackers to bypass than SMS-based verification codes [2].

While the company did not provide specific statistics on current account breach rates, the implementation of these tools suggests a proactive approach to securing the ecosystem as ChatGPT integrates further into corporate and personal workflows [1].


The move to passkeys and hardware tokens signals OpenAI's transition from a consumer-centric chatbot to a professional-grade tool. As AI agents gain more permissions to interact with emails and files, the 'password' becomes a single point of failure. By adopting phishing-resistant hardware standards, OpenAI is aligning its security posture with enterprise-level requirements to mitigate the risk of large-scale data leaks.