
What AI-Generated IDs Can and Cannot Do
With the advent of generative AI, and especially ChatGPT's latest image-generation features, a new threat has emerged: the ability to create realistic-looking government IDs such as Aadhaar and PAN cards. Social media platforms have been flooded with users sharing AI-created fake IDs, raising concerns about privacy, identity fraud, and security. But while the threat appears ominous at first glance, a closer look suggests it may not be as perilous as it seems.
What Triggered the Alarm?
The issue started when users on X (formerly Twitter) posted screenshots of the prompts they entered into ChatGPT alongside the AI-generated images of Aadhaar and PAN cards. The images, although convincing at first glance, lacked many of the security features built into authentic government-issued IDs.
One user claimed that ChatGPT could be prompted to create an Aadhaar card for Aryabhatta—an ancient Indian mathematician—raising concerns over how AI might be used to spoof identities or fabricate credentials. Others questioned whether OpenAI’s training models included images of actual Aadhaar cards, and if that constituted a breach of data privacy.
Does ChatGPT Really Create Government IDs?
When directly asked to create a replica Aadhaar or PAN card, ChatGPT typically declines. Its content moderation refuses on ethical and legal grounds, stressing that generating government-issued identification, whether replica or forged, violates OpenAI's usage policies.
But when given indirect or creatively worded prompts, e.g., "put this photo in a PAN card template," the model has been seen offering to overlay the image on a template or produce a card-like image. It can sometimes generate ID-card-style outputs, particularly when framed as presentations, mock-ups, or parody content. These images may look realistic to a non-expert, but on examination they fail to reproduce key security features such as:
- QR codes
- Microtext
- Holograms
- Embossed logos
- Guilloche patterns
- Embedded chips (for PAN 2.0 cards)
Security Features Real IDs Have—and AI Lacks
Real Aadhaar and PAN cards have evolved through several revisions to resist tampering. PAN cards now carry a chip, photo, QR code, and hologram, while Aadhaar cards feature a scannable QR code, microtext, and intricate designs that cannot easily be reproduced by general-purpose AI tools.
ChatGPT, like any AI model that is neither built to replicate government design specifications nor given access to the underlying databases, cannot generate these complex, embedded features. The visual likeness may be adequate for a social media post, but it fails when put through even simple verification tools used in Know Your Customer (KYC) procedures, whether online or offline.
So, Is There a Real Risk?
There is no question that generative AI introduces new risks. Because it can produce realistic images, however incomplete or imperfect, scammers can exploit it, particularly to target vulnerable users who are unfamiliar with digital authentication. A forged ID image can be used in social engineering attacks, online scams, or phishing campaigns.
But experts contend that current digital ecosystems, particularly India's Aadhaar authentication infrastructure, are robust enough to screen out such fakes. Aadhaar-based KYC generally involves biometric authentication and back-end verification through the Unique Identification Authority of India (UIDAI), making it highly unlikely that a fake image alone could pass official checks.
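The reason back-end verification defeats even a visually convincing fake comes down to cryptographic signing: the data in a genuine card's QR code is signed with a key only the issuing authority holds. The sketch below illustrates the principle only. It uses an HMAC with a made-up demo key as a simplified stand-in; Aadhaar's actual Secure QR uses UIDAI's RSA signatures, and all names and payloads here are hypothetical.

```python
import hmac
import hashlib

# Hypothetical issuer key for illustration only. Real Aadhaar Secure QR data
# is signed with UIDAI's RSA private key; HMAC is a simplified stand-in.
ISSUER_KEY = b"uidai-demo-secret"

def sign_payload(payload: bytes, key: bytes = ISSUER_KEY) -> bytes:
    """What the issuing authority embeds in the QR code alongside the data."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_card(payload: bytes, signature: bytes, key: bytes = ISSUER_KEY) -> bool:
    """What a KYC tool checks after decoding the card's QR code."""
    return hmac.compare_digest(sign_payload(payload, key), signature)

# A genuine card: payload signed by the issuer.
genuine = b"name=Asha Kumar;dob=1990-01-01;uid=XXXX-1234"
sig = sign_payload(genuine)
assert verify_card(genuine, sig)

# An AI-generated fake can copy the visual layout, but without the issuer's
# key, any signature it fabricates will not verify.
forged_sig = hashlib.sha256(genuine).digest()  # forger lacks the key
assert not verify_card(genuine, forged_sig)
```

This is why an image generator that merely paints a plausible-looking QR code produces a card that fails the very first automated check, regardless of how convincing the print layout appears.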
The Broader Threat: Deepfakes and Synthetic Identities
While fake PAN and Aadhaar images are one concern, a more pressing issue is the ability of generative AI to create entirely fictional identities. By combining realistic photos, names, addresses, and even fake documents, malicious actors can craft digital personas that appear authentic. These “synthetic identities” are harder to detect and pose a more systemic threat to cybersecurity, digital banking, and even electoral integrity.
In addition, many open-source AI models carry far fewer guardrails than ChatGPT, meaning that even if OpenAI forbids misuse, other tools can be weaponized by bad actors.
Can This Be Regulated?
The expanding capability of AI tools demands clear legal and ethical guidelines. Although OpenAI has safeguards in place, coordinated international effort is needed to set rules for how generative models may be used to produce realistic-looking official documents. This may include:
- Watermarking AI images
- Requiring platforms to integrate detection APIs
- Imposing legal penalties for misuse
- Public education on identifying fake IDs
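Of the measures above, watermarking is the most concrete technically. A minimal sketch, assuming raw pixel bytes and a simple least-significant-bit (LSB) scheme, shows the core idea of embedding an invisible provenance tag; production approaches (such as C2PA provenance metadata or model-level watermarks) are far more robust, and everything here is illustrative.

```python
# Minimal LSB watermarking sketch: hide a provenance tag in the lowest bit
# of each pixel byte. Illustrative only; real AI-image watermarking schemes
# are designed to survive compression, cropping, and re-encoding.

def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the low bits of the pixel bytes (1 bit per byte)."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, mark_len: int) -> bytes:
    """Recover `mark_len` bytes from the low bits of the pixel bytes."""
    out = bytearray()
    for j in range(mark_len):
        byte = 0
        for i in range(8):
            byte |= (pixels[j * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = bytes(range(256)) * 2          # stand-in for raw image pixel data
tag = b"AI-GENERATED"
stamped = embed_watermark(cover, tag)
assert extract_watermark(stamped, len(tag)) == tag
```

Because the tag only perturbs the lowest bit of each byte, the stamped image is visually indistinguishable from the original, yet any detection API that knows the scheme can recover the provenance label.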
Conclusion: Stay Alert, But Not Alarmed
While it is true that ChatGPT and other AI tools can generate images that resemble ID cards, they cannot yet replicate the deep security framework embedded in genuine documents. For this reason, their usefulness in sophisticated fraud is limited, at least for now. The greater concern is misuse for social manipulation, or AI growing powerful enough to eventually defeat today's security measures.
For the time being, the episode is a reminder of the double-edged character of AI. For policymakers and users alike, vigilance, awareness, and proactive regulation will be our most effective tools for staying one step ahead of its abuse.