How Deepfakes Are Getting Better — And What That Means for Truth
Deepfakes are now almost indistinguishable from real footage. Here’s an honest look at the technology, the threats, and what we can actually do about it.
Two years ago, most deepfakes had tells. The skin texture was slightly wrong. The lighting didn’t quite match. Eyes blinked at odd intervals. A careful observer could usually spot them.
That’s no longer reliably true.
State-of-the-art deepfake generation available today — not even cutting-edge research, just commercially available tools — produces synthetic video and audio that is essentially indistinguishable from real footage without forensic analysis. I’ve tested this myself. The results are genuinely unsettling.
What the Technology Can Do Now
Let’s be specific about capabilities, because the term “deepfake” covers a spectrum.
Face swap: Placing one person’s face onto another’s body in video. This has been possible for a while, but quality has reached the point where the artifacts that gave away early examples are largely gone.
Full video synthesis: Generating entirely synthetic video of a real person from scratch — saying something they never said, in a setting they were never in. This requires more source material but is increasingly accessible.
Audio cloning: Replicating someone’s voice from a sample as short as three seconds with modern tools. The resulting audio is convincing enough to fool people who know the subject’s voice.
Real-time deepfake: The most concerning development. Live video filters that can change your appearance in real time during video calls are now consumer products. More sophisticated versions can put any face over yours during a live call.
The barrier to access has collapsed. Sophisticated deepfakes no longer require studio budgets or expert teams. Consumer-grade tools produce results that would have been state-of-the-art three years ago.
The Real Harms Being Done
This isn’t just theoretical. Specific categories of harm are already well-documented:
Non-consensual intimate imagery (NCII). The vast majority of deepfake content currently online is non-consensual pornographic material featuring real people. This is being done overwhelmingly to women. The psychological and social harm to victims is severe, documented, and ongoing. This is not a hypothetical future concern — it’s happening at scale now.
Financial fraud. Multiple documented cases exist of criminals using voice deepfakes to impersonate executives and authorize fraudulent transfers. A finance employee in Hong Kong transferred $25 million to fraudsters after being convinced by a deepfake video call impersonating company leadership. This was not an edge case — it’s a template that’s being widely replicated.
Political manipulation. Synthetic audio and video of political figures has been used in elections in multiple countries. Fabricated audio of candidates saying incendiary things. Manipulated video clips. The challenge for fact-checkers is that by the time a debunking reaches most audiences, the original fake has already spread.
Erosion of epistemic trust. Perhaps the most insidious effect is what researchers call the “liar’s dividend” — the increasingly plausible claim that any real footage is actually a deepfake. Real evidence can now be dismissed as synthetic. This is already being used by public figures as a defense mechanism against legitimate accountability.
Detection: Where Things Stand
Detection technology exists and is improving, but it’s in an arms race it periodically loses.
Forensic detection tools — used by researchers, journalists, and platforms — can identify many deepfakes through analysis of pixel-level artifacts, physiological inconsistencies (like unnatural blood flow patterns visible under the skin), and metadata analysis. But these tools require time, expertise, and access that individual consumers and most journalists don’t have.
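To make “pixel-level artifacts” concrete, here is a minimal sketch of one classic screening signal: the upsampling layers in many generative models leave excess energy in a frame’s highest spatial frequencies. This is illustrative only, not a production detector; the filename, frequency cutoff, and threshold are all placeholders, and real forensic tools combine many such signals with trained classifiers.

```python
# Minimal sketch: screen a frame for the high-frequency energy that
# generator upsampling layers often leave behind. Illustrative only.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Fraction of spectral energy in the outermost frequency band."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    outer = radius > 0.75 * radius.max()  # outermost band of frequencies

    return spectrum[outer].sum() / spectrum.sum()

if __name__ == "__main__":
    # "frame.png" is a placeholder path; 0.05 is not a validated threshold.
    ratio = high_freq_energy_ratio("frame.png")
    print(f"high-frequency energy: {ratio:.4f}",
          "(worth a closer look)" if ratio > 0.05 else "")
```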
Platform-level detection by major social media companies exists but is incomplete and inconsistent. The volume of synthetic content now in circulation makes comprehensive detection essentially impossible with current tools.
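One thing platforms can do cheaply at that volume is re-flag known fakes: a perceptual hash survives re-encoding and resizing well enough to match re-uploads against an index of previously identified clips. The sketch below shows the idea; the filenames and distance threshold are placeholders, and a novel fake would of course pass this check untouched.

```python
# Sketch of re-flagging *known* fakes at scale with a perceptual hash.
# Matching against an index of flagged clips is fast even at huge volume.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """64-bit perceptual hash: 1 where a pixel is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Placeholder filenames; a frame within a few bits of a known-fake hash
# is likely a re-upload of previously flagged content.
known_fake = average_hash("flagged_frame.png")
candidate = average_hash("uploaded_frame.png")
print("probable re-upload" if hamming(known_fake, candidate) <= 5
      else "no match in known-fake index")
```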
Content credentials: The most promising systemic approach. Standards like C2PA (from the Coalition for Content Provenance and Authenticity, alongside the Adobe-led Content Authenticity Initiative) attach cryptographic provenance to media — essentially a certificate of origin that records what device captured it and what edits were made. When this becomes standard in cameras, editing software, and publishing platforms, verifying authentic media becomes much easier. We’re several years away from broad adoption.
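Stripped to its cryptographic core, the scheme is simple: hash the media bytes at capture time, sign the hash with a key embedded in the device, and let anyone verify the signature later. The sketch below (using the Python cryptography package) shows that core idea only; real C2PA manifests are far richer, recording the full edit chain and binding keys to certificates.

```python
# The core idea behind content credentials, reduced to its essentials.
# This is NOT the real C2PA manifest format, just the cryptographic core.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# Capture device: signs the file's hash with its embedded private key.
device_key = Ed25519PrivateKey.generate()
media_bytes = b"...raw video bytes..."  # stand-in for a real file
signature = device_key.sign(hashlib.sha256(media_bytes).digest())

# Viewer: recomputes the hash and checks it against the device's public key.
public_key = device_key.public_key()
try:
    public_key.verify(signature, hashlib.sha256(media_bytes).digest())
    print("provenance intact: bytes match what the device signed")
except InvalidSignature:
    print("provenance broken: file was altered after signing")
```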
What Individuals Can Do
Realistically, your defenses are limited but not nothing:
Verify before sharing. Before spreading video or audio content of a public figure saying something surprising or outrageous, spend 30 seconds checking whether multiple credible outlets have verified it. Deepfake content tends to spread through low-trust platforms first.
Establish verification channels. If your organization relies on remote authorization for financial transactions, add out-of-band verification steps — a code word agreed in advance, a callback to a known number. Don’t treat video calls as sufficient verification for high-value decisions.
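As one hypothetical shape for such a check, the sketch below implements a TOTP-style one-time code over a shared secret agreed in advance: the approver reads the code back over a callback to a known number before anything moves. The secret, window, and digit count here are illustrative, and a real deployment should use an audited authenticator library.

```python
# Minimal sketch of a time-based one-time code for out-of-band verification
# of high-value requests. The 30-second window and 6 digits are conventional
# TOTP parameters; the shared secret below is a placeholder.
import hashlib
import hmac
import struct
import time

def one_time_code(secret: bytes, window: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)

# The requester's code must match on the callback before any transfer
# is authorized.
shared_secret = b"agreed-in-person-not-over-email"  # placeholder secret
print("speak this code on the callback:", one_time_code(shared_secret))
```

The point isn’t this particular scheme; it’s that the authorization signal travels over a channel the attacker doesn’t control.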
Support detection research. Organizations like the MIT Media Lab, the Partnership on AI, and the AI Foundation are working on detection and provenance tools. These efforts matter.
The Uncomfortable Conclusion
The deepfake era is already here. It arrived faster than our social, legal, and technical defenses were ready for.
The response has to be multi-layered: better detection tools, legal frameworks with real penalties for harmful synthetic media, platform responsibility, and a media-literate public with calibrated skepticism about surprising video and audio content.
None of these are quick fixes. But they’re the only realistic path through.