In February, a finance worker at a multinational firm in Hong Kong was tricked by a deepfake video call into transferring over $25 million to scammers. Weeks later, a French woman lost her inheritance to a deepfake dating scam. And the list keeps growing.
Deepfake detection tools have been around for a while, but they haven’t kept up. Generative AI is moving fast, and most traditional models aren’t built to evolve alongside it.
BitMind claims to change that paradigm.
Their system is currently hitting 88% accuracy on real-world content, a big jump from the 69% rate of traditional tools. However, with scams getting more sophisticated by the day, the question remains: Can detection systems keep up?
We spoke to BitMind’s co-founder and CEO, Ken Jon Miyachi, about what’s broken in the current approach to deepfake detection and how he believes decentralized networks might offer a better path forward.
Based in Austin and with a background in AI and blockchain, Miyachi left Amazon to build systems that respond faster to threats like synthetic media. During our conversation, he explained why traditional detectors fall short and what individuals and organizations can do to protect themselves.
Our chat with Ken Jon Miyachi is shared below.
Crowdfund Insider: The UK government recently called deepfakes “the greatest challenge of the online age,” projecting that eight million will be shared in 2025. What makes this technology so threatening?
Ken Jon Miyachi: The data speaks for itself. In 2023, we had around 500,000 deepfakes circulating online. Now, we’re looking at 8 million in 2025. That’s a 16x increase in just two years, and the quality has improved exponentially. What used to require expensive equipment and technical expertise now takes minutes on a laptop.
The biggest challenge is that detection systems aren’t keeping up. Traditional detectors achieve only 69% accuracy, which isn’t good enough when deepfakes target financial transactions or election integrity.
Just last week in Australia, a retired teacher and his wife lost their entire $500,000 life savings after seeing a deepfake of TV personality Eddie McGuire promoting an investment opportunity. They clicked a fake Google ad and lost everything they’d saved for retirement. Real people, real consequences.
Crowdfund Insider: Why are traditional detection approaches falling short?
Ken Jon Miyachi: The fundamental problem is that conventional detectors are static while generative AI is dynamic. Most detection systems are trained on specific datasets, deployed, and then left unchanged until the next scheduled update.
Meanwhile, new AI generation techniques emerge weekly. By the time a traditional detector is updated, it’s already falling behind.
It’s also prohibitively expensive for most organizations to maintain cutting-edge detection capabilities. You need specialized talent, massive computing resources, and continuous retraining, all for a system that’s constantly playing catch-up.
Crowdfund Insider: Can you walk through a typical deepfake scam that targets businesses or individuals?
Ken Jon Miyachi: For businesses, the most common pattern starts with scammers gathering public images and videos from LinkedIn or company websites to build a convincing fake persona. They then target junior employees, arranging video calls while posing as executives and using deepfake technology to request urgent money transfers. It exploits hierarchy and urgency, and it works because no one expects a familiar face to be fake.
The Hong Kong case where $25 million was stolen shows the scale of the threat. Most companies don’t report these incidents due to reputation concerns, but they’re happening with increasing frequency.
Individual scams typically involve fake investment opportunities with deepfaked celebrities, emergency calls from synthetic versions of relatives, or voice clones of loved ones claiming to be in trouble. The technology is essentially democratizing sophisticated fraud.
Crowdfund Insider: How is BitMind’s approach different from traditional detection methods?
Ken Jon Miyachi: We’ve flipped the economics of detection completely. Instead of one team trying to keep up with thousands of deepfake creators, we’ve built BitMind on top of Bittensor, a decentralized network where hundreds of developers compete to create the best detection algorithms, with automatic rewards for superior performance.
As deepfakes get more sophisticated, the value of detecting them increases, attracting more talent to solve the problem. Detection capabilities evolve organically rather than through planned update cycles.
The results speak for themselves. Traditional systems hit around 69% accuracy on real-world deepfakes, while our latest benchmarks reached 88%. And unlike static systems, we’re continuously improving as our network responds to new generation techniques.
Crowdfund Insider: What should companies be doing to protect themselves against deepfakes?
Ken Jon Miyachi: Most organizations have decent email security but haven’t addressed synthetic media at all. Security teams need to evolve their playbook beyond phishing and malware to include identity fraud using deepfakes.
Start with technical safeguards like detection tools, then implement verification protocols that don’t rely solely on visual or audio cues. For financial approvals, use multi-factor authorization with out-of-band verification: confirm the request over a separate channel from the one where it originated.
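To make that concrete, here is a minimal sketch of the out-of-band pattern. The helper names and delivery channel are hypothetical, not a description of any specific vendor’s tooling; the point is that approval depends on a channel the requester does not control.

```python
import secrets

def send_out_of_band_code(contact_on_file: str) -> str:
    """Generate a one-time code and deliver it over a channel separate from
    the one that carried the request (e.g., a call or SMS to a number already
    on file, never to contact details supplied in the request itself)."""
    code = f"{secrets.randbelow(10**6):06d}"
    # deliver_sms(contact_on_file, code)  # hypothetical delivery helper
    return code

def approve_transfer(code_sent: str, code_entered: str) -> bool:
    """Approve only if the callback code matches; the video call or email
    that asked for the transfer is never trusted on its own."""
    return secrets.compare_digest(code_sent, code_entered)
```

A deepfaked face or voice on the requesting channel cannot satisfy this check, which is exactly why the second channel matters.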
Training is critical. Teach your team to recognize potential synthetic media by watching for unusual blinking patterns, audio-visual misalignments, and contextual inconsistencies. And develop an incident response plan before you’re targeted – because it’s a matter of when, not if.
Crowdfund Insider: How can ordinary people protect themselves against deepfakes?
Ken Jon Miyachi: We’ve made all our detection tools freely available to the public, including a Chrome extension that automatically flags potential AI-generated images while browsing, and bots for platforms like X and Discord. These tools provide a first line of defense against common scams.
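For developers who want to wire a detection check into their own scripts, the pattern is typically a simple API round trip. The endpoint URL and response field below are placeholder assumptions for illustration, not BitMind’s documented API.

```python
import requests

# Hypothetical endpoint and response schema, for illustration only.
DETECTION_URL = "https://detector.example.com/v1/image"

def looks_synthetic(image_url: str, threshold: float = 0.5) -> bool:
    """Ask a detection service to score an image, and flag it when the
    returned synthetic-probability score crosses the chosen threshold."""
    resp = requests.post(DETECTION_URL, json={"url": image_url}, timeout=10)
    resp.raise_for_status()
    return resp.json()["synthetic_probability"] >= threshold  # assumed field
```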
Beyond tools, develop healthy skepticism toward emotionally charged content, especially during sensitive periods like elections. Verify information across multiple sources before taking action, particularly when money is involved.
Remember that humans still have advantages in detecting contextual inconsistencies – like a public figure saying something completely out of character. Trust those instincts when something feels off, even if you can’t immediately identify why.
Crowdfund Insider: How does BitMind’s economic model drive innovation?
Ken Jon Miyachi: Traditional detection development is limited by academic publication cycles or corporate budget allocations. Our network creates a direct financial pipeline to reward innovation without bureaucratic hurdles.
When validators identify superior detection methods, they automatically direct rewards to those developers. This creates immediate financial incentives for innovation that scales with the challenge.
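A toy model of that incentive loop (illustrative only; Bittensor’s actual mechanism is more involved) is a payout weighted by each miner’s score on a hidden benchmark of real and synthetic media, so better detectors earn disproportionately more.

```python
import math

def allocate_rewards(scores: dict[str, float], pool: float,
                     temperature: float = 10.0) -> dict[str, float]:
    """Split a reward pool so that miners with higher detection accuracy
    earn a larger share, sharpening the incentive to keep improving."""
    weights = {miner: math.exp(temperature * s) for miner, s in scores.items()}
    total = sum(weights.values())
    return {miner: pool * w / total for miner, w in weights.items()}

# Three hypothetical miners scored by validators on unseen media:
print(allocate_rewards({"miner_a": 0.88, "miner_b": 0.74, "miner_c": 0.69},
                       pool=100.0))
```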
This approach attracts global talent that might otherwise work on more lucrative AI applications, ensures resources flow to the most effective approaches regardless of institutional backing, and creates sustainable funding for ongoing detection research.
In a way, we’re building a whole ecosystem around solving this one problem, and so far it’s leading to results that bigger, centralized teams haven’t managed to achieve.
Crowdfund Insider: How does this change the nature of trust online?
Ken Jon Miyachi: It really challenges what we’ve come to rely on. Historically, visual evidence was the gold standard: if something was on video, it had to be real. That’s no longer true. This impacts everything from financial transactions to legal evidence to everyday communication.
The financial sector is particularly vulnerable because so much of our infrastructure relies on identity verification through video calls, selfies, and document uploads. If we don’t address this systematically, we’ll soon face fully automated fraud pipelines from fake identity creation to synthetic onboarding processes.
We need to build a new standard of proof that combines AI verification tools, cryptographic provenance, and layered trust signals. The paradigm shift is similar to how we handled email spam, moving from a world where each user had to manually identify spam to robust filtering systems that work invisibly in the background.
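Cryptographic provenance, the middle piece, already has a well-understood core: sign a hash of the media at capture or publication, then let anyone verify it later. Here is a minimal sketch using Ed25519 signatures via the Python `cryptography` library, in the spirit of standards like C2PA rather than any specific product.

```python
from hashlib import sha256
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# At capture/publication time, the device or publisher signs the media hash.
signing_key = ed25519.Ed25519PrivateKey.generate()
media_bytes = b"...raw image or video bytes..."
signature = signing_key.sign(sha256(media_bytes).digest())

# Later, anyone holding the matching public key can confirm the bytes
# are exactly what was originally signed.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, sha256(media_bytes).digest())
    print("Provenance intact: content matches what was originally signed.")
except InvalidSignature:
    print("Verification failed: content was altered or the signature is wrong.")
```

Any edit to the media bytes changes the hash and breaks verification, which is what makes signed provenance a useful complement to after-the-fact detection.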
Crowdfund Insider: What’s your prediction for deepfakes and detection by 2030?
Ken Jon Miyachi: By 2030, we’ll transition from after-the-fact detection to comprehensive content provenance systems. We’re probably headed toward a world where digital content falls into two buckets: content that’s been verified, and content you’ll need to treat with caution. Major platforms will prioritize the former and create friction for the latter.
This won’t be a single, centralized system. We’ll see multiple interoperable verification mechanisms built on open standards, with decentralized networks providing resilience against manipulation.
If we act now, we can build these safeguards into the foundations of our digital infrastructure. It’s not just about catching fakes, it’s about restoring trust in digital interactions. That’s a future worth building.