San Francisco, 30 October 2019 - Alethea AI, an educational technology company currently operating in stealth mode, has released a light-hearted “deepfake” video of Changpeng Zhao, CEO of Binance, as a proof of concept for its proprietary deepfake creation and detection technology.
Alethea AI is considering incorporating blockchain technology into its offering because of its timestamping and verification features. According to its founder, scalability will be an important consideration for Alethea AI, and the company is eager to explore ecosystems that can support its vision. The company is currently in the early stages of development and is evaluating its blockchain options, which include analyzing the EOS tech stack.
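The appeal of blockchain timestamping for synthetic media can be illustrated with a minimal sketch. This is not Alethea AI's implementation; the function names and the in-memory "ledger" are invented for illustration, standing in for a real on-chain record. The idea is simply that a content fingerprint registered at creation time lets anyone later verify whether a piece of media is the original or has been altered:

```python
import hashlib
import time

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies the media bytes."""
    return hashlib.sha256(data).hexdigest()

def timestamp_record(data: bytes, ledger: dict) -> str:
    """Record the content fingerprint with a creation time.
    A real system would anchor this entry on a blockchain."""
    digest = fingerprint(data)
    ledger.setdefault(digest, time.time())
    return digest

def verify(data: bytes, ledger: dict) -> bool:
    """Content verifies only if its exact fingerprint was recorded earlier."""
    return fingerprint(data) in ledger

ledger = {}
original = b"frame bytes of an authentic video"
digest = timestamp_record(original, ledger)
print(verify(original, ledger))                # True: authentic content checks out
print(verify(b"altered frame bytes", ledger))  # False: tampered content does not
```

Because even a one-byte change produces a completely different digest, a timestamped fingerprint distinguishes the registered original from any later modification, though it cannot by itself prove who created the content.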
The attention-grabbing nature of deepfakes helped the video go viral in no time, with retweets and likes from a number of Crypto Twitter influencers. Within a few hours of its publication across various social media platforms, the video reached a combined total of 15k views and garnered more than 700 likes on Twitter. In an interesting turn of events, Changpeng Zhao himself shared the video, stating: “this technology is scary… Video KYC and facial recognition will be out of the window soon.”
Alethea AI aims to address the threat of malicious synthetic media by “building a digital ecosystem of AI-powered decentralized applications (DApps) that will protect its users from malicious synthetic media [such as deepfakes] and allow them to explore this new medium in a safe and trusted environment.” Alethea’s educational campaigns will include entertaining content to educate and inoculate consumers against the harms of deepfakes as we transition to an age where “seeing may no longer be believing”.
The recent spread of open-source AI techniques such as face-swapping and voice generation has caused deep concern among regulators, who are urging new legislation to limit the proliferation of deepfakes.
To date, the number of deepfakes continues to grow exponentially, and a number of start-ups have sprung up to combat the rise of malicious content. Big technology companies like Facebook and Microsoft have decided to crowdsource solutions to this vexing problem, creating a contest to advance the detection of deepfakes.
Alethea’s ecosystem aims to capture a deepfake from its inception and consists of the following:
1. Synthetic content creation tools
2. Rapid detection algorithms, trained on the output of (1)
3. Tracking and timestamping techniques
4. Educational games
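The pairing of the first two items, using your own generation tools to supply labeled training data for a detector, can be sketched with a toy example. Nothing here reflects Alethea's actual algorithms: the one-dimensional "artifact score" feature and the Gaussian distributions are invented stand-ins for real media features, and the "training" is just a threshold placed between the class means:

```python
import random
import statistics

random.seed(0)  # deterministic toy data

# Toy stand-ins: "real" media yields an artifact score near 0.0, while
# samples produced by our own generation tools cluster near 1.0.
real_samples = [random.gauss(0.0, 0.3) for _ in range(200)]
fake_samples = [random.gauss(1.0, 0.3) for _ in range(200)]

# "Training": place the decision threshold midway between the class means.
threshold = (statistics.mean(real_samples) + statistics.mean(fake_samples)) / 2

def detect(score: float) -> bool:
    """Flag a sample as synthetic when its artifact score exceeds the threshold."""
    return score > threshold

# Evaluate on fresh samples drawn from each source.
correct = sum(not detect(random.gauss(0.0, 0.3)) for _ in range(100))
correct += sum(detect(random.gauss(1.0, 0.3)) for _ in range(100))
accuracy = correct / 200
print(f"toy detector accuracy: {accuracy:.2f}")
```

The point of the sketch is the data flow, not the classifier: because the ecosystem generates synthetic content itself, it always has fresh, correctly labeled examples to retrain its detectors against, which is what makes the detection side of the arms race tractable.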
Why Create Synthetic Content to Detect It?
Deepfake research experts such as Dr. Hany Farid have commented: “In January 2019, deepfakes were . . . buggy and flickery. Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg… It’s an arms race and, at the end of the day, we know we’re going to lose.”
A representative of Alethea AI remarked:
“Dr. Farid is correct to be concerned; however, we believe we can win the battle against deepfakes if we quickly educate users about the harms and inoculate them. We cannot wait for regulators or researchers to play catch-up with the rapid speed of innovation. This is where an entrepreneurial team like Alethea can move fast and restore trust in our social platforms by educating users about the techniques and tools used.” Getting used to the idea that something can now be “deepfaked”, learning to discern AI-generated media, and helping feed research on the social effects of deepfakes are all necessary, and now possible, with Alethea’s ecosystem.
ABOUT Alethea AI
Alethea AI is currently operating in stealth mode.
The Press Team can be reached at
Disclaimer: EOSwriter does not endorse any content or product on this page. While we aim to provide all the important information we can obtain, readers should do their own research before taking any action related to the company and bear full responsibility for their decisions. This article should not be considered investment advice.