How Crypto Scammers Are Using AI Deepfakes to Phish Victims

Crypto scammers and hackers are finding new ways to penetrate safety measures, even as the crypto industry accelerates efforts to add layers of advanced security to its platforms. Hackers and scammers are now tapping into AI deepfakes to breach the security of crypto exchanges and Web3-related firms. Using deepfake AI, bad actors aim to bypass the identity-verification criteria established by these platforms, Binance Chief Security Officer Jimmy Su said in a recent interview.

Deepfakes are artificially generated photos or videos that are designed to convincingly replicate the voice, facial features, and expressions of an individual — living or deceased. Artificial intelligence (AI) and machine learning (ML) tools are used to create deepfakes with realistic graphics.

If scammers succeed in creating deepfakes of crypto investors, it increases their chances of bypassing the security of crypto platforms and stealing user funds. “The hacker will look for a normal picture of the victim online somewhere. Based on that, using deep fake tools, they’re able to produce videos to do the bypass. Some of the verification requires the user, for example, to blink their left eye or look to the left or to the right, look up or look down. The deep fakes are advanced enough today that they can actually execute those commands,” Su told CoinTelegraph.

For some months now, players in the crypto sector have been highlighting the growing threat that AI-generated deepfakes pose to uninformed and unsuspecting victims. In February 2023, a deepfake video of Binance CEO Changpeng Zhao surfaced on social media. In that clip, an artificially generated Zhao can be heard urging people to trade crypto exclusively with him.

A similar deepfake video of Elon Musk sharing misleading crypto investment advice was also spotted on social media earlier this month.

Since these deepfake videos are highly convincing, many people may fail to spot the warning signs that give them away. Su predicts that deepfakes will keep improving in quality, eventually overcoming the imperfections that currently allow them to be detected.

“When we look at those videos, there are certain parts of it we can detect with the human eye. For example, when the user is required to turn their head to the side. AI will overcome [them] over time. So, it’s not something that we can always rely on. Even if we can control our own videos, there are videos out there that are not owned by us. So, one thing, again, is user education,” Su said in the interview.

A recent report from blockchain research firm CertiK estimates that a whopping $103 million (roughly Rs. 840 crore) was stolen in crypto exploits in April this year. Exit scams and flash loan attacks emerged as the largest sources of stolen funds in crypto crimes. In the first four months of 2023, CertiK estimates, $429.7 million (roughly Rs. 3,510 crore) was stolen by crypto scammers and hackers.

