Deepfake creation tools and services can be found on darknet marketplaces. These services offer generative AI video creation for a variety of purposes, including fraud, blackmail, and theft of confidential data. Prices for creating or purchasing deepfakes vary with the complexity of the project and the quality of the final product. According to estimates by Kaspersky experts, the price of one minute of deepfake video ranges from $300 to $20,000. This was announced at Kaspersky Cyber Security Weekend – META 2023.
Kaspersky analyzed various darknet marketplaces and underground forums offering the creation of deepfake video and audio for a range of malicious purposes. In some cases, individuals may request deepfakes of specific targets, such as celebrities or political figures.
Cybercriminals use generative AI video for illegal activities in several ways. They can use deepfakes to create fake videos or images to defraud individuals or organizations. For example, a fake video of a CEO requesting a wire transfer or authorizing a payment can be used to steal corporate funds. Deepfakes can also be used to produce compromising videos or images of individuals, which are then used to extort money or information from them. Finally, cybercriminals can use deepfakes to spread false information or manipulate public opinion. For example, a fake video of a politician making controversial statements can be used to influence the outcome of an election.
A deepfake of Elon Musk promoting a new cryptocurrency scam
Deepfake technology can also be employed to bypass verification in payment services by creating realistic fake videos or audio recordings of a legitimate account owner. These can trick payment service providers into believing they are dealing with the actual account owner, giving the fraudster access to the account and its associated funds.
“Increasingly, deepfakes are used in attempts at blackmail and fraud. For example, the CEO of a British energy firm was tricked out of $243,000 by a voice deepfake of the head of his parent company requesting an emergency transfer of funds. As a result, the money was wired to the fraudster’s bank account. Suspicions were only raised when the criminal requested another transfer, but by then it was too late to recover the funds that had already been sent. A similar case was reported in the UAE, where $400,000 was stolen in a scam that also involved a voice deepfake,” comments Vladislav Tushkanov, Lead Data Scientist at Kaspersky. “It’s important to remember that deepfakes are a threat not only to businesses, but also to individual users: they can spread misinformation, be used for scams, or impersonate someone without their consent. Increasing your digital literacy is key to countering these threats.”
Continuous monitoring of darknet resources provides valuable insights into the deepfake industry, allowing researchers to track the latest trends and activities of threat actors in this space. By monitoring the darknet, researchers can uncover new tools, services, and marketplaces used to create and distribute deepfakes. This monitoring is a critical component of deepfake research and helps improve our understanding of the evolving threat landscape. Kaspersky’s Digital Footprint Intelligence service includes this type of monitoring to help customers stay ahead of deepfake-related threats.
To stay protected from deepfake-related threats, Kaspersky recommends: