FBI warns of ‘deepfakes’ in sextortion schemes

The FBI is warning the public about the use of “deepfakes” to harass or blackmail targets with fake sexually explicit photos or videos of them. 

The agency said in a statement on Monday that technological advancements have continuously improved the quality, customizability and accessibility of content generated by artificial intelligence (AI), and that victims have reported their photos or videos being altered into sexually explicit content.

The FBI said malicious actors typically pull content from a victim’s social media accounts, the open internet or the victims themselves, then alter it into explicit material that appears to depict them. That content is then circulated on social media, online forums or pornographic websites.

According to the agency, those who create the content and target victims are usually seeking additional explicit material, financial gain or the ability to harass others.

As of April, the FBI said it had observed an increase in the number of victims reporting sextortion, in which a victim is coerced into providing sexually explicit photos or videos of themselves and then threatened with their public release. The perpetrators have generally demanded payment in exchange for not posting the material, or demanded that the victim send real sexually explicit photos or videos of themselves.


The release states that the public should exercise caution about what they post online or send via direct message on social media, dating apps or other online sites.

“Although seemingly innocuous when posted or shared, the images and videos can provide malicious actors an abundant supply of content to exploit for criminal activity. Advancements in content creation technology and accessible personal images online present new opportunities for malicious actors to find and target victims,” the FBI said. “This leaves them vulnerable to embarrassment, harassment, extortion, financial loss, or continued long-term re-victimization.” 

The agency recommended that the public monitor their children’s activity online, regularly search for their own information online to know what is publicly available, apply privacy settings to social media accounts and use complex passwords and multi-factor authentication to safeguard their accounts.

It also said people should be cautious about which friend requests they accept, avoid sending money or other items of value to anyone they do not know, research platforms’ privacy policies and be careful when interacting with known accounts that are acting outside their “known pattern of behavior,” since those accounts could be hacked.

The warning comes after AI experts and industry leaders last week called attention to the risks posed by the growth of AI, saying that mitigating them should be a “global priority.”