As some countries start easing their pandemic restrictions and life settles into a new kind of normal post COVID-19, we find ourselves on the verge of more change. That makes some people anxious and others hopeful.

Security experts looking at the future probably fall into the "more anxious" category, preferring in most cases, I'm sure, to "not see" or "un-see" the underground markets and forums. Why so anxious? Their research shows, firstly, a sustained interest in ransomware, and secondly, astonishingly cheap deepfake services on offer to complement any cybercriminal's campaign of choice. Put the two together and what do you have? Deepfake ransomware.

The ransomware process

Cybercrime waiting to happen

Why is cybercrime just waiting to happen? First, ransomware remains very relevant news for business, which is still a cybercriminal's preferred target. Secondly, during the COVID-19 pandemic businesses have had to organise their staff to work from home, and some organisations aren't coping with the change, given the widespread increase in security measures needed when staff are scattered across locations. Thirdly, factor in how cybercrime gangs operate and their need for the element of surprise to catch people off guard: their tools evolve with the crime, and they continually upgrade rather than keeping the same malicious tools for long.

 “Deepfake ransomware”? Never heard of it.

Can you believe we’re at a place where deepfake ransomware is even a thing?

The phrase "deepfake ransomware" is a very new compound, although the two words themselves aren't. Let's look at each word to get an idea of how they relate, and why the security experts are "feeling anxious."

A "deepfake" is media manipulated with Artificial Intelligence (AI): images and/or video, often combined with voice. The result is a believable media bite that is difficult for the naked eye, and even for software, to spot. This technology has obvious potential for use in scam campaigns.

Ransomware, on the other hand, is malware that holds a victim's files hostage. It does this by encrypting important files, or by locking victims out of certain computer features so they can't repair the system, until a ransom is paid.

Combine the two and "deepfake ransomware" suggests that deepfake technology can be used in ransomware campaigns, or vice versa. It's plausible, if a stretch of the imagination. To help us better understand how the two work together, a few experts have given examples of how this might look in reality.

Paul Andrei Bricman, a student at the University of Groningen who specialises in AI, appears to have been the first to publicly describe the idea, although he went with the name "RansomFake" instead, declaring it "the lovechild of ransomware and deepfake." Incidentally, he is also the co-founder of the not-for-profit REAL (Registrul Educațional Alternativ).

Bricman describes RansomFake as "a type of malicious software that automatically generates fake video" showing the victim in a compromising, incriminating, and/or intimate act. The cybercriminal then threatens to send the video unless, of course, a ransom is paid. Once payment is received, the target is given the option to permanently delete the video file.

You can bet that more bad actors with little to no programming background will take an interest in the technology, especially if it's automated. A recent Trend Micro report reveals high interest in how deepfakes could be used for sextortion and for bypassing authentication protocols that rely on image verification on certain sites, such as dating websites.

We are starting to understand a security expert's anxiety: deepfake ransomware is an emerging threat that takes extortion-based ransomware up a notch.

1st Scenario – A threat actor pulls together videos and voice samples of their target, all from publicly available websites, to create a deepfake video. They then sprinkle in elements inspired by ransomware, such as a countdown timer that lasts 24-48 hours. And there you have deepfake ransomware.

2nd Scenario – Similarly, the threat actor creates a deepfake video of their target and takes screenshots of it. Then, posing as one of the target's legitimate contacts, they send off the screenshots along with a link to the supposed full video.

The target, perhaps curious, a little guilty, and even a bit scared, clicks the link and is directed to a video showing them in a very bad light, a compromised state. Meanwhile, ransomware is being downloaded onto their system. In some cases the link doesn't lead to a video at all; it simply triggers the download and execution of a ransomware file. And deepfakes don't just exploit videos and voices, but photos as well.

Nor is this approach unheard of; ransomware actors tried something similar back in 2015.

Whether a video of someone is real or doctored to look real, the damage caused is very real. It can destroy a target's reputation. We all tend to believe what we see about someone, especially when it's on video. After all, we're seeing it with our very own eyes.

I’m not going to be a likely target, am I?

I don't recommend assuming you'll never be a target, whether as an individual, a group, or an organisation. Any of us could find ourselves on the receiving end of one of these attacks. It's always good to educate yourself and prepare for the worst, just in case.

Is there a way to protect against it?

Patching software vulnerabilities won't stop this particular campaign, but you should be doing it regularly anyway.

How do we protect ourselves against deepfake ransomware, you ask?

Firstly, watch what you post on social media in general. Group pics, selfies, and TikTok videos are all public and up for grabs. In other words, don't make it easy for cybercriminals to get the pictures, videos, and voice samples they need to create a deepfake.

Secondly, think seriously about who you share your content with and where. It’s a good idea to do an audit of your current photos and videos online and see exactly who has access to them.

Take down public-facing photos, or restrict them so only certain groups in your pool of contacts can view them. If you didn't post the pictures yourself, un-tag yourself, or better still, ask the contact who did to take them down.

This is part of data detoxing, and it's one of the steps to keeping your digital footprint as minimal as possible. Above all, the process is good for your privacy, your pocket, and your sanity.

We can understand a security expert's anxiety: deepfake ransomware is of real interest in the underground cybercrime markets, especially given the cheap pricing of deepfake services. So we should aim to keep our pictures and videos as private as possible. It's that simple.