Whether you’re crafting an email, creating some concept art, or scamming vulnerable people by making them think you’re a friend or relative in distress, AI is here to help! But since some people would rather not be scammed, let’s talk a little bit about what to look out for.
The past few years have seen a huge increase not only in the quality of media being created, from text to audio to images and video, but also in how easy and affordable it is to create that media. The same kind of tools that help a concept artist create some fantasy monster or spaceship, or allow a non-native speaker to improve their business English, can also be used maliciously.
Don’t expect the Terminator to knock on your door and sell you a Ponzi scheme – these are the same old scams we’ve been facing for years, but with a creative AI twist that makes them easier, cheaper, or more convincing.
This is by no means a complete list, but rather just a few of the obvious tricks that AI can enhance. We’ll be sure to add new tricks as they emerge, or any additional steps you can take to protect yourself.
Voice cloning of family members and friends
Synthetic voices have been around for decades, but it’s only in the past year or two that technological advances have made it possible to generate a new voice from just a few seconds of audio. That means anyone whose voice has been broadcast publicly—say, in a news report, a YouTube video, or on social media—is vulnerable to having their voice cloned.
Scammers can use this technique to produce a convincing fake version of a victim's loved one or friend. The clone can of course be made to say anything, but in service of a scam, it will most likely be an audio clip asking for help.
For example, a parent might receive a voicemail from an unknown number that sounds like their child, explaining that their things were stolen while traveling, that a stranger is letting them use their phone, and asking whether mom or dad could send some money to this address, that Venmo account, and so on. One can easily imagine variants involving car trouble (“They won’t release my car until someone pays them”), medical issues (“This treatment isn’t covered by insurance”), and the like.
This type of fraud has already been carried out using President Biden’s voice! The people behind that robocall were caught, but future scammers will be more careful.
How can you fight audio cloning?
First, don’t bother trying to detect a fake voice yourself. Cloned voices are getting better every day, and there are plenty of ways to mask any quality issues. Even experts are fooled!
Anything coming from an unknown number, email address, or account should automatically be treated as suspicious. If someone claims to be a friend or family member, reach out to that person the way you normally would. They’ll probably tell you they’re fine and that the message is (as you guessed) a scam.
Scammers tend not to follow up if they’re ignored, whereas a real family member probably will. It’s okay to leave a suspicious message on read while you consider what to do.
Phishing and personalized spam via email and messaging
We all get spam from time to time, but text-generating AI makes it possible to send out mass emails tailored to each individual recipient. And with data breaches happening regularly, a lot of your personal data is out there for the taking.
A generic fraudulent email that says “Click here to view your invoice!” with an obviously suspicious attachment is low-effort and easy to dismiss. But with even a little context, these messages become quite convincing, drawing on recent locations, purchases, and habits to seem like a real person or a real problem. Armed with a few personal facts, a language model can customize a generic version of these emails for thousands of recipients in a matter of seconds.
So what used to be “Dear Customer, please find your invoice attached” becomes something like “Hi Doris! I’m with Etsy’s promotions team. An item you were looking at recently is now 50% off! And shipping to your address in Bellingham is free if you use this link to claim the discount.” A simple example, but still. With a real name, a shopping habit (easy to find out), a general location (ditto), and so on, the message is suddenly a lot less obviously a scam.
In the end, this is still just spam. But this kind of bespoke spam once had to be written by poorly paid workers at content farms abroad. Now it can be done at scale by an LLM with better prose skills than many professional writers!
How can you fight spam emails?
As with traditional spam, vigilance is your best weapon. But don’t expect to be able to tell generated text apart from human-written text. Few people can, and (despite the claims of some companies and services) neither can any AI detector reliably.
Despite the improved copy, this type of scam still faces the fundamental challenge of getting you to open a sketchy attachment or link. As always, unless you are 100% sure of the sender’s authenticity and identity, don’t click or open anything. If you’re even slightly unsure (and that’s a sense worth cultivating), don’t click, and if you have someone knowledgeable to forward it to for a second pair of eyes, do that.
The “fake you” identity and verification scam
Given the number of data breaches over the past few years (thank you, Equifax!), it’s safe to say that almost all of us have a fair amount of personal data floating around on the dark web. If you follow good online security practices, much of the risk is mitigated by changing your passwords, enabling multi-factor authentication, and so on. But generative AI could pose a new and serious threat to this space.
With so much data about a person available online, and for many people, even a snippet or two of their voice, it’s becoming increasingly easy to create an AI persona that sounds like a target person and has access to many of the facts used to verify identity.
Think of it this way: if you’re having trouble logging in, can’t configure your authenticator app properly, or lose your phone, what do you do? Call customer service, most likely, and they’ll “verify” your identity using some trivial facts like your date of birth, phone number, or Social Security number. Even more advanced methods like “take a selfie” are becoming easier to game.
The customer service agent (who, for all we know, is also an AI) may well oblige this fake you and grant it all the privileges you would have if you had actually called in. What a scammer can do from that position varies widely, but none of it is good!
As with the other scams on this list, the danger lies not in how realistic the fake is, but in how cheaply and repeatedly scammers can carry out this type of attack at scale. Not long ago, this kind of impersonation attack was expensive and time-consuming, and was consequently limited to high-value targets like wealthy individuals and CEOs. Nowadays, you can build a workflow that creates thousands of impersonation agents with minimal oversight, and these agents can independently call the customer service numbers for all of a person’s known accounts—or even create new ones! Only a few of them need to succeed to justify the cost of the attack.
How can you fight identity fraud?
Just as before AI arrived to boost scammers’ efforts, “Cybersecurity 101” is your best bet. Your data is already out there; you can’t put the toothpaste back in the tube. But you can make sure your accounts are adequately protected against the most obvious attacks.
Multi-factor authentication is easily the most important step anyone can take here. Any consequential account activity goes straight to your phone, and suspicious logins or attempts to change your password will show up in your email. Don’t ignore these warnings or mark them as spam, even (especially!) if you’re getting a lot of them.
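For the curious: the one-time codes behind most authenticator apps come from an open standard, TOTP (RFC 6238). A code is derived from a shared secret plus the current time, which is why a scammer armed with your birthday and Social Security number still can’t produce one. Here is a minimal sketch using only the Python standard library (the secret and timestamp in the usage example are the published RFC test values, not anything app-specific):

```python
import base64
import hashlib
import hmac
import struct


def totp(secret_b32: str, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = struct.pack(">Q", int(timestamp // step))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)


# RFC 6238 test vector: 20-byte ASCII secret "12345678901234567890", T=59s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", 59, digits=8))  # → 94287082
```

The point of the design is that the secret never leaves your device and the code expires in 30 seconds, so it can’t be phished ahead of time or reconstructed from breached personal data.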
AI-generated deepfakes and extortion
Perhaps the scariest form of AI-powered scam is the possibility of blackmail using deepfaked images of you or a loved one. We can thank the fast-moving world of open image models for this frightening prospect! People interested in certain aspects of cutting-edge image generation have built workflows for producing not just naked bodies, but attaching them to any face they can get a picture of. I don’t need to elaborate on how this technology is already being used.
But one unintended consequence is an extension of the scam commonly called “revenge porn,” more accurately described as the non-consensual sharing of intimate images (though, like “deepfake,” it can be difficult to displace the original term). When someone’s private images are released, whether through hacking or a vengeful ex, a third party can use them for blackmail, threatening to publish them widely unless a payment is made.
AI enhances this scam by making it so you don’t need actual intimate photos in the first place! Anyone’s face can be added to an AI-generated body, and while the results aren’t always convincing, they may be enough to fool you or others if they’re pixelated, low-resolution, or partially blurry. That’s all it takes to scare someone into paying to keep it a secret – although, like most blackmail operations, the first payment is unlikely to be the last.
How can you fight AI-generated deepfakes?
Unfortunately, the world we’re moving toward is one in which fake nudes of almost anyone will be available on demand. It’s creepy and weird and disgusting, but unfortunately the cat is out of the bag here.
Nobody is happy about this situation except the bad guys. But there are a couple of things working in every potential victim’s favor. It may be cold comfort, but these images aren’t really of you, and it doesn’t take actual nude photos to prove that. These image models may produce realistic bodies in some ways, but like other generative AI, they only know what they’ve been trained on. So the fake images will lack any distinguishing marks, for instance, and are likely to be obviously wrong in other ways.
Although the threat will likely never disappear entirely, victims have growing recourse: they can legally compel image hosts to take photos down, or get scammers banned from the sites where they post. As the problem grows, so do the legal and private means of fighting it.
TechCrunch is not a lawyer! But if you are a victim of this, report it to the police. It’s not just a scam, it’s harassment, and while you shouldn’t expect the cops to do the kind of deep internet detective work required to track someone down, these cases do sometimes get resolved, or the scammers get spooked by requests sent to their ISP or forum host.