Impersonation and Emotional Scams: The Combination of Deepfakes and Large Language Models Will Change the Game
Criminals aren’t slouches when it comes to technology – quite the contrary. Just because they chose a life of crime doesn’t make them inherently lazy. In fact, they’ve demonstrated a level of creativity and problem-solving that has made digitally enabled crimes, such as scams, harder and harder to stop. And with the advent of consumer-ready artificial intelligence applications that can create near-perfect imitations of trusted people and organizations, scams are about to evolve to a point where unaided detection will be impossible.
Scammers aim to deceive people out of their money, personal information, or other benefits, often by imitating a trusted third party. Scams take many forms: bank impersonation scams, investment scams, lottery scams, romance scams, and so on. Until now, eagle-eyed consumers could pick up on clues that they were facing a scam of one type or another. But that is about to change.
As technology advances, scammers are finding new ways to manipulate victims. The greatest impending threat will come from the combination of deepfakes and large language models, which can create realistic but fake content that is hard to distinguish from legitimate activity.
- Deepfakes use artificial intelligence (AI) to manipulate images, videos, or audio of real people. They can make people appear to say or do things that they never did or would do. For example, deepfakes can be used to create fake news, fake interviews, fake endorsements, or fake evidence. For reference, there are plenty of convincing examples now online of celebrities like Tom Cruise and Scarlett Johansson, and politicians like Nancy Pelosi and Barack Obama.
- Large language models (LLMs) are AI systems that generate natural language text from a given input or prompt. They can produce coherent, fluent text that mimics the style, tone, or content of a specific domain, person, or genre. For example, LLMs can be used to create chatbots, fake reviews, fake posts, fake profiles, or fake emails. Multimodal models built on the same technology can also generate images and even video – making them all-in-one solutions for creating deepfake content. Criminals are already adapting LLMs for their own purposes, as evidenced by the availability of FraudGPT and WormGPT.[1]
Scammers are already using deepfakes in romance and so-called ‘grandparent’ scams, and LLMs are helping them craft error-free emails and text messages. But it is the combination of the two that will become a truly powerful and dangerous tool: together, they can generate convincing, personalized scams that target specific individuals or groups and are virtually indistinguishable from legitimate communications. A scammer can use an LLM to research a target and generate everything needed for an effective scam.
Published with permission from BioCatch.
[1] WormGPT and FraudGPT – The Rise of Malicious LLMs (trustwave.com)