Artificial intelligence has become a powerful tool for businesses and professionals, streamlining daily tasks and boosting productivity. Unfortunately, criminals are also leveraging AI to make committing crimes faster and easier.
The use of generative AI in fraudulent activities has grown so concerning that the FBI recently issued a public service announcement to warn the public.
With the holidays approaching, scams are expected to increase as criminals exploit the busy season. To help you spot potential threats, here are examples of how generative AI can be used to carry out fraudulent schemes.
AI-Generated Text
AI-generated text closely mimics human writing, making fraudulent messages far harder to recognize. Criminals use it to lend credibility to schemes such as social engineering, spear-phishing, and financial fraud, while also evading common fraud-detection methods. Here are some examples of how AI-generated text is used:
- Generative AI enables criminals to craft messages more quickly, allowing them to target a broader audience with convincing content.
- Language translation capabilities provided by generative AI help foreign scammers minimize grammatical and spelling errors when targeting victims in the U.S.
- AI tools are employed to generate large volumes of fake social media profiles, deceiving victims into transferring money.
- Fraudulent websites for cryptocurrency and other investment scams are built with AI-generated content to appear more legitimate.
- Scammers embed AI-powered chatbots into fake websites to lure victims into clicking harmful links.
AI-Generated Images
AI-generated images are highly realistic and can be used by criminals to enhance the believability of their schemes. From fake social media profile pictures to fraudulent identification documents, these tools allow scammers to deceive their targets more effectively. Below are examples of how AI-generated images are exploited in various fraud schemes:
- Creating realistic profile photos for fake social media accounts used in scams like social engineering, spear-phishing, romance, and investment fraud.
- Producing fake IDs, such as driver’s licenses or official credentials, to aid in identity theft or impersonation.
- Generating convincing photos to share with victims during private communications, making them believe they’re interacting with a real person.
- Fabricating images of celebrities or influencers promoting counterfeit products or fake sales.
- Crafting visuals depicting natural disasters or global crises to solicit fraudulent charity donations.
- Designing images used in market manipulation schemes to mislead investors.
- Creating explicit images of individuals to extort money in sextortion cases.
AI-Generated Audio
AI-generated audio replicates voices with remarkable accuracy, allowing criminals to impersonate public figures or loved ones to deceive their victims. This technique, often called voice cloning, is increasingly used in fraud schemes to manipulate and exploit individuals. Here are some examples:
- Creating audio clips mimicking a loved one’s voice to stage emergency scenarios, requesting urgent financial help or ransom payments.
- Using voice replication to impersonate individuals and gain unauthorized access to sensitive accounts, such as banking or corporate systems.
AI-Generated Video
AI-generated videos are becoming highly convincing and are being exploited by criminals to add authenticity to their scams. These videos often depict realistic likenesses of people, such as public figures or fabricated personas, designed to deceive victims. Here are examples of how criminals misuse AI-generated videos:
- Producing real-time deepfake video during live video calls to impersonate company executives, law enforcement, or other authority figures.
- Creating videos to share in private messages, convincing victims that the person they’re communicating with is real.
- Designing promotional videos with fake endorsements or misleading content to support investment fraud schemes.
Ways to Protect Yourself
- Establish a unique code or phrase with your family to confirm their identity.
- Look for small inconsistencies in images or videos: distorted hands or feet, unnatural eyes or teeth, blurry or irregular faces, odd accessories like glasses or jewelry, misplaced shadows, or watermarks. In video and audio, also watch for lag, voice mismatches, or unnatural movements.
- Pay attention to the tone and language during phone calls to help differentiate between a real conversation and one using AI voice replication.
- Limit the online exposure of your images or voice by setting social media accounts to private and restricting followers to trusted individuals, reducing the chances of fraudsters creating fake identities.
- If in doubt, hang up and contact the organization or bank directly using a known phone number to confirm the legitimacy of the call.
- Avoid sharing personal or financial information with anyone you've only interacted with online or by phone.
- Never send money, gift cards, cryptocurrency, or other valuables to someone you don’t know or have only met online or by phone.