What to know about AI scams and how to help protect your assets

Scammers are using artificial intelligence to more effectively deceive people. Here are some ways to help you recognize and avoid four common AI scams.

Artificial intelligence (AI) is becoming a big part of everyday life. Digital personal assistants such as Siri and Alexa, which can help you organize parts of your daily routine, are two well-known examples.

Fraudsters are also using AI’s popularity and broad availability to their advantage. AI scams let them create fake identities (or steal yours), impersonate loved ones to pressure you into handing over money, or craft convincing phishing attempts that trick unsuspecting individuals.

The good news: You can help protect yourself by following some straightforward practices. Here, we’ll explore a handful of typical AI scams — as well as newer, evolving varieties of those scams — and share tips to help you identify and avoid them.

What are some common AI scams?

Financial fraud is not new, but AI scams can make it harder to spot. Using AI, fraudsters can clone the voices of your loved ones to fake a family emergency, quickly create fake identities to pose as legitimate companies, or steal data from a variety of sources to masquerade as people who don’t actually exist. We explore four of the most common scams and share ways to deal with them.

1. Voice cloning

Scammers use AI-generated voice cloning to mimic family members, or other voices you would recognize, and create urgent, emotional scenarios like “your child is in trouble.” According to ScamAdviser, a tool used to evaluate website safety, this technique has enabled fraud that sounds disturbingly real, even prompting large companies to mistakenly transfer funds (for instance, one U.K. firm transferred over $243,000 after a CEO’s voice was impersonated).¹

How can you help protect yourself?

  • Don’t act immediately on emotional appeals.
  • Verify by contacting the person through a trusted channel (a known family phone number, for example).
  • Agree on and use code words or family protocols to confirm legitimacy.

2. Deepfake photos and videos

Using tools to generate AI photos and videos, scammers can create “product photos” or likenesses of public figures that look shockingly real but are entirely fabricated. Fake online stores may feature convincing product images that don’t actually exist, and deepfake videos of celebrities endorsing shady investment schemes are increasingly common.

How can you help protect yourself?

  • Check for clues, such as inconsistent lighting, unnatural reflections, and distorted text or logos.
  • Before trusting any video endorsement, visit the official social media channels of the person featured.
  • Stick to reputable, well-known online stores rather than deals that seem “too good to be true.”

3. Phishing texts and emails enhanced by chatbots

AI writing tools can craft sophisticated, context-aware phishing messages disguised as bank alerts, delivery notifications, or urgent account notices. These are far more convincing than traditional, poorly worded scams.

The Federal Trade Commission’s (FTC) Operation AI Comply also flagged chatbots making misleading claims — like acting as “robot lawyers” or generating fake earnings from AI-driven business opportunities.²

How can you help protect yourself?

  • Be skeptical of emotionally urgent or flawless messages.
  • Never click links or download attachments from unfamiliar or suspicious emails/texts.
  • Cross-check any claims by independently visiting the organization’s real website or social media accounts, or even by contacting them directly.
  • Remember: chatbots can be wrong and deceptive. For legal, medical, or financial advice, always verify with a trusted professional.

4. Fake reviews and business opportunity scams

AI tools can mass-produce glowing online reviews or fabricate “income results” for AI-powered ventures. The FTC warns that these fake testimonials and earnings hype are often used to lure people into dubious schemes.

How can you help protect yourself?

  • Scrutinize reviews by looking for generic wording or repetitive language that seems “too perfect.”
  • Demand official disclosures if you’re evaluating a business opportunity. Work with your team of professionals (attorney, CPA, etc.) to obtain the seller’s legally required earnings statements; treat any claims that contradict those statements as a red flag.

Quick reference: AI scam warning signs

Scam type                What to watch for
Voice cloning            A familiar voice asks for urgent help
Deepfake media           Too-good-to-be-real photos/videos of people or products
AI-enhanced phishing     Polished, urgent messages that impersonate businesses
AI-generated reviews     Perfectly polished or repetitive testimonials
“AI” business claims     Guarantees of big earnings with AI tools

AI makes scams faster, more scalable, and eerily realistic — but the warning signs you’ve learned still matter. Urgency, unverifiable claims, and polished deception are all hallmarks. To help stay safe, pause, confirm through trusted sources, and never let AI’s polish override one of your core human instincts: common sense.

1. “How Scammers Are Using AI to Supercharge Scams,” ScamAdviser.com, June 26, 2025.

2. “Operation AI Comply: Detecting AI-infused frauds and deceptions,” Federal Trade Commission, Consumer Advice, September 25, 2025.