Cybersecurity experts are advising families to adopt a “safe word” system as deepfake phone scams become more common, according to recent reporting.

The recommendation follows data indicating that one in four Americans reported receiving a deepfake voice call within the past year. These calls use artificial intelligence tools to imitate the voice of a trusted person, such as a family member or colleague, in order to request money or sensitive information.

Researchers state that impersonation scams have existed for years, but AI-generated voice cloning has increased their effectiveness. Modern tools can replicate a person’s voice using only short audio samples, making it more difficult for targets to distinguish between real and fraudulent calls.

The proposed safeguard involves agreeing on a private word or phrase known only to trusted individuals. During a suspicious call, the recipient can request the safe word to verify the caller’s identity. If the caller cannot provide it, experts advise ending the conversation and contacting the person directly through a known number.

Experts recommend that safe words be unique and not easily guessed or found online. Publicly available information, such as names, locations, or dates, should be avoided. In some cases, longer phrases or multiple verification steps may be used to increase reliability.

The guidance reflects broader concerns about the use of AI in social engineering attacks. Deepfake technology has been increasingly used in fraud schemes, including impersonation of executives and family members, with the aim of creating urgency and prompting immediate financial transfers.

Authorities and researchers state that while technical detection tools are being developed, simple verification methods such as safe words remain a practical measure for individuals and families.