As technology advances, voice-cloning (deepfake) scams become increasingly sophisticated, enabling scammers to impersonate individuals, even children, and target their unsuspecting parents. In response to this growing menace, I’ve devised a novel phone greeting tailored specifically for my teenagers.
A close acquaintance recently fell victim to a scam text supposedly sent by his middle daughter, pressing him to transfer £100 urgently to an unfamiliar account, allegedly to resolve a perplexing and time-sensitive situation outlined in the deceptive message.
The scammer’s strategy is ingenious, exploiting the everyday parental anxiety of expecting bad news whenever children are out of immediate reach. Its effectiveness is heightened by the inherent credibility of any bad news from a 19-year-old texting, “I smashed my phone.” The scammer merely needs to capitalize on that vulnerability.
Given the flaws in the story, we couldn’t help but berate him for his lack of caution, pointing out his failure to ask fundamental questions like, “If it’s your phone that’s broken, why does the money need to go into someone else’s bank account?” Surprisingly, he didn’t even call his daughter’s actual number to verify the situation with her directly. In retrospect, losing £100 is a relatively mild consequence; it could have been his life savings.
Now, consider the scenario where you hear your child’s voice, perfectly mimicked, earnestly requesting money. How many could resist the persuasive power of voice cloning? Last year, the folks at Stop Scams UK tried to enlighten me on this: a scammer can easily extract a child’s voice from their TikTok account and then locate the parent’s phone number. I initially misunderstood the process, thinking scammers had to stitch a message together from random words scattered across social media. My skepticism centered on the difficulty of constructing a believable plea from football tips and K-pop content, completely overlooking the capability of AI to analyze speech patterns from a given sample and then generate entirely new sentences in that voice.
Despite these concerns, I still maintain that this threat can be circumvented. When faced with a plea for urgent help from a supposed kid-machine, respond with heartfelt affection: “Precious and perfect being, I love you with all my heart.” The kid-machine’s reply, “I love you too,” will come automatically; a real child would more likely interject with an authentic excuse, such as claiming to feel unwell. Some things, it seems, remain beyond the reach of algorithmic replication.