As discussions evolve around how AI could impact the cybersecurity landscape, and in particular how phishing attacks could become more sophisticated, Crossword’s CEO Tom Ilube reflects on his own experience of being impersonated online, an impersonation he suspects has AI origins, and on what email recipients should now watch out for.
A few weeks ago my company received an email from a charity in Australia asking whether I had really offered to give them a personal gift of £2.5m.
Not long afterwards, another charity sent a message saying they were “thrilled” to hear that I had approved a grant of £500k. Several other reports of similarly sized “offers” have reached me recently, from as far afield as Canada and Australia. Clearly, someone out there, using a fake Gmail account, is having a good time impersonating me and trying to lure unsuspecting charities into a conversation that will not end well for them.
I was asked what the end game of this sort of scam is, given that the fraudster is offering the victim money. I assume it is the usual trick of luring the victim in so deep that when the final request for a “standard transaction fee of just £50k to release the £1m donation” is made, it all seems very reasonable. Money changes hands. Fraudster cracks open the champagne and the victim never hears from them again.
As many people with a public profile will have experienced, it is not unusual in this day and age to find that you are being impersonated online. The best you can do when you find out is to report it to the relevant authorities (which I have done) and publicise it on your social channels (which I have also done), in the hope that anyone receiving an email that seems too good to be true will be smart enough to do their due diligence. That is why I am writing this post. And that is also why, if you look at my LinkedIn profile now, you will see a message saying:
NOTE: IF YOU RECEIVED AN UNSOLICITED EMAIL FROM SOMEONE PURPORTING TO BE ME AND OFFERING YOU MONEY, I CAN ASSURE YOU IT DEFINITELY WAS NOT ME!
However, there are a couple of things about these recent impersonations that strike me as quite interesting.
The first is the quality of the phishing emails. So far, each email has been unique, not just the same template with the odd tweak here and there. They are very different from one another and are tailored specifically to the target victim. They are very well written and quite long. In fact, when I first read a couple of them, I immediately thought, “this is the sort of email I could imagine being generated by someone using ChatGPT.” The emails don’t have the usual clunky feel that phishing emails often have. That is why they seem to be successfully hooking people at the target charities who are clearly experienced and must have seen scam emails many times before.
If you know the emails are fake and re-read them carefully, you get the feeling that there is something in the language that is not quite right. There is a human touch that is missing. I ran the text through an AI content detector and the result was that the email text was “only 26% human generated”. In fact, this is a good tip: if in doubt, try putting the email through an AI content detector. These tools are not infallible, but they can give you a useful indication of whether the text was AI generated.
The problem is that if they are generated by large language models (LLMs) rather than crafted by hand, they are simply going to get better and better. Will they get to the point where they are able to generate emails purporting to be from me that my own family would not be able to tell apart from the real thing? This is the Turing Test for real, potentially on a massive scale!
The second thing these fraudsters did was to invite the targets to a Zoom call. Yes, you read that correctly! I assumed it was a bluff. But one potential victim told me that he had actually attended the Zoom call. What happened? The call took place as planned, but “Fake Tom” didn’t appear. Instead, the fraudsters used a bot and claimed they were having problems with their Wi-Fi, so they couldn’t get the video working. That obviously rang further alarm bells (thank goodness), so the victim contacted us directly and we quickly put them straight.
But it won’t be long before using a bot and giving the excuse of poor Wi-Fi evolves into using a credible AI-generated face and voice. One year? Two years? Five years? I’m not sure when it will be good enough to impersonate me in the eyes of someone who doesn’t know me well, but it can’t be too far in the future, can it?
What I think we are seeing here are the early signs of a step change in online impersonation. It’s not perfect yet, but give it a year or two and we could be in a completely different world.
In the meantime, I am a generous chap but if someone claiming to be me offers you millions of pounds then I can assure you IT IS DEFINITELY NOT ME!
For more information on how you can protect your business against phishing attacks, the NCSC has specific guidance on defending your organisation here. If you are looking to understand how protected you are against attacks, Crossword’s team of experts can help you assess your cyber controls and procedures. For more information, please contact us.