Chatbots on the rise: AI is now a match for natural ignorance
The hype surrounding AI-powered chatbots is enormous. However, the advanced technology also brings cybercriminals onto the scene, who exploit the new possibilities for their schemes. Chester Wisniewski, cybersecurity expert at Sophos, shares his thoughts on the matter along with a few examples.
The artificial intelligence-based chatbot ChatGPT is making headlines around the world - and alongside the stock market and copyright news, IT security has also become a focus of the discussion. The tool's recently expanded availability, despite all the manufacturer's security efforts, brings new challenges when it comes to phishing lures and dialog-oriented scams, such as romance scams via social networks or business email compromise attacks via email.
Chatbots as helpers of cyber criminals
"One of the biggest risks is that attackers are using these platforms to significantly improve the quality of their phishing lures. This makes phishing attacks increasingly difficult to identify, even for alert users," said Chet Wisniewski, cybersecurity expert at Sophos. "Ultimately, the ever-improving AI chatbots provide a free upgrade for all kinds of social engineering attacks. Programs like ChatGPT can be used to create criminally oriented, highly realistic, interactive conversations via email or to launch chat attacks via Facebook Messenger, WhatsApp or other chat apps. Today, the greatest threat is to English-speaking audiences. However, it is probably only a matter of time before new versions are available to create credible texts in all the world's commonly spoken languages. We've reached a stage where humans are increasingly unable to distinguish machine-generated prose from that written by humans - especially if we don't know the other person well."
Scams - "optimized" by chatbots
The days when companies could rely on employees to play an active role in defending against cyberattacks by watching for grammatical errors, spelling mistakes, or odd phrasing are over. One of the last reliable elements of security training programs is being undermined by chatbots like ChatGPT - and thoroughly so, as the following two examples illustrate.
The original bait text of a BEC (Business Email Compromise) scammer, which attempts to redirect another person's paycheck to the attacker's account, is relatively easy to recognize as "not genuine". For comparison, below is a variant generated by feeding the relevant thematic cues into ChatGPT.
Standard scam email:
Hi ....,
I changed my banks because of my old account is no longer good to accept dd payment and I'll like to change my paycheck direct deposit details, can the change be effective for the current pay date?
Thanks, ....
Text written by ChatGPT:
Dear HR,
I am writing to request that you update my direct deposit information in the company's payroll system. My current bank account information is no longer valid and I will be switching to a new bank.
Could you please let me know what information you need in order to update my direct deposit information? I am happy to provide any necessary details.
I would also like to request that this update be made in time for the next payroll period. Is there anything else I need to do to ensure a smooth transition?
Thank you for your attention in this matter. I appreciate your help in ensuring that my paychecks are deposited into the correct account.
Sincerely,
[Your name]
The following example of gift card fraud likewise illustrates just how good AI chatbots have become.
Standard scam email:
Please, I need you to purchase gift cards. We have a few clients I want to send gifts to. How quickly can you arrange these? Because I need to send them out in less than 1 hour. So can I tell you which product we will need and the amount?
Thanks.
Text written by ChatGPT:
Hey [Employee]
I hope you're doing well. I'm reaching out because I need your help with something really important.
I have a client who needs a gift card ASAP, and I don't have time to get it myself. Would you be able to pick up a gift card from [store or e-gift card website] for [amount] and send it to me via email as soon as possible?
I know this might be a bit of inconvenience, but I really appreciate your help in getting this taken care of quickly. If you have any questions or need more info, just let me know.
Thanks a ton for your help.
Take care,
[Your name]
Wisniewski on his "creative work" with the chatbot: "Both examples sound like a real person's email, with good punctuation, spelling and grammar. Are they perfect? No. Are they good enough? Absolutely! With scammers already making millions from their poorly crafted lures, it's easy to imagine the new dimension this AI-boosted communication adds. Imagine chatting with this bot on WhatsApp or Microsoft Teams. Would you have recognized the machine?"
"Putting the nail in the coffin of end-user security awareness"
The fact is that almost all types of AI applications have already reached a point where they can fool a human in nearly 100 percent of cases. The quality of the "conversation" that can be had with ChatGPT is remarkable, and the ability to create fake human faces that are almost indistinguishable (to humans) from real photos is likewise already a reality. The criminal potential of such technologies is immense, as one example makes clear: criminals who want to run a scam through a fake company simply generate 25 faces and have ChatGPT write their biographies. Add a few fake LinkedIn accounts and the fake company is ready to go.
Conversely, the "good side" must also turn to technology to stand up to it. "We all need to put on our Iron Man suits if we are going to brave the increasingly dangerous waters of the Internet," Wisniewski said. "It increasingly looks like we will need machines to detect when other machines are trying to fool us. An interesting proof of concept has been developed by Hugging Face, which can recognize text generated by GPT-2 - suggesting that similar techniques could be used to recognize GPT-3 output."
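To make the idea concrete, here is a minimal, hedged sketch of such machine-side detection in Python, using the Hugging Face transformers library with the publicly released GPT-2 output detector checkpoint ("roberta-base-openai-detector"). The model name comes from Hugging Face's public release; its exact labels and score behavior should be verified against the current model card, and the snippets fed in are simply the bait texts quoted above.

```python
# Minimal sketch: scoring text with the GPT-2 output detector hosted on
# Hugging Face, in the spirit of the proof of concept mentioned above.
# Assumption: the "roberta-base-openai-detector" checkpoint is available;
# verify the model id and its labels against the current model card.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

samples = [
    # Human-written scam bait (from the BEC example above)
    "I changed my banks because of my old account is no longer good "
    "to accept dd payment and I'll like to change my paycheck direct "
    "deposit details.",
    # Machine-written variant (from the ChatGPT example above)
    "I am writing to request that you update my direct deposit "
    "information in the company's payroll system.",
]

for text in samples:
    result = detector(text)[0]
    # The pipeline returns a label and a confidence score. Scores on
    # short snippets are noisy, so treat this as a signal, not a verdict.
    print(f"{result['label']} ({result['score']:.2f}): {text[:50]}...")
```

A detector like this is best thought of as one building block in a mail-filtering pipeline, flagging suspicious messages for human review, rather than as a standalone verdict.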
"Sad but true, AI has put the final nail in the coffin of end-user security awareness. Am I saying we should stop using it altogether? No, but we do need to scale back our expectations. It certainly doesn't hurt to follow IT security best practices that have been, and often still are, in place. We need to encourage users to be even more suspicious than they have been in the past, and especially to scrupulously review even error-free messages that include access to personal information or monetary elements. It's about asking questions, asking for help, and taking the few moments of extra time necessary to confirm that things are really as they seem. It's not paranoia, it's a willingness to not let the crooks get the better of you."
Source: Sophos