As technology advances, cybercriminals are quick to exploit new tools and techniques for malicious ends.
It didn’t take long for an “evil” counterpart of ChatGPT to emerge: WormGPT, a new generative AI cybercrime tool that reportedly enables cybercriminals to write malware and craft persuasive phishing emails.
Concerns About Disruptive Technology
This development in the cybersecurity threat landscape is deeply concerning, according to Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 AFRICA.
She states, “Cybercriminals have always been among the first groups to leverage the advantages of disruptive technology, leading to an ongoing cat-and-mouse game as defenders strive to keep up with the criminals.”
Exploitation of ChatGPT’s Functionalities
Tools like WormGPT lower the barrier for individuals with criminal intent but little technical skill. By exploiting the capabilities of ChatGPT-style models while circumventing their safety measures and ethical controls, cybercriminals can create more sophisticated and personalized phishing attacks capable of deceiving even the most discerning individuals and organizations.
Establishing Safety Codes As a Defense
“As users, our best defense is to remain extra vigilant and, unfortunately, not trust anything at face value. A useful tactic is to establish safety code words to authenticate requests within our closest work, family, and friendship circles, in order to avoid falling victim to impersonation attacks,” advises Collard.
WormGPT marks a significant development in the cybersecurity threat landscape: it enables cybercriminals to turn AI tools that have become commonplace in everyday life to malicious ends.