ChatGPT, the artificial intelligence-based chatbot created by OpenAI that has taken the world by storm with its human-like ability to respond to queries, is falling prey to cybercriminals. Two months into its launch, hackers have started using the platform to generate malicious content to dupe people.
On the Dark Web, hackers have been busy posting about how ChatGPT can be used to build malicious tools and recreate malware strains for data theft.
Another hacker showed how to use the platform to create a script for a Dark Web marketplace for trading illegal goods, according to Check Point Research (CPR).
“Recently, I have been playing with ChatGPT. And, I have recreated many malware strains and techniques based on some write-ups and analyses of commonly known malware,” a hacker said, taking part in a thread.
According to CPR, it is not just coding-savvy hackers, but even people with less technical skill, who can use the platform for malicious purposes.
Srinivas Kodali, a privacy and digital infrastructure transparency activist, says it is quite a natural social phenomenon. “Technology can always be used for good and bad things. It is the responsibility of the government to create awareness, educate the public and to regulate and keep tabs on the bad actors,” he said.
Challenges
ChatGPT seems to be aware of this challenge. When a user posed a question on the platform about the scope for malicious uses, it responded that some might try to “use me or other language models to generate spam or phishing messages”.
“As a language model, I do not have the ability to take action or interact with the real world, so I cannot be used for malicious purposes. I am simply a tool that is designed to generate text based on the input that I receive,” it added.
OpenAI, which developed the platform, has warned that ChatGPT could sometimes respond to harmful instructions or exhibit biased behavior, although it has made efforts to make the model refuse inappropriate requests.
“Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes. Although the tools that we analyze in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools,” Sergey Shykevich, Threat Intelligence Group Manager at Check Point, said.