OpenAI has blocked several ChatGPT accounts linked to illicit use by Chinese, North Korean and other hacker groups.
In its latest report, shared on 5 June, the AI company detailed how it detects and prevents malicious uses of AI to keep the technology safe for all.
Multiple abuses of ChatGPT
OpenAI highlighted many ways that hackers from several countries are using ChatGPT for malicious purposes.
One of these was a deceptive employment scheme that involved scamming IT workers through fraudulent hiring practices.
The threat actors, whose behavior resembled that of North Korean hackers, used OpenAI’s models to develop materials supporting what appeared to be fraudulent applications for IT, software engineering and other remote jobs around the world.
Another category was ChatGPT accounts using OpenAI models to bulk generate social media posts consistent with the activity of a covert influence operation on TikTok, X, Reddit, Facebook, and several other websites.
They primarily issued prompts in Chinese and focused on political and geopolitical topics relevant to China; one user stated outright in a prompt that they worked for the Chinese Propaganda Department.
There were also accounts engaged in social engineering in the U.S. and Europe, mainly writing prompts in Chinese during mainland Chinese business hours and translating emails and messages from Chinese to English.
OpenAI was able to detect and ban these accounts, effectively stopping them from engaging in any further illicit use.
Other illicit accounts
Apart from the Chinese and North Korean accounts, ChatGPT accounts originating from other countries and groups were also banned.
These include those focused on politics and current events in the Philippines, with content generated and posted on TikTok, Facebook, and other platforms.
Accounts that appeared to originate from Russia were also banned; these used OpenAI’s models to generate German-language content about Germany’s 2025 election and to criticize the US and NATO.