AI and foreign misinformation
OpenAI recently published information about deceptive operations it has disrupted that were using its models. Several of the operations were run by foreign actors using OpenAI's models to help spread propaganda and deceptive information. The list includes:
Spamouflage: Based in China, this operation used OpenAI's models to research social media activity, generate articles in multiple languages, and debug code for managing and accessing various online databases.
Doppelganger: Based in Russia, this operation used OpenAI's models to generate comments and dialogue in multiple languages on social media, including X (formerly Twitter), translate and edit articles, generate headlines, and post information on Facebook.
Bad Grammar: Based in Russia, this operation targeted Ukraine, Moldova, the Baltic States, and the US. It used OpenAI's models to debug code for running a bot that created political comments, which were then posted on Telegram in multiple languages.
International Union of Virtual Media (IUVM): Based in Iran, this operation used OpenAI's models to generate and translate articles, headlines, social media posts, and website tags.
IMHO, we're going to see a lot more of this going forward. As AI capabilities expand and proliferate, though, reliance on large commercial AI models will probably give way to a plethora of customized LLMs that will be much harder to track and disrupt.