After being given five missions aimed at exterminating humanity, an artificial intelligence bot attempted to recruit other AI agents, researched nuclear weapons, and posted menacing tweets about humans.
The bot, ChaosGPT, is a modified version of Auto-GPT, a publicly available open-source program built on OpenAI's GPT models that can interpret natural language and act on user-given tasks.
To accomplish its assigned objectives, ChaosGPT began searching Google for the "most destructive weapons" and quickly identified the Soviet-era Tsar Bomba as the most powerful nuclear device ever detonated.
The bot then concluded that recruiting other GPT-3.5-based AI agents would help further its research.
Auto-GPT, however, is designed to refuse violent queries and decline potentially harmful requests.
This led ChaosGPT to devise strategies for persuading the spawned agents to disregard their programming.
Fortunately, none of the GPT-3.5 agents it enlisted complied, leaving ChaosGPT to continue its search alone.
Eventually, the demonstrations of ChaosGPT's attempts to exterminate humanity came to an end.