“Punishing artificial intelligence for deceptive or harmful actions doesn’t stop it from misbehaving; it just makes it hide its deviousness, a new study by ChatGPT creator OpenAI has revealed,” reports Live Science.
“Since arriving in public in late 2022, artificial intelligence (AI) large language models (LLMs) have repeatedly revealed their deceptive and outright sinister capabilities. These include actions ranging from run-of-the-mill lying, cheating and hiding their own manipulative behavior to threatening to kill a philosophy professor, steal nuclear codes and engineer a deadly pandemic.”
“Now, a new experiment has shown that weeding out this bad behavior during the training process may be even tougher than first thought.”