ZDNET: “Generative AI has stirred up as many conflicts as it has innovations — especially when it comes to security infrastructure.”
“Enterprise security provider Cato Networks says it has discovered a new way to manipulate AI chatbots. On Tuesday, the company published its 2025 Cato CTRL Threat Report, which showed how a researcher — who Cato clarifies had ‘no prior malware coding experience’ — was able to trick models, including DeepSeek R1 and V3, Microsoft Copilot, and OpenAI’s GPT-4o, into creating ‘fully functional’ Chrome infostealers, or malware that steals saved login information from Chrome. This can include passwords, financial information, and other sensitive details.”