Microsoft Unveils Open-Source Tool To Protect AI From Being Hacked

Microsoft has released an open-source tool intended to help protect AI systems from being hacked. The Counterfit project, published on GitHub, lets developers assess how severe a cyberattack could be by simulating threats against an AI system.

Counterfit is a command-line tool for conducting automated attacks against AI systems at scale. Microsoft built it as part of its own "red team" attack-testing efforts, and organizations can use it to attempt to "evade and steal AI models," Microsoft indicated. The tool also has a logging capability that captures "telemetry," which can be used to understand how and why an AI model fails.
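
Counterfit ships with its own attack implementations, which Microsoft has not detailed in this announcement. To illustrate the general idea behind the "evasion" attacks such tools automate, the sketch below uses a toy logistic-regression "model" and a fast-gradient-sign-style perturbation; every name and value in it is hypothetical and is not Counterfit code:

```python
import numpy as np

# Toy "victim" model: logistic regression with fixed weights.
# All names and values here are illustrative, not Counterfit internals.
rng = np.random.default_rng(0)
w = rng.normal(size=10)   # model weights
b = 0.1                   # model bias

def predict(x):
    """Return P(class=1) for input vector x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Start from an input the model confidently places in class 1.
x = rng.normal(size=10)
if predict(x) < 0.5:
    x = -x  # flip so the starting input is class 1

# Evasion step: the gradient of the logit with respect to the
# input is just the weight vector, so a small step against its
# sign pushes the prediction toward the other class while each
# feature changes by at most epsilon.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
print(f"max perturbation:  {np.max(np.abs(x_adv - x)):.3f}")
```

The point of automating attacks like this at scale is that a red team can discover how little an input needs to change before a model's decision flips, which is exactly the kind of failure Counterfit's telemetry is meant to surface.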

“This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities to proactively secure AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative,” the company said in a blog post.

Cybersecurity is a major priority for companies around the world. Microsoft surveyed 28 organizations, including Fortune 500 companies, governments, non-profits, and small and medium-sized businesses, to learn what processes they already had in place for securing AI systems. Of the 28 organizations, 25 said they did not have the right tools to secure their AI systems.

The tool can be run in Azure Shell from a browser or installed locally in an Anaconda Python environment. Microsoft says it comes preloaded with attack algorithms, and developers and security experts can use the cmd2 scripting engine built into the tool to drive the tests; a simplified sketch of that pattern follows.
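
cmd2 is an open-source Python library for building interactive command shells. The toy shell below is not Counterfit itself, and its command names (targets, scan) are invented for illustration, but it shows the cmd2 convention of mapping do_* methods to commands that either a human or a script can invoke:

```python
import sys

import cmd2


class ToyRedTeamShell(cmd2.Cmd):
    """Illustrative cmd2 shell; the command set here is hypothetical,
    not Counterfit's actual commands."""

    prompt = "toy> "

    def do_targets(self, _):
        """List the (made-up) AI systems available to test."""
        for name in ("image-classifier", "fraud-model"):
            self.poutput(name)

    def do_scan(self, statement):
        """Pretend to run preloaded attacks against a named target."""
        target = statement.args or "image-classifier"
        self.poutput(f"scanning {target} with preloaded attacks...")


if __name__ == "__main__":
    sys.exit(ToyRedTeamShell().cmdloop())
```

Because cmd2 shells can also execute text files of commands non-interactively, the same command set can be batched, which is presumably how a tool of this kind supports automated testing at scale.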

There are several benefits to using artificial intelligence to help stop cyber threats. First, AI can process far larger volumes of data than a human can, which means it can surface threats earlier and faster. It can also reduce the likelihood of errors in a company's cybersecurity tooling, making its defenses more dependable.

The announcement pointed to a slew of resources that organizations can use to understand machine learning failures. There's also a "Threat Modeling" guide for developers of AI and machine learning systems.