Microsoft's new Security Copilot will help network admins respond to threats in minutes, not days

The AI will prioritize attack responses and catch data patterns that human defenders might miss.

Humanity took another step toward its Ghost in the Shell future on Tuesday with Microsoft's unveiling of the new Security Copilot AI at its inaugural Microsoft Secure event. The automated enterprise-grade security system is powered by OpenAI's GPT-4, runs on Microsoft's Azure infrastructure and promises admins the ability "to move at the speed and scale of AI."

Security Copilot is similar to the large language model (LLM) that drives the Bing Copilot feature, but its training is geared heavily toward network security rather than general conversational knowledge and web search optimization. "This security-specific model in turn incorporates a growing set of security-specific skills and is informed by Microsoft’s unique global threat intelligence and more than 65 trillion daily signals," Vasu Jakkal, Corporate Vice President of Microsoft Security, Compliance, Identity, and Management, wrote Tuesday.

“Just since the pandemic, we’ve seen an incredible proliferation [in corporate hacking incidents],” Jakkal told Bloomberg. For example, “it takes one hour and 12 minutes on average for an attacker to get full access to your inbox once a user has clicked on a phishing link. It used to be months or weeks for someone to get access.”

Security Copilot should serve as a force multiplier for overworked and under-supported network admins, a field in which Microsoft estimates there are more than 3 million open positions. "Our cyber-trained model adds a learning system to create and tune new skills," Jakkal explained. "Security Copilot then can help catch what other approaches might miss and augment an analyst’s work. In a typical incident, this boost translates into gains in the quality of detection, speed of response and ability to strengthen security posture."

Jakkal anticipates these new capabilities will enable Copilot-assisted admins to respond to emerging security threats within minutes, rather than days or weeks after an exploit is discovered. As a brand new, untested AI system, Security Copilot is not meant to operate fully autonomously; a human admin needs to remain in the loop. “This is going to be a learning system,” she said. “It’s also a paradigm shift: Now humans become the verifiers, and AI is giving us the data.”

To safeguard the sensitive trade secrets and internal business documents Security Copilot is designed to protect, Microsoft has also committed to never using its customers' data to train future Copilot iterations. Users will also be able to dictate their privacy settings and decide how much of their data (or the insights gleaned from it) will be shared. The company has not revealed if, or when, such security features will become available for individual users as well.
