Microsoft Adds New Agentic AI Tools to Security Copilot
Microsoft has announced a major expansion of its AI-powered cybersecurity platform, introducing a set of autonomous agents to help organizations counter emerging threats and manage the growing complexity of cloud and AI security.
The update marks the next phase for Microsoft Security Copilot, launched a year ago, as the company adds 11 AI-powered agents to automate tasks such as phishing detection, data security, vulnerability management, and threat analysis. The move underscores Microsoft's strategy to use AI not only as a target to protect, but also as a frontline defense against increasingly sophisticated cyberattacks.
"With over 30 billion phishing emails detected in 2024 alone and cyberattacks now exceeding human capacity to respond, agent-based AI security has become an imperative," said Vasu Jakkal, corporate vice president for Microsoft Security, in a blog post.
Six of the new AI agents were developed in-house and five were built by Microsoft's security partners, including OneTrust, Aviatrix, and Tanium. The tools will begin rolling out in preview starting in April 2025.
"An agentic approach to privacy will be game-changing for the industry," said Blake Brannon, chief product and strategy officer at OneTrust, in a statement. "Autonomous AI agents will help our customers scale, augment, and increase the effectiveness of their privacy operations. Built using Microsoft Security Copilot, the OneTrust Privacy Breach Response Agent demonstrates how privacy teams can analyze and meet increasingly complex regulatory requirements in a fraction of the time historically required."
Among the new additions is a Phishing Triage Agent in Microsoft Defender, designed to filter and prioritize phishing alerts, providing explanations and improving with user feedback. Another, the Conditional Access Optimization Agent, monitors identity systems to spot policy gaps and recommend fixes. Microsoft is also debuting an AI-powered Threat Intelligence Briefing Agent that curates threat insights tailored to each organization's risk profile.
The release comes amid surging global interest in generative AI and a parallel rise in what Microsoft calls "shadow AI," unauthorized AI use inside organizations, often outside of IT oversight. Microsoft estimates that 57% of enterprises have seen an uptick in security incidents tied to AI, even as 60% admit they haven't implemented adequate controls.
To address this, Microsoft is extending its AI security posture management across multiple clouds and models. Starting in May 2025, Microsoft Defender will support AI security visibility across Azure, AWS, and Google Cloud, including models such as OpenAI's GPT, Meta's Llama, and Google's Gemini.
Other new safeguards include browser-based data loss prevention (DLP) tools to block sensitive information from being entered into generative AI apps like ChatGPT and Google Gemini, as well as enhanced phishing protection in Microsoft Teams, long a target of email-style attacks.
"The rise of AI has introduced new cyber risk vectors, but it's also our greatest ally," said Alexander Stojanovic, vice president of Microsoft Security AI Applied Research, in a statement. "This is just the beginning of what security agents can do."
For more information, visit the Microsoft blog.
About the Author
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].