# News

Major Security Vulnerabilities Revealed In Microsoft’s AI Copilot

Date: August 09, 2024

Michael Bargury, the CTO and co-founder of Zenity, a cybersecurity company, has identified critical security threats in Microsoft’s AI Copilot.

Microsoft has been among the fastest adopters of new AI technologies since the AI boom began. From aggressive acquisitions to heavy investment in new-age AI startups, Microsoft has built a robust AI development capability. However, the pace of that growth has not been matched by the security safeguards its AI products need to protect users at multiple levels.

Recent research findings revealed by Michael Bargury put Microsoft in a tough spot over the cybersecurity and reliability of its AI chatbot. Bargury is the CTO and co-founder of Zenity, a cybersecurity firm whose work covers the latest generative AI technologies. His test attacks ranged from prompt-injection manipulation to bypassing Copilot's core restrictions to make the AI do whatever he wanted.

In total, Bargury presented five proof-of-concept ways to use Microsoft Copilot to attack its end users. Copilot's breakthrough ability to pull answers from emails, team chats, and files has become a potential boon for cyber attackers.

The most powerful attack Bargury showcased turned Microsoft Copilot into an automated spear-phishing machine, which he named LOLCopilot. With access to a victim's work email, he could manipulate Copilot into revealing who interacts with them regularly, mimic their writing style, and blast out emails carrying malicious links or malware.

“I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf. A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes,” said Bargury.

The demonstration relied primarily on using the underlying large language model as it is designed. Another demonstration, based on a compromised email account, showed that Copilot could help extract sensitive company data such as salaries and financial spending without triggering Microsoft's protection protocols for sensitive files.

A simpler attack showed how an external hacker could turn Copilot into a malicious insider by extracting insights, such as whether a company's upcoming earnings call would be good or bad. Applied to publicly listed companies, this technique could potentially harm global stock markets and economies.

Microsoft has thanked Bargury for highlighting these glaring security issues and is working with him to resolve them as soon as possible. The company has previously killed newly launched features as soon as security concerns came into the spotlight. With a revelation this significant, Copilot users worldwide can only wait for the company to address the issues.

Arpit Dubey
