
How Has the “Groundbreaking” BodySnatcher Vulnerability Affected ServiceNow AI Agents?

By Matt Rooke

Earlier this week, ServiceNow published details of a new security vulnerability affecting the Virtual Agent API and Now Assist AI Agents.

Known as ‘BodySnatcher’, the issue would have enabled attackers to impersonate a ServiceNow user with elevated privileges, giving them far-reaching control over AI agents in the organization. To do this, the hacker would only need the target user’s email address, effectively bypassing multi-factor authentication (MFA) and single sign-on (SSO) controls.

According to the security researcher who discovered this issue, BodySnatcher is “the most severe AI-driven security vulnerability uncovered to date”. If executed, the issue could have granted hackers almost unlimited access to an organization’s sensitive information, including “Social Security numbers, healthcare information, financial records, or confidential intellectual property.”

NowBen spoke to Aaron Costello, the security researcher who identified the vulnerability, to understand more.

Virtual Agent, Now Assist, and APIs

BodySnatcher was identified in October 2025 by SaaS security firm AppOmni. Since then, ServiceNow has collaborated with AppOmni’s security researcher, Aaron Costello, to resolve the problem and issue a patch. Now, details have been released to the wider ServiceNow and security community, and the vulnerability has been given an official CVE identifier: CVE-2025-12420.

The issue affected Virtual Agent, which has been a feature of the ServiceNow ecosystem for some years. This is essentially ServiceNow’s enterprise chatbot, which enables users to interact with the system’s underlying data and services using plain language commands.

Specifically, BodySnatcher affected the ServiceNow Virtual Agent API, which was designed to let users interact with Virtual Agent via third-party software such as Slack. The connection used two main forms of authentication: Message Auth (to identify the third-party software provider) and Auto-Linking (to authenticate the user submitting the request). Together, these create a relatively friction-free way for users to communicate with the Virtual Agent chatbot from third-party tools.

The fundamental issue with BodySnatcher was this: Virtual Agent alone is a relatively low-risk attack vector, since its responsibilities are generally tightly limited to straightforward actions like ‘reset password’. More recently, however, Virtual Agent’s scope was expanded to support agent-to-agent communication. This created the opening that made the BodySnatcher vulnerability possible.

BodySnatcher: The ‘Groundbreaking’ Agentic AI Vulnerability

Since the release of Now Assist AI Agents (a more recent addition to the ServiceNow ecosystem), ServiceNow has extended the existing Virtual Agent API to enable agent-to-agent communication via Now Assist agents. But there were two fundamental flaws in how this communication was established, which together laid the groundwork for BodySnatcher:

  • First, authentication to any of these new agentic providers required only a single, static, non-rotating client secret that each provider had been configured with. This secret was the same across all ServiceNow instances, meaning anybody who obtained it could interact with AI agents via the Virtual Agent API.
  • At the same time, the Auto-Linking logic inherently trusted any request that supplied the shared secret, identifying the user by email address alone, with no multi-factor authentication (MFA) required.
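The two flaws above can be sketched in miniature. The following is a hypothetical, heavily simplified illustration of the broken pattern (a shared static secret plus email-only auto-linking) next to the shape of a fix (a per-instance secret plus an MFA gate); none of the names or values here reflect ServiceNow’s actual implementation.

```python
import hmac

# Hypothetical illustration only -- NOT ServiceNow's actual code.
SHARED_STATIC_SECRET = "same-value-on-every-instance"  # flaw 1: static, global

def flawed_auto_link(provider_secret: str, user_email: str) -> str:
    """Links the request to a user account with no further checks."""
    # Flaw 1: any caller holding the shared secret passes Message Auth.
    if not hmac.compare_digest(provider_secret, SHARED_STATIC_SECRET):
        raise PermissionError("unknown provider")
    # Flaw 2: the email alone identifies the user -- no MFA, no SSO assertion.
    return f"session-for:{user_email}"

def safer_auto_link(provider_secret: str, instance_secret: str,
                    user_email: str, mfa_passed: bool) -> str:
    """A per-instance secret plus an MFA gate breaks the impersonation step."""
    if not hmac.compare_digest(provider_secret, instance_secret):
        raise PermissionError("unknown provider")
    if not mfa_passed:
        raise PermissionError("second factor required")
    return f"session-for:{user_email}"
```

In the flawed version, anyone holding the (universal) secret can mint a session for any email address they know; in the safer version, the secret is unique per instance and the impersonation attempt fails at the MFA check.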

Together, these issues meant that an attacker could impersonate a user simply by knowing their email address. Surprising as that is on its own, the risk remained relatively contained at this stage. The real scope of BodySnatcher comes into play with a further API: the A2A Scripted REST API, which is designed to adapt incoming data into a format readable by the Virtual Agent API.

Once this connection was set up, it gave attackers an almost unprecedented ability to command ServiceNow AI agents through the API. As long as an agent was in an active state, an attacker needed only the email address of a relevant ServiceNow account, with no additional authentication. From there, they could direct the agent to perform a range of malicious tasks, including accessing, modifying, or deleting a vast range of sensitive data.

As Costello explains, the potential risk of this was significant:

“With respect to what was publicly understood regarding the availability of AI agents on the platform, this understanding is groundbreaking.

The general consensus was that in order for an AI agent to be executed outside of testing, it must be deployed to a channel that has explicitly enabled the Now Assist feature. But this is not the case. Evidently, as long as the agent is in an active state, and the calling user has the necessary permissions, it can be executed directly through these topics.”

Aaron Costello, Chief of Security Research, AppOmni

Practical Steps: How to Stay Safe from BodySnatcher (And Other Agentic Threats)

“On October 30, 2025, ServiceNow addressed this vulnerability by deploying a relevant security update to the majority of hosted instances. Security updates were also provided to ServiceNow partners and self-hosted customers.

At this time, ServiceNow is unaware of this issue being exploited in the wild against customer instances.”

ServiceNow (source: ServiceNow Support)

Costello and AppOmni have collaborated closely with ServiceNow over recent weeks and months to communicate and resolve the vulnerability. As a result, ServiceNow released a patch to cloud-hosted customers in late October, shortly after BodySnatcher was identified. Customers with self-hosted/on-premises ServiceNow deployments should already have received communication from ServiceNow, since these deployments require a manual update.

If you aren’t sure whether the relevant patch has been installed, ServiceNow advises checking that your systems are running the following versions:

  • Now Assist AI Agents (sn_aia)
    • 5.1.18 or later
    • 5.2.19 or later
  • Virtual Agent API (sn_va_as_service)
    • 3.15.2 or later
    • 4.0.4 or later
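The minimums above can be turned into a quick local check. This is an illustrative sketch only: the plugin names and version tuples come from the list above, but how you obtain the installed versions from your instance (and whether a simple tuple comparison matches ServiceNow’s own versioning rules) is an assumption.

```python
# Patched minimums per plugin, per release branch, from the advisory above.
PATCHED_MINIMUMS = {
    "sn_aia": [(5, 1, 18), (5, 2, 19)],           # Now Assist AI Agents
    "sn_va_as_service": [(3, 15, 2), (4, 0, 4)],  # Virtual Agent API
}

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '5.1.18' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(plugin: str, installed: str) -> bool:
    """True if the installed version is at or above the fix for its branch."""
    ver = parse_version(installed)
    for minimum in PATCHED_MINIMUMS[plugin]:
        # Compare only within the same major.minor branch (5.1.x vs 5.2.x).
        if ver[:2] == minimum[:2]:
            return ver >= minimum
    # A branch newer than any listed minimum is assumed to contain the fix.
    return ver > max(PATCHED_MINIMUMS[plugin])
```

For example, `is_patched("sn_aia", "5.1.17")` is `False` (below the 5.1.18 fix), while `is_patched("sn_aia", "5.2.19")` is `True`.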

According to ServiceNow, there is no evidence of this vulnerability being exploited in the wild. By all accounts, Costello discovered it and ServiceNow resolved it before it could be abused, though it’s impossible to be completely certain of this.

“ServiceNow’s immediate response was to rotate the provider credentials and remove the powerful AI agent shown in the PoC, effectively patching the ‘BodySnatcher’ instance. But these are point-in-time fixes. The configuration choices that led to this agentic AI vulnerability in ServiceNow could still exist in an organization’s custom code or third-party solutions.”

Aaron Costello, Chief of Security Research, AppOmni

While this specific issue has been patched, BodySnatcher relied on configuration choices at the organizational level that could still enable other agentic AI security threats. According to Costello, a series of best practices can help mitigate or contain the risk of vulnerabilities like this:

  • Enable multi-factor authentication, especially where auto-linking is used for bot-to-bot and agent-to-agent communication. This adds an extra authentication step, breaking the BodySnatcher exploit at the impersonation stage.
  • Avoid creating powerful AI agents that can perform admin tasks. Ideally, an agent should be granted only the privileges its task explicitly requires. This extends the principle of least privilege to AI agents, limiting the scope and potential damage of any successful attack.
  • Regularly disable unused and inactive agents, and revoke elevated permissions from agents that no longer need them. This keeps the attack surface to a minimum.
  • Maintain a comprehensive inventory of AI agents and actions, and regularly monitor historical logs of AI agent interactions. This helps you identify any vulnerabilities or misconfigurations that could be exploited, and provides a data trail to follow when anomalous behavior is detected.
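The last two practices can be sketched as a simple audit pass over an agent inventory. The record fields (`name`, `last_used`, `roles`), the role names, and the 90-day staleness threshold are illustrative assumptions, not a real ServiceNow export format or policy.

```python
from datetime import datetime, timedelta

# Hypothetical role names and staleness policy -- tune to your environment.
ELEVATED_ROLES = {"admin", "security_admin"}
STALE_AFTER = timedelta(days=90)

def audit_agents(agents: list[dict], now: datetime) -> dict:
    """Flag agents that are candidates for disabling or privilege review."""
    findings = {"stale": [], "over_privileged": []}
    for agent in agents:
        if now - agent["last_used"] > STALE_AFTER:
            findings["stale"].append(agent["name"])  # candidate to disable
        if ELEVATED_ROLES & set(agent["roles"]):
            findings["over_privileged"].append(agent["name"])  # review privileges
    return findings
```

Run on a regular schedule, a pass like this gives you a short, reviewable list of agents to disable or de-privilege, rather than an attack surface that only grows.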

AI Security: The Story of 2026?

“The discovery of BodySnatcher represents the most severe AI-driven security vulnerability uncovered to date and a defining example of agentic AI security vulnerabilities in modern SaaS platforms. It demonstrates how an attacker can effectively ‘remote control’ an organization’s AI, weaponizing the very tools meant to simplify enterprise workflows.”

Aaron Costello, Chief of Security Research, AppOmni

While BodySnatcher has been resolved by ServiceNow, it’s clear that wider agentic AI security issues remain. The technology is still something of a wild west: organizations and employees are enthusiastically adopting agentic AI tools, but many struggle to understand the security implications.

As agentic AI becomes increasingly common, we can expect more issues like BodySnatcher to come to the fore over the coming months. Clearly, AI security is going to be a huge topic in the ServiceNow ecosystem and elsewhere as we move through 2026.

The Author

Matt Rooke

Matt is a tech writer at NowBen.
