In the last few weeks, the ServiceNow ecosystem has been dominated by cybersecurity news. Shortly before the end of the year, ServiceNow announced two of its highest-value acquisitions of all time: Veza and Armis. The first of these is rumored to have a price tag of over $1B, and the second is confirmed to be worth more than $7B. Clearly, ServiceNow’s eyes are now firmly planted on the security scene.
And when you consider the other recent security headlines, it doesn’t take a genius to work out why. As reported in NowBen, a new ServiceNow security vulnerability (known as ‘BodySnatcher’) was published last week. If exploited, it could have given hackers remote control over AI agents, with only a legitimate user’s email address required for authentication.
BodySnatcher is just one example, but it will be far from the last. Now, it’s becoming increasingly difficult for organizations investing in agentic AI to ignore the security considerations. To help shine a light on this increasingly vital issue, NowBen spoke to two security experts about the security implications of AI, as well as the tools and tactics that organizations should be using to mitigate them:
- Aaron Costello, Chief of Security (AppOmni), who recently uncovered the BodySnatcher vulnerability.
- Adam Pilton, a Cybersecurity Advisor (Heimdal) and former cybercrime detective.
Agentic AI Security: The Challenge
In the ServiceNow community, 2025 will likely be remembered as the year that agentic AI truly took off. Over the year, we saw a range of new product releases, including the AI Control Tower, AI Agent Fabric, and AI Agent Studio, alongside more widespread rollout and adoption of Now Assist Agents.
But when it comes to security, the agentic AI transition comes with a range of particular and distinct challenges. Any discussion about security in this space needs to start by understanding what problems we’re trying to solve, and why.
1. Organizations Lack Visibility
First, visibility. Too often, organizations simply don’t know what they don’t know when it comes to agentic AI, as Aaron Costello explains:
“When it comes to AI agent governance, the main challenge is that organizations are struggling to understand the use and development of AI on their platforms. Often, there’s no visibility for security teams about what these agents can do,” said Costello.
Here’s the issue: An organization can’t govern what it doesn’t understand. In order to effectively secure this new technology, you first need visibility over what AI agents are in use across the organization, who is using them, and what responsibilities they’ve been given.
2. AI and Security Skills Remain Limited
“I think it’s really important to call out the differences between agentic AI and generative AI,” said Pilton. “When we’re looking at generative AI like ChatGPT, we really need to be thinking about data security. The biggest threat that organizations face is people uploading or copying in data that’s proprietary or sensitive.
“When it comes to agentic AI, the risk is greater, because you still have the data risk, but you also have the element of ‘What privilege does this agentic AI have? What is it allowed to do?’ If the agentic AI has admin access, it can potentially change or delete documents or even entire databases.”
The other major challenge comes down to knowledge and skills.
Generally, those building and deploying AI agents aren’t security specialists. At the time, security teams aren’t often ServiceNow or AI specialists. This means, in practice, organizations and those building AI agents often don’t understand what the risks and challenges are. When you combine that with the current marketing imperative for organizations to quickly adapt to AI, this creates a real risk. Often, it results in organizations simply shipping AI agents to market without imposing any guardrails around them.
At the same time, even those with some understanding of AI security might not have all the nuance that’s needed to solve this problem. As Adam Pilton explains, agentic AI tools require an additional layer of security controls alongside those used in generative AI.
3. Prompt Injection Poses a Distinct Threat to Agents
When we spoke to security experts about the issue of AI security, one topic kept coming up: prompt injection. This describes how hackers can embed malicious instructions into a prompt, ultimately tricking the AI agent into performing an action it wasn’t intended to.
“With agentic AI, the crimes are often pretty much the same [as traditional cybercrime],” said Pilton. They’re getting faster, and they’re getting better quality when you use AI… but a phishing email is still a phishing email – and these are the attacks that work. But when it comes to the security element, one element we haven’t had to consider before is prompt injection.”
Here’s just one example: security researchers recently demonstrated how Google’s Gemini could be manipulated to switch on smart home products, using a malicious prompt delivered by a Google Calendar invitation. When Gemini was asked to summarize upcoming calendar events, the malicious request was read and performed.
As we’ll see below, solutions to this problem are developing. But it’s a difficult problem to completely solve, because AI prompts are qualitative by design. This makes it difficult to draw firm lines between the instructions an agent should follow and those it shouldn’t.
What Are the Key Principles of Agentic AI Security?
Given this situation, it’s important to lay some foundations. Or, to put it more plainly: What does effective AI security actually involve? What are the key principles at play here, and how can you go about implementing them? Again, we chatted to our security experts to define some key principles:
1. Build a Comprehensive Inventory of AI Agents
“For businesses, the starting point should be to understand what AI is in the business. What are people using? What are you willing for people to use? Then you can start to look at how we secure data and the privileges these agents have,” said Pilton.
First, organizations need to build a complete inventory of what AI agents are in use across the organization. This should also include information on who they’re being managed by, where they’re being deployed, and what tasks they’re being assigned. Ultimately, you can’t manage an agent you don’t know you have.
2. Manage AI Agent Privileges
“In terms of agentic AI guardrails, it’s really a case of laying out what the AI can do and excluding everything else. Let’s say we’ve got an AI agent looking at our emails, and it’s there to summarize the ones that we should be looking at and deleting others. That is where it should sit – it shouldn’t be approving calendar invites.”
Adam Pilton, Cybersecurity Advisor, Heimdal
‘Least privilege’ is a vital cybersecurity concept. It states that an account, user, device, or system should only have the minimum privileges required for the task it’s responsible for.
Extending this principle to AI agents is vital, as Adam Pilton explains. Organizations need to ensure there are tight guardrails around what an AI agent can and can’t do, by excluding anything that isn’t specifically laid out in the agent’s prompt.
3. Monitor AI Agent Logs
The next thing you need is a data trail of all actions and decisions being taken by agents. This ensures that if and when issues happen, you have a detailed bank of information you can use to understand the cause and how to avoid it happening again in the future.
“I would highly recommend that organizations monitor their historical log of AI agent interactions,” said Costello. “The best way to do this is to connect your ServiceNow tenant to Splunk or something similar, so AI agent logs flow into the same place as your other security logs.”
Another important point to remember: Not all IT teams will have access to ServiceNow and tools like the AI Control Tower. In this case (as Aaron Costello points out), it’s important to ensure agentic AI logs are being centralized in the same place as other security alerts. This eliminates silos and ensures any issues can be quickly resolved.
4. Deprovision Inactive and Unused Agents
“In addition to ensuring AI agents are securely deployed, it’s imperative that a process exists for deprovisioning inactive and unused agents… An agent’s active status can leave it susceptible to potential abuse, even if it is not deployed to any bot or channel. By implementing a regular auditing cadence for agents, organizations can reduce the blast radius of an attack.”
Adam Pilton, Cybersecurity Advisor, Heimdal
This is another key tenet of least privilege: Attack surface reduction. The logic is this: The fewer agents, accounts, devices, or apps you have, the fewer entry points there are for hackers looking to exploit them.
In the context of agentic AI, it’s important to ensure that agents are removed when they’re redundant, duplicated, or simply no longer used. The best way to do this is for IT teams or ServiceNow technical architects to implement a regular task to review and monitor any AI agents currently in use.
How Can ServiceNow Help?
Most of the points on this list are not new concepts in cybersecurity. In fact, techniques like building inventories, monitoring activity, and implementing least privilege have been fundamental bedrocks of cybersecurity for some time.
But applying these techniques to AI agents requires a distinct set of tools. In ServiceNow, the following products can help alleviate the challenges we’ve outlined in this article:
- AI Control Tower: ServiceNow’s flagship AI security offering, the AI Control Tower, goes further than much of today’s IT market in helping to manage and secure your AI agents. It offers controls to identify what agents are being used across the organization and create a centralized inventory. Admins can also provision and deprovision AI agents, and understand what decisions are being taken by each one.
- AI Agent Studio: As of last year’s Zurich update, the AI Agent Studio offers role-based access controls for agents, while also enabling organizations to define who can discover and use a particular agent.
- Now Assist Guardian: This is a built-in safety and compliance layer designed specifically to mitigate AI risks. It features logging and audit trails, alongside controls around offensive content. Most importantly, it also monitors AI inputs and outputs to identify suspicious patterns, such as prompt injection.
Final Thoughts: The Future of Agentic AI Security
Over the last year, organizations have moved quickly to deploy AI agents into their organizations and workflows. While the market imperative is obvious, there’s a real risk now that agents are shipped to market without the right controls and protections.
Due to its industry scale and heritage, ServiceNow is well-placed to help organizations manage the challenges we’ve discussed in this article. Indeed, the functionality available through tools like the AI Control Tower is generally more extensive than that offered by other major tech vendors.
But there remain challenges. Most crucially, many of these tools are scattered across different areas of the ServiceNow platform, where IT and security teams aren’t generally spending most of their time. At the same time, many everyday ServiceNow admins and architects aren’t aware of the risks in these areas and the tools available to help mitigate them.
When it comes to agentic AI, therefore, it’s the organizations that take the time to understand this landscape and effectively train their teams that stand the best chance of staying safe.