Agentic AI Security & Governance
1 Overview
Description
This page is a companion to the Pro-Code AI Agents best practice page. It focuses exclusively on the security, ethics, and governance aspects of deploying agentic AI systems within the SAP ecosystem. Threat categories and controls are cross-referenced to the OWASP Top 10 for LLM Applications (2025) where applicable.
Agentic AI systems introduce a fundamentally different risk profile compared to traditional enterprise applications, and even compared to standard LLM usage. A traditional application follows predefined workflows and responds to user input predictably. A standard LLM call is stateless: one prompt in, one response out. An agent, however, operates autonomously across multiple turns, makes decisions about which tools to call, accumulates memory across steps, and can chain actions that affect real business systems.
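The structural difference described above can be sketched in a few lines. This is an illustrative sketch only: `llm`, the tool registry, and the decision format are hypothetical stand-ins, not any real agent SDK.

```python
# Illustrative contrast between a stateless LLM call and an agentic loop.
# `llm` is assumed to be a callable that returns a decision dict; the tool
# names and message format are invented for this sketch.

def stateless_call(llm, prompt: str) -> str:
    # One prompt in, one response out; no state survives the call.
    return llm(prompt)

def agent_loop(llm, tools: dict, goal: str, max_steps: int = 5) -> list:
    memory = []  # state accumulates across turns
    for _ in range(max_steps):
        decision = llm(f"goal={goal} memory={memory}")
        if decision.get("action") == "finish":
            break
        tool = tools[decision["tool"]]              # the agent picks the tool
        result = tool(**decision.get("args", {}))   # side effects on real systems
        memory.append((decision["tool"], result))   # chained across steps
    return memory
```

The key point for threat modeling: in `agent_loop`, both the choice of tool and the arguments passed to it come from model output, and each result feeds back into the next decision.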
This autonomy is what makes agents powerful. It is also what makes them dangerous if not properly secured.
The attack surface of an agentic system is not just the LLM prompt. It extends to every tool the agent can call, every piece of data it can access, every message exchanged between agents in multi-agent systems, and every decision it makes without human oversight. Traditional application security (input validation, authentication, authorization) is necessary but insufficient. Agentic AI requires additional, purpose-built controls.
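One example of such a purpose-built control is a policy gate between the agent and its tools: every tool call passes an allowlist check, and designated high-risk tools additionally require human approval. The tool names, risk tiers, and approval hook below are hypothetical, chosen only to illustrate the pattern; they are not part of any SAP or OWASP API.

```python
# Sketch of a policy gate for agent tool calls (assumed names throughout).
# High-risk tools are a hypothetical set; real deployments would derive
# this from governance policy, not a hard-coded constant.
HIGH_RISK = {"post_journal_entry", "delete_record"}

def gated_call(tools: dict, name: str, args: dict, approve=lambda n, a: False):
    """Invoke a tool only if it is allowlisted and, for high-risk tools,
    explicitly approved by a human-in-the-loop callback."""
    if name not in tools:
        raise PermissionError(f"tool '{name}' is not on the allowlist")
    if name in HIGH_RISK and not approve(name, args):
        raise PermissionError(f"tool '{name}' requires human approval")
    return tools[name](**args)
```

The design choice here is that the gate sits outside the model: even a fully compromised prompt cannot reach a tool that the allowlist or approval hook refuses, which is exactly the property traditional input validation alone does not give you.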
This page provides opinionated guidance on how to secure agentic AI deployments in SAP environments, based on SAP's security guidelines for agentic AI, the OWASP Top 10 for LLM Applications, and established threat modeling practices.