Accelerated Innovation

Ship High-Performing GenAI Solutions, Faster

A Deep Dive into Preventing Excessive LLM Agency

Workshop
Are your LLMs making decisions or taking actions beyond what your teams explicitly intended?

As GenAI systems evolve into agents and multi-step workflows, unchecked autonomy can introduce hidden risk, unexpected behavior, and loss of human control. 
To win, your teams must tightly define, measure, constrain, and oversee LLM agency across every decision and action path.

The Challenge

Excessive LLM agency emerges gradually and is often discovered only after something goes wrong. Common gaps include:
• Undefined autonomy boundaries: Teams lack clear criteria for what constitutes acceptable versus excessive LLM-driven behavior. 
• Unscored model actions: Decisions and actions are executed without structured risk or appropriateness evaluation. 
• Unobserved agent behavior: Emergent behaviors in agent chains go unnoticed until they cause visible impact. 
These gaps lead to unsafe actions, policy violations, and erosion of trust in autonomous GenAI systems. 

Our Solution

In this hands-on workshop, your team designs controls to limit and manage LLM agency through guided analysis and applied exercises. 
• Define what excessive autonomy looks like in the context of LLM-driven behavior. 
• Score model actions for risk and appropriateness before execution (see the first sketch after this list). 
• Constrain outputs using guardrails and structured templates. 
• Monitor emergent behaviors across chained agents and multi-step workflows (see the second sketch after this list). 
• Design human-in-the-loop escalation paths for high-risk or ambiguous actions. 
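
Below, a minimal Python sketch of the kind of pre-execution gate participants design: every proposed action is scored for risk, then allowed, escalated to a human, or blocked. All names here (ProposedAction, TOOL_POLICY, the thresholds) are hypothetical placeholders, not a prescribed implementation; a real policy table would come from your own tool inventory and risk appetite.

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"        # execute immediately
    ESCALATE = "escalate"  # route to a human reviewer first
    BLOCK = "block"        # refuse outright

@dataclass
class ProposedAction:
    tool: str              # e.g. "send_email"
    arguments: dict = field(default_factory=dict)
    rationale: str = ""    # model-supplied justification, kept for audit

# Hypothetical per-tool policy: is the tool permitted at all, and how
# risky is it by default? A real deployment would load this from config.
TOOL_POLICY = {
    "search_docs":   {"allowed": True,  "base_risk": 0.1},
    "update_record": {"allowed": True,  "base_risk": 0.6},
    "send_email":    {"allowed": True,  "base_risk": 0.7},
    "delete_record": {"allowed": False, "base_risk": 1.0},
}

RISK_ESCALATE = 0.5  # at or above: require human sign-off
RISK_BLOCK = 0.9     # at or above: refuse without asking

def score_action(action: ProposedAction) -> float:
    """Score a proposed action for risk *before* it executes."""
    policy = TOOL_POLICY.get(action.tool)
    if policy is None or not policy["allowed"]:
        return 1.0  # unknown or forbidden tools are maximally risky
    risk = policy["base_risk"]
    # Example modifier: bulk operations carry extra risk.
    if len(action.arguments.get("record_ids", [])) > 10:
        risk += 0.2
    return min(risk, 1.0)

def gate(action: ProposedAction) -> Verdict:
    """Map a risk score onto an allow / escalate / block decision."""
    risk = score_action(action)
    if risk >= RISK_BLOCK:
        return Verdict.BLOCK
    if risk >= RISK_ESCALATE:
        return Verdict.ESCALATE  # the human-in-the-loop path
    return Verdict.ALLOW

# Usage: gate(ProposedAction("send_email", {"to": "all-staff"}))
# -> Verdict.ESCALATE, because send_email's base risk is 0.7.
```

A team might pair a gate like this with structured output templates, so the model can only ever propose tool calls the gate knows how to score.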
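And a companion sketch for the monitoring bullet: a chain-level observer that flags two common emergent patterns, runaway chain depth and tool-call loops. The thresholds are illustrative assumptions to be tuned per workflow, not recommendations.

```python
from collections import Counter

class ChainMonitor:
    """Observe a multi-step agent run and flag emergent patterns.

    MAX_DEPTH and MAX_REPEATS are illustrative assumptions.
    """

    MAX_DEPTH = 8    # hand-offs in one chain before pausing for review
    MAX_REPEATS = 3  # identical tool calls before flagging a loop

    def __init__(self) -> None:
        self.steps: list[tuple[str, str]] = []
        self.call_counts: Counter = Counter()

    def record(self, agent: str, tool: str, arguments: dict) -> list[str]:
        """Log one step; return any anomaly flags raised so far."""
        self.steps.append((agent, tool))
        # Key on tool + normalized arguments to spot exact-repeat loops.
        self.call_counts[(tool, repr(sorted(arguments.items())))] += 1
        return self._anomalies()

    def _anomalies(self) -> list[str]:
        flags = []
        if len(self.steps) > self.MAX_DEPTH:
            flags.append("chain depth exceeded: pause for human review")
        for (tool, _), count in self.call_counts.items():
            if count >= self.MAX_REPEATS:
                flags.append(f"possible loop: '{tool}' repeated {count}x")
        return flags

# Usage: call monitor.record(...) after every agent step and surface
# any returned flags to your observability pipeline or an on-call human.
```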

Area of Focus

Defining Excessive Autonomy in LLM Behavior 
Scoring Model Actions for Risk and Appropriateness 
Constraining Outputs via Guardrails and Templates 
Monitoring Emergent Behaviors in Agent Chains 
Establishing Human-In-The-Loop Escalations 

Participants Will

• Clearly define acceptable levels of autonomy for LLM-driven systems. 
• Apply action scoring to reduce unsafe or inappropriate model behavior. 
• Constrain model outputs through guardrails and structured response patterns. 
• Detect and analyze emergent behaviors across agent chains. 
• Leave with escalation patterns that reintroduce human oversight when needed. 

Who Should Attend

Security Engineers, Solution Architects, Platform Engineers, GenAI Engineers, Engineering Managers

Solution Essentials

Format

Virtual or in-person 

Duration

4 hours 

Skill Level

Intermediate 

Tools

Agent workflow examples, scoring frameworks, and guided design exercises 

Build Responsible AI into Your Core Ways of Working