Accelerated Innovation

Ship High-Performing GenAI Solutions, Faster...

A Deep Dive into Preventing GenAI Supply Chain Risks

Workshop
Do you know exactly what models, components, and dependencies your GenAI solutions are built on—and the risks they introduce?

GenAI systems rely on external models, open source components, and complex distribution pipelines that expand the attack surface beyond your direct control. 
To ship with confidence, your team must actively manage supply chain risk through dependency visibility, integrity verification, and provenance tracking.

The Challenge

GenAI supply chains introduce hidden risks that traditional security reviews often miss. 
• Opaque dependencies: Teams lack a clear inventory of external models, libraries, and services embedded in GenAI solutions. 
• Unverified components: Open source and third-party models are adopted without sufficient integrity or trust validation. 
• Weak provenance tracking: Model origins, transfers, and usage licenses are poorly documented or unenforced. 
These gaps increase exposure to tampering, license violations, and downstream security incidents. 

Our Solution

In this hands-on workshop, your team evaluates and strengthens GenAI supply chain security through guided analysis and applied exercises. 
• Identify and document GenAI supply chain dependencies across models, libraries, and services. 
• Evaluate security and operational risks in external model libraries. 
• Design secure model hosting and transfer pipelines to reduce tampering risk. 
• Verify the integrity of open source components used in GenAI systems (a minimal checksum sketch follows this list). 
• Establish practices for tracking model provenance and usage licenses. 
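
To ground the integrity-verification and secure-transfer exercises, the sketch below shows one common approach: checking a downloaded model artifact against a published SHA-256 digest before it enters a build or serving pipeline. It is a minimal illustration only; the file path, digest, and function names are placeholder assumptions, not workshop material.

```python
# Minimal sketch: verify a downloaded model artifact against a published
# SHA-256 digest before it is used anywhere in the pipeline.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large model weights don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: Path, expected_sha256: str) -> None:
    """Raise if the artifact's checksum does not match the expected digest."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise ValueError(
            f"Integrity check failed for {path}: expected {expected_sha256}, got {actual}"
        )


# Hypothetical usage, with the digest taken from the publisher's release notes:
#   verify_artifact(Path("models/encoder-v2.safetensors"), "<published sha256>")
```

Placing this check at the point where an external artifact first enters your environment keeps tampered downloads out of every downstream build and serving step.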

Areas of Focus

Identifying Supply Chain Dependencies 
Evaluating Risks in External Model Libraries 
Securing Model Hosting and Transfer Pipelines 
Verifying Open Source Component Integrity 
Tracking Model Provenance and Usage Licenses 

Participants Will

• Gain visibility into the full supply chain supporting GenAI solutions. 
• Assess and prioritize risks introduced by external models and dependencies. 
• Apply integrity checks to open source and third-party components. 
• Secure model distribution and hosting pipelines against tampering. 
• Leave with clear practices for tracking the provenance and usage licenses of GenAI assets (a minimal record sketch follows). 
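
To show what the provenance and licensing practices can produce, here is a minimal sketch of a per-model record kept in version control; the field names, URL, and digest are illustrative assumptions rather than a prescribed schema.

```python
# Minimal sketch of a provenance record for an externally sourced model.
import json
from dataclasses import asdict, dataclass
from datetime import date


@dataclass
class ModelProvenanceRecord:
    name: str                 # internal model identifier
    version: str              # upstream version or revision pinned in the pipeline
    source_url: str           # where the artifact was obtained
    license: str              # usage license, e.g., an SPDX identifier
    sha256: str               # checksum recorded when the artifact was acquired
    acquired_on: str          # ISO date the artifact entered your environment
    approved_uses: list[str]  # internal approvals, e.g., ["internal-eval"]


record = ModelProvenanceRecord(
    name="encoder-v2",                                   # placeholder name
    version="2.1.0",
    source_url="https://example.com/models/encoder-v2",  # placeholder URL
    license="Apache-2.0",
    sha256="<digest recorded at download time>",
    acquired_on=str(date.today()),
    approved_uses=["internal-eval"],
)

# Storing records as JSON keeps the inventory easy to review, diff, and audit.
print(json.dumps(asdict(record), indent=2))
```

Keeping one such record per external model, alongside the checksum check above, gives reviewers a single place to answer where a model came from and under what terms it may be used.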

Who Should Attend

Security Architects, Platform Engineers, GenAI Engineers, and Risk and Compliance Leaders

Solution Essentials

Format: Virtual or in-person
Duration: 4 hours
Skill Level: Intermediate
Tools: Dependency mapping techniques, integrity validation concepts, and guided risk assessment exercises

Build Responsible AI into Your Core Ways of Working