Accelerated Innovation

Iteratively Tuning Your GenAI Solutions

Optimizing Your Model(s)

Workshop
Do you know which model changes actually improve performance—and which ones just increase cost or latency?

As teams iterate on GenAI solutions, model choices, tuning strategies, and optimization techniques quickly multiply. Without a structured evaluation approach, it becomes difficult to understand tradeoffs or justify model decisions.

To win, your GenAI solutions must use models that are empirically evaluated, deliberately tuned, and optimized for real-world cost, speed, and accuracy constraints.

The Challenge

When model optimization lacks discipline, teams struggle to make confident decisions:

  • Evaluating performance: Teams compare models across tasks without consistent metrics or task-level benchmarks.
  • Choosing tuning strategies: Teams tune hyperparameters, embeddings, prompts, or fine-tunes without knowing which levers matter most.
  • Balancing tradeoffs: Teams improve accuracy at the expense of cost or latency, or cut cost while degrading output quality.

These challenges lead to inefficient spending, unstable performance, and slow progress toward production readiness.

Our Solution

In this hands-on workshop, your team systematically evaluates and optimizes GenAI models using structured experiments and comparative analysis (a minimal sketch of one such experiment follows the list below).

  • Evaluate model performance across tasks using consistent, task-specific criteria.
  • Tune hyperparameters and embeddings to improve task alignment and output quality.
  • Compare open versus closed source model alternatives using defined tradeoff frameworks.
  • Optimize models for cost, speed, and accuracy based on workload requirements.
  • Track and contrast the impact of fine-tuning versus prompt engineering approaches.
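
For illustration, below is a minimal Python sketch of the kind of evaluation harness these exercises build on. The call_model stub, the model names, the case set, and the per-call costs are all hypothetical placeholders, not workshop materials; the point is that every candidate runs against the same cases and the same pass criterion, so accuracy, latency, and cost are directly comparable.

    import time
    from dataclasses import dataclass

    @dataclass
    class EvalResult:
        model: str
        task: str
        accuracy: float   # fraction of cases passing the task criterion
        latency_s: float  # mean wall-clock seconds per call
        cost_usd: float   # estimated spend for the full run

    def call_model(model: str, prompt: str) -> str:
        # Stub: replace with your provider's client call.
        return "placeholder output"

    def evaluate(model, task, cases, criterion, cost_per_call):
        # Score one model on one task with a fixed case set and criterion.
        passes, elapsed = 0, 0.0
        for prompt, expected in cases:
            start = time.perf_counter()
            output = call_model(model, prompt)
            elapsed += time.perf_counter() - start
            passes += int(criterion(output, expected))
        n = len(cases)
        return EvalResult(model, task, passes / n, elapsed / n, cost_per_call * n)

    # Same cases, same criterion, two candidates (names and costs invented):
    cases = [("Summarize: ...", "expected summary")]
    exact = lambda out, exp: out.strip() == exp
    for m, cost in [("model-a", 0.002), ("model-b", 0.0004)]:
        print(evaluate(m, "summarize", cases, exact, cost))

Holding the cases and criterion fixed is what turns ad hoc spot checks into a comparison you can defend.
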
Area of Focus

  • Evaluating Model Performance Across Tasks
  • Tuning Hyperparameters and Embeddings
  • Exploring Open vs. Closed Source Alternatives
  • Optimizing Models for Cost, Speed, and Accuracy (see the tradeoff sketch after this list)
  • Tracking Fine-Tuning vs. Prompt Engineering Impact
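
To make the cost/speed/accuracy focus concrete, here is one possible way to fold the three measurements into a single comparable score. The weights and budget ceilings are illustrative assumptions, as are the example figures; in practice they come from your workload's requirements.

    def tradeoff_score(accuracy, latency_s, cost_usd,
                       w_acc=0.6, w_lat=0.2, w_cost=0.2,
                       max_latency_s=2.0, max_cost_usd=0.01):
        # Higher is better; latency and cost become penalties in [0, 1]
        # relative to the workload's assumed budget ceilings.
        lat_penalty = min(latency_s / max_latency_s, 1.0)
        cost_penalty = min(cost_usd / max_cost_usd, 1.0)
        return w_acc * accuracy - w_lat * lat_penalty - w_cost * cost_penalty

    # A cheaper, slightly less accurate model can win once latency and
    # cost carry real weight (figures are made up for illustration):
    print(tradeoff_score(accuracy=0.91, latency_s=1.8, cost_usd=0.008))  # ~0.21
    print(tradeoff_score(accuracy=0.86, latency_s=0.4, cost_usd=0.001))  # ~0.46

A linear score like this is only one choice; hard constraints such as a latency SLO are often better expressed as filters applied before scoring.
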
Participants Will

  • Compare and select models based on evidence rather than vendor claims.
  • Apply tuning techniques that measurably improve task-level performance.
  • Understand when fine-tuning outperforms prompt engineering—and when it doesn’t.
  • Balance cost, latency, and accuracy tradeoffs with greater confidence.
  • Establish a repeatable process for ongoing model evaluation and optimization.

Who Should Attend

  • Technical Product Managers
  • Solution Architects
  • ML Engineers
  • GenAI Engineers
  • Ops, SRE, and Reliability Leaders
  • Engineering Managers

Solution Essentials

Format: Facilitated workshop (in-person or virtual)
Duration: 4 hours
Skill Level: Intermediate
Tools: GenAI models, evaluation artifacts, tuning exercises, and comparative analysis tools

Ready to replace ad hoc model tuning with repeatable, data-driven decisions?