Building AI Design Enablement at Scale

Scaling capability beyond the core team

Company: ServiceNow
Timeline: 2023–2025
Scope: Organization-wide Enablement
500+ Upskilled · 94% Completion · 30% Manager Confidence Uplift

Context & Stakes

AI design demand outpaced supply. Teams wanted to move fast, but lacked the foundational fluency to do so responsibly. The central AI team risked becoming a bottleneck.

If this failed: We would either slow everyone down through gatekeeping, or lose control of quality and consistency as teams shipped AI without shared understanding.

The Real Problem

Capability didn't scale with demand. If we didn't teach teams how to make good AI decisions, every decision would route through us—creating dependency instead of autonomy.

Vision & North Star

If we could empower teams with the right knowledge and frameworks, they could own AI decisions confidently, and we could focus on higher-leverage platform-level work.

Strategy & Approach

The core strategy was to scale judgment, not just knowledge. Teams needed more than definitions—they needed decision frameworks, pattern recognition, and confidence to apply responsible AI principles in context.

Strategic choices:

  • Standards as a living system: Documented patterns while products evolved, treating documentation like a product backlog with tight partner feedback loops.
  • Practice-first modules: Learning tied to real delivery moments (design reviews, governance checkpoints) instead of abstract theory.
  • Role-based tracks: Tailored content for ICs, managers, and business units—recognizing that evaluating AI work requires different skills than designing it.
  • Ambassadors as multipliers: Built a distributed network of advocates who could answer questions, reinforce patterns, and model best practices in their own teams.

Tradeoffs I made:

  • Depth vs. adoption: Prioritized foundational fluency over comprehensive coverage to get teams moving faster.
  • Control vs. speed: Created governance paths for custom solutions instead of blocking innovation with rigid standards.
  • Perfect docs vs. shipping value: Released incremental improvements continuously rather than waiting for complete documentation.

Alternatives I rejected:

  • Office-hours-only approach (too reactive, didn't scale)
  • Approval gates for all AI work (would slow teams unacceptably)
  • One-and-done training (doesn't stick without reinforcement and practice)

What I Shipped

Multi-tiered enablement program:

  • AI Design 101 (foundational training for 350+ Experience Team members globally)
  • Business-unit workshops (6 custom sessions tailored to needs requested by executives)
  • AI Design Leadership (manager-specific training on evaluating and approving responsible AI)
  • Quarterly recorded reviews (ongoing pattern updates shared org-wide)
  • Self-service resources (design review checklists, pattern decision aids, prompt guidance, evaluation rubrics)
  • AI Ambassador program (distributed support network, biweekly roundtables)
  • Corporate education partnership (collaborated with ServiceNow's education team to integrate AI design standards into the company-wide structured learning track, ensuring consistent methods and language across all design education—ultimately transitioning ownership from my team to the education function)

The hardest part

Creating and documenting standards while the products were still evolving. Staying ahead was rarely possible, so I built feedback loops with partner teams and treated documentation like a product backlog—shipping iteratively based on what teams needed most urgently.

The program structure: sequential learning layers (Foundation → Tailored → Leadership → Ongoing) supported by continuous access to self-service resources and the Ambassador network.
The cross-team collaboration model: workflow phases from Understand through Release, with activities for the AI/Scenic teams and the Platform/BU teams at each stage, designed to drive early, frequent feedback loops that keep partners informed and accelerate work through shared understanding.

AI Ambassador Program

I recruited Ambassadors from teams already shipping AI successfully and ran biweekly roundtables where they could review in-flight work, discuss new patterns, and influence platform decisions through early feedback. This gave them early access to guidance and elevated their AI fluency, while helping us understand how patterns needed to adapt across different team contexts.

Over time, they became the first line of support in our community channels—answering questions, sharing examples, and modeling best practices. This distributed expertise meant my central team could focus on platform-level problems instead of repeated 1:1 pattern coaching.

Cross-Functional Leadership

Enablement only works when it's aligned with how teams ship. I partnered across Product and Engineering to tie the curriculum to real delivery moments: design reviews, pattern adoption, and governance checkpoints.

  • Product: aligned on the "minimum bar" for AI experiences and where teams could differentiate.
  • Engineering + applied AI: translated technical constraints into design guardrails and created shared handoff expectations.
  • Legal / Responsible AI partners: ensured guidance supported policy expectations without turning it into fear-driven compliance.
  • Design leadership: equipped managers to coach AI work in their own orgs (result: +30% confidence in leading AI design initiatives).

How I got buy-in

Product leadership was initially skeptical; they saw enablement as slowing innovation in the rush to ship AI products. I reframed it in terms of senior design time: every repeated escalation cost 2–3 hours, so a 100-hour investment in shared fluency that prevented even 100–150 of those escalations would return 300+ hours otherwise spent on redundant 1:1 coaching. That math changed the conversation. Four months later, Product launched their own enablement program using the same model.

Risks, Pushback, and What Almost Derailed It

Risk: time and resourcing. The central team was already shipping core AI platform experiences—adding enablement on top risked burnout and delivery delays.

Response: I designed enablement to be incremental, modular, and community-powered. We shipped smaller improvements continuously, prioritized the highest-leverage standards first, and relied on tight partner feedback loops to validate what to codify next.

Pushback: Early on, teams wanted to move fast with bespoke AI concepts that didn't fit platform patterns. I used research to separate "platform-wide patterns" from "approved custom solutions" and created a governance path that kept innovation moving without fragmenting the platform. Teams could still differentiate—they just needed to follow responsible AI standards and document their rationale.

Human Impact and Before/After

Before: Teams depended on the AI design group for basic pattern decisions. Every question funneled to us. The support channel was one-way: we answered, they listened.

After: The community started answering itself. Ambassadors and close partners shared examples, corrected misconceptions, and helped teams apply patterns to new use cases with confidence. Our support channel became a living knowledge base where expertise was distributed, not centralized.

"Great communication, meeting facilitation, visibility into work-in-progress. High value, high return on time investment."

Justin Roozeboom

"Michelle has been my lifeline."

Amanda Chaffee
(Amanda's team had an embedded Ambassador who could answer questions in real time, eliminating the need to wait for central-team office hours.)

Outcomes & Impact

  • 500+ designers upskilled
  • 94% program completion rate
  • +40% first-pass quality improvement

Reduced bottlenecks on the central AI team and improved first-pass quality in design reviews by 40% as teams gained shared standards, vocabulary, and reusable components. The program also drove a 30% confidence increase among design managers in their ability to evaluate and approve AI designs—ensuring leadership could guide their teams with the same rigor and shared language. Most importantly, it created an active network of AI design advocates who could teach, reinforce, and evolve the practice across the org.

What I'd Do Differently

I'd keep the same core strategy, but I would change three things:

  • Secure executive sponsorship earlier: to protect time, reinforce that enablement is product leverage, and reduce "optional" participation friction.
  • Instrument learning to business moments: connect enablement signals to downstream outcomes (review turnaround, escalations, pattern adoption) instead of relying mostly on completion + confidence metrics.
  • Design the operating model sooner: clarify ownership rotation and recognition so enablement work is treated as visible, rewarded leadership behavior rather than invisible "extra" work.

Personal learning

I underestimated how much change management mattered. The content was necessary, but creating rituals, incentives, and leadership alignment is what made it stick.

Leadership Takeaway

I scale impact by turning ambiguity into systems: standards, rituals, and communities that make quality repeatable and teams more autonomous. This case demonstrates my ability to build infrastructure that outlasts individual contributions.