TL;DR
- In just 8 weeks, we upskilled over 500 engineers, including developers, QA, and data professionals, on AI-powered development practices.
- It was not another training course. It was an enterprise-wide transformation sprint with real-world outcomes, curated tooling, secure guardrails, and a grassroots learning movement led by internal “AI anchors.”
- The result? Tangible productivity gains, developer-led innovation, and a step closer to becoming an AI-native organization.
From Legacy Teams to AI-Native Builders: How We Trained 500+ Developers in 60 Days
We didn’t wait for AI transformation to happen to us. We built the system to make it real: fast, governed, and organization-wide.
Two months ago, we asked ourselves:
What if every developer, QA, and data engineer in our org could build with AI like it was second nature?
While pockets of our engineering org had started exploring AI tools like GitHub Copilot, Amazon Q, and Cursor, adoption wasn’t widespread. Most teams were still operating on traditional development cycles. That gap had to close, and fast.
So we put that to the test. In just 60 days, we equipped 500+ developers, QA, and data professionals with the tools, workflows, and mindset to become AI-native inside delivery pipelines. It was a rewrite of how we enable AI-powered engineering at scale.
Here’s how we made it happen and what changed when AI became part of our muscle memory.
Why We Had to Rethink Enablement - The Challenge
We weren’t chasing a trend. We were meeting a demand.
Upskilling at scale is never easy. And with over 500 developers, QA, and data engineers across distributed delivery teams, this wasn’t going to be a linear LMS rollout.
Key constraints:
- No separate bench time or off-days. All learning had to happen alongside project delivery.
- Guardrails were critical: no client IP exposure, secure private repos only, and local pre-commit hooks to prevent sensitive data leakage.
- Learning needed to be contextual, not generic. We curated domain-specific tracks across development, QA, and data.
We realised: giving access to Copilot or Amazon Q wasn’t enough. True transformation required a system. So we built one:
- Structured learning paths by role
- Tooling integrated into projects
- Zero downtime or learning silos
- Governance embedded from day one
How the 60-Day Sprint Was Designed
This was a staged transformation sprint, embedded into delivery cycles. Here’s how we pulled it off:
- Anchor-Led Learning
We handpicked 25+ “AI anchors”, not because they were AI experts, but because they were self-driven, trusted by their peers, and able to lead by example. Each anchor guided ~30 learners across functions, mentored them weekly, and escalated blockers in real time.
- Curated Learning Material
We didn’t point people to a list of AI tutorials and hope for the best. Week-by-week, we curated videos, tools, and tasks tailored to our tech stack and project needs. For example:
Week 1: Tool access, prompt basics, project mapping
Week 2: Prompt chaining and reasoning workflows
Week 3: Backend/frontend development using Copilot/Cursor
Week 4: Test generation, BDD, and coverage improvement
Week 5: Static analysis and performance tuning with LLMs
Week 6–7: Estimation, architecture augmentation, agentic previews
Week 8: Showcase week + assessment
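The prompt chaining covered in Week 2 boils down to one idea: each step’s output becomes context for the next prompt. A minimal sketch of that workflow, where `run_chain` and the model interface are illustrative rather than our actual tooling:

```python
def run_chain(llm, task):
    """Prompt chaining sketch: each step's output feeds the next prompt.

    `llm` is any callable that takes a prompt string and returns a response
    string (a real LLM client, or a stub for testing).
    """
    plan = llm(f"Break this task into numbered steps: {task}")    # step 1: plan
    draft = llm(f"Following this plan, write the code:\n{plan}")  # step 2: generate
    review = llm(f"Review this code and list fixes:\n{draft}")    # step 3: critique
    return review
```

In practice, learners ran chains like this interactively inside Copilot or Cursor rather than in code, but the workflow, plan, then generate, then critique, is the same.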
- Trackable Progress & Live Dashboards
We built custom internal apps (yes, using AI): dashboards to track weekly progress, assignment completion, and team-level gaps. While these helped with reporting, they also let us dynamically refocus attention on individual squads.
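The squad-level gap detection behind those dashboards can be reduced to a few lines. Everything below, the record shape, the squad names, and the 75% threshold, is a hypothetical sketch, not our production app:

```python
from collections import defaultdict

# Hypothetical progress records: (squad, learner, week, assignment_completed).
RECORDS = [
    ("squad-a", "dev1", 3, True),
    ("squad-a", "dev2", 3, False),
    ("squad-b", "qa1", 3, True),
    ("squad-b", "qa2", 3, True),
]

def squad_completion(records, week):
    """Percentage of learners per squad who finished the given week's assignment."""
    done, total = defaultdict(int), defaultdict(int)
    for squad, _learner, wk, completed in records:
        if wk == week:
            total[squad] += 1
            done[squad] += int(completed)
    return {squad: round(100 * done[squad] / total[squad]) for squad in total}

def squads_needing_attention(records, week, threshold=75):
    """Flag squads below the threshold so anchors can refocus on them."""
    return [s for s, pct in squad_completion(records, week).items() if pct < threshold]
```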
- Real Project Relevance
Assignments weren’t abstract. Where client approvals allowed, teams applied AI tools directly within project sprints. Elsewhere, we provided public repos with intentional code smells, asking devs to identify and refactor with AI assistants.
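To give a flavor of those seeded exercises, here is the kind of smell a practice repo might contain (a mutable default argument plus tangled responsibilities) alongside the refactor an AI assistant would be prompted toward. Both functions are illustrative, not taken from our repos:

```python
# Intentional smell, as seeded in a practice repo: a mutable default argument
# silently shares state across calls, and parsing is tangled with accumulation.
def parse_prices_smelly(lines, results=[]):
    for line in lines:
        results.append(float(line.strip().replace("$", "")))
    return results

# The refactor target: no shared mutable state, one responsibility per function.
def parse_price(line: str) -> float:
    """Parse a single '$9.99'-style line into a float."""
    return float(line.strip().replace("$", ""))

def parse_prices(lines: list[str]) -> list[float]:
    """Parse many price lines; a fresh list is returned on every call."""
    return [parse_price(line) for line in lines]
```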
- Security-First Enablement
Sensitive projects were off-limits for AI tool integration. To manage this:
- We deployed private repositories for all training use.
- Implemented pre-commit hooks to block insecure pushes.
- Trained all anchors and managers on AI-safe practices before rollout.
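A hook of the kind mentioned above can be sketched in a few lines of Python. The secret patterns and file handling here are simplified assumptions; a production rollout would lean on a vetted scanner’s rule set rather than hand-rolled regexes:

```python
#!/usr/bin/env python3
"""Pre-commit hook sketch: block commits that contain likely secrets."""
import re
import subprocess

# Illustrative patterns only; a real hook would use a maintained rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private keys
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S{16,}"),          # generic API keys
]

def find_secrets(text: str) -> list[str]:
    """Return every secret-pattern match found in the given text."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def staged_files() -> list[str]:
    """List files staged for commit (added, copied, or modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def scan_staged() -> int:
    """Return a non-zero exit code if any staged file looks like it leaks a secret."""
    blocked = 0
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                if find_secrets(fh.read()):
                    print(f"BLOCKED: {path} appears to contain a secret")
                    blocked = 1
        except OSError:
            continue  # deleted or unreadable paths are skipped
    return blocked

# Installed as .git/hooks/pre-commit, the script would end with:
#     raise SystemExit(scan_staged())
```

A non-zero exit code from a pre-commit hook aborts the commit, which is what makes this a guardrail rather than a warning.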
Some PMs who hadn’t coded in years built full apps to track their team’s AI adoption. Others built prompt libraries, repo dashboards, or utility bots.
All of this culminated in a live tech showcase where squads demoed AI-powered solutions they built during the challenge.
What Changed
Despite being optional in spirit and compressed in time, the impact was undeniable:
- 100+ engineers went beyond assignments to build their own AI-powered apps, trackers, and utilities, some of which are now adopted across our org.
- Several teams replaced manual trackers with AI-generated dashboards for project velocity, completion, and peer benchmarking.
- PMs and senior leaders who hadn’t coded in years shipped internal tools using LLM-assisted platforms, proving the accessibility of modern AI dev.
- We’re now launching Phase 2: a deeper agentic AI track for our cohort.
What We Learned
- AI-native is a mindset reset. The success of this initiative was about engineers rethinking how they build, test, and ship software.
- Anchors > Instructors. Peer-led learning created far more momentum than any top-down training could. The community sustained the pace.
- Structure matters. Without weekly schedules, dashboards, and visible metrics, this initiative would’ve collapsed under the weight of daily delivery demands.
- AI adoption requires governance from Day 1. Security, licensing, hallucination handling, and guardrails must be baked into the rollout.
Most importantly, it gave us a cultural shift: engineers no longer wait for L&D or tooling teams. They self-serve, self-test, and drive enablement forward.
What’s Next
- Launching an advanced agentic AI pod for orchestrated workflows
- Refining our internal tooling benchmarks for AI-code quality
- Helping clients deploy the same system inside their orgs
“This started as an initiative. It turned into a movement. Seeing teams push through learning curves, ship real value, and own their transformation is nothing short of brilliance. We’ve built something powerful here. The next chapter will be even bolder.”
— Abarna Visvanathan, Group Project Manager
AI should be an org-wide initiative
We didn’t run a training program. We ran an org-wide rehearsal for the kind of future we’re building toward, one where AI is not an assistant but a teammate.
This sprint was our way of asking: What if every engineer in your org was AI-native? Not hypothetically. Systematically.
Now we know the answer. Let's talk AI.