Patterns, data, and frameworks from 14 months of training real teams.
Kaido Koort · Founder · plepic.com
Roles: Analysis · Strategy · Product · CTO · Game Design · Incentive Alignment · Father of Two
Co-authored with Joosep Simm, 2025
Up from near zero 18 months ago
Adoption is led from the top, not the bottom.
< 100 employees — Claude Code dominates. Individual choice.
10,000+ — Copilot dominates. Enterprise procurement decides.
About 1 in 8 developers simply use whatever their company sets as the default.
No single tool does everything. Teams are assembling their own stacks.
Teams where developers can choose and experiment with AI tools report higher satisfaction and more productive use.
Teams locked to a single corporate default report the opposite.
Individuals adopt fast. Organizations don't.
94% of Computer & Math tasks. 90% of Office & Admin.
The feeling of speed is real. The measurement isn't.
After the study, developers still believed AI gave them a 20% speedup.
"The productivity benefits have proven difficult to conclusively prove." — CTO, sports-tech company, via Pragmatic Engineer, 2026
Builders — Ship quality code faster. Frustrated by AI slop from others.
Shippers — Most enthusiastic. Ship fast. May skip quality.
Coasters — Generate output that burdens the team.
Their response: mandatory senior sign-off. Structure around the tools, not removal of them.
Code stays in your repo. Agent runs on your machine. Major providers don't train on API or business-tier data.
Permissions are granular — you decide what runs autonomously and what needs approval. Every change still goes through your normal PR review.
The risk profile is the same as any developer with admin rights. So are the mitigations.
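As an illustration of how granular those permissions can be: in Claude Code, rules live in a project-level settings file. The specific rules below are hypothetical examples for one team, not a recommended policy:

```json
{
  "permissions": {
    "allow": [
      "Read(src/**)",
      "Bash(npm run test:*)"
    ],
    "ask": [
      "Bash(git push:*)"
    ],
    "deny": [
      "Read(.env)",
      "Bash(rm:*)"
    ]
  }
}
```

Anything under "allow" runs autonomously, "ask" pauses for approval, and "deny" is blocked outright — the same shape of control you would give any developer with admin rights.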
Only 1 in 3 companies trained even a quarter of their workforce.
"DHH went from rejecting AI tools to describing them as 'wearing a mech suit.' In 6 months." — Pragmatic Engineer, April 2026
7 hackathons for 6 companies. Average score: 8.2/10.
But the consistent feedback: "Needs more time." "Needs more structured practice." "More deep dives."
A half-day creates awareness. Changing how someone works takes distance from daily work.
Between Jira tickets and Slack messages, rewiring a workflow doesn't happen.
The evidence pointed to three factors:
Depth over breadth — progressive skill building, not a sampler platter
Real code over tutorials — your own codebase, not a demo project
Distance from daily work — dedicated time, not "between meetings"
14 developers reported time estimates on real production work after the training.
Projects ranged from Vue2→Vue3 migration (92 files) to GDPR coordination apps to C++ audio plugins.
"Couple of hours → impossible without"
"Saves weeks and months. 2-3 months traditional vs completed with AI"
These weren't speed improvements. They were scope unlocks.
Not experience level. Not programming language. Not company size.
Structured workflow + practice on real code + dedicated time + peer & instructor feedback
Every team that sustained results ended up building the same thing: context, guardrails, and workflow around the agent.
Every AI session starts from scratch. No project context. No guardrails. No verification.
Copy-paste from chat. Hope it works. Debug when it doesn't.
The agent reads your C4 architecture model — system boundaries, components, relationships.
Every session starts with full context. Not a blank slate.
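For teams without a machine-readable architecture model yet, a C4 model can be captured in a few lines — for example in the Structurizr DSL. The system and container names below are placeholders, not from any real project:

```
workspace {
    model {
        user = person "Customer"
        shop = softwareSystem "Web Shop" {
            web = container "Web App" "Vue 3 frontend"
            api = container "API" "Order and catalog endpoints"
            db  = container "Database" "PostgreSQL"
        }
        user -> web "Browses and orders"
        web -> api "Calls via JSON over HTTPS"
        api -> db "Reads from and writes to"
    }
}
```

A file like this, checked into the repo, is what gives every session its starting context.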
Before any code: requirements, constraints, edge cases — explored through dialogue.
Before a single line of code is written.
Security. Performance. Accessibility. API consistency. Data model. Error handling...
The agent writes code that already conforms to the reviewed spec.
Less rework. Fewer surprises in code review.
They open a browser and test what they built.
Map your architecture → brainstorm the feature → validate the design → implement → test in browser → open PR
Each step is an agent skill. Each skill is reusable across the team.
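In Claude Code, for example, a reusable skill is a Markdown file with YAML frontmatter (e.g. `.claude/skills/validate-design/SKILL.md`) checked into the repo. The file below is a hypothetical sketch of the "validate the design" step, not a real team's skill:

```markdown
---
name: validate-design
description: Review a proposed feature design against the C4 model and the team checklist before any code is written.
---

1. Load the C4 architecture model and identify which containers the feature touches.
2. Walk the checklist: security, performance, accessibility, API consistency, data model, error handling.
3. List open questions and edge cases; stop and ask the developer before implementing.
```

Because the skill is a file in the repo, every developer on the team invokes the same step the same way.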
The role shifts from writing code to directing work.
Introspection skills analyze session patterns — what worked, what wasted time, what to change.
"The greatest value was the shift in mindset. The focus wasn't on using AI as a tool, but on building an architecture around it." Kristina Krist, Mindworks Industries
"Before, AI felt like experimentation. Now I clearly understand how to make it actually work for me." Joonas Honga, Mediplan
We watched 300 developers build it. Over 14 months. Across 20+ companies.
The patterns that work — and the ones that don't — come from that.
6-week structured training — 6 Fridays, max 20 developers, your own codebase
From C4 architecture to fleet mode. Graduation project ships real code.
8.7/10 · NPS 50 from experienced Estonian developers
plepic.com/training
A senior developer hire in Estonia: 3-6 months of search, loaded cost well past €50K before they ship anything.
Train the 4 you already have to each ship modestly more — same capacity gain. Faster. Lower risk.
State covers 80% of training costs (€2,500 max per employee).
After the subsidy, the cost is less than one day of a senior developer's time — for 6 weeks of training.
This subsidy only applies to certified external training. Not internal workshops.
Digital Change Agent · Founder @Plepic