Plepic

What 300 Developers Taught Us About Agentic Coding

Patterns, data, and frameworks from 14 months of training real teams.

Kaido Koort · Founder · plepic.com

Kaido Koort, MSc, MA

23 years. Eight teams led. Six products launched. One handbook.

Product (role):

  • StaffLogic — Workforce Optimisation Software (Product Manager)
  • Smart-ID — Leading eID in the Baltics (Head of Development)
  • SurfCast — Wind Notification App (Founder & Product Designer)
  • Lunar Base — Strategy Game (Founder & Game Designer)
  • PSD2 — Payment Initiation Platform (CTO)
  • KaTa — Claims Handling Software (CTO)

Analysis · Strategy · Product · CTO · Game Design · Incentive Alignment · Father of Two

AI Coding for Beginners

Co-authored with Joosep Simm, 2025

The landscape

Autocomplete is not agentic coding

               Autocomplete            Chat                     Agent
What it does   Suggests next line      Answers your question    Completes the task
Your role      You write, it guesses   You copy-paste           You review, it executes
Scope          One line                One answer               Entire workflow

55%

of experienced developers now use coding agents

Up from near zero 18 months ago

Pragmatic Engineer, 2026 — 906 respondents, median 11-15 yrs experience

63.5%

of staff+ engineers use agents — the highest of any seniority

Adoption is led from the top, not the bottom.

Tool choice depends on company size

< 100 employees — Claude Code dominates. Individual choice.

10,000+ — Copilot dominates. Enterprise procurement decides.

About 1 in 8 developers just use whatever their company sets as default.

70% of developers use 2-4 AI tools simultaneously

No single tool does everything. Teams are assembling their own stacks.

Experimentation freedom correlates with results

Teams where developers can choose and experiment with AI tools report higher satisfaction and more productive use.

Teams locked to a single corporate default report the opposite.

75%

of organizations haven't captured significant value from AI

Individuals adopt fast. Organizations don't.

AI capability vs. observed usage


94% of Computer & Math tasks. 90% of Office & Admin.

Where would you place your team?

                  Individuals experimenting   Pockets of value   Organizational transformation
Tool usage        High                        High               High
Workflow change   None                        Some teams         Systematic
Measured impact   Unclear                     Localized          Company-wide

Why the gap exists

Developers predicted a 24% speedup. The measured result: 19% slower.

The feeling of speed is real. The measurement isn't.

After the study, developers still believed AI gave them a 20% speedup.

"The productivity benefits have proven difficult to conclusively prove"

CTO, sports-tech company — via Pragmatic Engineer, 2026

AI amplifies what's already there

Builders — Ship quality code faster. Frustrated by AI slop from others.

Shippers — Most enthusiastic. Ship fast. May skip quality.

Coasters — Generate output that burdens the team.

Pragmatic Engineer, April 2026 — consensus from 900+ engineers

Amazon saw "incidents with high blast radius" from AI-assisted code

Their response: mandatory senior sign-off. Structure around the tools, not removal of them.

Internal Amazon briefing, cited by Pragmatic Engineer, April 2026

What about security?

Code stays in your repo. Agent runs on your machine. Major providers don't train on API or business-tier data.

Permissions are granular — you decide what runs autonomously and what needs approval. Every change still goes through your normal PR review.

The risk profile is the same as any developer with admin rights. So are the mitigations.

5%

of companies achieve AI value at scale

Only a third of companies trained even a quarter of their workforce.

"DHH went from rejecting AI tools to calling the experience 'wearing a mech suit.' In 6 months."

Pragmatic Engineer, April 2026

Patterns from 300 developers

Hackathons revealed something unexpected

7 hackathons for 6 companies. Average score: 8.2/10.

But the consistent feedback: "Needs more time." "Needs more structured practice." "More deep dives."

Excitement doesn't equal transformation

A half-day creates awareness. Changing how someone works takes distance from daily work.

Between Jira tickets and Slack messages, rewiring a workflow doesn't happen.

What format actually produces lasting change?

The evidence pointed to three factors:

Depth over breadth — progressive skill building, not a sampler platter

Real code over tutorials — your own codebase, not a demo project

Distance from daily work — dedicated time, not "between meetings"

What the graduation projects revealed

14 developers reported time estimates on real production work after the training.

Projects ranged from Vue2→Vue3 migration (92 files) to GDPR coordination apps to C++ audio plugins.

Squad 1 participant self-reports, April 2026

The time data, with full context

Agentic workflow    Old workflow              Project type
~10h (team)         1-2 weeks                 Large migration
1-2 days            1+ month                  New app from scratch
4-5h                2-3 days                  Feature build
2 days              2 weeks                   Legacy overhaul
7 min setup         2 hours                   Project scaffolding
9h                  ~80h (10 work days)       SDK development
38h (in progress)   80h (original estimate)   Complex feature
24h                 ~48h                      Medium feature

Squad 1 participant self-reports, April 2026

Several reported the work wouldn't have happened at all

"Couple of hours → impossible without"
"Saves weeks and months. 2-3 months traditional vs completed with AI"

These weren't speed improvements. They were scope unlocks.

Squad 1 participant self-reports, April 2026

The four conditions that made it click

Not experience level. Not programming language. Not company size.

Structured workflow + practice on real code + dedicated time + peer & instructor feedback

The harness pattern emerged independently across teams

Every team that sustained results ended up building the same thing: context, guardrails, and workflow around the agent.

What the harness looks like in practice

Without a harness

Every AI session starts from scratch. No project context. No guardrails. No verification.

Copy-paste from chat. Hope it works. Debug when it doesn't.

With a harness, the agent knows your system

It reads your C4 architecture model — system boundaries, components, relationships.

Every session starts with full context. Not a blank slate.
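One way to picture "full context, not a blank slate": prepend a committed architecture summary to every session prompt. This is a minimal sketch, not the actual harness; the file name `architecture.md` and the function name are hypothetical stand-ins for whatever your team commits as its C4 summary.

```python
from pathlib import Path

def build_session_context(repo_root: str, task: str) -> str:
    """Assemble an agent prompt that starts from the project's C4
    architecture summary instead of an empty context window.

    `architecture.md` is an illustrative file name; any committed
    architecture summary plays the same role.
    """
    arch_file = Path(repo_root) / "architecture.md"
    architecture = (
        arch_file.read_text() if arch_file.exists()
        else "(no architecture model found)"
    )
    return (
        "## System architecture (C4 summary)\n"
        f"{architecture}\n\n"
        "## Task\n"
        f"{task}\n"
    )
```

The point is only that context assembly is automated and versioned with the repo, so every developer's sessions start from the same map of the system.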

A brainstorming skill turns a vague idea into a structured spec

Before any code: requirements, constraints, edge cases — explored through dialogue.

Then the design doc gets validated — automatically

Before a single line of code is written.

10

independent agents review every design doc

Security. Performance. Accessibility. API consistency. Data model. Error handling...
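The fan-out can be sketched as independent reviewers run in parallel over the same doc. This is an assumption-laden toy, not the real harness: the dimension names come from the slide, and `review` is a stand-in for an LLM call with a dimension-specific prompt.

```python
from concurrent.futures import ThreadPoolExecutor

# Review dimensions from the slide; the list is illustrative, not exhaustive.
REVIEW_DIMENSIONS = [
    "security", "performance", "accessibility",
    "api-consistency", "data-model", "error-handling",
]

def review(dimension: str, design_doc: str) -> dict:
    """Stand-in for one independent review agent. A real implementation
    would prompt a model on this dimension; here we just flag empty docs."""
    issues = [] if design_doc.strip() else [f"{dimension}: design doc is empty"]
    return {"dimension": dimension, "issues": issues}

def validate_design(design_doc: str) -> list[dict]:
    """Fan the doc out to every reviewer concurrently, collect all findings."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda d: review(d, design_doc), REVIEW_DIMENSIONS))
```

Because each reviewer sees only its own dimension, findings don't dilute each other; the doc either clears all of them or goes back for revision.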

Only after validation does implementation begin

The agent writes code that already conforms to the reviewed spec.

Less rework. Fewer surprises in code review.

Agents don't just write code

They open a browser and test what they built.

The full pipeline

Map your architecture → brainstorm the feature → validate the design → implement → test in browser → open PR

Each step is an agent skill. Each skill is reusable across the team.
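The "each step is a skill" idea can be sketched as a simple composition: every skill consumes the previous step's artifact and produces the next. The lambdas below are hypothetical placeholders for real agent skills.

```python
def run_pipeline(task, skills):
    """Run agent skills in order; each skill transforms the previous artifact."""
    artifact = task
    for skill in skills:
        artifact = skill(artifact)
    return artifact

# Illustrative stand-ins for the pipeline steps on the slide.
pipeline = [
    lambda t: f"spec({t})",       # brainstorm -> structured spec
    lambda s: f"validated({s})",  # design validation
    lambda s: f"code({s})",       # implementation
    lambda c: f"tested({c})",     # browser test
    lambda c: f"pr({c})",         # open PR
]
```

Modeling skills as plain functions over artifacts is what makes them reusable: any team member can swap, reorder, or rerun a step without touching the rest.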

One developer. Three features. Parallel.

The role shifts from writing code to directing work.

The agent watches itself and improves

Introspection skills analyze session patterns — what worked, what wasted time, what to change.

"The greatest value was the shift in mindset. The focus wasn't on using AI as a tool, but on building an architecture around it."

Kristina Krist, Mindworks Industries

"Before, AI felt like experimentation. Now I clearly understand how to make it actually work for me."

Joonas Honga, Mediplan

We didn't learn this from documentation

We watched 300 developers build it. Over 14 months. Across 20+ companies.

The patterns that work — and the ones that don't — come from that.

For those exploring further

If this resonated, here's what we offer

6-week structured training — 6 Fridays, max 20 developers, your own codebase

From C4 architecture to fleet mode. Graduation project ships real code.

8.7/10 · NPS 50 from experienced Estonian developers

plepic.com/training

Hiring takes months. Upskilling 4 devs adds a 5th.

A senior developer hire in Estonia: 3-6 months of search, loaded cost well past €50K before they ship anything.

Train the 4 you already have to each ship modestly more — same capacity gain. Faster. Lower risk.

€504 per developer with Töötukassa subsidy

State covers 80% of training costs (€2,500 max per employee).

Less than one day of your senior developer's time — to train someone for 6 weeks.

This subsidy only applies to certified external training. Not internal workshops.

Let's talk.

Web
plepic.com
Training
plepic.com/training
Mail
kaido@plepic.com
Phone
+372 5077 333
Discord
#claude-code-ee · 180 members
Kaido Koort

Digital Change Agent · Founder @Plepic

linkedin.com/in/kaidokoort