Engineers won’t be replaced by tools that do their tasks better; they’ll be replaced by systems that make those tasks nonessential.

Sangeet Paul Choudary wrote an insightful piece on AI-driven job displacement that offers a more transformative way to think about it:


To truly understand how AI affects jobs, we must look beyond individual tasks to comprehend AI’s impact on our workflows and organizations.

 

The task-centric view sees AI as a tool that improves how individual tasks are performed. Work remains structurally unchanged. AI is simply layered on top to improve speed or lower costs. …In this framing, the main risk is that a smarter tool might replace the person doing the task.

 

The system-centric view, on the other hand, looks at how AI reshapes the organization of work itself. It focuses on how tasks fit into broader workflows and how their value is determined by the logic of the overall system. In this view, even if tasks persist, the rationale for grouping them into a particular job, or even performing them within the company, may no longer hold once AI changes the system’s structure.

If we adopt a system-centric view, how does the role of a software engineer evolve¹? I’ve had a notion for some time — the role will transform into a software “conductor”.

Software conductors

The name borrows from music conductors: conducting is “the art of directing the simultaneous performance of several players or singers by the use of gesture.”

The tasks a software conductor must master differ from those of today’s software engineer. Here are some of the shifts I can think of:

Task Orchestration Mastery

The craft is knowing exactly how much detail to provide in prompts: too little and models thrash; too much and they overfit or hallucinate constraints. You’ll need to write spec-grade prompts that define interfaces, acceptance criteria, and boundaries — chunking work into units atomic enough for clear execution yet large enough to preserve context. Equally critical: recognizing when to interrupt and redirect — catching drift early and steering with surgical edits rather than expensive reruns or loops.
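To make this concrete, here is a minimal sketch of what a spec-grade prompt might look like. The structure and field names are my own illustration, not an established standard:

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """One atomic unit of work handed to a coding agent."""
    goal: str                    # what to build, in one sentence
    interface: str               # the exact signature or contract to satisfy
    acceptance: list[str]        # testable criteria; the agent stops when these pass
    boundaries: list[str] = field(default_factory=list)  # what the agent must NOT touch

    def to_prompt(self) -> str:
        # Enough structure that the model can't thrash, little enough
        # that it doesn't invent constraints to satisfy.
        parts = [
            f"Goal: {self.goal}",
            f"Implement exactly this interface: {self.interface}",
            "Acceptance criteria:",
            *(f"- {c}" for c in self.acceptance),
        ]
        if self.boundaries:
            parts += ["Do not modify:", *(f"- {b}" for b in self.boundaries)]
        return "\n".join(parts)

spec = TaskSpec(
    goal="Add rate limiting to the public API",
    interface="def check_rate_limit(user_id: str) -> bool",
    acceptance=["returns False after 100 calls/minute per user",
                "existing tests in tests/api/ still pass"],
    boundaries=["auth module", "database schema"],
)
print(spec.to_prompt())
```

The point isn't this exact schema; it's that every delegated task carries its interface, its acceptance criteria, and its boundaries, so the agent knows both what done means and where it must not wander.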

AI-Compatible System Design

You’ll need to design systems that AI can both navigate and extend elegantly. This means clear module boundaries with explicit interfaces, descriptive naming that models can infer purpose from, and tests that double as executable specs. The goal: systems where AI agents can make surgical changes quickly and efficiently without cascading tech debt.
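One way this might look in practice, sketched with Python's typing.Protocol (the PaymentGateway example is hypothetical):

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """An explicit interface at a module boundary: agents extend
    implementations without touching callers."""
    def charge(self, user_id: str, amount_cents: int) -> str:
        """Charge the user and return a transaction id."""
        ...

class FakeGateway:
    """In-memory implementation used by tests."""
    def charge(self, user_id: str, amount_cents: int) -> str:
        return f"tx-{user_id}-{amount_cents}"

def test_charge_returns_transaction_id():
    # The test doubles as an executable spec: an agent adding a new
    # gateway knows it is done when tests like this pass against it.
    gateway: PaymentGateway = FakeGateway()
    tx_id = gateway.charge(user_id="u1", amount_cents=500)
    assert isinstance(tx_id, str) and tx_id
```

Descriptive names plus a test that states the contract give an agent everything it needs to make a surgical change without guessing.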

Parallel Experimentation

We’re moving from building one solution to exploring many simultaneously. This unlocks three levels of experimentation:

Feature variants — Build competing product approaches in parallel. One agent implements phone-only authentication while another builds traditional email/password. Both ship behind feature flags. Let users decide which wins.

Implementation variants — Build the same feature with different architectures. Redis caching on path A, SQLite on path B. Run offline benchmarks and online canaries to measure which performs better under real load.

Personalized variants — The most radical shift: stop looking for a single winner, because each user might get their own variant. Not just enterprise vs consumer, but individual-level personalization where the system learns what works for you specifically. Power users get keyboard shortcuts and dense information; casual users get guided flows with progressive disclosure. Users who convert on social proof see testimonials; analytical users see feature comparisons. AI makes the economics work — what was prohibitively expensive (maintaining thousands of personalized codepaths manually) becomes viable when AI generates, tests, and synchronizes variants automatically.

The skill: running rigorous evals, measuring trade-offs with metrics, and orchestrating the complexity of multiple live variants.
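As a rough sketch of the routing layer behind all three levels (the experiments, variant names, and bucketing scheme here are invented for illustration):

```python
import hashlib

# Hypothetical experiment registry: each entry maps a flag to competing
# variants, each built by a different agent and shipped in parallel.
EXPERIMENTS = {
    "auth_flow": ["phone_only", "email_password"],   # feature variants
    "cache_backend": ["redis", "sqlite"],            # implementation variants
}

def assign_variant(experiment: str, user_id: str) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    variants = EXPERIMENTS[experiment]
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Routing a request:
if assign_variant("auth_flow", user_id="u42") == "phone_only":
    ...  # agent A's implementation
else:
    ...  # agent B's implementation
```

The same deterministic bucketing extends to personalized variants: swap the static variant list for one the system generates and updates per user or segment, and the router stays unchanged.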

Real-time Cost-Performance Arbitrage

Every API call has a price, a latency budget, and quality trade-offs. You’ll need to master arbitrage between expensive reasoning models and cheaper models, knowing when to leverage MCPs, local tools, or cloud APIs. Learn how models approach refactors differently from new features or bug fixes, then tune prompts, context windows, and routing strategies accordingly.
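A sketch of what that routing might look like; the model names and per-call prices below are placeholders, not real quotes:

```python
# Hypothetical routing table: task type -> (model, approx cost per call).
ROUTES = {
    "bug_fix":     ("small-fast-model",  0.15),   # cheap, low latency
    "refactor":    ("mid-tier-model",    1.00),   # needs wider context
    "new_feature": ("frontier-reasoner", 15.00),  # worth the spend
}

def route(task_type: str, budget_per_call: float) -> str:
    """Pick the preferred model for the task, degrading to cheaper
    tiers when the budget does not allow it."""
    model, cost = ROUTES.get(task_type, ROUTES["bug_fix"])
    if cost <= budget_per_call:
        return model
    affordable = [(m, c) for m, c in ROUTES.values() if c <= budget_per_call]
    if not affordable:
        raise ValueError("no model fits this budget")
    # The priciest affordable route is (roughly) the most capable one.
    return max(affordable, key=lambda mc: mc[1])[0]

print(route("new_feature", budget_per_call=2.00))  # falls back to mid-tier-model
```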

Observability & Evals as a Discipline

You’ll need to build golden test sets, trace model runs, classify failure modes, and treat evals like unit tests. Evaluation frameworks with baseline datasets, regression suites, and automated canaries that catch quality drift before production become non-negotiable. Without observability, you can’t iterate safely or validate that changes actually improve outcomes.
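A minimal sketch of evals-as-unit-tests; the golden cases and the model stub are placeholders:

```python
# Hypothetical golden set: curated input/expectation pairs, checked
# into the repo and versioned like any other test fixture.
GOLDEN = [
    {"input": "Refund order #123", "must_contain": "refund"},
    {"input": "Cancel my subscription", "must_contain": "cancel"},
]

def run_model(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return prompt.lower()

def test_golden_set():
    # A regression here blocks the deploy, exactly like a failing unit test.
    failures = [case for case in GOLDEN
                if case["must_contain"] not in run_model(case["input"])]
    assert not failures, f"quality drift on {len(failures)} golden cases: {failures}"
```

The same harness, pointed at a canary slice of traffic, is what catches quality drift before it reaches production.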

Generalist Thinking Over Specialist Skills

Framework fluency loses value when AI handles syntax. What matters is depth in three areas:

Core computer science fundamentals — Not because AI doesn’t know them, but because you need to verify AI made the right trade-offs for your specific constraints. AI might use quicksort when your dataset is always 10 items. It might optimize a function that runs once a day while missing the N+1 query in your hot path — where you loop through 1000 users making a database call for each instead of batching (sketched in code after this list). Your value is code review with context: catching when AI optimizes for the wrong thing, knowing when simple beats clever, and spotting performance cliffs before they ship.

Product judgment — Knowing which problem to solve, not just how to solve it. AI can build any feature you describe, but it can’t tell you whether that feature matters. Understanding user needs, prioritizing ruthlessly, and recognizing when you’re overbuilding becomes the bottleneck.

Domain expertise — Deep knowledge of your problem space — whether it’s payments, healthcare, logistics, or graphics. AI can write generic code, but it struggles with domain-specific edge cases, regulations, and the unwritten rules experts know. The more niche your expertise, the harder you are to replace.
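To make that N+1 example concrete, here is a minimal sketch of the pattern and its fix; `db` is a hypothetical database handle with an illustrative `query` API, not a specific library:

```python
def load_orders_n_plus_one(db, user_ids):
    # 1 query for the list plus 1 query PER user:
    # 1001 round trips for 1000 users.
    return {uid: db.query("SELECT * FROM orders WHERE user_id = ?", uid)
            for uid in user_ids}

def load_orders_batched(db, user_ids):
    # One round trip, then group rows in memory.
    placeholders = ",".join("?" * len(user_ids))
    rows = db.query(
        f"SELECT * FROM orders WHERE user_id IN ({placeholders})",
        *user_ids,
    )
    orders = {uid: [] for uid in user_ids}
    for row in rows:
        orders[row["user_id"]].append(row)
    return orders
```

AI will happily write either version; the conductor’s job is noticing which one just shipped.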


These are the skills that matter for the next three years. But I don’t have a crystal ball beyond that. At the pace AI is evolving, even conductors might become a role that AI plays better. The orchestration itself could be automated, leaving us asking the same questions about the next evolution.

For now, learning to conduct is how we stay relevant.


  1. Companies will change how they ship too, but the nearer shift is in the individual’s role, so that’s my focus for this post. ↩︎