When Everyone Becomes an Editor: The Shift from Making to Reviewing in the AI Era
Update the job descriptions.
You're reading Playbooks & Priorities, a newsletter about working, parenting, and working parenthood.
There’s this meme of the knowledge worker arriving at their open-floor-plan office, coffee in hand. They'd sit down, crack their knuckles, and spend hours crafting that perfect bit of code, that insightful market analysis, or that pixel-perfect design comp. Their value was measured by what they produced. Their job was, at its core, about creating things.
Well, things have changed.
I had this moment last week when I asked Claude to draft a reply to an email I'd been putting off. In about 15 seconds, it gave me something that would have taken me 20 minutes to write. Was it perfect? No. Was it 90% there? Absolutely. And that remaining 10%—the tweaking, refining, adding my voice, cutting the fluff—took me just three minutes.
This is our new reality: we're all becoming editors-in-chief of our AI assistants. The day-to-day of knowledge work is fundamentally shifting from production to review, and most of us aren't prepared for what that means.
The Production Party Is Over (And That's Okay)
For decades, we've tied our professional identities to our output. Engineers measured commits and features shipped. Writers counted words and articles published. Designers showcased portfolios bulging with work they'd personally pixel-pushed into existence. Consultants showed off their bright and shiny PowerPoint decks.
Now? AI can generate code at the speed of thought. It can write blog posts faster than any human. It can prototype designs while you're still describing what you want. The raw production capability of these tools is staggering, and trying to compete on pure output is a losing game.
I remember the first time I realized I couldn't keep up. I was drafting some analytics requirements and, on a whim, asked an AI to take a crack at it. What would have been my entire afternoon's work appeared in seconds. It wasn't perfect (more on that later), but it was a sobering moment: if production speed was my primary value, I was in trouble.
But here's the thing: it's actually fine. Great, even. Because production was always just a means to an end. What matters is quality results that solve real problems, and that's where human judgment is more crucial than ever.
Review Is the New Create
Let's be clear: reviewing isn't just checking boxes or playing red-pen tyrant on someone else's work. It's becoming the central act of knowledge creation in the AI era.
Effective review draws on deeper expertise than basic production. Think about it: to create something decent from scratch, you need to know enough to produce something workable. But to review effectively, you need to:
Recognize patterns across vast amounts of content
Identify subtle issues that might not be obvious
Understand the difference between "technically correct" and "actually good"
Know which problems are worth fixing and which to let slide
Articulate why something isn't working, not just that it isn't
In many ways, reviewing is where real expertise shows up. Anyone who's worked with a truly principal-level engineer knows this. They might write less code than senior engineers, but their code reviews are gold—spotting architectural issues, security vulnerabilities, and maintenance nightmares that others would miss.
What's happening now is that this review-focused work mode is expanding to all knowledge workers. And it's not just reviewing others' work—it's reviewing, directing, and refining AI-generated output.
What This Looks Like in the Wild
This shift is playing out differently across roles, but the pattern is consistent:
Software Engineers
The stereotype of the 10x engineer cranking out code at superhuman speed is fading. Today's most valuable engineers might push fewer commits but excel at:
Reviewing AI-generated code for security and edge cases (see the sketch after this list)
Identifying architectural improvements in auto-generated implementations
Directing AI pair programmers toward optimal solutions
Maintaining coherence across systems with multiple AI contributors
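To make the first item on that list concrete, here's a minimal sketch in Python. Everything in it is invented for illustration (the function, the data shape, the bug), but it's the kind of plausible-looking generated code a reviewer has to catch:

```python
# Hypothetical AI-generated helper: reads fine at a glance.
def average_order_value(orders):
    # Breaks on the edges: ZeroDivisionError when `orders` is empty,
    # KeyError when a record is missing "amount".
    total = sum(order["amount"] for order in orders)
    return total / len(orders)

# What it looks like after a human review pass.
def average_order_value_reviewed(orders):
    """Same intent, with the edge cases handled explicitly."""
    if not orders:
        return 0.0
    total = sum(order.get("amount", 0) for order in orders)
    return total / len(orders)

print(average_order_value_reviewed([]))                    # 0.0
print(average_order_value_reviewed([{"amount": 10}, {}]))  # 5.0
```

Nothing here requires typing speed. It requires knowing where code like this falls over.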
A staff engineer friend at an AI-forward company recently told me, "My commits are down 40% this year, but my comments and reviews are up 300%. And my impact is actually higher." The value has shifted from typing to thinking.
Product Managers
PMs used to spend days crafting detailed PRDs. Now, tools like Cursor, Windsurf, and Gemini can generate comprehensive requirements docs from a brief description. But this doesn't make PMs obsolete—it transforms them:
More time spent talking to customers
Less time writing specs, more time refining AI-generated ones
More capacity for stakeholder alignment and feedback incorporation
Faster iteration cycles with rapid document generation
Greater focus on evaluating rather than producing documentation
A PM friend told me she now creates "review-focused PRDs" where she outlines the core requirements, has AI generate the details, then spends her time refining and integrating technical feedback. Her throughput has increased dramatically.
Designers
The shift from "pixel-pusher" to "creative director" is happening overnight:
Less time creating initial mockups, more time directing AI-generated options
Increased focus on brand coherence across machine-generated assets
More attention to subtle details that AIs miss
Expert evaluation of usability issues in generated interfaces
One design lead I know now runs "AI design sprints" where her team uses AI tools to generate options at volume, then applies its expertise to select, refine, and integrate the best elements. Explorations that used to produce two variations now produce dozens.
Writers and Content Creators
The written word might be experiencing the most dramatic shift:
Writing jobs are becoming 20% drafting and 80% editing
Greater premium on voice, style, and brand consistency
More emphasis on strategic content direction versus production
Increased throughput through AI drafting + human refinement
A content strategist at a SaaS company told me they've tripled their output while maintaining quality by using AI to generate first drafts of everything from blog posts to release notes, then applying human expertise to edit and polish.
Getting Good at Giving Feedback
Here's an uncomfortable truth: most of us are actually terrible at reviewing and giving feedback. We've never had to develop this muscle because we've primarily been makers, not editors.
Bad feedback is vague ("this doesn't feel right"), contradictory ("make it pop but keep it minimal"), or simply unhelpful ("I don't like it").
Effective review is a skill that requires development:
1. Get specific about what's wrong
Instead of "This code doesn't feel robust," try "This function doesn't handle the empty string case, and might throw an exception when the API returns null."
2. Learn to separate levels of concern
Is the issue with an AI-generated marketing email that it has typos (easy fix), doesn't match the brand voice (medium fix), or misunderstands the fundamental value proposition (start over)?
3. Develop frameworks for evaluation
The best reviewers don't reinvent their criteria each time. They develop consistent frameworks: security/performance/maintainability for code, clarity/consistency/conversion for copy, etc.
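As one rough sketch of what such a framework can look like in practice (the criteria come from the examples above; the structure and phrasing are my own invention), it can be as simple as a fixed checklist you run every artifact through:

```python
# Minimal review rubrics; criteria taken from the examples above,
# questions invented for illustration.
RUBRICS = {
    "code": {
        "security": "Are inputs validated? Any secrets in the diff?",
        "performance": "Any accidental N+1 queries or quadratic loops?",
        "maintainability": "Could a teammate extend this in six months?",
    },
    "copy": {
        "clarity": "Would a first-time reader follow it?",
        "consistency": "Does it match our voice and terminology?",
        "conversion": "Is there one clear next action?",
    },
}

def review_checklist(kind):
    """Render the rubric for one kind of artifact as a checklist."""
    return "\n".join(
        f"[ ] {criterion}: {question}"
        for criterion, question in RUBRICS[kind].items()
    )

print(review_checklist("code"))
```

The data structure is beside the point; what matters is that the criteria stay constant from one review to the next.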
4. Focus on systems, not instances
Instead of fixing one issue in isolation, identify patterns: "The AI consistently misunderstands our refund policy when generating customer communications."
The subtle art of knowing what to fix versus what to leave alone is critical. Perfect is the enemy of shipped, and in a world where AI can generate endless variations, you need to know when to say "good enough."
What This Means for How We Work
This shift creates massive challenges for how we structure and evaluate work:
How do we measure someone's review quality?
It's easy to count words written or tickets closed. It's much harder to measure the value of excellent judgment. We need new metrics focused on outcomes, not outputs.
Performance reviews get awkward
"What did you accomplish this quarter?" "Well, I reviewed 200 AI-generated designs and made them 30% better." "How do we quantify that?" "Exactly."
Collaboration morphs into something new
When AI can generate variations infinitely, the bottleneck becomes alignment on what "good" looks like. Teams need to develop shared evaluation criteria.
Career ladders need restructuring
If your organization still defines senior roles primarily by production throughput, you're optimizing for the wrong thing. Judgment, direction, and review skills should be explicitly valued.
I've heard of companies starting to adapt by creating "review pairs" where juniors direct AI tools to produce work that seniors then review, creating a more efficient division of labor that plays to human strengths at different experience levels.
It’s all about results. It always has been.
That’s it. Not the output, but the outcome.
The Human Touch in an AI World
The silver lining in all this is that we get to focus on the most human parts of knowledge work:
Strategic thinking about what should be created
Evaluative judgment about what's good
Nuanced understanding of context and audience
Application of taste and discernment
These are exactly the things that make work meaningful and non-commoditized. They're also the things most resistant to automation.
The challenge is building muscles that many of us haven't needed before. The knowledge worker who thrived by putting their head down and "just producing" will struggle in this new reality. The future belongs to those who can direct, refine, and elevate machine-generated work through the application of distinctly human judgment.
This doesn't mean we should stop creating entirely. There's still immense value in human-generated first drafts and core concepts. But it does mean embracing a shift in where we focus our energy and how we measure our contribution.
The production party might be over, but the era of human discernment is just beginning. And that might be a trade worth making.
What do you think? Is your role shifting from maker to reviewer?
Special thanks to Dalia Havens and Grant Miller for helping shape my thoughts on working with AI.