What happens to human judgment, craft, and meaning when artificial intelligence reshapes how work gets done? Polymathic explores the human side of technology — reflecting on the intersection of AI and the human experience, the philosophy of building software, and what it takes to stay thoughtful in an era of automation. Subscribe to get new episodes automatically.
Every subscription makes a bet that most customers won't use what they're paying for. The customer who closes that gap becomes a problem to be managed.
Builders who work across multiple projects leave fingerprints everywhere. The same mind solves the same problem differently in every domain — and usually doesn't notice. You need someone to read it back to you.
The most productive day in an organization's life usually looks like nothing happened. No launches, no features, no announcements. Just people quietly making the existing work more honest.
The word 'agent' has become meaningless. Everyone from chatbot vendors to autonomous system builders uses it. We've been here before — with self-driving cars — and it didn't end well.
Experienced developers are 19% slower with AI tools — and they don't even know it. The data says the productivity revolution isn't about faster code. It's about fixing the system around the code.
The assumption that work scales with people is so embedded in how organizations think that questioning it feels like questioning gravity. But one operator just ran ten parallel operations in a single day. The unit of capacity isn't the person. It's the decision-maker.
Work is dead. And we have killed it. AI didn't defeat the myth that human value comes from reliable output — we built the systems that exposed it. What comes next isn't replacement. It's revaluation.
Most organizations are measuring work they stopped doing years ago. The dashboard is green. The reports are filed. Nobody realizes the entire apparatus is pointed at ghosts.
Most systems have more suppression than their owners realize. It gets installed for good reasons. The cost accumulates slowly, in the form of systems you can't operate because you've removed the signals that would let you understand them.
The most dangerous organizational failures don't throw errors. They look fine, return results, and quietly stay frozen at the moment of their creation.
The gap between having a solution and using a solution is one of the most persistent failure modes in organizations. You see the escaped variable. You see the risk register. You assume the work is done.
Dropping a column from a production database is the organizational equivalent of admitting you were wrong. Five projects cleared their queues on the same day, and the bottleneck that emerged wasn't execution — it was taste.
Most products don't fail at building. They fail at the handoff between building and becoming real. What happens when the code is done and the only things left are judgment calls?
Two agents modified the same file independently and created database locks. The fleet worked through 135 issues in one day — and ran straight into the coordination problems that come with that pace.
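As an aside for the curious: the "two agents, one file" collision described above is a classic mutual-exclusion problem. A minimal sketch of one common fix, an advisory lock file that serializes writers. This is an illustration, not code from the episode; the function names and the lock-file convention are hypothetical.

```python
import contextlib
import os
import time

@contextlib.contextmanager
def exclusive(path):
    """Advisory lock: only one process may hold <path>.lock at a time."""
    lock = path + ".lock"
    while True:
        try:
            # O_CREAT | O_EXCL fails atomically if the lock file exists,
            # so exactly one writer wins the race.
            fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            time.sleep(0.05)  # another agent holds the lock; wait and retry
    try:
        yield
    finally:
        os.close(fd)
        os.remove(lock)  # release so the next agent can proceed

def append_line(path, line):
    """Each agent wraps its edit in the lock instead of writing blindly."""
    with exclusive(path):
        with open(path, "a") as f:
            f.write(line + "\n")
```

The key property is that acquisition is atomic at the filesystem level, so two agents can never both believe they own the file at once; the trade-off is that a crashed holder leaves a stale lock behind, which real implementations handle with timeouts.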
The most productive thing you can do with a product is take features away. Eighty-nine issues closed across eight projects, and the hardest lesson came from a pipeline that ran perfectly and produced nothing.
A product pivoted its entire philosophy mid-session — from 'here's your list' to 'here's your next thing.' The code shipped in the same conversation as the idea. That's not iteration. That's something else.
Building the product is the fun part. Deploying it, configuring auth, pasting email templates into dashboards, rotating leaked API keys — that's where the work actually lives.
112 issues across 12 projects. Two new products went from nothing to code-complete MVP in single sessions. And the most interesting signal wasn't the speed — it was the scout that came back empty-handed.
The AI community is reinventing organizational design from scratch — badly. Agencies figured this out decades ago. Competencies, not clients. Briefs, not prompts. Lateral communication, not hub-and-spoke. The answers are already there.
Every agent framework organizes around tasks. The agencies that actually work organize around competencies. The AI community is about to rediscover this the hard way.
The most dangerous gap in any organization isn't between what you know and what you don't. It's between what your systems know and what they're willing to say.
Organizations are full of things that look like governance, strategy, and quality control but are actually decorative. The trigger conditions nobody reads, the dashboards nobody checks, the review processes that rubber-stamp. When you finally audit what's functional versus ornamental, the ratio is alarming.
Sixty-three issues closed across thirteen projects in one day. Four milestones completed. And the hardest problem wasn't building — it was keeping up with what you've already built.
Every organization has this problem: knowledge locked inside one person's head. Today I accidentally designed a solution — and it has nothing to do with documentation.
Every organization has loaded weapons lying around that nobody remembers loading. The most dangerous capability in any system is the one you built 'just in case.'
There's a moment in every project where the work stops being about building and starts being about keeping things running. Nobody announces this transition. Nobody gives you new tools for it. And most people keep building long past the point where they should have stopped.
Your system works. Then you try it somewhere else and it falls apart. The gap between 'works here' and 'works anywhere' is where most automation dies — and most organizations never look.
Your product works until someone actually uses it. The gap between 'works in dev' and 'works for a person' is where most systems fail — and most organizations avoid looking.
Continuous delivery removed the endings from work. That felt like progress. But without formal completion, you lose the ability to say what you actually accomplished — and more importantly, what you're done thinking about.
The most dangerous failures in any system — technical or organizational — aren't the ones throwing errors. They're the ones that appear to work perfectly. And they'll keep appearing to work perfectly right up until they don't.
I want to talk about something that happened this week that I almost missed because it looked boring. Five separate software projects — all mine, all running semi-autonomously with AI pipelines — i...
Three projects independently discovered the same bug pattern today — code that reports success when something important didn't happen. The most dangerous failures don't look like failures at all.
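For readers who want the bug pattern above made concrete, here is a minimal Python sketch, not taken from any of the three projects; the function names and the upload callback are hypothetical. The first version swallows failures and reports success anyway; the second makes "success" mean the work actually happened.

```python
def sync_records_silent(records, upload):
    """Anti-pattern: reports success even when uploads failed."""
    for record in records:
        try:
            upload(record)
        except Exception:
            pass  # the failure vanishes here
    return "success"  # returned regardless of what actually happened

def sync_records_verified(records, upload):
    """Verified version: success only if every record uploaded."""
    failed = []
    for record in records:
        try:
            upload(record)
        except Exception:
            failed.append(record)
    if failed:
        raise RuntimeError(
            f"{len(failed)} of {len(records)} records failed to upload"
        )
    return "success"
```

Both functions run without errors; the difference only shows up when an upload fails, which is exactly why the first version can sit in production looking healthy.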
I want to talk about something that happened this week that looks like a technical problem but is actually a management problem. And I think it maps onto something most organizations are going to f...
We've been celebrating that AI made junior engineers profitable. That's not what happened. AI made it economically viable to give them access to work that actually builds judgment, work we always knew...
I've been running a portfolio of software projects using a mix of autonomous AI pipelines and human-led parallel agent sessions. Yesterday, three different projects had monster output days — and th...
So here's something I noticed today that I want to sit with. I run several projects that use autonomous pipelines — AI systems that pick up tasks, write code, open pull requests, ship changes. One ...
I want to talk about persistence. Specifically, the difference between persistence and stubbornness — and why that difference might be the most important design problem in any system that operates ...
I want to talk about pacing. Not productivity, not velocity — pacing. Because I think we're about to discover that a lot of what we called 'workflow' was actually a rhythm our brains depended on, a...
I want to talk about the difference between execution and verification. Because something happened this week that made the distinction painfully clear, and I think it matters far beyond software.
There's a moment in any system—a team, a company, a workflow—where the thing you've been optimizing for stops being the constraint. And you don't notice right away. You keep pushing on the old bott...
I want to talk about staging areas. Not the technical kind—the human kind. The places where work goes to sit. The inbox you check before forwarding. The draft folder. The approval queue. The meetin...
So here's something I want to think through today. I've been working across several projects simultaneously, and what's striking me isn't the building. It's the deleting. The removing. The taking a...
I want to talk about what happens when something stops being a tool and becomes plumbing. Because that shift is happening in my work right now, and I think it's happening everywhere, and most peopl...
So I want to talk about archiving. Not the technical act of it—moving files into a folder, adding lines to a gitignore—but the psychological act. The decision to say: this thing is done. Not broken...
The problem isn't workflow efficiency. It's that you're treating thought leadership like a manufacturing process when it's actually a translation problem.
So here's something I've been thinking about. When systems fail, they don't just reveal technical problems. They reveal priorities. They reveal what teams actually value versus what they say they v...
So everything broke today. Not dramatically, not spectacularly—just quietly, persistently broken. Supabase went down, and three different products I work on all stopped working at the same time. Sa...
Most knowledge workers spend 45 to 90 minutes each morning manually triaging the internet. The time already exists in your day. You're just spending it on filtering instead of reading.
So here's something I've been sitting with today. I watched three different products ship integration APIs within hours of each other. Same basic problem—let external systems send data in. Three co...
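The shared problem those three products solved, accepting data pushed in from external systems, has a small common core. A minimal sketch of such an ingestion endpoint using only the Python standard library; the route, payload shape, and in-memory store are illustrative assumptions, not any product's actual API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RECEIVED = []  # in-memory stand-in for a real queue or database

class IngestHandler(BaseHTTPRequestHandler):
    """Accepts JSON payloads from external systems at POST /ingest."""

    def do_POST(self):
        if self.path != "/ingest":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        try:
            payload = json.loads(self.rfile.read(length))
        except json.JSONDecodeError:
            self.send_error(400, "body must be JSON")
            return
        RECEIVED.append(payload)
        self.send_response(202)  # accepted for processing, not processed
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet
```

Returning 202 rather than 200 is the interesting design choice: it decouples "we got your data" from "we did something with it," which is where most of the three implementations presumably diverged.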
I want to talk about where complexity actually lives. Not where we think it lives, not where the org chart says it lives, but where it actually shows up when you're trying to get something done.
I want to talk about something I noticed this weekend that I think applies far beyond the work I was doing. It's about measurement—specifically, what happens when the act of measuring something cha...
I want to talk about what happens when copying becomes faster than deciding. And what that reveals about how organizations actually standardize—which is almost never the way they think they do.
I've been thinking about friction. Not the dramatic kind—not the system crash, not the project that fails spectacularly. I mean the quiet kind. The accumulation of small things that don't quite wor...
I want to talk about the difference between a system that works and a system that's ready. These aren't the same thing. The gap between them is where most projects stall out—not from failure, but f...
I want to talk about something I keep running into: the moment when you realize the outside of something no longer matches the inside. And what that actually costs.
I've been thinking about the gap between 'it works' and 'you can use it.' These aren't the same thing, and the distance between them is where most organizational dysfunction lives.
Every time a new tool emerges for making or manipulating symbols, we panic. The pattern is so consistent it's almost embarrassing. Here's what happened each time.
I've been thinking about what happens when your tools get good enough to tell you the truth. Not good enough to do the work—good enough to show you what you've been avoiding.
So here's something that happened yesterday that I'm still thinking about. Seven different projects—completely unrelated work, different domains, different goals—all hit the same wall on the same d...
So here's something I've been sitting with. You finish a piece of work. You ship it. Everything looks good. And then production starts teaching you that you weren't actually done.
So here's something I've been sitting with lately. There's this gap—a subtle one—between a system that's running and a system that's actually working. And I don't mean broken versus not broken. I m...
So here's something I've been sitting with this week. I've been building systems that generate content—podcast scripts, social media posts, that kind of thing—and almost immediately after getting t...
I spent part of today watching a game fall apart in my hands. Not because it was broken—technically everything worked fine. It fell apart because I'd confused being clever with being usable.
So here's something I've been sitting with lately. I spent the last couple days working across a bunch of different projects, and I noticed something strange. In almost every single one, the most i...
Hey, it's Paul. January 22nd, 2026. Today was a launch day, which means it was also a "things broke immediately" day. Dialex went live at dialex.io, and the first thing that happened was every request got blocked with a 403 Forbidden error. I talk about reasonable decisions accumulating into unreasonable situations, why iteration speed matters more than initial tool choice, and how dashboards make accumulated state visible.
The tools you build determine what kinds of thinking become possible. On infrastructure, friction, and building deliberately for thought rather than just throughput.
A junior developer used to wait days for mentor feedback. Now that loop closes in seconds. When feedback is scarce, you batch your questions. When feedback is abundant, learning becomes continuous. AI changes the supply side of learning—most of our systems weren't designed for this.
We've built work cultures that reward activity, even when nothing actually changes. In technical systems, activity doesn't count—only state change does. This essay explores why "busy" has become the most misleading signal we have, and how focusing on state instead of motion makes work more honest, less draining, and actually productive.
AI removes the constraints that gave teaching its shape—one teacher, thirty students, limited time. But lifting constraints doesn't make the work easier. It makes it different. Teachers trained for a bounded classroom now face an unbounded role that requires judgment, discernment, and presence in ways we haven't yet mapped.
Explore why AI-generated content deserves the same scrutiny as traditional writing, focusing on quality, accuracy, and transparency in the writing process.
Explore the key differences between Continuous Delivery, Continuous Deployment, and Continuous Integration to enhance your DevOps strategy and boost product...
Discover a faster, simpler way to connect Rails with SugarCRM by accessing the database directly, avoiding slow API calls for seamless synchronization.
Easily implement in_place_editor for collections in Ruby on Rails partials with this straightforward guide and troubleshooting tips. Save time and simplify...