AI-Powered Clarity. Built-In Fragility.
AI-powered clarity is real. So is built-in fragility. I was aware. And I still got caught.
I test the limits of AI-augmented work. Claude Code is my primary environment. AI shapes how I think, decide, ship. And this is increasingly how many of us work.
The gains are real. What's happening to the software development lifecycle is mind-blowing. Meeting notes in seconds. Complex data visualized before my coffee gets cold. Decisions backed by analysis we couldn't do alone.
The Known Enemy
We know the risks. Intellectual atrophy. AI agrees before you finish your sentence. Compounds your mistakes faster. Lets you outsource thinking and feel productive doing it.
This is the enemy we see coming.
The Setup I Built
So I fight this. Built a setup aimed at becoming 10x more efficient, not just looking 10x faster.
Claude Code as my primary working environment. For data-driven, AI-augmented decision making. A thinking partner. Iteratively improving it.
Started with better prompts, better context, better structure. The basics.
Then workflows across the whole development lifecycle. Quality gates. Iteration loops.
Great virtual work companions. Kahneman for cognitive biases. Taleb for fragility and risk. Not just name-dropping. They actually challenge my assumptions from different perspectives.
Then higher level. AI as a thinking partner. Like a co-founder who challenges assumptions. A COO keeping me accountable toward my own goals.
Superwhisper + AI refinement loop. I'd talk through messy ideas, stream of consciousness. AI would structure them, ask followup questions when there were gaps, push me to clarify, make assumptions explicit. The output always looked clean.
I had the frameworks covered. One-way doors, two-way doors. Slow down on irreversible decisions. Multiple perspectives. Devil's advocate. Chain of thought reasoning.
I was protecting my thinking. My decision quality.
I wasn't blindly trusting it. I still had my guard up. Critical thinking applied to every output. Checking for hallucinations, pushing back on weak reasoning, validating against my own experience.
The Question I Forgot to Ask
My setup kept evolving. But in the process, I forgot one question.
Who am I becoming?
Collision With Reality
Late November. Sick with the flu for over two weeks, in an isolated environment. More dictating than real conversations. More AI-structured output than raw human thinking.
I noticed an interesting opportunity. Agentic AI. A unique GTM strategy. Something I knew I could contribute to. I did my homework. AI-assisted research. Talking points ready. Guard up.
Then came the meetings. My phrasing. How I structured thoughts in real time. I was circling the point without landing it. The structure I was used to having wasn't there.
Remember Superwhisper? That convenient workflow where I'd dictate messy thoughts and AI would structure them?
I'd stopped doing the structuring myself. Why would I? The output always looked clean. AI was making my input look polished. I thought it was a feature. It was hiding what was atrophying underneath.
In those meetings, it was just me. No AI structuring my words. No follow-up questions before I left "plan mode."
I lost the opportunity. The muscle I needed in that moment had atrophied while I was busy being "productive."
Even fighting unknown unknowns as hard as I could, I missed the most important one: who I was becoming in the process. Reality caught up.
The Shift Isn't Optional
Let me be clear: this isn't an argument against AI adoption.
The shift is real. Different cognitive work. Resist and you lose.
But the question isn't only whether your team is productive now. It's whether they're building capability or dependency.
What To Do About It
What I do now, and encourage people I work with to do:
- Schedule unstructured thinking time. No AI cleanup. Sit in confusion longer. For me, that's an offline notebook and pen. Some of the best insights come from wrestling with ambiguity, not outsourcing it.
- Run a fragility audit. Focus on high-leverage areas, where your judgment matters most. What are you assuming you can still do that you haven't practiced recently?
- Build in deliberate friction. Not everywhere. But on decisions that matter, slow down. Do the thinking yourself first, then use AI to pressure-test it.
AI-powered clarity is real. So is built-in fragility. Track your productivity gains. But also track who you're becoming while AI does the heavy lifting.
And yes, I modified my Claude Code setup too. New companion. Its job: flag atrophy risks.
"Skills at risk of atrophying: Claude organizes your scattered thoughts. But some of the best insights come from sitting in confusion longer. Are you cutting that short?"
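For anyone who wants to try something similar: Claude Code reads standing instructions from a project-level CLAUDE.md file. The fragment below is my own illustrative sketch of what an "atrophy watchdog" instruction could look like, not the exact setup I run:

```markdown
<!-- CLAUDE.md — hypothetical "atrophy watchdog" instruction (illustrative) -->
## Atrophy Watchdog

Whenever you structure, summarize, or rewrite my raw input:

1. Name the skill you just exercised on my behalf
   (structuring, phrasing, prioritizing).
2. Once per session, ask: "When did you last do this without me?"
3. If I accept your first draft three times in a row,
   push back before polishing the fourth.
```

The point isn't the exact wording. It's making the delegation visible, every time it happens.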
Still wondering what other blind spots I'm missing...
Jakub Grzesiak · Fractional CTO