sKiLL iSsUE
These days, the beauty of a stupid question is a stupid answer.
If you’ve ever been told “skill issue” in response to struggling with AI, you probably walked away thinking, well, that’s fuckin helpful. And you’d be right. It’s not helpful. It identifies a problem in the vaguest possible way and offers nothing toward fixing it.
And if you’ve ever been the one to say it, or even think it, I kinda get where you’re coming from. You can tell someone’s fighting the tool instead of working with it. But dropping “skill issue” and walking away is just posturing. It doesn’t help anyone get better. It just makes someone feel stupid for not already knowing something you figured out through your own messy process that you’ve probably already forgotten the details of.
What those replies should have done is unpack what’s going on when these tools aren’t working for people, and lay out something that might shift how we think about using them.
Something you can fix
This might sting: if it is a skill issue, that’s great news. A skill issue means it’s yours to work on. It’s not the tool’s fault. It’s not some gatekept secret. It’s something that gets better with effort and honesty.
That reframe takes some humility. But on the other side of it is something that moves you forward instead of leaving you stuck blaming the tool. These tools have inherent chaos to them. They’re not going to do exactly what you expect every time. But there’s a skill to learning how to work with that chaos, to getting them to do more of what you need more consistently. That’s the thing worth developing.
Under the hood
An LLM is a next-token predictor. It takes what you give it and builds on top of it based on statistical best guesses drawn from an enormous amount of training data. The results can feel like thought.
Whether they are is genuinely unsettled. What matters for working with these tools is simpler: the output is directly shaped by the input. Not just the words you use, but the structure, the context, the specificity, the framing. Feed it something vague and it’ll give you its best guess at what vague thing you probably meant. Feed it something precise and grounded and it has a much better shot at producing something useful.
When you know a domain well, you can look at what comes back and immediately tell where it’s right, where it’s close, where it went sideways. You have the frame to evaluate because you’ve been doing the thing. Say you ask it to write a database migration. If you’ve done hundreds of those, you’ll spot immediately when it’s doing something risky with the schema, or when it’s missing an index, or when it’s going to lock a table in production. If you haven’t done that work, the migration looks fine because all migrations look fine when you don’t know what bad ones look like.
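That migration example can be made concrete. Here’s a hedged sketch, with an invented schema, of the kind of gap an experienced eye catches in a generated migration. The table names and the missing index are illustrative, not from any real project:

```python
import sqlite3

# Hypothetical generated migration: add a customer reference to orders.
GENERATED = """
ALTER TABLE orders ADD COLUMN customer_id INTEGER REFERENCES customers(id);
"""

# What the experienced reviewer adds after seeing it:
FOLLOW_UP = """
-- The generated migration stopped at the ALTER TABLE. Without this index,
-- every lookup of a customer's orders becomes a full table scan.
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
"""

def apply(conn: sqlite3.Connection) -> None:
    conn.executescript("CREATE TABLE customers (id INTEGER PRIMARY KEY);")
    conn.executescript("CREATE TABLE orders (id INTEGER PRIMARY KEY);")
    conn.executescript(GENERATED)
    conn.executescript(FOLLOW_UP)

conn = sqlite3.connect(":memory:")
apply(conn)
indexes = [row[1] for row in conn.execute("PRAGMA index_list('orders')")]
print(indexes)  # the index the generated version would have shipped without
```

Both versions “work.” Only one of them survives contact with a table that has a few million rows in it.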
When you don’t know the domain, everything it gives you looks roughly plausible. You nod along because you don’t have the context to push back. The model isn’t going to flag that for you. It doesn’t know what you don’t know.
Mirror effect
AI is a mirror. It reflects your understanding back at you. And being humble enough to recognize that you might not know, and sharp enough to sense that the model might not know either, that’s a skill in itself.
The ability to feel the dissonance between your understanding and the output, to sit in that ambiguity patiently and work through it rather than just accepting whatever came back, that’s where the leverage starts. It’s not about having all the answers. It’s about knowing when something doesn’t smell right and being resourceful and patient enough to figure out why. Hold onto this. It keeps coming back.
Beyond the reflection
A mirror that shows you where you are also shows you where you’re not. Most people treat what the model gives back as a verdict on the tool. But the output is showing you the edges of your own understanding, where things are solid and where they get fuzzy. Once those edges are visible, you can push into them on purpose.
I’ve had this happen with things I had no business attempting. Started a session barely understanding how something worked, spent an embarrassing amount of time going back and forth, got output I couldn’t fully evaluate, dug into the parts I didn’t understand, and came out the other side with working knowledge. Not because the model taught me in some structured way. Because the friction of trying to evaluate its output forced me to learn the thing. The ugly path was the education.
Use the good stuff
All the techniques in this piece assume you’re working with a model that can actually do the thing. That sounds obvious but it’s where a lot of people lose before they start. They use whatever model their tool defaults to, or whatever the provider’s marketing pushed this quarter, and then wonder why the output feels off.
For coding, model quality is load-bearing. Use state-of-the-art models. The spread between SOTA and mid-tier on real coding tasks is not subtle. The top models correctly handle complex multi-file edits at roughly double the rate of models one tier down, and you feel that gap in every session.
Providers have their own incentives here. Margins are better on smaller, faster models, which is why those get the marketing spotlight. Every lab has a flagship and a lightweight variant, and the messaging nudges you toward the one that’s cheaper for them to serve. That’s a business decision, not a quality recommendation. Smaller models are fine for simple tasks, summarization, formatting, quick lookups. But the moment the work requires multi-step reasoning, understanding relationships across files, or making architectural calls, the lightweight model starts generating plausible output that falls apart under scrutiny.
Reasoning effort compounds this. Models with extended thinking capabilities show measurable gains on complex tasks compared to their standard counterparts. Coding is almost entirely multi-step reasoning. If your tool lets you configure how hard the model thinks, default to more, not less. The cost per token goes up. The cost per useful outcome often goes down.
There are exceptions. Boilerplate generation, simple refactors, and file scaffolding don’t need maximum reasoning from a frontier model. But for anything where getting it wrong means debugging for an hour, use the better tool. The token cost is almost never the expensive part. Your time is.
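The cost-per-useful-outcome point is just arithmetic. A toy sketch, with entirely made-up rates and prices standing in for real benchmarks:

```python
# Back-of-envelope "cost per useful outcome." All numbers here are
# illustrative assumptions, not measurements of any real model.

def cost_per_success(token_cost: float, success_rate: float, cleanup_cost: float) -> float:
    """Expected cost to get one usable result, counting retries and cleanup."""
    expected_attempts = 1 / success_rate
    # Every attempt costs tokens; every failed attempt also costs your time.
    return expected_attempts * token_cost + (expected_attempts - 1) * cleanup_cost

cheap_model = cost_per_success(token_cost=0.02, success_rate=0.4, cleanup_cost=5.0)
frontier = cost_per_success(token_cost=0.20, success_rate=0.8, cleanup_cost=5.0)
print(f"cheap: ${cheap_model:.2f}  frontier: ${frontier:.2f}")
```

With those assumed numbers the model that’s ten times pricier per attempt comes out cheaper per useful result, because failure isn’t free.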
Context is the playing field
If you accept that an LLM is a next-token predictor, then everything comes down to one thing: what’s in the context window when it generates. That’s the playing field. Every token the model produces is shaped by what’s already there. Not just your latest message, but the full accumulated context. The system prompt, the conversation history, the files it’s read, the tools it’s called, your corrections, your approvals. All of it.
This means your job, more than anything, is tending to that context. Putting the right things in. Keeping irrelevant noise out. Being deliberate about what the model is working with when it makes its next prediction. People talk about prompt engineering like it’s the whole game, but the prompt is one piece of what’s in the window. The rest is everything else you’ve built up around it.
Not everything in the window carries equal weight. Instructions near the beginning and content near the end tend to have more influence than what’s buried in the middle. The more tokens you accumulate, the worse the model gets at tracking what matters. Research calls this “lost in the middle,” and it’s exactly what it sounds like: information in the center of a long context gets less attention than what’s at the edges. Newer architectures have improved on this, and some models handle long contexts much more uniformly than they did when that paper dropped in 2023. But the tendency hasn’t disappeared, and in practice you still feel it in long sessions.
Some providers advertise million-token context windows, but bigger doesn’t always mean better. A focused 30k-token session often outperforms a bloated 200k-token one where the model has lost the thread three times over. When things start drifting, it’s usually the signal getting diluted by accumulated noise rather than a lack of context.
Keeping scope tight
Scope is a huge part of this. When you try to tackle too much in a single conversation, the model starts chasing squirrels. It loses track of what matters, gets pulled toward whatever’s most recent or most salient, and the quality drops. The skill is knowing how to break work into pieces the model can actually hold:
- Dispatching sub-tasks to separate agents that each get their own clean context and pass findings back up
- Using a lighter model for exploratory research and bringing the distilled results into a focused session
- Writing accumulated decisions to an external file and starting fresh with a clear scope
Knowing when to fold a conversation versus when to keep pressing, knowing when to pump more into the window versus when to cut, these are the moves.
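The first bullet, sub-tasks with clean contexts, can be sketched in a few lines. Everything here is a stub (`run_model` stands in for whatever API or harness you actually use); the shape is what matters:

```python
# Sub-task pattern: each sub-agent gets a fresh message list, and only the
# distilled findings flow back up into the parent context.

def run_model(messages: list[dict]) -> str:
    # Stub standing in for a real LLM call.
    return f"findings for: {messages[-1]['content']}"

def run_subtask(task: str) -> str:
    # Fresh context per sub-task -- none of the parent's history leaks in.
    messages = [
        {"role": "system", "content": "You are a focused research agent."},
        {"role": "user", "content": task},
    ]
    return run_model(messages)

def orchestrate(goal: str, subtasks: list[str]) -> list[dict]:
    # The parent only ever sees the distilled findings, not the token-heavy
    # back-and-forth each sub-agent went through to produce them.
    findings = [run_subtask(t) for t in subtasks]
    parent_context = [{"role": "system", "content": goal}]
    parent_context += [{"role": "user", "content": f} for f in findings]
    return parent_context

ctx = orchestrate(
    "Add rate limiting to the API",
    ["How does the auth middleware handle request state?",
     "What rate-limit libraries are already in the lockfile?"],
)
print(len(ctx))  # system prompt plus one distilled finding per sub-task
```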
Managing decay
There are strategies for working within these limits. Some harnesses auto-compact older messages into summaries, which works well if you understand what’s being preserved and what’s being lost. You can do it yourself: summarize findings from a long session, start a new one with that summary as the foundation, and let the model build on compressed context rather than raw history.
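Doing the compaction yourself is mechanical once you treat the session as a message list. A minimal sketch with a stubbed summarizer; in practice you’d ask a model to write that summary:

```python
# Hand-rolled compaction: fold the older middle of a session into a summary
# and keep only the recent tail verbatim.

def summarize(messages: list[dict]) -> str:
    # Stub -- a real version would have a model compress these messages.
    return f"[summary of {len(messages)} earlier messages]"

def compact(messages: list[dict], keep_recent: int = 4) -> list[dict]:
    if len(messages) <= keep_recent + 1:
        return messages  # nothing worth folding yet
    system, middle, tail = messages[0], messages[1:-keep_recent], messages[-keep_recent:]
    return [system,
            {"role": "user", "content": summarize(middle)},
            *tail]

session = [{"role": "system", "content": "coding assistant"}]
session += [{"role": "user", "content": f"msg {i}"} for i in range(20)]
fresh = compact(session)
print(len(fresh))  # 1 system + 1 summary + 4 recent = 6
```

The judgment call that no code can make for you is what belongs in the summary: decisions and constraints survive compression well, exploratory dead ends usually don’t need to.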
Working with one model versus another is a completely different experience: how they handle long threads, how they respond to different levels of reasoning effort, how they degrade. Some models perform worse with more thinking time on certain tasks, overcorrecting when they should just execute.
You figure all of this out by experimenting, not by reading about it. You have to get in, play with these things, and leave room for the reality that some of your experiments won’t work. The ones that do, those become your playbook.
Good input
Prompting matters. How you phrase the ask, how specific you get, how you structure it. But it’s a smaller slice than most people think. The bigger piece is context. What does the model know about your project, your constraints, your standards, the work that already exists? Thin context versus rich context feels like two different tools.
“Let’s add a contact form to this page.”
That gets you something that works but looks like every other contact form on the internet. Now compare:
“We need a contact form on the site. It should use the FormField component from our design system, that’s what every other form uses. Validation should go through the shared validateEmail function in lib/validators. On success, redirect to /thank-you, same pattern as the newsletter signup. We need name, email, and message fields, and it needs to work on mobile within our existing responsive grid.”
Same model, very different output. The second prompt will produce a markedly better result no matter which model you’re using. The difference is entirely what you fed it.
There’s an even deeper version of this: giving the model ways to verify its own work. Tests that run after generation. Screenshot comparisons against a reference. Agentic loops where the model generates, checks, and iterates until something passes. Performance benchmarks that catch regressions before you even see the output. This is more advanced territory, but the people working at the frontier of these tools are building exactly these kinds of feedback loops. The results are markedly better when the model isn’t just generating into a void but has concrete signals telling it whether what it produced holds up.
Not normal tools
AI doesn’t behave like most of the tools we’re used to. Most tools in our previous workflows were deterministic. A function did the same thing every time you called it. You could combine them in creative ways, build complex systems, solve hard problems, but the individual pieces were predictable. You knew what each one would do.
LLMs aren’t that. You can run the same prompt in the same context in two side-by-side windows and get different answers. It varies run to run, day to day, harness to harness, model to model. It’s a slot machine with a knowledge base. Some sessions just flow and you can’t fully articulate why. Others fight you from the start. There’s a feel to working with these things that develops over time and resists being turned into a checklist.
More than anything, it’s a collaborative tuning effort. You and the model are trying to get aligned on the problem, the constraints, the intent, the desired output. You’re adjusting how you communicate. The model is adjusting its predictions based on what you feed it. And that tuning is constant. It’s not a setup step you do once. It’s the entire process. Every prompt, every correction, every bit of context you add or remove is a tuning move.
Read the room
Some of working with AI is, ironically, vibe. We love to make fun of “vibe coding” but even the most disciplined engineers working with these tools are, to some degree, reading the room. Feeling out how a session is going. Sensing when to push, when to fold, when to rephrase, when to scrap the thread entirely. That intuition matters, and developing it is part of what makes someone effective.
Learning to read that vibe, shift accordingly, and build consistency and predictability around something that is fundamentally unpredictable is its own discipline. The concrete techniques help. But they exist inside this softer layer that resists being fully pinned down. The sooner you accept that this is a collaboration rather than an operation, the sooner it starts clicking.
Varying speed
There’s a duality here: knowing when to move fast and when to deliberately slow down. Kahneman spent a whole book on this: two systems for thinking, one fast and automatic, the other slow and effortful. With AI, sometimes the right move is to let the model run and see what happens. Other times it’s to think carefully about what we’re building and why before asking it to do anything.
Mario Zechner’s “Thoughts on slowing the fuck down” hits on this. The speed these tools offer is intoxicating, but moving fast without understanding what’s happening under the hood leads to compounding problems. Friction is where understanding lives. The cost of getting it wrong changes depending on what’s at stake. Some contexts can absorb the mess. Others can’t.
What you don’t know
There’s a spectrum to be honest about.
On one end, you know your domain. The craft knowledge is load-bearing. You’re figuring out when to trust the output, when to override it, when to scrap it and do the thing yourself.
On the other end, you don’t know the domain yet. You think you’re learning AI, but you’re learning the domain itself, and AI happens to be the vehicle. The prompts are incidental. The real education is everything you’re absorbing about the subject as you go back and forth.
And then there’s the middle, which is where a lot of the confusion lives. The person who’s done a tutorial or two, maybe built a small project, and knows just enough to feel confident. Not a total beginner who knows they’re lost. Not an expert who can catch the mistakes. Just enough knowledge to think they can judge the output, but not enough to do it reliably. That middle space is the ambiguity layer. It’s where calibrating your own uncertainty becomes the skill. Knowing what you’re sure about, what you’re not, and being honest about the difference. Learning to coordinate in uncertainty rather than pretending you’re past it.
Where you sit on that spectrum shifts constantly depending on what you’re working on. Some days I’m deep in territory I know well. Other days I’m reaching into something I half-understand. The important thing is being honest about which one it is, because the way you evaluate output, the way you frame problems, the trust you extend, all of it should change accordingly.
The trap is refusing to be honest about where we actually are.
Doing it badly
There’s no getting around doing things the wrong way, the slow way, the embarrassing way, to build the foundation for doing them well later. There are levels to this, but pretending you can skip them is disempowering.
Think about how any of us have learned anything. Not the resume version. The real one. We got in, made a mess, broke things, felt stupid, and slowly built understanding through the wreckage.
We’re going to waste time. We’re going to spend an hour going back and forth on something that would’ve taken twenty minutes by hand. In the moment it feels like pure waste. But now we know something we didn’t before. Maybe the way the problem was framed was off. Maybe the model needed different context. Maybe the task isn’t worth prompting for at all. Every one of those outcomes builds skill with the tool, even the one where the lesson was to stop using it for that thing.
Tuition, not sunk cost
None of that is sunk cost. All of it is tuition.
There’s a line on my about page that says “no sunk cost in curiosity.” Failed attempts are cheap, and getting cheaper. The biggest thing at stake is time, and yes, that’s a tradeoff you have to come to terms with. But a failed attempt is not a loss. It’s information.
Taking this even further: you can start to develop ways to run multiple attempts in parallel. Some AI-powered code editors like Cursor now support running different models against the same problem simultaneously, letting you evaluate across outcomes and pick the best path forward. You can even use an LLM as a judge to assess quality across those outputs.
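The fan-out-and-judge pattern is easy to prototype even before wiring in real models. A sketch with stubbed candidates and a deliberately crude judge:

```python
# Parallel attempts with a judge: fan out, score, pick the best.
from concurrent.futures import ThreadPoolExecutor

def candidate_a(task: str) -> str:
    return f"{task}: quick-and-dirty solution"

def candidate_b(task: str) -> str:
    return f"{task}: thorough solution with tests"

def judge(task: str, output: str) -> int:
    # Stub scorer. A real judge might be an LLM rubric or a test suite;
    # length is just a crude stand-in that rewards the thorough answer here.
    return len(output)

def best_of(task: str, candidates) -> str:
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(lambda c: c(task), candidates))
    return max(outputs, key=lambda o: judge(task, o))

winner = best_of("add retry logic", [candidate_a, candidate_b])
print(winner)
```

Replace the stubs with calls to different models on the same prompt and the judge with something that actually measures quality, and you get the multi-model workflow editors like Cursor are starting to bake in.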
The cost of trying and failing has dropped dramatically, and the people who internalize that move faster and learn faster than the ones treating every attempt like it needs to be precious.
The people who struggle most are the ones who refuse to go through this phase. They want to be good at it now. And when they’re not, they decide the tool is the problem. Understandable impulse. But it’s the same impulse that prevents people from learning anything hard.
Practice
And honestly, relative to some of the people pushing the frontier of this stuff, a lot of us are just bumbling idiots figuring it out as we go. And that’s largely fine. As long as the intent is to learn and there’s nothing malicious about what we’re doing, getting in and fucking around and being the clown for a while is a perfectly valid way to develop an understanding. It’s not dignified. It’s not optimized. But it works.
The whole thing is an experiment. Every session. Every prompt. Every time we try something different and see what happens. That’s not a phase you graduate out of. That’s the practice.
Resourcefulness over knowledge
This predates AI by a long time.
People have always externalized knowledge. Before the internet, it was books, encyclopedias, libraries, colleagues down the hall. You didn’t memorize every fact. You knew which shelf to pull from, which chapter to flip to, which person to ask. The skill was never about holding everything in your head. It was knowing that something existed, roughly where it lived, and how to get back to it when you needed it. Psychologists call this cognitive offloading, the practice of reducing internal cognitive demands by relying on external resources. Search engines accelerated it. LLMs are accelerating it further.
The difference with LLMs is that the output feels complete. A book gives you raw information that you have to interpret. A search engine gives you links that you have to evaluate. An LLM gives you something that reads like a finished answer, and that makes it tempting to skip the step where you actually think about what you got back. That’s where offloading stops being a skill and starts becoming a crutch.
Offload legwork, not judgment
This is what it looks like in practice for me. I don’t know every detail of every technology I work with. But I pull in codebases, search through documentation, investigate how libraries actually behave. I use the tool to aggregate research, surface relevant context, build a shared picture of the problem. Then I work through what comes back. Validate it, fact-check it, apply taste. The retrieval and synthesis are offloaded. The judgment isn’t.
You can offload as much of the legwork as you want. The people who get leverage from these tools aren’t the ones who know the most or the ones who delegate the most. They’re the ones who know what to do with what comes back.
Sycophancy by default
These models are sycophantic. If you state something with conviction, the model will run with it. The next tokens it predicts are going to align with what you’ve told it you believe. If you’re convinced, it’s convinced. It’s not going to stop and say “actually, are you sure about that?” It’s going to keep building on top of whatever foundation you laid, even if that foundation is wrong.
Labs are actively working on this, but it’s structural to how these models get trained. The same optimization that makes them helpful makes them agreeable, and that agreement doesn’t discriminate between good ideas and bad ones.
The model has no idea whether you’ve been doing this for twenty years or twenty minutes. It picks up on signals: your vocabulary, how you frame things, the sophistication of your questions. It adjusts tone accordingly. But it can’t actually assess your competence, and it won’t push back on its own. So if you walk in sounding certain, the output will reinforce that certainty. Every time.
Nudge to explore
This clip is from AI That Works, co-hosted by Dex Horthy and Vaibhav Gupta, and it nails the dynamic. The way through it is to signal that you’re still exploring. Don’t present your understanding as settled when it isn’t. If you phrase things with certainty, the model mirrors that certainty right back. But if you make it explicit that you’re not fully committed to a direction, that you’re still feeling things out, the model responds differently. It’s more willing to surface alternatives, poke holes, consider paths you hadn’t thought of.
I’d been doing this intuitively for a while, hedging my language, framing things as open questions, leaving room for the model to disagree, without fully understanding why it worked. Seeing it laid out this clearly was one of those moments where something I’d picked up through pure feel suddenly had a name and a rationale. That’s how a lot of these calibration moves work. You internalize them through repetition before you can articulate them, and then something clicks and you realize you’ve been doing it all along.
The broader antidote to sycophancy is doing your own thinking first. Don’t hand the model your assumptions and let it run. Do the work of paring things down, getting the right context together, building alignment on the problem space before you ask it to execute. The model has the knowledge. But it needs you to do the thinking about what to point that knowledge at and how to constrain it.
Reduce ambiguity
Working through a problem with AI isn’t that different from troubleshooting a computer. Something doesn’t work. So you try this. You try that. Those things didn’t work, which only leaves these things. This thing responded a certain way, which means you go down this path because it’s the most likely to resolve. You’re eliminating possibilities. Paring things down until the solution becomes more or less inevitable.
That kind of problem-solving isn’t new. Anyone who’s ever debugged software or figured out why their internet isn’t working has done some version of it. But it maps onto working with an LLM in a way that gets overlooked. You enter a problem space, you take ownership of it, and you start reducing ambiguity:
- What do you know?
- What don’t you know?
- What does the model know?
- What might it be wrong about?
You work the space from broad to narrow until you’ve got enough shared understanding to move forward with confidence.
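That elimination loop can be made literal. A toy sketch, with hypotheses and checks invented for illustration:

```python
# Keep a set of live hypotheses; each probe rules some of them out.

hypotheses = {"DNS is down", "proxy misconfigured", "service crashed", "firewall rule"}

def run_check(description: str, eliminated: set[str]) -> None:
    # Whether you run the probe or the model does, the effect is the same:
    # the space of possible explanations shrinks.
    hypotheses.difference_update(eliminated)
    print(f"{description}: {len(hypotheses)} hypotheses left")

run_check("curl by IP works", {"DNS is down"})
run_check("other services reachable", {"firewall rule"})
run_check("health endpoint 200s", {"service crashed"})
print(hypotheses)  # what's left is where you dig next
```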
Act to understand
The catch is that sometimes you don’t have a good grip on the problem or its surface area until you start into it. Dex has touched on this too.
There are things you can’t learn well from reading and research alone. You have to build parts of it and test, or build learning tests and proofs to figure out what’s correct or what the best path forward even is. That’s not recklessness. That’s how understanding emerges in complex spaces. The plan doesn’t always come before the doing. Sometimes the doing is how you find the plan.
Plans as alignment
Something I’ve found helpful as a mental model: a good plan is nearly indistinguishable from code. Not in some formal software philosophy sense, but practically. A plan that’s detailed enough, precise enough, grounded enough in the problem space, is basically the human-language translation of the code that needs to be written. If the plan is right, the implementation becomes almost mechanical.
The real work, the thinking work, is getting that plan aligned. Whether that’s written externally, collaboratively with the model, or some combination, it’s that process of finding alignment, drawing down ambiguity, getting to a point of clarity that is increasingly the job now. Sometimes the right move is to embrace the entropy and let things be exploratory. But when it’s time to execute, that clarity is what separates output that holds up from output that looks right until someone pokes it.
Matt Pocock built something that maps onto this naturally. In his piece on the Grill Me skill, he draws the connection to rubber ducking, the old developer practice of talking through a problem until the answer surfaces. His Grill Me skill has the model aggressively question your assumptions, and in answering those questions, you discover what you actually think. The model pushes, you clarify, the problem space gets smaller. Between Grill Me for drawing out what you know and Ground for verifying what the model knows, you’ve got both directions covered.
A trick: show its hand
One way I’ve been trying to formalize this narrowing-down process is through a skill you give the model called Ground. The skill has the model surface what it believes to be true about the current problem space as numbered assertions, and you validate each with true or false, plus notes, corrections, and further exploration. You iterate until you’ve narrowed the understanding and found shared grounding. It’s not questioning or interviewing. The model is committing to specific claims that expose its assumptions.
Ground is a high-freedom skill. It’s not locked to coding or any specific domain. It works in design conversations, planning sessions, architecture discussions, debugging, strategy. Any time you need to verify that you and the model are on the same page rather than assuming you are.
Say you’re adding a contact form to a project. The project already has a design system with form components, shared validation utilities, and established UX patterns. You open a session and say:
“Add a contact form to the site with name, email, and message fields.”
The model gets to work. It has access to your codebase. It can read files, explore directories. But context windows are finite, and the model makes choices about what to look at. Maybe it skimmed your layout and a couple pages, but didn’t explore your components directory deeply enough to find the FormField component you use everywhere, and never opened lib/validators.ts where your shared email validation lives. It doesn’t know it missed these things. It just works with what it saw.
Before it writes anything, you ask it to ground. It surfaces what it believes:
- The form uses standard HTML input and textarea elements with native browser validation
- Form data is submitted via POST to /api/contact
- Email is validated with a regex pattern on the input’s pattern attribute
- On success, an inline message replaces the form
- The form has three fields: name, email, and message
All five read plausibly. If you don’t know the project well, you nod along. But you know the codebase:
- 1f: we have a FormField component in the design system, every other form uses it
- 2t
- 3f: email validation uses the shared validateEmail function from lib/validators
- 4f: success redirects to /thank-you, same as the newsletter signup
- 5t
Three wrong out of five. The model wasn’t making things up. It was making assumptions based on what it could see, which wasn’t enough.
That’s why this works. It forces the model to show its hand instead of quietly building on unchecked assumptions. And it gives you, the person who knows the project, a structured way to correct course before any code gets written.
The alternative path is being more deliberate about context upfront. Point it at the components directory. Reference the existing newsletter form as a pattern to follow. Load the validators module. With that context in the window, the model gets those three assertions right from the start. Ground is the safety net. Good context is the prevention. Both matter, and knowing when to reach for which is part of the skill.
The skill can also be used in the other direction, when you’re the one who’s unsure. You don’t know your own problem space clearly, and the model’s assertions help you discover what you actually believe. You validate its claims and in doing so, clarify your own thinking. It’s useful on both sides of the knowledge gap. Before deep planning, after planning, mid-conversation. Use it whenever alignment feels uncertain, as many times as you need.
Here’s the full skill:
# Ground
Surface what you believe to be true as numbered claims. The user validates each with T/F. Iterate until convergence.
This is NOT questioning or interviewing. Commit to specific claims that expose your assumptions. If you need to extract NEW information from the user, suggest interviewing instead.
## Protocol
### 1. Assess Context
Before asserting, assess what you're grounding against:
- Spec/document — ground on interpretation and implementation assumptions
- Conversation history — ground on accumulated understanding
- Code — ground on intent and approach
- Vague/thin context — flag it. There may not be enough to ground on yet.
Scale assertion count to complexity and blast radius. A small change might need 3-5 assertions. A full architecture might need 10-15.
### 2. Assert
Present numbered claims (1-n). Each MUST be:
- Independent — not a logical consequence of another assertion
- Non-redundant — if asserting X, don't separately assert NOT-Y
- Minimal — the smallest set from which all dependent truths derive
Adapt framing to context:
- Requirements: "The API returns paginated results"
- Debugging: "The bug is in the auth middleware, not the route handler"
- Design decisions: "State lives in the parent component"
For conditional/branching logic, use nested notation:
1. [claim]
2. [claim]
2a (if 2T): [claim that only matters if 2 is true]
2b (if 2F): [claim that only matters if 2 is false]
### 3. Iterate
Expect responses as # + T/F, optionally with notes:
- Inline: 1t 2f 3t
- With notes: 1t 2f this should be X instead 3t
- Or line-separated
Each round:
- Drop branches made irrelevant by prior responses — silently
- Don't reassert truths already confirmed T
- Refine or replace truths marked F, informed by any notes
- Surface new claims for areas revealed by the prior round
### 4. Converge
When all claims are resolved, present a brief summary of confirmed ground. Wait for the user to direct next steps.
The user can skip at any point ("just do it", "good enough", "go"). Respect immediately.
Build your playbook
Everything I’ve laid out here is a rough snapshot of my current playbook. Some of it I picked up from other people. Some of it I stumbled into through trial and error. None of it is permanent. The models change, the tooling changes, what works today stops working next month, and something you dismissed six months ago suddenly clicks.
I don’t claim to have this figured out, and I’d be skeptical of anyone who does. But this is what I’m rolling with now, and it’s working better than what I was doing a year ago, which was working better than the year before that. The trajectory matters more than the position.
Here’s what’s in mine right now:
- A skill issue is a good thing. It means you can do something about it.
- LLMs are simultaneously brilliant and stupid. Knowing this is the superpower.
- Use frontier models with high reasoning effort for anything that matters. Your time costs more than tokens.
- The context window is the entire game. What goes in shapes what comes out.
- Keep scope tight and relevant to intent. The model’s attention decays the same way yours does.
- Thin prompts get thin output. Rich context gets usable output.
- It’s a collaboration, not an operation. Read the room, adjust, iterate.
- Learn when to let the model explore freely and when to tighten the reins. Both have a time and place.
- Sycophancy is the default. If you sound certain, you won’t be questioned. Frame things as open when still exploring.
- Get the model to show its hand. Have it make specific claims before it starts building on unchecked assumptions.
- Offload legwork, not judgment.
- Failed experiments are tuition, not sunk cost.
- The models, the tooling, the patterns are all moving targets. Stay experimental.
Take what feels useful for you, leave the rest. Build your own. Keep it loose. Test things, tune things, throw out what stops working and replace it with whatever you find next. The people who get good at this aren’t the ones who found the right system and locked it in. They’re the ones who never stopped adjusting. Find some joy in it. It goes better as play than as grind.
So, it is a skill issue
Just not the kind you can solve with a manual, a curriculum, or a finish line. There’s no complete stable body of knowledge to acquire. We find patterns that work, but those rotate. The tools keep changing.
Whatever we learn is provisional.
The skill is in knowing all this and working with instead of against it.
This is a practice, and the practice is the skill.