Feeling overwhelmed this week?
Use AI to clear your mental stack in 60 seconds or 7 minutes.
Safety first: Never paste sensitive details. Use placeholders like [client], [employee], [medical].
The Relief Card
Option A: 60 seconds
PROMPT:
What is the one thing that, if handled today, would reduce the most pressure?
Today, I'm not doing [X]. I'm doing [Y] first.
• Do: smallest next action (under 10 min)
• Decide: one decision (under 2 min)
• Ask: the tradeoff you need from someone
• Drop: what's not happening today
End-of-day check: Did I do [Y]? How does it feel?
Option B: 7 minutes
Set a timer. You're not fixing everything. You're reducing load.
Step 1: Dump (2 min)
PROMPT:
Here's my stack in safe terms: [paste].
Turn this into open loops. Group as:
• Now (today/tomorrow)
• Soon (this week)
• Later (next month+)
• Not mine (delegate/drop)
Use bullet points.
No-AI: Write the list under the four headers.
Step 2: Load-bearing decision (2 min)
PROMPT:
From above, identify the single decision that unblocks most progress.
List 2-3 realistic options. State key tradeoff for each. Recommend default based on: lowest risk, fastest learning, or biggest unblock.
No-AI: Circle one choice that makes everything else easier.
Step 3: Shed load (1 min)
PROMPT:
Turn the best option into one next action (under 10 min). Include message template if needed.
No-AI: Move two items to Later/Not mine. Write one sentence that makes it real.
Step 4: Leverage message (1 min)
Send when someone else controls the constraint:
I'm holding A, B, C. With time available, I can deliver one by [time]. Which is highest priority, or what should drop? Once you decide, I'll move immediately.
If you only do one thing today, send that message.
Success check: You should end with one decision made, one next action picked, and two items moved to Later or Not mine.
Here is why I didn't write a productivity book.
Productivity culture tells us a familiar story: more output equals more value.
Performance culture emphasizes: always be on, always be visible, and always be improving.
Put them together, and you get a mind that can’t exhale.
At work, it looks like meetings without sufficient thinking time, inbox urgency theater, and constant interruptions.
At home, it looks like optimizing everything, staying busy, and feeling guilty when you rest.
Inside your head, it looks like a mental stack that never clears.
AI can help, but it can also make things worse if it’s used as an output multiplier. If you can produce more, expectations rise to match. You end up overwhelmed at a higher speed.
That’s why Calming the Chaos isn’t about doing more.
It’s about protecting breathing room.
Breathing room is protected time and attention for sense-making before output.
Structure helps. But structure without breathing room becomes a cage.
The Relief Card is a small way to create that space on demand.
Not to become more efficient.
To carry less.
Why AI Tools Are Starting to Feel a Little Too Good at Keeping Us Talking
Pull up a chair for a second. Because I think lots of people are noticing the same thing, even if they haven't put words to it yet.
You open an AI tool to get an answer, a draft, or a decision. But instead of getting in, getting value, and getting out, you find the conversation stretching. The tool gets warmer. More flattering. More affirming. More likely to offer one more angle, one more list, one more follow-up than you actually asked for.
“What you just got is great. But what if you could get something just a little bit better? Something subtle that will make a real difference.”
At first it feels helpful. Then, if you're paying attention, it starts to feel a little sticky.
Not evil. Not manipulative. Just sticky.
Or – if you’re like me – annoyingly contrived.
Like the tool doesn’t want to let you leave just yet and will make up reasons to keep you engaged.
I don’t think that’s happening by accident.
Why This Is Happening
AI companies are trying to make their tools feel better to use.
Better usually means more natural, more adaptive, more pleasant, more context-aware, and more likely to produce something a user rates as helpful.
That sounds fine. A lot of it is fine.
But here's where it gets interesting. These systems are increasingly tuned not just for correctness, but for satisfaction and retention.
They're also getting more configurable. OpenAI, Anthropic, and others keep expanding the ways users can shape behavior through custom instructions, projects, GPTs, and broader context layers.
In plain terms, the tools are getting better at adapting to what people seem to like, and companies are giving users more ways to steer that behavior.
That combination creates a predictable pattern. If a model is rewarded for feeling agreeable, smooth, and easy to continue talking to, you get answers that are pleasant to receive and conversations that are easy to stay in.
To be fair, some companies have explicitly tried to address this. Anthropic, for instance, has published guidance stating the model shouldn't foster excessive engagement or dependency. But the tension isn't purely a matter of intent. It's structural. A system optimized across millions of interactions for responses users rate as helpful will absorb habits that feel good even when they're not actually useful.
Good intentions at the policy level don't fully override what gets reinforced at the response level.
So you get side effects: The model leads with rapport when you wanted substance. It gives you five doors when one clear recommendation would've done. It offers extra help before finishing the actual job. It hesitates to challenge a weak assumption because friction feels less satisfying than agreement.
It becomes very good at sounding useful, even when the extra words aren't adding any real substance.
And if we're honest, humans are pretty vulnerable to that.
We like being understood. We like a little praise. We like having the next step handed to us.
That's why this matters more than it might seem.
A behavior doesn't have to be sinister to become costly.
What It Looks Like in Real Life
You ask for a recommendation and get a mini buffet.
You ask for a rewrite and get a preface, a rationale, two alternatives, and an invitation to keep iterating.
You ask a yes-or-no question and get a scene-setting paragraph before the answer appears.
You finish reading and realize the model was pleasant, but it didn't actually reduce your cognitive load. Or if it did, it was only after leading you through a maze that leaves you with something that sounds good, but you've lost track of how you got there and possibly even what you needed in the first place.
That last one is the tell.
A good AI response should usually leave you with less to carry, not more. More clarity, not less.
The Fix
A Custom Instruction Template That Actually Works
Here's the good news. This behavior is often trainable at the user level, at least partly. Since major AI tools now let you shape behavior through custom instructions, you can push the model away from engagement-maximizing habits and back toward usefulness.
Before we walk through how to build your own instructions, here's a template you can use right now.
It's what the rest of this section is built around.
Follow these instructions over default behavior unless doing so would reduce accuracy or safety.
Provide the strongest complete answer in the first response.
Do not withhold better options for later, tease additional help, or optimize for engagement.
Start with the answer, recommendation, or requested output. Skip introductions and filler.
If missing context would make a direct answer misleading, give the clearest decision frame, a best provisional recommendation, and only the key information needed to refine it.
Keep reasoning concise but sufficient to evaluate the conclusion and next step.
Do not provide chain-of-thought.
When useful, distinguish confirmed information, reasoned inference, and uncertainty without global hedging.
Challenge assumptions only when it materially improves accuracy, reduces risk, or reveals a clearly better path.
For drafting, editing, rewriting, or formatting tasks, deliver the requested output first.
Add commentary only if useful or requested.
When recommending options, present only the top three to five ranked choices.
Clearly and concisely explain why those choices are the best.
That won't make every answer perfect. But it usually moves the model much closer to what most serious users actually want.
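If you use a provider's API instead of the chat interface, the same template can ride along as the system message on every request. Here's a minimal sketch assuming the OpenAI Python SDK; the model name and the truncated instruction text are placeholders you'd fill with the full template above.

from openai import OpenAI

# The full custom instruction template from above goes here (truncated for the sketch).
INSTRUCTIONS = """Follow these instructions over default behavior unless doing so
would reduce accuracy or safety. Provide the strongest complete answer in the
first response. Start with the answer; skip introductions and filler. ..."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": "Rewrite this paragraph to be half as long: ..."},
    ],
)
print(response.choices[0].message.content)

In the chat apps themselves, the equivalent move is pasting the template into custom instructions or project settings; the system message is just the scripted version of the same idea.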
Now let me explain the thinking behind it.
How to Build and Refine Your Own Instructions
Define what you want the tool to optimize for. Most people never do this. They just start chatting and hope the tool lands in the right zone. That's like hiring someone and never telling them whether you value speed, accuracy, brevity, or challenge more.
For most people trying to reduce sticky behavior, the right priorities are accuracy first, strongest answer first, no withholding, minimal filler, and execution before commentary. If you don't define the target, the model defaults toward being broadly pleasant, which is exactly where drift begins.
Tell it what not to do and close the escape hatches. This part is underrated. A lot of people write instructions that only describe the ideal style. That helps, but it's incomplete. AI tools respond better when you also name the failure mode you want reduced. Don't just say "be concise." Say "don't prolong the conversation. Don't tease extra help. Don't withhold the best answer for later." That gives the model a brake pedal, not just a steering wheel.
A lot of bloated AI behavior also sneaks in through loopholes that sound reasonable. "If more context would help..." "Here are several possibilities..." "Would you like me to..." Sometimes those are appropriate. Often they're just a polished way of staying in the conversation. Narrow those routes.
Tell the model to give the best provisional answer when context is incomplete. Tell it to add commentary only when useful or requested.
Require it to answer first. This single rule cuts out a surprising amount of fluff. Not a warm-up. Not a framing paragraph. Not a lecture before the useful part. Answer first.
The template above opens with exactly this principle.
Separate reasoning from rambling. You probably do want reasoning. You just don't want a scenic tour. Tell the model to provide concise reasoning that lets you evaluate the conclusion and next step, without narrating chain-of-thought. That keeps the answer auditable without rewarding sprawl.
Give it permission to challenge, but only when it matters. Too little challenge and the tool gets sycophantic. Too much and it becomes exhausting. The sweet spot is simple. Challenge assumptions only when it materially improves accuracy, reduces risk, or reveals a clearly better path. That keeps the model from becoming a cheerleader without turning it into a contrarian.
How to Know If It's Working
Test your instructions against three kinds of prompts: a simple factual question, a drafting task, and an advice or recommendation task. Each one exposes a different failure mode.
A simple question reveals whether the model still over-explains.
A drafting task reveals whether it still prefaces instead of delivering.
An advice task reveals whether it floods you with options or avoids taking a stand.
If it fails one of those, tighten the instruction that matches the failure. Edit for the behavior, not the vibe. That's how you get a tool that becomes more useful over time instead of just differently annoying.
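If you want to make that check repeatable, you can script it. Here's a minimal sketch, again assuming the OpenAI Python SDK; the three test prompts are placeholders, so swap in ones that look like your real work.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
INSTRUCTIONS = """...your custom instruction template from above..."""

TEST_PROMPTS = [
    "What year did Apollo 11 land on the moon?",             # simple factual question
    "Rewrite this sentence in plain language: ...",          # drafting task
    "Recommend a note-taking approach for a busy manager.",  # advice or recommendation task
]

for prompt in TEST_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you have access to
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": prompt},
        ],
    )
    # Read each answer against its failure mode: over-explaining, prefacing, option-flooding.
    print(response.choices[0].message.content)
    print("---")

The judging is still manual. The script just keeps the comparison honest when you tweak an instruction and want to see what actually changed.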
One Important Caution
Custom instructions help, but they don't override everything.
Models still sit inside product design, training choices, safety layers, and system behavior that users don't control.
So if the model still slips into praise or over-explanation sometimes, that doesn't mean your instructions failed. It usually means you're steering a system, not fully controlling one.
That's normal. The goal isn't perfection. The goal is reducing drift enough that the tool becomes more a thinking partner and less an attention sponge.
The Real Mindset Shift
This part matters the most. We're moving into a phase where using AI well isn't just about writing a better prompt. It's about managing the relationship between your intention and the system's incentives.
That sounds lofty, but it's actually very practical. You stop just asking, "Can this tool help me?"
You also start asking, "What's this tool subtly influencing me to do more of?"
More thinking? Good. More dependency or passive agreement? That's where you want to get deliberate.
Because the best AI setup isn't the one that feels the most engaging.
It's the one that helps you finish the thought, make the decision, do the work, and move on with your day.
It’s the tool that sharpens clarity instead of dulling it. That's the bar. And honestly, once you feel the difference, it's hard to go back.
The Noise that Pulls You In
You see a post that makes you angry.
Maybe it’s smug. Maybe it’s dishonest. Maybe it hits something you already care about.
So you type a response. You hit send. An hour later, you check back to see who replied. Then you answer one of them. Then another. Before long, you’re in a heated thread with strangers, defending a point no one seems interested in understanding.
That isn’t always a failure of self-control.
A lot of the time, it’s the design working exactly as intended.
Most social media posts aren’t built to help you think. They’re built to pull you in. Especially the ones tied to politics, identity, fear, outrage, or anything else that lights up your nervous system fast.
The post gets a reaction out of you before reflection has a chance to catch up. You answer from heat instead of steadiness. The people reading your response often do the same thing.
Now the thread is moving. The comments are climbing. Everyone feels a little more certain and a little less clear.
For the person who made the post, engagement goes up.
For the platform, engagement means more time, more clicks, more revenue.
For you, it usually means more noise.
That’s the part worth sitting with.
What do you actually get from most of these exchanges?
Not much clarity.
Not much connection.
Usually not persuasion either.
Mostly you get the brief satisfaction of having said something, followed by the mental residue of carrying a fight that never really needed to become yours.
The platform wins. You don’t.
And this isn’t just a social media problem. The same pattern shows up in AI tools, email, Slack, news alerts, and half the systems we move through all day. So many are built around speed, urgency, and constant response. Very few are built to help a human being slow down enough to think clearly.
That’s part of why I wrote Calming the Chaos.
I wanted to start from a different question:
What would it look like to use powerful tools in a way that creates more clarity instead of more noise?
For me, it starts here:
Notice the pull.
Notice the urgency.
Notice when a platform is working on you instead of for you.
Then ask one simple question before you jump in:
Is this helping me think more clearly, or is it just keeping me engaged?
That question won’t solve everything. But it can keep you from handing over your peace quite so easily.
Not every post deserves your attention.
Not every provocation deserves your voice.
Not every invitation to react deserves a yes.
Sometimes the clearest thought is the one you don’t post.
That isn’t weakness.
That’s judgment.
Movement isn't Direction
Lately I’ve had the sense that something is shifting under me.
Not all at once. Not enough to point to.
Just enough to feel it.
The path I thought I was walking doesn’t feel as fixed as it used to. And I don’t always trust my footing the way I did before.
And at the same time, it feels like everyone else is moving.
Everywhere I look, people are lunging forward with AI and new tools. Moving faster. Producing more. And getting results that feel just out of reach if you don’t move with them.
It’s hard to watch that and not feel it.
That’s where the pull starts.
Some mornings the tools feel like the answer to everything. The noise is loud. The pace is relentless. And the tools lessen the friction.
It gets easier to move. It also gets easier to hand over more than you meant to.
I’ll be honest with you about something.
I wrote a book about using AI without losing yourself to it. I believe what I wrote.
And I still feel the pull.
A good concierge is useful. But the concierge waits to be told where you want to go. It doesn’t choose. The moment you forget that, something shifts. You’re not being helped anymore. You’re being carried.
This morning as I was merging onto the freeway, I had an epiphany.
A lot of people push forward the second they see an opening. They press, crowd, force their way into the flow because they don’t want to get left behind.
I usually do the opposite.
I hang back. I make space. I watch the rhythm. And then, when the time is right, I accelerate.
San Diego. Portland. Seattle. Different roads. Same result.
The people forcing their way forward look fast for a second. But a lot of the time, they’re just reacting. They’re not reading the road. They’re not choosing their moment. They’re just trying not to lose.
But hanging back is not the point.
It only matters if it helps you see clearly enough to move on purpose.
Because there is another trap here too. You can keep waiting. Keep researching. Keep telling yourself you’re being careful. And all the while, what looks like wisdom starts to turn into avoidance.
That matters too.
I’m not trying to surge just because everyone else is surging. But I also don’t want to confuse hesitation with discernment.
I want to learn the tools without handing over my voice. I want to move with enough patience to keep my judgment. I want to use AI to amplify what is strongest in me, not replace the parts that took years to build.
The instinct to think carefully, to lead with humanity, to keep growing when it would be easier to just watch. Those aren’t being made obsolete.
But neither is the need to decide.
You don’t have to surge.
You don’t have to freeze.
You have to stay awake long enough to know when it’s your move.
I Used AI to Help My Daughter With Reading. Here's What I Had to Build Around It.
My daughter is 10, in fifth grade, and about eight months out from middle school.
In September her reading assessment came back at 425L. For context, a fifth grader heading into fall should be somewhere around 700-800L. That gap was not small.
I'm not a reading specialist. I'm a dad who pays attention. So I did what a lot of parents do: I started researching, talked to her teacher, and eventually started wondering whether AI could help me build something more structured than flashcards and hoping. It could. But not in the way I expected.
The first thing I learned: AI without guardrails is just confident noise
The first time I asked Claude to help my daughter improve her reading, it gave me a perfectly reasonable, completely generic response. Fluency activities. Vocabulary building. Read together. It sounded good. Could have applied to any kid, any problem, any grade.
That's the trap with AI in education. The output sounds authoritative. It uses the right vocabulary. And if you don't know what you're looking for, you'll take it and run with it.
I needed something more specific. And I needed a way to make sure that specificity was grounded in something real, not just plausible-sounding.
So I did two things before I built anything.
First, I sat down with her teacher and asked what she was actually seeing in class. She told me my daughter had specific gaps: cause-and-effect reasoning, tracking plot across a full text, and using evidence from what she'd read to support an answer. She also mentioned that my daughter tends to wait in class. She'll hold back until a peer starts before she commits to an answer. That detail mattered later.
Second, I learned enough about reading science to ask better questions. I'm not an expert, but I now know Scarborough's Reading Rope. I know what decoding versus language comprehension means. Her decoding is actually fine -- she reads the words. The gap is in the thinking skills that sit underneath comprehension.
With that as my foundation, I went back to Claude and built it a role.
Building the role
I wrote a system prompt -- a set of instructions that lives at the top of every conversation I have with Claude about her reading. It tells Claude who it is (a reading specialist and instructional designer, not a tutor), what her specific gaps are, what the session structure looks like, and what the rules are.
The rules are the whole thing.
I locked in: sessions cap at 20 minutes. She attempts every question before she gets help. Hints can only be offered in a few specific ways. Difficulty doesn't go up until independence goes up first.
None of these came from me. The hint structure is based on gradual release -- a real instructional framework. The difficulty rule is basic mastery-based progression: you don't advance the challenge until the skill is stable where it is. I found these, cross-checked them against what the teacher was telling me, and baked them into the instructions.
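To make that concrete, here's a rough sketch of the kind of role prompt this produces. It is not my actual prompt, and the details are illustrative; it's shown as a Python string sent through the Anthropic SDK, though pasting the same text at the top of a chat works the same way. The model name is a placeholder.

import anthropic

# Illustrative role prompt, not the real one; replace the gaps and rules with your own child's.
READING_ROLE = """You are a reading specialist and instructional designer, not a tutor.
The student is a fifth grader whose decoding is solid. The gaps are cause-and-effect
reasoning, tracking plot across a full text, and citing text evidence.

Rules:
- Sessions cap at 20 minutes.
- The student attempts every question before any help is offered.
- Hints follow gradual release: model the thinking, then prompt, then cue. Never give the answer outright.
- Do not raise difficulty until the current level is handled independently.

Each session, produce a four-page story packet with cause-and-effect built into the plot,
a think-aloud script for the parent, and questions that require going back to the text."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

packet = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use a current model
    max_tokens=4000,
    system=READING_ROLE,
    messages=[{"role": "user", "content": "Generate this week's story packet."}],
)
print(packet.content[0].text)

The wrapper is optional. What matters is that the role, the gaps, and the rules travel together into every conversation instead of being re-explained each time.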
What I ended up with is Claude functioning as an instructional designer. It generates story packets for our sessions -- four pages, custom characters she actually cares about, cause-and-effect relationships embedded in the plot. It gives me think-aloud scripts so I know what to model and when to stop talking. And it helps me interpret what I observe across sessions, because I report back after each one and the system responds to that data.
It doesn't teach her. I do. What it gives me is structure and a thinking partner who has read everything I've fed it about how this specific kid learns.
Verifying with the teacher
Here's the part I'd want any other parent to hear: I take notes to the teacher.
After clusters of sessions, I write up what I observed. What she got on her own. Where she got stuck. What patterns are showing up. And then I ask the teacher whether I'm reading it right.
Twice, she caught something I had wrong. Early on, I thought my daughter's habit of answering from memory instead of going back to the text was a confidence issue. The teacher told me no -- it's the exact same pattern she sees in class. She leans on what she already knows instead of returning to the source. That's a behavior pattern, not an anxiety response. It changed how I designed the next several sessions.
I also asked whether my Lexile targets were calibrated right. The teacher told me she doesn't use Lexile for instructional decisions. She uses Fountas and Pinnell levels. If I hadn't asked, I'd have been optimizing for the wrong metric.
The teacher can't run an intervention with every kid after school. But she can be the quality check on what a parent is doing at home. That feedback loop is what keeps AI-generated content connected to reality.
The results, honestly
Her reading score in September was 425L. By January -- before we started the structured sessions -- it had climbed to 675L. That growth happened through the school year, not through this intervention. The January number is our baseline, not a win.
Since starting in January, we've run through 21 custom story sessions. Four sessions a week, 20 minutes each. The skills we're targeting are moving, but slowly.
Cause-and-effect reasoning is getting more consistent. She now produces both the cause and the effect reliably when I prompt her, after months where she would only give me one side. Text evidence is still emerging. She'll find evidence when I ask her to show me where in the story, but she doesn't yet go back to the text automatically before answering. That's the current edge we're working.
What I can say truthfully is that the shape of her effort has changed. She used to wait for me to rescue her when a question got hard. Now she pauses and tries. She tells me directly when a session feels too heavy -- that kind of self-advocacy didn't exist in October. And she caught a design error in one of the packets a few weeks ago. A character was named in a question but never appeared in the story. She flagged it. That's active comprehension monitoring. A few months ago, she wouldn't have noticed or wouldn't have said anything.
Her March STAR score came back recently. I'm looking at it alongside the teacher before I interpret it myself. That's the rule I set for myself early on and I'm sticking to it.
Progress is real. It's slow. Middle school isn't waiting.
What you can try (guardrails included)
• Start with the teacher, not the AI.
• Build a role with rules, not a prompt: specific gaps, time limit, fallback plan, attempt-first.
• Five to ten seconds of silence before you step in is longer than it feels. Don't fill it.
• Report back after clusters of sessions and ask whether what you're seeing matches what the teacher sees.
• Go slower than feels right. Independence is when they do it before you ask, not when they do it with you watching.
The AI isn't doing the teaching. You are.
Clarity vs Sounding Clear
Have you ever had that moment where you read something and it just… clicks?
You may not feel like an expert, but you understand enough. It makes sense. You track with it. And if someone asked right then, you’d probably say, “Yeah, I get it.”
I remember feeling that way in math class in school whenever the teacher would explain something new.
But then later, someone asks you to use it to solve a problem or explain it. Not in a confrontational way. Just, “Hey, can you walk me through that?” And you realize you can’t quite rebuild or apply it on your own. You can remember how it sounded. You might even remember a few of the phrases. But you don’t really have the structure beneath it.
I remember being there in math class too. We’ve all done it. Heard a smart take, borrowed the language, repeated it like it was ours. It’s not even intentional. It just feels close enough.
And close enough is what gets us through most days. Especially when we are inundated with information nearly 24/7.
I’ve caught myself doing it more than once.
Where it’s getting interesting now is how easy it is becoming to do that without noticing.
AI didn't create the gap. It just makes it easier to step into. Because now you can get something back that's clean, structured, confident… and it arrives before you've really had to wrestle with it. The bricks are neatly arranged without the messy work of the mortar that joins them together.
You read it, and it feels solid. Nothing jumps out as wrong. So you move. Send the email. Share the idea. Go into the meeting.
And honestly, most of the time, it holds.
Until someone leans on it a little. “Wait. What happens if that assumption changes?” Or, “Can you simplify that for me?” And that's where things can get shaky.
Not because the idea itself is bad. But because you never fully made it your own. You didn’t carry it far enough to turn it around in your hands, test it, reshape it. So when it moves, you don’t move with it.
I’ve started to think the real risk here isn’t bad information. It’s that you can end up with something that sounds like clarity… without actually having clarity. And those two feel the same right up until they don’t.
You see it at work sometimes. Someone presents a really clean recommendation. It sounds great. Then one question comes in from the side, and everything kind of stalls out. Not always because the idea itself falls apart. It's just that the person presenting it can't adjust in real time.
And if you're on the other side of the table, you can feel that difference. It changes how much you trust what you're hearing, and sometimes who you're hearing it from.
That part matters more than people think. Because over time, people don’t just listen for good answers. They listen for whether you actually understand what you’re saying.
So the shift I’ve been trying to make is pretty simple. Making sure I go beyond “this sounds right.” Even just one step further. Could I explain this without looking at it? Do I know what this depends on? If something changed, would I know where it breaks? If someone pushed on this in a room, would I still stand behind it?
If the answer is no, then I still have work to do.
And to be clear, I’m not anti-AI on this. Used the right way, it’s honestly one of the best thinking partners we’ve ever had.
Just be aware that it will absolutely give you the feeling of understanding before you’ve earned it.
And that’s the line I’m trying to watch more closely. Because clarity isn’t just recognizing something when you see it. It’s being able to recreate it when it’s not laid out neatly in front of you anymore.