Becoming 1% Better Every Day with AI

Posted on Mar 29, 2026
tl;dr: Start by asking Claude questions grounded in your real work tools, then progress to drafting artifacts, structuring tasks for agents, responding through Claude, and finally automating the whole loop. Each level compounds on the last — 1% better every day is 37x better in a year.

A practical guide to progressively automating management busywork so you can focus on what actually matters.

The Problem We All Share

Engineering management is a role with an enormous surface area. On any given day you’re context-switching between email triage, Slack threads, meeting follow-ups, Jira hygiene, doc reviews, approval queues, and calendar Tetris. None of this is the job. The job is strategy, technical direction, PM partnership, and investing in your people. But the busywork fills the time first, and the important stuff gets whatever’s left.

The opportunity isn’t to replace your judgment. It’s to stop spending your judgment on things that don’t deserve it. Every minute you spend figuring out which emails matter, extracting action items from meeting notes, or gathering context to respond to a doc comment is a minute you’re not spending on the work that only you can do.

What follows is a progression. Each step builds on the last. None of them require you to become an AI expert. They just ask you to change one small habit at a time, and let the compound effect do the work.

Level 1: Ask Questions With Real Context

The habit: Use Claude Code with MCP connections to your actual work tools (Gmail, Calendar, Slack, Jira, Confluence, Google Docs). Instead of asking generic questions, ask questions grounded in your real systems.

What changes: Today, when you need to understand the status of a project, you open Jira, scan tickets, check Slack, maybe read a doc. With MCPs connected, you ask Claude: “What’s the current state of the API migration? Check Jira, Slack, and the design doc.” Claude searches your actual systems and synthesizes an answer from real data. The research step — the one that takes 15 minutes of tab-switching — becomes a 30-second question.

Examples:

  • “What did my team ship last sprint? Check Jira for completed tickets.”
  • “Summarize the discussion in #db-engineering about the database migration from this week.”
  • “What’s on my calendar tomorrow and are there any conflicts?”
  • “What are the open comments on the Q1 planning doc?”

Why this matters: This is the foundation everything else builds on. Once you’re comfortable with Claude having access to your real work context, every subsequent level becomes natural. And the immediate payoff is real — you get faster, better-informed answers to the questions you’re already asking every day.

Getting started: Setup takes about 15 minutes. Connect the MCPs for the tools you use most (Gmail and Calendar are good starting points). Then just start asking questions you’d normally answer by opening three tabs.
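For the command-line route, connection happens through Claude Code's MCP commands. The sketch below is illustrative only: the server names and URL are placeholders, and the exact flags can differ across Claude Code versions, so treat `claude mcp --help` as the authoritative syntax.

```shell
# Add a remote MCP server (name and URL here are hypothetical placeholders —
# substitute the actual server your org or tool vendor provides):
claude mcp add --transport sse gmail https://mcp.example.com/gmail

# Verify what's connected before you start asking questions:
claude mcp list
```

Once a server shows up in `claude mcp list`, you can reference that tool in plain language ("check my calendar tomorrow") and Claude will query it directly.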

Level 2: Do Work Through Claude

The habit: Stop using Claude to tell you things and start using it to do things. Instead of Claude giving you information that you then act on, have Claude produce the artifact directly.

What changes: The gap between “research” and “output” collapses. Instead of asking Claude to summarize a meeting and then writing the follow-up email yourself, you say: “Draft a follow-up email to the attendees of yesterday’s design review with the key decisions and action items. Pull the notes from the doc and the attendee list from the calendar event.” Claude reads the meeting notes, reads the calendar event, and creates an email draft in Gmail. You review, tweak, send.

Examples:

  • “Create a Google Doc summarizing our team’s Q1 priorities based on the planning spreadsheet and Jira epics.”
  • “Draft a response to Sarah’s email about the timeline — check the Jira tickets she referenced for current status.”
  • “Update my tasks based on what we just discussed.”
  • “Write a Confluence page documenting our on-call runbook based on the Slack thread in #incidents from last week.”

Why this matters: This is where the time savings become tangible. Each of these tasks would normally involve 15-30 minutes of context-gathering, tab-switching, and writing. Now you describe the outcome you want and review the result. You’re shifting from doing the work to directing and reviewing the work — which is what management is supposed to be.

Level 3: Structure Your Tasks for Agents

The habit: When you capture a task — in TickTick, Jira, or wherever — write it as if someone else needs to execute it without asking you any questions. Include links, context, and clear success criteria.

What changes: This is less a tool change and more a mindset change. Consider two versions of the same task:

  • Before: “Update the RKO deck”
  • After: “Update slides 3-5 of [link] with Q1 completion data from [sheet link]. Match the format of existing slides. Add a new slide for the API migration milestone.”

The first version requires you to hold all the context in your head. The second version can be executed by anyone — including an AI agent. This discipline pays dividends even outside of AI: it makes delegation to humans clearer, it makes your own future self more effective when you pick up a task after a week away, and it forces you to think through what “done” actually looks like before you start.
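The before/after contrast can be made mechanical. Here is a minimal Python sketch of an "agent-ready" checklist; the field names, links, and the readiness rule are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class TaskSpec:
    """A task written so someone else could execute it without asking you questions."""
    title: str
    links: list[str] = field(default_factory=list)  # docs, sheets, tickets
    context: str = ""                               # background the executor needs
    done_criteria: str = ""                         # what "done" looks like

def is_agent_ready(task: TaskSpec) -> bool:
    """Hand-off-ready = carries its own references and an explicit definition of done."""
    return bool(task.links) and bool(task.done_criteria.strip())

# The two versions of the RKO-deck task from above (URLs are placeholders):
before = TaskSpec(title="Update the RKO deck")
after = TaskSpec(
    title="Update slides 3-5 of the RKO deck",
    links=["https://docs.example.com/rko-deck", "https://sheets.example.com/q1-data"],
    context="Match the format of existing slides.",
    done_criteria="Slides 3-5 show Q1 completion data; new slide covers the API migration milestone.",
)

print(is_agent_ready(before))  # False — no links, no definition of done
print(is_agent_ready(after))   # True
```

The point isn't the code; it's that "agent-ready" is a checkable property of a task, not a vibe.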

Why this matters: Most of the tasks on your list aren’t inherently hard — they’re just under-specified. The friction is in the context-gathering, not the execution. When you write tasks with full context, you unlock the ability to hand them off — to an agent, to a report, to a future version of yourself on a Friday afternoon. This is a management skill that happens to also be the prerequisite for AI execution.

Level 4: Respond Through Claude

The habit: When a signal comes in that needs a response — an email, a doc comment, a Jira mention, a Slack question — use Claude as your interface for responding instead of going to the tool directly.

What changes: A doc comment asks about your team’s capacity for a new project. Normally you’d open the roster spreadsheet, check Jira for current commitments, think through the tradeoffs, and type a reply. Instead: “Someone asked about capacity for Project X in [doc link]. Check our roster sheet and current Jira sprint. Draft a response with our availability and any caveats.” Claude does the research, drafts the response, and you review it before posting.

The key shift here is building the muscle of reviewing agent work rather than doing the work yourself. It feels slower at first — you’re reading a draft instead of writing one. But the draft comes with research you would have had to do manually, and over time you develop a sense for when the draft is good enough and when it needs adjustment.

Examples:

  • “Reply to this Confluence comment — check the Jira ticket for current status before drafting.”
  • “Draft a response to this email thread. Summarize where we landed in the Slack discussion first.”
  • “Someone asked about our incident response process in this doc. Pull our runbook from Confluence and draft a summary.”

Why this matters: Responding to things is one of the biggest time sinks in management — not because the responses are hard, but because each one requires gathering context from multiple sources. This level is where you start to feel like you have a chief of staff: someone who pulls together the background so you can focus on the judgment call.

Level 5: Automate the Loop

The habit: Instead of you initiating each interaction with Claude, the system watches for incoming signals and processes them automatically. You review the output, not the input.

What changes: This is where levels 1 through 4 compound. Meeting notes arrive and action items are automatically extracted into tasks. Approval emails are enriched with context before you see them. Noise is suppressed. Emails that need responses arrive pre-researched with draft replies. Your morning starts with a processed queue of decisions to make, not a pile of raw signals to sort through.

This level only works because you’ve built the habits and trust from the earlier levels. You know what good Claude output looks like. You’ve trained your sense for when to trust a draft and when to dig deeper. You’ve structured your workflow so that tasks and responses are well-specified. The automation doesn’t replace your judgment — it removes everything that isn’t judgment.

Why this matters: The average engineering manager gets 60-100+ signals per day across all tools. Most are noise. The ones that matter require context from multiple systems before you can act. If you can reduce the time spent on signal processing from an hour a day to ten minutes of reviewing pre-processed output, that’s five hours a week. That’s an extra 1:1. That’s time to actually read the design doc before the review. That’s the strategic thinking block that always gets bumped.

The Compound Effect

None of these levels is a dramatic transformation on its own. Level 1 saves you a few minutes of searching. Level 2 saves you some writing. Level 3 is just better task hygiene. But they compound. Each one makes the next one possible, and the cumulative effect changes the shape of your day.
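The "1% better every day" figure is just compound growth: a 1% daily improvement multiplies out over a year rather than adding up.

```python
# 1.01^365 ≈ 37.78, which is where the "37x better in a year" claim comes from.
# The flip side: getting 1% worse each day compounds down toward zero.
days = 365
better = 1.01 ** days
worse = 0.99 ** days
print(f"{better:.2f}x")  # 37.78x
print(f"{worse:.2f}x")   # 0.03x
```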

The goal isn’t to become an AI power user. The goal is to become a better manager by reclaiming time from the parts of the job that don’t benefit from your experience, your relationships, or your judgment. The busywork will always expand to fill the time you give it. The question is whether you keep giving it your time, or whether you redirect that time toward the things that made you want to be a leader in the first place.

Start at Level 1. Get comfortable. Move to Level 2 when it feels natural. There’s no rush — 1% better every day is 37x better in a year.