If you're experiencing a low-grade sense that control is slipping through your fingers, you're not imagining things.
Every time you delegate a decision to an AI that you used to make yourself, every time you automate judgment you once exercised manually, you feel it: a subtle erosion of agency, dressed up as efficiency.
The disorientation runs deeper than productivity anxiety. It's the creeping realization that the very mechanisms you're adopting to enhance control may be systematically redistributing it away from you. To vendors. To platforms. To systems you don't fully understand and can't audit.
Most people respond by either resisting adoption entirely or embracing it uncritically. Both miss what's actually happening. The useful frame isn't "adopt or resist"; it's understanding where control shifts, why it matters, and how to maintain agency when your tools become agents.
The Control Framework
Control in an agentic world operates across four dimensions. The four questions below reveal exactly where your agency is expanding and where it's contracting.
Question 1: Who owns the execution layer?
When an AI executes work on your behalf, who controls the infrastructure, data, and decision logic?
What this reveals:
- Own the execution layer (self-hosted, internal systems) = structural control maintained
- Vendor owns it (SaaS, API dependencies) = control traded for convenience
- The more critical the work, the more ownership matters
Look for: Whether you can export, audit, or migrate without losing capability.
Question 2: Where does judgment actually reside?
Distinguish between delegation and abdication. Are you delegating execution while retaining judgment, or abdicating decision-making authority itself?
What this reveals:
- Delegation preserves control: you review, approve, own outcomes
- Abdication surrenders control: system decides, you accept outputs
- The transition often happens gradually and invisibly
Look for: Whether you're asked to review decisions or merely notified of them.
Question 3: What dependencies are you creating?
Some dependencies strengthen your position (learning skills, building knowledge). Others weaken it (platform lock-in, opaque algorithms, proprietary formats).
What this reveals:
- Strengthening: you become more capable independent of the vendor
- Weakening: you become more dependent, knowledge stays with vendor
- Your dependency pattern determines long-term agency trajectory
Look for: Whether capability increases independent of the vendor or only through them.
Question 4: What's your fallback position?
If the system fails or the vendor changes terms, can you continue the work?
What this reveals:
- Strong fallback: you retain skills, data, and processes to continue independently
- Weak fallback: you can't operate without a specific tool or platform
- No fallback: capability itself becomes inaccessible
Look for: Whether you're building resilience or single-point vulnerability.
Application: The Writing Team
A marketing team adopts AI writing tools. Productivity increases 300%. Output quality remains high. Management is pleased.
Six months later:
Execution ownership: The vendor owns models and infrastructure. When the vendor updates their model, output style shifts. The team has optimized for vendor-specific patterns, not writing itself.
Judgment location: Initially, writers reviewed carefully. But as trust built and deadlines compressed, reviews became cursory. The transition from delegation to abdication happened gradually. Writers became editors of AI output rather than authors using AI tools.
Dependencies created: Junior writers never developed foundational skills—structuring arguments, developing voice. Senior writers' skills atrophied through disuse. The team's capability became inseparable from the vendor's platform.
Fallback position: Weak. If the vendor disappeared, productivity would drop by 70%, and quality would suffer while rusty skills ramped back up. Junior writers, who never developed base competencies, would struggle significantly.
Pattern revealed: What looked like enhanced control (faster output, higher productivity) actually created systematic vulnerability. The team traded agency for throughput.
Assessment: Where Do You Stand?
Assess your three most critical AI-dependent tools against these risk profiles.
High Risk: Control loss is happening now
- Don't own execution layers for critical work
- Judgment transferring to systems without explicit approval
- Dependencies weakening your independent capability
- Weak or no fallback positions
Medium Risk: Manageable vulnerability
- Own most critical execution layers
- Judgment clearly remains with you
- Dependencies primarily strengthening capability
- Fallback positions maintained and exercised
Self-Assessment Questions
For each critical tool:
- Can you export all data in usable formats?
- Have you recently overridden an AI recommendation?
- Are your skills improving through AI use rather than atrophying?
- Could you perform this work manually if needed?
If you answered "no" to most of these questions, you're experiencing systematic control loss.
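If you want to make that tally concrete, here is a minimal sketch in Python. The tool names, the answers, and the "more than half" threshold are illustrative assumptions, not part of the framework itself.

```python
# Minimal self-assessment tally. Tool names, answers, and the
# "no to most questions" threshold are illustrative assumptions.

QUESTIONS = [
    "Can you export all data in usable formats?",
    "Have you recently overridden an AI recommendation?",
    "Are your skills improving through AI use rather than atrophying?",
    "Could you perform this work manually if needed?",
]

# Hypothetical answers for three critical tools: True = yes, False = no.
assessments = {
    "writing-assistant": [True, False, False, False],
    "code-copilot": [True, True, True, True],
    "report-generator": [False, False, True, False],
}

for tool, answers in assessments.items():
    nos = answers.count(False)
    # "No" to most questions signals systematic control loss.
    status = "control loss" if nos > len(QUESTIONS) / 2 else "holding"
    print(f"{tool}: {nos}/{len(QUESTIONS)} 'no' answers -> {status}")
```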
Actions: What To Do This Week
High Risk: Urgent intervention
Audit Your Critical Stack (2 hours)
- List every tool handling critical decisions or execution
- Answer the four framework questions for each
- Identify your greatest control vulnerability (one way to score this is sketched below)
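For those who prefer a structured record over a scratch list, here is a minimal sketch of one possible audit structure, assuming a simple 0-2 control score per framework dimension. The tools, scores, and scale are hypothetical, not a standard rubric.

```python
# Minimal audit record for the four framework questions. Tools, scores
# (0 = no control, 2 = full control), and the scale are assumptions.

DIMENSIONS = ["execution_ownership", "judgment_location",
              "dependency_pattern", "fallback_position"]

stack = [
    {"tool": "email-triage-agent", "scores": [0, 1, 0, 0]},
    {"tool": "analytics-copilot",  "scores": [1, 2, 1, 1]},
    {"tool": "self-hosted-search", "scores": [2, 2, 2, 2]},
]

for entry in stack:
    # The weakest dimension is the greatest control vulnerability.
    worst = min(range(len(DIMENSIONS)), key=lambda i: entry["scores"][i])
    entry["vulnerability"] = DIMENSIONS[worst]
    entry["total"] = sum(entry["scores"])

# Surface the most vulnerable tool first.
for entry in sorted(stack, key=lambda e: e["total"]):
    print(f"{entry['tool']}: total {entry['total']}/8, "
          f"weakest dimension: {entry['vulnerability']}")
```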
Test One Fallback (3 hours)
- Choose one AI-automated task
- Run it manually for the week
- If you struggle with quality (not just speed), capability has eroded
Restore Active Judgment (ongoing)
- For one AI-dependent process, review the last 10 decisions
- If you overrode fewer than two or three of the ten, judgment is transferring (a quick check is sketched after this list)
- Establish explicit review criteria
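The override check is simple arithmetic, but writing it down keeps it honest. A minimal sketch, assuming you log each of the last ten decisions as overridden or accepted; the log format, example values, and threshold are hypothetical.

```python
# Quick override-rate check over your last reviewed decisions.
# The decision log and the fewer-than-2 threshold are assumptions.

# True = you overrode the AI's recommendation, False = you accepted it.
last_ten_decisions = [False, False, True, False, False,
                      False, False, False, True, False]

overrides = sum(last_ten_decisions)
rate = overrides / len(last_ten_decisions)

if overrides < 2:
    print(f"Override rate {rate:.0%}: judgment may be transferring to the system.")
else:
    print(f"Override rate {rate:.0%}: active judgment is still in the loop.")
```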
Medium Risk: Maintain position
Document What's Working (1 hour)
- Write down why you're maintaining control better than most
- This becomes your playbook as pressure to automate increases
Strengthen Your Weakest Link (ongoing)
- Identify your most vulnerable control dimension
- Shore it up: practice fallback capabilities, audit execution ownership
Test Override Authority (this week)
- Deliberately override one AI recommendation
- Explain reasoning to your team
- Model that authority means taking responsibility for decisions
From Anxiety to Agency
The anxiety you felt at the beginning—the sense that control is slipping—was accurate. It is slipping. For many people. In many contexts. Often invisibly.
But you now have a framework for understanding exactly where and why. The four diagnostic questions aren't abstract philosophy; they're practical tools for auditing your agency before the erosion becomes irreversible.
The pattern is clear: Tools that increase efficiency don't automatically increase control. Often, they trade one for the other.
You have clarity now about what control means in an agentic world, how to assess your position honestly, and what specific actions restore and preserve agency.
The low-grade anxiety transforms into precise diagnosis. You can see what's happening. You understand the trade-offs. You know what to do about it.
This is the difference between anxiety and agency.