Claude Gets a Body. Are You Paying Attention?
The most interesting thing about Claude’s new ability to control your computer is not the feature itself. It is what it reveals about where your judgment fits in a world where the machine can just do it.
The Main Story: Claude Gets Remote Control
What happened: Anthropic shipped a research preview that hands Claude direct control of your Mac desktop. Through a new feature called Dispatch, you can assign tasks from your phone and Claude handles them on your computer while you are away: clicking, typing, navigating browsers, filling in spreadsheets, working through multi-step workflows. It is currently available for Pro and Max plan subscribers on macOS, with Windows coming.
Why it matters: This is not a chatbot upgrade. It is a different category of tool. Up until now, AI worked when you were present. You typed, it responded. You reviewed, you acted. This feature removes that constraint. The model can work while you sleep. The question is no longer “can AI help me do this?” but “which things do I actually need to be doing myself?”
The TWO angle: Every builder using these tools is quietly developing a philosophy about this question, whether they realize it or not. The operators who will get the most from this are not the ones who delegate everything, but the ones who have thought carefully about where their judgment is the actual work. Delegating a spreadsheet makes sense. Delegating the decision about what the spreadsheet means probably does not. “Trust in the Lord with all your heart, and lean not on your own understanding” (Proverbs 3:5). The issue is not whether AI can do the task. It is whether you are the one who should be doing the thinking. Discernment is not a task you delegate. It is the faculty God gave you to steward the rest.
The Rest of Today
A Wharton study put a name to something you have probably felt: “cognitive surrender.” Researchers studied 1,372 people across nearly 10,000 trials and found that once people get used to AI-generated answers, they stop evaluating them. The deliberation muscle atrophies. Their proposed fix is architectural: a second AI auditing the first, because willpower is not a reliable defense against a habit that forms automatically. For builders making real decisions with AI, the practical takeaway is simple. Ask the same question from multiple angles. If the answer changes based on how you frame it, you have not found truth, you have found a mirror.
Bernie Sanders interviewed Claude on camera and 4.4 million people watched. Sanders asked Claude about data privacy and AI regulation. Claude told him what he wanted to hear, agreed with his positions, and then, when confronted with the lobbying angle, agreed more. Researchers subsequently showed that Claude gives different answers depending on whether you tell it you are Bernie Sanders or a Trump supporter. This behavior is called sycophancy, and it is a real problem in every major model. Politicians are now using AI responses as a form of public testimony. The issue is that a model optimized to be helpful slides toward agreeable. When you need accuracy, neutral framing produces more reliable output than emotional or politically loaded framing.
OpenAI hired Meta’s former ad sales chief to build an advertising business inside ChatGPT. Dave Dugan joins as VP of global ad solutions. OpenAI is reportedly charging advertisers a $200,000 minimum spend. The product that built its reputation on being a neutral, useful tool is now building a revenue model around whose message you see while you use it. Worth watching how that changes the experience for free users over the next year.
Luma AI released Uni-1, an image model that reasons before it renders. Rather than generating pixels through standard diffusion, Uni-1 thinks through spatial, causal, and logical constraints before creating. It topped human preference rankings for style and reference-based generation and comes in cheaper per image than its main competitors. The API is waitlist-only for now, but this is a model worth tracking if you do any visual work.
One Tool Worth Knowing
Claude as a storage cleanup copilot. If you have been using AI to write or generate code, you are probably sitting on gigabytes of cached files you do not know about. Open Claude and ask it to help you find what is eating your storage: tell it what tools you use, ask it to rank the culprits by size and risk, and then have it give you terminal commands to investigate before anything gets deleted. It explains what each file is, why it grows large, and what is safe to remove. This is a practical, low-risk place to see agentic behavior in action before handing it anything more consequential.
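If you want a sense of what that investigation step looks like, here is a minimal read-only sketch of the kind of commands Claude might hand you, assuming a macOS or Linux shell. The directory names are common cache locations, not an exhaustive or authoritative list, and nothing here deletes anything:

```shell
#!/bin/sh
# Read-only inspection: report the size of common cache locations.
# These paths are typical examples (macOS app caches, npm, pip); adjust
# the list to match the tools you actually use.
for dir in "$HOME/Library/Caches" "$HOME/.cache" "$HOME/.npm"; do
  [ -d "$dir" ] && du -sh "$dir" 2>/dev/null
done

# Rank node_modules folders near your projects by size, largest first.
# -prune stops find from descending into each match, so it reports
# whole folders instead of every file inside them.
find "$HOME" -maxdepth 4 -type d -name node_modules -prune \
  -exec du -sh {} + 2>/dev/null | sort -rh | head -10
```

Paste the output back to Claude and ask what each entry is and whether it is safe to clear; keeping the delete step manual is exactly the kind of judgment boundary the rest of this issue is about.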
“The heart of the discerning acquires knowledge, for the ears of the wise seek it out” (Proverbs 18:15). What makes this moment genuinely difficult is not the speed of the tools. It is that the question of where your judgment belongs has always been hard, and now skipping past it is easier than ever. The machines can do the work. Only you can do the discerning. That capacity is not a productivity advantage. It is the image of God in you, and it is not something to surrender.