Anthropic announces an "auto mode" that enables Claude Code to make permission-level decisions while preventing destructive actions like mass file deletion (David Gewirtz/ZDNET)
On March 25, Anthropic introduced "Auto Mode" in Claude Code on Team plans, allowing the system to make autonomous decisions without requiring human approval for every action while retaining safeguards against destructive commands like mass file deletion.
Key Points
- Anthropic has introduced a new "auto mode" feature in Claude Code that allows it to make approval decisions on behalf of users.
- Unlike previous approaches such as skipping permissions entirely, auto mode acts as an intermediate safety layer: it reviews actions before execution and blocks destructive commands such as mass file deletion. The update is available for Team plans starting March 25th.
Developments
Anthropic has launched "auto mode" for Claude Code as a research preview that flags and blocks potentially risky actions, such as file deletion or code execution, before they occur. This middle-ground tool is currently available only to Team plan users, with broader access set to follow; Anthropic cautions that the feature remains experimental even though it can prevent unauthorized operations on the user's behalf.
Anthropic has introduced "Auto Mode" in Claude Code to let the AI autonomously approve safe actions while blocking risky ones without user intervention. The feature is currently available only with newer models (Claude Sonnet 4.6/Opus) on Team plans and requires administrator approval before deployment; it is positioned as a safer alternative to completely disabling permission controls or requiring manual checks for every task.
Anthropic has introduced a new "auto" feature for its AI coding assistant that enables autonomous decision-making without manual permission requests. It includes safeguards that catch high-risk actions, such as mass file deletion or sensitive data leaks, before execution begins, allowing safe tasks to proceed automatically while still seeking approval when necessary.
Anthropic's new "Auto Mode" allows AI agents, currently in research preview, to decide automatically which actions are safe enough to execute without constant human permission. The feature adds a safety layer that blocks risky behaviors while enabling autonomous coding, shifting decision-making from the user to an internal review system and building on a broader industry trend toward self-executing tools such as GitHub's and OpenAI's offerings.