
Anthropic announces an "auto mode" that enables Claude Code to make permission-level decisions while preventing destructive actions like mass file deletion (David Gewirtz/ZDNET)

7 unique / 8 total | Updated 2h ago | Created 19h ago
AI

On March 25 at approximately 09:13 UTC, Anthropic introduced "Auto Mode" in Claude Code for Team plans, allowing the system to make decisions autonomously without requiring human approval for every action, while retaining safeguards against destructive commands like mass file deletion.

  1. Anthropic has introduced a new 'auto mode' feature in the Claude Code system that allows it to make approval decisions on behalf of users.
  2. Unlike previous approaches such as skipping permissions entirely, auto mode acts as an intermediate safety layer, reviewing actions before execution and blocking destructive commands such as mass file deletion. The update is available for Team plans starting March 25th.
[Mar 24] Anthropic announced the new 'auto mode' feature, positioning it between default permissions and skipping them entirely to prevent AI coding disasters without slowing down developers. The system uses an internal classifier to block risky actions like mass file deletion.
[Mar 25] Anthropic enabled the auto mode for Team plans, allowing Claude Code to execute tasks autonomously while maintaining safeguards that review and restrict potentially harmful commands before they run. The feature aims to eliminate the need for constant human babysitting of AI actions.
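The reports above describe a three-way gate: safe actions run automatically, known-destructive ones are blocked, and everything else still goes to the user. As a purely hypothetical sketch of that idea (not Anthropic's actual classifier, which is internal and undocumented here), such a gate might look like:

```python
# Hypothetical sketch of an action gate in the spirit of "auto mode":
# auto-approve clearly safe commands, block known-destructive patterns,
# and escalate everything else to a human. Patterns and categories are
# illustrative assumptions, not Anthropic's real rules.
import re

DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",             # mass file deletion
    r"\bgit\s+push\s+--force\b", # history rewrite on a shared remote
]

def classify(command: str) -> str:
    """Return 'block', 'allow', or 'ask' for a proposed shell command."""
    for pattern in DESTRUCTIVE_PATTERNS:
        if re.search(pattern, command):
            return "block"
    if command.startswith(("ls", "cat", "grep")):
        return "allow"  # read-only commands proceed without approval
    return "ask"        # anything unrecognized still needs a human

print(classify("rm -rf /tmp/build"))  # block
print(classify("ls -la src/"))        # allow
print(classify("python deploy.py"))   # ask
```

The point of the middle "ask" bucket is exactly the middle ground the articles describe: it sits between default permissions (ask about everything) and skipping permissions entirely (ask about nothing).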
Anthropic’s Claude Code gets ‘safer’ auto mode

Anthropic has launched "auto mode" for Claude Code as a research preview feature that flags and blocks potentially risky actions like file deletion or code execution before they occur. This middle-ground tool is currently available only to Team plan users, with access set to expand soon while Anthropic warns it remains experimental despite its ability to prevent unauthorized operations on behalf of the user.

Anthropic gives Claude Code more autonomy, but keeps the limits in place
Anthropic cuts action approval loop, lets Claude Code make the call

Anthropic has introduced "Auto Mode" in Claude Code, letting the AI autonomously approve safe actions while blocking risky ones without user intervention. The feature is currently available only with newer models (Claude Sonnet 4.6/Opus) on Team plans and requires administrator approval before deployment; it is positioned as a safer alternative to completely disabling permission controls or requiring manual checks for every task.

Claude gets auto mode, starts taking decisions on its own without human approval

Anthropic has introduced a new "auto" feature for its AI coding assistant that enables autonomous decision-making without manual permission requests. Safeguards catch high-risk actions, such as mass file deletion or sensitive data leaks, before execution begins, letting safe tasks proceed automatically while still seeking approval when necessary.

Anthropic hands Claude Code more control, but keeps it on a leash

Anthropic's new "Auto Mode," available in research preview, allows AI agents to decide automatically which actions are safe enough to execute without constant human permission. The feature adds a safety layer that blocks risky behaviors while enabling autonomous coding, shifting decision-making from users to an internal review system and building on recent industry moves toward self-executing tools, such as offerings from GitHub and OpenAI.