Anthropic launches Cowork, a Claude Desktop agent that works in your files — no coding required

Anthropic released Cowork on Monday, a new AI agent capability that extends the power of its wildly successful Claude Code tool to non-technical users — and according to company insiders, the team built the entire feature in approximately a week and a half, largely using Claude Code itself.

The feature arrives as a research preview available exclusively to Claude Max subscribers — Anthropic's power-user tier priced between $100 and $200 per month — through the macOS desktop application.

"Cowork lets you complete non-technical tasks much like how developers use Claude Code," the company announced via its official Claude account on X. "Try it to create a spreadsheet from a pile of screenshots, or produce a first draft from scattered notes."

The launch marks a major inflection point in the race to deliver practical AI agents to mainstream users, positioning Anthropic to compete not just with OpenAI and Google in conversational AI, but with Microsoft's Copilot in the burgeoning market for AI-powered productivity tools.

How developers using a coding tool for vacation research inspired Anthropic's latest product

The genesis of Cowork lies in Anthropic's recent success with the developer community. In late 2024, the company released Claude Code, a terminal-based tool that allowed software engineers to automate rote programming tasks. Claude Code has generated significant enthusiasm among developers since that initial command-line launch, and the company expanded access with a web interface in October 2025, followed by a Slack integration in December.

The tool was a hit, but Anthropic noticed a peculiar trend: users were pressing the coding tool into non-coding labor. According to Boris Cherny, an engineer at Anthropic, the company observed users deploying the developer tool for an unexpectedly diverse array of tasks.

"Since we launched Claude Code, we saw people using it for all sorts of non-coding work: doing vacation research, building slide decks, cleaning up your email, cancelling subscriptions, recovering wedding photos from a hard drive, monitoring plant growth, controlling your oven," Cherny wrote on X. "These use cases are diverse and surprising — the reason is that the underlying Claude Agent is the best agent, and Opus 4.5 is the best model."

Recognizing this shadow usage, Anthropic effectively stripped the command-line complexity from its developer tool to create a consumer-friendly interface. In its blog post announcing the feature, Anthropic explained that developers "quickly began using it for almost everything else," which "prompted us to build Cowork: a simpler way for anyone — not just developers — to work with Claude in the very same way." Cowork is the next logical step: bringing the same agentic architecture to users who may never touch a terminal.

Inside the folder-based architecture that lets Claude read, edit, and create files on your computer

Unlike a standard chat interface where a user pastes text for analysis, Cowork requires a different level of trust and access. Users designate a specific folder on their local machine that Claude can access. Within that sandbox, the agent can read existing files, modify them, or create entirely new ones.

"In Cowork, you give Claude access to a folder on your computer. Claude can then read, edit, or create files in that folder," the company explained on X.

Anthropic offers several illustrative examples: reorganizing a cluttered downloads folder by sorting and intelligently renaming each file, generating a spreadsheet of expenses from a collection of receipt screenshots, or drafting a report from scattered notes across multiple documents.

The architecture relies on what is known as an "agentic loop." When a user assigns a task, the AI does not merely generate a text response. Instead, it formulates a plan, executes steps in parallel, checks its own work, and asks for clarification if it hits a roadblock. Users can queue multiple tasks and let Claude process them simultaneously — a workflow Anthropic describes as feeling "much less like a back-and-forth and much more like leaving messages for a coworker."

The system is built on Anthropic's Claude Agent SDK, meaning it shares the same underlying architecture as Claude Code. This technical lineage may give Cowork more robust agentic behavior from the start. Anthropic notes that Cowork "can take on many of the same tasks that Claude Code can handle, but in a more approachable form for non-coding tasks."

Connectors, browser automation, and skills extend Cowork's reach beyond the local file system

Cowork doesn't operate in isolation. The feature integrates with Anthropic's existing ecosystem of connectors — tools that link Claude to external information sources and services such as Asana, Notion, PayPal, and other supported partners. Users who have configured these connections in the standard Claude interface can leverage them within Cowork sessions.

Additionally, Cowork can pair with Claude in Chrome, Anthropic's browser extension, to execute tasks requiring web access. This combination allows the agent to navigate websites, click buttons, fill forms, and extract information from the internet — all while operating from the desktop application.

"Cowork includes a number of novel UX and safety features that we think make the product really special," Cherny explained, highlighting "a built-in VM [virtual machine] for isolation, out of the box support for browser automation, support for all your claude.ai data connectors, asking you for clarification when it's unsure."

Anthropic has also introduced an initial set of "skills" specifically designed for Cowork that enhance Claude's ability to create documents, presentations, and other files. These build on the Skills for Claude framework the company announced in October, which provides specialized instruction sets Claude can load for particular types of tasks.

Why Anthropic is warning users that its own AI agent could delete their files

The transition from a chatbot that suggests edits to an agent that makes edits introduces significant risk. An AI that can organize files can, theoretically, delete them. In a notable display of transparency, Anthropic devoted considerable space in its announcement to warning users about Cowork's potential dangers — an unusual approach for a product launch.

The company explicitly acknowledges that Claude "can take potentially destructive actions (such as deleting local files) if it's instructed to." Because Claude might occasionally misinterpret instructions, Anthropic urges users to provide "very clear guidance" about sensitive operations.

More concerning is the risk of prompt injection attacks — a technique in which malicious actors embed hidden instructions in content Claude might encounter online, potentially causing the agent to bypass safeguards or take harmful actions.

"We've built sophisticated defenses against prompt injections," Anthropic wrote, "but agent safety — that is, the task of securing Claude's real-world actions — is still an active area of development in the industry."

The company characterized these risks as inherent to the current state of AI agent technology rather than unique to Cowork. "These risks aren't new with Cowork, but it might be the first time you're using a more advanced tool that moves beyond a simple conversation," the announcement notes.

The recursive loop where AI builds AI: Claude Code reportedly wrote much of Claude Cowork

Perhaps the most remarkable detail surrounding Cowork's launch is the speed at which the tool was reportedly built — highlighting a recursive feedback loop in which AI tools are being used to build better AI tools. During a livestream hosted by Dan Shipper, Felix Rieseberg, an Anthropic employee, confirmed that the team built Cowork in approximately a week and a half.

Alex Volkov, who covers AI developments, expressed surprise at the timeline: "Holy shit Anthropic built 'Cowork' in the last… week and a half?!"

This prompted immediate speculation about how much of Cowork was itself built by Claude Code. Simon Smith, EVP of Generative AI at Klick Health, put it bluntly on X: "Claude Code wrote all of Claude Cowork. What it learns to use next is anyone's guess. Can we all agree that we're in at least somewhat of a recursive improvement loop here?"

The implication is profound: Anthropic's AI coding agent may have substantially contributed to building its own non-technical sibling product. If true, this is one of the most visible examples yet of AI systems being used to accelerate their own development — a strategy that could widen the gap between AI labs that successfully deploy their own agents internally and those that do not.

Anthropic's desktop agent strategy sets up a direct challenge to Microsoft Copilot

The launch of Cowork places Anthropic in direct competition with Microsoft, which has spent years attempting to integrate its Copilot AI into the fabric of the Windows operating system, with mixed adoption results. However, Anthropic's approach differs in its isolation: by confining the agent to specific folders and requiring explicit connectors, the company is attempting to strike a balance between the utility of an OS-level agent and the security of a sandboxed application.

What also distinguishes Anthropic's approach is its bottom-up evolution. Rather than designing an AI assistant and retrofitting agent capabilities, Anthropic built a powerful coding agent first — Claude Code — and is now abstracting its capabilities for broader audiences.

Who can access Cowork now, and what's coming next for Windows and other platforms

For now, Cowork remains exclusive to Claude Max subscribers using the macOS desktop application. Users on other subscription tiers — Free, Pro, Team, or Enterprise — can join a waitlist for future access. To access Cowork, Max subscribers can download or update the Claude macOS app and click "Cowork" in the sidebar.

Anthropic has signaled clear intentions to expand the feature's reach: the blog post explicitly mentions plans to add cross-device sync and bring Cowork to Windows as the company learns from the research preview. Cherny set expectations appropriately, describing the product as "early and raw, similar to what Claude Code felt like when it first launched."

The real question facing enterprise AI adoption

For technical decision-makers, the implications of Cowork extend beyond any single product launch. For the past year, the industry narrative has focused on large language models that can write poetry or debug code. The bottleneck for AI adoption is shifting — no longer is model intelligence the limiting factor, but rather workflow integration and user trust.

With Cowork, Anthropic is betting that the real enterprise value lies in an AI that can open a folder, read a messy pile of receipts, and generate a structured expense report without human hand-holding. Anthropic's goal, as the company puts it, is to make working with Claude feel less like operating a tool and more like delegating to a colleague.

Whether mainstream users are ready to hand over folder access to an AI that might misinterpret their instructions remains an open question. But the speed of Cowork's development — a major feature built in ten days, possibly by the company's own AI — previews a future where the capabilities of these systems compound faster than organizations can evaluate them. The chatbot has learned to use a file manager.
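The "agentic loop" pattern described above — plan, execute, verify, ask for clarification — is generic enough to sketch in a few lines of Python. Everything below (the function names, the `Step` type, the fixed two-step plan) is purely illustrative; it is not Anthropic's implementation, just a minimal sketch of the loop shape the article describes.

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    done: bool = False

def plan(task: str) -> list[Step]:
    # A real agent would ask the model to decompose the task;
    # here we fake a fixed two-step plan for illustration.
    return [Step(f"analyze: {task}"), Step(f"produce result for: {task}")]

def execute(step: Step) -> Step:
    # Placeholder for tool use (reading files, editing, browsing).
    # A real agent might run independent steps in parallel.
    step.done = True
    return step

def verify(steps: list[Step]) -> bool:
    # Placeholder self-check: did every step complete?
    return all(s.done for s in steps)

def agentic_loop(task: str, ask_user) -> str:
    """Plan -> execute -> verify; ask for clarification on failure."""
    steps = plan(task)
    for step in steps:
        execute(step)
    if not verify(steps):
        # Hits a roadblock: ask the user, then retry with the answer folded in.
        clarification = ask_user("Something is unclear -- can you clarify?")
        return agentic_loop(f"{task} ({clarification})", ask_user)
    return f"completed {len(steps)} steps for: {task}"
```

The point of the shape is that the model's output drives further actions (and possibly further questions to the user) rather than terminating at a single text reply.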
