Computer programming jobs comprise a small but highly influential part of today’s economy. In recent years, the introduction of AI tools that assist with—and even automate—large parts of coding work has significantly changed these roles.
Earlier research from the Anthropic Economic Index found that U.S. workers in computer-related occupations used Claude at far higher rates than expected based on their overall workforce share. Similarly, students in Computer Science programs—fields heavily focused on coding—showed notably high levels of AI use.
To explore these changes further, we analysed 500,000 coding-related interactions across two platforms: Claude.ai (the standard interaction platform) and Claude Code (a specialist coding agent capable of independently completing complex digital tasks).
Our analysis revealed three main trends:
- Greater automation with specialist agents: On Claude Code, 79% of interactions involved full automation—where AI completed tasks independently—compared to 49% on Claude.ai, suggesting that we can expect increasing automation across coding work as AI agents become more capable.
- Heavy use in building user-facing applications: Web development languages such as JavaScript and HTML dominated the dataset. The prevalence of tasks related to building user interfaces (UI) and user experiences (UX) suggests that AI may disrupt jobs focused on creating simple apps and interfaces sooner than backend development roles.
- Startups are adopting AI faster than enterprises: About 33% of Claude Code interactions were related to startup work, while only 13% were connected to large enterprises. This reflects a gap between fast-moving startups adopting new AI tools and more cautious traditional businesses.
How Anthropic conducted the analysis: it used a privacy-preserving tool to categorise conversations into broad topics (such as UI/UX development) and to classify each interaction as either automation (AI completing tasks independently) or augmentation (AI assisting a human).
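The classification step described above can be illustrated with a minimal sketch. This is not Anthropic's actual method — the real tool is a privacy-preserving, model-based classifier — and the cue phrases and tie-breaking rule below are purely illustrative assumptions:

```python
# Toy sketch of labelling a coding prompt as "automation" (AI completes
# the task) or "augmentation" (AI assists a human). The cue lists and
# the tie-break toward "augmentation" are illustrative assumptions,
# not the behaviour of Anthropic's actual classifier.

AUTOMATION_CUES = ("generate", "implement", "fix this", "write the whole")
AUGMENTATION_CUES = ("explain", "why", "how does", "review", "what is")

def classify_interaction(prompt: str) -> str:
    """Return 'automation' or 'augmentation' for a coding prompt."""
    text = prompt.lower()
    auto = sum(cue in text for cue in AUTOMATION_CUES)
    aug = sum(cue in text for cue in AUGMENTATION_CUES)
    # Ties (including no matches) default to augmentation, the more
    # conservative label for an assistive interaction.
    return "automation" if auto > aug else "augmentation"
```

In practice a rule-based matcher like this would be far too crude for 500,000 conversations; it only makes the automation/augmentation distinction concrete.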
Consistent with previous Economic Index findings, Claude Code showed a much more decisive shift toward automation, with less reliance on human assistance. "Feedback Loop" patterns, where humans validate AI output, were almost twice as common on Claude Code as on Claude.ai. Similarly, fully "Directive" interactions—with minimal human input—were more frequent on Claude Code. By contrast, "Learning" patterns, where users gained knowledge from the AI, were more common on the general Claude.ai platform.
Source: Anthropic.ai