The technological landscape of Silicon Valley has long been anchored by the concept of the "high-agency" individual: a person characterized by the ability to navigate ambiguity, think independently, and exert their will upon the world without explicit instruction. However, as artificial intelligence advances from simple text generation to autonomous "agentic" systems, the industry is undergoing a fundamental re-evaluation of what human agency actually entails. In an environment where AI coding tools like Anthropic’s Claude Code and OpenAI’s Codex are increasingly capable of performing the heavy lifting of software development, the premium is shifting away from technical execution and toward the ability to orchestrate complex digital ecosystems.
Akshay Kothari, cofounder and Chief Operating Officer of Notion, a productivity startup valued at $11 billion, suggests that the traditional benchmarks of professional value are eroding. According to Kothari, today’s AI agents may already possess technical capabilities that surpass many human professionals. While "taste"—the subjective ability to design intuitive and aesthetically pleasing products—has remained a human stronghold, Kothari notes that agents are rapidly closing that gap. Consequently, the industry is reaching a consensus that "agency" itself—the human drive to direct, oversee, and take responsibility for these systems—may soon be the only unique value proposition left for human workers.
The Evolution of the Developer: From Coder to Orchestrator
The transition from manual coding to AI-assisted development has moved with startling velocity. For decades, the primary value of a software engineer was found in their "raw execution"—their ability to write clean, efficient, and bug-free code. This paradigm began to shift with the introduction of GitHub Copilot and similar autocomplete tools, but the arrival of autonomous agents represents a more seismic break from tradition. Unlike previous tools that required line-by-line human prompts, agentic AI can take a high-level goal, break it into sub-tasks, write the necessary code, test it, and iterate until the objective is achieved.
Simon Last, another cofounder of Notion, exemplifies this transition. After nearly two decades of manual coding, Last has essentially ceased writing code in the traditional sense. Instead, he operates as a "super IC" (Individual Contributor), managing up to four AI coding agents simultaneously. This workflow requires a different cognitive load, which Last describes as "context overload," as the human brain struggles to keep pace with the sheer volume of output generated by multiple autonomous entities.
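A workflow like the one Last describes, one orchestrator supervising several agents at once, can be sketched with a thread pool. The agent dispatch is a placeholder and the task names are illustrative; a real setup would hand each task to a separate coding-agent session.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task):
    # Placeholder for dispatching one autonomous coding agent on a task.
    return f"done: {task}"

tasks = ["refactor auth", "write tests", "update docs", "tune CI"]

# One human orchestrator, up to four agents working concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_agent, tasks))
```

The "context overload" Last mentions falls out of this picture naturally: the human must hold the state of all four parallel workstreams in mind while each agent produces output independently.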
The psychological impact of this shift is notable. Last reports experiencing "token anxiety"—a restlessness that occurs when he is not actively running AI agents in the background, even during social events or sleep. This phenomenon highlights a new reality in Silicon Valley: the most productive individuals are no longer those who work the hardest, but those who are most effective at keeping their digital "workforce" active around the clock.
Chronology of the Agentic Revolution
The rise of the "agentic" worker did not happen in a vacuum. It is the result of a rapid succession of technological breakthroughs and cultural shifts within the tech sector over the last three years:
- Late 2022: The release of ChatGPT, initially powered by GPT-3.5, introduces the public to Large Language Models (LLMs) capable of generating snippets of code; GPT-4 follows in early 2023.
- 2023: GitHub Copilot becomes an industry standard, moving AI from a novelty to a productivity necessity for developers.
- Early 2024: The emergence of "Agentic Workflows." Researchers and startups begin focusing on "agents" that can use tools, browse the web, and execute terminal commands autonomously.
- Early 2025: Anthropic releases Claude Code, a specialized tool designed for deep integration into the developer’s workflow, allowing for the automation of entire feature builds.
- 2025 (Present): The "High-Agency" movement becomes a dominant hiring philosophy. Startups begin prioritizing "AI-orchestrators" over traditional "full-stack" developers.
Supporting Data: The Adoption Gap and Economic Realities
While Silicon Valley is moving at a breakneck pace, the broader labor market is showing a more gradual adoption curve. According to a recent survey from Gallup, while the number of Americans using AI at work is steadily rising, a significant majority of the workforce has yet to integrate these tools into their daily routines.
In the tech sector, however, the numbers tell a different story. Internal data from major software firms suggests that developers using agentic tools can ship features up to 50% faster than those using traditional methods. At Notion, despite the automation of many tasks, the company has not reduced its headcount. Instead, the team is shipping products at a significantly higher velocity. This suggests that AI is currently acting as a force multiplier rather than a direct replacement for human talent, provided that talent is "agentic" enough to harness the technology.
In the venture capital world, AI proficiency has become a litmus test for viability. Jennifer Li, a general partner at Andreessen Horowitz (a16z), notes that for founders and early-stage employees, being "oblivious" to AI coding tools is now considered a "big red flag." The expectation is that every modern founder must be a "high-agency" user of these systems to ensure their startup can compete in an increasingly fast-paced market.
The Venture Capital and Hiring Perspective
The shift in values is perhaps most visible in the job descriptions currently circulating in San Francisco. Yoni Rechtman, a partner at Slow Ventures, highlights a new trend in hiring at early-stage startups. For example, Phoebe, an AI healthcare startup in Rechtman’s portfolio, explicitly states in its job postings that it is not looking for "raw IC execution." Instead, the company seeks individuals who are "excited about building the machine" that allows agents to handle the end-to-end building of features.
This reflects a broader industry sentiment: the value of a single "Simon Last"—an engineer who can manage a fleet of agents—is now perceived as greater than that of a large team of traditional engineers. This has profound implications for the future of the "10x Developer" myth. In the agentic era, a "100x Developer" may be possible, but only for those who can effectively delegate to machines.
However, this new paradigm comes with strict quality control measures. Many firms have adopted a "no slop rule." This policy dictates that while agents may generate the code, the human "manager" is entirely responsible for the final output. If the code is buggy, insecure, or inefficient, the human cannot blame the AI. This reinforces the idea that agency is not just about taking action; it is about taking ultimate responsibility for the results.
Broader Impact and Industry Implications
While the current focus is on software engineering, the "agentic" shift is expected to ripple through other high-skill industries. Kothari and other industry leaders predict that finance, legal, and creative fields will soon face similar transformations.
- Finance: Analysts will shift from data entry and modeling to overseeing autonomous systems that monitor market trends and execute trades based on high-level strategy.
- Legal: Attorneys may move away from document review and toward managing agents that can parse thousands of pages of discovery in seconds, focusing their human energy on courtroom strategy and client empathy.
- Creative Industries: Designers and writers will increasingly act as creative directors for AI systems, where the "agency" lies in the ability to curate and refine AI output rather than generating it from scratch.
Analysis of the Cultural Backlash
Despite its utility, the term "agentic" has developed a controversial reputation. Within the tech community, critics argue that the obsession with agency creates a reductive worldview. Yoni Rechtman observes that the term often implies a binary social structure: the "NPCs" (Non-Player Characters) who follow instructions, and the "Main Characters" who exert agency. This "main character energy" can lead to a toxic work culture where attention-seeking and online posturing are confused with genuine productivity.
A viral essay in Harper’s Magazine recently critiqued this phenomenon, suggesting that for some young professionals in San Francisco, being "agentic" has less to do with building useful technology and more to do with "chasing attention online." This critique points to a potential pitfall of the movement: if "agency" becomes synonymous with "visibility," the actual work of building and maintaining systems may suffer.
Conclusion: The Future of Human Labor
The redefinition of agency in Silicon Valley represents a turning point in the relationship between humans and machines. As AI agents move from being tools to being teammates, the definition of professional excellence is being rewritten. The "high-agency" individual of the future is not necessarily the person who can write the most code or work the longest hours, but the person who can most effectively navigate the interface between human intent and machine execution.
For the global workforce, the question "Am I agentic?" is no longer a philosophical inquiry but a practical necessity. As autonomous systems become more integrated into the economy, the ability to think for oneself, take initiative, and responsibly manage AI "workers" will likely become the most sought-after skill in the 21st-century labor market. Whether this leads to a new era of human creativity or a fragmented society of "main characters" remains to be seen, but for Silicon Valley, the agentic era has already arrived.
