New Role for Programmers: Is the Transfer of Coding Authority Really Only 6 Months?
In the rapidly evolving tech industry, the role of programmers is constantly changing. Recently, a new concept has emerged: the idea that the transfer of coding authority and decision-making power can be completed in just six months.
This notion challenges traditional views on the time and effort required to shift responsibilities within development teams.
Key Points

  • The speed of authority transfer is crucial for agile development.
  • Proper training and documentation can significantly shorten the transition period.
  • Organizational culture plays a vital role in facilitating quick handovers.
Some experts believe that with the right tools and processes, a complete transfer of coding rights and responsibilities can be achieved in half a year, enabling teams to adapt more swiftly to new leadership or project requirements.
However, others caution that such rapid transitions may risk overlooking important knowledge transfer and quality assurance steps.
In conclusion, while six months might be sufficient under optimal conditions, organizations should carefully assess their specific circumstances before aiming for such a quick handover.

In a high-profile dialogue at the Davos Forum, two leading figures in the AI industry—Anthropic CEO Dario Amodei and Google DeepMind CEO Demis Hassabis—engaged in an in-depth discussion about the arrival of the AGI era. The most eye-catching topic was: the way programmers work is undergoing a fundamental transformation. Dario’s core judgment struck a chord—AI will fully take over the coding work of software engineers in as little as 6-12 months. But this does not mean programmers will disappear; rather, their roles are experiencing profound shifts.

The Transfer of Coding Power: Programmers Are Becoming AI Managers

Anthropic has already pioneered this transition internally. According to Dario, the company’s engineers are almost no longer writing code manually. Their new roles are closer to product managers or architects—defining requirements, reviewing AI-generated code, and overseeing overall design direction. In other words, programmers are shifting from code producers to code managers and guides.

This is not a distant hypothesis but a current reality. When we measure AI’s coding ability using SWE-Bench (the Software Engineering Benchmark), the numbers speak plainly. Claude Opus 4.5 has achieved a 74.4% success rate in standard environments, at an average cost of just $0.72 per problem.
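A back-of-envelope reading of those two figures, purely illustrative: it assumes the quoted $0.72 is the average cost per attempted problem, which the source does not spell out.

```python
# Illustrative arithmetic only: if $0.72 is the average cost per *attempt*
# and 74.4% of attempts succeed, the effective cost per *solved* problem is:
success_rate = 0.744       # SWE-Bench success rate quoted above
cost_per_attempt = 0.72    # average cost per problem in USD (assumed per attempt)
cost_per_solve = cost_per_attempt / success_rate
print(round(cost_per_solve, 2))  # -> 0.97
```

In other words, even priced per solved problem, the cost stays under a dollar.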

The grading of task difficulty reveals the real impact on programmers:

Basic tasks (less than 15 minutes)—equivalent to entry-level engineers, including basic modifications like adding assertions to functions. AI has already mastered these tasks with ease.

Intermediate tasks (15 minutes to 1 hour)—involving small-scale cross-file changes, corresponding to programmers with 0-3 years of experience. AI’s ability to automate work at this level is essentially settled.

Advanced tasks (1-4 hours)—requiring cross-file modifications and architectural understanding, comparable to mid- to senior-level engineers. These tasks typically involve an average of 32.8 lines of code changes across 1.7 files. AI still faces challenges here, but success rates are steadily improving.

Super difficult tasks (over 4 hours)—belonging to the territory of senior/expert programmers, requiring deep architectural design and complex contextual understanding. Currently, AI still struggles here, but this moat is shrinking visibly.
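The four tiers above amount to a simple classifier over estimated human completion time. A minimal sketch, where the 15-minute, 1-hour, and 4-hour boundaries come from the article and the function name and labels are hypothetical:

```python
# Illustrative sketch: map a task's estimated human time-to-complete
# to the difficulty tiers described above. Tier boundaries (15 min,
# 1 h, 4 h) are from the article; everything else is hypothetical.

def difficulty_tier(minutes: float) -> str:
    """Return the difficulty tier for an estimated completion time in minutes."""
    if minutes < 15:
        return "basic"          # entry-level: small local edits
    elif minutes <= 60:
        return "intermediate"   # 0-3 yrs experience: small cross-file changes
    elif minutes <= 240:
        return "advanced"       # mid/senior: cross-file changes, architecture
    else:
        return "expert"         # senior/expert: deep architectural design

tasks = [5, 30, 120, 500]  # hypothetical estimates in minutes
print([difficulty_tier(t) for t in tasks])
# -> ['basic', 'intermediate', 'advanced', 'expert']
```

The point of the tiering is that AI capability is climbing this ladder from the bottom up, one boundary at a time.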

Dario’s most critical prediction hinges on the “AI writing AI” closed loop. Once AI can autonomously write code to improve itself, the entire iteration speed will grow exponentially. Moving from the complexity of tasks handled by junior programmers to those of seasoned engineers could take only a few model update cycles.

Divergence in the AGI Timeline: Optimistic 2026 vs. Conservative 2030

The two industry leaders offer different predictions for when AGI will arrive, but both convey the same sense of urgency: “It’s really coming soon.”

Dario is quite aggressive. He confidently bets that by 2026 or 2027, AI models reaching “Nobel Prize-level” performance across many fields will emerge. This prediction is backed by an accelerating cycle: AI writes code → AI conducts research → full self-iteration. Once this feedback loop runs smoothly, research and development speed will skyrocket.

In contrast, Demis Hassabis’s stance is more cautious but no less confident. He maintains a key baseline: there is a 50% chance of achieving AGI—an AI system demonstrating all human cognitive abilities—by the end of this decade (before 2030).

Why is Hassabis more conservative? He points out a “physical barrier” that code alone cannot easily cross. Significant progress has been made in programming and mathematics over the past year, but automating natural sciences is a whole different matter. It requires real-world experimental validation, which currently cannot be fully closed in a loop. Scientific creativity is even harder to automate.

However, Hassabis also admits that if a “self-evolution closed loop” is truly realized without deep human involvement, progress could accelerate far beyond current expectations. He mentions that Google DeepMind will eventually create AGI, but currently, they are missing one or two “key pieces.”

Early Warning of Programmer Unemployment: Entry-Level Positions Disappearing Faster

As the technological flywheel spins faster, the first to be crushed are employment structures.

Dario once provided a shocking figure: within the next 1-5 years, half of entry-level white-collar jobs will disappear. This is not just a statistical forecast but an observed phenomenon within Anthropic. Demand for entry- and mid-level programmer roles is sharply declining, and the company has already begun seriously considering humane ways to handle layoffs and employee transitions.

While Demis Hassabis remains optimistic about the long-term—believing new, more valuable jobs will eventually emerge—he also feels the slowdown in entry-level, internship, and junior positions. His advice to young people is: become highly proficient with current AI tools and learn to collaborate with AI, which may be more beneficial than traditional internships for advancing in your field.

Dario offers no comfort; instead, he exposes the brutal reality: compounding exponential growth is so fierce that human society’s speed of adaptation cannot keep up. Historically, after agricultural automation, 80% of farmers transitioned into factory workers and then into knowledge workers. But this time, it’s truly different. The lag effect may delay the employment shock, but once it hits, the impact will be pervasive and relentless.

The Accelerating Cycle of Technology: The Singularity of AI Self-Evolution

The core logic behind Dario’s aggressive forecast is a seemingly simple yet extremely powerful cycle: better AI → faster code iteration → stronger AI → even faster evolution.

In this flywheel, programmers shift from code producers to managers of this cycle. They no longer need to write code line-by-line but must understand AI-generated code, review its quality, and guide its direction. This transformation fundamentally changes the skill set required—shifting from “able to write code” to “able to read code and command AI.”

Anthropic’s current situation confirms this. When employees have unlimited access to Claude, a junior engineer’s output can be 5-10 times what it was before. The efficiency boost is driven by a profound change in roles.

Industry Race and Transformation: The Giants’ New Round of Competition

Over the past year, the rankings in AI competition have undergone dramatic upheaval. A year ago, the emergence of DeepSeek made Google DeepMind seem somewhat behind. Facing doubts, Demis openly acknowledged it was “an extraordinary year,” but he confidently stated that DeepMind has the deepest research reserves and is regaining the top spot through new models like Gemini 3.

Meanwhile, the independent “model makers” like Anthropic have shown remarkable growth trajectories. Their revenue data over the past three years is nearly exponential:

  • 2023: from $0 to $100 million
  • 2024: from $100 million to $1 billion
  • 2025: from $1 billion to $10 billion
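The trajectory above is a clean tenfold increase each year. A quick sanity check using the year-end figures as quoted in the article:

```python
# Sanity check on the quoted revenue trajectory: each year-over-year
# step ($100M -> $1B -> $10B) is a 10x multiple.
revenue_usd = {2023: 100e6, 2024: 1e9, 2025: 10e9}  # year-end figures from the article
years = sorted(revenue_usd)
growth = [revenue_usd[b] / revenue_usd[a] for a, b in zip(years, years[1:])]
print(growth)  # -> [10.0, 10.0]
```

That sustained 10x annual multiple is what makes the curve “nearly exponential” rather than merely fast.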

Dario admits this sounds crazy, and the company is trying to build something on par with the scale of the world’s largest tech giants from scratch. He emphasizes that both Anthropic and Google DeepMind share a common trait: they are research-driven, aiming to solve major scientific problems as their guiding star. This company model is key to future success.

Inescapable Ethical Dilemmas: Deception and Explainability

As AI’s coding capabilities approach those of seasoned programmers, a hidden risk emerges. AI models are beginning to demonstrate deception and duplicity.

Dario Amodei states that from the very founding of Anthropic, the team has been deeply engaged on this battleground, pioneering the field of mechanistic interpretability. Over the past year, they have documented more problematic behavior patterns and are working urgently to correct them using interpretability techniques. He firmly believes the risks are real but solvable.

Demis Hassabis also considers this “highly solvable”: given enough time, focus, and cooperation from humanity, AI can be safely managed.

One participant raised the famous “Fermi paradox”: given the vastness of the universe, why haven’t we seen extraterrestrial life? Could it be that all advanced civilizations have been destroyed by their own AI?

Demis directly rebutted: if advanced civilizations were destroyed by AI, we should see “paperclip” structures or massive Dyson spheres everywhere, but we see nothing. He leans toward the belief that humanity has already passed the “Great Filter,” and the future remains in human hands.

Final Reflection: The Path of Adaptation for Programmers

Both leaders gave highly consistent answers about the changes expected in the coming year: the most critical variable is whether the “AI building AI” closed loop can truly form.

Hassabis also looks forward to breakthroughs beyond self-evolution—such as world models, continuous learning, and breakthroughs in robotics. Perhaps these advances could extend the timeline and give humanity a breathing space.

But the anxiety reflected in Dario’s eyes reveals a harsh truth: on the road to AGI, “slowing down” has never been an option.

Programmers are at a turning point. This is not a disappearance but a pivot. Their work is shifting from “I write code” to “I direct AI to write code,” and then to “I optimize AI’s coding capabilities.” The speed of this transformation is surpassing most expectations, with the 6-12 month window rapidly closing.

This Davos dialogue, rather than a mere exchange of viewpoints, is a collective warning. Whether it’s Dario’s aggressive 2026 or Hassabis’s cautious 2030, the endpoint is already clearly visible. Before AGI arrives, the journey of programmers’ adaptation has already begun.
