Sentry Co-founder: LLMs Slow Down Long-Term Development Speed; OpenClaw Has Generated Too Much Code to Be Recoverable


CryptoWorld reports that, according to 1M AI News monitoring, Sentry co-founder David Cramer posted on X today that he is "completely convinced" large language models currently do not improve productivity. In his view, LLMs lower the barrier to entry but keep generating increasingly complex, hard-to-maintain code, which in his own experience is slowing down long-term development. Cramer questions "agentic engineering," the approach of letting models automatically generate code and deploy it directly, arguing that the quality of the output code is significantly worse and, once accumulated in large quantities, becomes a net burden. Specific issues include poor performance on incremental development in complex codebases, an inability to generate interfaces that follow language idioms, and "pure slop test generation." He singles out OpenClaw: "If I had to bet, tools like OpenClaw, because they generate too much code, are already beyond recovery," and stresses that "software is still very hard to build; it has never been about minimizing or maximizing lines of code." Cramer adds that these judgments are based mainly on his experience developing features in mature codebases of normal complexity; his recent increase in contributions comes from "finding it interesting," not from the work becoming easier, a change he considers fundamentally psychological, with no real difference in actual time spent.
