
Software development was one of the first areas to adopt generative AI, but the promised revolution has so far delivered only modest productivity gains, and Bain says only a full rethink of the software lifecycle will shift the dial.
As things stand, generative AI in software development has failed to live up to the hype, the wide-ranging Technology Report 2025 from management consultants Bain & Company says. Two-thirds of software firms have rolled out GenAI tools, but adoption among developers at those firms remains low, and teams using AI assistants report a productivity boost of perhaps 10 to 15 percent.
Meanwhile, another recent study from nonprofit research group Model Evaluation & Threat Research (METR) found that AI coding tools actually made software developers slower, despite expectations to the contrary, because they had to spend time checking for and correcting errors made by the AI.
This is perhaps what Bain & Co means when it notes that the time saved often isn’t redirected toward higher-value work, so even the modest gains that have been made have not translated into positive returns.
Early initiatives focused on using generative AI to produce code faster, but writing and testing code typically accounts for about 25 to 35 percent of the total development process, the report states, so speeding up this stage alone won't do much to reduce time to market. Could greater value be found in applying generative AI across the entire development life cycle?
Nearly every phase ought to benefit, the report authors posit, from the discovery and requirements stages, through planning and design, to testing, deployment, and maintenance. This will call for process changes as well, since code review, integration, and release must keep pace with AI-powered coding, the thinking goes.
At this point, we come to the latest trendy buzzword, “agentic AI.” Until now, generative AI has served as a smart assistant, a copilot with a human in control, the report says, but agentic AI will usher in more autonomous versions that can manage multiple steps of the development process with minimal human intervention.
Bain points to Cognition’s Devin, an AI “software engineer” unveiled last year and touted as capable of building whole applications from natural-language prompts.
However, as The Register has reported, Devin proved to be far from satisfactory at its job, completing just three out of 20 tasks successfully in tests conducted by a group of data scientists earlier this year, and often “getting stuck in technical dead-ends or producing overly complex, unusable solutions.”
Research biz Gartner forecasts that more than 40 percent of agentic AI projects will be cancelled by the end of 2027. And a benchmarking study by Carnegie Mellon finds that for multi-step office tasks, AI agents fail roughly 70 percent of the time.
The Bain report also points to a number of roadblocks that stand in the way of broader generative AI adoption in development.
First is a lack of executive direction: any project is likely to run out of steam if senior leadership doesn't set clear objectives.
But another factor is resistance. Some engineers distrust AI (we can’t imagine why) or worry that it will undermine their role, the report states. Three-quarters of companies say the hardest part of adoption is getting people to change how they work, and overcoming this requires strong change management.
The report flags an inevitable skills gap in areas such as writing prompts and reviewing AI output. Many firms have not bothered with training, the report claims.
A lack of adequate performance tracking is additionally blamed. Without clear key performance indicators, you cannot realistically prove generative AI’s value, the report authors say, and even real productivity gains won’t show up in business terms.
However, tech leaders at a recent Wall Street Journal Leadership Institute Technology Council Summit claimed that it’s nearly impossible to measure general productivity gains from using AI tools. This raises the question of why they are bothering to invest so much money in it.
Bain’s report asserts that to break out of “pilot mode” and get real returns from generative AI, firms must be radical and frame their roadmap as an AI-native reinvention of the software life cycle, integrating it seamlessly into every phase of development.
In other words, corporate leadership needs to be bold with their AI vision, then back it up with clear goals and measurable outcomes to ensure that investment pays off. Some companies already report 25 to 30 percent productivity boosts by pairing generative AI with end-to-end process transformation, the report claims. But that’s a tough call for a manager to make if a pilot project is just not showing the expected benefits. ®