Silicon Valley says 2025 is the year AI grows up and gets a job. Tech leaders envision armies of AI agents — autonomous digital workers that can actually get things done, not just chat. But after years of bold AI promises and mixed results, Wall Street analysts are questioning whether this trillion-dollar bet will finally pay off.
On one side stand tech leaders like Marc Benioff, Salesforce’s (CRM) CEO and one of tech’s most ardent AI optimists, who took to the cover of Time (which he owns) to declare we’re entering an “Agentic Era” — one in which autonomous AI workers will unlock “massive capacity” and fundamentally redefine work as we know it.
On the other, Goldman Sachs warns that the roughly $1 trillion being poured into AI infrastructure may yield surprisingly modest returns, with skeptics arguing the technology isn’t yet capable of solving the complex problems needed to justify such massive investment.
According to Gartner’s research, about a third of enterprise software applications will include some form of agentic AI by 2028, up from less than 1% today. The firm also predicts that at least 15% of day-to-day work decisions will be made autonomously through agentic AI by 2028, compared to essentially zero today.
But significant challenges stand in the way of wide release. Early deployments are likely to focus on narrow, well-defined tasks such as software development, customer service automation, and supply chain optimization. Even in these controlled environments, getting agents to work reliably remains a significant hurdle.
Companies are discovering they need multiple layers of security, including “guardian agents” that monitor other AI agents’ activities for mistakes or unauthorized actions. When organizations deploy thousands of agents, the complexity of managing their security and monitoring their activities becomes a major challenge that could require entirely new platforms just for oversight.
These challenges are compounded by the fundamental limitations of the AI technology itself. The same issues that plague large language models, such as hallucinations and inconsistent outputs, become even more problematic when AI systems are empowered to take actions on their own.
“It’s not easy to get AI agents to work when they’re using a large language model as the way to create the plan of how to take action,” Coshow said. “We all know large language models can be tricky, and so it’s the same for AI agents.”
The road ahead may be longer and more nuanced than either extreme forecast suggests, according to Sampsa Samila, academic director of IESE Business School’s AI and the Future of Management Initiative in Barcelona, which has trained executives on AI implementation since 2019.
“I personally think this is transformational, this will change the way we work,” Samila said. “But how quickly will that happen? Probably in the order of 10 years, not in the order of one year.”
Samila pointed to historical examples like the electrification of factories, which took 30 years to fully transform manufacturing. While he acknowledged ChatGPT’s record-breaking adoption rate, he noted that two years after its launch, fundamental changes to how we work remain limited. “It has speeded up my productivity, but I still do the same things,” Samila said. “I just do them faster. I do more of it, but it hasn’t changed what I do in a significant sense.”
This slower timeline may repeat with AI agents, he suggested. While companies rush to promote autonomous AI assistants as the next breakthrough, Samila remains cautious about immediate transformation. The more likely scenario for 2025 may be incremental improvements in productivity and efficiency, rather than wholesale reinvention of work processes. “Every time we see these AI agents, they seem like a big leap, fantastic,” he said. “But then, in the short run, the change is never as drastic.”