
October 15, 2025

Karl Fridlycke, Lead Gen AI Strategist, gives leadership a practical way to understand why to move now, when to use agents rather than other tools, and how to greenlight a first production step that will stand up over time.

Most companies now report using AI somewhere (McKinsey puts it at 78%), yet only a small fraction convert it into measurable value (Boston Consulting Group estimates roughly 5% realizing impact at scale). This piece explains why now is the moment to close that gap, and how to do it without hype or drama.

If you do not yet have a controlled agent pilot underway, you are already behind on the learning curve. Not because the models suddenly became magical, but because your stack matured. Because governance matured. Because the work you need now stretches across tools and time. The point is not speed for its own sake. The point is to establish guided experimentation, so you learn where agents belong, and where they do not, while risk stays contained and reversibility stays high.

You no longer need a shadow stack to try agentic work.

The enterprise stack has matured. What used to be scratch-built and unsustainable now exists out of the box – with plug-and-deploy solutions available in the platforms you already run. Identity, policy, observability, logging, key management, access control, versioning – ready on day one. At the same time, task horizons have changed. Agents can pursue goals over longer periods, plan, use tools, coordinate. And the guardrails are no longer an afterthought: evals, traceability, small blast radius, human review when it matters. None of this forces you into agents. It lets you choose them when they are the best instrument for a real problem.

Lead with the problem, not the technology.

Do not introduce agents to introduce agents. Start with the problem. You do not buy a drill; you buy a wall put up straight and true. The toolbox matters – automation, assistants, agents – but you hire the job, not the tool. Sometimes the cleanest fix is mechanical. A set screw beats any model when it removes the root cause. When logic is crisp and deterministic, classic automation will usually win on stability and cost. When it is a single bounded answer, an assistant or a search workflow is enough. Use an agent when the work benefits from sustained goal pursuit across tools within clear constraints. An agent is a means, not the goal.
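One way to make that decision rule concrete is a minimal sketch like the one below. The predicates and their names are illustrative assumptions, not a formal rubric – the point is that the simplest instrument that does the job wins.

```python
# Illustrative sketch of "hire the job, not the tool".
# The predicates are assumptions for the example, not a prescribed rubric.

def choose_instrument(
    logic_is_deterministic: bool,
    answer_is_single_and_bounded: bool,
    needs_sustained_goal_pursuit: bool,
    crosses_multiple_tools: bool,
) -> str:
    """Pick the simplest instrument that does the job."""
    if logic_is_deterministic:
        return "classic automation"   # stable, cheap, repeatable
    if answer_is_single_and_bounded:
        return "assistant"            # conversational help, low autonomy
    if needs_sustained_goal_pursuit and crosses_multiple_tools:
        return "agent"                # goal pursuit across tools, within constraints
    return "reconsider the problem"   # maybe the cleanest fix is mechanical

# Example: a crisp, rule-based reconciliation task
print(choose_instrument(True, False, False, False))  # -> classic automation
```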

Clarity lowers risk. Keep a shared language in the room. Automation is fixed logic with repeatable outcomes. An assistant is conversational help for short tasks with low autonomy. An agent acts toward a goal, plans, calls tools, and keeps state over time. In production, keep a human in the loop where it matters and set an explicit allowance for autopilot inside tight boundaries. Widen or tighten that allowance as evidence accumulates. Start where maturity is real and the surface area is small. Expand when the numbers, and the reviews, say so.
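What an explicit autopilot allowance can look like, as a hedged sketch – the field names and default values are invented for illustration, not a standard schema:

```python
# Minimal sketch of an explicit "autopilot allowance" for an agent in production.
# Field names and values are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class AutopilotAllowance:
    allowed_tools: set[str]                 # least privilege: only what the job needs
    max_steps_without_review: int = 5       # human in the loop beyond this point
    max_spend_per_run_usd: float = 1.0      # hard budget keeps the blast radius small
    requires_human_approval: set[str] = field(
        default_factory=lambda: {"send_email", "write_to_production"}
    )

    def widen(self, extra_steps: int) -> None:
        """Loosen the allowance as evidence accumulates."""
        self.max_steps_without_review += extra_steps

    def tighten(self, fewer_steps: int) -> None:
        """Pull back when the numbers, or the reviews, say so."""
        self.max_steps_without_review = max(1, self.max_steps_without_review - fewer_steps)

# Start narrow; widen only when the evidence supports it.
allowance = AutopilotAllowance(allowed_tools={"search_tickets", "draft_reply"})
allowance.widen(2)
```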

Before anything reaches real users, act like production from the start. Contain risk by design. Keep access on least privilege. Instrument every step so decisions can be reviewed. Define the success signal. Capture a baseline. Keep the feedback loop short enough to steer. Make rollback easy and owned by name. None of that is bureaucracy. It is what makes “now” possible.
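What "instrument every step" can mean in practice, as a minimal sketch – the record format, file name, and named rollback owner are assumptions for the example:

```python
# Illustrative sketch: log every agent step so decisions can be reviewed later.
# The record format and the named rollback owner are assumptions for the example.
import json
import time

ROLLBACK_OWNER = "jane.doe@example.com"  # rollback is easy and owned by name

def log_step(run_id: str, step: int, tool: str, inputs: dict, outputs: dict) -> None:
    """Append a reviewable trace record for one agent decision."""
    record = {
        "run_id": run_id,
        "step": step,
        "tool": tool,
        "inputs": inputs,
        "outputs": outputs,
        "timestamp": time.time(),
        "rollback_owner": ROLLBACK_OWNER,
    }
    with open("agent_trace.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Every tool call gets a record; the feedback loop stays short enough to steer.
log_step("pilot-001", 1, "search_tickets", {"query": "refund"}, {"hits": 3})
```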

Culture will decide whether the shift pays off. As I have argued before: judge outcomes rather than provenance, blind the obvious choke points for bias, and normalize assistance where it clearly raises quality or reduces lead time. Do it in policy and in practice. That is how adoption stays honest – and fast – without another spacecraft crashing because the trajectory was wrong.

What leadership should do now is straightforward and measured. Acknowledge why it is now: platform maturity, longer task horizons, real governance. Then behave like builders. Hire the job, not the tool. Be willing to conclude that a simple assistant or a piece of classic automation is the right answer when it is. Keep a human in the loop where it matters and be explicit about the autopilot you allow. Treat reviewability as a design constraint, not a post-mortem wish. Run one carefully scoped pilot where maturity already exists, publish the deltas that matter, and decide – scale, pause, or retire. The point, throughout, is simple: it is “now” because the stack and governance have matured – it works because you lead with the problem. Make way for AI in Action – you can learn more about how some of our clients are already putting these principles into practice.

If this helped make the landscape a little clearer, use it. Quote it. Share it with your leadership team. That’s how useful thinking travels.

Karl Fridlycke

Lead Gen AI Strategist
