
Agentic AI Isn’t Magic — Why Businesses Need a Different Approach

  • Writer: David Cheung
  • Oct 3, 2025
  • 4 min read

Updated: Oct 3, 2025

Over the past year, I’ve sat through countless conversations about “Agentic AI” — and now “Multi-Agentic AI.” These terms sound powerful, even futuristic. But when you look past the jargon, what you often find are approaches that are hard to operationalize: costs are difficult to forecast, outcomes can be hard to explain, and building trust in production settings becomes a real challenge.


My perspective comes not from theory, but from hands-on experience building and testing these solutions.

What Do We Mean by “Agentic”?

At its core, Generative AI (GenAI) is about using large language models (LLMs) to create, summarize, and interact with content.

  • Agentic AI takes this further: it gives the model a “goal” and lets it decide which steps to take.

  • Multi-Agentic AI extends the concept by having multiple agents talk to each other, each handling part of the task.

So when we talk about Agentic AI, we are still talking about GenAI — just wrapped in a particular way of using it. The challenge is not the label, but how sustainable the approach is when applied in the enterprise.
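
The distinction can be sketched in a few lines of Python. Nothing here is a real agent framework — `call_llm` is a stub standing in for any model API — but it shows the key property: in the agentic style, the model itself picks each next action.

```python
# Illustrative sketch only. `call_llm` stands in for any LLM provider call;
# the canned responses below exist purely so the example runs.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model API here.
    canned = {"plan": "search", "search": "done"}
    return canned.get(prompt.split()[0], "done")

def agentic(goal: str, max_steps: int = 5) -> list[str]:
    """Agentic style: the model decides which step comes next."""
    steps = []
    action = call_llm(f"plan {goal}")
    while action != "done" and len(steps) < max_steps:
        steps.append(action)                  # which tool runs is the model's choice
        action = call_llm(f"{action} {goal}")  # and so is whether to continue
    return steps
```

Multi-agentic setups repeat this loop across several such agents that hand work to each other, which multiplies the same dynamic.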

Challenges with Agentic AI

A key challenge with Agentic or Multi-Agentic AI setups is that they place a lot of responsibility on the model itself. You hand over a prompt, the agent decides what steps to take, and the outcome may not always align with expectations.

In practice, four problems keep showing up:

  • Fragile setups — Agents may need to be pre-built for each scenario, which means they can struggle when processes change.

  • Uncertainty — Decisions are not always transparent, so it’s hard to see why one path was chosen over another.

  • Rising costs — Uncontrolled token usage, retries, and cascading failures make costs unpredictable.

  • Trust is harder to build — users and executives are cautious about relying on something they can’t easily explain.
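
A back-of-envelope model makes the cost point concrete. Every number below is made up for illustration — the shape of the math, not the rates, is what matters: retries and model-chosen extra steps multiply call volume, and that multiplier is what you can’t forecast.

```python
# Hypothetical cost model. Prices and token counts are illustrative, not real rates.

def run_cost(tokens_per_call: int, calls: int, retry_rate: float,
             price_per_1k: float) -> float:
    expected_calls = calls * (1 + retry_rate)  # retries inflate call volume
    return expected_calls * tokens_per_call / 1000 * price_per_1k

# A designed flow with 3 planned LLM steps and no retries:
fixed_cost = run_cost(tokens_per_call=2_000, calls=3, retry_rate=0.0,
                      price_per_1k=0.01)
# An agentic run where the model chose 12 steps and half needed retries:
agentic_cost = run_cost(tokens_per_call=2_000, calls=12, retry_rate=0.5,
                        price_per_1k=0.01)
```

Under these assumptions the agentic run costs six times the fixed flow — and unlike the fixed flow, you only learn the multiplier after the fact.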

This isn’t new. If you’ve worked with enterprise technology before — from ERP systems to data warehouses to data science projects — you’ve probably seen a similar pattern: adoption stalls when the technology doesn’t align with how businesses actually operate.

A Process-Oriented AI Alternative: Smart Flow

A process-oriented approach — what I call Smart Flow — takes a different path. To me, Smart Flow means working smart:

  • You, the human, control the flow. You decide the steps and where AI fits in.

  • AI is applied only where reasoning is needed. For example, saving a file to SharePoint doesn’t need AI; interpreting a customer email does.

  • Every step is explainable. Because you design the flow, you can show how the outcome was reached.

  • Cost is optimized. You’re not throwing everything at AI; you’re using it precisely where it adds value.

  • Trust is built in. Transparent, auditable steps make the system understandable to both users and executives.
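
As a sketch, a Smart Flow pipeline for the email example above might look like this. The step order is fixed by the designer, and only the interpretation step would touch an LLM; everything here is stubbed and names like `save_to_sharepoint` are illustrative, not a real API.

```python
# Hypothetical Smart Flow sketch: human-designed steps, AI in exactly one of them.

def interpret_email(text: str) -> str:
    # The one step that needs reasoning — an LLM call in practice,
    # replaced by a keyword check so this example runs standalone.
    return "refund_request" if "refund" in text.lower() else "general_inquiry"

def save_to_sharepoint(record: dict) -> dict:
    # Deterministic step: no AI needed, just a storage API call (stubbed here).
    return {**record, "saved": True}

def smart_flow(email: str) -> dict:
    record = {"email": email}
    record["intent"] = interpret_email(email)  # AI applied only where it adds value
    record = save_to_sharepoint(record)        # plain automation
    return record                              # each step is loggable and auditable
```

Because the flow itself is ordinary code, every step can be logged, tested, and explained — the properties the bullets above describe.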

Why ROI Matters

One of the biggest gaps I see in Agentic or Multi-Agent approaches is ROI. Costs are hard to predict: tokens consumed in ways you didn’t plan, retries piling up, and setups that work in a pilot but need redesign or enhancement when new scenarios are introduced.


When it comes to GenAI, ROI isn’t just a number at the end of the project — it’s about whether the business has a mechanism to track value as it unfolds. The challenge is that every enterprise may need a slightly different approach to make this work. What matters most is being pragmatic: identifying what can be measured, how it ties back to business outcomes, and whether it gives leaders confidence to move forward.

Adoption Factors: Human vs. Agentic vs. Process-Oriented AI (Smart Flow)

Here’s how I explain it to executives: when you break adoption down into seven factors, the differences between Human, Agentic, and Process-Oriented AI become clear.

Setup

  • Human behavior / process: Train and educate people on tasks and processes, ensuring they understand how the work should be done. → Creates a sustainable foundation driven by people.

  • Agentic / Multi-Agentic AI: Predefine and build agents for each scenario. Any change in task or process requires redesign. → Creates an unsustainable, brittle foundation.

  • Process-Oriented AI (Smart Flow): Define processes and tasks once, with flexibility to adjust steps as work evolves. → Creates an adaptable, sustainable foundation.

Execution

  • Human behavior / process: People choose steps to deliver outcomes, making reasoning explainable. → Creates sustainability through explainable outcomes.

  • Agentic / Multi-Agentic AI: Agents decide actions based on prompts, with little visibility to humans. → Creates uncertainty and a lack of confidence.

  • Process-Oriented AI (Smart Flow): Steps are clearly laid out, with AI applied where needed. → Creates confidence through transparency and repeatability.

Flexibility

  • Human behavior / process: People adapt naturally to new situations. → Creates human agility.

  • Agentic / Multi-Agentic AI: Agents often break when context changes. → Creates a lack of agility.

  • Process-Oriented AI (Smart Flow): Steps can be updated and reused easily. → Creates sustainable agility.

Explainability

  • Human behavior / process: Humans can narrate reasoning and decisions. → Creates traceability.

  • Agentic / Multi-Agentic AI: Logic is hidden inside the model, making it difficult to trace. → Creates a lack of traceability.

  • Process-Oriented AI (Smart Flow): Every step is logged and auditable. → Creates full traceability.

Trust

  • Human behavior / process: Built on personal judgment, reputation, and relationships. → Creates subjective trust.

  • Agentic / Multi-Agentic AI: Outcomes are inconsistent and unpredictable. → Creates a lack of trust.

  • Process-Oriented AI (Smart Flow): Trust comes from transparency, repeatability, and verifiable results. → Creates consistent, systemic trust.

Cost

  • Human behavior / process: Predictable in structure (salaries, overhead), but scaling is people-dependent. → Creates predictable but scaling-dependent cost.

  • Agentic / Multi-Agentic AI: Driven by tokens, retries, and inefficiencies, making forecasting hard. → Creates unpredictable cost.

  • Process-Oriented AI (Smart Flow): Processes are managed and controlled, with AI applied only where it makes sense. → Creates optimized, predictable cost.

ROI

  • Human behavior / process: Efficiency and consistency are capped by human capacity; scaling requires more people. → Creates ROI limited by people.

  • Agentic / Multi-Agentic AI: Difficult to scale consistently; rework and oversight reduce clarity. → Creates unmeasurable, uncertain ROI.

  • Process-Oriented AI (Smart Flow): Every step is explicit, measurable, and scalable. → Creates measurable, scalable ROI.

Closing Thought

Agentic AI isn’t “wrong” — but it is incomplete. The real opportunity is not in chasing the latest acronym, but in designing AI that enterprises can explain, audit, and sustain. That’s the difference between a proof-of-concept and a platform you can trust.

