Your AI is running.
Your P&L isn’t moving.

I’m Jitin Kapila. I find exactly where your AI investment stops returning value, and fix the decision logic causing it. I call it the Logic Leak.

30 minutes. No pitch. No deck. You’ll leave knowing whether you have a problem and roughly where it is.

Systems Designed for Leaders at:
Mercedes Benz
L'Oréal
Maruti Suzuki
British Telecom
Harman
Zespri
Betsson
ALJ
Zeta
Dish

For 15 years (before AI was a buzzword), I’ve been building the systems that run inside Fortune 500 companies.


Most AI pilots don’t fail because the technology is wrong. They fail because nobody mapped where it should connect to the P&L.


That gap is the specific point where data exists but never reaches the decision it should inform. Finding it and fixing it is what I do. I call it the Logic Leak.


$90M+ in AI portfolios. Not as a consultant who hands over decks. As the person who architects the logic, owns the P&L impact, and can explain it to the CFO and the engineering team in the same room.


The gearbox doesn’t fix itself. Neither does a Logic Leak.



The Problem Nobody Is Naming

Organisations run AI pilots that succeed in isolation but fail in production. The few that launch rarely move the P&L.

Post-mortems usually blame the vendor, the data, or user adoption. That is a comfortable excuse. The harder truth is that most organisations build AI before they understand what they are building it for.

Every operation has a specific point where data exists, but the intelligence never reaches the decision it should inform. Models run in isolation. Vendors deliver exactly what was contracted, but the wrong problem was specified. The structural connection between data and decision is missing.

That gap has a cost.

The inventory left on the wrong shelf, the defects reaching the field, the platform procured without a quantified use case: these are not technology failures.

They are architectural failures.

Finding the missing connection and building the architecture to close it: that is the real work.

“The gap between a pilot that works and one that moves the P&L has nothing to do with the AI model.”


The method

Three steps. In sequence. Every engagement runs on these.

Step 1 - Test the use case before building it

Most organisations skip straight to build. Before any commitment, three questions need answers:
- Is this the right problem to solve given the operational constraint?
- Does the data actually exist to support it?
- Is this the right priority given what else is in motion?

Most bad vendor contracts begin precisely here, in the gap between a good idea and a tested one.

Step 2 - Specify it in business terms

Every use case worth building can be written in four terms: what outcome is being optimised, what business logic connects input to decision, what operational constraints the solution must respect, and what data actually exists. If it cannot be written this way, it is not yet a use case. It is a hypothesis.

Step 3 - Put a number on it before spending

A structured model that converts a use case into a P&L figure, before even a line of code is written. Not a range of assumptions. A specific, defensible number built from the actual constraints of the business, in language a CFO can interrogate without a data scientist in the room.
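For illustration only, the shape of such a model can be very simple: the annual cost of the decision error, times the fraction the intervention can defensibly remove, minus the cost to run the system. Every name and figure below is a hypothetical placeholder, not client data.

```python
# Illustrative only: a toy P&L model for an AI use case.
# All names and figures are hypothetical placeholders, not client data.

def use_case_value(annual_error_cost: float,
                   error_reduction: float,
                   annual_run_cost: float) -> float:
    """Annual P&L impact = decision-error cost removed, minus cost to operate."""
    return annual_error_cost * error_reduction - annual_run_cost

# Example: $10M/year of excess inventory carrying cost, a defensible
# 25% reduction, and $500k/year to run the system.
impact = use_case_value(10_000_000, 0.25, 500_000)
print(f"${impact:,.0f}/year")  # → $2,000,000/year
```

The point is not the arithmetic; it is that every input is an operational fact a CFO can interrogate, not a model assumption.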


Where it’s been applied

CASE STUDY • Retail

Global FMCG - $35M inventory correction

Fortune 500. Beauty and personal care. SA-MENA region.

Four previous vendors had attempted to solve a forecast accuracy problem below 60%. The stated issue was model quality. The actual issue was signal contamination: the entire supply chain was reacting to delivery data, a lagging indicator polluted by 3–10 days of logistical noise rather than order intent.

The intervention reframed the problem from forecasting sales to modelling customer intent. Outcome: forecast accuracy from below 60% to 94%. Planning cycle from 7 days to 2. $36M in annual value recovered, in the form of $24M in inventory correction and $12M in margin expansion.

Metric | Before | After
Forecast accuracy | < 60% | 94%
Data signals | Sales only | Sales, spend, macro factors, etc.
Planning cycle | 7 days | 2 days
Annual value recovered | - | $35M

Read the full case →

CASE STUDY • Automotive

Automotive OEM - 100% quality inspection

One of the world’s largest automotive plants. ~100,000 units/month, i.e. ~1.2M units/year.

The plant was running statistical sampling on quality control, catching defects reactively, because existing ML models took 3 minutes to produce a result on a production line with a 30-second cycle. The internal team was applying computer vision logic to a signal processing problem.

The fix came from cardiology, not industrial AI. Engine vibration data behaves like a heartbeat. A defect is an arrhythmia. By applying biomedical signal processing instead of deep learning, inference dropped from 3 minutes to under 5 seconds. The plant moved from sampling to 100% digital inspection across 1.2 million units per year.

Metric | Before | After
Inference time | 3 minutes | < 5 seconds
Inspection coverage | Statistical sampling | 100% (1.2M units/year)
Critical faults caught | Reactive / field claims | ~100 per year, prevented
Line stoppages | Frequent | Zero

Read the full case →


$35M

Annual value recovered, large FMCG

94%

Forecast accuracy (from below 60%)

2 days

Planning cycle (down from 7 days)

1.2M

Units inspected annually, automotive


“Four vendors attempted to solve the same forecasting problem, trying better models each time, yet failed. The fix was identifying that the entire system was trained on the wrong input variable.”

  • Jitin Kapila, on the global FMCG engagement ($36M outcome)


What clients say

“15% profit margin growth in six months. We’d spent 18 months on dashboards that told us what happened. Jitin shifted us to systems that change what happens next.”

  • VP Operations, Zespri

“He doesn’t deliver a deck and disappear. He stays until the logic is in the system and the team can run it. That’s rare.”

  • Nikhil Jain, Head of Tech Accelerator & AI Innovation, L’Oréal



Three ways to work with me

Phase 1 - Find the problem worth solving

AI Profit OS, Executive AI Defense System

For VPs, COOs, CFOs, and senior leaders who need clarity before commitment. A 3-day sprint that produces a ranked use case list, an ROI formula anchored to your specific P&L, and a board-ready investment memo. The defense system that prevents bad vendor contracts.

From $1,500 per seat

See the full programme →

Phase 2 - Roadmap the solution

AI Strategy Audit

For organisations that know the problem and need the architecture. A 2–3 week Red Team diagnostic that identifies where operations are failing and builds a 12-month implementation roadmap. Leadership gets the numbers. The technical team gets the blueprints.

From $2,500

Request a briefing →

Phase 3 - Architect what to build

Consulting & Fractional CTO

For organisations with active AI programmes that need an architect in the room and not another vendor. Monthly retainer or 6–12 month embedded engagement. Vendor review. Technical proposal oversight. Scope management. Board communication.

Custom engagement

Book a discovery call →


Not sure which fits your situation? A 30-minute Clarity Call costs nothing and ends with a specific answer: whether you have a Logic Leak, and roughly where. No pitch. No deck. Book a Clarity Call →




The background

I’m Jitin Kapila. I trained as a mechanical engineer and have spent fifteen years working at the intersection of AI strategy and operational delivery across manufacturing, FMCG, logistics, telecom, and automotive.

The companies whose logos appear above are ones I’ve built systems for. Not advised on strategy decks. Built for.

I still debug production systems. I read the code. I understand what the data science team is actually saying, and I can translate it into language the CFO can act on. That is not a common combination.

What I am not: a vendor. I don’t sell platforms, I don’t take referral fees, and I don’t recommend technology I haven’t evaluated against the specific constraints of your business.

More about me → · LinkedIn → · Read the blog →

Jitin Kapila




The Weekly AI Decision Brief

Every Wednesday: the question your AI budget should be answering and isn’t. Frameworks from actual deployments. Case studies with real numbers. Written for operations leaders.

Get the first issue →


Not ready for a call?

Take the AI Profit Quotient, a 12-question diagnostic that scores your operation across all three A.R.T. dimensions and tells you exactly where the leak is.

At $93, take the AI Profit Quotient →

I take a limited number of new diagnostic conversations each month.


Common Questions

What does an AI strategy consultant do?

An AI strategy consultant identifies which operational problems are worth solving with AI, defines those problems in terms a technical team can build from, quantifies the expected return before investment, and provides independent guidance that is not tied to any vendor or platform. The role bridges the gap between what an engineering team can build and what a leadership team needs to justify.
What is a Logic Leak?

A Logic Leak is the specific point in an organisation’s operations where data exists and decisions are being made, but intelligence is not flowing between them. The data that could improve a decision is not reaching it, due to structural, architectural, or process gaps. Identifying and closing the Logic Leak is the central diagnostic task in most AI strategy engagements.
How do I know if my organisation needs this?

Three signals indicate the need: (1) your organisation has run AI pilots that produced results in isolation but have not moved the P&L; (2) you are being asked to make an AI investment decision without a framework for evaluating whether it is the right one; or (3) an active AI programme is underway but results are not tracking to the original business case.
Which industries and regions do you work in?

Manufacturing, FMCG, logistics, telecom, and automotive, with engagements across Australia, New Zealand, the UK, Europe, the US, and the Middle East. All engagements run remotely. The diagnostic frameworks travel across sectors; the specific application is always built for the operational context.
Do you build the solution yourself?

I architect the solution, quantify the expected return, and oversee the build, acting as an advisory lead or Fractional CTO. I do not write the production code or sell proprietary platforms, though I can help teams with coding paradigms, structure, and best practices. My value is in independent specification and vendor management, ensuring the technical team builds exactly what the P&L requires, without scope creep or vendor lock-in.
How is ROI calculated before anything is built?

ROI is calculated by mapping your current operational baseline (e.g., defect rates, inventory holding costs, manual processing time) against the known capabilities of standard AI architectures. We don’t guess what the model will do; we calculate what the business process must achieve to justify the investment, establishing a strict performance target for the build.
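As a toy illustration of working backwards from the P&L to a performance target (every name and number below is hypothetical, not from any engagement): given the annual cost of a baseline problem, the annual cost of the system, and the return the investment must earn, you can derive the minimum share of that cost the build has to remove.

```python
# Illustrative only: derive the performance target a build must hit,
# instead of guessing what a model will do. All numbers hypothetical.

def required_capture_rate(annual_baseline_cost: float,
                          annual_system_cost: float,
                          target_roi: float = 1.0) -> float:
    """Share of the baseline cost the system must remove to hit the target ROI."""
    return annual_system_cost * (1 + target_roi) / annual_baseline_cost

# Example: $4M/year lost to field defects, a $400k/year system,
# and a required 2x return on that spend.
rate = required_capture_rate(4_000_000, 400_000, target_roi=2.0)
print(f"Minimum share of defect cost to capture: {rate:.0%}")
```

The output of this kind of calculation is the strict performance target the text describes: a number the build either hits or does not.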
Why not get an AI strategy from my IT vendor?

An IT vendor’s business model relies on selling software licenses and billable development hours. Asking them for an objective AI strategy often results in a recommendation to buy their platform. Independent strategy consulting separates the diagnosis from the prescription, ensuring your architecture is driven solely by operational constraints.




Subscribe to Weekly AI Decision Brief

One sharp insight on making AI work for your business — every week. Frameworks from actual deployments. Case studies with real numbers. The questions your AI vendor hopes you never ask.

No hype. No vendor pitch. Written for operations leaders, not technology teams.



Back to top