What organizations are spending millions not to say out loud — and why the silence is costing them far more than the technology ever will.

I’m about to board a flight to Mumbai, then Colombo, before traveling across the subcontinent. One of the books I’ve packed for the journey is Albert Camus’ The Plague — and somewhere over the Arabian Sea, I will reread it. Camus wasn’t writing about a disease. He was writing about denial. About the particular human genius for looking directly at a catastrophe and deciding, collectively, that it probably isn’t one.

That felt like the right undertone for this week’s post. Because for the next several weeks I’ll be sitting across from boards, executive teams, HR leaders, and educational leaders who are all wrestling with the same crisis — not AI itself, but the organizational reckoning it is forcing: a confrontation with their own relevance, readiness, and survival. And the most striking thing, every time, is not the urgency in the room. It is the elaborate, well-funded, professionally managed denial.

White Elephant 01: AI Is an X-Ray for Leadership Incompetence — and the Only Antidote Is Development, Not More Tools

Let’s be direct: AI does not expose technology gaps first. It exposes decision-making gaps, strategic clarity gaps, and leadership development deficits — at the very top — before it touches anything else.

When you introduce AI into an organization running on vague strategy and managers who survive by controlling information rather than generating insight, AI makes all of it immediately and brutally legible. The quality of thinking becomes visible. The gap between those who were performing competence and those who actually possess it becomes impossible to ignore.

The instinctive response is to throw tools and resources at the problem. Buy the platform. Commission the vendor demo. Launch the pilot. These are displacement activities. They create motion without transformation. You cannot tool your way out of a leadership development deficit.

The question is not which AI platform to buy — or even which to build. The question is whether your leadership has been developed to the standard that AI now demands.

This is where the long game diverges sharply from the short one. Organizations chasing vendor platforms are renting capability they do not understand, at a cost that compounds with every renewal cycle. Organizations investing in developing their own people to understand, evaluate, and eventually build tailored AI solutions are accumulating something vendors cannot sell: institutional intelligence. Highly personalized, organizationally embedded tools built by teams who know the domain, the data, and the decision context will outperform generic platforms every time — but only if the human capability to build and govern them exists first.

That capability does not come from a tooling budget. It comes from deliberate, sustained professional development at every tier of leadership. When that foundation is weak, AI does not compensate — it amplifies every gap, faster.

The Real Investment Question
Before spending another dollar on AI infrastructure: what is your current annual investment in structured leadership and professional development at every tier? If the answer is “minimal,” you are building on sand. Develop your people first — with rigor and intention — and the right tools will follow. Reverse that order at your peril.

◆ ◆ ◆

White Elephant 02: Your “AI Strategy” Is a Fear-Management Strategy in Disguise

Read most AI roadmaps carefully and a pattern emerges. The language is ambitious — transformation, competitive differentiation, future-readiness. But the actual initiatives are almost entirely defensive: governance frameworks, risk registers, responsible use policies, ethics committees, pilot programs that stay permanently in pilot.

This is not an AI strategy. It is an anxiety-containment strategy dressed in the vocabulary of innovation.

The fear is real. Move too fast and something goes wrong — reputational exposure, regulatory risk. Move too slow and a competitor accelerates — a different board conversation. So organizations build elaborate frameworks that create the appearance of forward motion while systematically avoiding the decisions that would actually produce it.

The governance meeting is not where AI strategies are born. It is where they are safely entombed.

The tell is in the resource allocation. Organizations genuinely committed to transformation fund learning, experimentation, and failure tolerance. Fear-management organizations fund oversight infrastructure — committees, audits, vendor panels — that consumes energy without producing capability.

Ask yourself: In the past twelve months, what has your AI governance structure actually enabled? What did it permit that would not have happened without it? If the answer is nothing, you are not managing AI. You are managing anxiety about it. That is a legitimate thing to manage. It is just not a strategy.

Diagnostic Signal
Count the ratio of governance meetings to deployment decisions over the past six months. Above 3:1, your organization is using process to substitute for judgment. Governance should accelerate responsible action — not provide institutional cover for inaction.

◆ ◆ ◆

White Elephant 03: Letting Employees “Just Use AI” Is Not Empowerment. It Is Abdication.

While leadership debates policy and IT finalizes its approved-tool list, the marketing manager is drafting all her briefs with Claude. The analyst is generating report commentary with GPT. The customer service lead is training his team on prompts he found on YouTube at 11pm. The underground AI economy inside your organization is already running at scale — and leadership has sanctioned it, mostly through silence.

Some call this “democratization.” Some call it “agility.” It is, in practice, a short-term fix building a long-term catastrophe.

You are not empowering your people by letting them use AI without alignment. You are outsourcing your organizational intelligence to large language models you have never evaluated, governed, or understood.

Here is what unmanaged AI adoption looks like at scale, eighteen months from now. Half your workforce is operating as AI-augmented agents — but you do not know which models they are using, with what prompts, against what data, or with what level of critical judgment applied to outputs. No organizational learning is being captured. No prompt standards exist. Confidential information — client data, unreleased financials, legal correspondence — is potentially flowing through unvetted external systems. And you have zero institutional memory of any of it, because every individual has built their own private AI practice in isolation.

The financial, regulatory, and ethical exposure is not theoretical. Across Asia and most global markets, regulatory frameworks are tightening fast. And the reputational exposure, when something surfaces publicly, will arrive without warning.

The core error: You cannot build organizational AI capability by aggregating individual AI habits. Capability is collective, aligned, and intentional. Habit is individual, invisible, and ungoverned. Conflating them is one of the most dangerous mistakes in AI adoption today.

The organizations that win will not be those that gave people the most freedom to experiment in isolation. They will be the ones that built a strategic framework — clear principles, deliberate training, aligned prompting standards, defined boundaries — and developed their people rigorously within it. Freedom without architecture is not transformation. It is entropy with a productivity veneer.

What Governance Actually Looks Like
An AI governance framework is not a list of prohibited tools. It is a structured answer to four questions: What are we using AI for, and why? Which models are sanctioned, against what data? How are we developing people to use AI with judgment and accountability? And how are we capturing what we are collectively learning? If you cannot answer all four today, you do not have an AI strategy — you have an AI situation.

◆ ◆ ◆

White Elephant 04: AI Is Handing You Back Time You Have Already Promised to Waste

When AI delivers what it promises — real time savings, genuine efficiency gains — most organizations do the single most wasteful thing possible: they pour that reclaimed time straight back into more of the same work. More reports. More emails. More meetings to review AI outputs. The efficiency gain evaporates into volume, and the structural conditions creating overload remain untouched.

The point of reclaiming time from AI is not to work more. It is to finally do the things your organization has always claimed it had no time for — developing people, deepening partnerships, redesigning business models, building the future instead of administering the present.

If AI can return 20–30% of a knowledge worker’s time, that is not a productivity statistic. It is a strategic resource of extraordinary value — and most organizations have no plan for it. The question every leadership team must answer explicitly, before deploying AI at scale, is: what will we do with what AI gives back?

Will you invest it in structured development and mentoring — the first casualty of organizational busyness? In the partner and client relationships that transactional work has crowded out? In redesigning business models running on inertia? These require decisions, commitments, and budget allocations made in advance — or the time will be silently absorbed back into the machine.

Organizations that punish experimentation while claiming transformation fall into the same trap: the reclaimed time never materializes because the incentive system never changes. Performance management still rewards volume. Promotions still go to the visibly busy. AI’s efficiency is taxed at 100% by a culture that cannot imagine what it means to work differently.

The Question That Changes Everything
Ask your senior team directly: “If AI returns us 25% of our collective bandwidth in the next 12 months, what are we committed to doing with it that we are not doing today?” Write the answers. Assign ownership. Build it into your operating plan. If you cannot answer before deployment, you are not ready — because you have not yet decided what transformation actually means.

◆ ◆ ◆

White Elephant 05: The New Scarcest Resource Is Not Intelligence. It Is Judgment at Speed.

Every conversation about AI and the workforce eventually collapses into the same exhausted framing: which jobs survive, which do not, who is safe. That question is the wrong one. The question that determines organizational competitiveness in the next decade is sharper:

Who in your organization can exercise sound judgment on AI-generated outputs, at the pace AI now demands?

This is not about who can use the tools. Tools are learnable in days. It is about who has the intellectual discipline to interrogate an AI output — to recognize what is missing, what is distorted, what looks plausible but is wrong, and what decision to make given that uncertainty. That capability is rare, developed slowly, and almost universally being neglected.

AI does not need fewer humans. It needs better-developed ones — people who can think alongside it without deferring to it, challenge it without dismissing it, and integrate its outputs into judgment without outsourcing the judgment itself.

The compounding risk: as AI outputs become more fluent and more superficially convincing, the threshold of judgment required to evaluate them rises — not falls. An organization whose people are trained only to use AI, but not to interrogate it, will produce increasingly polished errors at increasing speed. The audit will come. The question is whether your people are developed enough to catch the problem before it does.

Judgment at speed is not a trait. It is a capability — built through deliberate development, structured practice, and cultures that reward intellectual honesty over confident performance. Organizations that invest in it now will accumulate an advantage no tooling budget can replicate or acquire.

The Development Imperative
Find the people in your organization who naturally interrogate — who ask the second question, who challenge the confident answer, who sit with uncertainty without rushing to resolution. These are your most valuable AI-era assets, and they are almost certainly not being developed, recognized, or retained with that in mind. Build your human development strategy around this capability. It is the one thing AI cannot generate for you.

What I Will Say When I Land

In Mumbai, Colombo, and across the subcontinent, I will sit with organizations at very different stages of this journey. Some are at the beginning. Some are mid-struggle. A few are further ahead — and those are always the most instructive conversations, because they have already confronted what the others are still avoiding.

What I will tell all of them is the same: the bottleneck is never the technology. The technology is available, improving, and getting cheaper by the quarter. The bottleneck is the willingness to see the organization as it actually is — its real culture, its actual incentives, its genuine leadership capability — and to make decisions based on that truth rather than the version in last year’s strategy document.

The organizations that lead in the AI era will not have the most sophisticated tools or the most impressive governance architecture. They will be the ones whose leadership had the clarity to name the elephants — and the courage to act on what they saw.

Camus ends The Plague with a warning that has never aged. His narrator watches the celebrations in the streets of Oran as the gates reopen — and feels not relief, but dread. Because he knows what the celebrating crowds have already forgotten: that the plague never dies, never disappears, that it can lie dormant for years in furniture and linen-closets, that it waits patiently in bedrooms, cellars, and trunks — biding its time.

The white elephants in your organization are not so different. They do not announce themselves. They do not demand attention. They are simply there — in the boardroom, in the strategy document, in the performance review that has not changed in a decade. Patient. Quiet. Entirely content to wait.

The question Camus was really asking — and the one my team and I will continue asking as long as we are doing this work — is not whether the threat is real. It is whether, this time, you will act before the gates close again.
