A note before we begin…

A CEO asked me a question recently. It was short — barely a sentence. We were at the end of a conversation about his company’s direction, and he said it almost as an aside, the way people ask questions they’ve been sitting with for a while but haven’t found the right moment to voice.

At UnconstrainED, we work across an unusually wide range of organizations — global corporations, government bodies, community foundations, independent schools, healthcare systems, and everything in between. The questions we field vary enormously in surface form. But underneath the variation, a remarkable number resolve to the same thing.

A simple question from someone in a position of responsibility is rarely a request for a simple answer. It is an invitation to be honest about something complex. This post is my attempt to honor that — the structured answer I would give to a CEO who sat across from me and asked what I consider one of the most consequential questions in business right now.

The question

“How should I think about AI?”

Six words. Buried inside them is almost everything: strategy, cost, competition, people, technology, timing, and risk. The CEO who asks it is really asking several things at once: Is this real, or hype? Are my competitors ahead of me? What should I actually do, and when? And underneath all of that, the question no one says aloud: Am I already behind?

What follows is my honest, structured answer — written for executives, operators, and investors who need insight they can act on, not enthusiasm they have to decode.

01 — Overview / Thesis

This time is structurally different

Every decade produces a technology that executives are told will change everything. Most disappoint. AI will not.

What separates this moment from previous waves — cloud, mobile, big data — is the nature of what is being automated. Prior technologies made processes faster or cheaper. AI is beginning to perform cognitive work: reading contracts, writing code, analyzing financials, managing customer relationships. Cognitive labor is not a support function. It is the core of how most corporations create value.

The shift is also architectural. AI is not being bolted onto existing systems as a feature — it is becoming the operating layer through which data, decisions, and workflows are routed. Companies that understand this early will redesign around it. Those that don’t will retrofit, and retrofitting always costs more.

Three structural changes define the moment: capability has crossed commercial viability; deployment costs have collapsed; and tooling has matured to the point where the gap between a model’s raw capability and a company’s ability to integrate it has narrowed from years to weeks.

“The right mental model is not ‘AI as a tool.’ It is ‘AI as a new category of worker’ — infinitely scalable, works without sleep, costs a fraction of human labor for defined cognitive tasks. The strategic question is: what do you do with that?”

02 — Workflow Transformation

The human-AI workflow is already here

Across corporate functions, the pattern is consistent: AI handles volume, humans handle judgment. But that boundary is shifting quickly.

In operations, AI transforms demand forecasting, logistics routing, and quality control. In marketing, personalization at scale is now table stakes — AI generates variants, segments audiences, and optimizes spend without human intervention at each step. In engineering, AI code assistants deliver 20–40% productivity gains on routine tasks. In finance, AI handles reconciliation and anomaly detection, and drafts management commentary — shifting FP&A teams from data gathering to interpretation. In legal and compliance, contract review and regulatory monitoring are being substantially automated.

The augmentation/replacement framing obscures the more important dynamic: task displacement within roles. No CFO is being replaced by AI. But a significant proportion of tasks that occupied a CFO’s analyst team are being automated. The question is not whether roles disappear — most won’t, near term — but whether organizations capture the freed capacity or let it evaporate into Slack and meetings.

Mental Model

Think of AI as adding a “10x analyst” to every team — one who synthesizes data, drafts outputs, and runs scenarios at speed but cannot yet own a decision. The organizational question is not whether to hire that analyst. It’s whether your managers know how to direct one.

03 — Infrastructure Shift

The evolving AI stack

At the foundation sits compute — GPUs and specialized chips, dominated by Nvidia, distributed via cloud providers. Access is no longer constrained for most enterprises; cost management is the emerging challenge.

Above compute sit the foundation models. OpenAI, Anthropic, Google DeepMind, Meta, and Mistral compete here, but the gap between providers is narrowing. Most enterprises will work with two or three interchangeably within two years. Betting the architecture on one provider is a strategic risk.

The most consequential layer for most enterprises is the application and integration layer: RAG pipelines connecting AI to internal data, orchestration frameworks, agent systems, and proprietary fine-tuned models. This is where defensible differentiation lives.
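To make the RAG idea concrete, here is a minimal sketch of the retrieval step, using a toy term-frequency similarity as a stand-in for a real embedding model. The function names (`retrieve`, `build_prompt`) and the sample documents are illustrative, not any specific framework's API.

```python
# Minimal RAG retrieval sketch. A real pipeline would use an embedding
# model and a vector store; bag-of-words cosine similarity stands in here.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for an embedding model: a term-frequency vector.
    return Counter(re.findall(r"[a-z0-9%]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank internal documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Ground the model's answer in retrieved internal data.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 revenue grew 12% year over year.",
    "The cafeteria menu changes on Mondays.",
    "Churn in the enterprise segment fell to 4%.",
]
print(build_prompt("How did revenue grow?", docs))
```

The defensibility lives in the corpus and the retrieval quality, not in the model call itself — which is exactly why this layer is where differentiation accrues.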

Best practice is a hub-and-spoke model: a centralized AI platform team owning governance and foundation model access, with decentralized deployment of use-case-specific applications on top. IT departments that try to own all AI deployment create bottlenecks. Those that abdicate governance entirely create technical debt. The CIO’s new mandate is not “manage systems” but “manage the AI operating environment” — a different job requiring new skills in model evaluation, data governance, and AI risk.

04 — Business Model Innovation

New models, new moats

The most underappreciated competitive threat to incumbents is not that AI-native startups are smarter — it is that they are unburdened. A startup building a legal research tool designs around AI from day one: the workflow, team structure, cost model, and product. An incumbent grafts AI onto a process designed for billable hours. The physics of transformation favor the greenfield.

Incumbents still hold durable advantages: customer relationships, regulatory licenses, decades of proprietary data, distribution, and brand trust. The competitive game is not lost — but it requires uncommon speed and organizational honesty.

Raw data volume is not defensible. What is defensible is proprietary, structured, high-signal data used to fine-tune AI models in ways competitors cannot replicate. Companies that have treated data as an analytics function are discovering it is a strategic asset requiring commensurate investment.

Case Study: Morgan Stanley

Morgan Stanley deployed an AI assistant giving financial advisors instant access to 100,000+ research documents and internal analyses. It compressed response time from hours to seconds. The competitive advantage is not the model — it is the proprietary content library the model accesses. Replicating that library takes decades, not months.

Case Study: Klarna

Klarna’s AI customer service assistant handled the equivalent workload of 700 full-time agents in its first month, with resolution rates and satisfaction scores matching human agents. The lesson: this technology is not experimental. It is in production at scale, across millions of interactions, at a major consumer financial company.

05 — Revenue Implications

New revenue surfaces

AI expands revenue opportunity in three ways: enabling products that were previously impossible, scaling existing products to previously unreachable customers, and unlocking personalization that increases conversion and retention.

Products previously impossible include AI-powered services delivering high-quality expert output at consumer price points — legal review, financial planning, medical triage, educational tutoring. These markets existed at high prices, accessible to few. AI makes them accessible to many. That is market expansion, not substitution.

AI is also accelerating a pricing transition that SaaS began: from seat-based to usage-based and outcome-based models. When AI performs measurable work at scale, the natural unit is the outcome delivered — contracts reviewed, leads qualified, calls resolved — not the software license. Companies building this capability early will set pricing norms for their categories.
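The pricing shift can be made concrete with a toy comparison. Every number below is a hypothetical placeholder, not a benchmark — the point is only that the revenue unit changes from seats to outcomes.

```python
# Illustrative comparison of seat-based vs outcome-based pricing for an
# AI contract-review service. All prices and volumes are made up.
def seat_based(seats: int, price_per_seat: float) -> float:
    # Classic SaaS: revenue tracks headcount, not work delivered.
    return seats * price_per_seat

def outcome_based(units_of_work: int, price_per_unit: float) -> float:
    # Revenue tracks the unit of work the AI actually performs,
    # e.g. contracts reviewed, leads qualified, calls resolved.
    return units_of_work * price_per_unit

# A 10-person legal team vs the volume an AI system can process monthly.
print(seat_based(10, 500.0))      # 5000.0
print(outcome_based(2000, 4.0))   # 8000.0
```

Under seat pricing, automating work shrinks the billable base; under outcome pricing, it grows it — which is why vendors building measurement and attribution early get to set the norms.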

06 — Cost Structure & Expenses

Where the economics move

The cost reduction opportunity is real, large, and unevenly distributed. Functions with high volumes of structured, repeatable cognitive work will see the largest gains earliest. Customer support is the most mature example: companies report 40–70% reductions in human-handled ticket volume — not by cutting headcount, but by handling far greater volume with the same people.

Long term, demand for human cognitive labor will shift rather than simply contract. Fewer junior analysts; more senior judgment. Fewer support agents; more escalation specialists. Plan for role transformation, not just headcount reduction.

AI also introduces new cost centers: compute at scale, AI talent, data infrastructure, and risk management. The net economics are favorable — but only for companies that manage these deliberately. Undisciplined deployment generates spiraling compute costs and technical debt that erode the savings from labor displacement.
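A back-of-envelope model makes the trade-off explicit. The figures below are illustrative assumptions only; the structure — labor savings netted against the new cost centers — is what matters.

```python
# Toy net-economics model for AI deployment. Every figure is a
# placeholder; substitute your own audited numbers.
def net_annual_savings(labor_savings: float, compute: float,
                       talent: float, data_infra: float,
                       risk_mgmt: float) -> float:
    # Labor displacement gains minus the new AI cost centers.
    return labor_savings - (compute + talent + data_infra + risk_mgmt)

print(net_annual_savings(
    labor_savings=2_000_000,  # freed analyst / support capacity
    compute=400_000,          # inference and fine-tuning at scale
    talent=600_000,           # AI engineering and platform staff
    data_infra=300_000,       # pipelines, storage, governance
    risk_mgmt=200_000,        # evaluation, audit, compliance
))  # 500000
```

Run undisciplined, the four subtracted terms grow faster than the first one — which is the spiral the paragraph above warns about.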

Mental Model

Model AI cost transformation like a power plant conversion: fuel costs drop dramatically, but transition infrastructure is real and front-loaded. Account for it honestly, and the ROI timeline becomes far less surprising.

07 — Time Horizons

What changes, and when

Now — 2 years

Already in production

  • AI-assisted coding
  • Automated customer support (tier 1–2)
  • Personalization at marketing scale
  • Document review & summarization
  • AI co-pilots in finance & legal
  • Enterprise knowledge tools (RAG)
  • Automated reporting

3 — 5 years

Emerging transformation

  • Autonomous AI agents in workflows
  • AI-led product management & R&D
  • Full-stack AI in sales & CRM
  • Outcome-based pricing as standard
  • AI-native enterprise software dominates
  • Deep proprietary model fine-tuning
  • AI-augmented board reporting

5 — 10+ years

Structural scenarios

  • Dramatically smaller white-collar teams
  • AI as primary strategic analyst
  • Self-modifying AI systems
  • Radical compression of company formation
  • Corporate structures redesigned for AI
  • New regulatory regimes for AI labor
  • Human roles redefined around oversight

The short-term changes are not preparation for the real transformation — they are the real transformation, beginning. Companies treating current AI adoption as a pilot phase before the serious work starts are miscalibrating. The serious work is now.

08 — Risks & Constraints

What slows it down — and what breaks it

Technical constraints are real. Large language models hallucinate — producing confident, plausible outputs that are factually wrong. For low-stakes tasks, this is tolerable with human review. For high-stakes decisions in medicine, law, or finance, it requires architectural solutions, not dismissal.
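One such architectural solution is to gate AI outputs by confidence and source verification rather than shipping them directly. The sketch below is a minimal illustration under assumed names — the threshold, the `Draft` structure, and the citation check are all hypothetical design choices, not a standard API.

```python
# Minimal sketch of confidence- and citation-gated routing: low-confidence
# or unverifiable outputs go to human review instead of auto-shipping.
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    confidence: float            # model-reported or scorer-derived, 0..1
    citations: list = field(default_factory=list)  # sources the draft cites

def route(draft: Draft, known_sources: set, threshold: float = 0.8) -> str:
    # Verify every cited source actually exists in the corpus:
    # a fabricated citation is a classic hallucination signature.
    if any(c not in known_sources for c in draft.citations):
        return "human_review"
    if draft.confidence < threshold:
        return "human_review"    # model unsure: escalate to a person
    return "auto_approve"

sources = {"10-K_2024.pdf", "policy_v3.docx"}
ok = Draft("Revenue rose 12%...", 0.92, ["10-K_2024.pdf"])
bad = Draft("Per the 2025 audit...", 0.95, ["audit_2025.pdf"])
print(route(ok, sources))    # auto_approve
print(route(bad, sources))   # human_review
```

The point is that hallucination risk is managed at the workflow level — the model is never the last checkpoint before a high-stakes decision.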

Organizational constraints are the most underestimated barrier to AI ROI. The technology is often ready before the organization is. Successful deployment requires data in a condition AI can use, workflows redesigned for human-AI collaboration, and managers who know how to direct AI-assisted teams. These are change management problems, not technology problems.

Regulatory risk is accelerating. The EU AI Act is in force; US regulations are proliferating; sectors with existing data protection regimes face layered compliance requirements. Deploying AI at scale without a regulatory roadmap is accumulating unpriced liability. And ethical risk is material: biased outputs in hiring or credit decisions, customer service failures at scale — these carry brand and legal consequences. Fairness, transparency, and auditability must be engineering requirements, not afterthoughts.

Case Study: Duolingo

Duolingo shifted its content team from writing individual exercises to reviewing AI-generated ones, dramatically expanding available content without proportional headcount growth. Human editorial oversight caught quality failures before they reached learners. The model works because the failure mode is low-stakes and correctable — a design choice, not an accident.

09 — Strategic Takeaways

What leaders should do now

  1. Audit your cognitive labor inventory. Map which work is repeatable, high-volume, and data-rich. These are your highest-probability AI wins — and your most urgent competitive exposure if competitors automate them first.
  2. Invest in data infrastructure before AI tooling. The most common failure mode is applying AI to data that isn’t structured or accessible enough to support it. Data readiness is the prerequisite, not the afterthought.
  3. Appoint a senior AI executive with technical credibility and business authority — one who sits in the room where strategy is made. AI decisions are strategy decisions.
  4. Design for human-AI collaboration, not replacement. Workflows that treat AI as a junior team member requiring direction outperform those that attempt full automation prematurely. Build governance mechanisms — verification, escalation, monitoring — from the start.
  5. Run a portfolio of bets across time horizons. Some AI investments should have 90-day ROI; others are infrastructure for a 3–5 year capability build. One budget line and one set of evaluation criteria will produce the wrong outcomes.
  6. Build AI fluency at every level. The bottleneck is not budget or technology — it is human capacity to work with AI effectively. Managers who can direct AI tools, evaluate outputs critically, and redesign workflows are the scarcest resource in the next five years.
  7. Develop a regulatory posture, not just a compliance checklist. Companies that engage proactively with regulators and build auditable systems by default will have more operational freedom — and fewer crisis-driven retrofits.

Conclusion

The competitive reset

I want to return to where this began — to that CEO, and to that question. What made it land with me was not its brevity. It was its honesty. Asking “how should I think about AI?” requires intellectual humility that not every leader is willing to show. It is an admission that the ground is uncertain, and that certainty must be earned rather than assumed.

That honesty is the right starting posture. Because the honest answer is that AI is genuinely transformative and genuinely immature, simultaneously. The companies that navigate this well will not be those that move most aggressively. They will be those that move most clearly — with a real understanding of where AI delivers durable advantage, the organizational infrastructure to sustain it, and the discipline to resist the hype that distorts both the opportunity and the risk.

The question for every executive is not “should we invest in AI?” That question has been answered. The question is: “Are we building the infrastructure to compound our AI advantage over time — or making isolated bets that a well-organized competitor will eventually outpace?” The intelligence layer is being built right now, inside every competitive industry. The window to shape how it is built inside your organization is open. It will not remain so indefinitely. And that, in the end, is what that simple question was really asking.
