During a recent masterclass for coaches, when my wife mentioned AI, she was met with unexpected silence. Eventually, one executive coach spoke up, noting that he found AI to be an excellent thought partner in his client work. Another coach brought up the Chinese Room analogy, suggesting that no matter how advanced machines become, they cannot understand or coach like humans do. And that was the extent of the discussion before it shifted to other topics.
The Chinese Room is a philosophical thought experiment created by John Searle in 1980 to question the notion that a machine can truly "understand" or possess consciousness just because it acts as if it does. Although today's leading chatbots are not conscious in the human sense, they often behave as if they are. By referencing this experiment, the coach was dismissing the utility of these chatbots, implying they couldn't contribute to effective executive coaching.
It was a brief moment, but the story was telling. Why did the conversation stall? What was beneath that philosophical objection? Was it discomfort, skepticism, or something more fundamental?
A few days later, I conversed with a healthcare administrator and conference organizer. She mentioned that although her large hospital chain had enterprise access to Gemini, many staff had yet to explore its capabilities. As I explained how AI is transforming healthcare workflows, from documentation to diagnostics, it became apparent that much of this was still unfamiliar.
These anecdotes hint at a deeper pattern reshaping the professional value landscape. As with previous technological shifts, the early adopters are not just crossing a threshold; they are defining it. This may sound familiar. AI is following the trajectory of past technological revolutions: starting with a small group of early adopters, followed by a larger wave of pragmatic followers, and finally, a hesitant remainder. As with electricity, the internet, or mobile computing, value tends to concentrate early, and pressure to conform builds.
This transition, however, is distinct in at least three significant ways. First, AI doesn't just automate tasks; it begins to appropriate judgment, language, and creative expression, blurring the lines between machine functions and human roles. Second, adoption is outpacing understanding. People are using AI daily while still unsure whether to trust it, believe in it, or even grasp what it is doing. Third, AI not only changes our actions but reshapes our perceptions. Personalized responses and generative tools alter the very fabric of shared reality, fragmenting the cognitive commons that previous technologies largely left intact.
We are at the beginning of what I call a great cognitive migration, a gradual yet profound shift from traditional domains of human expertise toward new realms where intelligence is increasingly ambient, machine-augmented, and organizationally centralized. But not everyone is migrating at the same speed. Not everyone is eager to make the transition. Some hesitate. Some resist.
This isn't merely about risk aversion or fear of change. For many professionals, especially in fields like coaching, education, healthcare administration, or communications, their contribution is rooted in attentiveness, discretion, and human connection. These values do not easily translate into metrics of speed or scale.
Yet AI tools often arrive wrapped in metaphors of orchestration and optimization, shaped by engineering logic and computational efficiency. In work defined by relational insight or contextual judgment, these metaphors can feel alien or even diminishing. If you do not see your value reflected in the tools, why would you rush to embrace them?
Therefore, we must ask: What happens if this migration accelerates and significant portions of the workforce are slow to move? Not because they cannot, but because they do not view the destination — the use of AI — as appealing. Or because this destination does not yet feel like home.
History offers a metaphor. In the biblical story of Exodus, not everyone was eager to leave Egypt. Some questioned the journey. Others longed for the predictability of what they knew, even as they acknowledged its costs. Migration is rarely just a matter of geography or progress. It is also about identity, trust, and what is at stake in leaving something known for something uncertain.
Cognitive migration is no different. If we treat it purely as a technical or economic challenge, we risk missing its human contours. Some will move quickly. Others will wait. Still others will ask if the new land honors what they hold most dear. Nevertheless, this migration has already begun. And while we might hope to design a path that honors diverse ways of knowing and working, the terrain is already being shaped by those who move fastest.
Pathways of cognitive migration
The journey is not the same for everyone.
Some people have already embraced AI, drawn by its promise, energized by its potential, or aligned with its accelerating relevance. Others are moving more hesitantly, adapting because the landscape demands it, not because they sought it. Still others are resisting, not necessarily out of ignorance but out of fear, uncertainty, or conviction, protecting values they do not yet see reflected in the tools. A fourth group remains outside the migration path, not because they overtly object to it, but because their work has not yet been touched by it. And finally, some are disconnected more fundamentally, already at the margins of the digital economy, lacking access, education, or the opportunity to participate.
These are not just attitudes. They are positions on a shifting map. They reveal who migrates by choice or pressure, who resists on principle, and who might never join.
The willing
Some people have not hesitated. Like early gold miners heading for California, they have embraced AI out of curiosity, enthusiasm, or a sense that it aligns naturally with their outlook. These are the willing migrants, those comfortable at or near the frontier: Consultants using language models to refine client proposals, developers accelerating their coding process, storytellers using AI-generated video. Some are exploring AI as a creative partner, others as a tactical advantage. For this group, the terrain feels not just navigable, but exciting.
But even within this group, motivations differ. Some see how AI can amplify their own productivity or extend their reach. Others are drawn to the novelty and enjoy playing with the tools. Many are experimenting in a relatively unstructured environment, learning what AI can do before it is formally required or widely governed. To them, this is still the wild west. And what they adopt, refine, or normalize will shape the cognitive landscape the rest of us enter.
Their enthusiasm is valuable. It pushes cognitive migration forward and carries quiet power: Even if they do not know it, they are setting the terms for how value, fluency, and legitimacy are being redefined.
The pressured
For many, migration is not optional; it is expected. These are the pressured migrants: Those adapting because their organization, industry, or clients demand it. AI is now embedded in areas like project management, customer service, and marketing workflows, making fluency less of a differentiator and more of a baseline requirement.
Yet, formal support is often lacking. A 2025 global KPMG–University of Melbourne study found that 58% of employees intentionally use AI at work, with a third doing so weekly or daily. However, a McKinsey survey found a fifth of employees had received minimal to no support from their companies, and nearly half want more formal training. For example, a marketing manager is now expected to generate first drafts with AI, even though no one has shown her how to prompt effectively.
These migrants navigate a tenuous middle ground. Some are cautiously optimistic, seeing AI as essential for staying relevant. Others are anxious, sensing that falling behind could mean irrelevance or redundancy. If the “willing migrants” are blazing the trail, the pressured are following close behind. They often do so warily, with little bandwidth to question the terrain, but a clear awareness that stopping is not an option.
The resistant
Some have chosen not to migrate, at least not yet, and perhaps not at all. These are the resistant migrants: Those who hesitate out of fear, uncertainty, or conviction. Many perform roles grounded in presence, empathy, discretion, or ethics. They may be therapists, teachers, writers, chaplains, or coaches. For them, the premise of cognitive outsourcing raises not just technical questions, but existential ones.
This group often sees AI tools as misaligned with the deeper value they offer. In their view, tools may simplify what should be nuanced or automate what requires trust and human connection. They might worry that using AI to draft a letter, summarize a meeting, or respond to a client flattens nuance, dilutes trust, or undermines relationships built over time. A longtime therapist could plausibly suspect that AI-generated notes miss the emotional texture of a session.
Their resistance is not a refusal to evolve. It is, in many cases, a defense of meaning, judgment, and the human element itself. This echoes a theme in Gish Jen's "The Resisters": A quiet defiance, not of technology itself, but of the belief that everything worth doing can be done by a machine.
The unreached
Another group of people is not migrating, at least not yet. These are the unreached migrants: Workers whose roles have not been meaningfully affected by AI. They include tradespeople, farm workers, bus drivers, and line cooks. These are people whose daily work is physical, place-based, and shaped more by coordination or skill than purely by cognition. They may have considerable domain knowledge, but they are not broadly considered knowledge workers. For them, AI may appear in headlines or workplace chatter, but it has little relevance to their routines.
Their distance from this migration is not about resistance or lack of interest. The cognitive landscape that AI is currently reshaping is simply not the one they occupy. Embodied AI tools are not yet built for what they do, and physical robots have made few inroads into their workplaces. Whether that remains true will depend on how AI evolves, and whether the physical and manual domains of work eventually become targets of transformation. For now, most of them are watching a journey that feels like it is happening somewhere else, to someone else.
The disconnected
Then there are those for whom migration is not just irrelevant, but out of reach. These are the disconnected: Individuals who are already marginalized within the digital economy. They may lack access to technology, consistent connectivity, formal education, or the support systems that make digital learning and adaptation possible. AI may be in the news or their communities, but it is not part of their world in a usable or trustworthy form.
This group is aware of change, but they are often left out of it. If this cognitive migration continues to define new norms of value, intelligence, and legitimacy, they risk becoming a new underclass, not because they opted out, but because they were never truly included.
This migration, and others before it
Before we look at how this moment compares to past technology-driven shifts, it is worth acknowledging that the typology above is, by design, a simplification. People do not always migrate into clean categories. They move in and out of roles, contexts, and stances. A plumber might use AI to write a children’s book after hours. Some may shift from enthusiastic to cautious depending on the context.
Yet even these broad strokes reveal something essential about how AI adoption is unfolding. And they offer a lens through which to revisit a familiar question: How does this migration compare to technological shifts we have seen before?
We have seen this pattern. The arrival of electricity, the internet, and mobile computing each followed a similar arc. In every case, the tools began with promise, spread unevenly, and gradually redrew the boundaries of work, skill, and participation.
This migration also reflects a familiar tension between productivity and displacement. Just as machines replaced manual labor during the Industrial Revolution, AI is reshaping what it means to be useful, efficient, or skilled in the cognitive domain. And as with other transitions, early benefits tend to concentrate among those with access, fluency, and flexibility, while the risks fall more heavily on those slower to adapt.
Yet even as we recognize these familiar rhythms of technological change, three fundamental differences suggest this migration may unfold in ways that surprise us. It is not just changing how we work. It is redrawing the boundary between human and machine. Where earlier technologies extended physical power or accelerated communication, AI appropriates judgment, language, and creativity. It does not just speed up cognition; it starts to perform it.
What makes this shift more disorienting is the pace and the reach. AI is being integrated into everyday tools faster than governance or understanding can keep up. It is so tantalizing that many are using it before they fully trust it or even comprehend what it is doing. Adoption is outpacing orientation.
Perhaps most consequentially, AI alters not just what we do, but how we see. Personalized outputs and generative interfaces are fragmenting the shared cognitive terrain that once underpinned professional and personal identity, institutional norms, and cultural consensus. This is not merely a migration of function. It is a migration of meaning.
The road ahead
Cognitive migration is not just a change in tools. As multiple technology leaders have suggested, it may be as significant as the discovery of fire. It could lead to remarkable abundance, offering greater knowledge, improved financial circumstances, and more creative outlets. But it could also result in a more dystopian outcome, marked by concentrated wealth, widespread unemployment, and narrowed opportunity. In either case, this migration will reorder roles, values, and entire professional classes.
For some, it may be a season of experimentation, adaptation, and fulfillment. For others, it could be a forced migration, shaped less by choice than by economic necessity. Anthropic CEO Dario Amodei recently warned that AI could eliminate half of all entry-level white-collar jobs and drive unemployment to 10 to 20% within five years. This was amplified by OpenAI CEO Sam Altman, who said that certain job categories, such as customer support, would be eliminated by AI. It is evident now that what AI can do is expanding faster than most institutions or individuals are prepared for.
And it is not just entry-level work that may be affected. Fidji Simo, OpenAI’s incoming CEO for Applications, recently described AI as “the greatest source of empowerment for all.” In a widely shared essay, she praised her own business coach and noted that “personalized coaching has obviously been a privilege reserved for a few, but now with ChatGPT, it can be available to many.” What then becomes of the coach at the beginning of this article, a member of what we might now call the ‘resistant’ class?
We do not know how this migration will unfold. There will likely be no single moment when it is declared complete. But many may find themselves suddenly outside the borders of professional relevance, with little warning and fewer options. In the push for efficiency, competitive pressures rarely wait for consensus or lead to soft landings.
Institutions must quickly develop concrete responses, such as retraining programs that go beyond basic AI literacy, social safety nets that account for cognitive rather than just physical displacement, and new frameworks for measuring contribution that honor human qualities that AI cannot replicate. Otherwise, the fallout may be as psychologically dislocating as it is economically profound.
This is not a call for panic. It is a call for clarity.
The migration has already begun. The question is not whether it will reshape work, identity, and opportunity, but how prepared we are to live with the shape it takes.
