The current enthusiasm surrounding artificial intelligence rests on the assumption that the world’s knowledge can eventually be captured, indexed, and made universally accessible. If we build sufficiently large models, integrate enough documents, and connect enough databases, the belief goes, we will surface the knowledge needed for better decisions and better project outcomes. This assumption misunderstands the nature of knowledge itself.
The problem is not technological capacity. It is epistemological.
Knowledge is not primarily stored in documents, systems, or databases. Much of it exists only in the minds of individuals as tacit knowledge—skills, experiences, intuitions, and judgment that people possess but cannot fully articulate. As Michael Polanyi famously observed, “we know more than we can tell.” Tacit knowledge is inherently difficult to codify, because it is embedded in experience, action, and context rather than explicit statements.
This creates a structural limit for AI.
AI systems depend on accessible information: text, code, datasets, or explicit representations of knowledge. But the most valuable knowledge in organizations—how to diagnose a failing system, how to interpret a weak signal in a project, how to negotiate with a difficult stakeholder, how to recognize subtle patterns in operations—exists largely in tacit form. It lives in people, not in files.
Even advanced knowledge-management systems have struggled with this reality. Tacit knowledge is personal, contextual, and difficult to formalize; it is usually learned through experience and shared through interaction rather than documentation. Because of this, organizations built around tacit knowledge often rely on decentralized structures and informal coordination rather than centralized information systems.
AI does not eliminate this constraint. It inherits it.
To make knowledge fully accessible to AI, the knowledge would first have to become explicit. But that would require extracting what individuals themselves often cannot articulate. Unless knowledge is literally recorded at the moment of thought—imagine a hypothetical neural interface capturing internal cognition—the majority of human expertise remains outside the reach of databases and algorithms.
This leads to a dangerous illusion: the illusion of completeness.
When organizations rely heavily on centralized knowledge systems, dashboards, or AI-driven analytics, they often assume that what is visible in the system is all the knowledge that exists. But this assumption is false. The most critical insights, those formed through years of practice, subtle observation, and informal networks, often remain invisible to these systems.
In other words, the system may appear comprehensive while actually representing only a thin layer of explicit knowledge.
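To make the illusion concrete, here is a toy illustration (all numbers and names hypothetical). A coverage metric computed from the index itself can only describe the explicit layer, because its denominator contains nothing the system failed to capture.

```python
# Hypothetical "knowledge coverage" metric for a dashboard. Both the
# numerator and the denominator come from the index itself, so the metric
# is structurally blind to anything that was never written down.

indexed_docs = 12_000      # documents the knowledge base knows about
searchable_docs = 12_000   # documents indexed and retrievable

coverage = searchable_docs / indexed_docs
print(f"Knowledge coverage: {coverage:.0%}")  # reports 100%

# Practitioners' tacit knowledge appears in neither count, so the dashboard
# looks complete regardless of how much is actually missing.
```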
This creates a false sense of security.
Projects fail not because information is unavailable, but because the right knowledge is never surfaced at the right moment. The hidden variable in project success is not data volume or computational power; it is whether the tacit knowledge held by individuals is brought into the decision process. If that knowledge never enters the conversation, the project proceeds on an incomplete understanding.
AI can assist in organizing explicit knowledge. It can accelerate search, summarize information, and identify patterns across documented data. But it cannot fundamentally solve the problem of distributed human knowledge.
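As a minimal sketch of why, consider the toy retrieval loop below. The corpus, the query, and the term-overlap scoring are all hypothetical stand-ins for a real search stack, but the structural point holds for any ranking function: it can only surface what someone has already written down.

```python
from collections import Counter

# Everything the system can "know": documents someone took the time to write.
documents = {
    "runbook.md": "Restart the ingestion service if queue depth exceeds 10000.",
    "postmortem-2023-04.md": "Outage caused by an expired TLS certificate on the gateway.",
    "onboarding.md": "New engineers should pair with the platform team for two weeks.",
}

def score(query: str, text: str) -> int:
    """Toy relevance: count terms shared between the query and a document."""
    q_terms = Counter(query.lower().split())
    d_terms = Counter(text.lower().split())
    return sum((q_terms & d_terms).values())

def search(query: str) -> list[tuple[str, int]]:
    """Rank documents by overlap; drop anything with no match at all."""
    hits = [(name, score(query, text)) for name, text in documents.items()]
    return sorted([h for h in hits if h[1] > 0], key=lambda h: -h[1])

# A question whose answer lives only in a senior engineer's head.
# No document mentions it, so no amount of ranking can retrieve it.
print(search("why does vendor onboarding stall every fiscal year-end"))  # -> []
```

Swap the overlap score for embeddings or a large language model and the limit is unchanged: the corpus bounds the answers.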
Knowledge is not centralized by default. It is dispersed across individuals, experiences, and relationships.
Until systems can directly access the cognitive processes inside people’s minds—a prospect that raises profound ethical and technical questions—the majority of meaningful knowledge will remain decentralized and partially hidden.
The challenge, therefore, is not simply building better AI systems. It is designing organizations and decision processes that acknowledge what cannot be captured. Because the most important knowledge in any system may be precisely the knowledge that no system can see.