Knowledge Management: 5 Myths You Can Safely Ignore in Today’s Complex Environment
1. Introduction
Knowledge, not wealth, land, or equipment, has undeniably become the principal asset of major organizations, especially those operating in the digital space. For such organizations, the priority has shifted towards capturing, codifying, and preserving this knowledge.
To meet the challenge of capturing and preserving such an intangible, fluid, and potentially unbounded asset, modern organizations allocate significant budgets to creating the necessary roles (such as Chief Knowledge Officer) and acquiring the right tools and expertise to ensure the longevity and effective use of their intellectual capital.
As with any other organisational endeavour, the pendulum sometimes swings too far, and the cost of running and maintaining the initiative eventually outweighs its benefits. In extreme cases, what started as a process improvement exercise in knowledge management can, at best, become a distraction and, at worst, adversely affect the organisation’s performance.
This article examines five myths of knowledge management that have pushed organisational initiatives into non-productive or even counter-productive territory.
The stories we are about to tell are told from the perspective of middle management attempting to capture specific knowledge about software development processes. While these stories may be relevant to wider parts of an organisation, they are best interpreted in that slightly narrower context.
2. Myth 1: All Human Knowledge Can be Codified
2.1 Where Is Knowledge Stored?

Art, music, architecture, culture, oral traditions (stories, myths), and written texts (literature, poetry, scriptures) are the traditional forms of capturing and preserving human knowledge.
In software teams, and in organisations generally, knowledge is stored in many places and forms.
Code comments are one area programmers can relate to when examining the challenge of documenting knowledge. Most developers will have pondered at some point how much commenting is enough and what to include or exclude.
However, most developers would agree that it is neither feasible nor economical to write down everything one knows about a certain piece of code. The important knowledge about the product’s development, design, architecture, roadmap, and future is probably shared knowledge distributed among the team.
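To make this concrete, here is a small, hypothetical sketch (the function, the provider quirk, and the retry values are invented for illustration, not taken from any real codebase) of the kind of comment that earns its keep: it records the “why” that the code cannot express, while leaving the obvious mechanics uncommented.

```python
# Hypothetical example: the provider quirk and retry policy described below
# are invented purely to illustrate what is worth committing to a comment.
import time


def fetch_exchange_rate(client, currency: str) -> float:
    """Return the latest exchange rate for `currency`.

    Why the retries: the upstream provider intermittently times out for a
    few seconds after its hourly refresh. Retrying three times with a short
    back-off was agreed with the provider's support team. That rationale is
    exactly the knowledge the code alone cannot convey; how a loop works is
    not.
    """
    for attempt in range(3):
        try:
            return client.get_rate(currency)
        except TimeoutError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s
    raise RuntimeError(f"Could not fetch rate for {currency}")
```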
2.2 Tacit vs. Explicit Knowledge
Knowledge can often be classified into two main categories: tacit and explicit.
The SECI model, developed by Ikujiro Nonaka and Hirotaka Takeuchi, is a conceptual framework that describes the process of knowledge creation and conversion (from tacit to explicit) within organizations. It focuses on how tacit and explicit knowledge is generated, shared, and transformed.
The SECI model describes four modes of knowledge conversion, whose initials form the acronym: Socialization (tacit to tacit), Externalization (tacit to explicit), Combination (explicit to explicit), and Internalization (explicit to tacit).
These four modes of knowledge conversion are not necessarily sequential; they can happen in parallel and often feed into each other. The SECI model underscores the importance of a dynamic and iterative knowledge-creation process within organizations. It’s a useful framework for understanding how knowledge flows and evolves in business.
Management consultant Dave Snowden would disagree that SECI is the way forward in knowledge creation and management, especially around transforming tacit knowledge into explicit. Snowden believes that “knowledge can only be volunteered, not conscripted” and favours apprenticeship models where new joiners can learn through a combination of theory and practice.
New joiners tasked with developing their skills by reading code or reference manuals will take considerable time to appreciate the application’s design and architecture and acquire how-to knowledge for developing the product.
Vital knowledge is mostly tacit and will not be found in written documents. In such cases, a form of apprenticeship is markedly more effective.
2.3 Language and Knowledge Codification

Noam Chomsky, a renowned linguist and cognitive scientist, has contributed significantly to our understanding of human language. Central to Chomsky’s theory of language is the concept of the “poverty of the stimulus,” which forms the basis for his argument that language is inherently limited in its ability to capture the full range of human thought and experience.
Not only is it uneconomical to write down everything we know, but, according to Chomsky, it might also be impossible altogether.
2.4 Knowledge Storage in Neural Nets and Its Interpretability
Unlike humans, machines are not (yet!) good storytellers. […] This creates three risks. First, the machines may have hidden biases, derived not from any intent of the designer but from the data provided to train the system.
— Erik Brynjolfsson and Andrew McAfee, The Business of Artificial Intelligence, 2017
The quote above is from an article titled “The Business of Artificial Intelligence”, published in the Harvard Business Review in 2017. Its authors, Brynjolfsson and McAfee, highlighted three risks of using neural nets to aid decision-making.

The first risk concerns the challenge of interpreting the neural network parameters that make up the solution to the business problem. The other two have to do with troubleshooting and coverage, which we will ignore for now as they are not relevant to our discussion.
The difficulty of interpreting neural network parameters stems from the networks’ overall design and learning algorithm. The complicated non-linear equations that compute the output of a neural net are very different from decision-support systems that use logical statements and decision trees to make a judgment. While we can readily interpret logical statements and decision trees, we cannot say the same for neural networks.
To put it very crudely, the backpropagation algorithm uses the supplied training data to find parameter values that allow the network to map its inputs to the correct outputs. This ability to iteratively discover (without understanding) the rules associating inputs and outputs, no matter how complicated they are, is what makes neural networks so powerful. In this sense, we say that neural networks “learn”.
Neural networks can learn (distil knowledge) from raw data by discovering the correct associations between inputs and outputs. Despite this powerful capability, neural networks remain far (very far) behind what human brains can do.
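As a minimal sketch of that interpretability gap, consider the contrast below, built with scikit-learn on a tiny, made-up “approve the loan?” dataset (the feature names and data are invented for illustration). The decision tree can be printed as human-readable rules; the neural network’s learned knowledge is just matrices of weights.

```python
# Contrast a readable decision tree with an opaque neural network trained
# on the same toy data (data and feature names invented for illustration).
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Toy "approve the loan?" data: [income_k_usd, existing_debt_k_usd]
X = [[20, 5], [35, 10], [50, 2], [80, 30], [90, 5], [120, 40]]
y = [0, 0, 1, 0, 1, 1]  # 1 = approve, 0 = decline

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "debt"]))
# Prints if/then rules a human can read and challenge.

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
for i, weights in enumerate(net.coefs_):
    print(f"layer {i} weight matrix shape: {weights.shape}")
# Prints only the shapes of the learned weight matrices: the mapping from
# inputs to outputs is in there, but no rules a human can readily inspect.
```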
The point I am driving at is this: if decision algorithms implemented with neural nets are so hard to interpret, human judgment must be infinitely harder. How we make judgments, retrieve and associate memory patterns, and solve complex problems is yet to be fully understood, let alone written down using the terminology and concepts available to us.
As far as humans are concerned, information and data alone are insufficient for making decisions; emotions, culture, context, and many other factors influence our final choices. These factors are built and rebuilt every day, making static algorithms that are supposed to embody process knowledge almost useless.
3. Myth 2: Documentation is Good, More Is Better
3.1 Is There Such a Thing as Too Much Documentation?
To examine whether there is such a thing as “too much documentation”, let’s consider the (many) types of documents organisations maintain for their software products. Below is a sample list:
These documents collectively ensure that software products are developed, maintained, and used effectively, meeting technical and business objectives. The documents’ details may vary depending on the organization, project complexity, and industry regulations.
The list above only covers product documentation and does not cover all the types of documents that organisations use. As we shall see in section 4.1, organisational processes require another, equally long, set. Is that too much documentation?
At face value, more high-quality, specialised documentation serving specific business needs is always desirable. However, the cost of documentation outweighs its benefits when a) documentation becomes an objective in itself or b) the documentation produced is not usable or helpful.
3.2 Documentation Usability, Not Size, Format, or Tools Used, as a Measure of Success
Product documentation is a great example of a non-value-adding activity, as it contributes little to the final product design. It also has a cost, especially when valuable resources interrupt their value-adding activities (analysis, design, development) to create it.
To paraphrase Winston Royce in his famous paper on large software project delivery, when it comes to performing non-value-adding activities, the management works hard to convince the developers to do them and the customer to pay for them. Therefore, the cost-to-benefit ratio of documentation and other similar tasks must be kept as low as possible.
Valuable documentation has the following characteristics:
The above attributes, not documentation size or the tools used, ensure the documentation is usable and valuable to the organisation and its clients.
4. Myth 3: Processes Can Be Engineered (and Documented) to Cover Any Scenario
Organisations tend to study past events to create predictive and prescriptive models for future decisions based on the assumption that they are dealing with a complicated system in which the components and associated relationships are capable of discovery and management. This arises from Taylor’s application, over a hundred years ago, of the conceptual models of Newtonian Physics to management theory in the principles of scientific management.
— Dave Snowden, Complex Acts of Knowing
4.1 Process Documentation: What Do We Really Need?

Organizations maintain various types of documentation to describe their organizational processes. Here are some common types of documentation related to organizational processes:
This long list prompts curious minds to put forward the following concerns:
To answer these questions, we first start by dividing the information these documents cover into three categories.
| Category | What does it include? | Useful? |
| --- | --- | --- |
| Compliance | SOPs, Process Change Requests, Process Governance Documents | Yes, mission-critical |
| Wikis and How-Tos | Knowledge Base, Wiki Articles, and How-To Guides | Valuable if they adhere to the characteristics from section 3.2 |
| Others | Process Maps/Flowcharts, Narratives, KPIs | Limited usefulness |
The last category is where things can start to go wrong. To understand why, imagine that you need to write a guide to what must be done when investigating a production bug.
Sure, you can include a flow chart that starts by checking whether you have enough information to begin troubleshooting and whether the bug is reproducible. Next, you will have a box that says “Analyze”, followed by a detailed process for producing a fix.
Where does the bulk of the work lie in that three-step process (Check, Analyze, Fix)? Probably in step 2, Analyze. In that case, is it not reasonable to break it down into more detail? A more detailed analysis step would need to include all the following:
As you might have guessed, the dissection of the Analyze step immediately overflows with infinitely many branches and paths leading to a solution. The analysis stage is rich, subjective, and contextual, and it has to deal with uncertainty about future situations, making it nearly impossible to commit to paper.
What you do each time you investigate a production issue involves creative effort, the affordability of tools and support, bug criticality, expertise, the time at hand, and the specific details of the feature itself. In Snowden’s words, “We only know what we know when we need to know it” [1].
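Here is a deliberately naive, hypothetical sketch (every rule, field name, and threshold below is invented) of what encoding the Analyze step as executable rules would look like; each branch it covers exposes several more that it does not, which is the point.

```python
# Hypothetical "runbook as code" for the Analyze step. Each answer it gives
# immediately raises questions it cannot answer.
def analyze(bug: dict) -> str:
    if not bug.get("reproducible"):
        return "Gather more information"    # from whom? for how long?
    if bug.get("area") == "database":
        return "Check the slow-query log"   # and locks? replication lag? disk space?
    if bug.get("area") == "api":
        return "Review recent deployments"  # config drift? upstream outage?
    if bug.get("severity") == "critical":
        return "Page the on-call engineer"  # which team? what if it is 3 a.m.?
    return "Ask a senior developer"         # i.e. fall back on tacit knowledge


print(analyze({"reproducible": True, "area": "api"}))
```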
4.2 Documenting the Desirable Object State vs. The Process of How to Get There
All process documentation is typically a mixture of two components: a description of the desirable end state, and a prescription of the path for getting there.
The first component, the desirable end state, is much more robust than the second. It also leaves room for creativity, innovation, and the satisfaction of discovering new solutions. In many cases, documenting both the end state and the path to reach it makes it harder for people to explore alternative paths that may be better than the ones currently followed.
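A small, hypothetical sketch of the distinction (the desired-state fields and thresholds are invented): the desirable end state is written down as something that can be checked, while the path for reaching it is deliberately left open.

```python
# Hypothetical sketch: document the desirable end state as a checkable
# condition and leave the path for reaching it to the engineers.
DESIRED_STATE = {
    "service_healthy": True,   # health endpoint responds
    "max_error_rate": 0.01,    # under 1% of requests fail
    "open_p1_bugs": 0,         # no critical bugs outstanding
}


def end_state_reached(observed: dict) -> bool:
    """Check whether the end state holds, without prescribing how to get there."""
    return (
        observed.get("service_healthy") is True
        and observed.get("error_rate", 1.0) < DESIRED_STATE["max_error_rate"]
        and observed.get("open_p1_bugs", 1) == DESIRED_STATE["open_p1_bugs"]
    )


# How the team moves from a failing check to a passing one -- rollback,
# hotfix, configuration change -- remains open to judgment and creativity.
print(end_state_reached({"service_healthy": True, "error_rate": 0.002, "open_p1_bugs": 0}))
```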
5. Myth 4: Telling People What to Do Instead of What Not to Do and When
5.1 Narratives, Parables, and Bedtime Stories

Narratives, parables, and bedtime stories serve distinct but interconnected roles in the lives of individuals, groups, and organizations. These roles are based on their ability to convey information, illustrate concepts, and shape perspectives.
Narratives are essential for personal growth and learning. They allow individuals to process their own experiences, understand their emotions, and make sense of their lives. People often create personal narratives to make meaning from their journeys and experiences.
Parables are concise stories that convey moral or ethical lessons. They serve as valuable tools for personal reflection and growth. Individuals often use parables to explore ethical dilemmas and make decisions based on timeless wisdom.
In group settings, narratives help build a shared sense of identity and purpose. Organizational leaders often use narratives to align team members with the company’s mission and values. Narratives can also be a powerful tool for conveying community history, culture, and traditions.
Parables are employed in group settings to communicate moral and ethical principles. They can be particularly useful in educational and religious contexts, where they simplify complex ethical concepts for broader understanding.
Narratives are crucial for communicating an organization’s history, vision, and goals. They create a sense of continuity and purpose among employees, which can foster commitment and motivation. Effective storytelling can also be used to convey complex strategies or changes within the organization.
Parables and bedtime stories have almost no place in organisations. Parables are highly abstract and require an extensively shared context between the speaker and the audience. This shared context may cover education, culture, and belief and value systems. Without this common denominator, communicating in parables will be inefficient.
What about bedtime stories? Dave Snowden uses bedtime stories as a metaphor for lesson learning. Snowden believes that organisations should focus on creating knowledge bases documenting what not to do rather than what to do, as the latter is much harder to pinpoint (in a complex system, at least) and is more easily forgotten.
5.2 Worse Practice (Negative Stories) Knowledge Bases
Dave Snowden, the creator of the Cynefin framework, proposed worse practice knowledge bases as a radically different approach to knowledge management from what we currently have. Worse practice knowledge bases document failures rather than successes. Here are some of the arguments for them:
Despite the forceful arguments for them, worse practice knowledge bases are not that common for two reasons.
Negative stories and past failures set boundaries on what can and can’t be done. Setting boundaries is one of only three interventions that can be used to manage complex adaptive systems; another is the management of attractors.
6. Myth 5: Collective Knowledge Is the Sum of Individual Knowledge
6.1 You Have The Specifications, Why Can’t You Make It Work?

When writing product specifications, user manuals, or reference guides, we make vast assumptions about what the reader must already know for the document to be usable. Most of these assumptions are unconscious.
The expert is asked to codify their knowledge in anticipation of potential future uses of that knowledge. Assuming willingness to volunteer, the process of creating shared context requires the expert to write a book.
— Dave Snowden, Complex Acts of Knowing
The reality is that what is music to our ears might not be to someone who does not share our expertise with the product, our knowledge of its design and history, our education in computer science, or perhaps even our culture.
To use Max Boisot’s I-Space terminology, specifications are high-abstraction, high-codification artefacts. Although they carry plenty of information, they don’t tell you everything you need to know.
For example, if you are given the design specifications of an Oracle database, it doesn’t automatically follow that you can build one in your garage. The author of those specifications has probably assumed that you already know something about database engine design, software development and testing, distributed systems, networking, security, and so on. To convey everything you would need to build a database engine, the author would have to write a book covering all these fields.
6.2 Collective vs. Individual Driven Problem-Solving
During intense, complex problem-solving, we remember things we thought we had forgotten. Discussions with customers, managers, or senior developers from the current or past organisations suddenly become relevant and offer solutions we did not know existed.
The capacity to tap into the potentially unlimited knowledge stored in a group’s collective memory is key to addressing complex issues. Predicting how a group will behave in future hypothetical situations is impossible, which makes that knowledge even harder to write down.
There is also something else that a group can do but an individual cannot: generating a rich set of alternative solutions or views of the same problem. You cannot teach people to think differently, but you can give a problem to a sufficiently large group, and some of them will notice something that others won’t.
For complicated problems, an expert opinion might be enough, and may even be worth more than the collective opinions of the many. For complex problems, in contrast, group problem-solving generates better solutions than individuals working alone.