Knowledge Management: 5 Myths You Can Safely Ignore in Today’s Complex Environment

1. Introduction

Knowledge, not wealth, land, or equipment, has become the principal asset of major organizations, especially those operating in the digital space. For such organizations, the priority has shifted towards capturing, codifying, and preserving this knowledge.

To address the challenge of capturing and preserving this intangible, fluid, and potentially unbounded asset, modern organizations allocate significant budgets to creating the necessary roles (such as Chief Knowledge Officer) and acquiring the tools and expertise needed to ensure the longevity and effective use of their intellectual capital.

As with any other organisational endeavour, the pendulum sometimes swings too far, and the cost of running and maintaining the initiative eventually outweighs its benefits. In extreme cases, what started as a process improvement exercise in knowledge management can, at best, become a distraction and, at worst, adversely affect the organisation’s performance.

This article examines five myths of knowledge management that have pushed organisational initiatives into unproductive, or even counter-productive, territory.

The stories we are about to tell are from the perspective of middle management attempting to capture specific knowledge about software development processes. While these stories can be relevant to wider sections of an organisation, they are best interpreted in that narrower context.

2. Myth 1: All Human Knowledge Can be Codified

2.1 Where Is Knowledge Stored?

Can you write down everything you know?

Art, music, architecture, culture, oral traditions (stories, myths), and written texts (literature, poetry, scriptures) are the traditional forms of capturing and preserving human knowledge.

In software teams and organisations in general, knowledge is stored in:

  • Organisational cultures and subcultures
  • Production processes
  • Narratives
  • Code bases
  • Reference manuals
  • Collaboration tools (Jira, Teams, Confluence, SharePoint, etc.)

Code comments are one area programmers can relate to when examining the challenge of documenting knowledge. Most developers will have pondered at some point how much commenting is enough and what to include or exclude.

However, most developers would agree that it’s not feasible or economical to write everything one knows about a certain piece of code. The important stuff related to the product’s development, design, architecture, roadmap, and future is probably shared knowledge distributed among the team.
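As a hypothetical illustration (the function, the discount rule, and the 2019 contract are all invented for this sketch), compare a comment that merely restates the code with one that records knowledge the code alone cannot convey:

```python
def apply_discount(price: float, customer_tier: str) -> float:
    """Return the price after the tier discount is applied."""
    # Noise: restates what the code already says.
    #   "Multiply price by 0.9"
    #
    # Knowledge worth recording: the *why* behind the rule.
    #   Gold-tier customers get 10% off because of a (hypothetical)
    #   2019 contract renegotiation; changing this rate requires
    #   sign-off from the sales team.
    if customer_tier == "gold":
        return price * 0.9
    return price
```

The second comment captures a fragment of the shared, tacit knowledge mentioned above; the first adds nothing a reader could not get from the code itself.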

2.2 Tacit vs. Explicit Knowledge

Knowledge can often be classified into two main categories: tacit and explicit.

  • Tacit knowledge refers to the knowledge that is difficult to articulate or transfer to others. It is often acquired through personal experience, observation, and intuition. This form of knowledge resides in an individual’s mind and is often subconscious. Tacit knowledge includes skills, insights, and understanding that are gained through practice and familiarity.
  • Explicit knowledge is more formalized and easy to articulate and document. It is the type of knowledge that can be codified, shared, and transferred explicitly through language, text, or other forms of communication. Explicit knowledge can be found in textbooks, manuals, databases, and other written or recorded sources. Examples of explicit knowledge include scientific principles, laws, rules, and procedures.

The SECI model, developed by Ikujiro Nonaka and Hirotaka Takeuchi, is a conceptual framework that describes the process of knowledge creation and conversion (from tacit to explicit) within organizations. It focuses on how tacit and explicit knowledge is generated, shared, and transformed.

The SECI model is divided into four modes of knowledge conversion, one for each letter of its name:

  • Socialization (S): In this mode, tacit knowledge is shared and created through face-to-face interactions and shared experiences. It emphasizes the importance of informal communication and observation. For example, when team members discuss their experiences or work closely together, they exchange tacit knowledge.
  • Externalization (E): Externalization involves articulating tacit knowledge into explicit forms. This mode involves taking the knowledge gained through socialization and transforming it into tangible and shareable concepts, such as documents, diagrams, or models. This is often done through dialogue and reflection.
  • Combination (C): In the combination mode, explicit knowledge is combined and reconfigured into new explicit knowledge. This can involve processes like categorization, reorganization, and summarization. It’s about creating structured knowledge repositories and databases where information can be easily accessed and reused.
  • Internalization (I): Internalization refers to the process of converting explicit knowledge back into tacit knowledge. This occurs when individuals or teams take explicit knowledge, such as documents or manuals, and apply it to their own experiences, incorporating it into their tacit knowledge. It’s essentially the reverse of externalization.

These four modes of knowledge conversion are not necessarily sequential; they can happen in parallel and often feed into each other. The SECI model underscores the importance of a dynamic and iterative knowledge-creation process within organizations. It’s a useful framework for understanding how knowledge flows and evolves in business.
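One compact way to see the model's structure is to encode each mode as a conversion between knowledge types; a minimal sketch:

```python
# The four SECI modes as (source, target) knowledge-type conversions.
SECI_MODES = {
    "Socialization":   ("tacit", "tacit"),       # shared experience
    "Externalization": ("tacit", "explicit"),    # articulating know-how
    "Combination":     ("explicit", "explicit"), # reorganizing documents
    "Internalization": ("explicit", "tacit"),    # learning by applying
}

# Read in order, the acronym traces one loop of the knowledge-creation
# spiral: the target type of each mode matches the source of the next.
SPIRAL = ["Socialization", "Externalization", "Combination", "Internalization"]
```

The chaining property (each mode's output type feeds the next mode's input type) is what makes the process iterative rather than a one-way pipeline.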

Management consultant Dave Snowden would disagree that SECI is the way forward in knowledge creation and management, especially around transforming tacit knowledge into explicit. Snowden believes that “knowledge can only be volunteered, not conscripted” and favours apprenticeship models where new joiners can learn through a combination of theory and practice.

New joiners tasked with developing their skills by reading code or reference manuals will take considerable time to appreciate the application’s design and architecture and to acquire the how-to knowledge needed to develop the product.

Vital knowledge is mostly tacit and would not be found in written documents. A form of apprenticeship in such cases is remarkably more effective.

2.3 Language and Knowledge Codification

Why did language evolve in humans? Was it solely for communicating about and describing the world? If so, why can we think about imaginary objects and abstract ideas, not just physical ones?

Noam Chomsky, a renowned linguist and cognitive scientist, has contributed significantly to our understanding of human language. Central to Chomsky’s theory of language is the concept of the “poverty of the stimulus,” which forms the basis for his argument that language is inherently limited in its ability to capture the full range of human thought and experience.

  • Inherent Structural Limitations: Chomsky posits that inherent structural limitations characterize human language. This means the grammatical rules and syntactical structures that govern language inherently restrict how we express our thoughts. For instance, natural languages often struggle to convey abstract concepts or complex emotions precisely due to these structural constraints.
  • Semantic Gaps: Language relies on finite words and symbols to represent an infinitely diverse range of thoughts and experiences. Consequently, there are inherent semantic gaps in any language. Complex, nuanced, or highly individualized experiences may not have direct linguistic equivalents, leading to imprecision in communication.
  • Subjective Interpretations: Chomsky’s views also touch upon the subjective nature of language. Individuals can interpret words and phrases differently based on their unique cultural backgrounds, experiences, and perspectives. This subjectivity introduces an additional layer of complexity, making it challenging for language to convey a universally consistent meaning.
  • Context Dependency: Language often relies on context to convey meaning. However, not all aspects of human experience can be effectively contextualized through language. For example, certain sensory experiences or emotional states may be challenging to describe accurately without direct sensory perception.
  • Ineffability of Certain Concepts: Some concepts, particularly those related to profound philosophical or metaphysical ideas, may be inherently ineffable, meaning they cannot be adequately expressed in words. This aligns with Chomsky’s argument that language has inherent limitations when it comes to capturing abstract or deeply philosophical concepts.

Not only is it uneconomical to write down everything we know, but, according to Chomsky, it might also be impossible altogether.

2.4 Knowledge Storage in Neural Nets and Its Interpretability

Unlike humans, machines are not (yet!) good storytellers. […] This creates three risks. First, the machines may have hidden biases, derived not from any intent of the designer but from the data provided to train the system.

— Erik Brynjolfsson and Andrew McAfee, The Business of Artificial Intelligence, 2017

The quote above is from an article titled “The Business of Artificial Intelligence”, published in the Harvard Business Review in 2017. Its authors, Brynjolfsson and McAfee, highlighted three risks of using neural nets to aid decision-making.

One of the challenges of using neural networks and machine learning is our inability to interpret their underlying decision rules. This challenge prevents us from easily rooting out bugs, errors, and biases caused by incomplete or biased training data sets.

The first risk concerns the challenge of interpreting the neural network parameters that make up the solution to the business problem. The other two have to do with troubleshooting and coverage, which we will ignore for now as they are irrelevant to our discussion.

The first risk, created by our difficulty in interpreting neural network parameters, comes from the neural networks’ overall design and learning algorithm. The complicated non-linear equations that compute the output of a neural net are very different from decision-support systems that use logical statements and decision trees to make a judgment. While we can promptly interpret logical statements and decision trees, we cannot say the same for neural networks.

Using the supplied training data, the backpropagation algorithm selects parameter values that allow the neural net to map inputs to outputs, to put it very crudely. The ability to iteratively determine (without understanding) the rules associating inputs and outputs, no matter how complicated they are, is what makes neural networks so powerful. In this sense, we say that neural networks “learn”.
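A toy sketch of this contrast, using a hypothetical loan-approval rule and a single sigmoid neuron standing in for a network (all names and numbers are invented): the explicit rule can be read and audited directly, while the trained model encodes the same mapping in opaque parameter values.

```python
import math
import random

random.seed(0)

# An explicit decision rule: readable, auditable, easy to document.
def loan_rule(income: float, debt: float) -> str:
    return "approve" if income > 2 * debt else "reject"

# The same mapping learned by a one-neuron "network": after training,
# the knowledge lives in three opaque numbers, not a readable rule.
w = [random.uniform(-1, 1) for _ in range(3)]  # two weights and a bias

def net(income: float, debt: float) -> float:
    z = w[0] * income + w[1] * debt + w[2]
    return 1 / (1 + math.exp(-z))  # sigmoid: output in (0, 1)

# Crude gradient descent on a handful of labelled examples, in the
# spirit of backpropagation: adjust parameters until inputs map to
# outputs, with no step that "explains" the resulting rule.
examples = [(3, 1, 1), (5, 1, 1), (1, 2, 0), (2, 3, 0)]  # (income, debt, label)
for _ in range(2000):
    for income, debt, label in examples:
        y = net(income, debt)
        grad = (y - label) * y * (1 - y)  # d(squared error)/dz
        w[0] -= 0.5 * grad * income
        w[1] -= 0.5 * grad * debt
        w[2] -= 0.5 * grad
```

After training, `net` approves and rejects the same cases as `loan_rule`, but inspecting `w` tells a reader almost nothing about *why*; real networks have millions of such parameters.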

Neural networks can learn (distil knowledge) from raw data by discovering the correct associations between inputs and outputs. Despite this powerful capability, neural networks remain far (very far) behind what human brains can do.

The point I am driving at is as follows. If decision algorithms implemented with neural nets are so hard to interpret, human judgments must be infinitely more difficult. How we make judgments, retrieve and associate memory patterns, and solve complex problems is yet to be fully understood, let alone written on paper using terminology and concepts already available to us.

Information and data alone are insufficient for human decision-making; emotions, culture, context, and many other factors influence our final decisions. These factors are built and rebuilt every day, which makes static algorithms that are supposed to embody process knowledge almost useless.

3. Myth 2: Documentation is Good, More Is Better

3.1 Is There Such a Thing as Too Much Documentation?

To examine whether there is such a thing as “too much documentation”, let’s consider the (many) types of documents organisations maintain for their software products. Below is a sample list:

  • Project Requirements Document: This document outlines the functional and non-functional requirements of the software product. It serves as the foundation for development and helps ensure alignment with the client’s needs.
  • Software Design Document: This document provides a detailed technical blueprint of the software, including architecture, data flow, algorithms, and component interactions.
  • Code Documentation: This includes inline comments, code documentation files (e.g., Javadoc, Doxygen), and code style guides. Proper code documentation helps developers understand and maintain the codebase.
  • User Manuals and Guides: User documentation explains how to install, configure, and use the software. It includes user manuals, online help, and FAQs.
  • Change Request/Issue Tracker: Software organizations use tools like JIRA or Trello to track and manage change requests, bug reports, and feature requests from stakeholders.
  • Version Control Documentation: Guidelines and best practices for using version control systems (e.g., Git) to manage source code.
  • Release Notes: These documents detail each software release’s changes, enhancements, and bug fixes. They help users and stakeholders understand what’s new.
  • Security Documentation: Security assessments, penetration test reports, and security policies ensure the software meets security standards.
  • Compliance Documents: For regulated industries, compliance documents demonstrate adherence to specific standards or regulations (e.g., HIPAA, GDPR, ISO 27001).
  • Project Schedule and Timeline: Timelines, milestones, and Gantt charts that outline the project’s schedule and progress.
  • Product Roadmap: A high-level document outlining the future direction of the software, including planned features and enhancements.
  • Disaster Recovery Plan: This plan outlines procedures for data backup, system recovery, and business continuity in case of unexpected events.
  • License Agreements: Documentation related to licensing and usage rights, including open-source licenses if applicable.
  • Maintenance and Support Procedures: Documentation detailing how the organization provides ongoing maintenance and support for the software.
  • Quality Assurance and Testing Reports: Reports on quality assurance processes, test results, and verification and validation activities.
  • Configuration Management Plan: Guidelines for managing software configurations, including versioning, branching, and merging strategies.
  • Training Materials: Materials used for training developers, support staff, and end-users on the software.
  • Performance Reports: Documentation related to the software’s performance metrics and optimization efforts.
  • Technical Specifications: Detailed technical specifications for hardware and software requirements.

These documents collectively ensure that software products are developed, maintained, and used effectively, meeting technical and business objectives. The documents’ details may vary depending on the organization, project complexity, and industry regulations.

The list above only covers product documentation and does not cover all types of documents that organisations use. As we shall see in section 3.3, organisational processes require another, equally long, set. Is that too much documentation?

At face value, more quality, specialized documentation serving specific business aspects is always highly desirable. However, the cost of documentation outweighs its benefits when a) documentation becomes an objective in itself, or b) the documentation produced is not usable or helpful.

3.2 Documentation Usability, Not Size, Format, or Tools Used, as a Measure of Success

Product documentation is a great example of a non-value-adding activity, as it contributes little to the final product design. It also has a cost, especially when valuable resources interrupt their value-adding activities (analysis, design, development) to create it.

To paraphrase Winston Royce in his famous paper on large software project delivery: when it comes to non-value-adding activities, management must work hard to convince the developers to do them and the customer to pay for them. The cost-to-benefit ratio of documentation and other similar tasks must therefore be kept as low as possible.

Valuable documentation has the following characteristics:

  • Clarity: Good documentation is easy to understand and leaves no room for ambiguity. It uses plain language and avoids unnecessary jargon or technical terms when not required.
  • Conciseness: While providing comprehensive information, good documentation is concise. It gets to the point without unnecessary elaboration, making it easier to digest.
  • Organization: Information is logically structured and organized, making it easy to navigate. Clear headings, subheadings, and a table of contents (for longer documents) help users find what they need quickly.
  • Consistency: Documentation maintains a consistent style and format throughout. Consistency in terminology and formatting enhances readability and reduces confusion.
  • Accuracy: Information presented in the documentation is accurate and up-to-date. It should be reviewed and revised as needed to reflect changes.
  • Relevance: Good documentation focuses on the most relevant information for the intended audience. It avoids including unnecessary details that might overwhelm or distract readers.
  • Completeness: It covers all necessary topics and addresses the subject matter comprehensively, leaving readers with no unanswered questions.
  • Accessibility: Documentation should be easily accessible to its intended audience. This includes providing multiple formats (e.g., PDF, online, print) and ensuring compatibility with assistive technologies for accessibility.
  • User-Centered: Effective documentation is created with the end-user in mind. It anticipates the questions and needs of the target audience and addresses them proactively.
  • Visual Elements: Graphics, charts, diagrams, and illustrations are used when appropriate to enhance understanding. Visual elements should be clear and relevant.
  • Searchability: In digital documentation, a search function or index is essential to help users quickly locate specific information.
  • Version Control: Especially in software development, version-specific documentation ensures that users can access the information relevant to their software version.
  • Examples and Samples: Providing real-world examples, code snippets, or sample documents can be immensely helpful for users to understand concepts and apply them.
  • Maintenance: Documentation should be regularly updated to reflect changes, improvements, or new information. Outdated documentation can lead to confusion and errors.
  • Feedback Mechanism: Users should have a way to provide feedback or seek clarification on documentation. This can help improve future versions.
  • Security: Sensitive information should be appropriately protected or redacted in documentation to prevent unauthorized access.
  • Cross-Referencing: Where relevant, documentation should include cross-references to related sections or documents, aiding users in exploring related topics.
  • Internationalization and Localization: For a global audience, documentation should consider language and cultural differences and be adaptable to various regions.
  • Ownership: Clearly define documentation ownership to ensure accountability for updates and maintenance.

The above attributes, not documentation size or the tools used, ensure the documentation is usable and valuable to the organisation and its clients.

4. Myth 3: Processes Can Be Engineered (and Documented) to Cover Any Scenario

Organisations tend to study past events to create predictive and prescriptive models for future decisions based on the assumption that they are dealing with a complicated system in which the components and associated relationships are capable of discovery and management. This arises from Taylor’s application, over a hundred years ago, of the conceptual models of Newtonian Physics to management theory in the principles of scientific management.

— Dave Snowden, Complex Acts of Knowing

4.1 Process Documentation: What Do We Really Need?

Can you document how a football match should be played, such that every kick, jump, sprint, and shot is planned, scheduled, and written down ahead of the game?

Organizations maintain various types of documentation to describe their organizational processes. Here are some common types of documentation related to organizational processes:

  • Standard Operating Procedures (SOPs): SOPs are detailed documents that outline step-by-step instructions for performing specific organisational tasks or processes. They provide a standardized approach to completing activities and ensure consistency in operations.
  • Process Maps/Flowcharts: Visual representations of processes using flowcharts or process maps help employees and stakeholders understand the sequential flow of activities, decision points, and interactions involved in a process.
  • Process Narratives: These are detailed narratives that describe the context, purpose, and execution of a process, often including examples and real-world scenarios.
  • Process Metrics and Key Performance Indicators (KPIs): Documentation of the specific metrics and KPIs associated with each process helps organizations track and measure process performance and identify areas for improvement.
  • Process Improvement Plans: Documentation outlining plans for optimizing or enhancing existing processes, including goals, strategies, and timelines.
  • Process Ownership and Responsibilities: Clearly defined roles and responsibilities for each step in a process, including process owners, stakeholders, and those responsible for execution.
  • Process Training Materials: Materials like training manuals, videos, or e-learning modules that educate employees on how to execute processes correctly.
  • Process Change Requests: Documentation of proposed changes or updates to existing processes, including justification and impact assessments.
  • Compliance and Regulatory Documentation: Documentation that ensures processes align with industry regulations, standards, and legal requirements.
  • Process Governance Documents: Documents outlining the governance structure for managing and overseeing processes, including committees, roles, and decision-making processes.
  • Knowledge Base or Wiki Articles: Online repositories of process-related information accessible to employees for quick reference and learning.

This long list prompts curious minds to put forward the following concerns:

  • How efficient is it to produce these documents and maintain them?
  • Are they valuable, useable, and effective all the time?
  • If we had to reduce the documentation cost of a specific department, which items from the list above would have to be left out?

To answer these questions, we first start by dividing the information these documents cover into three categories.

Category | What does it include? | Useful?
Compliance | SOPs, Process Change Requests, Process Governance Documents | Yes, mission-critical
Wikis and How-Tos | Knowledge Base, Wiki Articles, and How-To guides | Valuable if they adhere to the characteristics from section 3.2
Others | Process Maps/Flowcharts, Narratives, KPIs | Limited usefulness

The last category is where things can start to go wrong. To understand why, imagine that you need to write a guide on what to do when investigating a production bug.

Sure, you can include a flow chart that starts by checking whether you have enough information to start the troubleshooting process and whether the bug is reproducible or not. Next, you will have a box that says “Analyze”, followed by a detailed process of producing a fix.

Where does the bulk of the work lie in that three-step process (Check, Analyze, Fix)? Probably in step 2, Analyze. In that case, is it not reasonable to break it down into more detail? A more detailed Analyze step would need to include all of the following:

  • All potential bugs and scenarios that might arise in the future
  • The steps required to find the root cause per scenario
  • All stakeholders to be informed or consulted before a specific solution is adopted
  • The point at which the troubleshooter should seek assistance from senior coworkers

As you might have guessed, dissecting the Analyze step quickly explodes into infinitely many branches and paths leading to a solution. The analysis stage is rich, subjective, and contextual, and it has to deal with uncertainty about future situations, making it nearly impossible to commit to paper.
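The limitation can be made concrete with a hypothetical runbook sketch (the symptoms and advice strings are invented): the scripted part of the Analyze step covers only symptoms someone predicted in advance; everything else falls through to tacit expertise.

```python
# A hypothetical troubleshooting "process document" encoded as a
# lookup table: each known symptom maps to a scripted root-cause check.
RUNBOOK = {
    "timeout": "Check connection-pool saturation and slow queries.",
    "out_of_memory": "Inspect heap dumps for the usual leak suspects.",
    "login_failure": "Verify the identity provider's certificate rotation.",
}

def analyze(symptom: str) -> str:
    # The scripted part works only for symptoms someone predicted.
    if symptom in RUNBOOK:
        return RUNBOOK[symptom]
    # Everything else falls through to tacit knowledge: expertise,
    # context, and judgment that no document can enumerate in advance.
    return "No scripted path: escalate to an experienced investigator."
```

However many entries the table gains, the fall-through branch never disappears; that branch is where the real analysis lives.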

What you do every time you investigate a production issue involves creative effort, the availability of tools and support, bug criticality, expertise, time at hand, and the specific details of the feature itself. In Snowden’s words, “We only know what we know when we need to know it” [1].

4.2 Documenting the Desirable Object State vs. The Process of How to Get There

All process documentation is typically a mixture of two components:

  • One component describes an object’s final (typically desirable) state. The object can be a new feature, a successful test, or a speedy delivery.
  • The other component describes a process of how to get there. The process describes one or multiple paths on how a feature can be implemented, a test is conducted, and a delivery is completed.

The first component, a desirable end-state, is much more robust than the second. It also allows for creativity, innovation, and the satisfaction of discovering new solutions. In many cases, documenting the path along with the end state can make it harder for people to explore alternative paths that may be better than the ones currently known.
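In software terms, the end-state component can often be captured as an executable check, leaving the path open. A minimal sketch, assuming a hypothetical deduplication requirement:

```python
# Document the desirable end state as a check any implementation must
# pass (no duplicate emails remain), rather than prescribing the steps.
def end_state_ok(records: list[dict]) -> bool:
    emails = [r["email"] for r in records]
    return len(emails) == len(set(emails))

# One possible path to that state; teams are free to find better ones
# (streaming, database constraints, merge-on-write, etc.).
def deduplicate(records: list[dict]) -> list[dict]:
    seen, result = set(), []
    for r in records:
        if r["email"] not in seen:
            seen.add(r["email"])
            result.append(r)
    return result
```

The check documents *what* success looks like; *how* to reach it stays open to alternative, possibly better, solutions.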

5. Myth 4: Telling People What to Do Instead of What Not to Do and When

5.1 Narratives, Parables, and Bedtime Stories

Narratives, parables, and bedtime stories are great ways of storing and sharing knowledge.

Narratives, parables, and bedtime stories serve distinct but interconnected roles in the lives of individuals, groups, and organizations. These roles are based on their ability to convey information, illustrate concepts, and shape perspectives.

  • Individuals

Narratives are essential for personal growth and learning. They allow individuals to process their own experiences, understand their emotions, and make sense of their lives. People often create personal narratives to make meaning from their journeys and experiences.

Parables are concise stories that convey moral or ethical lessons. They serve as valuable tools for personal reflection and growth. Individuals often use parables to explore ethical dilemmas and make decisions based on timeless wisdom.

  • Groups

In group settings, narratives help build a shared sense of identity and purpose. Organizational leaders often use narratives to align team members with the company’s mission and values. Narratives can also be a powerful tool for conveying community history, culture, and traditions.

Parables are employed in group settings to communicate moral and ethical principles. They can be particularly useful in educational and religious contexts, where they simplify complex ethical concepts for broader understanding.

  • Organizations

Narratives are crucial for communicating an organization’s history, vision, and goals. They create a sense of continuity and purpose among employees, which can foster commitment and motivation. Effective storytelling can also be used to convey complex strategies or changes within the organization.

Parables and bedtime stories have almost no place in organisations. Parables are highly abstract and require an extensively shared context between the speaker and the audience. This shared context may cover education, culture, and belief and value systems. Without this common denominator, communicating in parables will be inefficient.

What about bedtime stories? Dave Snowden uses bedtime stories as a metaphor for lesson learning. Snowden believes that organisations should focus on creating knowledge bases documenting what not to do rather than what to do, as the latter is much harder to pinpoint (in a complex system, at least) and is more easily forgotten.

5.2 Worse Practice (Negative Stories) Knowledge Bases

Dave Snowden, the creator of the Cynefin framework, proposed the idea of worse practice knowledge bases as a radically different approach to knowledge management than what we currently have. Worse practice knowledge bases document failures rather than successes. Here are some of the arguments for them:

  • Telling people what not to do is far more effective and powerful than telling them what to do, since our responses to information about adverse events are disproportionately stronger than our responses to favourable ones. We pay more attention to sad stories than happy ones.
  • It is much easier to document past actions that resulted in adverse outcomes than to try and predict all the future actions that will lead to success or assert that what worked in the past will continue to work in the future despite changing circumstances.
  • Articulating what we should not do leaves the door open to serendipity or the openness to discover good things along the way. When we focus on what needs to be done, we become blind to useful opportunities that do not necessarily align with our stated goals.

Despite these forceful arguments, worse practice knowledge bases are not common, for two reasons.

  • First, although the idea of bedtime stories that start badly and end well is ancient and universal in human cultures, documenting and sharing knowledge via “lessons learned” in organisations seemed more favourable and straightforward as it provided new employees with “recipes” ready to implement. The other approach, telling people what not to do and letting them figure out what they should do, is more laborious and time-consuming. Naturally, we went for the former.
  • Second, sharing failures requires a level of trust and psychological safety within organisations that might not be readily available, as such information can be easily used (or misused) as political ammunition.

Negative stories and past failures set boundaries on what can and can’t be done. Setting boundaries is one of a small number of interventions available for managing complex adaptive systems, another being the management of attractors.

6. Myth 5: Collective Knowledge Is the Sum of Individual Knowledge

6.1 You Have The Specifications, Why Can’t You Make It Work?

We gave you the specifications, why can’t you fix the problem?

When writing down product specifications, user manuals, or reference guides, we make vast assumptions about what the reader should know for this document to be usable. Most of these assumptions are usually unconscious.

The expert is asked to codify their knowledge in anticipation of potential future uses of that knowledge. Assuming willingness to volunteer, the process of creating shared context requires the expert to write a book.

— Dave Snowden, Complex Acts of Knowing

The reality is that what is music to our ears might not be for someone who does not share our expertise with the product, knowledge of its design and history, education in computer science, and perhaps even culture.

To use Max Boisot’s I-Space terminology, specifications are high-abstraction, high-codification artifacts. Although they carry plenty of information, they don’t tell you everything you need to know.

For example, if you are given the design specifications of an Oracle database, it doesn’t automatically follow that you can build one in your garage. The author of these specifications has probably assumed that you already know something about database engine design, software development and testing, distributed software systems, networking, security, etc. To build a database engine, you need a book that covers all these fields.

6.2 Collective vs. Individual Driven Problem-Solving

During intense, complex problem-solving, we remember things we thought we had forgotten. Discussions with customers, managers, or senior developers from the current or past organisations suddenly become relevant and offer solutions we did not know existed.

The capacity to tap into the potentially unlimited knowledge stored in the collective memory of a group is key to addressing complex issues. Predicting group behaviour in future hypothetical situations is impossible, which makes this knowledge even harder to write down.

There is also something else that a group can do but an individual cannot: generating a rich set of alternative solutions or views of the same problem. You cannot teach people to think differently, but you can give a problem to a sufficiently large group, and some of them will notice something that others won’t.

For complicated problems, an expert opinion might suffice and may even be worth more than the collective opinion of many. In contrast, for complex problems, group problem-solving generates better solutions than individual effort.

7. Summary

  • Documentation must not be an end in itself, measured by size, price or sophistication of the tools used.
  • Documentation can serve many purposes, including keeping audit trails, satisfying compliance requirements, and knowledge preservation.
  • Not everything people know can be codified; therefore, documentation is not equivalent to knowledge management.
  • Knowledge management is about managing information flow and connecting the right people.
  • Knowledge management is also about understanding where knowledge is stored and how to preserve its assets within the organisation.
  • Acting in a complex environment requires theory and practice, a collective effort of knowledge sharing and knowledge creation.
  • High abstraction and high-codification documents (maps, technical specifications, etc.) have limited usefulness and can be detrimental if relied on as a single source of knowledge.

8. References

[1] Dave Snowden, “Complex Acts of Knowing: Paradox and Descriptive Self-Awareness”, Journal of Knowledge Management, Vol. 6, No. 2, 2002.
