Many knowledge management programs seek to gather ideas, insight and feedback from diverse members of the workforce. One type of exercise designed to generate diverse ideas is the Knowledge Café.
In the Knowledge Café, people from diverse areas of the business sit around a table and engage in a conversation about a particular topic. Ideas stimulated by the discussion are captured by the moderator, written on the tablecloth, or inscribed by participants on post-it notes and stuck to a wall. An exercise of this nature can rapidly generate many hundreds of comments.
Once the event is successfully concluded, and the human resources department has more than a thousand responses, a range of questions naturally emerges:
- What do we do with the comments?
- How can we possibly get a handle on them all?
- How can we give feedback to busy managers?
- How do we characterise the results of the discussion?
- What is the general mood of the organisation?
What not to do
Many corporates have fallen prey to consultancies that sell them on the idea of having their experts interpret the results of the knowledge café. The reports come back with fancy language, heatmaps and recommendations that generally necessitate further study. And the organisation, whether actively or passively, resists.
Self-interpretation is a better approach
It is far better to allow staff and management to interpret the results of a knowledge café. Not being analysts, staff and management cannot be expected to get their heads around the entire corpus. What they need is a way to slice through the corpus to access the comments relevant to their person, role or current need.
How to build a knowledge slicing machine with Tinderbox
My approach is to build a mechanism to slice the comments into multiple vectors. With those vectors, I generate HTML pages that enable staff and management to access the cluster of topics that interest them.
To build a knowledge slicing machine:
- Using a concordancing tool, generate a list of all words included in the comments. Exclude common words with a stoplist. Cluster multiple word forms (stemming). Sort by frequency of usage.
- Import these words into Tinderbox and throw them onto a map sorted alphabetically. Cluster the words into semantically related sets.
- For each set, construct an agent that returns all comments containing the key terms in the semantic set.
- Import the comments themselves into Tinderbox, storing them in a clearly marked container. Watch as the agents suddenly cluster their underlying comments.
- Sort the agents by number of children. Take the largest result sets and begin to construct agents that cluster sub-topics, structuring a range of sub-topics beneath each major topic.
- Having sliced and diced the comments a hundred different ways, export the results to HTML for general sharing back to the organisational community.
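The first step above can be sketched in a few lines of code. This is a minimal stand-in for the concordancing tool: the stoplist is abbreviated and the stemmer is a crude suffix-stripper, both placeholders for whatever the real tool provides.

```python
import re
from collections import Counter

# Abbreviated stoplist; a real one would hold a few hundred entries.
STOPLIST = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
            "are", "we", "by", "too"}

def stem(word):
    # Crude suffix stripping as a placeholder for real stemming.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def word_frequencies(comments):
    counts = Counter()
    for comment in comments:
        for token in re.findall(r"[a-z']+", comment.lower()):
            if token not in STOPLIST:
                counts[stem(token)] += 1
    # Sort by frequency of usage, most common first.
    return counts.most_common()

comments = ["Staff need better incident alerting",
            "Incidents are reported too slowly by staff"]
print(word_frequencies(comments))
```

The sorted word list is what gets imported into the Tinderbox map for clustering.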
Example from a large government trading organisation
The example presented here consists of 1042 comments that have been clustered into 249 semantic sets. I'm not at liberty to actually show you the comments themselves, but whenever you see the disclosure triangles you can be sure the comments are lurking just a click away.
These terms are not simple word searches, as they cluster a range of words that signify a particular meaning. For example, the staff query is written to aggregate comments with the words staff, employee and people.
In this organisation, "people" is a synonym of employee; never of passenger. Here's the comparison with the passenger query.
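In Tinderbox itself these queries live in agents; as a rough stand-in, each semantic set can be modelled as a list of word stems, with a comment matching if it contains any of them. The set contents below are illustrative, not the organisation's actual lists.

```python
import re

# Illustrative semantic sets; each maps a set name to its word stems.
SEMANTIC_SETS = {
    "staff": ["staff", "employee", "people"],
    "passenger": ["passenger", "traveller", "customer"],
}

def matches(comment, set_name):
    # A comment belongs to a set if any stem in the set appears in it,
    # allowing suffixes (employee -> employees, etc.).
    pattern = r"\b(" + "|".join(SEMANTIC_SETS[set_name]) + r")\w*\b"
    return re.search(pattern, comment, re.IGNORECASE) is not None

print(matches("Our people deserve better rosters", "staff"))      # True
print(matches("Our people deserve better rosters", "passenger"))  # False
```

Because "people" sits in the staff set and not the passenger set, the two queries partition such comments exactly as described above.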
One of the major topics in this discussion concerned incidents. The incident topic consists of a range of sub-topics, which are ordered in this image by the number of comments within each sub-topic.
Again, each of these groupings is powered by queries that build on the initial semantic sets. Here's the query for "incident alerting".
One of the key needs of the Knowledge Management function within the Human Resources department of this organisation was to get a handle on natural language in context. To assist in this, I structured various key terms into a thesaurus-like outline.
The image that follows shows a subset of the unfurled ‘people’ outline.
It is useful to identify a structure not only for the nouns in the language, but also for the verbs, modal auxiliaries and key objects that signify change, as shown below.
The terminology used as signifiers of change was interesting enough to classify semantically; separators group related usages.
The analysis for the thesaurus outline was all performed manually. But including it within the tool enables people to link across to the comments that embody the term.
All images in this article represent views of the Tinderbox work environment. In order to share the ideas freely across the organisation, the data is exported into HTML pages. The HTML captures all the views of the data shown in this article. Each semantic set and topic structure is presented with word clouds auto-generated for each major grouping.
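The real pages were generated by Tinderbox's export templates, but the shape of the output can be sketched like so, assuming each semantic set has a name and a list of matching comments. The markup here (the disclosure triangles mentioned earlier map naturally onto HTML `<details>` elements) is illustrative.

```python
import html

# Render one semantic set as a collapsible HTML section:
# the set name and comment count in the summary, comments in a list.
def export_set(name, comments):
    items = "\n".join(f"  <li>{html.escape(c)}</li>" for c in comments)
    return (f"<details>\n<summary>{html.escape(name)} "
            f"({len(comments)})</summary>\n<ul>\n{items}\n</ul>\n</details>")

page = export_set("incident alerting", ["Alerts arrive too late",
                                        "No alert reaches night staff"])
print(page)
```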
In addition to the inherent usefulness for slicing through the mass of data in order to derive meaning, a major benefit of this approach is the analytical reuse. Subsequent knowledge cafés in this organisation already have a pre-built analytical framework. Leveraging this framework for further knowledge café events involves:
- Pouring the comments into the tool.
- Testing for unique terms not yet covered; extending as needed.
- Automatic generation into HTML.
Because of this approach to dynamically building result sets based on a domain-specific semantic analysis, further knowledge cafés have been processed within a few hours. Virtually all of those hours are spent performing step 2.
Tinderbox performs admirably with a document containing ~300 agents. When you consider the minimal time for constructing the initial analysis, the opportunities for reuse, and the flexibility in constructing solutions to new challenges that may arise, you realize that Tinderbox provides an ideal base for building custom, domain-specific knowledge management applications.