Recently, we’ve been looking at the following ten steps in the life of a lesson:
- Event takes place – an experience, idea, incident or accident
- Analysis and capture – through interview, AAR, workshop, report-writing etc.
- Packaging – write-up of lesson
- Review for accuracy – editing and improvement by the person who identified the lesson
- Validation – quality check, ownership assigned and upload into a management system
- Review for accountability – periodic checks on progress
- Implement recommendations – to prevent recurrence of bad outcomes and ensure recurrence of good ones
- Review for effectiveness – observe changes to ensure they have had desired effect
- Closure – lesson status updated but retained in system for reference and to aid analysis
- Assurance – as part of risk management, periodic review to ensure closed status remains justified
We’ll now look at the analysis and capture processes used to extract lessons from such events.
Lessons can be captured from different sources, such as:
- Interviews – conducted 1-1 with key project team members or employees approaching retirement;
- After Action Reviews – regular, informal meetings conducted within teams;
- Lessons capture workshops – deliberate, formal sessions at end of stage or project closure;
- Reports – from project closure or following an unplanned event such as an accident.
Each of these has advantages and disadvantages. For example, a 1-1 interview gives only one perspective, and whilst anonymity encourages honesty, it limits credibility and the ‘right to respond’. Conversely, a workshop may provide a broad overview whilst lacking detail, and few participants will be truly honest in front of others, not least when it comes to discussing personal errors.
These are simple facts about how people and organisations handle reflection on past performance, but a good KM team will ensure that lessons are captured from as many sources as possible.
Whilst sources vary, lessons are captured consistently by analysing past events in the following way:
- What was expected or meant to happen?
- What actually happened?
- How does what happened differ from what was expected?
- What were the root causes or contributory factors?
- What can we learn? This learning comes in two forms:
  - What should we do next time round, when faced with this issue?
  - What changes should the host organisation make to prevent or ensure recurrence?
- What impact did this issue have? How much money/time did it cost or save us?
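For teams that record lessons in a database or management system, the question set above maps naturally onto a simple record structure. The sketch below is purely illustrative – the field names and the `Lesson` class are assumptions, not a reference to any particular lessons-management product:

```python
from dataclasses import dataclass

# Minimal sketch of a lesson record mirroring the capture questions.
# All names here are hypothetical, chosen only to illustrate the structure.

@dataclass
class Lesson:
    expected: str            # What was expected or meant to happen?
    actual: str              # What actually happened?
    difference: str          # How did the outcome differ from expectations?
    root_causes: list[str]   # Root causes or contributory factors
    next_time: str           # What should we do next time round?
    org_changes: str         # What should the host organisation change?
    impact: str              # Quantified impact (money/time cost or saved)
    status: str = "open"     # Updated through review, implementation and closure

# Example record for an imagined project overrun
lesson = Lesson(
    expected="Installation completed in two days",
    actual="Installation took five days",
    difference="Three-day overrun",
    root_causes=["Site access permits not arranged in advance"],
    next_time="Confirm site permits before mobilising the crew",
    org_changes="Add a permit check to the mobilisation checklist",
    impact="Three days of crew standby costs",
)
print(lesson.status)  # → open
```

Keeping the capture questions as explicit fields makes the later review, accountability and closure steps easier, since each record carries its own context and a status that can be updated over time.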
People may observe that this seems an artificial structure, and they would be correct. Without a logical flow of inquiry such as this, however, people tend to launch straight in with, "Right, what we need to do next time is...." The priority for any lessons capture method should be consideration of the end-user (i.e. the person, team or department that can use the lesson to drive change).
The end-user is not in the room when the workshop takes place, nor are they reading over your shoulder as you write your report; so we have to do as much as we can to help them understand the issue, by giving them background and an understanding of what was meant to happen.
If they are to act upon our recommendations, by spending budgets differently, or by endorsing and embedding new processes, we need to make our case, which requires logic and persuasion, hence the flow of the lesson and the need for quantifiable impact.
Having captured answers to each of these questions, the lesson now needs to be written up in neat form, which is our next step.