By Tony DePrato | Follow me on Twitter @tdeprato
Willis Towers Watson (NASDAQ: WLTW) is a leading global advisory, broking and solutions company. In an extensive study on change management, they state plainly, “Measurement is among the biggest drivers of overall change success, and it cannot start midway through the initiative”.
Simply telling someone, “Hey! Great idea. Measure it!”, is not going to work either. Stakeholders in any initiative need a plan with third-party indicators. Oversight requires that those metrics not be dictated solely by the people working directly on the project or process. Because many activities are difficult to reduce to a number, observable criteria and anecdotal evidence also need to be considered.
Measurement in Technology-Driven Initiatives and Processes
Here is a common scenario. A department head requests a new laptop. The IT department delivers the laptop and completes all the paperwork. The laptop starts up, makes a little beep, and a nice picture appears on the screen. The person delivering the laptop would probably conclude they were successful. They might even give themselves a silent high-five and fist bump.
The fact is, the laptop is still useless. The end user needs to go through many steps in order to apply this new piece of technology to their work. Declaring that something works simply because there has been no sign of error is a common problem in technology implementation. Those doing the implementation measure their success based on criteria that suit their own scope of work.
To avoid this kind of tunnel vision with regard to measurement, school leadership needs to be certain that all plans outline what success will look like, and how it can and will be reported. One of the best ways to accomplish this is to find an example from a similar project. Normally, this requires networking with other schools or organizations, but it is well worth the effort. Merely describing success with a list of features is less productive than demonstrating success with a set of functions.
Beware of Examples and Case Studies from Vendors
Many technology initiatives require new or upgraded products or services. Vendors normally include best-case examples and case studies. These case studies of successful implementation form the foundation that vendors use to create their metrics of success.
Vendors are motivated by their sales and margins. Most of the examples and case studies will not fit the actual plan most organizations have created. The true details and complications faced by other clients are usually left out or simply not known. The real triumph of an implementation lies within these hidden, organization-specific criteria, problems, and solutions.
I was once contacted by a company and asked to write a case study about an implementation that my team and I had completed very successfully. I was excited, because it meant we had done an excellent job; we were the example. I spent two days putting all the details together. When the case study was finally published, it contained about 25% of those details and left out what I considered to be the core information other schools would need to follow in our footsteps.
Vendors work for their own benefit, so their metrics cannot be used to measure the success of any school’s (or school district’s) local project. Schools need to set their own standards and criteria for success.
Measuring Technology Projects
Technology projects can be challenging to measure. Although the implementation phases usually have a checklist, the professional development processes and various data management pieces can become very cumbersome to oversee well. I recommend creating a few simple metrics before the project or new process is launched.
For example, let’s take the topic of attendance. Imagine for a moment that the school will grow from 200 students to 500 students within a year. At 500 students, attendance taking will be more difficult. The current process is fairly informal, done via emails sent to the office. Therefore, a new IT system has been approved for attendance.
For an administrative team to track and measure a project like this, they could and should:
- Evaluate the current system.
- List the top 5-10 aspects that the team feels are part of successful attendance. For example, timeliness, reporting format, alerting parents, etc.
- Rate those aspects on a simple scale (1-5, 1-7, etc.) until the group comes to a consensus on the ratings.
Using these simple steps, the new attendance process can be measured against the previous one. The ratings system is independent of the technology implementation checklist.
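To make the exercise concrete, here is a minimal sketch in Python of how a team could record and compare its consensus ratings. The aspect names, the 1-5 scale, and every score below are hypothetical placeholders, not data from any real school; the point is only that the before-and-after comparison stands apart from the implementation checklist.

```python
# A minimal sketch of the rating exercise described above, assuming a 1-5
# scale and five example aspects. Every aspect name and score here is a
# hypothetical placeholder; a real team would set these by consensus.

ASPECTS = ["timeliness", "reporting format", "parent alerts",
           "accuracy", "ease of use"]

# Consensus ratings for the current email-based process and the new IT system.
current_process = {"timeliness": 2, "reporting format": 2, "parent alerts": 1,
                   "accuracy": 3, "ease of use": 4}
new_process = {"timeliness": 4, "reporting format": 4, "parent alerts": 5,
               "accuracy": 4, "ease of use": 3}


def compare(old, new, aspects):
    """Print an aspect-by-aspect comparison and the overall change in score."""
    total_old = total_new = 0
    for aspect in aspects:
        delta = new[aspect] - old[aspect]
        total_old += old[aspect]
        total_new += new[aspect]
        print(f"{aspect:<20} old={old[aspect]}  new={new[aspect]}  change={delta:+d}")
    print(f"{'TOTAL':<20} old={total_old}  new={total_new}  "
          f"change={total_new - total_old:+d}")


if __name__ == "__main__":
    compare(current_process, new_process, ASPECTS)
```

Whether the tally lives in a spreadsheet or a short script matters less than repeating it the same way before and after the change, so the numbers can be compared honestly.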
Measurement does not need to be complicated. It does need to be consistent and deliberate. Although we may strive to measure ourselves with as little bias as possible, only a third-party measurement can protect us from ourselves.