The Third Metric: User Satisfaction

In agile it is said that a user story “is a placeholder for a conversation.” In much the same way, in agile “a metric is a trigger for a conversation.” Those seeking enterprise agility are almost never interested in stop-light metrics (red, yellow, green). Instead they are interested in trending metrics. For example, are we making the users more or less happy this week?

Lean and agile businesses measure user satisfaction. The first way to measure user satisfaction is to directly ask the users for their impression of your products and services. This is a simple rating with a follow-on question.

One of our favorite simple forms was suggested by Eric Ries in his Lean Startup blog.

Ask a random sample of your users the following questions about your product, service, or feature.

  • How would you rate this service?                   (poor)  1  3  5  7  9 (great)
  • Would you recommend this to a friend?          No  or  Yes
  • Why?

For products with large enough user bases, the average answers to these questions are plotted and publicly displayed daily to give a feel for user satisfaction levels and trends over time. A sudden sharp rise or fall is clearly evident in the metrics review ceremony, and we can investigate the cause. Perhaps a specific software change correlates with the change in user satisfaction? If we trend negative, we may wish to wait and see whether we recover; sometimes any change at all triggers an initial negative user response.
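A minimal sketch of this kind of trend tracking, assuming a hypothetical list of (date, rating) survey responses on the 1–9 scale above; the data, function names, and the two-point change threshold are all illustrative, not part of any particular product's tooling:

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical survey responses: (date, rating on the 1-9 scale).
responses = [
    (date(2024, 5, 1), 7), (date(2024, 5, 1), 9),
    (date(2024, 5, 2), 7), (date(2024, 5, 2), 5),
    (date(2024, 5, 3), 3), (date(2024, 5, 3), 1),
]

def daily_averages(responses):
    """Group ratings by day and average them, ready to plot as a trend."""
    by_day = defaultdict(list)
    for day, rating in responses:
        by_day[day].append(rating)
    return {day: mean(ratings) for day, ratings in by_day.items()}

def flag_sharp_changes(averages, threshold=2.0):
    """Return days whose average moved more than `threshold` from the prior day."""
    days = sorted(averages)
    return [
        day for prev, day in zip(days, days[1:])
        if abs(averages[day] - averages[prev]) > threshold
    ]

avgs = daily_averages(responses)
print(flag_sharp_changes(avgs))  # days worth raising at the metrics review
```

The point is not the arithmetic but the ceremony: the flagged days become the trigger for a conversation about what changed.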

A satisfaction question can cover not only the entire product or service but also a new feature within an existing product. If you roll out a new feature in a mobile app, you can ask this question about just that feature. Do users really like the feature? Are they actually using it? If not, why not?

Observation is the second gauge of user satisfaction. We can get a significant feel for the level of user satisfaction simply by observing users' interactions with the product, service, or feature.

Interactions to observe include:

  • How frequently do users lose the ability to access the service (downtime)?
  • How frequently do users abandon our digital interfaces mid-transaction (abandoned cart)?
  • How frequently do abandoned-cart users end up calling our call center instead?

With insights like these we gain a great deal of knowledge about what our users actually do, not what we think they do or what they say they do. This data helps establish a foundation to improve our users' experience and grow their satisfaction. Furthermore, we can automate data collection and reporting so the metrics function as an integral part of our product and are not easily subject to significant gaming.
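As a sketch of what that automated collection might compute, assume a hypothetical event log of (user, event) tuples with made-up event names; the log, names, and metric definitions below are illustrative assumptions, not any standard schema:

```python
# Hypothetical event log: (user_id, event), where event is one of
# "start_checkout", "complete_checkout", or "call_center".
events = [
    ("u1", "start_checkout"), ("u1", "complete_checkout"),
    ("u2", "start_checkout"), ("u2", "call_center"),
    ("u3", "start_checkout"),
]

def abandonment_metrics(events):
    """Compute the abandoned-cart rate and the share of abandoners who called us."""
    started, completed, called = set(), set(), set()
    for user, event in events:
        if event == "start_checkout":
            started.add(user)
        elif event == "complete_checkout":
            completed.add(user)
        elif event == "call_center":
            called.add(user)
    abandoned = started - completed
    return {
        # Fraction of users who started checkout but never finished.
        "abandon_rate": len(abandoned) / len(started),
        # Of those who abandoned, the fraction who called the call center instead.
        "abandoned_then_called": (
            len(abandoned & called) / len(abandoned) if abandoned else 0.0
        ),
    }

print(abandonment_metrics(events))
```

Because these numbers fall out of the event stream the product already emits, they are hard to game: nobody fills in a survey, the behavior itself is the answer.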

It turns out, by the way, that when people rigorously measure feature use they always discover that most of the features they release are almost never used. Yes, we really mean over 50% of the features! You then need to investigate why they aren't being used. For many features the best course of action is to remove them entirely from your system. The user interface required to support these unwanted features gets in the way and simply slows users down.
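One way this investigation might start is a simple report of features used by almost nobody. The feature names, counts, and 1% threshold below are invented for illustration; only the technique (compare per-feature usage against active users) comes from the text:

```python
# Hypothetical usage counts: feature name -> active users who touched it
# in some recent window. Names, numbers, and threshold are illustrative.
active_users = 10_000
feature_usage = {
    "export_pdf": 9_200,
    "dark_mode": 4_100,
    "legacy_import": 37,
    "fax_gateway": 2,
}

def candidates_for_removal(usage, active_users, threshold=0.01):
    """Features used by fewer than `threshold` of active users.

    These are candidates to investigate and possibly remove, not an
    automatic delete list -- a rarely used feature may still be critical.
    """
    return sorted(
        name for name, count in usage.items()
        if count / active_users < threshold
    )

print(candidates_for_removal(feature_usage, active_users))
```

Each flagged feature then triggers exactly the conversation the article describes: why isn't it used, and is the interface cost of keeping it worth paying?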

An objective measure of User Satisfaction is a first-order metric. Feedback from real users is a primary control in agile systems.
