Thoughts on design quality and its measurement

Implementation of functionality can vastly differ between organizations, and sometimes even between consultants within the same organization. Every mind is unique, and decisions are based on a mix of facts and hunches. The resulting variety makes it difficult to determine the practices needed to maintain quality across tool functionality, or even across organizational implementations.

Quality is embedded in the way designs originate, are actualized, and are maintained. I have been working as an implementation consultant for quite some time now, and I have noticed that the rigor of proper quality management during the design and development phases is often lost to external factors, time pressure, and/or a lack of knowledge.

It is hard to define quality as a set of measurements that are testable and iterable. Yet it is one of the key components in shaping your customer's perception of quality. And striving for it is not a new practice: we aim to provide the maximum amount of quality while remaining within the boundaries set by time, money, effort, knowledge, and so on. Quality, or the perception thereof, nevertheless differs greatly between designs.

This situation is not new to software design practices. Some of you may already be familiar with ISO/IEC 9126, the international standard for measuring and evaluating software quality. Its hierarchical approach to breaking down quality is alluring for two reasons:

  • it provides a foundation for consensual discussions with regards to functionality
  • it allows for measurability of the quality of a product

These two reasons alone should be sufficient to incorporate it in the design phase of any functionality. Even when not made explicit, keeping these factors in mind while designing new functionality will definitely help broaden your view of the design.

The breakdown of quality according to ISO 9126 is based on 6 factors:

[Figure: the six ISO/IEC 9126 quality characteristics — functionality, reliability, usability, efficiency, maintainability, and portability. Source: Virtualspec.org.]

Some of these factors are probably already familiar to many designers. Implementing them in a more structured and rigorous manner will aid in testing designs against them, and make it second nature for designers to consider them.

Testing a design against these factors is not always as straightforward as it may seem, and every client has its own expectations of the design. It is a good idea to establish a foundation on which these qualitative aspects have been mutually agreed and weighed. Once they are set, it is the job of the designer to repeatedly test the quality of the design and maximize it where possible. At the very least, you will have a testable set of measurements that can be used to inform your client of the quality achieved and to oversee the quality provided by the designers. This keeps both the client and the organization satisfied and informed about quality performance.
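As a minimal sketch of what such a mutually agreed, weighted set of measurements could look like in practice: the factor names, scores, and weights below are hypothetical assumptions for illustration, not prescribed by any standard, and the weights would be negotiated with the client up front.

```python
# Hypothetical weighted quality scorecard: factors and weights are agreed
# with the client once, then the design is scored repeatedly against them.

def weighted_quality_score(scores, weights):
    """Combine per-factor scores (0-10) into one weighted quality figure."""
    if set(scores) != set(weights):
        raise ValueError("every factor needs both a score and a weight")
    total_weight = sum(weights.values())
    return sum(scores[f] * weights[f] for f in scores) / total_weight

# Example: weights agreed with the client, scores from a design review.
weights = {"functionality": 3, "reliability": 2, "usability": 1}
scores = {"functionality": 8, "reliability": 6, "usability": 9}
print(round(weighted_quality_score(scores, weights), 2))  # 7.5
```

Running the scorecard after every design iteration gives both parties the same testable number to discuss, rather than a gut feeling.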

With that in mind, you could consider designing measurements for the following KPIs for your design quality. Note that design quality in this case applies to any kind of design, but each factor should be broken down into measurable axes. Consider the following KPIs for determining quality in a software/documentation setting, within a typical organization:

Functionality

  • The extent of coverage of the actual design fitted on its intended purpose
  • Scope versus the actual needs
  • Efficacy/efficiency of the design

Reliability

  • The degree of mitigation strategies for expected/unexpected breakdowns
  • Quality and speed of the recovery procedures when failure occurs
  • Ability to retain the intended effect upon breakdown

Maintainability

  • Inclusion of modifiers and configurable parameters in the design
  • Depth of granularity of the design
  • Verifiability of the functional expectations of a design and, to a lesser degree, the ability to define those expectations
  • Ability to repair breakdowns
  • Clarity and understandability of the design for outsiders

Portability

  • Extent to which future extensions and/or reductions can be integrated in the design
  • The ability to modify and change the existing design to suit other needs
  • Ability to effectuate the design in the organization

Performance

  • Fit between stressors in resources and time

Usability

  • Ability to work with the design
  • Potential impact of redesign on functional needs
  • Ease of understanding the design for insiders

The testing approach would be one of attribution. Each of these factors consists of sub-factors that can be measured through artifacts in the design. For instance, when referring to performance in software, the execution speed of scripts is an interesting artifact to measure. The quality goal for this factor might be ‘quickest execution in achieving the goal by using the functionality’. If execution time is reduced, quality is improved. This is easily measurable, as designs can often be iterated 10,000 times in several different setups to recognize the one achieving the best quality.
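Such an execution-time measurement can be sketched with Python's standard `timeit` module. The two competing implementations below are hypothetical stand-ins for two design setups; the point is the measurement procedure, not the functions themselves.

```python
import timeit

# Two hypothetical "setups" implementing the same functionality.
def sum_with_loop(n=100):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n=100):
    return sum(range(n))

# Iterate each design 10,000 times, several repeats, and keep the best
# (lowest) time per design; the quickest setup wins on this quality axis.
for impl in (sum_with_loop, sum_builtin):
    best = min(timeit.repeat(impl, number=10_000, repeat=5))
    print(f"{impl.__name__}: {best:.4f}s for 10,000 runs")
```

Taking the minimum of several repeats filters out interference from other processes, so the comparison reflects the designs rather than the machine's mood.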

Imagine the same situation in client communication about a design. Performance in client communication is measured not in speed, but in client perception and expectation. The quality measurement here could be ‘highest satisfaction achieved for the client’. Measuring this tends to be intrusive and not very efficient.

Organizations tend to solve this by focusing on quantifiable aspects of quality within the organization’s locus of control. A measurement that could be used here is ‘highest rate of client acceptance, with the least amount of negative performance appraisals’.
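As a rough sketch of that measurement: one hypothetical way to combine the two components is an acceptance rate with a penalty for negative appraisals. The function name, inputs, and the particular combination are all illustrative assumptions, not an established metric.

```python
# Hypothetical client-acceptance metric: acceptance rate minus a penalty
# for negative performance appraisals, both relative to deliveries.

def acceptance_score(accepted, delivered, negative_appraisals):
    """Score in [-1, 1]: 1.0 means all accepted with zero complaints."""
    if delivered == 0:
        return 0.0
    acceptance_rate = accepted / delivered
    penalty = negative_appraisals / delivered
    return acceptance_rate - penalty

# Example: 18 of 20 deliverables accepted, 2 negative appraisals.
print(round(acceptance_score(accepted=18, delivered=20,
                             negative_appraisals=2), 2))  # 0.8
```

A composite like this is easy to track quarter over quarter, which is exactly why it is tempting — and, as argued below, why its foundations need scrutiny.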

This might seem like a suitable measurement, but it can also create a mismatch between measured quality and perceived quality. If the foundations of the measurements are wrong, it becomes increasingly difficult to measure actual performance, which automatically reduces the control an organization can exert over it.

Even if perceived quality is high within the organization, there may be no warning before a client decides to turn to a different provider of the same service next time. Such a defection could often have been prevented easily, for instance by changing the approach to the customer and allowing them more time to decide on functionality.

It is not only important to determine what needs to be measured; it is just as important to determine how it is measured. If there is no clear-cut and effective approach to measuring a qualitative aspect, it might as well be considered a useless performance indicator. Approaching quality like this takes considerable time and effort from both client and supplier, but the end result is valuable as well. So consider this a proposition to ask yourself: how important is quality for my organization and for me personally? How do we measure quality in my organization?

In case of questions, feedback or interesting additions, please let me know at .img[at].img.
