Labs/Ubiquity/Usability/Usability Testing/UI Triangulation

Triangulation helps categorize usability issues by their severity and the frequency at which they occur, by looking for the same problem across multiple data sources.

It also raises the visibility of problems by making sure each one is reported in all the appropriate areas, cross-referencing usability studies, bug trackers, and customer-facing issue trackers. This helps to ensure that usability issues receive earlier attention and a priority in the overall development cycle at least equal to that of programming bugs. It also increases the value of each of these development tools beyond their specific, isolated purposes.

UI Triangulation

Triangulation involves comparing and cross-referencing observations across data collection points such as Trac tickets, Get Satisfaction tickets, traditional usability testing, mailing list messages, and server statistics.

Generally, analytics (server stats, volume of Trac tickets, volume of GSFN complaints, % of usability testers affected, etc.) are used to quantify the frequency of the problem. The raw data can also be used for severity (exit points on web sites, for example), but the content must be examined as well to determine how detrimental the issue is.

Traditional usability testing allows for a very focused, clear understanding of problems and for root-cause analysis of deeper issues. Bug reports allow not only for identification of the problem but also for understanding how users expect the issue to be resolved.

Cross-referencing ties these disparate systems together in either an automated manner (database sync), a semi-automated manner (forwarding new ticket notifications to the bug tracker), or manually (managers log bugs into multiple systems as they receive them).
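
As a minimal sketch of the semi-automated path, the script below forwards a new Get Satisfaction ticket to a bug tracker's email gateway. The gateway address, SMTP host, and ticket fields are hypothetical placeholders, not real Mozilla infrastructure.

  # Sketch: semi-automated cross-referencing. Forward a new GSFN
  # ticket to a bug tracker's email gateway so it is logged in both
  # systems. Addresses and fields below are hypothetical.
  import smtplib
  from email.message import EmailMessage

  TRAC_EMAIL_GATEWAY = "new-ticket@trac.example.org"  # hypothetical
  SMTP_HOST = "localhost"                             # hypothetical

  def forward_ticket(ticket: dict) -> None:
      msg = EmailMessage()
      msg["From"] = "usability-triage@example.org"    # hypothetical
      msg["To"] = TRAC_EMAIL_GATEWAY
      msg["Subject"] = f"[GSFN #{ticket['id']}] {ticket['title']}"
      # Carry the description and a source link so the receiving
      # tracker gets context, not just a symptom.
      msg.set_content(f"{ticket['body']}\n\nSource: {ticket['url']}")
      with smtplib.SMTP(SMTP_HOST) as smtp:
          smtp.send_message(msg)

  forward_ticket({
      "id": 123,
      "title": "Command preview never appears",
      "body": "Several users report the preview pane stays blank.",
      "url": "https://getsatisfaction.example.org/topics/123",
  })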

It is important that basic reasoning, root-cause analysis, and problem solving are carried along with the actual bug description, both to categorize the bug and to motivate others to fix it.

Rankings

Rankings are a way of distilling how important a problem is by considering its severity and the percentage of users affected. Human factoring is there to temper the numbers; ranking and UI triangulation only bring a level of objectivity to what are essentially subjective observations.

The equation is: Severity + Frequency + human factoring = Priority

Severity

  • S4 = Makes Ubiquity either inaccessible or broken
  • S3 = There is no S3!
  • S2 = Minor loss of data or deterrent to use
  • S1 = Annoyance

There is no S3 because otherwise a problem that "breaks" Ubiquity would rank as S3 + F1 = P4 (Major), even though it is a critical issue.

Frequency

  • F1 = 1-10%
  • F2 = 11-50%
  • F3 = 51-89%
  • F4 = 90-100%

Frequency is tricky; it must be used according to scale. If a usability test has 10 participants, then generally each participant is worth 10%. GSFN and bug databases should be scaled similarly: treat the recent bug with the largest number of votes as representing 100% of users.
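
As an illustration, this sketch normalizes a raw count against a baseline (the number of test participants, or the vote count on the most-voted recent bug, treated as 100%) and maps it onto the F scale. The data in the usage lines is made up.

  # Sketch: map a raw count onto the F1-F4 frequency scale by
  # normalizing against a baseline that represents 100% of users.
  def frequency_rank(count: int, baseline: int) -> int:
      percent = 100.0 * count / baseline
      if percent >= 90:
          return 4   # F4 = 90-100%
      if percent > 50:
          return 3   # F3 = 51-89%
      if percent > 10:
          return 2   # F2 = 11-50%
      return 1       # F1 = 1-10%

  # Usability test: 4 of 10 participants hit the problem -> F2.
  print(frequency_rank(4, 10))
  # Trac votes: 18 votes against a 20-vote top bug -> 90% -> F4.
  print(frequency_rank(18, 20))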

It is important to reiterate the role of human factoring in this equation for correcting sampling issues. Subgroups that represent a smaller portion of the population will need to be factored down.

For example, if a bug occurs only for elite users who could figure out how to vote on a Trac ticket, then the frequency will have to be scaled back, as they represent only a portion of the user base.

Conversely, if a bug affects a large portion of the user base, but one that is not inclined to vote on a particular bug, then the frequency must be scaled up.
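
A minimal sketch of this correction, modeling human factoring as a simple multiplier on the observed frequency (an assumption; the page prescribes no formula, and the weights below are made up):

  # Sketch: correct an observed frequency for sampling bias.
  # Modeling the human factor as a multiplier is an assumption.
  def adjust_frequency(observed_percent: float, weight: float) -> float:
      # weight < 1 scales back a frequency inflated by a vocal
      # subgroup; weight > 1 scales up one deflated by users who
      # are unlikely to vote. Clamp to the 0-100% range.
      return max(0.0, min(100.0, observed_percent * weight))

  # Elite users (~10% of the base) all voted: scale back.
  print(adjust_frequency(100.0, 0.10))  # -> 10.0
  # Affected users vote at ~1/4 the usual rate: scale up.
  print(adjust_frequency(20.0, 4.0))    # -> 80.0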

Priority

Priority scales correspond to the default Trac rankings.

  • P6 = Blocker
  • P5 = Critical
  • P4 = Major
  • P3 = Minor
  • P2 = Trivial

The scale starts at 2 because that is the minimum ranking given S1 + F1.
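
Putting the pieces together, here is a sketch of the full equation. Two details are assumptions, since the page does not spell them out: human factoring is modeled as a small additive adjustment, and sums outside the scale are clamped to the P2-P6 range.

  # Sketch: Severity + Frequency + human factoring = Priority.
  # The additive human factor and the clamp to P2-P6 are assumptions.
  TRAC_PRIORITY = {6: "Blocker", 5: "Critical", 4: "Major",
                   3: "Minor", 2: "Trivial"}

  def priority(severity: int, frequency: int, human_factor: int = 0) -> str:
      # severity is in {1, 2, 4} (there is no S3); frequency is 1-4.
      score = max(2, min(6, severity + frequency + human_factor))
      return f"P{score} = {TRAC_PRIORITY[score]}"

  print(priority(4, 1))      # breaks Ubiquity, rare: P5 = Critical
  print(priority(1, 1))      # annoyance, rare: P2 = Trivial
  print(priority(2, 4, -1))  # widespread deterrent, but the sample
                             # over-represents vocal users: P5 = Critical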

See also

Usability

Usability testing

External links

Jakob Nielsen's use of severity rankings in heuristic analysis. [1]

Triangulating Across Data Sources: Usability and Analytics [2]