Labs/Ubiquity/Usability/Usability Testing/Fall 08 1.2 Tests/Tester 08a
Embed video here.
- Mistook the Awesome Bar for Ub 04:20, and Google's "I'm Feeling Lucky" function 05:00!
- Random guessing of commands 29:30
This tester highlights a deficit of statistical UI testing: remote click-through logging cannot show when a user clicks on something expecting it to do something it doesn't. We can't very well log all of a user's keystrokes. Is there a way to monitor this behavior? One possibility is sketched below.
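A minimal sketch of one answer, assuming an instrumented test build that can watch non-Ubiquity input surfaces for text that looks like a command. Every identifier here (KNOWN_COMMAND_VERBS, checkForMisdirectedCommand, the #urlbar selector) is hypothetical, not Ubiquity's real extension API:

```typescript
// Verbs the instrumented build knows about (illustrative subset).
const KNOWN_COMMAND_VERBS = new Set([
  "translate", "map", "email", "calculate", "wikipedia",
]);

interface MisdirectedInput {
  surface: string;   // where the text was typed, e.g. "urlbar"
  text: string;      // what the user typed
  timestamp: number; // ms since epoch
}

const misdirectedLog: MisdirectedInput[] = [];

// Called whenever text is committed in a non-Ubiquity input surface.
function checkForMisdirectedCommand(surface: string, text: string): void {
  const firstWord = text.trim().split(/\s+/)[0]?.toLowerCase() ?? "";
  if (KNOWN_COMMAND_VERBS.has(firstWord)) {
    // The user typed something that looks like a Ubiquity command into
    // the wrong place -- the behavior click-through stats cannot see.
    misdirectedLog.push({ surface, text, timestamp: Date.now() });
  }
}

// Example wiring: an <input id="urlbar"> standing in for the location bar.
const urlbar = document.querySelector<HTMLInputElement>("#urlbar");
urlbar?.addEventListener("change", () =>
  checkForMisdirectedCommand("urlbar", urlbar.value),
);
```

Aggregated across testers, these events would surface exactly the misdirected keystrokes that click-through statistics miss.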
- Merge Ub with the awesome bar
- Use data gathering to capture failed commands and improve the intelligence of the thesaurus (see the sketch after this list)
- Consider inserting iframes (rather than static JPEG screen captures), and work with providers to support commands directly.
- Make Google a fallback
- Make help non-linear
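A minimal sketch of how the failed-command capture, thesaurus suggestion, and Google fallback from the list above might fit together. The verb list and function names are invented for illustration, not Ubiquity's actual command API:

```typescript
// Classic Levenshtein edit distance via dynamic programming.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0,
    ),
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                     // deletion
        dp[i][j - 1] + 1,                                     // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const KNOWN_VERBS = ["translate", "map", "email", "calculate", "wikipedia"];
const failedCommandLog: string[] = [];

function handleCommandInput(input: string): string {
  const verb = input.trim().split(/\s+/)[0].toLowerCase();
  if (KNOWN_VERBS.includes(verb)) return `run: ${input}`;

  // Capture the miss so the thesaurus can learn from aggregate data.
  failedCommandLog.push(input);

  // Thesaurus step: suggest the closest known verb if it is close enough.
  let best = KNOWN_VERBS[0];
  for (const v of KNOWN_VERBS) {
    if (editDistance(verb, v) < editDistance(verb, best)) best = v;
  }
  if (editDistance(verb, best) <= 2) return `did you mean: ${best}?`;

  // Last resort: treat the whole input as a Google query.
  return `fallback: https://www.google.com/search?q=${encodeURIComponent(input)}`;
}
```

A near-miss like "translte hello" would get a "did you mean: translate?" suggestion, while random guessing (as at 29:30) would land on the Google fallback instead of a dead end.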
Raskin's First Law of Interface Design: "A computer shall not harm your work or, through inaction, allow your work to come to harm." 22:15. I believe a single user guessed to reload the page after three previous failed attempts.
| Research Questions | Performance Benchmarks |
| --- | --- |
| How do users try to access Ubiquity? | |
| How do they learn the command syntax? | |
| Do users value Ubiquity? | |
| How would we identify problematic commands via statistical analysis? | Tester put commands in places they did not belong; can we monitor that? |
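On the statistical-analysis question above: assuming aggregated per-command logs existed, a first pass could flag commands whose failure rate sits far above the mean. The log schema (CommandStats) and threshold here are assumptions for illustration:

```typescript
interface CommandStats {
  name: string;      // command verb
  attempts: number;  // times the command was invoked
  failures: number;  // times it errored or was abandoned mid-entry
}

function flagProblematicCommands(
  stats: CommandStats[],
  minAttempts = 50,  // ignore commands with too little data
): string[] {
  const rates = stats
    .filter((s) => s.attempts >= minAttempts)
    .map((s) => ({ name: s.name, rate: s.failures / s.attempts }));
  if (rates.length === 0) return [];

  // Flag commands more than two standard deviations above the mean rate.
  const mean = rates.reduce((sum, r) => sum + r.rate, 0) / rates.length;
  const variance =
    rates.reduce((sum, r) => sum + (r.rate - mean) ** 2, 0) / rates.length;
  const threshold = mean + 2 * Math.sqrt(variance);

  return rates.filter((r) => r.rate > threshold).map((r) => r.name);
}
```

This would catch commands that fail often once invoked, but not commands typed into the wrong surface entirely; that gap is what the instrumentation sketch earlier on this page is for.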
- "Take the Ubiquity Tutorial, that sounds boring" 00:50
- Reads everything but skips over the hotkey.
- Decides to try the tutorial 02:15, immediately hates the visual presentation.
- Immediately skips past the hotkey explanation
- Tries typing in a command and hitting Enter without trying the hotkey 04:00
- Mistakes the Awesome Bar for Ub 04:20
- Mistakes Google's "I'm Feeling Lucky" function for Ub 05:00
- 12:08 "My idea is that the interface should be so intuitive that one doesn't even have to try, it should just do what you think it should do."
- Gives up on the tutorial after almost 10 minutes 13:00
- Tries video 13:30
- F*ng loves the demo 14:00
- Randomly guesses commands 29:30