Users are Losers! They’ll Like Whatever we Make! and Other Fallacies.
“Users are Losers!”, “They’ll Like Whatever we Make” and Other Fallacies
Presented by Carol Smith @carologic
CodeMash 2013
Many names…
User Experience
Ethnography
Customer Insight
Usability
Interaction Design
User Research
“Users are Losers!”
Plan A: R-E-S-P-E-C-T
Plan B: Stay
Let’s find out about those losers users!
Share what is known
Existing users = usability study
Observations and interviews
Web site – use analytics
Social listening
Observations & Interviews
Learn about:
User’s environment
Real process
Interruptions
Attitudes and opinions
Problems
Goals
Preparation
Plan with a goal/hypothesis
Questions
1. Make a guide
2. Review
3. Test
4. Start study
Watch Closely
Share little
Related tasks
Wait for patterns
Save questions
Stay out of their “space”
Don’t interrupt
Interview
Clarify observations: Why doing? Goal? How typical was this?
Use prepared questions
Don’t lead the witness
Do listen closely
Use their language
http://www.flickr.com/photos/heygabe/ via http://creativecommons.org/licenses/by-nc-sa/2.0/
Actual photo: http://www.flickr.com/photos/heygabe/47206241/
Artifacts!
Logistics
Explicit consent Record video, photo, audio Take notes Give incentives
“We Know Our Users”
How well?
When do they think about your product?
In what context?
Most important to them?
Most like to change?
Web sites used most frequently?
Phone? What kind?
Etc. Etc.
“I don’t know”
Let’s find out!
Market research / segments are a start
Go where (they *think*) they are:
▪ Starbucks
▪ Wal-Mart*
▪ Conferences/User Groups
Card sort to test organization of info
Card Sorting
Use to determine:
Order of information
Relationships
Labels for navigation
Verify correct audience
http://www.flickr.com/photos/rosenfeldmedia/ via http://creativecommons.org/licenses/by-nc-sa/2.0/
Card Sorting
Maximize probability of users finding content
Explore how people are likely to group items
Identify content likely to be:
Difficult to categorize
Difficult to find
Misunderstood
Gaffney, Gerry. (2000) What is Card Sorting? Usability Techniques Series, Information & Design. http://www.infodesign.com.au/usabilityresources/design/cardsorting.asp
http://www.flickr.com/photos/richtpt via http://creativecommons.org/licenses/by-nc-sa/2.0/
Example card: “Preventive Care Guidelines” (numbered 36 for analysis)
One title/subject
Concise and clear
Printed stickers
Numbered for analysis
Short description on back of card if needed
Practice session
Allow 1 hr for 50 items
Total of 30–100 cards
Name groups of cards
Moderated (in-person or remote)
Un-moderated (online)
Conducting Study
Conversation (if moderated)
Ask to:
Describe overall rationale for grouping cards
Show best example
What was difficult? What was easy?
Happy with final outcome?
Analysis
Code cards = faster data analysis
Look for patterns
Excel spreadsheet (Donna Spencer)
Online tools – limited analysis
Screenshot of OptimalSort online tool’s analysis - http://www.optimalworkshop.com/optimalsort.htm
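Coded cards make the pattern-finding step above concrete: a common first analysis is to count, for every pair of cards, how many participants put them in the same group. A minimal sketch in Python — the card names, groupings, and the `co_occurrence` helper are invented for illustration, not from the talk:

```python
# Illustrative card-sort analysis: count how often pairs of cards were
# grouped together across participants in an open card sort.
# All names below are hypothetical examples.
from itertools import combinations
from collections import Counter

def co_occurrence(sorts):
    """For each pair of cards, count how many participants
    placed them in the same group."""
    pairs = Counter()
    for groups in sorts:               # one participant's sort
        for group in groups.values():
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1     # sorted() gives a canonical pair key
    return pairs

# Three hypothetical participants sorting five cards
sorts = [
    {"Billing": ["Invoices", "Payments"], "Help": ["FAQ", "Contact", "Forums"]},
    {"Money": ["Invoices", "Payments", "FAQ"], "Support": ["Contact", "Forums"]},
    {"Accounts": ["Invoices", "Payments"], "Community": ["Forums", "FAQ", "Contact"]},
]

pairs = co_occurrence(sorts)
print(pairs[("Invoices", "Payments")])   # 3 – all participants grouped these together
```

High pair counts suggest content that belongs together in navigation; pairs that split evenly across participants flag the "difficult to categorize" content the earlier slide warns about.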
“They’ll Like Whatever we Make”
Really?
Let’s test that
Usability test prototypes
Rapid, iterative cycles of design and evaluation
Web – feedback from on-site tools
Customer feedback/Help desk
Usability Testing
Real users doing real tasks
Using prototypes or live products
Doing assigned tasks without guidance
Observed closely
http://creativecommons.org/licenses/by-sa/2.0/
http://www.flickr.com/photos/raphaelquinet/513351385/sizes/l/in/photostream/
http://www.flickr.com/photos/raphaelquinet/
Rapid Iterative Testing & Evaluation (RITE)
Qualitative – not quantitative: actions + comments
Series of small usability tests
3 participants each day
At least 3 days of testing
Changes made between testing days
RITE Process
[Diagram: Test → Update → Test cycles across Day 1, Day 2, and Day 3, with changes ranked High/Medium/Low by priority and level of effort]
Recap Sessions
End of each day – after the last session
Room with a whiteboard
About 30 minutes
Discuss:
Trends seen
Concerns
Recommendations
Prioritize changes for the next round
List lower priority changes for future iterations
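The recap session's prioritization amounts to a simple sort: high-priority, low-effort changes go first so they can be made before the next testing day, and low-priority items are parked for future iterations. A hypothetical sketch — the issue names, ratings, and the `triage` helper are invented for illustration:

```python
# Illustrative RITE recap triage: order observed issues so that
# high-priority, low-effort changes come first.
# Issue names and ratings are invented examples.

PRIORITY = {"high": 0, "medium": 1, "low": 2}   # lower value = fix sooner
EFFORT = {"low": 0, "medium": 1, "high": 2}     # lower value = cheaper to fix

def triage(issues):
    """Sort issues by priority first, then by level of effort."""
    return sorted(issues, key=lambda i: (PRIORITY[i["priority"]], EFFORT[i["effort"]]))

issues = [
    {"name": "Redesign navigation",            "priority": "high", "effort": "high"},
    {"name": "Tweak footer wording",           "priority": "low",  "effort": "low"},
    {"name": "Label confusing on save button", "priority": "high", "effort": "low"},
]

for issue in triage(issues):
    print(issue["name"])
# The high-priority, low-effort label fix lands first; the low-priority
# footer tweak falls to the bottom, to be listed for future iterations.
```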
RITE Results
Final prototype
Vetted with users
Base for recommendations
Light report: “Caterpillar to Butterfly”
Screenshots show progressions
What changes were made and why
Testing
Traditional Testing
In-Person or Remote
Moderated or Un-moderated
Habitual Testing
(Yes, this is an old idea; a great one!)
Bring it On!
Small focused tests
Reduce waiting for recruitment
Once per week/sprint
Same day mid-week
Fewer users, shorter sessions: analyze at lunch
3 or more participants recommended
Half hour to 1 hour each
User Testing Day!
Make team aware
Invite everyone
Recurring meeting invites for stakeholders
What could I test?
Work in progress
Multiple projects
Prototypes
Concepts, rough ideas, brainstorming
Competing designs (A/B testing)
Comparative studies across market
Conduct interviews to inform research
More…
“Teams should stretch to get work into that day’s test and use the cadence to drive productivity.”
- Jeff Gothelf - http://blog.usabilla.com/5-effective-ways-for-usability-testing-to-play-nice-with-agile/
Why Regular?
Team becomes accustomed to steady stream of qualitative insight
Ensures quick decisions
Lines up with business and user goals
Adapted from Jeff Gothelf - http://blog.usabilla.com/5-effective-ways-for-usability-testing-to-play-nice-with-agile/
Include PWD (People with Disabilities)
“We are all only temporarily able-bodied. Accessibility is good for us all.”
Spirit of the law
WCAG 2.0
Country-specific (Section 508)
-@mollydotcom at #stirtrek 2011 via @carologic
Make it Repeatable
Pre-Book Your Rooms
Test & Observation Rooms
Any location will do:
Conference rooms
Offices
Quiet corner of cafeteria
Remote
Purchase software - always ready
Create Reusable Templates
Screener: technology use/experience, knowledge of topic
Scripts/Guides
Consent Forms
Data Collection
“We have a survey set up and are getting data from it.”
“Why would we need anything more?”
Surveys
Great way to get quantitative information
Questions
Words can have multiple meanings
Unintended meanings
Fewer people participate now than in the past
People save face: “It’s not that bad”, “It’s my fault”
Vendors requesting a “Perfect 10”
“We’ll just ask employees to save time and money”
Employees
Too close to the project
Know things others wouldn’t about the product
Concerns about ego, job, co-workers, etc.
Not the intended user!
“Don’t we need to test 100s of users to get real results?”
5-6 Participants
Studies have shown that testing 5-6 representative users of each user type will reveal 80% of usability issues.
Jakob Nielsen’s Alertbox. Why You Only Need to Test with 5 Users. March 19, 2000. http://www.useit.com/alertbox/20000319.html
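The figure on this slide traces to the Nielsen–Landauer model: the share of problems found with n participants is 1 − (1 − L)^n, where L is the average probability that one participant exposes a given problem (Nielsen reports L ≈ 0.31 across the projects he studied). A quick sketch of the arithmetic — the `proportion_found` helper is just illustrative:

```python
# Nielsen-Landauer model behind the "5 users" guideline:
# proportion of problems found = 1 - (1 - L)**n,
# with L = probability one participant reveals a given problem.

def proportion_found(n, L=0.31):
    """Expected share of usability problems found with n participants."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 6):
    print(n, round(proportion_found(n), 2))
# With L = 0.31, five participants uncover roughly 84% of problems,
# in line with the ~80% figure on the slide.
```

The curve flattens quickly, which is the argument for spending a fixed budget on several small rounds of testing rather than one large one.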
Look for Patterns
Identify repetition
After pattern is found, continuation of study:
Adds cost
Delays reporting
Low probability of many new findings
Does Not Mean That…
Testing five users is always enough
Can test anyone and have the same results
Smaller groups equate to better findings
“Our design has won awards. Why would we want to change it?”
Why Change?
Visual appearance is important
Must also be usable:
Designed for users
Tasks able to be completed
Organized well
http://www.brainjuicer.com
“We Know it’s Difficult, We Have a Training Program!”
http://www.flickr.com/photos/kaptainkobold/5181464194/sizes/o/in/photostream/
http://www.flickr.com/photos/kaptainkobold/
Training
Costs more time and money
How long will product be used?
Less costly to find and correct issues than provide training to work around the problem
Other Arguments Against UX
Time
Money
Can’t talk to our Customers
Liability
Not needed
Invisible ROI
Prepare for Arguments
Be armed with facts and questions
Don’t just pick a method:
What do you need to know?
What will the stakeholders respond to?
Recommended Readings
Contact Carol
@carologic
slideshare.net/carologic
speakerrate.com/speakers/15585-caroljsmith
References
Albert, Bill, Tom Tullis, and Donna Tedesco. Beyond the Usability Lab.
Albert, Bill, and Tom Tullis. Measuring the User Experience.
Beyer, Hugh. User-Centered Agile Methods (Synthesis Lectures on Human-Centered Informatics).
Bias, Randolph G., and Deborah J. Mayhew. Cost-Justifying Usability: An Update for the Internet Age.
Gothelf, Jeff. http://blog.usabilla.com/5-effective-ways-for-usability-testing-to-play-nice-with-agile/
Henry, S.L., and Martinson, M. Evaluating for Accessibility, Usability Testing in Diverse Situations. Tutorial, 2003 UPA Conference.
Krug, Steve. Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems.
Molich, Rolf. A Critique of “How to Specify the Participant Group Size for Usability Studies: A Practitioner’s Guide” by Macefield. Journal of Usability Studies, Vol. 5, Issue 3, May 2010, pp. 124–128.
Nielsen, Jakob. Why You Only Need to Test with 5 Users. Jakob Nielsen’s Alertbox, March 19, 2000.
Nielsen, Jakob. Usability Evangelism: Beneficial or Land Grab?
Ratcliffe, Lindsay, and Marc McNeill. Agile Experience Design: A Digital Designer’s Guide to Agile, Lean, and Continuous.
Rubin, Jeffrey, and Dana Chisnell. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. John Wiley & Sons, Inc.
Spool, Jared. The $300 Million Button.