Interaction with users can help drive product decisions, but getting from conversations with users to tangible customer data can still be a big mystery. Here are three steps for getting from a casual, interview-esque conversation with a handful of target users to concrete survey results about the audience for which you are designing.
A stupid question can turn a user survey from a useful tool into a misguided hammer of destruction. Instead, get your stupid questions out of the way – productively! – in conversations and user interviews to build more valid, reliable surveys. The end result: software that actually does what the user needs and wants.
Imagine that you’re interviewing some poor soul about Dropbox and selective sync, and this happens:
Q: "Do you use selective sync in Dropbox?"
A: "Yeah, definitely, it's great; you can just download stuff from the website."
Q: "O....kay, so do you have Dropbox installed on different machines or devices?"
A: "Oh, yeah, but sometimes it's weird that some, like, weirdly-formatted spreadsheets don't show up when I open them on my tablet. That's a pain, so I just download it from the Web."
Q: "What do you mean? What do you see?"
A: "Well when I go to open my drive..."
If I were the interviewer and I heard the word "drive," my heart would sink even further, because there's a real possibility that this person doesn't distinguish between Dropbox and other cloud services. I might guess that the user is thinking of Google Drive, unless by "tablet" he means "Surface," in which case it might be SkyDrive. Or, if I have truly displeased the heavens, this person is now talking about using Evernote on his Kindle, so, you know, good luck. The first time I say "Dropbox," I open a terrifying Pandora's box of words, tools, buttons, services, and apps. Forget about selective sync. Here there be bigger monsters.
This time, I'm going to point out another invaluable benefit of talking to actual human beings about whatever you're building: The developer's model of a system usually doesn't match the user's model of the system. This breakdown in conceptual models is discussed at length in Don Norman's The Design of Everyday Things, a short book that's a sort of call to arms for user-centered design.
If you distributed a survey to several thousand users asking the initial question ("Do you use selective sync in Dropbox?"), you're just as likely to get garbage as real answers. A lot of people have no idea what selective sync is, or don't call it that. You don't know what that proportion is, or how it's changed since the feature became available. And you don't know what they're going to guess the question is asking. The worst thing to do in a survey is ask a question that the user has to interpret.
So what do you do? First of all, keep talking, and never correct the user. Don’t tell her how anything really works, what she should do instead, or even how to pronounce something. (My favorite creative online application name pronunciation? “Pee-interest,” hands down.) Being a smarty pants frustrates anyone you’re talking to, and it gets you no information about what the conceptual model breakdown is; ultimately, it's in your interest to understand how the user sees the world.
To a lot of people, Dropbox makes perfect sense, and that's why it's so successful. The Dropbox conceptual model maps onto the existing model of the file system. Google Drive, on the other hand, attempts a similar file-and-folder model but in a browser; that's a bit more of a risk, and if you go approaching strangers to ask about it you might find that it’s actually pretty confusing. (Guess how I know.) The worst breakdowns happen when different models that co-exist in an ecosystem of services clash over similar objects and actions, at which point your only hope in trying to figure out what’s going on is to let them keep talking.
In this case, I could ask next: "Can you walk me through the last time you couldn't get a spreadsheet to open with the right formatting?" I would have no idea where this is going to get me, and it would be a bit of a gamble, because I don't actually care about spreadsheets. What I do care about, however, is the process as the user sees it: What is a document object? Where do you go to access it? What are the actions you can take on it? How do you recover from errors? What are the errors?
Once you have a sense for what words to use, you're almost there. Here are some general survey tips to keep going:
Don’t reinvent the wheel.
A variety of standardized questionnaires can help you measure user satisfaction and effectiveness with a survey, and their questions model careful, generally sound wording. They are exhaustive – and exhausting! – so cherry-picking the sections that are most useful to you can be preferable to using the whole thing.
Ask agree-disagree questions thoughtfully.
The 5- or 7-point "Likert" scale is a common tool for getting hard numbers on subjective questions. It's important to have some redundancy (those usability questionnaires can be a helpful example) so you can sanity-check responses. Remember, too, that some surveys run the scale from left to right ("strongly agree" to "strongly disagree"), while others reverse it; make those labels prominent just in case, and include a throw-away question or two to help filter out confused responses. Finally, a 7-point scale is preferable to a 5-point scale, if only because many respondents either shy away from extreme options or prefer them, so if you want to maintain some meaningful gradation of response strength, 7 points works better.
Although a scale with an even number of points (e.g. 6) can force people to decide one way or the other, the reality is that with many usability or satisfaction scales, some things simply don't matter to some people. Imagine a third of your responses coming back "strongly negative" and two thirds "don't care." That tells you there are (at least) two different kinds of users, and that acting on the complaint affects only about a third of the target audience – a smaller priority than it would have seemed if everyone had been forced to pick a side and a non-issue had shown up as a pressing problem. For a similar reason, be sure to include an "N/A" option.
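To make the redundancy and reversed-scale advice concrete, here is a minimal sketch in plain Python of how you might filter out confused respondents. The question names, the tolerance threshold, and the use of `None` for "N/A" are all hypothetical choices for illustration, not part of any particular survey tool:

```python
# Sanity-checking agree-disagree responses: a redundant pair of questions,
# one reverse-worded, plus an N/A option recorded as None.
# Question names and the tolerance value are hypothetical.

SCALE_MAX = 7  # 7-point scale: 1 = strongly disagree, 7 = strongly agree

def reverse(score):
    """Map a reverse-worded item back onto the normal scale direction."""
    return SCALE_MAX + 1 - score

def is_confused(resp, pair=("easy_to_use", "hard_to_use_reversed"), tolerance=2):
    """Flag respondents whose redundant pair disagrees by more than
    `tolerance` points -- they likely misread the scale direction."""
    a, b = resp.get(pair[0]), resp.get(pair[1])
    if a is None or b is None:   # an N/A answer can't be cross-checked
        return False
    return abs(a - reverse(b)) > tolerance

responses = [
    {"easy_to_use": 6, "hard_to_use_reversed": 2},     # consistent pair
    {"easy_to_use": 7, "hard_to_use_reversed": 7},     # contradictory pair
    {"easy_to_use": None, "hard_to_use_reversed": 3},  # N/A on one item
]

kept = [r for r in responses if not is_confused(r)]
print(len(kept))  # 2 of the 3 responses survive the filter
```

The same idea scales up: score every reverse-worded item with `reverse()` before averaging, and drop (or at least inspect) any respondent whose redundant pairs contradict each other.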
Have open-ended prompts.
Even if you’re only planning on using the scale responses, I highly recommend leaving in several open-ended (or “write-in”) prompts, including one at the end for general questions. Open-ended “Tell me about ___” questions serve three purposes:
- They prime the respondents for the upcoming quantitative question, so they have some sense for what you care about;
- They help the respondents feel like their opinion is valued and they aren’t being reduced to a number;
- They help you debug your survey. If the open-ended responses make no sense, that could be a good sign to pull the plug and try again, with better language.
I am an avid advocate of more contact with customers and users. It sets the stage for new directions and builds morale – and it helps you figure out the right language. So, yes, by all means, ask stupid questions! It’s the path to getting better answers later.