Archive for April, 2010

The dreaded “dark side”

I am currently working on a product and service combination where there is a hardware component and an on-line software component. Predictably, our customers are split between those who go on-line and those who don’t. Also predictably, we know vastly more about our on-line customers than our off-line customers.

This really came to a head when we were doing customer satisfaction surveys. Almost all respondents are on-line users. But what about the “dark side” – our off-line users? Are they happily off-line, or did our product become a paperweight?

Of course, there is only one way to find out: interview them by phone. These folks have proved to us that they don’t go on-line to use our software tools or respond to our emails. They most definitely won’t post anything on a forum or tweet us their thoughts. We would have to revert to old-fashioned phone calls.

The issue is the time commitment: we can blast off an on-line survey to on-line users any time we want. We can always squeeze one in no matter how many other initiatives we are chasing. But to get 20 quality phone interviews from the dark side, we would probably have to call 200-400 people, maybe even more. The time commitment is so immense that the effort nearly always gets postponed when the subject comes up.

Some time soon, we’ll bite the bullet and call our “dark side” customers and find out what they love or hate about our overall user experience.  It’s no different from how anyone else did primary market research 20 or 30 years ago.  It’s just interesting to observe how our expectations have changed with these awesome on-line tools. We are so spoiled 🙂

On-line surveys

This is the ninth post in my Customer Research series.

I’ve been dragging my feet on this post because I have an imposter complex about primary quantitative research. I learned how to do qualitative research by working shoulder to shoulder with established masters, but I was never formally trained in quantitative research. So I know what I don’t know: I’m not terribly deep in my expertise here.

For what it’s worth, I’ll share my not too profound thoughts on on-line surveys.

There are certain pieces of data that cannot be efficiently or effectively collected with qualitative research, either because of the sample size required to get a trustworthy answer, or because qualitative techniques introduce an unacceptable risk of observation bias on emotionally charged questions.

Here are some example questions that I believe are better asked with an on-line survey:

  • How many people in my customer base own which smart phones, and how does this compare with the population at large?
  • How satisfied are my customers with their experience as a whole? (This is the top box question)
  • How likely are my customers to recommend my products to their friends and family? (This is the Net Promoter question – see the sketch just after this list)
  • What attributes of the product or service affect purchase interest most? (Probably best done with a conjoint analysis)
  • What is the purchase intent for a particular product or service, presented with our best creative messaging efforts, and set at a particular price?
  • What is the pricing elasticity for a particular product or service? (Efficiently asked with the Van Westendorp methodology)
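Since the Net Promoter question comes up so often, here is a minimal sketch of how the score is typically computed from the 0-10 “how likely are you to recommend” ratings. The ratings below are made up purely for illustration:

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# Respondents scoring 7-8 are "passives" and do not affect the score.
# The ratings below are invented illustration data.
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 5, 8, 10, 7, 9, 2]

promoters = sum(1 for r in ratings if r >= 9)
detractors = sum(1 for r in ratings if r <= 6)

nps = 100.0 * (promoters - detractors) / len(ratings)
print(f"NPS = {nps:+.0f}")  # ranges from -100 to +100
```

Note that NPS is a difference between two proportions, which is exactly why it needs a healthy sample size to be meaningful.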

In all of the examples in the list above, statistically significant data is required to answer the question posed. For instance, the proportion of people with iPhones in your customer base can hardly be credibly estimated by interviewing 60 people. You would want to survey 5000 people and get results from 500 respondents (assuming a 10% participation rate). The same goes for top box and Net Promoter metrics – they become meaningless when the sample size is too small.
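To put rough numbers on that intuition, here is a back-of-the-envelope sketch using the standard 95% margin-of-error formula for a proportion. The 30% iPhone share is an assumed figure, purely for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an observed proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Assume (hypothetically) that 30% of respondents report owning an iPhone.
for n in (60, 500):
    moe = margin_of_error(0.30, n)
    print(f"n = {n:3d}: 30% +/- {100 * moe:.1f} points")

# n =  60: 30% +/- 11.6 points
# n = 500: 30% +/- 4.0 points
```

With 60 interviews the interval spans more than 20 points, which is exactly why a handful of qualitative sessions cannot credibly answer a market share question.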

As for pricing, that’s an emotionally charged topic, and the presence of researchers can very well affect what subjects say and do – some may be driven to present themselves in a different light than they naturally would. Anonymity is a great way to minimize observation bias here.
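On the pricing front, here is a minimal sketch of the Van Westendorp analysis mentioned in the list above: each respondent answers four price questions, and the crossings of the resulting cumulative curves suggest an optimal and an indifference price point. All of the answer data below is invented, and a real analysis would interpolate the curves on a much finer grid:

```python
# Van Westendorp Price Sensitivity Meter, minimal version.
# Per respondent: at what price is the product "too cheap" (quality is
# suspect), a "bargain", "getting expensive", and "too expensive"?
# All answers below are invented illustration data.
too_cheap     = [5, 8, 10, 12, 15, 30]
bargain       = [8, 12, 15, 25, 30, 45]
expensive     = [15, 20, 30, 40, 50, 60]
too_expensive = [25, 35, 45, 60, 70, 90]

n = len(too_cheap)

def share_ge(answers, price):
    """Share of respondents whose stated price is at or above `price`."""
    return sum(1 for a in answers if a >= price) / n

def share_le(answers, price):
    """Share of respondents whose stated price is at or below `price`."""
    return sum(1 for a in answers if a <= price) / n

def crossing(falling, rising, prices):
    """First price where the rising curve catches the falling curve."""
    for p in prices:
        if share_le(rising, p) >= share_ge(falling, p):
            return p
    return None

prices = range(0, 101)
opp = crossing(too_cheap, too_expensive, prices)  # Optimal Price Point
ipp = crossing(bargain, expensive, prices)        # Indifference Price Point
print(f"Optimal price point: ~{opp}, indifference price point: ~{ipp}")
```

Full treatments also report the points of marginal cheapness and expensiveness, which bound the acceptable price range.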

The only one I can go either way on is product features or attributes. Yes, you can do a conjoint analysis, and yet I think you learn much more from qualitative product discovery research. I prefer qualitative methods in the early stages of product ideation; one can always follow up with a conjoint analysis in survey form to verify that the directional input from the qualitative phase still holds when you cast a wider net.

Running the survey is only half of the equation. Analyzing the results is something else entirely. There is naive data crunching from raw responses, and then there’s data mining – swimming around in the raw data looking for patterns. I love the fishing exercise, but there are many structured approaches that make this work faster and better. One nifty example is “verbatim tagging”. This basically involves looking for keywords in essay responses to open-ended questions. You read all the responses, figure out which words keep coming up, tag each response for the presence or absence of those words, then tally them up. This is hugely time consuming but can also offer incredible insights.
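To make the mechanics concrete, here is a minimal sketch of verbatim tagging. The responses and keyword list are made up; in practice the keywords come from reading through the verbatims first:

```python
from collections import Counter

# Hypothetical open-ended survey responses.
responses = [
    "Setup was easy but the battery life is terrible.",
    "Love the design. Battery could be better.",
    "The app kept crashing during setup.",
    "Great battery, great design, no complaints.",
]

# Keywords chosen after reading through the responses.
keywords = ["setup", "battery", "design", "crash"]

tally = Counter()
for response in responses:
    text = response.lower()
    # Tag the response with every keyword it mentions (substring match,
    # so "crash" also catches "crashing").
    tally.update(kw for kw in keywords if kw in text)

for kw, count in tally.most_common():
    print(f"{kw:8s} {count}/{len(responses)} responses")
```

A real pass would also handle stemming and synonyms, but even this crude version turns a pile of verbatims into countable data.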

I would love to go deeper into quantitative research – does anyone have suggestions for books to read or websites to peruse?

