All About Customer Satisfaction Surveys

When web managers want to capture the “voice of the customer” as a metric for their sites, the question of how to field a satisfaction survey often comes up. Satisfaction surveys, combined with usability testing, web analytics, and a clear view of your organization’s business goals, can offer some of the most valuable data out there if properly implemented.

But people often make mistakes when deploying a satisfaction survey for the first time, and those who manage existing satisfaction survey programs often repeat the same mistakes.


Here are eight tips to consider if your organization is evaluating the way it measures satisfaction with its website.

1. Don’t reinvent the wheel on questions.

It’s been my experience that many folks try to reinvent the wheel when it comes to their question set (called a survey instrument by the statistics crowd). I don’t recommend it, especially when there are so many well-tested questions out there. You want to trend data over time and compare it across several cycles of the same survey to gain good insights into how you are doing.

Personally, I am a huge fan of the System Usability Scale (SUS). Not only was it found in a UPA study to be one of the most effective question sets for judging satisfaction, but it also has the benefit of letting you benchmark performance (using your overall score) against others.

I think one of the reasons this question set works well is that it alternates between positively and negatively worded questions. This makes survey-takers think a bit harder about each answer, in my opinion, so you get fewer responses where the middle option is selected for every question.

A 2004 paper presented at the Usability Professionals Association conference includes a great overview of several non-proprietary question sets, including the SUS questions, if you are looking for inspiration.
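As an aside, SUS has a well-documented scoring rule: ten items on a 1–5 scale, where odd-numbered (positively worded) items contribute the response minus 1 and even-numbered (negatively worded) items contribute 5 minus the response, with the total scaled by 2.5 to yield a 0–100 score. A minimal sketch in Python (the function name is my own):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are multiplied by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who strongly agrees (5) with every positive item and
# strongly disagrees (1) with every negative item scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

The scaled score is what makes SUS benchmarkable: any two surveys using the instrument land on the same 0–100 range.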

2. Why did people visit and did they leave happy?

There are a bunch of smart people out there who advocate taking a task-based approach to the questions, focusing on finding out the top tasks people want to complete when they visit your site and whether they succeeded in accomplishing them. Avinash Kaushik of Google, for example, says you should ask only three questions in your survey:

  • What was the purpose of your visit to the site today?
  • Were you able to complete the task?
  • If not, why not?

3. Need a more classic satisfaction metric? 

Go for the golden three questions. Marketing gurus have for ages used three simple questions to gauge satisfaction:

  1. Were you satisfied?
  2. Would you come back?
  3. Would you recommend to others?

Bear in mind that if you have no insight into why visitors are satisfied or not, it becomes impossible to remedy anything for those who were disappointed.

Still, this trio is widely recognized as the “right” set of questions for judging happiness in the abstract.

4. Make sure you don’t overdo the sample size.

Pushy surveys (like pop-ups) tend to be left on sites for long periods and generate tons of replies, far more than you need for survey research. Unless you are doing a heck of a lot of segmentation, if your audience is over 500K you only need about 384 responses for a plus-or-minus-five-percent margin of error at a 95% confidence level. That is an acceptable target range for actionable results in my experience.

When you report results you should always include the number of respondents (usually reported as N, as in N=384) and the margin of error somewhere near the date. It’s even better if you can do a rolling report and show the last four reporting periods (quarters, months, or whatever frequency you run the survey at) so you can spot trends at a glance.

Not sure what sample size to use? A sample size calculator on the web can do the math for you.
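The arithmetic behind that 384 figure is the standard formula for estimating a proportion, plus a finite-population correction for smaller audiences. A quick sketch (the function name and defaults are my own):

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion at a given margin of error.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about the true proportion. The
    finite-population correction shrinks the answer for small audiences.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size (~384.2)
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

print(sample_size(500_000))  # → 384, the figure quoted above
print(sample_size(2_000))    # small audiences need proportionally fewer
```

Note how weakly the answer depends on audience size once the population is large: half a million visitors and fifty million both land at roughly the same 384.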

5. Deliver the survey to a random sample.

When you pick your delivery method, be sure to put the need for a random sample on the front burner. Some people shy away from delivering survey opportunities via a website. A static survey where the user opts in is probably the least desirable from a statistical point of view. But it does have the benefit of signaling to visitors who see it that you care about their opinion.

The desire for randomness sometimes leads folks to lists of known users (via newsletter sign-ups, etc.). Here, you’d pick a number of people randomly, assume a certain response rate to work out the overall number you need to hit the plus-or-minus-five-percent target at the end, and deliver an invitation via email. While it sounds rigorous, this method has its own selection-error issues: because the email arrives out of context, users may not remember your site that well, so your results are less meaningful.

Some systems are set up to deliver pop-ups to every x-th visitor, which gives you random invitations and makes the method easier to defend. My preference is to deliver the survey to users during their experience on your site.

Other tools offer a permission-based exit survey. Here, the site owner controls how often visitors get the invite (e.g., every 10th visitor). Visitors can easily say no and X out of the invitation to get on with their task if they don’t want to take part.
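To make the “every x-th visitor” idea concrete, here is a minimal sketch of two selection rules a survey tool might implement (the function names and rates are my own illustration, not any vendor’s API):

```python
import random

def every_nth(counter, n=10):
    """Deterministic rule: invite every n-th visitor.

    `counter` is a running visitor count the site keeps server-side.
    """
    return counter % n == 0

def random_rate(rate=0.1):
    """Probabilistic rule: each visitor has the same chance of an invite.

    Statistically cleaner than a fixed counter, since selection doesn't
    depend on visit order.
    """
    return random.random() < rate

# Simulate 1,000 visitors under the deterministic rule:
invited = sum(every_nth(v) for v in range(1, 1001))
print(invited)  # → 100
```

Either rule gives you a defensible sample; what you want to avoid is the pure opt-in, where only the most motivated (often the most annoyed) visitors select themselves.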

When you select your survey technology tool, be sure to explore the vital question of how the system selects who will respond: it matters for the results you’ll be gathering.

6. Time zone settings are a consideration.

For global sites, you also have to be careful about time zones. I was involved in one survey where the system was set up to deliver invites starting at midnight (something we had no control over) so that the target number of responses was reached by morning, and as a result ALL the responses came from Europe and overseas! Lesson learned: it’s tricky out there in survey-land. Work with your vendor to learn the limits of your tool and what workarounds are possible.

7. Remember to offer “I don’t know” as an option.

Answers don’t always fall neatly into a five-point or seven-point scale. Sometimes users want to tell you “I don’t know,” and if you haven’t offered that as an option, they’ll get frustrated at the limitations you’ve put around your questions. So do everyone a favor and tack “don’t know” onto the end of the answer choices in your survey.

8. Explore both paid and free survey tool solutions.

Not everyone can afford a proprietary survey tool with its sometimes expensive fee structure. So deploy the solution that fits your needs and your budget. There are lots of free survey tools out there. Paid tools often include consulting services and are less self-service oriented. Many paid tools offer a free tier as a loss leader to get you in the door for their paid services.

Free tools:

  1. Google Forms
  2. Poll Daddy (free tier)
  3. iPerceptions 4Q (free tier)

Paid tools:

  1. Customer Centric Index (Customer Carewords)
  2. Survey Monkey
  3. Survey Gizmo
  4. QuestionPro
  5. iPerceptions (free tier is 4Q)
  6. Foresee Results


If you are not doing a web survey on your website, what’s stopping you?