Survey Design: Part 3

SURVEY INTERVALS


How often should you send surveys? The answer can be quite complicated, but I’ll try to keep things simple here. The survey interval discussion has two main facets: how often you send surveys in general (daily, monthly, or annually) and how often you survey the same customer. The first should be based on the resources you have to manage the feedback and on the type of experience you’re measuring: a survey about a recent interaction is different from a survey about the long-term relationship between you and your customer (depending on your type of business). The second should be based on the number of interactions a customer has with you in a given period and the number of times you can reasonably expect them to respond before getting tired of seeing surveys. Here are some tips:

After a transaction. If you want to send surveys after a particular transaction, it helps to start by evaluating the number of transactions you have per day/month/year. If you triggered a survey after every transaction, would that (a) give you too many responses to manage? and/or (b) touch the same customer too many times in a short period? You want to keep the interval between transactional surveys long enough that you don’t aggravate your customers, causing them to stop responding. The best thing to do is test out some scenarios with different rules on when to exclude someone from a survey invitation. For example, take your transactional data for a year. If you were to exclude a customer from a survey if they’ve received one in the last 90 days, how many would you end up sending out for the year (or month, or whatever period is applicable for your business)? Does that volume feel right to you and your customers? (A quick simulation like the sketch below can answer this.)
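
To make the exclusion test concrete, here is a minimal sketch in Python. It assumes your transactional data can be reduced to (customer ID, transaction date) pairs; the `simulate_invites` helper, its field layout, and the 90-day window are illustrative choices, not a prescribed implementation.

```python
# A minimal sketch of the 90-day exclusion test described above.
# Assumes transactions can be reduced to (customer_id, date) pairs;
# names and the cooldown window are illustrative.
from datetime import date, timedelta

def simulate_invites(transactions, cooldown_days=90):
    """Count how many survey invitations a cooldown rule would produce."""
    last_invited = {}  # customer_id -> date of most recent invitation
    cooldown = timedelta(days=cooldown_days)
    invites = 0
    for customer_id, tx_date in sorted(transactions, key=lambda t: t[1]):
        prior = last_invited.get(customer_id)
        if prior is None or tx_date - prior >= cooldown:
            last_invited[customer_id] = tx_date
            invites += 1
    return invites

# Example: one customer transacting on the 1st of every month.
txns = [("c1", date(2023, m, 1)) for m in range(1, 13)]
print(simulate_invites(txns))       # 4 invitations for the year
print(simulate_invites(txns, 30))   # 11 (March 1 falls inside February's 30-day window)
```

Running the same transaction file through a few different windows (30, 60, 90 days) makes it easy to compare yearly totals against what your team can realistically handle.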

During a lifecycle. Sending surveys during a customer lifecycle or relationship is a great way to reach out if you don’t have a high-touch relationship with your customers (it will remind you that you need to find more reasons to communicate with your customers if you haven’t been!). Typically these are done annually, and that interval can provide great year-over-year (YOY) performance data, especially if done at the same time every year.

 

REVIEWING AND ADAPTING YOUR SURVEYS


You’ve successfully implemented your survey program, but the fun isn’t over yet. The next step is to carefully review your survey results to make sure you’re getting what you need.

  1. The right number of responses — Based on the business rules and survey intervals that you set, are you getting the volume you expected? Do you have enough responses to provide significant findings without irritating your customer base by over-surveying? (See the sketch after this list for a quick way to check.)
  2. Abandonment rate — Are your customers leaving the survey before completing it? If so, you should investigate if it is happening because of a bad question or if the survey is just too long.
  3. Actionable response data — Are you getting the answers you expected from your questions? More specifically, are customers correctly interpreting what you’re asking? Check to make sure the data you’re getting back is valuable and actionable. Make sure it is measuring what you intended it to measure and make sure you’re capturing the most critical aspects of the experience (you’ll be able to tell from the comments what is most important).
  4. Survey complaints — If you have a large number of customers complaining about a particular aspect of your survey process, consider changing it. And be sure to check the opt-out rate as it may indicate a problem if you see an increase.
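
For the first two checks, the arithmetic is simple enough to script. Here is a minimal sketch, assuming you can export invitation, start, and completion counts from your survey platform; the counts below are made up for illustration.

```python
# A minimal sketch for the volume and abandonment checks in items 1 and 2.
# The counts are hypothetical inputs pulled from your survey platform.
def survey_health(invites_sent, surveys_started, surveys_completed):
    response_rate = surveys_started / invites_sent
    abandonment_rate = 1 - surveys_completed / surveys_started
    return response_rate, abandonment_rate

resp, aband = survey_health(invites_sent=5000, surveys_started=600,
                            surveys_completed=420)
print(f"Response rate:    {resp:.1%}")   # 12.0%
print(f"Abandonment rate: {aband:.1%}")  # 30.0%
```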

As we improve the customer experience in our organizations, we want to capture that improvement through our survey results, so don’t be too quick to change everything up. However, we also want to keep our customers engaged and keep things fresh in how we collect feedback. It is, no doubt, a balancing act.

 

Survey Design Part 1: What to Ask and How to Ask It

Survey Design Part 2: Standard Questions, Question Scale, and Survey Length

 

Survey Design: Part 2

STANDARD QUESTION SUGGESTIONS & QUESTION SCALE


Net Promoter Question or Likelihood to Recommend – “How likely are you to recommend X to a friend or colleague?” This question makes sense when you know the customer has had enough of an experience with your company to make a recommendation. It gives you a sense of whether customers are willing to spread positive word of mouth about you, and NPS is a widely accepted calculation as a success/loyalty indicator. It also gets the customer in the mindset of putting their personal credibility on the line: was the experience good enough that they would personally tell others to try it? The likelihood to recommend question might not make sense in all scenarios or for all organizations. For example, if your customers can’t recommend you for some reason, you shouldn’t ask this question. But it’s a great addition if you’re interested in benchmarking, and if you want to perform additional calculations on loyalty and word-of-mouth with the results.

Scale: The standard scale for likelihood to recommend (if you want to do NPS calculations) is 0-10.
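
The NPS arithmetic itself is standard: respondents scoring 9–10 are promoters, 0–6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch:

```python
# Standard NPS calculation on a 0-10 scale: % promoters (9-10)
# minus % detractors (0-6).
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # (4 - 2) / 8 * 100 = 25.0
```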

More information on Net Promoter here.

Satisfaction – “How satisfied are you with ABC’s product/process/experience?” A satisfaction question is standard, it’s easy, and everyone understands it. Still, it is not necessarily considered the best indicator of loyalty: just because a customer is satisfied doesn’t mean they will stay. However, a question on satisfaction does have its place. You can ask about satisfaction as it pertains to a particular aspect of the experience, rather than the experience as a whole, and save the broader experience questions for a metric that claims to indicate loyalty, like NPS or Customer Effort.

Scale: Typically satisfaction questions use a scale of 1-7 or 0-10. These are used because they provide a mid-point. You should consider what other questions you are asking when deciding on what scale to use and how you will present that data. For example, if you are asking the likelihood to recommend on a 0-10 scale and satisfaction on a 1-7 scale, will your audience understand the difference in results? It is best to consider how people will be interpreting the results when choosing a scale. Keep things as easy and consistent as possible. There’s no need to overcomplicate your results.
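
If you do end up presenting results from mixed scales side by side, one option (offered here as a suggestion, not a standard) is to rescale everything to 0–100 before charting. A minimal sketch, with illustrative values:

```python
# One illustrative way to present mixed scales side by side: rescale
# each to 0-100. The helper name and the normalization approach are
# assumptions, not a platform requirement.
def to_0_100(score, scale_min, scale_max):
    return 100 * (score - scale_min) / (scale_max - scale_min)

print(to_0_100(8, 0, 10))  # likelihood to recommend, 0-10 -> 80.0
print(to_0_100(6, 1, 7))   # satisfaction, 1-7          -> ~83.3
```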

Customer Effort: I’ve seen customer effort measured in two different ways. The first: “How much effort did you personally have to put forth to do xyz?” This version includes a scale of Very Low Effort to Very High Effort. The second: “ABC made it easy for me to do xyz.” This version includes a scale of Strongly Disagree to Strongly Agree. Either version gets you to the desired result: Do you make things easy for your customers? This question is great because just about every customer values an easy experience. They expect things to follow a predetermined path, they expect you to keep them informed and handle any surprises, and they expect you to deliver what they’re paying for without having to go out of their way to get it.

Scale: Typically this question uses a 5 point scale from Very Low Effort to Very High Effort or a 7 point scale from Strongly Disagree to Strongly Agree, depending on which version of the question you are asking.

More information on Customer Effort here.

Open-Ended – “Why?” Asking why, or what you could do differently, after any of the above questions is critical to taking action on your results. Allowing customers to elaborate on why they gave you a particular rating will provide you with more valuable information than a rating alone. These questions are time-consuming to interpret and analyze, yet very insightful. I recommend having only one open-ended question per survey: customers will use whatever open text field you give them to tell you what they want to tell you, regardless of what you’re asking. Keep your analysis simple with just one.

Scale: Open Text Box

 

SURVEY LENGTH


Have you taken one of those 40-question surveys about a website experience before? If the answer is NO, it is probably because the survey is 40 QUESTIONS LONG! Long surveys serve their purpose, and they can be acceptable as long as your organization is comfortable with a 2% response rate and a 50% abandonment rate.

Keep it short. My recommendation is to keep the survey short enough that you (1) don’t irritate your customer, (2) don’t lose them halfway through, and (3) don’t become inundated with response data that you don’t have time to analyze. If you adopt the mindset that you plan to take action on the responses to every question you ask, you’ll find it much easier to shorten your survey. I love one-question surveys, but I do wonder what companies are doing with the data from that one question. Is it enough information to make positive customer experience changes? Is it just for the sake of having a score? If you’re planning to ask only one question, such as Customer Effort or Likelihood to Recommend, make sure you can match those results up to your operational metrics. If you’re able to find correlations between the score and what happened (see the sketch below), one question might suffice. If you don’t have those reporting capabilities, consider adding a few more questions pointed at the key aspects of the experience. A good goal for a company trying to pare down a survey is no more than 10 questions or no more than 2 minutes.
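
As a sketch of what that matching might look like, here is a hypothetical comparison of Customer Effort scores against an operational metric such as handle time. The data and field names are invented; the point is only that a simple correlation is a reasonable starting place (Python 3.10+ for `statistics.correlation`).

```python
# A minimal sketch of matching a one-question score to an operational
# metric: Pearson correlation between effort scores and (hypothetical)
# handle times. All data is illustrative.
from statistics import correlation  # Python 3.10+

effort_scores  = [7, 6, 6, 4, 3, 2, 6, 5]      # 1-7, higher = easier
handle_minutes = [4, 6, 5, 12, 18, 25, 7, 9]   # time to resolve each case

r = correlation(effort_scores, handle_minutes)
print(f"r = {r:.2f}")  # strongly negative: longer handling, lower scores
```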

It depends on how often you’re sending it. Another consideration to make about your survey length is how often you are requesting feedback. The less often you want a response, the longer your survey can be. For example, an annual survey of your relationship with the customer can be longer than a survey that goes out after every transaction. More on survey intervals in Part 3.

 

Next up in Survey Design Part 3: Survey Intervals, Reviewing and Adapting Your Surveys

Survey Design: Part 1

Here are some quick, simple tips to use when evaluating your customer satisfaction survey design. This is Part 1 of 3 that covers ideas on What to Ask and How to Ask It.

 

WHAT TO ASK


You should seek information that you can act on. Think about what you will do with the information you receive, and then frame the question to get you that information. Try to avoid asking questions that make you say “that would be really interesting to know, but I’m not sure what we’d do with it.” Your customers are busy, so show them that you value their time by only asking what you plan to act on.

Make sure you’re only asking about one thing at a time. Your question shouldn’t tackle two issues at once. Be clear and concise, and only ask about one specific thing in a question. For example, if you ask “Was the representative knowledgeable and courteous?” you are really asking two questions: “Was the representative knowledgeable?” and “Was the representative courteous?” They are different, so decide which one you want to measure.

Make sure there is only one interpretation of what you are asking. You don’t want one customer thinking you mean something totally different than another. Ask people around your office for their interpretation of your question if you think it might be too vague. Too many different answers mean you should consider rephrasing and testing again (unless it’s an open-ended question, of course).

Don’t ask anything you should be measuring internally through operational KPIs. I see this on surveys time and time again. If there is something that you can specifically measure through your operational metrics, consider not asking the customer to provide you with that information. For example, “Did our service person arrive on time?” If you have the technology to measure that, then measure it! And hold your teams accountable to the standard. You should be able to match up your survey data to your operational data and check whether there is a correlation between being on time and customer satisfaction (a sketch of this follows). However, the reality is that not everyone has the reporting capabilities and resources to do so. If that’s true for you, try to minimize the number of things you ask that you should already have access to.
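
Here is a minimal sketch of that check, with invented appointment records: join the operational on-time flag to the survey score on a shared ID and compare average satisfaction between the two groups.

```python
# A minimal sketch of the on-time example: join operational data to
# survey data on a shared appointment ID, then compare satisfaction
# by group. All records and field names are illustrative.
from statistics import mean

on_time = {"a1": True, "a2": True, "a3": False, "a4": False, "a5": True}
csat    = {"a1": 7, "a2": 6, "a3": 3, "a4": 4, "a5": 7}  # 1-7 scale

by_group = {True: [], False: []}
for appt, score in csat.items():
    by_group[on_time[appt]].append(score)

print("On time:", round(mean(by_group[True]), 2))   # 6.67
print("Late:   ", mean(by_group[False]))            # 3.5
```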

 

HOW TO ASK IT


Walk the customer through the experience in an order that makes sense. Think about the experience that you’re evaluating with the survey. Try to recreate the experience in the customer’s mind with your questions. In other words, don’t ask about various aspects of the experience out of sequence.

Don’t ask leading questions. Make sure your questions are not directing customers to a particular answer. You are seeking honest, unbiased feedback, right?

Use appropriate lingo. This one is tough sometimes. When you’re an expert on how a process works internally, it’s hard to differentiate internal jargon from layman’s terms. You want to avoid using internal terms and abbreviations that won’t make sense to the customer. The best test here is to grab a friend from outside your department or company to proofread the survey. See if it makes sense to her. Now, sometimes it makes sense to use industry lingo. For example, if you’re surveying someone about a technical process, the recipient could be an engineer, and he might think it’s silly for you to be using non-technical terminology. Ultimately, evaluate your recipients and make sure they’ll understand the words you use.

 

Next up in Survey Design Part 2: Standard Questions, Question Scale, and Survey Length

 

Choosing a Survey Vendor

Choosing a vendor to administer your customer satisfaction surveys can be a daunting task as there are many considerations at hand. It is best to think through your program strategy prior to approaching vendors to ensure the best fit. Below are key topics that should be evaluated for each potential solution.

Level of automation: This depends on the type and size of your organization. Do you have a large customer base with a lot of transactional touch-points or a small customer base with more intimate interactions? A company with fewer, high-touch customer interactions could get away with a more manual surveying process whereas a high volume company needs to consider automating their survey distribution.

Survey triggers: What action or situation will trigger a survey? You need to make sure you have the ability to report on the customers that qualify for a survey based on the desired “trigger.” It is important to investigate how your database will interact with the survey vendor. This goes hand-in-hand with the level of automation. How will the vendor receive the data for customers you want to survey?

Filters: Does the vendor have the ability to exclude certain customers from receiving invitations or would you have to incorporate that into the process on your end? For example, if you don’t want a customer to receive a survey after they’ve already received one in the last 30 days, how will that be accounted for? How complex can you get with your filters?
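
If the vendor can’t filter and you have to handle it on your end, the logic itself is small. A minimal sketch of a 30-day rule, with illustrative field names, that you might run before sending trigger data to the vendor:

```python
# A minimal sketch of a 30-day exclusion filter run on your side
# before trigger data goes to the vendor. Names are illustrative.
from datetime import date, timedelta

def eligible(customer_id, today, last_survey_dates, window_days=30):
    """True if the customer hasn't received a survey inside the window."""
    last = last_survey_dates.get(customer_id)
    return last is None or today - last >= timedelta(days=window_days)

history = {"c1": date(2024, 5, 20), "c2": date(2024, 3, 1)}
print(eligible("c1", date(2024, 6, 1), history))  # False -- surveyed 12 days ago
print(eligible("c2", date(2024, 6, 1), history))  # True
print(eligible("c3", date(2024, 6, 1), history))  # True -- never surveyed
```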

Additional data points: You’ll ultimately want to evaluate survey responses based on other data points associated with each customer. Will you be passing these data points to the vendor along with the information to trigger a survey? Or will you be matching up the responses to your customer data outside of the survey platform after responses are received?

Invitation: Think about what you want your survey invitation to look like. Do you want a question embedded in the email, or a simple invitation with just a button to click? Do you want to be able to customize the entire thing with your own HTML code?

Branding: Going along with the design of your invitation is the overall look and feel of the survey. Are you able to white-label it and are there additional costs associated with doing so? Are you comfortable using an existing template offered by the vendor?

Question types: It’s so easy to get overly complicated with survey questions (more on this in a later post on survey design), but you do want to make sure the types of questions you want to ask are available in the survey platform. Be sure to get a demo of the interface for designing questions to understand if the desired features exist.

Reporting capabilities: Some solutions have minimal reporting functionality built into their platform, assuming that you will integrate the survey data in another tool outside of theirs. You can generally save money here if you plan to export the data and do your reporting in another program; the vendor in that case doesn’t need a robust reporting interface. If you’re interested in a holistic solution, there are vendors that offer lots of pretty graphs, charts, and dashboards within their tool.

Support and consulting: Find out the level of support offered with your subscription/contract. Given that this process involves direct interaction with your customers, you want to feel confident that your vendor is going to provide assistance if something goes wrong. Some vendors also offer consulting hours to help with things like mapping out the customer journey and designing surveys.

Changes: Think of how you will manage the distribution of surveys on an ongoing basis. This is critical. If changes are required to a survey, invitation, or trigger, will you be able to make them yourself or will you have to engage your vendor? Do you have the time to make your own changes? Do you have the patience to wait for your vendor to make changes? Considering this aspect from the beginning will save you a lot of aggravation later on.

Survey and response allowances: Try to forecast how many surveys you want to send and receive and be sure you subscribe to the appropriate allotment.

Response integration: Where will your survey responses go when they come back in? A lot of platforms today can integrate with other systems if you want to use something outside of your vendor’s platform. Find out which integrations they support. If possible, bring an IT employee along to evaluate the structure of the integration. Is it reliable? Is there a fail-safe in place if a survey response is not properly integrated?

Follow-up: If you’re not planning to follow up with customers based on survey responses, shame on you. If you are planning to follow up, great! Figure out how you want to append additional follow-up data to a survey response after it is collected and how follow-up status will be reported on.

Price: An obvious consideration and often negotiable based on the desired level of customization and support.

In general, I recommend keeping things as simple as possible. You will eventually want to make changes to surveys based on how your business processes evolve, and straying too far from an out-of-the-box solution can come back to haunt you when that time comes. Remember, your customers want to see something simple and easy to interact with, and you want something simple and easy to manage and adapt.