Ready Aim Fire or Ready Fire Aim

When it comes to bringing a new process to market, people generally fall into one of two camps: meticulous planning, testing, re-planning, and re-testing; or getting it out there whether it’s ready or not. Knowing your strategy when it comes to implementing customer experience initiatives will help you overcome the drawbacks of the strategy to which you subscribe.

Ready. AIM. Fire. – If you fall into the ‘Ready, Aim, Fire’ category, it is likely that you carefully plan and consider all implications prior to implementation. You test the new process to death. You believe that there is worth in getting it exactly right, even to a painstaking level of detail. The drawback to this approach is that sometimes it can take longer than necessary to make a change. The benefit to this approach is that you are very careful about providing the best possible experience to your customers.

Suggestions for improvement: Decide what things you can live without, for now. You can refine the process once it is in place instead of delaying the launch. Consider the risk of NOT providing the solution to your customers sooner rather than later, and weigh that against the implications of it not being “perfect” (if there’s such a thing).

Ready. FIRE. Aim. – If you fall into the ‘Ready, Fire, Aim’ category, it is likely that you’re ready to start making things happen whether they are ready or not. You believe that you can get it right in the field and that a less-than-perfect implementation is better than delaying for additional planning. The drawback to this approach is that your customers could be frustrated by a lack of planning and smooth functionality. The benefit is that you can begin getting feedback immediately and truly mold the process/product to the customers’ needs.

Suggestions for improvement: If you currently get a little trigger-happy with new ideas, make sure you’re vetting them and spending your effort on those that resonate the most with your customers. A little extra planning might help you save time in the long run.

One method isn’t necessarily better than the other, but it may help in certain circumstances to test out the one you are not accustomed to using. You should do what your customers can tolerate. Can your customers tolerate the status quo while they wait what seems like forever for a solution? Or can your customers tolerate an imperfect solution while you work out the details after implementation?

In my experience, customers tend to tolerate imperfection when they can see that improvements are coming on a regular basis and an effort is being made to make it the best possible experience. If one method isn’t working, maybe you need to shake it up and get things out there in a new way. Make sure you have a solid feedback loop in place to listen to your customers and adapt to their response.

The Case for a Global Agent

The term ‘global agent’ describes a customer service rep who can help with any type of inquiry. A global agent is versed in support, sales, billing, and so on. A customer service model that utilizes global agents can still support some groups that handle specialized problems, but for day-to-day service, one skill set can do it all. Below are some of the benefits that make the case for implementing a global agent model.

Shorter wait times. If something causes a spike in contacts in one particular area, you no longer have to worry about a massive increase in wait time for your customers in that one department. However, in order to ensure that the channel is not clogged and revenue-generating calls can still flow in, your company will be forced to focus on reducing the unnecessary reasons for customer contacts. This is a good thing. You will be providing a better experience by addressing these needs.

Greater flexibility in staffing. With global agents, meeting the ebb and flow of staffing needs becomes much simpler. You can fill slow periods with outbound, revenue-generating activities that can be performed by any agent. And you will be prepared for an increase in inbound volume should the need arise.

Easier transition from self-service channels. Contacts from your self-service channel can flow directly into one group rather than trying to play the guessing game of what area they need to go to. This makes the flow of self-service to agent much easier to manage.

Customers aren’t lost in transition. Companies that don’t have a strong process for issue tracking risk having customers lost in the hand-off from one department to the next. If you have agents with the basic skill set to tackle most issues, that risk becomes greatly reduced.

Reduces customer frustration with repeating information. No one is a fan of having to explain an issue to multiple people. You lose all confidence when you feel like information is not making its way from one person to the next, not to mention the time that is wasted. With global agents, customers explain their problem only once, to the person who will resolve it.

Agents see the big picture. The initial onboarding training takes longer, but the product of that training is a high-caliber, well-rounded employee. The global agents have the ability to see the big picture by handling all types of contacts. They learn the impact of the sales process on customer complaints later in the lifecycle and vice versa.

Visibility across multiple areas leads to process improvements. Things that used to be done in functional silos suddenly become visible to one group, resulting in process improvements and the elimination of redundant, unproductive actions.

Using the notion of a global agent has the potential to provide a better customer experience for certain organizations. If you struggle to provide consistent quality service, perhaps you should explore the idea and see if it’s a fit.

 

Survey Design: Part 3

SURVEY INTERVALS


How often should you send surveys? The answer can be quite complicated, but I’ll try to keep things simple here. The survey interval discussion has two main facets: how often you send surveys in general (a daily, monthly, or annual process) and how often you survey the same customer. How often you send surveys in general should be based on the resources you have to manage the feedback and what those resources can handle. It should also be based on the type of experience you’re measuring: a survey about a recent interaction with a customer, or a survey about the long-term relationship between you and your customer (depending on your type of business). How often you send a survey to the same customer should be based on the number of interactions they have with you in a period of time and the number of times you can reasonably expect them to respond before getting tired of seeing surveys. Here are some tips:

After a transaction. If you want to send surveys after a particular transaction, it helps to start by evaluating the number of transactions you have per day/month/year. If you triggered a survey after every transaction, would that (a) give you too many responses to manage and/or (b) touch the same customer too many times in a short period? You want to keep the interval between transactional surveys long enough that you don’t aggravate your customers, causing them to stop responding. The best thing to do is test out some scenarios with different rules on when to exclude someone from a survey invitation. For example, take your transactional data for a year. If you were to exclude a customer from a survey if they’ve received one in the last 90 days, how many would you end up sending out for the year (or month, or whatever period is applicable for your business)? Does that volume feel right to you and your customers?
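The exclusion-rule test described above is easy to simulate. Below is a minimal Python sketch with entirely made-up transaction data and a hypothetical 90-day exclusion window (the variable names are invented for illustration); the idea is simply to count how many invitations a given rule would have produced over a year:

```python
from datetime import date, timedelta
import random

# Hypothetical transactional data: (customer_id, transaction_date) pairs
# for one year. Real data would come from your transaction system.
random.seed(1)
start = date(2023, 1, 1)
transactions = sorted(
    (random.randrange(200), start + timedelta(days=random.randrange(365)))
    for _ in range(2000)
)

EXCLUSION_DAYS = 90  # don't re-survey a customer within this window

last_surveyed = {}   # customer_id -> date of most recent survey invitation
invitations = 0
for customer, when in transactions:
    prior = last_surveyed.get(customer)
    if prior is None or (when - prior).days > EXCLUSION_DAYS:
        last_surveyed[customer] = when
        invitations += 1

print(f"{len(transactions)} transactions -> {invitations} survey invitations")
```

Re-running the loop with different values of `EXCLUSION_DAYS` against your own data gives you the volume comparison the paragraph above suggests.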

During a lifecycle. Sending surveys during a customer lifecycle or relationship is a great way to reach out if you don’t have a high-touch relationship with your customers (it will remind you that you need to find more reasons to communicate with your customers if you haven’t been!). Typically these are done annually and that interval can provide great YOY performance data, especially if done at the same time every year.

 

REVIEWING AND ADAPTING YOUR SURVEYS


You’ve successfully implemented your survey program but the fun isn’t over yet. The next step is to carefully review your survey results to make sure you’re getting what you need.

  1. The right number of responses — Based on the business rules and survey intervals that you set, are you getting the volume you expected? Do you have enough responses to provide significant findings without irritating your customer base by over-surveying?
  2. Abandonment rate — Are your customers leaving the survey before completing it? If so, you should investigate if it is happening because of a bad question or if the survey is just too long.
  3. Actionable response data — Are you getting the answers you expected from your questions? More specifically, are customers correctly interpreting what you’re asking? Check to make sure the data you’re getting back is valuable and actionable. Make sure it is measuring what you intended it to measure and make sure you’re capturing the most critical aspects of the experience (you’ll be able to tell from the comments what is most important).
  4. Survey complaints — If you have a large number of customers complaining about a particular aspect of your survey process, consider changing it. And be sure to check the opt-out rate; an increase may indicate a problem.
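The first, second, and fourth checks above boil down to simple ratios. A minimal sketch, using purely illustrative counts (all numbers are invented):

```python
# Hypothetical survey-program counts for one month (illustrative only).
invitations_sent = 5000
surveys_started = 1100
surveys_completed = 850
opt_outs = 40

# Response rate: completed surveys per invitation sent.
response_rate = surveys_completed / invitations_sent

# Abandonment rate: started but not completed, per started survey.
abandonment_rate = 1 - surveys_completed / surveys_started

# Opt-out rate: watch this for increases over time.
opt_out_rate = opt_outs / invitations_sent

print(f"Response rate:    {response_rate:.1%}")
print(f"Abandonment rate: {abandonment_rate:.1%}")
print(f"Opt-out rate:     {opt_out_rate:.1%}")
```

Tracking these three numbers month over month makes it obvious when a survey change helped or hurt.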

As we improve the customer experience in our organizations, we want to capture that through our survey results, so don’t be too quick to change everything up. However, we want to keep our customers engaged and keep things fresh in how we collect feedback. It is no doubt a balancing act.

 

Survey Design Part 1: What to Ask and How to Ask It

Survey Design Part 2: Standard Questions, Question Scale, and Survey Length

 

Survey Design: Part 2

STANDARD QUESTION SUGGESTIONS & QUESTION SCALE


Net Promoter Question or Likelihood to Recommend – “How likely are you to recommend X to a friend or colleague?”  This question makes more sense when you know the customer has had enough of an experience with your company to make a recommendation. This question gives you a sense of whether or not customers are willing to spread positive word of mouth about you. And NPS is a widely accepted calculation as a success/loyalty indicator. It also really gets the customer in the mindset of putting their personal credibility on the line. Was the experience good enough that they would personally tell others to try it? The likelihood to recommend question might not make sense in all scenarios or for all organizations. For example, if your customers can’t recommend you for some reason, you shouldn’t ask this question. But it’s a great addition if you’re interested in benchmarking, and if you want to perform additional calculations on loyalty and word-of-mouth with the results.

Scale: The standard scale for likelihood to recommend (if you want to do NPS calculations) is 0-10.

More information on Net Promoter here.
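For readers who want the mechanics, the standard NPS calculation on the 0-10 scale counts 9-10 ratings as promoters and 0-6 as detractors, then subtracts the detractor percentage from the promoter percentage. A minimal sketch:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count only
    in the denominator. The result ranges from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 4 promoters, 3 passives, 3 detractors out of 10 responses.
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # 10.0
```

Note that passives pull the score toward zero even though they are not subtracted, which is one reason NPS rewards consistently strong experiences rather than merely adequate ones.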

Satisfaction – “How satisfied are you with ABC’s product/process/experience?” A satisfaction question is standard, it’s easy, and everyone understands it. While this is standard, it is not necessarily considered the best indicator of loyalty. Just because a customer is satisfied, it doesn’t mean they will stay. However, a question on satisfaction does have its place. You can ask satisfaction as it pertains to a particular aspect of the experience, rather than the experience as a whole. And save the broader experience questions for something that claims to indicate loyalty like NPS or Customer Effort.

Scale: Typically satisfaction questions use a scale of 1-7 or 0-10. These are used because they provide a mid-point. You should consider what other questions you are asking when deciding on what scale to use and how you will present that data. For example, if you are asking the likelihood to recommend on a 0-10 scale and satisfaction on a 1-7 scale, will your audience understand the difference in results? It is best to consider how people will be interpreting the results when choosing a scale. Keep things as easy and consistent as possible. There’s no need to overcomplicate your results.

Customer Effort: I’ve seen customer effort measured in two different ways. The first: “How much effort did you personally have to put forth to do xyz?” This version includes a scale of Very Low Effort to Very High Effort. The second: “ABC made it easy for me to do xyz.” This version includes a scale of Strongly Disagree to Strongly Agree. Either version gets you to the desired result: Do you make things easy for your customers? This question is great because just about every customer values an easy experience. They expect things to follow a predetermined path, they expect you to keep them informed and handle any surprises, and they expect you to deliver what they’re paying for without having to go out of their way to get it.

Scale: Typically this question uses a 5 point scale from Very Low Effort to Very High Effort or a 7 point scale from Strongly Disagree to Strongly Agree, depending on which version of the question you are asking.

More information on Customer Effort here.

Open-Ended – “Why?” Asking why, or what you can do differently, after any of the above questions is critical to taking action on your results. Allowing customers to elaborate on why they gave you a particular rating will provide you with more valuable information than a rating alone. These questions are time-consuming to interpret and analyze, yet very insightful. I recommend having only one open-ended question per survey: customers will use whatever open text field you give them to tell you what they want to tell you, regardless of what you’re asking. Keep your analysis simple with just one.

Scale: Open Text Box

 

SURVEY LENGTH


Have you taken one of those 40 question surveys about a website experience before? If the answer is NO, it is probably because the survey is 40 QUESTIONS LONG! Long surveys serve their purpose and they can be acceptable as long as your organization is comfortable with a 2% response rate and a 50% abandonment rate.

Keep it short. My recommendation is to keep the survey short enough that you (1) don’t irritate your customers, (2) don’t lose them halfway through, and (3) don’t become inundated with response data that you don’t have time to analyze. If you adopt the mindset that you plan to take action on the responses to every question you ask, you’ll find it much easier to shorten your survey. I love one-question surveys, but I do wonder what companies are doing with the data from that one question. Is it enough information to make positive customer experience changes? Is it just for the sake of having a score? If you’re planning to ask only one question such as Customer Effort or Likelihood to Recommend, make sure you can match those results up to your operational metrics. If you’re able to find correlations between the score and what happened, one question might suffice. If you don’t have those reporting capabilities, consider adding a few more questions pointed at the key aspects of the experience. A good goal for a company trying to pare down a survey is no more than 10 questions or no more than 2 minutes.
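As a sketch of what matching survey results to operational metrics might look like, here is a hypothetical example that pairs one-question survey scores with a made-up operational metric (resolution time) and computes a Pearson correlation; all numbers are invented for illustration:

```python
# Hypothetical paired data: one survey score (0-10) and one operational
# metric (hours to resolve the issue) per responding customer.
scores = [9, 10, 7, 4, 8, 3, 10, 6, 2, 9]
resolution_hours = [2, 1, 5, 20, 4, 26, 1, 9, 30, 3]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(scores, resolution_hours)
print(f"score vs. resolution time: r = {r:.2f}")  # strongly negative here
```

A strong negative correlation like this (in the fabricated data) would suggest that slow resolution drives low scores, which is exactly the kind of finding that lets a single-question survey stand on its own.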

It depends on how often you’re sending it. Another consideration to make about your survey length is how often you are requesting feedback. The less often you want a response, the longer your survey can be. For example, an annual survey of your relationship with the customer can be longer than a survey that goes out after every transaction. More on survey intervals in Part 3.

 

Next up in Survey Design Part 3: Survey Intervals, Reviewing and Adapting Your Surveys