Analyzing and Reporting on Respondent Scores in Survey Research


Analysis and reporting of survey results deserve as much care as survey construction. Market researchers agree that it is important to communicate survey results to audiences with clarity. It is never wasted effort to explain in layman’s language how the survey results were analyzed and what the reporting conventions mean. This is particularly true when survey results are reported as statistics.

Infographics provide a readily interpretable format for reporting survey outcomes. Through an infographics approach, complex statistical information can be represented visually with great clarity and enhanced appeal to the various audiences for market research. The format used for reporting survey outcomes data can make an enormous difference in how accessible the information is to various audiences. The work of Dr. Edward Tufte is a prime example of the effectiveness of data visualization.

Using Top Box Reporting to Simplify Survey Findings

Top-box scores are the highest rating points on the scale that respondents use to answer what are typically closed-end question items in a survey. For example, if survey participants respond using a 5-point Likert scale, each point on the scale is associated with a descriptive phrase or term. It helps to think of the scale as vertically arranged, like a stack of children's alphabet blocks, with the most positive possible response on top and the most negative response on the bottom. Market researchers typically assign the number "5" to the most positive response and "4" to the second most positive; if a survey participant marks either of these responses, they have given a top-box response.

Most people look for simple patterns in data, so constructing an executive summary that reports top-box scores works with this very natural human tendency. If an executive summary is provided to the market research audience, reporting the cumulative frequency of top-box survey responses can be attention-getting without being misleading. For example, if 82% of the responses to a survey question item were marked either "5" (extremely satisfied) or "4" (very satisfied), the market researchers can report that 82% of survey respondents were very to extremely satisfied. Certainly, the body of the summary report can elaborate on what the top-box figures mean and how they were calculated, but it is the top-box scores that most audience members will remember and understand.
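The top-box calculation above can be sketched in a few lines of Python. The response data here is made up for illustration; the "4 or 5 counts as top-box" convention is the one described in this article.

```python
from collections import Counter

# Hypothetical 5-point Likert responses (5 = extremely satisfied, 1 = extremely dissatisfied).
responses = [5, 4, 5, 3, 4, 5, 2, 4, 5, 4, 5, 1, 4, 5, 4, 5, 4, 3, 5, 4]

counts = Counter(responses)
n = len(responses)

# Top-box (top-two-box) share: responses marked "4" or "5".
top_box_pct = 100 * (counts[5] + counts[4]) / n
print(f"{top_box_pct:.0f}% of respondents were very to extremely satisfied")
```

With this sample data, 16 of the 20 responses fall in the top two boxes, so the summary line reads "80% of respondents were very to extremely satisfied."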

Often, the tendency is to focus attention on the frequency or percentage of survey responses in the top box, but it is just as important to consider the frequency of responses in the bottom two boxes. A high percentage in the top-box range should not be allowed to eclipse bottom-box scores altogether. One of the best ways to operationalize this split analysis is to place a ceiling on the frequency or percentage of responses allowed in the bottom box, just as a certain frequency or percentage in the top-box range is designated as the quarterly or annual goal.
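The split analysis described above can be expressed as a pair of thresholds. This is a minimal sketch with assumed targets (the 75% goal and 10% ceiling are illustrative values, not figures from the article):

```python
from collections import Counter

responses = [5, 4, 5, 3, 4, 5, 2, 4, 5, 4, 5, 1, 4, 5, 4, 5, 4, 3, 5, 4]
counts = Counter(responses)
n = len(responses)

TOP_BOX_TARGET = 75.0      # assumed quarterly goal: at least 75% in the top two boxes
BOTTOM_BOX_CEILING = 10.0  # assumed ceiling: no more than 10% in the bottom two boxes

top_pct = 100 * (counts[5] + counts[4]) / n     # "4" and "5" responses
bottom_pct = 100 * (counts[1] + counts[2]) / n  # "1" and "2" responses

print(f"Top-box: {top_pct:.0f}% (goal {TOP_BOX_TARGET:.0f}%)")
print(f"Bottom-box: {bottom_pct:.0f}% (ceiling {BOTTOM_BOX_CEILING:.0f}%)")
meets_goals = top_pct >= TOP_BOX_TARGET and bottom_pct <= BOTTOM_BOX_CEILING
```

Tracking both numbers keeps a strong top-box result from masking a creeping bottom-box problem.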

Top Box Scores & Mean Customer Survey Scores Tell Different Stories

Data interpretation is made stronger when the frequency distribution and the cumulative frequency distribution are also provided. The frequency distribution shows the percentage of responses at each point on the rating scale for a given question item. The cumulative percentage shows the percentage of responses at or below each point on the scale.
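Both distributions can be tabulated directly from the raw responses. A minimal sketch, again with made-up response data:

```python
from collections import Counter

responses = [5, 4, 5, 3, 4, 5, 2, 4, 5, 4, 5, 1, 4, 5, 4, 5, 4, 3, 5, 4]
counts = Counter(responses)
n = len(responses)

# Walk the scale from the most negative point to the most positive,
# printing the frequency and cumulative percentages side by side.
cumulative = 0.0
for point in range(1, 6):
    pct = 100 * counts[point] / n
    cumulative += pct
    print(f"{point}: {pct:5.1f}%   cumulative: {cumulative:5.1f}%")
```

The cumulative column necessarily ends at 100%, which is a handy sanity check on the tabulation.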

For year-to-year comparisons of survey research that is conducted annually, the central tendency of the frequency distribution is one of the most valuable statistical tools. The mean, or arithmetic average, which may require weighting to be accurate, provides the best overall statistic of the typical rating given by survey respondents. In fact, it can be informative to overlay the frequency distributions of survey results from several years in order to compare the mean, median, skewness, and kurtosis of the distributions. This can be accomplished digitally with Excel or with the built-in capabilities of a number of survey software applications.
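The year-over-year comparison can also be done in plain Python. This sketch uses the standard library for the mean and median and computes skewness and excess kurtosis by hand; the two years of response data are hypothetical:

```python
import statistics

surveys = {  # hypothetical annual results for the same question item
    2022: [3, 4, 4, 5, 3, 4, 5, 2, 4, 3],
    2023: [4, 5, 4, 5, 5, 3, 4, 5, 4, 5],
}

def moments(scores):
    """Mean, median, skewness, and excess kurtosis of a list of ratings."""
    m = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    n = len(scores)
    skew = sum((x - m) ** 3 for x in scores) / (n * sd ** 3)
    kurt = sum((x - m) ** 4 for x in scores) / (n * sd ** 4) - 3
    return m, statistics.median(scores), skew, kurt

for year, scores in surveys.items():
    mean, median, skew, kurt = moments(scores)
    print(f"{year}: mean={mean:.2f} median={median} "
          f"skew={skew:+.2f} kurtosis={kurt:+.2f}")
```

If responses need weighting (for example, to correct for over-sampled segments), the mean would be a weighted average rather than the simple one shown here.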

The hazard of top-box reporting is that the audience loses visibility into the shape of the frequency distribution. That shape is often of greater interest to market researchers and other internal clients, because a standing business development goal is to move customers from the second box into the top box, as well as to move customers off the sitting-on-the-fence position of "3," or neutral, on the Likert scale. In fact, top-box score reporting and mean score reporting do not produce identical results. A good way to demonstrate this for a customer or client is to arrange the survey questions in rank order two ways, creating two rows: one ordered by mean score and the other by top-box score. The two rank orderings will differ. This difference can be particularly important when survey outcomes feed into employee performance evaluations or when surveys are used to identify clients at risk of terminating their relationship with the company or organization.
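The two-row demonstration described above can be sketched as follows. The question names and response lists are invented; the polarized "Onboarding" item is deliberately constructed so the two rankings disagree:

```python
import statistics

# Hypothetical per-question responses from one survey.
questions = {
    "Checkout":   [5, 4, 4, 4, 4, 4, 4, 4, 4, 4],  # mean 4.1, top-box 100%
    "Onboarding": [5, 5, 5, 5, 5, 3, 3, 3, 3, 3],  # polarized: mean 4.0, top-box 50%
    "Support":    [4, 4, 4, 4, 4, 4, 4, 4, 4, 3],  # mean 3.9, top-box 90%
}

def top_box_pct(scores):
    return 100 * sum(1 for s in scores if s >= 4) / len(scores)

by_mean = sorted(questions, key=lambda q: statistics.mean(questions[q]), reverse=True)
by_top = sorted(questions, key=lambda q: top_box_pct(questions[q]), reverse=True)

print("Ranked by mean:   ", by_mean)
print("Ranked by top-box:", by_top)
```

The polarized item ranks second by mean but last by top-box share, which is exactly the kind of discrepancy that matters when scores feed into performance reviews or at-risk client lists.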

Customer Satisfaction Is a Special Case

Surveys that measure customer satisfaction pose particular challenges to market researchers. Customer satisfaction surveys are purposefully designed to identify strengths and weaknesses in a company or organization from a consumer’s perspective. An associated challenge is that the results from customer satisfaction surveys are sometimes used to measure the performance of employees, which is not what the survey is designed to do.