In this report, four basic approaches to the analysis of customer satisfaction survey data were presented and discussed in the light of data from six States. These approaches were:
- Descriptive statistics
- Dispersion or spread of responses
- Comparing responses between groups
- Relationships between the ACSI and additional questions
Each approach has strengths and weaknesses, as well as varying degrees of suitability for different audiences. Table 10 presents each approach with its respective strengths and weaknesses and its primary audience.
Table 10. Strengths, Weaknesses, and Primary Audience of Each Approach
Approach: Descriptive statistics
Strengths:
- Easily calculated
- Easily understood
- Summarizes key information
- Basis for performance standards
- Serves as a point of comparison
Weaknesses:
- Terms are used loosely and can be misunderstood
- Can oversimplify results
- Impacted by extreme responses
- Does not allow for determining practical differences in results
Primary audience:
- Appropriate for all audiences
- Stakeholders
- Senior management
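To make the descriptive statistics approach concrete, the sketch below computes the most commonly reported summary measures for a small set of hypothetical satisfaction scores; the values and the 0-100 scale are assumptions for illustration, not data from the six States.

```python
# Minimal sketch (hypothetical data): basic descriptive statistics for
# ACSI-style satisfaction scores reported on a 0-100 scale.
import statistics

scores = [72, 85, 64, 90, 78, 55, 88, 70, 95, 60]  # illustrative values only

print("n       =", len(scores))
print("mean    =", round(statistics.mean(scores), 1))
print("median  =", statistics.median(scores))
print("min/max =", min(scores), "/", max(scores))
```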
Approach: Dispersion or spread of responses
Strengths:
- Easily calculated
- Visual display presents an immediate image of the results
- Indicates the spread of responses
Weaknesses:
- Charts may be confusing
- The dispersion statistic (standard deviation) is not easily understood and is often ignored
Primary audience:
- Appropriate for more limited audiences for whom a detailed description is needed
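The following sketch illustrates the dispersion approach on the same kind of hypothetical scores, reporting the standard deviation alongside a crude text histogram of the spread; the values and the 10-point bands are assumptions for illustration.

```python
# Minimal sketch (hypothetical data): dispersion of satisfaction ratings.
import statistics

ratings = [72, 85, 64, 90, 78, 55, 88, 70, 95, 60]  # illustrative values only

print("standard deviation =", round(statistics.stdev(ratings), 1))

# Crude text histogram, one bar per 10-point band, to give the kind of
# immediate visual image of the spread that a chart would provide.
for low in range(50, 100, 10):
    count = sum(low <= r < low + 10 for r in ratings)
    print(f"{low}-{low + 9}: {'#' * count}")
```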
Approach: Comparing responses between groups
Strengths:
- Provides clear indication of practical differences between groups
- Can indicate areas of concern
- Key to assessing performance in relation to negotiated standards
Weaknesses:
- Comparisons are not always appropriate (apples to oranges)
- Not always understood
Primary audience:
- Management
- One-Stop staff
- Stakeholders
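A minimal sketch of the group-comparison approach is shown below; the center names, the scores, and the 5-point threshold used to flag a practical difference are all hypothetical assumptions, not negotiated standards.

```python
# Minimal sketch (hypothetical data): comparing mean satisfaction between
# two groups, e.g., two One-Stop centers.
import statistics

groups = {
    "Center A": [72, 85, 64, 90, 78],  # illustrative values only
    "Center B": [55, 88, 70, 60, 66],
}

means = {name: statistics.mean(vals) for name, vals in groups.items()}
for name, m in means.items():
    print(f"{name}: mean = {m:.1f} (n = {len(groups[name])})")

# Simple rule of thumb for a "practical" difference; the 5-point
# threshold is an assumption for illustration.
diff = abs(means["Center A"] - means["Center B"])
print(f"difference = {diff:.1f}",
      "(practically meaningful)" if diff >= 5 else "(small)")
```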
Approach: Relationships between the ACSI and additional questions
Strengths:
- Tailored to a specific State's programs
- May identify drivers of satisfaction
Weaknesses:
- Can lead to too many questions
- Questions may not be relevant
- Provides indications, not final solutions
Primary audience:
- Questions can be developed for a specific audience
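Finally, the sketch below illustrates how the relationship between an ACSI-style composite and an additional question might be examined with a simple correlation; both columns of values and the additional question itself (a wait-time rating) are hypothetical, and a correlation of this kind provides an indication of a possible driver of satisfaction rather than a final answer.

```python
# Minimal sketch (hypothetical data): relating an ACSI-style composite
# score to responses on an additional survey question.
import statistics

acsi      = [72, 85, 64, 90, 78, 55, 88, 70]  # illustrative composite scores
wait_time = [3, 5, 2, 5, 4, 1, 5, 3]          # hypothetical 1-5 rating on an added question

# Pearson correlation as a rough indicator of a possible satisfaction
# driver; statistics.correlation requires Python 3.10 or later.
r = statistics.correlation(acsi, wait_time)
print(f"correlation with the additional question: {r:.2f}")
```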