Customer Satisfaction Pilot Studies and Analysis

Response Comparisons

Customer satisfaction information can be grouped in several ways. These include different program groups (e.g., Title IIA, Title IIC, and Title III), different States (e.g., State A, State B, State C, State D, State E, and State F), and different time periods (e.g., information gathered in 1999, 2000, and 2001). A comparison of customer group scores answers the question, "Are some categories of customers better served by the system than others?" For example, in discussing the variation in responses in Figure 1 and Table 2, the question was raised whether this variation was due to one customer group rating service differently from another group.

In Table 2, State F's ACSI score is 74.37 and State E's ACSI score is 72.08. Is this a practical or statistically significant difference? Should State E look to State F as a model of service delivery? If in 2001 State F has an ACSI score of 73.00, is this drop of 1.37 points significant? Should State F then reevaluate its service delivery? Or are the scores for the two years really indistinguishable, so that the drop does not merit action? This comparison can be made for any group for which an ACSI score is known. It is carried out by performing a statistical test that either compares two or more ACSI scores with each other or compares an ACSI score to a fixed standard, such as a negotiated level of performance.
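The two-score comparison described above can be sketched as a simple two-sample z-test on group means. This is a minimal illustration, not the pilot studies' actual methodology; the standard deviations and sample sizes below are hypothetical, chosen only to show how a 2.29-point gap is weighed against sampling error.

```python
import math

def two_sample_z(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Z statistic for the difference between two independent group
    mean scores, using each group's standard deviation and sample size."""
    std_err = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    return (mean_a - mean_b) / std_err

# Hypothetical survey summaries for State F and State E
# (the 74.37 and 72.08 means come from Table 2; SDs and n's are assumed).
z = two_sample_z(74.37, 18.0, 400, 72.08, 19.0, 380)

# At the 5% significance level, |z| > 1.96 would indicate a real difference.
print(abs(z) > 1.96)  # -> False: with these assumed samples, the gap
                      #    is within sampling error
```

With these assumed sample sizes the 2.29-point gap is not significant, which is exactly why a visual comparison of the two scores is not enough on its own.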

Strengths

A comparison of customer satisfaction by customer groups provides a clear indication of how service is perceived by different groups. The results of this comparison can be presented without statistical jargon or a flood of numbers, so that they are easily understood (e.g., Title IIA customers were significantly more satisfied with the service they received than Title III customers).

This comparison can highlight potential areas of concern in the way services are delivered to a specific group. If we find that Title IIA adults are less satisfied than Title III adults, program managers will be forced to ask the question, "What about our service delivery would make one group of adult job seekers (those who often have less work history) less satisfied than adults with more work history?" Such questions contribute to a continuous effort for improving the match between service delivery and specific customer groups' needs.

This comparison is central to determining whether States and local WIBs are meeting negotiated performance levels. The Federal and State assessment of customer satisfaction performance relative to negotiated levels cannot be made by a simple visual comparison of the numbers (e.g., 69 looks different from 70). Since all scores such as the ACSI have a margin of error, a statistical test is necessary to account for that error and ensure an appropriate interpretation of the results. The statistical test determines whether the difference between the negotiated level and the ACSI score obtained through the survey is statistically significant.
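The comparison against a negotiated level can likewise be sketched as a one-sided test that accounts for the survey score's margin of error. The score, standard error, and negotiated level below are hypothetical, echoing the "69 looks different from 70" example above; this is an assumption-laden sketch, not the program's official compliance procedure.

```python
def meets_negotiated_level(score, std_err, negotiated, z_crit=1.645):
    """One-sided test at the 5% level: returns False only when the
    observed score is significantly BELOW the negotiated level, given
    the score's standard error (its margin of error)."""
    z = (score - negotiated) / std_err
    return z >= -z_crit

# Illustrative numbers: an observed score of 69 with an assumed standard
# error of 0.8 points, against a negotiated level of 70.
print(meets_negotiated_level(69.0, 0.8, 70.0))  # -> True: the one-point
                                                #    shortfall is within
                                                #    the margin of error
```

Here the test returns True even though 69 is numerically below 70: the shortfall could easily be sampling noise, so missing the level cannot be concluded from this survey alone.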

Weaknesses

Some comparisons, although they might yield a statistically significant result, may not be appropriate in the first place. Two groups may be such different populations, and differ to such a degree in the types of services received, that a comparison of the two produces no useful information. For example, while comparing two groups of adults with a similar mix of services is appropriate, comparing Title IIA adults with Title IIC youth may not be useful. While Title IIA emphasizes job-related skills (e.g., job search, job readiness, and occupational training) designed to facilitate immediate employment, Title IIC emphasizes learning about the world of work and gaining basic skills that lead toward long-term employment preparation. The satisfaction scores relate to two different sets of expectations and two very different program designs, making comparisons more like apples to cheese than even apples to oranges. Therefore, the nature of the comparison and the type of analysis must be clearly explained so that the audience understands the limitations of the comparison.

Primary Audience

A comparison of customer satisfaction by customer groups is particularly useful for management and staff in identifying best practices.

Examples of Analyses from Pilot Studies