Customer Satisfaction Pilot Studies and Analysis
Overview
Survey Administration
Six states conducted participant satisfaction surveys and made the data
available. Four of those states also conducted employer satisfaction
surveys and made those data available. Nearly all of the states used the
three questions that make up the ACSISAT and asked them at the beginning
of the survey, immediately after the first question, which determined the
types of services received. One state, however, asked about satisfaction
with each of the services before asking the three ACSI questions. This
was done by mistake but produced no discernible difference in the results.
The data were collected by independent contractors who conducted the
telephone surveys as outlined in TEGL 7-99. The participant customers
were a combination of JTPA Title IIA, IIC, and III participants. Employer
customers were drawn from both JTPA and Wagner-Peyser contacts.
In the spring of 2000, before all of the surveying had been completed,
the major contractors met at ICESA in Washington, DC, to discuss lessons
learned. Because analyses had not yet been conducted, or were very
preliminary, the focus was on survey administration.
- Contractors agreed that the greatest difficulty in meeting target response
rates was among employer customers. The problem stemmed from inadequate
contact information. They defined adequate information as at least two,
and preferably three, names and phone numbers for each employer.
- Contractors suggested that the window for contacting all customers
be no more than 60 days.
- There is a need to translate the surveys for significant concentrations
of non-English-speaking populations.
- Youth will present a major challenge in terms of obtaining an adequate
response rate.
- The one state that translated the survey into Spanish found that
respondents still had difficulty understanding the ACSI questions,
especially the questions about the ideal and expectations. However, it
was unclear whether the difficulty arose from problems with the
translation or from the inherent complexity of the questions themselves.
One of the questions, which compares the customer's experience to their
expectations, used an earlier version of the response scale, ranging from
"failed to meet expectations" to "met expectations." The current version
of this question replaces "met expectations" with "exceeded expectations."
For this reason, the results presented may not show the same pattern that
the newer wording would produce. In addition to this difference, the
ACSISAT reported here was calculated without the weights, which were not
available to USDOL during the pilot period. Therefore, although the
overall response rates and other results are likely to provide a generally
accurate picture of this index, they cannot be assumed to provide a highly
accurate, detailed picture. The participating states are labeled A
through F.
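To make the weighting caveat concrete, the following is a minimal sketch
of an unweighted index of this kind. It assumes each of the three ACSI
questions is answered on a 1-10 scale and is rescaled to 0-100 before
averaging; the function name and the equal weights are illustrative
assumptions, not the official ACSI methodology.

    # Minimal sketch of an unweighted satisfaction index. Assumes each of
    # the three ACSI questions (overall satisfaction, comparison to
    # expectations, comparison to the ideal) is answered on a 1-10 scale.
    # Equal weighting is an assumption; the official ACSI weights were
    # unavailable during the pilot period.
    def unweighted_index(overall, expectations, ideal):
        """Rescale each 1-10 response to 0-100 and take the simple mean."""
        rescaled = [(r - 1) / 9 * 100 for r in (overall, expectations, ideal)]
        return sum(rescaled) / len(rescaled)

    # Example: responses of 8, 7, and 9 yield an index of about 77.8.
    print(round(unweighted_index(8, 7, 9), 1))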
Analyses
There are a number of approaches that can be used to analyze customer
satisfaction survey data. Each has its strengths and weaknesses, and each
is suited in varying degrees to different audiences (e.g., local office
staff, management, policy makers, stakeholders, and the public). No one
approach fully portrays the customer responses; a complete portrayal of
the information calls for using a combination of approaches. All of the
approaches suggested are easily calculated using commonly available
computer software (e.g., MS Excel); a brief sketch of the calculations
follows the list below. The approaches include:
- Looking at basic descriptive statistics (e.g., averages, frequencies,
percentages)
- Examining the dispersion or spread of responses
- Comparing responses
  - Of customer groups
  - Among states or local One-Stops
  - At different time periods
- Examining relationships between the ACSI questions and additional
questions about services
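The sketch below illustrates these approaches using only Python's
standard library. The sample scores, state labels, and service counts are
hypothetical, chosen only to show the calculations.

    # Illustrative sketch of the analysis approaches listed above.
    # All data below are hypothetical.
    import statistics
    from collections import Counter

    # Hypothetical "overall satisfaction" responses (1-10) for two states.
    scores = {
        "A": [8, 9, 7, 10, 6, 8, 9],
        "B": [6, 7, 5, 8, 7, 6, 9],
    }

    for state, values in scores.items():
        # Basic descriptive statistics and the dispersion of responses.
        print(f"State {state}: mean={statistics.mean(values):.2f}, "
              f"stdev={statistics.stdev(values):.2f}")
        # Frequencies and percentages of each response value.
        counts = Counter(values)
        for response, n in sorted(counts.items()):
            print(f"  response {response}: {n} ({n / len(values):.0%})")

    # Comparing responses between states (the same calculation applies to
    # customer groups, local One-Stops, or time periods).
    diff = statistics.mean(scores["A"]) - statistics.mean(scores["B"])
    print(f"Difference in means (A - B): {diff:.2f}")

    # Relationship between an ACSI question and a service-related question,
    # e.g., the number of services each customer received (hypothetical).
    # statistics.correlation requires Python 3.10 or later.
    services = [2, 3, 1, 4, 1, 2, 3]
    r = statistics.correlation(scores["A"], services)
    print(f"Correlation with services received: {r:.2f}")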