User satisfaction surveys are perhaps the most problematic tool used by social service agencies to assess performance. Not only is self-reporting notoriously unreliable, especially when proxies are involved (families, circles or staff responding on behalf of users), but it can make service users complicit in validating services that are not helping them and that may even be doing them harm.
1. Satisfaction is not an indicator of efficacy, even though service delivery organizations, funders and accrediting bodies often mistake it for one. Personal satisfaction tells us about someone’s feelings and impressions; it does not tell us what actually happened. The taste of a thing is very different from its nutritional value.
Worse, setting satisfaction metrics as effectiveness targets in outcomes reports is a category mistake, because our perceptions are likely to differ from reality. To equate the two is like claiming we can advance scientific knowledge by increasing people’s perception that they have it. In fact, it is entirely possible for people to receive a very poor service and yet rate it very highly.
2. Service recipients’ exposure to other services, or to different categories and types of services, or to a range of interactions and outcomes, may be limited. Without this wider exposure in which to ground feedback, levels of satisfaction become less informative, because service users have not had an opportunity to develop and refine their expectations. We may feel good that someone who has been in our services for a long time rates them highly, but would they still do so if they had sampled other sorts of services? Without that exposure, satisfaction surveys can become fishbowl applause, supplying services with a false sense of desirability.
3. Social service organizations often emphasize safety, sanctuary and satisfaction over catalyzing personal growth and social transformation. When that happens, they can create contexts that are caring, fun and social, but that don’t actually launch people into good lives. Instead, they can foster regression, complacency and ongoing dependence. High satisfaction with such services should not be taken as an affirmation, but as an admonition that we are not fulfilling our ultimate mission: helping people live fuller and better lives. Satisfaction can be inimical to good outcomes. Consider, for example, Fenton’s study linking patient satisfaction to health outcomes: higher satisfaction was associated with higher health care costs, because doctors were more likely to prescribe the tests and treatments patients expected, and with higher mortality, because doctors were less likely to challenge patients about unhealthy habits.
User satisfaction surveys have a place in social services to the extent that they tell us how users are experiencing our services. But too often the preponderance of “outputs” reporting and the absence of good outcomes data mean that satisfaction surveys become substitutes for quality. We reason that if those receiving services are satisfied with what we think is important, and if they would recommend us to others, we must be doing good work. That is a fallacious conclusion. More importantly, from an ethical perspective, we run the risk of enlisting the voices of service recipients to validate services that ought not to be validated.