Evaluation

Overview

Evaluation sounds ominous. It sounds like assessment — an examination of sorts. But evaluation should be ordinary, everyday, part of everything we do. It is an activity that helps us adapt our work.

Whenever we run a campaign, develop a resource, or provide a service, we want to know whether it has made a difference in people’s lives. In particular, we might ask if it has affected their HIV outcomes, including the risk of getting HIV or passing it on, or their quality of life as a person living with HIV. Evaluation refers to activities that help us understand how (or whether) our programs and services are making a difference in people’s lives and in the world around us. Participating in evaluation is a key part of the job for anyone working in the community response to HIV in Australia.

This lesson does not attempt to cover all the different kinds of evaluation. If you work on a project or service that is being evaluated, chances are there will be a researcher or evaluator who will provide training in the method they are using.

External evaluation

At any one time you are likely to be participating in both internal and external evaluation activities. ‘External’ means evaluation undertaken by someone outside the organisation — for example, a researcher or consultant who has been contracted to evaluate a project, service, or strategy.

An external evaluation is often seen as having more credibility because the evaluator is, to some extent, ‘independent’ and therefore (perhaps) unbiased. External evaluation is more common with higher-stakes projects. For example, an external evaluator might assess the impact of a pilot initiative, and a funder might use their findings to decide whether to fund the initiative as an ongoing activity.

Internal evaluation

Organisations often conduct their own internal evaluation activities to monitor the progress and delivery of projects and services. It’s common for internal evaluation to include feedback forms, but it can also draw on insights noticed during client contacts and team meetings, as well as activities like public forums, webinars, and focus groups.

It is common for internal evaluation to be led by a senior project worker or manager, and to be an ongoing activity rather than delivering findings ‘once and for all’ at the end of a project. However, there are initiatives like the W3 (What Works and Why) project where an external researcher works with a community organisation to support their internal evaluation activities.

Supporting organisational learning and adaptation

The W3 (‘What Works and Why’) approach aims to value and capture the knowledge and insights that peer practitioners build up through our work with clients, contacts and communities.

An independent evaluation generally asks ‘did it work?’ (‘it’ being a project or service), while the W3 approach generally asks ‘what do we know and how?’

These two questions are connected, though. Peer and community-led programs lose effectiveness when they stop learning about and responding to emerging changes in the communities they are working with.

So the W3 approach uses a monitoring, evaluation and learning (MEL) framework that checks for learning and adaptation in programs and services. It looks for evidence that four key things are happening:

  1. Engagement with the community
  2. Alignment with the policy system
  3. Adaptation to what it learns from 1 & 2
  4. Demonstrated influence on the community and policy systems

People working in peer-based practice – like HIV Peer Navigators – often gain insights into changing needs and experiences among clients and communities. When these insights are shared within the team and the organisation, they can be used to support changes to the service approach, as well as partnership work and advocacy within the HIV sector and the broader policy system.

Two things are really important for peer-based programs:

  • encouraging peer staff to notice and share insights from their work
  • capturing those insights and using them to support change (e.g. by recording them in meeting minutes)

Peer and community-led organisations need to be able to demonstrate that learning and adaptation are happening and point to concrete examples of community and policy influence.

When these things are happening, we can share the findings with stakeholders (such as funders, policy-makers, clinicians, researchers, and, most importantly, members of the communities themselves). This can increase their confidence that the program or service is relevant, responsive and effective.

Qualitative or quantitative?

Quantitative methods involve numeric data and statistical analysis, typically planned by a person and carried out by a computer. By contrast, qualitative methods can include all sorts of data (text from interviews, open-text questions on surveys, images, television or film, even music) and the analysis is done by a person using a theoretical framework as a guide.

There is a widespread assumption that numbers are ‘hard data’ and that quantitative findings are therefore more reliable than qualitative findings. This assumption is mistaken.

What matters is using the right methods for the question being asked:

  • If the question is ‘how often’ or ‘how likely’ — then we’re talking about a frequency or probability, and we should use quantitative methods.
  • If the question is ‘why’ or ‘how’ — then we’re talking about values, purposes, and process, and qualitative methods may be more appropriate.

These are generalisations and there are exceptions in both cases. You can use quantitative methods to ask ‘why’ questions, for instance through surveys. And you can use qualitative data as evidence that something is common or widespread (if you seek opinions from well-placed, reliable observers).

But in general we use quantitative methods to estimate frequency and probability, and qualitative methods to help us understand personal and social values and meanings. It’s also common to use a mix of methods, both quantitative and qualitative, answering different questions in the same research or evaluation project.
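To make the distinction concrete, here is a minimal sketch of a ‘how often’ question. The survey question, responses, and sample size are made up for illustration; a real analysis would use your own data and, for small samples, would usually also report a confidence interval.

```python
# A minimal sketch of answering a 'how often' question with made-up data.
# The question, responses, and sample size are hypothetical illustrations only.

# Each entry is one anonymous survey response to: "Have you used the peer
# navigation service in the last 12 months?"
responses = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes", "yes", "no"]

used_service = sum(1 for r in responses if r == "yes")
total = len(responses)
proportion = used_service / total

print(f"{used_service} of {total} respondents ({proportion:.0%}) said yes")

# A 'why' question (e.g. why people did or didn't use the service) would instead
# rely on open-text answers read and interpreted by a person.
```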

Mixed methods

We should be using a mix of methods to evaluate our efforts to promote health equity, reduce health inequities, meet health needs, and build health literacy.

The mix can include things like:

  • PozQOL at baseline and repeated at regular intervals
  • Qualitative insights captured using the W3 approach
  • Tools for assessing health literacy
  • Social research measuring health outcomes
  • Case notes on goals and outcomes achieved by clients
  • Feedback from clients themselves
  • Consultation with communities

None of these methods tells us everything we’d want to know. But taken together, they are pieces of the puzzle giving us insights into the bigger picture – our vision of quality of life for all people living with HIV.
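As a rough illustration of the first item in the list above, here is a minimal sketch of comparing a quality-of-life score at baseline and at a later interval. The item values and the simple averaging used here are placeholders only, not the official PozQOL scoring, which has its own items and scoring guidance.

```python
# A minimal sketch of tracking a quality-of-life score at baseline and follow-up.
# The items, values, and simple averaging are illustrative placeholders, not the
# official PozQOL scoring; consult the PozQOL scoring guide for real use.

def average_score(item_responses):
    """Average a client's item responses (assumed to be on a 1-5 scale)."""
    return sum(item_responses) / len(item_responses)

# Hypothetical de-identified responses for one client at two time points.
baseline = [3, 2, 4, 3, 2]
follow_up = [4, 3, 4, 4, 3]

change = average_score(follow_up) - average_score(baseline)
print(f"Baseline {average_score(baseline):.1f}, "
      f"follow-up {average_score(follow_up):.1f}, "
      f"change {change:+.1f}")
```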

Ethical considerations

It is vital that people know they are participating in research or evaluation and freely give their agreement to do so. This is known as informed consent. Equally, people taking part in research must understand the potential risks, and how to access support if they are distressed or harmed.

It is vital that we don’t put pressure on clients and community members to take part in research or evaluation activities, as this can undermine informed consent. Pressure can occur in surprisingly subtle ways. For instance, a client might feel that saying ‘no’ to taking part could change your attitude towards them, or that refusal could harm their ongoing relationship with your service or organisation. As another example, a client on a very low income might not feel able to turn down an incentive payment, such as a gift voucher offered to people who complete a survey or take part in a focus group.

These are not hard and fast rules but things to think about when promoting a research study or inviting clients and community members to take part in evaluation activities.

Research involving human subjects (i.e. people) must undergo review by a human research ethics committee (HREC) before recruiting and engaging with participants. Evaluation activities and market research do not need to undergo HREC review. However, external evaluators will often seek HREC approval because their findings cannot be published in peer-reviewed journals without it.
