By JOHN WILLIS
Full-scope evaluation is a powerful tool that can make you a more effective change agent – but it’s not always the right approach. Here are some simple and highly effective alternatives.
You have a solid plan for your campaign or project. You have the talent and resources to implement your strategy, and the timing is right. You work hard… and yet… showing results is not always easy. Even the best campaigns sometimes have trouble quantifying their achievements.
That’s where evaluation comes in – gathering the facts and presenting them in an accessible way lets you show your community partners, supporters and funders that you’re not only getting the job done, but that the lessons you’ve learned will make your work more effective in the future.
Evaluation is a highly evolved practice in the fields of international development, healthcare, and social planning, but in advocacy and social innovation we have found that evaluation is often the least-utilized strategic tool.
In our experience, even a modest program review can build momentum, increase communication and analytical skills among team members, and enhance solidarity with the larger circle of stakeholders. The trick is to organize evaluations as a learning experience and strategy process within the project rather than treating them as an afterthought.
What is the right approach for you?
Classic program evaluation assesses progress against stated objectives – you said you’d do ‘xyz’; how’s it going?
The exciting part of this type of ‘full scope’ evaluation is the potential to measure not just what participants think happened, but what actually did happen, based on the perceptions of the people directly impacted. We sometimes spend many weeks identifying, recruiting, and interviewing the individuals involved.
This method, however, assumes a high degree of measurability in both what you set out to accomplish and how you tried to accomplish it. Like traditional strategic planning (watch for an upcoming post on this topic), classic program evaluation is all about knowing the relationships between objectives, strategies, resources, activities, and outcomes.
But often, the objectives are diverse and even changeable over the course of the project being evaluated. Moreover, the resources to implement the ideal strategies may not exist, so people ‘muddle through’, often without tracking key metrics. Short of reconstructing the whole project from the ground up, this context of fragmentation or uncertainty makes the framework for evaluation anything but ‘logical’.
We have come across these challenges more than once in our work with non-profit clients. The following are some valuable alternative approaches to evaluation that give good results at a reasonable cost, both financially and organizationally.
Audits and Internal Evaluations
A perception audit is usually done fairly quickly and with less formality than a full-scope evaluation. An effective audit can be as informal as a roundtable conversation, a facilitated workshop, or a series of in-depth interviews. What do we think we’ve achieved? What worked, and what did not? How can we do better going forward? These three questions are all it takes to start a simple audit.
Auditing (or rapid assessment) techniques are particularly useful when you need a mid-course update on a project, or when the project did not start with very well-defined objectives. The point is to figure out your aims and appreciate what you have done well – it builds consensus among your stakeholders for the next stage of strategic development.
You can go beyond a simple audit without breaking the bank by conducting an internal evaluation – that is, go back over the original planning for the project and lay out the goals and objectives, strategies and tactics, desired outcomes, uncertainties, and risks. Then, interview participants and ask them how the framework changed over time.
This is a good way to create a clear framework and it keeps the data organized without making the conversation itself too rigid.
Measuring Community Engagement
Successful non-profit work and social innovation bring people together, and encouraging participation is often a major objective of a project. When it comes time to evaluate this type of project, there is a delicate balance between highlighting the positives and listening carefully to participants’ concerns and criticisms.
Whether you talk to internal or external stakeholders, you should focus on a few key questions: What value did the project have for participants? How could it be enhanced in the future? Did participants feel supported and welcomed? Could they ‘find their way around’ the project, or did a lack of guidance hamper their involvement? Which initiatives were most engaging for them, and why? How might you recreate successful models for engagement in future projects?
For large-scale projects such as advocacy campaigns, social marketing, or union organizing drives, opinion research is an invaluable aid to uncover and investigate community perceptions. We use surveys but also online bulletin boards and focus groups that complement the traditional interview and workshop methods for community engagement evaluation.
In many projects you may also need to use opinion research to evaluate how your messages and ideas were interpreted by external influencers, how they were passed on by stakeholders, and how they were reframed in dialogue with others.
In a separate post, we’ve highlighted some case studies of evaluations that Stratcom has completed to show how these techniques play out in the real world.
Two major types of program evaluation have emerged as the ends of the spectrum in this field – both have advantages and are worth exploring if you wish to go deeper into this topic:
‘Logical framework approach’ (LFA) and its kin ‘results-based management’ grew out of academic criticism of overseas development projects. For more information, see this article on the Logical Framework approach.
‘Developmental evaluation’ – a more participatory style of evaluation – is focused on helping project participants discover new learnings about their problem and the solutions that have been proposed and/or implemented. For more on this topic, read DE 201: A Practitioner’s Guide to Developmental Evaluation.