Think about the last check you wrote, the raffle ticket you bought, or the product you purchased to benefit an organization. Do you ever wonder if it made a difference?
There is growing pressure from funders for nonprofits to deliver on their promises and show results. As someone who specializes in evaluation, this brings me endless joy, but I lament that funders often don’t know what they are asking for and nonprofits don’t know what’s being asked of them. What does this lead to? Generally speaking, confusion, but also a continued waste of time and money. For evaluation and data collection to be useful to nonprofits and funders alike, it has to be calibrated correctly.
Let’s set the record straight on evaluation. Evaluation should not be used punitively. At the SWFL Community Foundation, we cannot stress this enough. We need organizations to develop an evidence-based learning culture, and that cannot happen if nonprofit leaders feel that cooperating with evaluations leads to criticism and a decrease in funding for their work. That’s not to say evaluation shouldn’t be challenging or critical; it must be. But it also has to be seen as valuable, both inside and outside the organization.
Reframing evaluation efforts in this way improves data quality and reporting. It also leads to my second point: evaluation results must be used for improvement. Collecting data only to squirrel it away for higher powers is not conducive to developing a learning culture. It is not useful to the organization burdened with collecting it, which leads to resistance and corrupted data. However, if those tasked with data collection are part of the evaluation process itself, from question design through analysis, reporting and feedback, they become part of that system with a vested interest in it. This provides multiple opportunities for learning and improvement along the way.
So how do we measure the right results? We, at the Foundation, are addressing this with nonprofits and funders. We want nonprofits to design programs that demonstrate progress toward their goals without reaching for pie-in-the-sky outcomes, instead encouraging grantees to develop outcomes that are directly tied to their activities. For instance, a youth business training workshop should gauge its effectiveness by how much the participants’ business knowledge increases, not by how many of those youth open businesses or graduate from high school.
The latter is impact, which is difficult to connect definitively to one small business training workshop. Agencies offering small-scale programs like this should align themselves with others who share the same ultimate goals, so they can begin to understand their collective impact and measure it better. Let’s call this “proximity evaluation.”
The other side of this topic is helping funders better understand this “proximity evaluation.” The question is whether a program is doing what it says it will do, judged against what a nonprofit agency can reasonably hope to achieve through that program. Broader, longer-term impacts should be connected to these program results, but we should not judge new, small programs on them. Funders should evaluate the quality of a program by how effectively it achieves the results closest to its activities. Not every great program has to solve big issues, but every program should be able to demonstrate effective, connected steps toward them.
The SWFL Community Foundation is committed to strengthening nonprofit organizations in our region by offering resources, coaching, training and guidance in addition to grants. And while we help our grantees establish their metrics, we also monitor our own Foundation’s measurements so we can work toward our outcomes and learn from our failures.
We want to make sure our work makes a measurable difference. If you want to make a difference, we’d love to hear from you at 239-274-5900 or visit our website at www.FloridaCommunity.com.