Category Archives: Planning & Evaluation

New ratings sites offer broader view of nonprofit performance

What should would-be donors know about your organization? Your overhead rate, or your core strategies? Your budget size, or the program capacity you’ve built?  How much money you raised, or what you’ve accomplished with it?

According to some nonprofit leaders, new ways of assessing nonprofit accountability are needed. Critics of nonprofit ratings sites like Charity Navigator say that the “metrics only” approach to judging a nonprofit sends the wrong message, and that other kinds of information should be widely available. Now, a handful of web-based ratings sites are trying to expand public information on nonprofits past the old “efficiency” indicators of overhead and budget numbers.

Charting Impact is one such framework. Designed by GuideStar, Independent Sector, and the BBB Wise Giving Alliance to “encourage strategic thinking,” the site offers nonprofits a chance to create a report that summarizes their strategies, capacities, and measurement techniques. The report is then reviewed by the board chair and CEO, who can add comments, and by anonymous stakeholders. The reviews are intended to validate the report, which includes answers to five questions:

1. What is your organization aiming to accomplish?
2. What are your strategies for making this happen?
3. What are your organization’s capabilities for doing this?
4. How will your organization know if you are making progress?
5. What have and haven’t you accomplished so far?

Once an organization completes a Charting Impact report, its responses produce a document with a unique URL that can be shared on the Charting Impact site, on GuideStar profiles, on BBB Wise Giving Alliance evaluations, and on the organization’s own website. The idea, say the founders, is to give current and potential donors a concise look at your plans and progress, so they can judge your strategic thinking along with your overhead expense rate.

Charity Navigator has also begun to shift its model, which now includes such indicators as whether a nonprofit has a conflict-of-interest policy in place and posts audited financials on the site. Eventually the service plans to add results reporting as well, including logic models and benchmarking documents, and to fold these into its star ratings for measuring nonprofit effectiveness.

Can the new direction help clear up myths about nonprofit effectiveness and reduce the tendency to focus on nothing but the numbers? That depends on whether potential supporters know where to look. But nonprofits can link from their own websites to their organizational reviews and reports on the scoring sites, using the tools as a way to help donors do their research. The sites give nonprofits the opportunity to craft online information that more accurately reflects “what’s going on” beyond financials, and to think critically about how it will be received. For instance, you can show donors that you distinguish between outputs, such as “number of trainings conducted,” and outcomes, such as increased knowledge and skills. It never hurts to have a reason to concisely articulate your goals, strategies, and progress.

The state of evaluation

Evaluation is the second lowest organizational priority for most nonprofits, ranking only higher than research, and less than a quarter of nonprofits devote the recommended five percent of their budget to evaluation.

Surprised? Or is this in line with your experience?

Evaluation rarely plays the role in nonprofits that we imagine it should. For many organizations, it’s simply too far removed from day-to-day needs and activities. Others have spent considerable resources on slick evaluation reports that sit on a hard drive rather than getting any use. Still others go through the motions, writing surveys and collecting responses while secretly skeptical that the data are really meaningful.

The nonprofit evaluation firm Innonet recently released its State of Evaluation 2010 report. (Click on “Download the report” to access the PDF.) The survey, based on responses from more than 1,000 nonprofits across the country, was designed to capture insights on the current role of evaluation in the sector: how nonprofits think about it, how they conduct it, and what they do with it.

According to the report, nonprofits see evaluation as 1) an opportunity to promote their organization; 2) a tool for strategic management; or 3) a resource drain and distraction.  The authors conclude that while nonprofits are “using the data and findings they generate in ways that strengthen their organizations,” there is a lack of “support, capacity, and expertise [needed] to harness the power of evaluation.”

The authors say the report is intended to collect baseline information for further research. It would have been useful to read more about how nonprofits learn and build evaluation skills, and the ways in which evaluation actually informs their operations, program development, and strategic planning. The report also doesn’t specify much about the scale or level of evaluation—organizational, program-level, short- vs. long-term, etc.

At the Alliance, we’re always trying to learn more about members’ needs when it comes to evaluation. What kind of training and support would help you conduct evaluations that are affordable and effective?

Below we’ve summarized key findings from the Innonet report (all findings refer to 2009):

  • 85% of nonprofits conduct evaluation.
  • 13% have one full-time staff person devoted to evaluation.
  • Large organizations are more likely to evaluate their work than small ones.
  • Professional evaluators are responsible for only 21% of evaluations.
  • 73% of organizations that have worked with an external evaluator ranked the experience as “good” or “excellent.”
  • Fewer than a quarter of organizations devote the minimum recommended share of their budget (5%) to evaluation.
  • Half of organizations have a logic model/theory of change, with large organizations more likely to revise it and keep it up-to-date.
  • Quantitative methods are used more frequently than qualitative methods.
  • Outcomes/impact evaluation is ranked as the highest priority form of evaluation.
  • Funders are named the highest priority audience for evaluation.
  • The most-named barriers to evaluation across the sector include limited staff time, insufficient financial resources, and limited staff expertise.
  • Evaluation is the second lowest organizational priority, ranking only higher than research.
  • More than a third of nonprofits say funders don’t support any of their evaluation work.