
The Role of Strategic Evaluation in Nonprofits

In my last post, I began a discussion of program-level accountability and evaluation and framed some of the issues that create a “disconnect” between the discussion of program evaluation and the actual practice of program evaluation. I also surveyed some common organizational barriers to program evaluation. Since writing that post, an interesting opinion article by William Schambra, titled “Measurement Is a Futile Way to Approach Grant Making,” appeared in the Chronicle of Philanthropy. The article is rhetorical, almost caustic, about the disconnect between the talk and practice of evaluation. Typifying its tenor is the following mythical diatribe:

“If only those nonprofits had the luxury to speak the truth to those upon whom they rely for money, they might say something like this: ‘The last thing we need right now is to devote yet more time to gathering data that won’t affect your decision one way or another—even if you bothered looking at it, and that cannot be used anyway to build up a coherent or useful body of research for grant making.’”

As anyone with preteens and adolescents will immediately understand, this nonprofit rant demonstrates a clear external locus of control. In case you are unfamiliar with the concept, locus of control is a theoretical construct suggesting that an individual (or organization) can be externally oriented, seeking approval, guidance, direction and, well, control from the outside. The converse, an internal locus of control, is when an individual or organization is self-directed, with self-mastery and an internal sense of power and control. Indeed, the parenting task of the early adolescent years is to help kids grapple with (and hopefully master) an internal locus of control and independence, rather than clinging to childhood dependence on external caregivers.

Unfortunately, the fundamental flaw with the article about the futility of measurement is that it gets the locus of control all wrong. While I agree with the author’s premise that it is detrimental for an organization to collect program-level accountability and evaluation data merely to meet external requirements, it is at this point that my agreement with Schambra parts ways. As I stated in my last post, program evaluation planning is an internal strategy that is a peer to strategic planning and resource planning. Until an organization is clear on this point and creates an operating culture that values program evaluation as strategy, data collection will remain an externally controlled and driven process with limited relevance.

So the question that remains is: “When an organization decides to treat program evaluation as strategy, what exactly does that mean?” I would suggest that the answer lies in being intentional about addressing three fundamental domain questions: 1) What did we do? 2) How well did we do it? and 3) Did it matter? Implied in each of these questions is a strategy that can be summarized in the table below. As you can see, each of the evaluation domains has a related metric, audience, and purpose, and should be aligned with your program approach, logic model, or social impact model.
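A sketch of such a grid might look like this (the specific metrics, audiences, and purposes shown here are illustrative examples, not prescriptions):

What did we do? | Metric: process and output counts (e.g., activities delivered, people reached) | Audience & purpose: staff and funders, for accountability
How well did we do it? | Metric: quality, fidelity, and satisfaction measures | Audience & purpose: management and staff, for performance improvement
Did it matter? | Metric: outcome and impact measures tied to the logic model | Audience & purpose: board, funders, and community, for demonstrating mission impact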

Creating an evaluation table forces an agency to think about the strategic questions related to the internal and external ecology of its program. The evaluation grid is not intended to tell you what to think; rather, it is intended to suggest how to think about program evaluation.

I personally believe that if an agency cannot answer the three macro questions and their supporting strategy questions, it will be increasingly difficult for that organization to retain its place in the social-citizen sector. We are in the early stages of what will be an endemic underfunding of social services. With demand for social programs and services continuing to be high, and resources stagnant and in some cases rapidly regressing, a new “survival of the fittest” evolution is at hand. In this new evolution, fitness will be determined not by the size of one’s reserves or endowment but by a nonprofit’s ability to be evaluative, transparent, and networked. All three of these attributes are connected to evaluation strategy and become the points of an organizational internal locus of control.

Evaluative: Increasingly, relevant nonprofits are focusing their strategic intention on performance improvement and developing a culture of organizational learning. It is impossible to have a twin focus on performance improvement and learning without a commitment to evaluative thinking and acting. Organizations striving to be on the leading edge of impact through systematically learning and improving will embrace evaluation, because ongoing evaluative data is the only meaningful way to demonstrate how well an organization is meeting its mission.

Conversely, one need only look at the federal Department of Education’s Student Mentoring Program to see the folly and peril of resting on “no evaluation” or, equally detrimental, relying on “historical evaluation data.” While youth mentoring has long been ascribed the status of an “evidence-based” youth intervention, the Department of Education lost its entire $47 million budget for the student mentoring program when an impact evaluation found the initiative ineffective and the program was deemed duplicative of other federal programs.(1) This incident left dozens of youth mentoring programs with lost revenue, and some ceased to operate. Such data-oriented decision-making foreshadows a move toward a more disciplined “performance-based” grant-making model that dramatically increases the need for nonprofits to think strategically about evaluation.

Transparent: If the first internal motivation for developing an evaluation strategy is performance improvement and organizational learning, the second motivation is to strengthen transparency. Elsewhere I have written about organizational transparency, but transparency also matters at the program level. Take, for example, the model of transparency set by the Northwest Area Foundation (NWAF). In a recently published report, the NWAF takes a brutally honest look at a decade of its programs and practices and reflects on both the successes and failures of its strategy, programs, and services. Transparency opens the opportunity for dialogue, support, and change. Adapting the ideas expressed by NWAF’s President & CEO Kevin F. Walker, I would like to suggest that “sharing lessons learned – not just trumpeting success stories, but also examining missteps and false starts – has yet to become one of the [social sector’s] core strengths.” Government, philanthropy, and nonprofits are increasingly looking for ways to examine and discuss data, and such transparent conversations generate ideas for the improvement of the entire sector. And, as Walker concludes, “given the difficulties facing our society in this decade, philanthropy is duty-bound to evolve toward ever-greater effectiveness.”

Networked: A third internal driver for evaluation is considering the network in which an agency operates, because the “unit of change” is rapidly moving away from the individual nonprofit organization to the local nonprofit network or ecology. Just last week I was sitting in on a dialogue between a nonprofit and a potential funder. The questions being asked by the potential funder were less about the individual nonprofit and more about the nonprofit’s relationship to others working on the same social issue: “How do you fit in with others?” “Where are the points of intersection?” “Leverage? Scale?” “What do you know of others’ successes and challenges, and what do they know about yours?” Such a networked, shared understanding is created in the context of data. A network is not simply a group with a shared passion for a cause; rather, a network is defined by collective and tangible action in the direction of change. It is action, and not affinity, that causes change. As such, program evaluation plays a central organizing role in defining actions and the results of those actions.

In both my last post and this one, I have suggested that the bar for program evaluation has been set too low. While foundations and funders might not be using evaluation data consistently or effectively, that is no excuse for nonprofits to relegate evaluation to a lower tier of organizational functioning. Indeed, nonprofits that embrace evaluation as strategy will be driven by internal excellence rather than an external locus of control. Nonprofits that embrace evaluation as strategy will strengthen not only their organizational core but also the centrality of their place in solving social needs. Together, strategic planning, resource planning, and evaluation planning comprise nonprofit strategy. With such a three-legged strategy, an organization will not only survive but will demonstrate leadership and be positioned for growth and stability in the social-citizen sector.

~Mark

Photo Credit: Daniel Mena

(1) Office of Management and Budget. (2009). Terminations, reductions, and savings: Budget of the U.S. Government, Fiscal Year 2010. Washington, DC: OMB.

 


Mark Fulop

Mark founded Facilitation & Process in 2009 to help organizations and communities bridge the gap between where they are today and where they want to be tomorrow. He’s led dozens of Portland nonprofits, government agencies and philanthropic organizations through complex change initiatives including strategic planning, revenue planning, board development, collaboration, and facilitation.

2 Comments

  1. Thank you for your post. I especially appreciated the reframing of the standard process, output, outcome measurement as what we did, how well did we do it, and did it matter. For small non-profits new to evaluation, I can see these categories being a bit easier to understand and tie to evaluation questions.

    Your point about the network a non-profit functions in is also well-taken. This movement towards considering the larger picture is certainly happening at our foundation, but we have not yet found a way to systematically (and practically, in terms of both cost and staff resources) integrate network analysis components into our existing evaluation model. I would be very interested in any additional recommendations or sources you have on this topic!

    Thanks,
    Elena

  2. Elena

    I wish I could say that the reframing of evaluation questions was original to me, but I know I stole the concept from someone else, though I can’t remember from whom. Making evaluation accessible and useful are two concepts most clearly articulated in Michael Quinn Patton’s work on utilization-focused evaluation and his work on developmental evaluation.

    Your question about practically integrating network analysis is a great one. Personally, I think the concept of network analysis is the most hyped buzzword of the year, especially when it is paired with a spider web diagram (example). While I believe the term nonprofit network is useful in helping organizations recognize that they are connected, I also believe that when it comes to measuring “network effects,” I default back to the concept of utility. What is it about the network that we want to know? Most often the answer to that question is some measure of collaboration. When we make the leap from “network analysis” to “network functional collaboration,” we rediscover the old school of evaluating coalitions and collaborations. For a great example of such an evaluation, see: Measuring Collaboration Among Grant Partners.

    In terms of a well-designed workshop approach to collaborations and outcomes, I would also highly recommend the Outcomes Mapping website and their outcome mapping workbook. Finally, a year ago I wrote a blog post on measuring network effects, and I think it still has some additional useful ideas.
