My first experience with organizational evaluation came early. I was a secretary at my local neighborhood center – known by its constituents as “the Center” – sitting in the tiny administrative office at the typewriter with my boss looking over my shoulder as I painstakingly squeezed type into the tiny boxes of a funder’s grant report form. These were the days when our grant applications and program evaluations were like works of art, constructed of whiteout, rubber cement, and clear plastic tape.
My attention had mostly been on my task rather than on the words being dictated, until the director began to describe the youth program I was a part of. His goals – keeping low-income youth off the streets and preparing them for work and/or college – sounded worlds away from my experience of the program.
This was the place where my siblings and I attended nursery school and summer camp, were checked for lice, and learned to swim. One of my first jobs was as an afterschool counselor there. I was an active member of the teen programs and even attended board meetings as a teen constituent. I typed and mimeographed the newsletter and did community outreach door-to-door with one of the pastors. I even had my first kiss in the Center’s elevator. If anyone knew the Center’s programs inside and out, I did. But the words I was typing were not describing the Center I knew.
Yes, some of us were now studying for the GED, or even going to college, but the director’s description seemed to be reducing us to numbers – and not giving a true picture of what we really got from the program. The youth program, originally adult-directed, was now run entirely by the teen leadership. Rather than focusing on work or college specifically, our activities were a combination of fun (field trips to plays or concerts), practicality (learning how to use Robert’s Rules of Order), and developing pride in our culture (inviting Puerto Rican poets, writers, musicians, and activists to make presentations and interact with us). I would have stressed in the report how we were building community, caring for each other, and gaining an appreciation for ourselves and our culture.
But as a 17-year-old secretary dependent on my job, I didn’t yet have the voice to challenge the director. And looking back, I don’t know if he was just trying to fit the words to the required format to keep our funding, or if he was actively interpreting the meaning of programs for their constituents instead of asking us. Whatever it was, eventually a weak board, a director not in touch with his constituency, and a community that wanted more of a voice in the governing of the organization led to the Center’s demise. Several weeks ago I visited the building, now boarded up. An older man stepped out from next door, and together we commiserated over the loss that shell of a building represented.
After 27 years working in and for nonprofits, I know that evaluations do not have to be like that first one. A good evaluation should include three components: reflection, measurement, and documentation. It should show how an organization is functioning, how it is meeting its mission, and what others think of its work on their behalf. An evaluation is an opportunity to engage with peers to decide what to continue, what to change, and how to put that in place. Unless they do periodic evaluations, organizations can stray from their mission without noticing, become stuck in a rut, or even cease to exist.
Long-time community activist Shad Reinstein, now working in Seattle, learned the dangers of not evaluating from her involvement in the leadership of the Women’s Peace Encampment in Seneca Falls in 1983. The Encampment, a grassroots anti-nuclear project, brought national attention to the nuclear arms race. Along with education on nuclear warfare, the group held ongoing classes on local, regional, and national organizing and on consensus process. It was a project that encouraged leadership from within, and members of the encampment cycled in and out of decision-making positions.
The Women’s Peace Encampment was wonderfully successful given the hostile environment of the times. The project gave people a way to focus on the key issues, and a way to talk about what it meant to live in the nuclear age. Its fundraising efforts were so successful that it was able to buy the land it occupied free and clear. A foundation was in place on which it could build a longer, more sustained grassroots movement.
But, says Reinstein, the group’s leaders never engaged in an evaluation process, even as the conditions around them evolved. As a result, they failed to recognize the critical point at which the project’s needs and the make-up of its constituents began to change, and were unable to adjust their plans accordingly. Ultimately the group disbanded.
The Encampment’s story, while extreme, is hardly unique among community groups. The organizational cultures of many nonprofits are not open to systematic introspection. Meaningful measurements of their impact are rare, and even documentation – keeping track of the documents, flyers, posters, and pictures that tell our organizational histories – is often haphazard at best.
Nonprofits shy away from evaluation for many reasons. One of the most common is that groups feel they don’t have enough time given their workload. Nonprofit staffs are notoriously stretched thin. Miche Sheffield, director of crisis and information services at Friends of the Family, a shelter in Denton, Texas, faced resistance in her organization when she instituted an ongoing evaluation process: Another form to fill out? Who wants to know? Sheffield says her hardest job was getting people to understand that this was not about Big Brother watching over them, nor about giving them more work, but about gathering information to aid everyone in their work. The agency as a whole had to be careful not to create an undue workload for staff in the evaluation process.
Other resistance to evaluation can come from reluctance to change, or from fears that evaluation is a test you either pass or fail, that you will be found lacking, or that your efforts will be misinterpreted. These latter fears are not entirely unfounded. Most organizations have experienced evaluation only as a requirement of funders, and aside from chafing at something externally imposed, many also have legitimate complaints about their experiences.
During its early years, ALLGO (the Austin Latino/Latina Lesbian & Gay Organization) performed evaluations when funders told it to. ALLGO was founded 15 years ago as a community-based organization serving the social, cultural, and political needs of Latina and Latino lesbians, gay men, and their families in Austin, Texas. It also does outreach work on HIV/AIDS. Funders wanted to see numbers: how many people were served, how many participants attended events. They also wanted information about impact, but the kinds of data they were looking for showed a serious misunderstanding about the nature of the HIV/AIDS outreach work. “Our funders wanted to know that someone has changed their behavior based on their contact with our agency,” Executive Director Martha Duffer recalls. “But it is a little difficult for our outreach workers to spend an hour in conversation with a drug user or sex worker on the street or in a bar and then pull out a survey asking them if they will now change their lives based on their contact with us.”
Funder-driven evaluation often involves external evaluators, and even groups that know the value of evaluations have had mixed experiences with this approach. The Enterprise Corporation of the Delta (ECD), a seven-year-old community development funding institution, provides revolving loans to small businesses in the Mississippi Delta region. ECD wants to see its investment produce jobs, regional growth, and development, while making a profit that will roll back into more loans.
ECD’s first evaluation was done by a third party, as required by a major funder. ECD didn’t have a choice about who would do it or how it was conducted. The funder engaged a team of external evaluators who looked at things like benefit and wage levels and jobs created or assisted. The process took six months.
Both Bill Bynum, ECD’s CEO, and Garrett Martin, a program officer, say they gained useful information from the quantitative study outcomes, while also learning some tools they could apply to their own evaluation. Had it been up to them, however, Bynum and Martin would have done the outside evaluation differently. “Jobs created” and “jobs assisted” are considered the currency for economic development, and that’s what the outside evaluators measured. But ECD needed more specifics. Are these jobs part-time or full-time, 40 hours a week or 35? Do they include benefits? How do the jobs created fluctuate over the long term? A three-month-long job counted as a job created in the outside evaluation, but it would have a different community impact than a job that is still there 10 years later. Do employers or participants need some other form of assistance that ECD could provide, such as training, literacy, or GED programs for employees? None of these questions was covered by the outside evaluators.
Despite the challenges – insufficient time, staff anxieties, dealing with outsiders – many organizations recognize the need for evaluation and are taking steps to make it their own. Friends of the Family’s Sheffield says that while evaluation for funders is required to keep Friends of the Family open, she is also a proponent of evaluation for its own sake. “We use evaluation to ensure efficient use of resources, to make sure that the job we are doing is actually helpful and not harmful, and to identify future needs and program development,” she says. “We have talked about being a change agency, with a change mentality. We have therefore had to ask: Are we open to change? Do we really want to know if we are doing a good job?”
When organizations start paying attention to reflection, measurement, and documentation they often find that the results call for a change in programming. ALLGO is one example. It had followed a typical path for organizations serving low-income areas and communities of color, growing from a small volunteer organization in 1985 to one with three staff members in only two years. In its third year it sought funding for an HIV/AIDS community-based program called InformeSida. Although only one of several ALLGO programs, InformeSida soon became the primary focus of the board and director because its funding increased each year.
Martha Duffer came on board as executive director three years ago to find a well-established organization with credibility in the community, an annual budget well over $200,000, and nine employees. Its board and staff were respected and active political advocates known for their program expertise, and ALLGO was represented on city and county planning councils, at community forums on the health needs of the Latino population, and other bodies. The group had also become known for its innovative cultural programming, working with other community-based organizations and the neighborhood Catholic church to incorporate art, culture, and spirituality as tools for education as well as community healing around violence and the AIDS epidemic.
One of the first things Duffer did was talk with current and former board members, staff, local allies, and constituents about ALLGO’s role in the community. In effect, she carried out her own informal evaluation. Duffer found that the organization had fallen into the trap of basing program development on funding availability. The group’s many board and staff retreats consistently produced beautiful lists of organizational goals and tasks, which were never incorporated into the day-to-day activities of the organization. The major problem was that all the staff were paid for social service work under InformeSida, and so social service work was crowding out the more radical, progressive agenda envisioned at the retreats. Any evaluations ALLGO had done were funder driven, and so they too focused on the InformeSida program.
Duffer and the board made a commitment to change that. The next annual retreat included a more formal collective evaluation of ALLGO’s mission, goals, programming and funding, which confirmed the gap between the original mission and current programming that Duffer had identified, and the group committed to finding ways to close it. ALLGO has now begun to expand programming beyond InformeSida to include cultural outreach programs grounded in an anti-oppression model – one of the primary components of the original mission of the organization.
Evaluation has also become a part of the organizational culture at PODER (People Organizing to Demand Environmental and Economic Rights), a 10-year-old organization whose focus is on community organizing and leadership development in San Francisco’s Mission district. PODER is a small organization, with four full-time staff and a crew of part-time youth outreach workers. But it is not too small to need evaluation. “We have to ask if we are meeting our goals,” Project Director Antonio Díaz explains. “It is also important to look at the values of our organization and question the extent to which we are adhering to them. This includes how we treat one another, our organizational culture, and our relationship to our members.”
Last year’s evaluation produced some important feedback from the youth employees, who thought their existing 10-week summer program was insufficient. In response, PODER expanded its youth program to run year-round. This year’s evaluative efforts will determine whether the change has strengthened the program. “Evaluation is a cyclical, ongoing process,” says Díaz. “It is not an end product.”
Keeping that cyclical process moving forward over rough spots – incorporating big changes, shifting organizational culture, negotiating with funders – isn’t always simple. Without large research budgets or much time to spare, organizations that are serious about change have to get creative.
For ALLGO, this required creating a new forum to air constructive criticism of the change process. Everyone knew that their organizational culture and the demands of their programs meant that regular staff or board meetings were reserved for the business of running the organization. So Duffer worked with staff and board to put into place a series of cenas, or dinners, where staff and board gather at least once a month between retreats or formal meetings to check in on their progress. This instrument for internal evaluation has so far proven to be successful, and Duffer believes this is because it is not a static, one-time pencil-and-paper exercise. The regular “social” meetings on neutral ground are helping to create a new organizational culture in which board and staff can review what they’ve done, discuss whether or not their work is producing the desired outcomes, and identify steps for change if necessary.
ECD decided that in addition to the numbers provided by the outside evaluator, it should prepare a case study of the community it serves in order to understand its impact. ECD wants to be sure to capture qualitative indicators particular to its region, and to provide an appropriate lens through which to view the numbers. So each quarter it takes an in-depth look at its customers. ECD staff ask customers what worked well and what didn’t in their interaction with ECD, and may ask employees about how their experience has changed as a result of the loan. ECD also talks with loan recipients about their experience with commercial lenders and what impact the loan has had in stabilizing or helping to grow the company.
Kate McLachlan, director of the Martinez Street Women’s Center (MSWC) in San Antonio, Texas, has had to find creative ways to document the quality and success of MSWC’s leadership program for girls. When the program was still an idea on paper, MSWC thought it would be able to reach a larger number of girls than it actually has. As the project progressed, the group learned that its constituents had qualitative needs that it decided were more important to address than meeting the goal of a certain number of “contact hours” with a certain number of participants. Because of this shift in goals, MSWC wound up with fewer contact hours than it had projected, and McLachlan has had to walk a fine line – trying to provide funders with the quantifiable outcomes they expect, while also attempting to educate them about the importance of the more qualitative outcomes MSWC has been focusing on.
MSWC staffers and their young participants are pitching in. They are creating documentation that includes videos (made by the girls themselves), written and interview-style evaluations, and examples of activities. McLachlan has found that consistently maintaining this type of documentation also creates a “memory bank,” providing important examples of what does and doesn’t work as the organization plans ahead.
Thinking back on my days at the Center, I wonder what would have happened had the organization’s constituents been part of an ongoing evaluation process. What if the youth had been asked to document their experiences, as the youth in PODER and MSWC are doing? What if community and staff engaged in collective reflection, allowing the Center to document outcomes in the constituents’ words, using the constituents’ own measurements? And what if I could go back now, 30 years later, and look at documentation produced when I was a teen and see the history of a community organization across that span of time?
The challenge facing community-based organizations across the country is to claim that ownership, embrace feedback even when it stings a bit, and make evaluation an integral part of what they do. To take on that challenge may mean changing organizational cultures. It may also mean butting heads with funders. In the end, however, it is a challenge well worth taking on.