#119 Sep/Oct 2001 — Evaluation

Shifting The Balance of Power

A substantial amount of Community Development Block Grant money – at least for housing rehabilitation – was reaching the poor in Birmingham, Alabama. At least, that was the conclusion of researchers at the University of Pennsylvania and Abt Associates in a $12.5 million nine-city evaluation for the U.S. Department of Housing and Urban Development. But Carolyn Crawford thought otherwise.

A citizen activist working for a coalition of low-income groups, Crawford began poring over local records and concluded that the government had been cynically, and perhaps illegally, targeting aid at the downtown and more promising neighborhoods, leaving poor neighborhoods out in the cold. Crawford’s work was part of the National Citizens Monitoring Project, an initiative funded by the U.S. Department of Health and Human Services’ Office of Community Services that trained neighborhood groups in 43 cities to assess and evaluate how city governments spend community development money.

The divergent conclusions reached by these two investigations highlight the tension between the traditional mode of program evaluation – an academic study by outside experts – and a newer approach that harnesses grassroots knowledge and expertise in the cause of measuring success and gauging impact. The shift has important implications for community organizations.

Evaluations are about power – the power to define what constitutes success, and the power to determine whether a particular program or organization has met that standard. In traditional evaluations, like the Penn/Abt study, that power lies entirely with outsiders – usually some combination of funders and “expert” evaluators. These outsiders identify the goals of the program and the definitions of success, design an “objective” evaluation process, carry out the investigation, analyze the findings, and report the results. If the program is still active, they may develop a plan of action for insiders to implement. People who participate in a program or project – whether as managers or as participants – are generally viewed only as subjects of study. Evaluations like these are in constant danger of providing results desired by outsiders, without benefiting program staff or constituents. They tend to maintain existing power relations and constrain organizations’ independence.

In response, many organizations and researchers have been developing ways to evaluate community programs that will shift the balance of power toward the insiders – the people who run the programs and the people they serve. Under these new models, both the organizations and their clients are active participants in the entire evaluation process, helping to define goals and measures, identify and collect data, and carry out the analysis. Such a process respects and develops the knowledge and skills of the insiders, recognizes and accounts for local contexts, and often results in more accurate and more useful information. This way of working is more likely to produce changes that will benefit clientele and community, since it involves them directly in identifying problems and developing solutions.

Indeed, the official HUD evaluation in Birmingham had little impact, according to those who have studied the fallout from that episode. But Crawford’s efforts with the Citizens Monitoring Project led to concrete changes, including city plan revisions that re-targeted resources to the city’s poorest residents and neighborhoods.

Similarly, when the Calcutta Slum Improvement Project recruited participants in its family planning program to conduct an evaluation survey, it learned some things that outside academics had failed to uncover: that birth control was more widely used than expected, for example, and that women’s attitudes toward birth control were tied up in concerns about control and gender roles. The Project responded by looking for ways to educate men about their role in family planning and encourage a more active role among women in choosing family planning methods.

As these examples demonstrate, experts don’t always know best. Even the most conscientious outside experts often miss crucial pieces of information – both because they don’t have the benefit of on-the-ground knowledge, and because their “subjects” do not necessarily trust them. One of the advantages of these non-traditional evaluations is the way they tap into local context-specific knowledge.

The level of involvement of outside experts in non-traditional evaluations can vary widely, although experts generally talk about two basic models: collaborative and participatory. The key difference between the two is ownership. Collaborative evaluation research is still initiated by an outside entity, and that entity retains at least partial control over the direction of the evaluation, the final results, and often the purse strings. Collaborative evaluation may use both outsiders and insiders at all stages, with the goal of sharing power as much as possible. While this is distinctly better than the traditional methods, imbalances in power may remain, especially if outside evaluators are paid by the funders or if program staff and constituents are unfamiliar with the data collection and analytical procedures that are being used.

Participatory evaluation takes things a step further, putting insiders in control from start to finish. A participatory evaluation is initiated by insiders, and they control the purse strings and can approve the final product. Outsiders are used for specific tasks as needed, but power over and ownership of the evaluation reside with the participants in the program. Depending on the project, these participants can include program staff, clients or members, board members, and the geographical community served by the program or project.

Just because an evaluation needs expert help doesn’t mean the group doing the evaluation has to lose control. The Farmers Legal Action Group (FLAG), which provides legal information and support to groups dealing with farmers in financial distress, carried out an evaluation along with its partner organizations beginning in 1998. The Northwest Area Foundation provided a grant for the evaluation, and FLAG hired the researchers and consultants itself. Outside facilitators helped staff and clients from all the involved organizations agree on a set of outcomes that described their goals and determine what they wanted measured. Researchers were then brought in to do the hard data crunching: comparing the actual costs of foreclosure borne by lenders, insurers, communities, and individual families with the costs of the mortgage foreclosure prevention programs that FLAG’s partner groups provide. Their findings demonstrated that preventing foreclosures is clearly cost-effective for all parties concerned. Using the results, FLAG has been able to refine its programs and generate good publicity. Lenders and other community leaders are now more likely to refer farm households at risk to the FLAG partners because they have evidence of FLAG’s successes.

A key role that outside experts can play in a participatory evaluation is to build the skills and knowledge of insiders, which can shift the balance of power long-term, not just within a particular evaluation. In the Empowerment Zone/Enterprise Community (EZ/EC) Learning Initiative (see Shelterforce #112), professional evaluators and researchers working with the Community Partnership Center (CPC) at the University of Tennessee helped community representatives learn how to monitor, measure, and evaluate the progress of their EZ/EC program and suggest mid-course adjustments.

EZ/EC is a federal economic development initiative administered by HUD and USDA that provides businesses with tax incentives to work in distressed communities. The EZ/EC Learning Initiative, which ran from 1995 to 1998 in 10 EZ/EC locations, was initiated by the Community Partnership Center and funded by the Economic Research Service of the U.S. Department of Agriculture. But the evaluations were actually done by “local learning teams” of approximately eight to 20 people from each community, led by a local coordinator also drawn from the community. The learning teams have presented their recommendations to EZ/EC administrators and a wide range of other government officials, including local leaders, HUD officials, and USDA representatives, convincing them that they needed to coordinate better among themselves and provide more flexible resources to communities.

To get to this point, a team of seven regional researchers worked closely with the local coordinators, helping them at each step – developing indicators of success, determining ways to collect the information, field-testing methods, collecting the information, analyzing results, sharing conclusions with key decision-makers, and deciding how else to take action. The CPC research team got the ball rolling, and the regional researchers provided expert knowledge as needed and wrote the reports, but control of the work and results remained primarily with the communities involved in the pilot study. They not only carried out the work of the evaluation, but were also deeply involved in crafting the process itself. No products or results moved forward until they were reviewed and approved by the teams. The academics were in a sense trying to work themselves out of a job.

Local learning team members have begun to use the skills and processes they developed through the initiative to influence other development projects in their communities. For example, based on the Rio Grande Empowerment Zone learning team’s identification of obstacles within the local school board, a 501(c)(3) nonprofit was formed to secure funding to sustain a “community learning center” over the long term, rather than relying on political cycles.

Experts can also play a catalytic role in participatory evaluations in which they have a much more limited involvement. A citizens’ economic development group in Roses Creek and the Clearfork Valley, Tennessee, committed to an intensive evaluation process after its initial attempts to recruit industry to the area and to start local businesses had failed. The group members initiated and controlled the process, but called in outside facilitators to give them guidance at strategic points. The “outsiders” helped them reflect on their experiences in a systematic way, and that was enough to enable the group to identify the power issues underlying their problems. Absentee landholders owned upwards of 80 percent of the county, and what was left was mostly in the hands of the local coal and timber operators or public entities, including the University of Tennessee. With this new perspective, the group went to the university, and together they developed a sustainable management plan for some of the university’s land that gave local people access to saleable timber as well as saleable non-timber forest products such as ginseng and yellow root.

Despite changes like these in evaluation practice, there is still a long way to go. In a recent report, Program Evaluation Practice in Community Development, Kesha Moore and Susan Rees found that although community development practitioners perceive evaluations that are both participatory and outcome-based as more successful than other types of evaluation, few of the actual evaluations they studied aspired even to be collaborative, much less fully participatory. Most involved all stakeholders at some point, but only 12 percent met the study’s definition of participatory evaluation. Community residents and program participants rarely took part in critical decisions, such as identifying indicators of success.

Nevertheless, community organizations are starting to understand and embrace participatory and collaborative evaluation, and even foundations and funding agencies are beginning to value the innovative types of knowledge, insights, and improved program services these new approaches generate. Further progress will require a commitment on the part of funders, agencies, and especially community-based groups and organizations. With that commitment will come the opportunity to shift the balance of power away from professional evaluators and researchers and put that power into the hands of those who are actually experiencing and trying to solve a problem. It is through this process that we can uncover the too-often hidden knowledge of the people who participate in and carry out community programs and harness their experience in the cause of making those programs even stronger.
