At the 2008 National Conference on Dialogue & Deliberation, we focused on five challenges identified by participants at our past conferences as vitally important for our field to address. This is one in a series of five posts featuring the final reports from our “challenge leaders.”
Evaluation Challenge: Demonstrating that dialogue and deliberation works

How can we demonstrate to power-holders (public officials, funders, CEOs, etc.) that D&D really works? Evaluation and measurement are a perennial focus of human performance/change interventions. What evaluation tools and related research do we need to develop?

Challenge Leaders: John Gastil, Communications Professor at the University of Washington, and Janette Hartz-Karp, Professor at the Curtin University Sustainability Policy (CUSP) Institute

The most poignant reflection of where the field of deliberative democracy stands in relation to evaluation is that, despite this being a specific ‘challenge’ area, there was only one session at the NCDD Conference aimed specifically at evaluation: ‘Evaluating Dialogue and Deliberation: What are we Learning?’ by Miriam Wyman, Jacquie Dale and Natasha Manji. This scarcity of evaluation-focused sessions among the Conference offerings is all the more surprising because, as learners, practitioners, public and elected officials and researchers, we all grapple with this issue with monotonous regularity, knowing that it is pivotal to our practice. Suffice it to say, the challenge is so daunting that few choose to face it head-on. Wyman et al. made this observation when they quoted the cautionary words of the OECD (from a 2006 report): “There is a striking imbalance between time, money and energy that governments in OECD countries invest in engaging citizens and civil society in public decision-making and the amount of attention they pay to evaluating the effectiveness and impact of such efforts.”

The conversations during the Conference appeared to weave into two main streams: the varied reasons people have for doing evaluations, and the diverse approaches to evaluation.

A. Reasons for Evaluating

The first conversation stream was one of convergence or, more accurately, several streams proceeding quietly in tandem. This conversation eddied around the reasons different practitioners have for conducting evaluations. These included:

“External” reasons oriented toward outside perceptions:
“Internal” reasons, more focused on making the process work or on the practitioner’s drive for self-critique:
B. How to Evaluate

The second conversation stream at the Conference – how we should evaluate – was more divergent, reflecting some of the divides in values and practices between participants. On the one hand, there was a loud and clear stream stating that if we want to link citizens’ voices with governance and decision-making, we need to use measures that have credibility with policy- and decision-makers. Such measures would include instruments such as surveys, interviews and cost-benefit analyses that apply quantitative, statistical methods and, to a lesser extent, qualitative analyses, and that can claim independence and research rigor. On the other hand, another stream questioned the assumptions underlying these more status-quo instruments and their basis in linear thinking. This stream inquired: are we measuring what matters when we use more conventional tools? For example, did the dialogue and deliberation result in:
From these questions, at least three perspectives emerged:
An ecumenical approach to evaluation may keep peace in the NCDD community, but one of the challenges raised in the Wyman et al. session was the lack of standard indicators for comparability. What good are our evaluation tools if they differ so much from one context to another? How, then, could we compare the efficacy of different approaches to public involvement?

Final Reflections

Along with the lack of standard indicators, other barriers to evaluation also persist, as identified in the Wyman et al. session:
Wyman et al. commenced their session with the seemingly obvious but often neglected proposition that evaluation plans need to be built into the design of processes. This was demonstrated in their Canadian preventive health care example on a potential influenza pandemic, where there was a conscious decision to integrate evaluation from the outset. The process they outlined was as follows: any evaluation should start with early agreement on the areas of inquiry; this should be followed by deciding the kinds of information that would support those areas of inquiry (the performance indicators), then the tools most suited, and finally the questions to be asked given the context. A key learning from the pandemic initiative they examined was: “In a nutshell, start at the beginning and hang in until well after the end, if there even is an end (because the learning is ideally never ending).”

In terms of NCDD, we clearly need to find opportunities to share more D&D evaluation stories to increase our learning and, in so doing, increase the strength and resilience of our dialogue and deliberation community.