This paper should be cited as: Carlsson, C. and Engel, P. 2002. Enhancing Learning Through Evaluation: Approaches, Dilemmas and Some Possible Ways Forward. (Background Papers). Maastricht: ECDPM.
Enhancing learning through evaluation: Approaches, dilemmas and some possible ways forward
Enhancing learning through evaluation: three different perspectives
Dilemmas in strengthening the learning function of evaluations
Some suggestions for the way forward
Paper commissioned by the Ministry of Foreign Affairs, Directorate-General for Development Cooperation, the Netherlands, to be presented at the 2002 EES Conference, Seville; October 10-12, 2002.
The views or opinions expressed in this report do not necessarily represent those of the Ministry of Foreign Affairs/Directorate-General for Development Cooperation. This report may be quoted and excerpts may be translated without prior permission, provided that the source is fully acknowledged.
Paul G.H. Engel & Charlotte Carlsson Dr. Paul G.H. Engel (firstname.lastname@example.org) and Charlotte Carlsson (email@example.com) work at the European Centre for Development Policy Management at Maastricht, the Netherlands (www.ecdpm.org). The authors are grateful to Hans Pelgröm and Piet de Lange from the Dutch Ministry of Foreign Affairs/ Directorate-General for Development Cooperation, for substantial contributions to the development of this text.
Development agencies are consistently required to improve their performance, not only at the project level but also at the programme and institutional levels. This requires strengthening their learning capabilities, and one logical way to do so is to enrich evaluation practices. Development practice has therefore been challenged to produce practical experiences that emphasise learning as part of monitoring and evaluation. As a result, in a very practical sense, evaluation processes are now perceived as opportunities for institutional learning, and the search has started for ways of integrating such processes, approaches, instruments and techniques into mainstream development practice.
This paper sketches three progressively more inclusive ways of looking at mainstreaming learning for development. The first emphasises improving feedback to development policy and planning, the second organisational learning and the third societal learning. For each of these, some of the main sources and developments are briefly highlighted. Next, a number of dilemmas are discussed that emerge from an increased emphasis on learning through evaluations, and the question is asked how difficult, or even how feasible, it is to match each of the different approaches with mainstream evaluation practice. Finally, the paper offers a number of challenges for further action and inquiry.
Evaluation has always been about learning: about how to be accountable, how to be transparent, how to learn from experience. (For the purpose of this paper, we use the term evaluation in its inclusive sense, referring to the collection of evidence as well as its processing, valuing and weighing.) Today's questions, therefore, do not revolve around whether learning is desirable, but around whose learning we are talking about and for what purposes. Besides, many point to the quality issues involved if and when learning is to be shared widely. As such, the current trend to enrich evaluation processes by strengthening their learning function seems to respond to two general trends in development thinking. The first is the pressure upon development agencies to improve their own performance by learning from their successes and mistakes. As a result, evaluative functions are increasingly articulated into processes of institutional learning. The second is the widespread recognition of the active involvement of stakeholders as a fundamental principle for making development cooperation a success. Mainstream thinking today emphasises development partnerships, national ownership, facilitation, stakeholder participation, dialogue and mutual obligations, as well as a strong move towards decentralisation of decision-making, democracy and local dynamics. Within the complex development arena that emerges as a result, clearly only those who learn, and learn quickly and effectively, will eventually find their way.
This article is not meant to address all the relevant issues and challenges emerging in such a dynamic environment. It concentrates upon one particular, rather instrumental issue: if we intend to enhance learning in evaluations, what approaches can show us how to do it? In view of the available time, a quick search was done to review the available, partly grey, literature for practical examples. This inquiry revealed a very rich literature on professional learning, social learning, knowing for action and cognition for development. Specific applications of these theories within the realm of evaluation, however, proved relatively few. Therefore, in this paper we try to do two things. First, we present a number of approaches through which development practitioners and researchers have tried to enhance learning for development. Without pretending to be exhaustive, this should set the scene for a discussion of options, results and future landscapes for evaluation. Then, we distil some of the relevant dilemmas evaluators and their sponsors are bound to face when advancing towards enhanced learning. This should provide a start for drawing some tentative conclusions as to the way ahead. Finally, we present some conclusions of our own, as a first attempt to formulate challenges for the near future.
Enhancing learning through evaluation: three different perspectives

Development practice being a quest for innovation and societal change, in this paper we take an inclusive approach to both evaluation and learning. In its most general expression, development may be understood as "learning our way into a sustainable future", implying, amongst other things, that we try to avoid mistakes; and when we fail to do so, we intend to learn from them. The history of development cooperation, as confirmed again by the recent Johannesburg Summit, shows that such an intention is not at all easy to put into practice. Yet current thinking about learning from evaluations very much reflects this wish to learn, and to learn faster, from what we do. Spearheaded by the OECD/DAC Working Party on Aid Evaluation (2001), donors have taken the initiative to step up learning and to commit to improving their practices. Within this context, focusing too narrowly on improving the tools of the trade seems inadequate; a serious effort has to be made to open up new vistas, refine and develop new approaches and tools, and inquire into new experiences. In our contribution to this debate, we propose to look at three strands of current thinking about improving learning in development, each at a different level of societal complexity, and to match each with evaluation practice and with the approaches and tools that are used, separately or in combination.

Box 1: Traditional uses of evaluation results by development agencies

Instrumental use: operations are tried and tested and results are fed into planning.

Conceptual use: findings trickle down into the organisation and take the shape of new ideas, concepts and new ways of structuring operations.

Legitimising use: legitimises decisions and positions that have already been taken on other grounds.

Ritual use: evaluation as a symbolic act.

No use: potential users are not aware of findings, or see no relevance in them.

Source: J. Carlsson (2000), "Learning from Evaluations", EGDI. Based on a sample of nine evaluations carried out by SIDA, interviewing only stakeholders who actively participated in the evaluations.
Improving feedback to development policy and programming
Probably the most straightforward way of enhancing learning is to improve feedback from evaluations and its effects on development policy and programming. However, many of the existing feedback mechanisms are still mainly one-directional, drawing on the logic of information dissemination to selected target groups rather than communication around evidence as an iterative learning process. The OECD/DAC Working Party on Aid Evaluation (2001) reviews some of the current experiences. Most call for more intensive involvement of Southern partners and stakeholders so that they can benefit from existing learning and feedback routes.
There is also a need for a more direct link between feedback and the planning and monitoring of country programmes. Participatory monitoring and evaluation is recognised as a valuable addition to more conventional approaches. IFAD recognises the need to "shift the fulcrum of evaluation feedback to the South". Emphasis on results-based planning and management is seen as a way to improve the practical use of evaluation results.

Another example is the European Commission's "Fiche Contradictoire", which plots the recommendations drawn up by the evaluation team against the responses and action taken by those responsible for implementing them. DFID's "Public Service Agreements", linked to international development targets, set a framework for a government-wide appraisal of results. The communication of lessons through mass media to stimulate wide recognition is seen as an important element by most agencies. Finally, horizontal learning between development agencies is seen as an important element of the way forward.

Box 2. Examples of approaches to feedback from evaluations to development policy and programming

Impact Pathway Analysis & Research Uptake Analysis: this method tries to address shortcomings in impact assessment methodology by applying a more holistic approach to impact measurement. DFID used research uptake analysis in its Forestry Research Programme in Malawi as an aid to management and M&E. The approach provides managers with real-time signals of changing prospects for the achievement of their long-term objectives (Springer-Heinze, Hartwich et al.).

Joint Learning with Geographic Information Systems: uses the technology's integrative spatial analysis capability together with people's and planners' intimate knowledge of their own space. A different side of GIS is shown, beyond mapping and data management: that of jointly constructing and understanding the 'world' to be managed (Gonzalez, R.). Also see the CIET method (below).

Results-based management (RBM): involves partners in performance management, with the need to use a variety of performance indicators and to balance annual performance assessment and reporting with longer-term sustainability issues and objectives (USAID). Some agencies believe RBM is more useful for operational or policy departments than for evaluation work. The World Bank uses a Corporate Scorecard and Fast Track Briefs to inform senior management, a feedback loop in which key action points for management emerging from an evaluation are identified. Linking evaluations to RBM and the International Development Goals, rather than narrowly to project objectives, has been identified as a way forward (OECD, 2001).

Community Voice in Planning Initiative: uses the CIET (community information, empowerment and transparency) method and social audits to support government planners in their decision-making and evidence-based planning. It combines M&E with evidence-led learning tools for community and stakeholder use and builds capacity for data-gathering, analysis and communication (Andersson, N.), www.ciet.org.
Improving collective learning
A second approach has emerged among development thinkers, focusing on organisational learning. It recognises that development processes are the result of actions and interactions on the part of diverse social actors, all of whom perform parts in the same play. As a result, the active participation, capacity building and learning of all relevant actors becomes a fundamental, rather than an instrumental, condition, and the approach focuses on facilitating collective rather than individual learning. Policy-makers and/or donors become one of the pack rather than the main intended learners.
One of the most useful instruments this approach offers for understanding collective learning is the recognition of different types of learning, i.e. single and double loop learning. The first is aimed at improving actions within existing policies, rules and regulations, the second at renewing this action-governing framework itself (Argyris, 1992). Groot & Maarleveld add triple loop learning, which questions the underlying principles, norms and values of collective behaviour, including those underlying single and double loop learning. The same authors show the fundamental shift in the role of the evaluator when adopting an organisational learning approach: from a relative outsider, collector and judge of evidence to a facilitator of a process, a catalyst of collective inquiry and learning (Groot & Maarleveld, 2000). As such, the organisational learning approach to evaluation not only fundamentally changes the way social actors relate to each other in order to assess collective performance; it requires a fundamental shift in the role of the evaluator as well.
Another perspective on collective learning recognises "epistemic communities" that can help to advance learning on a topic among members and feed knowledge into policy processes. These communities often spring up spontaneously among like-minded groups and individuals and are often informal. However, they also run the risk of 'clique-building' and may prevent a diversity of opinions from being heard on a topic if the discourse is captured by the most vocal actors of the group (Sutton, 1999). A combination of self-criticism and internal as well as external evaluation of the way these networks and communities operate can help to acknowledge differences in views and intentions between group members. Learning from self-evaluations can also help realign priorities within the group and serve as a 'reality check' to avoid too narrow a discourse among closed-off opinion circles.
Box 3. A selection of approaches to facilitate collective learning for development

Mentoring: SIDA has designed a system of informal (often tacit) knowledge exchange between junior and more senior staff.

Performance Reporting Information System (PRISM): DFID has created a computer-based system to combine basic project management information with qualitative information on the nature and objectives of the programme.

Communities of practice: the World Bank has established 'communities of practice' around particular themes, with over 100 thematic groups across the organisation, to establish trust and a culture of sharing between staff, including evaluation staff.

Learning-based Approach to Institutional Assessment: in terms of institutional assessment, the approach looks at four areas: organisational performance, organisational capacity, organisational motivation and the organisational environment (Carden, IDRC).

Outcome Mapping: an integrated planning, monitoring and evaluation methodology with a learning-based and use-driven view of evaluation, guided by principles of participation and iterative learning. It captures the International Development Research Centre's (IDRC) experience in developing and implementing organisational learning, planning and evaluation (Carden, F.).

Knowledge to Action: evaluation for learning in a multi-organisational global partnership identifies six factors that made learning work (an in-house culture of self-evaluation, substantial funding, trust between partners, a team approach with external evaluators, actionable findings, and internal and external views working together). It stresses the need to combine internal self-evaluation and experiential learning with external views from evaluators (Solomon, M., Chowdhury, A.).

The Temporal Logic Model: represents an alternative to Logical Framework Analysis by introducing the concepts of organisational learning and 'learning loops' as ways to refine current strategies from a people- and knowledge-centred perspective (Den Heyer).

Evaluation and Learning System for Acacia (ELSA): looks specifically at evaluation and learning from the introduction of ICTs in poor communities. The approach builds on four components that are seen as integrative and overlapping: evaluation exercises (baseline and progress data), continuous learning, research studies (based on hypotheses on lessons learned), and multi-stakeholder interaction.

Improving societal learning
A third approach focuses on societal change and performance in dealing with resource dilemmas. A shift towards more interactive policy models was argued for more than a decade ago by, among others, Grindle and Thomas, who stressed that "unlike the linear [policy] model, the interactive model views policy reform as a process, one in which interested parties can exert pressure for change at many points… Understanding the location, strength and stakes involved in these attempts to promote, alter or reverse policy reform initiatives is central to understanding the outcomes." (Grindle and Thomas, 1991). But recognition of the shift from a "donor-centric" need to understand outcomes through evaluations to a "people-centric" view of turning evaluations into more broad-based societal learning around development options is more recent.
Weiss coined the term 'knowledge creep' in 1980, referring to how the conceptual use of evidence can 'gradually bring about major shifts in awareness and reorientation of basic perspectives.' These ideas have recently re-emerged in concepts such as knowledge management, and knowledge as a global public good, pioneered by the World Bank and others. Yet there are few linkages to broader use of the ongoing Country Analytic Work and assessments by donor agencies – much of which is still confidential and poorly shared horizontally between agencies, let alone used to enrich national debate (GDNet, 2001).
Although this discourse backs the use of scientific evidence to solve development problems, the risk is that it fails to connect with realities and evidence at the local level and with the schools of thought around recording and enhancing endogenous development processes and knowledge (see examples below). The concept of 'evidence and knowledge', linked more closely with learning, may be a way around this. Uphoff and Combs show that scientists and policy-makers alike often need to 'unlearn' the things they think they know in order to avoid "paradigm traps". This again calls for a more direct engagement with communities and stakeholders. They argue that engagement is particularly important because it can 'engender the communities that prompt one to become detached from dogma in ways not possible otherwise.' (Uphoff and Combs, 2001).
One of the risks, highlighted by Cooke and Kothari (2001), is that Participatory Rural Appraisal (PRA) methods can be used to co-opt people's participation into established development paradigms, and that this can reinforce, rather than counteract, existing inequalities. Increased societal learning from evaluation findings can in this sense be seen as an antidote to simplified development narratives and co-option, broadening the debate to the societal (and not donor-exclusive) level.
In order to come to grips with the complex social interactions, knowledge and information processes involved in technological and social innovation in rural areas, Niels Röling and others at the Communication & Innovation Studies Group of Wageningen University developed the knowledge systems approach. It definitively leaves behind the oversimplified linear notions of knowledge and technology transfer while posing innovation as an emergent property of the social interaction and learning among multiple stakeholders who, invariably, represent multiple intentionalities and (often conflicting) interests. The appreciative character of networking and learning is emphasised and a participatory action-research methodology for improving the social organisation for innovation – RAAKS – is developed and tested (Engel, 1997; Engel and Salomon, 1997). As this school of thought gradually evolved into cognitive systems thinking, modern theories of cognition provided it with the building blocks to inquire into dynamic learning situations. On the basis of work by Chilean scientists Maturana and Varela, for example, knowledge is understood as effective action in the domain of human existence. Internal coherence – between the various strands of learning – and correspondence – stemming from a structural coupling between the learner and his or her domain of existence – are considered the main drivers of the learning process (Röling, 2002).
The perspective emerging from all of the above is clearly the most recent, most inclusive and least developed one. However, it promises far-reaching consequences for our thinking on development cooperation. Traditional evaluation as we know it gradually disappears into the background and is replaced by multiple forms of evidence-gathering by different stakeholders, adaptive management of resources, communication and negotiation, conflict resolution strategies and growing attention for the need to create enabling conditions through good governance. Governments and donors are no longer standing on the sideline; they are seen and critically assessed as fundamental players that enable, or may disable, society's chances to learn adequately. As it calls for enhancing societal learning, recognising knowledge and learning as fundamental building blocks of development, this approach focuses on societal change, governance issues and resource-based identities to address development problems. As increasingly called for by development policy-makers and field workers alike, it seems to hold the potential to link evaluation to governance issues, such as performance-based resource allocation and the functioning of democratic institutions.

Box 4. A selection of innovative approaches to societal learning

Going to scale with participatory monitoring and evaluation (PM&E): various forms of participatory monitoring and evaluation have been applied to widen learning from evaluations. In order for societal learning to happen, these small-scale efforts need to be taken to scale, and the social and political dimensions of the scaling-up process taken into consideration (Gaventa et al.).

Lessons from social marketing: evaluations of social marketing and communications initiatives stress the need for in-depth formative evaluation, including pre-testing of questionnaires and messages. Just as in the marketing world, evaluations need to be utilisation-focused: starting by identifying the actual users and the ways they intend to use the findings, staying flexible, and looking for "spin-off" products, i.e. others who may also find the information of use and who could widen the learning circle around findings (Balch & Sutton, 1997).

Linking up with local dynamics: although all participatory monitoring and evaluation methods include local views and dynamics in the process in some form, some take this further than others. Drawing on the cognitive systems literature, the Linking-Local-Learning approach is based on three steps: analysing the past (of a given community), visioning the future, and operationalising the vision. In doing so, the key factors of local dynamics need to be taken into account and recorded. An 'action spiral' is suggested to make it possible to adapt to the changing context of development, taking into account the role of evaluations and turning it into societal learning (Hounkonnou, 2001).

National dialogue platforms: discussing findings with a more broad-based national constituency has proved to be a successful approach in Laos, where it helped strengthen donor-government-stakeholder relations (OECD). The challenge is to open up these sorts of debates to a societal level.

RAAKS: participatory action-research allowing stakeholders to design improvements in the way they strategise for innovation at the community or sector level. By gathering and analysing information about the way they are currently organised and comparing actual impacts with desired results, local and/or institutional stakeholders design ways to strengthen their capacity to achieve change. Outcomes are communication, information and organisational strategies, and measures to enhance networking and learning (Engel and Salomon, 1997).

Media partnerships: moving to wider in-country dissemination, donors have started to form more strategic partnerships with the media and to explore new ways of using already established channels, such as radio shows, to communicate lessons. In Afghanistan, the BBC World Service included mine awareness messages that came out of a UNOCHA National Mine Awareness Evaluation (1997).

Socialising evidence for poverty alleviation (SEPA): based on the 'pressure of fact' philosophy, the CIET method (community information, empowerment and transparency) uses social audits and programme evaluation findings for social mobilisation around evidence at different levels simultaneously (community, district, national). Use of and mobilisation around evidence take place at the same time as the iterative fact-finding and feedback cycles are carried out. Following a 10-point plan, this approach is currently being piloted in Bangladesh, Pakistan and South Africa (www.ciet.org).

Adaptive environmental and resource management: "That surely is at the heart of sustainable development – the release of human opportunity. It requires flexible, diverse, and redundant regulation, monitoring that leads to corrective responses, and experimental probing of the continually changing reality of the external world." (Holling, 1995)

Dilemmas in strengthening the learning function of evaluations

From the above rough sketch of perspectives, we may extract a number of dilemmas affecting current evaluation practice. We believe it is the way in which we eventually respond to these questions that will decide the future of evaluations and learning for development.
Whose learning are we talking about?
When we intend to improve learning from evaluations, whose learning are we talking about? Does it refer mostly to a need on the part of donor agencies? Does it reflect a felt need of national governments, institutions and private agencies to learn? Does it include recognition of local actors' right to learn from their mistakes? The improving-policy-feedback approach mostly emphasises donors' or policy-makers' learning; it aims at improving policy performance. Hence, its quest for "participation of local actors" is instrumental, geared more towards actor consultation than towards effective local participation in decision-making. The collective learning approach, on the contrary, points to the need to involve all stakeholders in the learning process, recognising their complementary roles in reaching development targets. Their different points of view are instrumental ingredients in achieving a collective understanding of relevant development issues and the way forward. At the same time, diverse perceptions, responsibilities and cultural differences may make it hard to work together. Nonetheless, stakeholders are expected jointly to hold the keys to improving performance. The societal learning approach, finally, recognises possibly insurmountable differences in perspective among actors, yet at the same time the need to negotiate sustainable answers to common challenges. In line with its attention to adaptive management and aspects of governance, it focuses on institutional development for improving societal learning and decision-making.
Why strengthen learning? What purposes are to be served?
Evaluation serves a purpose, and a growing emphasis on learning from evaluation means a shift in intentions. In figure 1 we have tried to summarise the purposes generally served by evaluations. Traditionally, control has been an important purpose, directed at enhancing transparency and accountability, particularly from the donor point of view. The next important purpose can be seen as assessment, i.e. judging whether efforts are in fact contributing to achieving agreed-upon objectives. Now, learning to improve performance is increasingly becoming a purpose for evaluation. Eventually, evaluations might become geared towards adaptive management (used here, in line with Holling, 1995 (cf. box 4), to refer to a management style based on flexible regulations, continuous probing, observation and adaptation of policy and programming frameworks, stimulating human learning and institutional change so as to respond adequately to ever-changing understanding and circumstances), which requires as a prerequisite institutional learning and the development of those institutions involved in governing development.
One intriguing question is whether pursuing one purpose might imply a trade-off in terms of pursuing another. For evaluation, might these purposes be mutually exclusive? Some of the conditions required for effective social learning, such as openness, curiosity and trust among stakeholders, may be hard to create if one of the stakeholders is mostly interested in control. On the other hand, one may argue that as donors are no longer seen as the only ones in control of the development process, and national and local ownership becomes effective, more stakeholders take an interest in transparency and accountability. Similarly, one may argue that assessment, which includes passing judgment, might work against adaptive management, which requires cooperation. However, it seems more likely that the movement of intentions from quadrant one to four reflects a growing complexity of evaluation functions rather than the necessary exclusion of one or more of them.
Interestingly, such a development roughly mirrors the understanding we have achieved of the development process itself. Control-oriented evaluations imply that development objectives and the means to achieve them are known and agreed upon, and that the question is simply whether they are applied. Assessment implies looking at the level and/or quality of that application as well. Learning-oriented evaluations are very much open-ended and imply a questioning of objectives as well as means. Finally, adaptive management would imply an increased emphasis on developing enabling structures – institutions – to continuously reassess objectives, learn about how to achieve them and implement the lessons learned. In a very simplified and schematic way, the movement from quadrant I to IV mirrors the growing complexity we have encountered in development, not just in evaluations.
What do we mean when we say learning?
Learning is a buzzword, often used but seldom clearly defined. As a result, one of the challenges defined by the OECD/DAC Working Party on Aid Evaluation (2001) is the need to unpack the learning concept. What do we mean by learning? Who learns, how and why? A quick review of the literature reveals at least three approaches that may claim to be useful in focusing on learning for development. The first is a more general approach to learning in complex and dynamic contexts from the field of (adult) education (Van der Veen, 2000). A distinction is made between reproductive, communicative and transformative learning. In complex situations these fulfil complementary roles, as their dynamics can be traced to distinct cognitive and motivational processes. The second focuses on organisational learning (King & Jiggins, 2002). Single, double and triple loop learning are distinguished according to the degree to which underlying organisational rules, values, norms and behaviour are truly affected. The third focuses on cognition (Röling, 2002), the process by which the organism deals with changes in context. It stipulates that social learning can be understood by taking into account two fundamental drivers of the cognitive process: the coherence sought among values/emotions and perceptions on the one hand, and theory and actions on the other, as well as the need for correspondence between these elements and the prevailing context.
The assessment of learning outcomes generally differs between traditions. Kirkpatrick's Hierarchy of Evaluation (cited in Van der Veen, 2000) distinguishes four levels of outcomes. First, the immediate or almost immediate reaction of the learners: do they feel they have learned? Second, the exam-type assessment: are the learners able to demonstrate what they learned? Third, do the learners effectively use what they learned in, for example, their work? Did they actually change their habits? And fourth, we may look for impacts at the community level. For example, does productive output rise? Do people make more use of public services? Likewise, in organisational learning, outcomes are defined at different levels. The effects of single loop learning may be assessed, for example, by looking at whether existing rules are understood and/or applied better; the outcomes of double loop learning, by asking whether organisational policies, rules and customs have been changed effectively; and finally, the impacts of triple loop learning, by tracing changes in organisational culture, norms and values (…).
Why learning from/in evaluation?
Monitoring and evaluation is only one aspect of enhanced learning. There is also learning-by-doing, that is, developing capacities through adequate knowledge and information management and through social learning. Why, then, give evaluations such a separate place in development practice? Apart from the obvious reasons – accountability, control, curiosity about results – development practice seems to provide another, very significant one: as activists of development, we are all in a hurry to see results. Creating space for inquiry and reflection has not yet met with equally strong and consistent support, partly because of the length of time it normally takes to harmonise divergent opinions in more participatory approaches. We should probably be careful to protect the evaluative space there is and to improve its outcomes.
Wide-spread learning versus the quality of the learning process
When learning through studying one’s own and others’ performance becomes wide-spread, what consequences does such “popularisation” of evaluation entail for development practice and practitioners? One may argue that quality will be reduced due to a lack of uniformity of methods: scientific “rigour” might be at stake. On the other hand, one may counter that quality is increased by a greater richness of opinion, while stakeholder participation enhances the applicability of results. Unfortunately, although these topics are widely discussed, very few studies of learning for development so far pay much attention to assessing learning and its concrete outcomes.
Power relationships and learning
Maybe even more than other fields, the development arena is characterised by skewed power relationships between those who hold the keys to decision-making about resource allocations and those who do not. How do such differences in power among stakeholders, even when local ownership and development partnerships are intended, affect the process and outcomes of learning? In their analysis of institutional learning with respect to local forest management, Engel et al. (2001) point to the need for empowerment of local stakeholders and conclude that even then the national policy context may make or break the process at the local level. This leads us directly to the question of the governance of learning: Who sets the rules? Who monitors them? And what about evaluating the performance of (learning) evaluators?
Learning and the role of the evaluator(s)
As we have argued, the organisational learning approach fundamentally changes the role of the evaluator. From a distant, research-oriented person trying to systematise the known and unearth the hidden, he or she becomes a process facilitator whose greatest skill is to design and organise others’ learning effectively. Stakeholder analysis and communication knowledge and skills become increasingly important, as does managing group dynamics. Objectivity is still a prerequisite, yet subject-matter knowledge might at times be a hindrance. When we move further towards societal learning, at times even objectivity may be challenged, for example, when the empowerment of disadvantaged groups is seen as a prerequisite for full participation in joint learning. Situation analysis, a comprehensive as well as practical understanding of society and its development, and conflict resolution and negotiation skills will become crucial. In other words, enhancing learning will eventually mean fundamentally restructuring the training, methodological baggage, professional skills and outlook of (would-be) evaluators.
Can learning be mainstreamed through evaluation?
At first sight, pushing evaluation into a learning mode seems easy. And it is, if one concentrates on methodologies to increase the learning content without fundamentally altering the evaluation exercise itself. However, we hope to have shown that such a move almost naturally leads to the next level: adaptive management. The consequence is that agencies inclined to learn more about what they are doing and how they are doing it elsewhere will be drawn irresistibly into asking questions about what they are doing and how they are doing it in-house. An emphasis on institutional development therefore seems a logical complement to institutionalised efforts to improve learning from evaluation. We would dare to suggest an even tighter link: improving learning through evaluation beyond a rather trivial level without a strong institutional development component tied into it seems impractical.
How to create conditions for effective learning?
Different approaches are, again, followed by those studying the conditions necessary for enhancing learning. General conditions often mentioned in practical situations are the absence of threats to openness and the sharing of opinions; curiosity and motivation on the part of the participants; the availability of intellectual and/or practical challenges; and possibilities for practical follow-up to the learning process. A systematic treatment of conditions is presented by Guijt and Woodhill in their paper on learning through e-networks. Here and there slightly rephrased for the purpose of this paper, their six “Conditions for Learning from Action” through Internet-based networking read as follows (Guijt & Woodhill, 2002):
1. Individuals are motivated to participate actively. Important factors are the match of participants’ interests and intentions with the opportunities the network offers and the time/space their institutions allow them to participate in networking activities.
2. A systematic and explicit learning process. The authors advocate a step-wise approach that makes the learning process accessible and defines intermediary goals and outputs.
3. E-network participants are able to access the Internet effectively and efficiently. Trivial as it may sound, this condition is not only applicable when modern electronics are involved. Most field workers have experience with failing electricity supplies, or with learning instruments overly dependent upon written language, disrupting their learning sessions.
4. Development initiatives are designed or modified to be learning-oriented. Learners in non-learning, non-responsive environments tend to get frustrated.
5. Collaborative learning processes are institutionally supported. Of course, institutions need to actively support learning by being flexible and rewarding towards their members who (attempt to) effectively engage in the process.
6. Clarity about the opportunities and constraints related to collaborative learning. What is the potential benefit, for the institutions and for their members, of participating? What minimum participation is required to achieve those benefits fully? Who should participate to guarantee the expected outcomes? Who might?
Some suggestions for the way forward
A number of lessons may be drawn from the above. These are not new. Institutional development and evaluation specialists have been highlighting similar issues insistently for some time now. Our purpose here is to underline those we deem central to our intentions to enhance learning through evaluation.
Effective learning through evaluation implies a shift in perspective with respect to the way development processes and, indeed, their institutions are managed. For one, flexibility and responsiveness to lessons learned must become a key institutional feature. As a consequence, institutional development is a necessary complement to learning by development agencies.
The balance between accountability and transparency on the one hand, and learning and adaptive management on the other, needs further scrutiny. This is particularly important in view of the power differentials that exist between donors and recipients in development cooperation. The use of methods to learn-how-to-do-things-better might not match easily with those aimed at deciding-whether-or-not-to-continue-financing. Ways and means to safeguard the credibility of evaluators and development professionals engaged in learning through evaluation will have to be developed.
In terms of methodologies, a pragmatic approach is needed rather than a dogmatic one. There is no single set of tools that guarantees learning content, and many tools can be turned into learning ones. Attention, however, should be paid to the principles governing success in a learning setting: mutual respect, inclusive thinking, the wish to understand others’ perspectives, willingness to take criticism seriously and preparedness to renew institutional cultures, rules and procedures seem more important for long-term success.
Another element emerging regularly from the literature is “a plea for practical evaluation” (Balch and Sutton, 1997). Complexity is a regular feature of multi-actor learning processes, and that is exactly why practitioners have to strive to keep it doable. Irene Guijt provides us with some tough food for thought: “This stuff is worse than tiririca!” exclaims a Brazilian farmer after his first experience in co-designing a participatory M&E system. Tiririca is a local weed that sprouts many new shoots when cut. Just so in M&E design: for every question the group had just answered, several new questions had emerged. When would the questions stop, the farmer wondered? And the methodological issues, Irene Guijt concludes, “…extend far beyond simply which method works best, as these are just a small part of the extensive communication processes that lie at the heart of M&E.” (Guijt, 2000)
A need exists to rethink the professional profile of evaluators who are to engage in enhancing learning. Next to research skills, such professionals should excel in communication, process facilitation, conflict resolution and negotiation. Their analytical skills and tools will have to include situation and stakeholder analysis, as well as group dynamics. A truly interdisciplinary education may be a prerequisite, as may a permanent and systematic exchange of practices and experiences among professionals and students.
A much stronger emphasis is needed on developing the tools of the trade. So far, research has hardly touched upon a systematic assessment of the learning effects of different monitoring and evaluation approaches, tools and techniques. A systematic orientation of approaches and methodologies towards specific developmental situations and learning needs has only just started, and the conceptualisation of learning for development remains, at the very most, fuzzy.
Given the societal and institutional implications of the quest for enhancing learning through evaluation, this seems one of those fields of inquiry where most can be learned from a systematic collaboration between evaluation researchers and practitioners from the South and the North. Some of the most inspiring collections of reviews of experience span cultural boundaries between countries as well as continents. International cooperation might have a distinct competitive advantage in furthering this field of study.
References
Acacia Initiative (1997). Evaluation and Learning System for Acacia: A Report Based on a Consultative Meeting held in Johannesburg, February 1997.
Andersson, N., Martinez, E., Cerrato, F., Morales, E. and Ledogar, R. (1989). ‘The use of community data in health planning in Mexico and Central America’. Health Policy and Planning 4(3): 197-206. Oxford University Press.
Argyris, C. (1992). ‘On organizational learning’ Cambridge (USA): Blackwell Publishers.
Balch, G. and Sutton, S., (1997). ‘Keep Me Posted: A Plea for Practical Evaluations’, University of Illinois/US Dept of Agriculture
Carlsson J. et al. (1999). ‘Are Evaluations Useful?’. Cases from Swedish Development Cooperation. SIDA Studies in Evaluation 99/1.
Carlsson, J. (2000). ‘Learning from Evaluations, Learning in Development Cooperation’. pp. 120-129. EGDI, Stockholm.
Cooke, B., Kothari, U., (2001). ‘Participation: The New Tyranny?’, Zed Books, London
Den Heyer, M. (2002). ‘Modelling Learning Programmes’. Development in Practice, Volume 12, Numbers 3&4, August 2002.
Engel, P.G.H. (1997). ‘The Social Organization of Innovation: A Focus on Stakeholder Interaction’ KIT Press, Amsterdam NL
Engel, P.G.H. and Salomon, M. (1997) ‘Facilitating innovation for development: a RAAKS resource box’ KIT Press, Amsterdam NL.
Engel, P.G.H., Hoeberichts, A. & Umans, L. (2001). ‘Accommodating multiple interests in local forest management: a focus on facilitation, actors and practices’ In: Int. Journal of Agricultural Resources, Governance and Ecology, Vol.1
Engel, P.G.H. & Salomon, M. (2002). ‘Cognition, development and governance: some lessons from knowledge systems research and practice’ In: Leeuwis, C., Pyburn, R. (eds.), ‘Wheel-barrows Full of Frogs’. Van Gorcum, Assen, the Netherlands
Gaventa, J. and Blauert, J. (2000). ‘Learning to Change from Change: Going to Scale with Participatory Monitoring and Evaluation’. In: Blauert, J., Campilan, D., Gaventa, J., Gonsalves, J., Guijt, I., Johnson, D., Ricafort, R. (eds.) ‘Learning from Change: Issues and Experiences in Participatory Monitoring and Evaluation’. Intermediate Technology Publications, London.
Gonzales, R. (2002). ‘Joint Learning with Geographic Information Systems: Towards Participatory Technology Development’. In: Leeuwis, C., Pyburn, R. (eds.), ‘Wheel-barrows Full of Frogs’. Van Gorcum, Assen, the Netherlands.
GDNet (2001). ‘The Decision-making Process’. http://nt1.ids.ac.uk/gdn/power/a12.htm
Grindle, M., Thomas, J. (1991). ‘After the Decision: Implementing Policy Reforms in Developing Countries.’ World Development Vol. 18 (8)
Groot, A. and Maarleveld, M. (2000). ‘Demystifying facilitation in participatory interventions’ Gatekeeper Series, Wageningen University, Department of Communication and Innovation Studies (submitted)
Guijt, I. (2000). ‘Methodological Issues in Participatory Monitoring and Evaluation’. In: Blauert, J., Campilan, D., Gaventa, J., Gonsalves, J., Guijt, I., Johnson, D., Ricafort, R. (eds.) ‘Learning from Change: Issues and Experiences in Participatory Monitoring and Evaluation’. Intermediate Technology Publications, London.
Guijt, I. and Woodhill, J. (2002), with Berdegué, J. and Visser, I. ‘Learning through E-networks and Related M&E Issues”. FIDAMERICA/Grupo Chorlaví.
Holling, C.S. (1995). ‘What Barriers? What Bridges?’ In: L.H. Gundersen, C.S. Holling, and S.S. Light (eds.) ‘Barriers and Bridges to the Renewal of Ecosystems and Institutions’. Columbia University Press, New York.
Hounkonnou, D., (2002). ‘Linking up with local dynamics: Learning to Listen’ In ‘Wheelbarrows Full of Frogs: Social Learning in Rural Resource Management.’ Leeuwis, C., and Pyburn, R. (eds.), Van Gorcum, Assen, the Netherlands
Jiggins, J., and Röling, N. (2000). Adaptive Management: Potential and Limitations for Ecological Governance. International Journal of Agricultural Resources, Governance and Ecology 1 (1) 28-42.
King, C. and Jiggins, J. (2002) ‘A systemic model and theory for facilitating social learning’ In ‘Wheelbarrows Full of Frogs: Social Learning in Rural Resource Management.’ Leeuwis, C., and Pyburn, R. (eds.), Van Gorcum, Assen, the Netherlands
OECD/DAC. (1997). Evaluation of Programs Promoting Participatory Development and Good Governance. Synthesis Report. OECD/DAC Expert Group on Aid Evaluation
OECD/DAC. (2001). Evaluation Feedback for Effective Learning and Accountability.
Röling, N.R. (2002). ‘Beyond the aggregation of individual preferences: moving from multiple to distributed cognition in resource dilemmas’ In: ‘Wheelbarrows Full of Frogs: Social Learning in Rural Resource Management.’ Leeuwis, C., and Pyburn, R. (eds.), Van Gorcum, Assen, the Netherlands
Solomon, M. and Chowdhury, A. Mushtaque R. (2002). ‘Knowledge to Action: Evaluation for Learning in a Multi-organisational Global Partnership’. Development in Practice, Volume 12, Numbers 3&4, August 2002.
Sutton, R. (1999). ‘The Policy Process: An Overview’. ODI Working Paper 118. London.
Uphoff, N., Combs, J. (2001). ‘Some Things Can’t be True But Are: Rice, Rickets and What Else? Unlearning Conventional Wisdoms to Remove Paradigm Blockages’, Cornell International Institute for Food, Agriculture and Development, Cornell University, New York
Van der Veen, R.G.W. (2000). ‘Learning Natural Resource Management’ In: Guijt, I., Berdegué, J.A. and Loevinsohn, M. (eds.) ‘Deepening the basis of rural Resource Management’ ISNAR & RIMISP, The Hague NL/Santiago de Chile