TIPC is not only in the business of creating new policy frameworks for transformative change; a key programmatic focus for the Consortium is also the evaluation of such policies. At times, evaluation has become little more than a buzzword for researchers, funding agencies, policymakers, governments, and NGOs, who are under increasing pressure to provide measurable outcomes that demonstrate the effectiveness or efficiency of their projects. But are current evaluative strategies “fit for purpose” for the emerging framework of Transformative Innovation Policy (TIP)? In their efforts to demonstrate that ever-elusive impact, many projects drown in a sea of indicators demanding pithy case studies and statistics, turning evaluation into a meaningless checklist.
Rather than acting as a perfunctory check-in at specified points of a project, evaluation at its most effective should sit at the core of the policy, across different levels of implementation (e.g. programme, project). TIPC seeks to interweave evaluation strategy with the process of policy experimentation. The co-creation of new policy frameworks will mean the co-creation of new evaluative strategies – transformative policies require transformative evaluation. The Consortium is moving towards developing an evaluation strategy (based, among other elements, on a generic Theory of Change) and a set of guidelines that will make it possible to assess policy experimentation and the contributions of National Science and Technology Agencies to transformations in socio-technical systems towards sustainability.
What could an evaluation strategy for TIP look like? Transformative Innovation Policy differs from previous frameworks in that it explicitly seeks to address social and environmental challenges from the outset, rather than assuming – or, indeed, hoping – that economic growth will eventually resolve the relevant social and environmental problems. This alone means that the criteria for assessment will differ markedly from those of previous policy evaluations. In the coming months, TIPC consortium members will reflect on which evaluative strategies could work in their policy environments, contributing to an overall evaluative framework. The Consortium team will explore tools, methods, questions, criteria, governance systems, and evaluation processes.
A Theory of (Transformative) Change:
An important first step is creating a Theory of Change (ToC) for Transformative Innovation Policy. A ToC is an evaluative planning tool that enables stakeholders to work backwards from policy objectives, mapping the expected outcomes, outputs, processes, and inputs that need to be in place to achieve the project goals. Through this, an explicit pathway is laid out for the framework and the evaluators. Alongside a generic Theory of Change that lays out how multiple TIP experiments will ultimately feed into positive change in the wider socio-technical system, more specific ToCs will be needed for individual experiments, reflecting the diverse contexts in which they operate.
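To make the backwards-mapping logic concrete, here is a minimal sketch of a ToC expressed as a simple data structure. It is purely illustrative: the field names and example content are hypothetical, not TIPC terminology or an actual TIPC Theory of Change.

```python
# A minimal, purely illustrative sketch of a Theory of Change as a data
# structure. All names and example content are hypothetical, not TIPC's.
from dataclasses import dataclass, field

@dataclass
class TheoryOfChange:
    objective: str                                      # long-term transformative goal
    outcomes: list[str] = field(default_factory=list)   # changes the experiment should produce
    outputs: list[str] = field(default_factory=list)    # tangible products of activities
    processes: list[str] = field(default_factory=list)  # activities generating the outputs
    inputs: list[str] = field(default_factory=list)     # resources those activities require

# Working backwards from the objective, as the ToC method prescribes:
toc = TheoryOfChange(objective="Transition towards sustainable urban mobility")
toc.outcomes.append("City actors build capacity to run low-carbon transport niches")
toc.outputs.append("Shared learning agenda across pilot projects")
toc.processes.append("Participatory workshops with policymakers, firms and users")
toc.inputs.append("Funding, facilitation expertise, stakeholder trust")
```

Reading the fields bottom-up (inputs, processes, outputs, outcomes, objective) recovers the explicit pathway that evaluators would then assess.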
Planning and Participating in Evaluation:
The process of an evaluation should itself reflect the framework of the policy it is assessing: as we expect TIP creation processes to be inclusive and participatory, so must the evaluation be. Traditional evaluations are often led by external actors – experts specialising in evaluation who plan and implement the process. In contrast, it is crucial to TIPC that the actors involved in policy experiments also participate in (almost) all stages of the evaluation of those experiments, with external experts mainly acting as facilitators. The inclusivity characterising the process of policy experimentation should also manifest in the evaluation process; particular attention must be paid to the power dynamics that determine which voices are usually heard more than others.
New questions, new answers:
An evaluation can only be good if the questions that drive it are clear. Just as the questions leading TIPC policy experiments will differ markedly from those of previous policy approaches, so must the evaluative questions. Evaluations generally focus on four areas – impacts, outcomes, process (or activities), and inputs. Within those four areas, some key questions assume a special relevance for TIP evaluation (these are gathered in the sketch that follows this list).
- What was generated?
Outcomes are the direct results of policy experiments or processes; for many programmes or projects these include reports, findings, or papers. For TIPC, the outcomes of interest are those that enable actors to keep improving and contributing to further transformative initiatives. These will be the results of a learning process, such as actors' newly built capacities or actionable knowledge that can be carried forward. Changes within the organisations that undertook the experiment could also be explored, whether structural changes, an uptake of further experimentation, or new methods now being adopted specifically in relation to transformative change and transformative innovations. Have the actors involved in the experiment learnt new, transferable skills that could further experimentation in transformative change? And, more importantly, have actors reflected deeply on their current socio-technical practices and routines? Can we observe transformations in narratives, behaviours, and networks in relation to the existing socio-technical regimes?
- What was accomplished?
Impact is often what policymakers are most interested in when it comes to policy evaluation. For TIPC, the impacts of interest should relate to the social and/or environmental challenges that have been addressed, especially those related to the Sustainable Development Goals (SDGs). Taking the SDGs as a starting point, possible questions could ask whether specific SDGs have been tackled and with what degree of success. In this sense, it is important to define – within this context – what a ‘success’ is, and to avoid talk of ‘attribution of effect’ in favour of the idea of factors contributing to transformative change. How can we assess the contribution to a transition towards sustainability in specific socio-technical systems?
- How was it completed?
Charting and assessing the process of an experiment can often shed light on how success (or indeed failure) came about, and how the process could be improved in the future. Unlike other projects, an unsuccessful TIPC experiment will not be written off as a failure; there will still be valuable lessons from that experiment's process that can lead to success and learning in future projects. For TIPC, policy experiments must provide opportunities for reflexivity and deep learning. How did power dynamics affect the decision-making and learning processes? Have different directions for technology and pathways of change been considered?
- What was invested?
Evaluators and policymakers often focus on key inputs such as financial or human resources. While these must certainly be factored in, the process of evaluating inputs for TIPC will need to focus on less tangible but equally important elements. If participatory evaluation is to be achieved, trust must be present, alongside support, commitment, and expertise. Was there a collective willingness to collaborate on a mutual footing? Were different and conflicting interests acknowledged?
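As a purely illustrative aid, the sketch below gathers the guiding questions above into a simple checklist, grouped by the four evaluation areas. The structure and names are hypothetical, not a TIPC evaluation instrument.

```python
# An illustrative checklist grouping the guiding questions above by
# evaluation area. Structure and names are hypothetical, not a TIPC
# evaluation instrument.
TIP_EVALUATION_QUESTIONS = {
    "outcomes (what was generated?)": [
        "Have actors learnt transferable skills for further experimentation?",
        "Have actors reflected deeply on socio-technical practices and routines?",
        "Can we observe changed narratives, behaviours and networks?",
    ],
    "impacts (what was accomplished?)": [
        "Which SDGs were tackled, and with what degree of success?",
        "Which factors contributed to transformative change?",
    ],
    "process (how was it completed?)": [
        "How did power dynamics affect decision-making and learning?",
        "Were different technological directions and pathways considered?",
    ],
    "inputs (what was invested?)": [
        "Was there collective willingness to collaborate on a mutual footing?",
        "Were different and conflicting interests acknowledged?",
    ],
}

# A participatory session might simply walk through each area in turn:
for area, questions in TIP_EVALUATION_QUESTIONS.items():
    print(area)
    for q in questions:
        print(f"  - {q}")
```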
What was learned? A formative approach for TIP evaluation
The evaluation of TIPs for sustainability transitions presents particular challenges that call for careful reflection on the evaluation approach to be used. Here, an opportunity arises from reviewing the existing literature in combination with the experience of policymakers and local activists involved in STI policy and – sometimes – transformative processes. A first conclusion that seems to emerge is the need for a formative approach to policy evaluation, in which second-order (deep) learning and reflexivity are central elements. The function of TIP evaluation should be formative rather than serving accountability purposes. This also reinforces the necessary connections between evaluation and other core activities of the Consortium: capacity building first of all, but also experimentation and research.
Against this backdrop, TIPC can develop a consistent evaluation framework for delivering a more specific evaluation strategy and a set of guidelines. Working towards a fresh approach to transformative evaluation is a reflexive, participatory process, interwoven at all stages of the experimentation and creation of a policy framework. As the framework evolves and adapts, so will the evaluation. Transformative Innovation Policy experiments will need a new evaluative strategy, co-created with the same actors who conduct the policy experiments. As the Consortium delves deeper into the underlying assumptions and mechanisms at the base of its upcoming policy experiments, there will be space to design specific evaluative strategies for Transformative Innovation Policy.