Evidence for innovation

In earlier themes, we explored the challenges service systems currently face in meeting the needs of families and young people. We have discussed the need to shift to genuinely developmental and participatory ways of working, which recognise local and community expertise and engage with, rather than seek to reduce, complexity. These processes involve bringing together evidence and wisdom from non-traditional sources to support innovation.

This post explores our relationship to evidence and rigour. It proposes the different evidence-gathering practices needed to support learning, experimentation and decision-making in complex settings. It also advocates for a better understanding of the role of practice-based evidence, and the potential of prototyping both to develop new responses and ‘innovations’ and to produce learning and evidence that informs decision-making in sensitive settings.

Such practices require us to be more critical and more open about what we mean by evidence and rigour, and to build our collective capacity and understanding of how rigour is established in the face of dynamic and emerging challenges.

They also require us to challenge or question assumptions and norms about evidence, how it is constituted, who validates knowledge, what knowledge systems are privileged and who gets to decide.

This post explores:

  • How might we identify our own relationship to data and evidence?

  • Where is the current evidence bias in policy and service settings?

  • What is the value of prototyping to build practice-based evidence?

  • How do we better understand and establish rigour?

  • What does this mean for the skills and roles of researchers?

The thinking, practice and definitions shared here are also still developing and exploratory. This post draws on ideas and work that will be presented in more depth in a forthcoming paper co-authored by Penny Hagen (Auckland Co-Design Lab), Ingrid Burkett (Griffith University), Chris Vanstone (TACSI), and Carolyn Curtis (TACSI). It also draws on ideas in development by the Centre for Community Child Health about the relationship between evidence and practice.

How might we identify our own relationship to data and evidence?

[Image: sketch of evidence-based practice and practice-based evidence]



A place to start the conversation is in exploring some of the relationships we have to data and evidence. The sketch above is an extension of one initially developed by Dr Ingrid Burkett. It’s not intended as definitive, just a tool for us to consider how we think about data and evidence. The diagram illustrates evidence-based practice on the left and practice-based evidence on the right. It is intended to suggest that these different types of practice can have complementary roles, and that both are necessary when seeking to understand what will meet the needs of families and young people.

Evidence-based practice refers to the development and application of interventions that have been shown to be effective elsewhere through specific and consistent forms of trials and testing, such as randomised control trials (RCTs). Relying on or drawing from the existing and established evidence about what works, or applying ‘evidence-based interventions’, is, for example, a typical approach to identifying interventions, programs or services in health contexts.

Practice-based evidence draws upon the existing evidence base and creates new knowledge through working together with communities, testing and prototyping on the ground to find what is needed and what works in a local context. Practice-based approaches recognise that expertise sits with practitioners as well as with communities and families, and that effective definition of issues and responses needs to take into account local context, history (including history of whenua), place, culture, values, resources and assets.

The different types of data or sources of evidence described in the middle include:

Big Data (and other forms of quantitative data). Big Data refers to large sets of quantitative data that can provide a view or identify patterns of what is going on from a numerical or statistical point of view e.g. how much, how many, when, where, what kind. It relies on large numbers of people or data sets.

Qualitative Data or Thick Data [1], which goes deeper into behaviours, social context, motivations and underlying reasons. It reveals why and how, involves smaller numbers of people and is often produced through ethnographic methods. Thick Data and Big Data provide different kinds of insights at different scales.

Expertise, which involves drawing upon the understanding, lived experience, know-how, values, perspectives, culture and beliefs of those involved in, impacted by, and implementing the change process, including practitioners, whānau, children, young people and community members.

Traditional knowledge, which refers to indigenous knowledge – or systems of knowledge developed over centuries by communities from a particular culture or place [2]. Mātauranga Māori for example is “the body of knowledge originating from Māori ancestors, including the Māori world view and perspectives, Māori creativity and cultural practices” [3].

The approaches and types of data included in the image represent different perspectives about knowledge, data, evidence and rigour, some of which are more dominant in the current policy and service system. Our willingness to accept all of these as legitimate forms of data or evidence, and the extent to which we are familiar with employing them as a way to respond to issues, depends on our backgrounds, training, epistemological viewpoints and world views. Here we argue that the ability to draw upon a plurality of data sources and ways of seeing and knowing the world will be critical to our ability to work collaboratively and in complexity-informed ways. However, if we have been trained to value certain things as evidence or ‘legitimate’ (and not others), moving to a different view can require a significant shift in mindset.

Where is the current evidence bias in policy and service settings?

Other work has identified that in policy and service settings there are ‘unwritten rules’ which can include narrow ideas about evidence and rigour. While there is increasing recognition in mainstream or Western science settings of indigenous knowledge systems as legitimate in their own right, references to evidence or evidence-based practices and programmes in policy and service settings still rarely include this type of knowledge or evidence. While policy-makers may draw upon both qualitative and quantitative evidence, as the quote below suggests, quantitative evidence in particular is favoured for decision-making and reporting, as it is perceived as definitive or as providing certainty.

“Different research approaches do different things and offer different kinds of validity, to allow policy officials and ministers to reach decisions. But in the culture of policy making, the deductive logic has the allure of offering definitive evidence.” [4]

It is also often default practice in government to advocate for evidence-based interventions or evidence-based practice, which appears to be a way of arguing for investing in approaches that are proven – quantitatively – to be effective. Most commonly this involves evaluating the quality and strength of evidence in accordance with a hierarchy that generally ranks experimental designs using Randomised Control Trials (RCTs) at the top [5].

This means that the demonstration of efficacy has been achieved through experimental methods that rely on targeted, tightly focused selection of a few specified variables. As Robert Chambers suggests in his book ‘Can We Know Better?’, such approaches might lend themselves to controlled or controllable scientific or medical research where there is a linear logic, and where phenomena are measurable and can be standardised. But these approaches are less applicable to complex social issues and contexts dominated by people and processes that are emergent, political, dynamic and harder to control [6].

Also, the standardised testing of interventions to establish impact or efficacy in controlled settings usually involves little focus on the processes of implementation. The limitations of this approach are articulated in a Centre for Community Child Health Policy Brief on using evidence in policy and programs:

“Real-world implementation of interventions are infinitely more complex (and unpredictable) than a research study in which multiple variables have been controlled or corrected for.” 

The “plug and play” approach of taking interventions demonstrated in one context or region into another also often ignores what other conditions, infrastructure or implementation contexts may have been at play in the original context. Interventions may also be selected without regard to the context into which they are being “plugged”, or to how success was judged and by whom – often not those the intervention was meant to benefit. While an “evidence-based approach” that relies on experimental designs appears to be a strategy for reducing risk and guaranteeing effective outcomes, this is not always the case.

As outlined in the previous post “Moving away from just programs and services”, conventional Western approaches to intervention development have consistently reinforced the notion that expertise in how change might be achieved is held only by experts and professionals, rather than recognising the role, resources, capacity or agency of families and local communities in determining their responses to the things that matter to them. Other voices have often been absent in the identification of issues as well as responses, including, in particular, the voices of indigenous communities often ‘targeted’ by such interventions [7]. In Aotearoa New Zealand, for example, adherence to evidence-based approaches consistently presents problems given that no internationally trialled interventions have been developed in the context of the Treaty of Waitangi or for the sort of cultural contexts present in Aotearoa.

The RCT design and other quantitative approaches have numerous strengths when applied in the right circumstances. However, their pre-eminent status in determining what counts as evidence is not optimal – or even feasible – for generating the knowledge and learning needed to address complex issues. Narrow definitions of evidence and knowledge risk holding us in the patterns of the status quo, which often reinforce particular kinds of Western world views and value systems.

Researchers from the Brookfield Institute note:

“There are ‘phantom rules’ or orthodoxies in government around what is allowable and qualifies as valid evidence that may inhibit policy professionals from innovating.” [8]

While a critical lens does need to be applied to the “existing evidence-base” in innovation, we aren’t arguing for this evidence to be disregarded. Drawing upon existing evidence can help us avoid practices that are harmful or that have been shown to be ineffective. It can also provide strong arguments for a direction, or demonstrate how behaviours may at times run counter to what we might intuitively assume. Existing evidence constitutes a knowledge base that we can draw upon to avoid making mistakes and to ensure we further knowledge rather than operate in a vacuum or waste resources trying something that’s already been tested.

A criticism of some design practices is that ideas or prototypes have at times been developed largely disconnected from this existing evidence base, and therefore lack credibility or the opportunity to build on existing knowledge. Yet practice-based approaches help us to engage with the complexity of people, place and policy interacting. Such approaches can inform us about how things actually work on the ground in context, as well as what is relevant and meaningful to understand and support change in a particular setting or locality. They help to identify and build upon local knowledge and strengths, and are particularly relevant in complex settings where other efforts have failed in the past and novel approaches are needed.

The reality is we don’t know what will work to address the complex challenges families and young people are currently facing. With complex issues we argue that a rigorous approach is to harness evidence-based practice and practice-based evidence, bringing both into how we work alongside communities to inform practice and policy design that works for those communities.

What is the value of prototyping to build practice-based evidence?

In an effort to move beyond evidence-based practice, the Centre for Community Child Health advanced the concept of evidence-informed practice, which brings together three components: evidence-based programs/strategies, evidence-based processes, and client and professional values and beliefs [9]. Such practice recognises different forms of evidence (i.e. that of previous research as well as the expertise and lived experience of professionals and families or ‘clients’) and puts equal emphasis on both the program design and the processes for implementation.

In this interdisciplinary approach different forms of evidence and expertise come together to inform an approach that is best practice in context. This is an important advance on evidence-based practice and one that recognises different forms of expertise and local knowledge.

The development of practice-based evidence goes further than this in the creation of new knowledge through collaborative cycles of innovation: rigorous prototyping, experimentation and testing, working together with communities to learn what works in their context. Cycles of prototyping for example can produce new insights and knowledge about what might work, where and why, as well as new responses, capabilities and capacities in the teams and communities involved in the work.

More and more, co-design approaches that involve prototyping in situ (sometimes described as field prototyping or live prototyping) with families, stakeholders and service providers are becoming a way of understanding and trying to catalyse change in complex and dynamic settings.

Prototyping as a concept has been embraced because of its emphasis on testing assumptions, ideas and concepts quickly and early. Prototypes make potential ideas tangible, allowing us to test things out in practice with those impacted, get fast feedback on potential approaches, and see how proposals might actually work (or not) in the real world.

Prototyping is a collaborative learning process that involves cycles of trying things out and then iterating in response. It helps to reveal the gaps and barriers between policy intent and outcomes in practice. Prototyping reduces risk by exposing issues early on: significant effort and money can be saved by demonstrating the implications of options or directions that may turn out to be infeasible, before significant investment has been made. Prototyping can also reveal opportunities for different approaches that may otherwise have remained hidden. The prototype or concept evolves over time as we learn more about both the context and issue and the best ways to respond.

In complex settings prototypes serve the role of provoking the system in ways that tell us more about the system and where we should focus for greatest impact. For example, trying to prototype a new intervention for improving housing conditions – such as better heating a home – may quickly reveal issues at an implementation level that prevent what might seem on the surface to be a straightforward intervention. Prototypes can just as easily reveal policy and contract conditions that prevent a proposed solution from ever being implemented. Such ‘strategic learning’ might mean that the focus for change would be better targeted, for example, at funding and contract structures rather than implementation issues or new intervention ideas.

Historically, prototyping in innovation settings has often been positioned as a process for finding successful ideas or solutions that can ‘scale’ for impact. When trying to address complex issues and catalyse change in policy and service settings, the primary role of prototypes – ‘successful’ or not – is to produce learning that can support decision-making about plausible directions and ways of working, prior to significant commitment of investment or resources. The role of prototypes in exposing the nature of the system, and the behaviours that underpin current activities and outcomes, is especially important when seeking systems-level changes which go beyond any single intervention, organisation or issue. It matters less whether a prototype has been successful as an intervention or idea. It matters more what the prototype has made visible, what it has helped to activate in the system, and the learning or ‘evidence’ it provides about which direction to head.

In the context of innovation in policy and service systems change, prototyping can help us to:

  1. Test out and localise aspects of the existing evidence-base in particular contexts or communities: Does this thing work here in practice? How does this thing work here in practice?

  2. Develop an understanding of what responses are needed in a particular context – “how” things will work, or need to work, and how different factors play out in practice, including how policies interact on the ground.

  3. Reduce risk by testing out approaches and quickly identifying issues or barriers to implementation as well as some of the unintended consequences.

  4. Explore, generate and test plausible new approaches – when existing evidence is not sufficient or doesn’t yet exist.

  5. Enable local input and leadership into the development and understanding of what is needed in particular settings.

  6. Develop new knowledge through collaborative action and mutual learning between different groups working together.

Prototyping and other forms of learning-orientated experimentation offer the opportunity for forms of practice-based evidence to emerge that directly engage with the complexity of how things work on the ground and enable us to draw upon and build local knowledge. But we are still building our capacity to understand and leverage this potential.

To support the use of prototyping to produce practice-based evidence we need a better understanding of what rigour looks like in this setting. We need to develop our practice to ensure rigorous approaches that can legitimately claim to contribute to the broader evidence base. We also need to ensure that the value of the kinds of evidence produced through practice-based efforts such as live prototyping is understood in the context of what they offer.

Understanding and establishing rigour

Although rigour is often associated with particular types of methods, it is not fixed. Rigour, like research design or method, is contextual – defined by what is needed and appropriate to the situation we are working in and the questions we are asking. In testing evidence-based interventions, rigour is normally established in relation to adherence (usually without deviation) to the intervention model and the prescribed and standardised evaluation framework. What determines success and rigour is that the model was applied as intended, with a positive result that can be correlated with the intervention. In different contexts rigour is established in other ways.

In discussing rigour in design research, Biggs & Büchler argue that the obligations of rigour in all forms of research are, at their most basic, met by the practitioners’ ability to demonstrate the validity of their selected method to deliver the research solution [10]. They suggest that establishing rigour in research is based upon making explicit the necessity of a particular method, and that is what legitimises the whole process.

Rigour in research is the strength of the chain of reasoning, and that has to be judged in the context of the question and the answer – for example, in the context of social innovation as opposed to some other context. Therefore, when innovating with whānau or families using participatory paradigms, ensuring that whānau or family voice is driving and governing decision-making is part of the rigour.

Adhering to the values and principles agreed with our partners and being responsive and adaptive to whānau needs might also represent rigour. Following tikanga [11] may be an important aspect of rigour. In inclusive practice, rigour is more likely to be principles-based rather than rules-based, concerned with participation, methodological pluralism, reflexivity and relevance.

Establishing rigour in the context of prototyping as a source of evidence-building begins by recognising its role as a participatory learning process. The question of rigour is then focused on the rigour of the learning and decision-making process, and the rigour of the guides, heuristics or tools we are using to support the cycles of inclusive data capture, reflection and analysis. The following image by Ingrid Burkett [12] helps to highlight the differences between prototyping and other more established forms of experimentation, emphasising the role of the learning and feedback loops integral to prototyping. We’ve added some further definitions below for each of the approaches. The image is not intended to suggest a sequence between these three approaches, but rather to show how they differ in their purpose. In practice these different approaches are combined in different ways.

Prototyping as a method for producing practice-based evidence for innovation isn’t yet seen as having value or rigour in the same way as more established approaches such as pilots or RCTs. To achieve this recognition we need to increase the ability of teams to make sure that the practice is rigorous. We also need to improve understanding amongst policy and service practitioners about how and when such evidence has value.

Design and innovation processes are inherently reflective practices – orientated to learning through action. Such activities frequently produce new initiatives, ideas, capacities and knowledge in the locations in which they are developed. In order for them also to contribute back into the broader evidence base and be seen as legitimate sources of insight and evidence for policy, they need to be supported by strong research and evaluation practices.

Developmental evaluation plays an important role here in helping to put rigour around the decision-making and judgements that are made about what is ‘good’, ‘worthwhile’ or ‘good enough’ to base decisions on. It can help to ensure and enhance the rigour of the evaluative reasoning process by producing timely feedback loops and evidence that help to challenge, update and inform the prototyping process. Researchers also have an important role to play, though one that differs somewhat from traditional research practice.

[Image: comparison of prototyping, pilots and randomised control trials]

Original image by Dr Ingrid Burkett

Prototyping

  • Purpose is to build towards the right answer and gain understanding about the nature of a particular context.

  • A hypothesis to be refined, changed or dropped.

  • Plausible but provisional.

  • Orientated towards learning, testing assumptions and reducing risk around decision-making.

Pilots

  • Purpose is to ‘iron out creases’ or demonstrate viability, ideally in preparation for scaling.

  • Solution largely already identified and refined.

  • Less about research and evidence and more about feasibility and viability before further funding or commitment.

Randomised Control Trials

  • Purpose is to test if an intervention achieves intended outcomes when implemented with fidelity.

  • Intervention is already designed; the controlled experiment is to test and provide evidence of cause and effect.

  • Measures the outcome of the intervention with little attention to context or implementation.

What this means for skills and roles of researchers

This interdisciplinary way of working is still relatively new, and the teams and skills required to support innovation cycles that bring together different forms of evidence are still developing. For researchers, this provides an important opportunity to work alongside others, actively supporting the innovation process to inform policy and systems change. Research skills are needed to pull together evidence in a way that is contextual and applicable to the issues of concern, and to draw this evidence from a range of knowledge sources. A good understanding of relevant tools and data is needed, and strong reasoning skills help to challenge assumptions along the way. For the outcomes and insights produced through innovation to be considered legitimate policy inputs, research skills are also needed to support the rigour of evidence development, underpinned by ethical practice and data sovereignty.

But this requires researchers to rethink and reframe evidence, foster real-time relationships, and act as critical friends rather than experts, building knowledge in situ with community. Teamwork is essential, and researchers must be able to work shoulder to shoulder rather than operate from a distance. This is a different role for researchers, and different skills, processes and relationships are needed.

For organisations such as the Centre for Community Child Health this raises a number of questions about how we best prepare for supporting innovation in this space.

Questions we are grappling with include:

  • What is the researcher’s role in supporting innovation?

  • How does the position of the researcher change in these innovation settings? For example, how do we ensure the voice of the researcher doesn’t drown out other voices?

  • How might notions of evidence standards be rethought in this context?

  • How can we ensure ethical standards are maintained?

  • How do we ensure evidence is an output of innovation in a way that supports learning, improvement, scaling and sustaining?

One reflection is that traditional research strategies don’t keep pace with the speed of innovation. As a result, the Centre for Community Child Health has begun exploring different types of research contributions, such as the following inputs to innovation cycles:

  • Data literacy capability building. Researchers support design teams to access and understand available data and how to use it in the innovation process.

  • Pragmatic rapid reviews. This involves a scan of seminal grey literature and peer-reviewed meta-analyses. The output is a guidance paper based on the evidence, produced in a short period of time, that can be used in the design process.

  • Evidence-informed tools. For example, one tool, currently in development, will provide an evidence-informed framework of community-level factors to consider when a team is co-designing improvements to the conditions under which families are raising young children. Rather than innovation occurring in a vacuum, the tool will help the team to understand the range of factors we know will make a difference, such as housing instability. It won’t tell the team which of these is a local priority or what to do about it, but will ensure that attention is given to the factors that can be most impactful.

  • Positive deviance case studies. Positive deviance approaches assume that in ‘every group or community, a few individuals will use uncommon practices and behaviours to achieve better solutions to problems than their peers who face the same challenges and barriers.’ Researchers could bring rigour and richness to case studies for use within innovation cycles.

These are an initial handful of ideas, and we recognise there is still much to learn about the role of researchers in innovation. We welcome the opportunity to explore this alongside others in the co-design process.

The realignment and re-organisation of our skills, expertise and ways of working described here must be similarly reflected in the wider systems and structures that organise and direct our work. Our roles and how we work together need to be different, and so do our processes for funding, procurement and contracting. How new approaches are being developed, and their contribution to changing outcomes, also needs to be monitored and understood. Some of these shifts are explored in the blog that follows: Theme 5, Innovation in procurement.

Acknowledgements

We’d like to thank Lee Ryan for helping to wrangle some of the longer sentences.

[1] Thick Data is a term coined by ethnographer Tricia Wang. In doing so she was drawing on the notion of thick descriptions (descriptions of behaviour) developed in ethnographic terms by influential anthropologist Clifford Geertz.  

[2] In referring to Indigenous Knowledge systems, particular terms and definitions differ between communities; here we have referenced terminology used within The Southern Initiative in Tāmaki, Aotearoa.

[3] Definition from https://maoridictionary.co.nz

[4] Kimbell, L. (2016). Applying Design Approaches to Policy Making: Discovering Policy Lab.

[5] Policy Brief | Edition no. 21, 2011 |  Evidence-based practice and practice-based evidence, Centre for Community Child Health, Murdoch Children’s Research Institute

[6] Chambers, R. (2017). Can We Know Better? Reflections for Development, Rugby, UK: Practical Action Publishing.

[7]  Bartgis, J., & Bigfoot, D. (2010). Evidence-based practices and practice-based evidence, National Indian Health Board Edition, Healthy Indian Country Initiative Promising Prevention Practices Resource Guide.

[8] Exploring Policy Innovation: Tools, Techniques + Approaches. (2018) Brookfield Institute for Innovation and Entrepreneurship

[9] Policy Brief | Edition no. 27, November 2017 | Using Evidence in Policy and Programs. Centre for Community Child Health, Murdoch Children’s Research Institute

[10] Biggs, M., & Büchler, D. (2007). Rigour and practice-based research. Design Issues, 23(3), 62-69.

[11] Tikanga: procedure, custom, practice, convention or protocol

[12] Image from Dr Ingrid Burkett’s presentation ‘Evaluation Systems Change’, presented at the ‘Making a Difference’ event at Auckland University of Technology (AUT), Auckland, 2017, which explored how evaluative practices can increase impact in social innovation.

This post was originally published on May 6th, 2019 by the Centre for Community Child Health at The Royal Children’s Hospital (Melbourne) and the Murdoch Children's Research Institute.