How we work
Our approach to developing learning resources and tools to make reliable evidence more accessible and useful is thorough, careful and unique. The problem is universal and so is our solution. However, our starting point is low-income countries, because the less you have, the less you can afford to make uninformed health decisions. We engage and work with teachers, learners, health professionals, policymakers, and other stakeholders from start to finish. Our work is informed by systematic reviews, and the evidence that we communicate to people making decisions also comes from systematic reviews.
People must also be able to recognise reliable and unreliable claims, and to use reliable information to inform their decisions. Our approach to enabling people to do this includes: 1) engaging stakeholders, 2) using systematic reviews, 3) mapping the skills people need, 4) developing tools to measure those skills, 5) designing resources to teach those skills, 6) evaluating the effects of those resources, and 7) sustaining development and use of the resources.
1) Engaging stakeholders
We engage learners, teachers, policymakers, health professionals, researchers and other stakeholders in our work in a number of ways [SURE 2011, Oxman 2009, Schünemann 2006]. We have international advisory groups that include experts from around the world with relevant types of expertise. We seek input from our advisory groups at each stage of a project. We collaborate with learners, teacher networks, journalist networks, policymakers and advisory groups with relevant experience, viewpoints and expertise in the countries where we work. We work together with these groups to brainstorm, pilot, user-test, evaluate and disseminate resources.
2) Using systematic reviews
Systematic reviews are summaries of studies addressing a clear question, using systematic and explicit methods to identify, select, and critically appraise relevant studies, and to collect and analyse data from them [Higgins 2011]. They use scientifically defensible, explicit methods to reduce bias (systematic error) and, if appropriate and possible, meta-analysis to reduce the play of chance. We use systematic reviews from the learning sciences to inform the design of our learning resources [e.g. Nordheim 2016, Abrami 2015, Zohar 2013, Higgins 2008, Slavin 2014, Potvin 2014, Furtak 2012, Bennett 2005, Hogarth 2005, Clark 2016, Boyle 2016, Islam 2016, Gerard 2011, Snilstveit 2015, Evans 2015, Ganimian 2016], to learn from other relevant learning resources [Krause 2011, Austvoll-Dahlgren 2016a, Nordheim 2016, Cusack 2016], and to inform the way that we work [e.g. Kunz 1998, Vist 2005, Oxman 2006, Nilsen 2006, Hopewell 2009, Akl 2011a, Akl 2011b, Higgins 2011]. Reliable evidence of the effects of treatments, which we are working to make easily accessible and useful to people making health choices, comes from Cochrane reviews and other systematic reviews found in Epistemonikos.
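As a rough illustration of how meta-analysis reduces the play of chance (a standard textbook formulation, not a description of the methods of any particular review), a fixed-effect meta-analysis pools the effect estimates from k studies by weighting each estimate inversely to its variance:

$$\hat{\theta}_{\text{pooled}} = \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\operatorname{SE}(\hat{\theta}_i)^2}, \qquad \operatorname{SE}(\hat{\theta}_{\text{pooled}}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}$$

Because the pooled standard error shrinks as more studies are combined, the pooled estimate is less vulnerable to chance variation than the result of any single study.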
3) Mapping the skills people need
We initially identified 32 Key Concepts that people need to apply to assess claims about the effects of treatments (any action intended to improve health) [Austvoll-Dahlgren 2015]. These include concepts about claims and whether they are justified, about comparisons and whether they are fair and reliable, and about using evidence to make informed choices. We are continuing to develop the list and have added twelve more concepts based on feedback from users, so there are now 44 Key Concepts.
The Key Concepts are the basis for developing a spiral curriculum for teaching people to think critically about treatments and make informed health choices. A spiral curriculum is an approach to education that introduces key concepts to students at a young age and covers these concepts repeatedly, with increasing degrees of complexity.
4) Developing measurement tools
The Claim Evaluation Tools database contains multiple-choice questions that can be used to assess an individual’s understanding of the Key Concepts and their ability to apply them when assessing treatment claims and making informed health choices. We have developed the questions based on extensive qualitative and quantitative feedback from methodological experts, health professionals, teachers and members of the public [Austvoll-Dahlgren 2016b]. We have used Rasch analysis to ensure that sets of these questions used in tests of people’s skills are reliable and valid [Austvoll-Dahlgren 2016c, Semakula 2017], and we have used systematic judgements about the difficulty of each question to establish passing and mastery scores [Davies 2017].
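For readers unfamiliar with Rasch analysis, the dichotomous Rasch model (a general description of the model, not a summary of the specific analyses reported in the papers cited above) expresses the probability that a person with ability θ answers a question of difficulty b correctly as

$$P(\text{correct} \mid \theta, b) = \frac{e^{\theta - b}}{1 + e^{\theta - b}}$$

Fitting this model makes it possible to check whether a set of questions measures a single underlying skill and to place people’s scores on a common scale, which underpins judgements about the reliability and validity of the tests.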
5) Human-centred design
We use a human-centred design approach [Giacomin 2014]. This is characterised by iterative development: repeated cycles of prototyping and testing, informed by user-centred methods of feedback and design [Rosenbaum 2010a, IDEO 2015, Sanders 2008, Abras 2004], to ensure that users experience our learning resources as engaging, useful, and easy to use. Our research team has considerable experience with user-centred, iterative design methods and systematic documentation of output from this approach [Rosenbaum 2008, Rosenbaum 2010a, Rosenbaum 2010b, Glenton 2010, Rosenbaum 2011, Mijumbi 2017].
6) Evaluating the effects of learning resources
We use randomized trials (fair comparisons) to reduce the risk of bias when we evaluate the effects of our resources [Nsangi 2017a, Semakula 2017a]. A comparison is always needed to evaluate effects, although sometimes there might be only one group. For example, there might be a comparison of health conditions within the same group of people before and after a treatment. Randomized trials use random allocation (a chance process, like tossing a coin) to assign participants to one of two or more interventions that are being compared. Random allocation ensures that each participant has a known (usually an equal) chance of being assigned to any of the treatments being compared. This results in treatment comparison groups that are similar in terms of prognostic variables, whether or not these have been recognised. Thus, there is generally a lower risk of systematic differences in prognostic variables (allocation bias) in randomized trials than there is in nonrandomized studies [Kunz 1998, Chalmers 2001].
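As a minimal sketch of random allocation (illustrative Python, not code used in our trials; the participant names are hypothetical), each participant is assigned to one of two groups by a virtual coin toss:

import random

def allocate(participants, seed=None):
    # Assign each participant to "intervention" or "control" by a coin toss,
    # giving everyone the same 50% chance of ending up in either group.
    rng = random.Random(seed)
    return {person: ("intervention" if rng.random() < 0.5 else "control")
            for person in participants}

# Example with ten hypothetical participants
groups = allocate(["participant_{}".format(i) for i in range(1, 11)], seed=42)
for person, group in groups.items():
    print(person, group)

With enough participants, chance alone tends to balance the groups on prognostic variables, including ones that nobody has measured, which is why the risk of allocation bias is lower than in nonrandomized studies.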
We use process evaluations [Moore 2015] to better understand why our resources have the effects they do and to learn how to improve them and scale up their use [Nsangi 2016b, Semakula 2016b]. Our process evaluations are multi-method studies, using both qualitative data (from observations, in-depth interviews and focus group discussions) and quantitative data (e.g. to explore and try to explain variation in effects).
7) Sustaining development and use of learning resources
A problem with many learning resources that are developed and evaluated as part of a research project is that when the project ends, there is no plan or capacity for sustaining their development and promoting their use [Gershenfeld 2011]. Our approach to this problem includes:
• Undertaking market and stakeholder analyses at the start of the project to learn how best to ensure that we develop resources that will be used in schools, and preparing a business plan informed by our findings
• Engaging key stakeholders throughout the project, including children, parents, teachers, school authorities and policymakers
• Conducting process evaluations to identify determinants of effective use of the resources and implementation strategies that are tailored to address these
• Collaborating with colleagues in other countries and developing manuals to facilitate translation, testing, adaptation, and implementation in other countries
• Making all the learning resources that we develop open access and designing them for translation