Evaluation and Tracking of Widening Participation/Equity Initiatives

This blog post is part of the Gonski Institute for Education’s open access annotated bibliography (OAAB) series, a project led by Dr Sally Baker. OAABs offer a snapshot of some of the available literature on a particular topic. The literature is curated by a collective of scholars who share an interest in equity in education. These resources are intended to be shared with the international community of researchers, students, educators and practitioners. The literature has been organised thematically according to patterns that have emerged from a deep and sustained engagement with the various fields.

Literature Review written by Katie Osborne-Crowley

Over recent years, there has been an increasing emphasis on robust evaluation of widening participation (WP) programs, both to ensure that program planning is based on evidence rather than on “good intentions” (Thomas, 2000) and to demonstrate effectiveness to funders (Palermo, Marr, Oriel, Arthur & Johnston, 2012). Additionally, in the Australian context, decreasing government funding for WP interventions has put pressure on universities to use evaluative data to direct limited funds most effectively (Haintz, Goldingay, Heckman, Ward, Afrouz & George, 2018). However, evaluating the impact of WP interventions on progression to higher education (HE) has presented a major challenge for the sector, since students’ decisions about higher education are multi-determined and situated in complex social realities (Hayton & Bengry-Howell, 2016; Go8, 2010; Holland, Houghton, Armstrong & Mashiter, 2017). Many WP interventions are still not evidence-informed and do not explicitly set out a rationale for their content or delivery (Hayton & Bengry-Howell, 2016). Further, there is still little systematic evaluation, and most WP evaluations rely solely on participant perceptions and pre- and post-survey methodology (Gale, Sellar, Parker, Hattam, Comber, Tranter & Bills, 2010). Thus, there is considerable scope to improve WP evaluation practices, and a number of suggestions are discussed in the literature reviewed below.

Much of the literature on best practice for evaluating WP interventions emphasises the need for a clear conceptual framework and clearly identified objectives (Wilkens & de Vries, 2014; Naylor, 2014). A common approach is the use of a theory of change framework, which sets out an organisation’s theory of how and why its initiative will work. This approach acknowledges the complexity of outreach activities and the inter-related environments in which they operate, rather than ignoring them as more clinical approaches, such as randomised controlled trials (RCTs), tend to do (Childs, Hanson, Carnegie-Douglas & Archbold, 2016). A theory of change is a useful framework for evaluation because it allows an organisation to explicitly set out its intended outcomes (against which success can be measured) as well as the mechanisms assumed to underpin the effectiveness of the intervention, which can then be examined (Barkat, 2019). Critically, in measuring success against pre-determined outcomes, WP evaluators should be careful not to attribute observed changes directly to their work; they must instead recognise that their interventions can only ever be a contributing factor in a student’s decision to attend HE or in their improved school attainment. Indeed, attributing observed positive outcomes to an intervention alone negates the hard work of, and input from, teachers, schools, parents and communities (Hayton & Bengry-Howell, 2016).

The literature also emphasises that evaluation must encourage strategic learning and inform the future direction of the program via an organisational commitment to critically assessing what needs to change (Reed, King & Whiteford, 2015). Participatory action research (PAR) is an approach that recognises the need for an intrinsic relationship between evaluation and practice. As Thomas (2000) puts it, the PAR approach “goes beyond simply understanding and reporting what is happening, but [is] evaluation research that has an impact on practice and changes people’s lives” (p. 99). The PAR approach recognises that research which can truly change practice for the better needs to take account of local context and the unique features of each project, offering local solutions for local problems (Thomas, 2000). Evaluation that is to effectively inform program design also needs to be flexible enough to adapt as programs change over time.

A number of authors have also emphasised the need for evaluation to occur at all levels of WP work: the activity level, the program level and the cross-institutional level (Haintz et al., 2018). Program-level evaluation, which is often missed, gives a more holistic picture and acknowledges the intersecting and cumulative effects of various activities. Holland and colleagues (2017) observed, in the UK context, widespread inconsistency in the focus and wording of questions used to evaluate different activities within the same institution, which prevented analysis at the program level. Further, cross-institutional evaluation is rarely undertaken but would provide an opportunity to locate broad patterns of effectiveness (Walton & Carrillo-Higueras, 2019).

Finally, many authors point to the value of incorporating richer qualitative data into evaluation practices to generate a more complete picture of the complex impacts that WP interventions have on students’ ‘learner journeys’ and on their decisions about higher education (Holland et al., 2017; Raven, 2015). As such, there has been growing interest in the use of data gathered through focus groups and interviews, as well as more creative methods such as photo elicitation. This pushes against a long-standing orthodoxy in the evaluation field that places a premium on quantitative data and methods, based on the idea that numerical evidence is more objective and persuasive (Raven, 2015). Additionally, qualitative data has great potential to inform the development of new outreach activities that serve students’ self-identified needs (Raven, 2015). Attention has also been given to the potential of longitudinal evaluation and tracking to provide critical evidence of the cumulative impact of WP programs (Palermo et al., 2012; Lamb, Jackson, Walstab & Huo, 2015).

The thematic organisation of the open access annotated bibliographies (OAABs) does not reflect the intersecting and complex overlaps of the various foci in the literature, so please keep in mind that this is an interpretive exercise and one that could easily be reworked by another set of authors. An important note is that these resources should not be read as ‘the reading’ of any piece; rather, they reflect the interpretive lens of a small number of people and should therefore be used as a ‘way in’ to the academic and grey literature. Hyperlinks have been provided for each entry (where possible) so that you can access the original texts (although many of these will be hidden behind paywalls, which we cannot override for copyright reasons).

Furthermore, it is important to note that these resources are not a ‘finished product’; rather, they reflect an ongoing, iterative engagement with the inter/national literature that critically engages with issues relating to equity in education. As such, there are unintentional omissions in these resources; if you see a gap in the literature, please feel free to make this clear, or offer an entry for inclusion. This annotated bibliography will be updated every six months for the first year, and annually thereafter.