Can an electronic monitoring system capture implementation of health promotion programs? A focussed ethnographic exploration of the story behind program monitoring data

Abstract

Background

There is a pressing need for policy makers to demonstrate progress made on investments in prevention, but few examples of monitoring systems capable of tracking population-level prevention policies and programs and their implementation. In New South Wales, Australia, the scale up of childhood obesity prevention programs to over 6000 childcare centres and primary schools is monitored via an electronic monitoring system, “PHIMS”.

Methods

Via a focussed ethnography with all 14 health promotion implementation teams in the state, we set out to explore what aspects of program implementation are captured via PHIMS, what aspects are not, and the implications for future IT implementation monitoring systems as a result.

Results

Practitioners perform a range of activities in the context of delivering obesity prevention programs, but only specific activities are captured via PHIMS. PHIMS thereby defines and standardises certain activities, while activities that are not captured can come to be regarded by practitioners as "extra" work. The achievement of implementation targets is influenced by multi-level contextual factors, only some of which are accounted for in PHIMS. This reveals incongruencies between the work that is done, the work that is recorded and, therefore, the work that is recognised.

Conclusions

While monitoring systems cannot and should not capture every aspect of implementation, better accounting for aspects of context and “extra” work involved in program implementation could help illuminate why implementation succeeds or fails. Failure to do so may result in policy makers drawing false conclusions about what is required to achieve implementation targets. Practitioners, as experts of context, are well placed to assist policy makers to develop accurate and meaningful implementation targets and approaches to monitoring.

“The tick in [the PHIMS information system] is like the tip of an iceberg. It's that tiny bit above the surface. And behind it is years of chatting, visits, gently urging, suggesting they go in this direction rather than that direction.” – Quote captured in ethnographic fieldnotes, Team G

Background

There is broad agreement that policy-level investment in population-level health promotion is needed and effective in keeping people healthy and reducing health costs [1, 2]. But health promotion has often struggled to maintain funding and political support [3]. Policy makers and population-level program coordinators are therefore faced with a pressing need to demonstrate progress on investments in health promotion. Information technology (IT) systems hold promise to assist policy makers in tracking delivery of population-level health programs and the achievement of implementation targets, but there are very few examples of their use in population health contexts [4].

In clinical contexts, IT systems have a long history of design, use, and often, abandonment [5, 6]. The design, implementation, successes and failures of electronic health records, for example, constitute a wealth of experience, evaluation and research. This history provides a rich source of material by which to inform, debate, and interrogate the value, design, and standards for best use and implementation of electronic health records [7, 8]. In the context of using IT systems to monitor the implementation of population-level health programs, these conversations are only just beginning. There are signs that current systems designed for research purposes fail to translate to everyday practice [9], and that some systems designed for monitoring health promotion and prevention programs are overly onerous, resulting in push-back from users and, ultimately, abandonment despite considerable financial investment [10].

As more health promotion programs are scaled up for delivery at the population level, the demand for IT systems to monitor implementation will increase. Such systems need to effectively capture and monitor implementation progress to enable coordination across sites. However, the processes by which to effectively monitor, implement and sustain programs at scale are understudied [11, 12]. This includes research on monitoring systems for health promotion programs delivered at scale, not least because there are very few examples of such IT systems to study [4].

In New South Wales (NSW), Australia, the Ministry of Health designed and rolled out the Population Health Information Management System (PHIMS), an IT implementation monitoring system costing over AU$1 million, to facilitate and track the reach and delivery of state-wide childhood obesity prevention programs called the Healthy Children Initiative (HCI) [13]. PHIMS is unique in that it has been in sustained use since 2011, whereas many IT systems, in both clinical and population health, have failed [4]. It therefore presents a unique opportunity to examine the use of IT systems to track large-scale implementation. We set out to explore what aspects of implementation are captured via this IT monitoring system, what aspects are not, and the implications for future IT implementation monitoring systems. Our purpose was to gain insights that might improve coherency between what a recording system captures and what it really takes to achieve implementation.

The findings in this paper are part of a larger study that examined the dynamics between PHIMS use and health promotion practice [14]. The Monitoring Practice study was co-produced via a partnership with state-level policy makers and program managers for HCI, PHIMS technical designers, health promotion practitioners, and university-based researchers – all of whom are co-authors of this paper. While the university-based researchers (hereafter called ‘researchers’) led the collection and analysis of data, the roles played by the wider co-production team served to help position and interpret the findings within the broader context of HCI implementation (more information about the roles and contributions of team members is provided in Additional file 1). The research questions guiding this analysis were developed with the partners at the study’s outset and are:

  1. What constitutes the breadth of work involved in supporting early childhood services and primary schools to achieve HCI practices?

  2. What constitutes the intensity of work?

  3. To what extent are breadth and intensity captured by PHIMS?

Context

PHIMS was designed to support the implementation of the Healthy Children Initiative (HCI) – Australia’s largest ever scale-up of evidence-based obesity prevention programs – which includes programs delivered in approximately 6000 early childhood services and primary schools across NSW. Two particular HCI programs, Munch and Move and Live Life Well at School (hereafter referred to as HCI), are delivered by 14 health promotion teams situated within 15 local health districts across the state. To date, these HCI programs report high rates of participation, reaching approximately 89% of early childhood services (3348/3766) and 83% of primary schools (2133/2566) [15].

The PHIMS system is described in detail elsewhere [15, 16]. It is used to track the adoption of healthy eating and physical activity practices in schools and services (See Additional file 2 for full list of practices) for Munch and Move and Live Life Well at School. Site-level progress is aggregated via PHIMS and enables district and state-level coordinators to track district-level progress against key performance indicator (KPI) targets. Access to PHIMS is restricted via a state-wide login service, where user access is configured according to roles. User roles include 1) health promotion practitioners who access and input notes and record progress towards implementation targets for their assigned sites; 2) supervisors who can access all information entered by their staff for sites in their health district; and 3) state-level managers who can only access aggregated data reporting progress towards KPI target achievement at the health district level.
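To make the tiered access described above concrete, the sketch below expresses the three user roles as a simple access rule. It is a minimal illustration in Python only: PHIMS is not described at the code level in this paper, and all names, fields and rules here are our assumptions rather than the system's actual design.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Role(Enum):
    PRACTITIONER = auto()   # records notes and progress for assigned sites
    SUPERVISOR = auto()     # sees records entered by staff for sites in their district
    STATE_MANAGER = auto()  # sees only district-level aggregates of KPI progress


@dataclass
class User:
    name: str
    role: Role
    district: str
    assigned_sites: set = field(default_factory=set)


def can_view_site_record(user: User, site_id: str, site_district: str) -> bool:
    """Illustrative access rule only: practitioners see their assigned sites,
    supervisors see any site in their own district, and state-level managers
    see none (they receive aggregated district-level reports instead)."""
    if user.role is Role.PRACTITIONER:
        return site_id in user.assigned_sites
    if user.role is Role.SUPERVISOR:
        return site_district == user.district
    return False  # STATE_MANAGER: aggregate reporting only


# Example: a practitioner can view an assigned site but not an unassigned one.
jo = User("Jo", Role.PRACTITIONER, district="District A", assigned_sites={"School 1"})
print(can_view_site_record(jo, "School 1", "District A"))  # True
print(can_view_site_record(jo, "School 2", "District A"))  # False
```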

Methods

Data collection

Our approach to data collection was consistent with focussed ethnography. Focussed ethnography is a short-term, high-intensity form of ethnography in which short visits to the field (relatively short, at least, in comparison with traditional ethnography) are balanced with extensive preparation, focussed selection of and attention to specific activities and sites relevant to the research questions, the use of multiple data sources, and intense, iterative, and collaborative analysis of data [17, 18]. Preparatory work for this study began one year prior to the field visits, during which time researchers conducted informal interviews with study partners, attended PHIMS demonstrations and reviewed documentation, met with sites to discuss the approach, and conducted a thorough review of a range of theories as a sensitisation tool [14].

The fieldwork was conducted with all 14 local health promotion teams funded to deliver HCI programs across NSW. Over 12 months, three researchers (KC, VL, SG) spent between 1 and 5 days in each health district observing the day-to-day implementation work conducted by health promotion practitioners who delivered the HCI programs. Researchers collected extensive field notes, pictures, recordings from ad hoc interviews, and program materials, which were compiled in an NVivo project database [19]. In total, we shadowed, interviewed or observed 106 practitioners across all 14 teams. Researchers recorded their observations as soon as possible after leaving the field, yielding over 590 pages of detailed field notes. Regular meetings between the researchers enabled iterative, theory-informed dialogue and reflection on the interpretations arising through analysis. Ongoing correspondence with participants in the field and regular meetings with the broader co-production team allowed the representation of findings in field notes to be further critiqued and interpreted. This group approach to analysis and interpretation reduces the subjectivity of field notes by enabling interpretations to become shared by those they are about, rather than remaining the purview of a lone ethnographer [17]. While raw field notes were shared only among the researchers and sometimes with the participants whom they were about, only de-identified or abstracted data were shared with the broader co-production team and in public forums. We used the consolidated criteria for reporting qualitative research (COREQ) guidelines [20] to guide the reporting of our study (see Additional file 1).

Data analysis

We used a grounded theory approach to code the materials in the project database, and to generate an initial project codebook, as described more fully elsewhere [14, 21, 22]. For this subsequent analysis, we used both a deductive and inductive approach [23]. Two researchers (KC and LM) collaborated in an iterative process of coding, reflection, and theming the data. For the deductive analysis, we extracted data from 15 codes we determined were most relevant to the research questions (see Additional file 3 for list and description of codes). Using a directed content analysis approach [24], we recoded this data to develop new codes related to “breadth” and “intensity,” as defined a priori (see definitions in Table 1) and generated a list of activities, strategies and resources that reflected the breadth of HCI activities. An initial coding scheme was developed and operational definitions for each category were defined and iteratively revised. We revised our coding list and recoded the data until no new codes emerged and theoretical saturation was reached. Through this process, we produced a rough ‘taxonomy’ of categories of activities involved in HCI implementation.

Table 1 Definitions of “breadth” and “intensity” as operationalised in this analysis

Whilst this approach worked well for research question 1 regarding “breadth,” we found few examples of “intensity” using it. “Intensity” was difficult to observe and interpret in our field notes, and because our study was cross-sectional, we were unable to fully observe “intensity” in practice. However, many practitioners discussed this issue, so we adopted a grounded theory approach to better explore what “intensity” of work looked like in our dataset. Through an iterative process of discussions with the ethnographers, using NVivo tools to expand coding to the broader context, and reading many field notes in their entirety, we inductively developed codes by coding data line-by-line and subsequently looking for overall patterns.

To answer the third research question, we coded data that spoke to two overarching, abstracted questions throughout the entire coding process: a) “Is a tick in PHIMS a true reflection of the work done?”, and b) “How is breadth/intensity of work captured in PHIMS?” The second question was not answered through an analysis of PHIMS content, but rather through what we observed or learned about PHIMS during our field work. To answer it, therefore, we drew on data from ethnographic field notes, on memos created during coding, and on our knowledge of the PHIMS system developed over the course of the full project (e.g. through demonstrations of PHIMS, training manuals, conversations with PHIMS developers, etc.). The lead authors met frequently during the coding process to discuss emerging insights, and also met with the other researchers to discuss possible interpretations and interesting examples. Through this iterative process, we moved from concrete codes describing activities to more abstract, thematic groupings and generalisations, which we report in our results along with a thematic conceptual model (presented later) [25].

The analyses were concurrent, occurring alongside regular meetings with the full research team and partners where emergent findings and interpretations were discussed and collaboratively explored. Coding and insights therefore developed iteratively whilst project meetings enabled feedback and reflection to be incorporated as part of the analysis process. We presented initial findings to partners for comment and reflection.

Results

“Breadth” of work to implement HCI

We sorted the activities used by teams and practitioners to deliver HCI into 15 groupings (see Table 2). These groupings reflected two overarching purposes: 1) work involved in the implementation of HCI; and 2) work required to convert implementation work into PHIMS data. We explore these activities and how they are captured in PHIMS below. The groupings were not mutually exclusive, with specific activities often meeting multiple purposes (e.g. site visits are used for networking, distributing resources, team work, and other purposes). The types of activities within each grouping were diverse and implemented differently across teams. For example, we observed some teams devoting many hours and substantial financial resources to activities categorised as “developing HCI resources”; others, however, drew mainly from centrally distributed HCI materials. Notably, there was no one way to implement HCI. Practitioners were aware that HCI implementation activities differed from team to team, and were curious to learn from the researchers how their approach matched or differed from that of other teams.

Table 2 The breadth (range & types) of work involved in the implementation of the Healthy Children Initiative and how it is recorded in PHIMS

Work involved in implementing HCI

This category represents work tasks that constitute foundational components of delivering HCI to achieve implementation targets. Many of the activities were observed across all teams, but there was diversity in the specific tasks and styles by which individual activities were implemented. For example, “site visits” constituted a large proportion of practitioners’ work in every team, but approaches varied: some teams conducted regular, in-person meetings whilst others rarely visited in person, instead conducting visits electronically, i.e. via phone or email (variations will be discussed in more detail later).

Some practitioners described doing work with sites that had already achieved 100% of their implementation targets. The sense from practitioners was that this responsive or self-directed work was different from, or in addition to, the main work involved in implementing HCI. Sometimes this work was done to maintain practices and achievements. We also observed practitioners taking on projects or activities in response to needs identified by the local community or HCI site (e.g. to better reach culturally and linguistically diverse communities). Often, but not always, this work complemented the HCI program aims. For example, in one site practitioners developed a smartphone app to help teach fundamental movement skills; in another, they developed vegetable gardens. Not all practitioners had scope to innovate within their roles, but those who did explained that innovating was important for keeping things fresh and new, both for practitioners and for sites.

How work is captured in PHIMS

The extent to which information about breadth was captured in PHIMS varied by category (see Table 2). For some categories, specific PHIMS functions facilitated documentation. The most widely observed and frequently used function was recording details about site-level progress in adopting practices. Beyond this, however, data entry in PHIMS varied widely depending on the style and skill of individual users and teams. Most practitioners, but not all, used PHIMS-specific functions to track the details of key contacts. Almost all practitioners received the inbuilt reminders to schedule site visits, but their views on the value of this feature varied. PHIMS also provided advanced functions, such as generating reports, but only a few individuals used these.

For many activities, however, such as developing educational resources, PHIMS lacked the necessary tools to enable practitioners to document their work. This was particularly true of self-directed work or work undertaken with sites that did not align directly with HCI program goals. We observed that many practitioners used the generic “notes” function to document details about how they used or distributed specific resources. The “notes” function is a free-text entry box with a character limit. There were, however, multiple “notes” sections in different parts of PHIMS, which posed problems for summarising, searching and retrieving information. This limitation created confusion about how best to use the “notes” function so that data could later be retrieved to support implementation.

Performance monitoring work required to convert the work done into PHIMS data

This category represents work done to translate information about HCI implementation into data in PHIMS. Teams spent considerable time recording sites’ ongoing progress and their activities into PHIMS. Converting work into data in PHIMS required practitioners to interpret what practice adoption looks like, and to collect and translate information on sites’ practice adoption into PHIMS records.

How work is captured in PHIMS

PHIMS provides a user dashboard that tracks team-level progress towards team-level implementation targets. We did not observe that practitioners could extract information about how they used PHIMS; such information might include, for example, the date of last log-in or active time spent working in PHIMS. Practitioners told us that supervisors have access to user-level data, but we observed only a few instances in which they accessed this information. We learned that supervisors can assign sites to practitioners, can access practitioners’ records, and receive information about target achievement and about whether site visit data were entered within specified time frames.

Intensity of work to achieve an implementation target

Four interrelated themes emerged in relation to intensity. Sub-themes are underlined in the text below and correspond to illustrative extracts from the field notes presented in Table 3.

Table 3 The “intensity” of work involved in implementing the healthy children’s initiative, themes and exemplars from ethnographic fieldnotes

The difficulty of particular practices

The intensity of work involved in achieving a practice varied, with some practices requiring greater time and effort (see Additional file 2). Practitioners indicated that the hardest practices were those that were multi-component, where missing one component would result in the failure of that target. Several practices involved environment-level changes that were beyond an individual practitioner’s direct influence (e.g. the school provides a supportive environment for healthy eating).

Contextual variations

Contextual variations that influenced intensity represented three levels: the HCI-team, site, and individual practitioner level (See Additional file 4). Contextual factors interacted with each other resulting in variations in degrees of “intensity” of an individual activity or practice. We describe site and HCI-level factors in more detail.

Site-level

Practitioners discussed that local community contexts influenced how difficult an individual practice was to achieve. For example, access to fresh food or to a local food provider influenced the achievability of the healthy canteen practice. Some practitioners described that sites often had more pressing needs or priorities (e.g. truancy or homelessness) that superseded action on some HCI practices. The extent to which site needs and priorities aligned with the specific HCI program aims differed, with the general feeling that HCI aims aligned better with priorities of early childhood services than primary schools. Site-level staff turnover was also reported to affect how quickly progress could be made.

HCI team-level

Participants described how geography, e.g. the rurality and size of districts and how dispersed sites were within them, influenced the time and effort involved in achieving a practice. The proximity of sites to offices varied by team, ranging from a few minutes’ travel between urban sites to over 3.5 h to reach sites in remote areas. The ratio of sites per practitioner also varied between teams, ranging from under 50 to over 150 sites per practitioner. High ratios negatively influenced the degree to which practitioners could spend time with sites. Practitioners described how HCI teams’ access to financial and staff resources influenced their ability to engage in innovative work. Much like the site-level factors, staff turnover within HCI teams was reported as affecting program delivery. In Fig. 1, we compiled observations across multiple practitioners completing the same task, a site visit, to illustrate how the intensity of a task varies depending on the interplay between the site and aspects of the context in which HCI implementation occurs.

Fig. 1 Schematic depicting variations in the intensity of work that goes into a site visit. Developed with LucidChart (free trial version) [26]

Incremental progress made over time

The intensity of work was described as an accumulation of activities over an extended period, sometimes years, reflecting an extensive and prolonged commitment to building relationships between the practitioner and key site contacts. Many practitioners explained that relationships provided the foundation for HCI’s success, and they therefore highly valued this work. Relationships were drawn upon to initially implement practices at a site and to monitor progress over time. Some teams reported that they had insufficient time or capacity to devote to relationship building, and that this limited their progress with sites.

When to “tick” an implementation target

Determining whether a practice was achieved or not, and therefore whether it could be marked or “ticked” as achieved in PHIMS, was a source of much discussion amongst practitioners. For example, one practitioner described her approach as “conservative,” contrasting this with other practitioners she perceived as more “liberal” in their determination of whether a site had achieved a practice. We observed other practitioners making similar comparisons. Given the variations in context, the degree of practice difficulty, the incremental nature of progress, and the range of individual interpretations, the achievement of a ‘tick’ in PHIMS was partly attributable to skilful implementation and partly to serendipitous alignment with context (see Fig. 2).

Fig. 2 Factors overlap and influence the intensity of work involved in achieving a practice, i.e. a “tick” in PHIMS

How intensity is captured in PHIMS

Although practice reporting in PHIMS allowed practitioners to indicate how many “parts” of a multi-component target had been achieved, partial progress towards achieving a practice is not accounted for in the overall tallying of performance indicators. Practitioners were aware of, and concerned by, the implications and difficulties of capturing such incremental, but hugely meaningful, progress over a long period of time in PHIMS. PHIMS was seen to be outcome-centric in that it does not document the particular activities taken to achieve an outcome.
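To illustrate what this outcome-centric tallying means in practice, the following minimal sketch assumes a simplified site record in which each practice is a set of component flags; the field names and the rule that only fully achieved practices count are illustrative assumptions, not the actual PHIMS reporting logic.

```python
# Hypothetical site record: each practice is a list of component flags.
site_practices = {
    "physical_activity_program": [True, True, True],     # all components met
    "healthy_canteen": [True, True, False],               # 2 of 3 components met
    "supportive_eating_environment": [False, False, False],
}


def achieved(components):
    """A practice counts towards the indicator only when every component is met."""
    return all(components)


# Outcome-centric tally: only fully achieved practices are counted.
kpi_numerator = sum(achieved(c) for c in site_practices.values())
print(f"Practices counted towards the indicator: {kpi_numerator}/{len(site_practices)}")

# Partial progress is visible in the record but contributes nothing to the tally.
partial_progress = {name: f"{sum(c)}/{len(c)}" for name, c in site_practices.items()}
print(partial_progress)
```

Reporting the partial fractions alongside the binary tally would make incremental progress visible without changing how the targets themselves are defined.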

Discussion

To our knowledge, this is one of the first studies to examine the sustained use of an electronic performance monitoring system for health promotion implementation at scale. Given that very few electronic implementation monitoring systems and their uses are described in the literature, the insights gained from studying PHIMS provide novel considerations for the design of future systems. Such considerations are not limited to the design of the IT system itself, but also include the process of monitoring and the larger management context in which IT systems are used to govern large-scale program delivery. These considerations are summarised in Table 4.

Table 4 Considerations for the design of future performance monitoring systems for health promotion implementation

Our analysis highlights ways in which a tick in PHIMS - indicating that practice change has been achieved - does not always convey the range and types of activities (i.e. breadth) involved in implementation, nor the time and effort (i.e. intensity) activities take. Program target achievement as reported by PHIMS reflects only the “tip of the iceberg” in terms of the scope of work employed, and importantly, some aspects of this work are better captured or represented than others.

Implications for capturing breadth of implementation work

By studying PHIMS in practice, we uncovered a broader story behind the program monitoring data about the work that goes into implementing the flagship HCI programs. The 15 categories of activities we identified that constitute the breadth of HCI work reflect, in part, activities that have been identified elsewhere as program components [27]. But we also observed that teams engaged in varied activities that practitioners described as extending beyond HCI primary aims. Practitioners developed new materials and activities to complement HCI, but they also responded to local site needs, strengths and opportunities, or developed projects of particular interest to individual practitioners and their expertise. Sometimes this involved working to address non-obesity related health issues. That teams undertake “extra-HCI” tasks aligns with HCI’s overall management philosophy of “tight-loose-tight” – that is, local teams have discretion over how they achieve set implementation targets [28]. PHIMS, however, had no specific function to capture information about these activities. This lack of recording ability likely reinforced the feeling among practitioners that these tasks were “extra” or not part of HCI.

PHIMS is not the only mechanism by which HCI implementation is governed or managed at the state level. PHIMS sits within a broader management context in which it is one of many mechanisms used by NSW Health to monitor progress, support implementation, and recognise innovative practice. Practitioners are encouraged to share information and innovations at annual state-wide conferences, through regular teleconferences and on an online portal. At the local level, informal technologies that sit alongside PHIMS (e.g. spreadsheets, paper files, etc.) are used by practitioners to capture and record non-HCI-specific work, as well as new knowledge created about how best to implement the HCI program. We explore these methods elsewhere [22]. While these other mechanisms may provide more flexibility for capturing emerging and adaptive work, the inability of PHIMS to capture even simple information about extra work performed presents a potential missed opportunity.

It is not feasible or useful for IT systems to capture every aspect of practice, and we are not suggesting that they should. However, not capturing ‘extra’ work may pose problems in the long term, because our understanding of a program, and of what it takes to implement it well, may come to be defined only by what is monitored and measured via a recording system. Data from recording systems may therefore present a distorted view of implementation, one that defines and therefore standardises a program as the delivery of a particular set of tasks, when in fact achieving successful implementation may require that practitioners undertake a range of activities that do not directly align with a program’s aims but that enable practitioners to build relationships and trust with local partners, through which subsequent program implementation becomes possible. Consistently documenting information about extra work and new innovations, in addition to documenting aspects of implementation and unexpected effects, could provide information to demonstrate the multiple benefits and effects of investing in health promotion work. The challenge, therefore, is to design IT systems that enable program implementation to remain ‘loose,’ while still capturing meaningful information about progress towards target achievement. Importantly, IT must be engineered in a way that allows for ongoing enhancements so that the system can evolve as the context changes. Many of the limitations of PHIMS likely reflect changes in practice and the HCI program over time [29].

We also identified that some of the most valuable information about work with sites may be contained within PHIMS, but is currently inaccessible, buried within free-text note fields that are difficult to query or collate. This problem is not unique to population health systems, and much has been written about strategies used for “mining” such data from electronic health records [30]. In clinical contexts, audit and feedback processes as part of continuous quality improvement initiatives have shown promise as a means of effectively using data from electronic health records to improve health promotion services [31]. Audit tools assist practitioners to review records, systematically collect information – particularly from qualitative data – and use information to inform improvement strategies. As part of continuous quality improvement initiatives, teams undertake iterative cycles of reviewing and extracting data from records using audit tools, reviewing data to identify areas in need of improvement, then developing, trialling and testing a strategy to improve practice and improve implementation. Such improvement processes would be a useful complement to health promotion IT systems by providing a structured process through which teams can better harness and learn from qualitative notes and apply these learnings to practice improvement. Interestingly, one person in our study did develop a quality audit tool to review PHIMS records, but it was never finalised, implemented or otherwise shared beyond that team.
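As a minimal illustration of the kind of audit step described above, the sketch below collates hypothetical free-text notes by keyword so that recurring issues can be reviewed as a group. It assumes notes can be exported as simple site-and-text records; it does not describe any existing PHIMS export or audit tool.

```python
from collections import defaultdict

# Hypothetical export of free-text notes, one record per entry.
notes = [
    {"site": "School A", "text": "Canteen menu reviewed; fresh food supplier hard to source locally."},
    {"site": "Service B", "text": "Staff turnover again; re-ran fundamental movement skills training."},
    {"site": "School C", "text": "New vegetable garden project planned alongside canteen changes."},
]

keywords = ["canteen", "turnover", "garden"]

# Collate notes by keyword to support a simple audit-and-feedback review.
hits = defaultdict(list)
for record in notes:
    text = record["text"].lower()
    for keyword in keywords:
        if keyword in text:
            hits[keyword].append(record["site"])

for keyword, sites in hits.items():
    print(f"{keyword}: {len(sites)} note(s) - {', '.join(sites)}")
```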

Implications for capturing intensity of implementation work

We identified a confluence of four factors – practice difficulty, context, time, and interpretation – that must align to achieve a “tick” in PHIMS. However, almost none of these factors appeared to be captured by PHIMS. This was cause for concern among the teams, who debated when to “tick” a box, despite a standard monitoring guide intended to support decision making at this level as part of program fidelity, and who questioned whether these factors are accounted for when state-level decisions about HCI implementation are made.

Currently, PHIMS reports whether practices are implemented or not, but fails to capture or convey why implementation fails or succeeds. Information that could account for variation is not captured or collated in a meaningful way, yet it could explain why target achievement looks different in different sites. If a lot of data are being recorded off the formal record, or extensive differences in workloads are simply being absorbed by the teams, then policy-level decision makers and IT designers are deprived of the opportunity to refine the design of the scale-up protocols, the funding, and the recording of practice. Implementation scientists are concerned with developing measures of context, process and outcomes in implementation [32]. But given the diverse contextual factors that influence implementation [33], comprehensively measuring all of these factors would be infeasible and onerous. Carefully selecting meaningful indicators is critical. Front-line practitioners are likely best placed to recognise which indicators matter and why, given their deep understanding of local contexts.

Further, if monitoring systems are concerned only with whether a practice is achieved or not, and fail to capture the incremental work processes that accumulate into practice improvement, the system risks creating an artificial threshold effect wherein practitioners place higher value on completing work tasks that are ‘counted’, or that more easily contribute to practice achievement, rather than prioritising those resulting in the greatest health improvements. Indeed, this issue is explored more fully in another paper from this research [21].

Indicators of incremental progress towards target achievement (i.e. process evaluation measures) are a simple addition that could improve PHIMS reporting formulas and future implementation monitoring systems. Similarly, summaries of the time and effort involved in achieving targets could likely be calculated using fields already existing in PHIMS (e.g. by tallying the number of notes entered, phone calls, emails and visits made to sites, or by graphing progress over time). The practices are at present “unweighted” – there are no extra points for achieving the hardest practices, but perhaps there should be. Such information would be useful in better understanding and recognising progress made over time. Given that other Australian jurisdictions have cut childhood obesity prevention programs whereas HCI has been sustained [15], tracking progress over time would provide important documentation of how long-term investments are required to bring about the cultural and whole-of-environment change necessary to impact complex problems like obesity.
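As a rough sketch of what such reporting might look like, the example below computes a weighted achievement score and a simple effort tally from hypothetical site fields; the weights, field names and formula are illustrative assumptions rather than features of PHIMS.

```python
# Hypothetical weights: harder, multi-component practices earn more "points".
practice_weights = {"menu_planning": 1.0, "whole_of_school_environment": 2.0}

# Hypothetical per-site fields; the contact counts are assumed to be derivable
# from records already kept in the system (notes, calls, emails, visits).
site = {
    "achieved": {"menu_planning": True, "whole_of_school_environment": False},
    "contacts": {"visits": 4, "calls": 11, "emails": 23, "notes": 17},
}

weighted_score = sum(weight for practice, weight in practice_weights.items()
                     if site["achieved"][practice])
max_score = sum(practice_weights.values())
total_contacts = sum(site["contacts"].values())

print(f"Weighted achievement: {weighted_score}/{max_score}")
print(f"Recorded contacts to date: {total_contacts}")
```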

Limitations

Given the short period of observation at each site, we were unable to observe practitioners performing all HCI activities. As such, our results are not meant to be an exhaustive list but rather a snapshot of the range of tasks conducted across the HCI teams at one point in time. Using a directed content analysis approach enabled us to identify and explore in our data the activities most relevant to our research questions, but it may have missed relevant data not captured during the first coding pass. We minimised this risk by reading many fieldnotes in their entirety, by using researcher triangulation, and through ongoing connections and consultation with the field and with our partners to ensure that we had not missed any key activities. A thorough exploration of the intensity of implementation was precluded by our research design and reflects another limitation of this study. Ideally, such an exploration would adopt a longitudinal approach to examine how implementation builds over time, whereas our observations were limited to only a few days with each team. From talking to practitioners, particularly those who were more experienced, we gained a sense of the variations in intensity across time and geographic location, which we report here. The researchers were conscious of producing ethnographic and sometimes challenging insights within the context of a policy and practice partnership, with partners having ongoing accountability for the system being studied. Other papers present our reflexivity about working in co-production and research impact [29, 34] (Loblay V, Conte K, Groen S, Green A, Innes-Hughes C, Milat A, et al. The Weight of Words: collaborative practice and the challenge of coproducing ethnography within a health policy research partnership, submitted).

Conclusion

We set out to study the implementation of prevention programs in process, and in doing so uncovered a rich story behind the program monitoring data. Our findings illuminate what is hidden beneath the surface: what work is not captured or reflected in PHIMS data, as well as what is captured and potentially brought into focus. By understanding the full breadth of work and time investment required to successfully achieve a change in practice, we become better poised to capture and appreciate the value and multiple effects of investments in large-scale prevention initiatives. Insights gained through our ethnography illustrate several contextual factors that contribute to implementation outcomes but may be missed by implementation monitoring systems focused solely on outcome-centric indicators. Such insights highlight some of the possible benefits of harnessing the pragmatic knowledge of local practitioners, who are well positioned to assist policy makers to develop more accurate and meaningful performance indicators that are sensitive to diversity across contexts. Our results suggest that capturing the additional work that exists alongside standardised program implementation may yield new insights into the broad range of activities that surround implementation efforts. Together, these findings have implications for the design of future IT systems capable of tracking population-level prevention policies and programs.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available due to information that could compromise research participant consent and privacy.

Abbreviations

PHIMS:

Population Health Information Management System

IT:

Information technology

NSW:

New South Wales

HCI:

Healthy Children Initiative

KPI:

Key performance indicator

References

  1. de Leeuw E. Engagement of sectors other than health in integrated health governance, policy, and action. Annu Rev Public Health. 2017;38:329–49.

  2. Masters R, Anwar E, Collins B, Cookson R, Capewell S. Return on investment of public health interventions: a systematic review. J Epidemiol Community Health. 2017;71(8):827.

  3. Wutzke S, Morrice E, Benton M, Milat A, Russell L, Wilson A. Australia's national partnership agreement on preventive health: critical reflections from states and territories. Health Promot J Austr. 2018;29(3):228–35.

  4. Conte KP, Hawe P. Will E-monitoring of policy and program implementation stifle or enhance practice? How would we know? Front Public Health. 2018;6:243.

  5. Greenhalgh T, Russell J. Why do evaluations of eHealth programs fail? An alternative set of guiding principles. PLoS Med. 2010;7(11):e1000360.

  6. Sligo J, Gauld R, Roberts V, Villa L. A literature review for large-scale health information system project planning, implementation and evaluation. Int J Med Inform. 2017;97:86–97.

  7. Ammenwerth E, Iller C, Mahler C. IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak. 2006;6(1):3.

  8. Greenhalgh T, Potts HWW, Wong G, Bark P, Swinglehurst D. Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q. 2009;87(4):729–88.

  9. Bors PA, Kemner A, Fulton J, Stachecki J, Brennan LK. HKHC community dashboard: design, development, and function of a web-based performance monitoring system. J Public Health Man. 2015;21(Suppl. 3):S36–44.

  10. Majic S. Protest by other means? Sex workers, social movement evolution and the political possibilities of nonprofit service provision. Ithaca: Cornell University; 2010.

  11. Milat AJ, Bauman AE, Redman S, Curac N. Public health research outputs from efficacy to dissemination: a bibliometric analysis. BMC Public Health. 2011;11:934.

  12. Wolfenden L, Milat AJ, Lecathelinais C, Sanson-Fisher RW, Carey ML, Bryant J, et al. What is generated and what is used: a description of public health research output and citation. Eur J Pub Health. 2016;26(3):523–5.

  13. Green A, Innes-Hughes C, Rissel C, Mitchell J, Milat A, Williams M, et al. Codesign of the population health information management system to measure reach and practice change of childhood obesity programs. Public Health Res Pract. 2018;28(3):e2831822.

  14. Conte KP, Groen S, Loblay V, Green A, Milat A, Persson L, et al. Dynamics behind the scale up of evidence-based obesity prevention: protocol for a multi-site case study of an electronic implementation monitoring system in health promotion practice. Implement Sci. 2017;12(1):146.

  15. Innes-Hughes C, Rissel C, Thomas M, Wolfenden L. Reflections on the NSW healthy children initiative: a comprehensive state-delivered childhood obesity prevention initiative. Public Health Res Pract. 2019;29(1):e2911908.

  16. Farrell L, Lloyd B, Matthews R, Bravo A, Wiggers J, Rissel C. Applying a performance monitoring framework to increase reach and adoption of children’s healthy eating and physical activity programs. Public Health Res Pract. 2014;25(1):e2511408.

  17. Knoblauch H. Focused ethnography. Forum Qual Soc Res. 2005;6(3). https://0-dx-doi-org.brum.beds.ac.uk/10.17169/fqs-6.3.20.

  18. Pink S, Morgan J. Short-term ethnography: intense routes to knowing. Symb Interact. 2013;36(3):351–61.

  19. NVivo qualitative data analysis software. Version 11. QSR International Pty Ltd; 2015.

  20. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

  21. Grøn S, Loblay V, Conte K, Green A, Innes-Hughes C, Milat A, et al. Key performance indicators for program scale-up and divergent practice styles: a study from NSW, Australia. Health Promot Int. 2020. https://0-doi-org.brum.beds.ac.uk/10.1093/heapro/daaa001.

  22. Conte KP, Shahid A, Grøn S, Loblay V, Green A, Innes-Hughes C, et al. Capturing implementation knowledge: applying focused ethnography to study how implementers generate and manage knowledge in the scale-up of obesity prevention programs. Implement Sci. 2019;14(1):91.

  23. Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5(1):80–92.

  24. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

  25. Vaismoradi M, Jones J, Turunen H, Snelgrove S. Theme development in qualitative content analysis and thematic analysis. J Nurs Educ Pract. 2016;6:100–10.

  26. Lucid Software Inc. Lucidchart. 2020.

  27. Farrell L, King L, Hardy LL, Howlett S. Munch and move in preschools. Summary report on implementation and evaluation, phase 1 (2008–2009). 2009.

  28. Dickens P. Tight-loose-tight. A complexity approach to innovation. Organ Dev Pract. 2016;48(4):27–31.

  29. Conte KP, Davidson S. Using a ‘rich picture’ to facilitate systems thinking in research coproduction. Health Res Policy Syst. 2020;18(1):14.

  30. Jensen PB, Jensen LJ, Brunak S. Mining electronic health records: towards better research applications and clinical care. Nat Rev Genet. 2012;13:395.

  31. Percival N, O'Donoghue L, Lin V, Tsey K, Bailie RS. Improving health promotion using quality improvement techniques in Australian indigenous primary health care. Front Public Health. 2016;4:53.

  32. Lewis CC, Mettert KD, Dorsey CN, Martinez RG, Weiner BJ, Nolen E, et al. An updated protocol for a systematic review of implementation-related measures. Syst Rev. 2018;7(1):66.

  33. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  34. Hawe P, Conte KP, Groen S, Loblay V, Green A, Innes-Hughes C, et al. Mock abstracts with mock findings: a device to catalyse production, interpretation and use of knowledge outputs in a university-policy-practice research partnership. Evid Policy. 2019. https://0-doi-org.brum.beds.ac.uk/10.1332/174426419X15679623018185.

Acknowledgements

We thank colleagues in the NSW Ministry of Health and Local Health Districts for their input and agreement to develop this project and their ongoing participation.

Funding

This work is funded by the National Health and Medical Research Council of Australia (NHMRC) through its partnership centre grant scheme (Grant ID: GNT9100001). NSW Health, ACT Health, The Commonwealth Department of Health, The Hospitals Contribution Fund of Australia, and the HCF Research Foundation have contributed funds to support this work as part of the NHMRC partnership centre grant scheme. Specific staff members of NSW Health contributed to this project as described in the “Competing interests” and “Authors’ contributions” sections, but the funding bodies played no role in the development of this publication or the project.

Author information

Authors and Affiliations

Authors

Contributions

KC and LM equally contributed to this analysis and in drafting the manuscript. KC led and supervised LM’s work on the analysis, and KC, VL, and SG conducted the ethnography and initial development of the codebook. PH and KC developed the protocol and study approach. PH conceptualized the study in response to critical reflection on PHIMS’ development by ST and JM. AG, AM, ST, LP, CIH, JM and MW participated in the design of the study. All authors made important contributions to the theoretical approach and interpreting insights. All authors read, revised and approved the final manuscript.

Corresponding author

Correspondence to Kathleen Conte.

Ethics declarations

Ethics approval and consent to participate

Research ethics approval was granted by the Royal Prince Alfred Hospital Human Research Ethics Committee (X16–0156), and by the research governance offices of each local health district. All participants provided written consent and all names have been changed.

Consent for publication

N/A

Competing interests

The NSW Ministry of Health is one of five funding partners of The Australian Prevention Partnership Centre (TAPPC), within which this project is situated. At the time of the study, AM, LP, ST and JM were employees of the NSW Ministry of Health, which funds and monitors performance of the Healthy Children Initiative (HCI), the program on which this research is focused and within which the participants were employed. AG and CIH were employees of the Office of Preventive Health, which is responsible for supporting the statewide implementation of HCI. MW oversees the health promotion unit at one of the study sites. LP, JM, MW, and ST were part of a team that designed, manages and maintains the PHIMS system described in this paper. KC, VL, SG, LM and PH were university-based researchers funded through TAPPC to conduct this research. To ensure the confidentiality of participants, identifying data about the individuals and the local health districts in which they work were only available to the university-based researchers. Only illustrative, de-identified quotes were shared with the research partners in the context of developing papers.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Additional information as per the consolidated criteria for reporting qualitative research (COREQ) checklist.

Additional file 2

Target practices for the Healthy Children Initiative programs, Live Life Well @ School and Munch and Move®, on which key performance indicator targets are based.

Additional file 3.

Description of codes from project codebook extracted for this analysis.

Additional file 4.

Summary of factors that influence the breadth and intensity of work in implementation.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Conte, K., Marks, L., Loblay, V. et al. Can an electronic monitoring system capture implementation of health promotion programs? A focussed ethnographic exploration of the story behind program monitoring data. BMC Public Health 20, 917 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12889-020-08644-2
