CONSORT to community: translation of an RCT to a large-scale community intervention and learnings from evaluation of the upscaled program

Abstract

Background

Translation encompasses the continuum from clinical efficacy to widespread adoption within the healthcare service and, ultimately, routine clinical practice. The Parenting, Eating and Activity for Child Health (PEACH™) program has previously demonstrated clinical effectiveness in the management of child obesity and has recently been implemented as a large-scale community intervention in Queensland, Australia. This paper aims to describe the translation of the evaluation framework from a randomised controlled trial (RCT) to a large-scale community intervention (PEACH™ QLD). Tensions between the RCT paradigm and implementation research are discussed, along with the evaluation challenges experienced in practice, the responses taken to overcome them, and key learnings for future evaluation conducted at scale.

Methods

The translation of evaluation from the PEACH™ RCT to the large-scale community intervention PEACH™ QLD is described. While the CONSORT Statement was used to report findings from two previous RCTs, the REAIM framework was more suitable for the evaluation of upscaled delivery of the PEACH™ program. Evaluation of PEACH™ QLD was undertaken during the project delivery period from 2013 to 2016.

Results

Experiential learnings from conducting the evaluation of PEACH™ QLD against the described evaluation framework are presented to inform the future evaluation of upscaled programs. Changes to the evaluation, in response to real-time changes in the delivery of the PEACH™ QLD Project, were necessary at stages during the project term. Key evaluation challenges encountered included collecting complete evaluation data from a diverse and geographically dispersed workforce and systematically collecting process evaluation data in real time to support program changes during the project.

Conclusions

Evaluation of large-scale community interventions in the real world is challenging and diverges from that of RCTs, which are rigorously evaluated within a more tightly controlled clinical research setting. Constructs explored in an RCT are inadequate for describing the enablers of and barriers to upscaled community program implementation. Methods for data collection, analysis and reporting also require consideration. We present a number of experiential reflections and suggestions, scarcely reported in the literature, for the successful evaluation of future upscaled community programs.

Trial registration

PEACH™ QLD was retrospectively registered with the Australian New Zealand Clinical Trials Registry on 28 February 2017 (ACTRN12617000315314).

Background

Implementation science is an emerging area of research that studies the translation of evidence-based interventions into routine practice and is particularly applicable to healthy lifestyle public health initiatives. The ideal translation continuum begins with an evidence-based randomised controlled trial (RCT) to establish efficacy, followed by community trials to determine clinical effectiveness (i.e. does the intervention work in the real-world setting) and, finally, adoption and routine delivery by health care delivery systems [1]. The translation of an intervention into the community setting is an iterative process requiring adaptation of the intervention itself, adoption of the program by the community, and implementation often under quite different conditions to the original RCT. Similarly, the evaluation of the intervention evolves with the move from RCT to a program delivered by health providers rather than researchers. Evaluation of an RCT aims to demonstrate efficacy, with little emphasis on process evaluation, which probes how the program is received, understood and utilised by participants. Evaluation priorities of community programs include program reach, uptake, fidelity and sustainability, which together indicate the feasibility of incorporating the program into routine health service delivery and account for the broader contextual social, political and economic factors associated with successful program implementation [2].

While there is a growing body of literature on changes to the implementation of healthy eating programs across the translation continuum, there is little dissemination of the corresponding changes in evaluation, or reflections on evaluation challenges once programs are upscaled. The aim of this paper is to describe the translation of the evaluation framework from a randomised controlled trial (RCT) to a large-scale community intervention, using the PEACH™ Program as a case study. This paper also provides the authors' experiential reflections on the tensions they experienced between the RCT and implementation research paradigms, together with examples and key learnings to benefit the future evaluation of upscaled programs and assist the implementation of programs along the translation pathway.

Methods

The PEACH™ program

Child obesity is a global public health crisis which requires the implementation of available and effective programs [3]. The recent World Health Organization (WHO) Report of the Commission on Ending Childhood Obesity recommends the provision of family-based, multicomponent, lifestyle weight-management services for children and young people who are obese [4]. The Parenting, Eating and Activity for Child Health (PEACH™) Program [5,6,7] is a parent-led, family-focussed program for families of overweight children which aligns with the aforementioned WHO recommendation. PEACH™ is a 6-month parent-led lifestyle program consisting of 10 × 90-min group sessions for parents, facilitated by a qualified health professional (e.g. dietitian, nutritionist) who has received training in the program. Between sessions 9 and 10 there are three one-on-one phone calls with facilitators, which provide individualised support and encouragement to maintain and implement additional healthy lifestyle changes. Children and any siblings attend concurrent group sessions which involve facilitated group-based physical activity and games. The PEACH™ Program aims to support parents in managing their children’s weight by taking a whole-of-family approach and providing information, confidence and skills regarding family nutrition and physical activity, parenting, and problem solving.

The translational pathway of the PEACH™ Program to date is depicted in Fig. 1. Briefly, the program has progressed from conception across the translation continuum over approximately 17 years. We observed the largest reductions in BMI z-score in the Healthy Eating and Activity through Positive Parenting (HELPP) and PEACH™ RCTs (8% and 10% reduction at 6 m, respectively) compared to the community trial PEACH™ IC and the state-wide population health program PEACH™ QLD (approximately 5–6% reduction), in line with previous reports that efficacy trials can over-estimate the effect of an intervention [8].

Fig. 1 Translational pathway of the PEACH™ program (with approximate timeline). Abbreviations: HELPP, Healthy Eating and Activity through Positive Parenting; m, month; PEACH™, Parenting, Eating and Activity for Child Health; y, year. Cited literature [5, 7, 20–33]

Translation of evaluation from PEACH™ RCT to PEACH™ QLD

An overview of the evaluation in each of the three selected PEACH™ iterations to date is shown (Table 1). PEACH™ RCT was registered as a clinical trial (ACTR00001104 and ACTRN12606000120572) and reported against the Consolidated Standards of Reporting Trials (CONSORT) Statement [9, 10]. PEACH™ IC and PEACH™ QLD were community trials reported against the REAIM framework. PEACH™ QLD was retrospectively registered with the Australian New Zealand Clinical Trials Registry on 28 February 2017 (ACTRN12617000315314).

Table 1 Overview and evaluation of three PEACH™ iterations to date

The evaluation framework and approach were carried forward from the RCT to the community setting with relatively few changes as the program progressed from demonstrating efficacy in an RCT to determining effectiveness in small- and large-scale community trials.

PEACH™ QLD

In 2012, the Queensland Department of Health awarded a tender to Queensland University of Technology (QUT) to deliver the PEACH™ family-focussed, parent-led, child weight management program to 1400 Queensland children in order to: 1) increase the capacity of participating families to adopt healthy lifestyles related to healthy eating and physical activity; and 2) promote healthy weight and weight management through sustainable behaviour change. Rather than a research project, PEACH™ QLD was first and foremost a community service delivery project. The team at QUT was responsible for the implementation and delivery of the PEACH™ QLD Project (Project Implementation Team). Flinders University (Adelaide, South Australia) was subcontracted by QUT as external evaluator of the program (Evaluation Team), with the funder prescribing the required effectiveness outcomes in the tender. The evaluation of PEACH™ QLD was funded by the service delivery tender awarded to QUT.

The evaluation framework, including the research questions and tools proposed by the QUT and Flinders University teams, was subject to approval by the funder, the Queensland Department of Health. Amendments required by the funder were limited to the use of standardised demographic questions for comparability with population monitoring and surveillance conducted by the Department. No restrictions were placed on the reporting of data; however, communication of results during the funding period was also subject to approval. As evaluators of PEACH™ QLD, Flinders University was responsible to the Project Implementation Team at QUT. Ultimately, QUT was responsible for the delivery of the project to Queensland Health. Queensland Health received interim reports on program outcomes approximately 6-monthly. Along with project updates, consultation with facilitators and feedback from families, these regular evaluation reports both precipitated and informed changes to program implementation and evaluation during the project lifecycle, which were approved by the funder.

PEACH™ originated at Flinders University, and so the evaluators had intimate knowledge of the program and had conducted the first RCT and small-scale community trial. Comprehensive evaluation of program outcome, impact and process indicators, as well as assessment of adherence to program protocol and implementation, was tailored from the RCT evaluation framework to fit the upscaled program. In contrast to the RCT, parent-completed questionnaires were administered online using Survey Monkey. This modification was designed to give the evaluators immediate data access and to reduce the administrative time and cost associated with printing, distributing, collecting and returning hard copies, as well as data entry. Tablet computers were provided to parent facilitators to allow parents to complete questionnaires at the first session if not done beforehand.

The evaluation of the PEACH™ QLD Project was adapted to the REAIM framework components of Reach (an individual-level measure of patient participation and representativeness); Effectiveness (the program’s success rate at an individual level); Adoption (program acceptance/uptake at the organisational level); Implementation (fidelity of the program to the original RCT intervention, measured at the organisational level); and Maintenance (long-term effects at the individual and organisational level) [11]. Table 2 briefly defines the REAIM framework components and, within each of these dimensions, details the outcome, impact and process evaluation collected during the PEACH™ QLD Project. This paper does not report the outcomes of these measures but instead provides experiential learnings and reflections from conducting the evaluation described in the framework in a real-world service delivery project.

Table 2 Evaluation data collected for the PEACH™ QLD Project against the REAIM framework dimensions

Results

The evaluation framework of PEACH™ QLD was designed to meet the needs of the funder and, as much as possible, reflected its evolution from the RCT and community trial settings, comprising a range of outcome, impact and process evaluation indicators. It also included a range of broader environmental- and systems-based measures, reflecting the Team’s experience in evaluating OPAL, South Australia’s settings-based, community-wide childhood obesity prevention initiative [12]. Despite this broader evaluation design, there were some tensions between the evaluation framework and the implementation/program delivery model which were not anticipated. This required a certain degree of reactivity to be introduced to the evaluation, which in some cases presented significant practical and operational obstacles. For example, while community lifestyle programs need to be dynamic and have the flexibility to adapt to the needs of their participants during implementation, RCT intervention protocols are more strictly adhered to and do not normally change. In responding to participant needs and to improve participant engagement, there were iterative changes to program delivery, including organisational settings, session scheduling, recruitment, and the order of session delivery. Some of these changes required modifications to the evaluation tools and consequent changes to the coding of variables collected and the syntax used for analyses. Furthermore, the governance under which the evaluation team operated required these changes to be submitted to multiple ethics committees as variations. Such changes were largely unanticipated, although in retrospect they were necessary and appropriate actions to meet the needs of participants, service providers and the funding body. Achieving a balance between an evaluation framework flexible and adaptable enough to respond to change yet robust enough to maintain integrity was challenging. Additional learnings from key evaluation challenges encountered and actions undertaken during PEACH™ QLD are described in Table 3.

Table 3 Challenges arising from differences between RCT and implementation research paradigms

Comprehensive evaluation is unlikely to be an expectation of participants in a community program

In large-scale community trials, a trade-off exists between the evaluation data required by the funder and the actual or perceived response burden on participants. From the PEACH™ RCT to community iterations of PEACH™, we simplified data collection in order to better meet participant expectations, minimise participant burden and reduce evaluation costs (e.g. the omission of invasive blood sample collection and expensive blood biochemical analyses). PEACH™ QLD enrollees were participating in a healthy lifestyle program in the community setting and so were unlikely to consider themselves enrolled in a research project, as participants in an RCT would. Furthermore, there was no requirement to provide data in order to participate, nor any incentive to complete evaluation. Evaluation was hence simplified in later iterations of the program to collect only essential data, in order to minimise the influence of evaluation on program engagement and attendance. A specific example is the replacement of the lengthy CLASS questionnaire (Table 2) with the much shorter 6-item tool from the Youth Risk Behaviour Surveillance System questionnaire [13, 14]. We observed during the program that those who commenced questionnaires, including online, typically answered all or most questions, suggesting that questionnaire length was acceptable.

Duplication of data collection and complicated study ID numbers may unnecessarily increase burden on participants

Some data were collected routinely during program enquiry and enrolment for the purposes of determining eligibility and planning groups, including demographic information (child age, gender, residential postcode and parent-reported child height and weight). These data were also collected as part of the formal evaluation, and this duplication across multiple time points may have placed an unnecessary burden on participants. Additionally burdensome was the 9-digit ID number conceived at the beginning of the Project to encode program delivery information (wave, site, group, facilitator, family and child). Correct transcription of the 9-digit ID number by participants and facilitators proved difficult and produced a number of errors. To maximise usable data, child date of birth was added to questionnaires to facilitate data matching, and even this was inaccurately recorded a number of times. Future programs should consider simple ID numbers, with at least two other variables suitable for cross-checking, such as date of birth and sex, as sketched below. Alternatively, data pertaining to program uptake and success (i.e. conversion from enquiry to enrolment, number of groups run, settings) may be better gathered by existing service provider information systems, where applicable, with evaluation data linked to a Medicare number or My Health Record as part of the National Digital Health Strategy. This would also give service providers ready access to the data, which could be used internally to justify ongoing service delivery in their catchment or for continuous quality improvement. Finally, we could have allocated participant ID numbers a posteriori, matching data to names rather than numbers, which may have placed greater focus on Program delivery and less emphasis on the research component of the Project.
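To make the suggested cross-check concrete, the sketch below pairs baseline and follow-up records on a simple ID and then verifies date of birth and sex, flagging mismatches for manual review rather than silently matching them. It is illustrative only: the field names and values are hypothetical, not the Project's actual data structures or tooling.

```python
from datetime import date

# Hypothetical records; field names and values are illustrative only.
baseline = [
    {"id": "1042", "dob": date(2007, 3, 14), "sex": "F", "bmi_z": 2.1},
    {"id": "1043", "dob": date(2008, 11, 2), "sex": "M", "bmi_z": 1.8},
]
follow_up = [
    {"id": "1042", "dob": date(2007, 3, 14), "sex": "F", "bmi_z": 1.9},
    {"id": "1043", "dob": date(2008, 1, 2), "sex": "M", "bmi_z": 1.7},  # DOB mistyped
]

def match_records(baseline, follow_up):
    """Pair records on a simple ID, then cross-check date of birth and sex
    so transcription errors are flagged rather than silently matched."""
    follow_up_by_id = {rec["id"]: rec for rec in follow_up}
    paired, flagged = [], []
    for rec in baseline:
        other = follow_up_by_id.get(rec["id"])
        if other is None:
            continue  # no follow-up data collected for this child
        if rec["dob"] == other["dob"] and rec["sex"] == other["sex"]:
            paired.append((rec, other))
        else:
            flagged.append((rec, other))  # refer for manual review
    return paired, flagged

paired, flagged = match_records(baseline, follow_up)
print(f"{len(paired)} clean pair(s); {len(flagged)} flagged for review")
```

The design point is that a short ID plus two independent cross-check variables catches transcription errors that a long composite ID invites.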

Online data collection

Online data collection was a key strategy to maximise the amount of data collected, reduce facilitator burden, and enable ready access to evaluation data for monitoring and reporting. Completion was closely monitored and parent facilitators were informed of completion in real time. Supplying an online link to the baseline questionnaire allowed parents to provide baseline data before their first session. This led to the collection of some baseline data, including parent and family socio-demographics, from the 24% of enrolled families who did not attend any session. While these data were limited, they proved valuable in understanding the profile of families who enrolled but did not attend. Pre-program completion of questionnaires may have served as an early indication of engagement and commitment to the program.
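As a sketch of the kind of real-time completion monitoring described above (the family identifiers, dates and one-week window are hypothetical, not the Project's actual system), facilitators could be sent a list of families whose first session is imminent but whose baseline questionnaire is still outstanding:

```python
from datetime import date

# Hypothetical enrolment data: family ID -> date of first scheduled session.
first_sessions = {"fam01": date(2015, 5, 4), "fam02": date(2015, 5, 4)}
completed_baseline = {"fam01"}  # families who finished the online questionnaire

def outstanding_baselines(first_sessions, completed, today, window_days=7):
    """Families whose first session falls within the window but whose
    baseline questionnaire has not yet been completed online."""
    return sorted(
        family for family, session_date in first_sessions.items()
        if family not in completed
        and 0 <= (session_date - today).days <= window_days
    )

# Facilitators could follow these families up, or offer a tablet at session 1.
print(outstanding_baselines(first_sessions, completed_baseline, date(2015, 5, 1)))
# -> ['fam02']
```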

Where baseline evaluation was not completed online before the sessions, trained facilitators played a supportive role in data collection at sessions, as the geographical spread of groups precluded centralisation of data collection. This contrasts with clinical research settings, where data collection during sessions or after program commencement is not appropriate, as it can distract from the delivery of the intervention or fail to serve as a true baseline measurement. Tablets provided for data collection at sessions were inefficient, as devices had to be shared between group members and many participants were unfamiliar with their use. However, given the service delivery nature of the program, an additional pre-program session for data collection may not have been acceptable to families and would have been a significant additional cost for the Project. Future programs need to develop their own strategies to optimise (and prioritise where necessary) the collection of data during program delivery.

Missing data are to be expected in real world implementation research

Evaluation data, particularly at the child level, were largely uncollectable for those families who did not attend any program session (24%). Despite this non-attendance rate, the parental response rate to questionnaires was reasonable (75%) and questionnaire completeness was modest (74%), leading to varying levels of missing outcome data. Of the children enrolled, 41% did not have any paired outcome data, largely due to non-attendance at the final session in particular. Data collection was the responsibility of session facilitators (including health service clinicians and casually contracted professionals) and may not have been part of their existing routine practice. To ensure more complete data collection in future, the importance of this task should be emphasised and supported, so as to inform future practice and meet the ethical obligations of sharing and disseminating findings via peer-reviewed publications.
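A paired-data figure of the kind quoted above can be tallied with a simple set intersection over child identifiers at each time point. The sketch below uses invented IDs and is not the Project's analysis code:

```python
def paired_completeness(enrolled, baseline, final):
    """Proportion of enrolled children with outcome data at both baseline
    and the final session; inputs are sets of child identifiers."""
    paired = baseline & final
    return len(paired) / len(enrolled) if enrolled else 0.0

# Toy example with invented IDs: 2 of 5 enrolled children have paired data.
enrolled = {"c1", "c2", "c3", "c4", "c5"}
baseline = {"c1", "c2", "c3", "c4"}   # collected at or before session 1
final = {"c1", "c2"}                  # collected at the final session
print(f"{paired_completeness(enrolled, baseline, final):.0%} have paired outcome data")
# -> 40% have paired outcome data
```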

In addition, not all anticipated evaluation (Table 2) could be undertaken within the Project lifecycle. We had planned stakeholder interviews in the evaluation framework, under the Adoption domain of REAIM, to investigate setting-specific barriers and enablers to program adoption within the health sector. However, during delivery of PEACH™ QLD there were changes in government policy leading to changes to the public health nutrition workforce [15]. We considered it untimely to conduct interviews on barriers and enablers of the project during this period of health care system flux and ensuing job uncertainty. Over the course of the project term, staff who delivered the Program were increasingly casual employees of QUT, operating outside the health sector. The timing and prioritisation of evaluation data collection (cf. program implementation, which is the priority) within a health care system should be carefully and cautiously appraised when systems are vulnerable and undergoing change.

Discussion

While external evaluation has several strengths for research rigour and impartial expert assessment, it presented some challenges in the present study, particularly as the implementation and evaluation teams were organisationally and geographically separate, based in Brisbane, Queensland and Adelaide, South Australia, respectively. The main drivers for the Brisbane-based Project Implementation Team were ensuring a smooth-running program and satisfying the tender milestone requirements, principally child enrolment numbers, which were tied to financial payments. In contrast, the main driver for the Evaluation Team was the amount and quality of data collected, despite the Team having little direct connection to program facilitators.

Overall, attendance was a concern and the catalyst for modifications to program delivery, in particular a change from fortnightly to weekly delivery of sessions 1–9. It was a challenge for the Evaluation Team to provide process evaluation data, in particular participant concerns, program satisfaction and acceptability, in real time to support this change, as these data were only captured in the follow-up questionnaire at the end of the 6-month program. In response, the Project Implementation Team drove additional qualitative research to explore factors associated with enrolment [16] and engagement, to better understand program access, need and attendance. While not considered formal evaluation data, other information captured by the Project Implementation Team during administrative phone calls at enquiry and drop-out provided valuable insights into engagement with the program. Future programs should recognise and establish mechanisms for systematically capturing these data, which informed program changes and program sustainability. Both the Project Implementation and Evaluation Teams worked hard to maintain a cohesive approach through regular meetings and good communication. Nevertheless, our experience from PEACH™ QLD has led us to question whether evaluation is best performed by two groups: process evaluation by the Project Implementation Team and outcome evaluation by an external evaluator. Implementation staff require data in a timely manner to facilitate program service delivery; however, the present study consisted only of pre-post evaluation and the evaluation framework did not permit data collection or reporting in real time. During planning, teams need to work together to ensure clinical, service and system factors are adequately captured by the evaluation framework. During delivery, implementation staff must review process data sources and collection tools to ensure sufficient data are captured to address delivery challenges.

Implementation research is a relatively new and rapidly emerging field, and theoretical approaches and frameworks underpinning implementation science have become more common over the past decade. There are now a number of conceptual frameworks for implementation, including REAIM [11], Proctor [17, 18] and the Consolidated Framework for Implementation Research (CFIR) [19]. While there is overlap between these frameworks, some elements or domains may be of greater importance in one implementation project than in another. In practice, these models can be tailored to suit a specific implementation research program. In the PEACH™ QLD project, we largely carried forward the evaluation framework from the RCT and retrofitted it to the REAIM framework, which we felt was the best fit in 2012. While the Proctor and CFIR models were both published in 2009, there was at that time little exemplary literature regarding their use. On reflection, choosing the reporting framework a priori may have prompted us to capture additional data, e.g. formal data on those who enquired about the program but did not enrol or attend, in order to describe ‘Reach’ in terms of those who did not participate as well as those who did. Ultimately, the scope of evaluation and project deliverables were negotiated with the funder, and the budget for evaluation was based on these deliverables. The funder hence approved the evaluation framework based on the measurement and reporting of program outcomes to demonstrate effectiveness, without concern for the selection of a conceptual framework in which measures of effectiveness were embedded.

Conclusion

In order to present a case for a program to be implemented long-term or adopted as part of routine practice, it is critical that there is solid evidence, which requires the collection of comprehensive and valid data. Collection of these data requires adequate funding for health service surveillance or specific program-targeted funding, as appropriate. Evaluation of upscaled programs in the real world is challenging because, unlike gold-standard efficacy studies in clinical or research environments, behavioural interventions and their evaluation can evolve over time to meet participant need and fit within changing public health systems and settings. There may be some loss of control when upscaling, potentially affecting participant engagement, intervention fidelity and hence evaluation rigour. Ultimately, there can be many trade-offs in translation research, and researchers who find themselves working in this space need to be prepared to settle at times for something less than ideal.

Understanding the practicalities of data collection and management may help others to design their data collection tools and processes to optimise data quality and the ease and efficiency of data collection. Our learnings from the evaluation of PEACH™ as a service delivery program in Queensland provide new knowledge which can help inform the evaluation of future evidence-based, family-focussed interventions for children and other upscaled programs.

Abbreviations

CONSORT:

Consolidated Standards of Reporting Trials

HELPP:

Healthy Eating and Activity through Positive Parenting

HL:

healthy lifestyle

HREC:

Human Research Ethics Committee

P:

parenting

PEACH™:

Parenting, Eating and Activity for Child Health

QLD:

Queensland

RCT:

Randomised Controlled Trial

REAIM:

Reach, Effectiveness, Adoption, Implementation, and Maintenance

References

  1. Drolet BC, Lorenzi NM. Translational research: understanding the continuum from bench to bedside. Transl Res. 2011;157:1–5.

  2. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M, Medical Research Council Guidance. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

  3. Karnik S, Kanekar A. Childhood obesity: a global public health crisis. Int J Prev Med. 2012;3:1–7.

  4. World Health Organization. Report of the Commission on Ending Childhood Obesity. 2016. p. i-52.

  5. Golley RK, Perry RA, Magarey A, Daniels L. Family-focused weight management program for five- to nine-year-olds incorporating parenting skills training with healthy lifestyle information to support behaviour modification. Nutrition & Dietetics. 2007;64:144–50.

  6. Magarey A, Gehling R, Haigh R, Daniels L. Key elements for the nutrition component of child overweight management interventions in five-to nine-year-old children. Nutrition & Dietetics. 2004;61:183–5.

  7. Magarey AM, Perry RA, Baur LA, Steinbeck KS, Sawyer M, Hills AP, Wilson G, Lee A, Daniels LA. A parent-led family-focused treatment program for overweight children aged 5 to 9 years: the PEACH RCT. Pediatrics. 2011;127:214–22.

  8. Singal AG, Higgins PD, Waljee AK. A primer on effectiveness and efficacy trials. Clin Transl Gastroenterol. 2014;5:e45.

  9. Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c332.

  10. Consort Group. Consolidated Standards of Reporting Trials: Transparent reporting of trials. 2016. http://www.consort-statement.org. Accessed 20 June 2016.

  11. Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

  12. Leslie E, Magarey A, Olds T, Ratcliffe J, Jones M, Cobiac L. Community-based obesity prevention in Australia: background, methods and recruitment outcomes for the evaluation of the effectiveness of OPAL (obesity prevention and lifestyle). Advances in Pediatric Research. 2015:23.

  13. Heath GW, Pate RR, Pratt M. Measuring physical activity among adolescents. Public Health Rep. 1993;108:42–6.

  14. Prochaska JJ, Sallis JF, Long B. A physical activity screening measure for use with adolescents in primary care. Archives of Pediatrics & Adolescent Medicine. 2001;155:554–9.

  15. Vidgen HA, Adam M, Gallegos D. Who does nutrition prevention work in Queensland? An investigation of structural and political workforce reforms. Nutr Diet. 2017;74(1):88–94.

  16. Davidson K, Vidgen H. Why do parents enrol in a childhood obesity management program?: a qualitative study with parents of overweight and obese children. BMC Public Health. 2017;17:159.

  17. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38:65–76.

  18. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health Ment Health Serv Res. 2009;36:24–34.

  19. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  20. Magarey AM, Daniels LA, Boulton TJ. Prevalence of overweight and obesity in Australian children and adolescents: reassessment of 1985 and 1995 data against new standard international definitions. Med J Aust. 2001;174:561–4.

  21. Magarey AM, Daniels LA. Comparison of Australian and US data on overweight and obesity in children and adolescents. Med J Aust. 2001;175:500–1.

  22. Magarey AM, Daniels LA, Boulton TJ, Cockington RA. Predicting obesity in early adulthood from childhood and parental obesity. Int J Obes Relat Metab Disord. 2003;27:505–13.

  23. Golan M, Fainaru M, Weizman A. Role of behaviour modification in the treatment of childhood obesity with the parents as the exclusive agents of change. Int J Obes. 1998;22:1217–24.

  24. Golan M, Weizman A, Apter A, Fainaru M. Parents as the exclusive agents of change in the treatment of childhood obesity. Am J Clin Nutr. 1998;67:1130–5.

  25. Israel AC, Guile CA, Baker JE, Silverman WK. An evaluation of enhanced self-regulation training in the treatment of childhood obesity. J Pediatr Psychol. 1994;19:737–49.

  26. Israel AC, Stolmaker L, Andrian CAG. The effects of training parents in general child management-skills on a behavioral weight-loss program for children. Behav Ther. 1985;16:169–80.

  27. Wadden TA, Stunkard AJ, Rich L, Rubin CJ, Sweidel G, Mckinney S. Obesity in black-adolescent girls - a controlled clinical-trial of treatment by diet, behavior-modification, and parental support. Pediatrics. 1990;85:345–52.

  28. Summerbell CD, Ashton V, Campbell KJ, Edmunds L, Kelly S, Waters E. Interventions for treating obesity in children. Cochrane Database Syst Rev. 2003;3:CD001872.

  29. Golan M, Crow S. Targeting parents exclusively in the treatment of childhood obesity: long-term results. Obes Res. 2004;12:357–61.

  30. Gehling RK, Magarey AM, Daniels LA. Food-based recommendations to reduce fat intake: an evidence-based approach to the development of a family-focused child weight management programme. J Paediatr Child Health. 2005;41:112–8.

  31. Golley RK, Magarey AM, Daniels LA. Children's food and activity patterns following a six-month child weight management program. Int J Pediatr Obes. 2011;6:409–14.

  32. Golley RK, Magarey AM, Baur LA, Steinbeck KS, Daniels LA. Twelve-month effectiveness of a parent-led, family-focused weight-management program for prepubertal children: a randomized, controlled trial. Pediatrics. 2007;119:517–25.

  33. Magarey A, Hartley J, Perry R, Golley R. Parenting eating and activity for child health (PEACH™) in the community: (PEACH™ IC): translating research to practice. Public Health Bulletin South Australia. 2011;8:58–61.

  34. Waters E, Salmon J, Wake M, Wright M, Hesketh K. Australian authorised adaptation of the child health questionnaire - the interpretation guide. Parent/proxy form CHQ PF50&PF28. Melbourne: Centre for Community Health, Royal Children's Hospital; 1999.

  35. de Onis M, Onyango AW, Borghi E, Siyam A, Nishida C, Siekmann J. Development of a WHO growth reference for school-aged children and adolescents. Bull World Health Organ. 2007;85:660–7.

  36. Kuczmarski RJ, Ogden CL, Guo SS, Grummer-Strawn LM, Flegal KM, Mei Z, Wei R, Curtin LR, Roche AF, Johnson CL. CDC growth charts for the United States: methods and development. Vital Health Stat 11. 2002;(246):1–190.

  37. Freeman JV, Cole TJ, Chinn S, Jones PR, White EM, Preece MA. Cross sectional stature and weight reference curves for the UK, 1990. Arch Dis Child. 1995;73:17–24.

  38. Cole TJ, Bellizzi MC, Flegal KM, Dietz WH. Establishing a standard definition for child overweight and obesity worldwide: international survey. BMJ. 2000;320:1240–3.

  39. Cole TJ, Flegal KM, Nicholls D, Jackson AA. Body mass index cut offs to define thinness in children and adolescents: international survey. BMJ. 2007;335:194.

  40. NHMRC. Eat for Health: Australian dietary guidelines. Department of Health and Ageing, editor. Canberra: Commonwealth of Australia; 2013.

  41. Magarey A, Golley R, Spurrier N, Goodwin E, Ong F. Reliability and validity of the Children's dietary questionnaire; a new tool to measure children's dietary patterns. Int J Pediatr Obes. 2009;4:257–65.

  42. Telford A, Salmon J, Jolley D, Crawford D. Reliability and validity of physical activity questionnaires for children: the Children's leisure activities study survey (CLASS). Pediatr Exerc Sci. 2004;16:64–78.

  43. Stevens K. Developing a descriptive system for a new preference-based measure of health-related quality of life for children. Qual Life Res. 2009;18:1105–13.

  44. Stevens K. Assessing the performance of a new generic measure of health-related quality of life for children and refining it for use in health state valuation. Appl Health Econ Health Policy. 2011;9:157–69.

  45. Lucas N, Nicholson JM, Maguire B. The longitudinal study of Australian children annual statistical report 2010. Canberra: Australian Government, Australian Institute of Family Studies; 2011.

  46. Becker MH, Maiman LA, Kirscht JP, Haefner DP, Drachman RH. The health belief model and prediction of dietary compliance: a field experiment. J Health Soc Behav. 1977;18:348–66.

  47. Daddario DK. A review of the use of the health belief model for weight management. Medsurg Nurs. 2007;16:363–6.

  48. Daniels LA, Magarey A, Battistutta D, Nicholson JM, Farrell A, Davidson G, Cleghorn G. The NOURISH randomised control trial: positive feeding practices and food preferences in early childhood - a primary prevention program for childhood obesity. BMC Public Health. 2009;9:387.

Acknowledgements

The authors acknowledge all staff within the Flinders University Evaluation Team and Queensland University of Technology Project Implementation Team who contributed to the evaluation and implementation of the PEACH™ Program in Queensland from 2013 to 2016. The authors are grateful to all facilitators, parents, and children for their participation in PEACH™ QLD.

Funding

PEACH™ Queensland is funded by the Queensland Government through the Department of Health (previously funded by the National Partnership Agreement on Preventive Health - Healthy Children’s Initiative until June 2014). The views expressed in this publication do not necessarily reflect those held by the Queensland Government or the Queensland Government Department of Health. The funder was not involved in the design of the study, the collection, analysis and interpretation of data, or writing the manuscript. However, the funder agreed to changes in service delivery (implementation) and evaluation during the life of the Project.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Author information

Contributions

AMM, LAD and RAP created PEACH™ and conceived the PEACH™ QLD evaluation framework. AMM, CJM, JM and LC managed the collection and analysis of PEACH™ QLD Project data. CJM and JM drafted the manuscript. All authors contributed to all reviews and revisions of the manuscript. All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Carly Jane Moores.

Ethics declarations

Ethics approval and consent to participate

Ethics approval for this study was provided by the Queensland Children’s Health Services Human Research Ethics Committee (HREC) (Reference HREC/13/QHC/25); Queensland University of Technology University HREC (Reference 1300000633); Flinders University Social and Behavioural Research Ethics Committee (Reference 6231); and Central Queensland University HREC (Reference H13/09–173). All ethics applications were submitted with an Australian National Ethics Application Form (Reference AU/1/D1F2110). Written informed consent was obtained from human subjects, who were parents and facilitators in PEACH™ QLD. Parents provided written consent on behalf of their child/children.

Consent for publication

Not applicable.

Competing interests

AMM, LAD and RAP are inventors of the PEACH™ Program and have received licence money for the use of PEACH™ materials in the PEACH™ QLD Project. All other authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Moores, C.J., Miller, J., Perry, R.A. et al. CONSORT to community: translation of an RCT to a large-scale community intervention and learnings from evaluation of the upscaled program. BMC Public Health 17, 918 (2017). https://0-doi-org.brum.beds.ac.uk/10.1186/s12889-017-4907-2
