
Expert opinions on good practice in evaluation of health promotion and primary prevention measures related to children and adolescents in Germany

Abstract

Background

Determining what constitutes “good practice” in the measurement of the costs and effects of health promotion and disease prevention measures is of particular importance. The aim of this paper was to gather expert knowledge on (economic) evaluations of health promotion and prevention measures for children and adolescents, in particular on their practical importance, the determinants of project success, meaningful parameters for evaluations, and supporting factors, but also on problems in their implementation. This information is targeted at people responsible for the development of primary prevention or health promotion programs.

Methods

Partially structured open interviews were conducted by two interviewers and transcribed, paraphrased, and summarized for further use. Eight experts took part in the interviews.

Results

The interviewed experts saw evaluation as a useful tool to establish the effects of prevention programs, to inform program improvement and further development, and to provide arguments for decision making. The respondents thought that the determinants of a program’s success were effectiveness with evidence of causality, the cost-benefit relation, target-group reach, and sustainability. It was considered important that both hard and soft factors be included in an evaluation; costs were mentioned by only one expert. According to the experts, obstacles to evaluation were a lack of resources, additional labor requirements, and the evaluators’ unfamiliarity with a program’s contents. It was recommended to consider the evaluation design before a program is launched, to co-operate with the people involved in a program, and to make use of existing structures.

Conclusion

While this study represents only a partial view of expert knowledge, it identifies important points to consider when developing evaluations of prevention programs. By taking these points into account, researchers could advance towards a more comprehensive approach to evaluating measures targeting children and adolescents.


Background

In recent years, health promotion and prevention programs have increasingly been implemented in Germany as well as in other countries. Because of limited financial resources, only effective intervention measures should be adopted and, if possible, only the most cost-effective ones [1]. However, the question of the costs and effects of health promotion and disease prevention measures has until now barely been answered, especially for children and adolescents and for settings in Germany. While there are many overviews of effectiveness in the international literature, only some overviews of cost-effectiveness can be found, either for programs directed at adults [1, 2] or for distinct areas of health promotion and prevention, e.g., workplace-related programs [3,4,5].

Although health promotion and prevention measures are often directed at children and adolescents, even fewer attempts have been made to take into account the cost-effectiveness of such measures for this group. Two recent systematic reviews on this topic [6, 7] found only two economic evaluations of health promotion programs targeting children and adolescents in Germany [8, 9]. A much higher number of evaluations would have been expected, as, for example, a relatively high number of prevention programs is listed in the “KNP-Projektdatenbank”, which includes all projects funded within the German interdisciplinary Prevention Research Funding Program and thus gives an excellent overview of the German prevention landscape.

Given that only few evaluations of primary prevention and health promotion measures directed at children and adolescents appear to be conducted, and that those that were conducted often omitted costs, an important aspect of evaluation, the question arises of what constitutes good practice in evaluations of primary prevention and health promotion measures in children and adolescents.

Therefore, we aimed to explore expert opinions on good practice in measuring the costs and effects of health promotion and primary prevention measures for children and adolescents, and to point out important aspects of implementing such evaluations.

Methods

Interviews were used to gather this expert knowledge and to look for unknown obstacles to evaluation in programs involving children, as these are hard to find in published studies [10, 11]. To itemize the aspects of implementation, the following items were incorporated in the interviews: the practical importance, the determinants of project success, meaningful parameters to be included in evaluations, supporting factors, and problems in the practical implementation of evaluations. This overview is intended to present existing knowledge about the most important aspects of implementing (economic) evaluations and to emphasize their importance for people responsible for the development of primary prevention or health promotion programs (for children and adolescents).

Research approach

Expert interviews were chosen as the basic methodology [12]. Since it seemed appropriate in the context of this study that experts report their own opinions on the subject, open interviews were conducted. To give the experts’ answers a similar structure a priori, the interviewers followed a guideline with possible questions (see section “Appendix”), from which they deviated if necessary (e.g., if the interviewee requested it). A partially standardized structure for the expert interviews was therefore chosen. As those interviewed were all from Germany, the interviews were held in German and the results are translated for this article.

The study-specific definition of “expert” was based on criteria by Meuser and Nagel [13]. In this study, persons were considered experts if they worked in the area of health promotion/primary prevention (especially for children and/or adolescents) and, additionally, had privileged access to information on (economic) evaluation. This is also partly reflected in question A2 of the guideline (role of evaluation in daily work) and presented in the results section.

Study process

Interviews were conducted and interpreted by the authors during July and August 2011. Both interviewers have a background in health economics and business research. The interviews lasted approximately 35 min on average, ranging from 12 to 100 min in total.

We conducted a problem analysis based on a scoping review of published evaluations of prevention programs and of documentation of existing prevention programs. One finding was that a large share of existing prevention programs directed at children and adolescents did not involve any type of evaluation (as could be seen from the KNP-Projektdatenbank). Programs for which a scientific evaluation was published often addressed only a measure’s effects, while costs were omitted. Following the problem analysis, we developed an interview guideline, which is described below. To achieve comparability of the answers, the guideline was designed to produce a high level of standardization: the content and order of the questions were fixed. However, free formulation, reacting to requests, and ad hoc questions were possible. Such a semi-standardized procedure is considered suitable especially in areas where the expert serves as a source of information that would otherwise be difficult or even impossible to access; through free formulation, experts are given the possibility of introducing new aspects [13].

The guideline can generally be subdivided into probing questions (A) and general guideline questions (B–C) [14]. The probing questions, which assessed the respondents’ understanding of the term evaluation, the role evaluations played in their daily work, and how they thought evaluations were perceived in practice, can be seen as introductory questions. They helped the interviewers gain a more comprehensive view of the respondents and led the respondents to the core questions of the interview in a comparable manner. The general guideline questions were aimed directly at aspects connected with “good practice” and therefore addressed the study question. In particular, we asked about the benefits of conducting evaluations, measures of a program’s success, how such success could be measured, and factors promoting or impeding evaluations.

Participants

As prevention and health promotion projects are usually regionally anchored, people from the responsible regional institutions for prevention and health promotion were contacted. In total, 11 individuals from such institutions were contacted with a request for an interview. If the contacted individuals could not participate, they were asked to name a suitable replacement to be interviewed instead. In all interviews, two interviewers were present.

Eight respondents agreed to participate in the interview, giving a participation rate of 73% (8/11). Two respondents were from public health insurance companies, four were from governmental and non-governmental research institutes, and two were from local government.

Out of a total of eight interviews, four were conducted on-site in Munich, Germany. Another four interviews were conducted by telephone because the respondents were working in different parts of Germany.

All experts had academic backgrounds and were working in the area of health promotion and primary prevention or child and youth welfare.

Evaluation strategy

The aim of the strategy is to emphasize supra-individual similarities, i.e., shared knowledge, by comparing the texts of the interviews. The data analysis was conducted without the use of software and is based on an interpretative evaluation strategy for guided expert interviews [13, 15].

The procedure of the evaluation strategy is described below. The individual intermediate results of these steps are shown in the tables in this work.

The first step in the evaluation was the transcription of the acoustically recorded interviews. In the context of this study, nonverbal elements of the interviews were not taken into account. In order not to lose information and to avoid any corresponding distortion, a complete transcription of the recorded interviews was performed. This approach has the additional advantage that future studies with other objectives remain possible.

The second step was paraphrasing the individual interviews to condense the available information, thus reducing the complexity of the data. However, paraphrasing entails the danger of selective reproduction.

In a further step, the paraphrased parts of the interviews were assigned to topic-specific thematic headings. Comparable passages in the different interviews were identified and the corresponding headings were standardized.

Finally, a generalization was made by categorizing the headings created in the previous step that were in agreement across interviews. Within this step, the main statements of the experts were determined independently by the two interviewers.
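To make this stepwise reduction more concrete, the following minimal sketch (in Python) mirrors the logic of the last two steps; the paraphrases and headings are invented examples, not actual interview data. Comparable passages are collected under standardized headings, and only headings recurring across interviews are retained as shared, supra-individual statements.

    from collections import defaultdict

    # Hypothetical paraphrases, already condensed from full transcripts (step 2)
    paraphrases = [
        {"interview": 1, "heading": "lack of resources", "text": "Evaluation costs time and money."},
        {"interview": 2, "heading": "lack of resources", "text": "Staff cannot handle the extra documentation."},
        {"interview": 3, "heading": "evaluation as threat", "text": "Programs fear negative results."},
    ]

    # Step 3: collect comparable passages under standardized headings
    by_heading = defaultdict(list)
    for p in paraphrases:
        by_heading[p["heading"]].append(p)

    # Step 4: keep only headings appearing in more than one interview,
    # i.e., statements in agreement across interviews
    shared = {
        heading: sorted({p["interview"] for p in passages})
        for heading, passages in by_heading.items()
        if len({p["interview"] for p in passages}) > 1
    }

    for heading, interviews in shared.items():
        print(f"{heading}: mentioned in interviews {interviews}")

In the actual study this comparison was performed interpretatively by hand, not by software; the sketch only reflects the structure of the procedure.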

Results

In the following passages, the major results are summarized and, at the end of the results section, an overview of the findings is given in Table 1. Detailed information on the questions and on the paraphrasing of the transcribed interviews by two independent interviewers can be found in Tables 2 and 3.

Table 1 Overview of expert responses to the interview questions
Table 2 Synopsis of the interviews transcribed, interviewer 1
Table 3 Synopsis of the interviews transcribed, interviewer 2

Introductory questions and questions regarding the background of the experts

A1. Meaning of the term “evaluation”

Overall, the respondents had a relatively similar basic understanding of evaluation that encompassed the review of achievement of objectives, the evaluation or assessment of measures, and evidence of effectiveness. Notably, about half of the respondents specifically addressed the systematic or scientific approach underlying evaluation, which also includes defining evaluation objectives. Several respondents referred to specific types of evaluation; in particular, process evaluation, endpoint evaluation, outcome evaluation, and cost–benefit ratio were mentioned. Further views were that evaluation was a way to optimize results, to collect data, and to take a stand for a project based on the respective results.

A2. Role of evaluation in daily work

For the majority of respondents, evaluation represented the main component of their daily work. Five individuals were directly involved in project development, implementation, and evaluation, each with a different focus. One expert had a supportive function, and one respondent’s main occupation was the development of prevention projects, with evaluation as a secondary focus. Finally, one of the experts interviewed was not involved in the development, implementation, and analysis of evaluations, but rather in the assessment of programs and project proposals.

A3. Personal perception of evaluation in practice

As it turned out in the course of contacting the experts that no expert from “practice” could be recruited (that is, no one confronted with both the execution of a program and the implementation of its evaluation), this question was expanded to also include a personal view on how evaluations are perceived in practice.

As described above, all respondents generally consider evaluations to be perceived as useful: on the one hand, for the further development of programs and, on the other hand, for supporting decision making, for example with regard to financing. Furthermore, the respondents found it important that the measuring instruments be understood by participants and practitioners, who should already be involved in their development. One of the respondents stated that, strictly speaking, there are no senseless evaluations, only badly designed ones.

A shared opinion among respondents was that “meaningful” and “disruptive” are not mutually exclusive. They all reported from their experience that evaluations could be, and definitely were, considered disruptive, primarily by the “evaluated”, i.e., those who are directly involved (for example, program providers or participants). In addition, there was agreement that in practice disruption was experienced predominantly because evaluations were always associated with extra time. Some of the respondents indicated that in practice evaluation can be considered a threat, in that routines could be broken or there could be fear that a program could turn out not to be as successful as anticipated.

Significance of evaluation

In part B of the interview, the experts were asked to give an assessment of what makes evaluation important in practice.

Most experts agreed on the relevance of evaluation to decision-making. In particular, it was stated that an evaluation provides insights into a measure’s effects and generates data to inform program improvement and further development. Thereby, as three experts noted, the scientific approach involved in evaluations could open up new perspectives for those involved and offer an interface for the involvement of patients and providers.

Hard outcomes and results of evaluations with long-term follow-up were seen as especially relevant. Moreover, information on a measure’s costs was seen as particularly important as an argument with regard to a measure’s implementation or justification. Beyond reaching decision makers, communicating evaluation results was seen as a public relations channel to further audiences, such as investors or the public in general.

Additionally, the respondents named some examples of what they thought were successful preventive programs (see Table 1).

Evaluation methods

C1. Determinants of prevention project success

In this part of the interview, the experts were asked to give a subjective assessment of when a prevention project is successful and which aspects they consider to be particularly important.

To the majority of respondents, an important determinant of a prevention program’s success is that the program is effective and that causality is visible. Effectiveness was stated to be a precondition for a project being offered at all. While some respondents would call a measure successful even if its effectiveness was minimal, several others also mentioned the cost dimension, which should be in reasonable proportion to the effects. Aside from effectiveness in terms of addressing, for example, risk and protective factors, several experts pointed out that it was particularly important that a prevention program reach its desired target group. For instance, as one respondent noted, a program should reach those people who benefit most from it and not just the interested middle class; most respondents agreed with this. In addition, it was emphasized that the measure should be accepted by the people providing it and should actively involve the participants. In this connection, a process of participatory quality improvement through feedback was pointed out.

Moreover, the respondents considered projects to be successful if they were transferable, i.e., could be implemented under real conditions, and were sustainable beyond the end of the project.

C2. Parameters desirable for inclusion in an evaluation

In this section of the interviews, the experts were asked to name parameters that they thought would make a prevention project’s success measurable.

Several experts pointed out that documentation and valid study designs were important factors in making a measure’s success measurable. With regard to which factors to observe, a number of experts emphasized that these would essentially depend on the individual case. Some experts pointed out the general distinction between what they referred to as “hard factors such as medical figures” and “soft factors such as lifestyle parameters or physical activity”. Costs were mentioned by only one person. Overall, the majority of experts considered less objectively quantifiable factors (such as “acceptance” of the program or “fun”) suitable for assessing the success of measures in the purely preventive field. In addition to risk factors, protective factors were seen as important. Both, however, should be examined after a longer follow-up period to assess the long-term stabilization of behavior and the sustainability of structures.

C3. Factors supporting evaluation

The experts were asked to name easy-to-implement evaluation approaches.

Several experts noted that an evaluation concept should be designed at the beginning of a measure to support the subsequent evaluation. It was seen as important to co-operate closely with those performing the work (all stakeholders) and to bring in their expertise. Furthermore, it was found to be helpful to make use of existing structures (settings). In terms of sustainability, however, the family or environment should also be addressed. Finally, using existing survey tools adapted to the intended audiences was mentioned.

C4. Obstacles to evaluations

In this section, respondents were asked to provide information about possible obstacles to the evaluation.

Most often, a lack of resources and of acceptance among those affected was mentioned; those affected included participants, control subjects, and providers of preventive measures. The experts stressed that an evaluation means a lot of extra effort for those affected. In particular, institutions were seen as overburdened by the increasing documentation needs involved in evidence-based analysis and were not likely to express sympathy toward evaluation. Here, the lack of an evaluation and quality culture in practice was criticized. Furthermore, evaluations were stated to be frequently seen as a threat by those affected, who did not want to be examined closely, and by institutions, which could worry that their measure would no longer receive funding because of the evaluation results.

Another important point raised by the experts relates to the effectiveness of the measure. Regarding external evaluators, it was seen as primarily necessary to familiarize them sufficiently with the project: if no clear causality mechanism was recognizable, an evaluation would likely not capture the program’s intended effects. It was noted that, particularly for complex measures, effects observed in evaluations were often not particularly strong. According to the experts, this cast doubt on whether effects would also show outside laboratory conditions. Long-term follow-ups were seen as unlikely to be feasible, but effects would often only be seen after a long time, especially regarding prevention projects addressing children.

Finally, the inclusion of costs was addressed. Obtaining data on participants’ health expenditures was seen as difficult owing to data protection. Costs that can be influenced by prevention were also not apparent without long-term observation.

Discussion

In this study, eight experts were interviewed on topics in evaluations of prevention programs addressing children and adolescents, including the practical importance of evaluations, determinants of project success, parameters desirable for inclusion in evaluations, supporting factors, and problems in the implementation of evaluations in practice.

Because of the relatively small number of interview participants, it can be assumed that only a part of the expert knowledge is represented. However, even within this small sample we observed what appeared to be theoretical saturation regarding some questions, meaning that no additional findings could be gained. A drawback is that among the non-responders and those who refused to participate in the interview were people who were likely to have experience of being affected by an evaluation (that is, of being “evaluated”). This is consistent with the assessment by the interviewed experts that people affected by evaluation might consider evaluations, and possibly also the present investigation, to be complex without recognizing the underlying purpose. Thus, the inclusion of this perspective was unfortunately not immediately possible, but it is targeted for more far-reaching surveys. Owing to the small sample and the reliance on willingness to participate, a selective sample could have resulted: it can be assumed that those who responded positively to the request for participation have a positive attitude toward the evaluation of programs.

The experts indicated that evaluations had practical relevance, among other reasons because they generated arguments for decision makers. Accordingly, several experts found it to be an important determinant of prevention program success that costs be in reasonable proportion to effects. In contrast, only one expert thought of costs as an important parameter to include in an evaluation. This was surprising, as costs represent a higher-level aspect that applies equally even to heterogeneous measures. It appears that there is an awareness of the relevance of economic aspects in evaluations, but that their implementation remains problematic, even for experts.

One possible explanation of why costs might not be considered much in evaluations of prevention programs addressing children and adolescents could lie in the nature of primary prevention, which seeks to avoid disease before it manifests itself. Within the often short time horizons of such evaluations, it can be assumed that the supply side immediately faces costs (e.g., for the implementation of the action), whereas the monetary benefits of prevention, which mainly consist of avoided expenditures in the distant future, cannot be observed [11]. The consideration of costs therefore initially puts prevention programs for children and adolescents at a disadvantage. This is of great importance, as evaluation is seen primarily as a foundation for decision-makers. This issue is also reflected in the literature: although there are several more or less detailed guides on how to implement economic evaluations [16,17,18,19], there is still a need for more practically oriented tools to achieve higher acceptance of economic evaluation among practitioners [11, 20]. First steps in this direction have already been taken [21].
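This timing problem can be illustrated with a small, purely hypothetical calculation (in Python); the cost, benefit, start year, and discount rate below are invented for illustration and are not data from the study. An upfront implementation cost is compared with avoided expenditures that begin only years later and are discounted to their present value.

    def net_present_value(cost_now, annual_benefit, start_year, horizon, rate=0.03):
        """Discounted avoided expenditures from start_year to horizon, minus upfront cost."""
        discounted = sum(
            annual_benefit / (1 + rate) ** t
            for t in range(start_year, horizon + 1)
        )
        return discounted - cost_now

    # A program costing 100,000 that avoids 10,000 in expenditures per year,
    # but only from year 10 onward:
    print(net_present_value(100_000, 10_000, 10, 3))   # short evaluation horizon: only the cost is visible
    print(net_present_value(100_000, 10_000, 10, 30))  # long horizon: avoided expenditures materialize

Under these invented figures, an evaluation truncated after 3 years sees a purely negative result, whereas a 30-year horizon yields a positive net value; the direction of this effect, not the specific numbers, is the point.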

From the experts’ statements it appeared that, compared to costs, a prevention program’s health effects were considered more important. However, measuring health-specific effects for purely preventive measures in healthy children and adolescents can also be problematic. Especially in health promotion programs directed at children, which often address healthy populations, health effects usually cannot be quantified in terms of patient-relevant outcomes or objectively measurable medical markers. Instead, less objectively quantifiable effects are focused on, including a wide variety of measures of physical activity. In this context, fuzziness with regard to how such effects are measured and low comparability across studies are certainly areas for improvement. In addition, such effects cannot easily be translated into patient-relevant outcomes beyond the observation period, for example in the context of health economic modelling. The inclusion of prevented outcomes would require assumptions about the relationship between intermediate and final outcomes and about future developments, which are subject to a high degree of uncertainty. Thus, several experts pointed out that the effect of preventive measures is often overestimated. A new vision of what prevention can achieve would have to be developed. Here, close cooperation between evaluators and people in the setting would appear helpful. Perhaps this new perspective would reduce reservations with respect to evaluations among the evaluated and lead to more productive field work.
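As a final, purely hypothetical sketch of the modelling assumption just mentioned: the participant number, baseline risk, and especially the link between an intermediate outcome (activity gain) and a final outcome (avoided disease cases) below are all invented. Varying the assumed link shows how strongly the projected result depends on it.

    participants = 500
    activity_gain_min = 30      # intermediate outcome observed in the evaluation (min/week)
    baseline_risk = 0.10        # assumed long-term disease risk without the program

    # The link "activity gain -> relative risk reduction" is not observed but
    # assumed, so it is varied over a plausible range (sensitivity analysis).
    for rr_reduction_per_30min in (0.02, 0.05, 0.10):
        risk_reduction = rr_reduction_per_30min * (activity_gain_min / 30)
        avoided_cases = participants * baseline_risk * risk_reduction
        print(f"assumed link {rr_reduction_per_30min:.0%} per 30 min/week: "
              f"{avoided_cases:.1f} avoided cases")

A fivefold range in the assumed link produces a fivefold range in projected avoided cases, which is exactly the kind of uncertainty the experts cautioned against.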

Conclusions

For the reasons mentioned above, it could be useful to understand prevention as a comprehensive approach and to reflect this in the evaluation. Instead of constantly evaluating many small projects in terms of costs and effects, evaluation should be performed as an ongoing cross-sectional task (similar to quality assurance). It would then be important to also consider factors such as outreach to target groups or psychosocial determinants, which draw attention to long-term behavioral changes (in terms of sustainability).

In conclusion, by conducting expert interviews we obtained insights into practitioners’ views on “good practice” in the evaluation of health promotion and disease prevention measures. Important components of evaluations of prevention measures were pointed out, and the need for research on their development and advancement, especially in terms of economic aspects, was made very clear. This article also emphasizes the importance for stakeholders involved in developing or running prevention or health promotion programs of retaining the economic aspect of evaluation, which can provide important arguments for such programs in light of limited financial resources.

Abbreviations

DFG:

Deutsche Forschungsgemeinschaft (Organisation for Science and Research in Germany)

KNP-Projektdatenbank:

Database of the “Kooperation für nachhaltige Präventionsforschung” (Cooperation for sustainable prevention research)

References

  1. Wolfenstetter SB, Wenig CM. Economic evaluation and transferability of physical activity programmes in primary prevention: a systematic review. Int J Environ Res Public Health. 2010;7(4):1622–48.

  2. Wolfenstetter SB, Wenig CM. Costing of physical activity programmes in primary prevention: a review of the literature. Health Econ Rev. 2011;1(1):17.

  3. Baicker K, Cutler D, Song Z. Workplace wellness programs can generate savings. Health Aff (Millwood). 2010;29(2):304–11.

  4. Chapman LS. Meta-evaluation of worksite health promotion economic return studies: 2012 update. Am J Health Promot. 2012;26(4):TAHP1–TAHP12.

  5. van Dongen JM, Proper KI, van Wier MF, van der Beek AJ, Bongers PM, van Mechelen W, van Tulder MW. Systematic review on the financial return of worksite health promotion programmes aimed at improving nutrition and/or increasing physical activity. Obes Rev. 2011;12(12):1031–49.

  6. Korber K. Potential transferability of economic evaluations of programs encouraging physical activity in children and adolescents across different countries: a systematic review of the literature. Int J Environ Res Public Health. 2014;11(10):10606–21.

  7. Korber K. Quality assessment of economic evaluations of health promotion programs for children and adolescents: a systematic review using the example of physical activity. Health Econ Rev. 2015;5(1):35.

  8. Kesztyus D, Schreiber A, Wirt T, Wiedom M, Dreyhaupt J, Brandstetter S, Koch B, Wartha O, Muche R, Wabitsch M, et al. Economic evaluation of URMEL-ICE, a school-based overweight prevention programme comprising metabolism, exercise and lifestyle intervention in children. Eur J Health Econ. 2013;14(2):185–95.

  9. Krauth C, Liersch S, Sterdt E, Henze V, Robl M, Walter U. Health economic evaluation of health promotion: the example “Fit for Pisa”. Gesundheitswesen. 2013;75(11):742–6.

  10. KNP-Projektdatenbank. http://www.knp-forschung.de/?uid=38aabfadcb0b5f4f83757e765751543b&id=projekte&idx=79.

  11. Walter U, Plaumann M, Dubben S, Nöcker G, Kliche T. Gesundheitsökonomische Evaluationen in der Prävention und Gesundheitsförderung. Prävention und Gesundheitsförderung. 2011;6(2):94–101.

  12. Lamnek S, Krell C. Qualitative Sozialforschung: Lehrbuch. Beltz; 2010.

  13. Meuser M, Nagel U. ExpertInneninterviews — vielfach erprobt, wenig bedacht. In: Bogner A, Littig B, Menz W, editors. Das Experteninterview: Theorie, Methode, Anwendung. Wiesbaden: VS Verlag für Sozialwissenschaften; 2002. p. 71–93.

  14. Mayring P. Qualitative Inhaltsanalyse. In: Mey G, Mruck K, editors. Handbuch Qualitative Forschung in der Psychologie. Wiesbaden: VS Verlag für Sozialwissenschaften; 2010. p. 601–13.

  15. Strauss AL. Qualitative analysis for social scientists. Cambridge: Cambridge University Press; 1987.

  16. Drummond M, Weatherly H, Ferguson B. Economic evaluation of health interventions. BMJ. 2008;337:a1204.

  17. Folland S, Goodman AC, Stano M. The economics of health and health care. 4th ed. Upper Saddle River: Pearson; 2004.

  18. Honeycutt AA, Clayton L, Khavjou O, Finkelstein EA, Prabhu M, Blitstein JL, Evans WD, Renaud JM. Guide to analyzing the cost-effectiveness of community public health prevention approaches. Research Triangle Park, NC: U.S. Department of Health and Human Services; 2006.

  19. Wolfenstetter SB. Conceptual framework for standard economic evaluation of physical activity programs in primary prevention. Prev Sci. 2011;12(4):435–51.

  20. Reisig V, Kuhn J, Loos S, Nennstiel-Ratzel U, Wildner M, Caselmann WH. [Primary prevention and health promotion in Bavaria: taking stock]. Gesundheitswesen. 2016;79(04):238–46.

  21. Korber K, Wolfenstetter SB. [Data collection and assessment of costs for prevention and health promotion programs: development of a concept illustrated with ‘Promotion of Physical Activity’]. Gesundheitswesen. 2017. doi:10.1055/s-0042-120269 [Epub ahead of print].

  22. FAQ: Informationen für Geistes- und Sozialwissenschaftler/innen: Wann brauche ich ein Ethikvotum? http://www.dfg.de/foerderung/faq/geistes_sozialwissenschaften/index.html.


Acknowledgements

Not applicable.

Funding

The study did not receive funding.

Availability of data and materials

The anonymized transcripts of the interviews analysed in the current manuscript are available from the corresponding author on reasonable request.

Author information

Contributions

KK developed the design and analysis plan of the study and drafted the manuscript. CB contributed to the study design and analyses. All authors critically reviewed drafts of the manuscript, contributed to the writing and interpretation of findings, and approved the final manuscript.

Corresponding author

Correspondence to Katharina Korber.

Ethics declarations

Ethics approval and consent to participate

This study did not involve patients, and the interviews did not involve questions that exert great physical strain or emotional pressure; therefore, ethics approval was not required according to the principles of the Organisation for Science and Research in Germany (DFG) for research in the humanities and social sciences [22]. Before the interviews began, all experts were informed about the study’s aim and methods and provided informed verbal consent to participate in the study. Participation was voluntary, and the experts were free to terminate the interview at any time.

Consent for publication

Not applicable as no details, images, or videos relating to an individual person were reported.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

Guideline:

(Economic) evaluation of health promotion and prevention measures related to children and adolescents.

A1. What does the term “evaluation” mean to you?

A2. What role does the evaluation of health promotion and prevention measures play in your daily work?

A3. Do you find that evaluations are perceived in practice as meaningful or rather as disruptive? Please describe one example each of a meaningful evaluation and a less meaningful evaluation from your experience.

B. What according to your assessment is the practical importance of evaluation (in terms of measuring the costs and effects) of health promotion and prevention measures, especially in children and adolescents?

(Please describe, based on your experience, one example each for a useful evaluation or a less useful evaluation.)

C1. What aspects of health promotion and prevention measures (e.g., cost or effects) are particularly important to you, or when is a project successful for you? What would be an example of a particularly successful project in your opinion? (Why?)

C2. Which parameters would make such a success practically measurable?

C3. In your experience, what has proven to be particularly easy to implement in the evaluation of health promotion and prevention measures?

C4. What obstacles are to be expected in the evaluation of health promotion and prevention measures?

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Korber, K., Becker, C. Expert opinions on good practice in evaluation of health promotion and primary prevention measures related to children and adolescents in Germany. BMC Public Health 17, 764 (2017). https://doi.org/10.1186/s12889-017-4773-y
