 Research article
 Open Access
 Open Peer Review
Info-gap management of public health policy for TB with HIV-prevalence and epidemiological uncertainty
BMC Public Health volume 12, Article number: 1091 (2012)
Abstract
Background
Formulation and evaluation of public health policy commonly employs science-based mathematical models. For instance, the epidemiological dynamics of TB are dominated, in general, by flow between actively and latently infected populations. Thus modelling is central in planning public health intervention. However, models are highly uncertain because they are based on observations that are geographically and temporally distinct from the population to which they are applied.
Aims
We aim to demonstrate the advantages of info-gap theory, a non-probabilistic approach to severe uncertainty when worst cases cannot be reliably identified and probability distributions are unreliable or unavailable. Info-gap theory is applied here to mathematical modelling of epidemics and analysis of public health decision-making.
Methods
Applying info-gap robustness analysis to tuberculosis/HIV (TB/HIV) epidemics, we illustrate the critical role of incorporating uncertainty in formulating recommendations for interventions. Robustness is assessed as the magnitude of uncertainty that can be tolerated by a given intervention. We illustrate the methodology by exploring interventions that alter the rates of diagnosis, cure, relapse and HIV infection.
Results
We demonstrate several policy implications. Equivalence among alternative rates of diagnosis and relapse is identified. The impact of initial TB and HIV prevalence on the robustness to uncertainty is quantified. In some configurations, increased aggressiveness of intervention improves the predicted outcome but also reduces the robustness to uncertainty. Similarly, predicted outcomes may be better at larger target times, but may also be more vulnerable to model error.
Conclusions
The info-gap framework is useful for managing model uncertainty and is attractive when uncertainties in model parameters are extreme. When a public health model underlies guidelines, info-gap decision theory provides valuable insight into the confidence of achieving agreed-upon goals.
Background
Public health policies affect millions of people and determine the allocation of health care funds. However, selecting an intervention for a given population at a given time is highly uncertain. Data supporting public health decisions are scarce, of poor quality, not fully generalizable and lack appropriate controls [1]. The high uncertainty in infectious disease epidemiology results also from interdependency among individuals. When prospective studies or randomized controlled trials are available, they usually represent selected groups with as little variance as possible and may not apply to other populations [2]. Such lack of generalizability may be more problematic for the recommendations developed by international organizations. Those guidelines use the best available information and expert opinion. Nonetheless the yield, effectiveness and cost of the interventions vary significantly due to heterogeneity of the populations in which they are implemented [1, 3].
Science-based mathematical models commonly support public health decisions [4–7]. Many models were developed to explain or predict the course of an epidemic for specific interventions. However, these models are limited by the uncertainty of the data and assumptions they employ [5, 7].
Despite severe uncertainty in public health decision-making, actions must be timely and cost-effective. Analysis of uncertainty is central to responsible decision-making using uncertain data and models.
Information-gap (info-gap) theory [8] was developed for decision-making when knowledge gaps are substantial, worst cases cannot be reliably identified, and probability distributions are unreliable or unavailable. An info-gap is the disparity between what is known and what needs to be known in order to achieve an acceptable outcome. The focus is on robustly achieving satisfactory outcomes, making this technique suitable for public health policy decision-making [9]. Info-gap theory has been applied in engineering, biological conservation, economics, project management, medicine and homeland security (see http://infogap.com).
We develop a framework for the practical use of info-gap theory in public health for controlling infectious diseases. We focus on tuberculosis (TB) in the context of pandemic HIV as an example.
Methods
Epidemiological background
The World Health Organization reported 9.4 million incident TB cases and 1.7 million TB deaths in 2009 and estimated that only 63% of annual incident TB cases were detected and reported; of these, 86% were successfully treated [10, 11]. Given the disease burden, the United Nations Millennium Development Goals include targets and indicators related to TB control. The targets include decreasing TB incidence by 2015, halving TB prevalence and mortality by 2015 (compared with 1990), and diagnosing 70% of new smear-positive cases and curing 85% of these cases by 2015. However, despite current efforts, many countries will not achieve these targets [10–14].
The HIV/AIDS pandemic is the major worldwide challenge to TB control [11, 13, 15, 16]. HIV creates a situation of serious uncertainty for public health interventions based on pre-HIV era models [10, 11, 13]. This is reflected in population distribution, spread, control, and recurrence. Latently and actively infected individuals contribute differently to the spread of disease. In order to produce refined models of diagnosis and treatment, it is necessary to consider infectivity, rapidity of progression, reinfection, and the higher susceptibility to infection and reinfection that results from HIV coinfection.
Many different epidemiological models have been used to evaluate treatment strategies. Deterministic compartment models are the most common, and we use a slightly modified version of the widely used Murray-Salomon model [17–19] to describe the evolution of TB/HIV epidemics under various scenarios. The details of the model appear in Appendix “The Murray-Salomon model” section.
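The full model involves 19 coupled equations (see the Appendix). As a rough, hypothetical illustration of the latent/active/treated flow structure that dominates the dynamics, one might write a toy compartment model like the following; every compartment, rate name and value here is an illustrative assumption, not the Murray-Salomon parameterization:

```python
def simulate_tb(beta=6.0, progression=0.08, detect_cure=0.9,
                relapse=0.05, mortality=0.02,
                L0=0.30, C0=0.042, R0=0.0, years=10.0, dt=0.01):
    """Euler-integrate a toy latent (L) / active (C) / treated (R) TB model.

    All compartments are fractions of the initial population; returns the
    active-case prevalence C at time `years`. Illustrative only.
    """
    L, C, R = L0, C0, R0
    for _ in range(int(years / dt)):
        S = max(0.0, 1.0 - L - C - R)            # susceptible fraction
        dL = beta * C * S - progression * L      # new infections minus activation
        dC = progression * L + relapse * R - (detect_cure + mortality) * C
        dR = detect_cure * C - relapse * R       # cured pool, which feeds relapses
        L += dt * dL
        C += dt * dC
        R += dt * dR
    return C
```

Note the feedback loop this caricature shares with the full model: raising `detect_cure` removes active cases faster, but also enlarges the treated pool `R` that later relapses back into the active compartment.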
Info-gap theory
The robustness function is the basic decision-support tool in an info-gap analysis. If our dynamic model were accurate we could evaluate any proposed intervention in terms of the outcome of that intervention that is predicted by the model. An intervention with low predicted TB prevalence is preferred over an intervention with higher predicted prevalence.
The problem is great model uncertainty. This means that predicted outcomes are unreliable, and it is unrealistic to prioritize interventions in terms of their predicted outcomes. Using the model to find the intervention whose predicted outcome is best is not suited to planning with highly uncertain models.
Model-based predictions are useful, but when deciding which public health intervention to implement, we should also ask: how wrong can the model be while an acceptable outcome is still guaranteed? For any specified intervention we ask: what is the largest error in the model up to which all realizations of the model would yield acceptable outcomes? Equivalently, what outcomes can reliably be anticipated from this intervention, given the unknown disparity between the model and reality? Answers to these questions lie in the robustness function, specified in Appendix “Definition of robustness” section. The robustness is dimensionless, and equals the greatest fractional error in the model parameters that is consistent with a specified outcome requirement. We use the robustness function to prioritize the interventions in terms of their robustness against uncertainty for achieving the required outcome.
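Schematically, the robustness question can be answered by searching for the largest horizon of uncertainty at which the worst-case outcome still satisfies the requirement. The sketch below is a generic recipe under stated assumptions, not the paper's implementation: the `outcome` function is a hypothetical exponential stand-in for C(t_m), the nominal values and error estimates are invented, and the worst case over the uncertainty box is taken at its corners, which is exact only when the outcome is monotone in each uncertain parameter.

```python
import itertools
import math

def robustness(outcome, nominal, err, c_m, h_max=2.0, tol=1e-4):
    """Largest fractional error h such that outcome(p) <= c_m for every
    parameter vector p with |p[i] - nominal[i]| <= h * err[i] and p[i] > 0.

    The worst case is searched over the corners of the uncertainty box
    (exact when `outcome` is monotone in each parameter); h is found by
    bisection on [0, h_max].
    """
    def worst(h):
        corners = itertools.product(
            *[(max(1e-12, p - h * e), p + h * e)
              for p, e in zip(nominal, err)])
        return max(outcome(list(c)) for c in corners)

    if worst(0.0) > c_m:          # requirement stricter than the prediction:
        return 0.0                # zero robustness ("zeroing")
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst(mid) <= c_m:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical stand-in outcome: prevalence after t_m years under
# illustrative relapse and cure rates (NOT the paper's model or values).
c0, t_m = 0.042, 10.0
outcome = lambda p: c0 * math.exp((p[0] - p[1]) * t_m)   # p = [relapse, cure]
nominal, err = [0.05, 0.12], [0.02, 0.03]
```

Here `robustness(outcome, nominal, err, c_m)` is the greatest tolerable fractional parameter error for the requirement C(t_m) <= c_m; at c_m equal to the nominal prediction `outcome(nominal)` it returns zero, which is the zeroing property discussed below.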
Knight [20] recognized that probability distributions are sometimes unknown and that severe uncertainty may be non-probabilistic. Wald [21], Ben-Tal and Nemirovski [22] and others developed tools for robustly managing non-probabilistic uncertainty by minimizing the worst outcome on a set of possibilities. Info-gap theory is non-probabilistic and handles situations where worst cases are unknown.
We summarize here the main attributes of the info-gap robustness function: a plot of robustness-to-uncertainty versus required performance. This is the basic info-gap tool for prioritizing available options.
Robustness trades off against performance [23, 24]
More demanding performance requirements are less robust against uncertainty than less demanding requirements. This trade off is quantified and expressed graphically by the monotonic robustness curve.
Model predictions have zero robustness against uncertainty [25]
When models are highly uncertain, it is unrealistic to prioritize one’s options based on predicted outcomes of those options, because those predictions have no robustness to errors in the underlying models. Options must be evaluated in terms of the level of performance that can be reliably achieved; this is expressed by robustness.
Combining the trade off and zeroing properties yields realistic prioritization of options.
Prioritization of options depends on performance requirements
Prioritization of options may change as requirements change. This is called “preference reversal” and is expressed by the intersection of the robustness curves of different options. Preference reversal provides insight into anomalous behavior such as the Ellsberg and Allais paradoxes in human decision making [8], the equity premium puzzle in economics [8], and animal foraging [26]. We will show that preference reversal occurs when selecting public health interventions because priorities are time- and context-dependent.
Info-gap models of uncertainty are non-probabilistic
Info-gap robustness analysis is implementable even when probability distributions are unknown, and thus is suited to severe uncertainty. In contrast, Monte Carlo simulation, Bayesian analysis, or probabilistic risk assessment require knowledge of probabilities. Other non-probabilistic tools include interval analysis, fuzzy set theory [27], possibility theory [28] and Robust Decision Making (RDM). A comparison of info-gap and RDM has recently been published [29].
Info-gap is operationally distinct from the min-max or worst-case decision strategy [9]
Info-gap robustness does not require knowledge of a worst case. When even typical scenarios are poorly characterized, it is usually impractical to characterize worst cases, which is required by the min-max strategy. Info-gap theory does require specifying acceptable outcomes. Thus it is well suited to policy making, because preferences on outcomes are the driving force.
Info-gap robustness may proxy for the probability of satisfying the performance requirement [8, 30, 31]
A more robust option is often more likely to achieve the required outcome. By prioritizing the options using info-gap robustness, one maximizes the probability of satisfying the requirement, without knowing probability distributions. The proxy property is central to understanding survival in economic [8], biological [26] and other competitive environments [31].
Info-gap implementation
Info-gap methodology requires three main elements: a system model, a performance measure and a model of uncertainty. The system model is a mathematical representation of a system and its influence on the variables of interest, for which management aspirations (performance criteria) are set. A performance measure assesses the value or utility of outcomes. The model of uncertainty is a non-probabilistic representation of the degree to which the values of parameters, the form of a function, or the structure of a model may deviate from nominal estimates.
The system model in our example is summarized in two functions. C(t) is the variation over time of the total number of TB cases, untreated and treated, HIV-positive and HIV-negative, as a fraction of the initial population. R(t) is the total number of relapses, fast and slow, HIV-positive and HIV-negative, as a fraction of the initial population. (See eqs.(23) and (24) in Appendix “The Murray-Salomon model” section.)
The public health practitioner wishes to control the total number of TB cases: the fewer the better. However, trying to minimize this prevalence depends on model predictions that are highly uncertain. The performance requirement is to keep the total fraction of TB cases at a specified time, t_m, below a critical value, C_m (eq.(25) in Appendix “Performance requirements” section).
Grassly et al. [32] note, in discussing the epidemiology of HIV/AIDS, that “not all sources of error are amenable to statistical analysis” (p. i37), due to biased, inaccurate or unavailable data. The basic idea of info-gap model uncertainty is that we do not know how wrong our estimates are, we have no reliable knowledge of worst cases, and we do not know probability distributions for the estimates. The info-gap uncertainty model is a non-probabilistic quantification of these uncertainties.
A dominant uncertainty in TB dynamics with HIV prevalence is in model parameter values, though HIV causes significant uncertainties in model structure. Structural uncertainty refers to missing terms in the equations, missing equations, or unknown nonlinearities. Structural uncertainty is dealt with much less frequently than parameter uncertainty because of technical challenges. We focus on parameter uncertainty in this paper because of its importance and to facilitate the presentation of this first application of infogap theory to public health.
We use info-gap theory [8] to model and manage uncertainty in the following parameters: slow and fast relapse rates for HIV-positives and HIV-negatives, TB infection rates for HIV-positives and HIV-negatives, and the HIV infection rate. An extensive literature points to these parameters because of their impact on the course of epidemics and the difficulty of measuring them [10, 11, 16, 33–36]. Other uncertainties could also be investigated, depending on the purpose of the analysis. We use estimated values for each uncertain parameter, and estimated errors typically chosen as half of an interval estimate of the parameter. The info-gap model of uncertainty is specified in Appendix “Uncertainty” section.
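The fractional-error structure just described can be written as a family of nested boxes around the nominal estimates. A minimal sketch (the parameter vectors and numbers are hypothetical, not taken from the tables):

```python
def in_uncertainty_set(p, nominal, err, h):
    """Membership test for a fractional-error info-gap model of uncertainty:
    U(h) = { p : |p[i] - nominal[i]| <= h * err[i], p[i] > 0 },  h >= 0,
    where `err` holds the error estimates (half-interval widths)."""
    return all(pi > 0.0 and abs(pi - ni) <= h * ei
               for pi, ni, ei in zip(p, nominal, err))
```

The sets are nested: any vector in U(h1) also lies in U(h2) whenever h1 <= h2, which is what makes the horizon of uncertainty h a meaningful scale for robustness.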
We aim to achieve the performance requirement by judicious choice of control variables, defined in Appendix “Control variables” section. Eligible control variables are any coefficients of the dynamic model that can be influenced by public health or related medical intervention. We use the diagnosis rate, cure rate, relapse rate, and HIV infection rate. We define an intervention in terms of the values of these variables [15, 34, 37–40].
Results: robustness and policy evaluation
We use the info-gap robustness function to evaluate alternative interventions aimed at controlling the relative TB prevalence, C(t), at a specified target time, t_m, in the future. An intervention is specified by the values of the control variables. The evaluation leads to realistic assessment of outcomes and preferences among the interventions.
Interpreting robustness curves: trade off and zeroing
All info-gap robustness curves have two properties, mentioned earlier: trade off between performance and robustness, and zeroing of the robustness curve. These properties are central in using robustness curves to evaluate public health policy.
The coefficients of the epidemiological models are specified in Tables 1 and 2. Throughout our examples, the initial conditions correspond to low TB and low HIV prevalence (the first data column of Table 3) unless specified otherwise. The control variables specified in Appendix “Control variables” section are themselves model parameters. The robustness curve in Figure 1 is evaluated for the nominal values of the control variables specified in Tables 1 and 2. This set of control variables is the “baseline intervention”. The uncertain variables specified in Appendix “Uncertainty” section are also model parameters. Their nominal values and uncertainty estimates are specified in Table 4. These nominal values are the same as appear in Tables 1 and 2 for these variables. The total case load is evaluated at time t_m = 10 years after initiation unless indicated otherwise.
Figures 2 and 3 show the temporal evolution of the relative prevalence of TB cases, C(t), and relative relapses, R(t), based on the nominal estimates of the model parameters, with moderately low initial TB and HIV prevalence. C(t) and R(t) are fractions of the initial total population size. Figure 2 shows that the total number of TB cases starts at about 4.2% of the initial population and decays to about 3% in the first 1.5 years, thereafter decaying more slowly, reaching 2.1% of the initial population size after 10 years. The relapse population starts very small, rises rapidly in the first year and thereafter decays gradually. The reduction in the rate of decrease of the TB cases after 1.5 years (Figure 2) results from the influx of relapses which have built up since initiation of the intervention.
Trade off
Key to understanding the trade off expressed by the robustness curve is the concept of satisficing. In contrast to optimizing, satisficing asks for an outcome that meets minimal needs but may not be the best imaginable. The satisficing strategy is not merely “accepting second best.” Satisficing is aspirational, setting a goal just like optimization, but also requiring robustness to uncertainty. The satisficing strategy induces a trade off between the aspiration for good outcome and the robustness against uncertainty in attaining that outcome.
The robustness curve in Figure 1 is based on satisficing the relative TB prevalence: requiring that the prevalence not exceed the critical value, C_m. Figure 1 shows the robustness vs. the critical prevalence. The positive slope of the robustness curve in Figure 1 expresses the trade off between robustness and performance: large robustness entails large prevalence at the specified target time (10 years). Equivalently, requiring low relative prevalence entails low robustness to uncertainty in the epidemiological model. The robustness curve quantifies the intuition that more demanding outcomes (small prevalence) are more vulnerable to model uncertainty (small robustness).
We can interpret the numerical values along the robustness curve as follows. The prevalence, C(t), and its critical value, C_m, are normalized to the initial population size. For instance, C_m = 0.025 means that the prevalence at time t_m must not exceed 2.5% of the initial population size. The robustness corresponding to this value of C_m is 0.1, as seen in Figure 1. This means that the performance requirement is guaranteed if the uncertain model parameters vary from their nominal values by no more than 10% of their error estimates. (The model parameters are constrained to be positive since they are first-order rate constants.)
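To make this arithmetic concrete: a robustness of 0.1 bounds how far each uncertain parameter may stray from its nominal value. With a hypothetical relapse-rate estimate (the numbers below are illustrative, not taken from Tables 1, 2 or 4):

```python
nominal = 0.05      # hypothetical nominal relapse rate (per year)
half_width = 0.02   # its error estimate (half an interval estimate)
h = 0.10            # robustness read off the curve at C_m = 0.025

# The guaranteed band is nominal +/- h * half_width:
lo = nominal - h * half_width   # 0.048
hi = nominal + h * half_width   # 0.052
# The requirement C(t_m) <= C_m holds provided every uncertain parameter
# stays within such a band, i.e. within 10% of its error estimate.
```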
The public health practitioner may feel that robustness to 10% uncertainty in the model parameters is rather small, given the substantial uncertainty in the epidemiological dynamics of TB with HIV prevalence. If we want robustness to, say, 25% uncertainty in the model parameters we must accept a larger final case load, namely, C_m = 0.033 as seen in Figure 1. Greater robustness is obtained only by accepting a poorer outcome; this is an irrevocable trade off that is quantified by the robustness curve.
Zeroing
We note that the robustness curve in Figure 1 reaches the horizontal axis at the value C_m = 0.021. This means that requiring the prevalence not to exceed 2.1% of the initial population has no robustness against model uncertainty. The value of C_m at which the robustness becomes zero is precisely the nominal prediction of the prevalence at time t_m, as seen at the right endpoint in Figure 2. That is, the value of C(t_m), evaluated with the best estimates of the model parameters, equals 0.021. The horizontal intercept in Figure 1 is an example of the property of zeroing that holds for all info-gap robustness curves: the outcome predicted by the model, when adopted as the performance requirement, has no robustness against uncertainty in the model.
It is not surprising that the predicted outcome is extremely vulnerable to error in the model upon which the prediction is based. However, the zero robustness of predicted outcomes has an important implication for policy selection.
The robustness curve in Figure 1 is for a particular choice of values of the control variables: the baseline intervention. The zeroing property—no robustness of the predicted outcome of these control values—implies that we should not assess these control values in terms of their predicted outcome. The predicted prevalence of 0.021 at time t_m = 10 years does not reliably reflect the performance of these control variables. Due to the trade off property, only larger prevalence can reliably be expected to result from this choice of the control variables. Predicted outcomes are not reliable for prioritizing the interventions.
Equivalent interventions
Different combinations of interventions can yield essentially equivalent results, as in Figure 4. The baseline intervention (solid) is characterized by a low diagnosis rate and a high relapse rate; the other intervention (dash) has a higher diagnosis rate and a lower relapse rate, as specified in Table 5. The robustness curves for these two control strategies, at 10 years, are nearly the same, suggesting that the public health practitioner may choose freely between them, perhaps employing additional criteria such as cost or ease of implementation. Equivalence may be lost if parameters are changed. For instance, we will see later (Figure 5) that these interventions evaluated at 10, 20 or 30 years have very different robustness curves.
Figure 6 shows a different aspect of the equivalence of interventions. The figure shows robustness curves for two strategies specified in Table 5. Both strategies aim to control the relative prevalence of TB, but one (solid) is geared for a 10-year target time, while the other (dash) considers a 30-year target. The estimated outcomes—prevalence—are very nearly the same for these two strategies, each at its respective target time, as shown by their shared horizontal intercept at C_m = 0.018. These predictions result from estimated model parameters, so one might be inclined to conclude that TB prevalence of 0.018 can be achieved at either 10 or 30 years by using the corresponding intervention.
However, the epidemiological model is highly uncertain, and the robustness curves of these two strategies in Figure 6 are quite different. Not surprisingly, the 30-year target is much less robust to uncertainty. It would be erroneous to treat these two strategies as outcome-equivalent since their performances at positive robustness are quite different. Nominal equivalence (equivalence of the predicted outcome) does not imply robustness equivalence.
Impact of initial TB and HIV prevalence
We now consider higher initial prevalences. The overall shape of the dynamic response is very similar in each case, except that the prevalence increases significantly as the initial prevalence increases. As in Figures 2 and 3, in each scenario the initial TB prevalence decreases rapidly during the first 2 years, and thereafter decreases more slowly as the new relapse population—which peaks around the end of the first year—flows back into active cases.
Figure 7 shows robustness curves for a target time 10 years after initiation, for low (solid), medium (dash) and high (dot-dash) initial prevalence of TB and HIV. The low-prevalence curve (solid) is the same as in Figure 1. The robustness curves shift dramatically to the right as the baseline prevalence of TB and HIV increases, indicating poorer estimated outcomes and lower robustness to uncertainty.
Intervention aggressiveness
Figure 8 shows robustness curves for low initial TB and HIV prevalence with interventions specified in Table 5. The solid curve is the baseline intervention; the other curves entail more aggressive intervention in either or both of the active cases and the relapse population.
The progression from solid to dash to dot-dash in Figure 8 represents increasingly aggressive intervention in the active TB case population. We see that increasing aggressiveness, in this specific parameter configuration, results in increasing prevalence and decreasing robustness to model error at the target time. The explanation is that aggressive treatment of active cases enlarges the relapse population, which flows back into the active case population.
The top curve in Figure 8 modifies the most aggressive case (dot-dash) by also including more aggressive intervention in the TB relapse population. This reduction in relapse reduces the predicted prevalence after 10 years and increases the robustness to uncertainty.
Different target times
Most of the results discussed so far evaluated the robustness for a target time 10 years after initiation. We now consider the implications of different target times.
Figure 5 shows robustness curves at target times, t_m, of 10, 20 and 30 years (solid, dash and dot-dash, respectively). The initial prevalences of TB and HIV are low. The interventions are all at the baseline.
The predicted prevalence decreases as the target time increases, as shown by the horizontal intercepts in Figure 5. The baseline intervention is predicted to reduce the prevalence (in units of initial population size) as the time horizon increases. However, the zeroing property means that these predictions have no robustness to uncertainty in the model used for prediction. Only higher prevalence has positive robustness.
From Figure 5 we see that, for critical TB prevalence C_m less than 3%, the 30-year TB prevalence is more robust than the 20-year prevalence, which is more robust than the 10-year prevalence. For instance, at critical TB prevalence of C_m = 0.02, the robustnesses for 10-, 20- and 30-year horizons are 0, 0.08 and 0.12, respectively. This intervention has no robustness to uncertainty when requiring a 2% prevalence after 10 years; in fact, the estimated prevalence at 10 years is greater than 2%. The prevalence at 20 years will be no worse than 2% provided that the model coefficients err by no more than 8%, and at 30 years the robustness to error is 12%.
The practitioner may feel that even 12% robustness against model-coefficient error is rather small, given the severe uncertainty of TB epidemiology in the context of epidemic HIV. This means that, even at a 30-year horizon, this intervention cannot reliably achieve a relative prevalence as low as 2%.
Suppose we are willing to aim at a final TB prevalence of 3.7%. We see from Figure 5 that now the 10-year horizon is more robust than 20 years, which is more robust than 30 years. The robustnesses are now 30%, 24% and 22% for 10, 20 and 30 years. The robustness curves have intersected one another and the robustness rankings are reversed. As the target time decreases, the predicted outcome becomes worse (the horizontal intercept moves right) but the cost of robustness improves. This causes the robustness curves to cross one another. More intuitively, we can say that prediction of TB prevalence is more reliable for short time horizons than for long ones. But since a long time is required to overcome the relapse effect, we observe the intersection of the robustness curves and the consequent reversal of their robust dominance.
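The crossing of robustness curves can be reproduced with a toy outcome model: an exponential stand-in for C(t) with illustrative rates and error estimates (not the paper's model or values). For this toy model the robustness has a closed form, and the robustness curves for different target times intersect, reversing the ranking in the same qualitative way:

```python
import math

def robustness_curve(c_m, t, c0=0.042, relapse=0.05, cure=0.12,
                     e_relapse=0.02, e_cure=0.03):
    """Robustness for the toy outcome C(t) = c0 * exp((relapse - cure) * t):
    the largest h such that, at the adverse corner (relapse high, cure low),
    c0 * exp(t * (relapse - cure + h * (e_relapse + e_cure))) <= c_m,
    clipped at zero (the zeroing property)."""
    h = (math.log(c_m / c0) - t * (relapse - cure)) / (t * (e_relapse + e_cure))
    return max(0.0, h)

# Demanding requirement: the long horizon is more robust ...
assert robustness_curve(0.02, 30) > robustness_curve(0.02, 10)
# ... but for a modest requirement the ranking reverses.
assert robustness_curve(0.06, 10) > robustness_curve(0.06, 30)
```

Because the predicted (h = 0) prevalence decays with t while the slope of h in log(c_m) steepens as 1/t, the curves for different target times necessarily cross, which is the preference reversal described above.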
Results like Figure 5 have important policy implications for TB control over long time periods. The policy maker may be tempted to choose an option that is predicted to yield better short-term results. However, that choice might be wrong when one opts to satisfice the outcome with robustness to uncertainty. Predictions of mathematical models (horizontal intercepts) are not sufficiently reliable for comparing and prioritizing interventions; the cost of robustness (slope) must also be considered. In the example in Figure 5 one might conclude that prevalence less than 3% is not achievable at any target time, that 3.7% is feasible at 10 years but not beyond, and that other interventions are needed for longer-term outcomes.
Impact of HIV mortality
Figure 9 shows 10-year robustness curves for various HIV infection rates, with low initial TB and HIV prevalence, as specified in Table 5. The HIV infection rate decreases in the progression from solid, dash and dot-dash to dot-dot. As the HIV infection rate decreases, the estimated 10-year TB prevalence increases and the robustness decreases. The explanation lies in the high mortality rate of the HIV population. As the HIV infection rate decreases, the size of the relapse population decays more slowly, allowing greater flow back into the active TB case population. Interventions that decrease HIV infection rates or restore immunity to HIV patients will, counterintuitively, tend to increase TB prevalence unless compensating measures are taken. Significantly, the cost of robustness (the slope of the robustness curve) does not change as a result of decreased HIV infection rate. Reducing the HIV infection rate shifts the robustness curve to the right, with almost no change in slope.
Conclusion
We demonstrated a generic info-gap framework for managing model uncertainty in public health decision making. By applying it to a mathematical model of TB/HIV epidemics, we illustrated specific recommendations for interventions in the control of TB with HIV in various settings.
The complicated multidimensional epidemiological dynamics are dominated by the flow back and forth between the actively and latently infected TB populations and the different rates of progression of different subpopulations between these compartments. Counterintuitively, the total TB case load even decades after initiation can increase as a result of increased diagnosis and cure rates, and it can increase as the control of HIV becomes more aggressive. These findings highlight the critical importance of modeling in the assessment and planning of public health intervention. Model predictions are often used to choose interventions. However, model predictions must be interpreted in light of model uncertainties. Predicted outcomes have zero robustness to model error. Only worse-than-predicted outcomes (higher relative prevalence) have positive robustness against model error. This means that predicted outcomes are not reliable for prioritizing the interventions. The trade off between robustness and outcome is quantified by the info-gap robustness analysis and is a critical component of the decision-making process.
We explore the performance of interventions that alter the rate constants of diagnosis, cure, relapse and HIV infection. Some interventions have quite similar predicted outcomes and robustness curves. This enables the policy maker to choose between these interventions based on additional criteria, such as ease or cost of implementation. It is not true, however, that interventions with the same estimated outcomes necessarily have the same robustness against model error.
We demonstrate the policy implications of initial TB and HIV prevalence, of HIV mortality, of degree of treatment aggressiveness, and of the target time at which outcomes are evaluated. Public health policies are evaluated in terms of confidence—expressed as robustness to modeling error—in achieving specified TB prevalence at the target time. Predicted outcomes have zero robustness and thus are not reliable for evaluating and comparing interventions. Instead, interventions must be prioritized in terms of their capacity for achieving specified outcomes, with robustness to uncertainty. Failure to quantify the uncertainty inherent in public health interventions leads to disappointment from unrealized expectations, and failed policy. Where a public health model underlies guidelines, info-gap decision theory provides valuable insight into the confidence of achieving agreed-upon goals.
Appendices
The Murray-Salomon model
The Murray-Salomon (MS) model [17, 18] is a set of coupled differential equations that describe the time evolution of TB. A modification deals with TB-infected individuals in a population containing HIV-seropositive individuals. In section “The basic Murray-Salomon model: No HIV” we define the basic non-HIV model. In section “The HIV-extended model” we present the MS extension to include an HIV subpopulation. The state variables are defined in Table 6 and the parameters are defined in Table 1.
The basic Murray-Salomon model: No HIV
The basic MS model is the following 19 differential equations (eqs.(6) and (7) occur in 6 different forms each) appearing on pp.19–20 of Murray and Salomon [18]:
The term ‘±σ’ appears in eqs.(6) and (7). MS write:^{a}
It should be noted in the equations for $C_U^{i,j}$ and $C_T^{i,j}$ that the smear rate $\sigma$ is multiplied by the number of individuals in the respective category $i^\star$, where $i^\star = 2$ (smear-negative) for $i = 1$ (smear-positive) and vice versa, and $i^\star = \emptyset$ for $i = 3$ (extrapulmonary). The term including $\sigma$ is added for $i = 1$, subtracted for $i = 2$, and equal to 0 for $i = 3$. The result of this formulation is that smear-negative patients convert to smear-positive at a rate of $\sigma$.
However, the ‘vice versa’ is a mistake. The correct equations for $C_U^{i,j}$ (with analogs for $C_T^{i,j}$) are:
Eq.(10) states that smear-negative individuals join the smear-positive population at rate $\sigma$. Eq.(11) states that smear-negative individuals leave the smear-negative population at rate $\sigma$. In this way all individuals are accounted for.
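The accounting above can be checked numerically. The following minimal sketch (the rates and compartment sizes are hypothetical, not calibrated MS values) implements only the $\sigma$ terms for the three untreated-case compartments and verifies that, taken together, they conserve the total case count:

```python
import numpy as np

# Minimal sketch (hypothetical values, not calibrated MS parameters): only the
# sigma terms of eqs.(10)-(11) for the three untreated-case compartments
# [C1 smear-positive, C2 smear-negative, C3 extrapulmonary].
def smear_conversion_rates(C, sigma):
    """Return the d/dt contributions of the smear-conversion term."""
    C1, C2, C3 = C
    return np.array([
        +sigma * C2,  # eq.(10): smear-negatives join the smear-positive pool
        -sigma * C2,  # eq.(11): smear-negatives leave their own pool
        0.0,          # extrapulmonary cases have no sigma term
    ])

dC = smear_conversion_rates(np.array([100.0, 50.0, 10.0]), sigma=0.015)
# The two sigma terms cancel, so conversion conserves the total case count.
assert abs(dC.sum()) < 1e-12
```

Note that the (erroneous) ‘vice versa’ formulation would instead add $\sigma C^1$ to the smear-negative equation, so the $\sigma$ terms would not cancel and individuals would not be conserved.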
The instantaneous rate of infection, $\lambda$ in eq.(1), is defined by Murray and Salomon [18], p.21, as:
The HIV-extended model
Introduction
We now extend the dynamic model to differentiate between HIV-positive and HIV-negative populations. MS do this as well, stating ([18], p.4) that they use “two submodels—one for the HIV seronegative population, and one for the HIV seropositive population. Each submodel follows the structure” which is presented here as eqs.(1)–(9). They write that
Individuals move from each category in the HIV-negative submodel to the corresponding category in the HIV-positive submodel at the HIV infection rate, which varies over time. Because the effects of HIV on immune function are not marked with respect to tuberculosis until the CD4 count has dropped below 500, we actually move individuals from the HIV-negative to the HIV-positive submodel after they have been infected with HIV for 3 years. The two submodels are also linked through the annual risk of infection, as HIV-negative tuberculosis cases can infect HIV-positive individuals, and vice versa. ([18], pp.4–5)
Our model does not include this 3-year delay: individuals transfer out of the HIV-negative submodel upon HIV infection.
Submodels
Each of the two subpopulations—HIV-negative and HIV-positive—is divided into the 19 groups represented by the state variables in Table 6. Each state variable has a differential equation in eqs.(1)–(9).
Let us denote the HIV-negative state variables as before, and the HIV-positive state variables with the same letters but with an overbar. For compactness we represent these two sets of variables with two vectors:
The model parameters listed in Tables 1 and 2 take different values for the HIV-negative and HIV-positive populations (as specified in the tables). Let us denote the model parameters as before for the HIV-negative population, and use the same symbols with an overbar for the HIV-positive population.
Eqs.(1)–(9) are 1st-order linear inhomogeneous differential equations. Only eq.(1) has an inhomogeneous term: T births per year. Let $F(t)$ and $\bar{F}(t)$ denote the matrices of coefficients (model parameters) in the differential equations for the HIV-negative and HIV-positive populations, respectively. Let $e_1$ denote the 19-vector with a 1 in the first element and zeros elsewhere. We can now compactly denote eqs.(1)–(9) as:
Let $\gamma$ denote the HIV infection rate, per person per year. Following MS, we will move individuals from each HIV-negative category to the corresponding HIV-positive category at rate $\gamma$. Thus, instead of eq.(16), we have the following coupled sets of equations:
The term ‘$-\gamma x$’ in eq.(17) removes individuals from the HIV-negative population at the HIV infection rate, and the term ‘$+\gamma x$’ in eq.(18) introduces them into the HIV-positive population at the same rate.
MS introduce further highly structured coupling between eqs.(17) and (18) through the TB infection rate $\lambda$ ([18], p.23). We do not employ the MS differentiation between the infection rates for HIV-negative and HIV-positive populations. Instead we simply use $\lambda$ and $\bar{\lambda}$ for the TB infection rates in the HIV-negative and HIV-positive populations, respectively.
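The coupled structure of eqs.(17)–(18) can be sketched numerically. In the sketch below the 3×3 coefficient matrices, rates, and initial populations are illustrative placeholders, not the 19-compartment time-varying Murray-Salomon matrices $F(t)$ and $\bar{F}(t)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 3  # placeholder dimension; 19 in the Murray-Salomon model

# Illustrative constant matrices standing in for F(t) and Fbar(t).
F = np.array([[-0.20,  0.01,  0.00],
              [ 0.05, -0.30,  0.02],
              [ 0.00,  0.10, -0.25]])
Fbar = 1.5 * F            # HIV-positive rates differ (placeholder scaling)
T, gamma = 1000.0, 0.02   # births/year and HIV infection rate (illustrative)
e1 = np.zeros(n); e1[0] = 1.0

def rhs(t, y):
    x, xbar = y[:n], y[n:]                # HIV-negative / HIV-positive states
    dx    = F @ x + T * e1 - gamma * x    # '-gamma x': removal by HIV infection
    dxbar = Fbar @ xbar + gamma * x       # '+gamma x': arrival in HIV+ submodel
    return np.concatenate([dx, dxbar])

y0 = np.concatenate([np.full(n, 100.0), np.full(n, 10.0)])
sol = solve_ivp(rhs, (0.0, 25.0), y0)     # integrate 25 years
```

The two submodels share the same block structure; the only coupling retained here is the $\gamma$ transfer of eqs.(17)–(18).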
Uncertainty
Many uncertainties accompany the dynamic model. We concentrate on uncertainty in the values of some of the model parameters, as this is the dominant avenue by which HIV prevalence affects the model. We use info-gap theory to model and manage these uncertainties [8]. Many different types of info-gap models of uncertainty are available; we employ one particularly suited to severe lack of information.
The dominant uncertain parameters are:
$\rho_S, \bar{\rho}_S$, slow relapse rates.
$\rho_F, \bar{\rho}_F$, fast relapse rates.
$\lambda, \bar{\lambda}$, TB infection rates.
$\gamma$, HIV infection rate.
Let us denote uncertain variables generically as $u_i$, compiled in a vector $u$. This vector is:
For each uncertain parameter, $u_i$, we have an estimated value, denoted $\tilde{u}_i$, and an error term $s_i$, typically chosen as half of an interval estimate of the parameter. The error estimate may be derived from a statistical confidence interval, from a plausible extension of a confidence interval as discussed by Grassly et al. [32], or from other professional judgment. The basic idea of an info-gap model of uncertainty is that we do not know how wrong our estimate is; we have no reliable estimate of a worst case. In fact, since the typical values are poorly known, worst-case estimates are even less reliable.
More precisely, the fractional error of the estimate, $\tilde{u}_i$, in units of the error, $s_i$, is unknown. That is, this fractional error is bounded by a number, $\alpha$, whose value is unknown:
But this must be further refined to reflect the fact that the uncertain parameters are 1st-order removal-rate constants^{b}, which means that they cannot be negative. Thus we adjoin these constraints to the inequality as:
Finally, we write our info-gap model of uncertainty as a family of nested sets of uncertain vectors:
$\alpha$ is called the ‘horizon of uncertainty’. When $\alpha = 0$ there is no uncertainty and the set $\mathcal{U}(0)$ contains only the estimated values, $\tilde{u}$. As $\alpha$ increases, the sets $\mathcal{U}(\alpha)$ become more inclusive. These sets are unbounded in the space on which the parameters are defined. The info-gap model embodies the information we have—estimates and errors—without committing to any meaningful worst case (other than the limits imposed by the definition of the variables).
In some situations one may not be able to estimate error weights, $s_i$. In such situations the fractional error in eq.(20) can be replaced by a fractional error relative to the estimate, $\left| (u_i - \tilde{u}_i)/\tilde{u}_i \right|$. The info-gap model is then formulated as in eq.(22) with this new fractional error.
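The nested sets of eq.(22) are straightforward to realize in code. The following is a minimal sketch with made-up estimates and error weights (not fitted epidemiological values): each parameter lies within $\alpha$ error-units of its estimate, truncated at zero because the parameters are non-negative rate constants.

```python
import numpy as np

def uncertainty_interval(u_tilde, s, alpha):
    """Component-wise bounds of U(alpha): |u_i - u~_i| <= alpha*s_i, u_i >= 0."""
    lo = np.maximum(0.0, u_tilde - alpha * s)  # truncate: rates cannot be negative
    hi = u_tilde + alpha * s
    return lo, hi

def in_U(u, u_tilde, s, alpha):
    """Membership test for the uncertainty set U(alpha)."""
    return bool(np.all(u >= 0.0) and np.all(np.abs(u - u_tilde) <= alpha * s))

# Illustrative estimates and half-interval error weights (placeholder values).
u_tilde = np.array([0.10, 0.05])
s       = np.array([0.04, 0.02])

lo, hi = uncertainty_interval(u_tilde, s, alpha=3.0)
# At alpha = 3 both raw lower endpoints are negative, so both are truncated:
assert lo[0] == 0.0 and lo[1] == 0.0
# U(0) contains only the estimate itself:
assert in_U(u_tilde, u_tilde, s, alpha=0.0)
```

Because the sets grow with $\alpha$, any parameter vector admitted at one horizon is admitted at every larger horizon, which is the nesting property used by the robustness analysis.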
Robustness: formulation
Performance requirements
We will consider an aggregated variable for monitoring the TB status of the population. Our goal is to keep the value of this variable acceptably small. The variable we consider is the total number of cases, untreated and treated, HIVpositive and HIVnegative, as a fraction of the initial population:
There are other variables that one could consider. For instance, one could consider the total number of relapses, fast and slow, HIVpositive and HIVnegative, as a fraction of the initial population:
One could also consider the instantaneous or the average rates of change of C(t) and R(t).
Returning to the aggregate prevalence, $C(t)$, our goal is to keep it below a specified maximum acceptable value at a specified target time $t_m$. Thus the performance requirement is:
A relation such as eq.(25) is called a “satisficing” requirement, as opposed to an optimization requirement. We do not aim to minimize the aggregate prevalence, $C(t_m)$. Our goal is to make the TB prevalence adequately small: no greater than the critical value $C_m$, as stated in eq.(25). Note that the satisficing requirement includes optimization as a special case: satisficing and optimizing are the same when $C_m$ is chosen as the predicted minimal value.
Control variables
We aim to achieve this goal by judicious choice of control variables that we denote generically as $q_i$, combined in a vector $q$. Eligible control variables are any coefficients of the dynamic model that can be influenced by public health or related medical intervention. When a control variable is also an info-gap uncertain variable we will refer to its estimated value as the control variable. The uncertainty is then in whether the specified value—the estimate—will be realized in practice. We will consider the following control variables:
$\delta^j, \bar{\delta}^j$, diagnosis rates (same for HIV-negative and HIV-positive populations).
$\epsilon_T^k, \bar{\epsilon}_T^k$, cure rates for treateds (same for HIV-negative and HIV-positive populations).
$\tilde{\rho}_F, \tilde{\bar{\rho}}_F$, estimated fast relapse rates.
$\tilde{\lambda}, \tilde{\bar{\lambda}}$, estimated TB infection rates.
$\tilde{\gamma}$, estimated HIV infection rate.
$\beta_F, \bar{\beta}_F$, fast breakdown rates.
We define an intervention in terms of the values of these variables. None of these control variables corresponds directly to any of the standard performance measures, such as the incidence, prevalence, and death rates associated with TB. For instance, the coefficients $\delta^j$ and $\bar{\delta}^j$, while called “diagnosis rates”, are in fact 1st-order kinetic rate coefficients and can meaningfully take any positive value. These coefficients combine with several other coefficients to determine the fraction of new untreated cases that move into the treated category, as seen from eqs.(6) and (7). In other words, the control variables combine to produce aggregate effects such as the proportion of new cases that are diagnosed. One can “calibrate” a set of control variables in terms of aggregate properties, for instance by keeping track of how many cases are created (new members of $C_U(t)$) and how many are treated (new members of $C_T(t)$). Unless the population is at steady state (and the intervention tries to prevent this), the calibration in terms of the proportion diagnosed depends on the time after initiation of intervention and on the duration over which the accounting is done. We do not calibrate our model, since we focus on a different challenging problem: prioritizing alternative interventions.
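To illustrate how such a calibration depends on the accounting window, here is a hypothetical single-compartment sketch (all rate constants and pool sizes are made up, and the full MS compartment structure is collapsed) that tracks cumulative arrivals into the untreated and treated case pools and reports the resulting “proportion diagnosed”:

```python
# Hypothetical calibration sketch: express the kinetic diagnosis coefficient
# delta as an aggregate "proportion diagnosed" by tracking cumulative arrivals
# into the untreated (C_U) and treated (C_T) case pools. Illustrative values.
breakdown, delta, cure = 0.05, 0.7, 0.8   # per-year rate constants (made up)
L, C_U, C_T = 1000.0, 0.0, 0.0            # latent pool and case compartments
new_cases = new_treated = 0.0
dt = 0.01
for _ in range(int(5.0 / dt)):            # 5 years after initiation
    inc = breakdown * L * dt              # new untreated cases from latents
    dx  = delta * C_U * dt                # untreated cases entering treatment
    L   -= inc
    C_U += inc - dx
    C_T += dx - cure * C_T * dt
    new_cases += inc
    new_treated += dx
proportion_diagnosed = new_treated / new_cases
# Off steady state this ratio depends on the accounting window, so it is a
# time-dependent calibration, as noted in the text.
```

Re-running the loop with a different horizon (say 1 year instead of 5) gives a different proportion for the same $\delta$, which is precisely why we prioritize interventions directly rather than calibrating.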
Definition of robustness
An intervention is specified by the values of the control variables, $q$. If our dynamic model were accurate, we could evaluate any proposed intervention in terms of its model-predicted outcome. An intervention whose predicted outcome entails low TB prevalence would be preferred over an intervention with larger predicted prevalence.
The problem is that the dynamic model is highly uncertain. Since its predictions are correspondingly uncertain, it is unrealistic to prioritize interventions, and unwise to evaluate them, only in terms of their model-based predictions.
The model-based predictions are useful, but we also ask: how wrong can the model be while the predicted outcome remains acceptable? That is, for any specified intervention, $q$, we ask: what is the largest fractional error in the uncertain parameters up to which all realizations of the model yield acceptable outcomes? The answer to that question is the robustness function, which we specify below. We use the robustness function to prioritize the interventions in terms of their robustness against uncertainty for achieving the required outcomes.
The robustness function for the performance requirement in eq.(25) is:
We can “read” this relation from left to right as follows. The robustness, $\hat{\alpha}$, of intervention $q$, with performance requirement $C_m$, is the maximum horizon of uncertainty, $\alpha$, up to which the maximum aggregate prevalence, $C(t_m)$, over all realizations of the uncertain coefficients $u$ in the info-gap model $\mathcal{U}(\alpha)$, does not exceed the critical value, $C_m$. We are not ameliorating a worst case; the worst case is unknown because the horizon of uncertainty, $\alpha$, is unbounded. Instead, we are asking how large an uncertainty can be tolerated by the intervention, $q$. In choosing the intervention to enhance the robustness, we attempt to protect against the unbounded uncertainty of the impact of HIV/AIDS on the TB dynamics.
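Because the sets $\mathcal{U}(\alpha)$ are nested, the worst-case prevalence grows with $\alpha$, so the robustness can be evaluated by bisection on $\alpha$. The sketch below uses a made-up monotone surrogate in place of the model-predicted prevalence $C(t_m)$, and searches the worst case on the vertices of the interval box, which is adequate when the response is monotone in each uncertain parameter:

```python
import itertools
import numpy as np

def worst_prevalence(prevalence, u_tilde, s, alpha):
    """Worst case of prevalence(u) over the box U(alpha) (vertex search)."""
    los = np.maximum(0.0, u_tilde - alpha * s)   # non-negative rate constants
    his = u_tilde + alpha * s
    corners = itertools.product(*zip(los, his))
    return max(prevalence(np.array(c)) for c in corners)

def robustness(prevalence, u_tilde, s, C_m, alpha_max=10.0, tol=1e-4):
    """Largest alpha with worst-case C(t_m) <= C_m; 0 if the estimate fails."""
    if prevalence(u_tilde) > C_m:
        return 0.0   # predicted outcomes worse than C_m have zero robustness
    lo, hi = 0.0, alpha_max
    while hi - lo > tol:                          # bisection: worst case is
        mid = 0.5 * (lo + hi)                     # monotone in alpha (nesting)
        if worst_prevalence(prevalence, u_tilde, s, mid) <= C_m:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative surrogate: prevalence rises linearly in the uncertain rates.
prev = lambda u: 0.01 + 0.1 * u.sum()
u_tilde, s = np.array([0.10, 0.05]), np.array([0.04, 0.02])
alpha_hat = robustness(prev, u_tilde, s, C_m=0.05)
```

Comparing `alpha_hat` across candidate interventions (different $q$, hence different `prevalence` functions) implements the prioritization described in the text: prefer the intervention tolerating the largest horizon of uncertainty at the required $C_m$.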
Endnotes
^{a} Footnote 1 in the full online version, pp. 20–21.
^{b} This means that these parameters are the coefficients in equations such as $\dot{x}(t) = -u\,x(t)$, whose solution is $x(t) = x(0)e^{-ut}$. In order for this to be a removal process, the coefficient $u$ must be positive. It can exceed unity.
Abbreviations
AIDS: Acquired immunodeficiency syndrome
HIV: Human immunodeficiency virus
RDM: Robust Decision Making
References
1. Bhunu CP, Garira W, Mukandavire Z: Modeling HIV/AIDS and tuberculosis coinfection. Bull Math Biol. 2009, 71 (7): 1745-1780. 10.1007/s11538-009-9423-9.
2. Bhunu CP, Garira W, Mukandavire Z, Magombedze G: Modelling the effects of pre-exposure and post-exposure vaccines in tuberculosis control. J Theor Biol. 2008, 254 (3): 633-649. 10.1016/j.jtbi.2008.06.023.
3. Bhunu CP, Garira W, Mukandavire Z, Zimba M: Tuberculosis transmission model with chemoprophylaxis and treatment. Bull Math Biol. 2008, 70 (4): 1163-1191. 10.1007/s11538-008-9295-4.
4. Wastney ME, Subramanian KN, Broering N, Boston R: Using models to explore whole-body metabolism and accessing models through a model library. Metabolism. 1997, 46 (3): 330-332. 10.1016/S0026-0495(97)90261-4.
5. Boston R, Stefanovski D, Moate P, Linares O, Greif P: Cornerstones to shape modeling for the 21st century: introducing the AKA-Glucose project. Adv Exp Med Biol. 2003, 537: 21-42. 10.1007/978-1-4419-9019-8_2.
6. Aparicio JP, Capurro AF, Castillo-Chavez C: Markers of disease evolution: the case of tuberculosis. J Theor Biol. 2002, 215 (2): 227-237. 10.1006/jtbi.2001.2489.
7. Young D, Stark J, Kirschner D: Systems biology of persistent infection: tuberculosis as a case study. Nat Rev Microbiol. 2008, 6 (7): 520-528. 10.1038/nrmicro1919.
8. Ben-Haim Y: Info-Gap Decision Theory: Decisions Under Severe Uncertainty. 2006, London: Academic Press.
9. Ben-Haim Y, Dacso CC, Carrasco J, Rajan N: Heterogeneous uncertainties in cholesterol management. Intl J Approximate Reasoning. 2009, 50: 1046-1065. 10.1016/j.ijar.2009.04.002.
10. Dye C: Global epidemiology of tuberculosis. Lancet. 2006, 367 (9514): 938-940. 10.1016/S0140-6736(06)68384-0.
11. Corbett E, Watt C, Walker N, Maher D, Williams B, Raviglione M, Dye C: The growing burden of tuberculosis: global trends and interactions with the HIV epidemic. Arch Intern Med. 2003, 163 (9): 1009-1021. 10.1001/archinte.163.9.1009.
12. Zignol M, Hosseini M, Wright A: Global incidence of multidrug-resistant tuberculosis. J Infect Dis. 2006, 194 (4): 479-485. 10.1086/505877.
13. Nunn P, Williams B, Floyd K, Dye C, Elzinga G, Raviglione M: Tuberculosis control in the era of HIV. Nat Rev Immunol. 2005, 5 (10): 819-826. 10.1038/nri1704.
14. Dye C, Maher D, Weil D, Espinal M, Raviglione M: Targets for global tuberculosis control. Int J Tuberc Lung Dis. 2006, 10 (4): 460-462.
15. Williams BG, Korenromp EL, Gouws E, Schmid GP, Auvert B, Dye C: HIV infection, antiretroviral therapy, and CD4+ cell count distributions in African populations. J Infect Dis. 2006, 194 (10): 1450-1458. 10.1086/508206.
16. Williams BG, Granich R, Chauhan LS, Dharmshaktu NS, Dye C: The impact of HIV/AIDS on the control of tuberculosis in India. Proc Natl Acad Sci USA. 2005, 102 (27): 9619-9624. 10.1073/pnas.0501615102.
17. Murray CJL, Salomon JA: Modeling the impact of global tuberculosis control strategies. Proc Natl Acad Sci USA. 1998, 95: 13881-13886. 10.1073/pnas.95.23.13881.
18. Murray CJL, Salomon JA: Using Mathematical Models to Evaluate Global Tuberculosis Control Strategies. 1998, [http://apin.harvard.edu/faculty/joshuaSalomon/files/MurraySalomon_ModelingTBControlStrategies_HCPDS1998.pdf]
19. Murray CJL, Salomon JA: Expanding the WHO tuberculosis control strategy: rethinking the role of active case-finding. Int J Tuberc Lung Dis. 1998, Suppl 1: S9-S15.
20. Knight FH: Risk, Uncertainty, and Profit. 1921, New York: Hart, Schaffner, and Marx.
21. Wald A: Statistical decision functions which minimize the maximum risk. Ann Mathematics. 1945, 46 (2): 265-280. 10.2307/1969022.
22. Ben-Tal A, Nemirovski A: Robust solutions of uncertain linear programs. Oper Res Lett. 1999, 25: 1-13. 10.1016/S0167-6377(99)00016-4.
23. Ben-Haim Y: Robust rationality and decisions under severe uncertainty. J Franklin Inst. 2000, 337: 171-199. 10.1016/S0016-0032(00)00016-8.
24. Ben-Haim Y, Hemez F: Robustness, fidelity and prediction-looseness of models. Proc R Soc A. 2012, 468: 227-244.
25. Ben-Haim Y: Info-gap decision theory for engineering design. Engineering Design Reliability Handbook. Edited by: Nikolaidis E, Ghiocel D, Singhal S. 2005, Boca Raton: CRC Press, 11.1-11.30.
26. Carmel Y, Ben-Haim Y: Info-gap robust-satisficing model of foraging behavior: Do foragers optimize or satisfice?. Am Naturalist. 2005, 166: 633-641. 10.1086/491691.
27. Klir G, Folger T: Fuzzy Sets, Uncertainty, and Information. 1988, New York: Prentice-Hall.
28. Dubois D, Prade H: Possibility Theory: An Approach to Computerized Processing of Uncertainty. 1986, New York: Plenum Press.
29. Hall J, Lempert R, Keller K, Hackbarth A, Mijere C, McInerney D: Robust climate policies under uncertainty: a comparison of robust decision making and info-gap methods. Risk Anal. 2012, 32 (10): 1657-1672. 10.1111/j.1539-6924.2012.01802.x.
30. Ben-Haim Y: Info-gap forecasting and the advantage of sub-optimal models. Eur J Operational Res. 2009, 197: 203-213. 10.1016/j.ejor.2008.05.017.
31. Ben-Haim Y: Robust satisficing and the probability of survival. Intl J of Syst Sci. [http://0www.tandfonline.com.brum.beds.ac.uk/doi/full/10.1080/00207721.2012.684906]
32. Grassly NC, Morgan M, Walker N, Garnett G, Stanecki K, Stover J, Brown T, Ghys PD: Uncertainty in estimates of HIV/AIDS: the estimation and application of plausibility bounds. Sex Transm Infect. 2004, Suppl I: i31-i38.
33. Cohen T, Colijn C, Finklea B, Murray M: Exogenous reinfection and the dynamics of tuberculosis epidemics: local effects in a network model of transmission. J R Soc Interface. 2007, 4 (14): 523-531. 10.1098/rsif.2006.0193.
34. Dye C, Garnett GP, Sleeman K, Williams B: Prospects for worldwide tuberculosis control under the WHO DOTS strategy. Directly observed short-course therapy. Lancet. 1998, 352 (9144): 1886-1891. 10.1016/S0140-6736(98)03199-7.
35. Cohen T, Lipsitch M, Walensky RP, Murray M: Beneficial and perverse effects of isoniazid preventive therapy for latent tuberculosis infection in HIV-tuberculosis coinfected populations. Proc Natl Acad Sci USA. 2006, 103 (18): 7042-7047. 10.1073/pnas.0600349103.
36. Wools-Kaloustian K, Kimaiyo S, Diero L, Siika A, Sidle J, Yiannoutsos C, Musick B, Einterz R, Fife K, Tierney WM: Viability and effectiveness of large-scale HIV treatment initiatives in sub-Saharan Africa: experience from western Kenya. AIDS. 2006, 20: 41-48. 10.1097/01.aids.0000196177.65551.ea.
37. Vynnycky E, Nagelkerke N, Borgdorff MW, van Soolingen D, van Embden JD, Fine PE: The effect of age and study duration on the relationship between ‘clustering’ of DNA fingerprint patterns and the proportion of tuberculosis disease attributable to recent transmission. Epidemiol Infect. 2001, 126: 43-62.
38. Vynnycky E, Fine PE: The natural history of tuberculosis: the implications of age-dependent risks of disease and the role of reinfection. Epidemiol Infect. 1997, 119 (2): 183-201. 10.1017/S0950268897007917.
39. Vynnycky E, Borgdorff MW, Leung CC, Tam CM, Fine PE: Limited impact of tuberculosis control in Hong Kong: attributable to high risks of reactivation disease. Epidemiol Infect. 2008, 136 (7): 943-952.
40. Verver S, Warren RM, Beyers N, Richardson M, van der Spuy GD, Borgdorff MW, Enarson DA, Behr MA, van Helden PD: Rate of reinfection tuberculosis after successful treatment is higher than rate of new tuberculosis. Am J Respir Crit Care Med. 2005, 171 (12): 1430-1435. 10.1164/rccm.200409-1200OC.
Prepublication history
The prepublication history for this paper can be accessed here: http://0www.biomedcentral.com.brum.beds.ac.uk/14712458/12/1091/prepub
Acknowledgements
Financial Support: This work was supported in part by NIH grant R01AI097045. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
One author (NMZ) is indebted to the University of Pennsylvania CFAR Developmental and International Cores (NIH grant P30AI45008, Penn Center for AIDS Research) for their continuous support in this and other related studies.
Author information
Additional information
Competing interests
The authors have no competing interests.
Authors’ contributions
YBH formulated the decision analysis and implemented the calculations. CD and NZ formulated the medical model. All authors had access to all data, participated in interpreting the results of the analysis, contributed to writing the manuscript, and approved the final version of the manuscript.
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
Rights and permissions
About this article
Cite this article
BenHaim, Y., Dacso, C.C. & Zetola, N.M. Infogap management of public health Policy for TB with HIVprevalence and epidemiological uncertainty. BMC Public Health 12, 1091 (2012). https://0doiorg.brum.beds.ac.uk/10.1186/14712458121091
Received:
Accepted:
Published:
DOI: https://0doiorg.brum.beds.ac.uk/10.1186/14712458121091
Keywords
 TB management
 HIVAIDS
 Public health
 Epidemiology
 Uncertainty
 Robustness
 Infogap