Health technology assessment (HTA) is widely used to evaluate medicines in terms of their value for money and to determine which should be funded, particularly in the current climate of rising healthcare spending and limited resources. HTA provides policymakers with a comprehensive approach to assessing a medicine's clinical effectiveness, safety, quality, and cost-effectiveness; accordingly, economic evaluation is considered the “fourth hurdle” for funding a medicine [1]. Economic models are often developed to synthesise evidence from clinical trials and other sources (e.g., literature, databases, surveys) to estimate the incremental cost-effectiveness ratio (ICER) of the proposed medicine relative to existing treatments. Medicines with ICERs at or below an explicitly or implicitly pre-specified threshold are typically funded [2]. However, when making recommendations, policymakers do not rely solely on the most likely ICER value but also systematically assess the uncertainties associated with it [3, 4]. Such uncertainties are common in economic models and may cause medicines to be rejected for funding, thus delaying patients' access to them.
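For reference, the ICER is conventionally defined as the ratio of incremental costs to incremental health effects (e.g., quality-adjusted life-years, QALYs) of the proposed medicine versus the comparator:

```latex
\mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
```

Uncertainty in any of the four quantities on the right-hand side propagates directly into uncertainty in the ICER, which is why its robustness is central to funding recommendations.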
The systematic identification of uncertainties can help inform recommendations [5, 6]. While some uncertainties are inherent in the treatment or disease and thus cannot be mitigated in the short term, others may be reduced with alternative evidence or the collection of additional data. Value-of-information theory suggests that HTA bodies should prioritise mitigating those uncertainties for which the expected benefit of additional data outweighs the cost of acquiring it [7].
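The value-of-information trade-off can be made concrete through the expected value of perfect information (EVPI), which bounds from above what resolving all uncertainty is worth. A standard formulation, with \(\theta\) denoting the uncertain model parameters and \(NB(t,\theta)\) the net benefit of treatment \(t\), is:

```latex
\mathrm{EVPI} = \mathbb{E}_{\theta}\left[\max_{t} NB(t,\theta)\right] - \max_{t}\,\mathbb{E}_{\theta}\left[NB(t,\theta)\right]
```

Collecting additional data is worthwhile only when its cost falls below the (population-scaled) value of the information it is expected to yield.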
The ability to quantify the trade-off between the costs and benefits of obtaining additional data critically depends on how well the uncertainties have been characterised [8,9,10,11]. The first step in managing uncertainty in HTA is therefore to formally characterise the uncertainties involved [8]. Generally, uncertainties in economic models can be categorised into four types: methodological, generalisability, structural, and parameter uncertainty [8, 11,12,13,14,15,16,17,18,19].
Methodological uncertainty occurs when the methods used deviate from the reference methodology suggested by the HTA guidelines [12, 13, 15, 18]. Generalisability uncertainty refers to the degree to which the results obtained from current clinical trials or economic models can be applied to populations with distinct characteristics specific to the HTA jurisdiction [13, 15, 18]. There is no consensus on the characterisation of generalisability uncertainty across HTA jurisdictions' guidelines [20,21,22,23] or recent frameworks [12, 15, 18, 24], potentially because it may be addressed through the choice of parameter values and model structure [8, 17]. Structural uncertainty typically refers to the uncertainty associated with the economic model structure [8, 11,12,13, 15, 16, 18, 19]. While there is currently no standard definition of structural uncertainty [25], it generally encompasses any uncertainty that arises from assumptions made about the underlying model, such as the model structure or the functional form of relationships between variables [14, 26]. The distinction between parameter and structural uncertainty may be considered somewhat artificial given that structural choices could be parameterised, although sometimes with considerable effort [8, 11]. Parameter uncertainty arises from a lack of precise information about the “true value” of a parameter [8, 11,12,13,14,15]; it can result from missing data, poor-quality data, and biased study estimates [8].
Among these, methodological and structural uncertainty can be further categorised into unparameterisable and parameterisable uncertainty, depending on whether their influence on the ICER can be explored using alternative data inputs, sources, or scenarios [8, 11]. There is also an inherent hierarchy among the four types of uncertainty, reflecting the sequence of conceptualisation in the modelling (e.g., identifying the place of therapy, comparator, and patient population, followed by designing the model structure) and when each type is considered in the decision-making process. Methodological uncertainty is the first in this sequence and the most important, followed by generalisability, structural, and parameter uncertainty [6, 18].
In Australia, the Pharmaceutical Benefits Advisory Committee (PBAC) recommends which medicines should be listed on the Pharmaceutical Benefits Scheme (PBS) and thus receive public funding. The PBAC guidelines list the key factors influencing its decision-making. These include five quantitative factors (i.e., comparative health gain, comparative cost-effectiveness, patient affordability in the absence of PBS subsidy, predicted use in practice and financial implications for the PBS, and predicted use in practice and financial implications for the Australian Government health budget) and other less readily quantifiable factors (e.g., severity of disease). While the PBAC provides no clear framework for characterising uncertainty, the PBAC and related literature state that uncertainty around evidence and assumptions in submissions has influenced funding recommendations [20, 27,28,29,30,31].
The applicability of existing frameworks for characterising uncertainty in economic models [8, 11,12,13,14,15,16,17,18] is, to some extent, limited in Australian practice. First, the PBAC guidelines do not include a reference case, unlike other HTA guidelines [e.g., the guidelines of the National Institute for Health and Care Excellence (NICE, England) [22] and the Canadian Agency for Drugs and Technologies in Health (CADTH, Canada) [23]]. In theory, methodological uncertainty can be explored by comparing the results of a reference case analysis with those of a non-reference case analysis to examine the influence of methodological differences on the ICER [18]. The absence of a clearly defined reference case may result in insufficient exploration of methodological uncertainty in Australia. Second, the existing frameworks for characterising uncertainty differ from the categorisation in the PBAC guidelines (i.e., structural, parameter, and translational uncertainty) [20], posing a challenge to their practical application. In particular, the PBAC provides limited procedural guidance on how to systematically characterise structural uncertainty [17]. Third, the PBAC guidelines do not include some contemporary economic evaluation techniques [32]. For example, the PBAC mandates only deterministic sensitivity analyses for assessing uncertainty, rather than probabilistic sensitivity analyses [20]. This introduces challenges in distinguishing uncertainties that can be parameterised from those that cannot, given that the key drivers of the economic model are not consistently parameterised. For instance, although the PBAC specifically expresses concerns about generalisability uncertainty and assesses the applicability of the presented evidence to the Australian setting in both clinical and economic evaluations (refer to Sections 2 and 3A.3 of the PBAC guidelines), sensitivity analysis of the impact of this uncertainty on the ICER has been limited in practice.
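To illustrate the distinction drawn above, the following is a minimal probabilistic sensitivity analysis (PSA) sketch. Unlike a deterministic one-way analysis, which varies each input singly between fixed bounds, a PSA samples all uncertain inputs jointly from assumed distributions and summarises the resulting decision uncertainty. All parameter values and distributions here are hypothetical, chosen purely for illustration, not drawn from any PBAC submission.

```python
import numpy as np

# Illustrative PSA sketch; all values and distributions are hypothetical.
rng = np.random.default_rng(42)
n = 10_000  # number of Monte Carlo draws

# Jointly sample uncertain inputs from assumed distributions:
# incremental cost ~ Gamma (non-negative, right-skewed),
# incremental QALYs ~ Normal.
inc_cost = rng.gamma(shape=4.0, scale=5_000.0, size=n)  # mean ~= $20,000
inc_qaly = rng.normal(loc=0.40, scale=0.10, size=n)     # mean 0.40 QALYs

threshold = 50_000.0  # illustrative willingness-to-pay per QALY

# Net monetary benefit avoids the instability of the ICER ratio
# when sampled incremental effects approach zero.
nmb = threshold * inc_qaly - inc_cost
prob_cost_effective = float((nmb > 0).mean())

# Deterministic (mean-based) ICER for comparison.
icer = inc_cost.mean() / inc_qaly.mean()

print(f"Mean ICER: ${icer:,.0f}/QALY")
print(f"P(cost-effective at ${threshold:,.0f}/QALY): {prob_cost_effective:.2f}")
```

The probability of being cost-effective at the stated threshold is exactly the quantity a deterministic analysis cannot produce, and it is also the input that value-of-information calculations require.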
Previous studies have focused on assessing specific sources of uncertainty, such as time horizon, model structure, and utility and cost estimation, in the decision-making processes of the pan-Canadian Oncology Drug Review (pCODR), the National Health Care Institute (Zorginstituut Nederland, the Netherlands), NICE, and the PBAC [4, 33,34,35,36]. These analyses were typically descriptive and univariate, with two exceptions. Harris et al. (2008) analysed 103 PBAC submissions using a probit regression model and found little impact of uncertainty around economic model validity, modelled outcomes, and modelled costs on PBAC recommendations [27]. Harris et al. (2016) extended that study with more concrete measures of uncertainty around economic model validity and modelled outcomes, which significantly influenced PBAC recommendations [30].
The inconclusiveness of this literature likely reflects heterogeneous and unsystematic measures of uncertainty. These two issues are interrelated, as defining and measuring uncertainty in HTA decisions has been challenging without a standard classification framework supported by practical guidelines. Ghabri et al. (2016) is the only empirical study to have systematically considered (albeit descriptively) different types of uncertainty, including methodological, parameter, and structural uncertainty [3]. Based on 28 submissions, they found that the Haute Autorité de Santé (HAS) in France was most concerned with methodological and parameter uncertainty in its decision-making and much less with structural uncertainty.
A framework that characterises different types of uncertainty in line with PBAC practice, together with an empirical exploration of how these uncertainties influence funding recommendations, is needed: it can help the PBAC better evaluate the robustness of the ICER and thus reduce the probability of incorrect recommendations. It can also help assess whether the current evidence is adequate to reduce uncertainty and what value additional data acquisition would offer [37, 38]. Such evidence would also be valuable to sponsors, usually pharmaceutical companies, who can use it to improve their model quality.
In this study, we aimed to fill these gaps by developing a standard framework and a practical guideline for characterising different types of uncertainty in economic model-based submissions to the PBAC in Australia. We also empirically assessed the associations between these uncertainties and funding recommendations based on a sample of first submissions to the PBAC. Our primary hypothesis was that the PBAC is more concerned with uncertainties at higher levels of the hierarchy (e.g., methodological, generalisability, and structural uncertainty) and with those that cannot be parameterised from the available information [6, 18].