Figure 2 depicts the pearl selection and overall pearl-growing process. Out of 595 records, six instruments [7, 8, 21, 22, 23, 24] were identified in the database of the Division of Health Economics [17], and four additional instruments [9, 25, 26, 27] were identified through the pearl-growing process. One instrument [22] remained inaccessible despite attempts to contact the authors, leaving nine instruments included in this study.
Nine studies served as initial pearls for the nine included instruments, each associated with a unique instrument: the COST [7], the Economic Hardship Questionnaire (EHQ) [21], the FIT [8], the InCharge Financial Distress/Financial Well-being (IFDFW) [24], the Socioeconomic Well-being Scale (SWBS) [23], the Personal Financial Burden (PFB) [27], the PROFFIT [9], the Hardship And Recovery with Distress Survey (HARDS) [26], and the Subjective Financial Distress Questionnaire (SFDQ) [25].
3.2 Pearl Growing
The pearl-growing process was conducted for each of the included instruments. For the COST instrument, of the 561 articles citing the pearl across the three databases, we identified 69 articles (development, validation, and application studies using the COST) as eligible for standardized assessment (see Supplementary Material 2 in the electronic supplementary material). Detailed PRISMA charts for each instrument are provided in Supplementary Material 2.
Table 1 provides a summary of the descriptive characteristics of the pearls and their respective instruments. All nine instruments were developed after 2001, with four instruments developed within the last 3 years: the SFDQ [25], the HARDS [26], the FIT [8], and the PROFFIT [9]. The majority of these instruments (five of nine) were originally validated in the US [6, 7, 21, 23, 24, 27]. The four remaining instruments were validated in four other countries: the FIT in Canada [8], the PROFFIT in Italy [9], the SFDQ in India [25], and the HARDS in China [26]. Seven instruments were specifically validated among cancer populations (SFDQ, HARDS, FIT, PROFFIT, COST, SWBS, and PFB), whereas two were validated in non-cancer populations (EHQ, IFDFW). Notably, the EHQ was validated in parents of seventh- or eighth-grade children without cancer. The number of items in each instrument ranged from seven (PFB and PROFFIT) to 20 (EHQ).
Table 1 Descriptive statistics of the pearls – key publications of included instruments
The COST was the most commonly used instrument to assess the SEI of cancer, having been applied in 57 studies. Although not cancer specific, the IFDFW [24] and the EHQ [21] were the second (n = 15) and third (n = 6) most frequently used instruments for measuring the SEI of cancer, as shown in Table 1. The three newly developed instruments (the FIT [8], the SFDQ [25], and the HARDS [26]) had yet to be applied or validated in other studies at the time of the current review.
Only two instruments, the COST and the IFDFW, were utilized in countries beyond the one in which they were originally developed (hereafter referred to as contextually adapted versions). Apart from the US, the COST [7] was applied and/or validated in various linguistic versions in multiple countries, including India [28], Brazil [29], China [30, 31], Australia [32], Italy [33], Iran [34], Japan [35], and Tunisia [36]. The IFDFW was applied in Iran and Malaysia in Persian and Malay versions, respectively.
Further details pertaining to individual instruments can be found in Supplementary Material 2.
3.3 Standardized Assessment
In total, 21 validation studies were assessed, covering the nine originally developed instruments, ten contextually adapted versions of the COST, and two contextually adapted versions of the IFDFW. Among the adaptations of the COST, six validated the 11-item original COST version 1 (COST V1) developed by de Souza et al. (2014) [6], while four validated the 12-item COST version 2 (COST V2) released by the FACIT group [37]. The overall EMPRO scores and attribute-specific scores for each instrument are illustrated in Fig. 3.
Fig. 3 The overall EMPRO and attribute-specific scores of the included instruments. Instruments with no score: unable to score due to unavailable/limited information. COST COmprehensive Score for financial Toxicity, EHQ Economic Hardship Questionnaire, EMPRO Evaluating the Measurement of Patient-Reported Outcomes, FIT Financial Index of Toxicity, HARDS Hardship And Recovery with Distress Survey, InCharge InCharge Financial Distress/Financial Well-being (IFDFW), PFB Personal Financial Burden, PROFFIT Patient-Reported Outcome for Fighting Financial Toxicity, SFDQ Subjective Financial Distress Questionnaire, SWBS Socioeconomic Well-being Scale, v version
Regarding the ‘concept and measurement model’ attribute of the EMPRO tool, all nine original instruments provided sufficient information for attribute scoring (see Supplementary Material 3 in the electronic supplementary material). Scores ranged from 19.05 (PFB) to 90.47 (COST), with six original instruments scoring over 50: the COST (90.47), the PROFFIT (71.43), the EHQ (57.14), the IFDFW (52.38), the SFDQ (52.38), and the HARDS (52.38). This attribute contributed considerably to the overall EMPRO score of these instruments, as shown in Fig. 3. In contrast, the contextual adaptations of the COST and the IFDFW lacked sufficient information to score this attribute, as they assumed the generalizability of the original instruments’ concept and measurement model to different cultural and linguistic settings.
Most instruments’ ‘reliability’ scores were derived from the ‘internal consistency’ sub-attribute because of low reproducibility scores or a lack of available information (refer to Supplementary Material 3). Four original instruments scored over 50 for reliability: the SWBS (77.78), the COST (75), the EHQ (58.33), and the PROFFIT (55.56). Among the contextual adaptations, six versions scored 55.56, and the rest either scored below 50 or were not scorable due to insufficient information. ‘Reliability’ was the attribute most frequently scored (n = 16) across the 21 validation studies investigated (refer to Fig. 3).
For the ‘validity’ attribute, the SFDQ and the HARDS lacked sufficient information for scoring. Among the remaining seven original instruments, scores ranged from 33.33 to 100. The COST achieved the highest score (100), followed by the IFDFW (66.67) (refer to Supplementary Material 3). Among the contextual adaptations, only two versions of the COST could be scored (50 and 58.33), while the rest were non-scorable due to the limited information available.
Very limited information was identified for the ‘responsiveness,’ ‘interpretability,’ ‘burden,’ and ‘alternative modes of administration’ attributes, rendering most instruments non-scorable on these attributes and substantially lowering their overall EMPRO scores. Consequently, only the original COST instrument attained an overall score above 50 (66.42), while the contextual adaptations of the COST scored below 50 due to insufficient information (refer to Fig. 3 and Supplementary Material 3).
3.4 Concepts Reflected by the Included Instruments
Table 2 summarizes our allocation of each item of the included instruments to the themes and subthemes of the OECI conceptual framework [1]. The theme most frequently assessed by the instruments was ‘financial coping ability’ (all nine instruments), followed by ‘psychological financial response’ (eight instruments; all except the PFB) and ‘financial coping behavior’ (seven instruments; all except the COST and the SWBS).
Table 2 Item allocation to the OECI conceptual framework
Only one instrument (the PROFFIT) addressed the theme of ‘direct costs,’ by considering the ‘non-medical costs’ of travel for treatment, without quantifying the amount in monetary terms. Similarly, only three of the nine instruments addressed the ‘indirect costs’ theme, namely the COST, the FIT, and the SFDQ.
Based on our allocation process, two instruments, the PROFFIT [9] and the SFDQ [25], covered most themes of the framework: the PROFFIT covered all themes except ‘indirect costs,’ whereas the SFDQ covered all themes except ‘direct costs.’ Accordingly, none of the included instruments covered all themes of the framework.