Utilising multiple discharge coding will improve identification of patients with giant cell arteritis: a retrospective analysis of a hospital discharge dataset

Administrative datasets are frequently used in epidemiological studies [4]; however, it is important to understand the limitations of such data. Since administrative datasets rely largely on discharge diagnoses, this study sought to determine the accuracy of a discharge diagnosis of GCA (ICD-10 codes M31.5 and M31.6) compared with expert diagnosis of GCA at 6 months. We found that only 65.6% (95% CI [55.0%, 75.1%]) of patients discharged with a diagnosis of GCA had the diagnosis confirmed by expert opinion at 6-month follow-up, although 88.2% (95% CI [79.8%, 93.9%]) fulfilled the 2022 ACR/EULAR criteria for GCA.
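
As an illustrative sketch only, these proportions and intervals can be reproduced from the counts reported later in this discussion (61 of 93 patients expert-confirmed; 58 + 24 = 82 of 93 meeting the 2022 ACR/EULAR criteria); the use of the Clopper-Pearson exact interval is assumed here, not stated in the study.

```python
# Sketch: reproducing the reported proportions and 95% CIs from counts
# implied elsewhere in this discussion (61/93 expert-confirmed;
# 82/93 meeting the 2022 ACR/EULAR criteria).
# The interval method (Clopper-Pearson "exact") is an assumption.
from statsmodels.stats.proportion import proportion_confint

for label, k, n in [("expert-confirmed GCA", 61, 93),
                    ("met 2022 ACR/EULAR criteria", 82, 93)]:
    lo, hi = proportion_confint(k, n, alpha=0.05, method="beta")  # Clopper-Pearson
    print(f"{label}: {k}/{n} = {k/n:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```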

We used a clinician diagnosis of GCA at 6 months as the anchor, which is generally recognised as the current gold standard [7, 8, 14, 15], although other definitions exist. The lack of a perfect diagnostic test for GCA, as well as the lack of a standardised definition of GCA diagnosis, has resulted in heterogeneous data within the databases used in epidemiological studies [18]. For example, hospital-based studies often include only biopsy-proven GCA cases, whereas population-based studies often include a clinical diagnosis [18]. Results may therefore vary depending on which definition of GCA is used. In clinical practice, early diagnosis of GCA is key, and the index of suspicion should be high so that treatment can be started promptly to prevent irreversible ischaemic complications [19]. This may, however, lead to a preliminary diagnosis of GCA at discharge, whereas the final diagnosis, or de-diagnosis, is often only made from the composite results of multimodal imaging, TAB, laboratory parameters and assessment of clinical features over time. Undoubtedly, efforts should be made to objectively secure the diagnosis wherever possible to avoid unnecessary treatment, which may also lead to a revised diagnosis prior to the 6-month mark [15]. It is therefore not surprising that a third of patients with an initial ICD code for GCA were de-diagnosed by an expert at 6 months.

The 2022 ACR/EULAR classification criteria were used to lend some validity to the expert opinion, which is by nature subjective. Of the 61 patients with a clinical diagnosis of GCA at 6 months, 58 met the 2022 ACR/EULAR criteria at first discharge, suggesting that the classification criteria have a high sensitivity of 95.1% in our cohort. Interestingly, although classification criteria are created with specificity in mind, 24 of the 32 patients without confirmed GCA were nonetheless classified as GCA by the 2022 ACR/EULAR criteria, suggesting a poor specificity of 25%. Such classification criteria are intended for research purposes only, as they excluded symptoms of extracranial GCA [20]. Indeed, a retrospective case series showed that 25.7% of patients with a positive TAB did not meet the 1990 ACR criteria, further highlighting that these criteria are not intended for diagnostic purposes [20, 21].
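
As an illustrative sketch only, the sensitivity and specificity above follow from the 2×2 table implied by these counts, with expert diagnosis at 6 months as the reference standard and the 2022 ACR/EULAR classification at first discharge as the index test; the tabular layout is ours.

```python
# Sketch: 2x2 table implied by the counts reported in this discussion.
#                          expert GCA   expert non-GCA
# met 2022 criteria             58            24
# did not meet criteria          3             8
tp, fp, fn, tn = 58, 24, 3, 8

sensitivity = tp / (tp + fn)   # 58/61 ≈ 95.1%
specificity = tn / (tn + fp)   # 8/32  = 25%
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```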

This study demonstrated that the presence of multiple ICD-10 codes for GCA increases the probability that a patient has GCA. The chance of identifying a patient with GCA was 67.4% with one ICD-10 code for GCA, 80% with two, and 100% with three or more (p = 0.1373). Multiple inpatient codes could reflect readmission for GCA flares, outpatient TAB, or day infusions of methylprednisolone or tocilizumab. Therefore, to increase the accuracy of studies utilising administrative data, the case ascertainment definition could require multiple discharges with an ICD-10 code for GCA, as sketched below. A similar approach was proposed by Almutairi et al. (2021), who combined rheumatoid arthritis (RA) diagnostic codes with biologic infusion codes to improve the accuracy of administrative health data for identifying patients with RA [22]. This is a simple intervention to improve the validity of case identification in GCA studies using administrative data [22] and would most likely produce a more homogeneous population in patient selection for GCA studies. However, a further study with a larger sample size is required, as the trend observed here did not reach statistical significance.
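
As an illustrative sketch only (not part of this study's methods), the proposed case-ascertainment rule could be applied to a discharge-level administrative dataset as follows; the column names and data layout are hypothetical and would need to be adapted to the dataset at hand.

```python
# Sketch of the proposed case-ascertainment rule: count a patient as a GCA
# case only if at least `min_episodes` discharge episodes carry a GCA ICD-10
# code (M31.5 or M31.6). Column names ("patient_id", "diagnosis_codes") are
# hypothetical; "diagnosis_codes" is assumed to hold a list/set of codes.
import pandas as pd

GCA_CODES = {"M31.5", "M31.6"}

def gca_cases(discharges: pd.DataFrame, min_episodes: int = 2) -> pd.Index:
    """Return patient IDs with at least `min_episodes` GCA-coded discharges."""
    has_gca = discharges["diagnosis_codes"].apply(
        lambda codes: bool(GCA_CODES.intersection(codes)))
    counts = discharges.loc[has_gca].groupby("patient_id").size()
    return counts[counts >= min_episodes].index
```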

After 6 months of follow-up, fewer than half of the patients de-diagnosed with GCA were given an alternative diagnosis, the most common being sinusitis and occipital/trigeminal neuralgia. This does, however, attest to the protean and often ill-defined presentations that may lead clinicians to mistakenly suspect GCA [23]. Concerningly, a third of these patients still had GCA listed in their medical records and subsequent discharge summaries, indicating a high rate of persistent inaccuracies in medical databases over time. If the diagnosis of GCA has been disproven after expert review, efforts should be made to remove it from the database and medical records. A diagnosis of GCA that remains perpetually in the medical history may lead to future episodes being mistaken for GCA flares and to the commencement of unnecessary immunosuppression. Practical strategies include asking clinicians to actively remove the GCA diagnosis from the patient's medical history; writing to the general practitioner (GP) and the coding department to remove the diagnosis; and empowering patients to remind other clinicians that they do not have GCA.

A strength of this study is that it took into account patients with a diagnosis of GCA by expert opinion, not just those with a positive TAB or imaging, and it included patients with extracranial phenotypes of GCA. It therefore provides a more accurate representation of confirmed GCA cases than a biopsy-only definition. Expert opinion remains the most reliable tool for the diagnosis of GCA [7, 8, 14, 15].

The limitations of this study are its small sample size and retrospective design. Because the data were collected by reviewing medical records and discharge summaries, information bias may have arisen from missing records, and the analysis relies on the accuracy of clinician documentation and retrospectively collected discharge summaries. In addition, the ACR/EULAR score was calculated from symptoms documented in the medical records rather than by a small number of trained assessors, introducing high assessor variability. This is difficult to address in a retrospective study and supports the need for a prospective study design in future.
