Do Medical Schools Need to Adapt Their Curriculum in Order to Teach Medical Students ‘Webside’ Manner? A Systematic Review

Figure 1 demonstrates the PRISMA diagram for article selection. The primary search produced 241 citations. After removal of duplicates and review of abstracts, 38 full-text articles were extracted for review. Of these, 27 were excluded, leaving a total of 11 articles meeting the inclusion criteria. The 11 articles were assessed for quality using the MERSQI scoring system. The MERSQI assesses six domains: study design, sampling, data type, validity of evaluation tool, data analysis, and outcome, producing an overall score [28]. The maximum score is 18, with higher scores indicating higher-quality studies. Scores for the included studies ranged from 6 to 12, with a median score of 8.2/18. For individual study assessments, see Appendix 2. Overall, the quality of the included studies is low: the average MERSQI score among published medical education studies is 10.7, with 9.0 often cited as a publication cut-off. Data provision is lacking in several of the studies, particularly those with a qualitative element, and reporting of methodology and analysis is limited. This may reflect the rapidly evolving nature of COVID-19 and the demand for 'quick' research results to guide a change in practice.

Fig. 1 PRISMA diagram for article selection

Study Characteristics

The 11 studies include a total of 809 medical students, ranging from first-year to final-year students. Over half of the included studies were from the Americas: 5 of 11 from the USA and 1 of 11 from Canada. The remaining 5 studies were undertaken in the UK, Australia, and Germany. Sample sizes varied considerably, ranging from 5 to 153 students. Nearly all the studies used students' perceptions and opinions of their own skills as an evaluation measure, with only 2 of 11 studies using objective data provided by the simulated patient. Nine of 11 used self-reported participant questionnaires only, and 1 of 11 used a combination of objective and self-reported participant feedback for data collection. Loss to follow-up also varied greatly, from 0% to 55%. Table 1 shows the characteristics of the included studies and the themes to which they apply.

Table 1 Study characteristics of included articles [29,30,31,32,33,34,35,36,37,38,39]

Methods of Training or Curriculum Implementation

Curriculum design and training delivery varied considerably across the 11 studies. Although all included some form of simulated practice, there were notable differences in theoretical content, training duration, delivery methods, and the nature of pre-intervention learning or information provided. The specific approach taken in each study is outlined in Table 2.

Table 2 Intervention characteristics of included studies [29,30,31,32,33,34,35,36,37,38,39]

Effectiveness of Training or Curriculum

Direct comparison of the included studies was difficult due to significant heterogeneity in methodological approaches, intervention implementation, and outcome reporting, as seen in Table 2. As a result, a narrative synthesis was undertaken. Thematic analysis of the included studies identified three focussed themes for reporting data: confidence in performing virtual consultations, communication skills, and establishing the doctor-patient relationship or rapport. Digital literacy is not reported, as it does not address webside manner. Finally, participants' overall perception of the interventions was included: an intervention must be acceptable and appropriate to its users, and such stakeholder feedback informs future researchers.

Communication Skills

Seven of 11 studies included questions directly evaluating communication skills in their study outcomes. The majority demonstrated positive associations as a result of their intervention, with only one demonstrating no change following the telemedicine training (Table 3). Mulcare et al. are not shown in Table 3, as they provide no numerical or descriptive data, only the statement that 'almost all learners concluded that the simulation session would alter their approach to communicating with patients over a virtual medium going forward' [34].

Table 3 Results of intervention on communication skills [29, 30, 33, 35, 38, 39]

Whilst the majority of studies rely on self-reported improvement in communication skills, Kumra et al. provide a numerical score calculated from simulated patient feedback [33]. However, it is not clear how this score was affected by the session, as no pre-intervention score was taken; the same limitation applies to Afonso and to Bramstedt et al. [30]. Furthermore, it is not entirely clear how this score was generated, despite the simulated patient checklist being provided in the appendices.

Of the 11 studies, only Walker et al. specifically reported on body language when performing a virtual consultation, demonstrating a positive association with training [39]. Body language is reported in the literature as one of the most important differences between face-to-face and virtual consulting; it may therefore be a consideration for future research within telemedicine training provision [40]. Although most studies reported overwhelmingly positive effects of their training interventions on communication skills for virtual consulting, significant limitations in study design and reporting mean that these findings should be interpreted with caution.

Doctor-Patient Relationship

Three of 11 studies specifically evaluated the intervention's effect on empathy and establishing a doctor-patient relationship. Vogt et al. evaluated student confidence in building a doctor-patient relationship during a virtual consultation, reporting a mean score of 4.05 (SD 1.12) [38]. This variable was not evaluated pre-intervention, so no mean difference is available, limiting assessment of intervention efficacy. Similarly, Bramstedt et al. evaluated empathy towards patients in the virtual setting [30]. They report that 12 of 15 participants (80%) strongly agreed or agreed that the session improved their empathy towards patients in this setting [30]. They also report that all 15 participants strongly agreed or agreed that the session aided their professionalism and behaviour training [30]. Newcomb et al. are the only study to evaluate empathy both pre- and post-session, using a Likert scale [36]. Prior to the session, the majority of students felt fairly confident in demonstrating empathy; following the session, the majority felt completely confident, demonstrating a post-intervention change, although exact numbers are not provided [36]. Whilst these 3 studies demonstrate a positive association between telemedicine training and confidence in establishing the doctor-patient relationship, the numbers are small (n = 101) and statistical significance is not established, limiting interpretation.

Confidence in Performing Virtual Consultations

Whilst several studies did not specifically report on non-technical skills such as developing empathy and establishing rapport, they did report on general consulting and history taking, a key part of webside manner. A significant part of history taking is effective communication with the patient. Overall confidence in performing virtual consultations, including history taking, was therefore evident within the review, applying to 5 of 11 studies.

Booth et al. and Gunner et al. both applied numerical values to Likert scales, calculating mean confidence scores pre- and post-intervention [31, 32]. Both present improvement in confidence following the telemedicine training sessions. Booth et al. report that mean confidence in history taking increased from 3.4 to 4.45 post-intervention, a mean difference of 1.05 [31], whilst Gunner et al. evaluated confidence in video consultation skills, with perceived student confidence increasing from 2.32 (SD 0.83) pre-intervention to 3.97 (SD 0.38) post-intervention [32]. This represents a significant mean difference (p < 0.01) [32]. Similarly, Walker et al. report a pre-intervention mean score of 2.73 and a post-intervention score of 3.62 (mean difference 0.89) when evaluating student comfort in conducting telemedicine encounters [39]. Kumra et al. support this, stating that 93% of students post-session strongly agreed or agreed that the session increased their confidence in history taking in the virtual setting [33]. Rienitis et al. provide further support by evaluating student confidence in conducting the complete virtual consultation: prior to the intervention, 6 students (10.2%) provided positive responses, increasing to 41 students (69.5%) post-session [37]. These studies demonstrate a positive association between telemedicine training and self-reported confidence in virtual consulting; however, the quality of evidence is low and should be interpreted with caution.

Overall Value of the Session

The value of the session was reported quantitatively, as percentages, in 4 of 11 studies, with Booth et al., Vogt et al., and Mulcare et al. presenting positive reactions to the training provided [31, 34, 38]. Ninety-seven percent of students felt the course was useful in Mulcare et al., 83.4% of students rated the course as very good or good in Vogt et al., and 87.5% found the session very useful in Booth et al., with none reporting the session as not useful at all [31, 34, 38]. However, Rienitis et al. report that only 45.8% felt the course was valuable to their learning, meaning the majority did not find the course valuable [37]. Other studies included student quotes relating to the course set-up and value; however, as they presented no qualitative methodology or evaluation, these were excluded from this review owing to the limited additional value and the inability to assess the trustworthiness of the data. Whilst the overall value of the session does not answer the question posed by this review, it is an important consideration for any intervention or curricular change: an intervention must be acceptable to students and be perceived as important to encourage engagement and change [16].
