Abstract
Background: Proper basic life support (BLS) skills are crucial for laypeople and health care professionals to increase the survival of cardiac arrest patients. A practical examination at the end of a BLS course may be beneficial for prolonging skill retention. We aimed to investigate the efficacy of our BLS training and the effect of BLS practical examinations on skill retention among fifth-year medical students compared with the effect of additional practice and continuous assessment.
Methods: In this randomized, assessor-blinded, parallel group study, fifth-year medical students took a practical BLS examination (“practical examination” group) or participated in an additional 30-minute practice with continuous assessment (“additional practice” group) two weeks after a 90-minute intrahospital COVID-19 BLS training. BLS skill retention was assessed two weeks, two months and one year later, and the results of the two groups were compared. Fourteen elements of BLS were evaluated during the skill retention assessments. Descriptive statistics and Mann–Whitney and Fisher’s exact tests were used for statistical analysis.
Results: Thirty-two voluntary students were included (practical examination: n = 17, additional practice: n = 15), with no significant differences in basic characteristics (age: p = 0.891; gender: p = 0.999; previous BLS education: p = 0.469; previous participation in BLS: p = 0.678; planning to work in emergency medicine or critical care: p = 0.471). BLS skills were satisfactory during all skill retention assessments, except for the application of protective equipment and depth of chest compressions. More students placed surgical masks on patients’ faces in the additional practice group during the first skill retention assessment (p = 0.005). However, this difference disappeared over time, and both groups performed poorly in the application of protective equipment. The activation of the chain of survival and high-quality chest compressions were acceptable during all the skill retention assessments. There was no significant difference in overall BLS skill retention between the two groups (total score after two weeks: p = 0.764; after two months: p = 0.542; after one year: p = 0.791).
Conclusions: The BLS course provided by our department was effective; however, the BLS practical examination did not offer a significant advantage in terms of skill retention compared to additional practice and continuous assessment in our student population.
ERCT Criteria Breakdown
-
Level 1 Criteria
-
C
Class-level RCT
- Individual students (not classes or schools) were randomized, and the intervention does not qualify for the tutoring-style exception.
- "The students were randomized into two groups with the help of Research Randomizer Version 4.0 software (Urbaniak, G. C., & Plous, S.)." (p. 3)
Relevant Quotes:
1) "This was a randomized, parallel group, single-blinded educational study investigating the effect of BLS practical examinations on skill retention compared with the effect of additional training with continuous assessment." (p. 2)
2) "The students were randomized into two groups with the help of Research Randomizer Version 4.0 software (Urbaniak, G. C., & Plous, S.)." (p. 3)
3) "Students randomized to the “additional practice” group took part in compulsory 90-minute BLS training sessions on the first day of the block and had an additional shorter practice session two weeks after the first training with continuous assessment evaluation by one of our instructors (authors DS, TN and CsM)." (p. 3)
4) "Students randomized to the “practical examination” group participated in the same BLS training at the beginning of the block and took a practical BLS examination two weeks after the BLS training with no additional practice." (p. 3)
Detailed Analysis:
Criterion C requires randomization at the class level (or stronger, such as the school level) to reduce contamination between treatment and control in typical educational settings. A narrow exception is allowed for personal one-to-one tutoring interventions, where student-level randomization is acceptable.
The paper explicitly states that "students" were randomized into two groups. The intervention is a course-related BLS training and subsequent examination/practice format, not a one-to-one tutoring intervention that would justify applying the exception.
Final Summary:
Criterion C is not met because randomization was at the individual student level and no tutoring-style exception applies.
-
E
Exam-based Assessment
- Outcomes were measured via scenario-based checklists and manikin software rather than a widely recognized standardized exam.
- "Instructors used a checklist (Supplementary material 3) to indicate whether a step was implemented correctly or not and marked the exact rate and depth of chest compressions, measured with the software Ambu Man C Torso™ (Ambu A/S, Copenhagen, Denmark)." (p. 4)
Relevant Quotes:
1) "Fourteen elements of BLS were evaluated during the skill retention assessments." (p. 1)
2) "The following fourteen BLS steps were evaluated by an independent instructor who did not teach the students during their BLS practice and was not involved in the practical examination or continuous assessment:" (p. 4)
3) "Instructors used a checklist (Supplementary material 3) to indicate whether a step was implemented correctly or not and marked the exact rate and depth of chest compressions, measured with the software Ambu Man C Torso™ (Ambu A/S, Copenhagen, Denmark)." (p. 4)
4) "A step was considered correct if it was performed according to the ERC guidelines in at least 75% during the assessment." (p. 4)
Detailed Analysis:
Criterion E requires a standardized, widely recognized exam-based assessment (i.e., an established external/standardized testing instrument), not a study-specific or locally administered checklist aligned to the intervention content.
This study measures BLS performance using (a) scenario performance rated on a checklist and (b) manikin/software-derived measures of compression depth/rate. While these are structured and anchored to ERC guidelines, the paper does not describe using a widely recognized standardized exam instrument for BLS competence; it describes an internally administered checklist-based assessment.
Final Summary:
Criterion E is not met because the outcomes rely on a checklist and manikin software measures rather than a standardized exam.
-
T
Term Duration
- Outcomes were assessed up to one year after the intervention, exceeding a term-length follow-up.
- "BLS skill retention was assessed two weeks, two months and one year later, and the results of the two groups were compared." (p. 1)
Relevant Quotes:
1) "Study participants with the other students in the Intensive Therapy and Anesthesiology course received a 90-minute long in-hospital BLS practice on the first day of the Intensive Therapy and Anesthesiology block ..." (p. 3)
2) "Students randomized to the “practical examination” group participated in the same BLS training at the beginning of the block and took a practical BLS examination two weeks after the BLS training with no additional practice." (p. 3)
3) "We performed a skill retention assessment two weeks, two months and one year after the last educational intervention (practical examination or additional practice with continuous assessment) ..." (p. 4)
Detailed Analysis:
Criterion T requires that outcomes be measured at least one full academic term (typically ~3–4 months) after the intervention begins.
The intervention begins with the initial BLS training on the first day of the block, and the study includes follow-up assessments extending to "one year after" the last educational intervention. This clearly exceeds a term-length follow-up.
Final Summary:
Criterion T is met because the study includes outcome assessment up to one year after the intervention.
-
D
Documented Control Group
- Both conditions are clearly described and baseline characteristics are reported, providing sufficient documentation for comparison.
- "Table 1 shows the characteristics of the students included in the study." (p. 5)
Relevant Quotes:
1) "Students randomized to the “additional practice” group took part in compulsory 90-minute BLS training sessions on the first day of the block and had an additional shorter practice session two weeks after the first training with continuous assessment evaluation by one of our instructors (authors DS, TN and CsM)." (p. 3)
2) "Students randomized to the “practical examination” group participated in the same BLS training at the beginning of the block and took a practical BLS examination two weeks after the BLS training with no additional practice." (p. 3)
3) "Thirty-two voluntary students were included (practical examination: n = 17, additional practice: n = 15), with no significant differences in basic characteristics ..." (p. 1)
4) "Table 1 shows the characteristics of the students included in the study." (p. 5)
5) "The age, gender distribution, previous experience in BLS, previous experience in BLS education, and plans regarding the chosen speciality after university did not differ between the additional practice and practical examination groups." (p. 5)
Detailed Analysis:
Criterion D requires a well-documented control/comparison condition, including what each group received and baseline characteristics sufficient to evaluate comparability.
This paper compares two randomized conditions (practical examination vs additional practice with continuous assessment). It describes both conditions and reports baseline student characteristics in Table 1, explicitly noting no baseline differences between groups.
Final Summary:
Criterion D is met because the comparison groups and baseline characteristics are clearly documented.
-
Level 2 Criteria
-
S
School-level RCT
- Randomization occurred among individual students rather than among schools (or equivalent institutional units).
- "The students were randomized into two groups with the help of Research Randomizer Version 4.0 software (Urbaniak, G. C., & Plous, S.)." (p. 3)
Relevant Quotes:
1) "The participants were fifth-year medical students studying at Semmelweis University in the 2021/2022 academic year ..." (p. 2)
2) "The students were randomized into two groups with the help of Research Randomizer Version 4.0 software (Urbaniak, G. C., & Plous, S.)." (p. 3)
Detailed Analysis:
Criterion S requires random assignment at the school (or equivalent institutional unit) level.
The paper is conducted within one university context and explicitly randomizes "students" into two groups. No schools, programs, or institutions were randomized.
Final Summary:
Criterion S is not met because the unit of randomization is not the school (or equivalent institutional unit).
-
I
Independent Conduct
- Key trial activities were performed by the authors and their department rather than by a clearly external, independent evaluator.
- "The randomization of the group assignment and the enrolment of students were completed by authors EK, AK, SzF and MB." (p. 3)
Relevant Quotes:
1) "The randomization of the group assignment and the enrolment of students were completed by authors EK, AK, SzF and MB." (p. 3)
2) "Skill retention assessments were performed two weeks, two months and one year later by our instructors to evaluate short- and long-term BLS skill retention (authors GK, EK and PSz)." (p. 3)
3) "In the single-blinded randomization, the students were aware of which group they belonged to; however, the instructors during BLS training and the instructors performing the skill retention assessments were blinded to who belonged to which group." (p. 3)
4) "The following fourteen BLS steps were evaluated by an independent instructor who did not teach the students during their BLS practice and was not involved in the practical examination or continuous assessment:" (p. 4)
Detailed Analysis:
Criterion I requires independent conduct, meaning the study is conducted independently of the intervention designers/providers, typically via a third-party evaluation team or organization.
The paper shows role separation and assessor blinding, and it notes an "independent instructor" who did not teach the students and was not involved in the intervention sessions. However, the paper also explicitly states that randomization and enrolment were done by authors and that skill retention assessments were performed by "our instructors", including named authors. This indicates that the evaluation was internal to the author team and department, not clearly external or third-party.
Final Summary:
Criterion I is not met because the study lacks clearly external, third-party independent conduct despite assessor blinding.
-
Y
Year Duration
- Outcomes were assessed one year after the last educational intervention, satisfying the year-duration requirement.
- "We performed a skill retention assessment two weeks, two months and one year after the last educational intervention (practical examination or additional practice with continuous assessment) ..." (p. 4)
Relevant Quotes:
1) "Study participants with the other students in the Intensive Therapy and Anesthesiology course received a 90-minute long in-hospital BLS practice on the first day of the Intensive Therapy and Anesthesiology block ..." (p. 3)
2) "We performed a skill retention assessment two weeks, two months and one year after the last educational intervention (practical examination or additional practice with continuous assessment) ..." (p. 4)
3) "The secondary outcome was defined as long-term skill retention measured two months and one year after the interventions." (p. 4)
Detailed Analysis:
Criterion Y requires that outcomes be measured at least 75% of an academic year after the intervention begins.
The study includes follow-up at "one year" after the last educational intervention, which exceeds the 75%-of-a-year threshold under any typical academic calendar interpretation.
Final Summary:
Criterion Y is met because the study measured outcomes one year after the intervention.
-
B
Balanced Control Group
- The added time/instructor input is the intervention contrast being tested (additional practice with continuous assessment vs a practical examination), so unequal resources are integral rather than a confounding add-on.
- "Students randomized to the additional practice group participated in an additional 30-minute practice two weeks after the BLS training, at the same time as the BLS practical examination." (p. 4)
Relevant Quotes:
1) "Students randomized to the additional practice group participated in an additional 30-minute practice two weeks after the BLS training, at the same time as the BLS practical examination." (p. 4)
2) "An instructor was present during this practical session; however, he or she corrected only major mistakes (mistakes influencing calling for help or high-quality chest compressions) and evaluated the students with a continuous assessment method." (p. 4)
3) "Students randomized to the practical examination group took a practical BLS skills examination two weeks after the training, at the end of the practical part of the Intensive Therapy and Anesthesiology block." (p. 4)
4) "Students received a simple 2-minute long scenario during the examination and had to solve it in the same way as during the practice." (p. 4)
5) "They had no organized opportunity to practise between the BLS training and the examination." (p. 4)
Detailed Analysis:
Criterion B evaluates whether time/resources are balanced across groups, unless the additional resources are explicitly integral to what the study is testing (i.e., the resource difference is the treatment variable, not an accidental confound).
This trial explicitly contrasts two post-training approaches: (a) an "additional 30-minute practice" with instructor presence and "continuous assessment" versus (b) a practical BLS examination consisting of a short scenario, with "no organized opportunity to practise" beforehand. The differences in time-on-task and instructor-supported practice are not incidental; they define the experimental contrast.
Therefore, although the conditions are not resource-equal, that imbalance is integral to the intervention comparison and is the intended treatment contrast rather than a separable confounding add-on.
Final Summary:
Criterion B is met because the additional time/support is integral to the intervention contrast being tested rather than an unintended imbalance.
-
Level 3 Criteria
-
R
Reproduced
- No independent replication of this specific RCT was found as of the ERCT check date.
Relevant Quotes:
1) "There is only limited evidence to show the effect of practical examinations on skill retention." (p. 2)
2) "Additionally, a randomized trial showed that taking a practical examination at the end of a BLS course can be more effective in maintaining short-term BLS skills ..." (p. 7)
Detailed Analysis:
Criterion R requires an independent replication of this specific study by a different research team in a different context, reported in a peer-reviewed publication.
The paper discusses prior randomized trials in the general area of testing effects in BLS training, but it does not identify any independent replication of this specific 2026 trial. Internet searching (as of 2026-03-03) did not identify a peer-reviewed replication study that explicitly cites this paper and reports a reproduction of its specific design.
Final Summary:
Criterion R is not met because no independent replication of this specific RCT was identified as of the ERCT check date.
-
A
All-subject Exams
- Because standardized exams are not used (Criterion E not met), the all-subject standardized exam criterion cannot be met.
Relevant Quotes:
1) "Fourteen elements of BLS were evaluated during the skill retention assessments." (p. 1)
2) "Instructors used a checklist (Supplementary material 3) to indicate whether a step was implemented correctly or not ..." (p. 4)
Detailed Analysis:
Criterion A requires standardized exam-based assessment across all main subjects and explicitly depends on Criterion E: if E is not met, A is not met.
This study uses checklist-based practical skill assessments and manikin measures rather than standardized exams. Therefore, it cannot satisfy the all-subject standardized exam requirement.
Final Summary:
Criterion A is not met because Criterion E is not met.
-
G
Graduation Tracking
- Follow-up ends at one year and does not track participants through graduation from their educational stage.
- "The secondary outcome was defined as long-term skill retention measured two months and one year after the interventions." (p. 4)
Relevant Quotes:
1) "The participants were fifth-year medical students studying at Semmelweis University in the 2021/2022 academic year ..." (p. 2)
2) "The secondary outcome was defined as long-term skill retention measured two months and one year after the interventions." (p. 4)
3) "We performed a skill retention assessment two weeks, two months and one year after the last educational intervention (practical examination or additional practice with continuous assessment) ..." (p. 4)
Detailed Analysis:
Criterion G requires tracking participants until graduation from the relevant educational stage (here, graduation from the medical program), not merely over a fixed follow-up interval.
This paper follows participants for up to one year after the intervention and does not mention collecting graduation outcomes or tracking the cohort to graduation. Internet searching (as of 2026-03-03) did not identify a follow-up paper by the same author team reporting graduation tracking for this specific cohort.
Final Summary:
Criterion G is not met because the study does not track outcomes through graduation and no graduation-tracking follow-up was found.
-
P
Pre-Registered
- The paper states "Clinical trial number Not applicable" and does not provide a pre-registration record.
- "Clinical trial number Not applicable" (p. 3)
Relevant Quotes:
1) "Clinical trial number Not applicable" (p. 3)
2) "Only students who applied voluntarily before the start of the course and provided written informed consent were included in the study." (p. 2)
Detailed Analysis:
Criterion P requires a publicly available pre-registered protocol (with a registry/platform identifier and registration timing before data collection begins).
The paper explicitly states "Clinical trial number Not applicable" and provides no registration platform, identifier, or registration date. The paper describes ethics approval and informed consent, but these are not equivalent to pre-registration of hypotheses and analysis plans. Internet searching (as of 2026-03-03) did not identify a publicly posted pre-registration record for this trial.
Final Summary:
Criterion P is not met because no pre-registered protocol record is provided or identifiable.