Randomized, controlled study evaluating multi-disciplinary team-based learning (MDTBL) as optimal teaching paradigm for residents, comparing with problem-based learning (PBL) and lecture-based learning (LBL)

Runyi Tao, Shan Gao, Jinteng Feng, Yizhao Sun, Yixing Li, Zhiyu Wang, Bohao Liu, Xingzhuo Zhu, Hongyi Wang, Xi Jia, Guangjian Zhang, Rui Gao

Published:
ERCT Check Date:
DOI: 10.1186/s12909-026-08569-1
  • science
  • higher education
  • adult education
  • China

Abstract

Background: This study compares the effects of multi-disciplinary team-based learning (MDTBL), problem-based learning (PBL), and lecture-based learning (LBL) on the learning outcomes and experiences of medical students.

Methods: A randomized controlled study recruited 30 medical students with at least one year of clinical experience and 45 with less than one year of clinical experience to take a course on the clinical diagnosis and evaluation of pulmonary nodules from September 15, 2022, to December 31, 2023. Participants were randomly assigned to the MDTBL, PBL, and LBL groups to complete a full-course curriculum. Before, during, and after the learning phases, all participants underwent identical theoretical assessments and received scores tracking their learning progress. Group discussions were permitted in the MDTBL and PBL groups. After completion of the learning phase, self-evaluation and course satisfaction were assessed through a questionnaire.

Results: The authors collected basic information on participants to ensure comparability between groups. All groups showed significant improvement in their competence in the clinical diagnosis and evaluation of pulmonary nodules, with the MDTBL group demonstrating notably higher gains in theoretical knowledge and case analysis skills (P < 0.05). The learning participation scale indicated that student engagement in the MDTBL group was higher than in the other two groups (P < 0.05). Additionally, the MDTBL group perceived the course as more engaging and enjoyable.

Conclusions: This study demonstrates that the MDTBL teaching model, as an innovative approach, excels in enhancing knowledge acquisition, collaborative skills, and clinical practice application skills in medical students. It positions itself as a valuable teaching model for future medical education, providing educators with a new toolkit for training specialists.

ERCT Criteria Breakdown

  • Level 1 Criteria

    • C

      Class-level RCT

      • Randomization was at the individual participant level (not class- or school-level), and no one-to-one tutoring exception applies.
      • "Using computer-generated randomization, course participants were divided into the MDTBL group, PBL group, and LBL group in a 1:1:1 ratio, with each group consisting of ten students with at least one year of clinical experience and fifteen students with less than a year of clinical experience." (p. 3)
      • Relevant Quotes:
        1) "Using computer-generated randomization, course participants were divided into the MDTBL group, PBL group, and LBL group in a 1:1:1 ratio, with each group consisting of ten students with at least one year of clinical experience and fifteen students with less than a year of clinical experience." (p. 3)
        2) "A total of forty-five medical students from the First Affiliated Hospital of Xi'an Jiaotong University with less than a year of clinical experience and 30 medical students with at least a year of clinical experience specializing in thoracic surgery, respiratory medicine, radiology, oncology, and pathology were voluntarily recruited as course participants." (p. 3)
      • Detailed Analysis: Criterion C requires an RCT with randomization at the class level (or stronger, e.g., school/site level) to reduce contamination, unless the intervention is explicitly one-to-one tutoring/personal teaching. The paper states that "course participants" (individual learners) were randomized into MDTBL, PBL, and LBL groups. Nothing in the quoted methods frames the intervention as one-to-one tutoring; instead, it is a group-based instructional format. The study is therefore an individual-level randomization within a single institution, not a class-level (or stronger) RCT as required by ERCT. Criterion C is not met because randomization is at the individual participant level and the tutoring exception does not apply.
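The individual-level, experience-stratified 1:1:1 allocation the paper describes can be illustrated with a short sketch. This is not the authors' code: the `allocate` helper, the participant IDs, and the seed are hypothetical, but the procedure reproduces the described split of ten experienced and fifteen less-experienced students per group.

```python
import random

def allocate(participants, groups=("MDTBL", "PBL", "LBL"), seed=None):
    """Illustrative 1:1:1 allocation stratified by clinical experience.

    `participants` is a list of (participant_id, has_one_year_experience)
    tuples. Within each experience stratum the IDs are shuffled and dealt
    round-robin, so every group receives the same number from each stratum.
    """
    rng = random.Random(seed)
    assignment = {}
    for stratum in (True, False):  # >=1 year experience, then <1 year
        ids = [pid for pid, exp in participants if exp == stratum]
        rng.shuffle(ids)
        for i, pid in enumerate(ids):
            assignment[pid] = groups[i % len(groups)]
    return assignment

# 30 experienced + 45 less-experienced participants, as in the study
cohort = [(f"S{i:02d}", i < 30) for i in range(75)]
groups = allocate(cohort, seed=42)
sizes = {g: sum(1 for v in groups.values() if v == g)
         for g in ("MDTBL", "PBL", "LBL")}
print(sizes)  # each group: 10 experienced + 15 less-experienced = 25
```

Note that stratified assignment balances experience across arms but, as the analysis above observes, does nothing to address contamination between individually randomized learners at one site.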
    • E

      Exam-based Assessment

      • The outcome exams were developed by the course director and instructors for this course rather than being a widely recognized standardized exam.
      • "All questions were developed by the course director together with clinical instructors based on Bloom's Taxonomy and the course learning objectives and current diagnostic and treatment guidelines for pulmonary nodules." (p. 4)
      • Relevant Quotes:
        1) "The pre-test, midterm, and final exam were the same across all groups, comprising theoretical questions and clinical case analysis." (p. 3)
        2) "All assessments were designed to evaluate the extent of learning outcomes that participants received from the course." (p. 4)
        3) "All questions were developed by the course director together with clinical instructors based on Bloom's Taxonomy and the course learning objectives and current diagnostic and treatment guidelines for pulmonary nodules." (p. 4)
      • Detailed Analysis: Criterion E requires standardized exam-based assessments that are widely recognized (e.g., national/state standardized tests or established external exams), rather than assessments created by the study team for the local course. The paper explicitly describes the assessments as designed to evaluate learning from "the course" and states that "All questions were developed by the course director together with clinical instructors". This indicates a course-specific, instructor-made test, even though it was administered identically across groups. No quote indicates use of an external standardized exam for these outcomes. Criterion E is not met because the main outcomes rely on course-team-developed exams rather than standardized exams.
    • T

      Term Duration

      • The paper reports the course running from September 15, 2022, through December 31, 2023, which exceeds a term from start to the end-of-course outcome assessments.
      • "Participants took part in a training course on the clinical diagnosis and evaluation of pulmonary nodules from September 15, 2022, through December 31, 2023." (p. 3)
      • Relevant Quotes:
        1) "Participants took part in a training course on the clinical diagnosis and evaluation of pulmonary nodules from September 15, 2022, through December 31, 2023." (p. 3)
        2) "After all six sessions, students, again in groups, immediately complete the final exam, self-assessment questionnaire, and feedback questionnaire at a scheduled time." (p. 4)
      • Detailed Analysis: Criterion T requires that outcomes be measured at least one full academic term after the intervention begins (roughly 3 to 4 months), based on clearly documented timing. The paper provides explicit calendar dates for the course period (September 15, 2022 through December 31, 2023) and describes the final exam as administered after completion of the six sessions (i.e., at the end of the course sequence for the group). While the paper does not specify the exact spacing of sessions, the reported overall course timeframe clearly exceeds one term. Under ERCT, this is sufficient to treat the start-to-final-assessment interval as at least term-length. Criterion T is met because the reported course timeframe exceeds one academic term from start to end-of-course assessment.
    • D

      Documented Control Group

      • The comparator groups (including the lecture-based condition) and baseline characteristics are clearly documented (e.g., Table 1 and the LBL schedule description).
      • "Table 1 compares the basic characteristics of students in each group." (p. 5)
      • Relevant Quotes:
        1) "Using computer-generated randomization, course participants were divided into the MDTBL group, PBL group, and LBL group in a 1:1:1 ratio..." (p. 3)
        2) "Table 1 compares the basic characteristics of students in each group." (p. 5)
        3) "There were no statistically significant differences among the three groups in terms of gender, age, or clinical experience (P>0.05)." (p. 5)
        4) "In each session, the instructor provides a thorough lecture on theoretical knowledge of pulmonary nodules, using two case examples." (p. 4)
      • Detailed Analysis: Criterion D requires that the control/comparator group(s) be well-documented, including group composition and what the comparator received. The paper describes three randomized instructional conditions and documents baseline comparability in Table 1, stating there were no statistically significant differences across groups on key characteristics (gender, age, and clinical experience). It also provides a concrete description of the lecture-based learning (LBL) schedule, supporting that the comparator condition is described in a way that allows interpretation. Criterion D is met because the comparator groups and their baseline characteristics and activities are documented.
  • Level 2 Criteria

    • S

      School-level RCT

      • The study was conducted within one institution and randomized individual participants, not schools/sites.
      • "Using computer-generated randomization, course participants were divided into the MDTBL group, PBL group, and LBL group..." (p. 3)
      • Relevant Quotes:
        1) "A total of forty-five medical students from the First Affiliated Hospital of Xi'an Jiaotong University..." (p. 3)
        2) "Using computer-generated randomization, course participants were divided into the MDTBL group, PBL group, and LBL group in a 1:1:1 ratio..." (p. 3)
      • Detailed Analysis: Criterion S requires randomization at the school/site level (the implementing institution/unit), not at the individual participant level. The study is conducted at a single institution and randomizes "course participants" into different teaching methods, and there is no evidence of multiple sites being randomized. Criterion S is not met because assignment is not at the school/site level.
    • I

      Independent Conduct

      • The paper does not document independent third-party conduct of the intervention evaluation; the author team and course staff appear to have delivered and assessed the course.
      • "All questions were developed by the course director together with clinical instructors..." (p. 4)
      • Relevant Quotes:
        1) "In the MDTBL group, the teaching team consisted of one instructor and three assistants (one of whom acts as a simulated patient) to supervise the sessions." (p. 3)
        2) "All questions were developed by the course director together with clinical instructors based on Bloom's Taxonomy and the course learning objectives and current diagnostic and treatment guidelines for pulmonary nodules." (p. 4)
        3) "Authors’ contributions Conceptualization, validation, visualization and writing—original draft: Runyi Tao and Shan Gao; methodology: Jinteng Feng, Hongyi Wang and Xi Jia; Data curation: Yizhao Sun and Yixing Li; software and formal analysis: Bohao Liu and Zhiyu Wang; investigation: Xingzhuo Zhu; writing—review and editing: Guangjian Zhang. Funding acquisition, project administration, resources and supervision: Rui Gao." (p. 9)
      • Detailed Analysis: Criterion I requires the evaluation to be conducted independently from the intervention designers/providers to reduce bias (e.g., an external evaluation team conducting data collection and/or analysis). The paper describes course teaching teams supervising the sessions and states that the course director/instructors developed the exam questions. The author contribution statement also indicates that the author team performed investigation and formal analysis. There is no statement indicating independent third-party conduct for implementation, scoring, or analysis. Criterion I is not met because independent conduct is not documented and the course/assessment work appears internal.
    • Y

      Year Duration

      • The reported course period from September 15, 2022, through December 31, 2023, exceeds 75% of an academic year from start to end-of-course measurement.
      • "Participants took part in a training course on the clinical diagnosis and evaluation of pulmonary nodules from September 15, 2022, through December 31, 2023." (p. 3)
      • Relevant Quotes:
        1) "Participants took part in a training course on the clinical diagnosis and evaluation of pulmonary nodules from September 15, 2022, through December 31, 2023." (p. 3)
        2) "After all six sessions, each participant independently completes the final test, self-evaluation competency questionnaire, and feedback questionnaire at a scheduled time." (p. 4)
      • Detailed Analysis: Criterion Y requires outcomes to be measured at least 75% of an academic year after the intervention begins, with clearly stated timing. The paper reports a course period spanning September 15, 2022 through December 31, 2023 (over 15 months) and describes the final outcome assessment as occurring after completion of the course sessions. This reported timeframe exceeds the ERCT year-duration threshold. Criterion Y is met because the reported start-to-end course period is longer than an academic-year-scale interval.
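The interval behind this judgment is simple date arithmetic. A minimal check of the reported course period, assuming for illustration a ~9-month (~274-day) academic year so that the 75% threshold comes to about 205 days:

```python
from datetime import date

start = date(2022, 9, 15)  # course start reported in the paper
end = date(2023, 12, 31)   # end of the reported course period

elapsed_days = (end - start).days
# Assumed academic-year length (~274 days); 75% of it is ~205 days.
threshold_days = 274 * 0.75

print(elapsed_days)                    # 472
print(elapsed_days >= threshold_days)  # True
```

At 472 days, the reported period clears the threshold by a wide margin, so the conclusion does not depend on the exact academic-year length assumed.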
    • B

      Balanced Control Group

      • The groups received the same preparatory materials, cases, and session count; staffing/simulation differences appear integral to the teaching models being tested rather than separable add-ons.
      • "The preparatory materials provided to each group included the same diagnostic and treatment guidelines and ten course-related academic articles." (p. 3)
      • Relevant Quotes:
        1) "The course was structured into three courses designed to suit the MDTBL, PBL, and LBL with six sessions tailored for each group." (p. 3)
        2) "The preparatory materials provided to each group included the same diagnostic and treatment guidelines and ten course-related academic articles." (p. 3)
        3) "Each group studied identical cases involving twelve patients." (p. 3)
        4) "In the MDTBL group, the teaching team consisted of one instructor and three assistants (one of whom acts as a simulated patient) to supervise the sessions." (p. 3)
        5) "In the PBL group, one instructor and two assistants formed the teaching supervision team, while in the LBL, one instructor and two assistants provided supervision." (p. 3)
      • Detailed Analysis: Criterion B compares the nature, quantity, and quality of resources (time, staffing, materials) provided to intervention and control conditions. It is met when resources are balanced, or when the extra resources are integral to the treatment being tested (i.e., the resource differences are part of the intervention package/contrast rather than an accidental confound). On time/resource balancing, all groups had "six sessions" and received identical preparatory materials and identical cases, supporting comparable instructional time and core materials. Extra resources are present in staffing/simulation: MDTBL had "one instructor and three assistants" including "a simulated patient", while PBL and LBL each had one instructor and two assistants. This is a meaningful resource difference. However, the paper defines MDTBL as including role-based multidisciplinary interaction and simulated patient engagement as part of the instructional model, which makes the additional simulation/staffing inputs integral to what MDTBL is (the treatment package), not a separable optional enhancement layered on top of an otherwise identical curriculum. Criterion B is met because core time/material inputs are held constant across groups, and the additional MDTBL staffing/simulation appears integral to the intervention contrast being tested.
  • Level 3 Criteria

    • R

      Reproduced

      • No independent, peer-reviewed replication of this specific MDTBL vs PBL vs LBL pulmonary-nodule course RCT was identified.
      • Relevant Quotes:
        1) (No statements about independent replication were found in the paper.)
      • Detailed Analysis: Criterion R requires that the study be independently replicated by a different research team in a different context and published in a peer-reviewed outlet. The paper itself does not claim replication. Internet searching by DOI, full title, and key terms (MDTBL/PBL/LBL; pulmonary nodules; Xi'an Jiaotong University) did not identify a peer-reviewed, independent replication of this specific intervention/course and evaluation design. Criterion R is not met because independent replication evidence was not found.
    • A

      All-subject Exams

      • Criterion A is not met because Criterion E is not met and because outcomes are limited to a course-specific pulmonary-nodule test rather than all core subjects.
      • "All questions were developed by the course director together with clinical instructors..." (p. 4)
      • Relevant Quotes:
        1) "The pre-test, midterm, and final exam were the same across all groups, comprising theoretical questions and clinical case analysis." (p. 3)
        2) "All questions were developed by the course director together with clinical instructors based on Bloom's Taxonomy and the course learning objectives and current diagnostic and treatment guidelines for pulmonary nodules." (p. 4)
      • Detailed Analysis: Criterion A requires standardized exam-based assessment across all main subjects and (per ERCT rules) depends on Criterion E being met. Here, the main assessments are course-specific and were developed by the course director/instructors, so Criterion E is not met and Criterion A cannot be met. Additionally, the outcomes are limited to the topic of pulmonary nodule diagnosis/evaluation rather than broad coverage across core subjects for the educational program. Criterion A is not met because the study does not use standardized all-subject exams and does not assess across all main subjects.
    • G

      Graduation Tracking

      • The study reports only within-course testing (pre/mid/final) and does not track participants until graduation; no follow-up paper reporting graduation outcomes was identified.
      • "We compared pre-test, midterm, and final exam scores among the MDTBL, PBL, and LBL groups." (p. 6)
      • Relevant Quotes:
        1) "We compared pre-test, midterm, and final exam scores among the MDTBL, PBL, and LBL groups." (p. 6)
        2) "After all six sessions, students, again in groups, immediately complete the final exam, self-assessment questionnaire, and feedback questionnaire at a scheduled time." (p. 4)
      • Detailed Analysis: Criterion G requires follow-up tracking of participants until graduation from the relevant educational stage/program. The paper describes assessment confined to the course itself (pre-test, midterm, and final exam) and immediate end-of-course measurement after the sessions. There is no description of collecting graduation outcomes or tracking participants to program completion, and an internet search for subsequent follow-up publications by the same author team reporting graduation outcomes for this cohort did not identify any relevant graduation-tracking paper. Criterion G is not met because graduation tracking is not reported in the paper and no follow-up publication with graduation outcomes was found.
    • P

      Pre-Registered

      • The study explicitly states it was retrospectively registered in 2023, after the course start date (September 15, 2022), and no public pre-registration record was found.
      • "Trial registration This study was retrospectively registered in 2023 as No.JG2023-0203." (p. 2)
      • Relevant Quotes:
        1) "Participants took part in a training course on the clinical diagnosis and evaluation of pulmonary nodules from September 15, 2022, through December 31, 2023." (p. 3)
        2) "Trial registration This study was retrospectively registered in 2023 as No.JG2023-0203." (p. 2)
      • Detailed Analysis: Criterion P requires that the full study protocol be pre-registered before the study begins (i.e., before data collection/participant enrollment for the trial). The paper reports the course started on September 15, 2022, and it explicitly states the trial was "retrospectively registered in 2023", which indicates registration occurred after the study began. Additional internet searching for a publicly accessible registry entry corresponding to "No.JG2023-0203" did not locate a registry record with a registration date that could qualify as prospective pre-registration. Criterion P is not met because registration is explicitly retrospective and prospective pre-registration evidence was not found.
