The Educational Randomized Controlled Trial (ERCT) Standard is a rigorous framework for addressing key challenges in educational studies. With 12 criteria across 3 progressive levels, it targets issues such as bias, limited scope, and short-term focus, enabling researchers to produce actionable results that improve education systems worldwide.
The ERCT Standard has 3 levels, each containing 4 criteria:
Level 1
Tests interventions at the classroom level to prevent cross-group contamination
Uses standardized exams for objective and comparable results
Ensures studies last at least one academic term to measure meaningful impacts
Requires detailed control group data for proper comparisons
Level 2
Expands testing to whole schools for real-world relevance
Assesses effects across all core subjects, avoiding imbalances
Ensures studies last at least one full academic year to measure sustained impacts
Ensures equal time and resources for both groups to isolate the intervention's impact
Level 3
Tracks students until graduation to evaluate long-term impacts
Requires independent replication of the results by a separate research team
Removes bias by using third-party evaluators
Increases transparency by publishing study plans before data collection
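To make the rubric's shape concrete, here is a minimal sketch in Python of how the 12 criteria and 3 progressive levels might be encoded and scored. The criterion keys and the function name are hypothetical shorthand of my own, not official ERCT identifiers, and the sketch assumes, as "progressive" suggests, that a level only counts once every lower level is fully met:

```python
# Hypothetical encoding of the ERCT rubric: 3 progressive levels, 4 criteria each.
ERCT_LEVELS = {
    1: ["class_rct", "exam_based", "term_duration", "control_documented"],
    2: ["school_rct", "all_subjects", "year_duration", "balanced_resources"],
    3: ["graduation_tracking", "replication", "independent_conduct", "preregistration"],
}

def erct_level(satisfied: set[str]) -> int:
    """Return the highest level whose criteria, and all lower levels', are all met."""
    achieved = 0
    for level in sorted(ERCT_LEVELS):
        if all(criterion in satisfied for criterion in ERCT_LEVELS[level]):
            achieved = level
        else:
            break  # progressive levels: a gap at one level caps the overall rating
    return achieved

# Example: a study meeting all of Level 1 plus pre-registration still rates Level 1,
# because Level 2 criteria such as school-level randomization remain unmet.
study = {"class_rct", "exam_based", "term_duration", "control_documented", "preregistration"}
print(erct_level(study))  # -> 1
```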
Although the randomisation was not at the class level, the study focused on individualized, at-home interventions, which meets the exception clause for personal teaching/tutoring.
They employed standardized exams such as the TOWRE and CC2 for reading, which satisfies the standard exam-based assessment requirement.
They delivered 16 consecutive weeks of active reading training, meeting the approximate 3–4-month term threshold.
They used a within-subject no-training phase to index gains unrelated to the interventions and provided full baseline and demographic details for those participants in that control period.
They randomised individual children rather than assigning entire schools, so the stronger school-level randomisation criterion is not satisfied.
They focused exclusively on reading outcomes, without evaluating other school subjects.
They conducted two consecutive 8-week interventions (total 16 weeks), which is less than a full school year.
Although they tracked a no-training period, they did not balance any extra time or budget for the control condition, as required for B.
They stopped outcome measurement after 16 weeks of training and did not track students through the end of their school level.
Although it replicates prior work by largely the same team, no external research group has replicated the results in a separate study.
There is no indication that an external, independent team conducted or oversaw the study apart from the authors who designed the training approach.
They state their ANZCTR registration (12608000454370), indicating formal pre-registration.
Given the importance of effective treatments for children with reading impairment, paired with growing concern about the lack of scientific replication in psychological science, the authors used a randomised controlled trial (RCT) to test replicability of previously reported large effects...
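The study above was randomized at the individual level and passed only via the tutoring exception. For contrast, here is a minimal sketch of what class-level (cluster) assignment looks like: intact classrooms, not students, are the unit of randomization, so treated and control pupils never share a room. The classroom IDs and function name are illustrative only, and a real trial would typically also stratify and check baseline balance:

```python
import random

def assign_classrooms(classroom_ids, seed=42):
    """Cluster randomization: whole classrooms, not students, are assigned to conditions."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(classroom_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    # Every student inherits their classroom's condition, which is what
    # prevents cross-group contamination within a class.
    return {"treatment": ids[:half], "control": ids[half:]}

print(assign_classrooms(["3A", "3B", "4A", "4B", "5A", "5B"]))
```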
They randomly assigned entire 4th-grade classrooms (teachers) within each school rather than splitting students within a single class.
They employed the Gray Silent Reading Test, a well-known standardized reading exam, alongside other measures.
The intervention spanned approximately 6–7 months, exceeding a typical 3–4 month academic term.
They clearly report baseline equivalence, pretest comparisons, and standard language arts usage for the control group.
They did teacher-level randomization within schools, not full school-level assignment.
They only tested students in reading and did not measure other subjects’ outcomes (e.g., math, science).
The intervention ran for about 6–7 months, not the full academic year.
They did not allocate equivalent resources or tools to control classrooms; the intervention group had exclusive access to the ITSS system.
They only measured short-term outcomes after 6–7 months, without tracking participants until 4th-grade graduation or beyond.
No independent, external replication is described; references to earlier studies involve overlapping teams or pilot data.
The intervention developers themselves led and carried out the trial without an external evaluation team.
No mention is made of a pre-registration plan or publicly accessible protocol prior to data collection.
Reading comprehension is a challenge for K-12 learners, especially with nonfiction texts. This paper reports on a large-scale randomized controlled efficacy trial with rural and suburban 4th-grade students using a web-based intelligent tutoring system (ITSS) to deliver explicit instruction in...
They randomized entire schools, which is at least as strong as class-level randomization.
They used the nationally standardized ENLACE exam, a known external measure, to assess outcomes.
They followed participants from 10th grade (2009) to 12th grade (2012), which is well over a single term.
They document the control group’s demographics, baseline test scores, and confirm it received no special treatment beyond normal schooling.
Assignment to treatment versus control was by entire schools, explicitly fulfilling the school-level RCT requirement.
They only assessed math and Spanish via ENLACE; other subjects (e.g., science) were not tested.
They tracked participants from 10th grade (2009) to the end of 12th grade, surpassing one full year.
No meaningful extra resources or time were given to the treatment group, so no balancing was needed.
They followed students until the end of 12th grade (the final year), thereby tracking them through graduation.
No independent replication by another team or in a separate setting is described.
The authors are external researchers (World Bank, University of Surrey), separate from SEP who created the pilot.
There is no mention of publicly pre-registering the study or a pre-analysis plan.
We use data from an RCT in Mexico to study whether providing 10th-grade students with information on monetary and non-monetary rewards of education can affect high-school completion and standardized test scores. While the information treatment had little impact on timely...
They randomized entire grades in each school, which is larger than a single class and thus meets or exceeds the class-level requirement.
They used the Smarter Balanced standardized assessment for math and reading, which is an established exam-based measure.
They ran the messaging intervention from late October to May, which is sufficiently long (spanning multiple months) to fulfill a typical academic term duration.
They document the control group’s baseline demographics, prior achievement, and standard practices used, confirming a well-described comparison group.
Randomization was done by grade within each school, not by entire schools as required for S.
They assessed only math and reading outcomes, with no coverage of other main subjects such as science or social studies.
The intervention began around October and ended in late May, which is shorter than a full academic year (roughly 9–10 months).
The intervention group received an added technology (text-message updates). No matching resource or budget/time was provided to the control group, so the control condition was not balanced in resources.
They only followed students through the end of that academic year and did not track them until any graduation milestone.
The study’s results have been independently replicated by other researchers in different contexts, demonstrating reproducibility.
The lead researcher co-developed the intervention and previously received compensation from the LMS company, so it was not independently conducted.
The paper does not reference any formal pre-registration or provide a registry ID prior to data collection.
We partner text-messaging technology with school information systems to automate the gathering and provision of information to parents at scale. In a field experiment across 22 middle and high schools, we used this technology to send automated text-message alerts to...
Randomization was conducted among entire schools, which meets or exceeds the class-level requirement.
They used the TerraNova standardized test, which is externally developed and widely recognized.
They used the tool for the entire school year, which is longer than a standard term.
They provide demographics, baseline tests, and a clear description of the control group's business-as-usual approach.
They conducted the RCT by assigning entire schools, satisfying school-level randomization.
They only measured mathematics achievement and did not assess all main subjects (e.g., reading, science).
They implemented the intervention for a full school year and collected end-of-year data.
Treatment teachers received professional development, dedicated platform support, and coaching, while control teachers got none of the extra resources.
Subsequent follow-up research, including an independent replication in North Carolina, has demonstrated that the positive effects of ASSISTments persist through the end of middle school (8th grade), thereby fulfilling the Graduation Tracking criterion.
An independent replication conducted in North Carolina by WestEd, using a larger and more diverse sample, confirmed the significant gains in math achievement observed in the original study. This independent trial satisfies the reproduction criterion.
The research was conducted by SRI/University of Maine personnel, not by the primary developers of ASSISTments.
They do not mention any official pre-registration of the study or a registry listing prior to data collection.
In a randomized field trial with 2,850 seventh-grade mathematics students, the authors evaluated whether an educational technology intervention (ASSISTments) increased mathematics learning when used for homework. The intervention provided automated feedback and hints for students, while delivering organized performance data...
They clearly randomized at the class (tutorial session) level, satisfying the requirement for Class-level RCT.
They used custom-created quizzes and performance metrics instead of any external standardized exam.
The study took place over brief sessions in September rather than covering a full academic term of at least several months.
They provide basic demographic info, pre- and post-test scores, and clearly note that the control group used only a handout with no AR resources.
They randomized classes within one university department, not entire schools.
They evaluated only sewing/knitting performance and knowledge, with no mention of measuring every main subject area via standardized exams.
The entire intervention and data collection took place over a short timeframe, not spanning a full academic year.
The intervention group had AR technology, but the control group only had a handout and no equivalently increased resources or budget.
They gathered only short-term data from immediate post-tests, with no extended tracking through graduation.
No independent replication is reported or cited; they only describe their single study in one institution.
The same group that designed the AR app also conducted the research, with no independent evaluation team.
They did not mention registering their protocol before data collection; no public registry link or ID is cited.
This study compares a traditional handout versus an augmented reality (AR) video approach to teaching threading and knitting concepts in textile-related courses. The authors randomly split students into two groups (handout vs. AR video) and measured learning outcomes via short...
They conducted a randomized assignment of entire schools to the Extra Teacher Program, which meets or exceeds class-level RCT requirements.
They used a study-specific math and literacy test, not a widely recognized external standardized exam.
The intervention was implemented for substantially more than one academic term.
They clearly described the control group’s composition, baseline data, and normal teaching practices.
They assigned entire schools to the program or control group, satisfying the requirement for school-level randomization.
They only measured math and literacy outcomes, not all main subjects in the curriculum.
The program duration exceeded a full school year.
The treatment schools received an extra teacher (reducing class size), whereas control schools did not receive any equivalent resource.
They tested students at 19 months and then a year later, but did not track them until primary school graduation.
They reference related studies but do not report an independent replication by a separate research team.
The researchers themselves designed and implemented the study without an independent external team conducting the experiment.
No mention is made of any pre-registration or a publicly accessible analysis plan.
This paper investigates a program under which randomly selected Kenyan schools were funded to hire an additional contract teacher, outside of the normal civil-service channels, alongside a parallel governance intervention that empowered parents in the hiring and monitoring process. The...
No random assignment of entire classes to distinct interventions was conducted; the study is observational.
Only course grades from the institution’s typical grading system were used, with no mention of standardized exam-based assessment.
There was no defined intervention lasting one term; the study is purely observational with no distinct experimental period.
They do not describe a separate control group or any control condition; there is only one large observational sample.
No mention of randomly assigning entire schools; the dataset covers courses within a single university with no experimental design.
Outcomes are individual course grades, not standardized exam results in all core subjects.
The paper has no year-long intervention or treatment period; it studies naturally occurring classes over multiple years.
No separate group with matching resources was established; there was no formal intervention requiring balancing of inputs.
They do not track students to graduation or follow them beyond each individual class. No long-term outcomes were measured.
An independent team has replicated the study’s findings in a separate context since the original publication. Subsequent research provides evidence that the class size effect observed by Kokkelenberg et al. holds in other settings.
The paper is an observational study without an external intervention or separate design team. Independent conduct does not apply.
There is no mention of any official pre-registration or registry for the study’s design or hypotheses.
This paper analyzes data from over 760,000 undergraduate observations at a northeastern public university to study the relationship between class size and course grades. Using ordinal logit regressions, both with and without fixed effects, the authors show that larger class...
Randomization occurred at the individual-student level, not at the class or school level, and the personal-tutoring exception does not apply.
Only instructor-created or course-specific quizzes were used, rather than a recognized standardized exam.
The intervention lasted only 5 weeks in a summer course, which is shorter than a standard academic term.
They present baseline demographics, performance, and a clear description of control group activities (placebo survey).
They randomized individual students, not entire schools, so the design does not satisfy school-level RCT.
They exclusively assessed performance in one STEM course and did not test all main subjects.
Implementation only covered 5 weeks in summer, not a full academic year.
No significant resource difference existed; both groups received identical instructor contacts and extra credit opportunities.
They only measured outcomes within a 5-week window and did not follow students until graduation.
No mention of a separate independent replication by different researchers in another context.
The authors themselves designed and delivered the intervention, with no external group conducting the study.
The authors do not mention any formal pre-registration or registry entry made prior to data collection.
Through a randomized control trial of students in a for-credit online course at a public 4-year university, this paper tests the efficacy of a scheduling intervention aimed at improving students’ time management. While the intervention initially improved quiz scores for...
They randomized at the student level, rather than randomly assigning entire classes to treatment or control.
They employed a custom-made stereochemistry test, not a widely recognized standardized exam.
The study’s learning interventions took place over one week, not a full academic term.
They describe the control group’s baseline performance, size, and how it received no lesson, meeting the documentation requirement.
They randomized individual students in one university setting, not entire schools.
They measured only stereochemistry; no other subjects were tested, failing the ‘AllExams’ requirement.
The study spanned only seven days, far short of the year-long requirement.
The control group received no comparable time/resources, so the study was not balanced in resource allocation.
They collected final data after one week, with no mention of tracking until course or school graduation.
Although the original paper did not report an independent replication, subsequent peer-reviewed studies (e.g., Luong et al. (2021), Setren et al. (2021), and Holm et al. (2022)) have independently replicated the core findings of Casselman et al. (2019). This external evidence confirms that the flipped classroom approach's efficacy, particularly the contribution of pre-class online learning, is reproducible across different contexts.
The same team designed, delivered, and assessed the intervention; there is no third-party or external evaluation group.
No statement of pre-registration or public registry ID is provided.
In order to compare the impact of the preclass online learning environment to the in-class collaborative activities typically done in a flipped classroom, a randomized controlled trial (RCT) was conducted. A two-day organic chemistry stereochemistry unit was delivered to students...
Randomization was done by family rather than by class (or school), and the one-to-one tutoring exception does not apply.
They used parent/child self-report questionnaires instead of a standard external exam-based measure.
The active intervention period was only ~8 weeks, which is less than one full academic term.
They report demographics, baseline data, and specify the wait-list nature of the control condition.
They randomized parents/families instead of assigning entire schools, so it does not meet this stronger criterion.
They only used parent/child measures of academic behaviors and satisfaction, not all-subject standardized assessments.
The program’s active implementation was ~8 weeks, which is shorter than a full academic year.
The wait-list group received no equivalent resource/time allocation during the intervention period.
They followed children for only 6 months post-intervention, not until completion of the current school level.
They cite earlier studies but do not demonstrate an independent replication with a separate research team.
The same authors who developed the Triple P intervention led and reported this trial; it was not an independent team.
They list an ANZCTR registration (ACTRN12613000660785), indicating pre-registration before data collection.
This study evaluated the effects of Group Triple P with Chinese parents on parenting and child outcomes as well as outcomes relating to child academic learning in Mainland China. Participants were 81 Chinese parents and their children in Shanghai, who...
They randomized at the pupil level in an individually delivered intervention, which fits the personal-tutoring exception to class-level randomization.
They employed PhAB-2, a standardized, normed, and recognized battery, rather than an in-house test.
The active intervention period lasted only about 8 weeks, which falls short of a full academic term.
They present clear demographic and baseline data for the control group and describe its ‘business-as-usual’ instruction.
Random assignment was performed at the pupil level, not by school.
They only measured phonological/literacy skills, not all relevant school subjects.
They only implemented the program for ~8 weeks, far shorter than a full school year.
The intervention group received extra daily computer-based sessions; the control group did not get an equivalent resource allocation.
They only conducted a 2-month follow-up, not tracking the children through their school-level graduation.
Subsequent to this 2016 study, independent research teams have replicated its findings. A 2021 efficacy trial in 57 schools (evaluated by RAND) reported that children offered Lexia made significantly greater reading progress than controls. Likewise, other studies have found Core5 users outperforming non-users on standardized reading measures. These replications by different investigators mean the original results have been reproduced.
They do not provide a clear statement that the study was conducted by a fully independent team with no ties to the developers.
No statement about pre-registration or any registry link is provided.
Many school-based interventions are being delivered in the absence of evidence of effectiveness. This study sought to address this oversight by evaluating the effectiveness of the commonly used Lexia Reading Core5 intervention, with 4- to 6-year-old pupils in Northern...
They randomly assigned entire sections (rather than individual students within a single class) to flipped or lecture for each targeted lesson.
They used custom course assessments (quizzes, midterms, final) rather than a recognized standard exam.
They implemented and tested the flipped vs. lecture approach over an entire semester (full term).
They describe the control (lecture) approach, the sections involved, and the group’s typical experience in sufficient detail.
They only randomized sections within one single institution, not entire schools.
They only assessed econometrics-specific outcomes, not a full set of academic subjects.
They only carried out the study over one semester (about 16 weeks), not a full academic year.
The flipped group had extra out-of-class videos and required quizzes; the control group did not receive matched time/resources.
They only track students up through the final exam of the course, not until graduation.
Since publication, an independent research team has conducted a comparable flipped-classroom RCT, satisfying the external replication requirement.
The authors themselves designed and delivered the intervention; there was no separate, independent team.
They do not reference an official preregistration or any registry prior to data collection.
Despite recent interest in flipped classrooms, rigorous research evaluating their effectiveness is sparse. In this study, the authors implement a randomized controlled trial to evaluate the effect of a flipped classroom technique relative to a traditional lecture in an introductory...
No actual random assignment at the class (or school) level is carried out or reported within this paper’s own study design; it only references other RCTs conducted elsewhere.
The paper does not report using or analyzing standardized exam data as part of its own design; it only discusses potential or upcoming studies using such exams.
No original intervention of a minimum one-term duration is presented here; the paper is a policy discussion and references other trials’ durations.
No direct details on a control group’s composition or baseline characteristics are provided as part of the paper’s own research design.
The paper does not itself conduct or report a school-level RCT; it only describes external RCTs that are being or will be carried out by others.
They discuss future RCTs focused on specific subjects (reading or math), not a comprehensive ‘all-subjects’ approach.
No direct year-long intervention is described in this paper’s own research; the authors merely discuss other multi-year RCTs.
There is no description of a control group receiving equivalent time/budget to match the intervention condition; the paper is conceptual rather than an RCT report.
There is no explicit mention of tracking students until graduation; the paper focuses on near-term outcomes and theoretical approaches.
No evidence is presented of an independent research team replicating the same design or intervention detailed in the paper.
The paper does not describe who conducted the interventions or whether they were independent from the original designers.
No formal mention of a preregistered protocol or registry is provided in the paper.
The effect of a reduced pupil–teacher ratio has mainly been investigated as that of reduced class size. Hence we know little about alternative methods of reducing the pupil–teacher ratio. Deploying additional teachers in selected subjects may be a more flexible...
Randomization was performed at the classroom level, which is acceptable for 'Class-level RCT.'
They use custom tests drawn from the KA Lite module question bank rather than a widely recognized standardized exam.
The paper describes an intervention of two 6-week units (12 weeks total) rather than clearly spanning a full recognized term.
They clearly define the no-incentive control, document its baseline characteristics, and specify it as receiving usual instruction plus the same modules (but no rewards).
They implemented the randomization at the classroom, not school, level.
They only measure outcomes in mathematics; no other subjects' performance is tracked.
They only ran the intervention and measurements for about 12 weeks total, not a full academic year.
Since the study’s primary objective is to assess the impact of additional rewards on outcomes, the control group intentionally receives only the standard “business as usual” inputs. The extra resources in the treatment groups constitute the treatment itself; therefore, the control group being unenhanced is by design and does not indicate an imbalance.
The study ends measurements within the same school year; it does not follow students through graduation.
No independent replication is discussed; the trial stands as a single implementation.
The author was not the developer or owner of the KA Lite platform; the study was an external academic evaluation.
The authors pre-registered the study in the AEA RCT registry (AEARCTR-0000643) before conducting analyses.
This randomized experiment implemented with school children in India directly tests an input incentive (rewards for effort on learning modules) against an output incentive (rewards for test performance) and a control. Students in the input incentive treatment performed substantially better...
They randomized entire schools, which is stronger than class-level randomization.
They used a mix of custom questions and some from prior literature, but did not employ a widely recognized standardized exam.
They delivered four sessions (about three hours total), which is shorter than one academic term.
They give demographic and baseline information on the control group, with separate columns showing their data in Table III.
Schools were the unit of randomization, fulfilling the requirement for S.
They measured financial literacy only, with no assessment of other core subjects such as math, reading, or science.
Their intervention was only four class sessions plus a 4-week follow-up, nowhere near a full academic year.
The control group received no equivalent budget or instructional time to match the specialized sessions of the treatment group.
They measured immediate and 4-week follow-up effects only, with no tracking until graduation.
No independent replication of the exact same intervention is reported by an external team.
The authors designed and evaluated the intervention themselves; no independent research or external evaluator was involved in running the study.
The trial’s initial registration date is September 23, 2019, which is after the study started on August 15, 2018 and ended on January 31, 2019. Pre-registration requires that the protocol be publicly registered before data collection begins. Thus, the study was registered retrospectively and does not satisfy criterion P.
Using a computer-based learning environment, the present paper studied the effects of adaptive instruction and elaborated feedback on the learning outcomes of secondary school students in a financial education program. We randomly assigned schools to four conditions on a crossing...
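The retrospective-registration judgment above reduces to a simple date comparison, sketched below with the dates reported for this trial. The function name is mine, and real registries record more nuance (e.g., registration versus enrollment milestones) than this check captures:

```python
from datetime import date

def preregistered(registration: date, data_collection_start: date) -> bool:
    """Criterion P: the protocol must be publicly registered before data collection begins."""
    return registration < data_collection_start

# Registered 2019-09-23; the study ran 2018-08-15 to 2019-01-31.
print(preregistered(date(2019, 9, 23), date(2018, 8, 15)))  # -> False: retrospective
```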
Randomization was performed at the student level rather than at the class or school level, and no tutoring exception applies.
They used researcher-developed quizzes and observation checklists instead of standard external exams.
The intervention was about 11 weeks, which does not clearly encompass a full 3- to 4-month academic term.
They provide demographic details for the control group and describe what the control group received (standard FC).
They randomized individual students, not entire schools.
Only fundamentals of nursing skills and knowledge were tested; no assessment of other main subjects.
They ran the intervention for roughly 11 weeks, far shorter than a year.
The intervention group received multiple gamified features not mirrored in the control group with no extra resource equivalence.
They measured short-term outcomes only, with no follow-up until actual graduation.
Independent research teams have implemented similar gamified flipped classroom interventions and obtained comparable positive results, indicating the findings are reproducible.
The same team that created and implemented the intervention also conducted the study, with no external evaluator.
They explicitly state the prospective registration on ClinicalTrials.gov (NCT04859192) before data collection.
Background: Flipped learning effectively boosts students’ understanding through reversed arrangements of pre-learning and in-class learning events. Using a gamification method in flipped classrooms can help students stay motivated and achieve their goals. Methods: A randomized controlled trial design with pre-test...
They clearly assigned entire third-grade classes (not individual students) to receive the CAL intervention, fulfilling class-level randomization.
They used a custom-made (or partly customized) assessment instead of a widely recognized standardized exam.
The program ran for the full fall semester (about 4 months), satisfying the requirement of covering at least one academic term.
They detail the control group’s composition, baseline test scores, and demographics, and confirm it received no CAL sessions or extra input.
They randomized classes within schools, not entire schools, so they did not meet the school-level RCT requirement.
They only measured math (and checked Chinese for spillovers); no other core subjects were tested, so the ‘AllExams’ criterion is not fulfilled.
The CAL experiment lasted one semester (roughly four months), not a full academic year.
CAL students received extra resources (computers, software, a paid supervisor), whereas control students received no equivalent support.
They only tracked outcomes within one semester; no long-term or graduation follow-up was conducted.
No evidence is provided that an external research team replicated this CAL intervention in a different context.
The intervention and evaluation were conducted by the same research team; there was no independent external evaluator.
No statement or link indicates an official pre-registration prior to data collection.
Using a randomized field experiment in 43 migrant schools in Beijing, this paper examines the impact of a computer-assisted learning (CAL) program on math performance and other outcomes of third-grade students from disadvantaged migrant families. One class per school was...
They randomized at the school level, which is stronger than class-level randomization.
They used a custom-developed instrument rather than a standardized, widely recognized exam.
The intervention lasted only a few weeks and did not extend over a full academic term.
They provide thorough baseline data (demographics, test scores) for the control group.
They randomized entire schools, fulfilling the requirement for school-level RCT.
They measured financial literacy outcomes only and did not test other core subjects.
The program occurred only over a few weeks and does not span an entire academic year.
Only the treatment group received the extra lesson/game/hours. The control group got no equivalent resource/time.
They measured short-term post-test outcomes only, with no follow-up through final graduation.
They ran two waves under the same study team; there was no independent replication by an external group.
The same team developed the materials and carried out the study; there is no evidence of external independence.
They reported the registry ID (AEARCTR-0003481), indicating it was formally pre-registered.
This paper provides causal evidence on parental involvement in a financial education course. Two randomised controlled trials were conducted with 2779 students (grade 8 and 9) in Flanders, assigning schools to three treatment variants and one control group. Although the...
Randomization was done at the Head Start site (center) level, which satisfies or exceeds class-level randomization.
They measured child-level academic outcomes via teacher ratings, not a standardized, exam-based assessment of each child.
The paper reports that the intervention was conducted over an entire Head Start academic term, from fall to spring, meeting the term duration criterion.
They clearly describe the control group’s ‘business as usual’ approach, their baseline traits, and the difference in provided services.
Randomization occurred among entire Head Start sites, fulfilling the ‘school-level RCT’ requirement.
They only measure language, literacy, and math (via teacher ratings), not all main subjects with standardized exams.
They implemented the intervention over the entire Head Start year (~9 months), satisfying the one-year duration requirement.
Intervention teachers got extensive training and mental health consultation; the control group did not receive comparable extra resources or budget.
They stopped measuring outcomes at/near kindergarten entry, not tracking through elementary graduation or a similar culminating endpoint.
No independent replication in another context is described; the paper reports a single trial without separate teams reproducing it.
The study does not indicate that an external evaluation team conducted the research. Instead, the same authors who designed the intervention also conducted its evaluation, which raises concerns about independent oversight.
No statement or link about pre-registration is provided. The study was not explicitly pre-registered in a known public registry.
The role of subsequent school contexts in the long-term effects of early childhood interventions has received increasing attention, but has been understudied in the literature. Using data from the Chicago School Readiness Project (CSRP), a cluster-randomized controlled trial conducted in...
Have a study you'd like to submit for ERCT evaluation? Share your research with us for review. Found something that could be improved? Provide feedback to help us make things better. And if you're an author who needs to update or correct information about your study, let us know.