Cloud-based learning and cognitive development among lower-secondary students (Grades 8–9): evidence from schools in Kazakhstan

Adil Amirbek, Yerlan Torebek, Marzhan Abdualiyeva, Shadyar Altynbekov, Abay Tursynbayev, Rakhymzhan Abdraimov, and Gauhar Omashova

DOI: 10.3389/feduc.2026.1749929
  • science
  • K12
  • Asia
  • blended learning
  • EdTech platform
  • formative assessment

Abstract

Despite growing attention to the digitalization of education and the development of AI-supported learning in Kazakhstan, as well as broader international agendas, the empirical evidence on whether cloud-based learning (CBL) strengthens adolescents’ cognitive development, and on the conditions and mechanisms through which it does so, remains fragmented. This study aimed to assess the prospects for implementing CBL in Kazakhstani schools and to empirically determine how, under which conditions, and to what extent CBL influences the cognitive development of adolescents aged 14–15. In a cluster randomized controlled trial with an explanatory mixed-methods design, 66 intact classes (N = 1,650; experimental group: 33 classes; control group: 33 classes) from 18 public schools (9 urban, 9 rural) were assigned to a 12-week Informatics intervention supported by CBL or to traditional Informatics instruction. Assessments were administered at baseline, immediately post-intervention, and 12 weeks later to examine effect durability. Cognitive outcomes were measured using performance-based tasks and validated questionnaires capturing perceived teacher competence (PTC), self-regulated learning (SRL), and cognitive engagement (CE).


ERCT Criteria Breakdown

  • Level 1 Criteria

    • C

      Class-level RCT

      • Randomization was performed at the intact class level (not within classes), satisfying the class-level RCT requirement.
      • "Classes were allocated to the experimental (CBL) and control conditions in a 1:1 ratio, using stratified randomization at the class level to balance school context and baseline academic indicators (Tables 1, 2)."
      • Relevant Quotes:
        1) "In a cluster randomized controlled trial with an explanatory mixed-methods design, 66 intact classes (N = 1,650; experimental group: 33 classes; control group: 33 classes) from 18 public schools (9 urban, 9 rural) were assigned to a 12-week Informatics intervention supported by CBL or to traditional Informatics instruction." (p. 1)
        2) "Using stratified cluster sampling, we selected 66 intact classes from these schools to ensure proportional representation by school context (urban/rural), geographic area, and school-level socioeconomic indicators." (p. 7)
        3) "Classes were allocated to the experimental (CBL) and control conditions in a 1:1 ratio, using stratified randomization at the class level to balance school context and baseline academic indicators (Tables 1, 2)." (p. 7)
      • Detailed Analysis: Criterion C requires that randomization occurs at the class level (or stronger) so that students within the same class are not split across conditions, which would risk contamination. The paper explicitly describes a cluster randomized design in which "66 intact classes" were assigned, and it states that allocation used "stratified randomization at the class level." This directly matches the ERCT requirement for class-level randomization (or stronger), and the language is unambiguous about intact classes being the unit of assignment. Final sentence: Criterion C is met because intact classes were randomized to experimental versus control conditions.
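The allocation procedure described for criterion C can be sketched in a few lines. This is a hypothetical illustration of 1:1 stratified randomization of intact classes, not the authors' actual code; the stratum labels, class IDs, and seed are assumptions for the example.

```python
# Hypothetical sketch: intact classes randomized 1:1 to CBL vs. control
# within strata (e.g., urban/rural school context). Illustrative data only.
import random

def stratified_class_randomization(classes, strata_key, seed=2024):
    """Allocate intact classes 1:1 to CBL vs. control within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for cls in classes:
        strata.setdefault(cls[strata_key], []).append(cls["id"])
    allocation = {}
    for stratum_ids in strata.values():
        rng.shuffle(stratum_ids)           # random order within the stratum
        half = len(stratum_ids) // 2
        for cid in stratum_ids[:half]:
            allocation[cid] = "CBL"
        for cid in stratum_ids[half:]:
            allocation[cid] = "control"
    return allocation

# Eight illustrative classes, four urban and four rural
classes = [{"id": f"class_{i}", "context": "urban" if i < 4 else "rural"}
           for i in range(8)]
alloc = stratified_class_randomization(classes, "context")
print(sum(v == "CBL" for v in alloc.values()))  # 4: half of each stratum
```

Because the unit shuffled is the class, students in one class can never end up split across conditions, which is the contamination safeguard criterion C is checking for.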
    • E

      Exam-based Assessment

      • The study used performance-based tasks and questionnaires rather than widely recognized standardized exam-based assessments.
      • "Cognitive outcomes were measured using performance-based tasks and validated questionnaires capturing perceived teacher competence (PTC), self-regulated learning (SRL), and cognitive engagement (CE)."
      • Relevant Quotes:
        1) "Cognitive outcomes were measured using performance-based tasks and validated questionnaires capturing perceived teacher competence (PTC), self-regulated learning (SRL), and cognitive engagement (CE)." (p. 1)
        2) "Growth in logical-analytical reasoning (CS) Non-standard matrix reasoning tasks Accuracy, response time Test–retest reliability; measurement invariance; group comparisons" (p. 12)
        3) "An original Likert-type questionnaire (Table 5) measuring perceived changes in higher-order intellectual skills, alongside established instruments for SRL and academic motivation;" (p. 14)
      • Detailed Analysis: Criterion E requires outcomes to be measured using standardized, widely recognized exam-based assessments (e.g., national/statewide standardized exams), not primarily via researcher-assembled tasks or questionnaires. The paper instead describes "performance-based tasks" and questionnaires, and it explicitly includes "Non-standard matrix reasoning tasks" and an "original Likert-type questionnaire." While some instruments may be validated, they are not described as standardized external exams used system-wide for achievement measurement. Final sentence: Criterion E is not met because the study does not document use of a widely recognized standardized exam-based assessment.
    • T

      Term Duration

      • The intervention ran for one academic quarter (12 weeks) with post-testing, meeting the minimum term-duration requirement.
      • "The intervention was delivered over one academic quarter (12 weeks) to experimental classes, with cloud-based tools systematically integrated into the standard Informatics curriculum aligned with the State Compulsory Education Standard, GOSO (Table 15)."
      • Relevant Quotes:
        1) "The intervention was implemented over twelve consecutive weeks, with two Informatics lessons per week, ensuring sustained exposure to CBL components while aligning with the national curriculum." (p. 12)
        2) "The intervention was delivered over one academic quarter (12 weeks) to experimental classes, with cloud-based tools systematically integrated into the standard Informatics curriculum aligned with the State Compulsory Education Standard, GOSO (Table 15)." (p. 14)
        3) "Two primary assessment waves were conducted: (1) an immediate post-test in November 2024 and (2) a delayed follow-up in March 2025 to evaluate skill retention and transfer to novel tasks." (p. 14)
      • Detailed Analysis: Criterion T requires that outcomes are measured at least one full academic term after the intervention begins. The paper explicitly describes the intervention as "one academic quarter (12 weeks)," which is a term-length unit, and it reports an immediate post-test after the quarter. While there is also a later follow-up, the key requirement is that the time from intervention start to outcome measurement reaches at least one term; the 12-week quarter satisfies that minimum. Final sentence: Criterion T is met because the study ran for one academic quarter (12 weeks) and measured outcomes at post-test.
    • D

      Documented Control Group

      • The control group is clearly defined (size and condition) and baseline equivalence is documented.
      • "Baseline equivalence between conditions was examined for key demographic characteristics and prior academic indicators and was confirmed prior to the intervention (Table 3)."
      • Relevant Quotes:
        1) "In a cluster randomized controlled trial with an explanatory mixed-methods design, 66 intact classes (N = 1,650; experimental group: 33 classes; control group: 33 classes) from 18 public schools (9 urban, 9 rural) were assigned to a 12-week Informatics intervention supported by CBL or to traditional Informatics instruction." (p. 1)
        2) "Control group (CG), 33 classes 825 422 403 14–15 Informatics (standard curriculum)" (p. 9)
        3) "Baseline equivalence between conditions was examined for key demographic characteristics and prior academic indicators and was confirmed prior to the intervention (Table 3)." (p. 7)
      • Detailed Analysis: Criterion D requires a well-documented control group, including what instruction they received and evidence that groups were comparable at baseline. The paper provides a specific description of the control condition in Table 2 as "Informatics (standard curriculum)" and gives the control group's size and characteristics. It also explicitly states that baseline equivalence was examined and confirmed. Final sentence: Criterion D is met because the control group is clearly described and baseline equivalence is reported.
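A baseline-equivalence check of the kind referenced in Table 3 typically reduces to a standardized mean difference per covariate. The sketch below is illustrative, using synthetic scores rather than the paper's actual data; the `cohens_d` helper and the |d| < 0.1 rule of thumb are assumptions for the example, not details from the study.

```python
# Illustrative baseline-equivalence check: a standardized mean difference
# (Cohen's d with pooled SD) on a synthetic baseline score per condition.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Synthetic baseline scores (not from the paper)
cbl_baseline = [72, 68, 75, 70, 69, 74, 71, 73]
control_baseline = [71, 69, 74, 70, 68, 73, 72, 72]

d = cohens_d(cbl_baseline, control_baseline)
print(round(d, 3))  # a small |d| suggests negligible baseline imbalance
```

Reporting such a statistic (or an equivalent test) for each demographic and prior-achievement covariate is what makes the "baseline equivalence was confirmed" claim verifiable.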
  • Level 2 Criteria

    • S

      School-level RCT

      • Randomization was implemented at the classroom level within schools rather than by randomizing schools.
      • "Randomization was implemented at the classroom level within schools, whereas school selection relied on convenience sampling."
      • Relevant Quotes:
        1) "Classes were allocated to the experimental (CBL) and control conditions in a 1:1 ratio, using stratified randomization at the class level to balance school context and baseline academic indicators (Tables 1, 2)." (p. 7)
        2) "Randomization was implemented at the classroom level within schools, whereas school selection relied on convenience sampling." (p. 21)
      • Detailed Analysis: Criterion S requires schools (the implementing units) to be randomized to treatment versus control. The paper explicitly states that randomization was "at the classroom level within schools" and that schools themselves were convenience-selected. Therefore, the study does not meet the school-level cluster RCT requirement. Final sentence: Criterion S is not met because classes, not schools, were randomized.
    • I

      Independent Conduct

      • The paper does not document an independent external evaluation team separate from the intervention implementers/researchers.
      • "Teachers delivering the lessons did not administer the tests, reducing the risk of assessment bias."
      • Relevant Quotes:
        1) "Teachers delivering the lessons did not administer the tests, reducing the risk of assessment bias." (p. 8)
        2) "Four weeks prior to intervention delivery, a preparatory phase was conducted to ensure technical readiness across schools, configure digital infrastructure, and train teachers in the experimental group on cloud-based platforms and pedagogical protocols (Supplementary Table A16)." (p. 12)
        3) "All statistical analyses were conducted in SPSS." (p. 12)
      • Detailed Analysis: Criterion I requires evidence that the evaluation was conducted independently from the intervention designers/providers (e.g., an external evaluation team leading data collection and/or analysis). The paper includes a helpful procedural safeguard: teachers who delivered lessons did not administer tests. However, the paper does not identify an external evaluator, independent institution, or third-party team responsible for the evaluation. The methods also describe that the study team trained teachers for the experimental condition and conducted the statistical analyses. Final sentence: Criterion I is not met because independence from the intervention implementers is not clearly documented beyond separating teaching from test administration.
    • Y

      Year Duration

      • The study measures outcomes through a 12-week intervention with a short follow-up, which is far less than 75% of an academic year.
      • "Assessments were administered at baseline, immediately post-intervention, and 12 weeks later to examine effect durability."
      • Relevant Quotes:
        1) "In a cluster randomized controlled trial with an explanatory mixed-methods design, 66 intact classes (N = 1,650; experimental group: 33 classes; control group: 33 classes) from 18 public schools (9 urban, 9 rural) were assigned to a 12-week Informatics intervention supported by CBL or to traditional Informatics instruction. Assessments were administered at baseline, immediately post-intervention, and 12 weeks later to examine effect durability." (p. 1)
        2) "The intervention was delivered over one academic quarter (12 weeks) to experimental classes, with cloud-based tools systematically integrated into the standard Informatics curriculum aligned with the State Compulsory Education Standard, GOSO (Table 15)." (p. 14)
      • Detailed Analysis: Criterion Y requires that outcomes are measured at least 75% of an academic year after the intervention begins. The paper describes a 12-week quarter-long intervention and indicates outcome assessment at post-intervention plus an additional 12-week interval after the intervention (per the abstract). Even taking the abstract at face value, this implies substantially less than a typical ~9–10 month academic year (and thus less than the 75% threshold). Final sentence: Criterion Y is not met because the study does not track outcomes across most of an academic year.
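The arithmetic behind the Y verdict is simple to make explicit. Assuming a ~36-week academic year (an assumption for illustration, not a figure from the paper), even baseline-to-delayed-follow-up falls short of the 75% threshold:

```python
# Back-of-the-envelope check for criterion Y. The 36-week academic year
# is an assumed typical value, not reported in the paper.
academic_year_weeks = 36
threshold_weeks = 0.75 * academic_year_weeks          # 27 weeks required
intervention_plus_followup = 12 + 12                  # quarter + 12-week follow-up
print(intervention_plus_followup >= threshold_weeks)  # False: 24 < 27 weeks
```

Under any plausible academic-year length in that range, the ~24-week window from baseline to delayed follow-up stays below the 75% mark.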
    • B

      Balanced Control Group

      • The resource differences (CBL platforms and teacher onboarding) are integral to the treatment being tested, and the intervention is delivered within the regular Informatics course structure.
      • "First, intervention teachers received CBL-specific onboarding and instructional materials, whereas control teachers did not receive access to the CBL platforms or training during the study period."
      • Relevant Quotes:
        1) "The intervention was implemented over twelve consecutive weeks, with two Informatics lessons per week, ensuring sustained exposure to CBL components while aligning with the national curriculum." (p. 12)
        2) "First, intervention teachers received CBL-specific onboarding and instructional materials, whereas control teachers did not receive access to the CBL platforms or training during the study period." (p. 7)
        3) "Second, cloud workspaces and LMS course areas were created specifically for the experimental classes and were restricted through class-specific accounts and access permissions such as whitelisted class rosters and password-protected environments." (p. 7)
        4) "Control classes continued with conventional instruction." (p. 14)
      • Detailed Analysis: Criterion B evaluates whether the intervention provides additional time/budget/resources relative to control, and if so, whether that difference is either (a) balanced in the control condition or (b) explicitly the treatment variable (i.e., integral to what is being tested). Here, the intervention clearly adds resources: CBL platforms, restricted cloud workspaces, and teacher onboarding/training. The paper also positions the intervention as a CBL-supported version of Informatics instruction versus conventional instruction. The added resources are therefore not an accidental confound; they are the defining components of the CBL treatment package. The paper also indicates the intervention occurs within the regular Informatics course structure (two lessons per week over 12 weeks), which supports that the comparison is not primarily "more class time" versus "less class time." Final sentence: Criterion B is met because the additional resources (platform access and onboarding) are integral to the CBL treatment being evaluated within the regular course structure.
  • Level 3 Criteria

    • R

      Reproduced

      • No independent peer-reviewed replication of this specific study was found, and the paper does not report a completed external replication.
      • Relevant Quotes:
        1) "TYPE Original Research" (p. 1)
      • Detailed Analysis: Criterion R requires evidence that this study (or its central experimental claim using materially the same intervention and evaluation design) has been independently replicated by a separate research team in another peer-reviewed publication. The paper presents itself as "Original Research" and does not cite any completed independent replication of this specific trial. An internet search conducted as part of this ERCT check (dated 2026-04-14) did not identify a peer-reviewed independent replication of this specific study. Final sentence: Criterion R is not met because no independent replication evidence for this specific study was located.
    • A

      All-subject Exams

      • Because criterion E is not met, criterion A is not met; the study also focuses on Informatics/cognitive measures rather than all core subjects using standardized exams.
      • "Cognitive outcomes were measured using performance-based tasks and validated questionnaires capturing perceived teacher competence (PTC), self-regulated learning (SRL), and cognitive engagement (CE)."
      • Relevant Quotes:
        1) "Cognitive outcomes were measured using performance-based tasks and validated questionnaires capturing perceived teacher competence (PTC), self-regulated learning (SRL), and cognitive engagement (CE)." (p. 1)
        2) "The intervention was delivered over one academic quarter (12 weeks) to experimental classes, with cloud-based tools systematically integrated into the standard Informatics curriculum aligned with the State Compulsory Education Standard, GOSO (Table 15)." (p. 14)
      • Detailed Analysis: Criterion A requires all-subject standardized exams and is a strict extension of criterion E; per the ERCT standard, if E is not met, then A is not met. Since the paper measures outcomes via performance-based tasks and questionnaires (not standardized exam-based assessments), the prerequisite for A fails. Additionally, the described evaluation is centered on Informatics-related cognitive outcomes rather than standardized achievement across all core subjects. Final sentence: Criterion A is not met because E is not met and the study does not assess all core subjects via standardized exams.
    • G

      Graduation Tracking

      • Graduation tracking is not reported, and because criterion Y is not met, criterion G is not met.
      • "Future research should prioritize longitudinal designs to examine whether the effects of CBL on cognitive development persist beyond short-term interventions, including follow-up assessments at 6–12 months, and to test whether observed gains transfer to other school subjects and task types."
      • Relevant Quotes:
        1) "Future research should prioritize longitudinal designs to examine whether the effects of CBL on cognitive development persist beyond short-term interventions, including follow-up assessments at 6–12 months, and to test whether observed gains transfer to other school subjects and task types." (p. 21)
        2) "Two primary assessment waves were conducted: (1) an immediate post-test in November 2024 and (2) a delayed follow-up in March 2025 to evaluate skill retention and transfer to novel tasks." (p. 14)
      • Detailed Analysis: Criterion G requires tracking students until graduation. The ERCT standard also specifies that if criterion Y (year duration) is not met, then criterion G is not met. The paper reports only short-term follow-up within the school year timeframe (post-test and a delayed follow-up) and explicitly frames longer follow-up (6–12 months) as future research, which is inconsistent with having already tracked the cohort through any graduation milestone. A targeted internet search for follow-up publications by the same author team tracking this cohort through graduation did not identify any such paper as of 2026-04-14. Final sentence: Criterion G is not met because graduation tracking is not reported and Y is not met.
    • P

      Pre-Registered

      • The paper does not provide a pre-registration registry/ID and date demonstrating registration before data collection.
      • "The full protocol is documented in the Supplementary materials."
      • Relevant Quotes:
        1) "The full protocol is documented in the Supplementary materials." (p. 7)
        2) "This study adhered to ethical principles for educational research stated in the Declaration of Helsinki and was approved by the Ethics Council of M. Auezov South Kazakhstan University (Protocol No. 05/2024 issued by August 30th 2024)." (p. 7)
      • Detailed Analysis: Criterion P requires an explicit pre-registration statement (e.g., OSF/registry/clinical trials) including an identifier and a registration date that can be verified to be before data collection began. The paper references an ethics approval protocol number and states that the protocol is documented in supplementary materials, but it does not provide a pre-registration platform name, URL, registry identifier, or registration date. Final sentence: Criterion P is not met because no verifiable pre-registration registry ID and timing are provided.
