Vol.12(1), pp. 222-242, January 2025
Available online at http://www.perjournal.com
ISSN: 2148-6123
http://dx.doi.org/10.17275/per.25.12.12.1
Risky Setiawan*
Department of Educational Research and Evaluation, Universitas Negeri Yogyakarta,
Yogyakarta, Indonesia ORCID: 0000-0002-4269-996X
Umi Farisiyah
Department of Educational Research and Evaluation, Universitas Negeri Yogyakarta,
Yogyakarta, Indonesia ORCID: 0000-0003-1076-4816
Widiawanti Widiawanti
Department of Learning Technology, Universitas Negeri Yogyakarta, Yogyakarta, Indonesia
ORCID: 0009-0003-7499-3682
Article history
Received: 25.08.2024
Received in revised form: 07.10.2024
Accepted: 26.11.2024

Abstract: From the most straightforward kind of technology, audiovisual learning, to the application of artificial intelligence in education, technology has been used in education for over 20 years. Despite the growing popularity of AI-based learning media, there is still a dearth of reliable empirical evidence about its effects on student achievement. This meta-analysis investigates the impact of intervention time and combines findings from several studies to paint a more comprehensive picture of the usefulness of AI media in education. The study employs a quantitative meta-analysis design. The Publish or Perish tool was used to gather secondary data from published papers in the Scopus database and Google Scholar, and the group-contrast meta-analysis was conducted in the R software. The findings demonstrate that using AI-based learning resources greatly impacts students' academic performance. The p-values of the total effect size and of three moderator variables (continent, gained achievement, and intervention duration) are below 0.05, showing that the summary effect of AI-based learning media, which integrates technology products with software, web programs, augmented reality, and gamification, on student achievement from elementary school to tertiary level between 2019 and 2024 remains significant. Thus, artificial intelligence (AI) should be used more extensively in preparing learning media to maximize students' academic and non-academic success.

Key words: achievement, AI-based learning media, artificial intelligence, learning media, meta-analysis

* Correspondence: riskysetiawan@uny.ac.id
Introduction
Technology has been developing and evolving continuously since it was first used in education, especially as a learning tool. The implementation of technology in education began at an early stage, in the 1920s and 1930s, and has now become more sophisticated with the presence of Artificial Intelligence (henceforth AI) as a form of technology implementation in education (del Campo et al., 2012), especially in teaching methods and learning environments (Nethra R MBA, 2019; Velayutham et al., 2022).
Technology has constantly improved access to education, from historical inventions like the printing press to contemporary digital tools (Li, 2023); it has removed physical barriers through online learning (Hassan, 2023), personalized learning experiences through augmented reality (AR), virtual reality (VR), and AI (Hassan, 2023), and created unprecedented opportunities through digital assistive aids for learners with disabilities (Timmers, 2018). These advances demonstrate that technological progress is essential to and affects education, not only during the learning process but also during the planning and assessment stages.
On the other hand, technology has also made it easier for students to access research tools and learning resources, allowing teachers to present more engaging classes to their pupils (Nethra R MBA, 2019). Through platforms like Zoom and Google Meet, students can now collaborate and communicate with each other more easily (Velayutham et al., 2022); online resources and opportunities for higher education have become more accessible to people in rural areas (Kiong, 2022); teaching and learning have become more efficient and enjoyable (Raja & Nagasubramani, 2018); and students have gained more opportunities for studying, more individualized learning experiences, and more control over their education (Kiong, 2022). Using technology in the teaching and learning process therefore benefits both students and teachers.
Integrating technology into education aims to establish settings that support self-directed learning, communication, and teamwork while equipping students for success in an increasingly digital world (Abass & Abas, 2019; Kalyani, 2024). This body of research concludes that conducting classes with technological assistance, especially AI embedded in learning media, influences students' academic and non-academic performance.
At all educational levels, artificial intelligence (AI) has demonstrated beneficial effects on
students' academic performance. AI tools facilitate collaborative settings, offer instant
feedback, and improve individualized learning experiences (Kaledio et al., 2024). Research
shows that artificial intelligence (AI) can successfully address particular learning demands,
enhance attitudes toward learning, and increase motivation for study habits (Chiu et al., 2023;
Hooda et al., 2022). According to a meta-analysis, grade levels and the mathematics subjects covered were important moderators of the small but significant effect of AI on primary pupils' mathematical achievement (Hwang, 2022). The beneficial effects of AI and
computational sciences on student performance, especially in STEM fields, were confirmed
by another systematic review and meta-analysis (García-Martínez, 2023). However, privacy
issues and the dangers of relying too much on AI technologies must be addressed (Kaledio et
al., 2024). AI can potentially improve students' academic performance; however, cautious use
and more study are required.
The influence of AI on the learning process is closely tied to the learning media used by teachers and students. By leveraging input variables like attention, meditation, and cognitive
workload, AI-based models can predict individual learning styles and personalize learning
experiences (Lokare & Jadhav, 2024). By using concept mapping and self-evaluation, these
models can also help teach programming principles (Huddar & Kharade, 2023). Adaptive
learning support systems can be designed with the help of AI technologies in education,
which include supervised learning, mining techniques, and Bayesian methods (Song, 2024).
Moreover, cloud computing and database management systems can combine AI-based
learning models to effectively handle and distribute massive volumes of educational data
(Dhaya et al., 2022). AI-based learning models thus present exciting opportunities to revitalize education by giving pupils individualized, efficient, and data-driven learning experiences through AI-based learning media. When such media are embedded in the learning process, they can influence learning achievement.
This meta-analysis was carried out to gather more conclusive evidence on the impact of AI-based learning media on student achievement. The goal is to move beyond publications that merely describe how AI is used in education and instead analyze how employing AI-based learning media affects student achievement. Specifically, this meta-analysis aims to demonstrate how applying AI-based learning media affected student achievement across several earlier studies and which moderator variables come into play if the overall effect is significant.
Method
Design
This research uses a quantitative model with a meta-analysis design. The objective is to identify how large the average influence of AI-based learning models is in improving students' academic and non-academic achievement at the primary, secondary, and tertiary levels. Meta-analysis offers a way to dig deeper into this average influence by statistically evaluating previous research findings.
The procedures for conducting a meta-analysis follow Retnawati et al. (2018). When conducting a meta-analysis that employs study parameters in the form of means, researchers must consider whether each study measures its variables on the same scale, because the standard errors of the effect sizes must be comparable across studies. This meta-analysis therefore uses studies whose variables are measured on the same scale. The effect size in this meta-analysis is computed from the mean scores of the variables that are the focus of each study. The influence of moderator variables on the effect size is then analyzed before the meta-analysis results are reported.
Going deeper, this meta-analysis uses articles reporting the results of experimental studies, either true experiments or quasi-experiments. The articles included in the analysis report the sample size, mean, and standard deviation of the post-test for each experimental and control group after a treatment involving Artificial Intelligence in the learning model.
The research data in this study are articles accessed and downloaded through the Publish or Perish computer program, drawing on the Scopus and Google Scholar databases. These two sources provide references for the required themes and help
researchers retrieve and analyze relevant studies thoroughly and concisely, offering a systematic way to conduct the literature review. Publish or Perish was chosen as the search front end for Google Scholar and Scopus because the researchers have full access to these two sources. All studies relevant to the use of AI-based learning models in improving academic and non-academic achievement were downloaded and analyzed further. The search window spans roughly a decade rather than only the last few years, because the integration of AI into learning models mushroomed during the pandemic, around 2019; the topic is therefore still new and leaves a considerable research gap.
Figure 1 depicts the steps for collecting and analyzing data using the SALSA framework (Search, Appraisal, Synthesis, Analysis). The framework supports the research objectives and reduces bias in data collection and analysis. The first step is the search. After determining the data sources from which articles related to the meta-analysis theme could be collected, articles matching the theme were searched for and downloaded. The search used the keywords "The influence of AI-based learning media on achievement" and "AI on students' achievement."
The search of the Scopus database yielded 213 articles, and the search of the Google Scholar database through the Publish or Perish software (Harzing, 2007) yielded 1302 articles. Next, the RIS files containing the articles' metadata were collected and sorted by title. Screening the titles for an explicit mention of AI reduced the 1515 articles drastically to 352. Filtering by field, namely education, then left 192 articles. The final stage was selecting articles that examined the influence of AI-based learning media on education using quasi-experimental
research methods, leaving 40 documents. This stage is included in the Appraisal phase.
In detail, the third step is synthesis: synthesizing the collected articles according to the theme. This step was done manually. A total of 40 articles discussing the influence of AI-based learning media on student achievement were synthesized one by one, paying attention to several essential pieces of information. To be retained for analysis, an article had to (1) report experimental research (true or quasi-experimental); (2) involve students, at any level, as research subjects; (3) use AI-based learning media in the implementation of the treatment; (4) measure a final ability that is part of student achievement, academic or non-academic; and (5) report the number of samples involved and the mean and standard deviation of each group (experimental and control) obtained from a post-test administered after the treatment. From this manual synthesis, 31 articles from journals, book chapters, books, and seminar proceedings were finally obtained and ready for meta-analysis.
In total, 31 studies were analyzed in this research. All of them were filtered according to the meta-analysis requirements, so every article reports the same detailed information: each uses an experimental design, applies AI-based learning media as the treatment given to the experimental group, and reports the information needed during the analysis process. The 31 selected articles cover studies from the elementary school level up to the university level. All of them aim to increase student achievement by using AI assistance to compile learning materials, have an experimental duration of 3-12 weeks, and come from various Asian countries. This is plausible because AI-based learning media in Asia are still not as familiar or as widely and continuously applied in learning as in developed countries, which already live side by side with sophisticated technology. Therefore, the influence of AI-based learning media on student achievement needs to be explored further, especially since education in most developing countries is experiencing learning loss due to the impact of the pandemic.
The fourth step is analysis. The 31 articles deemed suitable were taken into the analysis process: the required information was recorded manually and the meta-analysis was then carried out with the help of the "meta" and "metafor" packages in the RStudio program (RStudio_Team, 2020). The detailed information collected from each article is the author identity, year of publication, number of samples (N), mean (X̄), and standard deviation (s) for the experimental (E) and control (C) groups. Additional information related to the moderator variables is the country, year of implementation, type of AI used, experimental research objective, education level, and duration of the treatment (experimental group), as shown in Appendix 1. Table 1 presents the data extracted from the selected articles.
Note (Table 1): The data of the articles chosen for meta-analysis are compiled in this table. Each row represents a study, categorized by author, number of samples (NE and NC), mean (X̄E and X̄C), and standard deviation (sE and sC) for the experimental (E) and control (C) groups. The letters a, b, and c after an author's name indicate the different achievement data presented in the same article.
Table 1 details the differences across the study findings on the impact of AI-based learning media on student achievement. There are considerable differences across the 31 papers gathered, including in the participants and in the means and standard deviations of the experimental and control groups. Overall, the data in Table 1 give an overview of the effect that numerous studies have found for the intervention under test, in this case the use of AI-based learning media, on students' academic performance. The means and standard deviations offer crucial details about the degree of variation seen in each study and about the efficacy of the intervention in altering the outcomes under examination.
Heterogeneity provides an overview of the distribution of the studies, while the summary effect provides information about the influence of the treatment on a skill, in this research the effect of AI-based learning media on student achievement. The results also show this effect while considering several moderator variables, such as country, the form of AI-based learning media, and the length of treatment (duration). To strengthen these results, publication bias was also examined using a funnel plot and Egger's test.
Note (Table 2): The treatment effects (TE) and standard errors (SE) of the examined studies are compiled in this table. The treatment effects reveal how effective the intervention was; some studies, such as Studies 17 and 27, show large positive effects, while others, such as Studies 2 and 10, show adverse or almost negligible effects. The standard errors differ, reflecting variations in the precision of the estimates across studies. This table clarifies the magnitude of the effects of the treatments under analysis.
Table 2 shows that the average treatment effect across the studies is 0.595, with an average standard error of 0.288. This indicates that treatment or intervention using AI-based learning media has, on average, a positive and beneficial impact. The standard error, meanwhile, indicates the uncertainty in estimating the treatment effect: the larger the mean standard error, the greater the uncertainty. A relatively high standard error therefore signals considerable uncertainty in estimating the effect of a given treatment.
Heterogeneity Test
Heterogeneity can be seen in the variations or differences between the studies included in the analysis. It describes the degree of dissimilarity between study results that may be due to differences in study characteristics, populations, research designs, or other relevant factors (Stogiannis et al., 2024). The result of the heterogeneity test is displayed in Table 3.
Note (Table 3): The analyzed studies exhibit significant heterogeneity, as the table shows. Most of the
variation between studies is due to real heterogeneity rather than random error, as indicated by the
tau² value of 0.5312 and I² of 83.5%. Additionally, the H value of 2.46 supports the existence of
significant heterogeneity. With a p-value of less than 0.0001, the Q heterogeneity test yielded a
value of 182.03, indicating that the variation observed amongst the studies was substantial and not
coincidental.
Table 3 exhibits the result of the heterogeneity test. The Q test determines whether study results vary more than chance alone would predict; significant heterogeneity exists when the p-value is extremely low (< 0.0001). The results show that the data are quite heterogeneous, with the proportion of variance attributable to heterogeneity reaching 83.5%. The studies used in this research therefore differ considerably in their characteristics. These findings suggest that the papers included in this meta-analysis exhibit substantial heterogeneity: the high I² value (83.5%) and the extremely low p-value of the Q test (< 0.0001) support the idea that variables that vary across studies also contribute to variation in the results.
Stogiannis et al. (2024) note that high heterogeneity among the papers included in a meta-analysis indicates substantial diversity. This means the approach can yield deep and thorough explanations and analyses. In addition, significant heterogeneity is a motivation for further subgroup or moderator-variable analysis of the factors potentially influencing these studies, and subsequent research may benefit from this high degree of heterogeneity in the results. High heterogeneity also supports the choice of the random-effects model for estimating the summary effect.
In meta-analyses, funnel plots are used to identify publication bias by plotting the sample size or variance of each study on the vertical axis and its effect size on the horizontal axis (Duval & Tweedie, 2000). If there is no publication bias, the study points are distributed symmetrically in a cone; if there is, the plot is asymmetric. Funnel plots, in which small studies with large variances tend to be scattered more widely at the bottom of the graph, are also helpful for gauging study precision and identifying inconsistency between studies (Sterne & Egger, 2001). Asymmetry in a funnel plot can indicate bias or other issues that need further investigation (Duval & Tweedie, 2000). Further, Mathur and Vanderweele (2020) explain that symmetry in a meta-analysis's funnel plot suggests there may be no publication bias in the results, making them more legitimate and accurate. Such symmetry boosts the meta-analysis's credibility, since it reflects more comprehensive and objective data whose conclusions are not distorted by study-selection bias. It also suggests that variation in study results is more likely driven by measurable components than by extraneous effects, enhancing the validity and dependability of the analysis's conclusions.
Figure 2 shows that the studies involved in this meta-analysis are distributed asymmetrically, which suggests that publication bias is present and that the results of this meta-analysis should be interpreted carefully (Duval & Tweedie, 2000; Mathur & Vanderweele, 2020; Sterne & Egger, 2001). The statistical results strengthen this conclusion about publication bias. They come from a linear regression test of funnel plot asymmetry, Egger's test, whose results are shown in Table 4.
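A hedged sketch of how the funnel plot (Figure 2) and the Egger regression test (Table 4) can be obtained from the fitted meta object m defined in the earlier sketch:

```r
# Funnel plot of the fitted meta-analysis object (cf. Figure 2).
funnel(m)

# Egger's linear regression test for funnel plot asymmetry (cf. Table 4).
metabias(m, method.bias = "linreg")
```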
Note (Table 4): This table shows the results of the linear regression test for funnel plot asymmetry, which is used to detect possible publication bias. The results demonstrate that the funnel plot's asymmetry is statistically significant, with a t-value of 2.28, 29 degrees of freedom (df), and a p-value of 0.0298. The analysis yielded a bias estimate of 5.5843 with a standard error (SE) of 2.4445, suggesting potential bias in the published research.
The findings show that the funnel plot of this meta-analysis is imbalanced, or asymmetric, which may be a sign of publication bias. Under publication bias, studies with unfavorable or inconsequential outcomes are published less frequently than studies with favorable or significant results. The t-value of 2.28 indicates the degree of departure from the ideal condition of no bias; the larger the t-value, the greater the likelihood of asymmetry or bias. The p-value of 0.0298 means that, in the absence of publication bias, there would be less than a 3% probability of observing this degree of asymmetry by chance alone. Because the p-value is below 0.05, the result is deemed significant, indicating publication bias.

The bias estimate (5.5843) offers a quantitative assessment of the potential bias's magnitude, while its standard error (SE = 2.4445) indicates the uncertainty associated with that estimate. The significance of the result suggests that the studies in this meta-analysis may be out of balance and that studies with non-significant or negative outcomes may be under-represented, which translates into an asymmetric funnel plot. It is therefore important to proceed cautiously when examining the results of this meta-analysis, because publication bias can inflate the reported effects.
However, publication bias does not make a meta-analysis "bad"; it indicates that the results should be interpreted cautiously (Duval & Tweedie, 2000; Sterne & Egger, 2001). The quality of a meta-analysis depends more on how these biases are recognized, reported, and addressed. Consequently, this meta-analysis result cannot be generalized and can only be applied in the same research or treatment context. A good meta-analysis is transparent about its limitations and takes steps to minimize the influence of publication bias on its conclusions.
A funnel plot analysis, like the one shown in Figure 2, can be used to assess the effect of publication bias. Other strategies include analyzing the moderator variables to determine the impact of each factor and performing a fail-safe N calculation, which is the next step (Borenstein, 2019; Borenstein et al., 2021; Higgins et al., 2003; Higgins & Green, 2011; Rosenthal, 1979; Sterne & Egger, 2001). Taking these actions can lessen the effects of publication bias and improve the validity of the meta-analysis's findings.
Fail-safe N Calculation
In meta-analysis, fail-safe N is used to evaluate how resilient results are to publication
bias, i.e., whether significant results can be sustained in the face of bias. It evaluates the
likelihood of publication bias, counts the number of studies with null results required to turn a
significant result into a nonsignificant one, and boosts confidence in the findings (Borenstein
et al., 2021; Higgins et al., 2003; Rosenthal, 1979). The meta-analysis results are robust and
stable when the Fail-safe N value is high; potential instability is indicated when the value is
low. Fail-safe N further aids transparent result interpretation and presentation.
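Rosenthal's calculation, as typically formulated, asks how many additional null studies would be needed to raise the combined one-tailed p-value above the significance threshold:

\[ N_{fs} = \frac{\left(\sum_{i=1}^{k} z_i\right)^{2}}{z_{\alpha}^{2}} - k, \]

where k is the number of included studies, \(z_i\) is the standard normal deviate of study i, and \(z_{\alpha}\) is the critical value for the chosen significance level (1.645 for a one-tailed \(\alpha = 0.05\)).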
In conclusion, fail-safe N computation is a crucial meta-analytic tool for assessing how resistant findings are to publication bias. It makes it easier for readers and researchers to evaluate the stability and dependability of the meta-analysis's conclusions despite the potential for unpublished studies or other forms of publication bias.
Note (Table 6): The results of the fail-safe N calculation using the Rosenthal method are shown in this table. This method determines the number of additional studies with non-significant findings that would be required to render the meta-analysis's overall result non-significant. Given an observed significance level of < .0001 and a target significance level of 0.05, 1302 additional studies with non-significant results would be needed to overturn the overall significance of the meta-analysis. This demonstrates that the meta-analysis's conclusions are very solid and resistant to being swayed by unpublished null studies.
The outcome of the fail-safe N computation is shown in Table 6. The results of the fail-safe N calculation using the Rosenthal approach shed light on how immune the meta-analysis conclusions are to potential publication bias. The observed significance level (< .0001) is the meta-analysis p-value and indicates that the results are highly statistically significant. The target significance level (p-value = 0.05) is the conventional threshold for statistical significance; the meta-analysis results are deemed significant if the p-value is below 0.05. The fail-safe N of 1302 is the number of additional studies with null (non-significant) results that would be needed to raise the p-value of the meta-analysis above this threshold. In this instance, 1302 more studies with no effect would be required before the meta-analysis results would no longer be considered significant.
A high fail-safe N (1302) indicates that the meta-analysis findings are robust and resistant to the effects of publication bias. The aggregate results would still be significant even if more than a thousand additional papers with non-significant results had been excluded from the meta-analysis. Because it is doubtful that so many studies with null results remain unpublished or undiscovered, the results appear resistant to potential publication bias. Overall, the findings of this meta-analysis are therefore more reliable, although publication bias should always be addressed.
Summary Effect
The aggregate estimate of the effect size obtained from each of the individual studies
that were part of the analysis is referred to as the "summary effect" in meta-analyses.
Depending on the data being examined, this impact size may be a mean difference, odds ratio,
risk ratio, or another effect size. When the results of multiple studies are combined, and the
weight of each study is taken into consideration—often based on the sample size or accuracy
of the study's findings—the summary effect is produced, which is a single figure that
represents the overall effect of the intervention or relationship under study (Candra &
Retnawati, 2020; Cooper et al., 2009; Etemadfar et al., 2020).
The first step in computing the summary effect is selecting a model appropriate for the data. This choice can be based on the heterogeneity test results and on the model-selection test for estimating the summary effect. The model appropriateness tests in this meta-analysis were calculated with the RStudio program. Table 5 displays the findings used for selecting the model for the summary effect analysis.
Note (Table 5): The outcomes of the meta-analysis employing Hedges' g as the effect size are shown in this table. The common-effect model yields a standardized mean difference (SMD) of 0.4940 with a 95% confidence interval of [0.3975; 0.5904] and a z value of 10.04 (p < 0.0001), indicating a substantial effect. The random-effects model, which allows for greater between-study variability, also indicates a substantial effect, with a higher SMD of 0.5759, a 95% confidence interval of [0.2997; 0.8520], and a z value of 4.09 (p < 0.0001).
The model determination results displayed in Table 5 were used to analyze the data from the 31 thoroughly examined articles. One of two models, the random-effects model or the common-effect model, is used to calculate the summary effect in a meta-analysis. Given the detailed data obtained, the random-effects estimate of 0.5759 with confidence interval [0.2997; 0.8520] is the one used to interpret the summary effect, because the random-effects model accounts for the substantial heterogeneity between studies (Borenstein et al., 2021).
The summary effect is the combined treatment effect of the studies used. The summary shows the results of calculations using the random-effects method, which accounts for variation or diversity between studies and for factors that influence the treatment in each study (Hansen et al., 2022).
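Under the random-effects model, as commonly formulated, each study's Hedges' g is weighted by the inverse of its total variance, which combines the within-study variance with the between-study variance estimated in the heterogeneity test:

\[ \hat{\theta}_{RE} = \frac{\sum_{i=1}^{k} w_i\, g_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{SE_i^{2} + \hat{\tau}^{2}}, \]

with \(\hat{\tau}^{2} = 0.5312\) from Table 3; setting \(\hat{\tau}^{2} = 0\) recovers the common-effect estimate.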
The results of the analysis show that the use of AI-based learning media has a significant influence on student achievement, as seen from the p-value, which is below 0.05 (< 0.0001). This means that the use of AI-based technology, such as e-books, applications, the web, and similar media, has a real impact on improving student achievement. The effects of each study are shown in detail in the forest plot in Figure 3.
Next, the summary effect size of the studies should be examined; it can be read from the forest plot. Forest plots contain several elements. Each study is represented by a horizontal bar showing its confidence interval, with the left end marking the lower limit and the right end the upper limit. In the middle of each bar is a box whose size indicates the study's weight and whose position indicates the study's effect size. At the bottom is a diamond whose area aggregates the weights of all studies and whose position indicates the size of the aggregate effect (Retnawati et al., 2018).
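A minimal sketch of how such a forest plot, with per-study confidence intervals, weights, and the pooled diamond, can be drawn from the meta object m fitted earlier:

```r
# Forest plot of the per-study and pooled effects (cf. Figure 3).
forest(m)
```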
The forest plot shows that the distribution of effects varies considerably across studies. Every study points to some influence of AI-based learning media on students' achievement, with both positive and negative effects observed, although most studies show positive effects; moderate to negative effects were observed in only about 5% of the studies. In addition, most individual studies carry weights below 2% in determining the conclusions of the meta-analysis. The researchers therefore also examined other variables that could have influenced the effect of implementing AI-based learning media on students' achievement.
Moderator Variables
The meta-analysis results for several moderator variables support the effectiveness of utilizing AI-based learning media to enhance students' achievement; they are examined further in Figure 4.
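A hedged sketch of how such subgroup (moderator) analyses can be run by updating the fitted meta object m; the moderator column names (continent, achievement_type, level, duration_weeks) are illustrative assumptions:

```r
# Subgroup (moderator) analyses on the fitted meta object
# ("subgroup" in current versions of meta; "byvar" in older ones).
update(m, subgroup = continent)          # region (e.g., West Asia vs. East Asia)
update(m, subgroup = achievement_type)   # academic vs. non-academic achievement
update(m, subgroup = level)              # educational level (ES, JHS, SHS, university)

# A continuous moderator such as intervention duration can also be
# examined with meta-regression.
metareg(m, ~ duration_weeks)
```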
The meta-analysis results for the moderator variables presented in Figure 4 show that the impact of AI-based learning media on student achievement is influenced by several moderators. Educational level (university, JHS, SHS, and ES) is not a significant moderator (p-value = 0.35). Nonetheless, some research indicates that AI works better at lower educational levels, since younger, tech-savvy pupils are more receptive to AI-based teaching strategies (Hwang, 2022). For gained achievement (the type of achievement attained), the application of AI has a statistically significantly (p < 0.01) larger impact on academic achievement (SMD = 0.79) than on non-academic achievement (SMD = 0.21). These results are supported by Zheng et al. (2023), who indicate that AI is likely to enhance students' academic comprehension, particularly in areas like physics and mathematics that call for analytical abilities.
For continent (region), the impact of AI on achievement differs: West Asia shows a larger effect (SMD = 1.17) than East Asia (SMD = 0.33). This may be related to how regional variations in technology infrastructure and investment affect the use of AI in classrooms; according to Hwang (2022), infrastructure preparedness and technological accessibility are critical factors in the success of AI in education. Next, the length of the intervention was found to be a significant moderator (p-value = 0.01), with longer periods showing a more pronounced effect (SMD = 0.97). These results are corroborated by Zheng et al. (2023), who show that a longer duration enables AI to adjust to the needs of individual students more effectively, boosting its efficacy in raising achievement.
Lastly, studies with smaller samples typically exhibit a larger effect (SMD = 0.62), although sample size does not have a significant influence (p-value = 0.10). Better-controlled conditions in small-sample research may explain this, enabling more intensive and targeted AI applications. Overall, the information gathered for this meta-analysis indicates that AI in education positively impacts academic attainment, especially when interventions are tailored to the local educational context and last a sufficient time.
The findings of the meta-analysis of these moderator variables provide further evidence that using learning media based on artificial intelligence (AI) in the classroom has a significant effect on students' academic performance, although the outcome may differ depending on the circumstances and features of the intervention. These findings are corroborated by the meta-analysis of Hwang (2022), which demonstrates that AI positively impacts the mathematical proficiency of primary school pupils, with an effect size of approximately 0.351, although moderating factors such as the mathematics topic taught and the students' grade level might influence this outcome.
Similarly, a study by Zheng et al. (2023), which examined 24 articles, demonstrated that AI significantly impacted learning achievement, particularly students' comprehension of the material. The effectiveness of AI is influenced by several factors, including sample size, education level, learning domain, and the function of AI in learning. According to that study, personalized AI technologies such as adaptive learning and intelligent tutoring systems improved students' academic performance more than traditional teaching techniques. This meta-analysis therefore supports the finding that incorporating AI into educational learning media can significantly increase student achievement, particularly when customized to each student's needs and traits.
Conclusion
Integrating technology in the form of artificial intelligence, especially in learning media, needs to be developed and should become the focus of all educational stakeholders, because AI-based learning media has been proven to significantly impact students' academic and non-academic achievement. This meta-analysis implies that AI interventions should be adapted to the educational level; teachers should consider incorporating AI tailored to each student's requirements and cognitive growth, for example by emphasizing interactive elements for elementary school pupils and analytics-based applications for college students. Furthermore, student achievement was more strongly affected by interventions that lasted longer, which demonstrates how crucial it is to plan the use of AI-based learning media over an extended period so that teaching strategies can adapt and be personalized. To promote students' ongoing academic progress, educators and curriculum designers should create long-term, structured AI-based learning programs.
Conflict of Interest:
The authors state that they have no competing interests regarding this article's publication and no
personal or financial ties to any groups or people that might improperly affect the direction or results
of the research.
Research Involving Human Participants and/or Animals:
No humans or animals were involved in this scientific project. This study's research does not involve
direct interaction with humans or animals but is based on secondary data from previously published
sources.
Acknowledgments:
We thank The Ministry of Education, Culture, Research, and Technology for funding this research
using the research assignment scheme, and the parties involved in data collection. The writers alone
bear responsibility for any lingering errors.
References
Abass, F., & Abas, N. (2019). The role of advance technologies in motivated learning: Case
study of Saudi learners in universities. International Conference on Research in
Education.
Adcock, P. K. (2008). Evolution of teaching and learning through technology. The Delta Kappa Gamma Bulletin, 74(4), 37. Retrieved from https://digitalcommons.unomaha.edu/cgi/viewcontent.cgi?article=1057&context=tedfacpub
Alomari, M. A. (2020). The effect of the use of an educational software based on the strategy
of artificial intelligence on students’ achievement and their attitudes towards it.
Management Science Letters, 10(13), 2951–2960.
https://doi.org/10.5267/j.msl.2020.5.030
Angwaomaodoko, E. A. (2023). An appraisal on the role of technology in modern education,
opportunities and challenges. Path of Science, 9(12), 3019–3028.
https://doi.org/10.22178/pos.99-4
Bhargavi, S., & Guruprasad, N. (2019). Impact of artificial intelligence in the field of
Education. Proceedings of the Second International Conference on Emerging Trends
in Science & Technologies For Engineering Systems (ICETSE-2019), May, 35–39.
Bhatt, C., Singh, S., Chauhan, R., Singh, T., & Uniyal, A. (2023). Artificial intelligence in
current education: Roles, applications & challenges. The 3rd International
Conference on Pervasive Computing and Social Networking (ICPCSN), 241–244.
Borenstein, M. (2019). Common mistakes in meta-analysis and how to avoid them.
Englewood, NJ: Biostat, Inc.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2021). Introduction to meta-analysis. Oxford, UK: John Wiley & Sons.
Candra, & Retnawati, H. (2020). A meta-analysis of constructivism learning implementation
towards the learning outcomes on civic education lesson. International Journal of
Instruction, 13(2), 835–846. https://doi.org/10.29333/iji.2020.13256a
Chiu, T. K. F., Xia, Q., Zhou, X., Chai, C. S., & Cheng, M. (2023). Systematic literature
review on opportunities, challenges, and future research recommendations of artificial
intelligence in education. Computers and Education: Artificial Intelligence, 4, 100118.
Cooper, H., Hedges, L. V., & Valentine, J. C. (2009). The handbook of research synthesis and meta-analysis. New York: Russell Sage Foundation.
Dai, Y., Lin, Z., Liu, A., & Wang, W. (2024). An embodied, analogical and disruptive
approach of AI pedagogy in upper elementary education: An experimental study.
British Journal of Educational Technology, 55(1), 417-434.
https://doi.org/10.1111/bjet.13371
Das, A., Malaviya, S., & Singh, M. (2023). The Impact of AI-Driven Personalization on
Learners’ Performance. International Journal of Computer Sciences and Engineering,
11(8), 15-22. Retrieved from https://www.researchgate.net/profile/Amit-Das-
18/publication/373424876_The_Impact_of_AI-
Driven_Personalization_on_Learners'_Performance/links/64eaeb130453074fbdb66c1f
/The-Impact-of-AI-Driven-Personalization-on-Learners-Performance.pdf
del Campo, J. M., Negro, V., & Núñez, M. (2012). The history of technology in education. A
comparative study and forecast. Procedia-Social and Behavioral Sciences, 69, 1086–
1092. https://doi.org/10.1016/j.sbspro.2012.12.036
Dhaya, R., Kanthavel, R., & Venusamy, K. (2022). AI-based learning model management
framework for private cloud computing. Journal of Internet Technology, 23(7), 1633–
1642. https://doi.org/10.53106/160792642022122307017
Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2), 455–463. https://doi.org/10.1111/j.0006-341X.2000.00455.x
Etemadfar, P., Soozandehfar, S. M. A., & Namaziandost, E. (2020). An account of EFL
learners’ listening comprehension and critical thinking in the flipped classroom model.
Cogent Education, 7(1). https://doi.org/10.1080/2331186X.2020.1835150
García-Martínez, I. (2023). Analysing the Impact of Artificial Intelligence and Computational
Sciences on Student Performance: Systematic Review and Meta-analysis. Journal of
New Approaches in Educational Research, 12(1), 171–197.
https://doi.org/10.7821/naer.2023.1.1240
Hansen, C., Steinmetz, H., & Block, J. (2022). How to conduct a meta-analysis in eight steps:
a practical guide. Management Review Quarterly, 72(1), 1–19.
https://doi.org/10.1007/s11301-021-00247-4
Harzing, A. W. (2007). Publish or Perish.
Hassan, G. (2023). Technology and the transformation of educational practices: A future
perspective. International Journal of Economic, Business, Accounting, Agriculture
Management and Sharia Administration, 3(1), 1596–1603.
https://radjapublika.com/index.php/IJEBAS
Higgins, J. P. T., & Green, S. (2011). Cochrane handbook for systematic reviews of
interventions. London, UK: The Cochrane Collaboration.
Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327(7414), 557–560. https://doi.org/10.1136/bmj.327.7414.557
Huang, A. Y. Q., Lu, O. H. T., & Yang, S. J. H. (2023). Effects of artificial Intelligence–
Enabled personalized recommendations on learners’ learning engagement, motivation,
and outcomes in a flipped classroom. Computers & Education, 194, 104684.
https://doi.org/10.1016/j.compedu.2022.104684
Huddar, R., & Kharade, K. (2023). Designing of AI-based teaching-learning model for
revitalizing education. International Conference on the Future Global Business and
Technology, Kolhapur, India. Retrieved from
https://www.researchgate.net/profile/Kabir-
Kharade/publication/370561052_Designing_of_AI-Based_Teaching-
Learning_Model_for_Revitalizing_Education/links/6455fade809a53502150a740/Desi
gning-of-AI-Based-Teaching-Learning-Model-for-Revitalizing-Education.pdf
Hooda, M., Rana, C., Dahiya, O., Rizwan, A., & Hossain, M. S. (2022). Artificial intelligence
for assessment and feedback to enhance student success in higher education.
Mathematical Problems in Engineering, 2022(1), 5215722.
Hwang, S. (2022). Examining the Effects of Artificial Intelligence on Elementary Students’
Mathematics Achievement: A Meta-Analysis. Sustainability (Switzerland), 14(20).
https://doi.org/10.3390/su142013185
Junaidi, J. (2020). Artificial intelligence in EFL context: Rising students’ speaking
performance with Lyra virtual assistance. International Journal of Advanced Science
and Technology, 29(5), 6735-6741.
Kaledio, P., Robert, A., & Frank, L. (2024). The Impact of Artificial Intelligence on Students’
Learning Experience. Available at SSRN 4716747.
Kalyani, L. K. (2024). The role of technology in education: Enhancing learning outcomes and
21st century skills. International Journal of Scientific Research in Modern Science and
Technology, 3(4), 5–10. Retrieved from
https://ijsrmst.com/index.php/ijsrmst/article/view/199
Kanvaria, V. K., & Suraj, M. T. (2024). The role of AI in Mathematics education: Assessing
the effects of an auto draw webtool on middle level achievement. The Online Journal
of Distance Education and e-Learning, 12(1), 49. Retrieved from
https://tojqih.net/cgi-sys
Kiong, J. F. (2022). The Impact of Technology on Education: A Case Study of Schools.
Journal of Education Review Provision, 2(2), 43–47.
https://doi.org/10.55885/jerp.v2i2.153
Li, K. (2023). Determinants of college students’ actual use of AI-based systems: An extension
of the technology acceptance model. Sustainability, 15(6), 5221.
https://doi.org/10.3390/su15065221
Liu, C. C., Liao, M. G., Chang, C. H., & Lin, H. M. (2022). An analysis of
children's interaction with an AI chatbot and its impact on their interest in reading. Computers & Education, 189, 104576. https://doi.org/10.1016/j.compedu.2022.104576
Lokare, V. T., & Jadhav, P. M. (2024). An AI-based learning style prediction model for
personalized and effective learning. Thinking Skills and Creativity, 51, 101421.
https://doi.org/10.1016/j.tsc.2023.101421
Mathur, M. B., & Vanderweele, T. J. (2020). Sensitivity analysis for publication bias in meta-
analyses. Applied Statistics, 69(5), 1091–1119. https://doi.org/10.1111/rssc.12440
Mengist, W., & Soromessa, T. (2020). Method for conducting systematic literature review
and meta-analysis for environmental science research. MethodsX, 7, 100777.
https://doi.org/10.1016/j.mex.2019.100777
Nethra R MBA, N. (2019). Impact of technology on education. Journal of Emerging
Technologies and Innovative Research, 6(7), 166–169.
Raja, R., & Nagasubramani, P. C. (2018). Impact of modern technology in education. Journal
of Applied and Advanced Research, 3(1), 33–35.
https://dx.doi.org/10.21839/jaar.2018.v3S1.165
Retnawati, H., Apino, E., Kartianom, Djidu, H., & Anazifa, R. D. (2018). Pengantar analisis meta [Introduction to meta-analysis]. Yogyakarta, Indonesia: Parama Publishing.
Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86(3), 638–641. https://doi.org/10.1037/0033-2909.86.3.638
RStudio_Team. (2020). RStudio: Integrated Development for R. RStudio, PBC, Boston, MA.
Samra, E. M. (2021). The effect of introducing infographic pattern on developing cognitive
understanding by using AI technology for university students during the COVID-19
Appendix 1 (excerpt). Moderator information for the included studies:

| Study | Level | Achievement measured | AI-based learning media | Country | Duration (weeks) |
|---|---|---|---|---|---|
| Study26 | Univ | Willingness to learn | AI-supported animated infographic | Saudi Arabia | 4 |
| Study27 | Univ | Achievement test | AI-enabled e-learning | Saudi Arabia | 4 |
| Study28 | Univ | Learning process | AI-enabled e-learning | Saudi Arabia | 4 |
| Study29 | Univ | Cognitive achievement | AI-enabled e-learning | Saudi Arabia | 4 |
| Study30 | Univ | Students' achievement | AI strategy-based educational software | Jordan | 3 |
| Study31 | SHS | Learning achievement | AI-assisted learning | India | 3 |