Possible Qs

The study investigates the impact of narrative texts on reading comprehension among Filipino college students, highlighting the advantages of silent reading over oral reading in terms of cognitive load and comprehension. It addresses gaps in existing literature, particularly in the Philippine context, and employs a between-subjects design to compare comprehension outcomes. The findings suggest that while both reading modalities are effective, silent reading may facilitate deeper understanding, although the differences were not statistically significant.


Conceptual Framework & Rationale

1.​ Why did you specifically choose narrative texts instead of expository or informational texts for
this study? Narrative texts were chosen because they are more emotionally engaging, which has been
shown to enhance memory retention and comprehension. They are also commonly encountered in
general education and literature courses in college.
2.​ How does the Construction-Integration Model support your assumption that silent reading leads
to better comprehension? The model suggests that readers integrate textual information with prior
knowledge in two stages. Silent reading reduces external cognitive load, allowing deeper integration
during the second stage, especially for skilled readers.
3.​ How does Cognitive Load Theory apply differently to oral and silent reading in the context of
college-level students? Oral reading adds extraneous cognitive load due to articulation, which may
divert resources away from comprehension. Silent reading eliminates this demand, enabling more
focus on understanding the text.
4.​ What gaps in literature did your study aim to address specifically in the Philippine college
context? Most studies on reading modes focus on primary or ESL contexts. Few have explored the
impact on narrative comprehension among Filipino college students, particularly with validated
comprehension instruments.

Theoretical Underpinnings

5.​ You cited Sweller (1988) and Kintsch (1988); how do their theories differ in terms of explaining
reading comprehension? Sweller focuses on cognitive load management during learning, while
Kintsch emphasizes mental model construction and integration of textual content with existing
knowledge. Both inform different aspects of how comprehension occurs.
6.​ How did you reconcile findings that oral reading enhances memory recall (MacLeod et al., 2021)
with your results showing no significant difference in comprehension? While oral reading may
boost short-term recall, our comprehension tasks required deeper understanding, not just memory.
Silent reading supports integration, which is key to inferential comprehension.
7. Can you explain how the findings of Santos (2016) on spoken word recognition relate to your inferential comprehension results? Santos found that auditory input aids lexical processing. However, comprehension involves higher-order reasoning that may benefit more from silent reading because of its reduced cognitive load.
Methodology

8.​ Why did you choose a between-subjects pretest-posttest design rather than a within-subjects
design? A between-subjects design prevents carryover effects that could bias results if participants
experienced both conditions.
9. How did you ensure the randomization process did not introduce sampling bias? We assigned participants by systematic alternation, which approximates random assignment and distributes potential confounding variables evenly across groups, though we acknowledge that true randomization would have been preferable.
10.​Why was silent reading used for the pretest for both groups instead of matching their assigned
conditions? This controlled for reading delivery method and ensured baseline comparability without
introducing articulation variables in the pretest.
11.​What were the limitations of using Millett’s (2017) Speed Readings for ESL Learners as your
instrument for college students? While designed for ESL learners, we validated the test for our
context. However, it may not fully capture the nuances of academic-level inferencing.

Statistical Analysis

12. Why did you use Welch's t-test rather than a Mann-Whitney U test when normality was violated? Welch's t-test does not assume equal variances and remains reasonably robust to mild normality violations in small samples. The Mann-Whitney U test would also answer a slightly different question, testing for a distributional shift rather than a difference in means.
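As a hedged illustration of the statistic itself (not the analysis code used in the study), Welch's t and the Welch-Satterthwaite degrees of freedom can be computed directly from two samples; the score lists below are made-up values, not the study's data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb             # squared SE of the mean difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical posttest scores for illustration only:
t, df = welch_t([9, 10, 8, 11, 9], [8, 9, 7, 10, 8])
```

The p-value would then come from the t distribution with df degrees of freedom, which is what standard statistical packages report.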
13.​Given the small sample size (n=22), how do you justify the generalizability of your findings? We
acknowledge limited generalizability and recommend replication with larger, more diverse samples to
validate results.
14. What does a p-value of 1.000 (Table 2.1) for literal comprehension tell us about group equivalence? It indicates that the observed group means were essentially identical (a test statistic of zero), so the data provide no evidence that reading mode affected literal comprehension.

Instrument Validity & Reliability

15.​How did you validate the categorization of comprehension questions into literal and inferential
types? Expert validation was conducted, and questions were reviewed against established definitions
(Nurjanah & Putri, 2021; Rice et al., 2023).
16. Was Cronbach's alpha computed for internal consistency? If not, why was it not included? Computation was planned after data collection but had not been completed at the time of writing. The original test reports reliability coefficients between 0.70 and 0.85.
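Cronbach's alpha is straightforward to compute once item-level scores are tabulated. A minimal sketch, using hypothetical item scores rather than the study's data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from item-level scores.
    items: one list per test item, each holding every respondent's score."""
    k = len(items)
    sum_item_var = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total per respondent
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Hypothetical right/wrong scores for 3 items answered by 4 respondents:
alpha = cronbach_alpha([[1, 0, 1, 1],
                        [1, 0, 1, 0],
                        [1, 1, 1, 0]])
```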
17.​Since the instrument was originally designed for ESL learners, how did you ensure its cultural
and linguistic appropriateness for your participants? We selected narrative texts with universal
themes and reviewed language complexity to ensure accessibility.

Design Limitations & Assumptions

18.​Could the similarity in posttest scores be due to a ceiling effect in your comprehension test?
Yes, high average posttest scores suggest a possible ceiling effect, which could obscure subtle
differences between groups.
19.​Did your design account for learning preferences, and how might this have impacted the
results? We discussed learning preferences theoretically but did not measure them empirically. Future
studies should include this variable.
20.​Why not include a control group with no reading task to isolate comprehension gain? Our focus
was comparative rather than causal, hence we designed two treatment groups to observe relative
efficacy.

Time Control & Reading Pace

21.​How did you manage reading time consistency between the oral and silent reading groups?
Participants were not time-restricted but monitored to ensure that neither group was rushed or delayed
disproportionately.
22.​Was there any measurement or recording of reading time per participant? Reading time was
observed but not recorded systematically. This is a limitation and is recommended for future studies.

Participant Behavior

23.​Did you collect demographic data that could have influenced comprehension? We collected basic
demographic data but did not statistically analyze it due to the small sample size.
24. How did you ensure participants followed instructions during the silent reading condition? Experimenters observed participants to ensure no audible vocalization or whispering occurred during silent reading.
Text Selection & Content

25.​Why were “Life in the South Pacific Islands” and “Jayaprana” selected? These texts were from
validated ESL reading materials, offering narrative coherence and culturally neutral content.
26.​Were the passages pre-tested for readability or cognitive demand? We relied on prior validation
from the Millett materials, which are standardized at the 1000-word level for ESL learners.

Experimental Controls

27.​Were environmental factors like room setting, background noise, or lighting standardized? Yes,
all sessions took place in the same controlled lab setting with consistent environmental conditions.
28.​Could the presence of the experimenter during oral reading have introduced bias? It’s possible.
We minimized this by maintaining neutral behavior and providing standard instructions.

Applications & Pedagogical Relevance

29.​How might your findings guide instructional decisions in remedial reading programs? Our
findings suggest that both modalities are effective, so educators can tailor instruction to individual
preferences and goals.
30.​Considering the small sample size, how might your findings inform policy or pedagogy? While
preliminary, our results support flexibility in reading approaches, which can be beneficial in designing
inclusive reading interventions.

Additional Panelist Questions and Answers

31.​How do you interpret the minimal difference in posttest means? The minimal difference suggests
that both reading modes are comparably effective for narrative text comprehension in proficient
readers.
32.​What would you change in the experimental setup to capture subtle comprehension
differences? We would use more complex texts or open-ended comprehension tasks to better
differentiate deep understanding.
33.​Did you observe any behavioral differences during reading (e.g., fidgeting, hesitations)? Minor
hesitations were noted during oral reading, which could reflect articulation burden, but no significant
behavioral issues were observed.
34.​How do you plan to improve inferential comprehension measurement in future research? We will
include higher-order inference questions and possibly integrate qualitative recall tasks to capture depth.
35.​Was the comprehension test piloted on similar students beforehand? No formal pilot was
conducted due to time constraints, but the instrument was expert-reviewed and previously validated.
36.​What reading strategies could improve inferential comprehension regardless of modality?
Teaching metacognitive strategies like summarization, questioning, and prediction may improve
inferential comprehension.
37.​How did you address potential Hawthorne effect during the experiment? We standardized
procedures and minimized interaction with participants during reading tasks to reduce performance
bias.
38.​What additional variables would you control for in future research? Reading fluency level, prior
exposure to the narrative, and intrinsic motivation should be controlled for deeper insight.
39.​If you had more time, how would you enhance your data collection? We would conduct delayed
posttests and track long-term retention, as well as log reading durations.
40.​Can these findings apply to digital reading environments? Possibly, but digital modalities introduce
screen-based variables; further research would be needed to explore this context.
41.​How would you adapt your methodology for multilingual populations? We would translate and
validate passages in target languages and account for linguistic proficiency.
42.​Could emotion evoked by narratives influence comprehension outcomes? Yes, emotionally
charged texts can enhance engagement and retention, a variable worth isolating in future studies.
43. Were comprehension outcomes influenced by the order in which groups were tested? Groups were tested in sequence, but assignment to conditions was randomized; order effects were minimized, though not fully controlled.
44.​How does this study contribute to cognitive psychology literature? It applies cognitive load and
comprehension models to real-world academic reading, expanding on how mode affects processing.
45.​What ethical considerations were prioritized in your research design? Informed consent, voluntary
participation, and anonymity were strictly enforced. Emotional well-being was monitored throughout.
46.​How did you handle unengaged or careless participants? Observation during the experiment
ensured attention. All data were reviewed for anomalies but no exclusions were necessary.
47.​What does this study imply for students with low reading proficiency? Oral reading might offer
pronunciation benefits, but silent reading may still support deeper comprehension for those with basic
fluency.
48.​Do your findings align with existing Filipino education policies? They support DepEd’s promotion
of learner-centered approaches, emphasizing modality flexibility.
49.​What role does metacognition play in reading comprehension according to your data? While not
directly measured, silent readers may engage more in metacognitive strategies due to reduced
articulation burden.
50.​What practical advice would you give to college students based on your findings? Choose the
reading method that helps you focus best. Silent reading aids deep thinking; oral reading may help with
attention or memory.
51.​How would you test retention after one week post-intervention? We’d re-administer the posttest
and compare decay rates between groups to assess long-term retention.
52.​Could group reading sessions impact comprehension differently than individual sessions? Yes,
social dynamics and peer influence could enhance engagement or introduce distraction—worth testing
in future studies.
53.​What follow-up research are you planning next? A study on how multimodal instruction (visual +
auditory) impacts both comprehension and retention using mixed methods.
54.​How can teachers use your findings in practical classroom settings? They can allow students to
choose reading modes, offer reading-aloud opportunities for struggling readers, and encourage silent
reading for critical tasks.
55.​If you had to redesign the entire experiment, what would you change? We would include more
diverse texts, larger samples, time-tracking, a motivation scale, and post-experiment interviews for
richer insights.
56.​What were the actual pretest and posttest mean scores for both groups, and how did they
differ? The silent reading group had a pretest mean of 6.18 and a posttest mean of 9.55. The oral
reading group had a pretest mean of 6.45 and a posttest mean of 9.45. Both groups improved, with
silent reading showing a slightly higher gain.
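A quick arithmetic check of the gains implied by these means, using only the figures reported above:

```python
# Pretest and posttest means as reported in the study.
means = {"silent": (6.18, 9.55), "oral": (6.45, 9.45)}
gains = {group: round(post - pre, 2) for group, (pre, post) in means.items()}
# Silent reading gained 3.37 points on average; oral reading gained 3.00.
```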
57. Can you interpret the significance of having a p-value of 1.000 for the difference in literal comprehension posttest scores? A p-value of 1.000 indicates that the observed group means on literal items were identical, so there is no evidence of a difference between groups on that component.
58.​Despite no statistically significant difference, were there any patterns or trends in the data that
suggest a meaningful difference? Yes, while differences weren’t statistically significant, the silent
group had marginally higher gains in inferential comprehension, hinting at potential practical relevance.
59.​Was there a difference in the standard deviation between groups, and what does this tell you
about score variability? Yes, the oral group had slightly higher standard deviation in posttest scores,
indicating greater variability in their performance.
60. How do you interpret the effect size, even if it is not statistically significant? Did you calculate Cohen's d or any other effect size? Effect size calculations were not formally included, but based on the mean differences and standard deviations, the effect appears very small, implying minimal practical difference.
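If the group means and standard deviations were tabulated, Cohen's d with a pooled standard deviation could be added post hoc. A minimal sketch with made-up score lists, not the study's data:

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical posttest scores for illustration only:
d = cohens_d([9, 10, 11], [7, 8, 9])
```

By the usual rules of thumb, |d| near 0.2 is small, 0.5 medium, and 0.8 large.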
61.​Do the inferential comprehension scores suggest that one group struggled more with
higher-order thinking? Slightly lower mean scores in inferential questions were observed for the oral
group, which may suggest articulation demands hindered deeper processing.
62.​Given that both groups improved, what does this imply about the general impact of practice or
exposure to narrative texts? It implies that both oral and silent reading can enhance comprehension
when supported by structured, meaningful texts and repeated exposure.
63.​Were there any outliers in the data, and how did you handle them? No extreme outliers were noted
during analysis. All participant data were retained in the analysis.
64. How did the assumption violations (e.g., normality) affect your confidence in the statistical tests used? We addressed these by using Welch's t-test, which does not assume equal variances and is reasonably robust to mild normality violations, helping preserve the validity of the comparisons.
65.​Would a larger sample size likely reveal significant differences based on your current mean
differences and variances? Possibly. The current sample size may have been underpowered to
detect small differences, so a larger sample might yield significant results.
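A rough sense of the sample size needed can come from a normal-approximation power formula; this is a back-of-the-envelope sketch under standard assumptions (alpha = .05 two-sided, 80% power), not a substitute for a proper power analysis:

```python
from math import ceil

def n_per_group(d, z_alpha=1.96, z_power=0.8416):
    """Approximate n per group for a two-sided two-sample test
    at alpha = .05 with 80% power (normal approximation)."""
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Detecting a medium effect (d = 0.5) needs roughly 63 per group,
# far more than the roughly 11 per group in the present study.
needed = n_per_group(0.5)
```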
66.​Can you discuss the practical significance of your findings even if they are not statistically
significant? Practically, both reading modes seem equally effective for college readers, allowing
flexible instructional choices based on preference or context.
67.​How would the results change if you analyzed subgroups (e.g., based on reading habits or
English proficiency)? Subgroup analysis might reveal that students with stronger English proficiency
or visual learning preferences benefit more from silent reading.
68.​Were there any observed ceiling or floor effects in the data distribution? The posttest scores
approached the maximum possible score, suggesting a ceiling effect that might have masked subtle
differences.
69.​How reliable are your instruments in distinguishing between literal and inferential
comprehension? The instrument used expert-validated questions and was previously standardized for
ESL learners, making it reasonably reliable.
70.​What do the individual score distributions tell you about how consistent comprehension was
across participants in each group? The scores were relatively clustered, especially in the silent
group, suggesting consistent performance. The oral group showed more variability.
71.​Do your results support or contradict your initial hypothesis? How so? The results partially
support our hypothesis. Although silent readers scored slightly higher, the difference was not significant,
suggesting comparable efficacy.
72.​If you were to create a visual representation (e.g., bar chart or box plot), what key differences
would stand out? A bar chart would show a small advantage for silent reading in mean posttest
scores, and a box plot might show tighter clustering in the silent group.
73. Was there a statistically significant difference in gains from pretest to posttest within each group? Both groups improved, but within-group gains were not statistically tested because the analysis focused on between-group comparisons; visual inspection shows a positive change in both groups.
74. Did both groups perform similarly on literal vs. inferential items, or was there divergence? Both groups performed better on literal items. The silent group had a slight advantage on inferential items, suggesting they processed deeper meaning more effectively.
75.​What possible confounding factors might explain the lack of statistical significance in your
findings? Possible confounders include individual differences in reading speed, motivation, prior
knowledge, and learning preferences—all of which were not controlled.
