Assignment No. 2 COURSE: Research Methods in Education (8604) Program: B.Ed (1.5 Year)
ASSIGNMENT No. 2
Q.1 What do you mean by a research tool? Discuss different research tools. What is meant
by the validity and reliability of research tools?
Ans.
Anything that becomes a means of collecting information for your study is called a research
tool or a research instrument. For example, observation forms, interview schedules,
questionnaires, and interview guides are all classified as research tools.
Reliability
Test-Retest Reliability
When researchers measure a construct that they assume to be consistent across time, then the
scores they obtain should also be consistent across time. Test-retest reliability is the extent to
which this is actually the case. For example, intelligence is generally thought to be consistent
across time. A person who is highly intelligent today will be highly intelligent next week.
This means that any good measure of intelligence should produce roughly the same scores for
this individual next week as it does today. Clearly, a measure that produces highly
inconsistent scores over time cannot be a very good measure of a construct that is supposed to
be consistent. Again, high test-retest correlations make sense when the construct being
measured is assumed to be consistent over time, which is the case for intelligence, self-
esteem, and the Big Five personality dimensions. But other constructs are not assumed to be
stable over time. The very nature of mood, for example, is that it changes. So a measure of
mood that produced a low test-retest correlation over a period of a month would not be a
cause for concern.
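As a rough illustration (not from the text, and using invented scores), test-retest reliability is usually estimated by correlating the scores from the two administrations of the same measure:

```python
# A minimal sketch: test-retest reliability as the Pearson correlation between
# two administrations of the same measure. The scores below are made-up data.
from scipy.stats import pearsonr

time1 = [102, 115, 98, 130, 110, 95, 121, 108]   # scores at first administration
time2 = [100, 118, 97, 127, 112, 93, 124, 105]   # same people one week later

r, p = pearsonr(time1, time2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```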
Internal Consistency
A second kind of reliability is internal consistency, which is the consistency of people's
responses across the items on a multiple-item measure. In general, all the items on such a
measure are supposed to reflect the same underlying construct, so people's scores on those
items should be correlated with one another. Internal consistency can also apply to behavioural
measures. For example, people might make a series of bets in a simulated game of roulette as
a measure of their level of risk seeking. This measure would be internally consistent to the
extent that individual participants’ bets were consistently high or low across trials.
Like test-retest reliability, internal consistency can only be assessed by collecting and
analyzing data. One approach is to look at a split-half correlation. This involves splitting the
items into two sets, such as the first and second halves of the items or the even- and odd-
numbered items. Then a score is computed for each set of items, and the relationship between
the two sets of scores is examined.
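As a rough sketch with made-up questionnaire data, the split-half idea can be computed as shown below; the Spearman-Brown step is a common correction for the fact that each half is only half as long as the full measure:

```python
# A minimal sketch, using invented data: split-half reliability for a 10-item
# questionnaire, splitting into odd- and even-numbered items and applying the
# Spearman-Brown correction to the half-test correlation.
import numpy as np
from scipy.stats import pearsonr

# rows = respondents, columns = items (illustrative 1-5 ratings)
responses = np.array([
    [4, 5, 4, 4, 5, 3, 4, 5, 4, 4],
    [2, 1, 2, 2, 1, 2, 3, 1, 2, 2],
    [3, 3, 4, 3, 3, 4, 3, 3, 4, 3],
    [5, 4, 5, 5, 4, 5, 5, 4, 5, 5],
    [1, 2, 1, 1, 2, 1, 1, 2, 1, 2],
])

odd_score = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
even_score = responses[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...

r_half, _ = pearsonr(odd_score, even_score)
spearman_brown = (2 * r_half) / (1 + r_half)  # corrects for halving the test length
print(f"Split-half r = {r_half:.2f}, Spearman-Brown corrected = {spearman_brown:.2f}")
```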
Interrater Reliability
Many behavioural measures involve significant judgment on the part of an observer or a
rater. Interrater reliability is the extent to which different observers are consistent in their
judgments. For example, if two observers independently rate how socially skilled each
student in a set of video-recorded interactions is, their ratings should be highly correlated,
and for categorical judgments they should show a high level of agreement.
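As an illustrative sketch (the ratings are invented), agreement between two raters on categorical judgments is often summarised with Cohen's kappa:

```python
# A minimal sketch with invented ratings: interrater agreement for two observers
# who categorise the same 10 behaviours, summarised with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

rater_a = ["aggressive", "prosocial", "neutral", "prosocial", "aggressive",
           "neutral", "neutral", "prosocial", "aggressive", "prosocial"]
rater_b = ["aggressive", "prosocial", "neutral", "neutral", "aggressive",
           "neutral", "prosocial", "prosocial", "aggressive", "prosocial"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```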
Validity
Validity is the extent to which the scores from a measure represent the variable they are
intended to. But how do researchers make this judgment? We have already considered one
factor that they take into account—reliability. When a measure has good test-retest reliability
and internal consistency, researchers should be more confident that the scores represent what
they are supposed to. There has to be more to it, however, because a measure can be
extremely reliable but have no validity whatsoever. As an absurd example, imagine someone
who believes that people’s index finger length reflects their self-esteem and therefore tries to
measure self-esteem by holding a ruler up to people’s index fingers. Although this measure
would have extremely good test-retest reliability, it would have absolutely no validity. The
fact that one person’s index finger is a centimetre longer than another’s would indicate
nothing about which one had higher self-esteem.
Discussions of validity usually divide it into several distinct “types.” But a good way to
interpret these types is that they are other kinds of evidence—in addition to reliability—that
should be taken into account when judging the validity of a measure. Here we consider three
basic kinds: face validity, content validity, and criterion validity.
Face Validity
Face validity is the extent to which a measurement method appears “on its face” to measure
the construct of interest. Most people would expect a self-esteem questionnaire to include
items about whether they see themselves as a person of worth and whether they think they
have good qualities. So a questionnaire that included these kinds of items would have good
face validity. The finger-length method of measuring self-esteem, on the other hand, seems to
have nothing to do with self-esteem and therefore has poor face validity. Although face
validity can be assessed quantitatively—for example, by having a large sample of people rate
a measure in terms of whether it appears to measure what it is intended to—it is usually
assessed informally.
Content Validity
Content validity is the extent to which a measure “covers” the construct of interest. For
example, if a researcher conceptually defines test anxiety as involving both sympathetic
nervous system activation (leading to nervous feelings) and negative thoughts, then his
measure of test anxiety should include items about both nervous feelings and negative
thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and
actions toward something. By this conceptual definition, a person has a positive attitude
toward exercise to the extent that he or she thinks positive thoughts about exercising, feels
good about exercising, and actually exercises. So to have good content validity, a measure of
people’s attitudes toward exercise would have to reflect all three of these aspects. Like face
validity, content validity is not usually assessed quantitatively. Instead, it is assessed by
carefully checking the measurement method against the conceptual definition of the
construct.
Criterion Validity
Criterion validity is the extent to which people’s scores on a measure are correlated with
other variables (known as criteria) that one would expect them to be correlated with. For
example, people’s scores on a new measure of test anxiety should be negatively correlated
with their performance on an important school exam. If it were found that people’s scores
were in fact negatively correlated with their exam performance, then this would be a piece of
evidence that these scores really represent people’s test anxiety. But if it were found that
people scored equally well on the exam regardless of their test anxiety scores, then this would
cast doubt on the validity of the measure.
A criterion can be any variable that one has reason to think should be correlated with the
construct being measured, and there will usually be many of them. For example, one would
expect test anxiety scores to be negatively correlated with exam performance and course
grades and positively correlated with general anxiety and with blood pressure during an
exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s
scores on this measure should be correlated with their participation in “extreme” activities
such as snowboarding and rock climbing, the number of speeding tickets they have received,
and even the number of broken bones they have had over the years. When the criterion is
measured at the same time as the construct, criterion validity is referred to as concurrent
validity; however, when the criterion is measured at some point in the future (after the
construct has been measured), it is referred to as predictive validity (because scores on the
measure have “predicted” a future outcome).
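As a rough illustration with invented scores, assessing criterion validity (here, concurrent validity) amounts to correlating the new measure with the criterion and checking that the correlation has the expected direction and size:

```python
# A minimal sketch with invented scores: correlating a new test-anxiety measure
# with exam performance. A clearly negative correlation would support the
# measure's concurrent validity.
from scipy.stats import pearsonr

anxiety_scores = [30, 45, 22, 50, 38, 27, 41, 35]   # scores on the new measure
exam_marks     = [78, 62, 85, 55, 70, 82, 60, 72]   # criterion measured at the same time

r, p = pearsonr(anxiety_scores, exam_marks)
print(f"Correlation with exam performance: r = {r:.2f} (p = {p:.3f})")
```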
Criteria can also include other measures of the same construct. For example, one would
expect new measures of test anxiety or physical risk taking to be positively correlated with
existing measures of the same constructs. This is known as convergent validity. For example,
when John Cacioppo and Richard Petty created their Need for Cognition Scale, a measure of
how much people value and engage in thinking, they showed that people's scores were
negatively correlated with a measure of dogmatism (which represents a tendency toward
obedience). In the years since it was created, the Need for Cognition Scale has been used in
literally hundreds of studies and has been shown to be correlated with a wide variety of other
variables, including the effectiveness of an advertisement, interest in politics, and juror
decisions (Petty, Briñol, Loersch, & McCaslin, 2009)[2].
Discriminant Validity
Discriminant validity, on the other hand, is the extent to which scores on a measure
are not correlated with measures of variables that are conceptually distinct. For example, self-
esteem is a general attitude toward the self that is fairly stable over time. It is not the same as
mood, which is how good or bad one happens to be feeling right now. So people’s scores on a
new measure of self-esteem should not be very highly correlated with their moods. If the new
measure of self-esteem were highly correlated with a measure of mood, it could be argued
that the new measure is not really measuring self-esteem; it is measuring mood instead.
When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence
of discriminant validity by showing that people’s scores were not correlated with certain
other variables. For example, they found only a weak correlation between people’s need for
cognition and a measure of their cognitive style—the extent to which they tend to think
analytically by breaking ideas into smaller parts or holistically in terms of “the big picture.”
They also found no correlation between people’s need for cognition and measures of their test
anxiety and their tendency to respond in socially desirable ways. All these low correlations
provide evidence that the measure is reflecting a conceptually distinct construct.
Q.2 What is the importance of sample in research? Discuss different sampling techniques
in detail.
Ans.
When you conduct research about a group of people, it’s rarely possible to collect data from
every person in that group. Instead, you select a sample. The sample is the group of
individuals who will actually participate in the research.
To draw valid conclusions from your results, you have to carefully decide how you will select
a sample that is representative of the group as a whole. There are two types of sampling
methods:
Probability sampling involves random selection, allowing you to make strong
statistical inferences about the whole group.
Non-probability sampling involves non-random selection based on convenience or
other criteria, allowing you to easily collect data.
You should clearly explain how you selected your sample in the methodology section of your
paper or thesis.
Population vs sample
First, you need to understand the difference between a population and a sample, and identify
the target population of your research.
The population is the entire group that you want to draw conclusions about.
The sample is the specific group of individuals that you will collect data from.
The population can be defined in terms of geographical location, age, income, and many
other characteristics.
It can be very broad or quite narrow: maybe you want to make inferences about the whole
adult population of your country; maybe your research focuses on customers of a certain
company, patients with a specific health condition, or students in a single school.
It is important to carefully define your target population according to the purpose and
practicalities of your project.
If the population is very large, demographically mixed, and geographically dispersed, it
might be difficult to gain access to a representative sample.
Sampling frame
The sampling frame is the actual list of individuals that the sample will be drawn from.
Ideally, it should include the entire target population (and nobody who is not part of that
population).
Example: Sampling frame. You are doing research on working conditions at Company X.
Your population is all 1000 employees of the company. Your sampling frame is the
company’s HR database which lists the names and contact details of every employee.
Sample size
The number of individuals you should include in your sample depends on various factors,
including the size and variability of the population and your research design. There are
different sample size calculators and formulas depending on what you want to achieve
with statistical analysis.
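One widely used approach, shown here only as an illustrative sketch, is Cochran's formula for estimating a proportion, with an optional finite-population correction; the confidence level, margin of error, and population size below are example values:

```python
# A minimal sketch of one common sample-size calculation (Cochran's formula for
# a proportion). It is not a substitute for the calculators mentioned above.
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05, population=None):
    """z: z-score for the confidence level (1.96 ~ 95%)
    p: expected proportion (0.5 gives the most conservative estimate)
    e: margin of error; population: finite population size (optional)."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    if population:  # finite population correction
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(cochran_sample_size())                 # about 385 for a very large population
print(cochran_sample_size(population=1000))  # about 278 for a population of 1000
```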
Probability sampling methods
Probability sampling means that every member of the population has a chance of being
selected. It is mainly used in quantitative research.
1. Simple random sampling
In a simple random sample, every member of the population has an equal chance of being
selected. Your sampling frame should include the whole population. To conduct this type of
sampling, you can use tools such as random number generators or other techniques that are
based entirely on chance.
Example: Simple random sampling. You want to select a simple random sample of 100
employees of Company X. You assign a number to every employee in the company database
from 1 to 1000, and use a random number generator to select 100 numbers.
2. Systematic sampling
Systematic sampling is similar to simple random sampling, but it is usually slightly easier to
conduct. Every member of the population is listed with a number, but instead of randomly
generating numbers, individuals are chosen at regular intervals.
Example: Systematic sampling. All employees of the company are listed in alphabetical order.
From the first 10 numbers, you randomly select a starting point: number 6. From number 6
onwards, every 10th person on the list is selected (6, 16, 26, 36, and so on), and you end up
with a sample of 100 people.
If you use this technique, it is important to make sure that there is no hidden pattern in the list
that might skew the sample. For example, if the HR database groups employees by team, and
team members are listed in order of seniority, there is a risk that your interval might skip over
people in junior roles, resulting in a sample that is skewed towards senior employees.
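As an illustrative sketch of the procedure just described (the employee list is a stand-in for a real sampling frame):

```python
# A minimal sketch of systematic sampling: a random starting point among the
# first 10 positions, then every 10th employee from an alphabetised list.
import random

employees = [f"Employee {i:04d}" for i in range(1, 1001)]  # placeholder sampling frame
interval = 10
start = random.randint(0, interval - 1)                    # e.g. position 6
sample = employees[start::interval]                        # 100 people in total
print(len(sample), sample[:3])
```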
3. Stratified sampling
Stratified sampling involves dividing the population into subpopulations that may differ in
important ways. It allows you to draw more precise conclusions by ensuring that every
subgroup is properly represented in the sample.
To use this sampling method, you divide the population into subgroups (called strata) based
on the relevant characteristic (e.g. gender, age range, income bracket, job role).
Based on the overall proportions of the population, you calculate how many people should be
sampled from each subgroup. Then you use random or systematic sampling to select a sample
from each subgroup.
Example: Stratified sampling. The company has 800 female employees and 200 male
employees. You want to ensure that the sample reflects the gender balance of the company,
so you sort the population into two strata based on gender. Then you use random sampling on
each group, selecting 80 women and 20 men, which gives you a representative sample of 100
people.
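A rough sketch of this proportional allocation, with placeholder names standing in for the real employee lists:

```python
# A minimal sketch of stratified sampling: proportional allocation (80 women,
# 20 men) followed by simple random sampling within each stratum.
import random

women = [f"Female employee {i}" for i in range(1, 801)]
men   = [f"Male employee {i}" for i in range(1, 201)]

sample_size = 100
total = len(women) + len(men)
sample = (random.sample(women, round(sample_size * len(women) / total)) +
          random.sample(men,   round(sample_size * len(men) / total)))
print(len(sample))  # 100 people: 80 women + 20 men
```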
4. Cluster sampling
Cluster sampling also involves dividing the population into subgroups, but each subgroup
should have similar characteristics to the whole sample. Instead of sampling individuals from
each subgroup, you randomly select entire subgroups.
If it is practically possible, you might include every individual from each sampled cluster. If
the clusters themselves are large, you can also sample individuals from within each cluster
using one of the techniques above. This is called multistage sampling.
This method is good for dealing with large and dispersed populations, but there is more risk
of error in the sample, as there could be substantial differences between clusters. It’s difficult
to guarantee that the sampled clusters are really representative of the whole population.
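As an illustrative sketch (the clusters and their sizes are invented), the difference between taking whole clusters and adding a second sampling stage looks like this:

```python
# A minimal sketch of cluster and multistage sampling: randomly pick whole
# offices (clusters), then optionally sample individuals within each one.
import random

offices = {f"Office {c}": [f"{c}-employee-{i}" for i in range(1, 51)]
           for c in "ABCDEFGHIJ"}                      # 10 clusters of 50 people each

chosen_offices = random.sample(list(offices), 3)       # stage 1: select clusters
cluster_sample = [p for o in chosen_offices for p in offices[o]]               # every member
multistage     = [p for o in chosen_offices for p in random.sample(offices[o], 10)]  # stage 2
print(len(cluster_sample), len(multistage))            # 150 vs 30 participants
```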
Non-probability sampling methods
In a non-probability sample, individuals are selected based on non-random criteria, and not
every individual has a chance of being included.
This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias.
That means the inferences you can make about the population are weaker than with
probability samples, and your conclusions may be more limited. If you use a non-probability
sample, you should still aim to make it as representative of the population as possible.
Non-probability sampling techniques are often used in exploratory and qualitative research.
In these types of research, the aim is not to test a hypothesis about a broad population, but to
develop an initial understanding of a small or under-researched population.
1. Convenience sampling
A convenience sample simply includes the individuals who happen to be most accessible to
the researcher.
This is an easy and inexpensive way to gather initial data, but there is no way to tell if the
sample is representative of the population, so it can’t produce generalizable results.
Example: Convenience sampling. You are researching opinions about student support services
in your university, so after each of your classes, you ask your fellow students to complete
a survey on the topic. This is a convenient way to gather data, but as you only surveyed
students taking the same classes as you at the same level, the sample is not representative of
all the students at your university.
2. Voluntary response sampling
Similar to a convenience sample, a voluntary response sample is mainly based on ease of
access. Instead of the researcher choosing participants, people volunteer themselves (for
example by responding to a public online survey). Voluntary response samples are always at
least somewhat biased, as some people will inherently be more likely to volunteer than others.
3. Purposive sampling
This type of sampling, also known as judgement sampling, involves the researcher using their
expertise to select a sample that is most useful to the purposes of the research.
It is often used in qualitative research, where the researcher wants to gain detailed knowledge
about a specific phenomenon rather than make statistical inferences, or where the population
is very small and specific. An effective purposive sample must have clear criteria and
rationale for inclusion. Always make sure to describe your inclusion and exclusion criteria.
Example: Purposive sampling. You want to know more about the opinions and experiences of
disabled students at your university, so you purposefully select a number of students with
different support needs in order to gather a varied range of data on their experiences with
student services.
4. Snowball sampling
If the population is hard to access, snowball sampling can be used to recruit participants via
other participants. The number of people you have access to “snowballs” as you get in
contact with more people.
We all know the importance of education. It is the most important aspect of any nation's
survival today. Education builds nations; it determines the future of a nation. That is why we
have to adopt our education policies very carefully, because our future depends on these
policies.
Islam also tells us about education and its importance. The real essence of education
according to Islam is "to know ALLAH", but I think in our country we have truly lost sight of
this. Neither our schools nor our madrassas (Islamic education centres) are truly educating our youth in
this regard. In schools, we are just preparing them for "money". We aren't educating them;
we are just producing "money machines". We are only increasing the burden of books on our
children and enrolling them in reputed, big schools for what? Just for social status?
On the other hand, in our madrassas we are preparing people who find it very difficult to adjust
to modern society.
Sometimes it seems that they are from another planet. A madrassa student can hardly compete
even in our own country, let alone in the wider world. He finds it very difficult even to speak
to a schoolboy. It is crystal clear that Islamic education is necessary for Muslims, but it is also a
fact that without modern education no one can compete in this world. There are many
examples of Muslim scholars who not only studied the Holy Quran but also mastered other
subjects such as physics, chemistry, biology, astronomy and many more, with the help of the
Holy Quran. I think that with the current education system we are narrowing the way for our
children instead of widening it. There is no doubt that our children are very talented, both in
schools and in madrassas; we just need to give them proper ways to grow, and give them the
space to become a Quaid-e-Azam Muhammad Ali Jinnah, Allama Iqbal, Sir Syed Ahmed
Khan, Al-Biruni, Ibn al-Haytham, or an Einstein, Newton or Thomas Edison. The education
system we are running is not working anymore. We have to find a way to bridge the gap
between school and madrassa. Robert Maynard Hutchins described it well: "The object of
education is to prepare the young to educate themselves throughout their lives." We should
give our youth the means to educate themselves. Edward Everett said that "Education is a better
safeguard of liberty than a standing army." Sadly, in Pakistan we spend more of our budget on
arms than on education, which says a lot about our attitude towards education. Since 1947, not
a single government has been able to change
this situation. For the price of a grenade, almost 20 to 30 children could go to school for a
whole year; the other side of the picture is that a grenade can kill 20 to 30 grown people. So a
grenade does damage in two ways: it stops children's education and it kills innocent people.
Why do the authorities not think about this? We all know the answer, don't we? Now let us
talk about our policy makers; it seems they are not doing enough. Every year the education
policy is reviewed by the government, but the results stay the same, and the illiteracy rate in
Pakistan is rising according to a recent survey. Somebody starts a "Nai Roshni School",
somebody starts "Parha Likha Punjab" and so on, but is this really to educate Pakistan? I do
not think so. These "people" have been playing with our nation for the last 60 years just for
their own profits and aims. We have to think about our children's education now: are we
educating them in the right way? If not, what should we do? We have to act now; otherwise it
will be too late for Pakistan. The report's major findings and recommendations are:
Although its law requires Pakistan to provide free and compulsory education to all
children between the ages of five and sixteen, millions are still out of school, the
second highest number in the world.
The quality of education in the public school sector remains abysmal, failing to
prepare a fast-growing population for the job market, while a deeply flawed
curriculum fosters religious intolerance and xenophobia.
Poorly regulated madrasas and religious schools are filling the gap left by the dilapidated
public education sector and contributing to religious extremism and sectarian violence.
The state must urgently reverse decades of neglect by increasing expenditure on the
grossly-underfunded education system – ensuring that international aid to this sector
is supplementary to, rather than a substitute for, the state’s financial commitment –
and opt for meaningful reform of the curriculum, bureaucracy, teaching staff and
methodologies.
Research proposal and research report are two terms that often confuse many student
researchers. A research proposal describes what the researcher intends to do in his research
study and is written before the collection and analysis of data. A research report describes the
whole research study and is submitted after the completion of the whole research project.
Thus, the main difference between research proposal and research report is that a research
proposal describes the proposed research and research design whereas a research report
describes the completed research, including the findings, conclusion, and recommendations.
What is a Research Proposal
A research proposal is a brief and coherent summary of the proposed research study, which is
prepared at the beginning of a research project. The aim of a research proposal is to justify
the need for a specific research study and to present the practical methods and ways to
conduct the proposed research. In other words, a research proposal presents the proposed
design of the study and justifies the necessity of the specific research. Thus, a research
proposal describes what you intend to do and why you intend to do it.
A research proposal generally contains the following segments:
Introduction/ Context/ Background
Literature Review
Research Methods and Methodology
Research question
Aims and Objectives
List of References
Each of these segments is indispensable to a research proposal. For example, it’s impossible
to write a research proposal without reading related work and writing a literature review.
Similarly, it's not possible to decide on a methodology without first determining the specific
research questions.
What is a Research Report
A research report is a document that is submitted at the end of a research project. This
describes the completed research project. It describes the data collection, analysis, and the
results as well. Thus, in addition to the sections mentioned above, this also includes sections
such as,
Findings
Analysis
Conclusions
Shortcomings
Recommendations
A research report is also known as a thesis or dissertation. A research report is not a research
plan or a proposed design. It describes what was actually done during the research project and
what was learned from it. Research reports are usually longer than research proposals since
they contain step-by-step processes of the research.
Definition
Research Proposal: A research proposal describes what the researcher intends to do and why
he intends to do it.
Research Report: A research report describes what the researcher has done, why he has done
it, and the results he has achieved.
Order
Research Proposal: Research proposals are written at the beginning of a research proposal
before the research project actually begins.
Research Report: Research reports are completed after the completion of the whole research
project.
Content
Research Proposal: Research proposals contain sections such as introduction/background,
literature review, research questions, methodology, and aims and objectives.
Research Report: Research reports contain sections such as introduction/background,
literature review, research questions, methodology, aims and objectives, findings, analysis,
results, conclusions, recommendations, and citations.
Length
Research Proposal: Research proposals are shorter in length.
Research Report: Research reports are longer than research proposals.
Rules of references for a research report according to the APA Manual (6th edition):
Your references should begin on a new page. Title the new page "References" and
center the title text at the top of the page.
All entries should be in alphabetical order.
The first line of a reference should be flush with the left margin. Each additional line
should be indented (usually accomplished by using the TAB key.)
While earlier versions of APA format required only one space after each sentence, the
new sixth edition of the style manual now recommends two spaces.
The reference section should be double-spaced.
All sources cited should appear both in-text and on the reference page. Any reference
that appears in the text of your report or article must be cited on the reference page,
and any item appearing on your reference page must also be included somewhere in
the body of your text.
Titles of books, journals, magazines, and newspapers should appear in italics.
The exact format of each individual reference may vary somewhat depending on
whether you are referencing an author or authors, a book or journal article, or an
electronic source. It pays to spend some time looking at the specific requirements for
each type of reference before formatting your source list.
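For example, the generic APA (6th edition) patterns are roughly: for a book, Author, A. A. (Year). Title of work: Capital letter also for subtitle. Location: Publisher; and for a journal article, Author, A. A., & Author, B. B. (Year). Title of article. Title of Journal, volume(issue), pages (with the book title and journal title in italics).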
A Few More Helpful Resources
If you are struggling with APA format or are looking for a good way to collect and organize
your references as you work on your research, consider using a free APA citation machine.
These online tools can help generate APA style references, but always remember to
double-check each one for accuracy.
Purchasing your own copy of the official Publication Manual of the American Psychological
Association is a great way to learn more about APA format and have a handy resource to
check your own work against. Looking at examples of APA format can also be very helpful.
While APA format may seem complex, it will become easier once you familiarize yourself
with the rules and format. The overall format may be similar for many papers, but your
instructor might have specific requirements that vary depending on whether you are writing
an essay or a research paper. In addition to your reference page, your instructor may also
require you to maintain and turn in an APA format bibliography.
Q.5 Describe the use of observation, interview and content analysis in qualitative research.
Ans.
Qualitative research is commonly used to explore the perceptions and values that influence behavior, identify unmet
needs, understand how people perceive a marketing message or ad, or to inform a subsequent
phase of quantitative research.
Commonly used qualitative research tools include:
In-depth interviews
Focus groups
Asynchronous focus groups
Qualitative techniques
In-depth Interviews
An in-depth interview is a one-on-one conversation between a researcher and a participant,
conducted in person, by phone, or by web conference. In-depth interviews are useful when the
topic is sensitive or complex, or when participants' individual experiences need to be explored
without the influence of a group.
Focus Groups
A focus group is a moderated discussion with a group of participants; the size of the group
depends on the target audience and mode (online versus in-person). While focus groups have
historically been held in person (face-to-face), they are increasingly conducted virtually using
teleconferencing, web-conferencing, or online collaboration tools. Focus groups are used
when the research objectives will be better accomplished through a dynamic discussion and
sharing of ideas among participants or when it is critical for the client team to observe the
discussion in real-time.
Asynchronous Focus Groups
Also known as “bulletin-board” groups, asynchronous focus groups are threaded discussions
that take place over the course of multiple days. Participants respond to new questions posted
daily by the moderator. The discussion is observable by the client team. Asynchronous
discussions are most useful when participants need time to digest and respond to the
questions and other stimuli, either because the topic is complex (e.g., highly technical
offerings) or because there is a lot of information (e.g., multiple concepts or messages). The
asynchronous nature of the discussion also enables the client and research team to consider
and react to findings that emerge during the discussion (e.g., use feedback from the group to
revise and re-test a concept). This methodology can be the best way to reach target audiences
who are difficult to schedule (e.g., doctors).
Qualitative Techniques
Isurus moderators employ a range of tools and techniques to make qualitative research
productive, such as projective exercises, laddering and individual exercises. Through
techniques like these as well as effective moderating, we encourage participants to go beyond
superficial, knee-jerk responses to uncover their true opinions and behaviors. Effective
moderation is critical to the success of qualitative research, regardless of the specific methodology used.
Each Isurus moderator brings more than 15 years of experience with a range of qualitative
research approaches. The moderator, typically an Isurus principal, is an integral part of the
project team from start to finish, and plays a key role in translating the business objectives
into productive research, analyzing the data, and presenting the results.
Characteristics of any three tools for qualitative research:
One characteristic of qualitative research is that it promotes a more diverse range of responses
from those who are asked or surveyed. This is because human behaviour is taken into
consideration more than metrics or numbers, which makes the results more difficult to
analyse, owing to the variety of rules for interpreting the responses. It is challenging, but at
the same time it can be fun.
Yet another characteristic of qualitative research relates to time and cost. This type of
research can be pricey and time-consuming because of the time that the analysis of the
responses may take, and, as the saying goes, "time means money". Many questions have been
voiced about the value of qualitative research, especially in recent years. Most of the
criticism comes from those who believe that the evidence is strictly circumstantial and lacks
the hard metrics needed to prove anything.
In conclusion, although the discussion here has been around the characteristics of qualitative
research, it is important to emphasize that both qualitative and quantitative research methods
form two different schools of research. On the surface it seems that qualitative research
concerns the quality of research while quantitative research deals simply with numerical
research. Qualitative researchers seek to appraise things as they are seen by humans, making
an effort to look at a realistic representation of life and providing an interpretative
understanding of that mental picture. Admittedly, qualitative research is not a hard science,
and it will continue to draw criticism from quantitative researchers. Neither of these schools
of thought is superior to the other, and when carried out correctly both provide what is needed
for good research.