Question Paper Answer
Definitions:
1. Marketing Orientation:
Marketing orientation is a business approach that prioritizes understanding and meeting
customer needs and desires. Companies adopting this approach focus on delivering value to
customers, often involving extensive market research to identify preferences, behaviors, and
trends.
2. Product Orientation:
Product orientation emphasizes the internal focus of an organization on developing and
improving products, often assuming that a superior product will naturally attract customers.
It prioritizes innovation, design, and production quality over customer preferences or
feedback.
A marketing-oriented firm therefore develops and tests products or services that align with consumer demands. In contrast, product orientation assumes that the product itself is sufficient to attract customers, relying less on external feedback and more on internal innovation.
Reason:
Marketing orientation inherently demands insights derived from external data, market surveys, and
customer behavior analysis to shape strategies and maintain competitiveness, making business
research a critical element.
Telephone Interviews:-
• Speed: One advantage of telephone interviewing is the speed of data collection. While data collection with mail or personal interviews can take several weeks, hundreds of telephone interviews can be conducted literally overnight. When the interviewer enters the respondents' answers directly into a computerized system, data processing speeds up even more.
• Cost: As the cost of personal interviews continues to increase, telephone interviews are becoming relatively inexpensive. The cost of telephone interviews is estimated to be less than 25 percent of the cost of door-to-door personal interviews. Travel time and costs are eliminated. However, the typical Internet survey is less expensive than a telephone survey.
• Absence of Face-to-Face Contact: Telephone interviews are more impersonal than face-to-face interviews. Respondents may answer embarrassing or confidential questions more willingly in a telephone interview than in a personal interview.
• Cooperation: One trend is very clear: in the last few decades, telephone response rates have fallen. One way researchers can try to improve response rates is to leave a message on the household's telephone answering machine or voice mail. However, many people will not return a call to help someone conduct a survey.
• Incentives to Respond: Respondents should receive some incentive to respond. Research addresses different types of incentives. For telephone interviews, test-marketing involving different types of survey introductions suggests that not all introductions are equally effective.
• Representative Samples: Practical difficulties complicate obtaining representative samples based on listings in the telephone book. The problem of unlisted phone numbers can be partially resolved through the use of random digit dialing. Random digit dialing eliminates the counting of names in a list (for example, calling every fiftieth name in a column) and subjectively determining whether a directory listing is a business, institution, or legitimate household. In the simplest form of random digit dialing, telephone exchanges (prefixes) for the geographic areas in the sample are obtained. Using a table of random numbers, the last four digits of the telephone number are selected. (A sketch of this procedure follows this list.)
• Callbacks: An unanswered call, a busy signal, or a respondent who is not at home requires a callback. Telephone callbacks are much easier to make than callbacks in personal interviews. However, as mentioned, the ownership of telephone answering machines is growing, and their effects on callbacks need to be studied.
• Limited Duration: Respondents who run out of patience with the interview can merely hang up. To encourage participation, interviews should be relatively short; the length of the telephone interview is definitely limited.
• Lack of Visual Medium: Because visual aids cannot be used in telephone interviews, this method is not appropriate for packaging research, copy testing of television and print advertising, and concept tests that require visual materials.
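The random digit dialing procedure described above is simple enough to sketch in code. The following Python snippet is a minimal illustration only; the three-digit exchanges are invented placeholders, not drawn from any real sampling frame or survey software.

```python
import random

# Hypothetical telephone exchanges (prefixes) for the geographic
# areas included in the sample -- these values are made up.
exchanges = ["212", "415", "617"]

def random_digit_dial(n_numbers: int) -> list[str]:
    """Generate phone numbers by pairing a sampled exchange with
    four randomly chosen final digits, as in simple random digit dialing."""
    numbers = []
    for _ in range(n_numbers):
        prefix = random.choice(exchanges)          # pick an exchange at random
        suffix = f"{random.randint(0, 9999):04d}"  # last four digits, 0000-9999
        numbers.append(f"{prefix}-{suffix}")
    return numbers

print(random_digit_dial(5))  # e.g. ['415-0832', '212-9917', ...]
```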
The presence of an interviewer at the door generally increases the likelihood that a person will be
willing to complete an interview. Because door-to-door interviews increase the participation rate,
they provide a more representative sample of the population than mail questionnaires.
Personal interviews conducted in shopping malls are referred to as mall intercept interviews, or
shopping center sampling. Interviewers typically intercept shoppers at a central point within the mall
or at an entrance. The main reason mall intercept interviews are conducted is because their costs are
lower. No travel is required to the respondent’s home; instead, the respondent comes to the
interviewer, and many interviews can be conducted quickly in this way.
Probability Sampling:-
Probability sampling is also known as random sampling or chance sampling. In this method, a sample is taken in such a manner that each and every unit of the population has an equal and positive chance of being selected. In this way, it is ensured that the sample truly represents the overall population. Probability sampling is achieved by random selection of the sample from among all the units of the population. The major random sampling procedures are:
✓ Simple Random Sample
✓ Systematic Random Sample
✓ Stratified Random Sample
✓ Cluster/Multistage Sample
8C.5.1.1 Simple Random Sample
For this, each member of the population is numbered. Then, a sample of the given size is drawn with the help of a random number chart. The other way is to hold a lottery: write all the numbers on small, uniform pieces of paper, fold the papers, put them in a container and draw the required lot at random from the container, as is done in kitty parties. It is relatively simple to implement, but the final sample may miss small subgroups.
8C.5.1.2 Systematic Random Sample
Systematic sampling is one method in the broader category of random sampling (for this reason, it requires precise control of the sampling frame of selectable individuals and of the probability that they will be selected). It involves choosing a first individual at random from the population, then selecting every following nth individual within the sampling frame to make up the sample. Systematic sampling is a very simple process that requires choosing only one individual at random; the rest of the process is fast and easy. As with simple random sampling, the results are representative of the population, provided that there is no factor intrinsic to the individuals selected that regularly repeats certain characteristics of the population every certain number of individuals, which is very rarely the case.
8C.5.1.3 Stratified Random Sample
At first, the population is divided into groups or strata, each of which is homogeneous with respect to the given characteristic. From each stratum, samples are then drawn at random. This is called stratified random sampling. For example, with respect to socio-economic status, the population may first be grouped into strata such as high, middle, low and very low socio-economic levels as per pre-determined criteria, and a random sample drawn from each group. The sample size for each sub-group can be fixed to get a representative sample. This way, different categories in the population are fairly represented in the sample, which could otherwise have been left out in a simple random sample.
8C.5.1.4 Cluster/Multistage Sampling
In some cases, the selection of units may pass through various stages before you finally reach your sample of study. For this, a State, for example, may be divided into districts, districts into blocks, blocks into villages, and villages into identifiable groups of people, taking a random or quota sample at each stage. For example, taking a random selection of 3 out of 15 districts of a State, 6 blocks from each selected district, 10 villages from each selected block and 20 households from each selected village gives a total of 3,600 respondents. This design is used for large-scale surveys spread over large areas. (A code sketch of these procedures follows.)
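The first three procedures above can be illustrated with a short Python sketch. The population, strata and sampling interval below are invented for illustration; a real study would draw on an actual sampling frame.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: 1,000 numbered units, each tagged with a stratum.
population = [{"id": i, "ses": random.choice(["high", "middle", "low"])}
              for i in range(1, 1001)]

# 1. Simple random sample: every unit has an equal chance of selection.
simple = random.sample(population, k=50)

# 2. Systematic sample: one random start, then every nth (here n = 20) unit.
n = 20
start = random.randint(0, n - 1)
systematic = population[start::n]

# 3. Stratified sample: a fixed-size random sample from each stratum.
stratified = []
for stratum in ["high", "middle", "low"]:
    members = [u for u in population if u["ses"] == stratum]
    stratified.extend(random.sample(members, k=15))

print(len(simple), len(systematic), len(stratified))  # 50 50 45
```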
8C.5.2 Non-Probability Sampling:-
Non-probability sampling is any sampling method where some elements of the population have no chance of selection (these are sometimes referred to as 'out of coverage' or 'undercovered'), or where the probability of selection cannot be accurately determined. It involves the selection of elements based on assumptions regarding the population of interest, which form the criteria for selection. Because the selection of elements is non-random, non-probability sampling does not allow the estimation of sampling errors. Non-probability sampling is a non-random and subjective method of sampling where the selection of the population elements comprising the sample depends on the personal judgment or the discretion of the sampler. Non-probability sampling includes:
✓ Accidental/Convenience Sampling
✓ Quota Sampling
✓ Judgment/Subjective/Purposive Sampling
✓ Snowball Sampling
8C.5.2.1 Convenience/Accidental Sampling
Accidental sampling is a type of non-probability sampling in which the sample is drawn from that part of the population which is close to hand; that is, a sample population is selected because it is readily available and convenient. A researcher using such a sample cannot scientifically make generalizations about the total population, because the sample would not be representative enough. For example, if an interviewer were to conduct such a survey at a shopping center early in the morning on a given day, the people he or she could interview would be limited to those present there at that time, and they would not represent the views of other members of society who might be there at different times of day or on different days of the week. This type of sampling is most useful for pilot testing.
8C.5.2.2 Quota Sampling
In quota sampling, the population is first segmented into mutually exclusive subgroups, just as in stratified sampling. Then judgment is used to select the subjects or units from each segment based on a specified proportion. For example, an interviewer may be told to sample 200 females and 300 males between the ages of 45 and 60. In quota sampling the selection of the sample is non-random; for example, interviewers might be tempted to interview those who look most helpful. The problem is that these samples may be biased because not everyone gets a chance of selection. This non-random element is its greatest weakness, and quota versus probability sampling has been a matter of controversy for many years. (A quota-filling sketch follows this section.)
8C.5.2.3 Subjective or Purposive or Judgment Sampling
In this sampling, the sample is selected with a definite purpose in view, and the choice of the sampling units depends entirely on the discretion and judgment of the investigator. This sampling suffers from drawbacks of favouritism and nepotism, depending upon the beliefs and prejudices of the investigator, and thus does not give a representative sample of the population. This sampling method is seldom used and cannot be recommended for general use, since it is often biased by the subjectivity of the investigator. However, if the investigator is experienced and skilled and this sampling is carefully applied, then judgment samples may yield valuable results. Some purposive sampling strategies that can be used in qualitative studies are given below; each strategy serves a particular data gathering and analysis purpose. Extreme Case Sampling: it focuses on cases that are rich in information because they are unusual or special in some way, e.g. the only community in a region that prohibits the felling of trees.
8C.5.2.4 Snowball Sampling
Snowball sampling is a method in which a researcher identifies one member of some population of interest, speaks to him/her, and then asks that person to identify others in the population that the researcher might speak to. This person is then asked to refer the researcher to yet another person, and so on.
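As a rough illustration of how quota sampling differs from probability sampling, the Python sketch below fills fixed gender quotas from whoever "arrives" first, mimicking an interviewer intercepting convenient passers-by. All names and quota counts are invented.

```python
# Hypothetical stream of mall visitors in arrival order (not random):
visitors = [
    {"name": "A", "gender": "F"}, {"name": "B", "gender": "M"},
    {"name": "C", "gender": "F"}, {"name": "D", "gender": "F"},
    {"name": "E", "gender": "M"}, {"name": "F", "gender": "M"},
]

quotas = {"F": 2, "M": 2}  # target count per subgroup
sample = []

# Take visitors in the order they appear until each quota is filled.
for person in visitors:
    if quotas[person["gender"]] > 0:
        sample.append(person)
        quotas[person["gender"]] -= 1

print(sample)  # the first two women and first two men encountered -- non-random
```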
4. What is the purpose of editing? Give some examples of questions that might need editing.
Purpose of Editing
Editing in research refers to the process of reviewing and correcting data collected to ensure it is
accurate, consistent, complete, and free from errors or inconsistencies. The primary purpose of
editing is to enhance the quality and reliability of data for analysis by addressing issues such as
missing data, duplicate responses, or inconsistencies.
1. Incomplete Responses
o Example: "What is your monthly income?" (Left blank by the respondent).
Action: Check if the respondent has skipped other similar questions to identify a
pattern or attempt to follow up if feasible.
2. Inconsistent Answers
o Example: Respondent selects "Yes" for owning a vehicle but later skips a question
asking for the type of vehicle owned.
Action: Verify if this inconsistency can be logically corrected or flagged for exclusion.
o Example: "What is your primary reason for choosing this product?" (Response: "It's
good.")
Action: Attempt to clarify vague responses by categorizing them into predefined
groups or seeking additional context.
4. Duplicate Responses
o Example: Multiple responses from the same individual for the same survey.
Action: Identify and retain the most complete or accurate entry.
5. Coding Errors
o Example: Misclassification of answers during data entry (e.g., Male coded as "1" but
marked "2" in the dataset).
Action: Cross-check and correct such coding errors.
Outcome of Editing
By addressing these types of issues, editing ensures data integrity and facilitates accurate analysis,
leading to meaningful and reliable research findings.
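These editing checks are easy to automate. The following pandas sketch flags the kinds of problems listed above in a small hypothetical survey file; the column names and coding scheme are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical raw survey data illustrating common editing problems.
df = pd.DataFrame({
    "resp_id":      [1, 2, 2, 3, 4],
    "income":       [45000, None, None, 52000, 61000],   # missing value
    "owns_vehicle": ["Yes", "No", "No", "Yes", "Yes"],
    "vehicle_type": ["Car", None, None, None, "Bike"],   # inconsistency for id 3
    "gender_code":  [1, 2, 2, 1, 7],                     # 7 is outside the 1/2 codebook
})

missing = df[df["income"].isna()]                          # incomplete responses
dupes = df[df.duplicated(subset="resp_id", keep="first")]  # duplicate responses
inconsistent = df[(df["owns_vehicle"] == "Yes") & df["vehicle_type"].isna()]
bad_codes = df[~df["gender_code"].isin([1, 2])]            # coding errors

print(len(missing), len(dupes), len(inconsistent), len(bad_codes))
```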
When the client fails to understand their situation or insists on studying an irrelevant problem, the
research is very likely to fail, even if it is done perfectly. Translating a business situation into
something that can be researched is somewhat like translating one language into another. It begins
by coming to a consensus on a decision statement or question. A decision statement is a written
expression of the key question(s) that a research user wishes to answer. It is the reason that research
is being considered. It must be well stated and relevant. The researcher translates this into research
terms by rephrasing the decision statement into one or more research objectives. These are
expressed as deliverables in the research proposal. The researcher then further expresses these in
precise and scientific research terminology by creating research hypotheses from the research
objectives. For simplicity, the term problem definition is adopted here to refer to the process of
defining and developing a decision statement and the steps involved in translating it into more
precise research terminology, including a set of research objectives. If this process breaks down at
any point, the research will almost certainly be useless or even harmful. It will be useless if it
presents results that simply are deemed irrelevant and do not assist in decision making. It can be
harmful both because of the wasted resources and because it may misdirect the company in a poor
direction. Ultimately, it is difficult to say that any one step in the research process is most important.
However, formally defining the problem to be attacked by developing decision statements and
translating them into actionable research objectives must be done well or the rest of the research
process is misdirected. Even a good road map is useless unless you know just where you are going.
All of the roads can be correctly drawn, but they still don’t get you where you want to be. Similarly,
even the best research procedures will not overcome poor problem definition.
Conceptual Development Concepts serve critical functions in science, through their descriptive
powers and as the building-blocks of theory. When concepts are immature, therefore, science
suffers. Consequently, concept development ought to be considered a fundamental scientific activity.
Knowledge of the different approaches to concept development, however, is relatively limited in the
management discipline. Concepts abstract reality. That is, concepts express in words various events
or objects. Concepts, however, may vary in degree of abstraction. For example, the concept of an
asset is an abstract term that may, in the concrete world of reality, refer to a wide variety of things,
including a specific punch press machine in a production shop.
Qualitative business research is research that addresses business objectives through techniques that
allow the researcher to provide elaborate interpretations of market phenomena without depending
on numerical measurement. Its focus is on discovering true inner meanings and new insights.
Qualitative research is very widely applied in practice. There are many research firms that specialize
in qualitative research. Qualitative research is less structured than most quantitative approaches. It
does not rely on self-response questionnaires containing structured response formats. Instead, it is more researcher-dependent in that the researcher must extract meaning from unstructured responses, such as text from a recorded interview or a collage representing the meaning of some experience, such as skateboarding. The researcher interprets the data to extract its meaning and converts it to information.
In social science, one can find many debates about the superiority of qualitative research over
quantitative research or vice versa. We’ll begin by saying that this is largely a superfluous argument
in either direction. The truth is that qualitative research can accomplish research objectives that
quantitative research cannot. It is similarly true, but no more so, that quantitative research can accomplish objectives that qualitative research cannot. The key to successfully using either is to
match the right approach to the right research context. Many good research projects combine both
qualitative and quantitative research. For instance, developing valid survey measures requires first a
deep understanding of the concept to be measured and a description of the way these ideas are
expressed in everyday language. Both of these are tasks best suited for qualitative research.
However, validating the measure formally to make sure it can reliably capture the intended concept
will likely require quantitative research. Also, qualitative research may be needed to separate
symptoms from problems and then quantitative research can follow up to test relationships among
relevant variables. Quantitative business research can be defined as business research that addresses
research objectives through empirical assessments that involve numerical measurement and analysis
approaches. Quantitative research is more apt to stand on its own in the sense that it requires less interpretation. For example, quantitative research is quite appropriate when a research objective
involves a managerial action standard. For example, a salad dressing company considered changing
its recipe. The new recipe was tested with a sample of consumers. Each consumer rated the product
using numeric scales. Management established a rule that a majority of consumers rating the new
product higher than the old product would have to be established with 90 percent confidence before
replacing the old formula. A project like this can involve both quantitative measurement in the form
of numeric rating scales and quantitative analysis in the form of applied statistical procedures.
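The salad dressing example describes a managerial action standard: a majority must prefer the new recipe, established with 90 percent confidence. A one-sided proportion test captures this rule. The counts below are invented, and the statsmodels function is one common way to run such a test, not necessarily the one the original study used.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical taste test: 130 of 230 consumers rated the new recipe higher.
preferred_new, n = 130, 230

# One-sided test of H0: p <= 0.5 against Ha: p > 0.5 (a majority prefers new).
stat, p_value = proportions_ztest(count=preferred_new, nobs=n,
                                  value=0.5, alternative="larger")

# Management's rule: replace the formula only with 90% confidence (alpha = 0.10).
if p_value < 0.10:
    print(f"p = {p_value:.3f}: adopt the new recipe")
else:
    print(f"p = {p_value:.3f}: keep the old recipe")
```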
Ethics are of interest to business scholars because they influence decisions, behaviors, and
outcomes. While scholars have increasingly shown interest in business ethics as a research topic,
there are a mounting number of studies that examine ethical issues at the organizational level of
analysis. When we talk about organizational ethics, we are referring to the set of values that identify
an organization, from within (or, to put another way, how those working in the organization
understand it) as well as from without (the perception of the organization by those who have
dealings with it). Such a set of values can be considered in a broad sense (that is, the set of values
structuring the organization and its practices, be they instrumental or final values, positive or
negative) or in a stricter sense (where we shall refer only to those values that express the vision, the
raison d’être and the commitments of the organization, and that are linked to their corporate and
moral identity)
Disadvantages of Secondary Data
An inherent disadvantage of secondary data is that they were not designed specifically to meet the researchers' needs. Thus, researchers must ask how pertinent the data are to their particular project. To evaluate secondary data, researchers should ask questions such as these:
• Is the subject matter consistent with our problem definition?
• Do the data apply to the population of interest?
• Do the data apply to the time period of interest?
• Do the secondary data appear in the correct units of measurement?
• Do the data cover the subject of interest in adequate detail?
Even when secondary information is available, it can be inadequate. Consider the following typical situations:
• A researcher interested in forklift trucks finds that the secondary data on the subject are included in a broader, less pertinent category encompassing all industrial trucks and tractors. Furthermore, the data were collected five years earlier.
• An investigator who wishes to study individuals earning more than $100,000 per year finds the top category in a secondary study reported at $75,000 or more per year.
• A brewery that wishes to compare its per-barrel advertising expenditures with those of its competitors finds that the units of measurement differ, because some report point-of-purchase expenditures together with advertising and others do not.
Often research entails asking people—called respondents—to provide answers to written or spoken
questions. These interviews or questionnaires collect data through the mail, on the telephone,
online, or face-to-face. Thus, a survey is defined as a method of collecting primary data based on
communication with a representative sample of individuals. Surveys provide a snapshot at a given
point in time. The more formal term, sample survey, emphasizes that the purpose of contacting
respondents is to obtain a representative sample, or subset, of the target population.
Surveys provide a quick, inexpensive, efficient, and accurate means of assessing information about a
population. When properly conducted, surveys offer managers many advantages.
10. What is focus group interview? Explain its advantages in qualitative research.
What Is a Focus Group Interview? The focus group interview is so widely used that many advertising
and research agencies do nothing but focus group interviews. In that sense, it is wrongly treated as synonymous with qualitative research. Nonetheless, focus groups are a very important qualitative research
technique and deserve considerable discussion. A focus group interview is an unstructured, free-
flowing interview with a small group of people, usually between six and ten. Focus groups are led by
a trained moderator who follows a flexible format encouraging dialogue among respondents.
Common focus group topics include employee programs, employee satisfaction, brand meanings,
problems with products, advertising themes, or new-product concepts. The group meets at a central
location at a designated time. Participants may range from consumers talking about hair coloring,
petroleum engineers talking about problems in the "oil patch," and children talking about toys, to employees talking about their jobs. A moderator begins by providing some opening statement to
broadly steer discussion in the intended direction. Ideally, discussion topics emerge at the group’s
initiative, not the moderator’s. Consistent with phenomenological approaches, moderators should
avoid direct questioning unless absolutely necessary.
2. Advantages Of Focus Group Interviews Focus groups allow people to discuss their true feelings,
anxieties, and frustrations, as well as the depth of their convictions, in their own words. While other
approaches may also do much the same, focus groups offer several advantages:
1. Relatively fast
2. Easy to execute
3. Allow respondents to piggyback off each other's ideas
4. Provide multiple perspectives
5. Flexibility to allow more detailed descriptions
6. High degree of scrutiny
a) Speed and Ease: In an emergency situation, three or four group sessions can be conducted, analyzed, and
reported in a week or so. The large number of research firms that conduct focus group interviews
makes it easy to find someone to host and conduct the research. Practically every state in the United
States contains multiple research firms that have their own focus group facilities. Companies with
large research departments likely have at least one qualified focus group moderator so that they
need not outsource the focus group.
b) Piggybacking and Multiple Perspectives: Furthermore, the group
approach may produce thoughts that would not be produced otherwise. The interplay between
respondents allows them to piggyback off of each other’s ideas. In other words, one respondent
stimulates thought among the others and, as this process continues, increasingly creative insights are
possible. A comment by one individual often triggers a chain of responses from the other
participants. The social nature of the focus group also helps bring out multiple views as each person
shares a particular perspective.
c) Flexibility: The flexibility of focus group interviews is advantageous,
especially when compared with the more structured and rigid survey format. Numerous topics can
be discussed and many insights can be gained, particularly with regard to the variations in consumer
behavior in different situations.
d) Scrutiny: A focus group interview allows closer scrutiny in several
ways. First, the session can be observed by several people, as it is usually conducted in a room
containing a two-way mirror. The respondents and moderator are on one side, and an invited
audience that may include both researchers and decision makers is on the other. If the decision
makers are located in another city or country, the session may be shown via a live video hookup.
Either through live video or a two-way mirror, some check on the eventual interpretations is
provided through the ability to actually watch the research being conducted. If the observers have
questions that are not being asked or want the moderator to probe on an issue, they can send a
quick text message with instructions to the moderator.
11. Compare and contrast cross sectional study with longitudinal studies
Key Differences:
• Data Collection: In a cross-sectional study, data is collected only once from each subject; in a longitudinal study, data is collected multiple times from the same subjects.
• Cost and Resources: Cross-sectional studies are typically less expensive and require fewer resources; longitudinal studies are more costly and resource-intensive due to repeated data collection.
13. List three criteria for good measurement. Distinguish various levels of measurement
The three major criteria for good measurement are reliability, validity, and sensitivity.
Types of Reliability:
1) Test-Retest Reliability: The most obvious method for finding the reliability of test scores is by repeating the identical test on a second occasion. Test-retest reliability is a measure of reliability obtained by administering the same test twice over a period of time to a group of individuals. For example, a test designed to assess student learning in psychology could be given to a group of students twice, with the second administration perhaps coming a week after the first. The obtained correlation coefficient would indicate the stability of the scores.
2) Split-Half Reliability: Split-half reliability is a subtype of internal consistency reliability. In split-half reliability we randomly divide all items that purport to measure the same construct into two sets. We administer the entire instrument to a sample of people and calculate the total score for each randomly divided half. The most commonly used method to split the test into two is the odd-even strategy.
3) Inter-Rater Reliability: Inter-rater reliability is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions. Inter-rater reliability is also known as inter-observer reliability or inter-coder reliability. It is useful because human observers will not necessarily interpret answers the same way; raters may disagree as to how well certain responses or material demonstrate knowledge of the construct or skill being assessed. Inter-rater reliability might be employed when different judges are evaluating the degree to which art portfolios meet certain standards, and it is especially useful when judgments can be considered relatively subjective.
4) Parallel-Forms Reliability: Parallel-forms reliability is a measure of reliability obtained by administering different versions of an assessment tool to the same group of individuals. The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.
5) Coefficient Alpha (α): This is the most commonly applied estimate of a multiple-item scale's reliability. Coefficient α represents internal consistency by computing the average of all possible split-half reliabilities for a multiple-item scale. The coefficient demonstrates whether or not the different items converge. Although coefficient α does not address validity, many researchers use α as the sole indicator of a scale's quality. Coefficient alpha ranges in value from 0, meaning no consistency, to 1, meaning complete consistency. (A computation sketch follows.)
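Coefficient alpha is straightforward to compute from an item-score matrix using the standard formula α = (k/(k−1))·(1 − Σ item variances / variance of total score). The five-item, four-respondent data matrix below is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Compute coefficient alpha for a (respondents x items) score matrix."""
    k = items.shape[1]                          # number of scale items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings: 4 respondents x 5 items on one scale.
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3],
])

print(f"alpha = {cronbach_alpha(scores):.2f}")
```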
Validity
Validity refers to whether the measure actually measures what it is supposed to measure. If a measure is unreliable, it is also invalid: if you do not know what it is measuring, it certainly cannot be said to be measuring what it is supposed to be measuring. On the other hand, you can have a measure that is consistent (reliable) yet still invalid. For example, if we measure income level by asking someone how many years of formal education they have completed, we will get consistent results, but education is not income (although they are positively related). In general, validity is an indication of how sound your research is. More specifically, validity applies to both the design and the methods of your research. Validity in data collection means that your findings truly represent the phenomenon you are claiming to measure. Valid claims are solid claims. There are two main types of validity, internal and external. Internal validity refers to the validity of the measurement and test itself, whereas external validity refers to the ability to generalize the findings to the target population.
8A.5.2.1 Types of Validity
1) Face Validity: Face validity refers to the degree to which a test appears to measure what it purports to measure. Stakeholders can easily assess face validity. Although this is not a very 'scientific' type of validity, it may be an essential component in enlisting the motivation of stakeholders. If the stakeholders do not believe the measure is an accurate assessment of the ability, they may become disengaged with the task. For example, if a measure of art appreciation is created, all of the items should be related to the different components and types of art. If the questions are about historical time periods, with no reference to any artistic movement, stakeholders may not be motivated to give their best effort or invest in this measure because they do not believe it is a true assessment of art appreciation.
2) Predictive Validity: Predictive validity refers to whether a new measure of something has the same predictive relationship with something else that the old measure had. In predictive validity, we assess the operationalization's ability to predict something it should theoretically be able to predict. For example, we might theorize that a measure of math ability should be able to predict how well a person will do in an engineering-based profession. We could give our measure to experienced engineers and see if there is a high correlation between scores on the measure and their salaries as engineers. A high correlation would provide evidence for predictive validity; it would show that our measure can correctly predict something that we theoretically think it should be able to predict.
3) Criterion-Related Validity: Criterion validity is a test of a measure when the measure has several different parts or indicators in it (compound measures). Each part or criterion of the measure should have a relationship with all the parts in the measure for the variable to which the first measure is related in a hypothesis. When you are expecting a future performance based on the scores obtained currently by the measure, correlate the scores obtained with the performance. The later performance is called the criterion and the current score is the prediction. It is used to predict future or current performance; it correlates test results with another criterion of interest. For example, suppose a physics program designed a measure to assess cumulative student learning throughout the major. The new measure could be correlated with a standardized measure of ability in this discipline, such as the GRE subject test. The higher the correlation between the established measure and the new measure, the more faith stakeholders can have in the new assessment tool.
4) Content Validity: In content validity, you essentially check the operationalization against the relevant content domain for the construct. This approach assumes that you have a good detailed description of the content domain, something that is not always true. In content validity, the criterion is the construct definition itself; it is a direct comparison. In criterion-related validity, we usually make a prediction about how the operationalization will perform based on our theory of the construct. When we want to find out if the entire content of the behavior/construct/area is represented in the test, we compare the test task with the content of the behavior. This is a logical method, not an empirical one.
5) Convergent Validity: Convergent validity refers to whether two different measures of presumably the same thing are consistent with each other, i.e., whether they converge to give the same measurement. In convergent validity, we examine the degree to which the operationalization is similar to (converges on) other operationalizations that it theoretically should be similar to. For example, to show the convergent validity of a test of arithmetic skills, we might correlate the scores on the test with scores on other tests that purport to measure basic math ability, where high correlations would be evidence of convergent validity. Or, if SAT scores and GRE scores are convergent, then someone who scores high on one test should also score high on the other. Different measures of ideology should classify the same people the same way; if they do not, then they lack convergent validity.
6) Concurrent Validity: Concurrent validity is the degree to which the scores on a test are related to the scores on another, already established test administered at the same time, or to some other valid criterion available at the same time. This compares the results from a new measurement technique to those of a more established technique that claims to measure the same variable, to see if they are related. In concurrent validity, we assess the operationalization's ability to distinguish between groups that it should theoretically be able to distinguish between. For example, if we come up with a way of assessing manic-depression, our measure should be able to distinguish between people who are diagnosed manic-depressive and those diagnosed paranoid schizophrenic. If we want to assess the concurrent validity of a new measure of empowerment, we might give the measure to both migrant farm workers and to the farm owners, theorizing that our measure should show that the farm owners are higher in empowerment. As in any discriminating test, the results are more powerful if you are able to show that you can discriminate between two groups that are very similar.
7) Construct Validity: Construct validity is used to ensure that the measure actually measures what it is intended to measure (i.e. the construct), and not other variables. Using a panel of 'experts' familiar with the construct is one way in which this type of validity can be assessed; the experts can examine the items and decide what each specific item is intended to measure. Construct validity concerns whether the measurements of a variable in a study behave in exactly the same way as the variable itself; this involves examining past research regarding different aspects of the same variable. It is also the degree to which a test measures an intended hypothetical construct. For example, suppose we want to validate a measure of anxiety. If we have a hypothesis that anxiety increases when subjects are under the threat of an electric shock, then the threat of an electric shock should increase anxiety scores.
8) Formative Validity: When applied to outcomes assessment, formative validity is used to assess how well a measure is able to provide information to help improve the program under study. For example, when designing a rubric for history, one could assess students' knowledge across the discipline. If the measure can provide information that students are lacking knowledge in a certain area, for instance the Civil Rights Movement, then that assessment tool is providing meaningful information that can be used to improve the course or program requirements.
9) Sampling Validity: Sampling validity ensures that the measure covers the broad range of areas within the concept under study. Not everything can be covered, so items need to be sampled from all of the domains. This may need to be completed using a panel of 'experts' to ensure that the content area is adequately sampled; additionally, a panel can help limit 'expert' bias. For example, when designing an assessment of learning in the theatre department, it would not be sufficient to cover only issues related to acting. Other areas of theatre, such as lighting, sound, and the functions of stage managers, should all be included. The assessment should reflect the content area in its entirety.
10) Discriminant Validity: In discriminant validity, we examine the degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to. For example, to show the discriminant validity of a Head Start program, we might gather evidence that shows that the program is not similar to other early childhood programs that don't label themselves as Head Start programs.
8A.5.2.2 Reliability Versus Validity
Reliability is a necessary but not sufficient
condition for validity. A reliable scale may not be valid. For example, a purchase intention
measurement technique may consistently indicate that 20 percent of those sampled are willing to
purchase a new product. Whether the measure is valid depends on whether 20 percent of the
population indeed purchases the product. A reliable but invalid instrument will yield consistently
inaccurate results.
8A.5.3 Sensitivity
The sensitivity of a scale is an important measurement concept,
particularly when changes in attitudes or other hypothetical constructs are under investigation.
Sensitivity refers to an instrument’s ability to accurately measure variability in a concept. A
dichotomous response category, such as “agree or disagree,” does not allow the recording of subtle
attitude changes. A more sensitive measure with numerous categories on the scale may be needed.
For example, adding “strongly agree,” “mildly agree,” “neither agree nor disagree,” “mildly disagree,”
and “strongly disagree” will increase the scale’s sensitivity. The sensitivity of a scale based on a single
question or single item can also be increased by adding questions or items. In other words, because
composite measures allow for a greater range of possible scores, they are more sensitive than single-
item scales. Thus, sensitivity is generally increased by adding more response points or adding scale
items.
8A.2.1 Nominal Scale
The nominal scale (also called dummy coding) simply places people, events,
perceptions, etc. into categories based on some common trait. Some data are naturally suited to the
nominal scale such as males vs. females, white vs. black vs. blue, and American vs. Asian. The
nominal scale forms the basis for such analyses as Analysis of Variance (ANOVA) because those
analyses require that some category is compared to at least one other category. The nominal scale is
the lowest form of measurement because it doesn’t capture information about the focal object other
than whether the object belongs or doesn’t belong to a category; either you are a smoker or not a
smoker, you attended university or you didn’t, a subject has some experience with computers, an
average amount of experience with computers, or extensive experience with computers. No data is
captured that can place the measured object on any kind of scale, say, for example, on a continuum
from one to ten. Coding of nominal scale data can be accomplished using numbers, letters, labels, or
any symbol that represents a category into which an object can either belong or not belong. In
research activities a Yes/No scale is nominal. It has no order and there is no distance between Yes
and No. The statistics which can be used with nominal scales are in the non-parametric group, the most likely ones being the mode and crosstabulation with chi-square. There are also highly sophisticated modelling techniques available for nominal data.
8A.2.2 Ordinal Scale
An ordinal level of measurement uses symbols to classify observations into
categories that are not only mutually exclusive and exhaustive; in addition, the categories have some
explicit relationship among them. For example, observations may be classified into categories such as
taller and shorter, greater and lesser, faster and slower, harder and easier, and so forth. However,
each observation must still fall into one of the categories (the categories are exhaustive) but no more
than one (the categories are mutually exclusive). Most of the commonly used questions which ask
about job satisfaction use the ordinal level of measurement. For example, asking whether one is very
satisfied, satisfied, neutral, dissatisfied, or very dissatisfied with one’s job is using an ordinal scale of
measurement. The simplest ordinal scale is a ranking.
8A.2.3 Interval Scale
An interval level of measurement classifies observations into categories that are
not only mutually exclusive and exhaustive, and have some explicit relationship among them, but the
relationship between the categories is known and exact. This is the first quantitative application of
numbers. In the interval level, a common and constant unit of measurement has been established
between the categories. For example, the commonly used measures of temperature are interval level
scales. We know that a temperature of 75 degrees is one degree warmer than a temperature of 74
degrees. Numbers may be assigned to the observations because the relationship between the
categories is assumed to be the same as the relationship between numbers in the number system.
For example, 74+1= 75 and 41+1= 42. The intervals between categories are equal, but they originate
from some arbitrary origin, that is, there is no meaningful zero point on an interval scale. The
standard survey rating scale is an interval scale. When you are asked to rate your satisfaction with a
piece of software on a 7 point scale, from Dissatisfied to Satisfied, you are using an interval scale.
Interval scale data would use parametric statistical techniques - Mean and standard deviation;
Correlation; Regression; Analysis of variance; Factor analysis; and whole range of advanced
multivariate and modelling techniques.
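The practical consequence of these levels is which statistics are meaningful. The pandas sketch below tags hypothetical survey columns with their level and applies an appropriate summary to each; all the data values are invented.

```python
import pandas as pd

df = pd.DataFrame({
    "gender":       ["M", "F", "F", "M", "F"],        # nominal
    "satisfaction": [5, 3, 4, 2, 4],                   # ordinal (1-5 rating)
    "temperature":  [74.0, 75.5, 73.2, 76.1, 74.8],    # interval
})

# Nominal: only counts and the mode are meaningful.
print(df["gender"].mode()[0], df["gender"].value_counts().to_dict())

# Ordinal: order is meaningful, so the median is a sensible summary.
print(df["satisfaction"].median())

# Interval: equal units justify means and standard deviations.
print(df["temperature"].mean(), df["temperature"].std())
```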
The following steps are involved in the questionnaire design process:
1. Specify the Information Needed: The first and foremost step in designing the questionnaire is to specify the information needed from the respondents such that the objective of the survey is fulfilled. The researcher must completely review the components of the problem, particularly the hypothesis, research questions, and the information needed.
2. Define the Target Respondent: At the very outset, the researcher must identify the target respondent from whom the information is to be collected. The questions must be designed keeping in mind the type of respondents under study; questions that are appropriate for a serviceman might not be appropriate for a businessman. A less diversified respondent group should be selected, because the more diversified the group is, the more difficult it will be to design a single questionnaire that is appropriate for the entire group.
3. Specify the Type of Interviewing Method: The next step is to identify the way in which the respondents are reached. In personal interviews, the respondent is presented with a questionnaire and interacts face-to-face with the interviewer; thus, lengthy, complex and varied questions can be asked using the personal interview method. In telephone interviews, the respondent is required to give answers over the telephone; here the respondent cannot see the questionnaire, and hence this method restricts the questions to short, simple and precise ones. The questionnaire can also be sent through mail or post; it should be self-explanatory and contain all the important information, such that the respondent is able to understand every question and give a complete response. Electronic questionnaires are sent directly to the email ids of the respondents, who are required to give their answers online.
4. Determine the Content of Individual Questions: Once the information needed is specified and the interviewing method is determined, the next step is to decide the content of the questions. The researcher must decide what should be included in each question such that it contributes to the information needed or serves some specific purpose. In some situations, indirect questions which are not directly related to the information needed may be asked. It is useful to ask neutral questions at the beginning of a questionnaire with the intent of establishing respondent involvement and rapport; this is mainly done when the subject of a questionnaire is sensitive or controversial. The researcher must also avoid double-barrelled questions, i.e., questions that address two issues simultaneously, such as "Is Real juice tasty and a refreshing health drink?"
5. Overcome the Respondent's Inability and Unwillingness to Answer: The researcher should not presume that the respondent can provide accurate responses to all the questions, and must attempt to overcome the respondent's inability to answer. The questions must be designed in simple and easy language that is understood by every respondent. In situations where the respondent is not at all informed about the topic of interest, the researcher may ask filter questions, initial questions asked in the questionnaire to identify prospective respondents and ensure that they fulfil the requirements of the sample. Even when able to answer a question, a respondent may be unwilling to devote time to providing information. The researcher must attempt to understand the reasons behind such unwillingness and design the questionnaire in a way that helps retain the respondent's attention.
6. Decide on the Question Structure: The researcher must decide on the structure of the questions to be included in the questionnaire. Questions can be structured or unstructured. Unstructured questions are open-ended questions which respondents answer in their own words; these are also called free-response or free-answer questions. Structured questions are closed-ended questions that prespecify the response alternatives; these could be multiple-choice questions, dichotomous (yes or no) questions, or scales.
7. Determine the Question Wording: The desired question content and structure must be translated into words which are easily understood by the respondents. At this step, the researcher must phrase the questions in easy words such that the information received from the respondents is similar to what was intended. If a question is written poorly, the respondent might refuse to answer it or might give a wrong answer. If the respondent is reluctant to give answers, "nonresponse" arises, which increases the complexity of data analysis; if wrong information is given, "response error" arises, due to which the results are biased.
8. Determine the Order of Questions: At this step, the researcher must decide the sequence in which the questions are to be asked. The opening questions are crucial in establishing the respondent's involvement and rapport, and therefore these questions must be interesting, non-threatening and easy. Usually, open-ended questions which ask respondents for their opinions are considered good opening questions, because people like to express their opinions.
9. Identify the Form and Layout: The format, positioning and spacing of questions have a significant effect on the results. The layout of a questionnaire is especially important for self-administered questionnaires. The questionnaire should be divided into several parts, and each part should be numbered accurately to clearly define the branches of a question.
10. Reproduction of the Questionnaire: This concerns the appearance of the questionnaire, i.e. the quality of paper on which the questionnaire is written or printed. If the questionnaire is reproduced on poor-quality paper, the respondent might feel the research is unimportant, due to which the quality of response gets adversely affected. Thus, it is recommended to reproduce the questionnaire on good-quality paper with a professional appearance. If the questionnaire runs to several pages, it should be presented in the form of a booklet rather than sheets clipped or stapled together.
11. Pretesting: Pretesting means testing the questionnaire on a few selected respondents or a small sample of actual respondents with the purpose of improving the questionnaire by identifying and eliminating potential problems. All aspects of the questionnaire must be tested, such as question content, structure, wording, sequence, form and layout, instructions, and question difficulty. The researcher must ensure that the respondents in the pretest are similar to those who will finally be surveyed.
16. Discuss how to choose an appropriate sample design, as well as challenges for Internet
sampling.
There are basically two types of errors in hypothesis testing; creatively, they are called Type I and Type II errors. Both types of error relate to incorrect conclusions about the null hypothesis.
9C.2.4.1 Type I Errors
These errors occur when the null hypothesis is rejected even though it is true; a Type I error has a probability of alpha (α). It occurs when the researcher concludes that a relationship or difference exists when in reality it does not.
9C.2.4.2 Type II Errors
These errors occur when we fail to reject the null hypothesis even though the alternative hypothesis is true; a Type II error has a probability of beta (β). In a Type II error, the researcher concludes that no relationship or difference exists when in fact one does. (A small simulation sketch follows.)
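A quick way to see what "probability alpha" means is to simulate many tests under a true null hypothesis and count the false rejections. The sketch below is a rough illustration: it draws samples where H0 is true and checks how often a one-sample t-test rejects at α = 0.05; by construction the rejection rate should hover near 5 percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
rejections = 0
n_trials = 10_000

for _ in range(n_trials):
    # H0 is true: the sample really does come from a population with mean 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    if p < alpha:          # rejecting here is a Type I error
        rejections += 1

print(f"Type I error rate: {rejections / n_trials:.3f}")  # close to 0.05
```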
1. Making a formal statement: This step consists in making a formal statement of the null hypothesis (H0) and of the alternative hypothesis (Ha or H1). The hypotheses should be clearly stated, considering the nature of the research problem.
2. Selecting a significance level: The hypotheses are tested on a pre-determined level of significance, and as such the same should be specified. Generally, in practice, either the 5% level or the 1% level is adopted for the purpose.
3. Deciding the distribution to use: After deciding the level of significance, the next step in hypothesis testing is to determine the appropriate sampling distribution. The choice generally lies between the normal distribution and the t-distribution.
4. Selecting a random sample and computing an appropriate value: The next step is to select a random sample (or samples) and compute an appropriate value of the test statistic from the sample data using the relevant distribution. In other words, draw a sample to furnish empirical data.
5. Calculation of the probability: One then calculates the probability that the sample result would diverge as widely as it has from expectations if the null hypothesis were in fact true.
6. Comparing the probability and decision making: The final step consists in comparing the probability thus calculated with the specified value of α, the significance level. If the calculated probability is equal to or smaller than the α value in the case of a one-tailed test (and α/2 in the case of a two-tailed test), then reject the null hypothesis (i.e., accept the alternative hypothesis); but if the calculated probability is greater, then accept the null hypothesis. (These steps are illustrated in the sketch below.)
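The six steps map directly onto a few lines of code. This sketch, built on invented sample data, tests H0: the population mean satisfaction score equals 3 against a two-sided alternative at the 5% level, using the t-distribution because the sample is small and the population variance is unknown.

```python
from scipy import stats

# Steps 1-3: H0: mu = 3, Ha: mu != 3; alpha = 0.05; t-distribution.
alpha = 0.05

# Step 4: a (hypothetical) random sample of satisfaction scores.
sample = [3.4, 2.8, 3.9, 4.1, 3.2, 3.7, 2.9, 3.8, 4.0, 3.5]

# Step 5: probability of a result this extreme if H0 were true.
t_stat, p_value = stats.ttest_1samp(sample, popmean=3.0)

# Step 6: compare the probability with alpha and decide.
if p_value <= alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}: fail to reject H0")
```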
To choose an appropriate sample design, researchers must consider the following factors:
1. Research Objective:
o Clearly define the goals of the study, such as estimating population parameters, studying relationships, or testing hypotheses.
2. Target Population:
o Clearly identify the group the study is meant to describe, since the sampling approach must suit that group's characteristics and accessibility.
3. Sampling Frame:
o Ensure the list or database used to select the sample is comprehensive and up-to-date.
4. Type of Sampling:
o Probability Sampling: preferred when results must generalize to the population with a known margin of sampling error.
o Non-Probability Sampling: used when the population is difficult to access or when time and cost are constraints.
5. Sample Size:
o Calculate an appropriate sample size based on the desired confidence level, margin of error, and population size. Larger samples reduce error but increase costs (see the sketch after this list).
6. Time Constraints:
o Consider how quickly results are needed; tight deadlines may favour simpler designs.
Internet sampling is increasingly popular due to its cost-effectiveness and reach, but it presents unique challenges:
1. Sample Representativeness:
o Internet users may differ from the general population, so online samples can under-represent some groups.
2. Self-Selection Bias:
o Participants who choose to respond to online surveys may differ systematically from those who do not, introducing bias.
3. Duplicate Responses:
o The same individual may respond to a survey more than once, and such duplicates can be difficult to detect.
4. Lack of a Sampling Frame:
o The absence of a clear and complete list of internet users makes random sampling difficult.
5. Technical Barriers:
o Respondents without reliable internet access or adequate digital skills are effectively excluded.
6. Ethical Concerns:
o Ensuring privacy and informed consent is challenging when collecting data online.
7. Low Response Rates:
o Internet surveys often suffer from lower response rates, which may affect data quality and reliability.
By carefully selecting the sample design and addressing internet sampling challenges, researchers can improve the validity and reliability of their findings.
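One standard way to compute the sample size mentioned in step 5 uses the formula n = z²·p(1−p)/e² for estimating a proportion, with an optional finite-population correction. The sketch below is a generic illustration, not tied to any particular study.

```python
import math

def sample_size(confidence_z: float, margin: float, p: float = 0.5,
                population: int | None = None) -> int:
    """Required sample size for estimating a proportion p within +/- margin."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin ** 2)
    if population is not None:                     # finite-population correction
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)

# 95% confidence (z = 1.96), 5% margin of error, conservative p = 0.5:
print(sample_size(1.96, 0.05))                    # about 385 respondents
print(sample_size(1.96, 0.05, population=2000))   # about 323 for a finite population
```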
The Chi-Square goodness of fit test is a non-parametric test which is used to find out whether the observed value of a given phenomenon differs significantly from the expected value. In the Chi-Square
goodness of fit test, the term goodness of fit is used to compare the observed sample distribution
with the expected probability distribution. Chi-Square goodness of fit test determines how well
theoretical distribution (such as normal, binomial, or Poisson) fits the empirical distribution. In Chi-
Square goodness of fit test, sample data is divided into intervals. Then the number of points that fall into each interval is compared with the expected number of points in each interval.
Procedure for Chi-Square Goodness of Fit Test: Set up the hypotheses for the Chi-Square goodness of fit test:
A. Null hypothesis: In the Chi-Square goodness of fit test, the null hypothesis assumes that there is no significant difference between the observed and the expected values.
B. Alternative hypothesis: In the Chi-Square goodness of fit test, the alternative hypothesis assumes that there is a significant difference between the observed and the expected values.
• Compute the value of the Chi-Square goodness of fit statistic using the following formula:
χ² = Σ [(Oᵢ − Eᵢ)² / Eᵢ]
where χ² = Chi-Square goodness of fit statistic, Oᵢ = observed value, and Eᵢ = expected value.
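The scipy library provides this test directly. The die-roll counts below are invented; under a fair-die null hypothesis each face is expected 60/6 = 10 times.

```python
from scipy.stats import chisquare

# Hypothetical observed counts for 60 rolls of a die (faces 1-6).
observed = [8, 12, 9, 11, 14, 6]
expected = [10, 10, 10, 10, 10, 10]   # fair die: 60 rolls / 6 faces

# H0: the observed frequencies match the expected (fair-die) distribution.
chi2, p_value = chisquare(f_obs=observed, f_exp=expected)

print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject H0: the observed distribution differs from the expected one")
else:
    print("Fail to reject H0: no significant difference detected")
```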
21. Define the term business research and explain the managerial value of research
Business research is the application of the scientific method in searching for the truth about business
phenomena. These activities include defining business opportunities and problems, generating, and
evaluating alternative courses of action, and monitoring employee and organizational performance.
Business research is more than conducting surveys. This process includes idea and theory
development, problem definition, searching for and collecting information, analyzing data, and
communicating the findings and their implications. This definition suggests that business research
information is not intuitive or haphazardly gathered. Literally, research (re-search) means “to search
again.” The term connotes patient study and scientific investigation wherein the researcher takes
another, more careful look at the data to discover all that is known about the subject. Ultimately, all
findings are tied back to the underlying theory. The definition also emphasizes, through reference to
the scientific method, that any information generated should be accurate and objective. The
nineteenth-century American humorist Artemus Ward claimed, “It ain’t the things we don’t know
that gets us in trouble. It’s the things we know that ain’t so.” In other words, research is not
performed to support preconceived ideas but to test them. The researcher must be personally
detached and free of bias in attempting to find truth. If bias enters the research process, the value of
the research is considerably reduced. We will discuss this further in a subsequent chapter. Our
definition makes it clear that business research is designed to facilitate the managerial decision-
making process for all aspects of the business: finance, marketing, human resources, and so on.
Business research is an essential tool for management in virtually all problem-solving and decision-
making activities. By providing the necessary information on which to base business decisions,
research can decrease the risk of making a wrong decision in each area. However, it is important to
note that research is an aid to managerial decision making, never a substitute for it. Finally, this
definition of business research is limited by one’s definition of business. Certainly, research regarding
production, finance, marketing, and management in for-profit corporations like DuPont is business
research. However, business research is not limited to for-profit firms; nonprofit organizations and governmental agencies can use research in much the same way as managers at Starbucks, Jelly Belly, or DuPont. While the focus is on for-profit
organizations, this book explores business research as it applies to all institutions.
A. Identifying the existence of problems and opportunities
Before any strategy can be developed, an organization must determine where it wants to go and how it will get there. Business research can help managers plan strategies by determining the nature of situations and by identifying the existence of problems or opportunities present in the organization.
B. Diagnosis and assessment
After an organization recognizes a problem or identifies a potential opportunity, an important aspect of business research is often the provision of diagnostic information that clarifies the situation. Managers need to gain insight into the underlying factors causing the situation. If there is a problem, they need to specify what happened and why. If an opportunity exists, they may need to explore, clarify, and refine the nature of the opportunity.
C. Selecting and implementing a course of action
Business research is often conducted to obtain specific information to help evaluate the various alternatives and to select the best course of action based on certain performance criteria.
D. Evaluation of the course of action
Evaluation research is conducted to inform managers whether planned activities were properly executed and whether they accomplished what they were expected to do. It serves an evaluation and control function. Evaluation research is a formal, objective appraisal that provides information about objectives and whether the planned activities accomplished what they were expected to accomplish. This can be done through performance-monitoring research, which is a form of research that regularly provides feedback for the evaluation and control of business activity. If this research indicates things are not going as planned, further research may be required to explain why something “went wrong.”
23. Define the following concepts: proposition, hypothesis, theory, decision support system
Hypothesis testing is a formal procedure for investigating the data of the research. Hypotheses are formal statements of explanations stated in a testable form. In most cases hypotheses should be stated in a concrete manner so that they can be tested empirically. The types of hypothesis tests commonly conducted in business research are as follows: 1. Relational hypotheses 2. Hypotheses about differences between groups 3. Hypotheses about differences from some standard. The factors considered while formulating a hypothesis are:
• A hypothesis should be clear and precise.
• A hypothesis should be capable of being tested.
• A hypothesis should state the relationship between variables.
• A hypothesis should be limited in scope and must be specific.
• A hypothesis should be stated, as far as possible, in the simplest terms so that it is easily understood by all concerned.
• A hypothesis should be amenable to testing within a reasonable time.
• A hypothesis must have an empirical referent.
Large corporations’ decision support systems often contain millions or even hundreds of millions of
records of data. These complex data volumes are too large to be understood by managers. Two
points about data volume are important to keep in mind. First, relevant data are often in
independent and unrelated files. Second, the number of distinct pieces of information each data
record contains is often large. When the number of distinct pieces of information contained in each
data record and data volume grows too large, end users don’t have the capacity to make sense of it
all. Data mining helps clarify the underlying meaning of the data; one common data-mining technique is sketched below.
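The passage does not name a specific data-mining technique; as one common example, here is a minimal clustering sketch with scikit-learn on hypothetical customer records:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer records: [annual spend, number of transactions].
records = np.array([
    [200,  5], [220,  6], [250,  7],    # light buyers
    [900, 40], [950, 42], [880, 38],    # heavy buyers
])

# Group the records into two clusters; the labels reveal structure that
# would be hard to see by scanning millions of raw rows.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
print(model.labels_)
```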
24. Define ethics and explain the importance of ethics in business research with a suitable example
Ethics are of interest to business scholars because they influence decisions, behaviors, and
outcomes. While scholars have increasingly shown interest in business ethics as a research topic,
there are a mounting number of studies that examine ethical issues at the organizational level of
analysis. When we talk about organizational ethics, we are referring to the set of values that identify
an organization, from within (or, to put another way, how those working in the organization
understand it) as well as from without (the perception of the organization by those who have
dealings with it). Such a set of values can be considered in a broad sense (that is, the set of values
structuring the organization and its practices, be they instrumental or final values, positive or
negative) or in a stricter sense (where we shall refer only to those values that express the vision, the
raison d’être and the commitments of the organization, and that are linked to their corporate and
moral identity).
A code of ethics is a formal statement of the organization’s ethics and values that is designed to
guide the employees’ conduct in a variety of business situations. Business ethics relate to corporate
credos like the popular Johnson & Johnson Credo. A corporate credo indicates a company’s
responsibility to its stakeholders, such as individuals and groups who have an interest in the
performance of the enterprise and how it uses its resources. Gomez-Mejia and Balkin (2002) posit
that stakeholders include employees, customers, and shareholders. They state also that a corporate
credo focuses on principles and beliefs that can provide direction in a variety of ethically challenging
situations. Good corporate credos often emphasize corporate social responsibility (CSR), ethical corporate social responsibility (ECSR), and good corporate governance (GCG), as well as the need for
business profitability and sustainability. Sustainability is often confused with CSR, but the two are not
the same. The return of interest in business ethics that began in the 1970s reflected the realization that
businesses could be tempted to act immorally and unethically whenever necessary in pursuit of
profit. This interest grew rapidly in later years and almost reached a crescendo in the 2000s when it
became clearer that many large global businesses like Enron collapsed, for the most part, due to
breaches in GCG and business ethics. It is now believed, more than ever before, that business ethics
are also instrumental to the pursuit of long-term profit for the business, as well as prosperity and
sustainability for the organization and society. Organizational sustainability thrives on integrity of the
board of directors (BODs). Integrity is an ethical issue and for sound corporate performance, the
Organization for Economic Co-operation and Development (OECD) principles of corporate
governance state that the BODs should exercise leadership and judgment, with enterprise and
integrity, to achieve continuing prosperity for the corporation. The BODs should also act in the best
interests of the business enterprise in a manner based on transparency, accountability to
shareholders and responsibility to stakeholders. Emphasis is put on integrity as an important ethical
factor in enterprise prosperity and continuity (Ezeh, 2019). Organizations will not adequately meet their goals and ensure sustainability where there are breaches in business ethics and standards. For example, the accounting and auditing scandals that led to the collapse of Enron, WorldCom and many banks in the 1990s/2000s, and the misfortune of Cadbury Nigeria Plc, border on management ineffectiveness, indiscipline and failure to observe the principles of business ethics. Discipline relates to the theory of ethics, which Kant (1724–1804) framed as the question of what is morally right or wrong in social conduct. Business ethics therefore demands a high degree of discipline among members of the BODs of a company or any other organization, so that the organization is run professionally along ethical lines.
25. Explain the following terms w.r.t. research: phenomenology, ethnography, grounded theory, case studies
Ethnography represents ways of studying cultures through methods that involve becoming highly
active within that culture. Participant-observation typifies an ethnographic research approach.
Participant-observation means the researcher becomes immersed within the culture that he or she is
studying and draws data from his or her observations. A culture can be either a broad culture, like
American culture, or a narrow culture, like urban gangs, Harley-Davidson owners, or skateboarding
enthusiasts. Organizational culture would also be relevant for ethnographic study. At times,
researchers have actually become employees of an organization for an extended period of time. In
doing so, they become part of the culture and over time other employees come to act quite naturally
around the researcher. The researcher may observe behaviors that the employee would never reveal
otherwise. For instance, a researcher investigating the ethical behavior of salespeople may have
difficulty getting a car salesperson to reveal any potentially deceptive sales tactics in a traditional
interview. However, ethnographic techniques may result in the salesperson letting down his or her
guard, resulting in more valid discoveries about the car selling culture.
Grounded theory represents an inductive investigation in which the researcher poses questions
about information provided by respondents or taken from historical records. The researcher asks the
questions to him or herself and repeatedly questions the responses to derive deeper explanations.
Grounded theory is particularly applicable in highly dynamic situations involving rapid and significant
change. Two key questions asked by the grounded theory researcher are “What is happening here?”
and “How is it different?” The distinguishing characteristic of grounded theory is that it does not
begin with a theory but instead extracts one from whatever emerges from an area of inquiry.
6.10 Manipulation of the Independent Variable
The thing that makes independent variables special in experimentation is that the researcher actually creates his or her values. This is how the researcher manipulates, and therefore controls, independent variables. An experimental treatment is the term referring to the way an experimental variable is manipulated.
6.10.1 Experimental and Control Groups
In perhaps the simplest experiment, an independent variable is manipulated over two treatment levels, resulting in two groups, an experimental group and a control group. An experimental group is one in which an experimental treatment is administered. A control group is one in which no experimental treatment is administered.
6.10.2 Several Experimental Treatment Levels
An experiment with one experimental and one control group may not tell a manager everything he or she wishes to know. By analyzing more groups, each with a different treatment level, a more precise result may be obtained than in a simple experimental group–control group experiment. Such a design, manipulating only the level of advertising, can produce only a main effect.
6.10.3 More Than One Independent Variable
An experiment can also be made more complicated by including the effect of another experimental variable. Our extended example of the self-efficacy experiment would typify a still relatively simple two-variable experiment. Since there are two variables, each with two different levels, four experimental groups are required.
6.10.4 Repeated Measures
Experiments in which an individual subject is exposed to more than one level of an experimental treatment are referred to as repeated measures designs. Although this approach has advantages, including being more economical since the same subject provides more data than otherwise, it has several drawbacks that can limit its usefulness.
6.11 Selection and Assignment of Test Units
Test units are the subjects or entities whose responses to the experimental treatment are measured or observed. Individual consumers, employees, organizational units, sales territories, market segments, or other entities may be the test units. People, whether as customers or employees, are the most common test units in most organizational behavior, human resources, and marketing experiments.
6.11.1 Sample Selection and Random Sampling Errors
Systematic or nonsampling error may occur if the sampling units in an experimental cell are somehow different from the units in another cell, and this difference affects the dependent variable.
6.11.2 Randomization
The random assignment of subjects and treatments to groups is one device for equally distributing the effects of extraneous variables to all conditions (a minimal sketch of random assignment appears after this section). These nuisance variables, items that may affect the dependent measure but are not of primary interest, often cannot be eliminated.
6.11.3 Matching
Random assignment of subjects to the various experimental groups is the most common technique used to prevent test units from differing from each other on key variables; it assumes that all characteristics of the subjects have been likewise randomized. Matching the respondents on the basis of pertinent background information is another technique for controlling systematic error by assigning subjects in a way that their characteristics are the same in each group. This is best thought of in terms of demographic characteristics. If a subject’s sex is expected to influence dependent variable responses, as in a taste test, then the researcher may make sure that there are equal numbers of men and women in each experimental group.
6.11.4 Control over Extraneous Variables
The fourth decision about the basic elements of an experiment concerns control over extraneous variables. This is related to the various types of experimental error. In an earlier chapter, we classified total survey error into two basic categories: random sampling error and systematic error. The same dichotomy applies to all research designs, but the terms random (sampling) error and systematic error are more frequently used when discussing experiments.
6.11.5 Experimental Confounds
A confound means that there is an alternative explanation beyond the experimental variables for any observed differences in the dependent variable. Once a potential confound is identified, the validity of the experiment is severely questioned. In a simple experimental group–control group experiment, if subjects in the experimental group are always administered the treatment in the morning and subjects in the control group always receive the treatment in the afternoon, a systematic error occurs.
6.11.5.1 Extraneous Variables
Most business students realize that the marketing mix variables—price, product, promotion, and distribution—interact with uncontrollable forces in the market, such as economic variables, competitor activities, and consumer trends. Thus, many marketing experiments are subject to the effect of extraneous variables. Since extraneous variables can produce confounded results, they must be identified and, where possible, controlled.
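The random-assignment sketch referenced in section 6.11.2, with hypothetical subject IDs:

```python
import random

# Subject IDs are hypothetical. Shuffling and splitting the list gives
# every subject an equal chance of landing in either condition, which
# spreads extraneous-variable effects evenly across groups.
subjects = [f"S{i:02d}" for i in range(1, 21)]
random.seed(42)                      # reproducible illustration only
random.shuffle(subjects)

experimental_group = subjects[:10]   # receives the treatment
control_group = subjects[10:]        # receives no treatment
print("Experimental:", experimental_group)
print("Control:     ", control_group)
```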
29. Explain the following types of survey: telephone survey, internet survey, mall intercept survey, e-mail questionnaire survey
Telephone Surveys:
1. Speed: Telephone interviews are much faster compared to mail or in-person interviews,
often allowing hundreds of interviews overnight, especially when using computerized
systems.
2. Cost: These are less expensive than personal interviews, estimated to cost less than 23.2% of
door-to-door interviews. However, they are still more expensive than internet surveys.
4. Response Rates: Response rates have declined over the decades. Techniques like leaving
messages on answering machines can help, but many people may not respond.
5. Representative Sampling: Issues like unlisted numbers can be mitigated with techniques such as random digit dialing (a sketch follows this list).
6. Limited Duration: Respondents may lose patience and hang up, necessitating shorter
interviews.
7. Lack of Visual Medium: Visual aids cannot be used, limiting applications such as advertising
copy tests or concept evaluations.
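The random digit dialing sketch referenced in item 5; the area code and exchanges are hypothetical:

```python
import random

# The area code and exchanges are hypothetical. Appending random
# last-four digits to known local exchanges gives unlisted numbers the
# same chance of selection as listed ones.
area_code = "415"
exchanges = ["555", "556", "557"]

def random_phone_number():
    exchange = random.choice(exchanges)
    suffix = f"{random.randint(0, 9999):04d}"
    return f"({area_code}) {exchange}-{suffix}"

print([random_phone_number() for _ in range(5)])
```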
E-Mail Surveys: Questionnaires can be distributed via e-mail, but researchers must remember that
some individuals cannot be reached this way. Certain projects do lend themselves to e-mail surveys,
such as internal surveys of employees or satisfaction surveys of retail buyers who regularly deal with
an organization via e-mail. The benefits of incorporating a questionnaire in an e-mail include the
speed of distribution, lower distribution and processing costs, faster turnaround time, more
flexibility, and less handling of paper questionnaires. The speed of e-mail distribution and the quick responses it allows are among its principal advantages.
Definition: Mall intercept surveys, also called shopping center sampling, involve personal interviews
conducted in shopping malls. Interviewers typically intercept shoppers at a central point within the
mall or near an entrance.
Key Points
1. Cost Efficiency:
o These surveys are less costly as they eliminate travel expenses to the respondents’
homes.
2. Speed:
o Respondents are intercepted in a location where they are already present, making
participation more likely.
4. Usage:
o Often used for research requiring immediate feedback, such as testing food samples
or viewing advertisements.
5. Limitations:
o Results may lack generalizability due to the specific demographics of mall visitors.
o Potential sampling bias may arise as only shoppers who frequent malls are
included.