#Textbook (3rd Edition)
INTRODUCTION
All disciplines have their own structures. However, not all disciplines have had their structures
properly described. This situation may be due to the different ages of the disciplines. Science has
a structure that, we can say, cuts across all the disciplines known by that name.
An attempt will be made in this chapter to describe the structure of science. Knowing the
structure of science has its value. Science students and all those interested in the subject will
know the components of the discipline and they will be able to relate to the discipline in a
conscious manner so that during the learning processes, they will know what they are
learning. The knowledge of the structure of science will also enable them to know how they
can become scientifically literate persons through a judicious combination of knowledge and
application of the components of the structure of science.
Also, science teachers will understand what they are teaching their students better if they
understand the structure of their subject matter. This knowledge will assist them in knowing
how to properly organise science content for instruction. Evaluation of students’ learning
outcomes will be enhanced through this knowledge.
First, different definitions of science are presented to demonstrate an awareness of various
misconceptions about science and to properly situate an acceptable definition of science. Second,
the three components of the structure of science are presented in great detail, one after the
other, to demonstrate the knowledge, skills, and attitudes that a scientifically literate person
needs to possess. Finally, the chapter concludes with information on the functions and
limitations of science.
OBJECTIVES
(xi) describe the organisation and the roles of the major early scientific societies or
academies during the scientific revolution;
(xii) discuss the roles of the early scientific societies or academies as the
forerunners of scientific societies in the contemporary world;
(xiii) list four major scientific societies or academies in Nigeria; and
(xiv) discuss the relevance of the three characterisations of the 17th Century based
on the major events of the Century.
DEFINITIONS OF SCIENCE
Definitions of science vary slightly from scientist to scientist and from one subject area to
another. Some people still describe science as an ordered body of knowledge while others
define it as a search for explanations to natural objects and phenomena.
Generally, we can define science as a body of knowledge, a way of investigating or method,
and a way of thinking in the pursuit of an understanding of nature. We can therefore view
science as a noun, a verb, and an adjective.
We can define science with its products: Knowledge in the form of concepts, facts,
generalisations, principles or rules or laws, and theories, subject to error and change. We can
define science with its processes or methods or skills: how scientific knowledge comes about.
We can also define science with its motives, or ethics or attitudes that guide the day-to-day
activities of scientists. Selecting any one of these aspects alone to define science would be inadequate
and misleading. We can consider science at present as an enterprise in which human beings
participate to study and give parsimonious explanations of the materials and forces of nature.
Science employs a variety of techniques. It has as its motive a desire to know. It assumes
orderliness in nature. Understandable and acceptable ethical principles govern the activities
of scientists. Science ends with credible concepts in the form of theoretical, empirical and
rational constructs. We should take science as a human enterprise, the consequences of which
have human implications. In the teaching and learning of science therefore, all aspects of
science need attention. A scientifically literate person should therefore possess a body of
scientific knowledge, a set of scientific skills, and behave scientifically in his day-to-day
activities.
PRODUCTS OF SCIENCE
Concept
A concept is the meaning that we attach to a given symbol or label. Words, formulae and
mechanical models are symbols for concepts. There are three types of concepts: Empirical
concepts, theoretical concepts, and relational concepts.
(a) Empirical concepts are concepts by inspection. They are concrete and observable
concepts. Examples include: cell, pulley, aluminium, and so on. Philosophers and
scientists do not necessarily share the same meaning for what they regard as
an observable thing or event. While philosophers usually limit their idea of observation
to what they can perceive with human sense organs, scientists extend their meaning
of observation to what they can perceive using extensions of sense organs. Such
extensions of the sense organs may include the use of microscope and lenses to see
tiny objects; the use of telescope to view distant objects; the use of public address
system to extend the sense organ of hearing; the use of trained dogs to detect smell
and taste for humans; and the use of dissecting instruments to extend the sense organ
of touch. Also, scientists include what they can measure directly using various types
of instruments as part of their conceptions of observations. Therefore, they regard
what they can measure as an observable thing or event. This is why scientists would
treat a thermometer reading of 50 °C as if it were the degree of hotness of a particular
environment at a particular time of the day as human beings really experience it.
(b) Theoretical concepts are concepts by definition. They are abstract and unobservable
concepts. Examples include: molecule, absolute zero, gene, and so forth.
(c) Relational concepts are concepts that relate two or more concepts together. They
cannot exist on their own. Examples include: less, more, equal, inverse, proportional,
and so forth.
We should note that whether a concept is observable or not is arbitrary. No sharp line can be
drawn separating the two. Also, it is a matter of degree, varying from what can be directly
observed using sense organs or that can be measured using simple instruments, to things that
can only be measured or observed indirectly.
Fact
A fact is an event that occurred in the past that people recorded with no disagreement among
observers. This does not mean that a disagreement may not occur in the future, in which case
the fact of the past may not be a fact in the present if there are changes or revisions in
observations, the object of observation, or the meaning attached to the observation. An
example of a fact is: “An insect has three pairs of legs.”
Law
A scientific law or principle or rule or generalisation is a brief statement or mathematical
formula predicting inter-relationships among concepts. Scientists usually base their
predictions on all instances of the concepts of the phenomena that had been observed up to
that time. Laws, therefore, describe regularity and order in our observations of natural
phenomena and events. They do not prescribe. Scientific laws are not laws of society that
members of the society cannot break. For this reason, scientific laws are not “broken”
because they are not commands; it is not even meaningful to speak of breaking them. They are subject to
modification and abandonment if scientists find them to be inaccurate.
There are two types of laws: empirical laws and theoretical laws. An empirical law is one that
refers to observable concepts or concepts by inspection that does not provide explanation for
the relationship it predicts. An observable concept is one that people can observe directly. An
example of an empirical law is Ohm’s law: “The electric current in a circuit is directly
proportional to the electromotive force and inversely proportional to the resistance”. All the
concepts in this statement are either directly observable or measurable or, at least, we can
observe their manifestations indirectly.
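As an illustration (a standard rendering added here, not a quotation from this text), Ohm's law can be written compactly as

\[ I = \frac{V}{R}, \]

where \(I\) is the electric current, \(V\) is the electromotive force, and \(R\) is the resistance. Doubling \(V\) at a fixed \(R\) doubles \(I\), while doubling \(R\) at a fixed \(V\) halves it, which is exactly the direct and inverse proportionality stated above.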
A theoretical law is more advanced than an empirical law. A theoretical law refers to
theoretical concepts and provides explanation besides prediction. Therefore, a theoretical law
predicts as well as explains. The value of a theoretical law lies in its ability to predict new
empirical laws. A theoretical law is usually not generalised from observations; it is invented.
Also, it is usually more general than an empirical law. An example of a theoretical law is: “At
all temperatures other than absolute zero, the motion of gas molecules is random concerning
both rate and direction”. While it is easier for an empirical law to contain key concepts that
are wholly observable, a theoretical law is usually not easy to formulate without using a few
concepts that are observable or measurable. Hence, in the above example, only “absolute
zero” and “molecule” are theoretical concepts, while “temperature” is observable because
it is measurable. The presence of observable concepts in a theoretical law makes it
possible to derive empirical laws about things that are observable from theoretical laws about
things that are unobservable.
Correspondence rules or bridge principles guide the connection of theoretical terms with
empirical terms. The assumption in this is that theoretical laws are based on observational
facts. The fact that we can view concepts like “temperature” as both empirical and theoretical
makes this assumption break down. Theories are supposed to determine what we observe
because observations are theory laden. The line separating observable from non-observable,
again, is arbitrary. If no clear-cut distinctions exist between empirical and theoretical terms,
maybe there is really nothing to bridge!
Theory
A scientific theory is more advanced than a law. A theory combines several laws in its
formulation. A classical definition of theory states it as a system of logical reasoning that
scientists carefully construct to explain the unknown, based on valid assumptions about
natural phenomena. It contains both observable and unobservable terms. We cannot directly
interpret the unobservable terms.
A modern view of theory holds that it is a way of viewing the world. In formulating a theory,
scientists usually employ a model that is easy to understand, and some logical relationships
from which they can make deductions. Students should note that theories are not laws in
waiting. Laws and theories perform different functions in science. Laws are not necessarily
better than or superior to theories. In fact, when theories are used in explanations,
concepts, facts, and laws are employed to make valid explanations.
Scientific laws use more mathematics than scientific theories for the sake of precision. This
situation explains why there are more laws in chemistry and physics than in biology.
Additionally, biology has more theories than laws because of its nature. Biology, as the study
of life, is difficult to describe in abstract mathematical terms, because of the complexities of
life.
An example of a theory is the kinetic theory of gases as Body (1970) stated it:
1. All gases are composed of many small molecules that are in a state of perpetual rapid
motion in straight lines.
2. At normal pressures, the total volume of the molecules is very small compared to the
total volume of the gas.
3. The molecules are spherical, elastic, and smooth.
4. The pressure exerted by the gas is the force per unit area due to the impact of the
individual molecules on the wall of the containing vessel.
5. Two gases are at the same temperature when the mean kinetic energy of the individual
molecules of the two gases is the same (p.183).
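From postulates like these one can deduce an empirical relationship between the pressure and volume of a gas. The following is a standard result of the kinetic theory, added here as an illustration rather than quoted from Body (1970):

\[ PV = \tfrac{1}{3} N m \overline{c^{2}}, \]

where \(P\) is the pressure, \(V\) the volume, \(N\) the number of molecules, \(m\) the mass of one molecule, and \(\overline{c^{2}}\) the mean square speed of the molecules. At a fixed temperature the right-hand side is constant, so the expression reduces to Boyle's law, illustrating how a theoretical law can predict an empirical one.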
Another example is Darwin’s theory of evolution that we can state as follows:
1. There is an overproduction of all living things.
2. This results in a struggle for existence (competition) with the survival of the fittest.
3. Those best fitted are the ones with variations that adapt them to their environment.
4. Natural selection destroys the ones that adapt poorly.
5. Those, which survive, reproduce and pass on their variations to their offspring
(Abimbola, 2002, p. 26).
All theories must take into account the known facts. However, scientists never prove theories
true or false; they are only adequate or inadequate to explain a phenomenon. Theories,
therefore, vary in credibility. This credibility criterion is probably based on the cumulative
view of scientific progress. However, a theory can still be credible even when it is not
consistent with known facts, especially if it is from a different perspective that may lead to a
scientific revolution. This is the only way by which scientific progress is possible. When a
scientist has a choice of two theories, one complex and one simple, and both are adequate for
use in explaining the phenomenon, he or she selects the simpler theory. When a scientist has
a choice between a theory that can change and one that cannot change, and both are adequate
in explaining the phenomenon, he or she selects the theory that can change.
A theory serves as a basis for deducing empirical consequences from it. A theory is useful in
explaining observed data. We can also use a theory to predict future events.
PROCESSES OF SCIENCE
Processes of science are the methods and skills that scientists employ in their work. Processes
of science are the scientific activity per se. We can state them in a noun form or in a verb
form. However, in listing scientific processes, we must not mix together the noun form and
the verb form. The following processes are in the verb form: Observing, classifying, inferring,
predicting, measuring, communicating, interpreting data, making operational definitions,
formulating questions and hypotheses, experimenting, and formulating models.
Observing
Observing takes place in a variety of ways using all the sense organs. We observe using
direct sense experience with the aid of our sense organs. Where direct observation is not
adequate or feasible, we use indirect methods of observation. We observe the qualities and
quantities of objects and events. The precision of observations is important if eventually we
are going to infer from such observations. We improve the precision of observations by
making quantitative observations. Observations vary with the background knowledge of the
observer.
Classifying
Classifying is the grouping or ordering of phenomena according to a scheme that we establish
for that purpose. We may classify objects and events based on observations. We base
classification schemes on observable similarities and differences in properties that we
arbitrarily select. We use classification schemes to group items within a scheme as well as to
retrieve information from a scheme.
Inferring
The process of inferring, while based on observation, requires evaluation. Inferences that are
based upon one set of observations may suggest further observations, which in turn, require
modification of original inferences. Inferences lead to prediction.
Predicting
The process of predicting involves the formulation of a result that we expect based on
experience. The reliability of prediction depends upon the accuracy of past observations and
upon the nature of the event being predicted. We base prediction on inference. Progressive
series of observations and, in particular, graphs are important tools of prediction in science.
An experiment can support or contradict a prediction.
Measuring
We measure properties of objects and events by direct comparison or by indirect comparison
with arbitrary units. We need to standardise these units to make for easy communication. We
can relate together identifiable characteristics that we can measure to provide other
quantitative values that are valuable in the description of physical phenomena.
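For example (an illustration added here using a standard definition, not an example taken from this text), the measured mass and volume of a sample can be related to give a derived quantity such as density:

\[ \rho = \frac{m}{V}, \]

so a sample of mass 54 g occupying a volume of 20 cm³ has a density of 2.7 g/cm³, a value characteristic of aluminium.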
Communicating
To communicate observations, we need to keep accurate records that we can submit for
checking and re-checking by others. We can represent accumulated records and their analyses
in many ways. Scientists often use graphical representations since they are clear, concise
and meaningful. Complete and understandable experimental reports are essential to scientific
communication.
Interpreting Data
Interpreting data requires the application of other basic process skills–in particular, processes
of inferring, predicting, classifying and communicating. It is through this complex process
that scientists determine the usefulness of data in answering the question that they are
investigating. Interpretations are always subject to revision considering new or more refined
data.
Making Operational Definition
We make operational definitions to simplify communication concerning phenomena that we
are investigating. In making such definitions, it is necessary to give the minimum amount of
information that we need to differentiate that which we are defining from other similar
phenomena. We may base operational definitions upon the operations that we are performing.
Operational definitions are precise and, in some cases, based upon mathematical
relationships.
Formulating questions and hypotheses
We formulate questions based upon the observations that we make and they usually precede
an attempt to evaluate a situation or event. Questions, when we state them precisely, are
problems that we want to solve through application of the other processes of science. The
attempt to answer one question may generate other questions. The formulation of hypothesis
depends directly upon questions, inferences and predictions. The process consists of devising
a statement that we can test by experiment. When a set of observations suggests more than one
hypothesis, we must state each of them separately. We state a workable hypothesis in such a
way that, upon testing, we can establish its credibility.
Experimenting
Experimenting is the process of designing data-gathering procedures as well as the process of
gathering data to test a hypothesis. In a less formal sense, we may conduct experiments
simply to make observations. However, even here, there is a plan to relate cause and effect. In
an experiment, we must carefully identify and control variables. We design an experimental
test of a hypothesis to indicate whether to reject, modify, or not to reject the hypothesis. In
designing an experiment, we must consider the limitations of method and apparatus because
these play a role in determining what we observe and the veracity of the findings.
ETHICS OF SCIENCE
The ethical standards of the disciplines of the sciences guide the actions of the scientists.
Scientists view their observations and conclusions as another approximation of what we
commonly call truth; however, they see truth in the absolute sense as unattainable. The
following are some of the ethical attributes that scientists are supposed to possess. They are
also attributes that a person that is scientifically literate should show in his or her behaviours.
Curiosity and Science
Scientists must be very curious. Curiosity is a spontaneous desire on their part to explore the
environment and to learn about its phenomena. Curiosity concerning the physical universe is
the fundamental driving force in science. It is the starting point for science. It is the same
across the barriers of time, race and geography. It requires that one does not satisfy oneself
with the status quo. This attribute is natural in children and students. All it may require is for
somebody to encourage it.
Open-mindedness
Scientists should keep an open mind to things and events. There should be an openness
concerning investigation that is not limited by such factors as religion, politics or geography. They
should be willing to question the status quo. They should always maintain a doubting attitude
of a sceptic and not allow themselves to be too gullible to spurious data.
Positive Approach to Failure
Scientists should maintain a positive approach to failure. They should make positive use of
failure. For every failed attempt, scientists should know that they have learned at least one
way of not doing the same thing the same way again. To achieve this attitude, one requires
persistence and perseverance. It requires that one be not easily discouraged.
Objectivity in Science
Scientists are to view all within their discipline objectively; they must be impersonal,
impartial, and detached from prejudice as they make observations and formulate conclusions.
They must be willing to look at many sides of an issue without prejudice. There must be a
general reluctance on the part of scientists to let their personal emotions affect the
decisions that they take. As long as we view science as a human enterprise, it may be difficult
to be absolutely objective as we make observations. Human experience colours how scientists
see things.
Respect for the Opinion of Others
Scientists should have respect or tolerance for the ideas or opinions of others. This requires
that scientists do not adopt an attitude where they appear as all-knowing and infallible. In
fact, opinions of other experts and novices can be crucial in hitting on ideas that may be
useful or damaging to an idea that one may have strongly held before. The science community
relies so much on the ideas of members of the community to validate discoveries and
inventions that scientists make within the community.
Willingness to Suspend Judgment
Scientists should always be willing to suspend judgment. This requires that they do not jump
to hasty conclusions. They should wait for all necessary facts to come in before judging and
concluding. Persons that possess this scientific attitude will not involve themselves in rumour
mongering or tale bearing. Despite the fact that science progresses through revisions of old
ideas that were once believed to be correct, the revisions that might be necessary for ideas that were
hastily believed would be of a different kind. They are revisions that would not have been necessary
if adequate care had been taken initially.
Willingness to Accept Criticism
Scientists should always be willing to allow others to question their ideas. Criticism
contributes much to the progress of science. This attribute requires that individual scientists
submit their works for others to criticise them. They should also be willing to consider the
criticisms with open minds so that they can be willing to do something about them. Most
academic and professional journals would not publish a research report until appropriate
corrections suggested by blind reviewers have been implemented. There is the possibility that
good works may be prevented from seeing the light of day because of requirements like
this. However, the integrity of science has to be guarded jealously to prevent frivolous
discoveries from being published and recognised. Also, adequate care is taken by the editors
to ensure that reviewers are persons sufficiently knowledgeable in the areas in which they are
given papers to review.
Unwillingness to Believe in Superstitions
A superstition is a folk idea, belief, or custom that is based on ignorance and irrational fear of
the unknown. It started as an attempt to cope with a world full of stress and mysteries such as
births and deaths, rain, flood, lightning, drought, eclipses, rainbow, and so forth. In spite of
the progress made in science, no country of the world is completely free of superstitions.
However crude, superstitions are an initial attempt to explain and deal with a mysterious
world. Most of the science of the Middle Ages may now appear superstitious, but it
provided the needed insight that made subsequent progress in science possible.
Superstitions may appear foolish to scientifically literate persons, but they are not to persons
who believe them. To be able to disbelieve superstitions, one needs to seek to understand
the superstitions and the scientific explanations for them. Providing scientific explanations
for superstitions and teaching them early to students may serve to reduce the number of
persons that will grow up with superstitions. This is the task of all science teachers.
Science is Dynamic
Scientists seek relationships among data and formulate generalisations in the form of
concepts, laws and theories. More refined data comes with each new method of making
observations. We may modify or reject the generalisations altogether as more data
accumulate. This requires an openness of mind that allows for willingness to change one's
opinion in the face of evidence. Concepts in science are dynamic, not static. They have only
temporary status. Also, science is self-correcting.
Cooperation among Scientists
Scientists work alone or in groups. Most of the time, they find themselves working in groups.
They, therefore, must learn and be willing to cooperate with others when working on group
activities. Due credit must be given to each individual's contribution to group activities. Selfish
and narrow interests must give way to the larger interest of advancing the progress of science.
Science is Anti-authoritarian
Science is not subject to the whims and caprices of any authority other than itself. The
real authority of science resides in the logic of the discipline, the data from nature, and the
force of the scientific community. The basis of scientific knowledge is consensus among self-
appointed experts. Science begins with facts and ends with facts. This does not depend on
what kinds of structures we build in between. For this reason, we do not expect anybody to
forcefully pronounce a discovery or invention that is not scientific as scientific.
Accuracy of Observations and Reports
The practice of science requires that we make accurate observations and reports. Scientists
must be unwilling to compromise with the truth. Observations must be as free as possible from the bias
of the observer. The nature of the hypothesis that we are testing determines the nature of the
observations that we intend to make. Observations depend on theory. This means that
scientists rarely make observations without a pre-planned purpose for making them.
Morality and Science
In pure scientific research, scholars have no obligation to concern themselves with the moral
implications of their work. This is why scientists that worked on the description of the
structure of atoms are not morally liable for the consequences of the atomic bomb that is
capable of mass destruction. However, in applied scientific research such as in genetic
engineering, scholars have an obligation to concern themselves with the moral implications of
their work. This is why the scientists who developed the atomic bomb, who should have known the
consequences of its use, are more liable. This is why the United States
Government banned research into the cloning of human beings while scientists are free to
clone plants and animals. Most scientists believe that a decision to participate or not
participate in research that could pose risks to society is a matter of personal ethics rather
than professional ethics.
Orderliness of Nature and Natural Laws
Scientists appreciate that nature is orderly and natural laws are orderly. This appreciation is
helpful to them in spotting anomalous events in nature. Their experiences have taught them to
expect certain things to happen under certain conditions and at particular times. An awareness
of anomaly is possible only because of this appreciation that nature is orderly and its laws are orderly.
The Beauty in Nature
When there is orderliness in nature, there is bound to be beauty in it. For instance, things in
nature have shapes, sizes, colours, textures, and arrangements that occur regularly with
respect to each type of thing. A combination of these in nature provides both orderliness and
disorderliness that together make things look beautiful. Scientists, in part, embark upon
scientific inquiries because of the appreciation of the beauty in nature and the fascination that
beautiful events, phenomena, and objects offer. We can find some examples of beauty in
nature in the scenery provided by undulating mountains, the sky at night, the flora and fauna
in the rain forest, ocean, and so forth.
Universality of Science
Science is universal. Scientists assume that the universe is a vast single system in which the
basic rules are the same everywhere. All scientists have obligation to publicise their findings
and formulations to the scientific community. Scientists who publicise their findings first
have priority of discovery.
Science describes the world in form of qualities and attributes that human beings can
understand using familiar descriptors such as numbers, shapes, sizes, weights and ratios. In
doing this, science demonstrates that we are related to the whole universe by being a part of it
and therefore, we also have a place in it. Science is knowledge. We can make the study of
science an end in itself. Science, therefore, performs an intrinsic function by serving as a
subject matter for study that will provide individuals with a sense of intellectual satisfaction,
especially when we make particular observations that demonstrate the regularity and unity in
nature's design. Science satisfies our desire to know.
When science describes the world adequately, such a description enables us to predict what
will happen if we take particular actions. Science, therefore, through its method, guides us
through particular courses of actions that would lead to particular outcomes. The method is
also useful in evaluating the outcomes to be sure that they are valid outcomes. The range of
things that we can achieve now or in the future through science is limitless, but the use to
which we shall put these things is not within the subject matter of science.
The method of science is only one of the methods of knowing that yields knowledge.
Therefore, science is one kind of knowledge that human beings base decisions upon. It does
not take decisions by itself. Hence, science, especially pure science, does not concern itself
with moral issues. However, scientists in applied science have to contend with moral issues
that relate to the knowledge arising from their work. For instance, a scientist attempting to
clone a human being cannot claim ignorance of the full implications of the outcome of his or
her work.
Science summarises observations in the form of laws. These laws are only as certain as the
instruments that we use and the inferring techniques that we employ allow. It assumes
normal conditions while doing this. Science uses these summaries to predict unknown past
and future events. If there is any change in the conditions that scientists assumed initially,
predicted events may not occur.
It is difficult for scientists to deduce, with certainty, any past or future event because past and
future conditions are difficult to know wholly. The farther away these past and future
events are, the more difficult they are to deduce with certainty. There is always the possibility
of changes occurring in the laws that we apply now that we cannot anticipate at present. Even
when these laws applied to past events, we have no way of saying, with certainty, that they
will apply to future events, and vice versa.
Science does not deal with final causes. Hence, it does not accommodate miracles in its
subject matter. Unexpected events do occur during scientific investigations. These events are
usually explained in the end using appropriate theories and laws. Whatever scientists are
unable to explain remains a challenge to the scientific community and its members will not
rest until they are able to reduce the unknown to the familiar.
Scientific laws are generalisations that summarise observations. They are not decrees or
legal enactments that are enforceable. The only authority of science resides in the logic of
human beings and the force of the scientific community. The scientific community sanctions
what it considers to be a major scientific discovery or invention. It does not lay down any set
of rules for arriving at a discovery or invention because it is not a mechanical process.
The 17th century is very significant in many respects. The many names by which the century
is characterised attest to this significance. We characterise the century variously as the century
of "The Scientific Revolution," "The Century of Genius," and "The Century of Scientific
Societies" (Mendelsohn, 1982).
This chapter is therefore organised in a similar fashion to the major developments of the
century. The chapter opens with the history of the development of the scientific method by
Francis Bacon and Rene Descartes. Next, is the description of the life and achievements of
the century's geniuses, such as: Galileo Galilei, Johannes Kepler, William Harvey, and Isaac
Newton. Finally, the chapter closes with a description of the early scientific societies or
academies—their organisation and their role as the forum for the dissemination and exchange
of ideas.
The scientists of the earlier centuries had interest in making discoveries and inventions. They
placed very little emphasis on describing the practice of science. Perhaps, this is understandable because
even in this modern time, theorising about the practice of science is not the business of
scientists. Philosophers of science have taken it upon themselves to describe what scientists
do, the standards they follow, and should follow, and they describe the standards that their
discoveries and inventions should meet. Occasionally, some philosophers of science are
former scientists who left practical science. It does appear that a good knowledge of science
is a prerequisite for being a knowledgeable philosopher or historian of science.
During the 17th century, the movement to study science by observation and experimentation
that began in the 16th century became a scientific revolution. It was the period in which
scientists paid the greatest attention to the scientific method. The sheer number of persons
that paid attention to method then, indicated the need for an acceptable method of conducting
science. Among the people that paid great attention to scientific method during this period
were Francis Bacon and Rene Descartes. Westfall (1977) has credited Robert Boyle with
perhaps the best statement of the experimental method that focused on "the activity of
investigation that distinguishes the experimental method of modern science from logic” (p.
115). Pascal, Gassendi, and Newton also wrote on scientific method. Westfall (1971) put the
date when the experimental method of modern science started to have influence on the scientific
community at about the year 1590. This was the time when scientists started to base their work on
deliberately contrived experiments. Taylor noted that Galileo Galilei (1564-1642) was the
first scientist to employ the modern scientific method fully in physics and astronomy.
Before then, Aristotle's syllogistic method of reasoning, described in his Organon, dominated
science. Also, Westfall (1977) stated that Galen's writing in his physiology contained
examples of experimental investigation. Westfall also claimed that Robert Grosseteste of the
medical school, and the logicians based at the University of Padua, Italy, in the 16th Century,
discussed the precursors of the hypothetico-deductive method. Furthermore, during the period
before the scientific revolution, science was characterised by natural history, during which scientists
made observations and recorded them carefully. This method led to a general feeling of
disillusionment among scientists. The general feeling of disillusionment had to do with
results of scientific investigations that did not match the efforts put into them. The scientists
of the time blamed the method of conducting science for the low output. However, the
emphasis on method during the period of the 17th century paid off with several discoveries
and inventions during the period and beyond thereby giving the impression, albeit
unintentionally, that science is synonymous with its method.
Francis Bacon wrote a book on the philosophy and method of scientific investigation titled,
Novum Organum (the New Instruments, 1620). He was perhaps the first in the 17th century
to formulate a series of steps to account for the scientific method in his book (Taylor, 1963).
The book was a reaction to Aristotle's treatise in logic referred to as the Organum. Aristotle's
Organum is the “old method of reasoning” while Bacon's Novum Organum is the "new
method of reasoning.” Bacon based his method on the inductive method of objective
observation and experimentation without preconceptions. He wrote Novum Organum in two
books. The first book classifies the idols that are impediments to learning: (1) idols of the tribe
that are errors inherent in human nature; (2) idols of the cave that are errors that result from
individual background and experience; (3) idols of the market place that are errors resulting
from the use and misuse of language; and (4) idols of theatre that are errors arising from false
philosophical systems. The second book contains an examination of the inductive method of
experimental science for use in remedying these idols, and to serve as a basis for making
progress.
Francis Bacon's account of the scientific method in his Novum Organum (1620) has four
steps that include:
3. By these tables, he would arrive at minor generalisations, which we would call
theorems or rules, and by comparing these, he would rise to general Scientific
Laws.
4. These laws, when found, must confirm themselves by pointing out new
instances of the phenomenon studied. (Taylor, 1963, p. 97)
The formulation of these steps in the scientific method marked a major landmark in the
systematisation of science.
Francis Bacon himself was not a practising scientist. Also, he made no scientific discoveries.
He was a lawyer by profession and his legal background probably inspired him to write on
how to systematise science. He also proposed a system for forming a scientific organisation.
In a work of fiction called The New Atlantis, Bacon wrote of an academy of science where scientists
worked together on projects and gave reports on their work before the whole academy.
English scientists set up the institution that later became the Royal Society of London in
1660. They set it up according to Bacon's ideas.
Another person that worked on the systematisation of science was Rene Descartes (1596-
1650). Descartes was a mathematician and philosopher as well as a scientist who believed in
reasoning. His “Discourse on Method” was also a new way of finding the principles of
nature. He proposed rules of investigation as follows:
● The first was never to take anything as true that I did not know evidently to be
so.
● The second, to divide each of the difficulties which I might examine into as many
portions as should be possible and should be necessary, the better to resolve
them.
● The third, to conduct my thinking in an orderly manner, beginning with the
objects most simple and most easy to understand, in order to rise little by little,
as if by steps, up to the knowledge of the most complex; supposing moreover
that there is an order even among those that do not proceed naturally one from
the other.
● And the last, always to make enumeration so complete and reviews so general
that I should be certain of having omitted nothing. (Descartes, 1960, p. 15)
While Bacon advocated experimentation and induction, Descartes preferred mathematical
reasoning and deduction. Bacon disregarded the role of prior conceptual knowledge about the
data that scientists are to collect and placed too much emphasis on gathering facts. These
facts need to be relevant to the problem. Also, he underestimated the role of mathematics in
the physical sciences. Descartes, on his part, placed too much emphasis on mathematical
reasoning about facts. He placed little emphasis on observation. In any case, the two
opposing methods were available for scientists to integrate, and fortunately, Newton did just
that.
A CENTURY OF GENIUSES
The next several paragraphs will describe the works of notable scientists of the 17th century
that are products of the scientific revolution. The list includes great scientists such as:
Galileo, Harvey and Newton.
Galileo Galilei (1564-1642)
We usually refer to Galileo as the first modern scientist because, as earlier mentioned,
Galileo was the first person to employ the scientific method in its fullness. Galileo was
an Italian astronomer and physicist born in Pisa in 1564. His two great books are Dialogue
on the Two Chief Systems of the World (1632) and Discourses on the Two New Sciences
(1638).
Although Galileo was best known for his mathematical studies of the motion of bodies on
earth, he also invented a number of new scientific instruments. For example, he invented the
first thermometer, called the air thermometer. He also made larger and more powerful
telescopes. He used these new telescopes to make observations of the moon and the planets.
Galileo's telescope studies of the moon revealed that the moon was not a smooth sphere
shining by its own light as had been believed. Instead, the moon's surface showed great
mountains and craters and it showed only reflected light. Galileo also observed that the planet
Jupiter had moons revolving around it, just as our moon revolves around the earth. He also
demonstrated that the Milky Way was really made up of many stars, which previously had
not been visible. His studies of the heavenly bodies increased his conviction that the sun was
the centre of the solar system as stated by Copernicus (the heliocentric theory). Galileo began
opposing the old description of the universe because the earth appeared to him to move. It
was not stationary as stated in the Aristotle-Ptolemy theory of the universe.
We usually refer to the Aristotle-Ptolemy theory as the geocentric theory of the universe. The
Copernican theory of the universe (and its modern modifications) is known as the
heliocentric theory, which posits that the sun is at the centre of the universe. The belief in the
heliocentric theory got Galileo into trouble with the Church and supporters of Aristotle's
theory.
Johannes Kepler (1571-1630) and the Laws of Planetary Motion
Johannes Kepler was a German and one of the outstanding astronomers the world has ever
produced. His work helped explain the motion of the planets around the sun. Kepler accepted
the Copernican theory of the universe and made changes in it. His main work was the study
of the motion of Mars. He proved, mathematically, that the planets orbit the sun (being a
Copernican) in elliptical paths rather than in perfect circles. This was after he had tried
various kinds of "oval" paths that led him to the idea of an elliptical path.
Kepler formulated the following three laws of planetary motion to explain, more accurately,
the motions of the planets:
1. The planets describe ellipses about the sun, the sun being in one focus.
2. The planets move so that the lines joining the sun to the planet sweep out
equal areas of the ellipse in equal times.
3. The squares of the periodic times of the planets are proportional to the cubes
of the major axes of their orbits (The periodic time of the planet is the time of
one revolution about the sun: the major axis of an ellipse is the part of the straight
line passing through its foci that is cut off by its circumference) (Taylor, 1963,
pp. 115-116).
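In modern notation (an illustration added here, not part of the quoted source), the third law is often written as

\[ \frac{T_1^{2}}{T_2^{2}} = \frac{a_1^{3}}{a_2^{3}}, \qquad \text{or} \qquad T^{2} \propto a^{3}, \]

where \(T\) is the periodic time of a planet and \(a\) is the semi-major axis (half the major axis) of its orbit; because the cube of the major axis differs from \(a^{3}\) only by a constant factor, the proportionality is the same whichever axis is used.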
These laws are still in use today in astronomy and in planning the orbits of artificial satellites.
Kepler wrote a number of books among which are: Mysterium Cosmographicum ("Cosmic
Mysteries"), Astronomia Nova ("New Astronomy"), and Harmonices Mundi ("The
Harmony of the Universe"). He died on November 15, 1630.
William Harvey (1578-1657) and the Circulation of the Blood
William Harvey was an English doctor and scientist. He was born on April 1, 1578. Before
Harvey, the heart had been thought of as the source of life, and the seat of the emotions.
Harvey showed it to be a pump that kept the blood in circulation through the body in a closed
system of blood vessels. Harvey was unable to establish the connections between the arteries
and the veins.
He gave a series of lectures on the circulation of the blood before the Royal College of
Physicians in 1616. He published the lectures in a book titled De Motu Cordis et Sanguinis
("On the Motion of the Heart and Blood") in 1628. Harvey is usually regarded as the founder
of the science of physiology. He died on June 3, 1657.
Isaac Newton (1642-1727)
After early work in mathematics and optics, Newton was appointed Professor of Mathematics at Trinity College in 1669 to
succeed Isaac Barrow, his favourite teacher. His book, Opticks, was published in 1704.
Newton's greatest and most influential book is the Principia or Mathematical Principles of
Natural Philosophy (Philosophiae Naturalis Principia Mathematica), which was published in
1687. In this book, Newton gave a mathematical demonstration of the law of universal gravitation
and showed that it explains the laws of planetary motion earlier figured out by Johannes Kepler. He also concluded a series of studies made by Gilbert,
Galileo and others. He therefore brought into one science the motion of celestial bodies and
the movement of bodies on the earth.
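As a standard illustration (added here, not quoted from the Principia), the law of universal gravitation states that any two bodies attract each other with a force

\[ F = G\,\frac{m_{1} m_{2}}{r^{2}}, \]

where \(m_{1}\) and \(m_{2}\) are the masses of the bodies, \(r\) is the distance between their centres, and \(G\) is the universal gravitational constant. Applied to a planet and the sun, this inverse-square attraction yields the elliptical orbits and the third law that Kepler had formulated.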
During later years, Newton played a more active role in public life. He served as a member of
Parliament (1689), Warden of the Mint (1695), and Master of the Mint (1699). In 1703, he was elected
President of the Royal Society, and served in this capacity until his death. He became a
knight in 1705.
Isaac Newton died in 1727 and he was buried in Westminster Abbey. His statue stands today
in the hall of Trinity College, Cambridge University.
The formation of scientific societies was one of the most important developments of the 17th
Century. These societies filled the vacuum left by the decline of early universities. There was
a felt need to disseminate and exchange scientific information with like-minded colleagues,
hence, the formation of these societies or academies. The earliest of these academies or
societies was established in Rome in 1603. It was called the Academia dei Lincei. This
Academy replaced its 1560 counterpart, the Academia Secretorum Naturae, founded in Naples,
which was closed down for meddling in witchcraft. Galileo was one of the members of
this new academy. The members of the society were the first to use the microscope for
scientific studies. The name, "microscope," was coined by the members. The society split
over the Copernican theory in 1615 and it was dissolved in 1667 when Leopold Medici was
made a Cardinal.
The Royal Society of London is today the oldest association of scientists. The society grew
out of a series of informal meetings, called the "Invisible College," that were held in London and
Oxford. The society started meeting formally in 1660. In 1662, King Charles II gave the
society his seal of approval with the name "Royal Society for the Improvement of Natural
Knowledge." The society was privately sponsored.
The society served as a forum for presenting papers on topics of scientific interest and for
presenting demonstrations and experiments. The Royal Society started publishing a journal,
the Philosophical Transactions, in 1665. The journal still exists.
In 1666, the French Academie Royale des Sciences (Royal Academy of Sciences) started its
formal meetings in Paris. In 1662, it started to publish a journal. This society replaced an
earlier one formed in 1654 under the auspices of Habert de Montmor (1600-1679), which had
disbanded due to financial difficulties.
The French King became interested in science and so the society became part of the Royal
Court. The King paid the members of the society. The King also provided material support.
The members of the society in turn carried out crown-supported research and became the
inspectors of patents and the designers of new machines.
In 1699, the academy consisted of 70 members whose positions were arranged in a
hierarchical order. The members' rights and privileges followed this order until the time of
the French Revolution when the academy was re-organised. Members were subsequently
granted equal rights and privileges. The Academy was responsible for the development and
adoption of the metric system of measurement. Thus, France was the first to recognise the
importance of national science.
It is important to mention here that we in Nigeria have the Nigerian Academy of Science (NAS).
There is the need to find out more about the Nigerian Academy of Education (NAE). These are
of professional interest to us as prospective scientists and teachers. Another one is the Nigerian
Academy of Letters (NAL) for Humanities professionals.
Most of the early and modern scientific societies published scientific periodicals that carried
reports of the latest experiments, calculations, tables, and diagrams. They replaced the book
as the basic means of transmitting current scientific information. The scientific societies also
provided a forum for checking scientific discoveries and they established unwritten rules for
scientific work. It was only fairly recently that modern philosophers of science began
emphasising the important role the scientific community, through these societies, plays in
providing appropriate checks and balances for the practice of science. However, space
constraints in journals and time constraints in conferences do not seem to permit meaningful
exchange of ideas as before.
CONCLUSION
We have made an attempt in this chapter to define science and describe its structure,
functions and limitations. An adequate definition of science needs to include all three
components of the structure of science. These three components are science products,
such as concepts, facts, laws and theories; science processes; and the ethics of science, which
include attributes that characterise scientifically literate persons.
Science describes the world as adequately as current instruments, inferring techniques, and
knowledge allow. Science is therefore both knowledge and method but it is not everything.
Science has its limits that set boundaries for what counts as a problem, the methods that will
be appropriate, and what counts as a solution. More solutions lead to more problems in a
never-ending cycle.
An attempt has been made in this chapter to present the history of the development of science
during the period of the scientific revolution of the 17th Century. The century was marked by
three major achievements, namely, the formulation of scientific methods, the production of
rare scientific geniuses, and the organisation of scientific societies.
This chapter therefore, first traced the history of the scientific method as formulated by
Francis Bacon and Rene Descartes and used gainfully by Galileo and Newton and other
scientists of the century. Second, the chapter described the life history and the major
contributions of the century's men of science such as Galileo, Kepler, Harvey, and Newton.
Finally, there was a description of the organisation of scientific societies in Italy, England,
and France. These societies or academies set the standard and the pace for the present-day
scientific societies. The Royal Society still exists today. The scientific revolution of the
17th century provided the basis upon which most of the scientific practices of the 18th, 19th,
and 20th centuries depended.
In general, the revolution in scientific method led to the scientific revolution of the 17th
century while the scientific revolution led to the industrial revolution of the late 18th and early
19th centuries. The term, "industrial revolution,” was used to describe the period of dramatic
economic and technical change in Britain during the period 1760 to 1850.
EVALUATION STRATEGIES
Practice Questions
1. Look up the following words in the dictionary and find out their meanings,
derivations (origins), related idioms and synonyms: structure, science, product,
process, ethics, concept, fact, law, theory, inferring, predicting, interpreting,
hypotheses, experiment, objectivity, superstitions, dynamic, universal, and limitations.
2. Use the words in several sentences with your study partner until you have clearly
understood them.
3. What is science?
4. Describe, in detail, the basic components of the structure of science.
5. “Concept difficulty can be measured in terms of the degrees of complexity,
sophistication and abstractness of the concept.” Explain what you understand by this
statement using appropriate illustrations. What are the implications of the statement
for the learning of science?
6. What do you understand by "the structure of science”?
7. State two differences between an empirical law and a theoretical law.
8. Identify three major types of concepts. Give one example each from your major
subject.
9. Identify and state two laws and two theories from your major subject.
10. Identify and describe five science processes.
11. Identify and describe five scientific attitudes that characterise a scientifically literate
person.
12. State two functions of science.
13. Discuss the limitations of science.
14. Why do we refer to the 17th Century as the century of "the Scientific Revolution"?
15. Justify the characterisation of the 17th Century as "the Century of Genius."
16. Why is the 17th Century usually referred to as, “the Century of Scientific Societies”?
17. Distinguish between the terms "Scientific Revolution" and "Industrial Revolution."
18. Why did Francis Bacon entitle his book, Novum Organum (New Instruments)?
19. State the basic steps of Francis Bacon's scientific method.
20. State the basic steps of Rene Descartes' scientific method.
21. Name one major scientific achievement associated with each of the following
great scientists: Galileo Galilei; Johannes Kepler; William Harvey; and Isaac Newton.
22. Name one early scientific society formed in each of the following countries: Italy,
Britain, France, and Nigeria.
23. What period (years) do we refer to as the 17th Century?
REFERENCES
Chapter Two
PHILOSOPHY OF SCIENCE AND SCIENTIFIC EXPLANATIONS
Akanji, M.A. and Yakubu, M. T.
Department of Biochemistry, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
What is science? Is there a real difference between science and myth? Is science objective?
Can science explain everything? Providing answers to these questions gives a concise
overview of the main themes of this chapter, which is the philosophy of science.
OBJECTIVES
Science
The word science is derived from the Latin word ‘scientia’ meaning knowledge.
Science can be defined in various ways. These definitions include:
● the state or fact of knowing; knowledge or cognisance of something specified or
implied;
● a branch or study which is concerned either with a connected body of demonstrated
truths or with observed facts systematically classified and more or less colligated by
being brought under general laws, and which includes trustworthy methods for the
discovery of new truth within its own domain;
● an ordered body of knowledge or a search for explanations to natural objects and
phenomena; such knowledge is derived from the systematic study of the nature and
behaviour of the materials of the physical universe based on observations,
experimentations, measurements and the formation of laws to describe these facts.
These observed facts can be systematically classified and brought under general
principles; and
● devotion of man to research or to the attainment of the kind of knowledge which
establishes general laws governing a number of particular isolated facts.
In common usage, the word science is applied to a wide variety of disciplines or intellectual
activities which have certain features in common. Application of the term did not begin with
any formal definition; rather, the various disciplines arose independently, each in response to
some particular need. It was then observed that some of these disciplines had enough traits in
common to justify classifying them together as sciences. Science probes into the
unknown and investigates established facts. The information thus obtained can be
synthesised, classified, generalised and stated as norms, concepts, principles, theories and
laws; an approach which has enabled man to establish the truth and appreciate his personal
self and his environment better.
PHILOSOPHY OF SCIENCE
For philosophy to rightly take its place in the sciences, it must make a very significant
contribution to the advancement of the subject matter as a consequence of knowledge.
Philosophy of science is aimed at prodding scientists into an extremely healthy state of
scepticism about many of the traditional foundations of their thinking. Studying the
philosophical aspect of science will enable one to become acquainted with the developments in
science over the years; this will stimulate an active interest in the discipline. Philosophers of
science therefore concern themselves with what science is all about; its goals, structure and
its activities. Some philosophers of science also use contemporary results in science to reach
conclusions about philosophy
Philosophy of science has historically been met with mixed responses from the scientific community. Though scientists often contribute to the field, many prominent scientists have felt that its practical effect on their work is limited, as buttressed by a popular quote attributed to the physicist Richard Feynman: "Philosophy of science is about as useful to scientists as ornithology is to birds." In contrast, some philosophers, like Craig Callender, have countered this view by pointing out that ornithological knowledge would be of great benefit to birds if only they could possess it. Furthermore, many philosophers
of science have also considered problems that apply to particular sciences such as philosophy
of biology, philosophy of chemistry etc.
Philosophy of Biology
Philosophy of Biology is dominated by investigations about the foundations of evolutionary
theory.
Philosophy of Chemistry
Philosophy of Chemistry is concerned with the methodology and the underlying assumptions
of the science of Chemistry. This aspect is explored by philosophers, chemists, and philosopher-chemist teams. Specific topics of interest which are normally addressed include the relationship between chemical concepts and reality, for instance where resonance structures are used in chemical explanations, and the reality of concepts such as nucleophiles and electrophiles. Others include whether chemistry studies atoms or reaction processes, symmetry in chemistry (specifically homochirality in biological molecules), and whether quantum mechanics can offer an explanation of all chemical phenomena.
Philosophy of Mathematics
Philosophy of Mathematics focuses on the philosophical assumptions, foundations, and
implications of mathematics. Topics of concern include but are not limited to the sources of
mathematical subject matter, what it means to refer to a mathematical object, character of a
mathematical proposition, the relationship between logic and mathematics, the kinds of
inquiry that play a role in mathematics, the objectives of mathematical inquiry, the source and
nature of mathematical truth, the relationship between the abstract world of mathematics and
the material universe. Others include what a number is, why it makes sense to ask whether "1 + 1 = 2" is actually true, and how it can be ascertained that a mathematical proof is correct.
Philosophy of Physics
Philosophy of Physics can be viewed as the study of diverse concerns which include the
fundamental aspects of physics, philosophical questions concerning modern physics, the
study and interaction of matter and energy. The main questions concerning the nature of
space, and time, atoms and atomism as well as the interpretations of quantum mechanics,
predictions of cosmology, foundations of statistical mechanics, causality, determinism and
the nature of physical laws are addressed under this concept.
Philosophy of Psychology
Philosophy of Psychology deals with issues relating to theoretical foundations of modern
psychology. Some of these however are addressed from the epistemological perspectives of
the methodology of psychological investigation; for example, the most appropriate
methodology (mentalism, behaviourism or compromise) for Psychology, the reliability of
self-reports as a data gathering procedure, the conclusions that can be drawn from the test of
null hypothesis and the objective measurement of first-person experiences (emotions, desires,
beliefs, etc.). Other concerns of philosophy of psychology include the philosophical questions
about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as
part of cognitive science or philosophy of mind. Philosophy of psychology is a relatively
young field, because psychology itself only became recognized as a discipline of its own in
the late 1800s.
Three different methods can be used to render science accessible to people and to actually
make it an object for philosophical scrutiny. The methods are:
- The Pedestrian method
- The Critical method
- Original Philosophical method
(a) The Pedestrian Method
This method discusses topics in science. Such topics may include magnetism,
electromagnetism, sub-atomic particles, enzymes, free radicals etc. In some cases, the
contributions of notable scientists are mentioned alongside their discoveries. Examples of
such discoveries include the laws of motion and gravitation associated with Sir Isaac Newton, electromagnetic induction with Michael Faraday, biological cells with Robert Hooke and genetics with Gregor Mendel.
In other words, anyone literate in the language of instruction would be competent to impart this aspect of knowledge to others. This method can therefore be described as barren and pedestrian. This method is of the view that even historians, linguists and artists can teach the philosophy of science competently. The contention that just anyone can teach philosophy of science is aided and abetted by the "anything goes" slogan.
Unfortunately, this conception can no longer hold after centuries of the dismemberment of
philosophy into independent disciplines, each with an object sphere of its own.
(b) The Critical Method
This involves taking up science and examining its fundamental assumptions and
presuppositions, its competing theories, its method of inquiry and its relation or otherwise to
other fields of study. A process which embraces this is known as meta science or meta
scientific inquiries of the methodology of science. This method focuses more on the method
and procedure and clarification of concepts. In this method, there are two classes of people
namely a trained philosopher and a trained scientist that can disseminate this approach of
critical method of philosophical science. What this boils down to is that if all the philosopher does is to criticise the method, procedure, basic assumptions,
presupposition and clarification of its terms, then the fundamental problem of a scientist
combining his functions as a researcher with those of the philosopher as a critic, guide and
guard will arise.
(c) The Original Philosophical Method
This method involves a trained philosopher injecting his a priori metaphysical,
epistemological or ethical notions into science with the aim of uplifting its empirical content
to the standard of it being adjudged the universal truth. In this regard, a philosopher is either
metaphysising, epistemologising or ethicising science. In applying this method to render
science accessible to people, the full meaning of the word philosophy, expressed in Greek as philein sophia, which means "love of wisdom", must be borne in mind. Wisdom is really
needed to be able to explain what is true or false and to deal with facts and to judge
experience in an uplifting and beneficial manner. Wisdom in its own sense can include
making sense of our existence, our actions and of our destiny by a judicious balance of
intuitive and discursive interpretations of our experience of being. In addition, the core areas
of philosophy namely metaphysics, epistemology and ethics need to be considered as the
watchwords when applying this method to render science accessible for people.
SCIENTIFIC EXPLANATION
The three cardinal aims of science are prediction, control, and explanation; but the greatest of
these is explanation. Scientific explanation aims at understanding science.
Philosophical Context
The concept of scientific explanation is very important in the philosophy of science for several reasons:
• Most people and scientists intuitively believe that one of the goals of science is to explain the phenomena in the world. Some people even believe that explanation is the main goal of science. Whether philosophers accept this intuitive belief is not so important; just the fact that there is such a belief is a good reason to analyse the concept of explanation, and in particular scientific explanation. One further indication that such a belief exists is that even empiricists, who think that prediction of phenomena is the goal of science, still explain why they do not accept explanation as a goal of science.
• Scientific realists use the "inference to the best explanation" (IBE) principle to address the strong under-determination problem and, in this way, to argue that science can create true knowledge even about non-observable (non-empirical) entities. The IBE principle, in short, says that between strongly empirically equivalent hypotheses, "the truth of the hypothesis which gives the best explanation of the phenomena should be inferred".
In addition to predicting future events, scientists more often than not use scientific theories to
explain the events that occur regularly or have already occurred. Philosophers have
investigated the criteria by which a scientific theory can be said to have successfully
explained a phenomenon, as well as what gives a scientific theory credibility or explanatory
power. Several models have been put forward to back up the explanatory power of scientific
theories.
The D-N Account of Hempel and Oppenheim (1948), is intended to capture the form of any
deterministic scientific explanation of an individual event, such as the expansion of a
particular metal bar when heated, the extinction of the dinosaurs, or the outbreak of the
American Civil War. According to Hempel and Oppenheim (1948), such an explanation is
always a deductive derivation of the occurrence of the event to be explained from a set of true
propositions including at least one statement of a scientific law. Intuitively, the premises of a
D-N explanation spell out the relevant initial, background, and other boundary conditions, together with the laws governing the behaviour of the system in which the explanandum (the event or phenomenon to be explained) occurred. Hempel and Oppenheim (1948) cite the following argument, for example, as a typical D-N explanation of the event of a thermometer's mercury expanding when placed in hot water:
The (cool) sample of mercury was placed in hot water, heating it.
Mercury expands when heated.
Thus, the sample of mercury expanded.
Because the law or laws that must be cited in a D-N
explanation typically “cover” the pattern of behaviour of which the explanandum is an
instance, the D-N account is sometimes referred to as the covering law account of
explanation.
Many scientific explanations of events and other phenomena undoubtedly have the form
proposed by the D-N account: they are logical derivations from laws and other information.
Although largely ignored for a decade, this view was later subjected to substantial criticism, resulting in several widely accepted counterexamples to the theory. The first counterexample concerns explanatory relevance:
The teaspoon of salt was hexed (meaning that certain hand gestures were made over the salt).
The salt was placed in water.
All hexed salt dissolves when placed in water.
Thus, the salt dissolved.
The explanation appears to attribute the salt’s dissolving in part to its being hexed, when in
fact the hexing is irrelevant.
The second important objection to the D-N account is its insufficient attention to the explanatory role of causal relations. For example, the height of a flagpole can be cited, along with the position of the sun and the law that light travels in straight lines, to explain the length of the flagpole's shadow. But the D-N account equally admits, as an explanation of the height of the flagpole, a sound, law-involving argument that cites, among other things, the length of the shadow. This consequence of the D-N account, that the height of a flagpole can be explained by the length of its shadow, seems obviously wrong, and it is wrong because a cause cannot be explained by its own effects.
The third class of objection to the D-N account focuses on the requirements that every
explanation cites a law, and that (except in probabilistic explanation) the law or laws be
strong enough to entail, given appropriate boundary conditions, the explanandum. One way
to develop the objection is to point to everyday explanations that cite the cause of an event as
its explanation, without mentioning any covering law, as when you cite a patch of ice on the
road as the cause of a motorcycle accident. More important for the study of explanation in science are varieties of explanation in which there is no prospect of, and no need for, either the entailment or the probabilification of the explanandum. Perhaps the best example of all is
Darwinian explanation, in which a trait T of some species is explained by pointing to the way
in which T enhanced, directly or indirectly, the reproductive prospects of its possessor.
Attempting to fit Darwinian explanation into the D-N framework creates a host of problems,
among which the most intractable is perhaps the following: for every trait that evolved
because it benefited its possessors in some way, there are many other, equally valuable traits
that did not evolve, perhaps because the right mutation did not occur, perhaps for more
systematic reasons (for example, the trait’s evolution would have required a dramatic
reconfiguration of the species’ developmental pathways).
To have a D-N explanation of T, one would have to produce a deductive argument entailing
that T, and none of the alternatives, evolved. One would have to be in a position, in other
words, to show that T had to evolve. Not only does this seem close to impossible; more
importantly, it seems unnecessary for understanding the appearance of T.
Inductive-Statistical Model or IS Account
In addition to the D-N model, Hempel and Oppenheim offered a probabilistic account of the explanation of events, referred to as the inductive-statistical or IS account. An IS explanation is a law-involving
argument giving good reason to expect that the explanandum event occurred. However,
whereas a D-N explanation is a deductive argument entailing the explanandum, an IS
explanation is an inductive argument conferring high probability on the explanandum. As
with the D-N account of explanation, a number of objections to the IS account have exerted a
strong influence on the subsequent development of the philosophical study of explanation.
Versions of both the relevance and the causal objections apply to the IS account as well as to
the D-N account.
However, Salmon attempted to provide an alternative model that would take care of the shortcomings of the models proposed by Hempel and Oppenheim (1948) by developing a statistical relevance model.
In addition to Salmon's model, others have suggested that explanation is primarily motivated
by unifying disparate phenomena or primarily motivated by providing the causal or
mechanical histories leading up to the phenomenon.
Empirical verification
Science relies on evidence to validate its theories and models, and the predictions implied by
those theories and models should be in agreement with observation. Unfortunately, one shortcoming of observations is that they depend at times on the unaided human senses of sight,
taste, touch and hearing. However, for this to be accepted by most scientists, several
impartial, competent observers should agree on what is observed. Observations should be
repeatable, for example, experiments that generate relevant observations can be (and, if
important, usually will be) done again. Furthermore, predictions should be specific and
scientists should be able to describe a possible observation that would falsify a theory or a
model that implies the prediction. Nevertheless, while the basic concept of empirical
verification is simple, in practice, there are difficulties as described in the following sections:
Induction
How is it that scientists can state, for example, that Newton's Third Law of Motion (to every action, there is an equal and opposite reaction) is universally true? After all, it is not possible to
have tested every incidence of an action, and found a reaction. There have, of course, been
several tests, and in each one a corresponding reaction has been found. But is it sure that
future tests will continue to support this conclusion?
One solution to this problem is to rely on the notion of induction. Inductive reasoning
maintains that if a situation holds in all observed cases, then the situation holds in all cases.
So, after completing a series of experiments that support the Third Law, and in the absence of any evidence to the contrary, one is justified in inferring that the Law will hold in all cases.
Although induction commonly works (e.g. almost no technology would be possible if
induction were not regularly correct), explaining why this is so has been somewhat a
herculean task. Induction cannot be justified by deduction, the usual process of moving logically from premise to conclusion, because no deductive argument can establish that unobserved cases will resemble the observed ones. Indeed, induction is sometimes mistaken. For example, up to the 17th century biologists had observed many white swans (large graceful birds) and none of any other colour, yet not all swans are white. Similarly, it is at least conceivable that an observation
will be made tomorrow that shows an occasion in which an action is not accompanied by a
reaction; the same is true of any scientific statement.
One answer would be to conceive a different form of rational argument, one that does not
rely on deduction. Deduction allows scientists to formulate a specific truth from a general
truth: all crows are black; this is a crow; therefore this is black. Induction somehow allows
scientists to formulate a general truth from a series of specific observations. For example,
this is a crow and it is black; that is a crow and it is black; no crow has been seen that is not
black; therefore all crows are black.
It is very important for science that the information about the surrounding world and the
objects of study are as accurate and reliable as possible. For the sake of this, measurements
which are the source of this information must be as objective as possible. Before the
invention of tools like weights, the metre rule, the clock, etc., the only sources of information available to humans were their senses of vision, hearing, taste, touch, heat and gravity. Because human senses differ from person to person (due to wide variations in personal
chemistry, deficiencies, inherited flaws, etc.) there were no objective measurements before
the invention of these tools. The consequence of this was the lack of a rigorous science.
However, with the advent of exchange of goods, trades and agriculture, the need for such
measurements, and science based on standardised units of measurement became imperative.
To further abstract from unreliable human senses and make measurements more objective,
science uses measuring devices such as spectrometers, voltmeters, interferometers,
thermocouples and counters, and more recently, computers. In most cases, the less the
human involvement in the measuring process, the more accurate and reliable the scientific
data is. Currently, most measurements are done by a variety of mechanical and electronic
sensors directly linked to computers which further reduces the chance of human
error/contamination of information.
Another question about the objectivity of observations relates to the so-called experimenter's regress, a problem identified in the sociology of scientific knowledge: the cognitive and social biases of the people who interpret observations or experiments may lead them, unconsciously, to interpret and describe what they see in their own way.
CONCLUSION
Overall, the present chapter has made some attempt at several definitions of science with
focus on knowledge. It has also addressed the concept of philosophy of science and scientific
explanations adopting some models such as Deductive-Nomological (D-N), Inductive
Statistical (IS) and Statistical Relevance (SR) with some specific examples. The basis for the
validity of scientific explanations and objectivity of observation in science were also
addressed.
EVALUATION STRATEGIES
Practice Questions
1. Scientia, which means knowledge, is a
A. Latin word
B. Greek word
C. Hebrew word
D. Arabic word
2. One of these is not a method used in the study of philosophical science
A. Analytical method
B. Pedestrian method
C. Critical method
D. Original philosophical method
3. One of the most potent tools of the sciences for the discovery of new facts and more
accurate understanding of existing facts is
A. Explanation
B. Exponential
C. Experiment
D. Exfoliation
4. Chemical balance, metre rule and spring balance are instruments of
A. Measurement
B. Marking
C. Shaping
D. Pinging
5. The scientist rejects authority as the ultimate basis for
A. Trial
B. Hypothesis
C. Truth
D. Theory
6. The study of the theory of knowledge is referred to as
A. Pedestrian
B. Metaphysics
C. Epistemology
D. Biology
7. Ethics is the study of
A. Moral Behaviour
B. Ethnic Group
C. Experience
D. Reality
8. All the definitions of science revolve around
A. Observation
B. Experimentation
C. Knowledge
D. None of the above
9. Which of the following is not a model that can be used for scientific explanation?
A. IS
B. BI
C. DN
D. SR
10. Which of the following is the most recent device?
A. Spectrometer
B. Galvanometer
C. Computer
D. Battery
REFERENCES
Chapter Three
CONCEPT OF MATTER
Akoshile, C.O.1 and Abdus-Salam, N.2
1Department of Physics, University of Ilorin
2Department of Chemistry, University of Ilorin
INTRODUCTION
This chapter discusses the concept of matter. Often, one keeps in mind what one thinks matter is, but this does not amount to a universal definition. A concept is like an accepted norm or definition. So, while trying to understand matter, the adopted concept is what is perceived and accepted as matter. Interestingly, only two kinds of matter exist in the world: living and non-living.
OBJECTIVES
What is Matter?
Matter exists as a living and non-living entity. Living matter has the properties of respiration,
growth, movement, metabolism (eating and excretion) and reproduction. Non-living matter
does not exhibit the above properties. Growth in non-living matter only comes if there is an
addition of the same or different matter by some processes to the matter. Matter is
constituted. This means that matter is also made up of something else.
Concept of Matter
Attempts to develop the concept of matter involve many propositions and the development of hypotheses. From such hypotheses, a theory of matter emerges. The first step is to know the properties of matter.
The simplified definition of matter is anything that occupies space, possesses mass of its own, offers resistance to a change of its state of motion (inertia) and may be felt by any of our sensory organs. Matter can exist in any of the physical states of solid, liquid, gas or plasma. Matter takes its
own shape when in solid form. It takes the shape of the container when in liquid (and flows
when poured) and occupies all available spaces as gas or plasma.
KINETIC THEORY OF MATTER
The word kinetic stands for motion. The Greeks, in the early stages of the formulation of the kinetic theory, conceptualised that if an attempt is made to continually subdivide matter, a smallest unit will be reached that can still exist on its own. This unit is discrete, that is, it is a repeatable entity that cannot be continuously fractionalised at will. This entity is called a molecule and it is made up of one or more atoms.
Matter can exist as a mixture or pure substance. Matter in the pure form exists as an element
such as Hydrogen, Oxygen, Nitrogen, etc. or as compounds such as water, ammonia, carbon
dioxide. When matter exists as a mixture, it could be homogeneous such as a solution (e.g.
salt in water) or air (e.g. a mixture of N2, O2, CO2 and H2O), or a heterogeneous mixture
such as chocolate or soil.
Matter can be represented pictorially. A theory is a statement of facts for understanding,
explaining and making predictions about an observable phenomenon. It is used as a plausible
general principle to explain a phenomenon. A scientific law is a statement of fact that has
been subjected to critical analysis, experimentation and found to correctly explain an
observable phenomenon under condition(s) stated.
An element has only one type of atom, e.g. hydrogen. About 118 elements are known and are arranged into seven (7) periods of the periodic table. These elements are found naturally in one of the three physical states of matter, for example mercury and bromine as liquids; sodium and copper as solids; and hydrogen and oxygen as gases.
A compound has more than one type of atom bonded together chemically which can only be
separated by a chemical process e.g. H2O is made up of hydrogen and oxygen. A mixture
however, is made up of more than one element or compound in a weak bond that requires no
chemical process to separate, but requires only a simple physical procedure. For example, a
class of boys and girls is a mixture that can be separated by simple instruction of “boys, sit
down” and “girls, stand up”.
The constituent of a homogeneous mixture is in the same state (gas, liquid or solid) and all
constituents are thoroughly mixed. They are visually inseparable. The constituents of
a heterogeneous mixture may or may not be in the same physical state (Figure 2). The
constituents are visually separable. The constituents (parts) of homogeneous and
heterogeneous mixtures can be separated by physical processes. For example, a mixture of
clay and water (heterogeneous) can be separated by filter paper, and water from salt water
(homogeneous) by boiling the water off.
All matter is capable of change from one physical state to another due to temperature change
experienced by the matter. The change may involve transformation of form when a chemical
reaction is involved. The former is referred to as physical change and the latter as chemical
change. The nature of matter obtained when a chemical change occurs is fundamentally
different from the starting matter. For example, when an iron bar is exposed to the right
humidity, temperature and air, it rusts. The product of rust is different from the pure iron bar.
All physical change involves a change of state. Most of the time, matter changes from solid to liquid and then to gas, but in a few cases it changes from solid to gas without passing through the liquid state. Such substances are said to sublime, and the process of change is called sublimation. Examples include iodine and ammonium chloride.
Figure 2: Matter as a mixture, classified into homogeneous and heterogeneous.
Avogadro’s number is 6.023 × 10^23, which represents the number of molecules or atoms present in one gram molecular weight or one gram atomic weight respectively. This number of molecules of hydrogen will occupy 22.4 dm^3 at standard temperature and pressure (S.T.P.) of T = 273 K or 0 °C and P = 76 cmHg.
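As a rough illustrative sketch (in Python, with the hydrogen sample mass below chosen purely as an assumption), these figures can be combined to count molecules and estimate the volume occupied at S.T.P.:

N_A = 6.023e23            # Avogadro's number (value quoted above), molecules per mole
molar_mass_H2 = 2.016     # g per mole of hydrogen gas, H2
molar_volume_stp = 22.4   # dm^3 occupied by one mole of any gas at S.T.P.

mass = 4.0                                  # g of hydrogen, an assumed sample
moles = mass / molar_mass_H2                # number of moles in the sample
molecules = moles * N_A                     # number of H2 molecules
volume = moles * molar_volume_stp           # dm^3 occupied at S.T.P.
print(molecules, volume)                    # about 1.2e24 molecules in about 44 dm^3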
Pressure (P) is defined as force exerted per unit area of a surface by a material. Force (F) is
the amount of pull or push on a body. It is measured from Newton’s law of motion as a
function of the acceleration it produces.
F = ma (1)
where m is mass and a is acceleration. When the body is falling freely under gravity, the
acceleration is said to be due to gravity and it has a value a = g = 9.81 m/s^2.
Acceleration (a) is the rate of change of velocity with time:
a = dv/dt = (v2 – v1)/(t2 – t1) (2)
where v1, v2 and dv are the initial velocity, final velocity and change in velocity respectively, and t1, t2 and dt are the corresponding values of time.
Velocity (v) is the rate of change of distance with time in a given direction:
v = ds/dt = (s2 – s1)/(t2 – t1)
where s is distance.
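A minimal numerical sketch of these definitions and of equation (1), in Python, using made-up sample values for the distances, times and mass:

t1, t2 = 0.0, 2.0          # s, two instants of time (assumed)
s1, s2 = 0.0, 10.0         # m, the corresponding distances (assumed)
v = (s2 - s1) / (t2 - t1)  # average velocity over the interval, m/s

v1, v2 = v, 9.0            # m/s, velocity at a later instant assumed to be 9.0
a = (v2 - v1) / (t2 - t1)  # acceleration from the change in velocity, m/s^2

m = 3.0                    # kg, an assumed mass
F = m * a                  # force from equation (1), newtons
print(v, a, F)             # 5.0 m/s, 2.0 m/s^2, 6.0 N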
STATES OF MATTER
The states of matter are distinguishable by the temperature and appearance of the matter. This
is shown in Fig. 3, where the cooling curve of temperature against time is plotted.
Figure 3 shows that a body exists as solid, liquid or gas depending on its
temperature. Example is water in solid form as ice, in liquid form as water and in gaseous
state as steam. This also implies that the type of motion exhibited by matter depends on its
temperature while the temperature depends on how much energy it possesses. This means we
can explain the behaviour of matter by understanding its state or motional behaviour.
Kinetics
The particles of solids only vibrate and rotate about a mean position.
Particles of liquids vibrate and rotate about a mean position but can also easily slide over
each other.
Particles of gas move randomly and are translated from one place to another. In building up
the kinetic theory, some fundamental assumptions are made and employed.
They are:
1. Particle dimension is much less than the distance between collisions.
2. Particle velocity is large such that there are many collisions occurring in a short time
interval.
3. Separation between particles is large such that mutual coulombic (charged-particle) forces of attraction or repulsion are negligible.
4. Collisions between particles are perfectly elastic.
5. Particles have no sense of history between collisions.
6. Motion is random.
Let the particle velocity v be defined as the mean
v = (∫ ni vi du)/(∫ ni du)
or, for discrete groups of particles,
v = (n1v1 + n2v2 + …)/(n1 + n2 + …)
where ni is the number of particles having velocity vi.
This was shown earlier to depend on temperature (T), i.e. the velocity v is proportional to the temperature T. Suppose the container is assumed to be a rectangular box (Fig. 4) with particles in it. There is equal probability of a particle colliding with any of the 6 faces of the box, i.e. n/6 particles are moving in any one direction. When they collide with the box, the result is a pressure exerted on that wall of the box.
Let the volume of the gas be V and the number of particles per unit volume be n. Then the total number of particles in the box = nV.
The number of particles colliding with unit area per second in a given direction = nv/6
If the distance covered by particle = L
And mass of the particle = m
The momentum of the particle = mv
For elastic collision, it will rebound in the opposite direction with momentum = -mv
Leading to change in momentum = mv – (- mv) = 2mv
Hence, the momentum change per second for all the particles striking unit area of the wall in the x-direction = 2mv × nv/6 = nmv^2/3.
Since force is the rate of change of momentum, this is the force per unit area of the wall, that is, the pressure: P = nmv^2/3. For a wall of area L^2, the force F on it is F = PL^2.
The kinetic energy of the gas = (1/2)Mv^2.
The product
PV = (1/3)Mv^2 = (2/3)(1/2 Mv^2) = (2/3) K.E. (5)
or PV = RT (6)
since the kinetic energy K.E. depends on the temperature.
At constant temperature this is Boyle’s law (PV = constant).
This result is experimentally observed and hence it justifies the concept employed and the assumptions made.
From this, one can obtain the molecular (root-mean-square) velocity. That of hydrogen at S.T.P. is about 1840 m/s.
PV = (2/3) N (1/2 mv^2) (7)
where N = Na, the Avogadro number,
PV = RT (8)
where R is a constant and T is the temperature in absolute units, since the velocity depends on the temperature. The gas constant R is 8.32 J mol^-1 K^-1.
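The quoted molecular velocity of hydrogen (about 1840 m/s) can be checked with a short sketch, assuming the usual root-mean-square relation v = sqrt(3RT/M) at T = 273 K:

import math

R = 8.32        # J mol^-1 K^-1, gas constant (value used above)
T = 273.0       # K, standard temperature
M = 2.016e-3    # kg mol^-1, molar mass of hydrogen gas

v_rms = math.sqrt(3 * R * T / M)
print(round(v_rms))   # roughly 1840 m/s, in agreement with the figure quoted above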
The mean free path of a molecule is obtained by assuming the molecule is spherical and of diameter d. The volume swept per unit time = πd^2 v, and in the process the molecule encounters πd^2 v n other molecules, for n molecules present in unit volume. The distance covered per unit time is v, and the mean free path before collision is L:
L = v/(πd^2 v n) = 1/(πd^2 n) (9)
A more rigorous treatment than is given here gives
L = 1/(√2 πd^2 n) (10)
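As an order-of-magnitude illustration of equation (10), with an assumed molecular diameter and the approximate number density of a gas at S.T.P. (both figures are assumptions, not values from the text):

import math

d = 3.0e-10    # m, assumed molecular diameter
n = 2.7e25     # molecules per m^3, roughly the number density of a gas at S.T.P.

L = 1.0 / (math.sqrt(2) * math.pi * d**2 * n)
print(L)       # about 1e-7 m, i.e. of the order of a tenth of a micrometre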
Subatomic particles
Beyond the study of the atom are the subatomic particles, revealed by probes made using X-rays.
Atomic models were constructed by Rutherford and Bohr. The model showed that the atom
had a small positively charged nucleus surrounded by electrons. Hydrogen is the smallest atom, having a nucleus of one proton with an electron moving round it in a circular orbit, as shown in Fig. 5.
Figure 5: Hydrogen atom
More complicated atoms have electrons moving in elliptical orbits.
The nucleus contains protons and neutrons. The particles of the atom have the following properties:
Particle    Symbol    Charge            Mass
Proton      p         positive (+1)     1.673 × 10^-27 kg
Neutron     n         neutral (0)       1.675 × 10^-27 kg
Electron    e         negative (-1)     9.1 × 10^-31 kg
where the charge of 1e = 1.602 × 10^-19 coulombs.
A neutral atom always has equal number of protons and electrons since the neutron has no
charge.
Positive and negative charges (positive-negative or negative-positive) attract each other while
two similar charges (positive-positive or negative-negative) repel each other. Some elements
are observed to be radioactive e.g. uranium. They emit radiation which can be split using
magnetic or electric field as shown in figure 6.
An example of a reaction that leads to the emission of an alpha particle is
Ra-226 (Z = 88) → Rn-222 (Z = 86) + He-4 (Z = 2, an alpha particle) + radiation
Other types of radiations are X-rays.
Both alpha and beta particles are deflected by magnetic and electric fields; the alpha particle is a helium nucleus and beta particles are electrons. Examples of other nuclear reactions are:
H-1 + H-1 → D-2 (deuterium) + e+ (a positron, β+) + a neutrino
H-3 (tritium) → He-3 (helium-3) + e- (a beta particle, β-) + an anti-neutrino
The above are called nuclear reactions. A nuclear particle is represented by its symbol X carrying three labels:
A, the mass number (a superscript before the symbol),
Z, the proton number (a subscript before the symbol), and
N, the neutron number (a subscript after the symbol),
with A = Z + N.
Mass defect
During a nuclear reaction, a mass defect Δm is observed. Δm is the difference between the mass of the products and that of the reactants. This seems to imply a violation of the law of conservation of mass. This law has since been restated as the law of conservation of mass-energy, after Albert Einstein showed that the change in mass is turned into energy, obeying the relation:
E = Δmc^2 (11)
The mass defect Δm is calculated from a knowledge of the values of the masses of protons, neutrons, electrons, etc. A nuclide of mass number A and proton number Z contains Z protons, of total mass ZMH (taking MH as the mass of the hydrogen atom), and (A - Z) neutrons, of total mass (A - Z)Mn. The sum of these two masses, when the actual mass MA of the nuclide is subtracted from it, is found not to be equal to zero. The difference is the mass defect Δm, i.e. in its formation from its constituent particles the nuclide shows a mass defect
Δm = ZMH + (A - Z)Mn - MA
This change in mass then shows up as energy; mass and energy thus become interchangeable. Mass can change into energy and vice versa. This energy is observed in nuclear reactions such as fission or fusion and, for the fission of a heavy nucleus such as uranium, is about 200 mega-electron volts (200 MeV). Fusion is the bonding together of two nuclei while fission is the division of a nucleus into two or more smaller nuclei.
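A small sketch of the mass-defect formula and equation (11), worked for helium-4 as an assumed example (the helium mass below is not a value given in the text):

M_H = 1.007825   # amu, mass of a hydrogen atom (proton plus electron)
M_n = 1.008665   # amu, mass of a neutron
M_A = 4.002602   # amu, atomic mass of helium-4 (assumed example nuclide)
Z, A = 2, 4      # proton number and mass number of helium-4

delta_m = Z * M_H + (A - Z) * M_n - M_A   # mass defect in amu
energy = delta_m * 931.5                  # MeV, since 1 amu is equivalent to about 931.5 MeV
print(delta_m, energy)                    # about 0.030 amu, about 28 MeV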
The subject is two-fold. Nuclear war is first discussed; the threat of nuclear war and the implications of the subject matter, i.e. nuclear war, are then deliberated upon.
War has to do with armed combat of two parties in dispute over a matter. This often involves
two or more people or nations. The type of war in question is nuclear war. The last bit of the
discussion of concept of matter treated above has to do with nuclear energy which is the
energy released during a nuclear reaction. The energy is shown to be enormous. It is of the
order of mega-electron volts or millions of electron volts (MeV). What makes it of special
concern is that this enormous amount of energy is released in a short time on a small piece of land area, producing a tremor with tremendous impact over a long range. When such a war is
becoming feasible, it is considered a threat. If it eventually happens; it is then a nuclear war
and the consequence(s) is/are regarded as the implication(s) of such war. It is important
therefore to understand the nuclear energy involved.
Nuclear Reactions
There are fission and fusion nuclear reactions. Some are stimulated while some are
spontaneous reactions. Uranium, for example, gives a spontaneous fission reaction:
U-228 → Pb-208 + Ne-20 + radiation
producing daughter particles of lead (Pb) and neon (Ne) accompanied by gamma (γ)
radiation of large energy.
Consider the reaction of lithium and hydrogen.
Li-6 + H-1 → 2 He-3 + n + 17.3 MeV
In other fusion reactions, alpha particles are produced along with positrons, neutrinos and an energy burst of about 26 MeV.
It is useful to discuss the uranium reaction in greater detail. A neutron-induced fission reaction of uranium (236U) is possible. The mass number (A) is 236. The average Coulomb (fission) barrier for a nucleus with A around 240 is 5 to 6 MeV. By targeting a uranium nucleus with a neutron of kinetic energy 1.4 MeV, the reaction can commence. It is found that a nucleus with an odd number of nucleons (odd A) can be induced to fission even by a zero-energy neutron, while the more stable even-A nuclei require more energy. Examples of fissile nuclei are 235U and 239Pu.
The natural abundance of uranium is 0.72% 235U and 99.27% 238U. These are some of the isotopes of uranium.
Chain reaction
A chain reaction occurs when the products of one reaction initiate further reactions, resulting in an avalanche. This results in a multiplication of the energy produced per step, and in a very short time a large amount of energy is released. This large amount of energy is released in a very short time into a very small volume of space, resulting in an explosion or a bomb. 235U is the more useful isotope in this process but, as shown earlier, it is not as prominent in nature. An enrichment technique is employed to increase the concentration of 235U in a uranium sample.
Enriched uranium will have about a one-in-six (roughly 17%) chance of inducing fission per collision. When a neutron moves, it has a probability of being captured, of being scattered elastically or of producing a fission reaction.
The net random distance moved is about 7 cm from the starting point, and the mean time between generations is tp = 10^-8 s.
A neutron is replaced on the average by 2.5 new neutrons, or by about 5 new neutrons in 2 × 10^-8 s. If the probability that a newly created neutron will induce fission is f, and each fission produces ν neutrons, then a neutron produces (νf - 1) additional new neutrons in time tp.
These are prompt neutrons and not the delayed neutrons that can come much later. It is found
that the delayed neutrons are dangerous and can lead to health hazard.
If at the initial time the reaction has n(0) neutrons, then at some later time t there will be n(t) > n(0) neutrons; the growth is exponential, as depicted in Fig. 6.
For 235U, f must exceed 1/ν ≈ 0.4 for the chain to grow; the fission energy itself can be calculated by using the mass difference.
Figure 6: Neutron Number Growth.
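A minimal sketch of this exponential growth of the neutron number, using the generation time and neutrons-per-fission figures quoted above and an assumed value for the fission probability f:

import math

n0 = 1.0       # assumed initial number of neutrons
nu = 2.5       # average neutrons produced per fission (from the text)
f = 0.5        # assumed probability that a neutron induces fission
tp = 1e-8      # s, mean time between generations (from the text)

k = (nu * f - 1) / tp                 # net growth rate per second
for t in (0.0, 1e-7, 2e-7, 5e-7):     # times in seconds
    n = n0 * math.exp(k * t)          # n(t) grows exponentially when nu*f exceeds 1
    print(t, n)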
Control is achieved by inserting, or making a regulated withdrawal of, control rods. The heat generated in the process is used to generate (superheated) steam, which is directed to turn turbines for the generation of electricity. This is one of the good uses of a nuclear reactor. The reactor is cooled using pressurised carbon dioxide (CO2) gas or pressurised water that is not allowed to boil.
Another nuclear fuel is plutonium (Pu), obtained from a daughter of uranium:
U-239 → Np-239 + e- (β-) + an anti-neutrino
Np-239 → Pu-239 + e- (β-) + an anti-neutrino
239Pu behaves like 235U. It has a lifetime of 3.5 × 10^4 years. It is produced, or “bred”, in a breeder reactor in weapons factories.
Figure 7. Spontaneous emission of radiation from a radioactive source.
The magnitude of the energy produced in a nuclear reaction has been shown to be very large
even by 1g of uranium nuclear fuel.
In 1945, during World War II, Hiroshima and Nagasaki in Japan were bombed with atomic (fission) bombs, in which about 150,000 lives were lost. World War II wound up within weeks of the drops. Besides those who died instantly, many who suffered from the radioactive dust and fallout, or even the after-shock, later died, developed cancers, or became disabled or deformed. Its other effects, which include psychological disorders, lingered on for many years. This scenario is well depicted in the film “The Day After”. Not long after, countries like the United Kingdom, France and Russia developed their own nuclear armaments.
In 1946, the United States Congress passed the Atomic Energy Act. Later, following agitations for arms control and disarmament, including the right to establish nuclear-free zones, the “Nuclear Non-Proliferation Treaty” was opened for signature in 1968 under the United Nations, the successor of the League of Nations. The intention was never to have to use this weapon of mass destruction again. Signing of the treaty was made voluntary. Those who had the capability to develop nuclear warheads first constituted themselves into superpowers with veto power in the Security Council of the United Nations.
By the middle of the 20th century, political ideologies had separated the communist from the democratic countries, with each party forming a “bloc” having a trail of weaker countries behind it seeking protection. The Soviet Union formed one bloc while the Americans and the Western Europeans formed the opposing NATO bloc. This led to the start of the “cold war”. Nuclear warhead launchers with multiple warheads were built and hidden underground and under water on submarines, targeted towards each other’s most populous cities and military formation posts. This later metamorphosed into arms control treaties aimed at limiting how many nuclear warheads can be produced, where they can be targeted and how they can be hidden or declared, and at discouraging the transfer of the technology to new countries. China, with a population of more than 1 billion people, eventually joined the Security Council, owning warhead carriers including ground-to-air missiles and hard-to-detect stealth missiles.
The effects of nuclear wastes or of a mishap were observed in the Chernobyl (USSR) nuclear accident. High doses of radiation and polluted dust and air, carried by the wind or present in the vicinity of the accident, posed dangers to all human beings, animals and vegetation.
Country after country signed the treaty (ie the Nuclear Non-Proliferation Treaty) while some
deliberately refused to sign. South Africa had the potential to build one and in the era of
apartheid refused to sign the treaty and hid her project. In the Middle East, countries like Iran
and Iraq refused to sign the treaty and complained about Israel’s refusal to sign. When Iraq, a
country which openly declared Israel an enemy, was evaluated by Israel to be close to
developing a nuclear bomb, she targeted and bombed the site. Nigeria, before developing even her Centre for Research in Nuclear Technology, boasted that she was out to develop her own African bomb. Very soon after making this declaration, she entered “the confusion age” of economic, political and social disaster under military rule, from which she has not fully recovered up till now.
When the Soviet Union split into many countries, there was fear that the know-how of nuclear technology would get into the hands of unpredictable world leaders (of the likes of Idi Amin of Uganda), and every effort was made to contain the dispersal of the nuclear scientists from the former Soviet Union.
Presently, at the beginning of the 21st Century, North Korea has been pressured to disengage from nuclear development and proliferation by other powers in Europe and America. She blew up a nuclear cooling tower in exchange for fossil fuel and food, but was back to testing nuclear warheads in 2017. Iran may be coming to that stage next. The politics of nuclear power is on the front burner of world political leaders’ diplomatic activities.
The above has shown the strength or otherwise of the treaty and implications of nuclear war
as opposed to conventional warfare where troops of dissenting nations face each other in man
to man combat. Treaties are also being formulated to keep space free and to free other planets
from nuclear pollution. Nuclear war can be fought remotely to the extent of annihilating most
of the known world. One may just hope that sanity will prevail.
EVALUATION STRATEGIES
Practice Questions
1. Using particle in a box model, show that
pV = RT
where R is a constant and T is temperature in absolute unit and P, V take their usual meaning.
2. Compute the mass defect of Cu-63 (Z = 29) in MeV, given that it has mass 62.929594 amu, mass of proton = 1.007825 amu and mass of neutron = 1.008665 amu.
3. Distinguish between the following pairs:
(i) Proton and neutron
(ii) Positron and electron
(iii) Mass of a particle and mass defect in a nuclear reaction
(iv) Fission reaction and fusion reaction
(v) Mixture and compound.
4. Briefly discuss the Nuclear non-proliferation Treaty.
5. Matter is found usually in three phases of solid, liquid and gas. Discuss how the
transition from phase to phase occurs.
BIBLIOGRAPHY
Cutnell, J.D. and Johnson, K.W. (1989). Physics. John Wiley and Sons, NY.
Goldstein, H. (1965). Classical Mechanics. Addison-Wesley Publishing Company.
Halliday, D. and Resnick, R. (1981). Fundamentals of Physics. John Wiley and Sons, NY.
Marion, J.B. and Hornyak, W.F. (1982). Physics for Science and Engineering, Part 1. CBS College Publishing, NY, USA.
Tyler, F. (1974). A Laboratory Manual of Physics, Fourth Edition. Edward Arnold (Publishers) Ltd, London.
Chapter Four
CONSERVATION OF CONVENTIONAL AND RENEWABLE ENERGY SOURCES AND THEIR
CONVERSION TECHNIQUES
1Omosewo, E.O., 2Olaoye, J.O., 3Ajibola, T.B., and 4Ajimotokan, H.A.
1Department of Science Education, University of Ilorin, Ilorin, Nigeria
2Department of Agricultural and Biosystems Engineering, University of Ilorin, Ilorin, Nigeria
3Department of Physics, University of Ilorin, Ilorin, Nigeria
4Department of Mechanical Engineering, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
Energy is one of the most fundamental parts of our universe. Everything we do is connected
to energy in one form or another. We use energy to do work. Energy lights our cities. Energy
powers our vehicles, trains, planes and rockets. Energy warms our homes, cooks our food,
plays our music, and gives us pictures on television. Energy powers machinery in factories
and tractors on a farm among other areas of application. Therefore, energy drives our day to
day activities, at home, schools, businesses or offices.
Concern for the generation of, and access to, affordable energy remains an issue of national interest, while energy trapped in diverse and abundant natural resources within our immediate environment remains unharnessed for human utilisation. Biomass is one of the available
natural resources and sometimes depicted as waste that has potentials for generation of
sustainable energy. Generally, biomass as matter can be considered as garbage. Some of it
includes stuff lying around our environment which may include dead trees, tree branches,
locks of hair, left-over crops, wood chips (like heaps of corn chaff or rice husk), bark and
sawdust from lumber mills. It can even include used tires and livestock manure.
Modern societies employ automobiles, airplanes, trucks and trains for movement of people
and commodities. They use engines for machines to produce necessities as well as luxuries;
and electricity for motors and lights. There is the need for energy and energy conversion in
all these examples. Therefore, in this chapter, efforts are made to discuss energy and its
conversion from one form to another. Types, workability, advantages and disadvantages of
solar, fossil fuel, wind, tidal, hydroelectricity, geothermal and nuclear energy are discussed.
Conversion of potential energy to kinetic energy, chemical to electrical to light energy,
electric to heat energy, heat energy to electrical energy and heat energy to light energy are
discussed. Also, conversion of mechanical energy to electrical to heat energy; and conversion
of electrical energy to mechanical to sound energy are discussed. This is followed by the
reference section, while some practice questions are set for students.
Within the context of this presentation, sustainable energy is energy that ensures sustainable development, while sustainable development is considered as a pattern of resource use that aims to meet human needs while preserving the environment, so that these needs can be met not only in the present but also for generations to come. Therefore, sustainable economic development is facilitated by a country’s energy conservation, which brings about a reduction in energy cost, accelerates the production of goods and services, and allows businesses to flourish on improved access to energy supply. Without energy conservation, manufacturers will pass the production cost on to the consumers.
This chapter comprises two main topics, namely: conservation of natural and artificial energy resources; and biomass as a sustainable source of renewable energy and its conversion techniques.
OBJECTIVES
Mechanical Energy
Kinetic Energy: This is the energy associated with masses in motion. An example of this type of energy is when a boy runs or when water pours down a waterfall. It is half the product of the mass and the square of its velocity (KE = (1/2)mv^2).
Potential Energy: This is the energy possessed by a system or an object due to its position.
There are varieties of potential energies. For example, there is mechanical potential energy in
the wound spring of a clock or a stretched bowstring. There is gravitational potential energy
in anything lifted against the pull of gravity, such as a stone lifted by a person. There is
chemical potential energy in almost every known substance, since there is hardly anything
known which will not react with some chemical agent and release its energy. There is
electrical potential energy stored in an electrical field. The water at the top of a dam has
potential energy and the potential energy depends on the mass, height, gravity and density.
Electrical Energy: Electricity is different from the other energy sources because it is a
secondary source of energy. We must use another energy source to produce electricity.
Electricity is sometimes called an energy carrier because it is an efficient and safe way to
move energy from one place to another, and it can be used for so many tasks. As we use more
technology, the demand for electricity grows.
The rate of flow of electric charge with time is called electric current and is measured in amperes. The current that flows through a metallic conductor is proportional to the potential difference across its ends, provided the temperature and all other physical quantities are constant (Ohm’s law, V = IR). The constant of proportionality is the resistance, measured in ohms.
Power is the rate of using or producing energy; it is measured in watts.
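A short sketch of these electrical relations, current, Ohm’s law and power, with assumed values for a simple conductor:

V = 12.0         # volts, potential difference across the conductor (assumed)
R = 6.0          # ohms, resistance of the conductor (assumed)

I = V / R        # amperes, from Ohm's law V = I R
P = I * V        # watts, power delivered (equivalently I**2 * R)
Q = I * 60.0     # coulombs of charge flowing in one minute

print(I, P, Q)   # 2.0 A, 24.0 W, 120.0 C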
(a) Solar cells (called “photovoltaic” or “photoelectric” cells) convert light directly into
electricity. In a sunny climate, you can get enough power to run a 100w light bulb from just
one square metre of solar panel. (Fig. 1.1).
Fig. 1.1: A solar (photovoltaic) cell: solar energy in, electrical power out.
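The one-square-metre claim can be checked with a rough sketch; the insolation and cell efficiency figures below are assumptions, not values from the text:

insolation = 1000.0   # W per m^2 reaching the panel in bright sunshine (assumed)
efficiency = 0.15     # fraction converted to electricity by the cell (assumed)
area = 1.0            # m^2 of panel

electrical_power = insolation * efficiency * area
print(electrical_power)   # about 150 W, comfortably enough for a 100 W bulb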
(ii) Water can be pumped through pipes in the panel. The pipes are painted black, so they
get hot when the sun shines on them. This helps out the central heating system and reduces
the fuel bill. However, in the U.K. or U.S.A., there may be a need to drain the water out to stop the panels from freezing in the winter.
Solar heating is worthwhile in places like the USA and Australia, where a lot of sunshine is obtained. As technology improves, it is becoming worthwhile in the U.K. as well; during summer in the U.K., a lot of domestic water can be heated.
Fig. 1.2: A solar heating panel, showing the glass cover, shiny surface, black surface and the pipes from and to the water tank.
Fig. 1.3: Arrangement of a Wind Energy Assembly: propeller blades, gearbox and generator in a housing which can be rotated to face the wind, mounted on a tower.
(iv) Water quality and quantity downstream can be affected, which can have an impact on
plant life.
Tidal Energy
The oceans contain an enormous amount of thermal energy which cannot be extracted by a
heat engine whose operating temperature matches the surface temperature of the ocean.
However, the use of the Ocean Thermal Energy Converter allows the extraction of heat from
the upper part of the ocean, converting some of the energy to useful work, and rejecting the
remainder to the cooler deep region.
The features of Ocean Thermal Energy Converters (OTEC) are:
(1) it is essentially pollution free; and
(2) it makes use of the sun, which replenishes the internal energy of the ocean surface with its radiation.
At present, there are about two tidal power schemes operating in the world, one in France and
the other one in Russia. This is because the construction of such a system in the ocean
presents some engineering problems.
Advantages of Tidal Energy
(i) The energy is free – no fuel needed, no waste produced.
(ii) Not expensive to operate and maintain.
(iii) Can produce a great deal of energy.
(iv) It produces no greenhouse gases or other wastes.
Natural gas is a mixture of various compounds of carbon and hydrogen and small quantities
of non-hydrocarbons existing in the gaseous phase, or in solution with oil, in natural
underground reservoirs. It is classified into two categories: associated gas and non-associated gas.
Associated gas is natural gas originating from fields producing both liquid and gaseous
hydrocarbons simultaneously. On the other hand, non-associated gas is natural gas which is
obtained independently. It is generally found in the space above an oil reservoir or an aquifer.
Typically, the major constituents of associated natural gas are methane (about 50 per cent by
volume), ethane (about 20 per cent) and propane (about 10 per cent). The major constituent
of non-associated natural gas is methane (more than 90 per cent by volume).
Natural gas provides around 20% of the world’s consumption of energy, and as well as being
burnt in power stations, is used by many to heat their homes.
Advantages of Fossil Fuels
(i) Very large amounts of electricity can be generated in one place using coal, fairly
cheaply.
(ii) Transporting oil and gas to power station is easy.
(iii) Gas-fired power stations are very efficient
(iv) A fossil fuelled power station can be built anywhere, so long as you can get large
quantities of fuel to it. Didcot power station in Oxfordshire has a dedicated rail link to
supply the coal
Disadvantages of fossil fuels
(i) Basically, the main drawback of fossil fuels is pollution. Burning any fossil fuel
produces carbon dioxide, which contributes to the ‘greenhouse effect’ warming the
earth.
(ii) Mining coal can be difficult and dangerous. Strip mining destroys large areas of the
landscape
(iii) Burning coal produces more carbon dioxide than burning oil or gas. It also produces
sulphur dioxide, a gas that contributes to acid rain. This can be reduced by treating the waste gases before they are released into the atmosphere.
(iv) Coal-fired power stations need huge amounts of fuel which means train-loads of coal
almost constantly. In order to cope with changing demands for power, the station
needs reserves. This means covering a large area of country-side next to the power
station with piles of coal.
Nuclear Energy
Nuclear power is generated using heavy radionuclides, such as uranium, for fission, or very light nuclei for fusion.
Fission or fusion makes heat; the heat turns water to steam; the steam turns turbines; the turbines turn generators; and the electrical power is sent around the country.
Nuclear reactors must have strategic arrangement for
(1) removing thermal energy from the reactor and
(2) a scheme for controlling the energy output.
Countries such as Russia, France and England have successfully operated nuclear
breeder reactor electric power plants.
Special Problem with handling Plutonium
Extracting plutonium is not simple because the element is extremely dangerous. It has
chemical effects on the body. Also, it is radioactive and it decays by emitting alpha particles,
which are among the most damaging nuclear particles to the internal organs of the human
body. If plutonium is taken into the body in soluble form, it concentrates in the bones and the
liver and tends to remain there. If taken into the lungs as small particles, it can produce
intense local damage and possibly induce cancer. For these reasons, plutonium processing is
done by remote-control methods. Another problem is that plutonium can be stolen to produce
atomic bomb such as the one used in World War II.
Nuclear fusion Reactors
Fusion occurs when two light nuclei are fused together to form a heavier nucleus and energy
is released.
Problem of Fusion
Very high temperatures of the order of 10^8 K are required to overcome the Coulomb repulsive forces between two light nuclei. This poses severe technological problems because materials that can withstand such a high temperature are difficult to come by.
Advantages of Fusion over Fission
(i) Easily achieved with the lightest elements.
(ii) Raw materials are cheaply available. Hydrogen can be obtained by electrolysis of
seawater which is cheap and plentiful for use.
(iii) Produces less dangerous by-products.
(iv) By-products are non-radioactive
In Britain, nuclear power stations are built on the coast, and sea water is used for cooling and condensing the steam so that the water is ready to be pumped back. Also, carbon dioxide gas is blown through the reactor to carry the heat away. Carbon dioxide is chosen because it is a very good coolant, able to carry a great deal of heat energy. It also helps to reduce any fire risk in the reactor (which is at around 600 °C). Any country that produces energy through a nuclear reactor should have computers that will shut the reactor down automatically if things get out of hand. One example where things got out of hand, due to the absence of such a sophisticated system, is Chernobyl, a city in Ukraine. The reactor overheated and melted, and excessive pressure blew out the containment system before they could stop it. Then, with the coolant gone, there was a serious fire. Many people lost their lives.
Advantages of Nuclear Power
(i) Nuclear power costs about the same as coal, so it is not expensive to make.
(ii) It does not produce smoke or carbon dioxide, so it does not contribute to the
greenhouse effect.
(iii) It produces huge amounts of energy from small amounts of fuel.
(iv) It produces small amounts of waste.
(v) Nuclear power is reliable.
Disadvantages of Nuclear Power
(i) Although not much waste is produced, it is very dangerous. It must be sealed up and
buried for many years to allow the radioactivity to die away.
(ii) Nuclear power is reliable, but a lot of money has to be spent on safety. If something
goes wrong, a nuclear accident can be a major disaster.
Geothermal Energy
Geothermal energy is energy from heat inside the earth. Thermal energy within the earth is
termed geothermal energy. Human beings live on a crust of earth with varying thickness.
Beneath the crust is a semi-molten rock layer called the mantle. Beneath the mantle is a
molten core of iron and nickel. The earth’s crust is warmed from within by the hot interior
and by radioactive decay products, in particular, emissions from uranium – 238, thorium –
232 and potassium – 40. The temperature of the crust increases by about 2 °C for each 100 m of
penetration. Geothermal energy is difficult to exploit. However, there are areas in which hot
molten rock (magma) is forced up through structural defects, and these sometimes produce
concentrated sources of hot water or steam that can be tapped for a variety of energy uses.
Thermal energy can be extracted by drilling into the hot rock, injecting water into the hole and
recovering the steam formed when the water contacts the hot dry rock.
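To put the quoted gradient in perspective, the short sketch below estimates how deep one would have to drill to reach useful rock temperatures, using the stated average of about 2 °C per 100 m and a hypothetical surface temperature of 25 °C (both the surface temperature and the target temperatures are illustrative assumptions).

```python
# Depth required to reach a target rock temperature for an assumed crustal
# gradient of 2 degrees C per 100 m and an assumed surface temperature.

surface_temp_c = 25.0            # assumed average surface temperature, deg C
gradient_c_per_m = 2.0 / 100.0   # about 2 deg C per 100 m of penetration

def depth_for_temperature(target_c: float) -> float:
    """Depth in metres at which the target temperature would be reached."""
    return (target_c - surface_temp_c) / gradient_c_per_m

for target in (100, 150, 200):   # temperatures useful for heating or power generation
    print(f"{target} deg C at about {depth_for_temperature(target):,.0f} m")
```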
Although present in many parts of the world, these systems are probably incapable of making
a substantial contribution to the energy demand of the industrialised nations, but could, on the
other hand, constitute the decisive factor for the industrial take-off of many developing
countries. New technologies are being studied and tested for creating artificial hydrothermal
systems so that geothermal energy can make a significant contribution to the world's energy
demand.
Types of Geothermal Fields
(1) Hot water fields: These contain water in the 50 – 100 °C temperature range, which can be
utilised for domestic heating, agricultural heating (greenhouses) or in industrial processes
requiring heat.
(2) Wet Steam fields: These contain pressurised water at temperatures that are usually well
above 100 °C, and small quantities of steam in the shallower, lower-pressure parts of the
reservoir. An impermeable cap rock prevents the fluid from escaping to the surface, thus
keeping it under pressure.
(3) Superheated Steam fields: These are similar geologically to the wet steam fields. Water
and steam coexist, with steam as the continuous, predominant phase. This type of field
produces dry steam (with no water in the liquid phase), generally superheated, with small
quantities of other gases, particularly CO2 and H2S. The superheated steam is used in
generating electricity.
N.B.: The electric energy produced by geothermal means is nowadays competitive with other
renewable and conventional energy sources and can draw on a technology backed by several
decades of operating experience.
(iii) Hazardous gases and minerals may come up from underground and can be difficult to
safely dispose of.
Conservation of Energy
One kind of energy may be converted easily into another, such as potential to kinetic or
chemical to electrical, while the total energy always remains the same. That is, energy is
neither created nor destroyed in any given physical system. This idea is called the law of
conservation of energy.
Examples of the law of conservation of energy abound. When a bullet leaves a gun with a
certain kinetic energy and flies through the air, some of that kinetic energy is converted to heat
through friction with the air. As the bullet strikes its target, more energy is converted to sound
and light, and heat is also developed in the target. Another example: the kinetic energy of a
wheel as it spins can be used to lift water and store it in a tank. The work done in lifting the
water is stored as potential energy. If the water is allowed to fall back to earth, this potential
energy is again converted to kinetic energy.
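The water-lifting example can be put in numbers. The sketch below uses made-up values for the mass of water and the height of the tank, and ignores friction, to show the kinetic and potential energies balancing exactly as the law requires.

```python
# Conservation of energy for the water-lifting example; mass and height are made up.

g = 9.8        # acceleration due to gravity, m/s^2
mass = 20.0    # kg of water lifted into the tank (assumed)
height = 5.0   # height of the tank, m (assumed)

potential_energy = mass * g * height            # work done in lifting = stored PE, J

# If the water falls back to earth, the PE reappears as kinetic energy:
# (1/2) m v^2 = m g h, so the speed just before impact is sqrt(2 g h).
speed = (2 * g * height) ** 0.5
kinetic_energy = 0.5 * mass * speed ** 2

print(f"PE stored           : {potential_energy:.0f} J")
print(f"KE recovered on fall: {kinetic_energy:.0f} J (the same total)")
```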
A further example: when zinc and copper bars are partially immersed in a sulphuric acid
solution, a chemical reaction takes place. If the parts of these bars that are above the surface
of the acid solution are connected to each other by a wire, an electric current will flow through
the wire. This arrangement is called a voltaic cell. If a lamp is connected to this arrangement,
it will glow. Can we add more examples from our day-to-day experiences?
The following chain is also an example of the transformation of different forms of energy into
heat and power: oil burns to make heat → heat boils water → water turns to steam → steam
pressure turns a turbine → the turbine turns an electric generator → the generator produces
electricity → electricity powers light bulbs → light bulbs give off light and heat.
It is difficult to imagine spending an entire day without using energy. In a home where
electricity supplies all of the energy requirements, the average energy consumption is shown
below:
Air conditioner and heater = 50%
Water heater = 20%
Lighting and small appliances = 10%
Refrigerator = 8%
Others = 5%
Ovens and stoves = 4%
Clothes dryer = 3%
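As a worked illustration of this breakdown, the sketch below applies the percentages to a hypothetical all-electric home consuming 900 kWh in a month; the monthly total is an assumed figure, not one from the text.

```python
# Applying the consumption percentages above to an assumed monthly total of 900 kWh.

monthly_total_kwh = 900.0   # hypothetical all-electric household

shares = {
    "Air conditioner and heater": 0.50,
    "Water heater": 0.20,
    "Lighting and small appliances": 0.10,
    "Refrigerator": 0.08,
    "Others": 0.05,
    "Ovens and stoves": 0.04,
    "Clothes dryer": 0.03,
}

for item, share in shares.items():
    print(f"{item:30s} {share * monthly_total_kwh:5.0f} kWh")
print(f"{'Total':30s} {sum(shares.values()) * monthly_total_kwh:5.0f} kWh")
```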
transparent blanket that contributes to the global warming of the earth, or “greenhouse
effect”. It is possible that this warming trend could significantly alter our weather. Possible
impacts include a threat to human health, environmental impacts, such as rising sea levels
that can damage coastal areas and major changes in vegetation growth patterns that could
cause some plant and animal species to become extinct. Furthermore, sulphur dioxide is also
emitted into the air when coal is burned. The sulphur dioxide reacts with water and oxygen in
the clouds to form precipitation known as “acid rain”. Acid rain can kill fish and trees and
damage limestone buildings and statues.
CONVERSION DEVICES FOR RENEWABLE AND NON-RENEWABLE ENERGY
No known source of renewable energy is useful in its natural state unless it is converted
through appropriate conversion devices.
Fig. 1.5: A typical wind Energy Conversion System
Source: http://www.daviddarling.info/encyclopedia/G/AE_generator.html
The major components of a typical wind energy system include: wind turbine, a generator,
interconnection apparatus and control systems. Generators for wind turbines include
synchronous generators, permanent magnet synchronous generators, and induction
generators which could either be the squirrel-cage type or wound rotor type. For small to
medium power wind turbines, permanent magnet generators and squirrel-cage induction
generators are often used because of their reliability and cost advantages. Also, induction
generators, permanent magnet synchronous generators, and wound field synchronous
generators are currently used in various high power wind turbines.
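The output of such a system can be estimated with the standard wind-power relation P = 0.5 x (air density) x (swept area) x (wind speed)^3 x (power coefficient). Neither this relation nor the sample figures below come from the text; the rotor size, wind speed and overall efficiency are assumptions chosen only to illustrate the calculation.

```python
# Rough wind-turbine output: P = 0.5 * air_density * swept_area * wind_speed^3 * efficiency.
# All numerical inputs are illustrative assumptions.

import math

air_density = 1.225     # kg/m^3 at sea level
rotor_diameter = 40.0   # m (assumed medium-size turbine)
wind_speed = 8.0        # m/s (assumed)
efficiency = 0.35       # combined rotor, gearbox and generator efficiency (assumed)

swept_area = math.pi * (rotor_diameter / 2) ** 2
power_w = 0.5 * air_density * swept_area * wind_speed ** 3 * efficiency

print(f"Swept area      : {swept_area:.0f} m^2")
print(f"Estimated output: {power_w / 1000:.0f} kW")
```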
Hydro Power Energy
The major components of a typical hydropower system include:
Penstock: The main pressure-building conduit. Care must be taken if the head is large (130 psi
or greater); high head pressure can be dangerous.
Power Switch & Breaker: Safety devices used to disconnect power.
Transformer: Converts the AC generator output voltage to the transmission-line or use voltage.
AC Output to Load or Grid: The transmission line to the point of use; voltage and current
determine the loss for a given wire gauge and distance.
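The power such a scheme can deliver is commonly estimated from the flow rate and the head as P = (water density) x g x (flow rate) x (head) x (efficiency). The relation and the sample figures below are assumptions included for illustration; they are not values from the text.

```python
# Rough hydropower output: P = water_density * g * flow_rate * head * efficiency.
# Flow rate, head and efficiency are illustrative assumptions.

water_density = 1000.0  # kg/m^3
g = 9.8                 # m/s^2
flow_rate = 0.5         # m^3/s through the penstock (assumed)
head = 30.0             # vertical drop, m (assumed)
efficiency = 0.80       # combined turbine and generator efficiency (assumed)

power_w = water_density * g * flow_rate * head * efficiency
print(f"Estimated output: {power_w / 1000:.1f} kW")
```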
You can help solve these global problems.
It provides not only food but also energy, building materials, paper, fabrics, medicines and
chemicals. Biomass has been used for energy purposes ever since man discovered fire.
Today, biomass fuels can be utilised for tasks ranging from heating the house to fuelling a car
and running electrical and electronic appliances.
If we burn biomass efficiently (extracting the energy stored in its chemical bonds), oxygen from
the atmosphere combines with the carbon in the plants to produce carbon dioxide and water.
The process is cyclic because the carbon dioxide is then available to produce new biomass; this
cycle also lends credence to the earning of carbon credits. Hence, the use of biomass can help
reduce global warming compared to a fossil-fuel-powered plant. Plants use and store carbon
dioxide (CO2) as they grow. The CO2 stored in the plant is released when the plant material is
burned or decays. By replanting the crops, the new plants can use the CO2 produced by the
burned plants. So, using biomass and replanting helps close the carbon dioxide cycle.
However, if the crops are not replanted, then biomass can emit carbon dioxide that will
contribute towards global warming.
Thus, the use of biomass can be environmentally friendly because the biomass is reduced,
recycled and then reused. Today, new ways of using biomass are still being discovered. One
way is to produce ethanol, a liquid alcohol fuel. Ethanol can be used in special types of cars
that are made for using alcohol fuel instead of gasoline. The alcohol can also be combined
with gasoline. This reduces our dependence on oil or fossil fuel.
Wood may be the best-known example of biomass, but it is only one of many.
Various biomass resources are derived from different sources. Bagasse from sugarcane, corn
fiber, rice straw and hulls, and nutshells are examples of agricultural residues. Sawdust,
timber slash, and mill scrap are derived from wood waste. Examples of municipal wastes
include paper trash and urban yard clippings. Energy crops are fast-growing trees like
poplars, willows and jatropha. Grasses like switch grass or elephant grass are also examples
of energy crops. The methane captured from landfills, municipal waste water treatment, and
manure from cattle or poultry, are typical examples of biomass. (Olaoye, 2001; Corcoran et
al., 2008).
RELATIONSHIP OF BIOMASS WITH RENEWABLE AND NONRENEWABLE
ENERGY
The six most utilised renewable energy sources are hydropower, tidal wave, solar, wind,
geothermal, and biomass. All energy forms derived from biomass sources are renewable.
Renewable energy sources are forms of energy that can be regenerated within a short period
after utilisation, while non-renewable energy sources cannot be regenerated in a short period
of time. Non-renewable energy is mainly sourced from the ground as liquids, gases and
solids; examples include crude oil, natural gas and coal, respectively.
These products are essentially biomass as they are formed from the buried remains of plants
and animals that lived millions of years ago. Fig. 2.1 shows various forms of renewable and
non-renewable energy.
Fossil fuels contain the same constituents - hydrogen and carbon - as those found in fresh
biomass. Environmental impacts pose a significant distinction between biomass and fossil
fuels. When a plant decays, it releases most of its energy back into the atmosphere. In
contrast, fossil fuels are locked away deep in the ground and do not affect the earth’s
atmosphere unless they are burned.
Rising world fuel prices, the growing demand for energy, and concerns about global warming
are the key factors driving the increasing interest in renewable energy sources and in biomass
in particular. Biomass is the best alternative energy source as it is available in large amounts
and producing some forms of energy from biomass is also less costly. The use of renewable
energy is not new. Biomass that would normally present a disposal problem is now converted
into electricity and other useful forms of energy.
Fig. 2.1: Forms of renewable energy (solar, tidal, wave, wind, water, biomass) and
non-renewable energy (natural gas, nuclear, coal).
feedstock. Sambo (2009) estimated the quantities of available biomass resources in million
tonnes in Nigeria as follows: fuelwood 39.1, agro-waste 11.2 and sawdust 1.8, while the
corresponding heat values in megajoules are estimated as 531.0, 147.7, and 31.433,
respectively. The resources quantity for Municipal Solid Waste is estimated as 4.075 million
tonnes.
Figure 2.2 clearly illustrates various characteristics of biomass and diverse forms of energy
that can be produced. Table 2.1 presents different sources of biomass, their forms, and their
conversion process to respective forms of energy.
Figure 2.2: Characteristics of biomass and diverse forms of energy that could be produced
5. Aquatic biomass (aquaculture) – Methanol
6. Weeds (whole plant body) – Methane
B. Residues/wastes/weeds
1. Rural/urban wastes and industrial wastes – Combustion → fire/fuel; Pyrolysis → fuel oil;
   Fermentation → ethanol
2. Forestry wastes (-do-) – Combustion → fuel; Pyrolysis → oil gas; Gasification → gas;
   Fermentation → ethanol
3. Agricultural wastes (wastes) – Fermentation → methane
4. Weeds & aquatic biomass (wastes) – Fermentation → methane
5. Cattle dung (wastes) – Fermentation (biomass) → methane
Source: http://www.world-agriculture.com; Heiermann et al., 2009
Pyrolysis
Pyrolysis is the process of heating wood and agricultural residues at temperatures varying
between 540 and 1100 °C in the absence of air for several hours to break down plant materials
into a complex mixture of liquids. Earthen kilns, pit kilns, brick kilns or portable steel kilns
are used for this process. The resulting product has a high calorific value, is easy to transport,
store and distribute, burns more efficiently and creates less pollution. A good example of a
pyrolysed product is charcoal. From eight tonnes of wood, around one tonne of charcoal can
be made. The gases that are produced during the process of pyrolysis can be converted or
synthesised into methanol and liquids which are used as fuels (Shah et al., 1989). Depending
on the temperature, the degradation stages of pyrolysis yield different products. The stages of
pyrolysis of wood and agricultural residues in the absence of air are identified by four distinct
temperature zones: 0 – 170 °C, evaporation of moisture; 170 – 270 °C, evolution of carbon
monoxide and carbon dioxide; 270 – 400 °C, evolution of methanol; and 400 – 500 °C,
production of charcoal with optimum carbon content (http://www.world-agriculture.com).
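The quoted charcoal yield amounts to a simple conversion factor of about one-eighth by mass; the sketch below applies it to an assumed batch of wood (the batch size is made up).

```python
# Charcoal yield implied above: about 1 tonne of charcoal from 8 tonnes of wood.

yield_fraction = 1.0 / 8.0        # tonnes of charcoal per tonne of wood
wood_input_tonnes = 20.0          # assumed kiln batch

charcoal_tonnes = wood_input_tonnes * yield_fraction
print(f"Yield fraction: {yield_fraction:.1%}")
print(f"Charcoal from {wood_input_tonnes:.0f} t of wood: {charcoal_tonnes:.1f} t")
```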
Destructive distillation
Destructive distillation is carried out in long steel retorts or earthen kilns. Only wood wastes,
such as branches and trunks of trees, are used as raw materials. The process involves
decomposition of wood at high temperatures in the absence of air. At an initial temperature of
230 °C, the moisture is evaporated; the temperature is then raised to 370 °C and maintained
for 6 hours. At the end of this period, the wood has been converted to charcoal, which is
cooled in the absence of air for 48 hours. After cooling, the charcoal is spread in open sheds
for two days to dry, after which it is ready for supply to consumers. Figure 2.3 displays a
typical setup used during the destructive distillation of wood in Nigeria to produce charcoal.
Figure 2.3: The process of destructive distillation of wood to produce charcoal.
Gasification
Gasification is a process of degradation of carbonaceous material (wood wastes) under
controlled air or pure oxygen at a high temperature of about 1000 °C. As a result of
gasification, large amounts of gases are produced. Biomass gasification is done in gasifiers
designed in various ways. Gasifiers are generally classified based on the physical conditions
of the feedstock in the gasifier. These include the fixed bed gasifier, stirred bed gasifier,
tumbling bed gasifier and fluidised bed gasifier (FOE, 2009).
The vapours of the volatile matter formed during the distillation process subsequently
condense to tar, methanol, acetic acid, methyl acetate, oil and gas. Charcoal is used as a fuel
and as a source of carbon in making carbon disulphide. Methanol is used as a solvent and as
antifreeze for automobiles. Acetic acid is used as a raw material for the manufacture of acetic
anhydride, sodium acetate, cellulose acetate, ethyl acetate and butyl acetate, among other
applications. Methyl acetate is a solvent used in the paint industry. A product of tar known as
"pitch" is used as a rubber softener. The oils are used as solvents and insecticides. The gases
are used as fuel for heating during wood distillation and as fuel for boilers
(http://www.world-agriculture.com; Nan et al., 1994).
In the course of gasification, a number of chemical reactions take place. As soon as the
biomass is ignited, four distinct zones are set up in the gasifier unit. Biomass introduced into
the gasifier first enters the drying zone, where the temperature is 200 – 400 °C. The products
of this zone are vapours of tar, organo-chemicals and liquid oils (Guo, 2004). After the drying
zone it enters the pyrolysis zone, at a temperature of 400 – 750 °C. Pyrolysis at this point
results in char, organic liquids and some gases. At 750 – 1000 °C, mainly gases such as
carbon dioxide, carbon monoxide, hydrogen and methane are produced, and this stage is
called the gasification zone. The oxidation zone, which is at 1000 – 1400 °C, also produces
gases such as nitrogen, carbon dioxide and hydrogen (Shah et al., 1989; Guo, 2004). When
the process of oxidation is over, with the steam treatment, ash, an inert material, is formed.
The gasification process thus ultimately yields a number of gases in a mixture. This mixture
of gases is referred to as producer gas. Producer gas burns somewhat like natural gas and can
be used as a fuel for engines. Its composition is given in Table 2.2.
Table 2.2: Composition of Producer Gas.
S/No Components of producer gas Percentage Composition
1 Carbon monoxide 20 – 22
2 Hydrogen 15 – 18
3 Methane 2 – 4
4 Carbon dioxide 9 – 11
5 Nitrogen 50 – 53
Source: http://www.world-agriculture.com/
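A rough heating value for producer gas can be estimated from Table 2.2 by weighting the combustible components (CO, H2 and CH4) by their heating values. The per-component heating values used below are approximate literature figures, and the mid-range composition is an assumption; neither comes from the text.

```python
# Rough lower heating value (LHV) of producer gas from the composition in Table 2.2.
# Per-component LHVs (MJ per cubic metre) are assumed approximate literature values.

composition = {        # volume fractions taken near the middle of the table's ranges
    "CO": 0.21,
    "H2": 0.165,
    "CH4": 0.03,
    "CO2": 0.10,       # non-combustible
    "N2": 0.515,       # non-combustible
}
lhv = {"CO": 12.6, "H2": 10.8, "CH4": 35.8}   # assumed approximate values, MJ/m^3

gas_lhv = sum(frac * lhv.get(gas, 0.0) for gas, frac in composition.items())
print(f"Estimated LHV of producer gas: {gas_lhv:.1f} MJ/m^3")
# Roughly 5-6 MJ/m^3, far leaner than natural gas (about 35-38 MJ/m^3), which is
# why producer gas only burns "somewhat" like natural gas.
```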
Anaerobic Bio-gasification
Degradation of organic matter in the absence of air to methane and carbon dioxide is called
anaerobic bio-gasification. In villages, cattle manure has long been used as a fuel for cooking,
and the preparation of biogas has become popular among rural people. Biogas can be utilised
for cooking, lighting, operating diesel engines, water pumps, etc. Animal wastes such as cattle
dung, chicken droppings and night soil are the main raw materials for the production of
biogas. The main advantage of bio-gasification is that biogas production can be started by
constructing permanent structures such as tanks at a convenient place at home (Plöchl et al.,
2009; Heiermann et al., 2009). It can be produced with minimal maintenance, and the initial
investment is low. Gobar gas, or biogas, contains about 60% methane and 40% carbon
dioxide. The digested manure contains 1.5 – 2% nitrogen and other soil nutrients, and can be
used as an organic fertiliser. The biogas can be easily purified to methane, which is an
enriched fuel gas of high calorific value.
Fermentation and Distillation
The materials for alcoholic fermentation to produce ethanol are sugary, starchy and
lignocellulosic materials. The first two categories are first-generation energy crops, while the
third category is referred to as a second-generation energy crop. The sources of cellulose and
lignocellulosic materials are agricultural wastes and wood. All these categories are abundant
in Nigeria, a traditionally agro-based country with soil conditions that support the production
of diverse types of feedstock for biofuels. Biofuel, in this context, is a transportation fuel in
the form of ethanol.
Ethanol is an alcohol fuel made from the sugars found in grains, such as corn, sorghum, and
wheat, as well as potato skins, rice, sugar cane, sugar beets, and yard clippings. Researchers
are experimenting with "woody crops", mostly small poplar trees and switch grass, to see if
they can grow them cheaply and abundantly to avoid total dependency on food crop. Any
gasoline powered engine can use E10, which is a mixture of 10 % ethanol and 90 % gasoline.
Only specially made vehicles can run on E85, a fuel that is 85 % ethanol and 15 % gasoline.
There are two major treatments that are necessary after the pre-treatment operations:
fermentation and distillation. During pre-treatment, the larger, more complex starch molecules
are broken down into smaller, simple sugar molecules such as maltose and glucose;
fermentation then converts these sugars into ethanol.
Once the mash's sugar has been fermented by yeast, the fermented liquid is put into a still,
where it is heated for distillation. Distillation works because water boils at 100 °C while
ethanol boils at 78 °C; hence the ethanol can distil out and leave most of the water behind.
(http://www.ethanolrfa.org/industry/statistics/#E). Ethanol production in Nigeria is still
limited to application in beverages, allied and pharmaceutical industries.
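The blend definitions translate directly into volumes. The short sketch below works this out for an assumed 50-litre fuel tank (the tank size is an illustrative assumption).

```python
# Ethanol and gasoline volumes in E10 and E85 blends for an assumed 50-litre tank.

tank_litres = 50.0                    # assumed tank size
blends = {"E10": 0.10, "E85": 0.85}   # ethanol volume fraction in each blend

for name, ethanol_fraction in blends.items():
    ethanol = tank_litres * ethanol_fraction
    gasoline = tank_litres - ethanol
    print(f"{name}: {ethanol:4.1f} L ethanol + {gasoline:4.1f} L gasoline")
```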
Transesterification of Vegetable Oil
The basic treatment in the production of biodiesel is referred to as transesterification. The
process is achieved in four principal stages: pre-treatment of the feedstock, transesterification,
methyl ester purification and glycerol purification. Pre-treatment of the feedstock removes
contaminants and components that would be detrimental to subsequent processing steps.
Transesterification is the reaction step, followed by the separation of the methyl ester and
glycerol. The methyl ester purification stage removes the excess methanol, catalyst and
glycerol from the transesterification step; the methanol removed at this stage is recycled to the
transesterification process. The glycerol obtained is further processed to remove impurities
such as catalyst and methyl esters in order to produce a higher-grade glycerol if economics
dictate. Figure 2.4 presents the flow chart of the biodiesel process.
Biodiesel is a fuel made with vegetable oils, fats, or greases - such as recycled restaurant
grease. Biodiesel fuels can be used in diesel engines without changing them. It is the fastest
growing alternative fuel in the United States. Biodiesel, a renewable fuel, is safe,
biodegradable, and reduces the emissions of most air pollutants. (Wood, 1993,
http://journeytoforever.org/biodiesel)
Rural economic development in both developed and developing countries is one of the major
benefits of biomass. The new incomes for farmers and rural population improve the material
welfare of rural communities and this might result in a further activation of the local
economy (Braun and Pachauri, 2008, Von Braun et al., 2008, Arndt et al., 2008). This new
market diversification will provide farmers with stable income, and strengthen the local
economy by keeping income recycling through the community. In the end, this will mean a
reduction in the emigration rates to urban environments, which is a very common situation in
many areas of the world.
The number of jobs that can be created through the production chains of biomass would be
enormous. For instance, investigation has revealed that 17,000 jobs can be created for every
million gallons of ethanol produced, and about 50 million acres of land would be required to
produce 5 quadrillion Btu (British thermal units) of electricity. These would increase overall
farm income tremendously.
The use of biomass energy has many environmental benefits. It can help mitigate climate
change, reduce acid rain, soil erosion, water pollution and pressure on landfills, provide
wildlife habitat, and help maintain forest health through better management. Biomass-energy
systems can increase economic development without contributing to the greenhouse effect
since biomass is not a net emitter of CO2 to the atmosphere when it is produced and used
sustainably. It also has other benign environmental attributes such as lower sulphur and NOx
emissions and can help rehabilitate degraded lands (Arndt et al., 2008). There is a growing
recognition that the use of biomass in larger commercial systems based on sustainable,
already accumulated resources and residues, can help improve natural resource management.
Biomass is generally, and wrongly, regarded as a low-status fuel, and in many countries it
rarely finds its way into statistics. It offers considerable flexibility of fuel supply because of
the range and diversity of fuels that can be produced. Biomass energy can be used to generate
heat and electricity through direct combustion in modern devices ranging from very small-
scale domestic boilers to multi-megawatt power plants (e.g. via gas turbines), or converted
into liquid fuels for motor vehicles such as ethanol or other alcohol fuels.
Biomass can be adopted for the generation of energy. If this energy is properly utilised it will
meet a sizeable percentage of the country's demand for fuel as well as energy. Agricultural
and forest residues can be collected to produce fuels, organic manures and chemical feedstock.
Waste from urban and industrial locations can be converted to fuel in boilers and used as a
feedstock for producing methane and some liquid fuels. Specific energy plants could also be
cultivated for use as energy feedstock, along with commercial forestry and aquatic and marine
plants for different products. Thus, production of biomass helps in developing a cheaper
source of energy from under-utilised agricultural residues, wastes, forest wastes, plantations,
etc., as an alternative to fossil fuels.
A major criticism often levelled against biomass, particularly against large-scale fuel
production, is that it could divert agricultural production away from food crops, especially in
developing countries. The basic argument is that energy-crop programmes compete with food
crops in a number of ways (agricultural, rural investment, infrastructure, water, fertilizers,
skilled labour etc.) and thus cause food shortages and price increases. However, this so-called
'food versus fuel' controversy appears to have been exaggerated in many cases.
The subject is far more complex than has generally been presented since agricultural and
export policy and the politics of food availability are factors of far greater importance. The
argument should be analysed against the background of the world's (or an individual
country's or region's) real food situation of food supply and demand (ever-increasing food
surpluses in most industrialised and a number of developing countries), the use of food as
animal feed, the under-utilised agricultural production potential, the increased potential for
agricultural productivity, and the advantages and disadvantages of producing biofuels.
Generally, there are obvious factors besides the production of biofuel from biomass that may
lead to food shortages and price increases. These factors include a combination of policies
biased towards commodity export crops and large increases in the acreage of such crops,
hyper-inflation, currency devaluation, and price controls on domestic foodstuffs. Developing
countries are facing both food and fuel problems. Adoption of appropriate agricultural
practices is necessary to be able to utilise available land and other resources to meet both food
and fuel needs.
CONCLUSION
Various types of biomass were highlighted as forms of alternative energy, and the energy
potential of these important commodities was examined. The benefits of biomass are not
limited to its use as an energy source: it could lead to increased farm income and market
diversification, and a reduction of agricultural commodity surpluses and of the support
payments derived from them. Biomass development can enhance international
competitiveness, revitalise retarded rural economies and reduce negative environmental
impacts. The reduction of negative environmental impacts is among the most important issues
related to the utilisation of biomass as an energy source. The environmental benefits of
biomass were discussed; they include the ability to mitigate climate change, reduce acid rain,
soil erosion, water pollution and pressure on landfills, provide wildlife habitat, and help
maintain forest health through better management. Biomass-energy systems can increase
economic development without contributing to the greenhouse effect. The processes involved
in the conversion of biomass to various sustainable energy products were highlighted along
with their unique characteristics. The issues relating energy crops to food security were
presented, the factors besides biofuel production that may lead to food shortages and price
increases were highlighted, and it was concluded that the adoption of appropriate agricultural
practices is necessary to utilise available land and other natural resources to meet both food
and fuel needs.
EVALUATION STRATEGIES
Practice Questions
1. Describe the energy conversions that take place in
(a) A boy riding a bicycle
(b) An electric pressing iron
(c) Radios and televisions
(d) The Telephone
(e) Electric Motor
4. Use a typical annotated diagram to explain the components and working principle of
(a) Wind Energy converting devices
(b) Solar Energy converting devices
(c) Hydro Energy converting devices
5. Identify the various stages in the transformation of biomass to renewable energy and
discuss in detail the products and by-products of the following processes:
(a) Combustion
(b) Pyrolysis
(c) Destructive Distillation
(e) Gasification
(f) Anaerobic Bio-gasification
(g) Fermentation and Distillation
(h) Trans-esterification of Vegetable Oil
7. Argue for or against the production of biodiesel from vegetable oil seeds in relation to
food security.
8. With the aid of schematic diagrams, present the process flow diagram of
a) Biodiesel processing
b) Biofuel processing
REFERENCES
Arndt, C., R. Benfica, F. Tarp, J. Thurlow and R. Uaiene. (2008). “Biofuels, Poverty, and
Growth: A Computable General Equilibrium Analysis of Mozambique”. IFPRI
Discussion Paper 00803, October 2008. International Food Policy Research Institute.
28pp
Corcoran, B.A., J. C. Henry, R. R. Rice, H. D. Rismani-Yazdi, and A. D. Christy. (2008).
“Cellulosic Ethanol from Sugarcane Bagasse Using Rumen Microorganisms”. An
ASABE Meeting Presentation Paper Number 085148. ASABE Annual International
Meeting Sponsored by ASABE, Rhode Island Convention Center, Providence, Rhode
Island, June 29 – July 2, 2008.
EIA. (2004). Energy Information Administration, Energy INFOcard, October 2004.
FOE. (2009). Friends of the Earth. Briefing: Pyrolysis, Gasification and Plasma.
www.foe.co.uk. Accessed on 2nd Sept., 2009.
Guo, J. (2004). Pyrolysis of Wood Powder and Gasification of Wood-Derived Char.
Eindhoven: Technische Universiteit Eindhoven, 2004. Proefschrift. ISBN 90-
386-1935-9. http://alexandria.tue.nl/extra2/200411302.pdf. Accessed on 2nd Sept.,
2009.
Heiermann, M, M. Plöchl, B. Linke, H. Schelle, and C. Herrmann. (2009). “Biogas Crops –
Part I: Specifications and Suitability of Field Crops for Anaerobic Digestion”.
Agricultural Engineering International: the CIGR Ejournal. Manuscript 1087. Vol. XI.
June, 2009.
Henniges, O. and J. Zeddies. (2006). Bioenergy and Agriculture: “Promises and Challenges,
Bioenergy in Europe: Experiences and Prospects”. Focus 14, Brief 9 of 12 December
2006. 2020 Vision for Food, Agriculture, and the Environment. 2pp.
http://www.eia.doe.gov/kids/energyfacts/science/formsofenergy.html (Accessed on
24/07/2009)
http://www.world-agriculture.com/ (Accessed on 19/06/2009).
http://journeytoforever.org/biodiesel (Accessed on 21st July, 2009).
http://home.clara.net/darvill/alternerg/wave.htm (Accessed on 10th July 2008).
Keeney, D.R., and T.H. DeLuca. (1992). "Biomass as an Energy Source for the Midwestern
U.S."American Journal of Alternative Agriculture, Vol. 7 (1992), pp. 137- 143.
Nan, L., Best, G., Coelho, C., Neto, De C. (1994). “Integrated Energy Systems in China: The
Cold Northeastern Region Experience: The Research Progress of Biomass Pyrolysis
Processes”. FAO Corporate Document Repository. Natural Resources Management
and Environment Department. http://www.fao.org/docrep/t4470e/t4470e0a.htm.
Accessed 2nd Sept., 2009.
NBS (2007). National Bureau of Statistics. Annual Abstract and Statistics, Federal Republic
of Nigeria., Abuja. Dec., 2007.
Olaoye, J. O.; (2001), “Utilization of Biomass Resources as Renewable Energy in Nigeria”.
Proceedings of the 2nd International Conference & 23rd Annual General Meeting of
the Nigerian Institution of Agricultural Engineers (A division of NSE); 23: 457 – 462.
Omosewo, E.O. (1988). Environmental Science. An unpublished Technical report in the
Department of Science Education, University of Ilorin, Ilorin.
Omosewo, E.O. (1991). Relevance of the physics education programmes of Nigerian higher
institutions to the teaching of senior physics. An unpublished Ph.D. Dissertation,
University of Ilorin, Ilorin.
Omosewo, E.O. (1992) Sources and uses of energy. An unpublished technical report,
Department of Science Education, University of Ilorin, Ilorin.
Plöchl, M., Monika Heiermann, Bernd Linke, Hannelore Schelle. “Biogas Crops Part II:
Balance of Greenhouse Gas Emissions and Energy from Using Field Crops for
Anaerobic Digestion”. Agricultural Engineering International: the CIGR Ejournal.
Manuscript number 1086. Vol. XI. June, 2009.
Rosentrater, K. A., D. Todey, and R. Persyn. (2009). “Quantifying Total and
Sustainable Agricultural Biomass Resources in South Dakota – A Preliminary
Assessment”. Agricultural Engineering International: the CIGR Journal of Scientific
Research and Development. Manuscript 1059- 1058-1.
Sambo, A. S. (2009). “Strategic Developments in Renewable Energy in Nigeria”.
International Association for Energy Economics Third Quarter 2009. 15 – 19.
Shah, J.K., T.J. Schultz, and V.R. Daiga, (1989). "Pyrolysis Processes." Section 8.7 in
Standard Handbook of Hazardous Waste Treatment and Disposal, ed. H.M.
Freeman. McGraw-Hill Book Company, New York, NY.
Sticklen, M. (2006), “Plant genetic engineering to improve biomass characteristics for
biofuels”. Current Opinion in Biotechnology, 17(3): 315-319.
USDE. (1980). U.S. Department of Energy. “Energy Balances in the Production and End-
Use of Alcohols Derived from Biomass: A Fuels-Specific Comparative Analysis of
Alternate Ethanol Production Cycles”. DOE/PE/70151-T5. October 1980.
Von Braun, J., A. Ahmed, K. Asenso-Okyere, S. Fan, A. Gulati, J. Hoddinott, R. Pandya-
Lorch, M. W. Rosegrant, M. Ruel, M. Torero, T. V. Rheenen, K. V. Grebmer. (2008).
High Food Prices: The What, Who, and How of Proposed Policy Actions. Policy
Brief. May 2008. International Food Policy Research Institute.
Von Braun, J. and R. K. Pachauri. (2008). The Promises and Challenges of Biofuels for the
Poor in Developing Countries. Policy Brief. International Food Policy Research
Institute. 16pp
Wood, P. (1993). New Ethanol Process Technology Reduces Capital and Operating Costs for
Ethanol Producing Facilities, Fuel Reformulation. Information Resources Inc.,
Washington, DC.
Chapter Five
APPLICATIONS OF BLOOD GROUP SYSTEMS AND DNA FINGER PRINTINGS
*1ARISE, R.O., 2KURANGA, S.A., and 3OGUNJEMILUA, S.B.
1Department of Biochemistry, Faculty of Life Sciences, University of Ilorin, Ilorin, Nigeria
2Department of Surgery, Faculty of Clinical Sciences, University of Ilorin, Ilorin, Nigeria
3Department of Family Medicine, University of Ilorin Teaching Hospital, Ilorin, Nigeria
Blood group is a characteristic of an individual's red blood cells (RBC), defined in terms of
specific substances on the surface of the cells called antigens. Which antigens are present on
the surface of the blood cells depends on the individual's DNA, which is inherited from the
parents. Thus, the antigens on the blood cell surface, and hence the blood group, are
characteristic of, and specific to, a person. Rather than using the surface antigens specified by
the DNA, DNA fingerprinting employs the unique sequence of bases in a person's DNA for
identification. It offers a very quick way to compare short DNA sequences of any two living
organisms and can rapidly determine whether two DNA samples are from the same person,
related people, or unrelated people. Scientists use a small number of DNA sequences that are
known to vary a great deal among individuals, and analyse those to obtain a certain probability
of a match.
In practice, knowledge of DNA sequences and blood groups can prove useful in identification
projects. These include reuniting families torn apart by war or by the actions of repressive
regimes, identifying corpses, checking paternity and, most commonly, investigating and
prosecuting crimes. Forensic use of blood grouping and DNA fingerprinting technology
inspires great hope but arouses considerable controversy.
OBJECTIVES
(viii) define blood transfusion;
(ix) explain what happens when blood of incompatible group is transfused into an
individual;
(x) highlight pathological conditions that may result from mismatched blood
transfusion;
(xi) differentiate between "blood group" and "blood type";
(xii) describe the ABO and the Rhesus factor (Rh factor) systems;
(xiii) explain the various classes of the ABO system;
(xiv) differentiate between the various blood groups;
(xv) state the importance of the Rhesus factor in females of or below childbearing
age;
(xvi) enumerate the various uses of ABO and Rhesus Blood Group;
(xvii) explain how erythroblastosis fetalis occurs;
(xviii) explain the importance of ABO system in organ transplant;
(xix) classify blood on the basis of blood transfusion;
(xx) differentiate between DNA fingerprint and conventional fingerprint that
occurs only on the finger tips;
(xxi) state what the letters A, C, G and T stand for in genetics;
(xxii) highlight the importance of the DNA code;
(xxiii) enumerate the laboratory steps involved in DNA fingerprinting;
(xxiv) state the various applications of DNA fingerprinting;
(xxv) give examples of inherited disorder diagnosed using DNA fingerprinting; and
(xxvi) state the applications of tandem repeats.
Austrian scientist, Karl Landsteiner discovered the ABO blood group system in 1901; he was
awarded the Nobel Prize in Physiology or Medicine in 1930 for this. Landsteiner and Wiener
discovered the second most important antigen set, the Rhesus system, in 1937. It is named
after the Rhesus monkey, in which the factor was first identified. The phrases "blood group"
and "blood type" are often used interchangeably, although this is not technically correct.
"Blood group" is used to refer specifically to a person's ABO status, while "blood type" refers
to both ABO and Rh factors.
Blood Group O: This group has neither of the antigens (‘A’ or ‘B’) on its red blood cells but
has both antibodies (anti-A and anti-B) in the plasma. People with blood group ‘O’ are known
as “universal donors” because they can donate blood to a person belonging to any of the
‘ABO’ groups (with matching rhesus status). Thus, people with the ‘O’ blood type can donate
blood to people with blood group ‘A’, ‘B’, ‘AB’ or ‘O’ (although compatibility with other
antigens such as rhesus still needs to be matched).
Overall, the ‘O’ blood type is the most common blood type in the world, although in some
areas, such as Sweden and Norway, the ‘A’ group dominates. The ‘A’ antigen is overall more
common than the ‘B’ antigen. Since the ‘AB’ blood type requires the presence of both ‘A’
and ‘B’ antigens, the ‘AB’ blood type is the rarest of the ABO blood types. There are known
racial and geographic distributions of the ABO blood types. According to Benes (1993), it
can be partly attributed to the relation among blood types and particular illnesses: apparently,
certain blood types give greater (or lesser) resistance to various diseases. For instance, type
‘O’ people have lessened resistance to the Black Plague, and therefore type ‘O’ is less
common in European populations. This ABO grouping system is determined by testing a
suspension of red cells with anti-A and anti-B serum or testing serum with known cells.
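The logic of that test can be written compactly. The sketch below is an illustrative routine (not a laboratory procedure taken from the text) that reads the two agglutination results and reports the ABO group they imply.

```python
# Determining the ABO group from agglutination of a red-cell suspension with
# anti-A and anti-B sera (illustrative logic only).

def abo_group(reacts_with_anti_a: bool, reacts_with_anti_b: bool) -> str:
    """Return the ABO group implied by the two agglutination results."""
    if reacts_with_anti_a and reacts_with_anti_b:
        return "AB"    # both A and B antigens present
    if reacts_with_anti_a:
        return "A"
    if reacts_with_anti_b:
        return "B"
    return "O"         # neither antigen present

# Example: the cells clump with anti-B serum only.
print(abo_group(reacts_with_anti_a=False, reacts_with_anti_b=True))   # -> B
```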
suitable conditions produce the corresponding antibody in the serum. These antibodies are
then used to detect the presence of Rh groups in cells. Matching the Rhesus factor is very
important, as mismatching (an Rh positive donor to an Rh negative recipient) may cause the
production in the recipient of an antibody to the Rh(D) antigen, which could lead to
subsequent haemolysis. This is of particular importance in females of or below childbearing
age, where any subsequent pregnancy may be affected by the antibody produced. For one-off
transfusions, particularly in older males, the use of Rh(D) positive blood in an Rh(D)
negative individual (who has no atypical red cell antibodies) may be indicated if it is
necessary to conserve Rh(D) negative stocks for more appropriate use. The converse is not
true: Rh +ve patients do not react to Rh -ve blood.
Rh disease occurs when an Rh negative mother who has already had an Rh positive child (or
an accidental Rh +ve blood transfusion) carries another Rh positive child. After the first
pregnancy, the mother's immune system is sensitised against Rh +ve red blood cells, such that
during the second Rh +ve pregnancy, an antibody (known as immunoglobulin G) is produced
which can cross the placenta and haemolyse the red cells of the second child. This reaction
does not always occur, and is less likely to occur if the child carries either the ‘A’ or ‘B’
antigen and the mother does not. In the past, Rh incompatibility could result in stillbirth or in
the death of the newborn. Rh incompatibility was until recently the most common cause of
long term disability in the United States. At first, this was treated by transfusing the blood of
infants who survived. At present, it can be treated with certain anti-Rh +ve antisera, the most
common of which is Rhogam (anti-D). It can be anticipated by determining the blood type of
every child of an RhD -ve mother; if it is Rh +ve, the mother is treated with anti-D to prevent
development of antibodies against Rh +ve red blood cells.
c. Paternity testing: Although blood group studies cannot be used to prove paternity, they
can provide unequivocal evidence that a male is not the father of a particular child. Since the
red cell antigens are inherited as dominant traits, a child cannot have a blood group antigen
that is not present in one or both parents. For example, if the child in question belongs to
group ‘A’ and both the mother and the putative father are group ‘O’, the man is excluded
from paternity. By using multiple red cell antigen systems and adding additional studies on
other blood types (HLA [human leukocyte antigen], red cell enzymes, and plasma proteins),
it is possible to state with a high degree of statistical certainty that a particular male is the
father.
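Because the A and B antigens behave as codominant alleles (with O recessive), the exclusion argument in this example can be sketched as a check of whether any combination of one allele from each parent could give the child's group. The routine below is an illustrative simplification that considers the ABO system alone and ignores rare genetic exceptions.

```python
# Illustrative ABO paternity-exclusion check.  Each phenotype corresponds to a set of
# possible genotypes; a child must receive one allele from each parent.

from itertools import product

GENOTYPES = {
    "A":  [("A", "A"), ("A", "O")],
    "B":  [("B", "B"), ("B", "O")],
    "AB": [("A", "B")],
    "O":  [("O", "O")],
}

def phenotype(alleles):
    s = set(alleles)
    if s == {"A", "B"}:
        return "AB"
    if "A" in s:
        return "A"
    if "B" in s:
        return "B"
    return "O"

def paternity_possible(child: str, mother: str, alleged_father: str) -> bool:
    """True if some allele from the mother plus some allele from the alleged father
    could produce the child's ABO group."""
    for m_geno, f_geno in product(GENOTYPES[mother], GENOTYPES[alleged_father]):
        for m_allele, f_allele in product(m_geno, f_geno):
            if phenotype((m_allele, f_allele)) == child:
                return True
    return False

# The example from the text: child is group A, mother and putative father are both group O.
print(paternity_possible("A", "O", "O"))    # -> False: the man is excluded
print(paternity_possible("A", "O", "AB"))   # -> True: not excluded on ABO alone
```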
Each person has a unique DNA fingerprint, much like the fingerprints that came into use in
detective work and police laboratories during the 1930s. Unlike a conventional fingerprint,
which occurs only on the fingertips and can be altered by surgery, a DNA fingerprint is the
same for every cell, and therefore every part of the body, of a person. It cannot be altered by
any known treatment. Consequently, DNA fingerprinting is rapidly becoming the primary
method for identifying and distinguishing among individual human beings.
The characteristics of all living organisms including humans are essentially determined by
information contained within DNA (deoxyribonucleic acid) that they inherit from their
parents. The molecular structure of DNA can be vividly described as a zipper, with each tooth
represented by one of four letters (A, C, G or T, coding for adenine, cytosine, guanine and
thymine respectively) and with opposite teeth combining to form A-T or G-C pairs.
The information contained in DNA is determined primarily by the sequence of the letters
along the zipper. Therefore, two DNAs having the same composition but in different
sequence present different information. For example, the sequence ACGCT represents
different information than the sequence AGTCC in the same way that the word “POST” has a
different meaning from “STOP” or “POTS”, even though they use the same letters. The
characters of a human being are the result of information contained in the DNA code.
There are many millions of base pairs in each person's DNA, and their overall sequence is
unique to each individual. In principle, every person could be identified solely by the sequence
of their base pairs; however, because there are so many millions of base pairs, the task would
be very time-consuming. Instead, scientists are able to use a shorter method, based on
repeating patterns in DNA. These patterns do not, on their own, give an individual
“fingerprint”, but they are able to determine whether two DNA samples are from the same
person, related people, or non-related people.
Living organisms that look different or have different characteristics also have different DNA
sequences. The more varied the organisms, the more varied the DNA sequences. DNA
fingerprinting is a laboratory procedure that involves four steps, namely:
1. Isolation of DNA: DNA must be recovered from the cells or tissues of the body. Only
a small amount of tissue (blood, hair, or skin) is needed. The amount of DNA found
at the root of one hair strand is usually sufficient.
2. Cutting, sizing and sorting: Special enzymes called restriction enzymes are used to
cut the DNA at specific places. For example, an enzyme called EcoR1, found in
bacteria, will cut DNA only where the sequence GAATTC occurs (a minimal sketch
of this cutting step is given after this list). The DNA pieces are then sorted according
to size by a sieving technique called electrophoresis, in which the DNA pieces are
passed through a gel, a jelly-like product made from seaweed (agarose).
3. Transfer of DNA to nylon: The distribution of DNA pieces is transferred to a nylon
sheet by placing the sheet on the gel and soaking them overnight.
4. Probing: Adding radioactive or coloured probes to the nylon (nitrocellulose paper)
sheet produces a pattern called the DNA fingerprint. Each probe typically sticks at
only one or two specific points on the nylon sheet. This method of probing is known
as hybridisation. The final DNA fingerprint is then built up by using several probes
(5-10 or more) simultaneously.
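Here is the minimal sketch of the cutting-and-sizing idea promised under step 2. It scans a made-up DNA sequence for the EcoR1 recognition site GAATTC, cuts at every occurrence, and lists the fragment sizes that electrophoresis would then separate. For simplicity the cut is placed just after the G of the recognition site, and the sequence itself is invented.

```python
# Cutting a DNA sequence at every EcoR1 recognition site (GAATTC) and listing
# the resulting fragment sizes.  The input sequence is invented.

SITE = "GAATTC"
CUT_OFFSET = 1    # EcoR1 cleaves between the G and the AATTC

def cut_positions(dna: str):
    """Positions at which the strand is cut."""
    positions, start = [], 0
    while True:
        i = dna.find(SITE, start)
        if i == -1:
            return positions
        positions.append(i + CUT_OFFSET)
        start = i + 1

def fragments(dna: str):
    """Fragments produced by cutting at every recognition site."""
    cuts = [0] + cut_positions(dna) + [len(dna)]
    return [dna[a:b] for a, b in zip(cuts, cuts[1:])]

sample = "ATCGGAATTCTTAGCCGAATTCGGTA"   # invented example sequence
for frag in fragments(sample):
    print(len(frag), frag)              # fragment sizes: 5, 12 and 9 bases
```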
DNA fingerprints are useful in several applications of human health care research and the
justice system:
a) Diagnosis of inherited disorders: DNA fingerprinting is used to diagnose inherited
disorders both prenatally and in new-born babies in hospitals around the world. These
disorders may include cystic fibrosis, haemophilia, sickle cell anaemia and others.
Early detection of such disorders enables medical practitioners and the parents to plan
proper treatment of the child. Genetic counsellors use DNA fingerprint information to
help prospective parents understand the risk of having an affected child and in their
decisions concerning affected pregnancies as well as marriages.
b) Developing cures for inherited disorders: Research programmes aimed at locating
inherited disorders on the chromosomes depend on the information contained in DNA
fingerprints. By studying the DNA fingerprints of relatives who have a history of a
particular disorder, or by comparing larger groups of people with and without the
disorder, it is possible to identify DNA patterns associated with the disease in
question. This is a necessary first step in designing an eventual genetic cure for these
disorders.
c) Paternity and maternity: Each individual is characterised by arrays of DNA
sequences inherited from his/her parents. Although some sequences may be common
between individuals, some are unique to a particular line, particularly those involving
repetition of a sequence (e.g. CAGCAGCAGCAG). Such repeated DNA sequences are
known as tandem repeats, and their pattern helps determine an individual's inherited
traits (a small counting sketch is given after this list). Because a person inherits his or
her tandem repeats from his/her parents, the patterns of the tandem repeats can be used
to establish paternity and maternity. The patterns are so specific that a parental tandem
repeat pattern can be reconstructed even if only the children's repeat patterns are
known.
d) Criminal identification and forensics: DNA isolated from blood, hair, skin cells or
other genetic evidence left at the scene of a crime can be compared, through its
tandem repeat patterns, with the DNA of a criminal suspect to determine guilt or
innocence. Tandem repeat patterns are also useful in establishing the identity of a
homicide victim, either from DNA found as evidence or from the body itself.
e) Personal identification: Since tandem repeat patterns are unique to a particular
individual or family line, they allow for the identification of an individual or member
of a family.
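The repeat-counting idea behind applications (c) to (e) can be illustrated with a small sketch. The motif and the two sequences below are invented examples; real DNA profiling compares repeat counts at many standardised locations rather than a single stretch.

```python
# Counting consecutive copies of a short motif (e.g. CAG) at the start of a
# tandem-repeat region, then comparing two individuals.  Sequences are invented.

def tandem_repeat_count(region: str, motif: str) -> int:
    """Number of consecutive copies of `motif` at the beginning of `region`."""
    count = 0
    while region.startswith(motif, count * len(motif)):
        count += 1
    return count

person_1 = "CAGCAGCAGCAGTTA"   # four consecutive CAG repeats
person_2 = "CAGCAGTTACAGCAG"   # two consecutive CAG repeats

for label, region in (("person 1", person_1), ("person 2", person_2)):
    print(label, "->", tandem_repeat_count(region, "CAG"), "CAG repeats")
```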
CONCLUSION
The successful application of blood groups and DNA fingerprints would change the relations
between criminals and victims in unpredictable ways. Obviously, adding a powerful new
weapon to the arsenal of the law will increase the rate of detection and conviction, removing
dangerous people from circulation, and will also prove valuable in clearing the innocent. Yet
it would be naive to think that criminals will remain passive in the face of the new technology,
or that the prospect of inevitable capture and incarceration will stay their hands. Helpful
though blood groupings and DNA fingerprints may be, they are not a panacea.
EVALUATION STRATEGIES
Revision Exercise
1. Red blood cells are produced in the
(a) Bone marrow (b) Liver (c) Brain (d) Blood
2. Blood transports all these substances except
(a) Water (b) Oxygen (c) Digested food particles (d) Carbon dioxide
3. Knowledge of DNA sequences and blood groups can prove useful in the
following except
(a) Reuniting families (b) Identification of corpses (c) Paternity testing (d) Fertilization
4. Blood for transfusion must be citrated to prevent
(a) Clotting (b) Precipitation (c) Deposition (d) Crystallisation
5. The substances on the red blood cell membranes are otherwise known as
the………………
(a) Epitopes (b) Popes (c) Epiteph (d) Epitaph
6. The scientist that discovered ABO blood group is
(a) Karl Landsteiner (b) Charles Wedner (c) Karl Weiner (d) Weiner Landstetner
7. The rhesus factor was discovered in
(a) 1901 (b) 1930 (c) 1937 (d) 1900
8. Blood group is used to describe specifically a person’s ………….Status
(a) ABO (b) Rh factor (c) AB (d) O
9. Blood group A person contains what type of antibody in his/her plasma
(a) Anti A (b) Anti B (c) Anti AB (d) Anti O
10. Which of the following blood groups does not possess antigens on its erythrocyte
membrane? (a) A (b) B (c) AB (d) O
REFERENCES
Agre, P and Carton, J.P. (1991): Molecular biology of the Rh antigens. Blood 78:551-563.
Avent, N.D., Liu, W. and Warner, K.M. (1996): Immunochemical Analysis of the human
erythrocyte Rh polypeptide. J. Biol. Chem. 271: 14233- 14239.
Landsteiner, K. and Wiener, A.S. (1940): An agglutinable factor in human blood recognized
by immune sera for rhesus blood. Proc. Soc. Exp. Biol. Med. 43: 223-224.
Chapter Six
INTRODUCTION
Traditional medicine according to WHO (2003), refers to health practices, approaches,
knowledge and beliefs incorporating plant, animal and mineral based medicines, spiritual
therapies, manual technologies and exercises, applied singularly or in combination to
diagnose, treat and prevent illnesses or maintain well-being. By this description, traditional
medicine involves various forms of therapies such as massage, music and dance, mind and
spirit, preventive, psychotherapy, therapeutic fasting or dieting and herbal. Traditional
medicine has received renewed attention in the last decade, maintained its popularity in all
regions of the developing world and its use is rapidly spreading in industrialised countries.
Herbal medicine is the practice of using herbs and herbal preparations, including extracts, to
maintain health and to alleviate or cure disease. A medicinal herb, or simply herb, is a plant or
plant part valued for its medicinal, aromatic or savoury qualities. These herbs produce
and contain a variety of chemical substances called phytochemicals (secondary metabolites
like alkaloids, tannins, saponins, flavonoids, anthraquinones etc; vitamins like vitamins A, B,
C etc; amino acids like methionine, asparagine, leucine, valine etc; and minerals like
potassium, sodium, calcium etc.) which act upon the body to bring about physiological
changes. Herbal medicine, an integral part of the development of modern civilisation, is an
important part of culture and traditions of African people including Nigeria. Today, most
people in Nigeria rely on herbal medicine for their health care needs since they are generally
more accessible and affordable. As a result, there is an increasing trend, worldwide, to
integrate herbal medicine into the primary health care system. The World Health Organisation
(WHO) estimates that 4 billion people (representing 80% of the world's population) presently
living in developing countries rely on herbal medicinal products as their primary source of
healthcare and traditional medical practice. The World Health Organisation in 2004
noted that of 119 plant-derived pharmaceutical medicines, about 74% are used in modern
medicine in ways that correlated directly with their traditional uses as plant medicines by
native cultures.
OBJECTIVES
Medicinal herbs
A medicinal herb is any plant which contains, in its root, stem, leaf, fruit or flower, substances
that can be used for therapeutic purposes or that are precursors for the synthesis of useful
drugs. Such herbs have properties similar to those of conventional pharmaceutical drugs. They
can be in the form of food, spices, perfumery plants, microscopic plants like fungi and
actinomycetes (for isolating drugs such as antibiotics), or fibre plants (e.g. cotton, flax and
jute, used for preparing surgical dressings). Medicinal plants have been identified and used
since prehistoric times. The Nigerian flora has made, and will continue to make, great
contributions to the health care of Nigerians. In fact, the indigenous medicinal plants form an
important component of the natural wealth and culture of Nigeria. Like other traditional
medicines, the use of medicinal herbs for managing several ailments has come a long way
since ancient times and is making new waves every day. Most herbal medications are prepared
mainly by grinding, pounding, chewing, boiling, cooking, roasting or smoking. The herbs are conveyed
in water, alcohol, tea, mineral (‘soft’) drinks (7up), pap or milk. Some of these facilitate the
activity of the medicinal plants. The herbal preparations are administered orally, topically,
inserted or inhaled. The plant parts frequently used include the roots, stem, leaves, stem
barks, root barks, flowers, seeds, juice/sap, tubers, rhizomes, fruits or whole plants. Herbs can
be taken in the form of a decoction (a concentrated liquor resulting from heating or boiling a
plant part and straining the liquid), an infusion (an extract prepared by soaking the leaves of
plants in a liquid) or as a
poultice (a soft, moist mass of material, typically consisting of herbs that is applied to the
body to relieve soreness and inflammation and kept in place with a cloth). It may also be used
as prophylactic (to prevent the onset of the disease) and curative (to manage the disease after
its onset).
Most of the medicinal uses of herbs seem to have been developed through (i) observations of
wild animals, (ii) trial and error and (iii) the 'Doctrine of Signatures', a concept used by
herbalists of the Renaissance period which holds that any plant part which resembles an organ
of the human body was created for the cure of the ailments of that part, e.g. Fadogia agrestis
(Barkingaigai-Hausa), which resembles the male organ, is used in managing male sexual
dysfunction such as disorders of libido and erectile dysfunction.
Today, the value of these herbs is being lost due to lack of awareness, overuse, bush burning,
drought, urban development, deforestation and pollution. In folk medicine, different plants,
because they contain several phytochemicals, are now being explored in the management of
several diseases and related conditions such as diarrhoea, gonorrhoea, inflammation, catarrh,
bronchitis, impotence (sexual dysfunction), infertility, chronic coughs, rheumatism, fever
(malaria and typhoid), sexually transmitted diseases, convulsion, epilepsy, diseases of the
respiratory system, ulcer, skin disease, hernia, HIV/AIDS, dysentery, pile, menstrual
suppression, ringworm, snake bite, etc., as depicted in Table 1.
Table 1: Some Nigerian medicinal plants, the parts used, the ailments managed and the methods of preparation

2. Azadirachta indica (Dongoyaro (H); Eke-oyibo (Y)); common names: Neem tree, Margosa. Parts used: leaves, seed, stem bark. Ailments: malaria fever, septic boil, anthelmintic, syphilis, antipyretic, piles, urinary disease. Preparations: leaf and stem bark decoction, leaf poultice, root bark decoction, ripe fruits.

4. Carica papaya (Ibepe (Y); Qwanda (H); Ojo (I)); common name: Pawpaw. Parts used: leaves, fruits, seeds, roots. Ailments: malaria fever, antipyretic, diabetes, gonorrhoea, syphilis, amoebic dysentery, diuresis, laxative, abortifacient, galactagogue, mild convulsion. Preparations: boiled leaves, cold infusion of leaves, leaf decoction with Xylopia aethiopica fruits, fresh leaves, unripe fruits, cooked unripe fruits, cooked ripe fruits with melon.

9. Ocimum gratissimum (Efinrin ajase (Y); Nehonwu (I); Aaidoya ta gida (H)); common name: Tea bush. Parts used: leaves, stem, whole herb. Ailments: prevention of miscarriages, cough, cold, fever, chest pain, diarrhoea, convulsion, colic pains. Preparations: ground fresh leaves with alligator pepper (Aframomum melegueta) applied to the lower abdomen of pregnant women, decoction of stem and leaves, cold infusion of the herb.

10. Vernonia amygdalina (Ewuro (Y); Shiwaka (H); Olubu (I)); common name: Bitter leaf. Parts used: leaves, root, stem. Ailments: diabetes, pneumonia, laxative, antipyretic, stomach ache. Preparations: leaf extract, leaf decoction, root decoction, chewed stem.
REASONS FOR THE INCREASING USE OF MEDICINAL HERBS
People use medicinal herbs for several reasons, which include:
1. Less expensive and safer - Most Nigerians, especially those living in rural communities,
do not have ready access to orthodox medicine. Where such access exists, the rising cost of imported
medicines has posed a big problem. Therefore, many people use herbal remedies as a way to
fight the high cost of prescription medication, because medicinal herbs are less expensive. The
fact that herbs grow naturally makes them easily available and affordable in a low-income country
like Nigeria. It is estimated that about 75% of the populace still prefer to solve their health
problems with medicinal herbs. People also believe that herbs are safer to use simply because they
are natural. Research has, however, shown that some of these herbs are indeed not safe
even at their acclaimed therapeutic doses. For example, Allium sativum (Garlic - English;
Aayu - Yoruba) may cause excessive bleeding, while Pausinystalia yohimbe (Yohimbe -
English; Idiagbon - Yoruba) can result in hypertension and increased heartbeat.
ADVANTAGES AND DISADVANTAGES OF HERBAL MEDICINE
Disadvantages
Herbs are not without disadvantages, and herbal medicine is not appropriate in all situations.
Few of the disadvantages are considered as follows:
1. Inappropriate for many conditions: Modern medicine treats sudden and serious illnesses
and accidents much more effectively than herbal or alternative treatments. An herbalist would
not be able to treat serious trauma, such as a broken leg, nor would he be able to treat
appendicitis or a heart attack as effectively as a conventional doctor using modern diagnostic
tests, surgery, and drugs.
2. Lack of dosage instructions: Another disadvantage of herbal medicine is the very real
risk of doing oneself harm through self-dosing with herbs. While one can argue that the
same thing can happen with medications, such as accidentally overdosing on cold remedies,
many herbs do not come with instructions or package inserts, so there is a very high risk of
overdose.
3. Poison risk associated with wild herbs: Harvesting herbs in the wild is risky, if not
foolhardy, yet some people try to identify and pick wild herbs. They run a very real risk of
poisoning themselves if they don't correctly identify the herb, or if they use the wrong part of
the plant.
4. Medication interactions: Herbal treatments can interact with medications. Nearly all
herbs come with some warning, and many, like the herbs used for anxiety such as valerian
and St. John's wort, can interact with prescription medications such as antidepressants. Such
interactions can be dangerous, even fatal.
5. Lack of regulation: Because herbal products are not tightly regulated, consumers also run
the risk of buying inferior quality herbs. The quality of herbal products may vary among
batches, brands or manufacturers. This can make it much more difficult to prescribe the
proper dose of an herb.
MYTHS AND FACTS
Myths about herbs are popular beliefs or stories associated with medicinal plants,
especially ones considered to illustrate a cultural ideal. A myth is also an idea or explanation which is
widely held but untrue or unproven. Myth is often interchangeable with legend or allegory
and may be associated with religious/cultural beliefs, feelings or practices. Facts, on the other hand,
are things that are known and proved to be true. Several myths and facts have been told
about medicinal herbs, but the most common ones are described below:
1. Herbal efficacy - Herbs have been used by all cultures throughout history. Indeed, many
drugs today are of herbal origin. Several laboratories have reported the effectiveness of
common indigenous medicinal herbs against an array of diseases. Research has provided
scientific evidence for the acclaimed sex-enhancing potential of Fadogia agrestis
(Barkingaigai - Hausa) stem. Experimental evidence has also shown that
Massularia acuminata (pako ijebu/orin ijebu - Yoruba) has antibacterial potential, thus
justifying its use as a chewing stick in traditional medicine. Numerous other diseases or
complaints such as hernia, snake bite, arthritis, gout etc. have been treated using herbs alone
or a mixture of medicinal plants with animal parts. Today, plant-derived medicines include
vincristine and vinblastine, isolated from the rose periwinkle and used to treat childhood
leukaemia and Hodgkin's disease, while diosgenin, extracted from the yam Dioscorea villosa, has
been used in the treatment of rheumatism and as an oral contraceptive. The use of herbs has
also helped to improve quality of life. The vast majority of herbs contain a complex mixture
of compounds or bioactive agents like alkaloids, saponins, flavonoids, phenolics, terpenes,
and polysaccharides that confer buffering, modulatory and modifying activities on the
medicinal plants. Therefore, administration of isolated ingredients cannot easily mimic the
effects of extracts from the whole plant. All of these clearly support the claim that medicinal
plants are indeed efficacious; if they were not, the practice of herbal medicine and the use of
medicinal plants would have gone into extinction.
2. Multiherbal therapy - Most herbalists/herb sellers use between 6 and 14 plants for
a given disease condition, with the claim that some herbs will reduce the symptoms of the disease
while others will improve digestion and absorption of the remaining herbs; this practice has
been going on from time immemorial in the folk medicine of several countries. In Nigeria, it
is common among the rural population to combine several herbs which they claim to have
particular curative properties, whereas only one or a few of the combined herbs may
contain the active ingredient for curing the particular disease. This myth is indeed very true
and could be regarded as a fact. For instance, herbs such as Azadirachta indica (Neem - English;
dongoyaro - Hausa; eke-oyibo - Yoruba), Cymbopogon citratus (Lemon grass - English) and
Psidium guajava (Guava - English; quaba/gilofa - Yoruba; giba - Hausa; ugwoba - Igbo) leaves are
usually combined for the treatment of malaria fever. However, research has reported
that only the Neem plant contains the active substance against the Plasmodium parasite (the
causative organism of malaria fever), whereas lemon grass and guava act as a flavouring agent
(to reduce the bitter taste of Neem) and a diuretic (to increase the rate of urine excretion, so as to
prevent the accumulation of the Neem in the body) respectively.

While the above may be true, many other herbal medicine advocates propose that the
therapeutic benefit of herbal products stems from the synergistic action of several natural
components in the herbs. Some constituents that are thought to be inactive may actually play
a role in the pharmacokinetics of the active component.
3. Safety - How safe are these herbs? Many consumers use herbs for self-medication because of
the widely held misconception that 'natural' means 'safe'. Many forget that herbs are
drugs, since they contain many chemicals. Just because something is 'natural' does not mean
that one should be lured into a false sense of security. The myth in this regard is that plants
commonly used in traditional medicine are assumed to be safe. This position is based on their
long usage in the treatment of diseases according to knowledge accumulated over the
centuries. However, the fact about the safety of medicinal herbs is that research has shown that
many plants used as food or in traditional medicine are potentially toxic: they may affect the normal
functioning of the organs (organ dysfunction), exert selective or total adverse effects on the normal
functioning of the blood, reduce sperm count and motility, and cause infertility.
Medicinal plants can also initiate the process of carcinogenesis that eventually leads to
cancer. Medicinal plants can cause photosensitivity, skin irritation, excessive daytime
sleepiness, liver inflammation and interaction with anticlotting systems.

It is true that when these herbs are taken in the prescribed quantity, most of them are 'safe', but
indiscriminate use or abuse is inimical to the health of the consumer. Some of these herbs
can even kill. A typical example is the case of a man who took an overdose of a sex-
enhancing medicinal plant to satisfy his female partner and died prematurely because,
for a long period, his erect male organ did not return to the flaccid state. Besides, some
herbs such as Aloe (Aloe vera), mistletoe (Viscum album) and wild yam (Dioscorea villosa)
might be dangerous during the first trimester of pregnancy as they may cause premature
contractions that can lead to abortion and birth defects. Overdose of ginseng (Panax ginseng)
can also lead to androgynous babies (babies with overstimulation of male sex hormones).
It has been reported that toxic effects due to the use of herbs are usually associated with liver
toxicity (hepatotoxicity). Other toxic effects on the kidney (kidney dysfunction), nervous
system, blood and circulatory systems have also been published. Therefore, the fact that herbs
are of natural origin does not mean that they are safe!
4. Misidentification of herbs - Medicinal plant materials, which include barks, bulbs, stems and
roots, are dried and sold as semi-processed and processed products. These products are seldom
labelled; bulk stock may be identified only by local names, and packaging is rudimentary. The use
of local names does not allow easy identification of medicinal plant products, and desiccation
through drying usually renders taxonomic identification of the plants difficult.

Poisoning from traditional medicine is usually a consequence of misidentification, incorrect
preparation, inappropriate administration or overdose. Many cases of poisoning remain
unrecorded, and mortality from traditional plant medicines may be higher than currently
known. Demographic studies indicate that the majority of traditional medicine-related
poisoning affects children.
5. Herbal knowledge - The safety of patients in traditional medicine is being compromised
nowadays because a growing number of herb sellers or healers do not possess sufficient
knowledge, skill and experience to practise successfully. At times, wrong diagnoses are made
since there is no means of carrying out laboratory tests, and this results in misidentification and
misadministration of medicinal herbs. In addition, the desire to make more money may lead to
purposeful adulteration: potentially toxic plants that are readily available may
be added to, or substituted for, the main active medicinal plant. Similarly, medicinal herbs that
have become old in stock may be added to a recipe just to get rid of them, without necessarily
having any significant role in the recipe. All these are stories being told about medicinal plants
and they are actually true.
6. Quality control and standardisation of medicinal herbs - Because of the reliance on medicinal
herbs, there is the need, in accordance with WHO guidance, to ensure the quality control of these
herbs and their standardisation. There is a myth that standardisation of herbs will reduce or
eliminate the beneficial effects of such plants. This, however, is not true, as some studies have
refuted it. Moreover, unlike with standard pharmaceutical drugs, the consumer has no way of
knowing exactly what a herbal product contains or what effects the contents may have. One cannot
be sure of what one is getting, because herbal products are not regulated and manufacturers are not
responsible for proving the efficacy and safety of their remedies. Standardisation should therefore
also be aimed at regulating the number of plants being put together for a particular disease. For
instance, in the treatment of cholera, as many as four plants are used: Alligator pepper
(Aframomum melegueta seeds), common wild sorghum (Sorghum arundinaceum seeds), bitter
lemon (Momordica charantia leaves) and Ako ejirin - Yoruba / Ukwuani - Igbo (M. cissoides
leaves). There is the need to reduce these to the one or two that are actively
involved in alleviating the disease condition. In addition, several of these herbs do not carry
labels, and when they do, some of the items listed may not be present, which could be misleading.
The labels also may not list the active ingredients, side effects, how the preparation should be taken,
the quantity (dosage) and the frequency.
CONSERVATION OF MEDICINAL PLANTS
Medicinal plants are globally valuable sources of new drugs. The existence of these plants
is threatened by several factors, which include degradation of habitat due to expanding
human activity, forest degeneration, destructive collection of plant species, invasion of exotic
species that compete with native species, increased spread of diseases, industrialisation, over-
exploitation, human socioeconomic change and disturbance, changes in agricultural practices,
excessive use of agrochemicals, natural and man-made calamities and genetic erosion, among
others. Therefore, there is the need to remove these threats of extinction to medicinal
plants and conserve the botanicals.

Conservation of medicinal plants is the act by which the environment is managed in such a
way as to obtain the greatest value for the present generation while maintaining the potential of
the herbs for the future. For most of the endangered medicinal plant species, no conservation
action has been taken. For example, there is very little evidence of medicinal plants in
gene banks. Also, too much emphasis has been placed on the potential of medicinal plants
for discovering new potent and novel drugs, and too little on the many problems involved in
the use of traditional medicines by local populations. For most countries, there is not even a
complete inventory of medicinal plants. Much of the knowledge of their use is held by
traditional societies, whose very existence is now under threat; little of this information has
been recorded in a systematic manner. Furthermore, to meet the requirements of expanding
regional and international markets for healthcare products and the needs of growing populations,
large quantities of medicinal plants are harvested from forests. Specifically, in Nigeria, large
numbers of medicinal plants are extracted from the wild to meet the increasing demand for raw
materials needed for domestic consumption. As a result, the natural resources are rapidly
depleting.
Conservation of biological diversity involves protecting, restoring and enhancing the
variety of life in an area so that the abundance and distribution of species and communities
contribute to sustainable development. The ultimate goal of conservation biology is to
maintain the evolutionary potential of species by maintaining natural levels of diversity,
which is essential for species and populations to respond to long- and short-term
environmental changes and to overcome stochastic factors, failing which extinction would
result.
CONSERVATION STRATEGIES FOR MEDICINAL PLANTS
The best means of conservation is to ensure that the populations of species of medicinal
plants continue to grow and evolve in the wild - in their natural habitats. Such in situ
conservation is achieved by setting aside areas as nature reserves and national parks
(collectively termed "Protected Areas") and by ensuring that as many wild species as possible
can continue to survive in managed habitats, such as farms and plantation forests. The various
strategies (in-situ conservation and ex-situ conservation) for conserving medicinal plants are
discussed below:
each species and so are of limited use in terms of genetic conservation; botanic
gardens, by contrast, have multiple unique features. They involve a wide variety of plant species
grown together under common conditions, and often contain taxonomically and
ecologically diverse flora. Botanic gardens can play a further role in medicinal plant
conservation through the development of propagation and cultivation protocols, as
well as undertaking programmes of domestication and variety breeding.
e. Seed storage modules: Seeds, being natural perennating structures of plants,
represent a condition of suspended animation of embryos and are best suited for
storage. By suitably reducing their moisture content (to 5-8%), they can be maintained for
relatively long periods at low temperatures (-18 °C or lower). However, in several
species the rhizome, bulb or some other vegetative part may be the site of storage of the
active ingredients, and such species often do not set seed. If seeds are set, they may be
sterile or recalcitrant, i.e. intolerant of reduction in moisture or temperature, or
otherwise unsuitable for storage.
PLANT TISSUE CULTURE TECHNIQUE IN MEDICINAL PLANTS
CONCLUSION
The use of herbs (medicinal plants) in the management of diseases is on the increase
worldwide and is becoming an integral part of the health care delivery system. This is due to
their efficacy and reduced side effects, which in some cases have been validated by scientific
data. The use of these medicinal plants has been surrounded by myths, some of which are true
while others are not, and the plants should therefore be used with caution. Finally, the threat of
extinction faced by various plants of medicinal value can be addressed through
various technologies, including in-situ and ex-situ conservation and plant tissue culture.
EVALUATION STRATEGIES
Practice Questions
1. The role of lemon grass in the multi-herbal treatment of malaria is
A. That it contains the active ingredient
B. It is a colorant
C. It is a flavouring agent
D. It increases urine excretion
2. There is a school of thought in herbalism that “natural” means “safe”.
A. True
B. False
C. Neither True nor False
D. None of the above
3. Indiscriminate use of plants is NOT inimical to the health of consumers
A. True
B. False
C. Neither True nor False
D. None of the above
4. The inability of the male copulatory organ to return back to the original flaccid state is
known medically as
A. Erection
B. Arousal
C. Erectile dysfunction
D. Failure of detumescence
5. Poisoning resulting from the practice of traditional medicine may be a consequence of the
following except
A. Misidentification
B. Dosage
C. Inappropriate administration
D. Recording
REFERENCES
Adesina, S. K. (2008). Traditional medical care in Nigeria. TODAY Newspaper.
Astin, J. A. (1998). Why patients use alternative medicine: results of a national study. JAMA.
27(9): 1548-1553.
Bodenstein, J. W. (1973). Observations on medicinal plants. South African Medical Journal
47: 336-338.
Dash, G. K. and Sahu, M. R. (2007). Medicinal herbs: Myths and facts are they all safe?
Pharmacognosy Reviews 1(2): 261-264.
Fennell, C. W., Lindsey, K. L., McGaw, L. J., Sparg, S. G., Stafford, G. I., Elgorashi, E. E.,
Grace, O. M. and van Staden, J. (2004). Assessing African medicinal plants for efficacy
and safety: pharmacological screening and toxicology. Journal of Ethnopharmacology 94:
205-217.
Gbile, Z. O. and Adesina, S. K. (1987). Nigerian flora and its pharmaceutical potential.
Journal of Ethnopharmacology 19: 1-16.
Gill, L. S. (1992). Ethnomedical uses of Plants in Nigeria. Uniben Press, Nigeria. Pp. 1-276.
Kamatenesi-Mugisha, M. and Oryem-Origa, H. (2005). Traditional herbal remedies used in
the management of sexual impotence and erectile dysfunction in western Uganda. African
Health Sciences, 5(1): 40-49.
Nalawade, S. M., Sagare, A. P., Lee, C. Y., Kao, C. L. and Tsay, H. S. (2003). Studies on
tissue culture of Chinese medicinal plant resources in Taiwan and their sustainable
utilisation. Botanical Bulletin-Academic Sinica 44:79-98.
Natesh, S. (1997). Conservation of medicinal and aromatic plants in India-An overview. Pp.
1-11. In: Medicinal and Aromatic Plants. Strategies and Technologies for Conservation.
Proceedings of the Symposium State-of-the-Art Strategies and Technologies for
Conservation of Medicinal and Aromatic Plants. Kuala Lumpur, Malaysia, 29-30
September 1997. Ministry of Science, Technology and Environment and the Forest
Research Institute, Malaysia.
Natesh, S. (2000). Biotechnology in the conservation of medicinal and aromatic plants. Pp.
548-561. In: Biotechnology in Horticulture and Plantation Crops. KL Chadha, PN
Ravindran and Leela Sahajram (eds), Malhotra Publishing House, New Delhi, India
Owonubi, M. O. (1988). Use of local herbs for curing diseases. Pharma. Herbal Med. 4(2):
26-27.
Patterson, E. (1996). Standardized extracts: herbal medicine of the future? Herb. Market.
Rev., 2:37-38.
Savage, A. and Hutchings, A. (1987). Poisoned by herbs. British Medical Journal 295: 1650-
1651.
World Health Organisation (1976). African Traditional Medicine. Afro-Tech Rep. Series 1.
pp. 3-4. WHO Brazaville.
World Health Organisation. (1977). Resolution-promotion and development of training and
research in traditional medicine. WHO DOCUMENT NO 30: 49.
WHO traditional medicine strategy 2002-2005. Document WHO/EDM/TRM/2002.1.
Yakubu, M. T., Akanji, M. A. and Oladiji, A. T. (2005). Aphrodisiac potentials of aqueous
extract of Fadogia agrestis (Schweinf. ex Hiern) stem in male albino rats. Asian Journal
of Andrology 7(4): 399-404.
Chapter Seven
ATMOSPHERIC ENVIRONMENT, AIR POLLUTION AND PUBLIC HEALTH
1Adekola, F.A. and 2*Abdul Raheem, A.M.O.
1Department of Industrial Chemistry, University of Ilorin, Ilorin, Nigeria
2Department of Chemistry, University of Ilorin, Ilorin, Nigeria
*Corresponding e-mail: modinah4@yahoo.co.uk
INTRODUCTION
The total global environment consists of four major realms: the atmosphere,
hydrosphere, lithosphere and biosphere. From space, the atmosphere, otherwise referred to as
the earth's atmosphere, looks like a thin blue veil. This fragile, nearly transparent envelope of
gas supplies the air that we breathe each day and has a mass of about 5.15 x 10^15 metric
tons, held to the planet by gravitational attraction [Stern, 1997]. It also regulates the global
temperature and filters out dangerous levels of solar radiation. The atmosphere extends up to
about 500 km above the surface of the earth. A constant exchange of matter takes place
between the atmosphere, biosphere and hydrosphere, with a relative weight ratio of
300:1:69,100 respectively [Dara, 2004]. The atmospheric temperature, pressure and density
vary considerably with altitude.
OBJECTIVES
At the end of the chapter, students are expected to:
(i) list the atmospheric segments;
(ii) explain the position of the atmosphere in the global environment;
(iii) explain the importance of the various atmospheric constituents;
(iv) define pollutant and pollution;
(v) classify pollution into its various forms;
(vi) critically assess the impact of air pollution;
(vii) evaluate the impact of biomass burning;
(viii) assess the health implications of indoor and outdoor air pollution and enumerate
possible solutions;
(ix) define Volatile Organic Compounds; and
(x) explain various examples of Volatile Organic Compounds (VOCs) and their
possible health effects.
THE ATMOSPHERE
Air is all around us, odourless, colourless and essential to all life on earth, as it acts as a
gaseous blanket protecting the earth from dangerous cosmic radiation from outer space. It
helps in sustaining life on earth by screening out the dangerous ultraviolet (UV) radiation (< 300
nm) from the sun and transmitting only radiation in the range 300 nm to 2500 nm,
comprising near UV, visible and near infrared (IR) radiation, together with radio waves (0.01 to
4 x 10^5 nm) [Smart, 1998]. The atmosphere also plays a vital role in maintaining the heat balance
of the earth by absorbing the IR radiation received from the sun and re-emitted by the earth.
In fact, it is this phenomenon, called "the greenhouse effect", which keeps the earth warm
enough to sustain life. Yet the air is actually a combination of gaseous elements that
show a remarkable uniformity in terms of their contribution to the totality of life. Thus,
oxygen (O2) supports life on earth; nitrogen (N2) is an essential macro-nutrient for plants;
and carbon (IV) oxide (CO2) is essential for the photosynthetic activity of plants. Moreover, the
atmosphere is a carrier of water from the ocean to the land, which is vital for the hydrological
cycle. Any major disturbance in the composition of the atmosphere resulting from
anthropogenic activities may lead to disastrous consequences or may even endanger the
survival of life on earth [Dara, 2004; Abdul Raheem et al., 2009]. The constituent elements
are primarily nitrogen and oxygen, with a small amount of argon (Ar). Below 100 km, the
three main gaseous elements, which account for about 99.9 % of the total atmosphere, are N2,
O2 and Ar, with concentrations by volume of 78.09 %, 20.95 % and 0.93 %
respectively [Stanley, 1975].
The presence of trace amounts of other gases accounts for the remaining 0.07 %. These
trace gases exist in small quantities and are measured in terms of a mixing
ratio. This ratio is defined as the number of molecules of the trace gas divided by the total
number of molecules present in the volume sampled. For example, ozone (O3), CO2, oxides
of nitrogen (NO2 + NO, together NOx) and chlorofluorocarbons (CFCs) are measured in parts per
million by volume (ppmv), parts per billion by volume (ppbv) or micrograms per cubic
metre (µg m-3) [Dale, 1976]. If, as a result of human (anthropogenic) activities, the
concentrations of these trace compounds are increased, or other pollutants are introduced and
bioaccumulate over time, they become hazardous to exposed life.
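To make the conversion between these units concrete, the short sketch below turns a mixing ratio in ppbv into a mass concentration in µg m-3 using the ideal gas law. It is a minimal illustration rather than a method given in this chapter; the gas chosen, its molar mass and the temperature and pressure are assumptions supplied for the example.

```python
# Minimal sketch: convert a trace-gas mixing ratio (ppbv) into a mass
# concentration (ug/m3) with the ideal gas law. Values are illustrative only.

R = 8.314  # J mol-1 K-1, universal gas constant

def ppbv_to_ugm3(ppbv, molar_mass_g, temp_k=298.15, pressure_pa=101325.0):
    """1 ppbv = 1 molecule of trace gas per 10^9 molecules of air."""
    moles_air_per_m3 = pressure_pa / (R * temp_k)   # total moles of air per m3
    trace_moles = moles_air_per_m3 * ppbv * 1e-9    # moles of the trace gas
    return trace_moles * molar_mass_g * 1e6         # grams -> micrograms

# Example: 40 ppbv of ozone (O3, molar mass ~48 g/mol) at 25 degC and 1 atm
print(round(ppbv_to_ugm3(40, 48.0), 1), "ug/m3")    # roughly 78.5 ug/m3
```

Because the conversion depends on temperature and pressure, the same mixing ratio corresponds to a smaller mass concentration in warmer or thinner air, which is why concentration limits normally state their reference conditions.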
LAYERS OF ATMOSPHERE
The atmosphere is the mixture of gases surrounding a celestial body with sufficient gravity to
maintain it. The study of the Earth's atmosphere is known as meteorology. Figure 1 illustrates the
atmospheric layers and their distances from the Earth's surface, while Table 1 summarises the major
regions of the atmosphere and their characteristics.
Troposphere
This is the layer of the atmosphere closest to the Earth's surface, extending up to about 10-15
km above the Earth's surface. It contains 75 % of the atmosphere's mass. The troposphere is
wider at the equator than at the poles. Temperature and pressure drop as you go higher up
the troposphere.
Stratosphere
This layer lies directly above the troposphere and is about 35 km deep, extending from about
15 km to about 50 km above the Earth's surface.
Mesosphere
Directly above the stratosphere, extending from 50 to 80 km above the Earth's surface, the
mesosphere is a cold layer where the temperature generally decreases with increasing
altitude. Here in the mesosphere, the atmosphere is very rarefied nevertheless thick enough to
slow down meteors hurtling into the atmosphere, where they burn up, leaving fiery trails in
the night sky.
Thermosphere
The thermosphere extends from 80 km above the Earth's surface to outer space. The
temperature is high and may reach thousands of degrees, as the few molecules present in the
thermosphere receive extraordinarily large amounts of energy from the Sun. However, the
thermosphere would actually feel very cold to us, because the probability that these few
molecules will hit our skin and transfer enough energy to cause appreciable heat is
extremely low.
Exosphere
The final layer is the exosphere, which gradually gets thinner as it reaches into the vacuum of
space at around 700 km above the earth's surface. The atmosphere is so attenuated at this
altitude that, on average, air molecules travel without colliding. The density of the atmosphere at
an altitude of about 9700 km is comparable to that of interplanetary space.
Table 1: Major regions of the atmosphere and their characteristics

Region | Temperature range (°C) | Altitude range (km) | Significant chemical species
Stratosphere | -56 to -2 | 11 to 50 | O3
Mesosphere | -2 to -92 | 50 to 85 | O2+, NO+
Thermosphere | -92 to 1200 | 85 to 500 | O2+, O+, NO+
POLLUTANTS
These are substances introduced into the environment in amounts sufficient to cause
adverse, measurable effects on human beings, animals, vegetation or materials. Pollutants are
referred to as primary pollutants if they exert their harmful effects in the original form in
which they enter the atmosphere, e.g. CO, NOx, hydrocarbons (HCs), SOx, particulate matter and
so on. On the other hand, secondary pollutants are products of chemical reactions among primary
pollutants, e.g. ozone, hydrogen peroxide, peroxyacetyl nitrate (PAN) and peroxybenzoyl
nitrate (PBN). Pollutants can also be classified according to chemical composition,
i.e. organic or inorganic pollutants, or according to the state of matter, i.e. gaseous or
particulate pollutants. Air pollution basically involves three components: the
source of pollutants; the transporting medium, which is air; and the target or receptor, which could
be man, animals, plants or structural facilities.
The various chemical and photochemical reactions taking place in the atmosphere depend mostly
upon the temperature, composition, humidity and the intensity of sunlight. Thus, the
ultimate fate of chemical species in the atmosphere depends upon these parameters.
Photochemical reactions take place in the atmosphere through the absorption of solar radiation in
the UV region. Absorption of photons by chemical species gives rise to electronically excited
molecules. These reactions are not possible under normal laboratory conditions except at
higher temperatures and in the presence of chemical catalysts [Hansen et al., 1986]. The
electronically excited molecules spontaneously undergo any one, or a combination, of the
following transformations: reaction with other molecules on collision; polymerisation;
internal rearrangement; dissociation; de-excitation by fluorescence; or de-activation to
return to the original state [Dara, 2004]. Any of these transformation pathways may serve as
an initiating chemical step or a primary process. The three steps involved in an overall
photochemical reaction are absorption of radiation, primary reactions and secondary
reactions.
A smoggy atmosphere shows characteristic variations, with the time of day, in the levels of
different pollutants such as NO, NO2, hydrocarbons, aldehydes and oxidants. A generalised plot
showing these variations is given in Figure 2. It shows that shortly after dawn the level of
NO in the atmosphere decreases markedly, a decrease which is accompanied by a peak in the
concentration of NO2. During midday the levels of aldehydes and oxidants become
relatively high, whereas the concentration of total hydrocarbons in the atmosphere peaks
sharply in the morning and then decreases during the remaining daylight hours. The variations in
species concentration shown in Figure 2 may be explained by the generalised reaction
scheme in Figure 3, which is based on the photochemically initiated reactions that occur in
an atmosphere containing oxides of nitrogen, reactive hydrocarbons and oxygen. The
various chemical species that can undergo photochemical reactions in the atmosphere
include NO2, SO2, HNO3, N2, ketones, H2O2, organic peroxides and several other organic
compounds and aerosols.
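As a pointer to the kind of scheme summarised in Figure 3, the daytime coupling of NO, NO2 and ozone can be sketched with three well-known reactions (a simplified outline only; the full scheme in the figure also involves reactive hydrocarbons and radical intermediates):

```latex
\begin{align*}
\mathrm{NO_2} + h\nu \; (\lambda < 420\,\mathrm{nm}) &\rightarrow \mathrm{NO} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\rightarrow \mathrm{O_3} + \mathrm{M} \\
\mathrm{O_3} + \mathrm{NO} &\rightarrow \mathrm{NO_2} + \mathrm{O_2}
\end{align*}
```

Reactive hydrocarbons perturb this cycle by providing additional routes that convert NO back to NO2 without consuming ozone, which is consistent with the morning fall in NO, the NO2 peak, and the midday build-up of oxidants described above.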
POLLUTION
Environmental pollution, especially air pollution, is one of the main causes of deteriorating living
conditions. Breathing safe air is as important as having safe water or food, yet human
populations in developing countries are compelled to breathe polluted air resulting from the
combustion of fossil fuels for transport, power generation, cooking and other diffuse
sources (Albalak et al., 1999). The health consequences of air pollution are considerable.
Figure 4: Digital images of a polluted environment: (a) an air pollution site; (b) a land pollution site
The World Health Organisation (WHO) has estimated that 800,000 people per year die from
the effects of air pollution. In particular, air pollution contributes significantly to respiratory
diseases (Bruce et al., 2000). Air pollution from combustion sources (Figure 4) has been
responsible for acid rain, global warming and the malfunctioning of human and animal
haemoglobin [Stanley, 1975], as well as irritation of the throat, bronchopneumonia, asthma, etc.
(Abdul Raheem et al., 2006). Other causes arising from human activities include inappropriate
solid waste disposal, gas flaring and oil exploration. Air pollution can also arise from natural
causes such as volcanic eruption, whirlwinds, earthquakes, decay of vegetation, pollen dispersal,
as well as forest fires ignited by lightning.
Several studies have been carried out on the quantification of pollutants and the analysis of their
consequences for public health and the environment. It has been estimated that each year
between 250 and 300 million tons of air pollutants enter the atmosphere above the United
States of America [Onianwa et al., 2001; Abdul Raheem et al., 2009]. Tropospheric pollution
causes degradation of crops, forests, aquatic systems, structural materials and human health.
It was reported that NOx air pollution is becoming a far-reaching threat to USA National
Parks and Wilderness Areas, as these areas are suffering from the harmful effects of oxides of
nitrogen pollution [Environmental Defense Fact Sheet (EDFS), USA 2003]. It has also been
confirmed that NOx, along with other pollutants, contributes to ground-level ozone (smog)
[Abdul Raheem, 2007; Abdul Raheem et al., 2017], which can cause serious
respiratory problems, especially in young children and the elderly, as well as in healthy adults
who are active outdoors. In addition, ground-level ozone is an air pollutant that causes human
health problems and damages crops and other vegetation; it is a key ingredient of urban
smog. Furthermore, another report confirmed worsening ozone concentrations in nearly all the
national parks in the U.S.A over the last decade [Environmental Defense Fact Sheet (EDFS),
USA 2003].
An assessment of new vehicle emission certification standards was carried out in the
metropolitan area of Mexico City, and the results show that light-duty gasoline vehicles
account for most carbon (II) oxide (CO) and NOx emissions [Schifter et al., 2006]. The
European Environmental Agency also reported recently that more than 95 % of the
contribution of nitrogen oxide emissions to the air comes from fuel combustion processes
in road transport, power plants and industrial boilers [EEA, 2006]. There is reported
evidence of chronic damage to the human lung from prolonged ozone exposure
[EEA, 2006]. Sulphur in coal, oil and minerals is the main source of the sulphur (IV) oxide
(SO2) in the atmosphere. Moreover, peak concentrations above the European Union limit still
occur, especially close to point sources in cities.
Asian cities have some of the highest levels of air pollution in the world. In Asia, hundreds of
thousands of people in urban areas get sick just by breathing the air that surrounds them.
The WHO (2006) estimates that dirty air kills more than half a million people in Asia
each year, a burden that falls heaviest on the poor (Ogawa, 2006). The
worsening of the situation has been attributed to the cumulative effects of rapid population
growth, industrialisation and the increased use of vehicles. Environmental damage frequently
results from several primary and secondary pollutants acting in concert rather than from a
single pollutant.
Tropospheric oxidants such as ozone, peroxybenzoyl nitrate (PBN) and peroxyacetyl nitrate
(PAN), as well as methane and non-methane organic pollutants, illustrate the complexity of
atmospheric chemistry and processes. They help to form acidic and toxic compounds, thereby
contributing to greenhouse warming and hence to damage to human health, animal and plant life,
and materials [USEPA, 1998; Dara, 2004].
Significant changes in stratospheric ozone, high above the troposphere, can affect
tropospheric oxidant levels [USEPA, 1998]. If increased ultraviolet-B (UV-B) radiation
penetrates a depleted ozone shield, the photochemical formation of ground-level oxidants
may be enhanced. Greenhouse warming could amplify this effect: a study carried out in three
U.S. cities (Nashville, Philadelphia and Los Angeles) showed that a large depletion of
stratospheric ozone, coupled with greenhouse warming, could increase smog formation
by as much as 50 % [Adelman, 1987].
The study also showed that NO2 concentrations might increase more than tenfold. There is,
however, progress towards the reduction of anthropogenic emissions of NOx, CO and volatile
organic compounds in Europe and North America [Jonson et al., 2001]. In contrast, the
concentration of air pollutants emitted into the atmosphere is on the increase in Southeast
Asia and other parts of the world, including Nigeria [Jonson et al., 2001; Abdul Raheem et
al., 2009]. It is therefore expected that emissions from Africa and other parts of the
world that are yet to take strict and effective measures for controlling emissions will
influence free tropospheric levels in most of the Northern Hemisphere.
BIOMASS BURNING
This is another way (of polluting the atmosphere) by which man unknowingly contributes to the
emission of greenhouse gases and consequently impacts negatively on health. Biomass
burning is the burning of living and dead vegetation. It includes the human-initiated burning
of vegetation for land clearing and land-use change as well as natural, lightning-induced fires.
Scientists estimate that humans are responsible for about 90 % of biomass burning, with only
a small percentage of natural fires contributing to the total amount of vegetation burned
[WMO, 1991].
Biomass burning can equally be viewed as the emission of combustion products, including
greenhouse gases, into the atmosphere and the loss of biomass that would otherwise be a valuable
material resource. The extent of biomass burning generally in Africa, and particularly in Nigeria,
coupled with the complexity of the reactions involved and its health implications,
constitutes an important motivation to view it as a global problem deserving priority attention
and careful many-sided investigation, because air has no boundary, i.e. it is transboundary.
Burning vegetation releases large amounts of particulates (solid carbon combustion particles)
and gases, including greenhouse gases that help warm the Earth. Greenhouse gases may lead
to increased warming of the Earth or human-initiated global climate change. Studies
suggest that biomass burning has increased on a global scale over the last 100 years, and
computer calculations indicate that a hotter Earth resulting from global warming will lead to
more frequent and larger fires. Biomass burning particulates affect climate and can also
affect human health when they are inhaled, causing respiratory problems [Menzel et al.,
1991].
Since fires produce CO2, a major greenhouse gas, biomass burning emissions significantly
influence the Earth's atmosphere and climate. Biomass burning has both short- and long-term
impacts on the environment. Vegetation acts as a sink, a natural storage area for carbon
dioxide by storing it over time through the process of photosynthesis. As burning occurs, it
can release hundreds of years’ worth of stored carbon dioxide into the atmosphere in a matter
of hours. Burning also will permanently destroy an important sink for carbon dioxide if the
vegetation is not replaced. It is hypothesised that enhanced post-burn biogenic emissions of
such gases are related to fire-induced changes in soil chemistry and/or microbial ecology.
Biomass burning, once believed to be a tropical phenomenon, has been demonstrated by
satellite imagery to also be a regular feature of the world's boreal forests. One example of
biomass burning is the extensive 1987 fire that destroyed more than 12 million acres of
boreal forest in the People's Republic of China and across its border in the Soviet Union
[Cahoon et al., 1991].
Recent estimates indicate that almost all biomass burning is human-initiated and that it is
increasing with time. With the formation of greenhouse and chemically active gases as direct
combustion products and a longer-term enhancement of biogenic emissions of gases, biomass
burning may be a significant driver of global change [Cahoon et al., 1991].
The historic data indicate that biomass burning has increased with time and that the
production of greenhouse gases from biomass burning has increased with time. Furthermore,
the bulk of biomass burning is human initiated. As greenhouse gases build up in the
atmosphere and the Earth becomes warmer, there may be an enhanced frequency of fires. The
enhanced frequency of fires may prove to be an important positive feedback on a warming
Earth. The bulk of biomass burning worldwide could, however, be significantly reduced. Policy
options for mitigating biomass burning have been developed (Andrasko et al., 1991). For
mitigating burning in the tropical forests, where much of the burning is aimed at land clearing
and conversion to agricultural land, policy options include the marketing of timber as a
resource and improved productivity of existing agricultural lands, to reduce the need for the
conversion of forests to agricultural land. Improved productivity will result from the
application of new agricultural technology (fertilisers, etc.). For mitigating burning in tropical
savanna grasslands, animal grazing could be replaced by stall feeding, since savanna
burning results from the need to replace nutrient-poor tall grass with nutrient-rich short grass.
For mitigating burning of agricultural lands and croplands, crop wastes could be incorporated
into the soil instead of being burned, as is the present practice throughout the world. The crop
wastes could also be used as fuel for household heating and cooking, rather than cutting down and
destroying forests for fuel as is presently done.
INDOOR AIR QUALITY
Human exposure to air pollution is dominated by indoor environments. Cooking and
heating with solid fuels such as cow dung, wood, agricultural residues and coal is probably the
largest source of indoor air pollution globally. When used in cooking stoves, these fuels emit
substantial amounts of pollutants, including respirable particles, carbon monoxide, oxides of
nitrogen and sulphur, and benzene. Nearly half of the world's population continues to cook with
solid fuels, e.g. 50-75 % of people in Africa, South America, India and China (Bruce et al., 2000).

Indoor air pollution exposure has been linked with lower respiratory infections, chronic
obstructive pulmonary disease, cancers of the trachea, bronchus and lung, and asthma.
Volatile Organic Compounds (VOCs) are among the air pollutants present in both indoor and
outdoor settings, but indoor concentrations are much higher than outdoor ones.
Many definitions have been given for VOCs. These definitions vary by locale and are
more a matter of law than a matter of science [Baumbach and Vogt, 2003]. For example, the
European Union Directive 2004/42/CE, which covers VOC emissions from paints and
varnishes, defines a VOC as any organic compound having an initial boiling point less than or
equal to 250 °C measured at a standard atmospheric pressure of 101.3 kPa [EPA-JEA, 1986].
Directive 94/63/EC, which regulates VOC emissions from the storage and distribution of petrol,
simply defines "vapours" as any gaseous compound which evaporates from petrol.
Generally, Volatile Organic Compounds (VOCs) are a range of flammable, high-vapour-pressure
substances released from certain solids and liquids, which vaporise easily at room
temperature. They include a variety of chemicals, some of which may have short- and/or
long-term adverse health effects when inhaled; for example, benzene is toxic and a probable
human carcinogen, while formaldehyde is both an irritant and a sensitiser.
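As a small illustration of how the EU boiling-point criterion quoted above can be applied in practice, the sketch below classifies a few familiar compounds; the compounds and the boiling points used are illustrative values supplied for the example, not data from this chapter.

```python
# Minimal sketch: classify compounds as VOCs under the EU paints-and-varnishes
# definition cited above (initial boiling point <= 250 degC at 101.3 kPa).
# The compounds and boiling points below are illustrative assumptions.

EU_VOC_BP_LIMIT_C = 250.0

def is_voc_eu(initial_boiling_point_c: float) -> bool:
    """True if the compound meets the Directive 2004/42/CE VOC criterion."""
    return initial_boiling_point_c <= EU_VOC_BP_LIMIT_C

boiling_points_c = {
    "formaldehyde": -19.0,   # a gas at room temperature
    "benzene": 80.1,
    "toluene": 110.6,
}

for compound, bp in boiling_points_c.items():
    print(f"{compound}: VOC under the EU definition? {is_voc_eu(bp)}")
```

Benzene, toluene and formaldehyde all boil well below the 250 °C threshold and therefore count as VOCs under this particular legal definition.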
VOCs from the out-gassing of fabrics, building materials, automobile materials, etc. are an
important contributor to sick building syndrome (SBS). Photocopiers and printers off-gas (emit)
VOCs, which are important contributors to photochemical smog.
Daily, large quantities of VOCs are emitted into the atmosphere from both anthropogenic and
natural sources. VOCs have been found to be a major contributing factor to ozone [Abdul
Raheem et al., 2008; Ghauri, 2007], a common air pollutant which has been proven to be a
public health hazard. The formation of gaseous and particulate secondary products through the
oxidation of VOCs is one of the largest unknowns in the quantitative prediction of the earth's
climate on regional and global scales, and in the understanding of local air quality.

To be able to control and model their impact, it is essential to understand the sources of
VOCs, their distribution in the air (which depends on ventilation rate, indoor and outdoor
activities, relative humidity, temperature and building structural characteristics) and the chemical
transformations which remove them from the atmosphere. If their concentrations, along with those
of other gaseous pollutants in the troposphere, are high and continue to accumulate over time, the
overall burden can have a negative effect on health, vegetation and
structures (Abdul Raheem, 2007; Ghauri, 2007). The findings on indoor concentrations of
VOCs reveal the following:
Volatile Organic Compounds are important air pollutants that need to be addressed critically,
most especially indoor VOCs, since about 95 % of our activities are carried out in
our homes, offices, workshops, shops, schools and other workplaces. For these
reasons and more, indoor air pollutant levels are said to be two (2) to five (5) times
greater than outdoor levels and may at times be as much as one thousand
(1000) times higher. Since photocopiers and printers emit VOCs, many of the indoor air quality
(IAQ) problems that may be faced by the operators of this equipment may be prevented if
the staff and the building occupants understand how their activities affect Indoor Air Quality
(IAQ). Therefore, the following precautions and recommendations, if properly followed, can
greatly reduce the effects of these VOCs even if they cannot be totally eradicated:
EVALUATION STRATEGIES
Practice Questions
Class exercise and assignment
i. Explain the position of the atmosphere in the global environment;
ii. Explain the importance of the various atmospheric constituents;
iii. Define pollution;
iv. Classify pollution into its various forms, laying more emphasis on air pollution;
v. Critically assess the impact of air pollution;
vi. Evaluate the impact of biomass burning;
vii. Assess the health implications of indoor and outdoor air pollution and enumerate possible
solutions;
viii. Define Volatile Organic Compounds; and
ix. Give various examples of Volatile Organic Compounds (VOCs) and their possible health
effects.
REFERENCES
Abdul Raheem, A.M.O., Adekola, F.A., Obioh, I.B. (2006). Determination of sulphur (IV)
oxide in Ilorin City Nigeria, during dry season. J. Appl. Sci. Environ. Mgt. 10 (2) 5 –
10.
Abdul Raheem, A.M.O., Adekola, F.A., Obioh, I.B. (2017). Environmental Impact
Assessment of Daytime Atmospheric Sulphur Dioxide in Lagos – Nigeria. Journal
Chemical Society of Nigeria 42(2) 71-75.
Abdul Raheem, A.M.O., (2007). Measurement, Modelling and Analysis of ozone and ozone
precursors in the ambient environment of Lagos and Ilorin cities, Nigeria, Ph.D.
Thesis, University of Ilorin, Nigeria.
Abdul Raheem, A.M.O., Adekola F.A., Obioh, I. A., (2009). The Seasonal Variation of the
Concentrations of Ozone, Sulfur Dioxide and Nitrogen Oxides in Two Nigerian
Cities, Environ Model Assess 14:487-509.
Adelman. G. (Ed) (1987). Encyclopedia of Neuroscience, vol 2 Birkhauser, Boston Advance
in Chemistry series (1959) 21: Ozone Chemistry and Technology, American
Chemical Society, Washington D.C.
Andrasko, K. 1.,. Ahuja, D. R , Winnett, S. M. and Tirpak, D. A. (1991). Policy options or
managing biomass burning to mitigate global climate change. In 1.S. Levine (ed.),
Global Biomass Burning: Atmospheric Climatic, and Biospheric Implications. MIT
Press, Cambridge, Massachusetts, pp. 445-456.
Baumbach, G, Vogt, U (2003). Influence of inversion layers on the distribution of air
pollution in urban areas Water, Air and Soil Pollution; Focus 3: 65 – 76, Kluwer
Academic Publishers, Netherlands
Bruce N., Perez-Padilla R. and Albalak R. (2000). Indoor air Pollution in developing
countries: A major environmental and public health challenge, Bulletin of the World
Health Organization 2000; 78: 1078-92
Cahoon, D. R., Jr., J. S. Levine, W. R. Cofer III, J. E. Miller, G.M. Tennille, T. W. Yip, and
B. J. Stocks, (1991). The great Chinese fire of 1987: A view from space. In J. S
Levine (ed.), Global Biomass Burning: Atmospheric, Climatic, and Biospheric
Implications. MIT Press, Cambridge, Mass.
Climate Biosphere Interaction: Biogenic Emissions and Environmental Effects of Climate
Change Edited by Righard G. Zepp ISBN 0-471-58943-3 Copyright 1994 John Wiley
and Sons, Inc.
Dale, F.R., 1976 Strategy of Pollution control, Willars Grant Press, Boston, pg8
Dara, S.S, 2004 A Textbook of Environmental Chemistry and Pollution Control. S. Chand &
Company Ltd. New Delhi – 110055.
Ghauri, B., Lodhi, A. and Mansha, M. (2007). Development of baseline (air quality) data in
Pakistan. Environmental Monitoring and Assessment 127 (1-3): 237-252.
Hansen, J., et al (1986). U.S. EPA Washington D.C. The Greenhouse Effect Project of Global
Climate Change; in Effects of Changes in Atmospheric Ozone and Global Climate
Vol. 1 Over view: Robert T. Watson, 69-82.
Jonson, J.E., Sundet, J. K. and Tarrason, L., (2001). Model calculations of present and future
levels of ozone and ozone precursors with a global and a regional model. Atmospheric
Environment 35: 525 – 537.
Menzel, W. P., E. C. Cutrim, and E. M. Prins (1991). Geostationary satellite estimation of
biomass burning in Amazonia during BASE-A. In J. S. Levine (ed.), Global Biomass
Burning: Atmospheric, Climatic, and Biospheric Implications. MIT Press, Cambridge,
Mass., pp. 41-46.
Ogawa, 2006, WHO 2006, Air Pollution in Asian Cities Spurs Health Worries. In the
Pakistan Daily Times of Monday, December 11.
Onianwa, P.C., Fakayode, S.O., and Agboola, B.O. 2001 Daytime Atmospheric sulphur
dioxide concentrations in Ibadan city, Nigeria. Bulletin of Chemical Society of
Ethiopia 15 (1) 71-77.
Stanley, E. M. (1975). Environmental chemistry (pp. 390–391, 2nd (ed.). Boston, MA: CRC.
Stern, A.C., 1997 Air Pollution II, Academic Press Washington D.C, II.
World Meteorological Organisation (WMO), 1991.Report of the WMO Meeting of Experts
on the Atmospheric Part of the Joint U.N. Response to the Kuwait Oilfield Fires.
World Meteorological Organisation, Geneva, Switzerland, 103 pp.
Chapter Eight
FOOD PRODUCTION AND PRESERVATION FOR FOOD SECURITY: THE PLACE OF COOPERATIVES IN NIGERIA
1Omotesho, A.O., 2Joseph, J.K., 1Muhammad-Lawal, A. and 2Kayode, R.M.O.
1Department of Agricultural Economics and Farm Management, University of Ilorin, Ilorin, Nigeria
2Department of Home Economics and Food Science, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
Food is necessary for the health, growth and normal functioning of living organisms. It is the
material that enables man to grow and reproduce himself (Lapedes, 1977). Essentially, food is
a mixture of chemicals which can be separated into different components having different
functions in the body. The major constituents of food are water, protein, carbohydrates, fats,
vitamins and minerals. Based on the knowledge of the chemical constituents and their
functions in the body, food is classified either as proper foods (carbohydrates, proteins and
fats) or accessory foods (water, inorganic salts and vitamins) (Franson, 1972).
One of the world's greatest problems today is how to eliminate hunger and food
insecurity. This challenge is greatest in the developing countries, where people starve for lack
of adequate food and nourishment. The common strategy adopted has been to increase output
without considering the quantity and quality of the food that gets to the ultimate consumer
(Joseph, 1994, 1996; Omotesho et al., 1995). This chapter, therefore, highlights the
importance of food security, the constraints militating against getting enough high-quality
food to consumers, and methods of food preservation. Besides, the chapter looks at the
role of cooperatives in mobilising resource-poor, small-scale farmers for agricultural
production and food security.
OBJECTIVES
Food security has been described as an important aspect of any consideration of the wealth and
economic sustainability of a nation. It is generally defined as a situation that exists when all
people, at all times, have physical, social and economic access to sufficient, safe and
nutritious food that meets their dietary needs and food preferences for an active and healthy
life (FAO, 2002). Important aspects to be considered in food security issues include the
availability of foodstuff, the quality of the diet, the stability of supplies over time and space,
and access to the food produced (Honfoga & van den Boom, 2003). It is recommended that an
individual should consume between 65 and 86 g of crude protein per day, out of which 35 g
(about 40 %) should be from animal products.
Although there is variation in estimates of the number of food-insecure people all over the world,
available statistics show that a large proportion of the world population has the problem of
food insecurity. It has been observed that more than eight hundred million people are food
insecure (Wiebe, 2003; FAO, 2005). According to a World Food Programme estimate, hunger
affects one out of seven people on the planet. Food insecurity is a particularly serious issue in
many low-income countries. For instance, sub-Saharan Africa and South Asia stand out as
the two developing-country regions where the prevalence of human malnutrition remains
high. The largest absolute numbers of undernourished people are in Asia, while the largest
proportion of the population that is undernourished is in Africa, south of the Sahara. In terms
of proportion, this was estimated at 34 % in Africa and 23 % in South Asia (FAO,
1998). Though nutrition insecurity is generally being reduced worldwide, the problem is
actually growing worse in Africa. This is due to increasing population growth and poor
progress in efforts directed at reducing food insecurity in many countries on the continent
(Benson, 2004).
With an estimated sixty thousand people, the majority of whom are children, dying each day of
hunger, food insecurity is considered a common phenomenon in Africa. The majority of the
deaths related to food insecurity are reported to occur in sub-Saharan Africa. In virtually all of
rural sub-Saharan Africa, fluctuation in food security has become a fact of life that the majority
of the people have to contend with (Milich, 1997). In sub-Saharan Africa, the total number of
hungry people increases each year. Given that food deficits are projected to rise, the problem
will probably only get worse (Trueblood & Shapouri, 2002; Paarlberg, 2002).
In view of its global dimension, the international community has placed the elimination of
famine and hunger on its agenda. The participants at the food summit organised by the Food
and Agriculture Organisation (FAO) in 1996 pledged to reduce the number of hungry people
by half by the year 2015 (World Food Summit WFS, 1996b; FAO, 2005; Meade & Rosen,
2002). Thereafter, the United Nations Rio+20 summit in Brazil in 2012 committed
governments to create a set of Sustainable Development Goals (SDGs) that would be
integrated into the follow-up to the Millennium Development Goals (MDGs) after their 2015
deadline. Consequently, sustainable food security was adopted as the second goal of the
Sustainable Development Goal agenda. This aims at ending hunger and achieving long-term
food security including better nutrition through sustainable systems of production,
distribution and consumption by the year 2030.
Food insecurity and hunger are forerunners to nutritional, health, human and economic
development problems. They connote deprivation of basic necessities of life. As such, food
security has been considered as a universal indicator of households' and individuals'
personal well-being. The consequences of hunger and malnutrition are adversely affecting the
livelihood and well-being of a massive number of people and inhibiting the development of
many poor countries (Gebremedhin, 2000). Malnutrition affects one out of every three
preschool-age children living in developing countries. This disturbing, yet preventable state
of affairs causes untold suffering and presents a major obstacle to the development process. It
is associated with more than half of all child deaths worldwide. It is, therefore, the cause of a major waste of resources and loss of productivity, which are common occurrences in
developing countries. This is because children who are malnourished are less physically and
intellectually productive as adults. As such, malnutrition is a violation of the child’s human
rights (Smith et al., 2003).
More than 800 million people have too little to eat to meet their daily energy needs. Most of
the world’s hungry people live in rural areas and depend on the consumption and sale of
natural products for both their income and food. Hunger tends to be concentrated among the
landless or among farmers whose plots are too small to provide for their needs. For young
children, lack of food can be perilous since it retards their physical and mental development
and threatens their very survival. Over 150 million children under five years of age in the
developing world are underweight. In sub-Saharan Africa, the number of underweight
children increased from 29 million to 37 million between 1990 and 2003 (United Nations,
2005).
Furthermore, food insecurity has been identified as the principal cause of increasing and
accelerated migration from rural to urban areas in developing countries. Unless this problem
is addressed in an appropriate and timely fashion, the political, economic and social stability
of many countries and regions may well be seriously affected, perhaps even compromising
world peace (FAO, 1996). This is because hunger can provide a fertile ground for conflict,
especially when combined with factors such as poverty and difficulty in coping with disasters
(United Nations, 2005).
The root problem of inadequate access to food is poverty. This is in the sense of the failure of
the economic system to generate sufficient income and distribute it broadly enough to meet
households' basic needs. The problem can be addressed by either giving food directly to the
poor (non-market distribution of aid); increasing their incomes so that they have greater
entitlement to food through the market (given existing marketing costs); and/or reducing the
costs of food delivered through markets by fostering technical and institutional innovations in
farm-level production and the marketing system (Jayne et al., 1994). The 1996 World Food
Summit reaffirmed that a peaceful, stable and enabling political, social and economic
environment is the essential foundation which will enable States to give adequate priority to
food security and poverty eradication.
Democracy, promotion and protection of all human rights and fundamental freedoms,
including the right to development and the full and equal participation of men and women are
essential for achieving sustainable food security for all (FAO, 1996). Attaining food security
is, therefore, a primary responsibility which rests with individual governments. Access to the
components of nutrition security, over and above those required for food security, is also a
challenge that must be addressed. Investments in education, sanitation, and access to health
care must continue and be increased if the advances required in nutrition security are to be
made.
Ultimately, the responsibility of ensuring food security for all lies with national governments
who have the duty of establishing the conditions and institutions necessary to enable their
citizens to have access to the basic requirements of food and nutrition security. The basic
determinants of food and nutrition security in any one African country will never be exactly
the same as those of another. This is because of the different historical factors, agro-
ecological conditions, economic comparative advantages and institutional structures at play
in each of the countries. As such, a single detailed policy and action prescription will not
enable national governments in different countries to effectively address malnutrition. It
must, however, be recognised that all African countries can attain nutrition security if
sufficient commitment exists. Political will must be applied and dedicated efforts made to
marshal the human, institutional, and material resources necessary for the task (Benson,
2004).
For there to be an improvement in food and nutrition security situation of a country, national
governments must address a number of issues including the following:
i. Enhancing the means to acquire food, whether through cash incomes or access to
productive resources. Considering the importance of agriculture as a source of income to
rural households, there is a need for improvement in their agricultural production. The
effectiveness of on-farm production determines the level of access to food enjoyed by both
farmers and the broader population to whom they are linked through the market. Increased
food supplies simultaneously increase the income of farming households and reduce the
prices people pay for food in the marketplace, both of which enhance nutrition security.
Moreover, increases in the production of both food and non-food crops contribute to the
broader economy, both in rural areas and in urban manufacturing centres.
ii. Improved education for the chain of food handlers, from the producers to the consumers. Knowledge of why and how food spoilage occurs, and of the agents responsible for spoilage, is imperative if steps are to be taken to prevent or minimise it, and if we are to have fresh and wholesome foods that are safe and nutritious for human consumption at all times (Muhammad & Kayode, 2014). The knowledge of food spoilage
and preservation is critical in the food chain to achieve nutritional security and enhanced food
productivity for economic growth. Furthermore, training will ensure that people can provide
themselves and their dependents with nutritionally balanced and hygienically prepared food.
iii. Provision of access to sufficient quantities of food items. This may require the formulation of policy for sustained, broad-based economic growth. It is estimated that to end hunger in sub-Saharan Africa by 2050, a 3.5 % annual average growth rate in per capita Gross Domestic Product (GDP) is necessary for the region (a rough illustration of what this growth rate implies is given in the sketch after this list).
iv. Direct nutrition interventions to provide food to those suffering from acute hunger and
malnutrition and nutrition information and supplements to women of childbearing age and
young children are necessary. Such interventions are a vital component of any effort to build
the quality of human capital, encourage economic growth, and improve standards of living.
v. Provision of clean water, adequate sanitation and effective health services. This is very important if individuals are to benefit from the food they consume; a poor health situation may prevent them from attaining nutrition security.
vi. Efforts must be made to open national markets to international trade, both within Africa
and globally, as national food availability should not depend upon national food production
alone. The nutritional security of the population of a country is enhanced by the degree to
which it invests in building the institutional and legal frameworks and physical infrastructure
needed to facilitate open, reciprocal and free trade.
vii. The issue of gender equity must be addressed, as a close link exists between improved
child nutrition and the extent to which women participate in making economic decisions
within their households. Greater social equity enhances women's access to resources, thereby increasing the diversity and quantity of food they can provide and improving the level and quality of the care they can give to their dependents.
viii. Locally conceived and implemented action has been shown to be the most effective way
to improve food and nutrition security. National governments should give broad direction to
local efforts and facilitate the success of such efforts through resource allocation, institutional
support, and the provision of necessary expertise.
ix. Central governments should ensure that budgetary allocations reflect the central
importance that food and nutrition security have for the welfare of all people, as well as the
immense economic benefits they provide for relatively little cost. In this regard, donor
funding should be viewed as a secondary resource and used to complement the resources
allocated by governments.
x. Dedicated advocacy should be used to inform policymakers at all levels of the critical role
that improved nutrition plays in development and poverty alleviation. Without this, it is
unlikely the malnourished will receive any attention in any planning and resource allocation
decisions made in the democratic, decentralised, bottom-up political systems emerging across
Africa. The need to improve food and nutrition security must be communicated effectively
and understood widely; its significance for the welfare of all members of society must be
recognised. Ultimately, advocacy must build the political will needed to ensure that resources
are provided to help individuals and households attain food and nutrition security.
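As a rough illustration of the growth figure quoted in item (iii) above, the minimal Python sketch below shows what a sustained 3.5 % average annual growth in per capita GDP would imply over a 2015-2050 horizon. The 2050 end date is from the text; the 2015 base year and the index value of 100 are assumptions made purely for illustration.

# Minimal sketch: cumulative effect of 3.5 % average annual growth in per capita GDP
# over 2015-2050. The base year (2015) and the index value of 100 are illustrative assumptions.
growth_rate = 0.035
years = 2050 - 2015            # 35 years
index_2015 = 100.0

index_2050 = index_2015 * (1 + growth_rate) ** years
print(f"Per capita GDP index in 2050: {index_2050:.0f}")           # about 333
print(f"That is roughly {index_2050 / index_2015:.1f} times the 2015 level.")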
FOOD PRESERVATION
One major issue that is central to the achievement of sustainable food security in Nigeria is a
reduction in food spoilage. This is because of its effects on the quantity and quality of food
available to the consumer. Spoilage is the adverse change in the quality of food due to
biological reactions. Similarly, the terms deterioration or autolysis describe the adverse changes in the quality of food that result from physical and chemical reactions occurring within the food system (Muhammad & Kayode, 2014). Food spoilage results when the nutritional value, texture or taste of the food is damaged, making the food harmful to people and unsuitable for consumption. On the basis of spoilage, food may be divided into three
categories namely perishable foods, semi-perishable foods and stable foods. The inclusion of
any food in any of the three categories is based on the level of moisture content of the food as
this factor plays a direct prominent role in food spoilage. Food spoilage may be caused by
mechanical damage, microbiological activity (bacteria, yeasts, and moulds), autolysis
(oxidation reaction and enzymatic activity), insect and rodent attacks, as well as temperature
related factors.
Food preservation is the act of protecting food from deterioration and decay so that it will be
available for future consumption. Preservation of food has been a major concern of humankind over
the centuries. Contamination with microorganisms and pests causes considerable losses of
foods during storage, transportation and marketing (15% for cereals, 20% for fish and dairy
products and up to 40% for fruits and vegetables). Particularly, pathogenic bacteria are an
important cause of human suffering and one of the most significant public health problems all
over the world. The World Health Organisation (WHO) reported that the infectious and
parasitic diseases represent the most frequent cause of death worldwide (35%).
Any method that will create unfavourable conditions for the factors which are capable of
adversely affecting the safety, nutritive value, appearance, texture, flavour, and keeping
qualities of raw and processed foods, could be used to preserve foods. Since thousands of
food products with different physical, chemical, and biological properties can undergo
deterioration from such diverse causes as microbial, natural food enzymes, insects and rodent
infestations, heat, cold, light, oxygen and moisture content, food preservation methods differ
widely and are optimised for specific products. Numerous processing techniques have been
developed to control food spoilage and raise safety. Traditional methods of preserving foods
include dehydration, smoking, salting, controlled fermentation (including pickling), and
candying; certain spices have also long been used as antiseptics and preservatives. Among the
modern processes for food preservation are refrigeration (including freezing), canning,
pasteurisation, irradiation, and the addition of chemical preservatives.
(ii) High-Temperature Short Time (HTST): Heat treatment between 71.7 and 75 °C for 15 to 16 seconds; and
(iii) Ultra High Temperature (UHT): Heat treatment between 121 and 138 °C (or higher) for 2 to 5 seconds.
Heating is done under pressure to produce turbulence and to prevent burning of the milk inside a tubular or plate-type heat exchanger. The milk produced is essentially sterile and
when packaged under aseptic conditions, may be stored at room temperature for several
months. Permanent stability, that is the shelf life of about two years, is obtained with foods
that can withstand prolonged heating such as bottled juices. There is a greater loss of flavour
from foods that are exposed to a longer time-temperature relationship. Therefore, temporary
stability (limited shelf life) is only obtained with some foods where prolonged heating would
destroy their quality. These foods, such as milk, usually require subsequent refrigeration. “High-
Temperature Short-Time” (HTST) and “Ultra High Temperature” (UHT) processes have
been developed to retain a food’s texture and flavour quality parameters (Ihikoronye &
Ngoddy, 1985).
(iii) Tyndallisation
This is a cyclical heat process aimed at destroying the vegetative forms of microbes at a high temperature of between 70 and 100 °C, followed by cooling to 37 °C to allow the resistant spores to germinate, and finally reheating at a high temperature to destroy the germinated spores. The cycle of heating, cooling and reheating is repeated until the level of microbial destruction is satisfactory (Badejo, 1999).
(iv) Sterilisation
Sterilisation is a method of heat treatment that is aimed at removing all microorganisms.
Temperature sterilisation could be achieved in two ways, i.e. Low-Temperature Longer Time (LTLT) or High-Temperature Shorter Time (HTST). For example, milk and milk products are sterilised at temperatures of 121 °C for 5 minutes or 149 °C for 6 seconds. Under this
process, all pathogenic and toxin-forming organisms (including both vegetative cells and
spores) are destroyed, as well as other types of organisms, which if present could grow in the
food and cause spoilage under normal handling and storage conditions. To prevent
recontamination sterilised products must be packaged in aseptic and hermetically sealed
containers such as cans and bottles.
Types of commercially sterile processes include canning, bottling, and aseptic processing.
Most commercially sterile food products have a shelf life of 2 years or longer. The
disadvantage of this process, however, is that high temperatures can diminish product
appearance, texture, and nutrient quality. Examples of such foods that could be preserved by
this method include all forms of cooked food, milk, beer and wine.
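For reference, the named time-temperature regimes quoted in this section can be tabulated in a few lines. The Python sketch below simply records the figures given above; the dictionary layout and field names are illustrative choices, not a standard.

# Illustrative tabulation of the heat-treatment regimes quoted in this section.
# Temperatures in degrees Celsius, times in seconds; figures are those given in the text.
heat_treatments = {
    "HTST pasteurisation":       {"temp_c": (71.7, 75.0),   "time_s": (15, 16)},
    "UHT treatment":             {"temp_c": (121.0, 138.0), "time_s": (2, 5)},
    "Milk sterilisation (LTLT)": {"temp_c": (121.0, 121.0), "time_s": (300, 300)},
    "Milk sterilisation (HTST)": {"temp_c": (149.0, 149.0), "time_s": (6, 6)},
}

for name, regime in heat_treatments.items():
    lo_t, hi_t = regime["temp_c"]
    lo_s, hi_s = regime["time_s"]
    print(f"{name}: {lo_t}-{hi_t} degC for {lo_s}-{hi_s} s")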
(b) Low Temperatures Treatment
The slowing of biological and chemical activity with decreasing temperature is the principle
behind cooling (refrigeration) and freezing preservation. In addition, when water is converted
to ice, free water required for its solvent properties by all living systems is removed. Low-
temperature treatment prevents the spoilage of food as well as the proliferation of harmful
bacteria. Low-temperature treatment involves freezing and refrigeration.
(i) Freezing
Freezing turns water in food into ice crystals which rupture the microbial cells. Water is
unavailable for reactions to occur, and for micro-organisms to grow. Some microbial cells are
destroyed during freezing due to denaturation as a result of increased concentration of solute
in the frozen food, while others are prevented from multiplying (US History Encyclopedia, 2006). Not all types of bacteria are killed, however; those that survive reanimate in thawing food and often grow more rapidly than before freezing. In freezing, food temperature is reduced to about -17 °C. However, the freezing compartments of some
home refrigerators are not designed to give a temperature of -17 °C, the temperature needed
for prolonged storage of frozen foods.
After we harvest plants or slaughter animals, enzyme reactions can continue and result in
undesirable colour, flavour and texture changes in the food. Freezing slows down but does
not destroy enzymes in fruits and vegetables. That is why it is important to stop enzyme
activity before freezing. The two methods that can be used are blanching and adding
chemical compounds such as ascorbic acid. Fish meat, peas, vegetables and ice-cream are
usually preserved using the freezing method.
(ii) Refrigeration
Refrigeration is the process of lowering the temperature and maintaining it in a given space
for the purpose of chilling foods, preserving certain substances, or providing an atmosphere
conducive to bodily comfort. Storing perishable foods, furs, pharmaceuticals, or other items
under refrigeration is commonly known as cold storage. Chilling slows down microbial
activities and chemical changes resulting in spoilage. In chilling, food is kept at 0–4 °C.
However, at this temperature range some spoilage microorganisms (psychrophiles) may still
be alive and grow slowly, so the food cannot be stored for a long time (US History
Encyclopedia, 2006).
(c) Controlled Reduction in Moisture Content
The presence of adequate water (moisture) allows numerous biochemical and microbial
activities in food (Badejo, 1999). Water exists in foods in various forms, either as free water or bound water. Water activity can, therefore, be lowered by removing water from the food or by making the water in the food material unavailable.
(i) Drying
Drying food is a combination of continuous mild heat with air circulation that will carry the
moisture off. When sufficient water is removed from foods, microorganisms (bacteria, yeasts,
and moulds) will not grow, and many enzymatic and non-enzymatic reactions will cease or
be markedly slowed. The process of drying foods removes roughly 80 to 90 % of the water
content of fruits and vegetables. Because drying removes moisture, the food becomes smaller
and lighter in weight. When the food is ready for use, the water is added back and the food
returns to its original shape. Drying of foods could be achieved using any of the following:
air, oven and microwave oven.
Dried food items can be kept almost indefinitely, as long as they are not rehydrated. The
disadvantages of this preservation method include the time and labour involved in
rehydrating the food before eating. Moreover, rehydrated food typically absorbs only about
two-thirds of its original water content, making the texture tough and chewy. Examples of
food preserved by this process include various dried food products such as fruit, coffee, fish,
meat and vegetables.
(ii) Dehydration
This is a preservative method which involves the substantial reduction of moisture under controlled conditions of temperature, humidity and air flow. Dehydration involves the transfer
of sufficient heat to food to cause moisture evaporation. It is different from conventional
drying because dehydration is a controlled drying process. Foods preserved by dehydration
contain considerably lower water activity and less total water than concentrated foods.
Dehydration could be in the form of convection drying, in which the food to be dried is placed in
contact with a stream of heated air and heat is transferred to the product mainly by
convection. Dehydration can also be by direct contact drying in which the food product is
placed in direct contact with a hot surface; here heat is transferred through conduction.
Radiant heat drying is also another form of dehydration process using the microwave
(Ihikoronye & Ngoddy, 1985).
Freeze-drying (lyophilisation) is another method of dehydration in which food is frozen first and the ice is later removed by sublimation, thus effecting drying. Freeze drying ensures retention of
shape, size and volatile components of the food. It is however expensive. Examples of food
preserved by this process include food products like fruit juices, meat and milk.
(iii) Evaporation
Evaporation is a type of heat processing operation which involves a reduction in moisture
content of food materials to obtain a concentrated product with high total solids. The
reduction in total moisture content often reduces the water activity (amount of available water
in food) of a food material. Concentrated food product with little or no moisture does not
encourage the growth of microbes. Evaporation can also be carried out at a lower temperature
using a vacuum evaporator (Badejo, 1999; Muhammad & Kayode, 2014).
(d) Smoking
The smoke is obtained by burning oak or a similar wood under a low breeze/wind at about 93 to 104 °C. Preservative action is provided by such bactericidal chemicals in the smoke as formaldehyde (HCHO) and creosote (an antiseptic obtained from wood tar), and by the dehydration that occurs in the smokehouse. When foods are smoked they absorb various chemicals from the smoke including formaldehyde, formic acid, ketones, acetaldehyde and other aldehydes, waxes, resins, tar and alcohol, among other compounds.
Formaldehyde is considered to be the most important antimicrobial compound in wood
smoke. It is known that wood smoke is more effective against vegetative cells than bacterial
spores and that the rate of germicidal action of the wood smoke varies with the kind of wood
employed. The aldehydes cause many microbes to die and the acids lower the pH of the food.
Examples of foods preserved by this method include fish, meat, ham and sausage (Badejo,
1999; Muhammad & Kayode, 2014).
(e) Sugaring and Salting
This is the act of treating food with salt, a strong salt solution or a strong sugar solution. After adding salt or sugar, the water potential outside the micro-organisms becomes lower than that inside them. As a result, water essential for enzyme action and microbial growth is removed by osmosis and the microbes cannot continue to live. However, a high concentration of
salt and sugar may make the foods very salty and sugary respectively. Examples of food
preserved by this process include bacon, salted fish, soy sauce, jam, fruits in heavy sugar
syrup.
(f) Pickling in Vinegar
Food is kept in vinegar since microorganisms cannot grow well in low pH value solutions.
Vinegar (acetic acid) slows the growth of spoilage bacteria, gives flavour and softens bones.
Vinegar, however, is only a temporary preservative, because enzymes continue to act,
softening and spoiling the product. Some kind of vinegar such as apple cider vinegar will
darken most vegetables and fruits. Examples of foods preserved by this method are sauces,
pickled onions and cucumbers (Shephard, 2006).
(g) Chemical Preservatives
Historically, many toxic substances have been used as food preservatives. Borates, fluorides
and various phenols, that serve as antimicrobials, enzyme inhibitors and antioxidants have all
been used. However, in the course of time, it became apparent that their efficiency in killing
microorganisms was coupled with considerable toxicity to man. Today, the U.S Food and
Drug Administration and comparable agencies in various countries vigorously regulate the
chemicals that may be added to foods as well as the conditions of their use. There is much
pressure to remove chemicals from the food supply, especially where their effects can be
achieved by other means.
(h) Irradiation
This is the act or process of exposing food to a controlled amount of energy in the form of high-speed particles or rays to improve food safety by eliminating or reducing the organisms that destroy food products. This is a very mild treatment because a radiation dose of 1 kGy represents the absorption of just enough energy to increase the temperature of the product by about 0.36 °C. It means that heating, drying and cooking cause higher nutritional losses than irradiation. Moreover, the heterocyclic ring compounds and carcinogenic aromatics produced during thermal processing of food at high temperatures have not been identified in irradiated foods. Ionising radiation or
irradiation is used as a method to destroy enzymes and micro-organisms in food, delay
ripening of fruits, and vegetables; inhibit sprouting in bulbs, and tubers; remove insects from
grains, cereal products, fresh and dried fruits and vegetables; and destroy bacteria in fresh
meats, all with minimal effect on the nutritive value of food (Anon, 1991). Irradiated foods
are not radioactive. Radiant energy disappears from the food once it is removed from the
source of ionising radiation because the food itself never comes into direct contact with the
radiation source. The international bodies including the Food and Agriculture Organisation
(FAO), the International Atomic Energy Agency (IAEA), WHO and Codex Alimentarius
Commission (CAC) investigate projects on food irradiation to verify the safety and quality of
different irradiated products. These studies have shown that irradiation, used alone or in combination with other methods, could improve microbiological safety and extend shelf-life. Furthermore, people often confuse irradiated foods with radioactive foods.
At no time during the irradiation process does the food come into contact with the radiation
source and, it is not possible to induce radioactivity in the food by using gamma rays or
electron beams up to 10 MeV. The strength of irradiation source and length of food exposure
to the irradiation determine the dose; thus food can be treated under different regimes, namely radurisation, radicidation and radappertisation, to achieve different levels of food treatment
success.
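The statement above that 1 kGy raises the product temperature by only about 0.36 °C follows from a simple energy balance: an absorbed dose of 1 kGy is 1 000 joules per kilogram, and the temperature rise is that energy divided by the product's specific heat capacity. The sketch below assumes a specific heat of about 2.8 kJ/(kg·K), an illustrative value for moist foods that is not given in the text; for pure water (about 4.2 kJ/(kg·K)) the rise would be closer to 0.24 °C.

# Energy balance behind the "1 kGy is about 0.36 degC" statement.
# An absorbed dose of 1 kGy equals 1000 J per kg of product.
# The specific heat value below is an illustrative assumption, not a figure from the text.
dose_j_per_kg = 1000.0               # 1 kGy
specific_heat_j_per_kg_k = 2800.0    # assumed, roughly representative of moist foods

delta_t_c = dose_j_per_kg / specific_heat_j_per_kg_k
print(f"Temperature rise from 1 kGy: {delta_t_c:.2f} degC")   # ~0.36 degC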
(i) Canning
Canning is the process of preserving food by cooking, sealing it in sterile airtight cans or jars,
and boiling the containers to kill or weaken any remaining bacteria as a form of sterilisation
(Shephard, 2006). Sterilisation of the food can also be achieved by irradiation. The process
was invented (1809) by Nicolas Appert, a French confectioner. Various foods have varying
degrees of natural protection against spoilage and may require that the final step occurs in a
pressure cooker. High-acid fruits like strawberries require no preservatives to can and only a
short boiling cycle is required, whereas marginal fruits such as tomatoes require longer
boiling time and addition of other acidic elements. Low acid foods, such as vegetables and
meats require pressure cooking treatment. Food preserved by canning is at immediate risk of
spoilage once the can has been opened. Lack of quality control in the canning process may
allow ingress of water or micro-organisms. Most such failures are rapidly detected because decomposition within the can causes gas production, and the can may swell or burst.
However, there have been examples of poor manufacture due to under processing and poor
hygiene practices allowing contamination of canned food by the obligate anaerobe,
Clostridium botulinum which produces an acute toxin within the food, leading to severe
illness or death (Ihikoronye & Ngoddy, 1985).
(j) Fermentation
Many foods, such as cheeses, wines, and beers are fermented foods that keep for a long time
before spoilage. This is because their production uses specific starter microorganisms that
combat spoilage from other less benign organisms. These micro-organisms keep pathogens in check by producing acid or alcohol, creating an environment that is hostile to pathogens and other spoilage micro-organisms. The starter micro-organisms, salt, hops, cool storage temperatures,
low levels of oxygen and/or other methods are commonly used to create the specific
controlled conditions that will support the desirable organisms that produce food fit for
human consumption (Ihikoronye & Ngoddy, 1985).
(k) Controlled environment
Preservation by controlled environment involves controlled atmospheric storage (CAS) and modified atmospheric storage (MAS). CAS and MAS involve the manipulation, by addition or removal of gases from the storage environment, of the atmospheric composition to achieve a condition different from normal. Normal air is composed of approximately 21 % oxygen, 78 % nitrogen and about 0.04 % carbon dioxide, with the remainder being mostly argon. In MAS this normal gas composition is altered or modified to achieve a reduction in oxygen content while increasing the level of carbon dioxide and nitrogen. Controlled atmospheric storage is a preservation process
whereby the gaseous environment is modified to the desired level and controlled at this level
within strict limits throughout the storage period (Africa Conference Brief 1980).
In MAS, the gaseous storage atmosphere changes continuously throughout the storage period since the gaseous composition is not controlled. The idea of MAS and CAS is traced back to
the fact that the elevation of carbon dioxide and reduction of oxygen retards catabolic
reactions in freshly harvested respiring foods and slows the growth of aerobic spoilage
microorganisms. Both MAS and CAS are known to retard physiological processes
(respiration and ripening), minimise mechanical injury and microbial infections to maintain
optimum quality and extend the shelf-life of food products. Therefore, either of the two methods is employed in the storage and preservation of fruits and vegetables.
Other practices that can delay or prevent food spoilage are as follows:
COOPERATIVES IN NIGERIA
The resource-poor farmers remain the bedrock of agricultural production especially in the
developing countries including Nigeria. They account for over 90 % of all agricultural output
in Nigeria. They are however burdened with high and rising prices of farm inputs, low
efficiency of farming and processing techniques, inadequate production and processing
facilities, poor pricing and heavy constraints in obtaining credits and insurance. All of these
are further compounded by the general economic downturn and government drives to remove
all subsidies on inputs such as fertilisers, vaccines and foundation stock. Consequently, the
cooperative option comes into focus as a viable way to effectively mobilise farmers to form
groups and pool resources so as to become more effective in agricultural production and
processing including preservation.
A cooperative is a voluntary association of people usually of limited means who have joined
together to achieve common economic interest through the formation of a democratically
controlled business organisation, making an equitable contribution to the capital required and
accepting a fair share of risks and benefits of the undertaking. The forerunner to modern day
cooperatives was started in 1844 by the Rochdale Pioneers in England with the ideals of
universality, democracy, liberty, unity and fraternity, self-help, equity and justice. The
introduction of cooperatives in Nigeria in 1935 by the colonial government marked the
beginning of using cooperatives for the development of the agricultural sector of the Nigerian
economy. The government has over the years been encouraging the growth and development
of the organisation through legislation, administration, financial assistance, technical
assistance as well as educational development.
Since the majority of our food producers are the resource-poor farmers that face many
obstacles and have limited means, the cooperative option provides the best avenue for
mobilising their resources for enhanced agricultural production. Under the structure of
cooperatives, farmers can form themselves into groups, which represent their interest and
where they also find new strength in working together. Presently, many types of cooperatives
exist across the country. There is, however, a growing trend of adopting the multi-purpose
approach. Farmer cooperatives focusing essentially on production, processing and marketing
of crop and livestock products are very few. Nevertheless, well-organised farmer’s
cooperatives can perform the following roles:
(1) The absence of felt needs among members. Cooperatives were generally introduced by
the government and there is, therefore, the absence of felt need essential to the survival of the
spirit of the cooperative.
(2) The cooperative movement is still in its infancy as only about 1% of the farming
population is involved in cooperatives. Besides, less than 20% of the total agricultural export
crops is marketed through the cooperative. The proportion is even less for staple food crops
produced and consumed locally.
(3) Poor management of cooperatives, as most people in the management cadre usually lack experience and are not well trained.
(4) Financial assistance from cooperative credit unions is insignificant because of their
limited resources. Moreover, banks are not so willing to give credit facilities to farmers
because of hazards involved in agriculture.
(5) The inability of cooperatives to become a mass movement in spite of government funding
and support.
(6) Lack of active membership.
(7) Dishonesty and corruption among members and management staff occupying
administrative positions in the societies.
(8) Deliberate ganging up by the capitalists to frustrate cooperative advancement and growth
in order to protect their own business. This can be accomplished through the use of influence
on banks not to grant financial assistance to cooperatives. They can also set up organised manufacturing, trading or supply establishments to the detriment of cooperatives.
CONCLUSION
The importance of sustainable food security for growth and development of Nigeria cannot
be overemphasised. To achieve this feat, there is the need to have increased agricultural
production and efficiency in food supply and distribution. In view of the major role played by
the small-scale farmers in production and post-harvest handling, all efforts must be made to
respond positively to the considerable challenges in the food value chain by taking advantage of the potential offered by cooperatives in achieving food security in Nigeria.
EVALUATION STRATEGIES
REFERENCES
Africa Conference Brief 1, IFPRI.
Bretch, P.E. (1980), “Use of Controlled Atmosphere to Retard Deterioration of Produce”, Food Tech, 314:145-149.
Buckland, L. and J.
Badejo, F.M. (1999). Introduction to Food Science and Technology. Kitams Academic and
Industrial Publishers. ISBN978-34805-3-7.
Benson, T. (2004), Assessing Africa’s Food and Nutrition Security Situation 2020 Committee
on World Food Security (CFS), Rome, 23 – 26 May 2005.
Committee on World Food Security (2005). Report of the 31st Session of the Committee on
World Food Security, Rome 23 – 26 May.
Diouf, J. (2005), “Towards the World Food Summit and Millennium Development Goal
Targets: Food Comes First” Foreword, The State of Food Insecurity in the World
2005, FAO.
FAO (1996), World Food Summit, Corporate Document Repository.
FAO (1998) “Urgent Action Needed to Combat Hunger as Number of Undernourished in the
World Increases” available online @ www.fao.org. Retrieved on 15th December
2005.
FAO (2002). The State of Food Insecurity in the World 2001. Rome.
FAO (2005), “FAO Warns World Cannot Afford Hunger”.
Frandson, R.D. (1972), Anatomy and Physiology of Farm Animals 2nd edition.
Gebremedhin, T. G. (2000), “Problems and Prospects of the World Food Situation” Journal
of Agribusiness 18, 2 (Spring 2000): 221- 236.
Honfoga, B.G. and G.J. M. van den Boom (2003), “Food-Consumption Patterns in Central
West Africa, 1961 to 2000, and Challenges to Combating Malnutrition” Food and
Nutrition Bulletin, 24(2):167-182, The United Nations University.
Ihikoronye, A.L. & Ngoddy, P.O. (1985). Integrated Food Science and Technology for the Tropics. Macmillan Publishers Ltd, London.
Jayne, T.S., Tschirley, D. L., Staaz, J.M. Scaffer, J.D., Weber, M. T., Chisvo, M., &
Mukumbu, M. (1994), Market Oriented Strategies to Improve Household Access to
Food-Experience from Sub-Saharan Africa, MSU International Development Paper
No15, Michigan State University.
Joseph, J.K. (1994), Preservation and Storage of Foods. In Readings in General Studies in Nigeria, Unilorin Library and Publication Committee, Ilorin.
Joseph, J.K. (1996). Foods and Delicacies of Yagbaland of Nigeria: The Problem of and Solutions to Gradual Extinction, Centrepoint, Humanity Edition 6(2), 149-162.
Lapedes, D.N. (1977). Encyclopaedia of Food, Agriculture and Nutrition, McGraw-Hill Book Company, New York.
Leith-Rose (1939), African Women, Routledge and Kegan Paul, London.
Meade, B. & Rosen, S. (2002). Measuring Access to Food in Developing Countries: The Case of Latin America. Paper prepared for the 2002 AAEA-WAEA meetings.
Milich, L. (1997). Food Security. Available at http://ag.arizona.edu/~/milich/foodsec.html.
Muhammad, N.O. & Kayode, R.M.O. (2014). Food Spoilage and Preservation: In History
and Philosophy of Science, Adekola, F.A. & Abdul-salam, N. (Eds.) Chapter14; 128-
140. University of Ilorin Press, General Studies Division, University of Ilorin,
Nigeria.
Omotesho, O.A., Joseph, J.K., Ladele, A.A. & Ajagbe, O.K. (1995). Animal Protein Crisis in the Nigerian Food Basket: A Preliminary Survey of Three LGAs in Oyo State of Nigeria.
Paarlberg, R. (2002), Governance and Food Security in an Age of Globalization. A 2020 Vision for Food, Agriculture, and the Environment, 2020 Brief 72, February 2002.
Ricketts, E. (1983), Food, Health and You, Macmillan, New York, Washington.
Shephard, S. (2006). Pickled, Potted, and Canned; How the Art and Science of Food
Preserving Changed the World.
Smith, L.C., U. Ramakrishnan, A. Ndiaye, L. Haddad & R. Martorell (2003), The Importance of Women’s Status for Child Nutrition in Developing Countries, Research Report Abstract 131.
Trueblood, M. & Shapouri, S. (2002). Food Insecurity in the Least Developed Countries and the International Response. AAEA Selected Paper, Long Beach, California, July 29-31, 2002.
United Nations (2005). Millennium Development Goals, United Nations Department of Public Information.
US History Encyclopedia (2006). Through a partnership of Answers Corporation.
Woolrich, Willis R. (1967). The Men Who Created Cold: A History of Refrigeration.
Wiebe, K. (2003). Land Quality, Agricultural Productivity, and Food Security at Local, Regional, and Global Scales. Paper prepared for presentation at the American Agricultural Economics Association Annual Meeting, Montreal, Canada, July 27–30, 2003. Economic Research Service, USDA.
Chapter Nine
GLOBAL THREAT OF COUNTERFEIT MEDICINES
OBJECTIVES
2. Pakistan Manual of Drug Laws defines CM as a drug, the label or outer packing of which is an imitation of, resembles, or so resembles as to be calculated to deceive, the label or outer packing of a drug of another manufacturer (WHO, 2003a).
3. Philippines Republic Act No. 82036 defines CM as medicinal products with
correct ingredients but not in the amounts as provided, wrong ingredients, without
active ingredients, with insufficient quantity of active ingredients, which results in
the reduction of the drug's safety, efficacy, quality, strength or purity. The CM is
deliberately and fraudulently mislabelled with respect to identity and/or source or
with fake packaging, and is applicable to both branded and generic products
(WHO, 2003a).
4. Nigerian Counterfeit and Fake Drugs and Unwholesome Processed Foods
(Miscellaneous Provisions) Decree, defined fake drug as “any drug product which
is not what it purports to be; or any drug or drug product which is so coloured,
coated, powdered or polished that the damage is concealed or which is made to
appear to be better or of greater therapeutic value than it really is, which is not
labelled in the prescribed manner or which label or container or anything
accompanying the drug bears any statement, design, or device which makes a
false claim for the drug or which is false or misleading; or any drug or drug
product whose container is so made, formed or filled as to be misleading; or any
drug product whose label does not bear adequate directions for use and such
adequate warning against use in those pathological conditions or by children
where its use may be dangerous to health or against unsafe dosage or methods or
duration of use; or any drug product which is not registered by NAFDAC in
accordance with the provisions of the Food, Drugs and Related Products
(Registration, etc.) Decree 1993, as amended (WHO, 2003a)."
they purport to be or are represented to possess (Swaminath, 2008). AMs purport to
be or are represented as medicines whose names are recognised in an official
compendium, but have strength differing from the standard set forth in such
compendium. In addition, they have quality or purity that fall below the standard set
forth in such compendium. AMs consist in whole or in part of any filthy, putrid or decomposed substance, or of any medicine that has been prepared, packaged or stored
under unsanitary conditions where it may have been contaminated with filth thereby
rendering it injurious to health. An AM is also any medicine that is packed in a
container which is composed in whole or in part of any injurious or deleterious
substance which may render the content injurious to health. Any medicine which
bears or contains for the purposes of colouring, any colour other than one which is
prescribed, or contains any harmful or toxic substance which may render it injurious
to health, or has been mixed with some other substance so as to reduce its quality or
strength is also AM (Swaminath, 2008).
Thus, CMs include products with correct ingredients, wrong ingredients, no active
pharmaceutical ingredient (API) or fake packaging and labelling, substandard and
adulterated products with correct labelling. These can be illustrated “mathematically”
as shown below:
Fake packing and/or labelling + correct quantity of correct ingredient =
counterfeit medicine.
Fake packing and/or labelling + incorrect quantity of correct ingredient =
counterfeit medicine.
Fake packing and/or labelling + no active ingredient = counterfeit medicine.
Fake packing and/or labelling + wrong ingredient = counterfeit medicine.
Genuine packing and/or labelling + incorrect quantity of correct ingredient
(deliberate) = counterfeit medicine.
Genuine packing and/or labelling + no active ingredient (deliberate) =
counterfeit medicine.
Genuine packing and/or labelling + wrong ingredient (deliberate) = counterfeit
medicine.
Genuine packing and/or labelling + incorrect quantity of correct ingredient (not
deliberate) = substandard medicine.
Genuine packing and/or labelling + unsanitary processing (Bad Manufacturing
Practice) = adulterated medicine.
Genuine packing and/or labelling + unsanitary facility for processing (Bad
Manufacturing Practice) = adulterated medicine.
Unsanitary packing and/or labelling + unsanitary facility and processing (Bad
Manufacturing Practice) = adulterated medicine.
Genuine packing and/or labelling + correct quantity of correct ingredient +
sanitary facility and processing (cGMP ) = genuine medicine.
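The decision rules listed above can be expressed compactly as a classification function. The Python sketch below is a hypothetical rendering of those rules; the parameter names, the folding of packing and facility sanitation into a single flag, and the order of the checks are illustrative assumptions rather than any regulatory definition.

# Hypothetical rendering of the classification rules listed above.
# Parameter names and the order of checks are illustrative assumptions.
def classify_product(genuine_packaging: bool,
                     correct_ingredient: bool,
                     correct_quantity: bool,
                     has_active_ingredient: bool,
                     deliberate: bool,
                     sanitary: bool) -> str:
    if not genuine_packaging:
        # Fake packing and/or labelling makes the product a counterfeit regardless of content.
        return "counterfeit medicine"
    if not (has_active_ingredient and correct_ingredient and correct_quantity):
        # With genuine packaging, deliberate defects are counterfeits;
        # a non-deliberate incorrect quantity is a substandard medicine.
        return "counterfeit medicine" if deliberate else "substandard medicine"
    if not sanitary:
        # Unsanitary processing or facility (bad manufacturing practice).
        return "adulterated medicine"
    return "genuine medicine"

# Example: genuine pack, correct ingredient, wrong quantity, not deliberate -> substandard medicine
print(classify_product(True, True, False, True, deliberate=False, sanitary=True))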
In Nigeria, the era 1985 to 2000 heralded the regime of counterfeiting of products, including medicines (Akinyandenu, 2013; Erhun et al., 2001). Furthermore, an estimated 70 % of drugs in circulation in Nigeria were either fake or adulterated in 2002, while 48 % of
goods and drugs imported into the country in 2004 were substandard or counterfeit
(WHO, 2006b).
Internet-based sales of pharmaceuticals are a major source of counterfeit medicines
(WHO, 2006b). Pre-requisites for legally operated Internet pharmacies include government-licensed facilities and the dispensing of medication on presentation of a valid patient prescription. However, WHO reports showed that illegal Internet pharmacies
are operated internationally, sell products that have an unknown or vague origin and
sell medications without prescriptions (WHO, 2006b).
The casualties due to CMs are enormous. In Nigeria, 109 children died after the
use of paracetamol syrup in 1990 (Bonati, 2009) while 14 children died after the
administration of chloroquine phosphate injection in 1994 (Aluko, 1994). Also, the Nigerian supply of 88,000 Pasteur Merieux and SmithKline Beecham meningitis vaccines to Niger in 1995 resulted in about 2,500 deaths after vaccination (Akinyandenu, 2013), while 84 children were reported dead between late 2008 and
early 2009 due to diethylene glycol-contaminated My Pikin Baby Teething Mixture
(Bate et al., 2009). Also, there were cases of counterfeit artesunate-amodiaquine
(Ehianeta et al., 2012). The battle against counterfeit medicines in Nigeria led to the
promulgation of the Counterfeit and Fake Drugs and Unwholesome Processed Foods
(Miscellaneous Provisions) Act No.25 of 1999 which makes provision for the
prohibition of sale and distribution of counterfeit, adulterated, banned, fake,
substandard or expired drugs (PCN, 2009b). In addition, the promulgation of
National Agency for Food and Drug Administration and Control (NAFDAC) Decree
No. 15 of 1993 led to the establishment of NAFDAC with the mandate to regulate and control quality standards for foods, drugs, cosmetics, medical devices, chemicals, detergents and packaged water imported, manufactured locally and distributed in
Nigeria. The Federal task force on Counterfeit and Fake Drugs and Unwholesome
Processed Foods (Miscellaneous Provisions) Act operates within NAFDAC.
Other CM casualties include 89 people who died in Haiti after using cough syrup containing diethylene glycol in 1995 (WHO, 2006b) and 30 people who died in Cambodia after taking counterfeit antimalarial medicine in 1999 (WHO, 2006b). Furthermore, 38 % of 104 antimalarial drugs on sale in pharmacies in Southeast Asia did not contain any active ingredients, causing a number of preventable deaths from the disease, according to a study in 2001 (WHO, 2006b). In 2004, a trail of deaths was caused by fake medicine (WHO, 2006b).
All kinds of medicines have been counterfeited, including antibiotics, hormones, analgesics, steroids, and antihistamines; these drugs form almost 60 % of the products reported. In terms of types of counterfeits and their magnitude, the products reported can be grouped into six categories (WHO, 2003a):
Products without active ingredients, 32.1 %;
Products with incorrect quantities of active ingredients, 20.2 %;
Products with wrong ingredients, 21.4 %;
Products with correct quantities of active ingredients but with fake packaging, 15.6 %;
Copies of an original product, 1 %; and
Products with high levels of impurities and contaminants, 8.5 %.
Specific examples of counterfeiting showing country of origin, the year and the type of
counterfeiting are as shown in the table below (WHO, 2012):
SFFC medicine | Country/Year | Report
1. Avastin (for cancer treatment) | United States of America, 2012 | Affected 19 medical practices in the USA. The drug lacked active ingredient.
2. Viagra and Cialis (for erectile dysfunction) | United Kingdom, 2012 | Smuggled into the UK. Contained undeclared active ingredients with possible serious health risks to the consumer.
3. Truvada and Viread (for HIV/AIDS) | United Kingdom, 2011 | Seized before reaching patients. Diverted authentic product in falsified packaging.
4. Zidolam-N (for HIV/AIDS) | Kenya, 2011 | Nearly 3,000 patients affected by falsified batch of their antiretroviral therapy.
5. Alli (weight-loss medicines) | United States of America, 2010 | Smuggled into the USA. Contained undeclared active ingredients with possible serious health risks to the consumer.
6. Anti-diabetic traditional medicine (used to lower blood sugar) | China, 2009 | Contained six times the normal dose of glibenclamide. Two people died, nine people were hospitalised.
7. Metakelfin (antimalarial) | United Republic of Tanzania, 2009 | Discovered in 40 pharmacies. The drug lacked sufficient active ingredient.
FACTORS ENCOURAGING COUNTERFEITING OF MEDICINES
1. Lack of or weak enforcement of existing laws on the quality, safety and efficacy of both imported and locally manufactured medicines encourages counterfeiting. Offenders are not afraid of arrest and prosecution (WHO, 2003a).
2. Weak penal actions encourage medicine counterfeiting since there is no fear of being
apprehended and prosecuted. This is especially so when there are more severe
penalties for counterfeiting non-medicinal products (WHO, 2003a).
3. Lack of cooperation between stakeholders results in counterfeiters escaping detection,
arrest, prosecution, and conviction. Also, failure of pharmaceutical manufacturers,
wholesalers, retailers and the public to report to national drug regulatory authorities
contribute to the flourishing of the menace (WHO, 2003a).
4. Lack of control by exporting countries and within free trade zones results in the flourishing of counterfeit medicines. Many exporting countries do not regulate exported pharmaceutical products to the same standard as those for domestic use.
Pharmaceutical products are sometimes exported through free trade zones where drug
regulation is lax, giving room for repacking and relabeling (WHO, 2012).
5. Availability/accessibility of advanced technology encourages counterfeiting. A study
in The Lancet showed that counterfeiters' ability to reproduce holograms and other
sophisticated printing techniques had dramatically improved between 2001 and 2005,
making detection even more difficult (WHO, 2006b).
6. Poverty sustains counterfeiting. Payment for medicines constitutes a major recurrent public health expenditure (second to salaries) and also accounts for over half of all private health expenditures (Dukes, 1997). Some people seek medicines that are sold
more cheaply. These are often available from non-regulated outlets, where the
incidence of counterfeit medicines is likely to be higher.
7. Low-availability of testing facilities makes detection of counterfeit medicines
difficult. Thus, offenders are emboldened to perpetuate counterfeiting.
CONCLUSION
CMs cause a serious public health crisis and constitute a major threat to national and international security. They create a negative economic impact and violate intellectual property rights. Positive actions must be taken today to combat CMs. We must not forget: no action today, no solution tomorrow!
EVALUATION STRATEGIES
Practice Questions
1. List six (6) examples of reported counterfeiting stating the products; country and
year of occurrence; and the type of counterfeiting.
2. Counterfeiting of medicines constitutes a global threat. Describe the magnitude of the burden of counterfeiting.
3. Discuss the implications of counterfeiting of medicines as a global threat.
4. Describe the mPedigree in the context of anti-counterfeiting of medicines.
5. What factors have contributed to the increased prevalence of counterfeit medicines?
REFERENCES
from http://www.whpa.org/Counterfeit_Medicines_IPJ_Vol27_No2_CONGRE
SS.pdf. Accessed on 1/5/2014.
Pharmacists Council of Nigeria. (2009a). The 4-Part Compendium of the Minimum
Standard of the Assurance of Pharmaceutical Care in Nigeria (2nd ed.).
Ibadan: ARK Ventures.
Pharmacists Council of Nigeria. (2009b). A Compilation of Pharmacy Laws, Drugs
and Related Laws and Rules in Nigeria, 1935-2000 (2nd ed.). Abuja:Clear
Impression Limited.
Sharma, Y. (2011, March 30). Fighting fake drugs with high-tech solution. Science
and Development Network. Retrieved from
http://www.scidev.net/en/health/detecting-counterfeit-drugs/features/fighting-
fake-drugs-with-high-tech-solutions-1.html. Accessed on 4/4/2011.
Swaminath, G. (2008). Faking it – I The Menace of Counterfeit Drugs. Indian Journal
of Psychiatry, 50, 238-240.
West, D. (2009). Purchasing and inventory management. In M. Weitz & K. J. Davis
(Eds.), Pharmacy management: essentials for all practice settings (2nd ed., pp.
383-399). United States of America: The McGraw-Hill Companies Inc.
United States Code. (2010). Adulterated drugs and devices. Retrieved from
http://www.law.cornell.edu/uscode/text/21/351. Accessed on 3/5/2014.
World Health Organisation. (2002). The selection and use of essential medicines.
Retrieved from http://apps.who.int/medicinedocs/pdf/s4875e/s4875e.pdf.
Accessed on 3/2/2014.
World Health Organisation. (2003a). General information on counterfeit. Retrieved
from http://www.who.int/medicines/services/counterfeit/overview/
en/index1.html. Accessed on 5/5/2014.
World Health Organisation. (2003b). Substandard and counterfeit medicines.
Retrieved from http://www.who.int/mediacentre/factsheets/2003/fs275/en/.
Accessed on 5/5/2014
World Health Organisation. (2006a). What are sub-standard medicines? Retrieved
from http://www.who.int/medicines/services/counterfeit/faqs/06/en/. Accessed
on 6/5/2014.
World Health Organisation. (2006b). Counterfeit medicines. Retrieved from
http://www.who.int/medicines/services/counterfeit/impact/ImpactF_S/en/inde
x1.html. Accessed on 6/5/2014.
World Health Organisation. (2012). Medicines: spurious/falsely-
labeled/falsified/counterfeit (SFFC) medicines. Retrieved from
http://www.who.int/mediacentre/factsheets/fs275/en/. Accessed on 5/5/2014.
Chapter Ten
FUNDAMENTALS OF POULTRY PRODUCTION
Adeyina, A.O. and Atteh, J.O.
Department of Animal Production, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
The word poultry refers to birds of economic value to humans. The domestic fowl (chicken), turkey, duck, goose and pigeon are typical examples of domesticated poultry, while the non-domesticated ones include the ostrich, quail and pheasant.
OBJECTIVES
MANAGEMENT SYSTEMS
The rearing environment of the birds is determined by the climatic conditions, the size of the flock, the available facilities and the purpose of rearing. There are two broad management systems: the free range (extensive) and the confined (intensive) systems.
Advantages of free range systems
It is cheap to maintain.
Birds are more resistant to disease outbreaks by reason of the large space available.
Birds have the advantage of greens and exposure to sunlight, for vitamins A and D respectively.
Confined management
The practice of confinement requires additional housing and labour expenditure, but it affords proper feeding, ease of management, disease control and record keeping. The housing and feeding schedule under this system are of two types: intensive and semi-intensive systems.
Intensive management
This system can be considered under two divisions: deep litter and cage systems. These two housing systems have a common building structure but differ in what is contained within the house. The general housing pattern for both systems includes a roof, ridge, wire mesh, dwarf wall foundation and foot dip.
Cage system
Cages are specifically designed for laying hens, although they can be used by other birds. In
each unit, birds could access feed (feeders) and water (drinkers). The cage units are joined
together along the length of the house to form a row. The rows of cage units can be arranged in either the conventional or the California (stair-step) pattern. The California form is the most common among Nigerian poultry farmers; in this arrangement, the droppings (faeces) of the birds on top do not fall on the ones below. The faeces accumulate in a pit under the cage. Cages are sold in units of 48, 96 or 192, with 2-3 birds in a unit depending on their age and size. Improved cages made from wood and designed according to available space and flock sizes are now in use. The advantages of the intensive system include ease of management, clean eggs and maintained performance. However, the capital outlay is high and there is the problem of flies. Birds in cages may experience “cage layer fatigue”, a condition that is common with highly productive birds.
There should be enough space to accommodate poultry equipment (feeders and drinkers) and
to avoid overcrowding of birds.
General equipment
General equipment is that used at any stage of growth of the birds; it includes feeders and drinkers. The sizes of this equipment depend on the age and size of the birds and the flock size. Small feeders are usually 0.6 – 1 m in length while large feeders are between 1.5 – 1.8 m. The large feeders may be of the tube or cylindrical type. Most drinkers are in the form of four-litre fountains or a long PVC pipe prepared for drinking purposes. Automatic feeders and drinkers are also available, especially on large commercial farms.
Specific equipment
The specific equipment is those associated with a specific stage in the life of the birds.
Examples include brooders, laying nest, debeakers, and Candler.
Different types of drinkers
Incubation: This stage covers the period from egg fertilisation to the time the egg hatches into a chick. The stage requires the provision of the favourable environmental conditions necessary for hatching: a temperature of 37 °C, a relative humidity of 50-60 % and adequate ventilation. Incubation can be either natural or artificial. Natural incubation requires the mother hen to sit on and hatch the eggs. Such a mother must, however, be broody, that is, have the natural tendency to sit on and incubate eggs until they are hatched.
Natural incubation
Artificial incubation is carried out with incubators, which can be of the natural- or forced-draught type, providing an ideal condition for eggs to hatch. The components of an incubator include a heating device, a temperature regulator, a humidifying device, ventilation and egg-turning devices. The heat source could be paraffin, butane gas, coal, solar energy or electricity. Eggs to be incubated should be collected at frequent intervals and should not be stored for more than 7 days, at temperatures of 10-14 °C and 75-85 % relative humidity. Eggs can be washed in water containing a sanitiser and detergent at a temperature above 38 °C, or at least 12 °C warmer than the eggs being washed.
Artificial incubator
BROODING
Artificial brooding
2. If the chicks appear to be unhealthy, check for unhealed navels and inform the source of your chicks immediately.
REARING
This is the management of birds from 8 to 20/24 weeks of age. The objective of this operation is usually to ensure that birds start laying at the right age and size. Rearing is the management of weight and light to get the birds to the point where they are both physically and physiologically mature for egg production. Light is very important for the attainment of sexual maturity and the initiation of egg laying. The normal daylight in the tropics provides the necessary lighting for the rearing stage.
a. Energy: The energy content of the diet available to the bird for maintenance and the production of meat and eggs is referred to as metabolisable energy (ME). It is expressed per unit weight as calories per gram (cal/g) or kilocalories per kilogram (kcal/kg); 1 kcal equals 4.2 kilojoules (a small conversion sketch is given after this list). Energy in poultry feed comes mainly from carbohydrate, but also from fat and protein. Chickens usually eat to meet their energy requirement; therefore, a high-energy feed results in low feed intake and vice versa. Recommended energy levels in poultry diets are about 2800-2850 kcal/kg for layers and about 3000 kcal/kg for broilers. When the environmental temperature is high, it is advisable to use more concentrated diets so that birds get enough nutrients in spite of the low feed intake. Sources of energy ingredients include maize, wheat, barley, tubers, fats and oils.
b. Protein: The poultry requirement for protein is a requirement for amino acids, with particular reference to the essential amino acids. Essential amino acids are those that cannot be synthesised by the body of the bird and must be supplied in the diet. They include arginine, histidine, methionine, threonine, valine, isoleucine, lysine, phenylalanine, tryptophan and leucine. The main limiting amino acids are lysine and methionine; a shortage of these essential amino acids will limit production. The protein requirement of chickens is usually expressed as crude protein, which must supply the required amino acids. Protein is usually very expensive, so rations must be balanced to supply the required protein without wastage. Broiler feed usually contains 23 % and 20 % crude protein at the starter and finisher phases respectively. Sources of protein ingredients include soybean, groundnut cake, palm kernel cake and fish meal.
c. Fat: Fat is a concentrated source of energy in poultry diets; it is the richest energy source, containing 2.25 times more energy than carbohydrates. Fat in poultry diets also functions as a source of essential fatty acids such as linoleic acid and as a carrier of fat-soluble vitamins such as vitamins A, D, E and K. The problems with the use of fat in poultry diets include difficulty of mixing, oxidative rancidity and high cost. Sources of fat include palm oil, vegetable oil, tallow (animal fat) and full-fat ingredients.
d. Vitamins: Vitamins are required in poultry diets and are involved in enzyme systems and natural resistance to disease. They are needed in very small quantities but are vital to sustain life. Most vitamins cannot be synthesised by the bird; therefore, poultry diets should be fortified with vitamins. Natural vitamins can be obtained from young and green plants, seeds and insects. Poultry in confinement have to be given diets supplemented with vitamins, and the level of supplementation is usually set higher than the minimum recommended as a safety margin. Vitamins may be purchased in synthetic form and added to diets as a premix. Diets without extra vitamins may lead to low productivity.
e. Minerals: Calcium and phosphorus are the most important minerals in poultry diets, for bone strength and maintenance. The requirement for these minerals cannot be met by regular feed ingredients and therefore has to be supplemented. Sources of calcium and phosphorus include bone meal, limestone and oyster shell. Other important macro-minerals include sodium and chlorine, added as salt. The micro-minerals such as zinc, manganese and iron can be supplied from a mineral premix, which is available at feed ingredient shops.
f. Water: Water is the universal solvent in which nutrients are transported in the body and waste products are excreted. Water is also essential to poultry for body temperature regulation. Withholding water from birds causes low productivity, dehydration and death. It is important that water be made readily available to birds even when feed is in short supply. Sources of water include rain water, tap water, wells, springs, etc.
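The energy figures quoted under item (a) lend themselves to a short calculation. The following is a minimal sketch, not part of the original text, using the 1 kcal = 4.2 kJ conversion stated above; the 110 g/day feed intake in the example is an assumed, illustrative figure.

```python
# Minimal sketch: metabolisable energy (ME) unit conversion and daily intake.
KJ_PER_KCAL = 4.2  # conversion factor quoted in the text

def kcal_per_kg_to_mj_per_kg(me_kcal_per_kg):
    """Convert diet energy density from kcal/kg to MJ/kg."""
    return me_kcal_per_kg * KJ_PER_KCAL / 1000

def daily_energy_intake_kcal(feed_intake_g, me_kcal_per_kg):
    """Energy consumed per bird per day (kcal) from feed intake in grams."""
    return feed_intake_g / 1000 * me_kcal_per_kg

print(kcal_per_kg_to_mj_per_kg(2850))        # layer diet: ~11.97 MJ/kg
print(daily_energy_intake_kcal(110, 2850))   # assumed 110 g/day intake -> 313.5 kcal
```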
PRINCIPLES OF POULTRY FEED PRODUCTION
Farmers rarely know what nutrients are put into commercial poultry feeds. They simply buy and use the feed, expecting good laying performance and a good feed-to-gain ratio. However, the nutrients can be checked to ascertain whether they are combined in the right proportions. Poultry feeds are produced based on these basic factors:
The nutrient requirement of the bird;
The nutrient composition of the available ingredients and the limitations to their use; and
The cost of the available ingredients.
Feedstuff or ingredients used in poultry feeds can be classified into five broad groups:
Cereals (maize, wheat, barley, rye, rice) and cereal-by-products (wheat offal, rice
bran, maize bran etc). These ingredients are mainly for energy.
Tubers (cassava, yam etc) and oils (lard). They are also used for energy.
Protein rich ingredients from plant source such as soybean, groundnut cake, palm
kernel cake etc.
Animal protein such as meat meal, fish meal, blood meal etc.
Mineral and vitamin supplements and additives. These include vitamin premix, synthetic amino acids (lysine and methionine), enzymes, anti-toxins, etc.
No single ingredient contains enough nutrients to meet the requirements of the birds. Various ingredients are therefore combined carefully to arrive at the nutrient requirements of the birds. The nutrients in a chicken diet can be calculated using the Pearson square method.
Assuming that four different feedstuffs are available to produce a feed of 16.5 % crude protein, the following steps can be followed, using the Pearson square, to arrive at the requirement:
Step 1
The requirement is calculated on the basis of 95 % for the main ingredients and 5 % for additives that may be added.
Therefore, the 16.5 % crude protein target becomes
16.5 × 100/95 = 17.37 %
Step 2
The available ingredients and their nutrient composition are shown below in table 1
Table 1: Available ingredients and their nutrient composition
Feedstuff               C.P (%)    Energy (kcal/kg)
Maize                   10         3434
Wheat offal             17
Soybean                 42
Groundnut cake (GNC)    45
*C.P = Crude protein
Assuming that maize is cheaper (perhaps because you produced it yourself), fix the proportion of each ingredient according to its cost.
For example, cereal contribution (Maize: 2, Wheat offal: 1):
Maize 2 → 2 × C.P (maize) = 2 × 10 = 20
Wheat offal 1 → 1 × C.P (wheat offal) = 1 × 17 = 17
20 + 17 = 37
Therefore, 37/(2 + 1) = 37/3 = 12.33 % C.P
Protein ingredient contribution (Soybean: 3, GNC: 1):
Soybean 3 → 3 × C.P (soybean) = 3 × 42 = 126
Groundnut cake (GNC) 1 → 1 × C.P (GNC) = 1 × 45 = 45
126 + 45 = 171
Therefore, 171/(3 + 1) = 171/4 = 42.75 % C.P
Step 3
Place the percentage crude protein contribution of the cereals at the top left of the square and that of the protein ingredients at the bottom left, then place the required crude protein in the centre. The diagonal differences, written on the right, give the parts of each mixture:

Maize + wheat offal   12.33          25.38 parts (cereal mixture)
                             17.37
Soybean + GNC         42.75           5.04 parts (protein mixture)

From our example, 25.38 parts of the cereal (maize and wheat offal) mixture and 5.04 parts of the soybean and GNC mixture are required, a total of 30.42 parts.
Step 4
Express the proportion as a percentage of the total mix of the diet as follows
Cereal combination
(25.38/ 30.42) * 95 = 79.26 %
79.26/1+2 = 26.42 %
Therefore,
Proportion of wheat offal = 26.42 (1/3)
Proportion of maize = 52.84 (2/3)
Protein combination
(5.04/30.42) * 95 = 15.74 %
15.74/1+3 = 3.94
Proportion of GNC = 3.94 (1/4)
Proportion of soy bean = 3.94 *3 = 11.81(3/4)
Step 5
Confirming the nutrients in the diet
Table 2: Nutrients in the diet
The 5 % allowance is for additives such as bone meal, essential amino acids and vitamins as
may be required by the birds for better performance.
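The whole calculation can be expressed in a few lines of code. The following is a minimal sketch, not part of the original text; the ingredient names, crude protein values and mixing ratios are those of the worked example above, and the script reproduces the percentages obtained in Steps 1-4.

```python
# Minimal sketch of the Pearson square feed formulation worked above.

def weighted_cp(parts):
    """Average crude protein of a mix given {ingredient: (ratio, cp_percent)}."""
    total_ratio = sum(r for r, _ in parts.values())
    return sum(r * cp for r, cp in parts.values()) / total_ratio

# Step 1: adjust the 16.5 % CP target to the 95 % ingredient fraction.
target_cp = 16.5 * 100 / 95                        # 17.37 %

# Step 2: fixed ratios and CP values of the two ingredient groups.
cereals = {"maize": (2, 10), "wheat offal": (1, 17)}
proteins = {"soybean": (3, 42), "groundnut cake": (1, 45)}
cereal_cp, protein_cp = weighted_cp(cereals), weighted_cp(proteins)   # 12.33, 42.75

# Step 3: Pearson square - diagonal differences give the parts of each group.
cereal_parts = protein_cp - target_cp              # 25.38
protein_parts = target_cp - cereal_cp              #  5.04
total_parts = cereal_parts + protein_parts         # 30.42

# Step 4: express each ingredient as a percentage of the diet (95 % of the mix).
for group, share in ((cereals, cereal_parts), (proteins, protein_parts)):
    group_pct = share / total_parts * 95
    group_ratio = sum(r for r, _ in group.values())
    for name, (ratio, _) in group.items():
        print(f"{name:15s} {group_pct * ratio / group_ratio:6.2f} % of the diet")
# The remaining 5 % is reserved for additives (bone meal, amino acids, vitamins).
```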
Feeding schedules
Pullet chicks from day 1 to 8 weeks of age can be fed chick mash containing 18-20 % crude protein and metabolisable energy of 2900 kcal/kg. Grower mash containing 16 % crude protein and 2900 kcal/kg can be given to chickens that are between 8/9 and 14 weeks old, while feed of 12-13 % crude protein can be given to birds between 14 and 20 weeks of age. This schedule is also applicable to the rearing of cockerels. Birds older than 20 weeks of age that are raised for egg production can be given layers' mash containing 16 % crude protein and 2850 kcal/kg metabolisable energy, supplemented with 3 % calcium. Broilers from 1 to 6 weeks of age can be fed broiler starter containing 23 % crude protein and 3200 kcal/kg of metabolisable energy. It is advisable that broilers not be kept longer than 8-10 weeks, as they tend to deposit more fat beyond this age.
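Purely as an illustration, the pullet schedule can be held in a small lookup table; the crude protein and energy figures below restate the preceding paragraph, while the feed label for the 14-20 week period, the 104-week upper bound and the helper function name are invented conveniences.

```python
# Minimal sketch of the pullet feeding schedule as a lookup table.
# CP is crude protein; ME is metabolisable energy.
FEEDING_SCHEDULE = [
    # (first week, last week, feed,                      CP,        ME)
    (1,  8,   "chick mash",                   "18-20 %", "2900 kcal/kg"),
    (9,  14,  "grower mash",                  "16 %",    "2900 kcal/kg"),
    (15, 20,  "grower feed (14-20 weeks)",    "12-13 %", "ME not stated"),  # ME not given in the text
    (21, 104, "layers' mash plus 3 % calcium","16 %",    "2850 kcal/kg"),
]

def feed_for_week(week):
    """Return the recommended feed for a pullet of the given age in weeks."""
    for first, last, feed, cp, me in FEEDING_SCHEDULE:
        if first <= week <= last:
            return f"{feed}: {cp} crude protein, {me}"
    return "no recommendation"

print(feed_for_week(10))   # grower mash: 16 % crude protein, 2900 kcal/kg
```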
Disease can occur in poultry at any age and in any breed. Sick birds will always be unproductive and behave strangely. Diseases in poultry are important because they can cause outright mortality and economic loss, and some are zoonotic (they can be transmitted to man). It is always better to prevent disease than to cure it.
in the form of a dust or spray. Internal worms can be treated with a deworming agent. A broad-spectrum dewormer that treats both internal and external parasites may also be used.
Vaccination involves the stimulation of immunity through the introduction of a disease-causing agent into the system. The agent, called a vaccine, could be a live, killed or attenuated (weakened) virus. The combination of good flock management and a vaccination programme will prevent disease outbreaks.
Week 8: Birds at this stage should be dewormed, followed by an anti-stress. Diseased and deformed birds should be culled. This step is not applicable to broilers, as broilers at this age should be at market weight.
Week 10: The litter of the poultry house should be changed and antibiotics given to stimulate growth. Birds should also be dewormed, followed by an anti-stress.
Week 12: Lasota vaccine can be administered, followed by an anti-stress.
Week 15: The litter should be cleaned and about 10 sample birds weighed to check growth uniformity. Birds should be vaccinated with fowl pox vaccine, followed by an anti-stress.
Week 16: Birds should be given NDV Komarov, followed by an anti-stress.
Week 20: Birds should be dewormed and then, about 3 days later, vaccinated with Lasota, followed by an anti-stress for the next 3 days. About 10 sample birds should be weighed to check growth uniformity.
Week 21: An extra calcium source such as oyster shell or limestone should be given. Light should also be made available from 5 am to 7 pm. At this stage, birds should be given layers' mash once egg production reaches about 5 %. This procedure is not applicable to cockerels.
Week 24: Most of the birds should be laying at this stage, depending on the breed. The first egg should be observed at 18 weeks and 50 % egg production at 22 weeks. It is advisable that cockerels be sold, while the layers continue egg production for the next 12-18 months. It is also advisable that the birds be vaccinated with Lasota every 4-6 weeks, followed by an anti-stress, to prevent the incidence of Newcastle disease. The feed and water should also be fortified with vitamins to improve production.
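As an illustration only, the routine above can be kept as a simple week-indexed table such as a small farm-records script might consult; the entries merely restate the text, and the function name is an invented convenience.

```python
# Minimal sketch of the week-by-week routine described above.
ROUTINE = {
    8:  ["deworm, then anti-stress", "cull diseased and deformed birds (not broilers)"],
    10: ["change litter", "antibiotics to stimulate growth", "deworm, then anti-stress"],
    12: ["Lasota vaccine, then anti-stress"],
    15: ["clean litter", "weigh ~10 sample birds", "fowl pox vaccine, then anti-stress"],
    16: ["NDV Komarov, then anti-stress"],
    20: ["deworm; Lasota ~3 days later", "anti-stress for 3 days", "weigh ~10 sample birds"],
    21: ["extra calcium (oyster shell/limestone)", "light from 5 am to 7 pm",
         "layers' mash once egg production reaches ~5 %"],
    24: ["most birds laying", "repeat Lasota every 4-6 weeks, then anti-stress",
         "fortify feed and water with vitamins"],
}

def tasks_for_week(week):
    """Return the scheduled activities for a given week of age."""
    return ROUTINE.get(week, ["no scheduled activity"])

print(tasks_for_week(12))   # ['Lasota vaccine, then anti-stress']
```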
EVALUATION STRATEGIES
Chapter Eleven
HUMAN NUTRITION - A LIFE COURSE APPROACH
1*Temidayo Oladiji and 2Deborah Opaleke
1Department of Biochemistry, University of Ilorin, Ilorin, Nigeria
2Department of Home Economics and Food Science, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
Good nutrition is a necessary condition for good health and general well-being. Good nutrition can only be attained by eating good and nutritious foods. Furthermore, it can be maintained by eating foods prepared under good hygienic conditions with proper management of the nutrients (for maximum nutrient retention during processing). Nutrition contributes to wellness; therefore, a healthful diet is part of disease prevention. Examples of nutrient deficiency diseases include scurvy, goitre, rickets and anaemia. Some forms of chronic disease, such as heart disease and diabetes, are also influenced by nutrition.
Nutrition is the science that interprets the relationship of foods to the functioning of the living organism. This includes the intake of food, the elimination of wastes and the syntheses that are essential for maintenance, growth and reproduction; in other words, it covers the interaction of food, the nutrients and other substances therein, and their action and balance in relation to health and disease. Nutrition is so important that promoting optimal health and preventing disease through it has become a national goal.
Unfortunately, a general dearth of knowledge exists in human nutrition, especially as it relates to the nutrition of the vulnerable groups (children, pregnant women and the aged). Also, the association between diet and the onset of certain diseases needs to be better understood. The present chapter therefore provides information on some aspects of human nutrition.
OBJECTIVES
FOOD NUTRIENTS
Food is able to perform its functions because it contains nutrients. Nutrients are chemicals in foods that our bodies use for energy and to support the growth, maintenance and repair of our tissues. There are two broad classes:
i. Macronutrients are nutrients that are required in relatively large amounts. These are carbohydrates, lipids and proteins.
ii. Micronutrients are nutrients required in smaller amounts, such as vitamins and minerals.
Nutrients can also be classified as either organic or inorganic. Organic nutrients contain carbon, an element that is an essential component of all living organisms, e.g. carbohydrates, lipids, proteins and vitamins. Inorganic nutrients do not contain carbon, e.g. minerals and water.
In man, nutrients are supplied through food, and different food items contain different amounts of nutrients. For example, rice and bread are rich in carbohydrate, while foods like beans are rich in protein.
The food sources and functions of food nutrients are summarised in Table 1
Good nutrition implies the intake of diets containing all the nutrients in proportions
necessary for the proper functioning of the body. A diet containing all required nutrients in
adequate quantity is said to be a balanced diet. However, because foods are generally rich in
different types of nutrients, a combination of foods will often be necessary to attain balance.
For example, a meal of pounded yam taken with vegetable soup and meat or fish will likely
contain all the macronutrients and many of the micronutrients in adequate quantities. The disruption of this balance, coupled with changes in the way food is prepared and processed and with changes in lifestyle, leads to an imbalance in energy metabolism, which has been proposed to be responsible for the appearance of various metabolic syndromes (the so-called afflictions of affluence).
NUTRIENT REQUIREMENTS
The amount of a nutrient required varies with age, sex, activity and general well-being. Special requirements for pregnancy and lactation have also been identified. There are standards for measuring the energy and nutrient requirements of healthy people.
A set of four values, the Dietary Reference Intakes (DRI), is used to assess the nutrient intakes of healthy people:
Estimated Average Requirement (EAR): The average daily intake level of a nutrient that will meet the needs of half of the healthy people in a particular category; it is used to determine the Recommended Dietary Allowance (RDA) of a nutrient.
Recommended Dietary Allowance (RDA): The average daily intake level required to meet the needs of 97-98 % of healthy people in a particular category.
Adequate Intake (AI): The recommended average daily intake level for a nutrient, based on observed or experimentally determined estimates of nutrient intakes by healthy people; it is used when the RDA has not yet been established, e.g. for calcium, vitamin D, vitamin K and fluoride.
Tolerable Upper Intake Level (UL): The highest average daily intake level likely to pose no risk of adverse health effects to most people. As consumption of a nutrient rises above the UL, the potential for toxic effects and health risks increases.
The Food Guide Pyramid (Figure 1) was introduced in 1992 and further modified in 2005 to reflect the latest nutrition science. The Dietary Guidelines place stronger emphasis than past guidelines on decreasing calorie intake as well as increasing physical activity.
NUTRITION IN CHILDREN
Malnutrition remains one of the most common causes of morbidity and mortality among
children throughout the world. Approximately 9 % of children below 5 years of age suffer
from wasting and are at risk of death or severe impairment of growth and psychological
development. In 2008, of the 8.8 million global deaths of children under 5 years of age, 93 % occurred in the developing countries of Africa and Asia, and over a third of all deaths could be attributed to underlying malnutrition.
Childhood Malnutrition
Protein Energy Malnutrition (PEM): This is the most common deficiency disease in the world, with about one hundred million children affected to a moderate or severe degree.
PEM is caused by a deficient intake of energy and, usually, protein. Secondary PEM can result from other conditions such as AIDS, chronic kidney failure, inflammatory bowel disease and other illnesses that impair the body's ability to absorb and/or utilise nutrients, or to compensate for nutrient losses.
Causes
Several factors have been identified as causes of childhood malnutrition. These include:
- poverty/insufficient food production
- poor living conditions, leading to the consumption of contaminated foods and frequent exposure to infection
- lack of education/ignorance of the nutrient requirements of children
- another pregnancy, resulting in improper feeding of the older child, especially the interruption of breastfeeding
- return of the mother to work
- poor family planning/multiple births, creating competition for the limited food
- parental disputes, often resulting in child neglect
- war and famine, which often make food unavailable
In summary, all these factors come down to lack of education, poverty and lack of food.
Signs:
Some of the features of PEM are:
i. muscle wasting;
ii. a prominent-looking belly; and
iii. slower development of the child's skills.
The more severe forms are marasmus, kwashiorkor, marasmic kwashiorkor and iron deficiency anaemia.
The child is usually aged 1-3 years and presents the following features:
i. Growth retardation
ii. More subcutaneous fat than in marasmic child
iii. There is oedema (mainly in the feet and lower legs)
iv. The child appears moon faced
v. Hair is discoloured
vi. Hair is sparse and is easily pulled out
vii. Presence of anaemia
viii. Loss of appetite
The body's immune system is weakened, behavioural development is slow and mental retardation may occur. Children may grow to normal height but be abnormally thin. The condition can also develop in children who are severely burned, have suffered trauma or have had sepsis (a tissue-destroying infection).
Figure 2: (a) Marasmus (b) Kwashiorkor
Prevention: The best way to prevent childhood malnutrition is by breastfeeding a child for the first six months and thereafter introducing the child to balanced weaning foods.
Maternal exposure to toxins (e.g. heavy metals, prescribed and other drugs, organic chemicals).
Inborn errors of metabolism (e.g. galactosemia; PKU, a lack of the enzyme phenylalanine hydroxylase).
Risk of transmission of diseases from mother to infant (HIV/AIDS, Tuberculosis).
Adolescence is a time of rapid growth, and the primary dietary need at that period is for
energy - often reflected in a voracious appetite. Ideally, foods in the diet should be rich in
energy and nutrients. Providing calories in the form of sugary or fatty snacks can mean
nutrient intake is compromised, so adolescents should choose a variety of foods from the
other basic food groups:
Plenty of starchy carbohydrates - bread, corn meal, rice, pasta, breakfast cereals,
couscous and potatoes
Plenty of fruit and vegetables - at least five portions every day
Two to three portions of dairy products, such as milk, yoghurt and pasteurised cheeses
Two servings of protein, such as meat, fish, eggs and beans
Adolescents are advised against eating too many fatty foods and drinking sugar-rich drinks.
Other important dietary habits to follow during adolescence include:
Drinking at least eight glasses of fluid (preferably water) a day.
Eating regular meals, including breakfast, as it can provide essential nutrients and
improve concentration in the mornings. Choose a fortified breakfast cereal with semi-
skimmed milk and a glass of fruit juice.
Taking regular exercise, which is important for overall fitness, cardiovascular health,
as well as bone development.
Not every adult has the same nutritional needs. An adult individual needs to balance energy
intake with his or her level of physical activity to avoid storing excess body fat. Dietary
practices and food choices are related to wellness and affect health, fitness, weight
management, and the prevention of chronic diseases such as osteoporosis, cardiovascular
diseases and diabetes.
The actual amount of any nutrient a person needs, as well as the amount each individual gets
from his or her diet will vary. Many adults do not receive enough calcium from their diets,
which can lead to osteoporosis later in life. Other nutrients of concern are potassium, fiber,
magnesium, and vitamin E. Some population groups also need to get more vitamin B12, iron,
folic acid, and vitamin D. These nutrients should come from food when possible, then from
supplements if necessary.
As teenagers reach adulthood, the basal energy needs for maintaining the body's
physiological functions (basal metabolic rate or BMR) stabilise, therefore energy
requirements also stabilise. BMR is defined as the energy required by the body to keep
functioning.
It is important to reduce energy intake at the onset of adulthood, and to make sure that all of
one's nutritional needs are met. This can be accomplished by making sure that an adequate
amount of energy is consumed (this will vary by body weight, degree of physical fitness, and
muscle vs. body fat), and that this amount of energy is adjusted to one's level of physical
activity. Foods that are chosen to provide the energy must be highly nutritious, containing
high amounts of essential nutrients such as vitamins, minerals, and essential proteins.
It is usually at this age that young adults start gaining body fat and reducing their physical
activity, resulting in an accumulation of fat in the abdominal areas. At the onset of adulthood,
energy requirements usually reach a plateau that will last until one's mid-forties, after which
they begin to decline, primarily because activity levels and lean muscle mass (amount of
muscle vs. body fat), which represents the BMR, decrease. It is believed that the changes in
body composition and reduced lean muscle mass occur at a rate of about 5 % per decade, and
energy requirements decrease accordingly. However, these changes in body composition and
decreased energy requirements can be prevented by maintaining regular physical activity,
including resistance training, which helps maintain lean muscle mass and prevent deposition
of excess body fat.
Many adults ignore the role that fluids play in nutrition. Most people will get adequate
hydration from normal thirst and drinking behaviour, especially by consuming fluids with
meals.
Recommended daily intakes of energy and nutrients by age group and physiological state:

Age group            Energy*  Protein  Total    SFA    Carbo-    Dietary  Choles-  Ca       Na    Fe    Vit A  Folic  Vit C
                     (kcal)   (g)      fat (g)  (g)    hydrates  fibre    terol    (g)      (mg)  (mg)  (mcg)  acid   (mg)
                                                       (g)       (g)      (mg)                                 (mcg)
Men
  18-29 yrs          2550     68       71       23.6   351       26       300      0.4-0.5  1700  6     750    200    30
  30-59 yrs          2500     68       69       23.0   344       25       300      0.4-0.5  1650  6     750    200    30
  60 yrs and above   2100     68       58       19.3   289       21       300      0.4-0.5  1400  6     750    200    30
Women
  18-29 yrs          2000     58       56       18.6   275       20       300      0.4-0.5  1350  19    750    200    30
  30-59 yrs          2000     58       57       19.0   275       20       300      0.4-0.5  1350  19    750    200    30
  60 yrs and above   1800     58       50       16.7   248       18       300      0.4-0.5  1200  6     750    200    30
Pregnant women
  full activity      +285     +9       +8       +6     +39       +3       300      1.0-1.2  +200  19    750    400    50
  reduced activity   +200     +9       +6       +2     +28       +2       300      1.0-1.2  +150  19    750    400    50
Lactating women
  first 6 months     +500     +25      +14      +4.6   +69       +5       300      1.0-1.2  +350  19    1200   400    50
  after 6 months     +500     +19      +14      +4.6   +69       +5       300      1.0-1.2  +350  19    1200   400    50
Legend:
SFA - saturated fat; Ca - calcium; Na - sodium; Fe - iron; Vit A - vitamin A; Vit C -
vitamin C
Please note:
Recommended energy intakes (*) are for individuals with sedentary to light
activity levels.
The above recommended daily intakes vary depending on physical activity
and physiological state of an individual, e.g. pregnancy and lactation.
The recommended dietary allowances are average daily intakes of nutrients
over a period of time for the majority of the population. They are not absolute
daily dietary requirements.
Nutrition in the aged
The elderly population is divided into 3 age groups:
1. Ages 65-74 referred to as young elderly;
2. Ages 75-84 referred to as the elderly; and
3. Ages 85+ referred to as the old.
The aging process shows inter-individual variability in its rate of development. The
determinants of the rates of aging of systems and tissues have been reported to be largely
genetic. Therefore, premature aging of cells and tissues can occur and is due to genetic
factors and to long-term exposure to physical or chemical environments that cause
irreversible tissue damage. However, the aging process alters body composition so that
nutritional status changes as humans get older. Although the energy needs decrease as people
age because lean body mass decreases and levels of physical activity decrease, the chance of
meeting the needs of all nutrients also decreases.
Elderly persons are particularly vulnerable to malnutrition. This is because attempts to
provide them with adequate nutrition usually encounter many practical problems. According
to WHO, the nutritional requirements of the elderly are not well defined, and older person’s
energy requirement per kilogram of body weight is also reduced because both lean body mass
and basal metabolic rate decline with age. Also, the processes of ageing have been reported to
affect other nutrient needs. For example, while requirements for some nutrients may be
reduced, some data suggest that requirements for other essential nutrients may in fact rise in
later life.
v. Digestive problems.
vi. Gastrointestinal problem such as changes in sense of taste and smell, less secretion of
saliva and digestive juices, causing absorption problems and slower movements of the
gastrointestinal tract.
Another factor is the price of foods rich in micronutrients, which further discourages their
consumption. Compounding this situation is the fact that the older people often suffer from
decreased immune function which contributes to this group’s increased morbidity and
mortality. Other significant age-related changes include the loss of cognitive function and
deteriorating vision, all of which hinder good health and dietary habits.
In the aged, malnutrition could result in one or more of the following:
• Lower quality of life
• Depression
• Increased susceptibility to infection
• Sarcopenia (loss of muscle mass)
• Increased number of falls
• Complications during hospitalisation
• Increased tendency for pressure sores
• Increased mortality
Foods which may help promote regular bowel function, and prevent constipation include:
• Fruits
• Vegetables
• Dried fruits
• Whole grain cereals
• Beans
• Plenty of fluid (at least 30 ml/kg)
• Yoghurt
It is also necessary for the aged to engage in some physical activities daily.
Dietary changes seem to affect risk-factor levels throughout life and may have an even greater impact in older people. It is therefore of the utmost importance to give adequate attention to diet during old age.
The fetus is not a parasite; it depends on the mother’s nutrient intake to meet its nutritional
needs. Periods of growth and development of the fetal organs and tissues occur throughout
pregnancy therefore essential nutrients must be available in required amounts during these
times for fetal growth and development to proceed optimally.
Well-nourished women will generally have adequate reserves of nutrients which can maintain
delivery of most substrates to the fetal tissues, even if their intakes are compromised in the
short to medium term. This is because the delivery of nutrients depends on maternal intake, maternal stores and placental exchange.
Animal experiments show that undernutrition in utero leads to persistent changes in blood pressure, cholesterol metabolism, insulin response to glucose, and obesity. The immediate response of a fetus to undernutrition is to break down its own substrates to provide energy. More prolonged undernutrition leads to growth retardation. Slowing of growth in late gestation leads to disproportion in organ size (e.g. reduced growth of the kidney, which develops rapidly during late gestation).
OBESITY
Obesity is defined as an excess of body fat. By using body weight as an index, obesity is a
weight greater than 20 % more than the average desirable weight for men and women of a
given height. Obesity results when there is an imbalance between energy intake and energy
expenditure. Obesity is a risk factor for other degenerative diseases, such as type II (adult
onset) diabetes, diseases of heart and circulation, and certain cancers.
Causes
1. Overeating: This may result from a primary failure in the regulation of ingestive behaviour at the cognitive level. It usually results in a caloric intake that exceeds the energy needs of the individual.
2. Sex: Obesity is common in women, in whom it is liable to occur after pregnancy, especially repeated pregnancies. A woman may gain as much as 12.5 kg during pregnancy. Most of this will be in the form of adipose stores, mainly to meet the demands of lactation. However, many women gain more weight and retain part of it, becoming progressively more obese with each succeeding child. Males have a higher resting metabolic rate than females, so males require more calories to maintain their body weight.
3. Age: As a general rule, as you grow older, your metabolic rate slows down and you
do not require as many calories to maintain your weight.
4. Environment/Culture: An environment where people eat high fat and high sugar diets
and take little exercise, causes more problems with excess weight and obesity than
one where people eat low fat diets and get regular exercise. Cultural belief also
contributes to the development of obesity
5. Emotional Factors: Many people overeat when they are stressed, bored or angry. Over time, the association between emotion and food can become firmly fixed. Depression or stress can also cause obesity and other patterns of disordered eating.
6. Lack of Physical Activity: Lack of physical exercise is definitely one of the major
causes of weight gain and obesity.
Body mass index (BMI): This is calculated by dividing the body mass in kilograms by the square of the height in metres (BMI = kg/m²). The BMI is used to define underweight, overweight and obese individuals.
BMI (kg/m²) categories:
Underweight = <18.5
Normal weight = 18.5-24.9
Overweight = 25-29.9
Obesity = 30 or greater
Other indices of body fatness include skinfold thickness and the waist-to-hip ratio.
Body Fat Distribution: "Pears" vs. "Apples"
Excessive body fat resulting from an imbalance between energy intake and expenditure is the
most important nutritional problem in developed countries and is rapidly becoming a global
epidemic; a definition of a healthy diet that fails to address this problem would be deficient.
Some well-intended guidelines are highly prescriptive in terms of energy intake or servings
per day of each food group. A fundamental problem is that even the healthiest combination of
foods consumed in slight excess, by only a percentage or two, over an extended period will
lead to overweight.
CASE STUDY
A 25-year-old female weighing 81 kg, with a height of 1.67 m and a moderate level of physical activity, has a BMI of 29 kg/m². The healthy BMI range is 18.5-24.9 kg/m²; for this height, the healthy weight range is therefore 51.6-69.4 kg. Based on the current weight, height and activity level, the daily calorie requirement is 2291 calories; this is what this person needs to maintain the current weight. However, food is not the only thing that affects weight management; exercise, or the lack of it, affects the amount of calories burned and therefore the net caloric intake. To change body weight by 0.5 kg (1 lb) in one week, the net caloric intake must be increased or reduced by 500 calories per day, or 3500 calories per week.
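A minimal sketch, not part of the chapter, that reproduces the arithmetic of this case study: the BMI formula, the weight range implied by a BMI of 18.5-24.9, and the 3500-calories-per-0.5 kg rule of thumb. The 2291-calorie maintenance figure comes from an activity-adjusted equation not given in the text, so it is treated here as an input; the helper names are invented.

```python
# Minimal sketch of the case-study arithmetic.

def bmi(weight_kg, height_m):
    """Body mass index: weight divided by the square of height."""
    return weight_kg / height_m ** 2

def healthy_weight_range(height_m, low_bmi=18.5, high_bmi=24.9):
    """Weight range (kg) corresponding to the healthy BMI band."""
    return low_bmi * height_m ** 2, high_bmi * height_m ** 2

def weekly_weight_change_kg(daily_intake_kcal, maintenance_kcal):
    """Approximate weekly weight change, using 3500 kcal per 0.5 kg."""
    weekly_surplus = (daily_intake_kcal - maintenance_kcal) * 7
    return weekly_surplus / 3500 * 0.5

weight, height = 81.0, 1.67       # the woman in the case study
maintenance = 2291                # maintenance requirement quoted in the text
print(round(bmi(weight, height)))                                  # 29
print(tuple(round(w, 1) for w in healthy_weight_range(height)))    # (51.6, 69.4)
print(weekly_weight_change_kg(maintenance - 500, maintenance))     # -0.5 kg per week
```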
TREATMENT
Since a poor diet and lack of exercise are the most common causes of obesity, there are some fairly simple treatments for it:
i. incorporating more natural foods into the diet;
ii. drinking more water; and
iii. cutting out junk food and getting into the habit of exercising several times every week.
CONCLUSION
It is now a fact that optimal health is linked to good nutrition. Good nutrition is the main preventive measure against a large number of human diseases, and its importance for achieving optimal health is unquestionable. A growing number of people are aware of the importance of good nutrition and are turning towards foods enriched with different herbs, vitamins and minerals. More and more foods will be tailored to improve health and extend life. It is now clear that nutrient needs vary with age and physiological state. Therefore, nutrition education should be emphasised for all, as this will help prevent many diseases.
EVALUATION STRATEGIES
Study Questions
i. What is nutrition?
ii. List the features of Marasmus and kwashiorkor
iii. List the causes of malnutrition in children
iv. How is the BMI useful in measuring obesity?
v. What causes improper nutrition in the aged?
REFERENCES
Begum, M.R. (2007). A Textbook of Foods, Nutrition and Dietetics. New Delhi, India: Sterling Publishers Private Limited.
Food and Nutrition Board, Institute of Medicine, National Academies of Science (2001). DRIs for Individuals.
Fox, A.B. and Cameron, A.G. (1990). Food Science: A Chemical Approach. London, UK: Hodder and Stoughton.
Oladiji, A.T. (2003). Tissue levels of iron, copper, zinc and magnesium in iron deficient rats. Biokemistri, 14(1): 75-81.
Oladiji, A.T. and Ogunyemi, O.O. (2003). Status of some citric acid cycle dehydrogenases in selected tissues of rats fed iron-deficient diets. Nigerian Journal of Biochemistry and Molecular Biology, 18(3): 133-138.
Oladiji, A.T., Abodunrin, T.P. and Yakubu, M.T. (2007). Some physicochemical characteristics of the oil from Tetracarpidium conophorum (Mull. Arg.) Hutch and Dalz nut. Nigerian Journal of Biochemistry and Molecular Biology, 27(2): 93-98.
Oladiji, A.T., Jacob, T.O. and Yakubu, M.T. (2007). Anti-anaemic potentials of aqueous extract of Sorghum bicolor (L.) Moench stem bark in rats. Journal of Ethnopharmacology, 111: 651-656.
Oladiji, A.T. (2012). Little Giants in Foods. 102nd Inaugural Lecture, University of Ilorin, Ilorin, 22nd March 2012. Published by the University of Ilorin Library and Publications Committee.
Shils, M.E., Shike, M., Ross, A.C., Caballero, B. and Cousins, R.J. (2005). Modern Nutrition in Health and Diseases. UK: Lippincott Williams & Wilkins.
USDA (2010). Dietary Guidelines for Americans. U.S. Department of Agriculture and U.S. Department of Health and Human Services. www.dietaryguidelines.gov.
WHO (1999). Management of Severe Malnutrition: A Manual for Physicians and Other Senior Health Workers. Geneva: WHO.
Wills, D.E. (1985). Biochemical Basis of Medicine. Bristol: John Wright and Sons Ltd.
Chapter Twelve
ACHIEVING KNOWLEDGE MANAGEMENT FUNCTIONALITIES IN THE
EMERGING TECHNOLOGY-DRIVEN SOCIETY USING INFORMATION
TECHNOLOGY
Jimoh, R.G. and Oyelakin, A.M.
Department of Computer Science, University of Ilorin, Ilorin, Nigeria
Corresponding email: Jimoh_rasheed@unilorin.edu.ng
INTRODUCTION
Knowledge Management (KM) is an emerging concept in the knowledge-driven society. KM refers to how knowledge is created, stored and shared in an attempt to derive economic value. KM can be defined as the process of extracting value from intellectual capital and sharing such knowledge with employees, customers, management teams and any person or group of people who will find it useful. It brings out the economic value of knowledge, since it is true that "you don't know what you know until you are asked about it". Knowledge, despite its economic value, remains untapped and difficult to access without appropriate knowledge management functionality. The main KM functionalities are described in Table 1 with their associated technologies.
History reveals that the development of modern Information Technology (IT) originated from the need to analyse, input, store, process, output and retrieve stored data and information, which are the core KM dimensions. For example, the U.S.A. census required the automation of data processing in a timely and inexpensive manner; IBM (International Business Machines Corporation) deployed punched cards and sorting machines from the early 1900s until the end of the Second World War. Punched-card and sorting machines were mechanical analysers, and they were later replaced by the more efficient electronic analysers. All these testify to the relevance of IT as a means of achieving KM objectives in the emerging technology-driven society.
It is a common saying that "knowledge is power". In the 21st century, decision makers at all levels of organisations are overwhelmed with an avalanche of information on conditions, developments and events that affect both the present and the future of our society (Sondra & Reynolds, 2003). The IT approach is forcing virtually all organisations to be knowledge-driven (information-driven). In this case, most organisational activities are carried out by invoking the stored information in an automated manner to support various decisions. The useful information, in most cases, is not readily available, and the nature of the information to be derived varies from one KM functionality to another.
This chapter discusses the various KM functionalities and the technology or tool required to achieve each of them. Its primary objective is to equip knowledge workers with what it takes to decide on the appropriateness of tools and technologies for a specific KM functionality in all disciplines of human endeavour.
such as searching and retrieving, categorising/taxonomy, composing, summarising, storing
and distributing information.
KM is not about technology; rather, technology has dramatically increased its visibility and power. According to Frappaolo and Capshaw (1999), knowledge management can be considered as the practices and technologies that facilitate the efficient creation and exchange of knowledge at the organisational level. Technology facilitates knowledge management practices in organisations (Frappaolo, 2006). An example of these technologies is the intranet.
OBJECTIVES
Repository
Electronic Network
KM objectives are achieved via its functionalities as shown in Table 1. The following
discussion will center on those functionalities and the IT driving them.
Functionality      Associated technology
Categorising       Computer Languages
Composing          Office Suite Applications
Summarising        Artificial Intelligence
Storing            Storage Media
Distributing       Networks
Workflow           Groupware
According to Poremsky (2004), a web search engine can be defined as a tool that lets you explore databases containing the text of hundreds of millions of web pages. When the search engine software finds matches to the search request (often referred to as hits), it presents them with brief descriptions and clickable links.
(1) Special Search Engines: This category of search engines is designed for general-purpose use, catering for people with diverse information needs.
(a) Ask Jeeves (http://www.ask.com/): a unique search engine that allows users to ask questions in simple English. The secret is that it combines a natural language engine with a proprietary knowledge base that gets smarter over time.
(b) Google (http://www.google.com/): This search engine offers a pre-defined search leading to over 1,090,000 pages. It is referred to as a selective search engine.
(2) Directory Searches for KM: These work with pre-defined categories.
(a) Yahoo (http://www.yahoo.com/): This search engine offers KM resources listed in pre-defined categories, plus more than 1,160,000 subscribers.
(b) LookSmart (http://www.looksmart.com/): It includes a search engine as well as a category-based portal, but without search result counts.
(c) About.com (Directory) (http://www.about.com/): This search engine offers a wide range of subject areas and searches across all of them.
(3) Knowledge Management Category Searches: These deal with categories that are not pre-defined.
(a) AltaVista (http://www.altavista.com/): This is considered one of the web's most powerful search engines. It supports a pre-defined Knowledge Management search that leads to more than 574,000 pages.
(b) Business.com (http://www.business.com/): This search engine is currently recognised as the foremost business search engine, and its business directory is designed to help users find information such as companies, products and services.
(c) Teoma (http://www.squirrelnet.com/search/Teoma.asp): It offers methods that help users refine their searches and search results. It also provides link collections.
(d) Hotbot (http://www.hotbot.com/): This is regarded as a well-respected search engine; more than 16,348,000 pages were listed in a recent search. It also offers a dedicated Knowledge Management category.
(4) Open Directory: In this category of search engine, people recommend all the entries. Users looking for information about Knowledge Management can search across all categories or can specifically check the Knowledge Management category.
(a) Netscape Netcenter (http://search.netscape.com/search/webhome): This search engine provides a special category for locating KM resources, with subtopics for knowledge discovery and creation.
(b) Lycos (http://www.lycos.com/): This search engine offers a relevant area on Knowledge Management with accompanying subcategories such as companies, creation of knowledge, publications, etc.
(5) Web Searches for Knowledge Management: This category of search engine contains tools used for mining web-based knowledge.
(a) Excite (http://www.excite.com/): It offers a pre-defined Knowledge Management search.
(b) EuroSeek (http://www.euroseek.com/): This is a European search engine that allows users to search for Knowledge Management topics in 25 languages, covering 43 countries. It offers a pre-defined search for Knowledge Management in the United Kingdom.
(6) Meta Search Engines: This category of search engine is primarily designed for data mining.
(a) GotchaMetaKM (http://www.icasit.org/km/resources/portals.htm): This is a portal that offers KM resources (KM websites and portal sites) and is maintained by the International Centre for Applied Studies in Information Technology (ICASIT) at George Mason University. It provides a compilation of KM portals as well as resources that are very helpful to academics, industry practitioners and researchers.
(b) Dogpile (http://www.dogpile.com/): This search engine is considered as remarkable as Google; it fetches the top results for users from 15 other search engines.
(7) Meta Search Sites: These are web sites dedicated to searching.
BeauCoup (http://www.beaucoup.com/): This tool is not a pre-defined KM search; instead, it is a metasearch site that offers accompanying links to almost 70 search engines. It differs from Dogpile in that it does not search the engines on behalf of users.
Apart from the aforementioned seven categories, Glossbrenner and Glossbrenner (2001) identified five specialised search engines, as follows:
i) Deja: here, users can search recent newsgroup postings (one month's worth) and Usenet newsgroup archives, including older postings that date back to May 1999.
ii) Topica: with this search engine, one can search a directory of over 90,000 mailing lists and the message archives of public lists.
iii) Argus Clearinghouse: users can search a directory of subject-specific guides to internet resources prepared by subject-matter experts. The directory is reviewed and rated by information and library studies professionals.
iv) InfoSpace: this engine is a people-finding tool and can be used to search a world-wide email directory, a telephone directory listing residents of the USA, Canada and Europe, as well as a celebrity directory (athletes, authors, journalists, movie stars, etc.).
v) Zip2 for business searches: it offers a directory of US businesses searchable by name and business type. Zip2 also offers a database of maps and driving directions.
In KM, knowledge needs to be categorised to make it more useful. Since the size and variety of the available information accumulate with time, an "information architecture" needs to be designed for easy sorting of information. One mode of achieving this is to sort items according to the file format in which the information was entered and saved (Riley, 2003), for example word processors, spreadsheets, data tables and so on. Each of these software tools has different formats, features and functions.
A completely different mode of sorting information is based on the semantics of the content, and this is accomplished in three ways, namely:
1. Key words in titles;
2. Word-count frequency in the body of the content (a small sketch of this idea follows the list); and
3. Graphics meanings.
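As a toy illustration only, not drawn from the chapter, the word-count-frequency idea can be sketched as follows: a document is assigned to whichever pre-defined category shares the most frequently occurring content words with it. The category names, keyword sets and stop-word list are invented for the example.

```python
# Minimal sketch of frequency-based semantic categorisation.
from collections import Counter

CATEGORIES = {                       # illustrative, invented categories
    "poultry":   {"feed", "egg", "broiler", "vaccine", "layer"},
    "nutrition": {"diet", "protein", "vitamin", "obesity", "nutrient"},
}
STOP_WORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "must", "every"}

def categorise(text):
    """Assign the category whose keywords appear most often in the text."""
    words = [w.strip(".,;:") for w in text.lower().split() if w not in STOP_WORDS]
    counts = Counter(words)
    scores = {name: sum(counts[k] for k in keywords)
              for name, keywords in CATEGORIES.items()}
    return max(scores, key=scores.get)

print(categorise("The diet must supply enough protein and every vitamin."))
# -> 'nutrition'
```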
The file format and semantic content sorting modes have no essential relationship between them. Unfortunately, the adoption of either of the two modes alone for the categorisation process results in a loss of information, which is undesirable in KM. This is where computer languages come in. The loss is overcome by introducing newly available computer languages such as the eXtensible Markup Language (XML), which can be defined as "a system for defining, validating, and sharing document formats" (Tittel, 2002). The suitability of XML here can be traced to its ability to create meta-data. If XML is used to create all the meta-data in every file, an extension of XML known as the Resource Description Framework (RDF) can then handle the semantic categorisation.
If office suite applications are user-friendly and well designed, they can contribute significantly to Knowledge Management.
Today, commercial summarisers can analyse a text of any length, on any subject, and can generate both short and long summaries of the document as the user wishes. They can summarise Word documents, web pages, PDF files, email messages and even text from the clipboard. These summarisers employ sophisticated statistical and linguistic algorithms. The summariser will almost certainly prove to be the most important instrument for personal efforts in KM.
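As a rough, hand-rolled illustration of the statistical side of such tools (and not a description of how any commercial product actually works), the sketch below scores each sentence by the frequency of its words in the whole text and returns the highest-scoring sentences.

```python
# Minimal sketch of frequency-based extractive summarisation.
import re
from collections import Counter

def summarise(text, n_sentences=2):
    """Return the n highest-scoring sentences, scored by word frequency."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scored = sorted(sentences,
                    key=lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower())),
                    reverse=True)
    top = set(scored[:n_sentences])
    # Keep the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)

sample = ("Knowledge management extracts value from intellectual capital. "
          "Technology makes knowledge easier to store and share. "
          "The weather was pleasant yesterday.")
print(summarise(sample, 1))   # prints the sentence with the most frequent words
```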
In every organisation, storing and retrieving information are extremely important, and this necessity gave rise to the need for electronic storage media. Early information systems relied on hard-wired data inputs and outputs, such as long printed scrolls of paper; unfortunately, neither the input nor the output could cope with the huge amounts of data generated in this era.
Magnetic tape was eventually introduced and became the storage medium of choice for large volumes of data. Up to the present day, magnetic tape is used as a secondary medium (archival storage, PC backup and other systems storage) in business organisations. The sheer amount of data users need to access in data warehouses means, however, that tape storage is changing and moving beyond backup. One growing business application of magnetic tape is the use of 36-track magnetic tape cartridges in robotic automated drive assemblies. This technology can directly access a large number of cartridges, in fact more than a hundred, and offers cheaper storage that can complement magnetic disks to cover the storage demand of large data warehouses and other online business needs.
The arrival of smaller computers, known as mini, micro, desktop or personal computers, created the need for cheaper and more compact storage. Magnetic disks are recognised as the most common type of secondary storage media because they offer fast access and high storage capacities at a reasonable cost. There are several types of magnetic disks, but they can generally be categorised into two: removable disk cartridges and fixed disk units. Examples of magnetic disk technologies include floppy disks (1.44 megabytes of storage) and hard disk drives (which offer higher speeds and larger data recording densities).
Optical disks are secondary storage technologies utilising compact disks (CDs) and digital versatile disks (DVDs). Optical disks have become very important technologies, since most companies use CD-ROMs to distribute their programs. Many organisations use this technology to distribute products and corporate information that once filled cabinets and bookshelves.
In knowledge management, it is of primary importance that different generations of storage media have "universal viewers" to make their content available in spite of the obsolescence of their formats. CDs, DVDs, floppy disks and other storage devices currently face the prospect of obsolescence; the strategic application of XML is expected to offer a basic solution to this problem in the future.
KM emphasises the sharing of knowledge for certain economic benefits. Sharing knowledge thus requires distributing such knowledge across a computer network. "Distributing information can be done either openly as on the Internet, exclusively over an Intranet, or in some combination of the two as with an Extranet". A network is a set of devices, sometimes called nodes or stations, interconnected by media links. Data communications networks are systems of interrelated, interconnected devices such as computers and computer equipment. A network may be as simple as a personal computer (PC) connected to another PC or to a printer by a media link, or as large and complex as the public telephone network, in which millions of telephones are interconnected. Networks can be wireless or wired and can run over ordinary carriers such as telephone lines. They can be separated from the external environment by firewalls for security purposes.
To create an effective network for KM, the network must be supported by a high bandwidth, enabling the transmission of data, text, voice and multimedia concurrently. Such a network is referred to as broadband, that is, one with a wide bandwidth capable of transporting multiple signals and traffic types simultaneously. When implementing broadband networks to support a KM system, it is important to consider the location, as this determines the kind of physical medium connection, wired or wireless, to use. The following are the main means of connection:
1. Telephone dial-up (modem);
2. High-speed phone lines, for example ISDN, DSL and T1;
3. Cable modem; and
4. Wireless, satellite and other airborne links.
information and feedback outputs, the people involved and even the tools required for each step in a business process. This is where the concept of the "paperless office" originated.
Groupware can be defined simply as a tool that helps people to work together more easily and efficiently. It allows communication, coordination and collaboration. Terms like collaborative computing or Group Support Systems (GSS) also refer to groupware. The use of groupware in organisations supports the free flow of information, which in turn results in improved innovation and facilitates collective leadership. Groupware is important because it offers considerable benefits over single-user systems, some of which include:
CONCLUSION
EVALUATION STRATEGIES
Questions
1. Knowledge Management can be defined as:
(a) The process of managing the intellectual aspects of knowledge
(b) The act of using available knowledge economically
(c) The process of deriving value from intellectual capital and sharing the derived
value
(d) All of these
2. Knowledge Management Functionality includes the following:
(a) Exploring, Summarising, Composing and Categorising, Storing and Distributing
(b) Searching, Categorising, Composing, Summarising, Storing, Distributing and
Workflow
(c) Searching, Learning, Composing, Categorising, Summarising and Storing
(d) All of these
3. Technology-driven Knowledge Management is defined as:
(a) Practices used for sharing organisational knowledge
(b) The process of running the organisation using computing devices to aid
knowledge sharing
(c) The practices and technologies that facilitate the efficient creation and exchange
of knowledge at organisational level
(d) All of these
4. The hardware component that is used to input data is the ………..
(a) Monitor
(b) Joystick
(c) Casing
(d) Keyboard
5. The instruction given to the machine to perform some specific tasks is called ………..
(a) Recorded video
(b) Hardware
(c) Software
(d) None of the above
6. The computer storage system can be classified into ………..
(a) Secondary & Nursery storage
(b) Primary & Secondary storage
(c) Universal secondary storage
(d) Institution storage
7. Which of the following is the smallest unit of storage? ………..
(a) Byte
(b) Bit
(c) Kilobyte
(d) Megabyte
8. A program is ………..
(a) A set of instructions
(b) A definition of a problem
(c) A set of solution
(d) Games theory.
9. The following are examples of search engines except ………..
(a) Lycos
(b) Netscape
(c) Facebook
(d) Teoma
10. Knowledge Management can be achieved by ………..
(a) Knowledge Management Service
(b) Knowledge Management System
(c) Knowledge Management Site
(d) Knowledge Management Serve
REFERENCES
Chapter Thirteen
SCIENCE AND TECHNOLOGY IN THE SERVICE OF MAN
Ayeni, A.A.
Department of Telecommunication Science, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
Science has become a major vehicle in man's transition to modernisation. Scientific
knowledge itself has continued to grow from generation to generation. While the ancient man
had a magical view of natural phenomena, the medieval man held the animistic view of the
universe which suggests that the intrinsic nature of the spirits is in control of natural
phenomena. This view was later considered not rational enough and it gave way to the
mechanistic view, which ascribes a cause to every effect in nature. Scientific knowledge is proven knowledge, and its theories are derived from facts of experience acquired by observation and experimentation. Prejudicial assertions based on personal opinion, preference and speculation are out of place in science, since science is based on what we can see, hear and touch and requires objectivity in reporting.
OBJECTIVES
Science is the totality of man's investigation of events in his environment. His mastery of the environment, a consequence of his investigations, leads to answers to questions about the world around him, such as those involving the geometry of the earth, the structure and contents of the atom, and the origin of man, questions which have tasked the intellect of man from one generation to another. Equally stimulating is man's curiosity about interpersonal behaviour and about the demand and supply profiles of commodities. Answers to these questions have varied in form over the years. For example, while Dalton theorised the indivisibility of the atom, a scientist of a later generation theorised its divisibility, thus extending the frontier of knowledge further. Rather than contradicting the ideas before it, each new idea is seen as a new discovery necessitated by the observational and analytical technologies available at the time of investigation. One thing is clear about the roles science plays in our lives: it separates the 'primitive' man from the 'modern' man. These terms may be relative, but the following can be said, without any fear of contradiction, about science:
1. It broadens our horizon about events around us and thus improves our comprehension
of the environment of our abode;
2. It replaces superstition, myths and taboos with facts obtained from data and tests,
thereby liberating man from the fear of the unknown; and
3. Each new discovery emboldens and encourages man to search for more.
HISTORY OF SCIENCE
It would seem that the desire of man to master his environment and the events therein is as
old as man himself. There is no doubt, however, that the methodology has varied from one
generation to another. The Scientific Revolution of the seventeenth century fired the desire to know more and the courage to ask ever more questions. Hence the erroneous reference, by some, to this period as the beginning of the scientific era.
Methodology of Science
To ensure that we actually master the environment, there is the need to be able to make
general statements or claims that have the potential of becoming rules or laws if legitimate.
To make such statements, there is the need to scrutinise the facts obtained. The Inductivist
school opines that for such general statements to be legitimate, the following are necessary:
(1) The number of observation statements forming the basis of a generalisation must be
large.
(2) The observations must be repeated under a wide variety of conditions.
(3) No accepted observation statement should conflict with the derived universal law.
There is, therefore, an underlying assumption that space and time are real, have dimensions
and are continuous. Not only that, every event has a cause.
Scope of Science
If every process of seeking to know more about events and the environment in the true sense
is science, then, the scope of science is a lot wider than is being conventionally claimed, and
in fact, it is growing.
Fields branded as science fall into several categories, and it should be noted that subdivisions exist and will continue to emerge. Presently, we have Biochemistry, Microbiology, Botany, Zoology, Geophysics, etc.; note that this listing is not rigid.
The idea that knowledge can be acquired only through the processes enumerated in science seems to suggest that any idea, event or process that cannot be so investigated does not exist in the realm of reality. This has many implications for the activities of man. To a large extent, man is emboldened to seek more knowledge, including raising questions about the creation and existence of man and the myths of the Deity connection. Here lies the so-called conflict between science on the one hand and religion and tradition on the other.
Technology
Science broadened man's intellectual horizon about his environment, thus preparing him to dominate it and put it to use in satisfying his ever-increasing desires. This is what technology is all about. If this is so, technology is not only as old as man; it has also never been the preserve of any race or generation, as is sometimes erroneously claimed.
The early man used stones to kill animals, both for protection and for food. He made fire from raffia palm and cooked with it. He protected himself from cold weather using leaves. He provided health care services for himself and arranged his own social entertainment. He moved from one place to another in search of his wants. He communicated and, finally, he enjoyed himself within the limits of his ability.
The arts of palm-wine tapping, basket making, the making and selling of animal traps, black soap making and dyeing, as practised by ancient Africans, were technologies in their own right. The present accounting and book-keeping systems have a precursor in the recording of "esusu", a contributory scheme (in Yoruba land of Nigeria) in which the contributor made marks on the wall as follows:
I - represents one contribution
II - represents two contributions
IIIIIIIIII - represents ten contributions
IIIIIIIIII IIIIIIIIII - represents twenty contributions
The modern silo has a precursor in 'Ule aka' a highly sophisticated architectural piece, made
to store and preserve food during harvesting. These were ingenious and technological. To
claim otherwise amounts to a needless attempt to thwart the course of History.
Essentially, therefore, what so-called modern technology has sought to achieve, if anything, is:
1. To produce on a large scale to meet an increasing population and hence increasing demand; and
2. To shorten work duration, thereby allowing for leisure.
Today, technology is variously referred to as modern technology, scientific technology or knowledge-based technology. In the true sense, modernity is not a static parameter. For as long as perfection is not attained, technology will continue to change form. For example, the requirements of portability, and so of miniaturisation, in electronics make integrated circuits a better alternative to the discrete circuits of earlier generations.
The process is continuing. Be that as it may, it is worth noting that technology has developed man in many ways. These include:
(1) Food and Agriculture
Mechanised farming has replaced the traditional method, enabling the cultivation of larger areas over shorter periods of time and yielding more produce; so also has the use of fertilizer to increase crop yield.
(2) Transportation
Transportation on foot has been replaced with transportation by motor vehicles on road, and the use of air and waterway transportation systems saves a great deal of time compared with what it used to be.
(3) Health Care System
Modern health care delivery systems, based on scientific diagnosis and management, have brought many diseases under control, thus improving tremendously on man's life expectancy. Diseases are no longer considered punishments from evil spirits, but events to be studied and tackled accordingly. So far, man has gone as far as diagnosing and managing illness over a distance in what is known as tele-medicine.
(4) Communication
The 'Aroko' of yesteryears has given way to telephones and even today wireless
phones of different generations are carrying not just human speech but also data and
video information.
(5) Energy
Modern electric energy has been generated from many sources, such as fossil fuels, wind, water, thermal, nuclear and solar, some of which are still being discovered, to meet the multiple energy requirements of man. This applies not only to illumination but also to powering the increasingly complex industrial system.
(6) Manufacturing / Industry
Manufacturing has become high technology driven, thereby improving on products
both in quality and quantity.
(7) Water Supplies
The traditional sources of water such as rain, springs, wells, rivers, streams and brooks, which were noted as sources of contamination and consequent diseases, have been replaced by more hygienic, well-treated public water systems which flow into homes.
ETHICS
The statements above seem to have painted a rosy picture of technology, as if technology has done man no harm but only good. This is not true, especially if we reflect on the level of insecurity of life occasioned by the extensive development and deployment of weapons and other instruments of destruction. What about the atmospheric pollution and environmental degradation occasioned by the deployment of high technology in the exploitation of earth resources; the millions of men sent to their early graves through the psychological trauma occasioned by the loss of jobs to machines; and the intellectual redundancy of men who are reduced from active thinkers to machine operators? What about man's dependence on synthetic drugs with heavy doses of chemicals, replacing our natural medication? What about the destruction of our natural habitat through urbanisation and the urban culture of fast living, dishonesty, the commercialisation of everything including our sacred bodies, and the erosion of our cultural values, which together have the potential of obliterating our history? Similarly uncomplimentary are the social separation and the resultant individualistic lifestyle occasioned by job specialisation through the division of labour. All of these show that it is misleading to see technology, or indeed politics and humanity, as unlimited saviours; they expose the double-edged sword that technology is, and show that it is misleading to see technology as the beginning and the end of all solutions to man's problems. Unless a measure of socio-political control is applied to the invention and deployment of technology, it may well turn out to be a tool for the annihilation of man. Certainly, this was not the purpose for which it was created. This fact rationalises the control exercised by some governments over developments in human genetic biology.
EVALUATION STRATEGIES
Practice Questions
(i) Explain the roles of science.
(ii) Write a brief note on the history of science.
(iii) List, with examples, the categories of activities branded as science.
(iv) Explain how technology has developed man.
(v) Discuss the side effects of technology on man and his environment.
REFERENCES
Chapter Fourteen
CONCEPTS OF HYPOTHESIS TESTING IN SCIENCES, SOCIAL SCIENCES
AND HUMANITIES
Adeleke, B.L. and Yahya, W.B.
Department of Statistics, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
In many real-life situations, researchers often hypothesise about the relationship between several factors, collect some information (data) to examine the perceived relationship, and draw inferences based on the results of their investigations. Thus, the prime interest is to examine the strength of information obtained from samples as a basis for making a judgment about the generality of individuals or units (the population) from which the samples are taken. This is the general basis of hypothesis testing. Hypothesis testing, therefore, is the process of understanding how reliably one can generalise results observed in a sample under study to the larger population from which the sample was drawn.
There are numerous research studies in sciences, social sciences and humanities in which
statistical hypothesis tests were largely and efficiently employed (Oakes, 1986; Huberty,
1994; Mayo, 1996; Mulaik et al., 1997; Yahya, 2009; Yahya et al., 2011). The development
of statistical hypothesis tests was primarily led by Sir Ronald Fisher in the early 1930s. His
works in this direction were followed and improved upon thereafter by many others (Oakes,
1986; Huberty, 1993; Underwood, 1997; Hilborn and Mangel, 1997). Fisher's influence on statistics was enormous, and his contributions to the development of many statistical theories and methodologies, especially in the area of hypothesis testing, remain crucial to statistics to date.
A statistical hypothesis test is a procedure through which we obtain a statement on the truth
or falsity of a proposition regarding some characteristics of a population of interest on the
basis of empirical or numerical evidence. In other words, a hypothesis test provides a method for understanding the extent to which one can extrapolate results observed in a sample under study to the larger population from which the sample was taken.
As an illustration of this basic concept, it may be of interest to ascertain whether the average
performance of students in a general studies (GNS) course examined through the newly
introduced computer based test (CBT) is better or worse than their average performance in
the manual based test (MBT). One way to accomplish this task is to collect numerical
evidence (samples) on the performance of students in that course in a CBT examination and
compare the computed (estimated) average performance of students to what the students’
average performance used to be prior to the introduction of the CBT. This addresses the problem as a one-sample test problem. Another way is to look at the problem as a two-sample
test problem by taking random samples of students’ scores in previously conducted CBT and
MBT examinations in that course to ascertain whether the two performances are the same or
not. The results derived from either of the above tests would determine the kind of statement
(inference) to be made about the general performance of the students in the course.
The importance of statistical hypothesis tests in the sciences, social sciences and humanities cannot be overemphasised. Lack of access to the entire population of units under investigation is a major factor that makes dependable inference about the population from a sample inevitable. In drug discovery research, for instance, a drug is designed to work on all patients with similar ailments. However, before any drug is sent to the market, it is imperative to ascertain whether it works as designed or not, through standard statistical tests using randomly selected individuals (samples) from the population of people having the ailment. Based on the results from the sample, one can draw conclusions about the efficacy of the drug.
OBJECTIVES
The importance of statistical hypothesis test in science, social science and humanities
research cannot be overemphasised. Firstly, a statistical hypothesis helps to determine the
focus and direction for a research effort. By developing a good statistical hypothesis, the
researcher would be able to state clearly the purpose of the research activities. Another benefit is that a good statistical hypothesis helps to determine what types of variables are to be considered, or not considered, in a study. This enables the researcher to have an operational definition of the variables of interest in his/her research.
As would be discussed later, one of the basic steps in statistical hypothesis testing is the
transformation of the research question into null and alternative hypotheses (Browner et al., 2001). The null hypothesis, usually denoted by H0, is a competing hypothesis that negates or contradicts the research hypothesis, which is the alternative hypothesis, usually denoted by H1. The research hypothesis H1 is the hypothesis of interest whose veracity the investigator wants to ascertain on the basis of empirical evidence from the sample data. The null hypothesis, on the other hand, is the hypothesis of no effect, which is basically the opposite of the research hypothesis.
In general terms, the null and alternative hypotheses are two concise statements, usually in
mathematical form, of possible versions of “truth” about the relationship between the
predictor of interest (empirical evidence from sample) and the outcome in the population
(Davis and Mukamal, 2006). These two statements must be mutually exclusive (non-
overlapping) and exhaustive by covering all the possible truths concerning the characteristics
of interest under study. For example, the research hypothesis (H1) could be of the following
forms:
i) H1: “The average performance of students in a CBT GNS examination is different
from their average performance in an MBT GNS examination”
ii) H1: “The average performance of students in a CBT GNS examination is better than
their average performance in an MBT GNS examination”
iii) H1: “The average performance of students in a CBT GNS examination is worse than
their average performance in an MBT GNS examination”
The first research hypothesis in (i) only indicates a bidirectional type of effect. If the test result upholds H1, this would only indicate that the average performances of students under the two test types are not the same, without indicating which of them is better. This is
called a two-tail (2-tail) alternative hypothesis set. If the Greek symbols 𝜇𝐶𝐵𝑇 and 𝜇𝑀𝐵𝑇 are
used to indicate the average performance of the students in CBT and MBT examinations in
that course respectively, the conventional form for stating H1 would be H1: 𝜇𝐶𝐵𝑇 ≠ 𝜇𝑀𝐵𝑇 .
The second and third research hypotheses in (ii) and (iii) are one-directional in that they both indicate the direction of the effect of interest. They are both referred to as one-tail (1-tail) hypothesis sets, with hypothesis (ii) being right-sided and hypothesis (iii) being left-sided.
Also, the two research hypotheses (ii) and (iii) can be conventionally written as H1: 𝜇𝐶𝐵𝑇 >
𝜇𝑀𝐵𝑇 and H1: 𝜇𝐶𝐵𝑇 < 𝜇𝑀𝐵𝑇 respectively.
The corresponding null hypotheses (H0) for the research hypotheses H1 stated in (i) to (iii) above are given as follows:
i) H0: “The average performance of students in a CBT GNS examination is not different
from their average performance in an MBT GNS examination”
ii) H0: “The average performance of students in a CBT GNS examination is not better
than their average performance in an MBT GNS examination”
iii) H0: “The average performance of students in a CBT GNS examination is not worse
than their average performance in an MBT GNS examination”
The three null hypotheses above negate their respective alternative hypotheses H1 as earlier stated. They can be written in short form as H0: 𝜇𝐶𝐵𝑇 = 𝜇𝑀𝐵𝑇 for null hypothesis (i), H0: 𝜇𝐶𝐵𝑇 ≤ 𝜇𝑀𝐵𝑇 (conventionally stated as H0: 𝜇𝐶𝐵𝑇 = 𝜇𝑀𝐵𝑇) for null hypothesis (ii), and H0: 𝜇𝐶𝐵𝑇 ≥ 𝜇𝑀𝐵𝑇 (conventionally stated as H0: 𝜇𝐶𝐵𝑇 = 𝜇𝑀𝐵𝑇) for null hypothesis (iii).
If both the null and alternative hypotheses above are stacked together, we have their
respective complete statistical representations as follows:
i) H0:𝜇𝐶𝐵𝑇 = 𝜇𝑀𝐵𝑇 versus H1: 𝜇𝐶𝐵𝑇 ≠ 𝜇𝑀𝐵𝑇 (Two-tail hypothesis test)
ii) H0: 𝜇𝐶𝐵𝑇 = 𝜇𝑀𝐵𝑇 versus H1: 𝜇𝐶𝐵𝑇 > 𝜇𝑀𝐵𝑇 (One-tail hypothesis test, right sided)
iii) H0: 𝜇𝐶𝐵𝑇 = 𝜇𝑀𝐵𝑇 versus H1: 𝜇𝐶𝐵𝑇 < 𝜇𝑀𝐵𝑇 (One-tail hypothesis test, left sided)
When a statistical test rejects the null hypothesis H0 in favour of the alternative set H1, the
test result is said to be significant (Huberty, 1994, Royall, 1997).
It is important to remark that if the null hypothesis H0 is not rejected in any of the one-tail
hypotheses tests for a given data set, this does not necessarily imply that the two population
means 𝜇1 and 𝜇2 being compared are the same, especially when H0 is rejected under the two-
tail hypothesis for the same data set. Failure to reject H0 under such one-directional
alternative sets simply suggests that the stated order of differences in means (𝜇1 > 𝜇2 or 𝜇1 <
𝜇2 ) in the alternative H1 is not supported by the data but not that 𝜇1 = 𝜇2 .
TYPE I AND TYPE II ERRORS AND POWER OF A STATISTICAL HYPOTHESIS TEST
When a statistical hypothesis is being tested, two types of errors may be committed. These are referred to in statistics as Type I and Type II errors (Ware et al., 1992; Quinn and Keough, 2002; Adeleke, 2002). The concepts of Type I and Type II errors were originally provided by Jerzy Neyman and Egon Pearson (Neyman and Pearson, 1928).
The Type I error (Lind et al., 2004) is the error committed when a correct null hypothesis is wrongly rejected. It is usually measured quantitatively by the Greek letter 𝛼, defined as the probability of wrongly rejecting a correct null hypothesis. Since 𝛼 is a probability, its values range between 0 and 1. Conversely, the probability of taking a correct decision by accepting a true null hypothesis is given by 1 − 𝛼. Put differently, the probability 𝛼 of committing a Type I error is often called the level of significance of the statistical hypothesis test, and its value is usually chosen prior to the commencement of the test. The value of 𝛼 chosen by an investigator indicates how much he/she is willing to risk committing a Type I error. Fisher (1935) earlier recommended a value of 5 % for 𝛼, which indicates a 1 in 20 chance of falsely rejecting a correct null hypothesis. Fisher (1956) thereafter argued that a fixed value of 5 % for 𝛼 may be too stringent a condition and recommended that the value of 𝛼 be chosen according to the situation being investigated. It should, however, be noted that the closer the chosen value of 𝛼 is to zero, the smaller the chance of committing a Type I error in a significance test. In much social science research, the levels of significance commonly used are 5 %, 1 % and 0.1 %, which translate to 95 %, 99 % and 99.9 % probabilities of correctly accepting a true null hypothesis respectively. In the behavioural sciences, 5 % is a common choice for 𝛼 (Lunt, 2011).
On the other hand, the Type II error is the error committed when an incorrect null hypothesis is accepted (Gigerenzer, 1993). This error rate is denoted by the Greek letter 𝛽, which represents the probability of wrongly accepting a false null hypothesis that should be rejected. The converse of this naturally translates to the power of a statistical significance test.
The power of a statistical test, denoted by 1 − 𝛽, is the probability of correctly rejecting a false null hypothesis (Quinn and Keough, 2002). This is the probability of taking the correct decision, that is, rejecting H0 when there is actually strong numerical evidence against it from the sample data. Putting this differently, the power of a statistical test is the ability of the test
to detect the desired alternative hypothesis set when it is true. The concepts of Type I and
Type II errors as well as power of statistical tests are presented in the confusion matrix given
in Table 1.
Table 1: Confusion matrix showing Type I error, Type II error and the probabilities of taking correct decisions in a statistical hypothesis test

Decision                  H0 is true                     H0 is false
Reject H0                 Type I error (probability 𝛼)   Correct decision (power, 1 − 𝛽)
Do not reject H0          Correct decision (1 − 𝛼)       Type II error (probability 𝛽)
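As a rough numerical illustration of how 𝛼, 𝛽 and power interact, the short R sketch below (R being the statistical package referred to later in this chapter) uses the built-in power.t.test function; the sample size, shift and standard deviation chosen here are purely hypothetical.

    # Hypothetical one-sample setting: n = 25 scores, a true shift of 2.5 marks,
    # standard deviation 6. power.t.test returns the power (1 - beta).
    power.t.test(n = 25, delta = 2.5, sd = 6, sig.level = 0.05,
                 type = "one.sample", alternative = "two.sided")

    # Making alpha (the Type I error rate) smaller lowers the power,
    # i.e. it raises beta, the Type II error rate.
    power.t.test(n = 25, delta = 2.5, sd = 6, sig.level = 0.01,
                 type = "one.sample", alternative = "two.sided")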
THE BASIC STEPS IN STATISTICAL HYPOTHESIS TESTING
The basic logic of hypothesis testing is to prove or disprove the research question as stated
under the alternative hypothesis set H1. The steps to be followed while performing a
statistical hypothesis test are reported in various forms in the literature (Underwood, 1997;
Mulaik et al., 1997; Davis and Mukamal, 2006). A feature common to all the proposed procedures is the formulation of the hypothesis of interest to be tested and the use of an appropriate test statistic.
In summary, the five basic steps are: (1) state the research question and identify the population, the variable and the parameter of interest, checking the distributional assumptions on the data; (2) formulate the null and alternative hypotheses; (3) compute the appropriate test statistic; (4) compute the p-value or the critical value of the test statistic and take a decision; and (5) draw the conclusion (inference) about the population. Each of these five steps is discussed in detail in what follows. However, since most hypothesis test problems in the sciences, social sciences and humanities centre on the comparison of population means, our subsequent discussion shall focus in this direction.
In a classical test on the mean performance of students in a GNS course, for example, the research question may be stated as "Is the mean score of students in the GNS 111 course equal to 60 %?" Under this problem, the target population is all the students that wrote the GNS 111 examination; the variable of interest is continuous and represents the scores obtained by all the students in that course. The parameter of interest is the mean score of all the students in the course, with a hypothesised value of 60. Any randomly selected scores (sample) drawn from the entire scores of students in that course should be normally distributed with a mean of 60, usually written as 𝑋𝑖 ~ 𝑁(60, 𝜎²), where 𝜎² is the parameter that measures the spread (variance) of the selected scores.
To this end, typical scores of 1000 students were simulated from a normal distribution with a mean of 60 and a unit standard deviation. The density plot and the histogram of the simulated scores are shown in Fig. 1a and b respectively. The histogram or density plot of any random variable of interest that possesses the normality property should be bell-shaped and symmetrical around the mean, as shown in Fig. 1a and b.
Figure 1: The density plot (a) and the histogram (b) of simulated scores of 1000 random samples (students) from a normal distribution with a mean of 60 and a standard deviation of 1.
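A minimal R sketch of how such scores could be simulated and plotted, using only base R functions, is given below; the seed value is arbitrary.

    set.seed(123)                                  # arbitrary seed, for reproducibility
    scores <- rnorm(1000, mean = 60, sd = 1)       # 1000 simulated scores from N(60, 1)
    plot(density(scores), main = "Density of simulated scores")   # cf. Fig. 1a
    hist(scores, main = "Histogram of simulated scores")          # cf. Fig. 1b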
Unfortunately, in many research works this normality assumption, as the word implies, is simply assumed and not always tested. However, if the distribution of the sample collected is not normal, neither the z nor the t test statistic is appropriate for performing the test. This is an indication that some kind of data transformation (e.g. logarithm, reciprocal, square root, etc.) might be necessary to attain normality.
To illustrate the logarithm transformation technique for attaining normality, for instance, 1000 scores of students were simulated from a chi-square (non-normal) distribution with 60 degrees of freedom. This implies that the mean of the simulated samples (scores) is 60. The histogram of the simulated chi-square samples, which shows the asymmetric (skewed) nature of the sample data, and the histogram of their natural log transformation, which yielded a bell-shaped (normal) distribution, are presented in Fig. 2a and b respectively.
Figure 2: The histogram of 1000 chi-square random samples with 60 degrees of freedom (a) showing a right-skewed distribution, and the histogram of the log-transformed samples (b) showing a bell-shaped (normal) distribution.
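The corresponding R sketch for the chi-square simulation and its log transformation might look as follows; again the seed is arbitrary.

    set.seed(123)                                   # arbitrary seed
    raw <- rchisq(1000, df = 60)                    # right-skewed samples; mean = df = 60
    hist(raw, main = "Chi-square scores (skewed)")         # cf. Fig. 2a
    hist(log(raw), main = "Log-transformed scores")        # cf. Fig. 2b, near bell-shaped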
The illustrations in Fig. 1 and Fig. 2 clearly underscore the need to examine the behaviour of sample data for conformance to the required distributional assumption while carrying out statistical hypothesis tests. Any deviation from the assumed probability distribution may require a little data transformation, as demonstrated in Fig. 2.
Apart from the normality assumption, another assumption about the data that must be met at this stage is the independence of the random samples drawn from the population of interest. All these basic assumptions on the variable of interest must be met in order to guarantee efficient and reliable statistical hypothesis tests.
The second step in hypothesis testing is the formulation of both the null and the alternative hypotheses. The research question is stated as the alternative hypothesis H1 while its negation becomes the null hypothesis H0, as given below using our illustrative example on students' performance in the GNS 111 examination.
H0: The mean score of students in GNS 111 is 60 % (i.e. H0: 𝜇 = 60)
H1: The mean score of students in GNS 111 is not 60 % (i.e. H1: 𝜇 ≠ 60)
Either of the one-directional alternatives could also be of interest to the investigator.
That is, H1: The mean score of students in GNS 111 is greater than 60 % (i.e. H1: 𝜇 > 60) or
H1: The mean score of students in GNS 111 is less than 60 % (i.e. H1: 𝜇 < 60).
If the null hypothesis is defined by the parameter 𝜇, as in our example here, then the appropriate test statistic should be a function of the sample mean 𝑋̅ and possibly of the sample standard deviation (s) if the population standard deviation 𝜎 is unknown. Therefore, the test statistic in this case is
𝑍 = (𝑋̅ − 𝜇) / 𝑠.𝑒.(𝑋̅)
The statistic 𝑍 has a normal distribution with zero mean and unit standard deviation, usually written as 𝑍~𝑁(0,1). The standard error of the sample mean, 𝑠.𝑒.(𝑋̅), is 𝜎/√𝑛, where 𝑛 is the number of samples used to compute 𝑋̅. The sample standard deviation s is often used in place of its population value 𝜎 when the latter is not known, which is the most common case. In such a situation, the distribution of the statistic becomes a Student t with 𝑛 − 1 degrees of freedom, often written as 𝑡~𝑡𝑛−1, with 𝑍 being replaced by 𝑡 in the test statistic above.
As a general remark, if the value of 𝜎 is estimated by s from large samples of size 𝑛 (e.g. 𝑛 >
120), the distribution of the test statistic 𝑍 or t would essentially be standard normal. This
shows that the same conclusions would be made using the 𝑍 or t statistic under the large
sample situations.
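This large-sample behaviour can be checked directly in R by comparing quantiles of the t and standard normal distributions; the degrees of freedom below are illustrative.

    qnorm(0.975)          # 1.960, standard normal 97.5th percentile
    qt(0.975, df = 24)    # 2.064, t critical value when n = 25
    qt(0.975, df = 120)   # 1.980, already close to the normal value for n = 121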
In our earlier null hypothesis H0: 𝜇 = 60 against the two-sided alternative H1: 𝜇 ≠ 60,
suppose scores of 25 students were randomly drawn with a mean of 62.5 % and a known
population standard deviation of 6, then the test statistic Z becomes
𝑍̂ = √25 (62.5 − 60)/6 = 2.08
Compute the p-value or the Critical Value of the Test Statistic and take a decision
The fourth step in hypothesis testing is the computation of the p-value or the critical value of
the test statistic computed in step three. The decision of the test results will be based on
either of these two values. Interestingly, many researchers are more familiar with the use of
critical values than the p-values in significant tests. The applications of the two values are
presented here.
The p-value of a statistical test, here denoted by pv, is the probability of obtaining the observed value of the test statistic or a more extreme value when H0 is true (Quinn and Keough, 2002; Davis and Mukamal, 2006). For the test statistic 𝑍 given earlier, for instance, its p-value is computed as 𝑝(𝑍 > 𝑍̂) = 1 − 𝑝(𝑍 ≤ 𝑍̂). The quantity 𝑝(𝑍 ≤ 𝑍̂) is the cumulative distribution function (cdf) of the standard normal distribution, whose value can easily be read from any statistical table or computed using a statistical package.
To use the p-value for inference purposes, the investigator must have chosen a
particular level of significance 𝛼 for the test against which the computed p-value would be
compared. For a two-sided alternative hypothesis test, the decision rule is to reject the null
hypothesis H0 in favour of the alternative set H1 if the 2pv (2 x p-value) is less than the
chosen significance level 𝛼. For a one-tail alternative test, the rule is to reject H0 if pv is less than 𝛼. When the null hypothesis is rejected, the outcome is said to be "statistically significant", and when it is not rejected the outcome is said to be "not statistically significant".
In our example, the value of the test statistic 𝑍̂ is 2.08 and its p-value is 1 − 𝑝(𝑍 ≤ 2.08), which is 0.0188 (i.e. 1 − 0.9812). The value 0.9812 of 𝑝(𝑍 ≤ 2.08) was computed using the R statistical package (www.cran.org) and can easily be read as well from any statistical table. With a p-value (pv) of 0.0188, 2pv = 0.0376, and the null hypothesis H0 that the mean performance of students in GNS 111 is 60 % (H0: 𝜇 = 60) is rejected in favour of the alternative hypothesis that the students' performance is not 60 % (H1: 𝜇 ≠ 60) at the 5 % significance level, since 0.0376 is less than 0.05. Hence, the test is statistically significant. However, if a 1 % significance level were intended by the investigator, the null hypothesis would not be rejected since 2pv > 0.01.
If the sample standard deviation s (= 6) is used in the computation of the value of the test statistic 𝑍, the distribution of 𝑍 becomes a Student t distribution with 24 degrees of freedom. Here, the p-value is 1 − 𝑝(𝑡 ≤ 2.08) at 24 degrees of freedom, which is 0.0242. With 2pv = 0.0484, the null hypothesis would be rejected, as before, at the 5 % level of significance.
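The calculations above can be reproduced in R roughly as follows.

    xbar <- 62.5; mu0 <- 60; sigma <- 6; n <- 25
    z_hat <- (xbar - mu0) / (sigma / sqrt(n))   # about 2.08, the test statistic
    pv <- 1 - pnorm(z_hat)                      # about 0.019, one-tail p-value
    2 * pv                                      # about 0.037 < 0.05: reject H0 (two-tail)
    2 * (1 - pt(z_hat, df = n - 1))             # about 0.048, two-tail p-value from the t distribution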
The critical value of the test statistic is another tool by which a decision can be made in significance tests based on sample data. The critical value 𝐶𝛼 of a statistical test is the quantile value obtained from the distribution of the test statistic at size 𝛼 of the test. The value of 𝐶𝛼 is a threshold value against which the estimated value of the test statistic is compared in order to decide whether the null hypothesis should be rejected (if the test statistic value is greater than 𝐶𝛼) or not rejected (if the test statistic value is less than 𝐶𝛼).
In our earlier example, the 𝑍 statistic is distributed standard normal; therefore its critical value is the quantile value 𝑍1−𝛼/2 at significance level 𝛼. The rule that guides us in deciding whether to accept or reject the null hypothesis at any chosen 𝛼 value, usually called the decision rule, is to reject the null hypothesis H0 in favour of the alternative set H1 if the inequality |𝑍̂| ≥ 𝑍1−𝛼/2 holds, for a two-sided alternative hypothesis set.
For alternative hypothesis sets H1: 𝜇 > 60 or H1: 𝜇 < 60, the rule is to reject H0 if 𝑍̂ ≥ 𝑍1−𝛼
or 𝑍̂ ≤ −𝑍1−𝛼 respectively.
At the 5 % 𝛼 level, the critical value 𝑍1−𝛼/2 is 𝑍0.975 = 1.96 and, with 𝑍̂ = 2.08 being the estimate of the
test statistic, the decision is to reject the null hypothesis (H0: 𝜇 = 60) that the mean
performance of students in GNS 111 is 60 % in favour of the alternative (H1: 𝜇 ≠ 60) that
the mean performance of students in GNS 111 is not 60 %.
For the one-directional alternative hypotheses H1: 𝜇 > 60 or H1: 𝜇 < 60, the critical value 𝑍1−𝛼 is 𝑍0.95 = 1.645. For the alternative hypothesis H1: 𝜇 > 60 and with 𝑍̂ = 2.08, the decision is to reject H0 in favour of H1 since 𝑍̂ = 2.08 > 𝑍0.95 = 1.645.
For the one-directional alternative hypothesis H1: 𝜇 < 60, the estimate of the test statistic 𝑍̂ = 2.08 is not less than −𝑍0.95 = −1.645 (the critical value); hence the decision not to reject H0 in favour of H1 should be taken. However, the decision not to reject H0 here does not imply that the mean performance of students is 60 %. This test result only shows that the directional alternative stated under H1 (𝜇 < 60) is not supported by the data; the two possibilities are that the mean performance is either not significantly different from 60 % (𝜇 = 60) or greater than 60 % (𝜇 > 60), the latter being the case here, as confirmed by the result of the one-directional hypothesis H1: 𝜇 > 60 presented earlier.
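The same decisions can be reached with critical values computed in R, as sketched below.

    z_hat <- 2.08                     # estimated test statistic from the sample
    qnorm(0.975)                      # 1.96, two-sided critical value at alpha = 0.05
    abs(z_hat) >= qnorm(0.975)        # TRUE: reject H0 against H1: mu != 60
    qnorm(0.95)                       # 1.645, one-sided critical value
    z_hat >= qnorm(0.95)              # TRUE: reject H0 against H1: mu > 60
    z_hat <= -qnorm(0.95)             # FALSE: do not reject H0 against H1: mu < 60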
Generally, when testing the null hypothesis H0 against any of the one-directional alternative hypotheses H1 in one-sample or two-sample significance tests, the acceptance of H0 in either case should be revalidated through the results of the test of H0 against the two-sided alternative hypothesis H1, in order to make a correct judgment on the population based on the sampled data. In our example, the conclusion is that the average performance of students in GNS 111 is actually more than 60 %, based on the sampled data.
CLASSICAL EXAMPLE
We consider a more classical example from the literature to illustrate the use of hypothesis
testing in the field of Engineering.
Example:
In an engineering experiment as reported by Hsu et al. (2002), 45 steel balls lubricated with
purified paraffin were subjected to a 40 kg load at 600 rpm for 60 minutes. The average wear,
measured by the reduction in diameter, was 673.2 μm, and the standard deviation was 14.9
μm. Assuming that the specification for a lubricant is that the mean wear be less than 675 μm, is it sufficient to conclude that this specification was met, based on the data from the 45 sampled balls drawn from the lubrication experiment?
Solution:
Firstly, we transform the problem into statistical language by considering the 45 samples as a
set of random samples X1, . . . , X45 of wear diameters with a mean (𝑋̅) of 673.2 μm and
standard deviation (s) of 14.9 μm.
Following the steps required in hypothesis testing as earlier discussed, the research question
here is whether the lubricant has met the specification by reducing the mean wear diameter of
all the steel balls below 675 μm or not.
Arising from the research question is the formulation of the null and alternative hypotheses to be tested, as stated below:
H0: The lubricant does not meet the specification (i.e. the mean wear diameter is not less than
675 μm) versus
H1: The lubricant meets the specification (i.e. the mean wear diameter is less than 675 μm).
In statistical symbols, the hypotheses above can be simply stated as
H0: μ = 675 versus H1: μ <675.
The null hypothesis H0 above is simply suggestive of the fact that the apparent difference
between the sample mean of 673.2 and 675 is just due to chance unless this is proved
otherwise by a test’s results.
The appropriate test statistic for the hypothesis is the 𝑡 statistic given by
𝑡 = √𝑛(𝑋̅ − 𝜇0)/𝑠 ~ 𝑡𝑛−1
which has a student t distribution with 𝑛 − 1 degrees of freedom (df). The decision rule for
this test problem is to reject H0 if
𝑡̂ < −𝑡𝑛−1,1−𝛼 or, equivalently, if 𝑝(𝑡 ≤ 𝑡̂) ≤ 𝛼 (any intended significance level), where 𝑡̂ is the estimate of the test statistic 𝑡 and 𝑡𝑛−1,1−𝛼 is the critical value of the 𝑡-distribution at significance level 𝛼 with 𝑛 − 1 df. Here, 𝑝(𝑡 ≤ 𝑡̂) is the left-tail p-value of the test statistic 𝑡̂ at 𝑛 − 1 df, the alternative hypothesis being left-sided.
To compute the test statistic, we already have from the sample information provided, that
𝑋̅ = 673.2, 𝑠 = 14.9 and 𝑛 = 45, the sample size. Also, the hypothesised parameter value is
𝜇0 = 675. Therefore, the estimate of the test statistic 𝑡 is
𝑡̂ = √45 (673.2 − 675)/14.9 = −0.81
At the 5 % level of significance, the critical value 𝑡44,0.95 for the test is 1.68, with 44 degrees of freedom (df). Based on this critical value, we cannot reject the null hypothesis H0 on the available sample data, since −0.81 is not less than −1.68. Hence, it cannot be concluded that the lubricant meets the required specification (i.e. H1 cannot be accepted) given the data.
The above result does not mean that the null hypothesis H0 is true, but it only shows that H0 is
plausible. By being plausible, we mean that if more sample data are collected on the problem
being investigated the conclusion might no longer be in favour of H0.
Using the p-value concept, the value of 𝑝(𝑡 ≤ −0.81) at 44 df is about 0.21. This p-value is interpreted to mean that, if H0 is true, there is roughly a 21 % chance of observing a value of the test statistic (or sample mean) whose disagreement with H0, in the direction of H1, is at least as great as that actually observed. Since this probability is much larger than any conventional significance level, we do not reject H0 as before and
cannot conclude that the lubricant met the required specification, given the available sample data analysed.
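A rough R reproduction of this worked example, computed from the summary statistics given above, is sketched below.

    n <- 45; xbar <- 673.2; s <- 14.9; mu0 <- 675
    t_hat <- sqrt(n) * (xbar - mu0) / s     # about -0.81, the estimated t statistic
    -qt(0.95, df = n - 1)                   # about -1.68, left-tail critical value
    t_hat < -qt(0.95, df = n - 1)           # FALSE: do not reject H0
    pt(t_hat, df = n - 1)                   # about 0.21, left-tail p-value (> 0.05)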
In this section, we present a catalogue of some statistical concepts that are often misused or misinterpreted in significance testing.
It is wrong to think that the p-value is the probability that the null hypothesis is correct. The p-value is the probability of obtaining the current value of the test statistic, or a more extreme value, assuming that the null hypothesis H0 is true. The smaller the p-value of a test statistic, the stronger the evidence in support of rejecting H0.
a) Failure to reject the null hypothesis H0 does not imply that H0 is correct. Such a test result only shows that there is not sufficient evidence in the current data to warrant the rejection of H0.
b) It is not correct to believe that the 5 % level of significance (𝛼) is a standard for all tests. The choice of the 𝛼 value should depend on the problem under investigation. If an 𝛼 value of 5 % seems reasonable for a particular test problem, it may be a very poor choice in some others, especially in drug discovery and related studies.
c) It is wrong to believe that a small p-value is an indication of a large effect size. The two p-values 0.01 and 0.0001 both suggest strong evidence for rejecting the null hypothesis, but they are never an indication that the effect size in the former is smaller than in the latter.
d) It is not correct to believe that statistical significance or non-significance implies the importance or non-importance of the relation being investigated. In other words, statistical significance does not necessarily mean biological significance in a biological investigation (Quinn and Keough, 2002). Small effects that might not be statistically significant might be biologically significant, most especially in genetic studies where small changes in some physiological measurements could result in serious biological impact.
CONCLUSION
The basic rudiments of statistical hypothesis testing have been presented thoroughly in this work, especially for cases where the variable of interest is measured on at least the interval scale. However, the procedures for testing hypotheses involving variables measured at the other two levels, the nominal and ordinal scales proposed by Stevens (1946), follow patterns similar to those provided here, with little modification to capture the discreteness of the sample data.
While it is necessary to specify the level of significance of a test problem prior to the commencement of the test, the investigator should be cautious about assuming that a 5 % level is suitable for all hypothesis test problems. As a result, we recommend the use of the p-value of a statistical test instead of the critical values, which are 𝛼-specific. The moment the p-value of a hypothesis test is computed, it is left to the owner of the data (biologist, educationist, industrialist, etc.) to determine the size of the Type I error 𝛼 he is comfortable with. This obviously would depend on the type of problem under investigation.
It has been noted here that a statistically non-significant test does not necessarily mean that the null hypothesis is completely true. It only shows that there is insufficient information at the investigator's disposal to warrant the rejection of the null hypothesis. In a criminal trial, for
instance, a jury would only decide whether an accused person is innocent (H0) or guilty (H1) based on the burden of proof placed before it by the prosecution. Within the legal framework, the required burden of proof is "proof beyond a reasonable doubt", established using a number of factual arguments. Under this consideration, a not-guilty verdict (failure to reject H0) does not mean that the accused is innocent. It only shows that the prosecutor does not currently have sufficient information at his disposal to "prove his case beyond a reasonable doubt".
Finally, a statistically significant test should never be misconstrued as biological significance. Studies have shown that when sufficiently large samples are taken, a very small shift in location can be detected even if such a shift is of no biological relevance (Quinn and Keough, 2002; Yahya et al., 2012). On the other hand, a small shift in location that has been declared statistically non-significant may make a huge biological difference. The import of all this is that whenever a significant result is obtained in a hypothesis test problem, it is also important to examine whether the difference in effects reported by the test is biologically meaningful. However, it is important to remark that what constitutes biological significance has nothing to do with statistics or the statistician (the analyst); rather, it lies with experts' biological judgment of the problem under investigation. The biologist or other experts in the field of investigation should therefore carefully determine, prior to the test, the size of the shift in location (effect size) that would constitute biological significance. This would help the statistician to determine an appropriate sample size that would be suitable for achieving appreciable power in the test.
EVALUATION STRATEGIES
REFERENCES
Chapter Fifteen
MICROBES AND DISEASES
Ande, A.T.
Department of Zoology, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
'Microbe' is a noun coined from 'micro-organism', i.e. an organism that cannot be seen with the naked eye. Microbes exist as single cells or cell clusters and can be appreciated with the aid of special gadgets such as the microscope. They form a very large and diverse group of organisms that combines plant and animal features. They include bacteria, fungi, viruses and protozoans.
Microbes play more important roles in nature than their small size suggests. They form a significant and integral part of the community that ensures interaction between living and non-living components, and hence the sustenance of all ecosystems. They ensure the synthesis and degradation of special organic substances in the course of their existence. Man has over the years taken advantage of this in a number of ways, such as in the production of beer, yogurt and antibiotics, in baking, in the soak-away system, etc.
Microbes are ubiquitous. They live within the body, as well as on the body surfaces, of higher plants and animals (hosts). Their activities may be beneficial or detrimental to their host. Microbes in the latter category are described as pathogens. To a large extent microbes are host-specific; hence different hosts have different ranges of micro-organisms that associate with them. These are collectively described as the microflora of the respective host.
The human microflora occur on the skin; in orifices, i.e. the mouth, nose, anus, vagina, etc.; in body fluids such as saliva, blood and semen; as well as in the tissues. Most of these form the normal body microflora, in which case their presence is not harmful to the host. The detrimental or pathogenic forms, however, elicit a disease condition, i.e. an alteration in the host system resulting from interaction with the microbe that leads to a loss of productivity in man. This disease possibility necessitates a discussion on microbes and diseases, in the belief that a thorough understanding of the etiology of the microbial diseases of man will reduce their incidence and therefore enhance human livelihood.
OBJECTIVES
HOST INVASION
To elicit a disease condition, microbes must establish contact with, multiply in and colonise their host, either superficially or in the tissues. To achieve this, they must overcome the series of defence mechanisms put up by an unwilling host. The most prominent of these is the impenetrable skin, which serves as a barrier. Determined microbes, however, make their way in by adhering to the surface and producing chemical substances (enzymes) that break the barrier for them. Some take advantage of skin lacerations by way of wounds, while others reside in obscured spaces with restricted access but a clement environment, such as the mouth, anus and vagina. Microbes are known to show preferences for their site of occurrence; e.g. Neisseria gonorrhoeae, the microbe causing the sexually transmitted disease known as gonorrhoea, sticks more strongly to the inner lining of the urinogenital tract than to any other site.
PATHOGENICITY
Microbes initiate some level of alteration in the host's system as a result of their activities. Some of these activities are aimed at improving their survival chances in the areas of food acquisition and avoidance of the host's defence actions. They include:
Production of excretory wastes that may be intolerable to the host and hence referred
to as toxins
Deprivation of the host of its nutrients
Confiscation of host tissue for personal use by the microbe. e.g. viral infections
Destruction of the host tissue, e.g. anemia resulting from malaria infection
Initiation of tissue changes that may lead to cancers or tumours and
Reduction of host immune response thereby giving room for opportunistic infections.
RESERVOIRS
Microbes reside temporarily in one or more natural environments known as reservoirs. The
major reservoirs are water, soil, atmosphere, human, domestic and wild animals. In their bid
to survive they move from one reservoir to the other and become successful when established
in their host. Diseases that are contractible by humans from other animals are termed
zoonotic diseases.
TRANSMISSION
The essence of a successful transmission bid is to ensure the establishment of the offspring or progeny of a particular microbe in another host of the same kind, a mission that entails movement across reservoirs. Microbes therefore utilise media that are inevitably used by their prospective hosts, such as air, water, food, etc. The following four basic ways are often employed:
Direct contact with infected persons, animals or contaminated objects
Pathogens frequently utilise the opportunity of contact to get across to a new host. Such a pathogen is described as a contagion. Such contacts may be direct or indirect, e.g. sexual intercourse, kissing, sharing of toiletries, renting of dresses, etc. Entry may also be via an open wound, e.g. tetanus bacteria from the soil.
Inoculation through the skin
Pathogens may take advantage of the interaction between man and blood-sucking parasites of man. The microbe makes itself available to the vector when the vector feeds on human blood and is then transmitted to another host when the vector consumes blood from that new host. Such pathogens usually grow and multiply in the vector, e.g. Plasmodium in the Anopheles mosquito.
MICROBIAL DISEASES
Microbes and the diseases initiated by them are numerous. In this chapter, some of the
examples of these diseases shall be reviewed with emphasis on those that are of common
occurrence in Nigeria. The diseases have been grouped on the basis of their causative agents into bacterial, fungal, viral and protozoan diseases. The name, location in the host, mode of
transmission, part of the host body affected, clinical manifestation, treatment and prevention
of each disease are described briefly.
Viral diseases: These are caused by the smallest organisms described, i.e. viruses. They cannot be seen under the common light microscope. They are obligate parasites that show evidence of living only when in their host. Outside their host they are inert; hence they are regarded as organisms at the borderline between living and non-living things. Viruses usually occur, live and multiply within the cells of their host, taking over the control of such cells from their respective nuclei and from the host. Thus, one of the general characteristics of viruses is their ability to alter the working method or system of the cells of a host. Table 1 shows a summary of the etiology of six common viral diseases and possible treatments and preventive measures. Viral diseases are usually difficult to treat because the viruses reside within the host's cells, where drug doses high enough to kill them may not be tolerated; an effective drug at this point may kill the host cell before the virus. More emphasis is, therefore, placed on prevention, which ensures that the pathogens do not find their way into the host cell. When they do, however, the microbe is usually left for the host's immune system to handle.
Bacterial Diseases: Several human diseases are caused by bacteria of various types. Bacteria
are among the smallest living organisms. They were the first group of micro-organisms to be discovered as disease-causing organisms. Most of them gain entry into their host via the
mouth, nose, vagina, anus or lacerated skin. They obtain their food from their host by
secreting enzymes that break down food substances to simpler forms that are readily
absorbed by them. They also secrete waste products that are usually toxic to their host. Table
2 gives specific information on the etiology, clinical manifestation, treatment and prevention
of some common bacterial diseases of man.
Fungal Diseases: Disease-causing fungi in man are few. Trichophyton sp. causes 'ringworm' and 'athlete's foot' diseases in children and adults respectively. This fungus obtains its nourishment from the outer layer of the skin with the aid of root-like structures called hyphae. It produces a small dark or red patch that grows outward and later becomes restricted to the outer margin of a ring whose centre regains its skin colouration but remains without hair strands, hence the name 'ringworm'. These patches frequently occur on the scalp, inside the thighs and in the armpits. On the scalp, it gives a scaly bald patch without hair strands. Fungal disease spreads by personal or indirect contact achieved by way of sharing clothing and other personal effects such as sponges, combs, socks, etc. The best preventive measure is to ensure personal cleanliness and discourage the use of damp socks and covered shoes. The diseases are frequently treated with antiseptic preparations.
Protozoan diseases: Protozoans are tiny single-celled organisms. Most of them are free-living. A few, e.g. Plasmodium sp. and Trypanosoma sp., are however parasitic, and they cause malaria and trypanosomiasis (sleeping sickness) respectively in man.
Malaria is caused by Plasmodium sp., a pathogen that resides in the blood of man. The parasites colonise the red blood cells, feed within them, reproduce in them and break them open, thus destroying them. This destruction results in anaemia. The pathogens also produce toxins that initiate the rigors associated with malaria fever. The pathogens gain access into the human blood stream through the feeding action of the female Anopheles mosquito, which acts as a vector. The mosquito collects the pathogen from an infected individual while sourcing for blood, develops the pathogen to an infective form and subsequently passes the infective pathogen to a healthy individual during blood-sucking activity. The symptoms of the disease include fever (i.e. high body temperature), headache, pains especially at the joints, scanty and deeply coloured urine, etc. Due to the high level of debility involved, chronic sufferers are usually incapable of carrying out their daily chores, and so the disease leads to significant economic losses. Prevention is only achievable by avoiding contact with mosquitoes. Taking drugs to prevent malaria can lead to undesirable effects such as the development of resistance, i.e. Plasmodium strains that will not respond to the common antimalarial drugs, thus complicating the management of malaria. Treatment is readily achieved through the use of drugs.
Trypanosomiasis: Trypanosoma resides in the blood stream of man, where it obtains its food and reproduces. It produces toxins which make the host sick. The microbes are transmitted from one human host to another by the tsetse fly (Glossina sp.) while it prospects for blood. Signs and symptoms of the disease include fever, emaciation, almost constant sleeping and, eventually, death. Prevention involves avoidance of the tsetse fly, and treatment entails the use of drugs.
CONCLUSION
Microbes are desirable members of the ecosystem. The pathogenic forms, though they
conflict with human interests by causing diseases, can neither be eradicated nor totally
avoided. If man must enjoy his stay in the ecosystem, therefore, he must understand, tolerate
and adequately manage the disease-causing micro-organisms. The management of pathogenic
organisms requires simple attitudinal changes, since they obviously take advantage of
common attitudinal lapses of man to perpetuate themselves. A three-pronged approach
towards achieving this includes: (1) empowering the host to cope with the challenges posed
by low levels of microbial infection by feeding well, sleeping well, avoiding an overly hectic
pace of life, etc.; (2) discouraging the spread of the microbes through improved hygienic
procedures, maintaining a clean environment, eating clean and well-cooked food and drinking
clean water, avoiding overcrowding, improving ventilation, avoiding the sharing of personal
effects, avoiding indiscriminate kissing and love-making, using handkerchiefs correctly and
judiciously, and generally behaving as 'perfect gentlemen and ladies'; and (3) attacking the
pockets of microbial reservoirs that have been prevented from spreading by applying the
correct treatments advised by a doctor and taking the prescribed drugs religiously.
EVALUATION STRATEGIES
Practice Questions
(1) Give the definition and various groups of microorganisms.
(2) Identify various microbial reservoirs.
(3) Explain ways by which microbes can invade a susceptible host.
(4) Differentiate between pathogenicity and normal human flora.
(5) Establish ways or routes of transmission of microbes.
(6) Analyse the etiology and prevention of various microbial diseases.
REFERENCES
Chapter Sixteen
GEOMORPHOLOGY OF AFRICA
Ige, O.O. and Ogunsanwo, O.
Department of Geology and Mineral Sciences, University of Ilorin, Ilorin, Nigeria
INTRODUCTION
The physical environment of an area is as important as the people living within it. It contains
features which are products of natural and/or artificial activities. Such features include hills,
valleys and plain surfaces. They present beautiful landscapes and, in places, are tourist
centres (e.g. Zuma Rock in Nigeria). Almost every state in Nigeria, and every country in
Africa, has its own peculiar landscape.
Therefore, it is important, as students and citizens of Africa, not only to know how the
characteristics of landscapes vary from region to region but also to understand the origin of
the elements (mountains, valleys and river systems) of those landscapes in our physical
environment.
OBJECTIVES
Geomorphology is 'coined' from two words, 'geo' and 'morpho'. 'Geo' means earth, while
'morpho' means shape, setting or arrangement. Hence, geomorphology is the scientific
discipline concerned with the surface features of the earth, including landforms on land and
under the ocean, and the physical, chemical and biological factors, such as weathering,
erosion and wave action, that act on them (Dorothy, 1990). Geomorphology is also defined
as the study of the nature and origin of landforms, particularly of the formative processes of
weathering and erosion that occur in the atmosphere and hydrosphere (Brooks, 2013).
The continent of Africa is a vast landmass with a surface area of 30,310,000 km², bounded by
the Atlantic Ocean on the western and southern margins, the Indian Ocean on the southeastern
margin, the Mediterranean Sea on the northern margin and the Red Sea on the northeastern
margin (Fig. 1). The continent, which forms a very large and relatively uniform landmass of
plateaus, estuaries and inland basins, represents about one-fifth of the total land surface of the
earth and is characterized by interesting landscapes.
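The one-fifth figure quoted above can be checked with a short calculation. Assuming, as is
commonly cited (the figure is not given in this chapter), that the earth's total land surface is
roughly 149,000,000 km²:

\[
\frac{30{,}310{,}000\ \text{km}^2}{149{,}000{,}000\ \text{km}^2} \approx 0.20 \approx \tfrac{1}{5}.
\]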
Figure 1: World Continents showing position of Africa
Volcanism
Volcanism is a term use to describe the movement of hot magma (liquid from melting of
rocks) from the point of generation to the surface through vents (cracks). This process is
usually associated with earthquake, which may be due to collision of lithospheric plates (solid
earth) or any other natural occurrence. When the “hot magma” has dropped in temperature, it
will begin to cool into different shapes of solid materials called volcanoes (Fig. 2). Most
rocks that are seen around us today are products of volcanism and are still undergoing
weathering action.
than it otherwise would. Finally, there are two general trends to explore in relation to rock
composition: rock that contains a relative abundance of silica (SiO2) and aluminum (as
aluminum oxide) will melt at a lower temperature (heat content), while rock that contains a
relative abundance of ferromagnesian ions (Fe, Mg and Ca) will melt at a higher temperature
(heat content).
The melting of continental crust generates felsic magma enriched in silica and aluminum,
while melting of mantle rock (asthenosphere) and oceanic crust forms ferromagnesian-rich,
mafic magma. The earth’s crust naturally contains higher water content (because of its
proximity to the hydrosphere) than the mantle, accounting for higher water (and thus gas)
content in felsic to intermediate magmas. The relatively high content of silica and water in
continental crust also correlates with the lower melting temperatures of felsic to intermediate
magmas. Mantle material melts at greater depth and higher temperatures and pressures, not
requiring as much “assistance” from silica and water in the melting process.
The composition of magma depends on three main factors:
i) degree of partial melting of the crust or mantle;
ii) degree of magma mixing; and
iii) magmatic differentiation by fractional crystallization.
Weathering
Weathering is the group of destructive processes that change the physical and chemical
characteristics of rocks at or near the earth's surface. The process can be mechanical,
chemical or biological. Mechanical weathering is the physical disintegration of rocks into
loose sediments by agents such as temperature changes, pressure release due to unloading of
overlying materials, crystal growth and frost action. In frost action, water that has trickled
into cracks in rocks can freeze and expand by about 9% when the temperature drops below
0 °C (32 °F), as illustrated in the short calculation after this paragraph. The expanding ice
pushes the rock apart, extends the joints and breaks the rock into pieces of different shapes.
Chemical weathering is the decomposition of rocks in the presence of active water, low
temperature, oxygen and CO2. The chemical composition of the new product may be
completely different from that of the parent material. Chemicals such as acid-water solutions
dislodge and dissolve the tightly bound minerals of rocks into new products in the presence
of water and air. Biological weathering involves the actions of microorganisms and plant
roots on soils, further disintegrating them into smaller sizes. The three processes bring about
the formation of soil, which always overlies the rocks. However, the type of weathering
process that occurs at any particular place depends on the climate.
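As a rough check on the 9% expansion figure quoted for frost action above, and taking the
commonly cited densities of liquid water (about 1.00 g/cm³) and ice (about 0.92 g/cm³),
values not stated in this chapter:

\[
\frac{V_{\text{ice}}}{V_{\text{water}}} = \frac{\rho_{\text{water}}}{\rho_{\text{ice}}}
\approx \frac{1.00}{0.92} \approx 1.09,
\]

that is, roughly a 9% increase in volume on freezing, which is what allows the ice to wedge
the rock apart.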
Erosion
This is the removal (transport) of weathered rock materials downslope and away from
their original site of weathering. Erosion processes are driven primarily by the force of
gravity, which may be aided by a flowing medium such as water (e.g. rivers), and ice
(e.g. glaciers), or gravity may act alone (e.g. rock falls). Wind can also remove weathered
materials (e.g. deflation).
During transportation of the weathered rock materials, the angular particles commonly
abrade (rub or scour) the surfaces over which they pass, wearing away and lowering the
rocks. Thus, landslide debris may erode the slope or channel along its course, the
sediments in rivers erode the rocky sections of their beds, and the rock fragments in
glaciers erode the valley floor.
Erosion Processes
These are usually considered under four distinct categories:
Mass Wasting: the processes that occur on slopes under the influence of gravity, in
which water may play a part, although water is not the main transporting medium;
mass wasting is also referred to as landsliding.
Fluvial: the processes that involve flowing water, which can occur within
the soil mass (e.g. soil piping), over the land surface (e.g. rills and gullies),
or in seasonal or permanent channels (e.g. seasonal streams and rivers).
Erosion Controls
The type and magnitude of erosion depends upon several factors including:
Climate: exerts a fundamental control on the types and rates of erosion in
an area, because climate determines the amount and seasonal distribution
of water (rainfall), the temperature (tropical, temperate or polar), and
factors such as the sunshine hours, the wind strengths, and wind patterns.
Topography: mountain areas have a higher elevation and thus greater
potential energy than the lowlands (see the short illustration after this list).
This, combined with the steeper slope angles, results in more dynamic
erosion in upland areas than on the surrounding plains.
Rock Type: the type of rock determines how susceptible an area is to
erosion. Within the same climatic regime, each rock type responds
differently to weathering and erosion, exhibiting a characteristic resistance
or weakness to the prevailing conditions. Thus, some rocks are relatively
resistant and form higher ground, whereas others are less resistant and
form valleys and lowlands.
Rock Structure: highly jointed or faulted rocks are usually more intensely
weathered along the lines of weakness in the rock mass. Consequently,
these softer weathered materials are more easily eroded out, with the result
that river valleys are usually located along the line of a major fault or joint
set.
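To illustrate the point about potential energy made under Topography above, consider a
purely hypothetical example using the relation E = mgh; the mass and heights chosen here
are illustrative only and are not taken from this chapter:

\[
E = mgh:\quad 1\ \text{kg} \times 9.8\ \text{m/s}^2 \times 2{,}000\ \text{m} \approx 19.6\ \text{kJ},
\qquad 1\ \text{kg} \times 9.8\ \text{m/s}^2 \times 200\ \text{m} \approx 1.96\ \text{kJ}.
\]

A rock fragment perched at 2,000 m therefore carries about ten times the potential energy of
the same fragment at 200 m, which is one reason erosion is more dynamic in upland areas
than on the surrounding plains.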
The ultimate result of erosion is to reduce all mountains, ridges, and high ground to a flat
plain (termed a peneplain) that slopes very gently from the centre of a landmass to the
sea.
Landslides
Landsliding, or slope failure, is a general term that encompasses the gravity-controlled,
mass wasting processes that affect hill slopes (earth surface) throughout the world.
Natural Slopes
Under normal circumstances, natural slopes (i.e. slopes that are largely unmodified by
human activities) reach a state of quasi-equilibrium, in which the slope is eroded to an
angle that is relatively stable with regard to the underlying rock type and structure, the
soil type and thickness, the extent and type of the vegetation cover, the surface and
subsurface hydrology, and the prevailing climatic conditions and local weather patterns.
Weathering processes continually act upon the slopes, weakening the underlying rocks.
Groundwater flushes out some of the weathered materials from the joints in the rocks and
from the overlying soils, and hillside streams deepen their channels.
The rocks and soils of the slope progressively become weaker and less stable, so sections
of the slope periodically readjust to a more stable profile by failing (landsliding).
Importantly, if one or other of the factors affecting the slope changes, for example if the
tree cover is removed by fire or forestry, or if an exceptionally heavy rainfall occurs, then
large areas of a hillside may be subject to erosion, including failure (landsliding).
In addition, steep stream courses carry considerable amounts of surface runoff during
heavy rainstorms. This water, and the included debris, can severely erode the stream
channel, destabilising the stream banks and the adjacent slopes, triggering slope failures.
In extreme circumstances, earthquakes may shake an area and loosen large masses of
material, causing landslides or disturbing the previous equilibrium.
Man-made Slopes
Excavating into a hillside, for example to construct a road or building platform, creates a
very steep cutting (a cut slope), which changes the geometry of the original slope, affects
the groundwater regime, and may expose unfavourably oriented joint planes or other lines
of weakness within the rock.
Man-made slopes are, by their very nature, steeper than most natural slopes. They are not
in a natural equilibrium with the profile of the original hillside into which they are
excavated. Consequently, some form of engineering stabilisation works is normally
required.
Failures of man-made slopes primarily occur along joint planes in fresh rock and, in some
cases, along relict joint planes in weathered rock. These discontinuities, which are
commonly clay-filled, present lines of weakness that allow blocks of material to become
detached from the slope when the friction on the plane is overcome, or when the material
that originally supported the toe of the slope is removed.
Figure 3: Geology of Africa showing the Islands (Africa Atlasses, 2007)
These tectonic and volcanic activities, though the continent has been relatively stable since
then, resulted in the several different elevations called hills (mountains) and valleys seen
today (Fig. 3). The relief rises towards the north (to approximately 300 m) from the southern
and western margins. The Saharan (northern) plateaus are dominated by the Hoggar, Darfur
and Tibesti ranges. The relief in Eastern and Southern Africa is more compartmentalized than
anywhere else on the continent. Troughs caused by faults in the bedrock form the rift valleys,
which run roughly north to south for over 4,000 km. The blocks which were raised on either
side formed the upland plateaus, which culminate in mountain ranges. Such mountains
include the Ruwenzori in Uganda (5,119 m), Elgon (4,321 m), Karisimbi (4,507 m), Batu
(4,307 m) and Kilimanjaro (5,895 m). These mountains are products of volcanic activities.
Other important mountains in Africa are briefly discussed below.
Atlas Mountains: This mountain system runs from southwestern Morocco along the
Mediterranean coastline to the eastern edge of Tunisia. Several smaller ranges are included,
namely the High Atlas, Middle Atlas and Maritime Atlas. The highest peak is Mt. Toubkal in
western Morocco at 13,671 ft. (4,167 m).
Ethiopian Highlands: The Ethiopian Highlands are a rugged mass of mountains in Ethiopia,
Eritrea (where they are sometimes referred to as the Eritrean Highlands) and northern Somalia
in the Horn of Africa. They form the largest continuous area of such altitude on the whole
continent, with little of their surface falling below 1,500 m (4,921 ft), while the summits
reach heights of up to 4,550 m (14,928 ft). The region is sometimes called the Roof of Africa
for its height and large area.
Hoggar (Ahaggar) Mountains: The Hoggar Mountains, also known as the Ahaggar, are a
highland region in the central Sahara, in southern Algeria, along the Tropic of Cancer. They
are located about 1,500 km (900 miles) south of the capital, Algiers, and just west of
Tamanghasset. The region is largely rocky desert with an average altitude of more than 900
metres (2,953 ft) above sea level. The highest peak, Mount Tahat, stands at 3,003 metres.
Kalahari Desert: The Kalahari is about 100,000 sq. miles (259,000 sq. km) in size and covers
much of Botswana, the southwestern region of South Africa and all of western Namibia. The
desert plateau is criss-crossed by dry river beds and dense scrub. A few small mountain
ranges are situated here, including the Karas and the Huns. Large herds of wildlife are found
in the Kalahari Gemsbok National Park, located in South Africa near its border with Namibia.
Namib Desert: The Namib is a coastal desert in southern Africa that stretches for more than
2,000 km (1,200 miles) along the Atlantic coasts of Angola, Namibia, and South Africa,
extending southward from the Carunjamba River in Angola, through Namibia and to the
Olifants River in Western Cape, South Africa. From the Atlantic coast eastward, the Namib
gradually ascends in elevation, reaching up to 200 km (120 miles) inland to the foot of the
Great Escarpment.
Annual precipitation ranges from 2 mm (0.079 in) in the most arid regions to 200 mm (7.9 in)
at the escarpment, making the Namib the only true desert in southern Africa. The Namib is
also the oldest desert in the world and its geology consists of sand seas near the coast, while
gravel plains and scattered mountain outcrops occur further inland. The desert's sand dunes,
some of which are 300 m (980 ft) high and 32 km (20 miles) long, are the second largest in
the world after the Badain Jaran Desert dunes in China.
Sahel: The Sahel is a wide stretch of land running completely across north-central Africa,
just on the southern edge of the ever-expanding Sahara Desert. This border region is the
transition zone between the dry areas of the north and the tropical areas of the south. It
receives very little rain (six to eight inches a year), and most of the vegetation is a savanna
growth of sparse grasses and shrubs.
Sahara Desert: Covering almost one-third of the continent (a quick check of this figure
follows this entry), the Sahara is the largest desert in the world at approximately 3,500,000
sq. miles (9,065,000 sq. km) in total size. Its topography includes areas of rock-strewn plains,
rolling sand dunes and numerous sand seas.
It ranges in elevation from 100 ft below sea level to peaks in the Ahaggar and Tibesti
mountains that exceed 11,000 ft (3,350 m). Regional deserts include the Libyan, the Nubian
and the Western Desert of Egypt, just to the west of the Nile. Although the Sahara is almost
completely without rainfall, a few underground rivers flow from the Atlas Mountains,
helping to irrigate isolated oases. In the east, the waters of the Nile help fertilize smaller parts
of the landscape.
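The statement that the Sahara covers almost one-third of the continent can be checked
against the two area figures quoted in this chapter:

\[
\frac{9{,}065{,}000\ \text{km}^2}{30{,}310{,}000\ \text{km}^2} \approx 0.30 \approx \tfrac{1}{3}.
\]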
HYDROGRAPHY AND DRAINAGE
The hydrographic network of the African continent is largely restricted to places with high
rainfall. A vast land area of Africa has no river flowing to the sea (endorheism). Such areas
lie in the arid zone stretching from the Atlantic to the Red Sea, except for the River Nile in
Egypt (6,670 km), which drains to the Mediterranean Sea. In such areas, rain water either
evaporates back to the atmosphere or infiltrates the subsurface.
As in other places in the world, water accumulates in the depressions, basins, troughs and
lakes created by the several rift systems. The lakes are often narrow and sometimes very
deep. For instance, Lake Tanganyika in Tanzania plunges to a depth of 1,435 m, and Lake
Victoria (the largest lake in Africa) has a total area of 83,000 km². Terrain irregularities also
give rise to the rapids and waterfalls associated with the river systems. The water from the
atmosphere, basins, lakes, etc. is ultimately drained into the ocean by the several rivers,
streams and rivulets on the surface of Africa. Some of these rivers drain into Lake Chad,
while the rivers Niger and Benue meet at a confluence at Lokoja and empty into the Atlantic
Ocean through the Niger Delta. The important rivers in Africa include the Nile (6,400 km),
Niger (4,160 km), Congo (4,800 km) and Zambezi (3,000 km) (Fig. 4). The important lakes
in Africa include Kivu (Burundi), Turkana (Kenya), Edward (Rwanda) and Tanganyika
(Tanzania). Others are Lake Malawi, Lake Mweru and Lake Victoria, which also form part of
the drainage system. The following river systems are emphasized because of their great
influence in defining the drainage pattern of Africa.
Congo River Basin: The Congo River Basin of central Africa dominates the landscape of the
Democratic Republic of the Congo and much of neighbouring Congo. In addition, it stretches
into Angola, Cameroon, the Central African Republic and Zambia. The fertile basin is about
1,400,000 sq. miles (3,600,000 sq. km) in size and contains almost 20% of the world's rain
forest. The Congo River is the second longest river in Africa, and its network of rivers,
tributaries and streams helps link the people and cities of the interior.
Great Rift Valley: This is a dramatic depression on the earth's surface, approximately 4,000
miles (6,400 km) in length, which extends from the Red Sea area near Jordan in the Middle
East south to Mozambique. In essence, it is a series of geological faults caused by huge
volcanic eruptions millions of years ago that subsequently created what we now call the
Ethiopian Highlands, and a series of perpendicular cliffs, mountain ridges, rugged valleys
and very deep lakes along its entire length. Many of Africa's highest mountains front the Rift
Valley, including Mount Kilimanjaro, Mount Kenya and Mount Margherita.
Nile River System: The Nile is the longest river in the world; it flows north, rising from the
highlands of eastern Africa and running about 4,160 miles (6,693 km) before draining into
the Mediterranean Sea. In simple terms, it is a series of dams, rapids, streams, swamps,
tributaries and waterfalls. Numerous major rivers make up the overall system, including the
Albert Nile, Blue Nile, Victoria Nile and White Nile.
Figure 4: Drainage Pattern of Africa (Africa Atlasses, 2007)
Like the surface of the land, the floor of the ocean is characterized by several features of
different shapes. The African continent has two margins (edges): the passive and the active
margin. The passive margin is the one that develops on a geologically quiet coast that is
devoid of earthquakes, volcanoes and young mountain belts. The active continental margin,
usually associated with convergent plate boundaries (where two plates collide), is typically
characterized by earthquakes, volcanoes and associated young mountain belts (Plummer et
al., 2007). Hence, morphological features of the ocean floor are restricted to the active
continental margins. Examples of such features include the Mid-Oceanic Ridge, seamounts
and guyots.
Mid-Oceanic Ridge: This is a giant undersea mountain range that extends around the globe
like the seams on a baseball. Along the African plate, it is more than 8,000 km long and
2,500 km wide, and it rises about 3 km above the ocean floor.
Seamounts: These are conical mountains on the sea floor associated with the Mid-Oceanic
Ridge. They occasionally rise above sea level to form islands.
Guyots: These are flat-topped seamounts. The flat tops, which are generally below sea level,
are due to the cutting action of sea waves.
CONCLUSION
Africa is one of the major continents of planet earth. It has a landmass of 30,310,000 km²
with physical characteristics that vary from region to region. The major factors responsible
for the differing geomorphological characteristics are volcanism, landslides, weathering and
erosion. The relief is higher towards the north, with mountains such as Kilimanjaro (5,895 m)
and Mount Kenya (5,199 m) being products of volcanism.
There are different rivers of varying dimensions in Africa. Locally, the rivers Niger and
Benue are good examples of rivers in Africa, but the longest river on the continent is the
River Nile (6,400 km). The ocean floor also has its own morphological peculiarities, with
characteristic features such as seamounts and guyots.
EVALUATION STRATEGIES
Practice Questions
(1) Explain the geomorphological variations among African countries.
(2) List and discuss the factors responsible for geomorphological variations.
(3) Describe important physical features in Africa, especially Nigeria.
(4) Explain the morphological characteristics of the ocean floor.
REFERENCES