THE HISTORY AND EVOLUTION OF THE MICROSCOPE
During that historic period known as the Renaissance, after the "dark" Middle
Ages, there occurred the inventions of printing, gunpowder and the
mariner's compass, followed by the discovery of America. Equally remarkable was
the invention of the light microscope: an instrument that enables the human eye,
by means of a lens or combinations of lenses, to observe enlarged images of tiny
objects. It made visible the fascinating details of worlds within worlds.
Invention of Glass Lenses
Long before, in the hazy unrecorded past, someone picked up a piece of
transparent crystal thicker in the middle than at the edges, looked through it, and
discovered that it made things look larger. Someone also found that such a crystal
would focus the sun's rays and set fire to a piece of parchment or cloth. Magnifiers
and "burning glasses" or "magnifying glasses" are mentioned in the writings of
Seneca and Pliny the Elder, Roman philosophers during the first century A. D., but
apparently they were not used much until the invention of spectacles, toward the
end of the 13th century. They were named lenses because they are shaped like the
seeds of a lentil.
The earliest simple microscope was merely a tube with a plate for the object at one
end and, at the other, a lens which gave a magnification less than ten diameters --
ten times the actual size. These excited general wonder when used to view fleas or
tiny creeping things and so were dubbed "flea glasses."
Birth of the Light Microscope
About 1590, two Dutch spectacle makers, Zaccharias Janssen and his son Hans,
while experimenting with several lenses in a tube, discovered that nearby objects
appeared greatly enlarged. That was the forerunner of the compound microscope
and of the telescope. In 1609, Galileo, father of modern physics and astronomy,
heard of these early experiments, worked out the principles of lenses, and made a
much better instrument with a focusing device.
Anton van Leeuwenhoek (1632-1723)
The father of microscopy, Anton van Leeuwenhoek of Holland, started as an
apprentice in a dry goods store where magnifying glasses were used to count the
threads in cloth. He taught himself new methods for grinding and polishing tiny
lenses of great curvature which gave magnifications up to 270 diameters, the finest
known at that time. These led to the building of his microscopes and the biological
discoveries for which he is famous. He was the first to see and describe bacteria,
yeast plants, the teeming life in a drop of water, and the circulation of blood
corpuscles in capillaries. During a long life, he used his lenses to make pioneer
studies on an extraordinary variety of things, both living and non-living, and
reported his findings in over a hundred letters to the Royal Society of England and
the French Academy.
Robert Hooke
Robert Hooke, the English father of microscopy, confirmed Anton van
Leeuwenhoek's discoveries of the existence of tiny living organisms in a drop of
water. Hooke made a copy of Leeuwenhoek's light microscope and then improved
upon his design.
Charles A. Spencer
After that, few major improvements were made until the middle of the 19th century.
Then several European countries began to manufacture fine optical equipment but
none finer than the marvelous instruments built by the American, Charles A.
Spencer, and the industry he founded. Present day instruments, changed but little,
give magnifications up to 1250 diameters with ordinary light and up to 5000 with
blue light.
Beyond the Light Microscope
A light microscope, even one with perfect lenses and perfect illumination, simply
cannot be used to distinguish objects that are smaller than half the wavelength of
light. White light has an average wavelength of 0.55 micrometers, half of which is
0.275 micrometers. (One micrometer is a thousandth of a millimeter, and there are
about 25,000 micrometers to an inch. Micrometers are also called microns.) Any
two lines that are closer together than 0.275 micrometers will be seen as a single
line, and any object with a diameter smaller than 0.275 micrometers will be
invisible or, at best, show up as a blur. To see tiny particles under a microscope,
scientists must bypass light altogether and use a different sort of "illumination,"
one with a shorter wavelength.
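To make the arithmetic here concrete, the short Python sketch below simply reproduces the numbers in this paragraph; the half-wavelength rule of thumb and the 25,000-micrometers-per-inch figure are taken from the text above rather than from a separate optics reference.

    # A minimal sketch of the resolution arithmetic described above.
    WHITE_LIGHT_WAVELENGTH_UM = 0.55   # average wavelength of white light, in micrometers
    MICROMETERS_PER_INCH = 25_000      # approximate figure used in the text

    limit_um = WHITE_LIGHT_WAVELENGTH_UM / 2   # resolving limit: half the wavelength
    limit_mm = limit_um / 1_000                # one micrometer is a thousandth of a millimeter

    print(f"Resolving limit: {limit_um} micrometers")    # 0.275
    print(f"               = {limit_mm} millimeters")    # 0.000275
    print(f"               = about 1/{MICROMETERS_PER_INCH / limit_um:,.0f} of an inch")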
The Electron Microscope
The introduction of the electron microscope in the 1930s filled the bill. It was co-invented by the Germans Max Knoll and Ernst Ruska in 1931, and Ruska was awarded half of the Nobel Prize in Physics in 1986 for the invention. (The other half of the prize was divided between Heinrich Rohrer and Gerd Binnig for the scanning tunneling microscope, or STM.)
In this kind of microscope, electrons are speeded up in a vacuum until their
wavelength is extremely short, only one hundred-thousandth that of white light.
Beams of these fast-moving electrons are focused on a cell sample and are
absorbed or scattered by the cell's parts so as to form an image on an electron-
sensitive photographic plate.
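The "one hundred-thousandth" figure can be checked roughly with the standard non-relativistic de Broglie relation for an accelerated electron. Note that the 50 kV accelerating voltage in the sketch below is an illustrative assumption, not a value given in the text.

    import math

    H = 6.626e-34     # Planck's constant, J*s
    M_E = 9.109e-31   # electron mass, kg
    Q_E = 1.602e-19   # elementary charge, C

    def electron_wavelength_m(voltage_v):
        """Non-relativistic de Broglie wavelength of an electron accelerated through voltage_v."""
        return H / math.sqrt(2 * M_E * Q_E * voltage_v)

    wavelength = electron_wavelength_m(50_000)   # assumed 50 kV accelerating voltage
    white_light = 0.55e-6                        # average wavelength of white light, meters

    print(f"Electron wavelength: {wavelength * 1e12:.1f} picometers")        # about 5.5
    print(f"Ratio to white light: about 1/{white_light / wavelength:,.0f}")  # roughly 1/100,000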
Power of the Electron Microscope
If pushed to the limit, electron microscopes can make it possible to view objects as
small as the diameter of an atom. Most electron microscopes used to study
biological material can "see" down to about 10 angstroms--an incredible feat, for
although this does not make atoms visible, it does allow researchers to distinguish
individual molecules of biological importance. In effect, it can magnify objects up
to 1 million times. Nevertheless, all electron microscopes suffer from a serious
drawback. Since no living specimen can survive under their high vacuum, they
cannot show the ever-changing movements that characterize a living cell.
Light Microscope vs. Electron Microscope
Using an instrument the size of his palm, Anton van Leeuwenhoek was able to
study the movements of one-celled organisms. Modern descendants of van
Leeuwenhoek's light microscope can be over 6 feet tall, but they continue to be
indispensable to cell biologists because, unlike electron microscopes, light
microscopes enable the user to see living cells in action. The primary challenge for
light microscopists since van Leeuwenhoek's time has been to enhance the contrast
between pale cells and their paler surroundings so that cell structures and
movement can be seen more easily. To do this they have devised ingenious
strategies involving video cameras, polarized light, digitizing computers, and other
techniques that are yielding vast improvements in contrast, fueling a renaissance in light microscopy.
CRITIQUE:
I do not see any negativity in this invention; rather, I have come to appreciate how this instrument was discovered and how it has evolved through time. Having heard the presentation given by the group that reported on the history and evolution of the microscope makes me wish I could go back to those olden times and witness how the microscope was discovered, and even travel through time to see how it evolved. The microscope is a very useful instrument today. It has helped us in so many ways, not just by allowing us to see the smallest objects that lie beyond what our naked eyes can see; I would say its greatest contribution is helping us see and identify the different microbes, the bacteria, the viruses, and so on, that cause our illnesses. I think the greatest help of the microscope is that it lengthens our lives. Without it we would die younger and faster, without any proper cure for microbial and viral diseases, because it would be very hard for us to determine the kind of illness attacking human beings and animals. This invention is truly useful, and I am thankful to those people in olden times who were granted wisdom by the Lord to discover and invent this important apparatus. They must have truly lived their lives to the fullest by contributing an instrument so useful then, now, and in the future.
THE HISTORY AND EVOLUTION OF THE TELESCOPE
Phoenicians cooking on sand first discovered glass around 3500 BCE, but it took
another 5,000 years or so before glass was shaped into a lens to create the first
telescope. Hans Lippershey of Holland is often credited with the invention early in the 17th century. He almost certainly wasn’t the first to make one, but
he was the first to make the new device widely known.
Galileo’s Telescope
The telescope was introduced to astronomy in 1609 by the great Italian
scientist Galileo Galilei -- the first man to see the craters on the moon. He went on
to discover sunspots, the four large moons of Jupiter and the rings of Saturn. His
telescope was similar to opera glasses. It used an arrangement of glass lenses to
magnify objects. This provided up to 30 times magnification and a narrow field of
view, so Galileo could see no more than a quarter of the moon's face without
repositioning his telescope.
Sir Isaac Newton’s Design
Sir Isaac Newton introduced a new concept in telescope design in 1704. Instead of
glass lenses, he used a curved mirror to gather light and reflect it back to a point of
focus. This reflecting mirror acted like a light-collecting bucket -- the bigger the
bucket, the more light it could collect.
Improvements to the First Designs
The Short telescope was created by Scottish optician and astronomer James Short in 1740. It featured the first truly parabolic, elliptic, virtually distortion-free mirror, ideal for reflecting telescopes. James Short built over 1,360 telescopes.
The reflector telescope that Newton designed opened the door to magnifying
objects millions of times, far beyond what could ever be achieved with a lens, but
others tinkered with his invention over the years, trying to improve it. Newton’s
fundamental principle of using a single curved mirror to gather in light remained
the same, but ultimately, the size of the reflecting mirror was increased from the
six-inch mirror used by Newton to a 6-meter mirror -- 236 inches in diameter -- installed at the Special Astrophysical Observatory in Russia, which opened in 1974.
Segmented Mirrors
The idea of using a segmented mirror dates back to the 19th century, but
experiments with it were few and small. Many astronomers doubted its viability.
The Keck Telescope finally pushed technology forward and brought this innovative design into reality.
The Introduction of Binoculars
The binocular is an optical instrument consisting of two similar telescopes, one for
each eye, mounted on a single frame. When Hans Lippershey first applied for a
patent on his instrument in 1608, he was actually asked to build a binocular
version. He reportedly did so late that year. Box-shaped binocular terrestrial
telescopes were produced in the second half of the 17th century and the first half
of the 18th century by Cherubin d’Orleans in Paris, Pietro Patroni in Milan and
I.M. Dobler in Berlin. These were not successful because of their clumsy handling
and poor quality.
Credit for the first real binocular telescope goes to J. P. Lemiere who devised one
in 1825. The modern prism binocular began with Ignazio Porro's 1854 Italian
patent for a prism erecting system.
CRITIQUE:
The telescope is a very important invention. If Galileo had not turned the telescope to the skies in the early 17th century, he would not have been able to catch a glimpse of what is in outer space; and because Galileo was able to see the craters of the moon, the sunspots, the four large moons of Jupiter, and the rings of Saturn, modern astronomers up to today have continued to search for and identify the objects still waiting to be discovered in outer space. The early astronomers were the foundation of the astronomy of today.
Learning about the evolution of the telescope, and even the creation of binoculars as well, makes me wonder at how awesome these early people were, because they were the first to seek understanding of the need for such an instrument to further explain the universe we are living in. Kudos to the early astronomers who invented the telescope and who ignited the passion of modern astronomers to investigate further and develop instruments that could explain more of what is going on in our universe. I believe that even the idea of inventing spacecraft was actually founded on what those early astronomers contributed.
THE HISTORY AND EVOLUTION OF THE COMPUTER
Throughout human history, the closest thing to a computer was the abacus, which
is actually considered a calculator since it required a human operator. Computers,
on the other hand, perform calculations automatically by following a series of built-
in commands called software.
In the 20th century, breakthroughs in technology allowed for the ever-evolving computing machines we see today. But even prior to the advent of microprocessors and supercomputers, there were certain notable scientists and inventors who
helped lay the groundwork for a technology that has since drastically reshaped our
lives.
The Language Before the Hardware
The universal language in which computers carry out processor instructions originated in the 17th century in the form of the binary numeral system. Developed
by German philosopher and mathematician Gottfried Wilhelm Leibniz, the system
came about as a way to represent decimal numbers using only two digits, the
number zero and the number one. His system was partly inspired by philosophical
explanations in the classical Chinese text the “I Ching,” which understood the
universe in terms of dualities such as light and darkness and male and female.
While there was no practical use for his newly codified system at the time, Leibniz
believed that it was possible for a machine to someday make use of these long
strings of binary numbers.
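A small Python sketch of the idea Leibniz described, representing ordinary decimal numbers with nothing but zeros and ones (the repeated-division method shown here is the standard conversion, chosen purely for illustration):

    def to_binary(n):
        """Return the base-2 (binary) representation of a non-negative integer."""
        if n == 0:
            return "0"
        digits = []
        while n > 0:
            digits.append(str(n % 2))   # the remainder is the next binary digit
            n //= 2
        return "".join(reversed(digits))

    for decimal in (0, 1, 2, 5, 10, 42):
        print(decimal, "->", to_binary(decimal))   # e.g. 10 -> 1010, 42 -> 101010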
In 1847, English mathematician George Boole introduced a newly devised
algebraic language built on Leibniz's work. His “Boolean algebra” was actually a
system of logic, with mathematical equations used to represent statements in logic.
Just as important was that it employed a binary approach in which the relationship
between different mathematical quantities would be either true or false, 0 or
1. And though there was no obvious application for Boole’s algebra at the time,
another mathematician, Charles Sanders Peirce, spent decades expanding the system and eventually found in 1886 that the calculations could be carried out with electrical switching circuits. And in time, Boolean logic would become
instrumental in the design of electronic computers.
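The connection Peirce noticed is easy to see in miniature: Boole's true-or-false operations behave exactly like switches wired in series or in parallel. A small illustrative sketch in Python:

    def AND(a, b):    # two switches in series: current flows only if both are closed
        return a & b

    def OR(a, b):     # two switches in parallel: current flows if either is closed
        return a | b

    def NOT(a):       # an inverter: closed becomes open and vice versa
        return 1 - a

    # Truth table for the statement (a AND b) OR (NOT a), with 1 = true and 0 = false
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", OR(AND(a, b), NOT(a)))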
The Earliest Processors
English mathematician Charles Babbage is credited with having assembled the
first mechanical computers — at least technically speaking. His early 19th century
machines featured a way to input numbers, memory, a processor and a way to
output the results. The initial attempt to build the world’s first computer, which he
called the “difference engine,” was a costly endeavor that was all but abandoned
after over 17,000 pounds sterling was spent on its development. The design called
for a machine that calculated values and printed the results automatically onto a
table. It was to be hand cranked and would have weighed four tons. The project
was eventually axed after the British government cut off Babbage’s funding in
1842.
This forced the inventor to move on to another idea of his called the analytical
engine, a more ambitious machine for general purpose computing rather than just
arithmetic. And though he wasn’t able to follow through and build a working
device, Babbage’s design featured essentially the same logical structure as
electronic computers that would come into use in the 20th century. The analytical
engine had, for instance, integrated memory, a form of information storage found
in all computers. It also allowed for branching, or the ability of computers to execute
a set of instructions that deviate from the default sequence order, as well as loops,
which are sequences of instructions carried out repeatedly in succession.
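The branching and loops attributed to the analytical engine are the same constructs every modern programming language exposes; a trivial illustration in Python:

    def classify_numbers(numbers):
        results = []
        for n in numbers:          # loop: the same instructions repeat for each value
            if n % 2 == 0:         # branch: execution deviates from the default sequence
                results.append(f"{n} is even")
            else:
                results.append(f"{n} is odd")
        return results

    print(classify_numbers([1, 2, 3, 4]))   # ['1 is odd', '2 is even', '3 is odd', '4 is even']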
Despite his failures to produce a fully functional computing machine, Babbage
remained steadfastly undeterred in pursuing his ideas. Between 1847 and 1849, he
drew up designs for a new and improved second version of his difference engine.
This time it calculated decimal numbers up to thirty digits long, performed calculations more quickly, and was meant to be simpler, as it required fewer parts.
Still, the British government did not find it worth their investment. In the end, the
most progress Babbage ever made on a prototype was completing one-seventh of
his first difference engine.
During this early era of computing, there were a few notable achievements. A tide-
predicting machine, invented by Scotch-Irish mathematician, physicist and
engineer Sir William Thomson in 1872, was considered the first modern analog
computer. Four years later, his older brother James Thomson came up with a
concept for a computer that solved math problems known as differential equations.
He called his device an “integrating machine” and in later years it would serve as
the foundation for systems known as differential analyzers. In 1927, American
scientist Vannevar Bush started development of the first machine to bear that name and published a description of his new invention in a scientific journal in 1931.
Dawn of Modern Computers
Up until the early 20th century, the evolution of computing was little more than
scientists dabbling in the design of machines capable of efficiently performing various kinds of calculations for various purposes. It wasn’t until 1936 that a unified theory
on what constitutes a general purpose computer and how it should function was
finally put forth. That year, English mathematician Alan Turing published a paper
called titled "On computable numbers, with an application to the Entscheidungs
problem," which outlines how a theoretical device called a “Turing machine” can
be used to carry out any conceivable mathematical computation by executing
instructions. In theory, the machine would have limitless memory, read data, write
results and store a program of instructions.
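A minimal sketch of the kind of device the paper describes: a tape, a read/write head, and a table of rules mapping (state, symbol) to (new symbol, move, new state). The particular rule table below, which just flips 0s and 1s, is invented for illustration and is not taken from Turing's paper.

    def run(tape, rules, state="start", blank="_"):
        """Run a tiny Turing machine until it reaches the 'halt' state."""
        tape = list(tape)
        head = 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            new_symbol, move, state = rules[(state, symbol)]
            if head < len(tape):
                tape[head] = new_symbol
            else:
                tape.append(new_symbol)
            head += 1 if move == "R" else -1
        return "".join(tape)

    FLIP_RULES = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run("0110", FLIP_RULES))   # -> 1001_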
While Turing’s computer was an abstract concept, it was a German engineer
named Konrad Zuse who would go on to build the world’s first programmable
computer. His first attempt at developing a computing machine, the Z1, was a
binary-driven calculator that read instructions from punched 35-millimeter film.
The problem was the technology was unreliable, so he followed it up with the Z2, a
similar device that used electromechanical relay circuits. However, it was in
assembling his third model that everything came together. Unveiled in 1941, the
Z3 was faster, more reliable and better able to perform complicated calculations.
But the big difference was that the instructions were stored on external tape,
allowing it to function as a fully operational program-controlled system.
What’s perhaps most remarkable is that Zuse did much of his work in isolation. He
had been unaware that the Z3 was Turing complete, or in other words, capable of
solving any computable mathematical problem — at least in theory. Nor did he
have any knowledge of other similar projects that were taking place around the
same time in other parts of the world. Among the most notable was the IBM-
funded Harvard Mark I, which debuted in 1944. More promising, though, was the
development of electronic systems such as Great Britain’s 1943 computing
prototype Colossus and the ENIAC, the first fully-operational electronic general-
purpose computer that was put into service at the University of Pennsylvania in
1946.
Out of the ENIAC project came the next big leap in computing technology. John
Von Neumann, a Hungarian mathematician who had consulted on the ENIAC project,
would lay the groundwork for a stored program computer. Up to this point,
computers operated on fixed programs, and altering their function, say from performing calculations to word processing, required manually rewiring and restructuring them. The ENIAC, for example, took several days to reprogram.
Turing had earlier proposed storing the program in memory, which would allow it to be modified by the computer itself. Von Neumann was intrigued by the
concept and in 1945 drafted a report that provided in detail a feasible architecture
for stored program computing.
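A toy sketch of the stored-program idea: instructions and data sit side by side in one memory, and a fetch-execute loop reads instructions out of that same memory. The three-opcode instruction set below is invented purely for illustration.

    def run(memory):
        acc = 0    # a single accumulator register
        pc = 0     # program counter: address of the next instruction in memory
        while True:
            op, arg = memory[pc]
            pc += 1
            if op == "LOAD":     # copy a value from memory into the accumulator
                acc = memory[arg]
            elif op == "ADD":    # add a value from memory to the accumulator
                acc += memory[arg]
            elif op == "HALT":
                return acc

    # Program and data share the same memory (here, a plain Python list).
    memory = [
        ("LOAD", 4),   # address 0: acc = memory[4]
        ("ADD", 5),    # address 1: acc += memory[5]
        ("HALT", 0),   # address 2: stop
        None,          # address 3: unused
        40,            # address 4: data
        2,             # address 5: data
    ]

    print(run(memory))   # -> 42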
His published paper would be widely circulated among competing teams of
researchers working on various computer designs. And in 1948, a group in England
introduced the Manchester Small-Scale Experimental Machine, the first computer
to run a stored program based on the Von Neumann architecture. Nicknamed
“Baby,” the Manchester Machine was an experimental computer and served as the
predecessor to the Manchester Mark I. The EDVAC, the computer design for which
Von Neumann’s report was originally intended, wasn’t completed until 1949.
Transitioning Toward Transistors
The first modern computers were nothing like the commercial products used by
consumers today. They were elaborate hulking contraptions that often took up the
space of an entire room. They also sucked enormous amounts of energy and were
notoriously buggy. And since these early computers ran on bulky vacuum tubes,
scientists hoping to improve processing speeds would either have to find bigger
rooms or come up with an alternative.
Fortunately, that much-needed breakthrough had already been in the works. In
1947, a group of scientists at Bell Telephone Laboratories developed a new
technology called point-contact transistors. Like vacuum tubes, transistors amplify
electrical current and can be used as switches. But more importantly, they were
much smaller (about the size of a pill), more reliable and used much less power
overall. The co-inventors John Bardeen, Walter Brattain, and William Shockley
would eventually be awarded the Nobel Prize in physics in 1956.
And while Bardeen and Brattain continued doing research work, Shockley moved
to further develop and commercialize transistor technology. One of the first hires
at his newly founded company was an electrical engineer named Robert Noyce,
who eventually split off and formed his own firm, Fairchild Semiconductor, a
division of Fairchild Camera and Instrument. At the time, Noyce was looking into
ways to seamlessly combine the transistor and other components into one
integrated circuit to eliminate the process in which they were pieced together by
hand. Jack Kilby, an engineer at Texas Instruments, also had the same idea and
ended up filing a patent first. It was Noyce’s design, however, that would be widely
adopted.
Where integrated circuits had the most significant impact was in paving the way
for the new era of personal computing. Over time, it opened up the possibility of
running processes powered by millions of circuits, all on a microchip the size of a postage stamp. In essence, it's what has made our ubiquitous handheld gadgets far more powerful than the earliest computers.
CRITIQUE:
The invention of the computer is one of the most technical inventions in history. As the reporters shared last time, the first computers were as huge as a room; then, as they evolved through time, they became smaller and smaller. Today, in our generation, we have laptops, netbooks, tablets, and even our handy smartphones. Aside from that, the cost of computers has also evolved: the oldest ones were the most expensive, and the more recent ones are less expensive. We are truly blessed today. The people who started the invention of the computer were so awesome! They placed their hearts and lives into the discovery of such a device. They were truly amazing! Our lives today are much more convenient because of the invention of the computer. As a teacher, my work has become lighter, the preparation of IMs has become easier, and the reports and other visuals needed in classroom structuring have become much easier to accomplish. Students today present their reports in an easier way. I can still remember my high school days when computers were scarce and we had to do more work by hand. Researching information on our topics would confine us to the libraries for hours and would even take days, just to find the right books as resources. But today, it takes only one touch to Google whatever inquiry we have. Moreover, booking for travel takes only minutes with the use of a smartphone. Hospitals have also benefited a lot; computerized instruments have contributed much to saving lives. Transportation today, especially in developed countries, is computer operated. Computers are everywhere. It seems we would be handicapped without them. Oh, how greatly they have helped us and affected our daily lives!
However, my greatest concern nowadays is the development of artificial intelligence, or AI. Somehow I feel uneasy about it. Questions come to mind, such as: what will people do when we are surrounded by these AIs? It seems to me that the time will come when we will be very dependent on them. There was one video I saw on YouTube where a certain store uses AI to do the manual labor of people, and the inventor said that one day people will no longer do the cashiering in stores because they will be replaced by AI. It sounds fascinating at first, but it makes me worried, because if that is the case, what will human beings do? What if someday we are totally replaced by AI?
Humans are amazingly made by God. We are capable of inventing wonderful things. We must ponder that, yes, the Lord has given us the free will to decide what to do with the life that He gave us. I think we should take time to examine the motives in our hearts before we do anything, whether in little things or in great things. Why are we doing such a thing? Is it out of need or greed? After all these amazing, fascinating, and awesome inventions, I still want to end in prayer and ask for guidance from the author of our lives, the creator of all things, the beginning and the end, and say: Thy will be done, dear Lord!