SESSION: 2023-2024
CERTIFICATE
I, Yuvraj Soni, student of 3rd Semester B.Tech in Computer Science and Engineering,
Sobhasaria Group of Institutions, Sikar, hereby declare that the seminar entitled
"Quantum Computing" has been carried out by me and submitted in partial fulfilment
of the requirements for the IV Semester of the degree of Bachelor of Technology
in Computer Science and Engineering of Bikaner Technical University, Bikaner, during the
academic year 2024-2025.
This is an opportunity to express my heartfelt thanks to the people who were part of this seminar
in numerous ways, people who gave me unending support right from the beginning of the
seminar.
I am grateful to the seminar coordinator Mr. D K Agarwal for giving guidelines to
make the seminar successful. Without his guidance and persistent help this report would not
have been possible. I must also acknowledge the faculty and staff of Computer Science and
Engineering at Sobhasaria Group of Institutions, Sikar.
I extend my thanks to Mr. Dileep K Agarwal, Head of the Department, for his cooperation
and guidance.
I also give sincere thanks to the principal, Dr. L. Solanki, for his valuable support.
Yours Sincerely,
Yuvraj Soni
23ESGAI028
Abstract
Quantum theory is one of the most successful theories to have influenced the course of
scientific progress during the twentieth century. It has presented a new line of scientific
thought, predicted entirely inconceivable situations and influenced several domains of
modern technology. There are many different ways of expressing the laws of science in
general and the laws of physics in particular. Like the physical laws of nature, information
can also be expressed in different ways. The fact that information can be expressed in
different ways without losing its essential nature leads to the possibility of the automatic
manipulation of information.
All ways of expressing information use a physical system: spoken words, for example, are
conveyed by air pressure fluctuations. "No information without physical representation." The
fact that information is insensitive to exactly how it is expressed, and can be freely translated
from one form to another, makes it an obvious candidate for a fundamentally important role in
physics, like interaction, energy, momentum and other such abstractions. This is a project
report on the general attributes of quantum computing and information processing from a
layman's point of view.
Contents
2 Literature Survey of Quantum Computing
  Quantum Teleportation
3.2 Introduction of EPR correlation term in Expansion Theorem
4.2 Need for Modified Coulomb Potential and its analysis
4.3 Analysis of Quantum Dots using Modified Coulomb Potential
4.4 Study of Quantum Wires using Modified Coulomb Potential
4.5 Visit to Nano Technology Lab in Barkatullah University, Bhopal
References
CHAPTER 1
1.0 INTRODUCTION
With the development of science and technology, leading to the advancement of
civilization, new ways were discovered to exploit various physical resources such as
materials, forces and energies. The history of computer development represents the
culmination of years of technological advancement, beginning with the early ideas of
Charles Babbage and the eventual creation of the first computer by the German engineer
Konrad Zuse in 1941. The whole process involved a sequence of changes from one type of
physical realization to another: from gears to relays to valves to transistors to integrated
circuits to chips and so on. Surprisingly, however, the high-speed modern computer is
fundamentally no different from its gargantuan 30-ton ancestors, which were equipped with
some 18,000 vacuum tubes and 500 miles of wiring. Although computers have become more
compact and considerably faster in performing their task, the task remains the same: to
manipulate and interpret an encoding of binary bits into a useful computational result.
The number of atoms needed to represent a bit of memory has been decreasing
exponentially since 1950. An observation by Gordon Moore in 1965 laid the foundation
for what came to be known as "Moore's Law": that computer processing power doubles
every eighteen months. If Moore's Law is extrapolated naively into the future, sooner or
later each bit of information will have to be encoded by a physical system of subatomic
size. As a matter of fact this point is substantiated by the survey made by Keyes in 1988,
as shown in fig. 1. The plot shows the number of electrons required to store a single bit of
information; an extrapolation of the plot suggests that we might be within reach of
atomic-scale computation within a decade or so.
Fig. 1: Number of electrons (impurities) required to store a single bit of information,
plotted on a logarithmic scale from 10^2 to 10^14 (after Keyes, 1988).
Matter obeys the rules of quantum mechanics, which are quite different from the
classical rules that determine the properties of conventional logic gates. So if computers
are to become smaller in the future, new quantum technology must replace or supplement
what we have now. Notwithstanding this, quantum technology can offer much more than
just cramming more and more bits onto silicon and multiplying the clock speed of
microprocessors. It can support an entirely new kind of computation, with quantitatively as
well as qualitatively new algorithms based on the principles of quantum mechanics.
With the size of components in classical computers shrinking to the point where their
behaviour is dominated more by quantum theory than by classical theory, researchers have
begun investigating the potential of these quantum behaviours for computation.
Surprisingly, it seems that a computer whose components all function in a quantum way is
more powerful than any classical computer can be. It is the physical limitations of the
classical computer, and the possibility that a quantum computer could perform certain
useful tasks more rapidly than any classical computer, that drive the study of quantum
computing.
A computer whose memory is exponentially larger than its apparent physical size; a
computer that can manipulate an exponential set of inputs simultaneously, a whole new
concept in parallelism; a computer that computes in the twilight (spacelike) zone of Hilbert
space (or possibly a higher space, Grassmann space, and so on): this is a quantum computer.
Relatively few and simple concepts from quantum mechanics are needed to make quantum
computers a possibility. The subtlety has been in learning to manipulate these concepts.
Whether such a computer is an inevitability, or will prove too difficult to build, is a
million-dollar question.
CHAPTER 2
LITERATURE SURVEY
The idea of a computational device based on quantum mechanics was first explored in
the 1970s and early 1980s by physicists and computer scientists such as Charles H.
Bennett of the IBM Thomas J. Watson Research Center, Paul A. Benioff of Argonne
National Laboratory in Illinois, David Deutsch of the University of Oxford and Richard
P. Feynman of Caltech. The idea emerged when scientists were pondering the
fundamental limits of computation. In 1982 Feynman was among the first to attempt to
conceive a new kind of computer, devised on the principles of quantum physics. He
constructed an abstract model to show how a quantum system could be used to do
computations, and also explained how such a machine would be able to act as a simulator
for physical problems pertaining to quantum physics. In other words, a physicist would
have the ability to carry out experiments in quantum physics inside a quantum mechanical
computer. Feynman further argued that quantum computers can solve quantum mechanical
many-body problems that are impractical to solve on a classical computer. This is due to
the fact that solutions on a classical computer would require exponentially growing time,
whereas the whole calculation on a quantum computer can be done in polynomial time.
Later, in 1985, Deutsch realized that Feynman's assertion could eventually lead to a
general-purpose quantum computer. He showed that any physical process could, in
principle, be modelled perfectly by a quantum computer. Thus, a quantum computer would
have capabilities far beyond those of any traditional classical computer. Consequently,
efforts were made to find interesting applications for such a machine; these did not lead to
much success beyond a few contrived mathematical problems. Then in 1994 Peter Shor set
out a method for using quantum computers to crack an important problem in number
theory, namely factorisation. He showed how an ensemble of mathematical operations,
designed specifically for a quantum computer, could be organized to make such a machine
factor huge numbers extremely rapidly, much faster than is possible on conventional
computers. With this breakthrough, quantum computing was transformed from a mere
academic curiosity into an interest the world over.
Perhaps the most astonishing fact about quantum computing is that it took so long to
take off. Physicists have known since the 1920s that the world of subatomic particles is a
realm apart, but it took computer scientists another half century to begin wondering
whether quantum effects might be harnessed for computation. The answer was far from
obvious.
2.2 LIMITATIONS OF CLASSICAL COMPUTERS AND THE BIRTH OF QUANTUM
COMPUTING
The best known classical factoring algorithms scale exponentially with the input size
log N (log N determines the length of the input; the base of the logarithm is determined by
our numbering system, so base 2 gives the length in binary, base 10 in decimal, and so on).
For example, in 1994 a 129-digit number (known as RSA-129) was successfully factored
using such an algorithm on approximately 1600 workstations scattered around the world;
the entire factorisation took eight months. Using this to estimate the prefactor of the
exponential scaling, it is found that it would take roughly 800,000 years to factor a
250-digit number with the same computing power, and a 1000-digit number would require
10^25 years (much longer than the age of the universe). The difficulty of factoring large
numbers is crucial for public key cryptography, such as that used in banks, where numbers
of around 250 digits are common. Using the trial division method, about 10^(L/2) (= √N)
divisions are needed to factor an L-digit number N, an exponential increase as a function of
L. Suppose a computer performs 10^10 divisions per second: it would then factor a
100-digit number in about 10^40 seconds, much longer than 3.8 × 10^17 seconds (about
12 billion years), the currently estimated age of the universe!
2.2.2 Quantum Factoring
From the analysis of classical factoring of big integers, it seems that factoring big
numbers will remain beyond the capabilities of any realistic computing device, and that
unless mathematicians or computer scientists come up with an efficient factoring
algorithm, public key cryptosystems will remain secure. However, it turns out that this is
not the case. The classical theory of computation is not complete, simply because it does
not describe all physically possible computations. In particular, it does not describe
computations which can be performed by quantum devices. Indeed, recent work in
quantum computation shows that a quantum computer can factor much faster than any
classical computer. According to an algorithm developed by Peter Shor, factoring an
integer N on a quantum computer runs in O((ln N)^(2+ε)) steps, where ε is small. This is
roughly quadratic in the input size, so factoring a 1000-digit number with such an
algorithm would require only a few million steps. The implication is that public key
cryptosystems based on factoring may be breakable.
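These scaling estimates can be checked with a quick back-of-the-envelope calculation. The sketch below compares the exponents of the two step counts; the absence of a constant prefactor in the Shor estimate is an illustrative simplification, not a measured value.

```python
import math

def log10_trial_division_steps(digits):
    # Trial division needs about sqrt(N) = 10**(digits/2) divisions
    return digits / 2

def log10_shor_steps(digits):
    # Shor's algorithm: roughly (ln N)^2 steps for a `digits`-digit N
    return 2 * math.log10(digits * math.log(10))

for d in (100, 250, 1000):
    print(f"{d}-digit number: classical ~10^{log10_trial_division_steps(d):.0f} "
          f"divisions, Shor ~10^{log10_shor_steps(d):.1f} steps")
```

For a 1000-digit number this gives about 10^500 classical divisions against a few million (about 10^6.7) quantum steps, matching the figures quoted above.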
2.3 QUANTUM COMPUTING: A WHOLE NEW CONCEPT IN PARALLELISM
In a true sense, a parallel computer would have simultaneity built into its very nature. It
would be able to carry out many operations at once, to search instantly through a long list
of possibilities and point out the one that solves the problem. Such computers do exist.
They are called quantum computers. The most exciting feature of quantum computing is
quantum parallelism. A quantum system is in general not in one "classical state" but in a
"quantum state" consisting, broadly speaking, of a superposition of many classical or
classical-like states. This is the principle of linear superposition used to construct quantum
states. If the superposition can be protected from unwanted entanglement with its
environment (known as decoherence), a quantum computer can output results depending
on the details of all its classical-like states. This is quantum parallelism: parallelism on a
serial machine.
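Quantum parallelism can be illustrated with a small state-vector simulation. The numpy sketch below (the choice of 3 qubits and of the marked state |101> is arbitrary) builds a uniform superposition and shows that a single matrix-vector product acts on all 2^n amplitudes at once:

```python
import numpy as np

n = 3                                            # 3 qubits -> 8 basis states
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

# Build H (x) H (x) H and apply it to |000>
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

state = np.zeros(2 ** n)
state[0] = 1.0                                   # start in |000>
state = Hn @ state                               # uniform superposition over all 8 states

# One matrix-vector product now acts on every amplitude simultaneously:
# an illustrative "phase oracle" that marks basis state |101> (index 5)
oracle = np.eye(2 ** n)
oracle[5, 5] = -1
state = oracle @ state

print(np.round(state, 3))   # all amplitudes 0.354, except index 5 flipped to -0.354
```

The oracle was applied once, yet every one of the eight classical-like states in the superposition was processed in that single step.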
In a quantum computer the fundamental unit of information (called a quantum bit or
"qubit", analogous to the classical bit used in an ordinary computer) is not binary but
rather quaternary in nature. This property arises as a direct consequence of the qubit's
adherence to the laws of quantum mechanics. A qubit can exist not only in a state
corresponding to the logical state 0 or 1, as a classical bit does, but also in states
corresponding to a superposition of those classical states. In other words, a qubit can exist
as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient
representing the probability amplitude of each state. This concept may appear
counterintuitive because everyday phenomena are governed by classical physics; quantum
mechanics takes over at the atomic level. Physically, a qubit can be realized by the spin
s = 1/2 of a one-electron system, the two states +1/2 and -1/2 being the two eigenstates of
Sz (the component of spin along the direction of an external magnetic field). Alternatively,
a beam of single photons can be used, the two states being the states of polarization
(horizontal or vertical) with respect to some chosen axis. Thus a qubit can take two values,
0 or 1, which are associated with the two spin eigenstates of a single electron (say):
|1> = |↑>
|0> = |↓>
α|0> + β|1> = α|↓> + β|↑>
Furthermore, a qubit can be in a superposition of these two states with complex
coefficients, and this property distinguishes it from the classical bits used in conventional
computers. In mathematical terms, since the general state of a qubit can be a superposition
of two pure states with arbitrary complex coefficients, the state is described as a vector in
the two-dimensional complex space C², with the two pure states forming the basis of the
representation.
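The C² picture can be made concrete in a few lines of numpy. The coefficients chosen below are arbitrary, merely normalised:

```python
import numpy as np

# A qubit is a unit vector in C^2: |psi> = a|0> + b|1> with |a|^2 + |b|^2 = 1
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

a, b = 0.6, 0.8j                 # arbitrary complex coefficients (already normalised)
psi = a * ket0 + b * ket1

norm = np.linalg.norm(psi)       # must be 1 for a valid state
p0 = abs(psi[0]) ** 2            # probability of measuring 0
p1 = abs(psi[1]) ** 2            # probability of measuring 1
print(round(norm, 6), round(p0, 6), round(p1, 6))   # 1.0 0.36 0.64
```

Note that the coefficient 0.8j is purely imaginary: the measurement probabilities depend only on the moduli |a|² and |b|², not on the complex phases.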
Here a light source emits a photon along a path towards a half-silvered mirror. This mirror
splits the light, reflecting half vertically toward detector A and transmitting half toward
detector B. A photon, however, represents a single quantised energy state (E = hν) and
hence cannot be split, so it is detected with equal probability at either A or B. This is
verified by the observation that if detector A registers the signal then B does not, and vice
versa. With this piece of information one might think that any given photon travels either
vertically or horizontally, randomly choosing between the two paths. Quantum mechanics,
however, predicts that the photon actually travels both paths simultaneously, collapsing
down to one path only upon measurement (collapse of the wave function). This effect,
known as single-particle interference, results from the linear superposition of the possible
photon states, or potential paths. The phenomenon of single-particle interference can be
better illustrated in a slightly more elaborate experiment, outlined in fig 3. In this
experiment the photon first encounters a half-silvered mirror (beam splitter) and is thus
split into two parts, a reflected beam and a transmitted beam. The two beams are
recombined with the help of two fully silvered mirrors, and finally another half-silvered
mirror splits the beams again before they reach the detectors. Each half-silvered mirror
introduces a probability of the photon following one path or the other. Once a photon
strikes a fully silvered mirror along either of the two paths after the first beam splitter, the
arrangement is identical to that in fig 2: for a single photon travelling vertically and
striking the mirror, by the experiment in fig 2 there should be an equal probability (50%)
that the photon will strike either detector A or detector B, and the same goes for a photon
travelling down the horizontal path. However, the actual result is drastically different. If
the two possible paths are exactly equal in length, then it turns out that there is a 100%
probability that the photon reaches detector A and a 0% probability that it reaches detector
B: the photon is certain to strike detector A. It seems inescapable that the photon must, in
some sense, have actually travelled both routes simultaneously.
Fig 3: Interferometer arrangement with two half-silvered mirrors (beam splitters) and two
fully silvered mirrors, with detectors A and B at the outputs.
This can be demonstrated by placing an absorbing screen in the way of either of the routes:
it then becomes equally probable that detector A or B is reached. Blocking one of the paths
actually allows detector B to be reached; with both routes open, the photon somehow
knows that it is not permitted to reach detector B, so it must have actually felt out both
routes. It is therefore perfectly legitimate to say that between the two half-silvered mirrors
the photon took both the transmitted and the reflected paths, or, using more technical
language, that the photon is in a coherent superposition of being in the transmitted beam
and in the reflected beam. This quantum interference results from the linear superposition
principle. It is one of those unique characteristics that make current research in quantum
computing not merely a continuation of today's idea of the computer, but rather an entirely
new branch of thought; and it is because quantum computers harness these special
characteristics that they have the potential to be incredibly powerful computational devices.
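Both experiments can be modelled with a 2×2 unitary acting on the two path modes. The sketch below is an idealized model in which each 50/50 beam splitter contributes the same unitary and the phases from the fully silvered mirrors are absorbed into a common factor:

```python
import numpy as np

# 50/50 beam splitter acting on the two path modes (say vertical, horizontal)
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0], dtype=complex)        # photon enters on one input port

# Both paths open: two beam splitters in series
out = B @ (B @ photon)
print(np.abs(out) ** 2)          # [0. 1.] -> one detector fires with certainty

# One path blocked by an absorbing screen after the first beam splitter
mid = B @ photon
mid[1] = 0                       # the screen removes one path's amplitude
out_blocked = B @ mid
print(np.abs(out_blocked) ** 2)  # [0.25 0.25] -> each detector fires half the time
                                 # (the photon is absorbed with probability 0.5)
```

With both paths open the amplitudes interfere, sending the photon to a single detector with probability 1; blocking a path destroys the interference and restores the 50/50 statistics described in the text.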
The observation of correlation among various events is an everyday phenomenon, and
such correlations are well described by the laws of classical physics. Consider the
following example. Imagine a bank robbery: the robber is pointing a gun at the terrified
teller. By looking at the teller, one can tell whether the gun has gone off or not. If the teller
is alive and unharmed, one can be sure the gun has not been fired. If the teller is lying
dead of a gunshot wound on the floor, one knows the gun has been fired. This is a simple
detective case: there is a direct correlation between the state of the gun and the state of the
teller, 'gun not fired' means 'teller alive'. (It is presumed the robber only shoots to kill
and never misses.)
Now consider a radioactive atom, which has two possible states only, 'decayed' and 'not
decayed', just as we had two states, 'fired' and 'not fired', for the gun, or 'alive' and
'dead' for the teller. However, in the quantum mechanical
world, it is also possible for the atom to be in a combined state 'decayed-not decayed', in
which it is neither one nor the other but somewhere in between. This is due to the principle
of linear superposition of two quantum mechanical states of the atom, and is not something
we normally expect of classical objects like guns or tellers. Further, let us consider a
system consisting of two nuclei. The two nuclei may be correlated so that if one has
decayed, the other will also have decayed, and if one has not decayed, neither has the
other. This is 100% correlation. However, the nuclei may also be correlated so that if one
is in the superposition state 'decayed-not decayed', the other will be too. Quantum
mechanically, then, there is one more kind of correlation between nuclei than we would
expect classically. This kind of quantum 'super correlation' is called 'entanglement'.
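The 100% correlation of an entangled pair can be demonstrated by sampling measurements on a simulated Bell state. The sketch below assumes measurements in the computational basis only, with 'decayed'/'not decayed' playing the role of 1/0:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>)/sqrt(2): amplitudes over the basis 00, 01, 10, 11
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def measure_pair(state, rng):
    """Sample a joint measurement of both qubits in the computational basis."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(4, p=probs)
    return outcome >> 1, outcome & 1          # (first qubit, second qubit)

samples = [measure_pair(bell, rng) for _ in range(1000)]
assert all(a == b for a, b in samples)        # outcomes are perfectly correlated
print(samples[:5])
```

Each individual outcome is random (0 or 1 with probability 1/2), yet in every one of the 1000 samples both qubits agree, which is exactly the 'super correlation' described above.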
The problem was brought into focus by a famous paper in 1935 by Einstein,
Podolsky and Rosen, who argued from the strange behaviour of entanglement that quantum
mechanics is an incomplete theory, not a wrong one. This is widely known as the EPR
paradox. The concept can be understood with the help of the following example. Consider
a helium atom in the ground state. It has two electrons with quantum numbers n = 1, l = 0,
s = 1/2, with sz = +1/2 for one and sz = -1/2 for the other. Thus j could be 0 or 1, but
jz = (sz)1 + (sz)2 = 0, hence only the j = 0 state is allowed. Thus in a helium atom the two
electron spins are antiparallel, and the electrons form an entangled pair. Suppose the atom
is provided sufficient energy (equal to the binding energy of the atom) so that it
disintegrates at rest; consequently the two electrons fly away in opposite directions.
Fig 4: EPR paradox description using He atom
The two electrons are taken far apart. When the spin of one electron is flipped by the
application of a magnetic field, the spin of the other electron is also flipped instantaneously
(communication seemingly faster than the speed of light). This is a real phenomenon;
Einstein called it 'spooky action at a distance'. Its mechanism cannot, as yet, be explained
by any theory; it simply must be taken as given, and this was Einstein's objection to the
completeness of quantum theory. However, we know that further developments (the Bell
inequality and its experimental verification) proved that the quantum predictions are
correct, even if they imply correlations between spacelike separated events. Even more
amazing is the knowledge of the state of spin of the other electron without making a
measurement on it.
At first one may be inclined to ask: what is so special about quantum entanglement? One
encounters similar situations (correlations between two events) in areas other than the
quantum world. Consider the case of Mr. Bertlmann, who has the peculiar habit of
wearing socks of different colours on his left and right feet. If he wears a red sock on one
foot, it will be green on the other; if yellow on one, then blue on the other. Presumably
Mr. Bertlmann never breaks the rule, so by looking at the colour of one sock one can tell
the colour of the other sock he is wearing. On deeper scrutiny, however, this kind of
objection does not stand. In quantum entanglement the choice of measurement also plays
a crucial role. One may decide to measure the x-component of spin, or its y-component, or
a component along a direction inclined at an arbitrary angle to the x-axis; the other
particle arranges its spin accordingly. In Bertlmann's case this would mean the onlooker
has a role to play: once the onlooker decides to see, say, the yellow-blue combination of
colours for Bertlmann's socks, the socks would have to comply. The intention of the
onlooker deciding the colour would be remarkable, and equally remarkable would be the
instant communication of this intention.
2.5.2 EPR SITUATION, HIDDEN VARIABLES AND BELL THEOREM:
The critical examination by John Bell of the paper "Can Quantum-Mechanical Description
of Physical Reality Be Considered Complete?" by Einstein, Podolsky and Rosen (EPR)
led to the following contradictory conclusions:
1. The EPR correlations (usually referred to as quantum entanglement) predicted by
quantum mechanics are so strong that one can hardly avoid the conclusion that
quantum mechanics should be completed by some supplementary parameters (the
so-called hidden variables).
2. Elaboration of this idea demonstrates that a hidden variables description in fact
contradicts some predictions of quantum mechanics.
In the face of these two perfectly convincing and contradictory results, there is only one
way out: ask Nature how it works. Until the end of the 1970s there was no experimental
result to answer this question. The contradiction discovered by Bell is so subtle that it
appears only in very peculiar situations that had not previously been investigated, and it
required the design and construction of specific experiments.
Fig.: Polarisation measurements on two photons along orientations a and b, each with
possible outcomes +1 or -1.
There are two possible outcomes for each measurement, +1 and -1. Quantum
mechanics allows for the existence of a two-photon state (EPR state) for which the
polarization measurements taken separately appear random but are strongly correlated.
More precisely, denoting by P+(a) and P-(a) the probabilities that the polarization of
photon 1 along a is found equal to + or -, these probabilities are predicted to be equal to
0.5; similarly the probabilities P+(b) and P-(b) for photon 2 are equal to 0.5 and
independent of the orientation b.
On the other hand, the joint probability P++(a,b) of observing + for both photons
is equal to 0.5 cos²(a,b), where (a,b) is the angle between the two orientations. In the case
of parallel polarisers [(a,b) = 0], the joint probabilities are P++(0) = P--(0) = 0.5, while
P+-(0) = P-+(0) = 0. The results for the two photons of the same pair are thus always
identical, both + or both -, i.e. they are completely correlated. Such correlations
between events, each of which appears random, may arise outside the world of physics as
well. Consider, for instance, the occurrence of some well-defined disease (say G), and let
us assume that biologists have observed its development in 50% of the population aged 20,
and its absence in the remaining half. Now, on investigating specific pairs of (true) twin
brothers, a perfect correlation is found between the outcomes: if one brother is affected,
the other is also found to be afflicted with the disease; but if one member of the pair has
not developed the disease, then the other is also unaffected. In the face of such perfect
correlation for twin brothers, the biologists will certainly conclude that the disease has a
genetic origin. A simple scenario may be invoked: at the first step of the conception of the
embryos, a genetic process, random in nature, produced a chromosome sequence,
responsible for the occurrence or absence of the disease, that was then duplicated and
given to both brothers.
Thus the two situations, the case of correlation between the polarized states of two
photons and the case of the twin brothers (many such situations can be exemplified), are
exactly analogous. It seems natural, therefore, to link the correlation between the pairs of
photons to some common property analogous to the common genome of the twin
brothers. This common property changes from pair to pair, which accounts for the random
character of the single events. This is the basic conclusion drawn by John Bell regarding
EPR states. A natural generalization of the EPR reasoning leads to the conclusion that
quantum mechanics is not a complete description of physical reality: the introduction of
"some common property" which changes from pair to pair invokes the idea that a
complete description of a pair must include "something" in addition to the state vector,
which is the same for all pairs. This "something" may be called a supplementary
parameter, or hidden variable. The inclusion of hidden variables gives an account of the
polarized states of the two photons for any set (a, b) of orientations.
Bell examined critically the requirements on hidden variables to explain the expected
correlations between the two polarized states of the photons. He showed that the expected
correlations for the joint measurements of the polarized states of the photons, as
mentioned above, cannot take arbitrary values but are subject to certain constraints. More
precisely, if we consider four possible sets of orientations [(a,b), (a,b'), (a',b), (a',b')], the
corresponding correlation coefficients (which measure the amount of correlation) are
restricted by the Bell inequalities, which state that a certain combination S of these four
coefficients lies between -2 and +2 for any reasonable hidden variable theory. Thus the
Bell inequalities prescribe a test for the validity of hidden variable theories. Quantum
mechanics, however, predicts a value of S of 2√2 ≈ 2.8, i.e. it violates the Bell
inequalities, and this is confirmed by experiment. Thus the hidden variable theories
envisaged above are unable to give an account of the EPR correlations (quantum
entanglement) predicted by quantum mechanics. As a matter of fact, quantum mechanical
correlations are far more intricate than the mutual correlations between twin brothers.
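The quantum value S = 2√2 follows directly from the correlation coefficient for photon pairs, E(a,b) = cos 2(a-b). The sketch below evaluates the CHSH combination S at the standard optimal analyser angles:

```python
import math

def E(a, b):
    """Quantum correlation coefficient for polarisation analysers at angles a, b."""
    return math.cos(2 * (a - b))

# Standard optimal CHSH angles (in radians)
a, a2 = 0.0, math.pi / 4
b, b2 = math.pi / 8, 3 * math.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)   # 2.828... = 2*sqrt(2), violating the classical bound |S| <= 2
```

Each of the four terms contributes √2/2 (the middle term with a minus sign in front of a negative correlation), so S = 4 × √2/2 = 2√2, clearly outside the hidden-variable range [-2, +2].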
The Bell inequality is based on the assumption of local hidden variable models. The
assumption of locality states that the result of a measurement by one polariser cannot be
directly influenced by the choice of the orientation of the other, remotely located,
polariser. This is nothing but a consequence of Einstein causality (no signal can move
with a speed greater than the speed of light in vacuum). Nevertheless, Bell
inequalities apply to a wider class of theories than local hidden variable theories. Any
theory in which each photon has a "physical reality" localized in space-time, determining
the outcome of the corresponding measurement, will lead to inequalities that (sometimes)
conflict with quantum mechanics. Bell's theorem can thus be phrased in the following
way: some quantum mechanical predictions (the EPR correlations, quantum
entanglement) cannot be mimicked by any local realistic model in the spirit of Einstein's
ideas of hidden variables.
2.6 QUANTUM TELEPORTATION
Until recently, teleportation was not taken seriously by scientists. Teleportation is the
name given by science fiction writers to the feat of making an object or person
disintegrate in one place while a perfect replica appears somewhere else. Normally this is
done by scanning the object so as to extract all the information from it; this information is
then transmitted to the receiving location and used to construct the replica, not necessarily
from the actual material of the original, but perhaps from atoms of the same kinds,
arranged in exactly the same pattern as the original. A teleportation machine would be like
a fax machine, except that it would work on 3-dimensional objects as well as documents,
it would produce an exact copy rather than an approximate facsimile, and it would destroy
the original in the process of scanning it.
At first sight, quantum mechanics seems to rule out teleportation: if one cannot extract
enough information from an object to make a perfect copy, it would seem that a perfect
copy cannot be made.
Charles H. Bennett, with his group, and Stephen Wiesner have suggested a remarkable
procedure for teleporting quantum states using EPR (entangled) states. Quantum
teleportation may be described abstractly in terms of two parties, A and B. A has in its
possession an unknown state
|ψ> = α|0> + β|1>.
This is a single quantum bit (qubit), a two-level quantum system. The aim of teleportation
is to transport the state |ψ> from A to B. This is achieved by employing entangled states:
A and B each possess one qubit of the two-qubit entangled state
(|00> + |11>)/√2.
The total three-qubit state can be rewritten in terms of the Bell basis (|00> ± |11>)/√2,
(|01> ± |10>)/√2 for A's two qubits and a conditional unitary transformation of the state
|ψ> for the last one, where X, Y, Z are the Pauli matrices in the |0>, |1> basis. A
measurement is performed on A's qubits in the Bell basis. Depending on the outcome of
this measurement, B's respective states are |ψ>, Z|ψ>, X|ψ>, XZ|ψ>. A sends the
outcome of its measurement to B, who can then recover the original state |ψ> by applying
the appropriate unitary transformation I, Z, X or ZX, depending on A's measurement
outcome. It may be noted that quantum state transmission is not accomplished faster than
light, because B must wait for A's measurement result to arrive before he can recover the
quantum state.
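The whole protocol can be verified in a few lines of numpy. The sketch below projects A's two qubits onto each Bell state in turn and checks that the corresponding correction always returns B's qubit to |ψ>; the particular coefficients α, β are arbitrary:

```python
import numpy as np

sq2 = np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Unknown state to teleport: |psi> = a|0> + b|1> (coefficients chosen arbitrarily)
a, b = 0.6, 0.8j
psi = np.array([a, b])

# A and B share the entangled pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / sq2

# Total 3-qubit state: qubit 0 = unknown state, qubits 1, 2 = shared pair
state = np.kron(psi, bell)

# Bell basis for A's two qubits, paired with B's correction for each outcome
bell_basis = [
    np.array([1, 0, 0, 1]) / sq2,    # (|00>+|11>)/sqrt2 -> B holds |psi>,   applies I
    np.array([1, 0, 0, -1]) / sq2,   # (|00>-|11>)/sqrt2 -> B holds Z|psi>,  applies Z
    np.array([0, 1, 1, 0]) / sq2,    # (|01>+|10>)/sqrt2 -> B holds X|psi>,  applies X
    np.array([0, 1, -1, 0]) / sq2,   # (|01>-|10>)/sqrt2 -> B holds XZ|psi>, applies ZX
]
corrections = [I2, Z, X, Z @ X]

for bvec, U in zip(bell_basis, corrections):
    proj = np.kron(bvec.conj(), I2)          # <Bell| on A's qubits, identity on B's
    b_state = proj @ state                   # B's (unnormalised) qubit; prob 1/4 each
    b_state = b_state / np.linalg.norm(b_state)
    recovered = U @ b_state                  # B applies the classically signalled correction
    assert np.allclose(recovered, psi)       # original state recovered exactly
print("teleportation verified for all four measurement outcomes")
```

Each of the four Bell outcomes occurs with probability 1/4 and tells B nothing about α and β by itself, which is why the two classical bits A sends carry no information about the state, yet are indispensable for recovering it.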
2.7 THERMODYNAMICS OF QUANTUM COMPUTATION
Computers are machines, and like all machines they are subject to thermodynamic
constraints based on the laws of thermodynamics. Like any physical system, modern
computers based on digital devices produce heat in operation. The basic question
is: can digital computers be improved so as to minimize the production of heat? It turns out
that it is possible to think (consistent with the laws of physics) of an ideal computer capable
of shaping, maintaining and moving around digital signals without any heat generation.
Nevertheless, there is one place where heat must be produced. Whenever information is
erased, the phase space associated with the system that stores the information shrinks.
Erasing a single bit of information reduces the entropy of the system that stored that
information by at least
ΔS = k ln 2,
where k is Boltzmann's constant. This reduction of entropy results in heat transfer to the environment.
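The entropy reduction ΔS = k ln 2 means that erasing one bit must dissipate at least kT ln 2 of heat into the environment (the Landauer limit); a quick evaluation at an assumed room temperature of 300 K:

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0              # assumed room temperature, K

delta_S = k_B * math.log(2)   # minimum entropy reduction per erased bit
E_min = T * delta_S           # minimum heat dissipated per erased bit, J
# At 300 K this is about 2.87e-21 J per bit, far below the heat actually
# dissipated per logic operation in present-day digital hardware.
```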
Thus if a computer could be constructed that does not erase any information, such a
computer could work without generating any heat at all. This is precisely the situation in
quantum computers. Quantum computation is reversible (though the read-out of the result
of that computation is not). It is therefore possible, at least in principle, to carry out quantum
computation without generating heat. Of course, in reality the computer would still
generate a lot of heat. Electric pulses moving along copper wires would have to work their
way against resistance. Electrons diffusing from a source would still collide with crystal
imperfections and with electrons in the drain, again generating heat. But at least in the ideal
situation, copper wires could be replaced with superconductors, and imperfect crystals with
perfect ones.
Consider, for example, the irreversible AND operation
(a, b) → (a ∧ b),
in which the amount of information on the right-hand side is less than the amount of
information on the left-hand side. Using Toffoli gates, Charles Bennett has demonstrated
that quantum computers are capable of performing any computation using only reversible
steps. These special gates maintain all information that is passed to them, so that the
computation can be run forward and backward.
Consequently the computation results in a very large amount of data, because every
intermediate step is remembered, but heat generation is eliminated while the computation
goes on. After the computation is over, it can be run backwards in order to
restore the initial state of the computer and avoid its spontaneous combustion.
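Bennett's reversibility argument can be checked directly: the Toffoli (controlled-controlled-NOT) gate permutes the 8 three-bit states, is its own inverse, and still computes AND when the target bit starts at 0. A small sketch:

```python
def toffoli(a, b, c):
    # Flip the target bit c only when both control bits a and b are 1
    return (a, b, c ^ (a & b))

states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
outputs = [toffoli(*s) for s in states]

# Reversible: the gate permutes the 8 basis states (no information is lost)
reversible = sorted(outputs) == sorted(states)

# Self-inverse: applying the gate twice restores the input,
# so the computation can literally be run backwards
self_inverse = all(toffoli(*toffoli(*s)) == s for s in states)

# With the target fixed to 0, the target output is a AND b
computes_and = all(toffoli(a, b, 0)[2] == (a & b)
                   for a in (0, 1) for b in (0, 1))
```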
The architectural simplicity makes the quantum computer faster, smaller and cheaper,
but its conceptual intricacies pose difficult problems for its experimental realization.
Nevertheless, a number of attempts have been made in this direction with
encouraging success. It is envisaged that the day may not be far when the quantum computer
replaces the digital computer in its full prospects. Some of the attempts at the
experimental realization of quantum computers are summarized as follows:
2.8.1 Heteropolymers:
The first heteropolymer-based quantum computer was designed and built in 1988 by
Teich, and then improved by Lloyd in 1993. In a heteropolymer computer a linear array of
atoms is used as memory cells. Information is stored on a cell by pumping the
corresponding atom into an excited state. Instructions are transmitted to the heteropolymer
by laser pulses of appropriately tuned frequencies. The nature of the computation that is
performed on selected atoms is determined by the shape and the duration of the pulse.
2.8.2 Ion Traps:
An ion trap quantum computer was first proposed by Cirac and Zoller in 1995 and
implemented first by Monroe and collaborators in 1995, and then by Schwarzchild in
1996. The ion trap computer encodes data in the energy states of ions and in vibrational
modes between the ions. Conceptually each ion is operated by a separate laser. A
preliminary analysis demonstrated that Fourier transforms can be evaluated with the
ion trap computer. This, in turn, leads to Shor’s factoring algorithm, which is based on
Fourier transforms.
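The link from Fourier transforms to factoring can be illustrated classically. Shor's algorithm reduces factoring N to finding the period r of f(x) = aˣ mod N; a quantum computer finds r efficiently via the quantum Fourier transform, while the sketch below finds it by brute force (the values N = 15, a = 7 are illustrative choices):

```python
from math import gcd

def find_period(a, N):
    # Classically find the period r of f(x) = a^x mod N: the step a
    # quantum computer speeds up using the quantum Fourier transform.
    x, val = 1, a % N
    while val != 1:
        x, val = x + 1, (val * a) % N
    return x

def shor_classical(N, a):
    # Factor N from the period of a^x mod N, assuming gcd(a, N) = 1.
    r = find_period(a, N)
    if r % 2 == 1:
        return None                      # odd period: pick another a
    y = pow(a, r // 2, N)
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    return (p, q) if 1 < p < N else None

# Example: a = 7 has period 4 modulo 15, yielding the factors 3 and 5
factors = shor_classical(15, 7)
```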
2.8.5 Quantum Dots
Quantum computers based on quantum dot technology use a simpler architecture and
require less sophisticated experimental, theoretical and mathematical skills than the four
quantum computer implementations discussed so far. An array of quantum dots, in which
the dots are connected to their nearest neighbors by means of gated tunneling barriers,
is used for fabricating quantum gates with the split-gate technique. This scheme has one
basic advantage: the qubits are controlled electrically. The disadvantage of this
architecture is that quantum dots can communicate only with their nearest neighbors,
with the result that data read-out is quite difficult.
This computer looks a little like a quantum dot computer, but in other ways it is more
like an NMR computer. It consists of a single magnetically active nucleus of P31 in a crystal
of isotopically pure, magnetically inactive Si28. The sample is placed in a very strong
magnetic field in order to set the spin of P31 parallel or antiparallel to the direction of the
field. The spin of the P31 nucleus can then be manipulated by applying a radio-frequency
pulse to a control electrode, called the A-gate, adjacent to the nucleus. Electron-mediated
interaction between spins can in turn be manipulated by applying a voltage to electrodes
called J-gates, placed between the P31 nuclei.
In this computer qubits are encoded in a system of anyons. "Anyons" are quasi-particles
in 2-dimensional media obeying parastatistics (neither Fermi–Dirac nor Bose–Einstein).
In a way, anyons are still closer to fermions, because a fermion-like repulsion
exists between them. The relative movement of anyons is described by the braid group. The
idea behind the topological quantum computer is to make use of the braid-group properties
that describe the motion of anyons in order to carry out quantum computations. It is claimed
that such a computer should be invulnerable to quantum errors because of the topological
stability of anyons.
2.9 Future Directions of Quantum Computing
The foundation of the subject of quantum computation has become well established,
but everything else required for its future growth is under exploration. That covers quantum
algorithms, understanding dynamics and control of decoherence, atomic scale technology
and worthwhile applications. Reversibility of quantum computation may help in solving
NP problems, which are easy in one direction but hard in the opposite sense. Global
minimization problems may benefit from interference (as seen in Fermat’s principle in
wave mechanics). Simulated annealing methods may improve due to quantum tunneling
through barriers. Powerful properties of complex numbers (analytic functions, conformal
mappings) may provide new algorithms.
Quantum field theory can extend quantum computation to allow for creation and
destruction of quanta. The natural setting for such operations is in quantum optics. For
example, the traditional double slit experiment (or beam splitter) can be viewed as the copy
operation. It is permitted in quantum theory because the intensity of the two copies is half
the previous value. Theoretical tools for handling many-body quantum entanglement are
not well developed. Its improved characterization may produce better implementation of
quantum logic gates and possibilities to correct correlated errors.
Chapter 3
Hence, with the introduction of our formalism of the wave function, we can
now divide wave functions into 3 categories:
1. Quantum Systems with Corr. = 0 ⇒ Ψ = Σₙ aₙφₙ, where n ∈ (1, 2, 3, 4, …) and the φₙ are
eigenstates.
2. Quantum Systems with 0 < Corr. < 1 ⇒ Ψ = Φ + Σₙ aₙφₙ, where Φ denotes the correlated
(EPR) part, n runs over the allowed uncorrelated states, and the φₙ are eigenstates.
3. Quantum Systems with Corr. = 1 ⇒ Ψ = Φ and Σₙ aₙφₙ = 0, where the φₙ are eigenstates.
Thus the above treatment of the Expansion Theorem in quantum systems suggests that the
definition of the wave function should be modified to take into account the representation of
EPR states, and further investigation should be done to determine the value of the correlation
term in the wave-function definition. To further support and validate the above formalism, I
have applied the Schrödinger Wave Equation, both time dependent and time independent, to
check whether the form of the Expansion Theorem changes or not.
In the standard formulation of Quantum Mechanics the Schrödinger Wave Equations are
given by:
iħ(∂Ψ/∂t) = HΨ  (time dependent)
HΨ = EΨ  (time independent)
Now, by putting all three definitions of Ψ into these fundamental wave equations of
Quantum Mechanics, we find that the modified definition of Ψ is consistent, as we get the
expected results.
Since there are three types of wave-function systems, as discussed in the previous
sections, their relation to qubits is as follows:
1. For purely unentangled states in a quantum system, i.e. Corr. = 0, the wave function
is given by the Linear Superposition Principle:
Ψ = Σₙ aₙφₙ with Σₙ |aₙ|² = 1, where n ∈ (1, 2, 3, 4, …) and the φₙ are eigenstates.
2. For mixed entangled states in a quantum system, i.e. 0 < Corr. < 1, the wave function
is given by:
Ψ = Φ + Σₙ aₙφₙ with Σₙ |aₙ|² ≠ 1, where Φ denotes the correlated part, n runs over the
allowed uncorrelated states, and the φₙ are eigenstates. In such a case the entangled and
unentangled states cannot be separated, as that would amount to an interaction with the
system leading to information loss and wave-function collapse. Hence this type of state is
not fit for computational purposes, as it may lead to spurious results.
3. For purely entangled states in a quantum system, i.e. Corr. = 1, the wave function
is given by:
Ψ = Φ with Σₙ aₙφₙ = 0, where the φₙ are eigenstates.
These states are suitable for teleportation of information using EPR states, and not for
information storage or computational purposes. Thus case 3 is well suited only for
information communication, consistent with the Quantum No-Cloning Theorem.
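Case 1 above is the ordinary qubit: a normalized superposition whose squared amplitudes sum to one and give the measurement probabilities. A minimal check (the amplitudes are arbitrary illustrative values):

```python
# Case 1 (Corr. = 0): Psi = sum_n a_n phi_n with sum_n |a_n|^2 = 1.
# For a qubit there are two eigenstates; example amplitudes for |0> and |1>:
a0, a1 = complex(0.6, 0.0), complex(0.0, 0.8)

norm = abs(a0) ** 2 + abs(a1) ** 2   # total probability, should be 1
p0 = abs(a0) ** 2 / norm             # probability of measuring |0>
p1 = abs(a1) ** 2 / norm             # probability of measuring |1>
```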
Now, another explanation of information loss associated with measurement of a qubit can
be given by energy exchange leading to irreversibility brought into the system according
to the 2nd Law of Thermodynamics. The very act of measurement or observation has
brought an element of irreversibility into the system, which can also be appreciated by the
fact that there is a concept of canonically conjugate variables in Quantum Mechanics.
That is, if we have two observables, say A and B, in a quantum system which are
canonically conjugate, they follow the inequality AB ≠ BA, represented in Quantum
Mechanics by the commutator bracket [A, B] = iħ ⇒ AB − BA = iħ, where ħ = h/2π.
Now the very fact that AB ≠ BA suggests that there is some irreversibility brought in by the
process of measurement of a quantum system, and hence the uncertainty creeps in, which is
given by the Heisenberg Uncertainty Principle. This leads to some important questions:
1. Does the notion of the quantum of action h (h, 2h, 3h, …) lead to irreversibility in the
measurement process, since ½h or any other fractional value of h cannot occur?
2. Irreversibility is brought in due to the entropy change in the system; since entropy is a
statistical phenomenon, does this in any way relate to the statistical behaviour of quantum
probabilities?
3. Is irreversibility in a quantum measurement more fundamental than uncertainty in
the measurement process, and does it lead to uncertainty in the system, thus explaining the
need for the Uncertainty Principle?
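The non-commutativity AB ≠ BA invoked above can be illustrated with finite-dimensional operators: for the Pauli matrices, [X, Z] = XZ − ZX = −2iY ≠ 0. A sketch:

```python
import numpy as np

# Pauli matrices in the |0>, |1> basis
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

comm = X @ Z - Z @ X                       # the commutator [X, Z]

noncommuting = not np.allclose(comm, 0)    # XZ != ZX
identity = np.allclose(comm, -2j * Y)      # [X, Z] = -2i Y
```

Measuring X and then Z thus gives a different result from measuring Z and then X, which is the operator content of the irreversibility discussed above.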
Thus in Classical Mechanics the entropy (S) of a system is associated with the loss of
information in the system due to energy interaction and the 2nd Law of Thermodynamics.
But in Quantum Mechanics we have the concept of choice of information due to wave-function
collapse, so we can define a new parameter, analogous to entropy, for quantum
systems, related to the choice of information. This new parameter can be an area of further
research. The concept of irreversibility in the system due to the measurement process,
leading to non-replication of information on a qubit, is an alternative explanation of the
Quantum No-Cloning Theorem.
Chapter 4
In 1959 the physicist Richard Feynman gave an after-dinner talk exploring the limits of
miniaturization. He wrote about the potential of nanoscience in the influential
1959 talk "There's Plenty of Room at the Bottom," thus ushering in a new era in the field
of low-dimensional materials. One thing common to all conducting
nanomaterials is that, in spite of the contraction in the size of the material, its basic
characteristics remain the same: the energy levels remain the same.
This holds only as long as the size of the material does not undergo a noticeable change.
Once it does, the number of energy levels increases and they start shifting towards the 'blue'
region. By this it is meant that the wavelength becomes shorter than green, yellow, orange
and red; with the decrease in wavelength, the frequency increases.
As the frequency increases by shifting towards blue, the energy increases. Therefore,
these materials are capable of storing more energy and are said to exhibit nanoproperty.
The cause of the above-said phenomenon is stated below:
λc = h / (m₀ · c)
h: Planck's constant
m₀: rest mass
c: velocity of light.
If the size of a material is comparable with its Compton wavelength, it exhibits
nanoproperty, and such materials are called nanomaterials. This length is found to be of
the order of 10⁻⁹ m, which is why these materials exhibit nanoproperty. Thus we come to
the conclusion that, as we move from macro to nano dimensions, the degrees of freedom
of the system are reduced.
Therefore, these materials are also called materials of low dimensionality, in which
the charge density increases as the degrees of freedom decrease. The use of
nanomaterials in the manufacturing of computers justifies Moore's Law, which states that:
“In every eighteen months, the capacity of processing and carrying information of
the computers is doubled.”
This is made possible by decreasing the number of electrons required to carry one bit of
information. Most nanomaterials are characterised by a decrease in the degrees of
freedom. Effective reduction of the geometry to two or fewer dimensions is possible
by strong spatial localisation to a plane, line or point (i.e. confinement of an electron in
at least one direction at the de Broglie wavelength); this occurs in the case of atoms and
of electrons localised on crystal imperfections (e.g. on impurities).
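The Compton-wavelength criterion can be evaluated directly. For a free electron, λc = h/(m₀c) is of order 10⁻¹² m; only a much smaller effective mass pushes the length toward the 10⁻⁹ m scale quoted above (the effective-mass value below is purely an illustrative assumption):

```python
h = 6.62607015e-34       # Planck's constant, J s
m_e = 9.1093837015e-31   # electron rest mass, kg
c = 2.99792458e8         # speed of light, m/s

# Compton wavelength of a free electron: ~2.43e-12 m
lambda_free = h / (m_e * c)

# With an assumed, purely illustrative effective mass of 0.001 m_e,
# the corresponding length reaches the nanometre scale
m_eff = 0.001 * m_e
lambda_eff = h / (m_eff * c)
```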
Quantum wells are one of the three basic components of quantum devices: ultra-thin,
quasi-2-D planes. A narrow strip sliced from one of these planes is a 1-D quantum
wire. Dicing up a 1-D wire yields a 0-D quantum dot. Reducing the number of dimensions
in this manner forces electrons to behave in a more atom-like manner. All three devices
mentioned are quantum devices, i.e. they have nanostructures. Thus, we can say that
with the evolution of nanotechnology comes an era of diminishing dimensions: from the
3 dimensions commonly found, to 2 dimensions in the form of quantum wells, followed
by quantum wires at the 1-dimensional level; and now the 0-dimensional level can be
attained with quantum dots.
Quantum dots are major contenders for use as building blocks of future quantum
computers, as compared to others like the Nuclear Magnetic Resonance technique, the
trapped-ion technique, etc., because of the fundamental fact that decoherence of these
systems is very low; the other systems, even when prepared to act as quantum bits,
degenerate very fast as they interact with the environment, thus losing their coherent
behaviour. Quantum dots, being 0-dimensional entities, have practically no degrees of
freedom; hence their interaction with the environment is ideally zero, and they can act as
qubits to a larger extent. Further analysis and stability criteria are presented in the
subsequent sections.
The highly successful theory of electricity and magnetism, amalgamated with the laws
of classical mechanics into classical electrodynamics, has been plagued with a serious
problem right from its inception. The problem lies in the interaction of charged particles
as the distances tend to zero. The energy of the particle becomes infinite, its field
momentum becomes infinite, and its equation of motion leads to non-physical solutions
such as run-away solutions and the pre-acceleration phenomenon. This has been termed
the "Problem of Divergences" in classical electrodynamics.
It was envisaged that the advent of Quantum Theory would resolve all these difficulties.
However, this did not prove true. All these problems are still present in the quantum version
of the theory and appear in an even more complicated manner. In the many decades since
Lorentz identified the problem in 1903, a number of attempts have been made to
circumvent the divergence problem of Classical Electrodynamics. Out of all the prescriptions,
the renormalization technique and Rohrlich procedure of invoking boundary conditions
have drawn special attention. During the course of present investigation, I am bringing to
the forefront an attempt which has been made to reformulate the theory of electricity and
magnetism. In the process, a simple but elegant prescription proposed by Prof. Y M
Gupta has been used. For the development of the proposed formalism, a stand has been taken
that the problem as enunciated above must be solved initially at the classical level and only
then the reformulation be carried forward to the quantum level. This procedural
development automatically ensures the basic stringent requirement that the future theory
must be a covering theory of the present one: invoking Bohr Correspondence Principle-
“In the limit when the proposed modification is withdrawn, the proposed theory reduces to
the existing one.”
The proposed formalism is based on the observation that, in the realm of the known
formalism of Classical Electrodynamics, the potential function, field intensity, field momentum
and interaction force are all singular at the origin (R = 0), i.e. all these physical quantities
become infinite in the limit of zero distance. It is this singularity which is the crux of the
problem. In the present formalism we choose a potential function that is regular at the
origin as well as at all other space points; we then automatically get physical quantities
free from singularity. This is a logical way to obtain a whole set of dependent quantities
which are regular ab initio. But at the same time it is
also clear that all attempts at changing the basic equations of Electrodynamics (Maxwell
equations) have proved unsuccessful. As a matter of fact, it is required to keep the basic
equations of electrodynamics intact and yet look for a suitable potential function (solution
of basic equations) that provides divergence-free solutions to the problems of Classical
Electrodynamics (field energy, field intensity, field momentum etc.). To achieve this
objective, prescriptions proposed by Prof. Y M Gupta have been followed. The important
features of the prescriptions are-
When the whole problem is analysed very closely, it is observed that the crux of the
problem is hidden in the fact that, in the realm of the known formalism of classical
electrodynamics, the potential function, field intensity, field momentum and interaction force
are infinite at the origin (R = 0). It is this singularity in the behaviour of these physical
quantities that must be removed systematically for the fundamental development of the
theory of electrodynamics. If the potential function is suitably and correctly selected, we
space points. This is a logical way to remove singularities in a systematic way. Once we
are able to obtain the suitable potential function, we can then attempt to carry forward the
reformulation to the quantum level. This procedural development will ensure that the basic
stringent requirement, that the future theory must be a covering theory of the previous one,
is automatically fulfilled.
The present formalism retains the basic laws of electrostatics as given below:
∇ · E = ρ/ε₀
∇ × E = 0
E = −∇φ
∇²φ = −ρ/ε₀
Following Prof. Y M Gupta, it is found that the last equation above, for the potential φ,
has the solution:
φ(r) = (1/2π²ε₀) ∫ [Si(Y|r − r′|)/|r − r′|] ρ(r′) d³r′
Further, lim_{|r−r′|→0} Si(Y|r − r′|)/|r − r′| = Y.
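The regularity of this solution at the origin can be checked numerically: since Si(x) = ∫₀ˣ (sin t)/t dt behaves as x for small x, the kernel Si(Y·s)/s tends to the finite value Y as s → 0. A sketch using the power series for Si (the value of Y is an arbitrary choice):

```python
import math

def Si(x, terms=20):
    # Sine integral via its power series:
    # Si(x) = sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1) * (2n+1)!)
    total = 0.0
    for n in range(terms):
        k = 2 * n + 1
        total += (-1) ** n * x ** k / (k * math.factorial(k))
    return total

Y = 3.0                                   # arbitrary constant for the check
values = [Si(Y * s) / s for s in (1e-1, 1e-3, 1e-6)]
# The values approach Y, so the potential kernel stays finite as |r - r'| -> 0,
# unlike the ordinary Coulomb kernel 1/|r - r'|.
```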
A quantum dot is a 0-D system, and it can be analysed as a two-body bound-state
problem like the hydrogen atom. As systems keep becoming smaller and
smaller, the potential function of the system changes at a particular radius and assumes
a different form. The length at which the potential function changes form is determined
by the Compton wavelength of the system; thus the Compton wavelength plays a
major role in determining the modification of the potential function. The modified
Coulomb potential is given by:
V = ½ kh r² − V₀  for r < rc
V = −(1/4πε₀) e²/r  for r > rc
where rc determines the size of the quantum dot; rc is in the range of 10 nm to 100 nm,
as has been experimentally determined.
The expression for rc is given by:
rc = h / (meff · c)
h: Planck's constant
meff: effective mass
c: velocity of light.
Due to the reduction of size at the nano scale, the neighbours also exert force
on the particles, just as a person trying to move in a congested street with walls on
both sides feels the closeness of the walls. Similarly, in physical systems the potential
function changes gradually from the Coulomb form to the one shown above, i.e.:
V = ½ kh r² − V₀  for r < rc
V = −(1/4πε₀) e²/r  for r > rc
Hence in nanomaterials the two branches of the effective potential must match at the
boundary:
V|_{r < rc} = V|_{r > rc} at r = rc.
Solving these equations, i.e. the boundary condition together with the differentiability of
the potential function, leads to the solution for the potential, and the constants kh and V₀
can be found numerically. The radial wave function must likewise match at the boundary:
R_l(r)|_{r < rc} = R_l(r)|_{r > rc} at r = rc
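The continuity and differentiability conditions on V at r = rc fix kh and V₀ in closed form: kh = e²/(4πε₀rc³) and V₀ = (3/2)·e²/(4πε₀rc). A numerical sketch (the value rc = 10 nm is an assumption within the quoted range):

```python
import math

e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
r_c = 10e-9               # assumed dot radius, m (10 nm)

coul = e ** 2 / (4 * math.pi * eps0)   # e^2 / (4 pi eps0)

# Differentiability at r_c:  kh * r_c = coul / r_c^2
kh = coul / r_c ** 3
# Continuity at r_c:  0.5 * kh * r_c^2 - V0 = -coul / r_c
V0 = 0.5 * kh * r_c ** 2 + coul / r_c  # = 1.5 * coul / r_c

inner = 0.5 * kh * r_c ** 2 - V0       # V(r_c) from the harmonic branch
outer = -coul / r_c                    # V(r_c) from the Coulomb branch
```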
Here we can mention that we can also solve the Schrödinger Wave Equation in the
modified Coulomb potential for a quantum dot system and obtain the analytical form of
the wave function. If the majority of the wave function lies inside the Compton-wavelength
region, then there is a very high probability that the quantum dot is stable in
that region, which also supports this formulation of the potential function. At very small
distances the potential acts as a harmonic potential; this also suggests that at the
fundamental level Nature may be harmonic, and this may in some way lead to the origin
of the Heisenberg Uncertainty Principle through the fluctuating harmonic nature of the
potential at quantum levels.
As already discussed above, quantum wires are 1-D systems with freedom in one
dimension only. Following the analysis of 0-D quantum dots, I have also analysed
quantum wires using the modified Coulomb potential, by applying the Schrödinger
time-independent wave equation given by
∇²ψ + (2m/ħ²)(E − V)ψ = 0
Thus, solving the Schrödinger Wave Equation for the given potential and the form of
wave function ψ(r, φ, z), and using the separation-of-variables method, we obtain the
wave function in the z direction as a free-particle wave function, since the equation to
which the Schrödinger Wave Equation reduces is
∂²ψ/∂z² + k²ψ(z) = 0,
a well-known second-order differential equation with the solution
ψ(z) = A e^{+ikz},
which is the wave function of a free-particle system. This is also logically deducible,
since a quantum wire is a 1-D system with complete freedom in one dimension.
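That ψ(z) = A e^{ikz} indeed solves ∂²ψ/∂z² + k²ψ = 0 can be verified with a finite-difference second derivative (k and A are arbitrary illustrative values):

```python
import cmath

k = 2.0                     # arbitrary wave number for the check
A = 1.0                     # arbitrary amplitude

def psi(z):
    # Free-particle wave function along the unconfined z direction
    return A * cmath.exp(1j * k * z)

h = 1e-4                    # finite-difference step
z0 = 0.7                    # arbitrary evaluation point
second_deriv = (psi(z0 + h) - 2 * psi(z0) + psi(z0 - h)) / h ** 2
residual = second_deriv + k ** 2 * psi(z0)   # should be ~0
```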
Fig 10: Quantum Wire as a 1-D system, described in cylindrical coordinates (r, φ, z)
The analysis of quantum wires is more complicated than that of quantum dots, due to the
increase in the dimension of the system; hence a preliminary study of this system was done,
unlike the rigorous study done for quantum dots. It is expected from the wave-function
expression of the quantum wire in the modified Coulomb potential that the Compton
wavelength (λc) plays a fundamental role in this system. Thus, from the study of these two
systems using the modified Coulomb potential, we can conclude that the
Compton wavelength is going to play a major role in Physics, perhaps as important as
the fundamental constants of Nature.
We had a short three-day visit to a state-of-the-art, one-of-its-kind Nanotechnology Lab
at Barkatullah University in Bhopal. It was an informative visit in which we saw
various experiments done with nanomaterials, as well as different fabrication
techniques for them. We saw and learnt the concepts of the Atomic Force
Microscope (AFM), the Scanning Tunnelling Microscope (STM), and the X-Ray
Diffractometer, which runs on the principle of Bragg diffraction; from the diffraction
pattern the inter-planar distance between the planes of atoms is calculated. We operated
the Atomic Force Microscope and obtained pictures of the surfaces of nanomaterials. We
learnt about three kinds of materials based on their degrees of freedom, ranging from 2-D
systems like quantum wells, to 1-D systems like quantum wires, to 0-D systems like
quantum dots, and how these quantum structures are grown using the wet chemical
method, the most common one, among various other methods. Overall, it was a
highly informative visit to the Nanotechnology Lab; we could appreciate the
practical ways in which work is going on in the field of nanomaterials and
nanostructures, and how this field is going to revolutionise coming technologies.
This field experience was a worthwhile part of the summer project. The functioning of the
AFM and the numerical analysis we did in the Lab are reproduced in the coming pages
and are indicative of the work we undertook.
Fig 11: Quantum Dots as seen from an Atomic Force Microscope
The basic principle of the scanning probe method is quantum tunnelling, which induces
a potential and hence produces a current, enabling the scanning of the surface of the
nanomaterial. The principle of the Scanning Tunnelling Microscope we saw at the
Nanotechnology Lab is reproduced below:
Chapter 5
Symmetry is the most fundamental property of physical systems, and a very general
notion with regard to them. There are various symmetries in Nature, such as local space-time
symmetries and global space-time symmetries; global space-time symmetries apply
to all the laws of Physics. In this section we will attempt to give a direction along which
further thought can be developed towards the resolution of the famous Measurement Problem
in Quantum Mechanics. It is the Measurement Problem that lies at the heart of various
problems and observational anomalies in Quantum Mechanics. Here I am trying to give an
explanation of the Measurement Problem using the symmetry-breaking process. The basic
equation of Quantum Mechanics, AB ≠ BA, valid for conjugate variables, shows some
kind of irreversibility in the system. Now, by Leibniz's Principle of Sufficient Reason
(PSR), if there is no sufficient reason for one thing to happen instead of another, nothing
happens; equivalently, a situation with a certain symmetry evolves in such
a way that, in the absence of an asymmetric cause, the initial symmetry is preserved. In
other words, a breaking of the initial symmetry cannot happen without a reason: an
asymmetry cannot originate spontaneously. Asymmetry is what creates a phenomenon.
Thus symmetry breaking follows the Principle of Causality: "For every effect there is a
cause." In the measurement process the observer purposefully throws the quantum system
into a particular eigenstate as a "choice of information", rather than the wave function
collapsing into an uncertain eigenstate. Thus in a measurement process the system does not
lose information like classical systems do, but is thrown into a particular eigenstate based
on the information choice made by the observer. Hence we need to introduce a parameter
in Quantum Mechanics which takes care of this "choice of information" rather than the
"loss of information", whose parameter in classical mechanics is entropy. Curie's Principle
is also an interesting way to represent symmetry requirements in a system: the symmetry
elements of the causes must be found in their effects, but the converse is not true; that is,
the effects can be more symmetric than their causes. The conditions for Curie's Principle
to hold are:
1. The causal connection must be valid.
2. Cause and effect, and their respective symmetries, must be well defined.
There must be a sufficient reason for the Measurement Problem to occur, and by
Leibniz's Principle of Sufficient Reason (PSR) we can conclude that there must be some
symmetry-breaking phenomenon occurring as the reason for the asymmetry in the system,
thus leading to the Measurement Problem. It is a matter of further research to find
the precise way in which symmetry breaks in the system. What type of symmetry exists? Is it
a local or a global symmetry that breaks in the Reduction (R) process of the wave function,
as it is called in Quantum Mechanics parlance? Introducing symmetry into the
Reduction process in Quantum Mechanics is a novel idea and can shed some more light on
the reason for, and the precise process by which, wave-function collapse occurs. This may
fundamentally give the reason for the Heisenberg Uncertainty Principle. It is
yet to be seen whether it is the Uncertainty Principle that is more fundamental, or the
symmetry breaking occurring in the system.
Basically there are two ways to introduce the effect of correlation in an EPR situation
into the definition of the wave function of the quantum system, namely:
1. In the form of the Schrödinger time-dependent wave equation,
iħ(∂Ψ/∂t) = HΨ,
we can introduce the correlation effect by modifying the Hamiltonian (H) of
the system, introducing a potential term in the expression for H while
keeping the wave function (Ψ) as defined by the Expansion Theorem.
2. In the second case we can keep the Hamiltonian (H) as it is and change the definition
of the wave function (Ψ) by introducing a term to account for the correlation effect of
EPR states.
We have attempted the second case in our formalism, accounting for correlation
in the definition of the wave function (Ψ) of the system. Both approaches are new
to the scene and have not previously been attempted for this type of situation; hence the
application of the first case is an area of further research in this formalism.
Eigenstates are essentially entangled or correlated in an EPR situation, and since EPR
implies non-local communication and an apparent breakdown of the Heisenberg Uncertainty
Principle, the linearity of the Linear Superposition Principle gets violated: different
eigenstates start interfering with each other, leading to the introduction of the additional
correlation term in the definition of the wave function (Ψ).
We find from the above two possible scenarios that EPR correlation can be incorporated
into the Schrödinger time-dependent wave equation by modifying the Hamiltonian and
introducing a term due to correlation, i.e. the word equation can be written as:
Total Hamiltonian = Unperturbed Hamiltonian + Energy due to EPR correlation.
Thus this can be one of the attempted explanations for introducing correlation into the
picture. In the second case we can leave the Hamiltonian without the "energy due to EPR
correlation" term and instead modify the wave function (Ψ) in a non-linear way,
introducing an additional term to account for EPR correlation, as has already been done in
previous sections.
It is a well-known fact that though Quantum Mechanics gives correct mathematical
results for quantum phenomena, its interpretation is very peculiar and is still not complete
to date, because of paradoxes like the Schrödinger Cat paradox, the Einstein–Podolsky–Rosen
(EPR) paradox, Wave Function Collapse and the Measurement Problem. Here we attempt
to give a new direction of thinking towards a new formulation of the fundamental principle
of uncertainty in Quantum Mechanics, since all the peculiarities of Quantum Mechanics
derive from the Uncertainty Principle. We describe the two mechanics we know of and
predict the possibility of a third one:
1. Classical Mechanics: "Both position (x) and momentum (p) can be measured to any
degree of accuracy."
2. Quantum Mechanics: "Either position (x) or momentum (p), but not both
simultaneously, can be measured to any degree of accuracy."
3. Here we give a new line of thinking to the above formalisms by saying that:
“Neither position (x) nor momentum (p) to any degree of accuracy can be measured.”
This is a very interesting statement, since in case 1 above we have determinism in the
definition of the system. In case 2 we have probability, but still limited probability, since
one of the parameters can still be measured to a desired degree of accuracy. But in case 3
we have brought in complete indeterminism, in which both variables are completely
indeterministic. This formalism requires new definitions of the Schrödinger Wave Equation,
the Uncertainty Principle, etc. It may be that Quantum Mechanics is a special case of
this third mechanics, as we move from complete indeterminism, to limited indeterminism
(Quantum Mechanics), to complete determinism (Classical Mechanics). This concept of
complete indeterminism may be related in some way to the consciousness of the observer,
which is a hot area of research in Quantum Mechanics these days. This last proposal was
given by Stephen Hawking as a new direction of thinking on the modification of Quantum
Mechanics right at the fundamental level.
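The quantum case (case 2 above) can be made concrete numerically. A short check, assuming a Gaussian wave packet and units with ħ = 1 (the grid size and width σ are arbitrary illustrative choices), confirms the Heisenberg bound Δx·Δp ≥ 1/2 that the proposed third mechanics would have to replace:

```python
# Numerical check (illustrative; hbar = 1, grid and width chosen arbitrarily)
# that a Gaussian wave packet saturates the Heisenberg bound dx * dp = 1/2.
import numpy as np

N = 4096
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]
sigma = 1.3
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize in position space

mean_x = np.sum(x * np.abs(psi)**2) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * np.abs(psi)**2) * dx)

k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)        # momentum grid (p = k here)
dk = 2.0 * np.pi / (N * dx)
phi = np.fft.fft(psi)
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)      # normalize in momentum space

mean_k = np.sum(k * np.abs(phi)**2) * dk
delta_p = np.sqrt(np.sum((k - mean_k)**2 * np.abs(phi)**2) * dk)

print(delta_x, delta_p, delta_x * delta_p)       # product is close to 0.5
```

For a Gaussian packet the product Δx·Δp comes out at the minimum value 1/2, which is why the Gaussian is the standard benchmark for this bound.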
The foundations of the subject of quantum computation are now well established, but
everything else required for its future growth is under exploration. That covers quantum
algorithms, logic-gate operations, error correction, understanding the dynamics and control
of decoherence, atomic-scale technology and worthwhile applications. The reversibility of
quantum computation may help in solving NP problems, which are easy in one direction
but hard in the opposite sense. Global minimization problems may benefit from
interference effects (as seen in Fermat's principle in wave mechanics). Simulated annealing
methods may improve due to quantum tunneling through barriers. The powerful properties
of complex numbers (analytic functions, conformal mappings) may provide new
algorithms. Theoretical tools for handling many-body quantum entanglement are not well
developed; their improved characterization may produce better implementations of
quantum logic gates and possibilities to correct correlated errors.
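The interference effects mentioned above can be seen in miniature with plain linear algebra. This sketch (assuming nothing beyond NumPy) shows the two computational paths into |1> cancelling when a Hadamard gate is applied twice to |0>:

```python
# Minimal interference demo (plain NumPy, no quantum library assumed):
# applying the Hadamard gate twice to |0> makes the two computational paths
# into |1> cancel, so the state returns to |0> with certainty.
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard gate
ket0 = np.array([1.0, 0.0])

after_one = H @ ket0       # equal superposition, probabilities 1/2 and 1/2
after_two = H @ after_one  # amplitudes for |1>: +1/2 and -1/2, which cancel

print(after_one**2)        # measurement probabilities after one gate
print(after_two)           # back to |0>: amplitudes [1, 0]
```

This destructive cancellation of unwanted amplitudes is the basic resource that quantum algorithms exploit.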
Though decoherence can be described as an effective process, its dynamics is not
understood; an attempt has been made in the present project work, in the form of a
symmetry-breaking argument and the need for an entropy-like parameter or function, to
account for irreversibility in the system. To be able to control decoherence, one should be
able to figure out the eigenstates favored by the environment in a given setup. The
dynamics of the measurement process is likewise not fully understood, though an attempt
is also made in this regard in this project. Measurement is just described as a non-unitary
projection operator in an
otherwise unitary quantum theory. Ultimately both the system and the observer are made
up of quantum building blocks, and a unified quantum description of both measurement
and decoherence must be developed. Apart from theoretical gain, it would help in
improving the detectors that operate close to the quantum limit of observation. For the
physicist, it is of great interest to study the transition from classical to quantum regime.
Enlargement of the system from microscopic to mesoscopic levels, and reduction of the
environment from macroscopic to mesoscopic levels, can take us there. If there is
something beyond quantum theory lurking there, it would be noticed in the struggle to
make quantum devices. We may discover new limitations of quantum theory while trying to
conquer decoherence. Theoretical developments alone will be no good without a matching
technology. Nowadays, the race for miniaturization of electronic circuits is not too far
from the quantum reality of nature. To devise new types of instruments, we must change
our viewpoint from the scientific to the technological: quantum effects are not only for
observation; we should learn how to control them for practical use. The future is not
foreseen yet, but it is definitely promising.
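Decoherence as an effective process can be illustrated with a toy pure-dephasing model; the exponential decay rate gamma below is an assumed phenomenological parameter, not something derived from the symmetry-breaking argument of this project:

```python
# Toy pure-dephasing model of decoherence (the decay rate gamma is an assumed
# phenomenological parameter, not derived from the report's formalism): the
# off-diagonal coherences of the density matrix decay while populations stay.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2.0)   # |+> = (|0> + |1>)/sqrt(2)
rho0 = np.outer(plus, plus).astype(complex)  # pure-state density matrix

def dephase(rho, gamma, t):
    """Damp the off-diagonal elements by exp(-gamma * t)."""
    out = rho.copy()
    decay = np.exp(-gamma * t)
    out[0, 1] *= decay
    out[1, 0] *= decay
    return out

rho_t = dephase(rho0, gamma=1.0, t=3.0)

# Trace (total probability) is preserved, but purity Tr(rho^2) falls from 1
# toward 1/2 as the superposition degrades into a classical mixture.
purity = np.trace(rho_t @ rho_t).real
print(np.trace(rho_t).real, purity)
```

The populations never change in this model; only the coherence between |0> and |1> is lost, which is exactly the environment-selected behavior that a control scheme for decoherence would need to suppress.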
REFERENCES