Notation, language, and rigor
Main article: Mathematical notation
Leonhard Euler created and popularized much of the mathematical notation used today.
Most of the mathematical notation in use today was not invented until the 16th century.[64] Before
that, mathematics was written out in words, limiting mathematical discovery.[65] Euler (1707–1783)
was responsible for many of the notations in use today. Modern notation makes mathematics
much easier for the professional, but beginners often find it daunting. According to Barbara
Oakley, this can be attributed to the fact that mathematical ideas are both more abstract and
more encrypted than those of natural language.[66] Unlike natural language, where people can
often equate a word (such as cow) with the physical object it corresponds to, mathematical
symbols are abstract, lacking any physical analog.[67] Mathematical symbols are also more highly
encrypted than regular words, meaning a single symbol can encode a number of different
operations or ideas.[68]
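As a small illustration of this compression, a single symbol such as the summation sign packs an indexed family, a repeated operation, and its bounds into one mark:

```latex
% One symbol, \sum, encodes an entire iterated operation --
% "add the numbers 1 through n" -- together with its closed form.
\[
  \sum_{i=1}^{n} i \;=\; 1 + 2 + \cdots + n \;=\; \frac{n(n+1)}{2}
\]
```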
Mathematical language can be difficult to understand for beginners because even common
terms, such as or and only, have a more precise meaning than they have in everyday speech,
and other terms such as open and field refer to specific mathematical ideas, not covered by their
layman's meanings. Mathematical language also includes many technical terms such
as homeomorphism and integrable that have no meaning outside of mathematics. Additionally,
shorthand phrases such as iff for "if and only if" belong to mathematical jargon. There is a reason
for special notation and technical vocabulary: mathematics requires more precision than
everyday speech. Mathematicians refer to this precision of language and logic as "rigor".
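A small sketch (in Python, chosen purely for illustration) of one such precise meaning: the mathematical "or" is inclusive, so "p or q" is true even when both p and q hold, unlike the often exclusive everyday reading ("coffee or tea").

```python
# Truth table for the inclusive "or" used in mathematics and logic.
for p in (False, True):
    for q in (False, True):
        print(f"p={p!s:5}  q={q!s:5}  p or q={p or q}")
# Note the last row: p and q both true still makes "p or q" true.
```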
Mathematical proof is fundamentally a matter of rigor. Mathematicians want their theorems to
follow from axioms by means of systematic reasoning. This is to avoid mistaken "theorems",
based on fallible intuitions, of which many instances have occurred in the history of the subject.[b]
The level of rigor expected in mathematics has varied over time: the Greeks expected detailed
arguments, but at the time of Isaac Newton the methods employed were less rigorous. Problems
inherent in the definitions used by Newton would lead to a resurgence of careful analysis and
formal proof in the 19th century. Misunderstanding of rigor is a cause of some of the common
misconceptions about mathematics. Today, mathematicians continue to argue among themselves
about computer-assisted proofs. Since large computations are hard to verify, such proofs may be
erroneous if the computer program used is itself erroneous.[c][69] On the other hand, proof
assistants allow verifying all details that cannot be given in a hand-written proof, and provide
certainty of the correctness of long proofs such as that of the Feit–Thompson theorem.[d]
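As a toy illustration of what a proof assistant checks, here is a complete machine-verified proof in Lean 4 (a minimal sketch; real formalizations such as that of the Feit–Thompson theorem run to tens of thousands of lines):

```lean
-- A trivial theorem, stated and proved so that the Lean kernel
-- mechanically verifies every inference step.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```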
Axioms in traditional thought were "self-evident truths", but that conception is problematic.[70] At a
formal level, an axiom is just a string of symbols, which has an intrinsic meaning only in the
context of all derivable formulas of an axiomatic system. It was the goal of Hilbert's program to
put all of mathematics on a firm axiomatic basis, but according to Gödel's incompleteness
theorem every (sufficiently powerful) axiomatic system has undecidable formulas; and so a
final axiomatization of mathematics is impossible. Nonetheless, mathematics is often imagined to
be (as far as its formal content) nothing but set theory in some axiomatization, in the sense that
every mathematical statement or proof could be cast into formulas within set theory.[71]
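One concrete instance of this reduction: the natural numbers themselves can be encoded as pure sets via the von Neumann construction, in which each number is the set of its predecessors:

```latex
% Von Neumann encoding of the natural numbers as sets:
\[
  0 = \varnothing,\qquad
  1 = \{\varnothing\} = \{0\},\qquad
  2 = \{\varnothing,\{\varnothing\}\} = \{0,1\},\qquad
  n + 1 = n \cup \{n\}
\]
```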
Fields of mathematics
See also: Areas of mathematics and Glossary of areas of mathematics
The abacus is a simple calculating tool used since ancient times.
Mathematics can, broadly speaking, be subdivided into the study of quantity, structure, space,
and change (i.e. arithmetic, algebra, geometry, and analysis). In addition to these main concerns,
there are also subdivisions dedicated to exploring links from the heart of mathematics to other
fields: to logic, to set theory (foundations), to the empirical mathematics of the various sciences
(applied mathematics), and more recently to the rigorous study of uncertainty. While some areas
might seem unrelated, the Langlands program has found connections between areas previously
thought unconnected, such as Galois groups, Riemann surfaces and number theory.
Discrete mathematics conventionally groups together the fields of mathematics which study
mathematical structures that are fundamentally discrete rather than continuous.
Foundations and philosophy
In order to clarify the foundations of mathematics, the fields of mathematical logic and set
theory were developed. Mathematical logic includes the mathematical study of logic and the
applications of formal logic to other areas of mathematics; set theory is the branch of
mathematics that studies sets or collections of objects. The phrase "crisis of foundations"
describes the search for a rigorous foundation for mathematics that took place from
approximately 1900 to 1930.[72] Some disagreement about the foundations of mathematics
continues to the present day. The crisis of foundations was stimulated by a number of
controversies at the time, including the controversy over Cantor's set theory and the Brouwer–
Hilbert controversy.
Mathematical logic is concerned with setting mathematics within a rigorous axiomatic framework,
and studying the implications of such a framework. As such, it is home to Gödel's
incompleteness theorems which (informally) imply that any effective formal system that contains
basic arithmetic, if sound (meaning that all theorems that can be proved are true), is
necessarily incomplete (meaning that there are true theorems which cannot be proved in that
system). Whatever finite collection of number-theoretical axioms is taken as a foundation, Gödel
showed how to construct a formal statement that is a true number-theoretical fact, but which
does not follow from those axioms. Therefore, no formal system is a complete axiomatization of
full number theory. Modern logic is divided into recursion theory, model theory, and proof theory,
and is closely linked to theoretical computer science, as well as to category theory. In the
context of recursion theory, the impossibility of a full axiomatization of number theory can also be
formally demonstrated as a consequence of the MRDP theorem.
Theoretical computer science includes computability theory, computational complexity theory,
and information theory. Computability theory examines the limitations of various theoretical
models of the computer, including the most well-known model—the Turing machine. Complexity
theory is the study of tractability by computer; some problems, although theoretically solvable by
computer, are so expensive in terms of time or space that solving them is likely to remain
practically infeasible, even with the rapid advancement of computer hardware. A famous
problem is the "P = NP?" problem, one of the Millennium Prize Problems.[73] Finally, information
theory is concerned with the amount of data that can be stored on a given medium, and hence
deals with concepts such as compression and entropy.
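As a small sketch of the entropy concept (in Python, chosen purely for illustration): Shannon entropy bounds how far a message can be compressed. A constant message carries no information per symbol, while one using all byte values equally is incompressible.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    # Sum -p * log2(p) over the observed symbol frequencies.
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(b"aaaaaaaa"))        # 0.0: fully compressible
print(shannon_entropy(bytes(range(256))))  # 8.0: incompressible
```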