Is Mathematics Mostly Chaos or Mostly Order?

Last winter, at a meeting in the Finnish wilderness high above the Arctic Circle, a group of mathematicians gathered to contemplate the fate of a mathematical universe.

It was minus 20 degrees Celsius, and while some went cross-country skiing, Juan Aguilera, a set theorist at the Vienna University of Technology, preferred to linger in the cafeteria, tearing pieces of pulla pastry and debating the nature of two new notions of infinity. The consequences, Aguilera believed, were grand. “We just don’t know what they are yet,” he said.

Infinity, counterintuitively, comes in many shapes and sizes. This has been known since the 1870s, when the German mathematician Georg Cantor proved that the set of real numbers (all the numbers on the number line) is larger than the set of whole numbers, even though both sets are infinite. (The short version: No matter how you try to match real numbers to whole numbers, you’ll always end up with more real numbers.) The two sets, Cantor argued, represented entirely different flavors of infinity and therefore had profoundly different properties.
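
The "short version" above can be made precise. Here is a compact sketch of Cantor's diagonal argument in standard notation, added for readers who want the actual mechanism:

```latex
% Cantor's diagonal argument, sketched.
% Suppose the reals in (0,1) could be listed r_1, r_2, r_3, ...,
% with decimal expansions r_i = 0.d_{i1} d_{i2} d_{i3} ...
% Define a new number x = 0.e_1 e_2 e_3 ... by changing every diagonal digit:
\[
  e_i =
  \begin{cases}
    5 & \text{if } d_{ii} \neq 5,\\
    6 & \text{if } d_{ii} = 5.
  \end{cases}
\]
% Then x differs from every r_i in the i-th digit, so x is not on the list.
% No enumeration of the reals can be complete: the real numbers strictly
% outnumber the whole numbers, even though both sets are infinite.
```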

Post-Alcubierre Warp-Drives

Researchers are actively exploring and revising the concept of the Alcubierre warp drive, as well as alternative approaches, to potentially make superluminal travel feasible with reduced energy requirements and advanced technologies.

Questions to inspire discussion.

Practical Warp Drive Concepts.

🚀 Q: What is the Alcubierre warp drive? A: The Alcubierre warp drive (1994) is a superluminal travel concept within general relativity, using a warp bubble that contracts space in front and expands behind the spacecraft.
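
For reference, the mathematics behind this picture is the Alcubierre line element; a sketch of its standard form, in units with c = 1:

```latex
% Alcubierre (1994) warp-drive metric:
\[
  ds^2 = -dt^2 + \bigl(dx - v_s(t)\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2
\]
% x_s(t): trajectory of the bubble center, v_s = dx_s/dt its speed,
% r_s: distance from the bubble center,
% f(r_s): smooth "top-hat" shaping function, ~1 inside the bubble, ~0 far away.
% Space contracts ahead of the bubble and expands behind it, so the ship
% inside moves inertially while the bubble itself can travel superluminally.
```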

🌌 Q: How does José Natário’s warp drive differ from Alcubierre’s? A: Natário’s warp drive (2001) describes the warp bubble as a soliton and vector field, making it harder to visualize but potentially more mathematically robust.

🔬 Q: What is unique about Chris Van Den Broeck’s warp drive? A: Van Den Broeck’s warp drive (1999) uses a nested warp field, creating a larger interior than exterior, similar to a TARDIS, while remaining a physical solution within general relativity.

Energy Requirements and Solutions.

💡 Q: How do Erik Lentz’s hyperfast positive-energy warp drives work? A: Lentz’s warp drives (2020) are solitons capable of superluminal travel using purely positive energy densities, reopening discussions on conventional physics-based superluminal mechanisms.

Japanese Journal of Mathematics

Information geometry has emerged from the study of the invariant structure in families of probability distributions. This invariance uniquely determines a second-order symmetric tensor g and a third-order symmetric tensor T in a manifold of probability distributions. The pair of tensors (g, T) defines a Riemannian metric and a pair of dual affine connections that together preserve the metric. Information geometry involves studying a Riemannian manifold having a pair of dual affine connections. Such a structure also arises from an asymmetric divergence function and from affine differential geometry. A dually flat Riemannian manifold is particularly useful for applications, because a generalized Pythagorean theorem and a projection theorem hold in it. The Wasserstein distance gives another important geometry on probability distributions, which is non-invariant but responsible for the metric properties of the sample space. I attempt to construct the information geometry of the entropy-regularized Wasserstein distance.
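
For readers unfamiliar with the terminology, the generalized Pythagorean theorem referred to above takes the following form; this is the standard statement, sketched here for context:

```latex
% Generalized Pythagorean theorem in a dually flat manifold.
% If the m-geodesic from p to q is orthogonal (in the metric g)
% to the e-geodesic from q to r, the canonical divergence satisfies
\[
  D(p \,\|\, r) = D(p \,\|\, q) + D(q \,\|\, r).
\]
% For the manifold of probability distributions with the Fisher metric,
% the canonical divergence is the Kullback–Leibler divergence
\[
  D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx ,
\]
% and the projection theorem follows: the point of a flat submanifold
% closest to p in divergence is the geodesic projection of p onto it.
```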

Can space and time emerge from simple rules? Wolfram thinks so

Stephen Wolfram joins Brian Greene to explore the computational basis of space, time, general relativity, quantum mechanics, and reality itself.

This program is part of the Big Ideas series, supported by the John Templeton Foundation.

Participant: Stephen Wolfram.
Moderator: Brian Greene.

00:00 — Introduction.
01:23 — Unifying Fundamental Science with Advanced Mathematical Software.
13:21 — Is It Possible to Prove a System’s Computational Reducibility?
24:30 — Uncovering Einstein’s Equations Through Software Models.
37:00 — Is Connecting Space and Time a Mistake?
49:15 — Generating Quantum Mechanics Through a Mathematical Network.
01:06:40 — Can Graph Theory Create a Black Hole?
01:14:47 — The Computational Limits of Being an Observer.
01:25:54 — The Elusive Nature of Particles in Quantum Field Theory.
01:37:45 — Is Mass a Discoverable Concept Within Graph Space?
01:48:50 — The Mystery of the Number Three: Why Do We Have Three Spatial Dimensions?
01:59:15 — Unraveling the Mystery of Hawking Radiation.
02:10:15 — Could You Ever Imagine a Different Career Path?
02:16:45 — Credits.

VISIT our Website: http://www.worldsciencefestival.com

THE COMPUTATIONAL UNIVERSE: MODELLING COMPLEXITY — Stephen Wolfram PHD #52

Does the use of computer models in physics change the way we see the universe? How far reaching are the implications of computation irreducibility? Are observer limitations key to the way we conceive the laws of physics?
In this episode we get into the difficult yet beautiful topic of trying to model complex systems like nature and the universe computationally, and how, beyond a low level of complexity, all systems seem to become equally unpredictable. We have a whole episode in this series on Complexity Theory in biology and nature, but today we’re going to be taking a more physics and computational slant.
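
As a minimal illustration of that unpredictability (an illustrative sketch, not code from the episode), here is Wolfram’s elementary cellular automaton Rule 30 in Python: the update rule is trivial, yet the pattern it produces resists any known shortcut, so the only apparent way to learn step n is to compute all n steps.

```python
# Rule 30: a one-dimensional cellular automaton where each cell's next
# value depends only on itself and its two neighbors. Despite this
# trivial rule, the output appears computationally irreducible.

def rule30_step(cells: list[int]) -> list[int]:
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])  # left XOR (center OR right)
        for i in range(n)
    ]

def run(width: int = 63, steps: int = 30) -> None:
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)

if __name__ == "__main__":
    run()
```
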
Another key element of this episode is Observer Theory: we have to take into account the perceptual limitations of our species’ context and perspective if we want to understand why the laws of physics we’ve worked out from our environment are not, and cannot be, fixed and universal, but will always be perspective-bound, within a multitude of alternative branches of possible reality with alternative possible computational rules. We’ll then connect this multi-computational approach to a reinterpretation of entropy and the second law of thermodynamics.
The fact that my guest has been building on these ideas for over 40 years, creating computer language and AI solutions to map his deep theories of computational physics, makes him the ideal guest to help us unpack this topic. He is physicist, computer scientist and tech entrepreneur Stephen Wolfram. In 1987 he left academia at Caltech and Princeton behind and devoted himself to his computer science intuitions at his company Wolfram Research. He’s published many blog articles about his ideas and written many influential books, including “A New Kind of Science”, more recently “A Project to Find the Fundamental Theory of Physics” and “Computer Modelling and Simulation of Dynamic Systems”, and, just out in 2023, “The Second Law”, about the mystery of entropy.
One of the most wonderful things about Stephen Wolfram is that, despite his visionary insight into reality, he really loves to be ‘in the moment’ with his thinking, engaging in Socratic dialogue, staying open to perspectives other than his own, and allowing his old ideas to be updated if something comes up that contradicts them. Given how quickly the fields of physics and computer science are evolving, his humility and conceptual flexibility give us a fine example of how we should update how we do science as we go.

What we discuss:
00:00 Intro.
07:45 The history of scientific models of reality: structural, mathematical and computational.
14:40 Late 2010s: a shift to computational models of systems.
20:20 The Principle of Computational Equivalence (PCE).
24:45 Computational Irreducibility — the process that means you can’t predict the outcome in advance.
27:50 The importance of the passage of time to Consciousness.
28:45 Irreducibility and the limits of science.
33:30 Gödel’s Incompleteness Theorem meets Computational Irreducibility.
42:20 Observer Theory and the Wolfram Physics Project.
45:30 Modelling the relations between discrete units of Space: Hypergraphs.
47:30 The progress of time is the computational process that is updating the network of relations.
50:30 We ‘make’ space.
51:30 Branchial Space — different quantum histories of the world, branching and merging.
54:30 We perceive space and matter to be continuous because we’re very big compared to the discrete elements.
56:30 Branchial Space VS Many Worlds interpretation.
58:50 Rulial Space: All possible rules of all possible interconnected branches.
01:07:30 Wolfram Language bridges human thinking about their perspective with what is computationally possible.
01:11:00 Computational intelligence is everywhere in the universe, e.g. the weather.
01:19:30 The Measurement problem of QM meets computational irreducibility and observer theory.
01:20:30 Entanglement explained — common ancestors in branchial space.
01:32:40 Inviting Stephen back for a separate episode on AI safety, safety solutions and applications for science, as we didn’t have time.
01:37:30 At the molecular level the laws of physics are reversible.
01:40:30 What looks random to us in entropy is actually full of data.
01:45:30 Entropy defined in computational terms.
01:50:30 If we ever overcame our finite minds, there would be no coherent concept of existence.
01:51:30 Parallels between modern physics and ancient eastern mysticism and cosmology.
01:55:30 Reductionism in an irreducible world: saying a lot from very little input.

References:
“The Second Law: Resolving the Mystery of the Second Law of Thermodynamics”, Stephen Wolfram.

“A New Kind of Science”, Stephen Wolfram.

Observer Theory article, Stephen Wolfram.

Engineers develop blueprint for robot swarms, mimicking bee and ant construction

Bees, ants and termites don’t need blueprints. They may have queens, but none of these species breed architects or construction managers. Each insect worker, or drone, simply responds to cues like warmth or the presence or absence of building material. Unlike human manufacturing, the grand design emerges simply from the collective action of the drones—no central planning required.

Now, researchers at Penn Engineering have developed mathematical rules that allow virtual swarms of tiny robots to do the same. In simulations, the robots built honeycomb-like structures without ever following—or even being able to comprehend—a plan.

“Though what we have done is just a first step, it is a new strategy that could ultimately lead to a new paradigm in manufacturing,” says Jordan Raney, Associate Professor in Mechanical Engineering and Applied Mechanics (MEAM), and the co-senior author of a new paper in Science Advances. “Even 3D printers work step by step, resulting in what we call a brittle process. One simple mistake, like a clogged nozzle, ruins the entire process.”
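
To give a flavor of the idea, here is a toy stigmergy-style simulation in Python; this is an illustrative sketch of local-rule construction, not the actual rules from the Science Advances paper. Each agent deposits material based only on what it senses in its immediate neighborhood, and structure emerges without any global plan:

```python
# Toy stigmergic builder: agents wander a grid and deposit material
# using only a local rule (how many neighboring cells are filled).
# No agent sees the global structure; the pattern emerges collectively.
import random

SIZE, AGENTS, STEPS = 40, 30, 20000
grid = [[0] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 1  # seed deposit to nucleate growth

def filled_neighbors(x: int, y: int) -> int:
    """Count filled cells in the 8-cell neighborhood (toroidal grid)."""
    return sum(
        grid[(y + dy) % SIZE][(x + dx) % SIZE]
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )

agents = [[random.randrange(SIZE), random.randrange(SIZE)] for _ in range(AGENTS)]
for _ in range(STEPS):
    for a in agents:
        a[0] = (a[0] + random.choice((-1, 0, 1))) % SIZE  # random walk
        a[1] = (a[1] + random.choice((-1, 0, 1))) % SIZE
        # Local rule: deposit only next to existing material, and avoid
        # overcrowded spots, leaving open voids in the grown structure.
        if grid[a[1]][a[0]] == 0 and 1 <= filled_neighbors(a[0], a[1]) <= 2:
            grid[a[1]][a[0]] = 1

print("\n".join("".join("#" if c else "." for c in row) for row in grid))
```
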

Information Processing via Human Soft Tissue: Soft Tissue Reservoir Computing

Physical reservoir computing refers to the concept of using nonlinear physical systems as computational resources to achieve complex information processing. This approach exploits the intrinsic properties of physical systems, such as their nonlinearity and memory, to perform computational tasks. Soft biological tissues possess characteristics such as stress-strain nonlinearity and viscoelasticity that satisfy the requirements of physical reservoir computing. This study evaluates the potential of human soft biological tissues as physical reservoirs for information processing. In particular, it determines the feasibility of using the inherent dynamics of human soft tissues as a physical reservoir to emulate nonlinear dynamic systems. In this concept, the deformation field within the muscle, obtained from ultrasound images, represents the state of the reservoir. The findings indicate that the dynamics of human soft tissue have a positive impact on the computational task of emulating nonlinear dynamic systems. Specifically, our system outperformed a simple linear regression (LR) model for the task. Simple LR models based on raw inputs, which do not account for the dynamics of soft tissue, fail to emulate the target dynamical system (relative error on the order of $10^{-2}$). By contrast, the emulation results obtained using our system closely approximated the target dynamics (relative error on the order of $10^{-3}$). These results suggest that the soft tissue dynamics contribute to the successful emulation of the nonlinear equation. This study suggests that human soft tissues can be used as a potential computational resource. Soft tissues are found throughout the human body. Therefore, if computational processing is delegated to biological tissues, it could lead to a distributed computation system for human-assisted devices.
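
For readers unfamiliar with the paradigm, here is a minimal echo-state-style reservoir in NumPy. This is a generic software analogue, not the paper's setup: there the reservoir is the physical tissue itself, read out via ultrasound-derived deformation fields, and only the linear readout is trained. The target system below is an assumed NARMA-like toy, chosen only for illustration:

```python
# Minimal echo state network: a fixed random recurrent "reservoir"
# supplies nonlinear memory; only the linear readout is trained.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                    # reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, 1))      # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)  # nonlinear state update
        states.append(x.copy())
    return np.array(states)

# Task: emulate a nonlinear dynamical system (NARMA-like toy target).
T = 2000
u = rng.uniform(0, 0.5, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.4 * y[t - 1] + 0.4 * y[t - 1] * u[t - 1] + 0.6 * u[t - 1] ** 3 + 0.1

X = run_reservoir(u)
W_out = np.linalg.lstsq(X[:1500], y[:1500], rcond=None)[0]  # train readout only
pred = X[1500:] @ W_out
rel_err = np.linalg.norm(pred - y[1500:]) / np.linalg.norm(y[1500:])
print(f"relative error: {rel_err:.4f}")
```
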

Universal framework enables custom 3D point spread functions for advanced imaging

Engineers at the UCLA Samueli School of Engineering have introduced a universal framework for point spread function (PSF) engineering, enabling the synthesis of arbitrary, spatially varying 3D PSFs using diffractive optical processors. The research is published in the journal Light: Science & Applications.

This framework allows for advanced imaging capabilities—such as snapshot 3D multispectral imaging—without the need for spectral filters, axial scanning, or digital reconstruction.

PSF engineering plays a significant role in modern microscopy, spectroscopy and computational imaging. Conventional techniques typically employ phase masks at the pupil plane, which constrain the complexity and mathematical representation of the achievable PSF structures.
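
For context, the role of the PSF can be written as a forward model; this is the standard imaging formalism, not notation from the paper itself:

```latex
% Forward model of incoherent imaging with a 3D point spread function.
% O: object intensity, h: PSF, I: recorded 2D image.
\[
  I(x, y) \;=\; \int O(x', y', z)\; h(x, y;\, x', y', z)\; dx'\, dy'\, dz
\]
% A shift-invariant (isoplanatic) system reduces to h(x - x', y - y'; z).
% "Spatially varying 3D PSF engineering" means synthesizing an essentially
% arbitrary kernel h(x, y; x', y', z), which is what the diffractive
% optical processors in this framework are designed to realize.
```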

Decoding high energy physics with AI and machine learning

In the world of particle physics, where scientists unravel the mysteries of the universe, artificial intelligence (AI) and machine learning (ML) are making waves by deepening our understanding of the most fundamental particles. Central to this exploration are parton distribution functions (PDFs). These complex mathematical models are crucial for predicting outcomes of high energy physics experiments that test the Standard Model of particle physics.
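
To see where PDFs enter those predictions, here is the textbook collinear factorization formula for a hadron-collider cross section:

```latex
% Collinear factorization for a process p p -> X.
% f_a(x, Q^2): parton distribution function, the probability density of
% finding parton a carrying momentum fraction x of the proton at scale Q.
\[
  \sigma(pp \to X) \;=\; \sum_{a,b} \int_0^1 dx_1 \int_0^1 dx_2 \;
  f_a(x_1, Q^2)\, f_b(x_2, Q^2)\;
  \hat{\sigma}_{ab \to X}(x_1, x_2, Q^2)
\]
% The partonic cross section \hat{\sigma} is computable in perturbation
% theory; the PDFs are nonperturbative and must be fit to data, which is
% where ML-based parameterizations come in.
```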

Understanding quantum computing’s most troubling problem—the barren plateau

For the past six years, Los Alamos National Laboratory has led the world in trying to understand one of the most frustrating barriers facing variational quantum computing: the barren plateau.

“Imagine a landscape of peaks and valleys,” said Marco Cerezo, the Los Alamos team’s lead scientist. “When optimizing a variational, or parameterized, quantum circuit, one needs to tune a series of knobs that control the solution quality and move you in the landscape. Here, a peak represents a bad solution and a valley represents a good solution. But when researchers develop algorithms, they sometimes find their model has stalled and can neither climb nor descend. It’s stuck in this space we call a barren plateau.”

For these quantum computing methods, barren plateaus can be mathematical dead ends, preventing their implementation in large-scale realistic problems. Scientists have spent a lot of time and resources developing quantum algorithms only to find that they sometimes inexplicably stall. Understanding when and why barren plateaus arise has been a problem that has taken the community years to solve.
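
The phenomenon has a precise mathematical signature, sketched here from the original characterization (McClean et al., Nature Communications, 2018): for sufficiently deep, sufficiently random parameterized circuits, the cost-function gradient concentrates around zero exponentially fast in the number of qubits.

```latex
% Barren plateau signature for a cost C(\theta) of an n-qubit
% parameterized circuit whose layers approximate a 2-design:
\[
  \mathbb{E}_{\theta}\!\left[\partial_{\theta_k} C\right] = 0,
  \qquad
  \operatorname{Var}_{\theta}\!\left[\partial_{\theta_k} C\right]
  \;\in\; \mathcal{O}\!\bigl(b^{-n}\bigr), \quad b > 1 .
\]
% The landscape is then exponentially flat: estimating any gradient
% direction requires exponentially many measurement shots, which is
% why barren plateaus block training at large scale.
```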