
The entropy and the halting probability problem

The third law of thermodynamics states:
It is impossible for any procedure to lead to the isotherm \(T = 0\) in a finite number of steps.
The theorem, discovered by Walther Nernst, is equivalent to saying:
It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its zero point value in a finite number of operations.
In classical thermodynamics we can define the variation of entropy \(\Delta S\) with the following equation: \[\Delta S = \frac{\Delta Q}{T}\] where \(\Delta Q\) is the heat exchanged and \(T\) is the absolute temperature.
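As a minimal numerical sketch of this definition (the numbers are invented purely for illustration):

```python
# Entropy change for a reversible isothermal process: dS = dQ / T.
def entropy_change(delta_q, temperature):
    """Return the entropy variation (J/K) for heat delta_q (J)
    exchanged at constant absolute temperature (K)."""
    if temperature <= 0:
        raise ValueError("absolute temperature must be positive")
    return delta_q / temperature

# Example: 600 J of heat absorbed at 300 K.
print(entropy_change(600.0, 300.0))  # 2.0 J/K
```

Note that the guard on \(T \leq 0\) mirrors the third law: the formula itself breaks down at absolute zero.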

Idiosyncratic Thinking: a computer heuristics lecture

http://t.co/7JB3CPaQt9 #Feynman
Richard Feynman, winner of the 1965 Nobel Prize in Physics, gives us an insightful lecture about computer heuristics: how computers work, how they file information, how they handle data, and how they compute, in a finite amount of time, values of interest to human beings. These topics are essential to understanding which processes reduce the amount of work needed to solve a particular problem, giving computers speeds that can outmatch humans in certain fields, though they have not yet reached the complexity of human intelligence. Whether human thought is a series of fixed processes that could, in principle, be imitated by a computer is a major theme of this lecture, and Feynman, in his trademark teaching style, gives clear yet powerful answers for a field that has gone on to consume so much of our lives today.

Gods, philosophy and computers

by @ulaulaman http://t.co/Q3AODpvKAs #Godel #ontologicalproof #god #computer
The ontological argument for the existence of God was introduced for the first time by St. Anselm in 1078:
God, by definition, is that for which no greater can be conceived. God exists in the understanding. If God exists in the understanding, we could imagine Him to be greater by existing in reality. Therefore, God must exist.
There are a lot of philosophers, mathematicians and logicians who proposed their own ontological argument, for example Descartes, Leibniz and Frege, and also Kurt Gödel, who proposed the most formal ontological proof:
The proof was published in 1987 (Gödel died in 1978), and a lot of logicians have discussed it since. One of the latest papers about the argument is an arXiv preprint that led Anna Limind to write European Mathematicians ‘Prove’ the Existence of God, but the aim of the paper is to check the consistency of the proof, not the reality of the theorem (I think that the theorem is, simply, undecidable), and also to start a new discipline: computer-philosophy.
Indeed Benzmüller and Paleo developed an algorithm in order to use a computer to check the ontological proof. So the work:
(...) opens new perspectives for a computer-assisted theoretical philosophy. The critical discussion of the underlying concepts, definitions and axioms remains a human responsibility, but the computer can assist in building and checking rigorously correct logical arguments. In case of logico-philosophical disputes, the computer can check the disputing arguments and partially fulfill Leibniz' dictum: Calculemus
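For reference, the skeleton of the proof in Dana Scott's version, the one encoded by Benzmüller and Paleo, can be sketched as follows (my rendering, with $P$ for "positive property" and $G$ for "Godlike"): \begin{align*} \text{A1} &\colon P(\neg \varphi) \leftrightarrow \neg P(\varphi)\\ \text{A2} &\colon \left[ P(\varphi) \wedge \Box \forall x \left( \varphi(x) \rightarrow \psi(x) \right) \right] \rightarrow P(\psi)\\ \text{T1} &\colon P(\varphi) \rightarrow \Diamond \exists x\, \varphi(x)\\ \text{D1} &\colon G(x) \leftrightarrow \forall \varphi \left[ P(\varphi) \rightarrow \varphi(x) \right]\\ \text{A3} &\colon P(G)\\ \text{A4} &\colon P(\varphi) \rightarrow \Box P(\varphi)\\ \text{A5} &\colon P(\mathit{NE})\\ \text{T3} &\colon \Box \exists x\, G(x) \end{align*} where $\mathit{NE}$, necessary existence, is defined through the notion of essence; from the axioms the final theorem T3, that a Godlike being necessarily exists, follows in modal logic S5.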

Read also: Spiegel Online International
Christoph Benzmüller & Bruno Woltzenlogel Paleo (2013). Formalization, Mechanization and Automation of Gödel's Proof of God's Existence, arXiv:

When gaming is NP-hard

by @ulaulaman about #candycrush #bejeweled #shariki #nphard #computerscience
Shariki is a puzzle game developed by the Russian programmer Eugene Alemzhin in 1994. The rules are simple:
(...) matching three or more balls of the same color in line (vertical or horizontal). These balls then explode and new ones appear in their place.
The first clone of Shariki was Tetris Attack, a fusion of Shariki with the more famous Tetris, itself developed in the Soviet Union by Alexey Pajitnov. But the most famous clone is Bejeweled (2001) by PopCap Games, from which Candy Crush Saga is derived. This March, Toby Walsh and the Italian team of Luciano Gualà, Stefano Leucci and Emanuele Natale proved that Candy Crush and other similar games are NP-hard:
The twentieth century has seen the rise of a new type of video games targeted at a mass audience of "casual" gamers. Many of these games require the player to swap items in order to form matches of three and are collectively known as tile-matching match-three games. Among these, the most influential one is arguably Bejeweled in which the matched items (gems) pop and the above gems fall in their place. Bejeweled has been ported to many different platforms and influenced an incredible number of similar games. Very recently one of them, named Candy Crush Saga enjoyed a huge popularity and quickly went viral on social networks. We generalize this kind of games by only parameterizing the size of the board, while all the other elements (such as the rules or the number of gems) remain unchanged. Then, we prove that answering many natural questions regarding such games is actually NP-Hard. These questions include determining if the player can reach a certain score, play for a certain number of turns, and others.
The Italian team also realized a web-based implementation of their technique.
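To make the mechanic concrete, here is a minimal sketch (not the authors' reduction, just the basic match-three rule) that finds horizontal and vertical runs of three or more equal tiles on a board:

```python
# Find all runs of three or more equal tiles, horizontally or vertically.
def find_matches(board):
    """board: list of equal-length strings of tile symbols. Returns the
    set of (row, col) coordinates belonging to some match-three line."""
    rows, cols = len(board), len(board[0])
    matched = set()
    for r in range(rows):
        for c in range(cols):
            # horizontal run of three starting at (r, c)
            if c + 2 < cols and board[r][c] == board[r][c + 1] == board[r][c + 2]:
                matched.update({(r, c), (r, c + 1), (r, c + 2)})
            # vertical run of three starting at (r, c)
            if r + 2 < rows and board[r][c] == board[r + 1][c] == board[r + 2][c]:
                matched.update({(r, c), (r + 1, c), (r + 2, c)})
    return matched

board = ["ABC",
         "ABC",
         "XBY"]
print(sorted(find_matches(board)))  # the middle column of B's
```

Runs longer than three are covered automatically, since every length-3 window inside them is detected.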
Toby Walsh (2014). Candy Crush is NP-hard, arXiv:
Luciano Gualà, Stefano Leucci & Emanuele Natale (2014). Bejeweled, Candy Crush and other Match-Three Games are (NP-)Hard, arXiv:

Tetris is Hard, Even to Approximate

posted by @ulaulaman about #tetris #ComputerScience #NeuralNetwork
Before today I thought that Tetris was a really simple game...
In the popular computer game of Tetris, the player is given a sequence of tetromino pieces and must pack them into a rectangular gameboard initially occupied by a given configuration of filled squares; any completely filled row of the gameboard is cleared and all pieces above it drop by one row. We prove that in the offline version of Tetris, it is NP-complete to maximize the number of cleared rows, maximize the number of tetrises (quadruples of rows simultaneously filled and cleared), minimize the maximum height of an occupied square, or maximize the number of pieces placed before the game ends. We furthermore show the extreme inapproximability of the first and last of these objectives to within a factor of p^(1-epsilon), when given a sequence of p pieces, and the inapproximability of the third objective to within a factor of (2 - epsilon), for any epsilon>0. Our results hold under several variations on the rules of Tetris, including different models of rotation, limitations on player agility, and restricted piece sets.(1)
Tetris is a puzzle video game originally designed and programmed by Alexey Pajitnov, and it can even be used to train a neural network:
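The row-clearing rule described in the abstract is easy to state in code; a minimal sketch (the board representation is my own) follows:

```python
# Clear every completely filled row and let the rows above drop down,
# as in Tetris: the board is a list of rows, top first; 1 = filled cell.
def clear_rows(board):
    """Return (new_board, cleared), where cleared counts removed rows."""
    width = len(board[0])
    kept = [row for row in board if not all(row)]
    cleared = len(board) - len(kept)
    # pad with empty rows on top so the board keeps its height
    new_board = [[0] * width for _ in range(cleared)] + kept
    return new_board, cleared

board = [[1, 0, 1],
         [1, 1, 1],   # full row: it gets cleared
         [0, 1, 1]]
new_board, cleared = clear_rows(board)
print(cleared)  # 1
```

The hardness results quoted above concern the offline version, where the whole piece sequence is known in advance; this clearing step is the same in both settings.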

Multiscale gigapixel photography

posted by @ulaulaman #photography #computer #science #technology
Sometimes, using the Android application Wondershare Panorama, I try to shoot some panoramic photos. The result is good, but it could be better: indeed, current cameras (including those in our smartphones) work near the fundamental limit of 1–10 megapixels for millimetre-scale apertures(1). In theory(2) we are able to go beyond this limit. If we define $SW$ as the upper limit on the number of data channels that can be handled in parallel(2), then, after some calculations, Adolf W. Lohmann found that, without aberrations, $SW$ would increase quadratically with the scaling factor $M$(2). This result differs from the purely geometrical result, which states that $SW$ is independent of scaling.
That result cannot be realistic, otherwise, very long focal length lens systems would be fairly useless. In practice, the apertures of these long systems are reduced, more or less according to the empirical rule(2)
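A toy numerical comparison of the two scaling laws (the baseline value of $SW$ is invented for illustration):

```python
# Compare the two scaling laws for the space-bandwidth product SW:
# geometrical optics predicts SW constant under scaling, while
# Lohmann's aberration-free analysis predicts quadratic growth.
SW_BASE = 10e6  # baseline channel count (~10 megapixels); illustrative only

def sw_geometrical(m):
    """Purely geometrical prediction: independent of the scale factor."""
    return SW_BASE

def sw_aberration_free(m):
    """Lohmann's aberration-free prediction: grows like M^2."""
    return SW_BASE * m ** 2

for m in (1, 2, 4):
    print(m, sw_geometrical(m), sw_aberration_free(m))
```

The gap between the two curves is what multiscale designs such as AWARE-2 try to exploit in practice.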
Now a team of researchers has developed a new photographic device, the AWARE-2 camera, that can resolve 50 gigapixels(1, *).
And below there are some examples of its capabilities(1):

The universe and the flowers

Peering into the infinite expands the space, the breath, the brain of whoever is watching it.
Erri De Luca, Italian writer
The universe is a perilous but also a beautiful place. But the beauty of the universe is not only in galactic shots; it is also in mathematics. For example, the maps of the E8 group look like flowers, and if we follow Garrett Lisi and his preprint An exceptionally simple theory of everything(1), these maps are also a sort of universe's flowers!
The E8 maps in this post are extracted from Lisi's preprint and represent the structure of the group (the first image is F4). E8 is a Lie group, very important in physics because the symmetry groups of physical systems are Lie groups. A Lie group is an analytic group: the functions that we can define on the group are continuous. A physical example is the Galilei group; studying it, we can obtain information about free particles, described by the Schroedinger equation.
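As a minimal illustration of a Lie group (my example, not Lisi's), rotations of the plane, the group SO(2), form a continuous group parametrized by an angle, and composing two rotations corresponds to adding their angles:

```python
import math

# SO(2): rotations of the plane, a one-parameter Lie group.
def rotation(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Group law: R(a) R(b) = R(a + b) -- smooth in the parameter, which is
# exactly what makes SO(2) a Lie group rather than a discrete one.
a, b = 0.3, 0.5
lhs = matmul(rotation(a), rotation(b))
rhs = rotation(a + b)
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```

E8 is the same idea on a vastly larger scale: a 248-dimensional family of symmetries instead of a single angle.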

Riemann hypothesis

The Riemann hypothesis was stated following Riemann's 1859 paper On the Number of Primes Less Than a Given Magnitude. This is the beginning of the paper(1):
I believe that I can best convey my thanks for the honour which the Academy has to some degree conferred on me, through my admission as one of its correspondents, if I speedily make use of the permission thereby received to communicate an investigation into the accumulation of the prime numbers; a topic which perhaps seems not wholly unworthy of such a communication, given the interest which Gauss and Dirichlet have themselves shown in it over a lengthy period.
For this investigation my point of departure is provided by the observation of Euler that the product \[\prod \frac{1}{1-\frac{1}{p^s}} = \sum \frac{1}{n^s}\] if one substitutes for $p$ all prime numbers, and for $n$ all whole numbers. The function of the complex variable $s$ which is represented by these two expressions, wherever they converge, I denote by $\zeta (s)$. Both expressions converge only when the real part of $s$ is greater than 1; at the same time an expression for the function can easily be found which always remains valid.
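Euler's identity quoted above can be checked numerically; a quick sketch for $s = 2$ (the truncation limits are arbitrary):

```python
# Numerically compare the two sides of Euler's identity for s = 2:
#   prod over primes p of 1/(1 - p^-s)  ==  sum over n of n^-s  (= zeta(s))
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2
product = 1.0
for p in primes_up_to(100):
    product *= 1.0 / (1.0 - p ** (-s))

series = sum(n ** (-s) for n in range(1, 1_000_000))

print(product, series)  # both close to pi^2 / 6 = 1.6449...
```

The truncated product slightly undershoots because it misses every $n$ with a prime factor above 100; taking more primes shrinks the gap.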
The Riemann zeta function is connected to the distribution of the prime numbers; in particular, Riemann argued that all of its non-trivial zeros(2) have the form $z = \frac{1}{2} + bi$, where $z$ is complex, $b$ is real and $i = \sqrt{-1}$. There is also a general form for the zeros: $z = \sigma + bi$, where $\sigma$ belongs to the critical strip (see below).
In the story of the search for the zeta zeros, Hugh Montgomery plays an important part(4): in 1972 he investigated the distance between pairs of zeta zeros, finding a function of this difference. After this paper, in 1979, with Norman Levinson(5), he established some other properties of the zeta function, investigating in particular the zeros of the derivatives of zeta. First of all he proved an equivalence relation between the zeros of the Riemann zeta function and the zeros of its derivatives: in particular, these zeros also belong to the strip $0 < \sigma < \frac{1}{2}$.
The analytical search for the zeta zeros is not the only way: Lehmer (1956 and 1957) performed the first computational attempt to test the hypothesis. An example of this kind of research is given by Richard Brent(6): in his work he evaluates the Riemann zeta function using the Gram points, the points at which the zeta function changes its sign(3). Brent focused his research on the first 70000000 Gram blocks, verifying the hypothesis there.
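Computational checks of this kind rest on locating sign changes of the Riemann-Siegel function $Z(t)$, whose real zeros are exactly the zeros of $\zeta$ on the critical line. A rough sketch keeping only the main sum of the Riemann-Siegel formula (far too crude for serious work, but enough to see the first sign change):

```python
import math

def theta(t):
    """Riemann-Siegel theta function (leading terms of its expansion)."""
    return ((t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8
            + 1 / (48 * t))

def z_main(t):
    """Main sum of the Riemann-Siegel formula for Z(t); Z(t) and
    zeta(1/2 + it) share their zeros on the critical line."""
    n_max = int(math.sqrt(t / (2 * math.pi)))
    return 2 * sum(math.cos(theta(t) - t * math.log(n)) / math.sqrt(n)
                   for n in range(1, n_max + 1))

# The first non-trivial zero lies at t = 14.1347...; Z changes sign there.
print(z_main(14) < 0 < z_main(15))  # True
```

Production verifications like Brent's add the correction terms of the formula and count sign changes systematically across Gram blocks.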
But there is another approach to the problem: physics. At the end of the 1990s Alain Connes(7) showed a link between the Riemann hypothesis and quantum chaos.
Quantum chaos studies classically chaotic dynamical systems using quantum laws. In particular, Connes found a particular chaotic system in which the quantum numbers are the prime numbers and the energy levels of the system correspond to the zeta zeros on the critical line $\sigma = \frac{1}{2}$. Within physics, it could be the best (but not the only) suspect for resolving the hypothesis.
Other connections with physics are in the review Physics of the Riemann Hypothesis by Daniel Schumayer and David A. W. Hutchinson, but we can talk about the stories in that paper another time.

A solution to a maximal independent set problem

A distributed system is a set of autonomous computers that communicate over a network in order to reach a certain goal, and the maximal independent set (MIS) problem arises naturally in such systems. But what do we mean by MIS?
In graph theory, a maximal independent set or maximal stable set is an independent set that is not a subset of any other independent set.
Some examples of MIS are shown in the graph of the cube:
You can see that every maximal independent set is made up of vertices that are not adjacent to each other.
The goal of the maximum independent set problem is to find the maximum size of a maximal independent set in a given graph or network. In other words, the problem is the search for the leaders in a local network of connected processors, where by leader we mean an active node connected to inactive nodes. This problem is NP-hard.
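A maximal (not maximum) independent set, by contrast, is easy to build greedily; a minimal sketch on the cube graph from the figure (the vertex labels are my own):

```python
# Greedily build a maximal independent set: scan the vertices and keep
# each one whose neighbours are all still outside the set.
def maximal_independent_set(adjacency):
    """adjacency: dict mapping each vertex to its set of neighbours."""
    mis = set()
    for v in adjacency:
        if not adjacency[v] & mis:
            mis.add(v)
    return mis

# Cube graph: vertices are 3-bit strings, edges join strings that
# differ in exactly one bit.
verts = [format(i, "03b") for i in range(8)]
cube = {v: {w for w in verts
            if sum(a != b for a, b in zip(v, w)) == 1}
        for v in verts}

mis = maximal_independent_set(cube)
print(sorted(mis))  # ['000', '011', '101', '110']
```

The hard part in the distributed setting is doing this without a global scan order, using only local message passing between neighbours, which is where the biological analogy below comes in.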
Following Afek, Alon, Barad, Hornstein, Barkai and Bar-Joseph,
no method has been able to efficiently reduce message complexity without assuming knowledge of the number of neighbours.
But a similar network occurs in the precursors of the fly's sensory bristles, so the researchers' idea is to use data from this biological network to solve the original computational problem!
Such a system is called sensory organ precursors, or SOP.