afalbrecht/apperception

The source code for "Making sense of sensory input" and "Making sense of raw input"

Installation instructions

You need to have Haskell, Clingo (version 4.5 or above), and Python 3 installed.

  1. Install Haskell.

  2. Install Clingo (version 4.5 or above).

  3. Install Python 3 (tested on 3.11.6, but any Python 3 version will probably work).

Compilation instructions

Once you have Haskell and Clingo installed, just run (from the root directory):

  • cd code
  • cabal update
  • cabal configure
  • cabal new-build
  • cabal install
  • cd ..
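
These steps can also be chained into a single shell command from the root directory (this just strings together the commands listed above):

cd code && cabal update && cabal configure && cabal new-build && cabal install && cd ..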

Changes made compared to the original apperception engine

All the code in the /mem_code folder is new, as are the save files in /memory.
Furthermore, I have added headings in solve.hs and Interpretation.hs marking what I added; the additions to Interpretation.hs are mostly concerned with parsing.
After each change, a JSON-readable version of the tree is written to /mem_extra/mem_tree_display.JSON, while the tree itself is saved for access by the code in /mem_code/mem_tree.pickle.
Lastly, a PDF of the complete tree, as mentioned in the thesis, is included as /mem_extra/tree_graph.pdf.
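
To quickly inspect the JSON tree display from the shell, you can pretty-print it, for example (this just assumes the file contains valid JSON):

python3 -m json.tool mem_extra/mem_tree_display.JSON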

Instructions for running

This program, the AAE, extends the apperception engine with memory. Memory has only been implemented for some inputs, namely the misc and sokoban inputs.
The AAE is used by appending a code to the terminal command; in some instances these codes can be combined (see the example after the list below).

Appending the following numbers to the command has these effects:

  • 0: Retrieve or generate a template from the memory tree, if possible.
  • 1: Run with an empty interpretation file, so only the template, but no theory, is provided by the AAE.
  • 2: Retrieve the template and theory from manually saved files.
  • 3: Retrieve the template from pre-existing Haskell files, i.e. execute the original apperception engine program.
  • 5: Use the optimized template for sokoban.
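
Codes are combined by concatenating the digits into a single argument; for example, "05" combines "0" and "5", as in this command from the sokoban examples below:

code/solve sok-iter e_8_17 05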

Simple Examples

Once the system is installed (see above), you are ready to try some examples.

To run these examples, make sure you are in the root directory called apperception. Also note that we use the command code/solve instead of ~/.cabal/bin/solve as stated in the original code, because on some Linux distros (namely Arch Linux) the original option doesn't work properly. If code/solve doesn't work for you, you can try ~/.cabal/bin/solve, although I think code/solve should work across the board.

To empty the tree, run mem_code/init_tree.py.
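
For example, assuming the script is invoked with python3 from the root directory:

python3 mem_code/init_tree.py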

First we can initiate the construction of the memory tree with the simple example of a single object oscillating between "on" and "off". We use the code "0" here, as this example is simple enough that the AAE can generate the template on its own:
code/solve misc predict_1.lp 0

Then, if we want to use the newly constructed tree to solve the problem again, this time from memory, we can run the same command:
code/solve misc predict_1.lp 0

The next example, of two objects, needs the inclusion of spatial concepts and thus cannot be generated by the AAE itself; it needs a prebuilt or diagonalized template. We can use the code "3" here to fetch the template in the manner of the original apperception engine implementation:
code/solve misc predict_2.lp 3

Now the tree has been updated to include the template and theory for the two-object example, so we can use the code "0" to construct a template from memory:
code/solve misc predict_2.lp 0

We can see that this fails, because the minimal template does not include the necessary a priori spatial concepts, so we have to use iteration:
code/solve misc-iter predict_2.lp 0

Now we can solve the third example, a basic sequence, by retrieving and generating from memory, so we can use code "0":
code/solve misc predict_3.lp 0

Learning the successor relation with the successor sequence needs a prebuilt template, so we use "3":
code/solve misc predict_4.lp 3

We can then reproduce this using "0":
code/solve misc predict_4.lp 0
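
For convenience, the sequence of simple examples above can also be replayed as a short shell script (run from the root directory; it only repeats the commands already listed):

#!/bin/sh
# Replay the simple examples in order.
code/solve misc predict_1.lp 0        # build the memory tree with the oscillation example
code/solve misc predict_1.lp 0        # solve the same problem again from memory
code/solve misc predict_2.lp 3        # two objects: fetch the original prebuilt template
code/solve misc-iter predict_2.lp 0   # solve the two-object example from memory, with iteration
code/solve misc predict_3.lp 0        # basic sequence from memory
code/solve misc predict_4.lp 3        # successor relation with a prebuilt template
code/solve misc predict_4.lp 0        # reproduce the successor example from memory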

More Complex Examples

Here we go through the sokoban examples, which take longer to run, even up to 4 hours if no pre-built files are used.

First we can add the template and theory from the general Sokoban example, either by loading them from manually saved files using "2", which will take around 25 seconds:
code/solve sokoban e_8_17 2

Or, if you want to let it learn the theory from scratch, you can input "21", which will take around 4 hours:
code/solve sokoban e_8_17 21

With this loaded in, we can reuse it on the same example using code "0", but because the minimal template does not work we need to use iteration:
code/solve sok-iter e_8_17 0

If you want to use the optimized template for sokoban instead of the minimal template, to forgo the iteration, add "5" to the code:
code/solve sok-iter e_8_17 05

Now we can use this on the rest of the sokoban examples (a loop covering all of them is shown after the list):

  • code/solve sok-iter e_8_17_small 05
  • code/solve sok-iter e0 05
  • code/solve sok-iter e1 05
  • code/solve sok-iter e2 05
  • code/solve sok-iter e4 05
  • code/solve sok-iter e5 05
  • code/solve sok-iter e6 05
  • code/solve sok-iter e7 05
  • code/solve sok-iter e8 05
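
The same list can be run in one go with a small shell loop (this simply iterates over the examples above):

for ex in e_8_17_small e0 e1 e2 e4 e5 e6 e7 e8; do
    code/solve sok-iter "$ex" 05
done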

One example has been kept out of this list; as discussed in the thesis, it acts as a monkey wrench for the tree by generating a paradoxical rule. It is the following example:
code/solve sok-iter e3 05

This example does succeed, but observe that the second time it is run it no longer works, nor does any other sokoban example afterwards. If we want to use the tree again, we have to re-initialize it using mem_code/init_tree.py and then rebuild it using the previous steps.
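
A possible recovery sequence, following the steps above (assuming init_tree.py is invoked with python3 from the root directory):

python3 mem_code/init_tree.py        # re-initialize the memory tree
code/solve sokoban e_8_17 2          # reload the general sokoban template and theory from the saved files
code/solve sok-iter e_8_17 05        # reuse them with the optimized template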

Understanding the output of the solve process

When solve is run, it produces...

  • the theory θ = (φ, I, R, C) composed of...
    • the type signature (φ)
    • the initial conditions (I)
    • the rules (R)
    • the constraints (C)
  • the trace (τ(θ))
  • statistics: the cost of the interpretation θ
  • accuracy: whether or not all the predicted sensor readings match the hidden readings

At the moment, at the end of every run the best template and theory are output, as well as a small representation of the current structure of the tree after it has taken in the new output.

To generate a LaTeX-readable description of the output:

  • set flag_output_latex = True in Interpretation.hs
  • recompile: scripts/compile_solve.sh
  • run again
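
As a sketch, the last two steps from the shell (after manually setting flag_output_latex = True in Interpretation.hs) would look like this:

scripts/compile_solve.sh          # recompile solve
code/solve misc predict_1.lp 0    # run an example again; the output now includes the LaTeX-readable description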
