buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc., all around the world.
This server runs the snac software and there is no automatic sign-up process.
The reduction and type derivation of a simple expression:
(car[num] (cons[num] ⸢1⸣ null[num]))
This proof tree is as patched together as a '70s pre-LaTeX thesis!
(For Matthew Flatt's Programming Languages and Semantics course: https://my.eng.utah.edu/~cs7520/)
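In text form (my own sketch, not the notation from the scanned tree), the reduction and the resulting typing judgment come out as:

```latex
(\mathsf{car}[num]\ (\mathsf{cons}[num]\ \ulcorner 1\urcorner\ \mathsf{null}[num])) \;\longmapsto\; \ulcorner 1\urcorner
\qquad\qquad
\vdash (\mathsf{car}[num]\ (\mathsf{cons}[num]\ \ulcorner 1\urcorner\ \mathsf{null}[num])) : num
```

That is, taking the car of a one-element number list reduces to the element itself, and the whole expression is derived to have type num.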
I've been #reading the #Lisp in Small Pieces book, and it has helped me put names to a lot of concepts I already knew about Lisps and programming language design/implementation in general. It also provides a great historical perspective on #PLT, especially by covering the approaches that were tried but abandoned.
But besides being very verbose, I find it hard to follow because it keeps changing the model of the #interpreter every chapter. It starts by using alists to model environments, then switches to objects for the same purpose in the next chapter, and then to closures in the one after.
I get that it does this to showcase all possible ways of modelling an interpreter in Lisp, but it is quite disorienting to me as a reader.
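To illustrate what I mean, here are two of those environment representations side by side — in Haskell rather than the book's Scheme, with names of my own choosing:

```haskell
type Name = String

-- Alist-style environment: a list of name/value pairs,
-- searched front to back.
type AlistEnv = [(Name, Int)]

lookupAlist :: Name -> AlistEnv -> Maybe Int
lookupAlist = lookup

-- Closure-style environment: the environment *is* a function
-- from names to values.
type ClosureEnv = Name -> Maybe Int

emptyEnv :: ClosureEnv
emptyEnv _ = Nothing

-- Extending wraps the old environment in a new function,
-- shadowing older bindings just like consing onto an alist.
extend :: Name -> Int -> ClosureEnv -> ClosureEnv
extend n v env m = if m == n then Just v else env m
```

Same interface, completely different machinery — which is exactly the book's point, and exactly what makes it disorienting chapter to chapter.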
But I can't say that the author didn't forewarn me. This is literally the fourth sentence of the book:
“To explain these entities, their origin, their variations, this book will go into great detail.”
It is a decent read, though the language feels a little outdated. It is translated from French, which may be the reason for the overly magniloquent language.
And of course, a fault that all old academic textbooks tend to suffer from: a lack of letters in variable names, like so:
```lisp
;; n = variable name, r = environment (name -> address),
;; s = store (address -> value), k = continuation
(define (evaluate-variable n r s k)
  (k (s (r n)) s))
```
I've written a series of blog posts, in which I write a #bytecode #compiler and a #virtualMachine for arithmetic in #Haskell. We explore the following topics in the series:
- Parsing arithmetic expressions to ASTs.
- Compiling ASTs to bytecode.
- Interpreting ASTs.
- Efficiently executing bytecode in a VM.
- Disassembling bytecode and decompiling opcodes for debugging and testing.
- Unit testing and property-based testing for our compiler and VM.
- Benchmarking our code to see how the different passes perform.
- All the while keeping an eye on performance.
The third and final post of the series, which focuses on writing the virtual machine, is now out: https://abhinavsarkar.net/posts/arithmetic-bytecode-vm/
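In miniature, the compile-and-execute part of the pipeline looks like this (a toy sketch with made-up names, not the code from the actual posts):

```haskell
-- A tiny arithmetic AST.
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- Stack-machine opcodes the AST compiles down to.
data Op = Push Int | OpAdd | OpMul deriving (Show)

-- Post-order traversal: operands first, then the operator.
compile :: Expr -> [Op]
compile (Lit n)   = [Push n]
compile (Add a b) = compile a ++ compile b ++ [OpAdd]
compile (Mul a b) = compile a ++ compile b ++ [OpMul]

-- Execute opcodes against a value stack.
-- Assumes well-formed bytecode (no stack underflow).
run :: [Op] -> Int
run = head . foldl step []
  where
    step st       (Push n) = n : st
    step (y:x:st) OpAdd    = (x + y) : st
    step (y:x:st) OpMul    = (x * y) : st
```

For example, `run (compile (Add (Lit 1) (Mul (Lit 2) (Lit 3))))` evaluates 1 + 2 * 3 to 7. The real series adds parsing, disassembly, testing, and benchmarking around this core.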
After a gap of 1.5 years since the last part, I have finally finished writing the third part of my post series on implementing Co. Planning to publish it this weekend. #haskell #programminglanguages #blog
I wrote the fifth part of my #blog series “Implementing Co, a small programming language with #coroutines”. This time, we add support for sleep in #Co for time-based execution. https://abhinavsarkar.net/posts/implementing-co-5/
#Programming #PLT #ProgrammingLanguages #Compilers #Haskell #concurrency