<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>justine.lol</title>
    <link>https://justine.lol</link>
    <description>Justine Tunney's Web Page</description>

    <item>
      <title>Weird Lexical Syntax</title>
      <link>https://justine.lol/lex/</link>
      <guid>https://justine.lol/lex/</guid>
      <pubDate>Thu, 31 Oct 2024 23:59:59 PDT</pubDate>
      <description>
        I just learned 42 programming languages this month to build a
        new syntax highlighter for llamafile. I feel like I'm up to my
        eyeballs in programming languages right now. Now that it's
        Halloween, I thought I'd share some of the spookiest, most
        surprising syntax I've seen.
      </description>
    </item>

    <item>
      <title>The Fastest Mutexes</title>
      <link>https://justine.lol/mutex/</link>
      <guid>https://justine.lol/mutex/</guid>
      <pubDate>Wed, 02 Oct 2024 08:28:00 PDT</pubDate>
      <description>
        Imagine you have a workload where all your threads need to do a
        serialized operation. With Cosmo, if you're looking at htop, it
        will appear as though only one core is active, whereas glibc
        and musl libc will fill up your entire CPU meter. That's
        bad news if you're running a lot of jobs on the same server. If
        just one of your servers has a mutex bloodbath, then all your
        resources are gone, unless you're using Cosmo. It's still a new
        C library and it's a little rough around the edges. But it's
        getting so good, so fast, that I'm starting to view not using it
        in production as an abandonment of professional responsibility.
        The C library is so deeply embedded in the software supply
        chain, and so depended upon, that you really don't want it to be
        a planet killer. If essential unquestioned tools are this
        wasteful then it's no wonder Amazon Cloud makes such a fortune.
      </description>
    </item>

    <item>
      <title>Cosmopolitan v3.9.2</title>
      <link>https://github.com/jart/cosmopolitan/releases/tag/3.9.2</link>
      <guid>https://github.com/jart/cosmopolitan/releases/tag/3.9.2</guid>
      <pubDate>Sun, 22 Sep 2024 03:41:00 PDT</pubDate>
      <description>
        Cosmopolitan's Windows support may finally be feature complete.
        It's now possible to send signals between processes using kill()
        on Windows. Ten new torture test programs have been written to
        tease out more fixes and offer a high level of assurance that
        signal handling is correct. Some of these tests are good enough
        to deadlock the signal handling of UNIX OSes but not our
        signaling module for Windows. They also demonstrate that our
        Windows signal handling actually outperforms many UNIX OSes at
        latency.
      </description>
    </item>

    <item>
      <title>AI Training Shouldn't Erase Authorship</title>
      <link>https://justine.lol/history/</link>
      <guid>https://justine.lol/history/</guid>
      <pubDate>Fri, 23 Aug 2024 05:00:00 PDT</pubDate>
      <description>
        In a world of infinite automation and infinite surveillance,
        survival is going to depend on being the least boring person.
        Over my career I've written and attached my name to thousands of
        public source code files. I know they are being scraped from the
        web and used to train AIs. But if I ask something like Claude,
        "what sort of code has Justine Tunney written?" it hasn't got the
        faintest idea. Instead it thinks I'm a political activist, since
        it feels no guilt remembering that I attended a protest on Wall
        Street 13 years ago. But all of the positive things I've
        contributed to society? Gifts I took risks and made great
        personal sacrifices to give? They're erased from the history books.
      </description>
    </item>

    <item>
      <title>LLaMA Now Goes Faster on CPUs</title>
      <link>https://justine.lol/matmul/</link>
      <guid>https://justine.lol/matmul/</guid>
      <pubDate>Sun, 31 Mar 2024 17:59:15 PDT</pubDate>
      <description>
        I just wrote 84 new matrix multiplication kernels for llamafile
        which enable it to read prompts / images faster. Compared to
        llama.cpp, prompt eval time with llamafile should go anywhere
        between 30% and 500% faster when using F16 and Q8_0 weights on
        CPU. The improvements are most dramatic for ARMv8.2+ (e.g. RPI
        5), Intel (e.g. Alderlake), and AVX512 (e.g. Zen 4) computers.
        My kernels go 2x faster than MKL for matrices that fit in L2
        cache. They're still a work in progress, since the speedup
        works best for prompts with fewer than 1,000 tokens.
      </description>
    </item>

    <item>
      <title>Bash One-Liners for LLMs</title>
      <link>https://justine.lol/oneliners/</link>
      <guid>https://justine.lol/oneliners/</guid>
      <pubDate>Mon, 4 Dec 2023 09:00:00 PST</pubDate>
      <description>
        I spent the last month working with Mozilla to launch an open
        source project called llamafile which is the new best way to run
        an LLM on your own computer. So far things have been going
        pretty smoothly. The project earned 5.6k stars on GitHub, 1073
        upvotes on Hacker News, and received press coverage from
        Hackaday. Yesterday I cut a 0.3 release so let's see what it can
        do.
      </description>
    </item>

    <item>
      <title>Cosmopolitan Third Edition</title>
      <link>https://justine.lol/cosmo3/</link>
      <guid>https://justine.lol/cosmo3/</guid>
      <pubDate>Tue, 31 Oct 2023 09:00:00 PDT</pubDate>
      <description>
        After nearly one year of development, I'm pleased to announce
        our version 3.0 release of the Cosmopolitan library.
      </description>
    </item>

    <item>
      <title>Understanding DeepMind's Sorting Algorithm</title>
      <link>https://justine.lol/sorting/</link>
      <guid>https://justine.lol/sorting/</guid>
      <pubDate>Mon, 12 Jun 2023 09:00:00 PDT</pubDate>
      <description>
        A few days ago, DeepMind published a blog post talking about a
        paper they wrote, where they discovered tinier kernels for
        sorting algorithms.
      </description>
    </item>

  </channel>
</rss>
