24 Oct 25
This is an interactive, browser-based editor that allows users to create, customize, and configure complex data-driven choropleth and other map visualizations using the amCharts JavaScript library.
21 Oct 25
18 Oct 25
Offers a lot of data on companies, their funding, and their markets.
17 Oct 25
Getting the data model right is key to getting the product right, and I wholeheartedly agree. Great examples to show in product meetings.
08 Oct 25
Another markup language, but one that actually seems to fix some of the pain points of my current favourite, HJSON.
07 Oct 25
Interesting serialization format
18 Sep 25
A really smart and overall simple checksum that works on unordered collections.
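I won't reproduce the linked construction here, but a rough sketch of the general idea behind order-independent checksums: hash each element separately and combine the digests with a commutative operation, so permuting the collection cannot change the result. The helper names below are my own.

```python
# Rough sketch of an order-independent checksum (not necessarily the
# construction from the link): hash each element independently, then
# combine the digests with addition modulo 2**64. Addition commutes,
# so any permutation of the collection yields the same checksum.
import hashlib

MASK = (1 << 64) - 1


def element_hash(item: bytes) -> int:
    # Per-element 64-bit digest; BLAKE2b lets us pick the digest size.
    return int.from_bytes(hashlib.blake2b(item, digest_size=8).digest(), "big")


def unordered_checksum(items) -> int:
    total = 0
    for item in items:
        total = (total + element_hash(item)) & MASK
    return total


# Order does not matter:
assert unordered_checksum([b"a", b"b", b"c"]) == unordered_checksum([b"c", b"a", b"b"])
```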
17 Sep 25
Interesting serialization format
16 Sep 25
This is really cool. Nushell used to support this but apparently it was a pain to maintain. Maybe one day…
15 Sep 25
This is really cool. Nushell used to support this but apparently it was a pain to maintain. Maybe one day…
13 Sep 25
A really smart and overall simple checksum that works on unordered collections.
11 Sep 25
A map/reduce workflow for LLMs, with what looks like local caching. To me, build systems and data-processing pipelines like this one have a lot of overlap.
08 Sep 25
Like any other map, The Internet map is a scheme displaying objects’ relative position; but unlike real maps (e.g. the map of the Earth) or virtual maps (e.g. the map of Mordor), the objects shown on it are not aligned on a surface. Mathematically speaking, The Internet map is a bi-dimensional presentation of links between websites on the Internet. Every site is a circle on the map, and its size is determined by website traffic, the larger the amount of traffic, the bigger the circle. Users’ switching between websites forms links, and the stronger the link, the closer the websites tend to arrange themselves to each other.
In plain English, this service looks at which websites link to a particular target website, and then it ranks websites that are popular among those linking websites using a method commonly used in recommendation algorithms.
In technical jargon, it reinterprets each node's incident edges in the adjacency matrix as a sparse high-dimensional vector, and uses cosine similarity to find the nearest-neighbour nodes within this feature space (a rough sketch follows below).
This is a write-up about an experiment from a few months ago, in how to find websites that are similar to each other. Website similarity is useful for many things, including discovering new websites to crawl, as well as suggesting similar websites in the Marginalia Search random exploration mode.
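A minimal sketch of the idea as described above: treat each website's incoming links (a column of the adjacency matrix) as a sparse high-dimensional vector and rank other sites by cosine similarity to a target site. The site names and link data here are invented for illustration, and a real system would use sparse matrices rather than dense NumPy arrays.

```python
# Toy illustration: rank websites by cosine similarity of their
# incoming-link vectors, i.e. the columns of the adjacency matrix.
import numpy as np

sites = ["a.com", "b.org", "c.net", "d.io", "e.dev"]

# adjacency[i][j] = 1 if site i links to site j (made-up data)
adjacency = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)


def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0


def similar_sites(target, top_k=3):
    """Rank other sites by cosine similarity of their incoming-link vectors."""
    t = sites.index(target)
    target_vec = adjacency[:, t]  # who links to the target
    scores = [
        (name, cosine(target_vec, adjacency[:, j]))
        for j, name in enumerate(sites)
        if j != t
    ]
    return sorted(scores, key=lambda x: x[1], reverse=True)[:top_k]


print(similar_sites("c.net"))
```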
Some simple hash functions including a reversible one.