I’m Zilong, a frontend developer at Booking.com. I’m passionate about programming — with a focus on web development, though my curiosity spans the broader software landscape. Before Booking.com, I worked at Tencent and ByteDance.
Beyond code, I cherish life’s simple pleasures: good food, cooking, reading, writing, and traveling. I also sing tenor with the Shanghai Rainbow Chamber Singers. I hold a Master’s degree in Mathematics from Fudan University.
This is my notebook — a space for thoughts on technical learnings, personal growth, and life reflections. Welcome!
Starting a Blog at the End of 2025
Happy holidays! At the end of 2025, I’m starting a blog. I’ve already written several entries and feel confident I can keep it going.
Here I talk about my past attempts, the writers who inspired me, the motivation, the topics to cover, and the approach I’m taking. It’s my version of a blogging manifesto.
I’ve gradually realized that a unified set of formatting rules is needed when working with multiple AI chatbots and agents.
Output formatting styles vary from model to model. For technical topics, I’ve found that Claude tends to format responses like complete documents, starting with an h1 heading and leaning heavily on horizontal rules to separate sections; Gemini usually jumps straight to h3 headings without any h2 ones, which in my opinion is not good practice.
Here are examples I tried on OpenRouter, prompting “Explain the Python programming language.”
GPT-5.2, starting with an introductory paragraph and followed by sections
Claude Opus 4.5, a document-like output with an h1 heading at the top and multiple horizontal rules
Gemini 3 Flash, using h3 headings directly
Kimi K2 Thinking, also document-like
Even worse, in my experience, outputs from different versions of the same model series (e.g. GPT-5 and GPT-5.2) can vary greatly in formatting.
To address this, and to unify the output styles of the different tools I use (ChatGPT as my daily driver, Gemini for work, and Amp as my coding agent), I drafted a minimal formatting guide:
Shared formatting rules:
- Use consistent formatting within the same response
- Insert spaces between English words and CJK characters
- Always specify the language for syntax highlighting when using fenced code blocks
- Do not use horizontal dividers (`<hr />` or `---`) unless they add clear structural value, especially directly before headings
- For list items, do not use a period at the end unless the item is a complete sentence
For chat responses:
- Use “Sentence case” for chat names (auto-generated chat titles) and all section headings (capitalize the first word only); never use “Title Case” in these contexts
- Use heading levels sequentially (`h2`, then `h3`, etc.), never skip levels; introductory paragraphs may be needed before the first heading in chat responses; never use `h1` for chat responses
- Avoid filler, praise, or conversational padding (for example “Good question”, “You’re absolutely right”)
For document generation and editing:
- Use “Title Case” for top-level headings (e.g. `h1`), typically only once in a document, and “Sentence case” for section headings (capitalize the first word only)
- Use heading levels sequentially (`h2`, then `h3`, etc.), never skip levels
I apply these rules to the custom instructions setting in ChatGPT and to AGENTS.md for my coding agent.
Custom instructions setting in ChatGPT
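On the agent side, here’s a minimal sketch of how the shared rules can sit in AGENTS.md (the heading is illustrative; adapt it to your own file’s structure):

```markdown
## Formatting

- Use consistent formatting within the same response
- Insert spaces between English words and CJK characters
- Always specify the language for syntax highlighting when using fenced code blocks
- Do not use horizontal dividers (`<hr />` or `---`) unless they add clear structural value
- For list items, do not use a period at the end unless the item is a complete sentence
```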
GLM 4.7 and MiniMax M2.1. Chinese AI labs first caught the world’s attention with the DeepSeek models in late 2024. Then in the second half of 2025, we saw a wave of Chinese open-source models like GLM 4.6, MiniMax M2, and Kimi K2. People like these models for their low price, open weights, and solid performance — just slightly below state-of-the-art proprietary models[^1].
Today, the updated GLM 4.7 and MiniMax M2.1 dropped on the very same day. With public holidays approaching in the US, Chinese AI labs keep pushing forward, making good use of the time 😉
AI is a rapidly changing field. I’m not one to chase every new model release, though I do find myself following this topic more recently. I’m still learning these concepts and trying to find a pragmatic way to use AI tools. I use ChatGPT as my daily driver, Gemini for work (my company subscribes to it), and Amp as my coding agent.
I may not post about every model release in the future, but here are the models on my radar:
- Proprietary models: GPT (OpenAI), Claude (Anthropic), and Gemini (Google)
- Open-source models: DeepSeek, Kimi (Moonshot AI), GLM (Z.ai), and MiniMax
Since Zig hasn’t hit 1.0 and is still evolving rapidly, following the master branch is common practice for trying out new features and tracking where the language is heading. Even its release notes say “working on a non-trivial project using Zig may require participating in the development process.”
However, nightly master builds quietly stopped on November 26, 2025, when the Zig team announced the migration from GitHub to Codeberg. I assumed the builds were provided by some automation tied to GitHub.
Today, I found that nightly master builds have resumed! The download index JSON used by version managers like ZVM is now being updated again, though the download page hasn’t caught up yet.
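A quick way to verify is to read the index directly. Here’s a minimal sketch in Python; it assumes the index exposes a master entry with version and date fields, which is what version managers rely on:

```python
# Print the latest Zig master (nightly) build from the download index.
import json
import urllib.request

INDEX_URL = "https://ziglang.org/download/index.json"

with urllib.request.urlopen(INDEX_URL) as resp:
    index = json.load(resp)

# The "master" entry tracks the nightly build of the master branch.
master = index["master"]
print(f"master version: {master['version']} (built {master['date']})")
```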
In any case, good news! I’m looking forward to trying out the exciting follow-ups to the new async I/O.
Zig download page
Update Dec 23, 2025: The download page is now updating again!
TIL: When editing Markdown files in VS Code, you can paste URLs as formatted links via the markdown.editor.pasteUrlAsFormattedLink.enabled setting.
This setting was first introduced in June 2023, with the release of VS Code 1.80.
This is a nice quality-of-life feature. I used to type brackets, parentheses, and URLs manually, always wishing for a simpler way. I’m now using the smart option, which “smartly creates Markdown links by default when not pasting into a code block or other special element.”
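In settings.json, it’s a one-liner (shown here with the smart option described above):

```jsonc
// settings.json
// "smart" creates Markdown links by default, except when pasting
// into a code block or other special element
"markdown.editor.pasteUrlAsFormattedLink.enabled": "smart"
```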
Here’s a quick demo:
OpenAI Codex now officially supports skills. After a few days of people noticing that OpenAI was quietly adopting skills[^1], the announcement came today. The thread on X walks through how skills work in Codex and shows examples of installing third-party pre-built skills like Linear and Notion.
Two baked-in skills, skill-creator and skill-installer, are available in Codex, making it easier to bootstrap and install skills. See the official documentation for details.
Codex’s choice of skills location is .codex/skills, joining the naming war alongside .claude/skills, .github/skills, and .agents/skills. I’d really like to see unification here.
[^1]: Simon Willison’s blog: OpenAI are quietly adopting skills, now available in ChatGPT and Codex CLI
AI Transparency Statement (via). More and more content on the internet is generated by AI these days, and there’s a new word, slop, to describe the wave of unwanted, unreviewed, and low-value AI-generated content. It’s alarming enough that people are becoming paranoid about the quality of everything they read online, even content that is obviously handcrafted and curated.
One of the supposed indicators is the em dash (—). Since AI-generated content often includes em dashes, they have become a signal, and a warning: you might be reading AI-generated content.
The suspicion is usually unfounded. But the paranoia runs so deep that writers like Armin Ronacher now publish statements to defend their work.
As for me, I guarantee that all content here is written by me, though I do use AI tools to help review and refine my writing (including this post); the thinking and the final decisions are mine. That, in my opinion, is an appropriate way to use AI as an editing tool. Maybe I should write a similar statement for this website too. Maybe every content creator should.
Agent Skills (via). Anthropic published Agent Skills as an open standard yesterday[^1], just a few days after they co-founded the Agentic AI Foundation and donated the MCP (Model Context Protocol) to it[^2]. Now, along with the widely adopted AGENTS.md, there are three major agentic AI patterns for managing context and tools.
Among the three, AGENTS.md is the simplest and most straightforward: essentially a dedicated README.md for coding agents. It is usually loaded into the context window at the start of a session, providing general instructions that help the agent understand the user and the workspace.
It originated at OpenAI as an effort to unify the chaotic naming conventions of agent instruction files; before it, we had .cursorrules for Cursor, .github/copilot-instructions.md for GitHub Copilot, GEMINI.md for Gemini CLI, and so on. It has gradually been adopted by almost all coding agents except Claude Code, which still insists on its CLAUDE.md. (There’s an open issue, though.)
Agent Skills is another neat practice. Introduced by Anthropic in October 2025[^3], it is a composable and token-efficient way to provide capabilities to agents. LLMs can call tools, and Agent Skills is just a simple and standardized way to define a set of tools. A skill is a set of domain-specific instruction files, which can be loaded on demand by the agent itself. Besides instructions in Markdown, a skill can also bundle scripts and supplementary resource files, enabling the agent to run deterministic and reproducible tasks.
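Concretely, a skill is just a folder. Here’s a sketch of one (the skill name and script are hypothetical; the SKILL.md-with-frontmatter layout follows Anthropic’s spec):

```text
.agents/skills/
└── release-notes/       # hypothetical skill
    ├── SKILL.md         # YAML frontmatter (name, description) plus Markdown instructions
    └── scripts/
        └── collect.sh   # optional bundled script for deterministic steps
```

The agent keeps only each skill’s name and description in context, loading the full SKILL.md (and scripts) when a task calls for it; that on-demand loading is where the token efficiency comes from.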
Amp, my current coding agent of choice, released support for Agent Skills earlier this month[^4]. Alongside Agent Skills becoming an open standard, GitHub Copilot and VS Code announced support for it[^5], and Dax, one of the OpenCode maintainers, has committed to adding support in the coming days[^6]. The skills folder naming convention is still not unified, though: .claude/skills for Claude Code, .github/skills for GitHub Copilot, and .agents/skills for Amp. I’d like to see the neutral .agents/skills win.
Compared with these two approaches, MCP is far more complex. It uses a server-client architecture and communicates over JSON-RPC instead of natural language, the native language of LLMs. An MCP server can provide remote tools, resources, and pre-built prompts to the MCP client baked into an agent, enhancing the agent’s capabilities. It was introduced by Anthropic at the end of 2024[^7], and after a year of adoption, limitations like authorization overhead and token inefficiency have started to emerge, not to mention the difficulty of implementing and integrating it. In fact, the only MCP server that still catches my eye is Playwright MCP, which gives coding agents browser-automation superpowers. Honestly, I haven’t had a chance to try MCP deeply; the opinions here are my own observations, largely shaped by discussions like Simon Willison’s post.
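To make the contrast concrete, even discovering an MCP server’s tools is a JSON-RPC round trip. A sketch (tools/list is a method from the MCP spec; the tool shown is hypothetical):

```jsonc
// Client → server: ask which tools are available
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Server → client (abridged): tool definitions with JSON Schema inputs
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "browser_navigate",
        "description": "Navigate the browser to a URL",
        "inputSchema": {
          "type": "object",
          "properties": { "url": { "type": "string" } },
          "required": ["url"]
        }
      }
    ]
  }
}
```

With a skill, by comparison, the “protocol” is a Markdown file the agent simply reads.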
Personally, I’m already adopting AGENTS.md globally and in my personal projects. With Agent Skills looking more and more promising, I’m looking forward to trying it out, diving deeper, and building my own set of skills.
[^1]: Claude blog: Skills for organizations, partners, the ecosystem
[^2]: Anthropic news: Donating the Model Context Protocol and establishing the Agentic AI Foundation
[^3]: Claude blog: Introducing Agent Skills
[^4]: Amp news: Agent Skills
[^5]: GitHub blog: GitHub Copilot now supports Agent Skills
[^7]: Anthropic news: Introducing the Model Context Protocol
Berkeley Mono (via). It looks like major coding agents such as Claude Code, Cursor, and Amp (which I mainly use these days) are all using this monospaced typeface on their social media[^1] and web pages[^2]. The typeface looks great and indeed has a retro-computing charm. The type foundry, US Graphics Company, also introduces it as “a love letter to the golden era of computing”:
Berkeley Mono coalesces the objectivity of machine-readable typefaces of the 70’s while simultaneously retaining the humanist sans-serif qualities. Inspired by the legendary typefaces of the past, Berkeley Mono offers exceptional straightforwardness and clarity in its form. Its purpose is to make the user productive and get out of the way.
Berkeley Mono specimen from the official website
As the introduction suggests, the typeface reminds me of man pages, telephone books, and vintage technical documentation. The foundry’s website also reflects that aesthetic.
Berkeley Mono is a commercial typeface. Curiously, however, some of those coding agents appear to be using it without a license, which has led the foundry to frequently tag them on X[^1].
[^1]: The type foundry’s posts on X: Claude uses Berkeley Mono, Cursor uses Berkeley Mono