This document defines ecosystem-wide conventions for all projects in the fkzys ecosystem. It serves two purposes:
- Human-readable specification — reference for developers working across multiple projects
- Machine-consumable context — provides LLM assistants with the rules and patterns needed to generate code that conforms to ecosystem standards
When generating code for any fkzys project, assistants must follow the rules defined in this specification. They are derived from real projects, not generic best practices.
| Attribute | Value |
|---|---|
| Scope | Ecosystem-wide (all fkzys projects) |
| Authority | Definitive — overrides project-local conventions where conflicts exist |
| Last updated | 2026-04-14 |
| License | CC BY-SA 4.0 (text) / AGPL-3.0-or-later (code patterns) — see §14 |
- Sections 0–10, 13, 15 — Technical specification: patterns, conventions, code structures, CI
- Section 11 — Code generation protocol: rules for assistants generating code in this ecosystem
- Section 12 — Quick reference: common snippets and commands
- Section 14 — Licensing
Project-specific information lives in the repository's own documentation:
- `README.md` — build, install, dependencies, configuration
- `CHANGELOG.md` — version history
- `TODO.md` — planned work
- `tests.md` — test suite documentation (per-project, but follows ecosystem testing patterns)
This specification defines ecosystem-wide patterns — conventions that apply to all projects in the fkzys ecosystem, regardless of language or purpose.
- Universal patterns: error handling, config parsing, test structure, Makefile conventions
- Security rules: `verify-lib`, whitelist config, no `eval`, ownership checks
- Testing philosophy: isolation, temp dirs, mock frameworks, root-only guards
- Build/install conventions: `PREFIX=/usr`, `DESTDIR=`, license paths
Not covered by ecosystem-wide patterns (project-specific, or covered elsewhere):

- Project-specific dependencies (e.g. GTK4, ffmpeg, xUnit)
- UI toolkit migration details
- Domain logic (subtitle parsing, video processing, Anki integration)
- Changelog entries, TODO lists, screenshots
- Infrastructure-as-code deployment patterns (Jinja2 services, SOPS secrets, Terraform) — covered in §9
The following project types are infrastructure or data, not installable packages. Conventions like the Makefile (`PREFIX`/`DESTDIR`), `depends`, `bin/`, `lib/`, `tests/README.md`, and `verify-lib` do not apply to them:

| Type | Examples | What applies |
|---|---|---|
| Infrastructure-as-code | `infra`, `tf-infra` | §4 (Python patterns), §9 (SOPS, Jinja2), README |
| Container images | `sing-box` | Dockerfile conventions, CI validation |
| Dotfiles / config repos | `rootfiles`, `dotfiles` | dotm patterns (`dotm.toml`, perms, `.sops.yaml`) |
| Data / rule sets | `sing-box_srs` | `build.sh` with `set -euo pipefail` |
| Profiles / documentation | `gitlab-profile`, `packages` | README only |
When rules conflict, apply in this order:
- Security (`verify-lib`, ownership checks, no `eval`, no `/tmp` for scripts, `SecureDir`)
- Correctness (`set -euo pipefail`, error handling, config validation)
- Consistency (naming, structure, Makefile targets)
- Convenience (shortcuts, defaults)
These rules apply to LLM assistants interacting with the ecosystem.
- **Verify before concluding.** Never assume system behavior based on theory. Check logs, process trees, file contents (hex if needed), and registry lookups before making claims about why something does or doesn't work.
- **No system modifications without request.** Never execute or suggest `sudo`, `rm /...`, `systemctl restart`, `make install`, or package removal without explicit user request. Describe options — the user decides.
- **No secret leakage.** Tests, documentation, and examples must never contain real secret values or secret-derived data. Use abstract identifiers (`"value"`, `"fallback"`, `"app"`, `"setting"`).
- **No fabricated credentials or signatures.** Never generate `Signed-off-by`, `git config user.*`, or credential-like values without user-provided context.
- **Ask when uncertain.** Confirm paths, versions, flags, and ambiguous instructions before generating code or executing commands.
| Path | Purpose |
|---|---|
| `bin/` | Entry points: shell scripts, compiled binaries |
| `lib/` | Shared libraries: `common.sh`, Python modules, C code |
| `etc/` | Default configuration files (installed to `/etc/`) |
| `completions/` | `_cmd` (zsh), `cmd.bash` (bash) |
| `man/` | Markdown sources (`.md`) + compiled roff (`.8`/`.5`) |
| `tests/` | Test suite. If tests are present, SHOULD include `tests/README.md` with table and instructions |
| `depends` | Dependencies. Format: `system:pkg` or `gitpkg:pkg`. `#` for comments |
| `Makefile` | Targets: `build` (if needed), `install`, `uninstall`, `clean`, `test`, `man` |
| `backup/` | Migration/backup scripts |
| `hooks/` | Pacman/systemd hooks |
| `systemd/` | Service units: `user/` or `system/` |
| `extras/` | Additional scripts (wrappers, helpers) |
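The `depends` format can be illustrated with a short hypothetical file (the package names are examples only, not required dependencies):

```
# build-time
system:make
system:pandoc   # man pages
# runtime, from the ecosystem's own package set
gitpkg:verify-lib
```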
| Path | Purpose |
|---|---|
| `cmd/<name>/` | Entry point (`main.go`, `version.go`) |
| `internal/` | Private packages (`config`, `engine`, `tmpl`, `safetemp`, etc.) |
| `tests.md` | Test documentation (instead of `tests/README.md`) |
| `go.mod` / `go.sum` | Module definition and dependencies |
| `Makefile` | `build`, `install`, `uninstall`, `test`, `test-root`, `clean` |
| Path | Purpose |
|---|---|
| `<ProjectName>/` | Main application source (`.cs` files) |
| `<ProjectName>.Tests/` | xUnit test project (`*_Tests.cs`) |
| `<ProjectName>.csproj` | SDK-style project file (`net10.0`, `AllowUnsafeBlocks`, etc.) |
| `<ProjectName>.Tests.csproj` | Test project, references main project |
| `tests.md` | Test documentation (instead of `tests/README.md`) |
| `Makefile` | `build`, `install`, `uninstall`, `test`, `clean` |
```bash
#!/bin/bash
# /usr/bin/project-name
#
# Utility description.
#
set -euo pipefail
```

- System scripts: `#!/bin/bash`
- Libraries: `#!/usr/bin/env bash`
- If a file is both entry point and library (rare), use `#!/bin/bash`.
Exceptions (no `set -euo pipefail` required):

- Test files — use `set -uo pipefail` (no `-e`). Tests must continue executing when assertions fail so failures can be counted and reported. See §7 for test harness patterns.
- Wrapper scripts — thin scripts that `cd` to a config directory and `exec` a binary (e.g. `dist/subs2srs.sh`). These are simple launchers, not business logic. If the `exec` fails, there is nothing left to do.
- Intentional guard scripts — scripts that rely on conditional control flow incompatible with `errexit`. Must include a comment explaining the omission. Guard scripts are typically called by external hooks (e.g. pacman hooks) and may also be exempt from the `verify-lib`/`_src()` entry point requirements, since they source libraries in a controlled execution context rather than as standalone entry points.
All bash entry points MUST support:

- `-V`/`--version` — print program name and version, then exit 0
- `-h`/`--help` — print usage information, then exit 0

Both are mandatory for all user-facing shell scripts. The project must also provide:

- a man page (`man/project.8.md`) — compiled via the Makefile `man` target
- shell completions (`completions/_project` for zsh, `completions/project.bash` for bash) — see §8
The version function lives in `lib/common.sh`, if that file is present in the project. The entry point sources `common.sh` and delegates to it:

```bash
case "${1:-}" in
    -V|--version) cmd_version; exit 0 ;;
    -h|--help)    print_usage; exit 0 ;;
    *)            main "$@" ;;
esac
```

This keeps version output consistent across all projects and centralises the source of truth in a single library file.
```bash
readonly LIBDIR="/usr/lib/project"
_src() { local p; p=$(verify-lib "$1" "$LIBDIR/") && source "$p" || exit 1; }
_src "${LIBDIR}/common.sh"
```

Entry points that source `common.sh` may trigger expensive auto-initialization (filesystem scans, network checks, config validation). To keep `--help` and `-V` fast, entry points SHOULD set the appropriate `_*_NO_INIT` variable before sourcing:

```bash
readonly _GITPKG_NO_INIT=1
_src "${LIBDIR}/common.sh"
```

The variable name matches the project (uppercased, hyphens to underscores). Setting it tells `common.sh` to skip initialization and return immediately. Test files SHOULD also set this flag to ensure test isolation.
```bash
echo "ERROR: Description of the error" >&2; exit 1
```

- All errors go to stderr
- Prefix `ERROR:` for fatal, `WARN:` for non-fatal
- `exit 1` on fatal errors
load_config() {
[[ -f "$CONFIG_FILE" ]] || return 0
# Verify ownership
local owner
owner=$(stat -c %u "$CONFIG_FILE" 2>/dev/null)
if [[ "$owner" != "0" ]]; then
echo "ERROR: $CONFIG_FILE not owned by root (owner uid: $owner)" >&2
return 1
fi
local -a allowed=(KEY1 KEY2 KEY3)
while IFS='=' read -r key value; do
key="${key#"${key%%[![:space:]]*}"}"
key="${key%"${key##*[![:space:]]}"}"
value="${value#"${value%%[![:space:]]*}"}"
value="${value%"${value##*[![:space:]]}"}"
value="${value%% #*}"
value="${value%"${value##*[![:space:]]}"}"
[[ "$key" =~ ^#.*$ || -z "$key" ]] && continue
local valid=0
for a in "${allowed[@]}"; do
[[ "$key" == "$a" ]] && { valid=1; break; }
done
if [[ $valid -eq 1 ]]; then
value="${value#\"}"; value="${value%\"}"
value="${value#\'}"; value="${value%\'}"
printf -v "$key" '%s' "$value"
else
echo "WARN: Unknown config key ignored: $key" >&2
fi
done < "$CONFIG_FILE"
}

cleanup() {
local exit_code=$?
set +e
[[ $exit_code -ne 0 ]] && echo ":: cleaning up..."
# ... cleanup logic ...
return $exit_code
}
trap cleanup EXIT

Required-variable guard:

```bash
[[ -n "${VAR:-}" ]] || { echo "ERROR: VAR not defined" >&2; exit 1; }
```

Building arrays by nameref:

bwrap_base() {
local -n _arr=$1
_arr+=(--ro-bind /usr /usr --proc /proc)
}

Library functions that build or modify arrays passed by the caller SHOULD use `local -n` (nameref) instead of generating output for the caller to parse. This avoids `eval` (TOCTOU risk) and makes the API explicit.
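A caller passes the array name, not its contents. A self-contained sketch restating `bwrap_base` from above:

```shell
# Build sandbox arguments through the nameref API.
bwrap_base() {
    local -n _arr=$1                       # _arr aliases the caller's array
    _arr+=(--ro-bind /usr /usr --proc /proc)
}

declare -a args=()
bwrap_base args                            # pass the array *name*
args+=(--unshare-net)                      # caller keeps extending it
echo "${args[@]}"
```

Because the callee mutates the caller's array directly, no `eval` or output parsing is needed.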
Functions SHOULD use snake_case naming. Internal helpers SHOULD be prefixed with an underscore (_).
# Public library function
load_config() { ... }
# Internal helper
_validate_path() { ... }

CamelCase naming (e.g. `loadConfig`, `buildOverlay`) is discouraged for consistency across all projects in the ecosystem.
# Save state explicitly
local _had_nullglob=false
shopt -q nullglob && _had_nullglob=true
shopt -s nullglob
# ... glob operations ...
# Restore explicitly
if $_had_nullglob; then
shopt -s nullglob
else
shopt -u nullglob
fi

Glob loops (`for x in pattern`) SHOULD set `shopt -s nullglob` before iterating to prevent literal pattern expansion when no matches exist. Without `nullglob`, a non-matching glob expands to the literal string (e.g. `for f in *.sh` → `f="*.sh"`), which can cause incorrect behaviour or errors.
shopt -s nullglob
for f in *.sh; do
# safe: loop body only executes if files exist
process "$f"
done
shopt -u nullglob

Input validation with an allow-list regex:

```bash
if [[ ! "$INPUT" =~ ^[a-zA-Z0-9_-]+$ ]]; then
    echo "ERROR: Invalid input" >&2; exit 1
fi
```

Libraries that handle filesystem paths (especially those involving `DESTDIR` or install destinations) MUST validate paths to prevent directory traversal, absolute paths, and dangerous characters:
```bash
_validate_path() {
    # Reject empty, absolute paths, ".." path components, and newlines.
    # ".." is rejected only as a full component, so "foo..bar" passes.
    [[ -n "$1" && ! "$1" =~ ^/ && ! "$1" =~ (^|/)\.\.(/|$) && ! "$1" =~ $'\n' ]] || return 1
    return 0
}
```

Reject: absolute paths (`/usr/bin`), `..` traversal, newlines, empty strings.
Accept: relative paths (`usr/bin/foo`), `..` as part of filenames (`foo..bar`).
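A hypothetical call site, restating the validator so the sketch is self-contained:

```shell
# Same checks as above: non-empty, relative, no ".." component, no newline.
_validate_path() {
    [[ -n "$1" && ! "$1" =~ ^/ && ! "$1" =~ (^|/)\.\.(/|$) && ! "$1" =~ $'\n' ]]
}

for p in "usr/bin/foo" "/usr/bin" "../etc/shadow" "foo..bar"; do
    if _validate_path "$p"; then
        echo "ok:     $p"
    else
        echo "reject: $p"
    fi
done
```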
.PHONY: install uninstall clean test man
PREFIX = /usr
SYSCONFDIR = /etc
DESTDIR =
pkgname = project-name
BINDIR = $(PREFIX)/bin
LIBDIR = $(PREFIX)/lib/project
SHAREDIR = $(PREFIX)/share
MANDIR = $(SHAREDIR)/man
ZSH_COMPDIR = $(SHAREDIR)/zsh/site-functions
BASH_COMPDIR = $(SHAREDIR)/bash-completion/completions
LICENSEDIR = $(SHAREDIR)/licenses/$(pkgname)
MANPAGES = man/project.8
man: $(MANPAGES)
man/%.8: man/%.8.md
pandoc -s -t man -o $@ $<
clean:
rm -f $(MANPAGES)
test:
	bash tests/test.sh

For shell projects with multiple test scripts, the `test` target SHOULD use a `@for` loop with a `UNIT_TESTS` variable. This provides visible separators between test files and consistent failure handling:
UNIT_TESTS = \
tests/test_config.sh \
tests/test_cli.sh \
tests/test_commands.sh
test:
@for t in $(UNIT_TESTS); do \
echo ""; \
echo "━━━ $$t ━━━"; \
bash "$$t" || exit 1; \
	done

The `@` prefix suppresses echoing of the loop line itself. `echo ""` and the `━━━` separator lines provide visual separation. `|| exit 1` stops on the first failure. The `UNIT_TESTS` variable makes it easy to add or remove test files.
install:
	install -Dm755 bin/project $(DESTDIR)$(BINDIR)/project
	@if [ ! -f "$(DESTDIR)$(SYSCONFDIR)/project.conf" ]; then \
		install -Dm644 etc/project.conf "$(DESTDIR)$(SYSCONFDIR)/project.conf"; \
		echo "Installed default config"; \
	else \
		echo "Config exists, skipping (see etc/project.conf for defaults)"; \
	fi

uninstall:
	rm -f $(DESTDIR)$(BINDIR)/project
### Go Makefile
```makefile
.PHONY: build install uninstall test test-root clean
PREFIX = /usr
DESTDIR =
pkgname = project-name
BINDIR = $(PREFIX)/bin
LICENSEDIR = $(PREFIX)/share/licenses/$(pkgname)
VERSION ?= $(shell git describe --tags --always --dirty 2>/dev/null || echo dev)
BINARY = project-name
build:
CGO_ENABLED=0 go build -trimpath -buildmode=pie -ldflags "-X main.version=$(VERSION)" -o $(BINARY) ./cmd/project-name/
test:
go test ./...
test-root:
sudo go test ./internal/perms/ -v -count=1
clean:
rm -f $(BINARY)
install: build
install -Dm755 $(BINARY) $(DESTDIR)$(BINDIR)/$(BINARY)
install -Dm644 LICENSE $(DESTDIR)$(LICENSEDIR)/LICENSE
uninstall:
rm -f $(DESTDIR)$(BINDIR)/$(BINARY)
rm -rf $(DESTDIR)$(LICENSEDIR)/
```

### C# Makefile

```makefile
.PHONY: build install uninstall test clean

PREFIX = /usr
DESTDIR =
pkgname = project-name
BINDIR = $(PREFIX)/bin
LIBDIR = $(PREFIX)/lib/$(pkgname)
LICENSEDIR = $(PREFIX)/share/licenses/$(pkgname)
PROJECT = project-name
TESTS = $(PROJECT).Tests

build:
	dotnet build $(PROJECT)/$(PROJECT).csproj -c Release

test:
	dotnet test $(TESTS)/$(TESTS).csproj --no-build -v normal

clean:
	dotnet clean $(PROJECT)/$(PROJECT).csproj
	rm -rf $(PROJECT)/bin $(PROJECT)/obj
	rm -rf $(TESTS)/bin $(TESTS)/obj

install: build
	install -Dm755 $(PROJECT)/bin/Release/net10.0/$(PROJECT) $(DESTDIR)$(BINDIR)/$(PROJECT)
	install -Dm644 LICENSE $(DESTDIR)$(LICENSEDIR)/LICENSE

uninstall:
	rm -f $(DESTDIR)$(BINDIR)/$(PROJECT)
	rm -rf $(DESTDIR)$(LICENSEDIR)/
```

Cross-language conventions:

- `PREFIX = /usr` (not `/usr/local`)
- `DESTDIR =` (empty by default)
- Config is never overwritten if it already exists
- `man` target generates from `.md` via `pandoc`
- `test` target SHOULD be present if the project contains tests. If the project has no test suite, the target may be omitted. Language-specific: `bash tests/test.sh` (shell), `go test ./...` (Go), `pytest` (Python), `dotnet test` (C#).
- `clean` target MUST undo build artifacts, if any are present. For projects with no build step (e.g. pure shell libraries, install-only packages), it may be omitted or be a no-op.
- The Makefile MUST install the project `LICENSE` to `$(SHAREDIR)/licenses/$(pkgname)/LICENSE`.
- Go: `CGO_ENABLED=0`, `-trimpath`, `-buildmode=pie`, version via `-ldflags`
- C#: `dotnet build -c Release`, `dotnet test --no-build`
#!/usr/bin/env python3
"""
CLI entry point for project-name.
"""
from __future__ import annotations
import argparse
import sys
def main() -> None:
parser = argparse.ArgumentParser(prog="project-name")
parser.add_argument("--version", action="version", version="%(prog)s 0.1.0")
args = parser.parse_args()
# ... logic ...
if __name__ == "__main__":
    main()

Running as a module (`__main__.py`):

"""Allow running as `python -m project_name`."""
from .cli import main

main()

Errors go to stderr:

print("Error: description of the error", file=sys.stderr)
sys.exit(1)

"""Tests for project_name/module.py — pure parsing functions."""
import os
import sys
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
from project_name.module import function_under_test
class TestFunctionUnderTest:
def test_normal_case(self):
assert function_under_test("input") == "expected"
def test_edge_case(self):
assert function_under_test("") is None

SOPS decryption helper:

import subprocess
import sys
from pathlib import Path
import yaml
def decrypt_sops(file_path: Path) -> dict:
try:
result = subprocess.run(
['sops', '-d', str(file_path)],
capture_output=True, text=True, check=True
)
return yaml.safe_load(result.stdout)
except subprocess.CalledProcessError as e:
print(f"SOPS decryption error: {e.stderr}", file=sys.stderr)
sys.exit(1)
except FileNotFoundError:
print("sops not found in PATH", file=sys.stderr)
sys.exit(1)

C patterns — standard includes:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <limits.h>
#include <errno.h>

C Makefile:

PREFIX = /usr
DESTDIR =
CC ?= cc
CFLAGS ?= -O2 -Wall -Wextra -Werror
.PHONY: build install uninstall clean
build:
$(CC) $(CFLAGS) -o project project.c
install:
install -Dm755 project $(DESTDIR)$(PREFIX)/bin/project
uninstall:
rm -f $(DESTDIR)$(PREFIX)/bin/project
clean:
	rm -f project

Errors go to stderr with the program name as prefix:

fprintf(stderr, "project: description of error: %s\n", strerror(errno));
return 1;

`realpath()` resolution (caller must free):

char *real = realpath(file, NULL);
if (!real) {
fprintf(stderr, "project: cannot resolve %s: %s\n", file, strerror(errno));
return 1;
}
// ... use real ...
free(real);

All library sourcing must pass through `verify-lib`. It resolves symlinks, validates ownership (uid/gid 0), checks for group/world writability, and walks the directory chain to prevent TOCTOU or namespace escape attacks.
Usage:
_src() { local p; p=$(verify-lib "$1" "$LIBDIR/") && source "$p" || exit 1; }

Implementation (`verify-lib.c`):
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/statvfs.h>
#include <limits.h>
#include <errno.h>
/* Check if running inside a non-init user namespace */
static int in_user_ns(void) {
FILE *f = fopen("/proc/self/uid_map", "r");
if (!f) return 0;
unsigned int inner, count;
unsigned long long outer;
int lines = 0, trivial = 0;
while (fscanf(f, "%u %llu %u", &inner, &outer, &count) == 3) {
lines++;
if (inner == 0 && outer == 0 && count >= 1000) trivial = 1;
}
fclose(f);
return !(lines == 1 && trivial);
}
/* Read kernel overflow uid (shown for unmapped uids in user ns) */
static unsigned int get_overflow_uid(void) {
FILE *f = fopen("/proc/sys/kernel/overflowuid", "r");
if (!f) return 65534;
unsigned int uid = 65534;
if (fscanf(f, "%u", &uid) != 1) uid = 65534;
fclose(f);
return uid;
}
/* Check if path resides on a read-only mount */
static int on_readonly_mount(const char *path) {
struct statvfs sv;
if (statvfs(path, &sv) != 0) return 0;
return (sv.f_flag & ST_RDONLY) != 0;
}
static int verify_dir_chain(const char *path, const char *prefix,
int userns, unsigned int overflow_uid) {
char buf[PATH_MAX];
struct stat st;
size_t prefix_len = strlen(prefix);
if (strnlen(path, PATH_MAX) >= PATH_MAX) return 0;
strncpy(buf, path, PATH_MAX - 1);
buf[PATH_MAX - 1] = '\0';
while (strlen(buf) >= prefix_len) {
if (lstat(buf, &st) != 0) {
fprintf(stderr, "verify-lib: cannot stat %s: %s\n", buf, strerror(errno));
return 0;
}
if (st.st_uid != 0) {
if (!(userns && st.st_uid == overflow_uid && on_readonly_mount(buf))) {
fprintf(stderr, "verify-lib: %s uid=%d, expected 0\n", buf, st.st_uid);
return 0;
}
}
if ((st.st_mode & S_IWGRP) && st.st_gid != 0) {
if (!(userns && st.st_gid == overflow_uid && on_readonly_mount(buf))) {
fprintf(stderr, "verify-lib: %s group-writable with gid=%d\n", buf, st.st_gid);
return 0;
}
}
if ((st.st_mode & S_IWOTH) && !(st.st_mode & S_ISVTX)) {
fprintf(stderr, "verify-lib: %s world-writable without sticky\n", buf);
return 0;
}
char *slash = strrchr(buf, '/');
if (!slash || slash == buf) break;
*slash = '\0';
}
return 1;
}
int main(int argc, char *argv[]) {
if (argc < 2 || argc > 3) {
fprintf(stderr, "usage: verify-lib <file> [prefix]\n");
return 1;
}
const char *file = argv[1];
const char *prefix = argc == 3 ? argv[2] : "/usr/lib/";
int userns = in_user_ns();
unsigned int overflow_uid = get_overflow_uid();
char *real = realpath(file, NULL);
if (!real) {
fprintf(stderr, "verify-lib: cannot resolve %s: %s\n", file, strerror(errno));
return 1;
}
if (strncmp(real, prefix, strlen(prefix)) != 0) {
fprintf(stderr, "verify-lib: %s resolves outside %s\n", real, prefix);
free(real); return 1;
}
struct stat st;
if (lstat(real, &st) != 0) {
fprintf(stderr, "verify-lib: cannot stat %s: %s\n", real, strerror(errno));
free(real); return 1;
}
if (!S_ISREG(st.st_mode)) {
fprintf(stderr, "verify-lib: %s not a regular file\n", real);
free(real); return 1;
}
if (st.st_uid != 0 || st.st_gid != 0) {
if (userns && st.st_uid == overflow_uid && st.st_gid == overflow_uid && on_readonly_mount(real)) {
/* unmapped root on ro mount inside user ns */
} else {
fprintf(stderr, "verify-lib: %s ownership %d:%d, expected 0:0\n", real, st.st_uid, st.st_gid);
free(real); return 1;
}
}
if (st.st_mode & (S_IWGRP | S_IWOTH)) {
fprintf(stderr, "verify-lib: %s writable by non-root (mode=%04o)\n", real, st.st_mode & 07777);
free(real); return 1;
}
if (userns && !on_readonly_mount(real)) {
fprintf(stderr, "verify-lib: %s on writable mount in user ns\n", real);
free(real); return 1;
}
if (!verify_dir_chain(real, prefix, userns, overflow_uid)) {
free(real); return 1;
}
printf("%s\n", real);
free(real);
return 0;
}

Go entry point (`cmd/<name>/main.go`):

package main
import (
"fmt"
"os"
)
func main() {
if err := run(); err != nil {
fmt.Fprintf(os.Stderr, "project: %v\n", err)
os.Exit(1)
}
}
func run() error {
args := os.Args[1:]
if len(args) == 0 { return usageError() }
cmd := args[0]
flags := args[1:]
switch cmd {
case "subcmd": return cmdSubcmd(flags)
case "help", "--help", "-h": printUsage(); return nil
case "version", "--version", "-V": cmdVersion(); return nil
default: return fmt.Errorf("unknown command %q\nrun 'project help' for usage", cmd)
}
}

Error wrapping:

return fmt.Errorf("operation failed: %w", err)

- Wrap errors with `%w` for context
- Print to stderr in `main()` only: `fmt.Fprintf(os.Stderr, "project: %v\n", err)`
- `os.Exit(1)` on fatal errors
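The `version`/`-V` branch of the dispatcher usually prints a value injected at link time by the Makefile's `-ldflags` (§3). A minimal sketch of the pattern, collapsed into one runnable file for illustration (a real project would keep `cmdVersion` in `cmd/<name>/version.go`):

```go
package main

import "fmt"

// version is overwritten at build time:
//
//	go build -ldflags "-X main.version=v1.2.3" ./cmd/project/
//
// The "dev" default marks locally built, untagged binaries.
var version = "dev"

func cmdVersion() {
	fmt.Println("project", version)
}

func main() {
	cmdVersion()
}
```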
import "github.com/BurntSushi/toml"
type Config struct {
Dest string `toml:"dest"`
Shell string `toml:"shell"`
Prompts map[string]PromptConfig `toml:"prompts"`
}
func Load(path string) (*Config, error) {
var cfg Config
if _, err := toml.DecodeFile(path, &cfg); err != nil {
return nil, fmt.Errorf("parse %s: %w", path, err)
}
if err := cfg.validate(); err != nil {
return nil, fmt.Errorf("%s: %w", path, err)
}
return &cfg, nil
}
func (c *Config) validate() error {
if c.Dest == "" { return fmt.Errorf("dest is required") }
return nil
}

Template rendering (`text/template`, `missingkey=error`):

import (
"bytes"
"text/template"
)
func Render(content string, name string, data map[string]any) ([]byte, error) {
tmpl, err := template.New(name).
Funcs(FuncMap()).
Option("missingkey=error").
Parse(content)
if err != nil {
return nil, fmt.Errorf("parse template %s: %w", name, err)
}
var buf bytes.Buffer
if err := tmpl.Execute(&buf, data); err != nil {
return nil, fmt.Errorf("execute template %s: %w", name, err)
}
return buf.Bytes(), nil
}

`SecureDir` prevents symlink race attacks in `/tmp`. Uses XDG runtime dir first, falls back to user state dir. Always 0700.
package safetemp
import (
"os"
"path/filepath"
)
// SecureDir returns a directory suitable for temporary files that should
// not be accessible to other users. The directory is created with mode 0700
// if it does not exist.
//
// Priority:
// 1. $XDG_RUNTIME_DIR/<project>/ — typically /run/user/<uid>, mode 0700
// 2. $HOME/.local/state/<project>/tmp/ — user state directory, mode 0700
// 3. "" — fallback (caller should handle)
func SecureDir(project string) string {
dirs := secureDirs(project)
for _, dir := range dirs {
if err := os.MkdirAll(dir, 0o700); err == nil {
return dir
}
}
return ""
}
func secureDirs(project string) []string {
var result []string
if dir := os.Getenv("XDG_RUNTIME_DIR"); dir != "" {
result = append(result, filepath.Join(dir, project))
}
if home, err := os.UserHomeDir(); err == nil {
result = append(result, filepath.Join(home, ".local", "state", project, "tmp"))
}
return result
}

Executing a generated script via the secure temp dir:

func execScript(content []byte, shell string) error {
dir := safetemp.SecureDir("project")
tmp, err := os.CreateTemp(dir, "project-script-*.sh")
if err != nil { return err }
defer os.Remove(tmp.Name())
if _, err := tmp.Write(content); err != nil { tmp.Close(); return err }
tmp.Close()
if err := os.Chmod(tmp.Name(), 0o700); err != nil { return err }
cmd := exec.Command(shell, tmp.Name())
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.Stdin = os.Stdin
return cmd.Run()
}

Table-driven tests:

package config
import (
"os"
"path/filepath"
"testing"
)
func TestExpandHome(t *testing.T) {
tests := []struct {
name string
input string
want string
}{
{"tilde only", "~", "/home/user"},
{"tilde path", "~/.config", "/home/user/.config"},
{"absolute", "/etc/passwd", "/etc/passwd"},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got := expandHome(tt.input)
if got != tt.want {
t.Errorf("expandHome(%q) = %q, want %q", tt.input, got, tt.want)
}
})
}
}

Root-only test guard:

func skipIfNotRoot(t *testing.T) {
if os.Geteuid() != 0 { t.Skip("requires root") }
}
func TestApplyActions(t *testing.T) {
skipIfNotRoot(t)
// ... tests that require chmod/chown ...
}

Dependency injection for filesystem-free tests:

// Inject isDirFunc to avoid real filesystem lookups in tests
func ComputeActions(rules []PermRule, managedPaths []string, dest string, isDirFunc func(string) bool) []Action {
if isDirFunc == nil {
isDirFunc = func(p string) bool {
info, err := os.Stat(p)
return err == nil && info.IsDir()
}
}
// ...
}

Build command:

CGO_ENABLED=0 go build -trimpath -buildmode=pie -ldflags "-X main.version=$(VERSION)" -o $(BINARY) ./cmd/project/

Go projects SHOULD use `golangci-lint` in CI (§15). At minimum, `errcheck` and `staticcheck` SHOULD be enabled.
All error return values MUST be checked. In tests, unhandled errors cause t.Fatal:
if err := os.WriteFile(path, []byte(content), 0o644); err != nil {
t.Fatal(err)
}
if err := w.Close(); err != nil {
t.Fatalf("w.Close: %v", err)
}

In production code, errors are returned:
if err := tmp.Close(); err != nil {
return fmt.Errorf("close temp: %w", err)
}
if _, err := fmt.Fprintf(w, "%s [y/n]: ", question); err != nil {
	return false, err
}

In `defer` cleanup paths where the error cannot be meaningfully handled, use explicit `_ =` to signal intentional omission:
defer func() { _ = os.Remove(tmpPath) }()
defer func() { _ = f.Close() }()

Bare `defer` of functions returning `error` (e.g. `defer os.Chdir(oldWd)`) is not permitted — wrap in a closure.
Loop-to-append replacement (staticcheck S1011):
// replace manual slice copy with append
symlinks := make([]string, 0, len(symlinkKeys))
symlinks = append(symlinks, symlinkKeys...)

Test documentation lives in one of two places:

- `tests/README.md` — for Shell/Python/C projects where `tests/` is a directory containing test scripts (`test_config.sh`, `test_module.py`, etc.). If the project contains a test suite, this file SHOULD be present inside that directory.
- `tests.md` — for Go and C# projects where tests are part of the build system (`*_test.go` packages, `*.Tests.csproj` projects) and there is no standalone `tests/` script directory. If the project contains a test suite, this file SHOULD be present at the repository root.
# Tests
## Overview
| File | Language | Framework | What it tests |
|------|----------|-----------|---------------|
| `test_config.sh` | Bash | Custom assertions | Config loading, parsing, quoting |
| `test_integration.sh` | Bash | Custom assertions | End-to-end flow |
| `test_module.py` | Python | pytest | Pure functions |
## Running
```bash
# All tests
make test
# Individual suites
bash tests/test_config.sh
python -m pytest tests/test_module.py -v
```
## How they work
### Bash unit tests
All unit test files source `test_harness.sh`, which provides:
- **Assertion functions**: `ok`/`fail`/`assert_eq`/`assert_match`/`assert_contains`/`assert_rc`/`run_cmd`
- **Mock call tracking**: `mock_call_count`, `mock_last_args`, `mock_clear_log`
- **Temporary directory**: `$TESTDIR` cleaned up via `trap EXIT`
- **Global state isolation**: `reset_globals()` between test sections
- **Mock framework**: `make_mock` (writes scripts to `$MOCK_BIN` with call logging)
- **Default mocks**: `stat`, `findmnt`, `mountpoint`, `python3`, `mount`, `btrfs`, `flock`, `df`
### Python tests
Standard pytest suites. No system access — all filesystem operations use `tmp_path`, all subprocess calls are mocked.
## Test environment
- Bash tests create a temporary directory (`mktemp -d`) cleaned up via `trap EXIT`
- No root privileges required
- No real disks, partitions, or volumes are touched
- Python tests use pytest's `tmp_path` fixture

Reference harness (`tests/test_harness.sh`):

#!/usr/bin/env bash
# tests/test_harness.sh
#
# Shared test harness for unit tests.
# Sourced by individual test files — NOT run directly.
set -uo pipefail
# Note: no -e. Tests must continue running when assertions fail
# so failures can be counted and reported by summary().
PASS=0; FAIL=0; TESTS=0
# ── Test helpers ─────────────────────────────────────────────
ok() { PASS=$((PASS + 1)); TESTS=$((TESTS + 1)); echo " ✓ $1"; }
fail() { FAIL=$((FAIL + 1)); TESTS=$((TESTS + 1)); echo " ✗ $1"; }
assert_eq() {
local desc="$1" expected="$2" actual="$3"
if [[ "$expected" == "$actual" ]]; then ok "$desc"
else fail "$desc (expected='$expected', got='$actual')"; fi
}
assert_match() {
local desc="$1" pattern="$2" actual="$3"
if [[ "$actual" =~ $pattern ]]; then ok "$desc"
else fail "$desc (pattern='$pattern' not found in '$actual')"; fi
}
assert_contains() {
local desc="$1" needle="$2" haystack="$3"
if [[ "$haystack" == *"$needle"* ]]; then ok "$desc"
else fail "$desc (needle='$needle' not in output)"; fi
}
assert_not_contains() {
local desc="$1" needle="$2" haystack="$3"
if [[ "$haystack" != *"$needle"* ]]; then ok "$desc"
else fail "$desc (needle='$needle' unexpectedly found)"; fi
}
assert_file_exists() {
local desc="$1" path="$2"
if [[ -e "$path" ]]; then ok "$desc"
else fail "$desc (missing: $path)"; fi
}
assert_file_not_exists() {
local desc="$1" path="$2"
if [[ ! -e "$path" ]]; then ok "$desc"
else fail "$desc (unexpected: $path)"; fi
}
assert_file_contains() {
local desc="$1" needle="$2" file="$3"
if grep -qF "$needle" "$file" 2>/dev/null; then ok "$desc"
else fail "$desc (needle='$needle' not in $file)"; fi
}
# Run command in subshell, capture rc + combined stdout/stderr.
# Sets globals: _rc, _out
run_cmd() {
_rc=0; _out=$("$@" 2>&1) || _rc=$?
}
assert_rc() {
local desc="$1" expected="$2"; shift 2
local rc=0; "$@" >/dev/null 2>&1 || rc=$?
assert_eq "$desc" "$expected" "$rc"
}
section() { echo ""; echo "── $1 ──"; }
# ── Mock call tracking ──────────────────────────────────────
mock_call_count() {
local name="$1" log="${TESTDIR}/mock_calls_${name}.log"
if [[ -f "$log" ]]; then wc -l < "$log" | tr -d ' '; else echo "0"; fi
}
mock_last_args() {
local name="$1" log="${TESTDIR}/mock_calls_${name}.log"
if [[ -f "$log" ]]; then tail -1 "$log"; else echo ""; fi
}
mock_clear_log() {
local name="$1" log="${TESTDIR}/mock_calls_${name}.log"
: > "$log"
}
# ── Setup test environment ───────────────────────────────────
TESTDIR=$(mktemp -d)
trap 'rm -rf "$TESTDIR"' EXIT
MOCK_BIN="${TESTDIR}/mock_bin"
mkdir -p "$MOCK_BIN"
ORIG_PATH="$PATH"
export PATH="${MOCK_BIN}:${PATH}"
make_mock() {
local name="$1"; shift
local body="${*:-exit 0}"
local log_file="${TESTDIR}/mock_calls_${name}.log"
: > "$log_file"
cat > "${MOCK_BIN}/${name}" <<ENDSCRIPT
#!/bin/bash
printf '%s\n' "\$*" >> "${log_file}"
${body}
ENDSCRIPT
chmod +x "${MOCK_BIN}/${name}"
}
make_mock_in() {
local dir="$1" name="$2"; shift 2
local body="${*:-exit 0}"
mkdir -p "$dir"
cat > "${dir}/${name}" <<ENDSCRIPT
#!/bin/bash
${body}
ENDSCRIPT
chmod +x "${dir}/${name}"
}
# ── Reset project globals ────────────────────────────────────
# Override this in your test file to clear project-specific state.
reset_globals() { :; }
# ── Default mocks ────────────────────────────────────────────
REAL_STAT=$(command -v stat 2>/dev/null || echo /usr/bin/stat)
make_mock stat "
if [[ \"\${1:-}\" == \"-c\" && \"\${2:-}\" == \"%u\" ]]; then
echo \"0\"
else
exec \"${REAL_STAT}\" \"\$@\"
fi
"
make_mock findmnt 'echo ""'
make_mock mountpoint 'exit 0'
make_mock python3 'echo ""'
make_mock mount 'exit 0'
make_mock btrfs 'exit 0'
make_mock flock 'exit 0'
make_mock df 'echo ""'
# ── Summary ──────────────────────────────────────────────────
summary() {
local name="${0##*/}"
echo ""
echo "════════════════════════════════════"
echo " ${name}: ${PASS} passed, ${FAIL} failed (total: ${TESTS})"
echo "════════════════════════════════════"
if [[ $FAIL -ne 0 ]]; then exit 1; fi
exit 0
}

The `test_harness.sh` above is a reference implementation — it may be adopted fully or partially depending on project needs.
- Full harness — for shell projects that test business logic involving external commands (mount, btrfs, pacman, flock, etc.). Provides complete assertion suite, mock framework, call tracking, and default mocks.
- Partial harness — for projects that need only assertion helpers (`ok`/`fail`/`assert_eq`) and basic mocks without full call tracking or `make_mock_in`.
- No harness needed — for projects that test pure library functions or compiled binaries without shell-level mocking.
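A minimal test file exercising the mock framework might look like the sketch below. The relevant harness pieces are inlined so the example stands alone; in a real test file you would source `tests/test_harness.sh` and use its `assert_eq`/`ok`/`fail` helpers instead of the inline check.

```bash
#!/usr/bin/env bash
set -uo pipefail   # test files: no -e (see §2 exceptions)

TESTDIR=$(mktemp -d)
trap 'rm -rf "$TESTDIR"' EXIT
MOCK_BIN="${TESTDIR}/mock_bin"
mkdir -p "$MOCK_BIN"
export PATH="${MOCK_BIN}:${PATH}"

make_mock() {
    local name="$1"; shift
    local body="${*:-exit 0}"
    local log_file="${TESTDIR}/mock_calls_${name}.log"
    : > "$log_file"
    cat > "${MOCK_BIN}/${name}" <<ENDSCRIPT
#!/bin/bash
printf '%s\n' "\$*" >> "${log_file}"
${body}
ENDSCRIPT
    chmod +x "${MOCK_BIN}/${name}"
}

# Mock btrfs so nothing touches a real filesystem
make_mock btrfs 'exit 0'

# Code under test would run this; simulate the call directly
btrfs subvolume snapshot /src /dst

# Assert on the recorded invocation via the mock call log
recorded=$(cat "${TESTDIR}/mock_calls_btrfs.log")
if [[ "$recorded" == "subvolume snapshot /src /dst" ]]; then
    echo "PASS: btrfs called with expected args"
else
    echo "FAIL: got '$recorded'"
fi
```

The call log lets tests assert not only that the code under test ran, but exactly which arguments reached the external command.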
# Tests
## Overview
| Package | File | What it tests |
|---------|------|---------------|
| `internal/config` | `config_test.go` | Parsing, validation, defaults |
| `internal/engine` | `status_test.go` | Status reporting, template rendering |
| `internal/perms` | `apply_test.go` | Permission computation and application |
## Running
```bash
# All tests (no root)
make test
# Individual package
go test ./internal/config/ -v
# Perms tests that require root (chmod/chown/full pipeline)
make test-root
```
## How they work
### Unit tests
All tests use Go's standard `testing` package with `t.TempDir()` for filesystem isolation. No external test frameworks.
### Root-only tests
Guarded by `skipIfNotRoot`. Run via `make test-root` (sudo).
## Test environment
- All tests create temporary directories via `t.TempDir()`, cleaned up automatically
- No root privileges required except `internal/perms` apply tests
- No real home directories or system files are touched
- Root-only tests skip with `t.Skip("requires root")` when run as non-root

# Tests
## Overview
| Project | File | Framework | What it tests |
|---------|------|-----------|---------------|
| `subs2srs.Tests` | `UtilsSubsTests.cs` | xUnit | Time formatting, padding, overlap |
| `subs2srs.Tests` | `PrefIOTests.cs` | xUnit | JSON round-trip, migration, defaults |
| `subs2srs.Tests` | `ProjectIOTests.cs` | xUnit | `.s2s.json` save/load, corruption handling |
## Running
```bash
# All tests
make test
# Individual suite
dotnet test subs2srs.Tests/subs2srs.Tests.csproj --filter "FullyQualifiedName~UtilsSubsTests"
```
## How they work
### xUnit suites
- **Parallelization disabled**: `[assembly: CollectionBehavior(DisableTestParallelization = true)]` prevents race conditions on mutable static state.
- **Singleton reset**: `Settings.Instance.reset()` called in constructor and `Dispose()` to isolate test state.
- **Temp directories**: `Path.GetTempPath()` + `Guid` creates isolated dirs. Cleaned up via `IDisposable.Dispose()`.
- **Mocking**: External CLI tools are not invoked. File I/O tests use real temp files.
## Test environment
- All tests create temporary directories via `Path.Combine(Path.GetTempPath(), ...)` and clean up in `Dispose()`
- No root privileges required
- No real media files, subtitles, or system paths are touched
- Tests run sequentially to avoid singleton pollution

Shell projects SHOULD measure test line coverage. Use `bash-coverage` (fkzys-tools) to collect coverage without modifying source files. It works via `BASH_ENV` + DEBUG trap (`set -T`), logging each executed line to a coverage file.
```bash
# Measure coverage for a single test
bash-coverage -- bash tests/test_config.sh

# Measure all tests in a project
bash-coverage -p ./atomic-upgrade

# Enforce minimum threshold
bash-coverage --min-coverage 80 -- make test
```

Coverage is reported per file with color coding (green ≥80%, yellow ≥50%, red <50%). The executable-line heuristic excludes blank lines, comments, and structural keywords (`then`, `fi`, `do`, etc.) from the denominator. Test files (files under `tests/` or matching `test_*.sh`) are excluded from the report — coverage measures production code, not the test runner itself.
CI MAY enforce minimum coverage thresholds using --min-coverage.
Shell coverage tools work via `BASH_ENV` + DEBUG trap (`set -T`) to collect line-level coverage without modifying source files. Implementations MUST follow these rules:

1. **`realpath -m` for ALL paths, not just relative.** Absolute paths containing `../` (e.g. `/proj/tests/../bin/script`) must be normalized before glob filtering. If path resolution is only applied to relative paths, the `../` segment remains and filters like `*/tests/*` will match incorrectly, excluding production files from the report.
2. **`BASH_SOURCE[1]` with fallback to `BASH_SOURCE[0]`.** Child bash processes launched via shebang exec (e.g. `./bin/script`) have an empty `BASH_SOURCE[1]`. Without fallback to `BASH_SOURCE[0]`, coverage data for these files is silently lost.
3. **No `set +T` / `set -T` inside the DEBUG trap.** Toggling `set -T` within the trap creates a race condition — the trap fires on the `set -T` command itself, producing spurious entries and breaking inheritance for child processes. Use early return instead of toggling trace mode.
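A minimal collector obeying the three rules can be sketched as follows. File names and the log format are illustrative; `bash-coverage`'s actual implementation may differ. The sketch adds a re-entrancy guard variable because the command substitution inside the trap would otherwise re-fire the inherited trap in its subshell.

```bash
#!/usr/bin/env bash
set -euo pipefail
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# The BASH_ENV prologue: installs an inheritable DEBUG trap.
cat > "$workdir/cov_prologue.sh" <<'EOF'
set -T   # DEBUG trap inherited by functions, substitutions, subshells
_cov_record() {
    # Re-entrancy guard: early return, never toggle set -T here (rule 3)
    [[ -n "${_COV_BUSY:-}" ]] && return 0
    _COV_BUSY=1
    # Rule 2: BASH_SOURCE[1] is the traced file; fall back to
    # BASH_SOURCE[0] for scripts launched via shebang exec
    local src="${BASH_SOURCE[1]:-${BASH_SOURCE[0]:-}}"
    if [[ -n "$src" ]]; then
        src=$(realpath -m -- "$src")   # rule 1: normalize ALL paths
        [[ "$src" == */tests/* ]] || printf '%s:%s\n' "$src" "$1" >> "$COVERAGE_LOG"
    fi
    _COV_BUSY=
}
trap '_cov_record "$LINENO"' DEBUG
EOF

# A target script to trace
cat > "$workdir/target.sh" <<'EOF'
#!/bin/bash
x=1
echo "x=$x"
EOF

COVERAGE_LOG="$workdir/cov.log" BASH_ENV="$workdir/cov_prologue.sh" \
    bash "$workdir/target.sh"
grep -q 'target.sh' "$workdir/cov.log" && echo "coverage recorded"
```

Because `bash` sources `$BASH_ENV` before running the script, every executed line of `target.sh` is appended to the coverage log without touching the source file.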
#compdef project-name
_project_name() {
local context state state_descr line
typeset -A opt_args
_arguments -C \
'(- *)'{-h,--help}'[Show help]' \
'(- *)'{-V,--version}'[Show version]' \
'1:command:((
subcmd1\:"Description"
subcmd2\:"Description"
))' \
'*::arg:->args' \
&& return
case $state in
args)
case ${words[1]} in
subcmd1)
_arguments \
'(-n --dry-run)'{-n,--dry-run}'[Dry run]' \
'*:arg:_files'
;;
esac
;;
esac
}
_project_name "$@"

# completions/project-name.bash
# bash completion for project-name
_project_name() {
local cur prev words cword
_init_completion || return
if [[ $cword -eq 1 ]]; then
COMPREPLY=( $(compgen -W "subcmd1 subcmd2 -h --help -V --version" -- "$cur") )
return
fi
case "${words[1]}" in
subcmd1)
COMPREPLY=( $(compgen -W "-n --dry-run" -- "$cur") )
;;
esac
}
complete -F _project_name project-name

All user-facing CLI tools MUST provide a man page. Man pages are written in Markdown and compiled to roff via `pandoc -s -t man`. Source files live in `man/`.
| File pattern | Section | Purpose |
|---|---|---|
| `project-name.8.md` | 8 | CLI commands, system utilities |
| `project-name.conf.5.md` | 5 | Configuration file formats |
```yaml
---
title: PROJECT-NAME
section: 8
header: System Administration
footer: project-name
---
```

| Field | Purpose |
|---|---|
| `title` | Uppercase project name (matches `.TH` title) |
| `section` | Man section: 8 for commands, 5 for file formats |
| `header` | Center header (e.g., System Administration, File Formats) |
| `footer` | Lower-right corner — project name only. MUST NOT contain version, date, or other metadata. |

Omit `date`.
# NAME
project-name — short description
# SYNOPSIS
**project-name** \<command\> [options]
# DESCRIPTION
What the tool does, how it works.
# COMMANDS
**apply** [-n|\--dry-run]
: What this command does.
**status**
: What this command does.
# OPTIONS
**-h**, **\--help**
: Show usage and exit.
**-V**, **\--version**
: Print version and exit.
# EXAMPLES
Typical usage:
project-name apply
Preview:
project-name apply --dry-run
# EXIT STATUS
**0**
: Success.
**1**
: Error. Common causes.
# FILES
**~/.local/state/project-name/\***
: State files.
# SEE ALSO
**related-tool**(8)

# NAME
project.conf — configuration for project-name
# SYNOPSIS
*/etc/project.conf*
# DESCRIPTION
What the config file is for, who reads it.
Format description (KEY=VALUE, TOML, YAML, etc.). Security requirements (ownership, no eval).
# OPTIONS
**KEY_NAME**
: Description. Default: *value*.
# SECURITY
Ownership requirements, parsing restrictions, what is rejected.
# EXAMPLES
Minimal config:
# /etc/project.conf
KEY_NAME = value
# SEE ALSO
**project-name**(8)

```make
MANPAGES = man/project-name.8

man: $(MANPAGES)

man/%.8: man/%.8.md
	pandoc -s -t man -o $@ $<

clean:
	rm -f $(MANPAGES)

install:
	install -Dm644 man/project-name.8 $(DESTDIR)$(MANDIR)/man8/project-name.8

uninstall:
	rm -f $(DESTDIR)$(MANDIR)/man8/project-name.8
```

The `install` target MUST only install files — it MUST NOT trigger build steps (`man`, `build`, etc.). Build artifacts are the maintainer's responsibility to produce before running `make install`.
Infrastructure projects (infra, tf-infra) manage server configurations, DNS records, and service deployments. They are not installable packages — conventions like Makefile (PREFIX/DESTDIR), depends, bin/, lib/, tests/README.md, and verify-lib do not apply.
What does apply:
- §4 Python patterns: entry points, error handling (`sys.exit(1)`, stderr), SOPS helper
- Jinja2 templating for service configs
- SOPS-encrypted secrets (`.sops.yaml`, `secrets.enc.yaml`)
- Tests use pytest with mocked SSH/HTTP (§7 Python test patterns)
```python
class ServiceDeployer:
    def __init__(self, config: dict):
        self.files = config['files']
        self.setup_dirs = config.get('setup_dirs', [])
        self.restart_cmd = config.get('restart_cmd')
        self.templates_dir = config['templates_dir']
        self.secrets_file = config['secrets_file']

    def _get_env(self):
        return create_jinja_env(self.templates_dir)

    def deploy(self, hosts, secrets, env, no_restart=False):
        target, port = resolve_target(hosts, secrets['host'])
        for entry in self.files:
            tpl, rp, opts = entry[0], entry[1], entry[2] if len(entry) == 3 else {}
            rendered = env.get_template(tpl).render(**secrets)
            # rsync + chown/chmod via SSH
```

```jinja
# Managed by infra repo — {{ instance_name }}
HostKeyAlgorithms rsa-sha2-512,rsa-sha2-256,ssh-ed25519
AllowUsers {{ common.ssh_allowed_users | join(' ') }}
Port {{ instance.ssh_port | default(common.ssh_port) }}
{% for user in common.ssh_otp_users %}
Match User {{ user }}
  AuthenticationMethods keyboard-interactive
{% endfor %}
```

Format: one line per dependency. Comments use `#`.
# system
system:python3
system:btrfs
system:ukify
gitpkg:verify-lib
Dependency types:
- `system:<pkg>` — Package from the system repository manager (pacman, apt, dnf). Installed via standard package manager.
- `gitpkg:<name>` — Project managed by `gitpkg`. Resolved from configured base URLs or collections. Cloned, built, and installed via `gitpkg install <name>`.
- `# comment` — Ignored by parsers.
Go projects do not use `depends`. Dependencies are managed by `go.mod`/`go.sum`. The `depends` file is only for shell, Python, C, and other projects without a native dependency manager.
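A parser for this format can be sketched as follows. The function and variable names are illustrative, not part of the spec; the nameref array parameters follow the §11 rule.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical helper: split a depends file into system and gitpkg lists
parse_depends() {
    local file="$1" line
    local -n _system="$2" _gitpkg="$3"   # namerefs for array out-params
    while IFS= read -r line; do
        line="${line%%#*}"               # strip comments
        line="${line//[[:space:]]/}"     # strip whitespace
        [[ -n "$line" ]] || continue
        case "$line" in
            system:*) _system+=("${line#system:}") ;;
            gitpkg:*) _gitpkg+=("${line#gitpkg:}") ;;
            *) echo "ERROR: unknown dependency type: $line" >&2; return 1 ;;
        esac
    done < "$file"
}

deps_file=$(mktemp)
trap 'rm -f "$deps_file"' EXIT
printf '%s\n' '# system' 'system:python3' 'system:btrfs' 'gitpkg:verify-lib' > "$deps_file"

sys_pkgs=() git_pkgs=()
parse_depends "$deps_file" sys_pkgs git_pkgs
echo "system: ${sys_pkgs[*]}"
echo "gitpkg: ${git_pkgs[*]}"
```

Unknown prefixes are rejected rather than ignored, so a typo in a `depends` file fails loudly at parse time.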
This section defines how code must be generated when working with fkzys projects. Assistants and developers creating new code must follow these rules.
- Verify structure (`Makefile`, `lib/`, `tests/` or `tests.md`) before adding files. For infrastructure projects (§9), these do not apply. Go projects use `go.mod` instead of `depends`.
- Apply standards automatically: `set -euo pipefail` (except in test files, wrappers, and guard scripts — see §2), whitelist config parser, `verify-lib` sourcing, `printf -v` instead of `eval`, explicit shopt restore, `*_NO_INIT` before sourcing library, `local -n` (nameref) for array parameters, `snake_case` function naming, `shopt -s nullglob` before glob loops, `_validate_path()` for install paths, absolute URLs for cross-repository links.
- No placeholders. Provide complete, runnable code.
- Include tests or update test documentation when adding features. Test files use `set -uo pipefail` (no `-e`) — failures are counted, not caught by errexit.
- Flag `eval`, `chmod 777`, hardcoded secrets, missing ownership checks, `/tmp` usage for scripts, bare `local -a`/`-A` in library functions without nameref, glob loops without `nullglob`, missing `_validate_path()` for install paths, relative links escaping repository root (`../`).
- Ask if uncertain about paths, versions, or flags before generating.
- When editing this specification, new top-level sections are appended at the end with the next sequential number. Subsections MUST use dot notation (e.g., §8.1 MAN PAGES). Existing section numbers MUST NOT be changed.
- Before committing, verify what will be included. Run `git status`, `git diff`, and `git diff --cached` to confirm all changes — staged and unstaged — match intent. Never use `git add -A && git commit` without stating expected contents.
- Comments explain why, not what. Comments should describe the reasoning behind a decision (`# avoid eval: TOCTOU risk`) or document non-obvious behaviour (`# on-load: one git status call, parse all files`). Never comment on what code obviously does (`# iterate files`, `# call function`). Omit trivial comments entirely.
- No colorful emojis. Do not use colorful emoji characters (e.g. graphic icons for objects, faces, animals) in specification text, code comments, or generated code. Simple Unicode symbols (checkmarks, arrows, box-drawing characters) are acceptable. Use plain-text markers like `BAD:`/`OK:` or `FIXME:`/`NOTE:` instead of colorful emojis.
- Relative links only within same repository. Relative paths (`../specs/README.md`, `./bin/script`) are only valid for files inside the same repository. References to external repositories MUST use absolute URLs (e.g. `https://github.com/fkzys/specs/blob/main/README.md`). Never use `../` to escape the repository root in documentation.
- Shell header: `#!/bin/bash` + `set -euo pipefail`
- Library header: `#!/usr/bin/env bash`
- Test file header: `#!/usr/bin/env bash` + `set -uo pipefail` (no `-e` — see §2 exceptions)
- Guard script comment: `# Note: no "set -euo pipefail" — this script relies on conditional control flow incompatible with errexit`
- Error: `echo "ERROR: msg" >&2; exit 1`
- Config check: `[[ -n "${VAR:-}" ]] || { echo "ERROR: VAR not defined" >&2; exit 1; }`
- Init bypass: `readonly _PROJECT_NO_INIT=1` before `_src common.sh`
- Nameref: `local -n _arr=$1; _arr+=(--flag)`
- Glob loop: `shopt -s nullglob; for f in *.sh; do ...; done; shopt -u nullglob`
- Path validation: `_validate_path() { [[ -n "$1" && ! "$1" =~ ^/ && ! "$1" =~ \.\. && ! "$1" =~ $'\n' ]] || return 1; }`
- Function naming: `snake_case()`, `_internal_helper()`
- Make install: `install -Dm755 bin/cmd $(DESTDIR)$(PREFIX)/bin/cmd`
- Python entry: `if __name__ == "__main__": main()`
- Go main: `func main() { if err := run(); err != nil { fmt.Fprintf(os.Stderr, "project: %v\n", err); os.Exit(1) } }`
- Go build: `CGO_ENABLED=0 go build -trimpath -buildmode=pie -ldflags "-X main.version=$(VERSION)"`
- C# build: `dotnet build Project/Project.csproj -c Release`
- Test run: `make test` or `bash tests/test.sh` or `python -m pytest tests/ -v` or `go test ./...` or `dotnet test Tests/Tests.csproj`
- Makefile test loop (shell, multiple files): `UNIT_TESTS = tests/test_a.sh tests/test_b.sh`, then `test:` runs `@for t in $(UNIT_TESTS); do echo ""; echo "━━━ $$t ━━━"; bash "$$t" || exit 1; done`
- CI test loop (shell, multiple files): `run: for t in tests/test_config.sh tests/test_cli.sh; do echo "━━━ $t ━━━"; bash "$t" || exit 1; done`
- Coverage: `bash-coverage -p ./project` or `bash-coverage --min-coverage 80 -- make test`
- Man YAML: `title: NAME`, `section: 8`, `header: System Administration`, `footer: project-name` (no `date`)
- Man compile: `pandoc -s -t man cmd.8.md -o cmd.8`
- SOPS: `sops -d secrets.enc.yaml`
```csharp
// Always use `using` for IDisposable (StreamReader, FileStream, XmlReader, etc.)
using var reader = new StreamReader(path, encoding);
// No explicit Close() needed — disposed on scope exit or exception.
```

```csharp
// Install a custom SynchronizationContext in Program.cs to marshal
// `await` continuations back to the UI main loop (GTK/WinForms/etc.).
private async void OnButtonClicked(object sender, EventArgs e)
{
    sender.SetSensitive(false);
    var result = await Task.Run(() => HeavyComputation());
    UpdateUI(result); // Safe: runs on main thread
    sender.SetSensitive(true);
}
```

```csharp
var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = MaxThreads };
Parallel.ForEach(workItems, parallelOptions, (item, state) =>
{
    if (cancellationToken.IsCancellationRequested) { state.Stop(); return; }
    Process(item);
});
```

- Disable parallelization if tests share mutable static state: `[assembly: CollectionBehavior(DisableTestParallelization = true)]`
- Reset singletons in constructor/dispose to avoid pollution.
- Use `Path.GetTempPath()` + `Guid` for isolated temp dirs. Clean up in `IDisposable.Dispose()`.
- Mock external CLI tools or use temp files for I/O tests.
- Atomic writes: write to `.tmp` extension, then `File.Move(tmp, final, overwrite: true)`.
- Never hardcode paths: use `Path.Combine`, `Environment.GetFolderPath`.
- Validate extensions before parsing: `Path.GetExtension(path).ToLowerInvariant()`.
- Always wrap `StreamReader`/`FileStream` in `using` to prevent descriptor leaks.
All releases MUST be tagged with annotated, signed git tags:
git tag -s -a v0.0.1 -m 'v0.0.1'
git push origin v0.0.1

- `-s` — sign the tag
- `-a` — create an annotated tag (stores tagger, date, message as a full Git object)
- Both flags are explicit: `-s` implies `-a`, but writing both makes intent clear
Version format: `v<major>.<minor>.<patch>` (semantic versioning with a `v` prefix).
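A pre-tag guard for this format can be sketched as follows (the helper name is illustrative, not part of the spec):

```bash
#!/usr/bin/env bash
set -uo pipefail

# Reject tags that do not match v<major>.<minor>.<patch>
validate_version_tag() {
    [[ "$1" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] || {
        echo "ERROR: tag '$1' does not match v<major>.<minor>.<patch>" >&2
        return 1
    }
}

validate_version_tag v0.0.1 && echo "ok: v0.0.1"
validate_version_tag 0.0.1 || echo "rejected: 0.0.1"
```

Running such a check before `git tag -s -a` catches malformed tags before they are signed and pushed.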
All commits MUST be signed:
git commit -S -m 'description'

- `-S` — sign the commit
This applies to both human-authored and LLM-generated commits. When LLM assistants generate code, the human reviewer must sign the resulting commit.
All commits in ecosystem projects MUST follow the Conventional Commits format:
<type>(<scope>): <description>
| Type | Purpose |
|---|---|
| `feat` | New feature |
| `fix` | Bug fix |
| `docs` | Documentation only |
| `ci` | CI configuration changes |
| `refactor` | Code change, no behavior change |
| `test` | Test additions or changes |
| `chore` | Maintenance, dependency updates, config |
Scope is optional but encouraged for targeted changes: `fix(atomic-gc):`, `test(keys-vault):`, `ci(btrfs-file-history):`.
Examples:
feat: add --separate-home flag for isolated home subvolumes
fix: handle missing ESP mount point gracefully
docs: clarify test target rules in §3
ci: call test scripts directly instead of make test
Scope may be added in parentheses: `fix(cli): handle --dry-run with custom tag`.
Commits to this specification follow the same Conventional Commits format.
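A commit-msg style check for this format can be sketched as follows. The hook wiring (e.g. installing it as `.git/hooks/commit-msg`) is illustrative, not mandated by this section; the type list mirrors the table above.

```bash
#!/usr/bin/env bash
set -uo pipefail

# Validate the first line of a commit message against
# <type>(<scope>): <description>
check_commit_subject() {
    local re='^(feat|fix|docs|ci|refactor|test|chore)(\([a-z0-9._-]+\))?: .+'
    [[ "$1" =~ $re ]] || {
        echo "ERROR: subject must match <type>(<scope>): <description>" >&2
        return 1
    }
}

check_commit_subject 'feat: add --separate-home flag for isolated home subvolumes' && echo ok
check_commit_subject 'fix(cli): handle --dry-run with custom tag' && echo ok
check_commit_subject 'update stuff' || echo rejected
```

In an actual hook, the subject would be read from the message file passed as `$1` (e.g. `head -n1 "$1"`) and a non-zero exit would abort the commit.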
All projects SHOULD have .github/workflows/ci.yml with path-filtered triggers.
CI triggers MUST use paths (whitelist) to avoid running on unrelated changes
(README edits, LICENSE updates, asset changes). Example:
```yaml
on:
  push:
    paths:
      - 'bin/**'
      - 'lib/**'
      - 'tests/**'
      - '.github/workflows/ci.yml'
  pull_request:
    paths:
      - 'bin/**'
      - 'lib/**'
      - 'tests/**'
      - '.github/workflows/ci.yml'
```

`paths-ignore` (blacklist) is NOT used — prefer explicit whitelisting.
CI MUST call test commands directly, NOT through make test. This keeps CI
output explicit and avoids coupling to the Makefile.
| Language | CI command |
|---|---|
| Shell | bash tests/test.sh or loop over tests/test_*.sh |
| Python | python -m pytest tests/ -v |
| Go | go test ./... -v -count=1 |
| C# | dotnet test Project.Tests/Project.Tests.csproj |
Example — Shell project with multiple test files:
```yaml
test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v6
    - run: |
        for t in tests/test_config.sh tests/test_cli.sh tests/test_commands.sh; do
          echo "━━━ $t ━━━"
          bash "$t" || exit 1
        done
```

Example — Go project with `dorny/paths-filter`:
```yaml
changes:
  runs-on: ubuntu-latest
  outputs:
    go: ${{ steps.filter.outputs.go }}
  steps:
    - uses: actions/checkout@v6
      with:
        fetch-depth: 0
    - uses: dorny/paths-filter@v4
      id: filter
      with:
        filters: |
          go:
            - '**.go'
            - 'go.mod'
            - 'go.sum'
            - '.github/workflows/ci.yml'

test:
  needs: changes
  if: needs.changes.outputs.go == 'true'
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v6
    - uses: actions/setup-go@v6
      with:
        go-version: '1.24'
    - run: go test ./... -v -count=1
```

Shell projects SHOULD run `shellcheck` in CI. Python projects SHOULD run `ruff` and `mypy`. C projects SHOULD run `cppcheck`. Go projects SHOULD run `go vet`.
Shell projects SHOULD measure test coverage in CI using bash-coverage (fkzys-tools). The tool must be installed before use:
```yaml
coverage:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v6
    - uses: actions/checkout@v6
      with:
        repository: fkzys/fkzys-tools
        path: fkzys-tools
    - run: |
        for t in tests/test_*.sh; do
          echo "━━━ $t ━━━"
          bash "$t" || exit 1
        done
    - run: bash fkzys-tools/bash-coverage --min-coverage 50 -p .
```

When an interactive tool supports hook commands (e.g. `on-load`, `on-cd`, `on-init`, `on-select`) that communicate with a long-running daemon via IPC (sockets, remote commands, D-Bus), a race condition exists between the tool's client registration and the hook's first IPC call.
Hooks that run asynchronously may execute before the tool's client ID is registered with the daemon. This means the first IPC call silently fails — the daemon has no record of the client to deliver the message to.
This manifests as:
- Works after navigation: Hook fires correctly when user navigates (client already registered), but fails on initial load
- Works on restart: Hook works when tool is restarted in the same directory or session (client was registered during previous session)
- No error output: Daemon drops messages to unregistered clients without feedback to the hook script
Hook scripts MUST NOT use sleep, polling loops, or busy-wait retries
to wait for daemon readiness. Sleep duration depends on CPU speed, I/O
latency, and system load, and is not deterministic. Timing-based waits
MUST NOT be used for IPC communication via pipes, Unix sockets, D-Bus,
or remote commands. The only acceptable synchronization mechanisms are:
- Barrier I/O waits (e.g. `tcdrain`) for output completion
- Two-phase initialization (`on-init` → reload → `on-load`)
Use a two-phase approach:

1. **Synchronous initialization hook** (`on-init`, `setup`, or equivalent) runs after the tool's client is fully registered with the daemon. Use it to trigger a reload or re-sync event.
2. **Asynchronous work hook** (`on-load`, `on-cd`, or equivalent) performs the actual work. When triggered by the reload from phase 1, the daemon already knows about the client and delivers IPC messages.
Generic flow:
tool starts → daemon registers client → on-init fires (sync)
→ on-init triggers reload → on-load fires (async, client registered)
→ IPC calls succeed
When the tool provides a way to wait for I/O completion, use it as a synchronization barrier to ensure output is fully transmitted before the next render cycle begins.
Example using tcdrain (waits until all buffered output on a file descriptor
is transmitted):
cmd previewer &{{
# Send image to terminal via file descriptor 3
send_image "$1" >&3
# Wait until TTY output buffer is fully flushed
perl -MPOSIX -e 'POSIX::tcdrain(3)'
}}

This prevents the tool from rendering the next screen update while the
previous output is still being drawn (flicker, partial images, corrupted
terminal state). The descriptor number must not conflict with other hooks
(e.g. 3 is commonly used for preview output — match the fd used by the
sending command).
This pattern applies to any tool where:
- Hooks run in async subprocesses
- Hooks communicate with a persistent daemon via client IDs or sessions
- The first hook may fire before client registration completes
Examples: file managers with remote commands, terminal multiplexers with status line updates, editors with server-mode IPC.
When hooks appear to fail silently:
- Run the tool with logging enabled
- Compare hook invocation log entries vs daemon receive/dispatch entries
- If the hook fires but no corresponding dispatch appears, the client was not registered when the hook sent its IPC call
- Check the daemon's client registry — unregistered clients show empty connection lists
lf uses on-init (sync, after client registration) and on-load (async,
fires when directory contents are loaded). lf -remote sends commands to
the lf server daemon via Unix socket.
Race: on-load for the initial directory fires before the server has
registered the client in gConnList[id]. lf -remote "send $id ..." sends
to an empty list — message is dropped.
Fix:
# Trigger reload after client is registered
cmd on-init &{{
lf -remote "send $id reload"
}}
# on-load now runs with registered client — IPC succeeds
cmd on-load &{{
# ... git status, parse files ...
lf -remote "send $id :$cmds"
}}

Copyright (c) 2026 fkzys
This specification and its embedded architectural patterns are dual-licensed. You may use this project under either the Open Source (Copyleft) licenses or a Commercial License.
Under the open-source model, the following copyleft licenses apply:
- Specification Text & Documentation: Licensed under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0). Any modified versions or derived specifications must be shared under the identical license.
- Code Snippets & Architecture: All embedded code blocks, Makefiles, bash functions (e.g.,
test_harness.sh), C sources (e.g.,verify-lib.c), Go structures, and infrastructure templates (Jinja2/Terraform patterns) are licensed under the GNU Affero General Public License v3.0 or later (AGPL-3.0-or-later).
Note on Implementations: Any tools, infrastructure-as-code deployments (§9), or systems that incorporate, copy, or adapt the code snippets and architectural patterns defined in this specification are considered derivative works. These must be distributed under the AGPL-3.0-or-later license. This explicitly includes network-interacting infrastructure covered by the AGPL network interaction clause.
If you wish to use this specification, implement its patterns, or use its associated tools (gitpkg, infra, verify-lib, etc.) in a proprietary, closed-source product, or if your policies prohibit the use of AGPLv3 software, a Commercial License is available.
A commercial license grants a legal waiver from the AGPLv3 and CC BY-SA 4.0 copyleft requirements, permitting the use, modification, and integration of this work into private or commercial infrastructure without the obligation to disclose source code.
Please contact [fkzys@proton.me] for commercial licensing inquiries.
Unless required by applicable law or agreed to in writing, the specification and code distributed under the Open Source licenses are distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.