SiteStudio
- Brisbane, Queensland, Australia
- http://sitestudio.co
Stars
Open Source Application for Advanced LLM + Diffusion Engineering: interact, train, fine-tune, and evaluate large language models on your own computer.
A curated list of awesome commands, files, and workflows for Claude Code
Run Claude Code with local MLX-powered models
Environments for LLM Reinforcement Learning
An open-source AI agent that brings the power of Gemini directly into your terminal.
MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
Zama's Homomorphic Processing Unit implementation on FPGA
ClaraVerse is a privacy-first, fully local AI workspace featuring a Local LLM chat powered by LLama.cpp, along with support for any provider, tool calling, agent building, Stable Diffusion, and n8n…
Like VexRiscv, but Harder, Better, Faster, Stronger
FPT: a Fixed-Point Accelerator for Torus Fully Homomorphic Encryption
TFHE-rs: A pure Rust implementation of the TFHE scheme for Boolean and integer arithmetic over encrypted data (a minimal usage sketch follows this list).
✨ Awesome - A curated list of amazing Homomorphic Encryption libraries, software and resources
A curated list of amazing Fully Homomorphic Encryption (FHE) resources created by the team at Zama.
Integrate cutting-edge LLM technology quickly and easily into your apps
ROPfuscator is a fine-grained code obfuscation framework for C/C++ programs using ROP (return-oriented programming).
Renode - Antmicro's open source simulation and virtual development framework for complex embedded systems
OFRAK: unpack, modify, and repack binaries.
A guidance language for controlling large language models.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
This is the development repository for the OpenFHE library. The current version is 1.4.2 (released on October 20, 2025).
🐙 Guides, papers, lessons, notebooks and resources for prompt engineering, context engineering, RAG, and AI Agents.
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
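
To make the TFHE-rs entry above concrete, here is a minimal sketch of encrypted integer arithmetic using the tfhe crate's high-level API (ConfigBuilder, generate_keys, FheUint8). The specific operand values and the choice of FheUint8 are illustrative assumptions, not something taken from this list.

    use tfhe::prelude::*;
    use tfhe::{generate_keys, set_server_key, ConfigBuilder, FheUint8};

    fn main() {
        // Build a default configuration and generate a client/server key pair.
        let config = ConfigBuilder::default().build();
        let (client_key, server_key) = generate_keys(config);
        set_server_key(server_key);

        // Encrypt two small integers under the client key.
        let a = FheUint8::encrypt(27u8, &client_key);
        let b = FheUint8::encrypt(15u8, &client_key);

        // Add the ciphertexts; the party holding only the server key
        // computes on encrypted data without seeing the plaintexts.
        let sum = a + b;

        // Only the client key can decrypt the result.
        let clear: u8 = sum.decrypt(&client_key);
        assert_eq!(clear, 42);
    }

The same pattern extends to the crate's other integer types and operators; the essential point is that all arithmetic happens on ciphertexts, with decryption possible only on the client side.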