- StarkWare
- Genesis
- https://fgu.tech/manifesto
- @Abdelstark
- https://primal.net/abdel
Stars
CADAM is the open source text-to-CAD web application
Inspect and compare self-supervised vision model representations for DINOv2, I-JEPA, V-JEPA 2, EUPE
MathCode: A Frontier Mathematical Coding Agent
PyTorch code and models for the DINOv2 self-supervised learning method.
A high-throughput and memory-efficient inference and serving engine for LLMs
State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!
Efficient Universal Perception Encoder: a single on-device vision encoder with versatile representations that match or exceed specialized experts across multiple task domains.
Integrate Git version control with automatic commit-and-sync and other advanced features in Obsidian.md
A collection of pre-trained, state-of-the-art models in the ONNX format
Stylometric analysis of Satoshi & comparison of Satoshi with 75,000+ authors
Collection of DESIGN.md files that capture design systems from popular websites. Drop one into your project and let coding agents build matching UI.
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
Provable SHRIMPS: Post-quantum hash-based signatures verified in Cairo with STARK proofs
CommitLLM is a cryptographic commit-and-audit protocol for open-weight LLM inference.
You like pytorch? You like micrograd? You love tinygrad! ❤️
Comprehensive roadmap for aspiring Embedded Systems Engineers, featuring a curated list of learning resources
Open-source compliance toolkit for the EU AI Act. Risk classification, conformity checklists, documentation templates.
Implementation of the GRASP world model planner for dino_wm environments
Linkedin Automation Tool: Describe your product. Define your target market. The AI finds the leads for you.
A Python package to assess and improve fairness of machine learning models.
Set of tools to assess and improve LLM security.
Adding guardrails to large language models.
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems.
Provable adversarial robustness at ImageNet scale
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, 2024, 2025)
Inspect: A framework for large language model evaluations
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.