- NVIDIA
- Toronto (UTC -04:00)
- https://www.alexi.uk/
- @llm_wizard
- @chrisalexiuk
- in/csalexiuk
Stars
- A beautiful, cross-platform CLI tool for analyzing disk space usage with developer-friendly terminal visualizations and JSON export
- Open-source library for scalable, reproducible evaluation of AI models and benchmarks.
- Developer Asset Hub for NVIDIA Nemotron: a one-stop resource for training recipes, usage cookbooks, datasets, and full end-to-end reference examples to build with Nemotron models
- Fully open reproduction of DeepSeek-R1
- New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
- 🤩 An AWESOME Curated List of Papers, Workshops, Datasets, and Challenges from CVPR 2024
- Knowledge Agents and Management in the Cloud
- The simplest, fastest repository for training/finetuning medium-sized GPTs.
- The official Python library for the OpenAI API
- An index of all of our weekly concepts + code events for aspiring AI Engineers and Business Leaders!
- A collection of fine-tuning notebooks!
- Supercharge Your LLM Application Evaluations 🚀
- Set up your local AI-powered dev environment just like professional AI Engineers
- This repository contains a toy implementation of a basic RAQA system.
- An introduction to the Chainlit Library for an event with the Machine Learning Maker Space and EveningOfPythonCoding
- <⚡️> SuperAGI - A dev-first open source autonomous AI agent framework, enabling developers to build, manage & run useful autonomous agents quickly and reliably.
- Resources relating to the DLAI event: https://www.youtube.com/watch?v=eTieetk2dSw
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
- [NeurIPS 2023] Tree of Thoughts: Deliberate Problem Solving with Large Language Models
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.