Stars
A modular graph-based Retrieval-Augmented Generation (RAG) system
Repository for the Lux AI Challenge, season 3 @NeurIPS 24. Hosted on @kaggle
A set of vim, zsh, git, and tmux configuration files.
GTSAM is a library of C++ classes that implement smoothing and mapping (SAM) in robotics and vision, using factor graphs and Bayes networks as the underlying computing paradigm rather than sparse matrices.
Learning and feeling SLAM together with hands-on experiments.
An Open-source Package for GNSS Positioning and Real-time Kinematic Using Factor Graph Optimization
Python sample code and textbook for robotics algorithms.
Deep Reinforcement Learning code for study. Currently includes implementations of DQN, C51, QR-DQN, IQN, and QUOTA.
A static analyzer for Java, C, C++, and Objective-C
PyTorch implementation of the Offline Reinforcement Learning algorithm CQL. Includes the versions DQN-CQL and SAC-CQL for discrete and continuous action spaces.
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
Assignments for Berkeley CS 285: Deep Reinforcement Learning (Fall 2021)
A toolkit for developing and comparing reinforcement learning algorithms.
Fast and memory-efficient exact attention
deepspeedai / Megatron-DeepSpeed
Forked from NVIDIA/Megatron-LM. Ongoing research training transformer language models at scale, including BERT & GPT-2.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
OpenChat: Easy-to-use open-source chatting framework via neural networks.
PyTorch package for the discrete VAE used for DALL·E.
FastFormers - highly efficient transformer models for NLU
This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference"
Official PyTorch implementation of StyleGAN3
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
Code for the paper "Language Models are Unsupervised Multitask Learners"