Starred repositories
🙃 A delightful community-driven (with 2,400+ contributors) framework for managing your zsh configuration. Includes 300+ optional plugins (rails, git, macOS, hub, docker, homebrew, node, php, python…)
LlamaIndex is the leading framework for building LLM-powered agents over your data.
🤗 The largest hub of ready-to-use datasets for AI models, with fast, easy-to-use and efficient data manipulation tools
This repository is a collection of existing KGQA datasets in the form of the 🤗 huggingface datasets library (https://github.com/huggingface/datasets), aiming to provide easy-to-use access to them.
[ICLR2024] Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources
A project that uses LangChain to construct SPARQL queries against SPARQL endpoints
OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning.
Multilingual Generative Pretrained Model
The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)
🦜🔗 The platform for reliable agents.
Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022
Solutions to Reinforcement Learning: An Introduction
📋 A list of open LLMs available for commercial use.
[ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning"
A template for easily extending fairseq
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Official implementation for "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned and more will be updated)
Assignment for NLP2 on probing language models
Repository to collect material that is useful for the KRR course at the UvA
Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral)
Summaries of courses in the AI Master programme at UvA
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
MIT Deep Learning Book in PDF format (complete and parts) by Ian Goodfellow, Yoshua Bengio and Aaron Courville
Scripts/Notebooks used for my articles published on Medium