- Jülich Supercomputing Center (JSC), Forschungszentrum Jülich GmbH, LAION
- Germany
- https://mehdidc.github.io
- @mehdidc
Stars
- A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future …
- An Open Phone Agent Model & Framework. Unlocking the AI Phone for Everyone
- [NeurIPS 2025] Official Implementation of DINO-Foresight: Looking into the Future with DINO
- Code and Pretrained Models for ICLR 2023 Paper "Contrastive Audio-Visual Masked Autoencoder".
- 🎨 NeMo Data Designer: A general library for generating high-quality synthetic data from scratch or based on seed data.
- Plotting heatmaps with the self-attention of the [CLS] tokens in the last layer.
- [ICLR 2024] Official repository for "Vision-by-Language for Training-Free Compositional Image Retrieval"
- Fara-7B: An Efficient Agentic Model for Computer Use
- Official code release for "SuperBPE: Space Travel for Language Models"
- Reference PyTorch implementation and models for DINOv3
- DeepEP: an efficient expert-parallel communication library
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS25]
- MixtureVitae: A Permissive, High-Performance, Open-Access Pretraining Dataset
- Kandinsky 5.0: A family of diffusion models for Video & Image generation
- Manipulate audio with a simple and easy high level interface
- Code for Machine Learning for Algorithmic Trading, 2nd edition.
- Official Implementation of "MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation"
- Fully automatic censorship removal for language models
- The most open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints.
- Cosmos-Predict2.5, the latest version of the Cosmos World Foundation Models (WFMs) family, specialized for simulating and predicting the future state of the world in the form of video.
- Open-source framework for the research and development of foundation models.
- An experimental implementation of compiler-driven automatic sharding of models across a given device mesh.