- Pittsburgh, PA
- yewon-kim.com
- @haiyewon
Stars
Songs created with Strudel, the live coding music tool
Live coding RAVE real-time models using TidalCycles and SuperDirt
Official implementation of QuRe: Query-Relevant Retrieval through Hard Negative Sampling in Composed Image Retrieval (ICML 2025)
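As a rough illustration only (not necessarily QuRe's exact objective), composed image retrieval with hard negatives is typically trained with a contrastive loss in which mined hard negatives compete with the positive target. A generic PyTorch sketch, with all tensor names hypothetical:

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_hard_negatives(query_emb, pos_emb, hard_neg_emb, tau=0.07):
    """Generic InfoNCE-style loss with mined hard negatives (illustrative only;
    not necessarily QuRe's formulation).
    query_emb:    (B, D) fused query (reference image + modification text)
    pos_emb:      (B, D) target image embeddings
    hard_neg_emb: (B, K, D) hard-negative image embeddings per query
    """
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    n = F.normalize(hard_neg_emb, dim=-1)

    pos_logits = (q * p).sum(-1, keepdim=True) / tau        # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", q, n) / tau     # (B, K)
    logits = torch.cat([pos_logits, neg_logits], dim=-1)    # positive is class 0
    labels = torch.zeros(len(q), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```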
A MIDI-based real-time piano-roll note visualization library in JavaScript
@dharasim: The iRealPro jazz chord sequences, including tree analysis
Magenta Studio is a collection of music plugins built on Magenta’s open source tools and models
Official Implementation of Amuse: Human-AI Collaborative Songwriting with Multimodal Inspirations
XMIDI Dataset: A large-scale symbolic music dataset with emotion and genre labels.
[ICLR'25 Oral] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think
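The one-line summary hides the mechanism: as described in the paper, REPA adds a regularization term that aligns intermediate diffusion-transformer features with features from a frozen pretrained visual encoder (e.g. DINOv2). A schematic of such an alignment term, with all names hypothetical:

```python
import torch
import torch.nn.functional as F

def alignment_loss(dit_hidden, encoder_feats, projector):
    """Schematic representation-alignment term (names are hypothetical):
    project the diffusion transformer's patch features and maximize their
    cosine similarity with patch features from a frozen pretrained encoder."""
    proj = F.normalize(projector(dit_hidden), dim=-1)      # (B, N, D_enc)
    target = F.normalize(encoder_feats.detach(), dim=-1)   # (B, N, D_enc), frozen
    return -(proj * target).sum(dim=-1).mean()

# Toy usage with random tensors standing in for real features.
B, N, D_dit, D_enc = 2, 16, 384, 768
projector = torch.nn.Linear(D_dit, D_enc)
loss = alignment_loss(torch.randn(B, N, D_dit), torch.randn(B, N, D_enc), projector)
```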
A Virtual Musical Partner for Creative Brainstorming using MASOM
🎷 Artificial Composition of Multi-Instrumental Polyphonic Music
A lightweight yet powerful audio-to-MIDI converter with pitch bend detection
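This description matches the tagline of Spotify's basic-pitch; assuming that is the repo in question, a minimal transcription call (with a placeholder audio path) looks roughly like:

```python
# Assumes Spotify's basic-pitch (pip install basic-pitch); "song.wav" is a placeholder.
from basic_pitch.inference import predict

model_output, midi_data, note_events = predict("song.wav")

# midi_data is a pretty_midi.PrettyMIDI object; save it for use in a DAW.
midi_data.write("song.mid")

# note_events holds (start_s, end_s, midi_pitch, amplitude, pitch_bends) tuples.
for start, end, pitch, amplitude, bends in note_events[:5]:
    print(f"pitch {pitch}: {start:.2f}s -> {end:.2f}s")
```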
A chord identifier and harmonizer for MIDI files
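The repo's own API isn't shown here; as a generic illustration of chord identification from a MIDI file, a sketch using music21 (a different library, swapped in for demonstration) could be:

```python
# Generic chord identification from MIDI using music21 (not this repo's API).
from music21 import converter

score = converter.parse("song.mid")        # placeholder MIDI path
chords = score.chordify()                  # collapse parts into chord objects

for c in chords.recurse().getElementsByClass("Chord"):
    print(c.offset, c.pitchedCommonName)   # e.g. "C-major triad"
```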
Use an API to call suno.ai's music generation AI and easily integrate it into agents such as GPTs.
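Assuming this is one of the unofficial suno.ai API wrappers that expose a local REST endpoint, a call from Python might look like the sketch below; the base URL, route, and field names are assumptions and will differ between wrappers, so check the project's README.

```python
# Hypothetical call to a locally hosted, unofficial suno.ai API wrapper.
# The URL, route, and JSON fields are assumptions, not a documented interface.
import requests

resp = requests.post(
    "http://localhost:3000/api/generate",           # assumed local endpoint
    json={"prompt": "dreamy synthwave about rain",  # assumed request schema
          "make_instrumental": False,
          "wait_audio": True},
    timeout=300,
)
resp.raise_for_status()
for clip in resp.json():                            # assumed response: list of clips
    print(clip.get("id"), clip.get("audio_url"))
```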
Phoneme tokenizer and grapheme-to-phoneme model for 8k languages
A simple notebook demonstrating prompt-based music generation via Mubert API
Stable diffusion for real-time music generation
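Since this is Riffusion-style generation (Stable Diffusion over spectrogram images), a hedged sketch using the publicly available riffusion/riffusion-model-v1 checkpoint with diffusers could be as follows; turning the spectrogram image back into audio requires a separate mel-inversion step not shown here.

```python
# Sketch: generate a spectrogram image with a Stable Diffusion checkpoint trained
# on spectrograms ("riffusion/riffusion-model-v1" on the Hugging Face Hub).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1", torch_dtype=torch.float16
).to("cuda")

image = pipe("funky jazz saxophone solo", num_inference_steps=30).images[0]
image.save("spectrogram.png")
```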
Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable…
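For MusicGen specifically, Audiocraft's documented usage is roughly the following; the model size, prompt, and duration are just examples.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)                  # seconds of audio
wav = model.generate(["lo-fi piano with soft drums"])    # one prompt -> one clip

for idx, one_wav in enumerate(wav):
    # Writes musicgen_0.wav with loudness normalization.
    audio_write(f"musicgen_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```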
A latent diffusion model for text-to-music generation.
Audio generation using diffusion models, in PyTorch.
Source code for "FIGARO: Generating Symbolic Music with Fine-Grained Artistic Control"
Symbolic Music Generation with Diffusion Models