- University of Arkansas
- Fayetteville, AR
- akomand.github.io
Stars
Code for the paper Open-Vocabulary Attention Maps with Token Optimization for Semantic Segmentation in Diffusion Models @ CVPR 2024
Unofficial implementation of "Prompt-to-Prompt Image Editing with Cross Attention Control" with Stable Diffusion
Neural Causal Model (NCM) implementation by the authors of The Causal Neural Connection.
[ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning
Official implementation of our EMNLP 2022 paper "CPL: Counterfactual Prompt Learning for Vision and Language Models"
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
An open source implementation of CLIP.
Official implementation of the paper "Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models"
A collection of resources on controllable generation with text-to-image diffusion models.
[ACM MM 2022] Towards Counterfactual Image Manipulation via CLIP
The Official PyTorch Implementation of "LSGM: Score-based Generative Modeling in Latent Space" (NeurIPS 2021)
High-Resolution Image Synthesis with Latent Diffusion Models
A collection of resources and papers on Diffusion Models
PyTorch implementation of normalizing flow models
Normalizing flows in PyTorch. Currently intended for education, not production.
Eastern European Machine Learning Summer School (EEML) Workshop Series 2022. Tutorial on Causality for the Serbian Machine Learning Workshop on Deep Learning and Reinforcement Learning.
✨✨Latest Advances on Multimodal Large Language Models
Code repository of the paper "CITRIS: Causal Identifiability from Temporal Intervened Sequences" and "iCITRIS: Causal Representation Learning for Instantaneous Temporal Effects"
Release for Improved Denoising Diffusion Probabilistic Models
[ICML 2023] The official implementation of the paper "TabDDPM: Modelling Tabular Data with Diffusion Models"
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.