muon is a multimodal omics Python framework
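muon (from the scverse ecosystem) stores each omics modality as its own AnnData object inside a single MuData container. A minimal sketch, assuming the `muon` and `anndata` packages are installed; the modality names and toy random data below are illustrative, not taken from the project:

```python
import numpy as np
import anndata as ad
import muon as mu

# Toy example: 100 cells measured in two modalities with different feature counts.
rna = ad.AnnData(np.random.poisson(1.0, size=(100, 500)).astype(np.float32))
atac = ad.AnnData(np.random.poisson(0.5, size=(100, 2000)).astype(np.float32))

# MuData wraps the per-modality AnnData objects in one multimodal container.
mdata = mu.MuData({"rna": rna, "atac": atac})
print(mdata)                # summary of modalities and shared observations
print(mdata["rna"].shape)   # individual modalities are accessed like dict entries
```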
[CVPR 2021] Multi-Modal-CelebA-HQ: A Large-Scale Text-Driven Face Generation and Understanding Dataset
Interface for easier topic modelling.
Code for selecting an action based on multimodal inputs; in this case the inputs are voice and text.
Unified storage framework for the entire machine learning lifecycle
Frontiers in Intelligent Colonoscopy [ColonSurvey | ColonINST | ColonGPT]
A Streamlit-based AI assistant that generates custom Streamlit app code from user-provided images or text using the Google Gemini model.
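As a sketch of how such an assistant is typically wired together (not the repository's actual code): a Streamlit front end collects a text prompt and an optional image, and the `google-generativeai` client sends both to Gemini. The model name, secrets key, and prompt wording below are assumptions.

```python
import streamlit as st
from PIL import Image
import google.generativeai as genai

# Hypothetical secrets key; in practice the key would live in .streamlit/secrets.toml.
genai.configure(api_key=st.secrets["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

st.title("Streamlit app generator")
prompt = st.text_area("Describe the app you want")
upload = st.file_uploader("Or upload a UI mockup", type=["png", "jpg", "jpeg"])

if st.button("Generate code") and (prompt or upload):
    parts = ["Write a complete, runnable Streamlit app for this request:"]
    if prompt:
        parts.append(prompt)
    if upload is not None:
        parts.append(Image.open(upload))  # Gemini accepts PIL images alongside text parts
    response = model.generate_content(parts)
    st.code(response.text, language="python")  # show the generated app code
```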
A fully differentiable set autoencoder
Collects a multimodal dataset of Wikipedia articles and their images
Multimodal and Multilingual Georeferencing and News Retrieval
ThalamusDB: semantic query processing on multimodal data
Reducing neonatal and under-5 mortality rates via an AI-driven awareness platform with a Gradio app, Gemini API integration, and essential project utilities. #AIForGood
[IEEE TGRS 2022] Unsupervised Multimodal Change Detection Based on Structural Relationship Graph Representation Learning
Prompt Engineering and Dev-Ops toolkit for applications powered by Language Models
Tumor2Graph: a novel Overall-Tumor-Profile-derived virtual graph deep learning method for predicting tumor typing and subtyping.
SOTA classification at scale for UAVs, drones, and much more
Python scripts and assets related to the Multimodal-Wireless dataset. The dataset can be found at
FAVSeq is a machine learning-based pipeline for identifying factors affecting the difference between bulk and scRNA-Seq experiments.