[CVPR 2021] Multi-Modal-CelebA-HQ: A Large-Scale Text-Driven Face Generation and Understanding Dataset
muon is a multimodal omics Python framework
Unified storage framework for the entire machine learning lifecycle
Interface for easier topic modelling.
Frontiers in Intelligent Colonoscopy [ColonSurvey | ColonINST | ColonGPT]
Code for selecting an action based on multimodal inputs; in this case, the inputs are voice and text.
[IEEE TGRS 2022] Unsupervised Multimodal Change Detection Based on Structural Relationship Graph Representation Learning
Collects a multimodal dataset of Wikipedia articles and their images
A fully differentiable set autoencoder
A multimodal architecture to build multimodal knowledge graphs with flexible multimodal feature extraction and dynamic multimodal concept generation
Prompt Engineering and Dev-Ops toolkit for applications powered by Language Models
Tumor2Graph: a novel Overall-Tumor-Profile-derived virtual graph deep learning method for predicting tumor typing and subtyping.
Project for the courses of Natural Interaction and Affective Computing, University of Milan, M.Sc. in Computer Science, A.Y. 2022/2023. Predicting pain given a multi-modal dataset.
SOTA classification at scale for UAVs, drones, and more
Predicting the multi-trajectory evolution of multimodal brain connectivity.
Streamlit App Combining Vision, Language, and Audio AI Models
A Streamlit-based AI assistant that generates custom Streamlit app code from user-provided images or text using the Google Gemini model.
Multimodal Agentic GenAI Workflow – Seamlessly blends retrieval and generation for intelligent storytelling
Reducing neonatal and under-5 mortality rates via an AI-driven awareness platform with a Gradio app, Gemini API integration, and essential project utilities. #AIForGood
FAVSeq is a machine-learning pipeline for identifying factors that affect the differences between bulk and scRNA-Seq experiments.