Seoul National University
Seoul, Republic of Korea
in/hahyeon-choi
https://hahyeon610.github.io/
Stars
Code release for UNCHA: UNcertainty-guided Compositional Hyperbolic Alignment (CVPR 2026 Highlight)
Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model
Code for the paper "How does Transformer Learn Implicit Reasoning?"
Code for "Advancing Expert Specialization for Better MoE" (NeurIPS 2025 Oral)
Cosmos-Predict1 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world models for downstream applications.
PyTorch implementation of JiT https://arxiv.org/abs/2511.13720
Unofficial PyTorch implementation of the paper "Mean Flows for One-step Generative Modeling" by Geng et al.
Code for ICLR 2025 Paper "Projection Head is Secretly an Information Bottleneck"
Implementation of Multi-View Information Bottleneck
[ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer"
[NeurIPS 2024 Spotlight] Code for the paper "Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts"
Astrofy is a free and open-source template for your personal portfolio website, built with Astro and TailwindCSS. Create a website with a blog, CV, project section, store, and RSS feed in minutes.
A beautiful, simple, clean, and responsive Jekyll theme for academics
Implementation of "the first large-scale multimodal mixture of experts models," from the paper "Multimodal Contrastive Learning with LIMoE: The Language-Image Mixture of Experts"
Keras implementation of Representation Learning with Contrastive Predictive Coding
[NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning
Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization"
[NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy