Starred repositories
A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone
Tools for merging pretrained large language models.
Codebase for Merging Language Models (ICML 2024)
AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024.
Official repository of Evolutionary Optimization of Model Merging Recipes
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
OpenMMLab Semantic Segmentation Toolbox and Benchmark.
This article, originally written by the well-known hacker Eric S. Raymond, teaches you how to ask technical questions properly and get answers you will be satisfied with.
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
[ICLR 2025] The First Multimodal Search Engine Pipeline and Benchmark for LMMs
A paper list of recent works on token compression for ViT and VLM
PyTorch code and models for the DINOv2 self-supervised learning method.
[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
Exercise solutions for machine learning methods; read online at: https://datawhalechina.github.io/statistical-learning-method-solutions-manual
An introductory tutorial on recommender systems; read online at: https://datawhalechina.github.io/fun-rec/
A curated list of awesome search engines useful during Penetration testing, Vulnerability assessments, Red/Blue Team operations, Bug Bounty and more
🔍 An LLM-based Multi-agent Framework of Web Search Engine (like Perplexity.ai Pro and SearchGPT)
LLMs interview notes and answers: a repository of interview questions and reference answers for large language model (LLM) algorithm engineers
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
[CVPR 2024] Official implementation of "Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation"
Fast and memory-efficient exact attention
"A Practical Guide to Open-Source LLMs": tutorials tailored for Chinese beginners on quickly fine-tuning (full-parameter/LoRA) and deploying open-source LLMs and multimodal LLMs (MLLMs) in a Linux environment
Ongoing research training transformer models at scale