
My name is Xiangtai. My research focuses on computer vision, deep learning, and multi-modal models.

Currently, I am working as a Research Scientist at ByteDance (TikTok) in Singapore.

I was a research fellow at MMLab@NTU, supervised by Prof. Chen Change Loy. Before that, I worked as a research scientist/associate at several places, including JD, SenseTime, and Shanghai AI Laboratory. I obtained my Ph.D. from Peking University.

My published works are listed on my homepage, and I am open to discussing potential remote research collaborations. Please feel free to email me at xiangtai94@gmail.com.

I love coding and building universal, larger, and more efficient models for a better world!

Moreover, most of my works, including those I have contributed to substantially, are open-sourced on GitHub, in both personal and ByteDance repositories.

Pinned repositories:

1. **OMG-Seg**: OMG-LLaVA and OMG-Seg codebase [CVPR-24 and NeurIPS-24] (Python, 1.3k stars)
2. **HarborYuan/ovsam**: [ECCV 2024] The official code of the paper "Open-Vocabulary SAM" (Python, 1k stars)
3. **bytedance/Sa2VA**: 🔥 Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos (Python, 1.3k stars)
4. **chongzhou96/EdgeSAM**: Official PyTorch implementation of "EdgeSAM: Prompt-In-the-Loop Distillation for On-Device Deployment of SAM" (Jupyter Notebook, 1.1k stars)
5. **DenseWorld-1M**: Code and dataset link for "DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World"
6. **Awesome-Segmentation-With-Transformer**: [T-PAMI-2024] Transformer-Based Visual Segmentation: A Survey