- Wayne State University
- Detroit, MI
- https://qiangyao1988.github.io/
- @YaoQiang7
Stars
LLMs interview notes and answers: a repository that mainly collects interview questions and reference answers for large language model (LLM) algorithm engineer roles
《Hello 算法》: an animated, illustrated data structures and algorithms tutorial with one-click runnable code. Supports Python, Java, C++, C, C#, JS, Go, Swift, Rust, Ruby, Kotlin, TS, and Dart. The Simplified and Traditional Chinese editions are updated in sync; the English version is in translation
A robustness analysis of FAN ViT models using standard adversarial attacks. This repository is the official implementation of the paper at https://arxiv.org/abs/2306.05495
Official code of "Discover and Mitigate Unknown Biases with Debiasing Alternate Networks" (ECCV 2022)
Source code of NeurIPS 2022 accepted paper "AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning"
[ICLR 2021 top 3%] Is Attention Better Than Matrix Decomposition?
PyTorch implementation of image classification models for CIFAR-10/CIFAR-100/MNIST/FashionMNIST/Kuzushiji-MNIST/ImageNet
1000 images, one per ImageNet class, for easy visualization/exploration of classes.
[CVPR 2022] Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers
Let's train vision transformers (ViT) on CIFAR-10 / CIFAR-100!
[MedIA Journal] A comprehensive paper list of Vision Transformer/Attention work, including papers, code, and related websites
A simple approach to enable dense segmentation with ViT.
Probing the representations of Vision Transformers.
PyTorch implementation of Google AI's 2018 BERT
Code for CVPR 2022 paper "Multi-Class Token Transformer for Weakly Supervised Semantic Segmentation"
Official PyTorch implementation of the NeurIPS 2022 paper TokenMixup
Codes for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", published at ICML 2020
[CVPR-22] This is the official implementation of the paper "AdaViT: Adaptive Vision Transformers for Efficient Image Recognition".
[ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, Prashant Khanduri, Mingyi Hong, Shiyu Chang, and Sijia Liu
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch (a minimal usage sketch appears after this list)
Library covering the vanilla Transformer, ViT, DeiT, BERT, and GPT
The largest collection of PyTorch image encoders / backbones (a minimal loading sketch also appears after this list). Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
This is the official implementation for the paper "Learning to Scaffold: Optimizing Model Explanations for Teaching"
Code for the ICLR 2022 paper "Attention-based interpretability with Concept Transformers"
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
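As a concrete illustration of the vit-pytorch entry above, here is a minimal usage sketch based on that repository's public README; the hyperparameters below are illustrative choices, not library defaults.

```python
import torch
from vit_pytorch import ViT  # pip install vit-pytorch

# Illustrative hyperparameters, not the library's defaults.
v = ViT(
    image_size=256,    # input resolution (256x256)
    patch_size=32,     # 8x8 = 64 patches per image
    num_classes=1000,  # classifier head size
    dim=1024,          # token embedding width
    depth=6,           # number of transformer blocks
    heads=16,          # attention heads per block
    mlp_dim=2048,      # hidden width of each feed-forward layer
)

img = torch.randn(1, 3, 256, 256)  # dummy RGB batch
preds = v(img)                     # (1, 1000) class logits
print(preds.shape)
```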
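And a sketch of pulling a pretrained ViT backbone from the timm collection above; `vit_base_patch16_224` is one illustrative model name among many (`timm.list_models('vit*')` enumerates the rest).

```python
import torch
import timm  # pip install timm

# 'vit_base_patch16_224' is an illustrative choice; pretrained=True downloads weights.
model = timm.create_model('vit_base_patch16_224', pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)  # dummy input at the model's expected resolution
with torch.no_grad():
    logits = model(x)                   # (1, 1000) ImageNet class logits
    tokens = model.forward_features(x)  # patch-token features before the classifier head
print(logits.shape, tokens.shape)
```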