Stars
Code for the 2025 ACL publication "ConInstruction: Universal Jailbreaking of Multimodal Large Language Models via Non-Textual Modalities"
Code associated with the EMNLP 2024 Main paper: "Image, tell me your story! Predicting the original meta-context of visual misinformation."
[TMLR] A curated list of language modeling research for code (and other software engineering activities), plus related datasets.
Must-read Papers on Knowledge Editing for Large Language Models.
A resource repository for representation engineering in large language models
Paper list of misinformation research using (multi-modal) large language models, i.e., (M)LLMs.
Examples of RAG using Llamaindex with local LLMs - Gemma, Mixtral 8x7B, Llama 2, Mistral 7B, Orca 2, Phi-2, Neural 7B
jxzhangjhu / awesome-LMM-Hallucination
Forked from xieyuquanxx/awesome-Large-MultiModal-Hallucination: list of papers on Hallucination in LMMs
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"
[TMLR 2024] Efficient Large Language Models: A Survey
An advanced Twitter scraping & OSINT tool written in Python that doesn't use Twitter's API, allowing you to scrape a user's followers, following, Tweets and more while evading most API limitations.
[ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models
🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation.
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
Awesome resources for in-context learning and prompt engineering: mastery of LLMs such as ChatGPT, GPT-3, and FlanT5, with up-to-date and cutting-edge updates. - Professor Yu Liu
Awesome-LLM-Robustness: a curated list of Uncertainty, Reliability and Robustness in Large Language Models
Awesome-LLM: a curated list of Large Language Models
AISystem mainly refers to AI systems, covering full-stack low-level AI technologies such as AI chips, AI compilers, and AI inference and training frameworks
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Papers & works on large language models (OpenAI GPT-4, Meta Llama, etc.).
A collection of resources and papers on Diffusion Models
A transaction processor for a hypothetical, general-purpose, central bank digital currency
A simple network quantization demo using pytorch from scratch.
a sampling from ImageNet: 5 examples for each of 200 classes
Compression primitives for uplink compression in Federated Learning that are compatible with Secure Aggregation.