Konan Tech
- Seongnam, Republic of Korea
- https://momozzing.github.io
Starred repositories
nanorlhf: from-scratch journey into how LLMs and RLHF really work.
The official repo for "Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting", ACL 2025.
Embedding Atlas is a tool that provides interactive visualizations for large embeddings. It allows you to visualize, cross-filter, and search embeddings and metadata.
OpenHands: Code Less, Make More
The fast, Pythonic way to build MCP servers and clients
Expose your FastAPI endpoints as Model Context Protocol (MCP) tools, with Auth!
The official Python SDK for Model Context Protocol servers and clients
An encyclopedia of CS fundamentals & technical interview questions for junior developers
Transforms complex documents like PDFs into LLM-ready markdown/JSON for your Agentic workflows.
Get your documents ready for gen AI
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights i…
Efficient Triton Kernels for LLM Training
Chat with your database or your datalake (SQL, CSV, parquet). PandasAI makes data analysis conversational using LLMs and RAG.
[ICML 2025 Oral] CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction
Image to prompt with BLIP and CLIP
Stable Diffusion web UI
Awesome things about LLM-powered agents. Papers / Repos / Blogs / ...
KURE: an embedding model specialized for Korean retrieval, developed by Korea University
Korean SAT leaderboard
A blazing fast inference solution for text embeddings models
A high-throughput and memory-efficient inference and serving engine for LLMs
Qwen3-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud.
A Korean tutorial based on the official LangChain documentation, cookbook, and other practical examples. Through this tutorial, you can learn how to use LangChain more easily and effectively.
[NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
Banishing LLM Hallucinations Requires Rethinking Generalization
A multi-domain reasoning benchmark for Korean language models
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
[ACL 2024] SALAD benchmark & MD-Judge