- SoftBank
- Tokyo (UTC +09:00)
- https://www.sbintuitions.co.jp/
- in/lxaw
- https://lxaw.github.io/index.html
Stars
Real-time global intelligence dashboard. AI-powered news aggregation, geopolitical monitoring, and infrastructure tracking in a unified situational awareness interface.
[ICLRW'26] EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation
[NeurIPS 2025] R-KV: Redundancy-aware KV Cache Compression for Reasoning Models
Browser extension to help you stay focused on YouTube. 15,000+ users. Supports Chrome, Firefox, Brave, and Edge.
Anki plugin to expose a remote API for creating flash cards.
Scalable toolkit for efficient model reinforcement
Code for an accepted paper at ICLR 2026
Why Diffusion Language Models Struggle with Truly Parallel (Non-Autoregressive) Decoding?
Thorsten-Voice: a free-to-use, offline, high-quality German TTS voice that should be available to every project without any licensing hassle.
German Tacotron 2 and Multi-band MelGAN in TensorFlow with TF Lite inference support
DeepEP: an efficient expert-parallel communication library
A framework for the evaluation of autoregressive code generation language models.
SGLang is a fast serving framework for large language models and vision language models.
A project to improve skills of large language models
DFlash: Block Diffusion for Flash Speculative Decoding
ToolOrchestra is an end-to-end RL training framework for orchestrating tools and agentic workflows.
Utilities for decoding deep representations (like sentence embeddings) back to text
Code for NeurIPS 2024 paper - The GAN is dead; long live the GAN! A Modern Baseline GAN - by Huang et al.
Official JAX implementation of End-to-End Test-Time Training for Long Context
[ICLR 2026] Official PyTorch Implementation of RLP: Reinforcement as a Pretraining Objective
dInfer: An Efficient Inference Framework for Diffusion Language Models
GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 training.
CANDI: Continuous and Discrete Diffusion
🧀 PyTorch code for the Fromage optimiser.
Code for 'LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders'
Development repository for ShogiHome, a feature-rich shogi GUI that runs on PC
[ICLR 2025] BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval
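One of the starred plugins above exposes a remote API for creating flash cards. As a minimal sketch, assuming an AnkiConnect-style JSON-over-HTTP interface (the action name, note model, and field names below are assumptions for illustration, not this plugin's documented API), a client might build a request payload like this:

```python
import json

def build_add_note_request(deck: str, front: str, back: str) -> dict:
    """Build a JSON-serializable payload for an assumed AnkiConnect-style
    'addNote' action, using a hypothetical 'Basic' note type with
    Front/Back fields. None of these names come from the plugin itself."""
    return {
        "action": "addNote",
        "version": 6,
        "params": {
            "note": {
                "deckName": deck,
                "modelName": "Basic",
                "fields": {"Front": front, "Back": back},
            }
        },
    }

payload = build_add_note_request("Japanese", "犬", "dog")
# Confirm the payload survives a JSON round trip before POSTing it
# to whatever local port the plugin listens on.
assert json.loads(json.dumps(payload))["action"] == "addNote"
```

Keeping payload construction separate from the HTTP call makes the request easy to unit-test without a running Anki instance.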