Robot VLM and VLA (Vision-Language-Action) inference API helping you manage multimodal prompts, RAG, and location metadata
Updated Oct 6, 2025 · Rust
🛡️ A deterministic safety layer for VLA-driven robot arms. Pure Rust core, ~26μs full pipeline.