Use HuggingFace's Candle with Go.
ChatFlameBackend is a backend for chat applications built on the Candle framework, with a focus on the Mistral model.
An extension library for Candle that provides PyTorch functions not currently available in Candle.
An unofficial implementation of BitNet.
A lightweight Rust application to test interaction with large language models. Currently supports running GGUF quantized models with hardware acceleration.
A Rust microservice for generating sentence embeddings over gRPC and HTTP.
A full-stack chatbot built in Rust with Candle, Leptos, Actix, Tokio, and Tailwind. Uses quantized Mistral 7B Instruct v0.1 GGUF models.
Companion code for the bilibili video series "Introduction to CUDA 12.x Parallel Programming (Rust Edition)".
Token classification for BERT-like models via Candle.
An LLM chat interface implemented in pure Rust using HuggingFace Candle, with Axum WebSockets, an SQLite database, and a Leptos (WASM) frontend packaged with Tauri.
Real-time object detection on a camera feed with React.js, WASM, Rust, Candle ML, and YOLO.
Deploy machine learning models (CV, NLP, ...) locally with Candle, for fun.
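Most of these projects build on the candle-core crate. As a minimal, illustrative sketch (mirroring the basic tensor example from the Candle README, and assuming `candle-core` has been added as a dependency, e.g. via `cargo add candle-core`), the following Rust program creates two random tensors on the CPU and multiplies them:

```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    // Run on the CPU; a CUDA-enabled build could use Device::new_cuda(0) instead.
    let device = Device::Cpu;

    // Two random f32 tensors, 2x3 and 3x4, drawn from N(0, 1).
    let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;

    // Matrix multiply to get a 2x4 result, then print it.
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```

The same `Device`/`Tensor` building blocks underpin the larger projects above, which layer model loading (e.g. quantized GGUF weights) and serving frameworks on top.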