Generation of software test scenarios using a RAG system with a local LLM (llama.cpp)
Updated Dec 25, 2024 · Jupyter Notebook
Qwen2.5-Coder: a family of LLMs that excels at coding, debugging, and related tasks
Another attempt at an LLM-driven voice assistant, with improved transcription and TTS
CI/CD question-answering chatbot built on RAG (Retrieval-Augmented Generation) using Streamlit
This project hosts a LLaMA 3.1 model via llama.cpp on RunPod's serverless platform using Docker. It provides a Python 3.11 environment with CUDA 12.2, enabling scalable AI request processing through configurable payload options and GPU support.
Stay focused with ai-warden
Hashtags recommendation based on item titles using LLaMA
llm-chat: interactive chat with an LLM
Collection of helpful scripts for working with GGML models
Prebuilt llama-cli binary for Raspberry Pi Zero 2 W