🛡️ Discover essential tools and resources that leverage AI for enhancing cybersecurity, focusing on modern technologies and their applications in security operations.
🌐 Explore and manage free models on OpenRouter effortlessly with our web app, featuring browsing, filtering, and multi-language support.
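For context, a minimal sketch of how such a browser could pull the model list from OpenRouter's public `/api/v1/models` endpoint; the `:free` id-suffix filter is an assumption about how free variants are labeled, not taken from this app's code:

```python
# Hedged sketch: list models from OpenRouter's public endpoint and keep the free ones.
import requests

resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
models = resp.json()["data"]

# Assumption: free variants carry a ":free" suffix in their model id
free = [m["id"] for m in models if m["id"].endswith(":free")]
print(f"{len(free)} free models found")
for model_id in sorted(free)[:10]:
    print(model_id)
```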
Plux: an AI-powered file tree that lets you grab files with one click and save insights in a built-in notepad. Reduces copy-paste friction and boosts productivity. 🐙
Comprehensive guide to FastAPI, Pydantic, and SQLAlchemy for AI engineers. Learn API design, validation, and ORM workflows with practical examples and setup 🐙
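As a taste of what the guide covers, here is a minimal sketch of the three pieces working together, assuming SQLAlchemy 2.0 and Pydantic v2; the `Item` model and route are illustrative, not drawn from the guide itself:

```python
# Minimal FastAPI + Pydantic + SQLAlchemy sketch: ORM table, validation schemas, one route.
from fastapi import FastAPI, Depends
from pydantic import BaseModel
from sqlalchemy import create_engine, Integer, String
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, Session, sessionmaker

engine = create_engine("sqlite:///./demo.db", connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(bind=engine)

class Base(DeclarativeBase):
    pass

class Item(Base):                       # ORM table definition
    __tablename__ = "items"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(100))

Base.metadata.create_all(engine)

class ItemIn(BaseModel):                # request validation schema
    name: str

class ItemOut(BaseModel):               # response schema, built from ORM attributes
    id: int
    name: str
    model_config = {"from_attributes": True}

app = FastAPI()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.post("/items", response_model=ItemOut)
def create_item(payload: ItemIn, db: Session = Depends(get_db)):
    item = Item(name=payload.name)
    db.add(item)
    db.commit()
    db.refresh(item)
    return item
```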
A set of scripts to generate full attention-head heatmaps for transformer-based LLMs
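A hedged sketch of the underlying technique using Hugging Face `transformers` and matplotlib; the repo's own scripts may differ in model choice and plotting details:

```python
# Extract per-head attention weights and render one head as a heatmap.
import torch
import matplotlib.pyplot as plt
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # assumption: any model that returns attentions works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_attentions=True)

inputs = tokenizer("Attention heatmaps visualize token-to-token weights.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shaped (batch, num_heads, seq_len, seq_len)
layer, head = 0, 0
attn = outputs.attentions[layer][0, head].numpy()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

plt.imshow(attn, cmap="viridis")
plt.xticks(range(len(tokens)), tokens, rotation=90)
plt.yticks(range(len(tokens)), tokens)
plt.title(f"Layer {layer}, head {head}")
plt.colorbar()
plt.savefig("attention_head.png", bbox_inches="tight")
```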
Turn your company's scattered knowledge into AI-ready Books ✨
🖥️ Run large language models locally with PasLLM, a pure Object Pascal engine optimized for efficient quantization and versatile architecture support.
🚀 Detect anomalies in structured datasets with this AI-driven ETL pipeline, ensuring data quality through seamless ingestion and machine learning insights.
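An illustrative sketch of the extract → transform → detect flow such a pipeline implies, using pandas and scikit-learn's `IsolationForest`; the file path, column handling, and contamination rate are hypothetical:

```python
# Illustrative ETL + anomaly-detection sketch for a structured dataset.
import pandas as pd
from sklearn.ensemble import IsolationForest

def run_pipeline(csv_path: str) -> pd.DataFrame:
    # Extract: load the structured dataset
    df = pd.read_csv(csv_path)

    # Transform: keep numeric features and drop rows with missing values
    features = df.select_dtypes(include="number").dropna().copy()

    # Detect: unsupervised anomaly scoring (-1 = anomaly, 1 = normal)
    model = IsolationForest(contamination=0.01, random_state=42)
    features["anomaly"] = model.fit_predict(features)

    # Load: return flagged rows for review or downstream storage
    return features[features["anomaly"] == -1]

# anomalies = run_pipeline("transactions.csv")  # hypothetical input file
```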
🩺 Build a hybrid AI expert system for accurate medical diagnosis using rule-based logic, retrieval-augmented knowledge, and large language model reasoning.
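A structural sketch of the rule → retrieval → LLM escalation described above; the rules, knowledge snippets, and `llm_complete()` stub are all hypothetical placeholders, not the project's actual components:

```python
# Hypothetical hybrid expert-system flow: deterministic rules first,
# then retrieval-augmented prompting of an LLM backend.
from typing import Optional

RULES = {
    "chest pain": "Urgent: rule out cardiac causes first.",  # illustrative rule only
}

KNOWLEDGE = [
    "Persistent cough lasting over eight weeks is classified as chronic.",
    "Fever with stiff neck warrants evaluation for meningitis.",
]

def rule_based(query: str) -> Optional[str]:
    for keyword, conclusion in RULES.items():
        if keyword in query.lower():
            return conclusion
    return None

def retrieve(query: str, k: int = 2) -> list[str]:
    # naive lexical overlap as a stand-in for a real vector store
    overlap = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(KNOWLEDGE, key=overlap, reverse=True)[:k]

def llm_complete(prompt: str) -> str:
    raise NotImplementedError  # placeholder for whatever LLM backend is used

def diagnose(query: str) -> str:
    if (answer := rule_based(query)) is not None:
        return answer                        # deterministic rule fires first
    context = "\n".join(retrieve(query))     # otherwise augment with retrieved knowledge
    return llm_complete(f"Context:\n{context}\n\nPatient description: {query}\nDiagnosis:")
```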
Context-Engine: MCP retrieval stack for AI coding assistants. Hybrid code search (dense + lexical + reranker), ReFRAG micro-chunking, local LLM prompt enhancement, and dual SSE/RMCP endpoints. One command deploys Qdrant-powered indexing for Cursor, Windsurf, Roo, Cline, Codex, and any MCP client.
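For the hybrid-search piece, a generic reciprocal-rank-fusion sketch of merging dense and lexical rankings before reranking; this illustrates the general idea, not Context-Engine's implementation:

```python
# Generic reciprocal rank fusion (RRF) over dense and lexical result lists.
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document ids into one fused ranking."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense_hits = ["utils.py::parse", "db.py::connect", "api.py::route"]   # from vector search
lexical_hits = ["api.py::route", "utils.py::parse", "cli.py::main"]   # from keyword search
fused = reciprocal_rank_fusion([dense_hits, lexical_hits])
# A cross-encoder reranker would then re-score the top of `fused` against the query.
```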
🌟 Build intelligent agents easily with Helios Engine, a powerful framework for developing and deploying LLM-based applications.
Qurio is a fast, polished LLM workspace for multi-provider setups (Gemini, SiliconFlow, OpenAI-compatible endpoints, and more to come). Manage your threads and knowledge like a master.
🤖 Enhance reasoning and interaction with Apollo Astralis 8B, a next-gen AI model that blends strong logic and a warm personality for effective communication.
Bud AI Foundry: a comprehensive inference stack for compound AI deployment, optimization, and scaling. Bud Stack provides intelligent infrastructure automation, performance optimization, and seamless model deployment across multi-cloud/multi-hardware environments.
📊 Transform documents into a smart knowledge base using Neo4j and Azure AI for efficient, intelligent searching and answer generation.
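A hedged sketch of one possible ingestion step, assuming the official `neo4j` Python driver and the Azure OpenAI embeddings API; credentials, the graph schema, and the embedding deployment name are placeholders:

```python
# Hypothetical ingestion step: embed a document chunk and store it in Neo4j.
from neo4j import GraphDatabase
from openai import AzureOpenAI

embed_client = AzureOpenAI(
    api_key="...",                                      # placeholder credentials
    api_version="2024-02-01",
    azure_endpoint="https://<resource>.openai.azure.com",
)

def embed(text: str) -> list[float]:
    resp = embed_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def ingest_chunk(doc_id: str, chunk: str) -> None:
    # Placeholder graph schema: (Document)-[:HAS_CHUNK]->(Chunk)
    with driver.session() as session:
        session.run(
            "MERGE (d:Document {id: $doc_id}) "
            "CREATE (c:Chunk {text: $text, embedding: $embedding}) "
            "MERGE (d)-[:HAS_CHUNK]->(c)",
            doc_id=doc_id, text=chunk, embedding=embed(chunk),
        )
```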
🔍 Simulate LLM inference performance to identify bottlenecks and optimize models with InferSim, a lightweight and dependency-free Python tool.
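For orientation, the kind of roofline-style arithmetic such a simulator builds on; this is not InferSim's API, just the standard back-of-envelope estimate:

```python
# Rough estimate: prefill is ~compute-bound, decode is ~memory-bandwidth-bound.
def estimate(params_b: float, bytes_per_param: float, prompt_tokens: int,
             peak_tflops: float, mem_bw_gbs: float) -> dict[str, float]:
    weight_bytes = params_b * 1e9 * bytes_per_param
    # Prefill: roughly 2 FLOPs per parameter per prompt token
    prefill_s = (2 * params_b * 1e9 * prompt_tokens) / (peak_tflops * 1e12)
    # Decode: weights must be streamed from memory once per generated token
    decode_tok_s = (mem_bw_gbs * 1e9) / weight_bytes
    return {"prefill_seconds": prefill_s, "decode_tokens_per_second": decode_tok_s}

# Example: an 8B model at FP16 on a GPU with ~300 TFLOPS and ~2 TB/s of bandwidth
print(estimate(params_b=8, bytes_per_param=2, prompt_tokens=1024,
               peak_tflops=300, mem_bw_gbs=2000))
```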
🎤 Enhance your voice-to-text transcriptions with WhisperClip, prioritizing privacy and featuring AI improvements for macOS users.
🌱 Enhance collaboration by synchronizing tasks and resources across teams with Symbiotic, your streamlined project management solution.