22 Nov 25

Run Qwen LLMs locally in your browser with WebGPU. Zero installation, instant AI chat.
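To make that concrete, here is a minimal sketch of in-browser inference using the open-source WebLLM runtime. This is an assumption: the linked tool may be built on a different stack, and the model ID below is illustrative.

```typescript
// Minimal in-browser chat with a Qwen model via WebGPU.
// Assumes the @mlc-ai/web-llm package; the linked project may use a
// different runtime, and the model ID below is illustrative.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Downloads the weights into the browser cache and compiles WebGPU
  // kernels on first run; no server-side inference is involved.
  const engine = await CreateMLCEngine("Qwen2.5-1.5B-Instruct-q4f16_1-MLC", {
    initProgressCallback: (p) => console.log(p.text),
  });

  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```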

by tmfnk 1 month ago

02 Nov 25

Web Search MCP Server for use with Local LLMs

A TypeScript MCP (Model Context Protocol) server that provides comprehensive web search capabilities using direct connections (no API keys required), with multiple tools for different use cases.

Features:

- Multi-Engine Web Search: prioritises Bing > Brave > DuckDuckGo for optimal reliability and performance
- Full Page Content Extraction: fetches and extracts complete page content from search results
- Multiple Search Tools: three specialised tools for different use cases
- Smart Request Strategy: switches between Playwright browsers and fast axios requests to ensure results are returned
- Concurrent Processing: extracts content from multiple pages simultaneously
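For a sense of how such a tool plugs into MCP, here is a rough sketch using the official TypeScript MCP SDK. The tool name, schema, and doSearch helper are illustrative, not the project's actual code.

```typescript
// Sketch of registering a web-search tool with the MCP TypeScript SDK.
// Tool name, schema, and the doSearch helper are illustrative; the
// actual server's implementation may differ.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical search helper; a real implementation would try Bing,
// then Brave, then DuckDuckGo, falling back to a Playwright-driven
// browser when plain HTTP requests are blocked.
async function doSearch(query: string, maxResults: number) {
  return [{ title: `Results for ${query}`, url: "https://example.com" }]
    .slice(0, maxResults);
}

const server = new McpServer({ name: "web-search", version: "1.0.0" });

server.tool(
  "search_web",
  { query: z.string(), maxResults: z.number().default(5) },
  async ({ query, maxResults }) => {
    const results = await doSearch(query, maxResults);
    return {
      content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
    };
  }
);

// Serve over stdio so a local LLM client can launch it as a subprocess.
await server.connect(new StdioServerTransport());
```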

by tmfnk 1 month ago

27 Oct 25

Hyperlink is a local-first AI agent that understands your files privately—PDFs, notes, transcripts, and more. No internet required. Data stays secure, offline, and under your control. A Glean alternative built for personal or regulated use.

by tmfnk 2 months ago

24 Oct 25

Psst, kid, want some cheap and small LLMs? This blog post provides a comprehensive guide on how to set up and use llama.cpp, a C++ library, to efficiently run large language models (LLMs) locally on consumer hardware.
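One payoff of that setup: llama.cpp includes an HTTP server (llama-server) that exposes an OpenAI-compatible endpoint, so any language can talk to the local model. A minimal sketch of calling it, assuming the default port 8080 and a model already loaded:

```typescript
// Query a locally running llama.cpp server (started with e.g.
// `llama-server -m some-model.gguf`), which exposes an
// OpenAI-compatible endpoint. Port 8080 is llama-server's default.
async function chat(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Why run LLMs locally?").then(console.log);
```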

by tmfnk 2 months ago

10 Apr 24

This is Dot, a standalone open source app meant for easy use of local LLMs, and RAG in particular, to interact with documents and files, similar to Nvidia's Chat with RTX. Dot itself is completely standalone and is packaged with all dependencies, including a copy of Mistral 7B, to ensure the app is as accessible as possible; no prior knowledge of programming or local LLMs is required to use it.
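For anyone new to RAG, the retrieval step boils down to ranking document chunks by similarity to the query and feeding the best matches to the model. A toy sketch follows; this is not Dot's actual implementation, and the character-frequency embed is a stand-in for a real embedding model.

```typescript
// Toy retrieval step for RAG (not Dot's actual code): rank document
// chunks by cosine similarity to the query embedding, keep the top k.

// Placeholder embedding: a character-frequency vector. A real system
// would use a neural embedding model instead.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i]++;
  }
  return v;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topChunks(query: string, chunks: string[], k = 3): string[] {
  const q = embed(query);
  return chunks
    .map((c) => ({ c, score: cosine(q, embed(c)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.c);
}

console.log(topChunks("local llm privacy", [
  "LLMs can run locally for privacy.",
  "Unrelated note about cooking pasta.",
  "Local inference keeps data on your machine.",
]));
```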

by chrisSt 1 year ago

18 Dec 23

Our goal is to make open source large language models much more accessible to both developers and end users. We’re doing that by combining llama.cpp with Cosmopolitan Libc into one framework that collapses all the complexity of LLMs down to a single-file executable (called a “llamafile”) that runs locally on most computers, with no installation.

by eli 2 years ago