22 Nov 25

Run Qwen LLMs locally in your browser with WebGPU. Zero installation, instant AI chat.

by tmfnk 2 months ago

02 Nov 25

Web Search MCP Server for use with Local LLMs: a TypeScript MCP (Model Context Protocol) server that provides comprehensive web search using direct connections (no API keys required), with multiple tools for different use cases.

Features:
- Multi-Engine Web Search: prioritises Bing > Brave > DuckDuckGo for optimal reliability and performance
- Full Page Content Extraction: fetches and extracts complete page content from search results
- Multiple Search Tools: three specialised tools for different use cases
- Smart Request Strategy: switches between Playwright browsers and fast axios requests to ensure results are returned
- Concurrent Processing: extracts content from multiple pages simultaneously
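The engine-priority behaviour described above (try Bing, fall back to Brave, then DuckDuckGo) can be sketched as a simple fallback loop. This is an illustrative sketch, not the project's actual code: `SearchFn` and `searchWithFallback` are hypothetical names, and real engine functions would issue axios or Playwright requests.

```typescript
// Hypothetical signature for a single search engine backend.
type SearchFn = (query: string) => Promise<string[]>;

// Try each engine in priority order; return the first non-empty
// result set, skipping engines that throw (blocked, timed out, etc.).
async function searchWithFallback(
  query: string,
  engines: SearchFn[],
): Promise<string[]> {
  for (const engine of engines) {
    try {
      const results = await engine(query);
      if (results.length > 0) return results;
    } catch {
      // Engine failed or was blocked; fall through to the next one.
    }
  }
  return []; // every engine failed or returned nothing
}
```

A real implementation would also decide per-engine whether to use a fast HTTP request or a headless browser, which is the "smart request strategy" the feature list mentions.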

by tmfnk 3 months ago

27 Oct 25

Hyperlink is a local-first AI agent that understands your files privately—PDFs, notes, transcripts, and more. No internet required. Data stays secure, offline, and under your control. A Glean alternative built for personal or regulated use.

by tmfnk 3 months ago

24 Oct 25

Psst, kid, want some cheap and small LLMs? This blog post provides a comprehensive guide to setting up and using llama.cpp, a C/C++ library, to efficiently run large language models (LLMs) locally on consumer hardware.

by tmfnk 3 months ago