- Istanbul, Turkey (UTC +03:00)
- https://vigo.io
- https://vbyazilim.com/
- https://bronxwhq.org/
- https://zombieboys.net/
- https://ugur.ozyilmazel.com/
Highlights
AI
Get started with building Fullstack Agents using Gemini 2.5 and LangGraph
Anthropic's Interactive Prompt Engineering Tutorial
The official Go SDK for Model Context Protocol servers and clients. Maintained in collaboration with Google.
Trae Agent is an LLM-based agent for general purpose software engineering tasks.
Community-contributed instructions, prompts, and configurations to help you make the most of GitHub Copilot.
Private RAG app sample using Llama3, Ollama and PostgreSQL
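The core retrieval step in a RAG app like this is ranking stored embedding vectors by similarity to a query embedding. A minimal sketch in plain Python (the toy 3-dimensional vectors stand in for real model embeddings; in the actual sample, vectors would come from Llama3 via Ollama and be stored in PostgreSQL):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    # Rank stored document vectors by similarity to the query vector
    # and return the ids of the k closest documents.
    ranked = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy "embeddings": doc "a" matches the query exactly, "b" is close, "c" is orthogonal.
docs = {"a": [1.0, 0.0, 0.0], "b": [0.9, 0.1, 0.0], "c": [0.0, 1.0, 0.0]}
print(top_k([1.0, 0.0, 0.0], docs, k=2))  # → ['a', 'b']
```

A vector database (or the pgvector extension in PostgreSQL) does the same ranking at scale with indexes instead of a linear scan.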
Transform Web Content into LLM-Ready Data
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
An extremely fast, scalable memory engine and app: the memory API for the AI era.
The /llms.txt file, helping language models use your website
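The /llms.txt proposal is just a markdown file served from the site root: an H1 title, a blockquote summary, then H2 sections listing markdown links. A minimal sketch following the published format (the project name and URLs below are placeholders):

```markdown
# Example Project

> One-paragraph summary of what the site offers and who it is for.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): installing and running
- [API reference](https://example.com/docs/api.md): endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md)
```

Links in an "Optional" section are the ones a language model may skip when context is tight.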
Repo for course: Build with AI: Creating AI Agents with GPT‑5
Connect to 50+ data stores via the Superset MCP server; usable with the OpenAI Agents SDK, the Claude app, Cursor, and Windsurf.
Collection of extracted System Prompts from popular chatbots like ChatGPT, Claude & Gemini
AGENTS.md — a simple, open format for guiding coding agents
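AGENTS.md has no mandated schema: it is an ordinary markdown file at the repo root whose headings and bullets tell a coding agent how to work in the project. A small illustrative sketch (the section names and commands are examples, not part of any spec):

```markdown
# AGENTS.md

## Setup

- Install dependencies with `make deps` before running anything.

## Build and test

- Run `make test` and make sure it passes before committing.

## Conventions

- Run `gofmt` on all Go files; keep commits small and focused.
```

Agents that support the format read this file the way a human contributor would read CONTRIBUTING.md.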
A Langchain app that allows you to chat with multiple PDFs
Build your own ChatPDF and run it locally
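A building block shared by these chat-with-PDF apps is splitting the extracted text into overlapping chunks before embedding, so that context is not lost at chunk boundaries. A minimal sketch (the character-based sizes are illustrative; real apps often split on tokens or sentences):

```python
def chunk_text(text, size=200, overlap=50):
    # Split text into fixed-size character chunks; consecutive chunks
    # share `overlap` characters so sentences cut at a boundary still
    # appear whole in at least one chunk.
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

# 500 characters with size=200/overlap=50 yields chunks starting
# at offsets 0, 150, 300, and 450.
print(len(chunk_text("x" * 500)))  # → 4
```

Each chunk is then embedded and indexed; at question time the app retrieves the most similar chunks and passes them to the model as context.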
Evaluation and Tracking for LLM Experiments and AI Agents
💫 Toolkit to help you get started with Spec-Driven Development
Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
Starter app to build with OpenAI ChatKit SDK
MCP server for Atlassian tools (Confluence, Jira)
The Intelligence Layer for AI agents. Connect your models, tools, and data to create agentic apps that can think, act and talk to you.
Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
12 weeks, 26 lessons, 52 quizzes, classic Machine Learning for all
Write Go applications that directly integrate llama.cpp for local inference using hardware acceleration.