A production-ready starter template for building and deploying AI agents using Mastra on the Nosana decentralized compute network.
This template provides everything you need to build intelligent AI agents with a modern web interface and deploy them on decentralized infrastructure. Built with Mastra, CopilotKit, and Next.js.
- Mastra framework - AI agent orchestration and workflow management
- Tool calling system - Connect your agent to external services and APIs
- MCP (Model Context Protocol) support - Enhanced agent capabilities
- Modern Next.js frontend - Beautiful UI for interacting with your agent
- Docker configuration - Ready for containerized deployment
- Nosana deployment configs - Deploy to decentralized GPU infrastructure
This template can be adapted for various AI agent applications:
- 🤖 Personal Assistant - Schedule management, email drafting, task automation
- 📊 Data Analyst Agent - Fetch financial data, generate insights, create visualizations
- 🌐 Web Researcher - Aggregate information from multiple sources, summarize findings
- 🛠️ DevOps Helper - Monitor services, automate deployments, manage infrastructure
- 🎨 Content Creator - Generate social media posts, blog outlines, marketing copy
- 🔍 Smart Search - Multi-source search with AI-powered result synthesis
- 💬 Customer Support Bot - Answer FAQs, ticket routing, knowledge base queries
- Node.js 18+ and pnpm
- Docker (for deployment)
- Git
```sh
# Clone this repository
git clone https://github.com/YOUR-USERNAME/nosana-mastra-template
cd nosana-mastra-template

# Copy environment variables
cp .env.example .env

# Install dependencies
pnpm i

# Start the development servers
pnpm run dev:ui     # Start UI server (port 3000)
pnpm run dev:agent  # Start Mastra agent server (port 4111)
```

Open http://localhost:3000 to see your agent frontend. Open http://localhost:4111 to access the Mastra Agent Playground.
Choose your preferred LLM provider to power your agent:
Run Ollama locally (requires Ollama installed):

```sh
ollama pull qwen3:0.6b
ollama serve
```

Update your .env:

```sh
OLLAMA_API_URL=http://127.0.0.1:11434/api
MODEL_NAME_AT_ENDPOINT=qwen3:0.6b
```

Add your OpenAI API key to .env and uncomment the OpenAI configuration in `src/mastra/agents/index.ts`:

```sh
OPENAI_API_KEY=your-openai-api-key
```

Use any OpenAI-compatible endpoint:

```sh
OLLAMA_API_URL=https://your-custom-endpoint.com/api
MODEL_NAME_AT_ENDPOINT=your-model-name
```

Implement your agent logic in the Mastra framework:
- Define your tools - Create custom functions in `src/mastra/tools/`
- Configure your agent - Update agent behavior in `src/mastra/agents/`
- Test locally - Validate functionality at http://localhost:3000 and http://localhost:4111
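As a minimal illustration of the first step, here is a hedged sketch of what a tool in `src/mastra/tools/` might look like. The `Tool` type and the `sumTool` example are hypothetical simplifications for this README; the template's actual tools are built with Mastra's `createTool` helper and schema validation, so treat this as the general shape of the contract, not the real API:

```typescript
// Hypothetical, simplified tool shape -- the real template uses Mastra's
// createTool helper; this models only the general contract a tool fulfills.
type Tool<I, O> = {
  id: string;
  description: string; // the LLM reads this to decide when to call the tool
  execute: (input: I) => Promise<O>;
};

// Hypothetical example tool: sums a list of numbers the agent passes in.
const sumTool: Tool<{ numbers: number[] }, { total: number }> = {
  id: "sum-numbers",
  description: "Adds a list of numbers and returns the total",
  execute: async ({ numbers }) => ({
    total: numbers.reduce((acc, n) => acc + n, 0),
  }),
};
```

The key idea is that each tool pairs a natural-language `description` (which the model uses for tool selection) with a typed `execute` function that does the actual work.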
Modify the Next.js frontend to match your agent's functionality:
- Update UI components in `src/app/`
- Customize the chat interface
- Add custom visualizations or controls
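To make the last point concrete, a frontend control the agent can trigger boils down to a named handler the chat UI registers. The `FrontendAction` type and `showChartAction` below are hypothetical illustrations of that shape only; in the real template such actions would be wired up through CopilotKit's React hooks rather than plain objects:

```typescript
// Hypothetical sketch (not the template's actual code): the general shape of
// a frontend action the chat UI exposes to the agent. In the real template
// this would be registered via CopilotKit inside a React component.
type FrontendAction = {
  name: string;
  description: string;
  handler: (args: Record<string, unknown>) => string;
};

// Hypothetical custom visualization hook-up.
const showChartAction: FrontendAction = {
  name: "showChart",
  description: "Render a chart for the data the agent returns",
  handler: (args) => `rendering chart for ${String(args.metric)}`,
};
```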
Package your agent for deployment:
```sh
# Build your Docker container
docker build -t yourusername/nosana-mastra-agent:latest .

# Test the container locally
docker run -p 3000:3000 yourusername/nosana-mastra-agent:latest

# Push to Docker Hub
docker login
docker push yourusername/nosana-mastra-agent:latest
```

The provided Dockerfile bundles:
- Your Mastra agent
- Frontend interface
- LLM runtime (all-in-one container)
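For orientation, this is a rough sketch of the kind of all-in-one image described above. The base image, install commands, and start command here are assumptions for illustration; the repository's actual Dockerfile is authoritative:

```dockerfile
# Assumed shape only -- see the repository's Dockerfile for the real build.
FROM node:18-slim

# Bundle the LLM runtime (Ollama) inside the container (assumption).
RUN apt-get update && apt-get install -y curl \
    && curl -fsSL https://ollama.com/install.sh | sh

WORKDIR /app
COPY . .
RUN npm install -g pnpm && pnpm install && pnpm build

# UI (3000) and Mastra agent server (4111)
EXPOSE 3000 4111
CMD ["sh", "-c", "ollama serve & pnpm start"]
```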
Deploy your containerized agent to Nosana's decentralized GPU network (see deployment section below).
```
├── src/
│   ├── app/           # Next.js frontend
│   ├── mastra/        # Mastra agent configuration
│   │   ├── agents/    # Agent definitions
│   │   └── tools/     # Custom tool implementations
│   └── lib/           # Shared utilities
├── nos_job_def/       # Nosana deployment configs
├── Dockerfile         # Container configuration
└── .env.example       # Environment variables template
```
Deploy your AI agent to Nosana's decentralized GPU network for production use.
- Open the Nosana Dashboard
- Click `Expand` to open the job definition editor
- Edit `nos_job_def/nosana_mastra_job_definition.json` with your Docker image: `{ "image": "yourusername/nosana-mastra-agent:latest" }`
- Copy and paste the edited job definition into the dashboard
- Select your preferred GPU type
- Click `Deploy`
Install the Nosana CLI and deploy from your terminal:

```sh
# Install Nosana CLI
npm install -g @nosana/cli

# Deploy your agent
nosana job post --file ./nos_job_def/nosana_mastra_job_definition.json --market nvidia-3090 --timeout 30
```

The job definition file includes:
- Docker image reference
- Resource requirements (GPU, memory, CPU)
- Network exposure settings
- Environment variables
Modify `nos_job_def/nosana_mastra_job_definition.json` to customize your deployment.
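As a hedged illustration of the pieces listed above, a minimal job definition might look like the following. The field names are based on Nosana's published job schema and the values are placeholders; verify everything against the template's actual `nos_job_def/nosana_mastra_job_definition.json`:

```json
{
  "version": "0.1",
  "type": "container",
  "meta": { "trigger": "cli" },
  "ops": [
    {
      "type": "container/run",
      "id": "mastra-agent",
      "args": {
        "image": "yourusername/nosana-mastra-agent:latest",
        "gpu": true,
        "expose": 3000
      }
    }
  ]
}
```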
- Decentralized Infrastructure - Run on distributed GPU nodes worldwide
- Cost-Effective - Competitive pricing for GPU compute
- Censorship-Resistant - No single point of control or failure
- Scalable - Easy to scale your agent across multiple nodes
- Transparent - On-chain job execution and verification
- Mastra Documentation - Complete guide to the Mastra framework
- Mastra Agents Overview - Understanding AI agents
- Mastra Tool Calling - Implementing custom tools
- Build an AI Stock Agent Guide - Complete tutorial
- CopilotKit Documentation - Frontend AI integration
- Nosana Documentation - Complete Nosana platform guide
- Nosana CLI - Command-line deployment
- Nosana SDK - JavaScript SDK
- Next.js Documentation - Next.js features and API
- Docker Documentation - Container best practices
Need help or want to connect with other builders?
- Discord - Join Nosana Discord for technical support
- Twitter/X - Follow @nosana_ai for updates
- GitHub - Report issues or contribute to the repos
Contributions are welcome! Feel free to:
- Submit bug reports or feature requests
- Improve documentation
- Share your agent implementations
- Contribute code improvements
This template is open source and available under the MIT License.
Built with Mastra • Deployed on Nosana • Powered by decentralized AI