Modern multi-agent AI reasoning system with TypeScript, Next.js, and extensible architecture
🚀 Quick Start • 📖 Documentation • 🤝 Contributing • 💬 Discussions
A modern, extensible multi-agent AI system built with Next.js, TypeScript, and a modular architecture. PolarisAI takes a reasoning-based approach to AI interactions, supports multiple LLM providers, and is designed for easy extension with new agents.
User: "How does the reasoning system work?"
🤖 PolarisAI: Analyzing your question about the reasoning system...
🔍 REASONING PIPELINE:
1. ANALYZE: Understanding your intent about system architecture
2. PLAN: Structuring response about reasoning workflow
3. EXECUTE: Explaining the four-phase pipeline
4. REFLECT: Ensuring comprehensive coverage
✨ RESPONSE:
The reasoning system follows a structured pipeline:
- Analyze → Plan → Execute → Reflect
- Each agent processes input through these phases
- Confidence scoring and reasoning explanations included
- Memory system maintains context across conversations
Confidence: 92%
Try it yourself:
- Run `bun dev`
- Open http://localhost:3000
- Configure your OpenAI API key in Settings
- Start chatting with the reasoning system!
| 🏗️ Architecture | 🧠 AI Models | 🔧 Development | 🚀 Deployment |
|---|---|---|---|
| ✅ Multi-Agent System | ✅ OpenAI GPT-4 | ✅ TypeScript | ✅ Self-Hosted |
| ✅ Reasoning Pipeline | ✅ Ollama Local | ✅ Next.js 15 | ✅ Docker Ready |
| ✅ Memory System | 🔄 Anthropic Claude | ✅ Comprehensive Tests | 📋 Cloud Deploy |
| ✅ Tool Integration | 📋 Google Gemini | ✅ CI/CD Pipeline | 📋 Scaling |
Legend: ✅ Available • 🔄 In Progress • 📋 Planned
| Feature | Description | Status |
|---|---|---|
| 🧠 Reasoning-First | Every response includes confidence scores and reasoning explanations | ✅ |
| 🔌 Multi-Provider | Works with OpenAI and Ollama; extensible to any LLM provider | ✅ |
| 🏗️ Extensible | Clean architecture for adding new agents and capabilities | ✅ |
| 🔒 Privacy-Focused | Self-hosted by default; your data stays with you | ✅ |
| 📚 Well-Documented | Comprehensive guides for users and contributors | ✅ |
| 🧪 Production-Ready | TypeScript, tests, CI/CD, and a professional development workflow | ✅ |
- Multi-Agent System: Extensible framework for specialized AI agents
- Reasoning Pipeline: Analyze → Plan → Execute → Reflect workflow
- Provider Agnostic: Supports OpenAI and Ollama, and is extensible to other providers
- Local & Remote Models: Run with local models (Ollama) or cloud APIs (OpenAI, Anthropic)
- Memory System: Persistent conversation memory and learning capabilities
- Tool Integration: Extensible tool system for agent capabilities
- General Assistant: Versatile helper for various tasks and questions
- Specialized Tools: Text analysis, brainstorming, step-by-step planning
- Reasoning Display: Shows confidence levels and thought processes
- TypeScript: Fully typed for better development experience
- Next.js 15: Modern React framework with App Router
- Tailwind CSS: Utility-first styling
- Modular Design: Clean separation of concerns
- GDPR Compliance: Built-in privacy features and data protection
- Node.js 18+ or Bun (recommended)
- Optional: Ollama for local models
- Clone the repository: `git clone git@github.com:jakubkunert/PolarisAI.git` and `cd polaris-ai`
- Install dependencies: `bun install` (or `npm install`)
- Start the development server: `bun dev` (or `npm run dev`)
- Open in browser: navigate to http://localhost:3000
- Get an API key from OpenAI
- In the web interface, click "Settings"
- Select "OpenAI" as provider
- Enter your API key
- Start chatting!
- Install Ollama: https://ollama.ai
- Pull a model: `ollama pull llama3.2`
- Start the Ollama service: `ollama serve`
- The system will auto-detect Ollama and use it (see the sketch below)
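For reference, the auto-detection can be as simple as probing Ollama's local HTTP API. The sketch below is illustrative only (it is not PolarisAI's actual implementation) and assumes Ollama's default port 11434 and its `GET /api/tags` endpoint, which lists the models you have pulled:

```typescript
// Illustrative sketch: probe a local Ollama server before falling back to a cloud provider.
// Assumes Ollama's default port (11434) and its GET /api/tags endpoint.
async function detectOllamaModels(baseUrl = 'http://localhost:11434'): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`, { signal: AbortSignal.timeout(1000) });
    if (!res.ok) return [];
    const data = (await res.json()) as { models?: { name: string }[] };
    return (data.models ?? []).map((m) => m.name); // e.g. ['llama3.2:latest']
  } catch {
    return []; // Ollama is not running locally
  }
}
```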
```
┌──────────────────────────────────────┐
│          Frontend (Next.js)          │
├──────────────────────────────────────┤
│               API Layer              │
├──────────────────────────────────────┤
│          Agent Orchestrator          │
├──────────────────────────────────────┤
│   Agent 1   │   Agent 2   │   ...    │
├──────────────────────────────────────┤
│            Model Providers           │
│ OpenAI │ Ollama │ Anthropic │ ...    │
└──────────────────────────────────────┘
```
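The Agent Orchestrator in the middle of this stack is responsible for routing each request to a suitable agent. The real implementation lives under `src/core/`; the following is only a rough, hypothetical sketch of that responsibility (the agent interface, the capability-matching rule, and the `general-assistant` fallback id are assumptions):

```typescript
// Hypothetical routing sketch; not the project's actual orchestrator.
interface RegisteredAgent {
  id: string;
  capabilities: string[];
  respond(input: string): Promise<string>;
}

class MiniOrchestrator {
  private agents = new Map<string, RegisteredAgent>();

  registerAgent(agent: RegisteredAgent): void {
    this.agents.set(agent.id, agent);
  }

  // Route to the first agent advertising the capability, falling back to a general assistant.
  async route(input: string, capability: string): Promise<string> {
    const agent =
      [...this.agents.values()].find((a) => a.capabilities.includes(capability)) ??
      this.agents.get('general-assistant'); // assumed fallback id
    if (!agent) throw new Error('No agent registered');
    return agent.respond(input);
  }
}
```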
- BaseModelProvider: Abstract base class for all providers
- OpenAIProvider: OpenAI GPT integration
- OllamaProvider: Local model support
- ModelManager: Handles provider registration and routing
- BaseAgent: Abstract base class for all agents
- BasicTaskPlanner: Implements the reasoning pipeline
- GeneralAssistantAgent: General-purpose assistant
- Tool System: Extensible capabilities framework
- Comprehensive TypeScript interfaces
- GDPR-compliant data structures
- Agent and model abstractions
Each agent follows a structured reasoning process:
- Analysis: Understand user intent and context
- Planning: Create step-by-step action plan
- Execution: Carry out the plan with tools
- Reflection: Learn from the interaction
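To make the shape of that pipeline concrete, here is a small illustrative sketch. The real interfaces live in `src/core/types`; the names and the fixed confidence value below are assumptions, not the project's API:

```typescript
// Illustrative sketch of the four-phase pipeline; not PolarisAI's real types.
type Phase = 'analyze' | 'plan' | 'execute' | 'reflect';

interface ReasoningStep {
  phase: Phase;
  summary: string; // human-readable explanation surfaced to the user
}

interface PipelineResult {
  content: string;
  confidence: number; // 0-1, shown to the user as a percentage
  steps: ReasoningStep[];
}

async function runPipeline(
  input: string,
  handlers: Record<Phase, (context: string) => Promise<string>>
): Promise<PipelineResult> {
  const steps: ReasoningStep[] = [];
  let context = input;
  for (const phase of ['analyze', 'plan', 'execute', 'reflect'] as const) {
    context = await handlers[phase](context);
    steps.push({ phase, summary: context });
  }
  // In the real system the confidence would be derived from the model, not hard-coded.
  return { content: context, confidence: 0.92, steps };
}
```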
POST /api/chat

```typescript
// Request
{
  "message": "string",
  "provider": "openai" | "ollama" | string,
  "apiKey": "string" // Optional, for remote providers
}

// Response
{
  "success": boolean,
  "response": {
    "id": "string",
    "content": "string",
    "confidence": number, // 0-1
    "reasoning": "string",
    "timestamp": "string",
    "metadata": object
  },
  "agent": {
    "id": "string",
    "name": "string",
    "status": object
  }
}
```

GET /api/chat
Returns system status and available providers.
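A minimal client call against the POST endpoint could look like the sketch below. Field names follow the shapes documented above; the base URL assumes a local dev server:

```typescript
// Sketch of a client request to POST /api/chat on a local dev server.
interface ChatApiResponse {
  success: boolean;
  response: { id: string; content: string; confidence: number; reasoning: string };
  agent: { id: string; name: string };
}

async function askPolaris(message: string): Promise<ChatApiResponse> {
  const res = await fetch('http://localhost:3000/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message,
      provider: 'ollama', // or 'openai'
      // apiKey: '...',   // only needed for remote providers
    }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  return (await res.json()) as ChatApiResponse;
}

const reply = await askPolaris('How does the reasoning system work?');
console.log(reply.response.content, `confidence: ${reply.response.confidence}`);
```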
```
polaris-ai/
├── src/
│   ├── app/              # Next.js app router
│   │   ├── api/chat/     # API endpoints
│   │   └── page.tsx      # Main interface
│   ├── core/             # Core system
│   │   ├── agents/       # Agent implementations
│   │   ├── models/       # Model providers
│   │   └── types/        # TypeScript definitions
│   ├── components/       # UI components
│   ├── lib/              # Utilities
│   └── hosted/           # Premium features
├── docs/                 # Documentation
└── README.md
```
- Create the agent class:

```typescript
// src/core/agents/my-agent.ts
import { BaseAgent } from './base-agent';
import { ModelProvider, ModelConfig, LongTermMemory, TaskPlanner } from '../types'; // adjust path to your layout

export class MyAgent extends BaseAgent {
  constructor(modelProvider: ModelProvider, modelConfig: ModelConfig) {
    super(
      'my-agent',
      'My Agent',
      'Description of what this agent does',
      ['capability1', 'capability2'],
      'System prompt for the agent...',
      modelProvider,
      modelConfig
    );
  }

  createMemory(): LongTermMemory {
    // Implement memory structure
  }

  createPlanner(): TaskPlanner {
    // Implement or use an existing planner
  }
}
```
- Register the agent with the orchestrator:

```typescript
// In your orchestrator
const agent = new MyAgent(modelProvider, modelConfig);
orchestrator.registerAgent(agent);
```
- Implement the provider:

```typescript
// src/core/models/my-provider.ts
import { BaseModelProvider } from './base-provider';
import { ModelConfig } from '../types'; // adjust path to your layout

export class MyProvider extends BaseModelProvider {
  constructor() {
    super('my-provider', 'My Provider', 'remote');
  }

  async authenticate(apiKey?: string): Promise<boolean> {
    // Implement authentication
  }

  async generateResponse(prompt: string, config: ModelConfig): Promise<string> {
    // Implement response generation
  }

  // ... other required methods
}
```
- Register the provider with the model manager:

```typescript
// In the model manager
modelManager.registerProvider(new MyProvider());
```
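Once registered, the new provider can be exercised directly with the two methods sketched above. A quick smoke test might look like this (the `ModelConfig` field shown is an assumption; adjust it to the real interface):

```typescript
// Hypothetical smoke test for the new provider.
const provider = new MyProvider();

if (await provider.authenticate(process.env.MY_PROVIDER_API_KEY)) {
  const text = await provider.generateResponse('Hello from PolarisAI', {
    temperature: 0.7, // assumed ModelConfig field
  });
  console.log(text);
}
```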
- Local Processing: Run everything on your own infrastructure
- Data Control: You own and control all your data
- No Tracking: No telemetry or data collection
- Open Source: Full transparency in all operations
- Self-Hosted: Complete control over your environment
- API Key Security: Secure storage of model provider credentials
- Audit Trail: Full visibility into all operations
- Customizable: Modify security to your requirements
We welcome contributions to PolarisAI! Please read our Contributing Guide for details on:
- Development setup and workflow
- Git flow and commit conventions
- Code standards and testing
- Pull request process
- Adding new features and agents
- Fork the repository
- Set up your development environment (see CONTRIBUTING.md)
- Create a feature branch: `git checkout -b feature/your-feature`
- Make your changes and add tests
- Submit a pull request
- Language: TypeScript
- Framework: Next.js 15
- Styling: Tailwind CSS
- Runtime: Bun (recommended) or Node.js 18+
- Database: Optional (PostgreSQL for persistence)
For detailed development instructions, see CONTRIBUTING.md.
- ✅ Multi-agent reasoning system
- ✅ OpenAI and Ollama provider support
- ✅ Web interface with chat functionality
- ✅ Development workflow and contribution guidelines
- Enhanced Agent Capabilities
  - Specialized agents for different domains
  - Advanced reasoning and planning
  - Multi-agent collaboration
- Extended Model Support
  - Anthropic Claude integration
  - Google Gemini support
  - Custom model providers
- User Experience
  - Voice interaction support
  - Mobile-responsive design
  - Advanced conversation management
- Enterprise Features
  - User authentication and management
  - Team collaboration tools
  - Advanced analytics and monitoring
- Bug Reports: Use our bug report template
- Feature Requests: Use our feature request template
- Questions: Start a GitHub Discussion
- Security Issues: Please report privately to [security contact]
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to all contributors who help improve PolarisAI
- Inspired by the open-source AI community
- Built with amazing tools and frameworks from the JavaScript ecosystem
Ready to build the future of AI agents? Check out our Contributing Guide and join the community! 🚀