ARK (Automated Resource Knowledgebase) revolutionizes resource management via automation. Using advanced algorithms, it streamlines the collection, organization, and retrieval of resource data, facilitating efficient decision-making.
The entire codebase is in Python, except for a few shell scripts.
- `openai>=1.61.0` - OpenAI Python SDK for standardizing inference engine communication and API compatibility
- `pyyaml>=6.0.2` - YAML parser for configuration files (state graphs, etc.)
- `pydantic>=2.10.6` - Data validation and schema definition using Python type annotations
- `requests>=2.32.3` - HTTP library for making API requests to external services and tools
- `fastapi>=0.115.0` - Modern, fast web framework for building the API server with automatic OpenAPI documentation
- `uvicorn>=0.32.0` - ASGI server for running FastAPI applications
- `psycopg2-binary>=2.9.11` - PostgreSQL adapter for Python (binary distribution, no compilation required). Used for storing conversation context and long-term memory
- `mem0ai` - Memory management library for vector-based memory storage and retrieval using Supabase
Install all dependencies using:
```bash
pip install -r requirements.txt
```

Note: `psycopg2-binary` is used instead of `psycopg2` to avoid requiring the PostgreSQL development libraries (`libpq-dev`) on the system. For production deployments, you may want to use `psycopg2` with the proper system dependencies.
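A quick, stdlib-only sanity check that the required packages are importable can be handy after installation. The pip-name-to-module-name mapping below is a sketch; note that `pyyaml` imports as `yaml`, `psycopg2-binary` as `psycopg2`, and `mem0ai` as `mem0`:

```python
import importlib.util

# Map pip package names to their importable module names.
REQUIRED = {
    "openai": "openai",
    "pyyaml": "yaml",
    "pydantic": "pydantic",
    "requests": "requests",
    "fastapi": "fastapi",
    "uvicorn": "uvicorn",
    "psycopg2-binary": "psycopg2",
    "mem0ai": "mem0",
}

def missing_packages():
    """Return pip names of required packages that are not importable."""
    return [pip for pip, mod in REQUIRED.items()
            if importlib.util.find_spec(mod) is None]

if __name__ == "__main__":
    missing = missing_packages()
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All dependencies available.")
```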
(As of September 11, 2025.)
- `base_module/` for the main interface
- `config_module/` for YAML configuration files
- `model_module/` for core LLM-inference logic
- `agent_module/` for agentic structure
- `state_module/` for defining the agent state graph
- `tool_module/` for MCP compatibility
- `memory_module/` for long-term memory and context management
- `schemas/` for JSON schemas used when communicating between frontend and backend
- `.gitignore`
- `README.md` (this very file)
- `requirements.txt` (Python dependencies)
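Since pydantic handles the validation of data exchanged between frontend and backend, a minimal sketch of the kind of model that might live under `schemas/` (the class and field names here are illustrative, not the project's actual schemas):

```python
from pydantic import BaseModel, Field

class ChatMessage(BaseModel):
    # Hypothetical schema: restrict roles to the usual chat-completions set.
    role: str = Field(pattern="^(system|user|assistant)$")
    content: str

# Validation happens on construction; malformed payloads raise ValidationError.
msg = ChatMessage.model_validate({"role": "user", "content": "hi"})
print(msg.model_dump())
```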
The project uses SGLang to run the Qwen 2.5-7B-Instruct model. Start the inference server:
```bash
bash model_module/run.sh
```

This starts the SGLang server on port 30000, serving the `Qwen/Qwen2.5-7B-Instruct` model.
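SGLang exposes an OpenAI-compatible HTTP API, so once the server is up you can talk to it at `/v1/chat/completions`. A stdlib-only sketch of building such a request (the helper name is ours, and it assumes the default port 30000 from `run.sh`):

```python
import json
import urllib.request

def build_chat_request(prompt, model="Qwen/Qwen2.5-7B-Instruct",
                       base_url="http://localhost:30000"):
    """Build an OpenAI-style chat-completions request for the local server."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_chat_request("Hello")
    # urllib.request.urlopen(req) would send it once the server is running.
    print(req.full_url)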
You need to run both the API server and the test interface:
1. Start the API server (in one terminal):

   ```bash
   cd base_module
   python app.py
   ```

   This starts the FastAPI server on port 1111, providing the `/v1/chat/completions` endpoint.

2. Run the test interface (in another terminal):

   ```bash
   cd base_module
   python main_interface.py
   ```

   This provides an interactive CLI to test the agent. Type your messages and press Enter. Type `exit` or `quit` to stop.
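The test-interface loop described above can be sketched as follows; this is our illustration of the read/dispatch/quit behavior, not the actual code in `main_interface.py`:

```python
def should_quit(line: str) -> bool:
    """Return True when the user typed an exit command."""
    return line.strip().lower() in {"exit", "quit"}

def repl(reader, handler):
    """Read lines from `reader`, passing each to `handler` until quit."""
    for line in reader:
        if should_quit(line):
            break
        handler(line)

if __name__ == "__main__":
    # In the real CLI, `reader` is stdin and `handler` sends the message
    # to the agent; here we just echo for illustration.
    repl(["hello", "exit"], print)
```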
| Name | Role | GitHub username | Affiliation |
|---|---|---|---|
| Nathaniel Morgan | Project leader | nmorgan | MIT |
| Joshua Guo | Frontend | duck_master | MIT |
| Ilya Gulko | Backend | gulkily | MIT |
| Jack Luo | Backend | thejackluo | Georgia Tech |
| Bryce Roberts | Backend | BryceRoberts13 | MIT |
| Angela Liu | Backend | angelaliu6 | MIT |
| Ishaana Misra | Backend | ishaanam | MIT |