This is the third version (V3) of the Stock Research Agent, built on top of the LangGraph + LangSmith ecosystem.
Version 3 brings a production-grade architecture with a modular backend and an interactive frontend powered by Deep Agents UI.
It extends Version 2 by introducing real-time observability, agent collaboration, and a complete local deployment workflow.
| Feature | Version 1 (Old) | Version 2 (Improved) | Version 3 (New) |
|---|---|---|---|
| Configuration | Hardcoded in `.py` | `.env`, `.json`, `.md` external configs | Same as V2 |
| Search Provider | Fixed to Brave | Optional Brave / Tavily (auto-selected) | Same as V2 |
| Sub-agents | Defined inline in code | Externalized in `subagents.json` | Same as V2 |
| Core Instructions | Inline (long) | Moved to `instructions.md` | Same as V2 |
| Token Usage per Query | ~13,000 | ~3,500 | Same as V2 |
| Response Time | Slow (bloated context) | 60–70% faster (context engineered) | Same as V2 |
| UI | Basic textbox | Markdown + Chat UI (Gradio) | Integrated with Deep Agents UI via LangSmith Server |
| Error Handling | Minimal | Graceful fallbacks, provider status banners | Same as V2 |
| Backend Runtime | Local Python | Gradio app | LangGraph / LangSmith Server |
| Deployment | Manual | Scripted | Same as V2 + UI integration |
| Monitoring | None | Console logs | UI-based interaction through Deep Agents UI |
- Before (v1): ~13,000 tokens per research query (bloated instructions + redundant context).
- Now (v2): ~3,500 tokens per query.
- Achieved through:
  - Externalizing prompts into `.md` + `.json` files (a loading sketch follows this list).
  - Removing redundant instructions.
  - Adding clear tool descriptions.
  - Context-aware selective tool usage.

💰 Impact: Reducing from 13k → 3.5k tokens saves ~73% in API costs and improves response latency by 60–70%.
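As a concrete illustration of the externalization step, here is a minimal sketch that loads the core prompt from an `instructions.md` file instead of hardcoding it. The filename matches the table above, but treat the snippet as illustrative rather than this repo's exact code.

```python
from pathlib import Path

# Core instructions live in instructions.md, not in the Python source,
# so the prompt can be edited and trimmed without touching code.
INSTRUCTIONS = Path("instructions.md").read_text(encoding="utf-8")

# Quick sanity check on how much context the prompt contributes.
print(f"instructions.md: {len(INSTRUCTIONS)} characters")
```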
- When deep agents or tools recurse too much, Python may throw a `RecursionError`.
- Avoid simply increasing the recursion limit; it may hide design flaws or cause infinite loops.
- Instead:
  - Set clear stopping conditions in agents.
  - Limit maximum depth of reasoning/tool calls (see the sketch after this list).
  - Use guardrails in prompts to discourage looping.
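One way to put a hard ceiling on depth is LangGraph's per-run `recursion_limit`. The sketch below assumes the compiled agent is available as `agent`; it illustrates the guardrail pattern, not this repo's exact code.

```python
from langgraph.errors import GraphRecursionError

MAX_STEPS = 25  # explicit ceiling on reasoning/tool-call iterations

try:
    result = agent.invoke(
        {"messages": [("user", "Research AAPL fundamentals")]},
        config={"recursion_limit": MAX_STEPS},  # stop instead of looping forever
    )
except GraphRecursionError:
    # Clear stopping condition: surface the failure instead of
    # silently raising the limit and hiding a looping agent.
    result = {"error": f"Agent exceeded {MAX_STEPS} steps; check prompts for loops."}
```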
- Keep prompts precise and minimal.
- Avoid duplicate phrases across instructions and sub-agent definitions (they bloat token count).
- Provide short, structured tool descriptions instead of verbose text.
- Define workflows clearly so the model doesn't need extra clarification cycles.
Before (v1, verbose):

```python
@tool
def get_stock_price(symbol: str) -> str:
    """This tool is designed to fetch stock-related data, including price, company name, market capitalization, P/E ratio, and 52-week highs and lows. Use this whenever you need to know about a company's stock."""
```

After (v2, concise):

```python
@tool
def get_stock_price(symbol: str) -> str:
    """Fetch stock price, company name, market cap, P/E ratio, 52-week range."""
```

➡️ The shorter description conveys the same meaning but reduces tokens and avoids repetition.
Token Comparison:
- v1: ~60 tokens
- v2: ~20 tokens
- Reduction: ~67%
Before (v1, verbose):

```python
fundamental_analyst = {
    "name": "fundamental-analyst",
    "description": "Performs deep fundamental analysis of companies including financial ratios, growth metrics, and valuation",
    "prompt": """You are an expert fundamental analyst with 15+ years of experience.
Focus on:
- Financial statement analysis
- Ratio analysis (P/E, P/B, ROE, ROA, Debt-to-Equity)
- Growth metrics and trends
- Industry comparisons
- Intrinsic value calculations
Always provide specific numbers and cite your sources.""",
}
```

After (v2, concise):

```python
fundamental_analyst = {
    "name": "fundamental-analyst",
    "description": "Performs company fundamental analysis",
    "prompt": "You are a fundamental analyst. Focus only on:\n- Financial statements (Revenue, Net Income, Assets, Debt)\n- Ratios: P/E, P/B, ROE, ROA, Debt/Equity\n- Growth trends vs peers\n- Valuation (intrinsic value)"
}
```

➡️ Context was streamlined by removing persona fluff ("expert with 15+ years") and redundant phrasing. The new version conveys the same scope but with fewer tokens.
Token Comparison:
- v1: ~120 tokens
- v2: ~55 tokens
- Reduction: ~55%
- Tool descriptions are always loaded into the agent's context.
- Long, verbose descriptions add up quickly because they are repeated on every query.
- Short, clear descriptions save tokens while still giving the model enough guidance.
- Example: If you have 5 tools and each has a 100-token description, that's 500 tokens added to every query, even if only 1 tool is used (see the counting sketch below).
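To measure that overhead instead of guessing, here is a rough counting sketch with `tiktoken`; the tool descriptions and the `cl100k_base` encoding are illustrative choices, not this repo's exact values.

```python
import tiktoken

# Every tool description rides along with every query, used or not.
enc = tiktoken.get_encoding("cl100k_base")

tool_descriptions = {
    "get_stock_price": "Fetch stock price, company name, market cap, P/E ratio, 52-week range.",
    "search_news": "Search the web for recent news about a company.",
    "get_technicals": "Fetch historical prices and compute RSI and moving averages.",
}

per_tool = {name: len(enc.encode(desc)) for name, desc in tool_descriptions.items()}
print(per_tool)
print("fixed tokens added to every query:", sum(per_tool.values()))
```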
- Externalize large prompts and configs into `.md` and `.json` files.
- Pass only the needed tools and subagents for each run (see the sketch below).
- Remove unnecessary background text or repetition.
- Result: faster responses, fewer tokens, and no information loss.
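A minimal sketch of that selective-loading pattern, assuming a `subagents.json` file containing entries shaped like the dictionaries above and the `deepagents` package's `create_deep_agent` helper; the filenames and the filtering rule are illustrative, not necessarily this repo's exact layout.

```python
import json
from pathlib import Path

from deepagents import create_deep_agent  # assumed helper from the deepagents package

# Externalized configs: the long prompt in Markdown, sub-agent specs in JSON.
instructions = Path("instructions.md").read_text(encoding="utf-8")
all_subagents = json.loads(Path("subagents.json").read_text(encoding="utf-8"))

# Pass only what this run needs instead of loading every description.
needed = [s for s in all_subagents if s["name"] == "fundamental-analyst"]

agent = create_deep_agent(
    tools=[get_stock_price],   # the short-docstring tool defined earlier
    instructions=instructions,
    subagents=needed,
)
```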
- Keep tool descriptions under 1–2 lines.
- Avoid persona fluff in sub-agent prompts.
- Externalize long instructions to `.md` and `.json` files.
- Monitor token counts with `tiktoken` or similar utilities.
- Set guardrails to prevent recursion loops.
- Add multi-company comparative analysis.
- Add streaming responses for real-time progress display.
- Extend UI with charts (price trends, RSI graphs).
- Option to export reports as PDF.
Version 2 of the Stock Research Agent delivers:
- Cleaner configuration (`.env`, `.json`, `.md`).
- Flexible and optional search provider setup.
- Optimized token usage with context engineering.
- Practical learnings on recursion, prompt design, and context control.
- Much faster and cheaper queries without sacrificing depth.
This sets the foundation for more advanced reporting, visualization, and deployment.
- Screenshot 1 – UI previews and analysis examples
- Screenshot 2 – Example stock research report
- Demo Video – walkthrough of the stock research workflow
- Version 1 README – for comparison with the original implementation
This tool is for educational and research purposes only. It does not constitute financial advice. Always consult with qualified financial advisors before making investment decisions. Past performance does not guarantee future results.
- LangChain – for the DeepAgent framework
- Yahoo Finance – for free and reliable financial data APIs
- Gradio – for the intuitive and flexible UI framework
- Ollama – for enabling local LLM hosting
- Tavily – for providing fast browser search APIs
- Brave – for offering a robust web search API
If you find this project useful, please consider giving it a star ⭐️ on GitHub!
Built with ❤️ using LangChain DeepAgents
Transform your investment research with the power of specialized AI agents.