AI-powered trading strategy generation using n8n workflows and Python ML engine.
- Python 3.8+
- n8n instance (self-hosted or n8n.cloud)
- n8n API key
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/n8n_github.git
  cd n8n_github
  ```

- Create a Python virtual environment

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r scripts/requirements.txt
  ```

- Configure n8n API credentials

  ```bash
  cp .env.example .env
  # Edit .env and add your n8n credentials
  ```
To obtain your n8n API key:

- Log in to your n8n instance
  - If using n8n.cloud: https://app.n8n.cloud
  - If self-hosted: http://localhost:5678 (default)
- Generate an API key
  - Click your avatar (top-right)
  - Select "Account Settings"
  - Navigate to "API Tokens"
  - Click "Create New Token"
  - Copy the token
- Update the .env file

  ```env
  N8N_BASE_URL=https://yourname.n8n.cloud
  N8N_API_KEY=your_api_key_here
  N8N_API_VERSION=v1
  N8N_TIMEOUT=30
  N8N_VERIFY_SSL=true
  ```
Test your n8n API configuration:
```bash
python scripts/test_n8n_connection.py
```

Expected output:

```
1️⃣ Testing .env file... ✅
2️⃣ Testing configuration... ✅
3️⃣ Testing API client creation... ✅
4️⃣ Testing API connection... ✅
5️⃣ Testing workflow listing... ✅

✅ All tests passed! n8n API is ready to use.
```
List available workflows:

```python
from scripts.n8n_integration import N8nIntegration
from dotenv import load_dotenv

load_dotenv()

integration = N8nIntegration()
if integration.connect():
    integration.print_workflows_table()
```

Execute a workflow and wait for it to complete:

```python
from scripts.n8n_integration import N8nIntegration
from dotenv import load_dotenv

load_dotenv()

integration = N8nIntegration()
integration.connect()

# Execute workflow and wait for completion
result = integration.execute(
    workflow_id='your_workflow_id',
    input_data={'symbol': 'BTC', 'timeframe': '1h'},
    wait=True,
    max_wait=300
)

print(f"Status: {result.get('status')}")
print(f"Data: {result.get('data')}")
```

Run a trading workflow against CSV input:

```python
from scripts.execute_trading_workflow import TradingWorkflowExecutor
from dotenv import load_dotenv

load_dotenv()

executor = TradingWorkflowExecutor()
executor.connect()

# Find input CSV
input_csv = executor.find_input_csv('BTC_USDT')

# Execute workflow
execution = executor.execute_workflow(
    'DailyFeatureSelection',
    input_csv=input_csv,
    wait=True
)

# Save results
executor.save_execution_result(execution, 'trading_result.json')
```

The repository includes a small agent that periodically triggers a configured n8n workflow (e.g. a strategy generator), waits for it to complete, and saves the results.
Configuration (add to .env):
- `WORKFLOW_NAME` or `WORKFLOW_ID`: workflow to trigger (default: `StrategyGenerator`)
- `AGENT_INTERVAL`: seconds between runs (default: `3600`)
- `DRY_RUN`: set to `true` to run the agent locally without contacting n8n (useful for testing)
- `UPLOAD_TO_GCS`, `GCS_BUCKET`: optional, upload saved results to GCS
- `BQ_TABLE`: optional BigQuery table `project.dataset.table` to load tabular results
Run the agent (foreground):
```bash
source .venv/bin/activate
python scripts/strategy_agent.py
```

Run in dry mode (no external credentials required):

```bash
export DRY_RUN=true
python scripts/strategy_agent.py
```

Where results are saved:
- Saved result files go to the `reports/` directory by default.
- A sample result is included at `reports/sample_strategy.json`.
If you set `DRY_RUN=true`, the agent generates a fake execution result and stores it locally, so you can test the full pipeline without network access or credential configuration.
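For orientation, the loop the agent runs conceptually resembles the sketch below. This is an illustration only, not the actual contents of `scripts/strategy_agent.py`; the fake-result shape, the output file naming, and the handling of workflow name vs. ID are assumptions.

```python
import json
import os
import time
from pathlib import Path

from dotenv import load_dotenv
from scripts.n8n_integration import N8nIntegration

load_dotenv()

WORKFLOW = os.getenv("WORKFLOW_ID") or os.getenv("WORKFLOW_NAME", "StrategyGenerator")
INTERVAL = int(os.getenv("AGENT_INTERVAL", "3600"))
DRY_RUN = os.getenv("DRY_RUN", "false").lower() == "true"
REPORTS_DIR = Path("reports")


def run_once() -> dict:
    """Trigger the configured workflow once and return its result."""
    if DRY_RUN:
        # No network or credentials needed: fabricate a local result (shape assumed).
        return {"status": "success", "workflow": WORKFLOW, "data": {"dry_run": True}}
    integration = N8nIntegration()
    integration.connect()
    # Name-to-ID resolution is omitted here; find_workflow() could be used for that.
    return integration.execute(workflow_id=WORKFLOW, wait=True)


while True:
    result = run_once()
    REPORTS_DIR.mkdir(exist_ok=True)
    out_path = REPORTS_DIR / f"strategy_{int(time.time())}.json"
    out_path.write_text(json.dumps(result, indent=2))
    time.sleep(INTERVAL)
```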
```
n8n_github/
├── scripts/
│   ├── n8n_api_client.py            # Low-level API client
│   ├── n8n_integration.py           # High-level integration
│   ├── execute_trading_workflow.py  # Trading workflow executor
│   ├── example_n8n_usage.py         # Full usage examples
│   ├── test_n8n_connection.py       # Connection tests
│   └── requirements.txt             # Python dependencies
├── data/                            # Input CSV data
├── reports/                         # Output results & analysis
├── .env                             # Configuration (ignored by git)
├── .env.example                     # Configuration template
└── README.md                        # This file
```
`n8n_api_client.py` (low-level API client):

- `test_connection()` - Verify API connectivity
- `list_workflows()` - Get all workflows
- `get_workflow(workflow_id)` - Get workflow details
- `execute_workflow(workflow_id, data)` - Execute workflow
- `get_execution(execution_id)` - Get execution status
- `list_executions(workflow_id)` - List executions
- `wait_for_execution(execution_id)` - Wait for completion
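As a rough illustration of driving the low-level client directly, consider the sketch below; the class name `N8nApiClient`, the zero-argument constructor, and the dictionary shapes of the return values are assumptions, while the method calls come from the list above.

```python
from dotenv import load_dotenv
from scripts.n8n_api_client import N8nApiClient  # class name assumed

load_dotenv()

client = N8nApiClient()  # constructor arguments assumed; it may read .env itself

if client.test_connection():
    workflows = client.list_workflows()
    first_id = workflows[0]["id"]  # dict shape assumed

    # Start an execution and poll until it finishes.
    execution = client.execute_workflow(first_id, {"symbol": "BTC"})
    result = client.wait_for_execution(execution["id"])  # dict shape assumed
    print(result)
```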
`n8n_integration.py` (high-level integration):

- `connect()` - Test connection
- `get_workflows()` - Get workflows with caching
- `find_workflow(name_or_id)` - Search workflows
- `execute()` - Execute with defaults
- `get_execution_status()` - Get status
- `list_recent_executions()` - List recent runs
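A short, hypothetical sketch of combining `find_workflow()` and `list_recent_executions()`; the return types are not documented here, so they are treated as opaque objects and simply printed.

```python
from dotenv import load_dotenv
from scripts.n8n_integration import N8nIntegration

load_dotenv()

integration = N8nIntegration()
integration.connect()

# Look up a workflow by name or ID and inspect whatever comes back.
workflow = integration.find_workflow('DailyFeatureSelection')
print(workflow)

# Review recent runs (argument-free call; any filtering parameters are unknown).
for execution in integration.list_recent_executions():
    print(execution)
```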
`execute_trading_workflow.py` (trading workflow executor):

- `connect()` - Connect to n8n
- `list_available_workflows()` - Display workflows
- `find_input_csv()` - Locate CSV files
- `read_csv_data()` - Read CSV data
- `execute_workflow()` - Execute with CSV
- `save_execution_result()` - Save results
- `process_workflow_results()` - Format results
If you get connection errors:

- Check `N8N_BASE_URL` is correct

  ```bash
  curl https://yourname.n8n.cloud/api/v1/me -H "X-N8N-API-KEY: your_key"
  ```

- Verify the API key is valid
  - Regenerate in n8n → Account Settings → API Tokens
- Check firewall/network access
  - Ensure the n8n instance is accessible

If a workflow is not found:

- List available workflows

  ```bash
  python scripts/n8n_integration.py
  ```

- Use the exact workflow ID from the list
  - Format: "123456" or "abcdef123456"
If executions time out, increase the timeouts in .env:

```env
N8N_TIMEOUT=60
N8N_MAX_WAIT_SECONDS=600
```

| Variable | Default | Description |
|---|---|---|
| `N8N_BASE_URL` | `http://localhost:5678` | n8n instance URL |
| `N8N_API_KEY` | (empty) | API authentication token |
| `N8N_API_VERSION` | `v1` | API version |
| `N8N_TIMEOUT` | `30` | Request timeout (seconds) |
| `N8N_VERIFY_SSL` | `true` | Verify SSL certificates |
| `N8N_MAX_WAIT_SECONDS` | `300` | Max execution wait time (seconds) |
| `N8N_POLL_INTERVAL` | `2` | Status check interval (seconds) |
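If you need these values in your own scripts, a minimal sketch using `python-dotenv` follows; the bundled scripts may handle configuration differently.

```python
import os
from dotenv import load_dotenv

# Load variables from .env into the process environment.
load_dotenv()

base_url = os.getenv("N8N_BASE_URL", "http://localhost:5678")
api_key = os.getenv("N8N_API_KEY", "")
timeout = int(os.getenv("N8N_TIMEOUT", "30"))
verify_ssl = os.getenv("N8N_VERIFY_SSL", "true").lower() == "true"

print(f"Connecting to {base_url} (timeout={timeout}s, verify_ssl={verify_ssl})")
```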
MIT
- Create a feature branch
- Make changes
- Test with `python scripts/test_n8n_connection.py`
- Submit a pull request