JarvisCore Configuration Guide¶
Complete guide to configuring JarvisCore framework.
Table of Contents¶
- Quick Start
- Phase → Configuration Quick Reference — v0.4.0 phase-to-env-var mapping
- Environment Variables
- LLM Configuration
- Sandbox Configuration
- Storage Configuration
- P2P Configuration
- Execution Settings
- Logging Configuration
- Configuration Examples
- Troubleshooting
Quick Start¶
JarvisCore uses environment variables for configuration with sensible defaults.
Zero-Config Mode¶
No configuration required! Just install and run:
from jarviscore import Mesh
from jarviscore.profiles import AutoAgent
mesh = Mesh()
# Uses default settings, tries to detect LLM providers
Basic Configuration¶
Create a .env file in your project root:
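A minimal example, assuming you use Claude as the provider (any single provider key works):

```bash
# .env — one LLM provider key is enough
CLAUDE_API_KEY=sk-ant-...
```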
That's it! The framework handles the rest.
Phase → Configuration Quick Reference¶
Each infrastructure phase is enabled by specific environment variables. All phases degrade gracefully when not configured.
| Phase | Feature | Required env vars | Install extras |
|---|---|---|---|
| 1 | Blob storage | STORAGE_BACKEND=local (default), STORAGE_BASE_PATH=./blob_storage | — |
| 1 | Azure Blob | STORAGE_BACKEND=azure, AZURE_STORAGE_CONNECTION_STRING | pip install "jarviscore-framework[azure]" |
| 2 | Context distillation | — | automatic |
| 3 | Telemetry / tracing | LOG_LEVEL=DEBUG (JSONL trace), PROMETHEUS_ENABLED=true | pip install "jarviscore-framework[prometheus]" |
| 4 | Mailbox messaging | REDIS_URL=redis://localhost:6379/0 | pip install "jarviscore-framework[redis]" |
| 5 | Prometheus metrics | PROMETHEUS_ENABLED=true, PROMETHEUS_PORT=9090 | [prometheus] |
| 6 | Kernel / SubAgent OODA | KERNEL_MAX_TURNS=30, KERNEL_MAX_TOTAL_TOKENS=80000 | automatic (AutoAgent) |
| 7 | Distributed workflow | REDIS_URL, REDIS_CONTEXT_TTL_DAYS=7 | [redis] |
| 7D | Nexus auth | NEXUS_GATEWAY_URL=https://..., AUTH_MODE=production | — |
| 8 | UnifiedMemory | REDIS_URL | [redis] |
| 9 | Auto-injection | — | automatic |
Quick install by use case:
pip install jarviscore-framework # Zero-infra (LLM + P2P only)
pip install "jarviscore-framework[redis]" # + Mailbox, Memory, Distributed workflow
pip install "jarviscore-framework[redis,prometheus]" # + Metrics (Ex1, Ex2, Ex4)
pip install "jarviscore-framework[redis,prometheus,azure]" # + Azure Blob
pip install "jarviscore-framework[full]" # Everything
Environment Variables¶
Configuration File¶
Initialize your project and create .env file:
# Initialize project (creates .env.example)
python -m jarviscore.cli.scaffold
# Copy and configure
cp .env.example .env
# Edit .env with your values
Standard Names (No Prefix)¶
JarvisCore uses standard environment variable names without prefixes:
LLM Configuration¶
Configure language model providers. The framework tries them in order: Claude → vLLM → Azure → Gemini.
Anthropic Claude (Recommended)¶
# Standard Anthropic API
CLAUDE_API_KEY=sk-ant-...
# Optional: Custom endpoint (Azure Claude, etc.)
CLAUDE_ENDPOINT=https://api.anthropic.com
# Optional: Model selection
CLAUDE_MODEL=claude-sonnet-4 # or claude-opus-4, claude-haiku-3.5
Get API Key: https://console.anthropic.com/
vLLM (Self-Hosted)¶
Recommended for cost-effective production:
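Point the framework at your vLLM server (values here match the vLLM production example later in this guide):

```bash
LLM_ENDPOINT=http://localhost:8000
LLM_MODEL=Qwen/Qwen2.5-Coder-32B-Instruct
```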
Setup vLLM:
# Install vLLM
pip install vllm
# Start server
python -m vllm.entrypoints.openai.api_server \
--model Qwen/Qwen2.5-Coder-32B-Instruct \
--port 8000
Azure OpenAI¶
AZURE_API_KEY=your-azure-key
AZURE_ENDPOINT=https://your-resource.openai.azure.com
AZURE_DEPLOYMENT=gpt-4o
AZURE_API_VERSION=2025-01-01-preview
Get Started:
Google Gemini¶
GEMINI_API_KEY=your-gemini-api-key
GEMINI_MODEL=gemini-2.0-flash
GEMINI_TEMPERATURE=0.1
GEMINI_TIMEOUT=30.0
Get API Key: https://ai.google.dev/
Common LLM Settings¶
# Request timeout (seconds)
LLM_TIMEOUT=120.0
# Sampling temperature (0.0 - 1.0)
LLM_TEMPERATURE=0.7
Provider Selection¶
The framework automatically selects providers:
- Tries Claude first (if CLAUDE_API_KEY set)
- Falls back to vLLM (if LLM_ENDPOINT set)
- Falls back to Azure (if AZURE_API_KEY set)
- Falls back to Gemini (if GEMINI_API_KEY set)
You only need to configure ONE provider.
Sandbox Configuration¶
Configure code execution environment.
Local Mode (Default)¶
In-process execution, fast, for development:
No additional configuration needed.
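If you prefer to be explicit, the default can be spelled out:

```bash
SANDBOX_MODE=local # default
```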
Remote Mode (Production)¶
Azure Container Apps sandbox service:
SANDBOX_MODE=remote
SANDBOX_SERVICE_URL=https://browser-task-executor.bravesea-3f5f7e75.eastus.azurecontainerapps.io
Features:
- Full process isolation
- Better security
- Hosted by JarvisCore (no setup required)
- Automatic fallback to local
When to use:
- Production deployments
- Untrusted code execution
- Multi-tenant systems
- High security requirements
Sandbox Timeout¶
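Sandboxed runs are bounded by the general execution timeout (see Execution Settings); no separate sandbox-specific timeout variable is documented:

```bash
EXECUTION_TIMEOUT=300 # seconds (default)
```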
Storage Configuration¶
Configure result storage and code registry.
Storage Directory¶
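The storage root is controlled by LOG_DIRECTORY:

```bash
LOG_DIRECTORY=./logs # default
```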
Storage Structure¶
logs/
├── {agent_id}/ # Agent results
│ └── {result_id}.json
└── code_registry/ # Registered functions
├── index.json
└── functions/
└── {function_id}.py
Result Storage¶
Results are automatically stored:
- File storage: Persistent JSON files
- In-memory cache: LRU cache (1000 results)
- Zero dependencies: No Redis, no database
Code Registry¶
Generated code is automatically registered:
- Searchable: Find functions by keywords/capabilities
- Reusable: Share code between agents
- Auditable: Track all generated code
P2P Configuration¶
Configure distributed mesh networking for p2p and distributed modes.
Execution Modes¶
| Mode | Code Config | Workflow Engine | P2P Coordinator |
|---|---|---|---|
| autonomous | Mesh(mode="autonomous") | ✅ | ❌ |
| p2p | Mesh(mode="p2p", config={...}) | ❌ | ✅ |
| distributed | Mesh(mode="distributed", config={...}) | ✅ | ✅ |
Network Settings (P2P and Distributed)¶
Per-process P2P settings use JARVISCORE_ prefix. For multi-node deployments
set these at process launch (not in a shared .env) or pass them explicitly
in the Mesh config dict.
# Bind address and port — per-process; ZMQ port = JARVISCORE_BIND_PORT + ZMQ_PORT_OFFSET
JARVISCORE_BIND_HOST=0.0.0.0 # Listen on all interfaces (default: 127.0.0.1)
JARVISCORE_BIND_PORT=7950 # SWIM protocol port (default: 7946)
# Node identification
JARVISCORE_NODE_NAME=jarviscore-node-1
# Seed nodes (comma-separated) for joining existing cluster
JARVISCORE_SEED_NODES=192.168.1.100:7950,192.168.1.101:7950
Preferred for multi-node: pass per-process config directly in code:
mesh = Mesh(mode="distributed", config={
"bind_host": "0.0.0.0",
"bind_port": 7950,
"seed_nodes": "192.168.1.100:7950",
})
Transport Configuration¶
# ZeroMQ port offset
ZMQ_PORT_OFFSET=1000 # ZMQ will use JARVISCORE_BIND_PORT + 1000
# Transport type
TRANSPORT_TYPE=hybrid # udp, tcp, or hybrid
Keepalive Settings¶
# Enable smart keepalive
KEEPALIVE_ENABLED=true
# Keepalive interval (seconds)
KEEPALIVE_INTERVAL=90
# Keepalive timeout (seconds)
KEEPALIVE_TIMEOUT=10
# Activity suppression window (seconds)
ACTIVITY_SUPPRESS_WINDOW=60
When to Use P2P¶
Use P2P (distributed mode) when:
- Running agents across multiple machines
- High availability is needed
- Load balancing is required
- Geographic distribution
Don't use P2P (autonomous mode) when:
- Single machine deployment
- Development/testing
- Simple workflows
- Getting started
Execution Settings¶
Repair Attempts¶
When code execution fails, AutoAgent attempts to fix it automatically.
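The attempt count is controlled by:

```bash
MAX_REPAIR_ATTEMPTS=3 # default
```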
Retry Settings¶
Applies to LLM calls, HTTP requests, etc.
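```bash
MAX_RETRIES=3 # default
```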
Timeout Configuration¶
# Code execution timeout
EXECUTION_TIMEOUT=300 # 5 minutes (default)
# LLM request timeout
LLM_TIMEOUT=120 # 2 minutes (default)
Logging Configuration¶
Log Level¶
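Standard Python log levels apply:

```bash
LOG_LEVEL=INFO # DEBUG, INFO, WARNING, or ERROR (default: INFO)
```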
Log Formats¶
Development:
Production:
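A typical split, mirroring the environment-specific configurations later in this guide:

```bash
# Development: verbose console output
LOG_LEVEL=DEBUG

# Production: quieter, persisted to a fixed directory
LOG_LEVEL=WARNING
LOG_DIRECTORY=/var/log/jarviscore
```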
Python Logging¶
import logging
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
# Get logger
logger = logging.getLogger('jarviscore')
logger.setLevel(logging.DEBUG)
Configuration Examples¶
Example 1: Local Development¶
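A minimal sketch, assuming Claude as the provider (matches the development configuration later in this guide):

```bash
# .env
CLAUDE_API_KEY=sk-ant-...
SANDBOX_MODE=local
LOG_LEVEL=DEBUG
```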
Use case: Rapid prototyping, testing
Example 2: vLLM Production¶
# .env
LLM_ENDPOINT=http://localhost:8000
LLM_MODEL=Qwen/Qwen2.5-Coder-32B-Instruct
SANDBOX_MODE=remote
SANDBOX_SERVICE_URL=https://browser-task-executor...
LOG_LEVEL=INFO
LOG_DIRECTORY=/var/log/jarviscore
P2P_ENABLED=false
Use case: Cost-effective single-node production
Example 3: Azure OpenAI with P2P¶
# .env
AZURE_API_KEY=...
AZURE_ENDPOINT=https://my-resource.openai.azure.com
AZURE_DEPLOYMENT=gpt-4o
AZURE_API_VERSION=2025-01-01-preview
SANDBOX_MODE=remote
SANDBOX_SERVICE_URL=https://browser-task-executor...
P2P_ENABLED=true
JARVISCORE_BIND_HOST=0.0.0.0
JARVISCORE_BIND_PORT=7946
JARVISCORE_SEED_NODES=192.168.1.100:7946,192.168.1.101:7946
LOG_LEVEL=INFO
LOG_DIRECTORY=/var/log/jarviscore
Use case: Enterprise distributed deployment
Example 4: Multi-Provider Fallback¶
# .env
# Primary: Claude
CLAUDE_API_KEY=sk-ant-...
# Fallback 1: Azure
AZURE_API_KEY=...
AZURE_ENDPOINT=https://...
AZURE_DEPLOYMENT=gpt-4o
# Fallback 2: Gemini
GEMINI_API_KEY=...
SANDBOX_MODE=local
LOG_LEVEL=INFO
Use case: High availability with provider redundancy
Example 5: Zero-Config¶
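A single provider key is the only required setting (Claude shown here as an example):

```bash
# .env
CLAUDE_API_KEY=sk-ant-...
```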
Everything else uses defaults. Perfect for getting started!
Environment-Specific Configuration¶
Development¶
# .env.development
CLAUDE_API_KEY=...
SANDBOX_MODE=local
LOG_LEVEL=DEBUG
P2P_ENABLED=false
EXECUTION_TIMEOUT=60
MAX_REPAIR_ATTEMPTS=1
Staging¶
# .env.staging
AZURE_API_KEY=...
AZURE_ENDPOINT=...
SANDBOX_MODE=remote
SANDBOX_SERVICE_URL=https://...
LOG_LEVEL=INFO
P2P_ENABLED=true
EXECUTION_TIMEOUT=300
Production¶
# .env.production
LLM_ENDPOINT=http://vllm-service:8000
SANDBOX_MODE=remote
SANDBOX_SERVICE_URL=https://...
LOG_LEVEL=WARNING
LOG_DIRECTORY=/var/log/jarviscore
P2P_ENABLED=true
JARVISCORE_BIND_HOST=0.0.0.0
EXECUTION_TIMEOUT=600
MAX_REPAIR_ATTEMPTS=3
Load Configuration¶
import os
from jarviscore import Mesh
# Set environment
env = os.getenv('ENVIRONMENT', 'development')
# Load config file
from dotenv import load_dotenv
load_dotenv(f'.env.{env}')
# Create mesh
mesh = Mesh()
Programmatic Configuration¶
Override environment variables in code:
from jarviscore import Mesh
# Autonomous mode (no P2P config needed)
mesh = Mesh(mode="autonomous", config={
'execution_timeout': 600,
'log_level': 'DEBUG'
})
# P2P mode (requires network config)
mesh = Mesh(mode="p2p", config={
'bind_host': '0.0.0.0',
'bind_port': 7950,
'node_name': 'my-node',
'seed_nodes': '192.168.1.10:7950', # Optional, for joining cluster
})
# Distributed mode (both workflow + P2P)
mesh = Mesh(mode="distributed", config={
'bind_host': '0.0.0.0',
'bind_port': 7950,
'node_name': 'my-node',
'execution_timeout': 600,
})
Note: Programmatic config overrides environment variables.
Validation¶
Check Configuration¶
from jarviscore.config import settings
# Print current settings
print(f"Sandbox Mode: {settings.sandbox_mode}")
print(f"Log Directory: {settings.log_directory}")
print(f"Claude Key: {'Set' if settings.claude_api_key else 'Not set'}")
Verify LLM Providers¶
import asyncio
from jarviscore.execution import create_llm_client

async def main():
    llm = create_llm_client()
    # Test generation (generate is async)
    response = await llm.generate("Hello")
    print(f"Provider: {response['provider']}")
    print(f"Model: {response['model']}")

asyncio.run(main())
Security Best Practices¶
1. Never Commit .env Files¶
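Add .env to your ignore list so API keys never reach version control:

```shell
# Ignore local environment files
echo ".env" >> .gitignore
```

If a .env file was committed previously, remove it from tracking with git rm --cached .env and rotate the exposed keys.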
2. Use Secret Management¶
Development: a local .env file (loaded via python-dotenv) is sufficient.
Production:
# AWS Secrets Manager
aws secretsmanager get-secret-value --secret-id jarviscore/claude-key
# Azure Key Vault
az keyvault secret show --vault-name myvault --name claude-key
# Kubernetes Secrets
kubectl create secret generic jarviscore-secrets \
--from-literal=CLAUDE_API_KEY=...
3. Restrict File Permissions¶
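Restrict the .env file to owner read/write so other local users cannot read your keys:

```shell
touch .env      # ensure the file exists for this example
chmod 600 .env  # owner read/write only
```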
4. Use Remote Sandbox in Production¶
# Production
SANDBOX_MODE=remote # Better isolation
# Development
SANDBOX_MODE=local # Faster iteration
5. Rotate API Keys Regularly¶
Set up key rotation every 90 days.
Troubleshooting¶
Issue: Configuration not loading¶
Solution:
# Ensure .env is in correct location
import os
print(os.getcwd()) # Should contain .env
# Manual load
from dotenv import load_dotenv
load_dotenv('.env')
Issue: LLM provider not found¶
Solution:
# Check at least one provider is configured
echo $CLAUDE_API_KEY
echo $LLM_ENDPOINT
echo $AZURE_API_KEY
echo $GEMINI_API_KEY
Issue: Sandbox connection failed¶
Solution:
# Test remote sandbox URL
curl -X POST https://browser-task-executor... \
-H "Content-Type: application/json" \
-d '{"STEP_DATA": {...}, "TASK_CODE_B64": "..."}'
# Fallback to local
SANDBOX_MODE=local
Issue: P2P connection failed¶
Solution:
# Check firewall
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 8946/tcp # ZMQ (JARVISCORE_BIND_PORT + 1000)
# Check seed nodes are reachable
nc -zv 192.168.1.100 7946
Issue: Storage directory not writable¶
Solution:
# Create directory
mkdir -p ./logs
# Fix permissions
chmod 755 ./logs
# Or change location
LOG_DIRECTORY=/tmp/jarviscore-logs
Configuration Reference¶
All Variables¶
| Variable | Default | Description |
|---|---|---|
| CLAUDE_API_KEY | None | Anthropic API key |
| CLAUDE_ENDPOINT | https://api.anthropic.com | Claude API endpoint |
| CLAUDE_MODEL | claude-sonnet-4 | Claude model |
| LLM_ENDPOINT | None | vLLM server URL |
| LLM_MODEL | default | vLLM model name |
| AZURE_API_KEY | None | Azure OpenAI key |
| AZURE_ENDPOINT | None | Azure OpenAI endpoint |
| AZURE_DEPLOYMENT | None | Azure deployment name |
| AZURE_API_VERSION | 2024-02-15-preview | Azure API version |
| GEMINI_API_KEY | None | Google Gemini key |
| GEMINI_MODEL | gemini-2.0-flash | Gemini model |
| LLM_TIMEOUT | 120.0 | LLM timeout (seconds) |
| LLM_TEMPERATURE | 0.7 | Sampling temperature |
| SANDBOX_MODE | local | Execution mode |
| SANDBOX_SERVICE_URL | None | Remote sandbox URL |
| EXECUTION_TIMEOUT | 300 | Code timeout (seconds) |
| MAX_REPAIR_ATTEMPTS | 3 | Max repair attempts |
| MAX_RETRIES | 3 | Max retry attempts |
| LOG_DIRECTORY | ./logs | Storage directory |
| LOG_LEVEL | INFO | Log verbosity |
| P2P_ENABLED | false | Enable P2P mesh |
| JARVISCORE_NODE_NAME | jarviscore-node | Node identifier |
| JARVISCORE_BIND_HOST | 127.0.0.1 | P2P bind address (per-process) |
| JARVISCORE_BIND_PORT | 7946 | P2P bind port (per-process) |
| JARVISCORE_SEED_NODES | None | Seed nodes CSV (per-process) |
| ZMQ_PORT_OFFSET | 1000 | ZMQ port = bind_port + offset |
| TRANSPORT_TYPE | hybrid | Transport type |
| KEEPALIVE_ENABLED | true | Enable keepalive |
| KEEPALIVE_INTERVAL | 90 | Keepalive interval (seconds) |
Next Steps¶
- Read the User Guide for practical examples
- Check the API Reference for detailed documentation
- Explore .env.example for complete configuration template
Version¶
Configuration Guide for JarvisCore v1.0.2
Last Updated: 2026-03-04