Helios supports multiple LLM providers through a unified Gateway interface. The gateway automatically routes requests to the appropriate backend based on the model identifier.

Supported Providers

  • Google Gemini: Default provider with computer-use models
  • Anthropic: Claude models via the direct API
  • AWS Bedrock: Claude models via AWS
  • OpenAI: GPT models with computer-use preview

Model Selection

Specify a model with the -m flag:
helios tasks/my-task -m <model-identifier>
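For example, to run a task with Gemini 3 Pro:
helios tasks/my-task -m gemini/gemini-3-pro-preview
If -m is omitted, Helios uses the default Gemini computer-use model.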

Routing Rules

The gateway uses these patterns to route to providers:
Pattern                                        Provider
gemini/* or any ID containing "gemini"         Google Gemini
claude-* (e.g., claude-sonnet-4-20250514)      Anthropic Direct
bedrock/* or any ID containing "anthropic."    AWS Bedrock
openai/* or computer-use-preview               OpenAI
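For example, each of these invocations routes to a different provider (model IDs are taken from the tables below):
# Google Gemini
helios tasks/my-task -m gemini/gemini-2.5-computer-use-preview-10-2025

# Anthropic Direct
helios tasks/my-task -m claude-sonnet-4-20250514

# AWS Bedrock
helios tasks/my-task -m bedrock/global.anthropic.claude-sonnet-4-20250514-v1:0

# OpenAI
helios tasks/my-task -m openai/computer-use-preview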

Available Models

Google Gemini

Model ID                                          Description
gemini/gemini-2.5-computer-use-preview-10-2025    Computer-use preview (default)
gemini/gemini-3-pro-preview                       Gemini 3 Pro

Anthropic (Direct)

Model ID                    Description
claude-sonnet-4-20250514    Claude Sonnet 4
claude-opus-4-20250514      Claude Opus 4

AWS Bedrock

Model ID                                                  Description
bedrock/global.anthropic.claude-sonnet-4-20250514-v1:0    Claude Sonnet via Bedrock
bedrock/global.anthropic.claude-opus-4-5-20251101-v1:0    Claude Opus via Bedrock

OpenAI

Model ID                       Description
openai/computer-use-preview    Computer-use preview

Environment Variables

Set these in your shell or .env file:
# Google Gemini (default provider)
export GEMINI_API_KEY=your-gemini-api-key
# or
export GOOGLE_API_KEY=your-google-api-key

# Anthropic Direct
export ANTHROPIC_API_KEY=your-anthropic-api-key

# AWS Bedrock
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_REGION=us-east-1

# OpenAI
export OPENAI_API_KEY=your-openai-api-key
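As a quick sanity check, you can list which of these variables are set in your current shell. This is a plain bash snippet (it relies on bash's indirect expansion), not a Helios command:
# Report which provider credentials are present in the current shell
for var in GEMINI_API_KEY GOOGLE_API_KEY ANTHROPIC_API_KEY OPENAI_API_KEY AWS_ACCESS_KEY_ID; do
  if [ -n "${!var}" ]; then
    echo "$var is set"
  else
    echo "$var is NOT set"
  fi
done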

Using .env Files

Create a .env file in your project root:
# .env
GEMINI_API_KEY=your-key-here
ANTHROPIC_API_KEY=your-key-here
OPENAI_API_KEY=your-key-here
Never commit .env files to version control. Add .env to your .gitignore.
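One way to do that from the shell:
# Keep the .env file out of version control
echo ".env" >> .gitignore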

Model Comparison

Provider         Strengths               Best For
Gemini           Fast, good vision       General tasks, quick iteration
Claude Sonnet    Balanced performance    Most tasks, good default
Claude Opus      Best reasoning          Complex tasks, hard problems
OpenAI           Wide availability       OpenAI-native workflows

Batch Execution with Models

Specify the model for batch runs:
helios batch tasks/ -n 4 -m claude-sonnet-4-20250514
All tasks in the batch will use the specified model.

Comparing Models

Run the same tasks with different models:
# Run with Gemini
helios batch tasks/benchmark/ -n 4 -o results/gemini/

# Run with Claude
helios batch tasks/benchmark/ -n 4 -m claude-sonnet-4-20250514 -o results/claude/

# Run with OpenAI
helios batch tasks/benchmark/ -n 4 -m openai/computer-use-preview -o results/openai/
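To automate the comparison, loop over model IDs and give each run its own results directory. This bash sketch uses only the -m, -n, and -o flags shown above; the substitution replaces "/" in model IDs so they are safe as directory names:
# Run the same benchmark against several models
for model in gemini/gemini-2.5-computer-use-preview-10-2025 \
             claude-sonnet-4-20250514 \
             openai/computer-use-preview; do
  outdir="results/${model//\//_}"   # e.g., results/openai_computer-use-preview
  helios batch tasks/benchmark/ -n 4 -m "$model" -o "$outdir"
done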

Troubleshooting

Missing API key: check that the provider's environment variable is set:
echo $ANTHROPIC_API_KEY
If the output is empty, export the key:
export ANTHROPIC_API_KEY=your-key

Invalid model identifier: verify the identifier matches one of the supported patterns:
# Correct
helios tasks/my-task -m claude-sonnet-4-20250514

# Wrong
helios tasks/my-task -m claude4-sonnet

Rate limits: if you hit provider rate limits, reduce concurrency in batch mode:
helios batch tasks/ -n 2  # Lower concurrency

Bedrock permissions: ensure your AWS IAM user or role is allowed to call these actions (a minimal policy sketch follows):
  • bedrock:InvokeModel
  • bedrock:InvokeModelWithResponseStream
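A minimal identity-based policy granting those actions could look like the following. This is an illustrative sketch; in practice, scope Resource down to the specific model ARNs you use:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}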

Debugging

Enable debug logging to see gateway details:
export CUA_LOG_LEVEL=DEBUG
helios tasks/my-task -m claude-sonnet-4-20250514
This shows:
  • Model routing decisions
  • API request/response details
  • Token usage
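To keep the debug output for later inspection, redirect it to a file. This assumes Helios writes log output to stderr, which is typical for CLI tools but not confirmed here:
# Capture debug logs to a file (assumes logs go to stderr)
helios tasks/my-task -m claude-sonnet-4-20250514 2> helios-debug.log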

Next Steps