## Supported Providers

- **Google Gemini**: Default provider with computer-use models
- **Anthropic**: Claude models via the direct API
- **AWS Bedrock**: Claude models via AWS
- **OpenAI**: GPT models with computer-use preview
## Model Selection

Specify a model with the `-m` flag:
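The usage snippet is missing here; a minimal sketch, using `agent` as a placeholder for this project's actual binary name and an example model ID from the tables below:

```shell
# `agent` is a placeholder binary name; the task string is illustrative
agent -m claude-sonnet-4-20250514 "open the settings page"
```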
## Routing Rules

The gateway uses these patterns to route requests to providers:

| Pattern | Provider |
|---|---|
| `gemini/*` or contains `gemini` | Google Gemini |
| `claude-*` (e.g., `claude-sonnet-4-20250514`) | Anthropic Direct |
| `bedrock/*` or contains `anthropic.` | AWS Bedrock |
| `openai/*` or `computer-use-preview` | OpenAI |
## Available Models

### Google Gemini
| Model ID | Description |
|---|---|
| `gemini/gemini-2.5-computer-use-preview-10-2025` | Computer-use preview (default) |
| `gemini/gemini-3-pro-preview` | Gemini 3 Pro |
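The usage snippet for this provider is not shown above; a minimal sketch, assuming the conventional `GEMINI_API_KEY` variable and a placeholder binary name `agent`:

```shell
export GEMINI_API_KEY="your-key"   # placeholder value
agent -m gemini/gemini-2.5-computer-use-preview-10-2025 "summarize the open tab"
```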
### Anthropic (Direct)
| Model ID | Description |
|---|---|
| `claude-sonnet-4-20250514` | Claude Sonnet 4 |
| `claude-opus-4-20250514` | Claude Opus 4 |
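The usage snippet for this provider is not shown above; a sketch, assuming the conventional `ANTHROPIC_API_KEY` variable and the placeholder binary name `agent`:

```shell
export ANTHROPIC_API_KEY="your-key"   # placeholder value
agent -m claude-opus-4-20250514 "fill out the signup form"
```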
### AWS Bedrock
| Model ID | Description |
|---|---|
| `bedrock/global.anthropic.claude-sonnet-4-20250514-v1:0` | Claude Sonnet via Bedrock |
| `bedrock/global.anthropic.claude-opus-4-5-20251101-v1:0` | Claude Opus via Bedrock |
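The credentials snippet for this provider is not shown above; a sketch using the standard AWS SDK credential variables (values and the `agent` binary name are placeholders):

```shell
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
agent -m bedrock/global.anthropic.claude-sonnet-4-20250514-v1:0 "open the dashboard"
```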
### OpenAI
| Model ID | Description |
|---|---|
| `openai/computer-use-preview` | Computer-use preview |
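The usage snippet for this provider is not shown above; a sketch, assuming the conventional `OPENAI_API_KEY` variable and the placeholder binary name `agent`:

```shell
export OPENAI_API_KEY="your-key"   # placeholder value
agent -m openai/computer-use-preview "click the submit button"
```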
## Environment Variables

Set these in your shell or `.env` file:
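The variable list itself is missing here; a sketch using the names each provider's SDK conventionally reads (values are placeholders — confirm the exact names against this project's docs):

```shell
export GEMINI_API_KEY="your-gemini-key"        # Google Gemini
export ANTHROPIC_API_KEY="your-anthropic-key"  # Anthropic direct API
export OPENAI_API_KEY="your-openai-key"        # OpenAI
# AWS Bedrock uses standard AWS credentials
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
```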
### Using .env Files

Create a `.env` file in your project root:
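The file contents are missing here; a sketch with placeholder values, using the same conventional variable names as above:

```shell
GEMINI_API_KEY=your-gemini-key
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
```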
## Model Comparison
| Provider | Strengths | Best For |
|---|---|---|
| Gemini | Fast, good vision | General tasks, quick iteration |
| Claude Sonnet | Balanced performance | Most tasks, good default |
| Claude Opus | Best reasoning | Complex tasks, hard problems |
| OpenAI | Wide availability | OpenAI-native workflows |
## Batch Execution with Models
Specify the model for batch runs:

## Comparing Models
Run the same tasks with different models:

## Troubleshooting
### API key not found
Check that your environment variable is set; if it's empty, export it:
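The check itself is missing here; a minimal sketch using Anthropic's variable as an example (the key value is a placeholder):

```shell
# Print the variable; empty output means it is not set
echo "$ANTHROPIC_API_KEY"

# Export it if empty (value is a placeholder)
export ANTHROPIC_API_KEY="your-key"
```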
### Model not found
Verify the model identifier matches a supported pattern:
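The verification snippet is missing; each identifier below matches one of the routing patterns from the table above:

```text
gemini/gemini-2.5-computer-use-preview-10-2025          -> Google Gemini
claude-sonnet-4-20250514                                -> Anthropic Direct
bedrock/global.anthropic.claude-sonnet-4-20250514-v1:0  -> AWS Bedrock
openai/computer-use-preview                             -> OpenAI
```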
### Rate limiting
If you hit rate limits, reduce concurrency in batch mode:
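The original snippet is missing; a hypothetical invocation, assuming a batch subcommand with a concurrency option (both the `batch` subcommand and `--concurrency` flag are guesses — check the CLI's help output for the real names):

```shell
# Hypothetical subcommand and flag; tasks.yaml is a placeholder file name
agent batch tasks.yaml --concurrency 2
```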
### Bedrock permission errors
Ensure your AWS IAM user/role has Bedrock permissions:
- `bedrock:InvokeModel`
- `bedrock:InvokeModelWithResponseStream`
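A minimal IAM policy granting these two actions might look like the following (scope `Resource` down to specific model ARNs in production):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
```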
## Debugging

Enable debug logging to see gateway details:

- Model routing decisions
- API request/response details
- Token usage
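The original snippet is missing; a hypothetical invocation, assuming a `DEBUG` environment variable toggles verbose logging (the project's actual debug switch may differ):

```shell
# Hypothetical switch; `agent` is a placeholder binary name
DEBUG=1 agent -m claude-sonnet-4-20250514 "open the settings page"
```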