model argument to Agent.start():
Model() constructor.
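A minimal sketch of both styles, assuming the `Agent.start()` and `Model()` names referenced above (the `autonomy` import path and the `model` keyword are assumptions, not confirmed API):

```python
from autonomy import Agent, Model  # import path is an assumption

# Pass a model identifier directly to Agent.start()...
agent = Agent.start(model="claude-sonnet-4-v1")

# ...or construct a Model() when you need finer control.
agent = Agent.start(model=Model("claude-sonnet-4-v1"))
```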
Supported Models
Autonomy supports a wide range of models from different providers:

| Model | Provider | Description |
|---|---|---|
| claude-opus-4-v1 | Anthropic | Most capable model for complex tasks |
| claude-sonnet-4-v1 | Anthropic | Balanced performance and cost |
| nova-premier-v1 | Amazon | High-performance model from AWS |
| deepseek-r1 | DeepSeek | Advanced reasoning capabilities |
| nova-pro-v1 | Amazon | Professional-grade AWS model |
| llama4-maverick | Meta | High-performance Llama 4 variant |
| llama4-scout | Meta | Efficient Llama 4 variant |
| nova-lite-v1 | Amazon | Lightweight and cost-effective AWS model |
| nova-micro-v1 | Amazon | Ultra-lightweight, most cost-effective option |
Parameters
The Model() constructor accepts additional parameters that control the model’s behavior:
- temperature: The sampling temperature, between 0 and 2. Higher values like 0.8 produce more random output, while lower values like 0.2 make output more focused and deterministic.
- top_p: An alternative to sampling with temperature, also known as nucleus sampling: the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens comprising the top 10% probability mass are considered.
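To make the top_p description concrete, here is a plain-Python illustration (no SDK required) of nucleus sampling: keep the smallest set of highest-probability tokens whose cumulative probability reaches top_p, then renormalize.

```python
def nucleus(probs, top_p):
    """Return the renormalized distribution over the top_p nucleus.

    probs: dict mapping token -> probability (should sum to ~1).
    """
    kept = {}
    total = 0.0
    # Walk tokens from most to least probable, accumulating mass.
    for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = p
        total += p
        if total >= top_p:
            break
    # Renormalize so the kept probabilities sum to 1.
    return {token: p / total for token, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "this": 0.05}
print(nucleus(probs, 0.1))  # -> {'the': 1.0}: only the top token survives
print(nucleus(probs, 0.9))  # "the", "a", and "an" survive, renormalized
```

With top_p=0.1 only the single most likely token is ever sampled, which is why low values behave almost deterministically.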
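As a sketch of passing these parameters, assuming the constructor takes the model identifier positionally and the parameter names above as keywords (the import path and signature are assumptions):

```python
from autonomy import Model  # import path is an assumption

# Lower temperature for focused, deterministic output.
model = Model("claude-sonnet-4-v1", temperature=0.2)

# Or restrict sampling to the top 10% probability mass.
model = Model("claude-sonnet-4-v1", top_p=0.1)
```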
Invoke Models Directly
For simple use cases, you can also invoke models directly. This is useful when you need to make one-off completions without the full features of an agent.
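A sketch of a one-off completion; the invocation method name is an assumption (the SDK may call it invoke(), complete(), or similar), as is the import path:

```python
from autonomy import Model  # import path is an assumption

model = Model("nova-lite-v1")
# Method name is an assumption; check the Autonomy API reference.
response = model.invoke("Summarize this paragraph in one sentence: ...")
print(response)
```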
Streaming Responses
You can also stream responses from models by setting stream=True:
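A sketch of streaming, assuming the same hypothetical invoke() method and that stream=True returns an iterable of text chunks (both assumptions about the SDK):

```python
from autonomy import Model  # import path is an assumption

model = Model("claude-sonnet-4-v1")
# With stream=True, the response arrives incrementally; the chunk
# interface (plain-text iteration) is an assumption.
for chunk in model.invoke("Tell me a short story.", stream=True):
    print(chunk, end="", flush=True)
```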

