---
sidebar_position: 10
---
This provider supports the Anthropic Claude series of models.
Note: Anthropic models can also be accessed through Amazon Bedrock. For information on using Anthropic models via Bedrock, please refer to our AWS Bedrock documentation.
To use Anthropic, you need to set the `ANTHROPIC_API_KEY` environment variable or specify the `apiKey` in the provider configuration.
Create Anthropic API keys in the Anthropic Console.
Example of setting the environment variable:

```sh
export ANTHROPIC_API_KEY=your_api_key_here
```
| Config Property | Environment Variable | Description |
| --- | --- | --- |
| `apiKey` | `ANTHROPIC_API_KEY` | Your API key from Anthropic |
| `apiBaseUrl` | `ANTHROPIC_BASE_URL` | The base URL for requests to the Anthropic API |
| `temperature` | `ANTHROPIC_TEMPERATURE` | Controls the randomness of the output (default: 0) |
| `max_tokens` | `ANTHROPIC_MAX_TOKENS` | The maximum length of the generated text (default: 1024) |
| `top_p` | - | Controls nucleus sampling, affecting the randomness of the output |
| `top_k` | - | Only sample from the top K options for each subsequent token |
| `tools` | - | An array of tool or function definitions for the model to call |
| `tool_choice` | - | An object specifying the tool to call |
| `headers` | - | Additional headers to be sent with the API request |
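As a sketch, several of these properties can be combined in a single provider entry. The values below are placeholders for illustration, not recommended settings:

```yaml
providers:
  - id: anthropic:messages:claude-3-5-sonnet-20241022
    config:
      apiKey: your_api_key_here # or rely on ANTHROPIC_API_KEY instead
      temperature: 0.5
      max_tokens: 256
      top_p: 0.9
      headers:
        X-My-Header: example-value # hypothetical custom header
```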
The messages API supports all the latest Anthropic models. The `anthropic` provider supports the following models via the messages API:
- `anthropic:messages:claude-3-5-sonnet-20241022`
- `anthropic:messages:claude-3-5-sonnet-20240620`
- `anthropic:messages:claude-3-5-haiku-20241022`
- `anthropic:messages:claude-3-opus-20240229`
- `anthropic:messages:claude-3-sonnet-20240229`
- `anthropic:messages:claude-3-haiku-20240307`
- `anthropic:messages:claude-2.0`
- `anthropic:messages:claude-2.1`
- `anthropic:messages:claude-instant-1.2`
To allow for compatibility with the OpenAI prompt template, the following format is supported:

Example `prompt.json`:

```json
[
  {
    "role": "system",
    "content": "{{ system_message }}"
  },
  {
    "role": "user",
    "content": "{{ question }}"
  }
]
```
If the role `system` is specified, it will be automatically added to the API request. All `user` or `assistant` roles will be automatically converted into the right format for the API request. Currently, only type `text` is supported.

The `system_message` and `question` are example variables that can be set with the `var` directive.
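For instance, the template variables above could be filled in from a test case like this (the values are purely illustrative):

```yaml
tests:
  - vars:
      system_message: You are a concise assistant.
      question: What is the capital of France?
```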
The Anthropic provider supports several options to customize the behavior of the model. These include:

- `temperature`: Controls the randomness of the output.
- `max_tokens`: The maximum length of the generated text.
- `top_p`: Controls nucleus sampling, affecting the randomness of the output.
- `top_k`: Only sample from the top K options for each subsequent token.
- `tools`: An array of tool or function definitions for the model to call.
- `tool_choice`: An object specifying the tool to call.

Example configuration with options and prompts:
```yaml
providers:
  - id: anthropic:messages:claude-3-5-sonnet-20241022
    config:
      temperature: 0.0
      max_tokens: 512
prompts:
  - file://prompt.json
```
The Anthropic provider supports tool use (or function calling). Here's an example configuration for defining tools:
```yaml
providers:
  - id: anthropic:messages:claude-3-5-sonnet-20241022
    config:
      tools:
        - name: get_weather
          description: Get the current weather in a given location
          input_schema:
            type: object
            properties:
              location:
                type: string
                description: The city and state, e.g., San Francisco, CA
              unit:
                type: string
                enum:
                  - celsius
                  - fahrenheit
            required:
              - location
```
See the Anthropic Tool Use Guide for more information on defining tools, and the tool use example for a complete configuration.
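If you want the model to call a specific tool rather than choose on its own, `tool_choice` can be set alongside the tool definitions. This sketch uses the Anthropic API's `{type: tool, name: ...}` shape with the `get_weather` tool from the example above:

```yaml
providers:
  - id: anthropic:messages:claude-3-5-sonnet-20241022
    config:
      tools:
        # ... tool definitions as above
      tool_choice:
        type: tool
        name: get_weather
```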
You can include images in prompts to Claude 3 models.
See the Claude vision example.
One important note: The Claude API only supports base64 representations of images. This is different from how OpenAI's vision works, as it supports grabbing images from a URL. As a result, if you are trying to compare Claude 3 and OpenAI vision capabilities, you will need to have separate prompts for each.
See the OpenAI vision example to understand the differences.
The completions API is deprecated. See the migration guide here.

The `anthropic` provider supports the following models:
- `anthropic:completion:claude-1`
- `anthropic:completion:claude-1-100k`
- `anthropic:completion:claude-instant-1`
- `anthropic:completion:claude-instant-1-100k`
- `anthropic:completion:<insert any other supported model name here>`
Supported environment variables:

- `ANTHROPIC_API_KEY` - required
- `ANTHROPIC_STOP` - stopwords, must be a valid JSON string
- `ANTHROPIC_MAX_TOKENS` - maximum number of tokens to sample, defaults to 1024
- `ANTHROPIC_TEMPERATURE` - temperature

Config parameters may also be passed like this:
```yaml
providers:
  - id: anthropic:completion:claude-1
    prompts: chat_prompt
    config:
      temperature: 0
```
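The environment variables listed above can be exported like this. The stop sequence shown is just an illustration; note that `ANTHROPIC_STOP` must parse as JSON:

```sh
export ANTHROPIC_API_KEY=your_api_key_here
# Must be a valid JSON string (here, a JSON array of stopwords):
export ANTHROPIC_STOP='["\n\nHuman:"]'
export ANTHROPIC_MAX_TOKENS=256
```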
Model-graded assertions such as `factuality` or `llm-rubric` use OpenAI by default and expect `OPENAI_API_KEY` as an environment variable. If you are using Anthropic, you may override the grader to point to a different provider.

Because of how model-graded evals are implemented, the model must support chat-formatted prompts (except for embedding or classification models).

The easiest way to do this for all your test cases is to add the `defaultTest` property to your config:
```yaml
defaultTest:
  options:
    provider:
      id: anthropic:messages:claude-3-5-sonnet-20241022
      config:
        # optional provider config options
```
However, you can also do this for individual assertions:
```yaml
# ...
assert:
  - type: llm-rubric
    value: Do not mention that you are an AI or chat assistant
    provider:
      id: anthropic:messages:claude-3-5-sonnet-20241022
      config:
        # optional provider config options
```
Or individual tests:
```yaml
# ...
tests:
  - vars:
      # ...
    options:
      provider:
        id: anthropic:messages:claude-3-5-sonnet-20241022
        config:
          # optional provider config options
    assert:
      - type: llm-rubric
        value: Do not mention that you are an AI or chat assistant
```