Anthropic Client API Reference

Direct Anthropic API client for Claude models. This client implements the BaseLLMClient interface using httpx to communicate directly with the Anthropic API.

Overview

The Anthropic client provides:

  • Direct HTTP communication with Anthropic's API
  • An implementation of the BaseLLMClient abstract interface
  • JSON response parsing with error handling
  • Call counting for usage tracking
  • Configurable timeout and retry settings
  • Proper handling of Anthropic's system prompt format

Usage

from causaliq_knowledge.llm import AnthropicClient, AnthropicConfig

# Create client with custom config
config = AnthropicConfig(
    model="claude-sonnet-4-20250514",
    temperature=0.1,
    max_tokens=500,
)
client = AnthropicClient(config=config)

# Make a completion request
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"},
]
response = client.completion(messages)
print(response.content)

# Parse JSON response
json_data = response.parse_json()

Environment Variables

The Anthropic client requires the ANTHROPIC_API_KEY environment variable to be set:

export ANTHROPIC_API_KEY=your_api_key_here
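Alternatively, the key can be passed directly when constructing the config; the environment variable is only consulted as a fallback:

from causaliq_knowledge.llm import AnthropicClient, AnthropicConfig

# Explicit key: ANTHROPIC_API_KEY is not consulted when api_key is set
config = AnthropicConfig(api_key="sk-ant-...")
client = AnthropicClient(config=config)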

AnthropicConfig

AnthropicConfig dataclass

AnthropicConfig(
    model: str = "claude-sonnet-4-20250514",
    temperature: float = 0.1,
    max_tokens: int = 500,
    timeout: float = 30.0,
    api_key: Optional[str] = None,
)

Configuration for Anthropic API client.

Extends LLMConfig with Anthropic-specific defaults.

Attributes:

  • model (str) –

    Anthropic model identifier (default: claude-sonnet-4-20250514).

  • temperature (float) –

    Sampling temperature (default: 0.1).

  • max_tokens (int) –

    Maximum response tokens (default: 500).

  • timeout (float) –

    Request timeout in seconds (default: 30.0).

  • api_key (Optional[str]) –

    Anthropic API key (falls back to ANTHROPIC_API_KEY env var).

Methods:

  • __post_init__

    Set API key from environment if not provided.

api_key class-attribute instance-attribute

api_key: Optional[str] = None

max_tokens class-attribute instance-attribute

max_tokens: int = 500

model class-attribute instance-attribute

model: str = 'claude-sonnet-4-20250514'

temperature class-attribute instance-attribute

temperature: float = 0.1

timeout class-attribute instance-attribute

timeout: float = 30.0

__post_init__

__post_init__() -> None

Set API key from environment if not provided.
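A minimal illustration of the fallback behaviour:

import os

from causaliq_knowledge.llm import AnthropicConfig

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # normally set in the shell

config = AnthropicConfig()  # no api_key given
print(config.api_key)       # picked up from ANTHROPIC_API_KEY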

AnthropicClient

AnthropicClient

AnthropicClient(config: Optional[AnthropicConfig] = None)

Direct Anthropic API client.

Implements the BaseLLMClient interface for Anthropic's Claude API. Uses httpx for HTTP requests.

Example

config = AnthropicConfig(model="claude-sonnet-4-20250514")
client = AnthropicClient(config)
msgs = [{"role": "user", "content": "Hello"}]
response = client.completion(msgs)
print(response.content)

Parameters:

  • config

    (Optional[AnthropicConfig], default: None ) –

    Anthropic configuration. If None, uses defaults with API key from ANTHROPIC_API_KEY environment variable.

Methods:

  • _build_cache_key

    Build a deterministic cache key for the request.

  • cached_completion

    Make a completion request with caching.

  • complete_json

    Make a completion request and parse response as JSON.

  • completion

    Make a chat completion request to Anthropic.

  • is_available

    Check if Anthropic API is available.

  • list_models

    List available Claude models from Anthropic API.

  • set_cache

    Configure caching for this client.

Attributes:

  • API_VERSION

    Anthropic API version sent with each request.

  • BASE_URL

    Base URL for the Anthropic API.

  • cache

    The configured cache, if any.

  • call_count

    The number of API calls made.

  • config

    The active AnthropicConfig.

  • model_name

    The model name being used.

  • provider_name

    The provider name.

  • use_cache

    Whether caching is enabled.

API_VERSION class-attribute instance-attribute

API_VERSION = '2023-06-01'

BASE_URL class-attribute instance-attribute

BASE_URL = 'https://api.anthropic.com/v1'

_total_calls instance-attribute

_total_calls = 0

cache property

cache: Optional['TokenCache']

Return the configured cache, if any.

call_count property

call_count: int

Return the number of API calls made.

config instance-attribute

config = config or AnthropicConfig()

model_name property

model_name: str

Return the model name being used.

Returns:

  • str

    Model identifier string.

provider_name property

provider_name: str

Return the provider name.

use_cache property

use_cache: bool

Return whether caching is enabled.

_build_cache_key

_build_cache_key(
    messages: List[Dict[str, str]],
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
) -> str

Build a deterministic cache key for the request.

Creates a SHA-256 hash from the model, messages, temperature, and max_tokens. The hash is truncated to 16 hex characters (64 bits).

Parameters:

  • messages
    (List[Dict[str, str]]) –

    List of message dicts with "role" and "content" keys.

  • temperature
    (Optional[float], default: None ) –

    Sampling temperature (defaults to config value).

  • max_tokens
    (Optional[int], default: None ) –

    Maximum tokens (defaults to config value).

Returns:

  • str

    16-character hex string cache key.
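The exact serialization is an internal detail; the following sketch illustrates the documented scheme, with the JSON encoding and field ordering chosen purely for illustration:

import hashlib
import json

def build_cache_key_sketch(model, messages, temperature, max_tokens):
    # Serialize the request deterministically (sorted keys, fixed
    # separators) so identical requests always produce the same key.
    payload = json.dumps(
        {
            "model": model,
            "messages": messages,
            "temperature": temperature,
            "max_tokens": max_tokens,
        },
        sort_keys=True,
        separators=(",", ":"),
    )
    # SHA-256 digest, truncated to 16 hex characters (64 bits).
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]

print(build_cache_key_sketch(
    "claude-sonnet-4-20250514",
    [{"role": "user", "content": "What is 2+2?"}],
    0.1,
    500,
))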

cached_completion

cached_completion(messages: List[Dict[str, str]], **kwargs: Any) -> LLMResponse

Make a completion request with caching.

If caching is enabled and a cached response exists, returns the cached response without making an API call. Otherwise, makes the API call and caches the result.

Parameters:

  • messages
    (List[Dict[str, str]]) –

    List of message dicts with "role" and "content" keys.

  • **kwargs
    (Any, default: {} ) –

    Provider-specific options (temperature, max_tokens, etc.).

Returns:

  • LLMResponse

    LLMResponse with the generated content and metadata.

complete_json

complete_json(
    messages: List[Dict[str, str]], **kwargs: Any
) -> tuple[Optional[Dict[str, Any]], LLMResponse]

Make a completion request and parse response as JSON.

Parameters:

  • messages
    (List[Dict[str, str]]) –

    List of message dicts with "role" and "content" keys.

  • **kwargs
    (Any, default: {} ) –

    Override config options passed to completion().

Returns:

  • tuple[Optional[Dict[str, Any]], LLMResponse]

    Tuple of (parsed JSON dict or None, raw LLMResponse).
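A usage sketch; the first tuple element is None when the reply is not valid JSON, so check it before use:

from causaliq_knowledge.llm import AnthropicClient

client = AnthropicClient()
messages = [
    {"role": "system", "content": "Reply only with a JSON object."},
    {"role": "user", "content": "What is 2+2? Use the key 'answer'."},
]
data, response = client.complete_json(messages)
if data is None:
    print("Model reply was not valid JSON:", response.content)
else:
    print(data["answer"])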

completion

completion(messages: List[Dict[str, str]], **kwargs: Any) -> LLMResponse

Make a chat completion request to Anthropic.

Parameters:

  • messages
    (List[Dict[str, str]]) –

    List of message dicts with "role" and "content" keys.

  • **kwargs
    (Any, default: {} ) –

    Override config options (temperature, max_tokens).

Returns:

  • LLMResponse

    LLMResponse with the generated content and metadata.

Raises:

  • ValueError

    If the API request fails.
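Since a failed request surfaces as ValueError, callers can degrade gracefully by wrapping the call:

from causaliq_knowledge.llm import AnthropicClient

client = AnthropicClient()
try:
    response = client.completion(
        [{"role": "user", "content": "Hello"}],
        temperature=0.0,  # per-call override of the config value
    )
    print(response.content)
except ValueError as exc:
    print(f"Anthropic request failed: {exc}")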

is_available

is_available() -> bool

Check if Anthropic API is available.

Returns:

  • bool

    True if ANTHROPIC_API_KEY is configured.
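This enables a cheap pre-flight check before issuing any requests:

from causaliq_knowledge.llm import AnthropicClient

client = AnthropicClient()
if not client.is_available():
    raise SystemExit("Set ANTHROPIC_API_KEY before running this script.")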

list_models

list_models() -> List[str]

List available Claude models from Anthropic API.

Queries the Anthropic /v1/models endpoint to get available models.

Returns:

  • List[str]

    List of model identifiers (e.g., ['claude-sonnet-4-20250514', ...]).
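For example, to inspect which Claude models the account can access:

from causaliq_knowledge.llm import AnthropicClient

client = AnthropicClient()
for model_id in client.list_models():
    print(model_id)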

set_cache

set_cache(cache: Optional['TokenCache'], use_cache: bool = True) -> None

Configure caching for this client.

Parameters:

  • cache
    (Optional['TokenCache']) –

    TokenCache instance for caching, or None to disable.

  • use_cache
    (bool, default: True ) –

    Whether to use the cache (default True).
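A sketch of wiring the cache into a client. The import path and no-argument constructor of TokenCache are assumptions here; check the cache documentation for the real ones:

from causaliq_knowledge.llm import AnthropicClient, TokenCache  # TokenCache path assumed

client = AnthropicClient()
client.set_cache(TokenCache(), use_cache=True)

messages = [{"role": "user", "content": "What is 2+2?"}]
first = client.cached_completion(messages)   # cache miss: calls the API
second = client.cached_completion(messages)  # cache hit: no API call
print(client.call_count)  # expected: 1, since the second call was cached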

Supported Models

Anthropic provides the Claude family of models:

Model                       Description                              Free Tier
claude-sonnet-4-20250514    Claude Sonnet 4 - balanced performance   No
claude-opus-4-20250514      Claude Opus 4 - highest capability       No
claude-3-5-haiku-latest     Claude 3.5 Haiku - fast and efficient    No

See Anthropic documentation for the full list of available models.