# Anthropic Client API Reference
Direct Anthropic API client for Claude models. This client implements the `BaseLLMClient` interface, using httpx to communicate directly with the Anthropic API.
## Overview
The Anthropic client provides:
- Direct HTTP communication with Anthropic's API
- Implements the `BaseLLMClient` abstract interface
- JSON response parsing with error handling
- Call counting for usage tracking
- Configurable timeout and retry settings
- Proper handling of Anthropic's system prompt format (see the sketch after this list)
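The last point matters because Anthropic's Messages API expects the system prompt as a top-level `system` field rather than as a `"system"` role message. A minimal sketch of that transformation (the helper name is illustrative, not part of this client's public API):

```python
from typing import Dict, List, Tuple


def split_system_prompt(
    messages: List[Dict[str, str]],
) -> Tuple[str, List[Dict[str, str]]]:
    """Separate system messages from chat messages for Anthropic's API."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    chat = [m for m in messages if m["role"] != "system"]
    return "\n".join(system_parts), chat


system, chat_messages = split_system_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"},
])
# The request body then carries the system text at the top level:
# {"model": ..., "system": system, "messages": chat_messages, ...}
```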
## Usage

```python
from causaliq_knowledge.llm import AnthropicClient, AnthropicConfig

# Create client with custom config
config = AnthropicConfig(
    model="claude-sonnet-4-20250514",
    temperature=0.1,
    max_tokens=500,
)
client = AnthropicClient(config=config)

# Make a completion request
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"},
]
response = client.completion(messages)
print(response.content)

# Parse JSON response
json_data = response.parse_json()
```
## Environment Variables
The Anthropic client requires the `ANTHROPIC_API_KEY` environment variable to be set, for example:
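```bash
export ANTHROPIC_API_KEY="sk-ant-..."  # replace with your actual key
```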
## AnthropicConfig

### AnthropicConfig (dataclass)

```python
AnthropicConfig(
    model: str = "claude-sonnet-4-20250514",
    temperature: float = 0.1,
    max_tokens: int = 500,
    timeout: float = 30.0,
    api_key: Optional[str] = None,
)
```

Configuration for the Anthropic API client. Extends `LLMConfig` with Anthropic-specific defaults.

Attributes:

- `model` (str) – Anthropic model identifier (default: `claude-sonnet-4-20250514`).
- `temperature` (float) – Sampling temperature (default: 0.1).
- `max_tokens` (int) – Maximum response tokens (default: 500).
- `timeout` (float) – Request timeout in seconds (default: 30.0).
- `api_key` (Optional[str]) – Anthropic API key (falls back to the `ANTHROPIC_API_KEY` environment variable).

Methods:

- `__post_init__` – Set API key from environment if not provided.
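For instance, omitting `api_key` lets `__post_init__` fall back to the environment; a usage sketch based on the attribute descriptions above:

```python
import os

from causaliq_knowledge.llm import AnthropicConfig

os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # normally set in the shell

# No api_key argument: __post_init__ picks it up from ANTHROPIC_API_KEY.
config = AnthropicConfig(temperature=0.0, timeout=60.0)

# An explicit key takes precedence over the environment variable.
explicit = AnthropicConfig(api_key="sk-ant-another-key")
```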
## AnthropicClient

### AnthropicClient

```python
AnthropicClient(config: Optional[AnthropicConfig] = None)
```

Direct Anthropic API client.

Implements the `BaseLLMClient` interface for Anthropic's Claude API. Uses httpx for HTTP requests.

Example:

```python
config = AnthropicConfig(model="claude-sonnet-4-20250514")
client = AnthropicClient(config)
msgs = [{"role": "user", "content": "Hello"}]
response = client.completion(msgs)
print(response.content)
```

Parameters:

- `config` (Optional[AnthropicConfig], default: None) – Anthropic configuration. If None, uses defaults with the API key from the `ANTHROPIC_API_KEY` environment variable.
Methods:

- `_build_cache_key` – Build a deterministic cache key for the request.
- `cached_completion` – Make a completion request with caching.
- `complete_json` – Make a completion request and parse response as JSON.
- `completion` – Make a chat completion request to Anthropic.
- `is_available` – Check if Anthropic API is available.
- `list_models` – List available Claude models from Anthropic API.
- `set_cache` – Configure caching for this client.

Attributes:

- `API_VERSION`
- `BASE_URL`
- `_total_calls`
- `cache` (Optional['TokenCache']) – Return the configured cache, if any.
- `call_count` (int) – Return the number of API calls made.
- `config`
- `model_name` (str) – Return the model name being used.
- `provider_name` (str) – Return the provider name.
- `use_cache` (bool) – Return whether caching is enabled.
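A brief sketch of the read-only properties in use (the exact `provider_name` string is an assumption, not confirmed on this page):

```python
from causaliq_knowledge.llm import AnthropicClient

client = AnthropicClient()
print(client.model_name)     # "claude-sonnet-4-20250514" by default
print(client.provider_name)  # presumably "anthropic"
print(client.use_cache)      # False until a cache is configured
print(client.call_count)     # 0 before any requests are made
```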
### model_name (property)

Return the model name being used.

Returns:

- str – Model identifier string.
### _build_cache_key

```python
_build_cache_key(
    messages: List[Dict[str, str]],
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
) -> str
```

Build a deterministic cache key for the request.

Creates a SHA-256 hash from the model, messages, temperature, and max_tokens. The hash is truncated to 16 hex characters (64 bits).

Parameters:

- `messages` (List[Dict[str, str]]) – List of message dicts with "role" and "content" keys.
- `temperature` (Optional[float], default: None) – Sampling temperature (defaults to config value).
- `max_tokens` (Optional[int], default: None) – Maximum tokens (defaults to config value).

Returns:

- str – 16-character hex string cache key.
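A minimal sketch of how such a key can be derived; the exact serialisation is an assumption, and only the SHA-256 hash and 16-hex-character truncation are documented above:

```python
import hashlib
import json
from typing import Dict, List


def build_cache_key(
    model: str,
    messages: List[Dict[str, str]],
    temperature: float,
    max_tokens: int,
) -> str:
    """Hash the request parameters into a short deterministic key."""
    payload = json.dumps(
        {
            "model": model,
            "messages": messages,
            "temperature": temperature,
            "max_tokens": max_tokens,
        },
        sort_keys=True,  # deterministic ordering for identical requests
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]


key = build_cache_key(
    "claude-sonnet-4-20250514",
    [{"role": "user", "content": "What is 2+2?"}],
    temperature=0.1,
    max_tokens=500,
)
print(key)  # a 16-character hex string
```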
### cached_completion

```python
cached_completion(messages: List[Dict[str, str]], **kwargs: Any) -> LLMResponse
```

Make a completion request with caching.

If caching is enabled and a cached response exists, returns the cached response without making an API call. Otherwise, makes the API call and caches the result.

Parameters:

- `messages` (List[Dict[str, str]]) – List of message dicts with "role" and "content" keys.
- `**kwargs` (Any, default: {}) – Provider-specific options (temperature, max_tokens, etc.).

Returns:

- LLMResponse – LLMResponse with the generated content and metadata.
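A usage sketch of the behaviour described above, assuming a client whose cache has already been configured via `set_cache` (the `set_cache` signature is not documented on this page):

```python
messages = [{"role": "user", "content": "What is 2+2?"}]

first = client.cached_completion(messages)   # hits the API, caches result
second = client.cached_completion(messages)  # identical request: cache hit

assert first.content == second.content
print(client.call_count)  # presumably 1, since the second call skipped the API
```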
### complete_json

```python
complete_json(
    messages: List[Dict[str, str]], **kwargs: Any
) -> tuple[Optional[Dict[str, Any]], LLMResponse]
```

Make a completion request and parse response as JSON.

Parameters:

- `messages` (List[Dict[str, str]]) – List of message dicts with "role" and "content" keys.
- `**kwargs` (Any, default: {}) – Override config options passed to completion().

Returns:

- tuple[Optional[Dict[str, Any]], LLMResponse] – Tuple of (parsed JSON dict or None, raw LLMResponse).
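Because the parsed dict is `None` when the response is not valid JSON, callers should check it before use. A usage sketch with the client from the Usage section:

```python
data, raw = client.complete_json(
    [{"role": "user", "content": 'Reply with JSON: {"answer": <sum of 2+2>}'}]
)

if data is not None:
    print(data["answer"])
else:
    # Parsing failed; the raw response is still available for inspection.
    print("Not valid JSON:", raw.content)
```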
### completion

```python
completion(messages: List[Dict[str, str]], **kwargs: Any) -> LLMResponse
```

Make a chat completion request to Anthropic.

Parameters:

- `messages` (List[Dict[str, str]]) – List of message dicts with "role" and "content" keys.
- `**kwargs` (Any, default: {}) – Override config options (temperature, max_tokens).

Returns:

- LLMResponse – LLMResponse with the generated content and metadata.

Raises:

- ValueError – If the API request fails.
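Since a failed request raises `ValueError`, calls can be guarded accordingly; a usage sketch:

```python
try:
    response = client.completion(
        [{"role": "user", "content": "What is 2+2?"}],
        temperature=0.0,  # per-call override of the config default
        max_tokens=100,
    )
    print(response.content)
except ValueError as exc:
    print(f"Anthropic API request failed: {exc}")
```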
### is_available

Check if Anthropic API is available.

Returns:

- bool – True if ANTHROPIC_API_KEY is configured.
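This makes a convenient guard before issuing any requests:

```python
client = AnthropicClient()

if client.is_available():
    print(client.completion([{"role": "user", "content": "Hello"}]).content)
else:
    print("Set ANTHROPIC_API_KEY to use the Anthropic client.")
```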
### list_models

List available Claude models from Anthropic API.

Queries the Anthropic /v1/models endpoint to get available models.

Returns:

- List[str] – List of model identifiers (e.g., ['claude-sonnet-4-20250514', ...]).
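For example, to verify that the configured model is still offered (a usage sketch):

```python
available = client.list_models()
print(available)  # e.g. ['claude-sonnet-4-20250514', ...]

if client.model_name not in available:
    print(f"Warning: {client.model_name} is not in the live model list.")
```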
## Supported Models

Anthropic provides the Claude family of models:

| Model | Description | Free Tier |
|---|---|---|
| `claude-sonnet-4-20250514` | Claude Sonnet 4 - balanced performance | ❌ No |
| `claude-opus-4-20250514` | Claude Opus 4 - highest capability | ❌ No |
| `claude-3-5-haiku-latest` | Claude 3.5 Haiku - fast and efficient | ❌ No |

See the Anthropic documentation for the full list of available models.