# DeepSeek Client
Direct DeepSeek API client for DeepSeek-V3 and DeepSeek-R1 models.
## Overview
DeepSeek is a Chinese AI company known for highly capable models at competitive prices. Their API is OpenAI-compatible, making integration straightforward.
Key features:

- DeepSeek-V3 (`deepseek-chat`): general-purpose chat model with excellent performance
- DeepSeek-R1 (`deepseek-reasoner`): advanced reasoning model that rivals OpenAI o1 at a much lower cost
- Very competitive pricing (~$0.14 per 1M input tokens for chat)
- OpenAI-compatible API (see the sketch below)
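Because the API is OpenAI-compatible, a quick way to sanity-check connectivity is to point the stock `openai` Python SDK at DeepSeek's endpoint. A minimal sketch (base URL per DeepSeek's public docs; the rest of this page uses the dedicated `DeepSeekClient` instead):

```python
import os

from openai import OpenAI  # the stock OpenAI SDK, not part of causaliq_knowledge

# DeepSeek's OpenAI-compatible endpoint lives at api.deepseek.com.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)
print(response.choices[0].message.content)
```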
## Configuration
The client requires a `DEEPSEEK_API_KEY` environment variable:

```bash
# Linux/macOS
export DEEPSEEK_API_KEY="your-api-key"

# Windows PowerShell
$env:DEEPSEEK_API_KEY="your-api-key"

# Windows cmd
set DEEPSEEK_API_KEY=your-api-key
```
Get your API key from: https://platform.deepseek.com
## Usage
### Basic Usage
```python
from causaliq_knowledge.llm import DeepSeekClient, DeepSeekConfig

# Default config (uses DEEPSEEK_API_KEY env var)
client = DeepSeekClient()

# Or with custom config
config = DeepSeekConfig(
    model="deepseek-chat",
    temperature=0.1,
    max_tokens=500,
    timeout=30.0,
)
client = DeepSeekClient(config)

# Make a completion request
messages = [{"role": "user", "content": "What is 2 + 2?"}]
response = client.completion(messages)
print(response.content)
```
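Per-call keyword arguments override the config for a single request; `completion()` accepts options such as `temperature` and `max_tokens` as `**kwargs` (see the API reference below). Continuing the example above:

```python
# Override config options for this call only; the client's config is unchanged.
response = client.completion(
    messages,
    temperature=0.0,  # deterministic output for this request
    max_tokens=100,
)
print(response.content)
```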
### Using with CLI
```bash
# Query with DeepSeek
cqknow query smoking lung_cancer --model deepseek/deepseek-chat

# Use reasoning model for complex queries
cqknow query income education --model deepseek/deepseek-reasoner --domain economics

# List available DeepSeek models
cqknow models deepseek
```
### Using with LLMKnowledge Provider
```python
from causaliq_knowledge.llm import LLMKnowledge

# Single model
provider = LLMKnowledge(models=["deepseek/deepseek-chat"])
result = provider.query_edge("smoking", "lung_cancer")

# Multi-model consensus
provider = LLMKnowledge(
    models=[
        "deepseek/deepseek-chat",
        "groq/llama-3.1-8b-instant",
    ],
    consensus_strategy="weighted_vote",
)
```
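If the DeepSeek key may be absent (for example in CI), the model list can be gated on the client's `is_available()` check, which only verifies that a key is configured. A minimal sketch:

```python
from causaliq_knowledge.llm import DeepSeekClient, LLMKnowledge

# Fall back to Groq alone when no DeepSeek key is configured.
models = ["groq/llama-3.1-8b-instant"]
if DeepSeekClient().is_available():
    models.insert(0, "deepseek/deepseek-chat")

provider = LLMKnowledge(models=models)
```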
## Available Models
| Model | Description | Best For |
|---|---|---|
| `deepseek-chat` | DeepSeek-V3 general purpose | Fast, general queries |
| `deepseek-reasoner` | DeepSeek-R1 reasoning model | Complex reasoning tasks |
## Pricing
DeepSeek offers very competitive pricing (as of Jan 2025):
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| `deepseek-chat` | $0.14 | $0.28 |
| `deepseek-reasoner` | $0.55 | $2.19 |
Note: Cache hits are billed at an even lower rate; see the DeepSeek pricing page for details.
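As a rough worked example of the list prices above (cache discounts ignored), estimating the cost of a batch of queries is simple arithmetic:

```python
# Rough cost estimate for deepseek-chat at the Jan 2025 list prices above.
INPUT_USD_PER_M = 0.14   # USD per 1M input tokens
OUTPUT_USD_PER_M = 0.28  # USD per 1M output tokens

input_tokens = 1_000 * 400   # e.g. 1,000 queries at ~400 prompt tokens each
output_tokens = 1_000 * 150  # ~150 completion tokens each

cost = (input_tokens / 1e6) * INPUT_USD_PER_M \
    + (output_tokens / 1e6) * OUTPUT_USD_PER_M
print(f"Estimated cost: ${cost:.3f}")  # ~$0.098
```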
## API Reference
### DeepSeekConfig (dataclass)
```python
DeepSeekConfig(
    model: str = "deepseek-chat",
    temperature: float = 0.1,
    max_tokens: int = 500,
    timeout: float = 30.0,
    api_key: Optional[str] = None,
)
```
Configuration for DeepSeek API client.
Extends OpenAICompatConfig with DeepSeek-specific defaults.
Attributes:

- `model` (`str`) – DeepSeek model identifier (default: `deepseek-chat`).
- `temperature` (`float`) – Sampling temperature (default: 0.1).
- `max_tokens` (`int`) – Maximum response tokens (default: 500).
- `timeout` (`float`) – Request timeout in seconds (default: 30.0).
- `api_key` (`Optional[str]`) – DeepSeek API key (falls back to the `DEEPSEEK_API_KEY` env var).

Methods:

- `__post_init__` – Set API key from environment if not provided.
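A short sketch of the two ways to supply the key, an explicit argument versus the `__post_init__` environment fallback (key values are placeholders):

```python
from causaliq_knowledge.llm import DeepSeekConfig

# Explicit key: no environment lookup is needed.
explicit = DeepSeekConfig(api_key="your-api-key")

# api_key=None (the default): __post_init__ falls back to DEEPSEEK_API_KEY.
from_env = DeepSeekConfig()
```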
### DeepSeekClient
```python
DeepSeekClient(config: Optional[DeepSeekConfig] = None)
```
Direct DeepSeek API client.
DeepSeek uses an OpenAI-compatible API, making integration straightforward. Known for excellent reasoning capabilities (R1) at low cost.
Available models:

- `deepseek-chat`: General purpose (DeepSeek-V3)
- `deepseek-reasoner`: Advanced reasoning (DeepSeek-R1)
Example:

```python
config = DeepSeekConfig(model="deepseek-chat")
client = DeepSeekClient(config)
msgs = [{"role": "user", "content": "Hello"}]
response = client.completion(msgs)
print(response.content)
```
Parameters:

- `config` (`Optional[DeepSeekConfig]`, default: `None`) – DeepSeek configuration. If `None`, uses defaults with the API key from the `DEEPSEEK_API_KEY` environment variable.
Methods:

- `cached_completion` – Make a completion request with caching.
- `complete_json` – Make a completion request and parse the response as JSON.
- `completion` – Make a chat completion request.
- `is_available` – Check if the API is available.
- `list_models` – List available models from the API.
- `set_cache` – Configure caching for this client.

Attributes:

- `cache` (`Optional['TokenCache']`) – Return the configured cache, if any.
- `call_count` (`int`) – Return the number of API calls made.
- `model_name` (`str`) – Return the model name being used.
- `provider_name` (`str`) – Return the provider name.
- `use_cache` (`bool`) – Return whether caching is enabled.
#### model_name (property)

Return the model name being used.

Returns:

- `str` – Model identifier string.
#### cached_completion

```python
cached_completion(messages: List[Dict[str, str]], **kwargs: Any) -> LLMResponse
```
Make a completion request with caching.
If caching is enabled and a cached response exists, returns the cached response without making an API call. Otherwise, makes the API call and caches the result.
Parameters:

- `messages` (`List[Dict[str, str]]`) – List of message dicts with "role" and "content" keys.
- `**kwargs` (`Any`, default: `{}`) – Provider-specific options (temperature, max_tokens, etc.).

Returns:

- `LLMResponse` – LLMResponse with the generated content and metadata.
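A minimal caching sketch. `set_cache()`, `cached_completion()`, and `call_count` are the documented entry points; the `TokenCache` constructor arguments are not documented on this page, so that line is left as a placeholder:

```python
from causaliq_knowledge.llm import DeepSeekClient

client = DeepSeekClient()
# client.set_cache(TokenCache(...))  # construct per the cache module's docs

messages = [{"role": "user", "content": "What is 2 + 2?"}]
first = client.cached_completion(messages)   # API call; result is cached
second = client.cached_completion(messages)  # served from cache, no API call
print(client.call_count)  # expected to count only real API calls
```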
#### complete_json

```python
complete_json(
    messages: List[Dict[str, str]], **kwargs: Any
) -> tuple[Optional[Dict[str, Any]], LLMResponse]
```
Make a completion request and parse response as JSON.
Parameters:

- `messages` (`List[Dict[str, str]]`) – List of message dicts with "role" and "content" keys.
- `**kwargs` (`Any`, default: `{}`) – Override config options passed to `completion()`.

Returns:

- `tuple[Optional[Dict[str, Any]], LLMResponse]` – Tuple of (parsed JSON dict or None, raw LLMResponse).
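A short sketch of the two-value return, handling the `None` case when the reply is not valid JSON (prompt wording is illustrative):

```python
from causaliq_knowledge.llm import DeepSeekClient

client = DeepSeekClient()
messages = [
    {"role": "user", "content": 'Reply with JSON only: {"answer": 2 + 2}'}
]

parsed, raw = client.complete_json(messages)
if parsed is None:
    # The reply was not parseable JSON; fall back to the raw text.
    print("Unparseable response:", raw.content)
else:
    print(parsed["answer"])
```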
#### completion

```python
completion(messages: List[Dict[str, str]], **kwargs: Any) -> LLMResponse
```
Make a chat completion request.
Parameters:

- `messages` (`List[Dict[str, str]]`) – List of message dicts with "role" and "content" keys.
- `**kwargs` (`Any`, default: `{}`) – Override config options (temperature, max_tokens).

Returns:

- `LLMResponse` – LLMResponse with the generated content and metadata.

Raises:

- `ValueError` – If the API request fails.
#### is_available
Check if the API is available.
Returns:

- `bool` – True if the API key is configured.
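Since this only checks key configuration (no network probe), it makes a cheap pre-flight guard; a sketch:

```python
from causaliq_knowledge.llm import DeepSeekClient

client = DeepSeekClient()
if not client.is_available():
    raise SystemExit("Set DEEPSEEK_API_KEY before running this script.")
```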
#### list_models
List available models from the API.
Queries the API to get the models accessible with the current API key, then filters them using `_filter_models()`.
Returns:

- `List[str]` – List of model identifiers.

Raises:

- `ValueError` – If the API request fails.
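A sketch that tolerates the documented `ValueError` on request failure:

```python
from causaliq_knowledge.llm import DeepSeekClient

client = DeepSeekClient()
try:
    for model in client.list_models():
        print(model)
except ValueError as exc:
    print(f"Could not list DeepSeek models: {exc}")
```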