# CausalIQ Knowledge User Guide

## Overview
The causaliq-knowledge package is part of the CausalIQ ecosystem for intelligent causal discovery and inference. CausalIQ Knowledge provides the following capabilities for integrating LLM knowledge into causal discovery workflows:
| Capability | Pattern | Description |
|---|---|---|
| `generate-graph` | Create | Generate causal graphs from network context using LLMs |
## CLI-Only Utilities
The following commands are available only through the CLI and are not accessible as workflow actions:
| Command | Description |
|---|---|
| `list-models` | List available LLM models from configured providers |
| `cache-stats` | View LLM cache statistics and costs |
| `export-cache` | Export LLM cache entries to files |
| `import-cache` | Import LLM cache entries from files |
## Command Line, Workflow or Programmatic Access
As with all CausalIQ packages, users may access the capabilities of CausalIQ Knowledge in three ways:
- **Command Line Interface (CLI)** provides an easy introduction to generating graphs from the command line. It is primarily orientated towards generating a single graph and so gaining an initial understanding of how to use the capability.
- **CausalIQ Workflows** allow users to include CausalIQ Knowledge steps within workflows that combine learning graphs from data or LLMs, performing inference and analysing results, through to generating publication tables and charts.
- **Programmatic Access** using the Python API gives complete flexibility over the processing logic.
The CLI and workflow routes use the same command or action name respectively, which in turn matches the capability name in the table above. Parameters are named identically and, as far as practical, the capability behaves the same in the CLI and workflow interfaces.
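As a sketch of this parity, a workflow step references the same action name as the CLI command. The file names below are placeholders, and the fragment's exact shape is illustrative; `input` and `output` are the common workflow parameters described under Workflow Concepts:

```yaml
# Hypothetical workflow step: the action name matches the CLI
# command (cqknow generate-graph); file names are placeholders.
- action: generate-graph
  input: network-context.json
  output: generated-graph.json
```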
## Workflow Concepts
For common workflow concepts that apply across all CausalIQ packages, see the CausalIQ Workflow User Guide:
- Action patterns — Create, update, and aggregate patterns
- Common parameters — `input`, `output`, `filter` parameters
- Workflow caching — Result storage and conservative execution
- CLI usage — Execution modes (`run`, `dry-run`, `force`)
The individual action guides in this section document knowledge-specific parameters and behaviour for each capability.
## LLM Provider Setup
CausalIQ Knowledge uses direct vendor-specific API clients (not wrapper libraries) to communicate with LLM providers. This approach provides reliability and minimal dependencies. Currently supported:
| Provider | Environment Variable | Free Tier | Console URL |
|---|---|---|---|
| Groq | `GROQ_API_KEY` | Yes (fast) | console.groq.com |
| Google Gemini | `GEMINI_API_KEY` | Yes | aistudio.google.com |
| OpenAI | `OPENAI_API_KEY` | No | platform.openai.com |
| Anthropic | `ANTHROPIC_API_KEY` | No | console.anthropic.com |
| DeepSeek | `DEEPSEEK_API_KEY` | No | platform.deepseek.com |
| Mistral | `MISTRAL_API_KEY` | No | console.mistral.ai |
| Ollama | (local) | Yes | ollama.ai |
Use `cqknow list-models` to see which providers are configured and what models are available.
## Storing API Keys
Set environment variables for your chosen providers:
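On Linux or macOS, keys are exported in the shell; the values below are placeholders:

```shell
# Set keys only for the providers you use (placeholder values shown).
export GROQ_API_KEY="gsk_your_key_here"
export GEMINI_API_KEY="your_gemini_key_here"
```

On Windows PowerShell the equivalent is `$env:GROQ_API_KEY = "gsk_your_key_here"`. Adding the `export` lines to your shell profile (e.g. `~/.bashrc`) makes them persist across sessions.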
### Free Options
#### Groq (Recommended for Testing)
Groq offers a generous free tier with extremely fast inference:
- Sign up at console.groq.com
- Create an API key
- Set the `GROQ_API_KEY` environment variable
- Use in code:
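A shell-level sketch of the last two steps; the key value is a placeholder, and the documented `cqknow list-models` check is only run if the CLI is on your PATH:

```shell
# Export the Groq key (placeholder value shown).
export GROQ_API_KEY="gsk_your_key_here"

# Confirm Groq now appears as a configured provider.
if command -v cqknow >/dev/null; then
  cqknow list-models
fi
```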
#### Google Gemini (Free Tier)
Google offers free access to Gemini models:
- Sign up at aistudio.google.com/apikey
- Create an API key
- Set the `GEMINI_API_KEY` environment variable
- Use in code:
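A minimal sketch of the setup, with a sanity check that the exported key is inherited by child processes (such as `cqknow`); the key value is a placeholder:

```shell
# Export the key created at aistudio.google.com/apikey (placeholder value).
export GEMINI_API_KEY="your_gemini_key_here"

# Exported variables are visible to child processes such as cqknow.
sh -c 'test -n "$GEMINI_API_KEY"' && echo "GEMINI_API_KEY is visible"
# prints: GEMINI_API_KEY is visible
```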
#### Ollama (Local)
Run models locally with Ollama (no API key needed):
- Install Ollama from ollama.ai
- Pull a model: `ollama pull llama3`
- Use in commands with `-m ollama/llama3`
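A sketch of the local setup, using the `llama3` model from the steps above; the pull is skipped if the Ollama CLI is not installed:

```shell
# Pull a local model if Ollama is installed (no API key needed).
if command -v ollama >/dev/null; then
  ollama pull llama3
fi

# The provider-prefixed identifier then passed to cqknow via -m:
MODEL="ollama/llama3"
echo "$MODEL"
# prints: ollama/llama3
```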
## What's Next?
- Generate Graph — Generate causal graphs from network context using LLMs
- Network Context Format — Detailed specification for network context JSON files
- List Models — View available LLM models from configured providers
- LLM Cache Management — View, export, and import cached LLM responses