
CausalIQ Analysis CLI

This CLI provides commands for analysing and visualising learned causal graphs, including structural metrics, stability assessment, significance tests, and publication-ready tables and charts.

CLI entry point

cli

Command-line interface for causaliq-analysis.

Functions:

  • best_graph_cmd

    Extract the optimal DAG from a PDG using a greedy algorithm.

  • cli

    CausalIQ Analysis CLI - Tools for analysing and visualising causal graphs.

  • evaluate_graph_cmd

    Evaluate a learned graph against a ground truth reference.

  • main

    Entry point for the CLI.

  • merge_graphs_cmd

    Merge multiple graphs into a single PDG with edge probabilities.

  • migrate_trace_cmd

    Migrate legacy Trace pickle files to GraphML format.

  • summarise_cmd

    Summarise numerical metrics across experiments.

best_graph_cmd

best_graph_cmd(input: str, output: str, threshold: float) -> None

Extract the optimal DAG from a PDG using a greedy algorithm.

Reads a Probabilistic Dependency Graph (PDG) and extracts the best DAG by greedily selecting high-probability edges while avoiding cycles. Undirected probability is split equally between forward and backward directions.

For direction ties, alphabetical ordering is used (source -> target where source < target).

Creates output directory containing dag.graphml and _meta.json.
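
The greedy extraction described above can be sketched in a few lines of Python. This is an illustrative reimplementation under stated assumptions (edge probabilities supplied as a plain dict), not the package's actual code:

```python
def creates_cycle(edges, src, dst):
    """Return True if adding src -> dst would create a directed cycle."""
    # A cycle appears iff dst can already reach src through existing edges.
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(t for s, t in edges if s == node)
    return False

def best_dag(edge_probs, threshold=0.0):
    """Greedily select high-probability edges, skipping any that form cycles.

    edge_probs maps directed edges to probabilities, e.g.
    {("A", "B"): 0.7, ("B", "A"): 0.2}.
    """
    # Highest probability first; ties broken alphabetically so that
    # source < target wins, matching the direction-tie rule above.
    candidates = sorted(edge_probs.items(), key=lambda kv: (-kv[1], kv[0]))
    dag = []
    for (src, dst), prob in candidates:
        if prob < threshold:
            break
        if not creates_cycle(dag, src, dst):
            dag.append((src, dst))
    return dag
```

Because edges are considered in descending probability order, a lower-probability edge is dropped only when a higher-probability edge already committed the graph to the opposite reachability.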

Example

causaliq-analysis best-graph -i merged.graphml -o results/optimal

causaliq-analysis best-graph -i merged.graphml -o results/optimal \
    --threshold=0.5

cli

cli() -> None

CausalIQ Analysis CLI - Tools for analysing and visualising causal graphs.

evaluate_graph_cmd

evaluate_graph_cmd(
    input_graph: str,
    reference: str,
    metrics_requested: Tuple[str, ...],
    output: Optional[str],
    output_format: str,
) -> None

Evaluate a learned graph against a ground truth reference.

Computes structural accuracy metrics including F1 and SHD (Structural Hamming Distance). Supports both direct comparison and equivalence class comparison (comparing CPDAGs).

Supported metrics:

  • f1: F1 score from direct graph comparison

  • shd: Structural Hamming Distance from direct comparison

  • precision: Precision from direct comparison

  • recall: Recall from direct comparison

  • equiv.f1: F1 score comparing equivalence classes (CPDAGs)

  • equiv.shd: SHD comparing equivalence classes (CPDAGs)
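
A minimal sketch of how the direct-comparison metrics might be computed over directed edge sets; this is an assumption for illustration (here SHD counts each missing, extra, or reversed edge as cost 1), not the package's exact implementation:

```python
def graph_metrics(learned, reference):
    """Compute precision, recall, F1, and SHD from directed edge lists."""
    learned, reference = set(learned), set(reference)
    tp = len(learned & reference)
    precision = tp / len(learned) if learned else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # SHD: extra and missing edges cost 1 each, but an extra edge whose
    # reverse is missing is a single reversal, so subtract the overlap.
    extra = learned - reference
    missing = reference - learned
    reversals = {(a, b) for (a, b) in extra if (b, a) in missing}
    shd = len(extra) + len(missing) - len(reversals)
    return {"precision": precision, "recall": recall, "f1": f1, "shd": shd}
```

The equiv.* variants would apply the same comparison after converting both graphs to their CPDAGs.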

Example

causaliq-analysis evaluate-graph -i learned.graphml \
    -r ground_truth.graphml -m f1 -m shd

causaliq-analysis evaluate-graph -i learned.graphml \
    -r ground_truth.graphml -m equiv.f1 -m equiv.shd

causaliq-analysis evaluate-graph -i learned.graphml \
    -r ground_truth.graphml -m f1 --format=table

main

main() -> None

Entry point for the CLI.

merge_graphs_cmd

merge_graphs_cmd(
    inputs: Tuple[str, ...],
    output: str,
    filter_expr: Optional[str],
    weights: Optional[str],
    object_type: Optional[str],
    strategy: str,
) -> None

Merge multiple graphs into a single PDG with edge probabilities.

Reads GraphML files (.graphml) and/or WorkflowCache databases (.db) and combines them into a Probabilistic Dependency Graph (PDG) using the specified merge strategy.

Input type is auto-detected by file extension:

  • .graphml: Read as GraphML file (filter/weights not applicable)

  • .db: Read graphml objects from cache entries (filter/weights apply)
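
To make the merge strategies concrete, here is a hedged sketch of how per-edge probabilities might be combined across input graphs. The strategy names follow the CLI options, but the arithmetic (weighted-average voting, and a noisy-OR with weights read as per-graph confidences) is an assumption for illustration:

```python
from collections import defaultdict

def merge_graphs(graphs, weights=None, strategy="average"):
    """Combine several edge lists into one edge-probability map (a PDG).

    graphs: list of iterables of (src, dst) edges; each graph implicitly
    votes 1.0 for every edge it contains. weights: optional per-graph
    weights (confidences in [0, 1] for noisy_or).
    """
    weights = weights or [1.0] * len(graphs)
    if strategy == "average":
        # Weighted fraction of graphs containing each edge.
        total = sum(weights)
        probs = defaultdict(float)
        for graph, w in zip(graphs, weights):
            for edge in graph:
                probs[edge] += w / total
        return dict(probs)
    if strategy == "noisy_or":
        # P(edge) = 1 - prod(1 - w_i) over graphs containing the edge.
        complement = defaultdict(lambda: 1.0)
        for graph, w in zip(graphs, weights):
            for edge in graph:
                complement[edge] *= 1.0 - w
        return {e: 1.0 - c for e, c in complement.items()}
    raise ValueError(f"unknown strategy: {strategy}")
```

Under this reading, averaging measures consensus (an edge in 2 of 3 graphs scores 2/3), while noisy-OR lets any single confident graph push an edge's probability close to 1.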

Example

causaliq-analysis merge-graphs -i graph1.graphml -i graph2.graphml \
    -o merged.graphml

causaliq-analysis merge-graphs -i results.db \
    -f "network == 'asia' and sample_size > 500" \
    -o merged.graphml

causaliq-analysis merge-graphs -i results.db -w weights.json \
    -o merged.graphml --object-type=cpdag

causaliq-analysis merge-graphs -i results.db \
    --strategy noisy_or -o merged.graphml

migrate_trace_cmd

migrate_trace_cmd(
    network: str,
    series: str,
    root_dir: str,
    sample_size: Optional[str],
    seed: str,
    output: Optional[str],
) -> None

Migrate legacy Trace pickle files to GraphML format.

Converts Trace files containing learnt graphs into portable GraphML format with accompanying metadata JSON files.

Example

causaliq-analysis migrate-trace -n asia -s TABU/SAMPLE/BASE -r experiments -N 10k -S 0-1 -o migrated/asia

summarise_cmd

summarise_cmd(
    metrics: Tuple[str, ...],
    input_files: Tuple[str, ...],
    output: str,
    filter_expr: Optional[str],
) -> None

Summarise numerical metrics across experiments.

Computes summary statistics (mean, SD, count) for numerical metrics extracted from JSON files or workflow cache (.db) entries. Produces publication-ready tabular output in CSV format.

Metric specifications use the format field.statistic:

  • f1.mean: compute mean of 'f1' values

  • shd.sd: compute standard deviation of 'shd' values

  • precision.count: count non-null 'precision' values

The field name follows a dotted path convention for nested metadata. For workflow caches, metrics are extracted from entry metadata (e.g., 'causaliq-analysis.evaluate_graph.f1' becomes 'f1').
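
The spec parsing and dotted-path lookup described above can be sketched as follows; function names and structure are illustrative assumptions, not the package's API:

```python
import statistics

def get_path(record, dotted):
    """Follow a dotted path ('a.b.c') into nested dicts; None if absent."""
    for key in dotted.split("."):
        if not isinstance(record, dict) or key not in record:
            return None
        record = record[key]
    return record

def summarise(records, spec):
    """Evaluate a 'field.statistic' spec, e.g. 'f1.mean', over records."""
    # Split from the right so the field itself may contain dots
    # (e.g. 'causaliq-analysis.evaluate_graph.f1').
    field, stat = spec.rsplit(".", 1)
    values = [v for v in (get_path(r, field) for r in records)
              if v is not None]
    if stat == "count":
        return len(values)
    if stat == "mean":
        return statistics.mean(values)
    if stat == "sd":
        return statistics.stdev(values)
    raise ValueError(f"unknown statistic: {stat}")
```

Records missing the field are dropped before aggregation, which is why count reports only non-null values.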

Example

causaliq-analysis summarise -m f1.mean -m f1.sd -m shd.mean \
    -i results.json -o summary.csv

causaliq-analysis summarise -m precision.mean -m recall.mean \
    -i cache.db -o metrics_summary.csv

causaliq-analysis summarise -m f1.mean -i cache.db \
    -f "network == 'asia'" -o asia_summary.csv

causaliq-analysis summarise -m f1.mean -m f1.sd -i cache.db -o -
