Provenance

Provenance schema attached to every search result — bridge version, template, intent classifier, coverage.

Every search result carries a provenance dict answering two questions:

  1. Why did the system return this? — Template used, coverage of query constraints, intent classifier version.
  2. What artifacts produced it? — Bridge version, ANN snapshot size, timestamp.

Schema

{
  "provenance": {
    "bridge_version": "20260223T015855Z_hlq_reqs_v2",
    "bridge_format_version": "1.0.0",
    "ann_snapshot_items": 1806,
    "template": "composite_trader_screening_daily",
    "intent_classifier": "regex_v1",
    "coverage": {
      "applied": [...],
      "dropped": [...],
      "fully_covered": true
    },
    "timestamp_utc": "2026-02-26T14:30:00Z"
  }
}

Fields

Field                   Type          Description
bridge_version          str           Bridge snapshot that produced this result
bridge_format_version   str           Bridge schema version
ann_snapshot_items      int           ANN index size at query time
template                str | null    SQL template name used; null for help responses and multi-coin top-level results
intent_classifier       str           Classifier version; currently regex_v1 (transitional)
coverage                dict | null   Which query constraints reached the SQL; null for help/error responses
timestamp_utc           str           ISO 8601 timestamp of when the query ran
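The field types above can be checked mechanically. A minimal sketch, assuming the provenance is a plain dict; `PROVENANCE_FIELDS` and `validate_provenance` are names introduced here for illustration, not part of the pipeline:

```python
# Hypothetical validator for the provenance fields table above.
# Field names match the schema; the helper itself is an assumption.
PROVENANCE_FIELDS = {
    "bridge_version": (str,),
    "bridge_format_version": (str,),
    "ann_snapshot_items": (int,),
    "template": (str, type(None)),       # null for help / multi-coin top level
    "intent_classifier": (str,),
    "coverage": (dict, type(None)),      # null for help / error responses
    "timestamp_utc": (str,),
}

def validate_provenance(prov: dict) -> list[str]:
    """Return a list of problems; an empty list means the dict conforms."""
    problems = []
    for field, types in PROVENANCE_FIELDS.items():
        if field not in prov:
            problems.append(f"missing field: {field}")
        elif not isinstance(prov[field], types):
            problems.append(
                f"{field}: expected {types}, got {type(prov[field]).__name__}")
    return problems
```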

Coverage

Coverage tracks which semantic constraints from the query were applied to the SQL template and which were dropped:

{
  "applied": [
    {"field": "max_risk_score", "value": 3.5, "label": "maximum risk score"},
    {"field": "temporal_window", "value": {"window_days": 7}, "label": "date range override"}
  ],
  "dropped": [
    {"field": "min_consistency_score", "value": 0.8, "label": "minimum consistency score",
     "reason": "template 'risk_regime_stress_scoring_daily' does not use {min_consistency_score}"}
  ],
  "fully_covered": false
}
  • applied — Constraints that were injected into the SQL.
  • dropped — Constraints the user specified but the template doesn't support. Each includes a reason.
  • fully_covered — true if nothing was dropped. When false, the result may not fully match the query.
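The applied/dropped split can be sketched as follows. This is an illustrative reconstruction, not the actual implementation: the helper name and the idea that templates expose a set of placeholder names are assumptions based on the dropped-constraint `reason` string above.

```python
# Illustrative sketch: partition query constraints by whether the SQL
# template exposes a matching placeholder. Names are assumptions.
def compute_coverage(template_name: str,
                     template_placeholders: set[str],
                     constraints: list[dict]) -> dict:
    applied, dropped = [], []
    for c in constraints:
        if c["field"] in template_placeholders:
            applied.append(c)
        else:
            dropped.append({**c, "reason":
                f"template '{template_name}' does not use {{{c['field']}}}"})
    return {"applied": applied, "dropped": dropped,
            "fully_covered": not dropped}
```

With a template that only exposes `{max_risk_score}`, a `min_consistency_score` constraint lands in `dropped` with a reason string, and `fully_covered` comes back false.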

Where provenance appears

Provenance is injected at every return site in the pipeline:

Return site           template        coverage
Help response         null            null
Error (no template)   null            null
Single-coin dry-run   template name   coverage dict
Single-coin live      template name   coverage dict
Multi-coin dry-run    null            {"per_coin": [...]}
Multi-coin live       null            {"per_coin": [...]}
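One way to keep every return site consistent is a single injection helper, sketched below. The function name, signature, and the idea of a shared helper are assumptions; only the dict shape comes from the schema above.

```python
# Hypothetical injection helper shared by all return sites; the real
# pipeline's API may differ. Defaults of None cover help/error cases.
from datetime import datetime, timezone

def with_provenance(payload: dict, *, bridge_version: str, ann_items: int,
                    template=None, coverage=None) -> dict:
    payload["provenance"] = {
        "bridge_version": bridge_version,
        "bridge_format_version": "1.0.0",
        "ann_snapshot_items": ann_items,
        "template": template,      # None for help, error, multi-coin top level
        "intent_classifier": "regex_v1",
        "coverage": coverage,      # None for help/error responses
        "timestamp_utc": datetime.now(timezone.utc)
                                 .strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    return payload
```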

MCP tools

All four search-related MCP tools pass provenance through:

  • find_similar_traders — response.provenance
  • explain_trader — response.provenance
  • detect_copytrade — response.provenance
  • screen_risk — response.provenance

Why this matters

No one else in the AI crypto intelligence space provides traceable, auditable evidence chains. Provenance lets agents:

  • Audit results — Verify which bridge version and template produced a result.
  • Debug coverage gaps — See which constraints were dropped and why.
  • Track freshness — Timestamp shows when the query ran; bridge version shows data staleness.
  • Compare across versions — When the bridge updates, provenance shows exactly what changed.
