Why this page exists

OEM security architects, type-approval leads, and Functional Safety + Cybersecurity managers don’t buy “AI” on marketing claims. They buy it when the AI’s output survives an external audit.

The four sections below describe the four properties an automotive AI system must have to do that. Every property is implemented in ThreatZ today — no roadmap items.

Property 1 — Catalog grounding
Every AI suggestion is grounded in a vetted threat catalog. No free-text generation; no hallucinated CVEs.

Property 2 — Provenance trail
Every threat carries source IDs all the way back to the catalog entry, pattern, and reasoning step.

Property 3 — Deterministic mode
One toggle switches the AI off. Rules-only TARA still ships with the same workflow shape and evidence pack.

Property 4 — Audit log
Every AI action is logged with timestamp, model version, input snapshot, output, and human override (if any).

Property 1 — Catalog grounding

ThreatZ does not generate threats from a general-purpose language model. It draws from three vetted sources:

  • ENISA Automotive Threat Landscape — the European Union Agency for Cybersecurity’s structured catalog. Refreshed annually.
  • MITRE ATT&CK — mapped to automotive context (Vehicle Function impact, ECU-class targets, communication-protocol relevance).
  • Internal automotive threat patterns — 10,000+ patterns curated by the VxLabs research team across CAN, SOME/IP, DoIP, OBD-II, V2X, and cloud interfaces.

When the AI suggests a threat scenario for an asset in your TARA, it returns a candidate plus the catalog entry it derived from. If the catalog has no matching pattern for an asset, the AI returns nothing — not a fabrication.
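The grounding contract can be sketched in a few lines. This is an illustration, not the ThreatZ API; the names (`CatalogEntry`, `suggest_threat`, the one-entry catalog) are hypothetical, and the real sources are the three catalogs above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CatalogEntry:
    entry_id: str      # internal catalog ID, e.g. "vx-AT-0073"
    description: str

# Hypothetical one-entry catalog keyed by interface type. The real
# sources (ENISA, MITRE ATT&CK, internal patterns) are far larger.
CATALOG = {
    "someip-sd": CatalogEntry("vx-AT-0073", "SOME/IP service-discovery spoofing"),
}

def suggest_threat(asset_interface: str) -> Optional[CatalogEntry]:
    """Return a catalog-grounded candidate, or None when nothing matches.

    The contract: no catalog match means no suggestion -- never a
    fabricated threat.
    """
    return CATALOG.get(asset_interface)
```

Calling `suggest_threat("proprietary-bus")` returns `None` rather than inventing a scenario; that null result is itself a signal that human enumeration is needed.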

What this means for your auditor: every threat scenario in a ThreatZ-generated TARA can be traced to a published, vetted source. There is no “the model thinks” — there is “ENISA-2024-AT-073 says.”

Property 2 — Provenance trail

Each threat scenario in ThreatZ carries a structured provenance object alongside the human-readable description. A simplified example:

// threat.provenance — emitted on every AI-suggested threat (AI mode)
{
  "threat_id": "TZ-2026-0413-T0142",
  "asset_id": "VHCL-ECU-IVI-01",
  "source_catalog": "vxlabs-internal-2024 (ENISA Threat Landscape mapped)",
  "source_entry": "vx-AT-0073: SOME/IP service-discovery spoofing",
  "mitre_attack_ics": "T0830 (Adversary-in-the-Middle)",
  "automotive_context": "SD-spoofing on Automotive Ethernet zonal gateway",
  "reasoning_steps": [
    "Asset has SOME/IP-SD interface (ECU port 30490/UDP)",
    "Trust boundary classification: external-facing",
    "No mutual authentication observed in interface contract",
    "Catalog match confidence: 0.93"
  ],
  "model_version": "threatz-tara-v3.1.2",
  "deterministic_replay": true
}

Internal IDs (vx-AT-####) are how the catalog stores entries; the UI shows the ENISA descriptive name alongside. T0830 is the MITRE ATT&CK for ICS “Adversary-in-the-Middle” technique — SOME/IP-SD spoofing is the automotive instantiation. AI mode emits a confidence score; deterministic mode (next section) does not.

Auditors can replay the reasoning steps in the platform, follow the link to the catalog entry, and verify the asset attributes that triggered the match. Replay is structurally identical — same threat ID, same catalog entry, same provenance object — when the same model version is invoked on the same architecture snapshot.
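An auditor-side check for that replay guarantee reduces to structural comparison of the two provenance objects. The helper below is a sketch under that assumption; `replay_matches` is a hypothetical name, and the field subset is taken from the example object above.

```python
import json

def replay_matches(original: dict, replayed: dict) -> bool:
    """True when two provenance objects are structurally identical.

    The same model version on the same architecture snapshot should
    reproduce the same object; key order is normalized before comparing.
    """
    return json.dumps(original, sort_keys=True) == json.dumps(replayed, sort_keys=True)

# Subset of the provenance object shown above.
prov = {
    "threat_id": "TZ-2026-0413-T0142",
    "source_entry": "vx-AT-0073: SOME/IP service-discovery spoofing",
    "model_version": "threatz-tara-v3.1.2",
}
```

Any field that diverges between the original run and the replay (a different threat ID, a different catalog entry) fails the comparison and flags the discrepancy for review.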

Property 3 — Deterministic mode

For safety-critical workflows where AI outputs must not be the load-bearing artifact, ThreatZ ships a deterministic mode toggle at the program level. With AI off:

  • Threat enumeration falls back to rules-only catalog matching — same catalogs, no inference, no confidence scores.
  • Attack feasibility scoring uses the ISO/SAE 21434 Annex G default rubric, parameterized by your governance pillar settings.
  • Risk treatment recommendations are not generated; the analyst writes them in the editor with no AI suggestions.
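The toggle's effect on threat enumeration can be sketched as follows. This is illustrative only: `ProgramConfig`, `RULES`, and `enumerate_threats` are hypothetical names, and the one-entry rules table stands in for the full catalogs.

```python
from dataclasses import dataclass

@dataclass
class ProgramConfig:
    ai_enabled: bool   # program-level toggle; False = deterministic mode

# Hypothetical one-entry rules table; both modes consult the same catalogs.
RULES = {"someip-sd": "vx-AT-0073"}

def enumerate_threats(config: ProgramConfig, asset_interface: str) -> list:
    match = RULES.get(asset_interface)
    if match is None:
        return []                       # no match, no candidate, in either mode
    candidate = {"source_entry": match}
    if config.ai_enabled:
        candidate["confidence"] = 0.93  # only AI mode emits a confidence score
    return [candidate]
```

Both paths draw from the same vetted catalogs; the deterministic output simply omits the inference-derived confidence field, so the report shape is unchanged.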

The TARA report, evidence pack, and CSMS audit pack ship in the same shape regardless of mode. Auditors and downstream tools cannot tell whether AI was used; you can.

When to switch to deterministic mode: safety-critical platforms (ASIL-D items), regulated certification body engagement, internal red-team review, and any program where governance policy explicitly forbids generative AI in the audit trail.

Property 4 — Audit log

Every AI invocation in ThreatZ creates an immutable audit-log entry. The log captures:

  • Action type — e.g. threat_suggest, feasibility_score, risk_treatment_propose
  • Model version — the exact build hash of the model the suggestion came from
  • Input snapshot — the architecture-graph subset the AI saw (asset IDs, connections, properties)
  • Raw output — the JSON the model emitted, before any UI formatting
  • Human override — if a TARA owner edited or rejected the suggestion, the diff is logged
  • Timestamp + actor — UTC ISO 8601, plus the user account that triggered the call
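The fields above can be modeled as a single immutable record. A minimal sketch, assuming a Python representation (the actual log schema is ThreatZ-internal; all names here are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)        # frozen mirrors the log's immutability
class AuditEntry:
    action_type: str           # e.g. "threat_suggest"
    model_version: str         # exact model build the suggestion came from
    input_snapshot: dict       # architecture-graph subset the AI saw
    raw_output: dict           # model JSON before any UI formatting
    actor: str                 # user account that triggered the call
    timestamp: str             # UTC ISO 8601
    human_override: Optional[dict] = None  # diff, when edited or rejected

entry = AuditEntry(
    action_type="threat_suggest",
    model_version="threatz-tara-v3.1.2",
    input_snapshot={"asset_ids": ["VHCL-ECU-IVI-01"]},
    raw_output={"threat_id": "TZ-2026-0413-T0142"},
    actor="analyst@example.com",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```

`human_override` stays `None` until a TARA owner edits or rejects the suggestion, at which point the diff is recorded alongside the original output.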

The audit log is exportable as part of the CSMS evidence pack (see CSMS Audit Preparation). Type-approval bodies that require “AI use disclosure” (already required in some EU member states for safety-critical software) can be served from this log directly.

Model card

ThreatZ publishes a model card per major release. The model card describes:

  • What the model is (architecture, parameter count band, training-data composition at category level — not raw data)
  • What the model can do (threat suggestion, feasibility scoring, risk-treatment proposal)
  • What the model cannot do (no firmware analysis, no live-traffic IDS, no autonomous decision-making)
  • Refresh cadence (quarterly catalog refresh, semi-annual model retrain)
  • Drift monitoring (precision/recall against held-out customer-anonymized TARAs)
  • Known limitations (proprietary protocols not in catalog return null; novel attack patterns require human enumeration)
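The categories above lend themselves to structured data. A sketch of what such a card might contain (field names are illustrative; the published model card defines the actual schema):

```python
# Illustrative only -- not the published model-card schema.
model_card = {
    "release": "threatz-tara-v3.1.2",
    "capabilities": ["threat_suggest", "feasibility_score", "risk_treatment_propose"],
    "non_capabilities": ["firmware_analysis", "live_traffic_ids", "autonomous_decisions"],
    "refresh_cadence": {"catalog": "quarterly", "model_retrain": "semi-annual"},
    "known_limitations": [
        "proprietary protocols not in catalog return null",
        "novel attack patterns require human enumeration",
    ],
}
```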

The current model card ships with the platform release notes; account teams can share it on request as part of an evaluation pack.

How this maps to ISO/SAE 21434 and UNECE R155

The four properties above support specific clauses:

  • ISO 21434 Clause 5.4 (Cybersecurity governance) — model card + audit log = traceable cybersecurity activity.
  • ISO 21434 Clause 8 (Continual cybersecurity activities) — catalog refresh + drift monitoring = ongoing posture management.
  • ISO 21434 Clause 15.4 (TARA threat scenarios) — catalog grounding + provenance = scenarios with verifiable derivation.
  • UNECE R155 7.2.2.2 (Vulnerability identification) — audit log captures every identification path.
  • UNECE R155 Annex 5 §7.3 (CSMS evidence) — provenance trail + audit log = evidence pack.

See the full mapping in our ISO/SAE 21434 TARA guide and CSMS Audit Preparation Checklist.

What this is not

  • It is not an autonomous TARA. A human cybersecurity engineer reviews and approves every AI suggestion before it ships in a TARA report.
  • It is not trained on customer data. Customer architectures are processed at inference only and never leave the tenant boundary; the catalog is curated separately by the VxLabs research team. See the Security page for the data-handling DPA addendum and SOC 2 Type II control mapping.
  • It is not a general-purpose language model. The model is constrained to the catalog and the workflow grammar; it does not freelance.
  • It does not claim certified AI status. The certification of AI for automotive (ISO/PAS 8800, EU AI Act) is in flight; we update the model card and this page as those frameworks publish.