Most automotive threat-intelligence integrations stop at the incident half of the loop. A feed lands in the Vehicle SOC, an analyst correlates the indicator against the SBOM, a patch is staged, an OTA campaign goes out. The vulnerability is closed; the lesson is filed away in a ticketing system that the next program's threat modeller will never read. The OEM has bought a faster firefighter. It has not bought institutional memory.
The other half — turning every incident into design-phase knowledge — is where automotive cybersecurity actually compounds. Each in-the-wild exploit teaches you something concrete: which component class fails, which protocol surface gets attacked, which supplier disclosed late, how long a real exploit took to develop in the field. That intelligence belongs in the same knowledge graph that drives next year's TARA on the next program, where the AI can use it before a single requirement is drafted.
This article maps the two halves end-to-end. The first half: how a properly engineered vulnerability-management pipeline, fed by structured TI, compresses time-to-patch from days to hours. The second half: how that same pipeline, when designed with persistence in mind, becomes the substrate for AI threat recommendations during design — closing the loop that ISO/SAE 21434 §15 has been quietly demanding all along.
What makes automotive threat intelligence different
Automotive threat intelligence inherits a lot from IT TI — STIX, TAXII, MISP, the indicator/observable/TTP ontology — but the operational reality on a vehicle program is sharply different from what a SOC manager at a SaaS company is solving for. Three structural facts shape every design decision.
The vehicle lifecycle is 10–15 years. A CVE published in 2024 can still be sitting in two million ECUs in 2034. There is no fleet-wide forced update; OTA reach is bounded by hardware capability, customer consent, and regulatory windows. Yesterday's medium-severity CVE is tomorrow's named-actor target. TI must be retained as cumulative state, not flushed when patches ship.
DoS can be a safety event. A denial-of-service on a CAN gateway carrying ASIL-D-relevant signals is not an availability incident in the IT sense; it is a potential loss of vehicle control. Modern domain-partitioned E/E architectures with ASIL decomposition contain that blast radius better than legacy flat-CAN designs, but the worst-case path still has to be modelled. Feasibility scoring, triage SLAs, and incident classification have to be wired into safety-relevant pathways and explicitly mapped to the affected ASIL level. ISO/SAE 21434's impact model (Safety, Financial, Operational, Privacy) is not optional — it changes how an indicator is prioritised compared to an enterprise TI workflow.
Fleets are largely homogeneous. A successful exploit on one ECU firmware revision — absent ASLR, per-VIN HSM keying, secure-boot rollback protection, or other in-firmware mitigations — is by construction exploitable on every vehicle that shares it. There is no "diversity defence" the way there is across an enterprise's heterogeneous endpoint fleet. The blast-radius reduction depends on which mitigations were actually built in at design time; this is one of the structural reasons Loop 2's design-phase memory matters. It is also what makes coordinated disclosure timing and patch staging so consequential, and what makes Auto-ISAC's role in cross-OEM intelligence sharing structurally important rather than a nice-to-have.
Strategic, tactical, and operational TI in the vehicle context
The classical strategic-tactical-operational split maps cleanly to automotive once you reframe the time horizons:
- Strategic (12–36 months): supply-chain ecosystem risk, supplier disclosure histories, regulatory horizon (R155 tightening, EU CRA phase-in), platform-level architectural risk patterns. Consumed by procurement, security architects, executive risk committees.
- Tactical (1–12 months): named CVEs in deployed firmware, named threat actor campaigns targeting Tier-1 supplier networks, advance notice of upcoming disclosures, MITRE ATT&CK-mapped TTPs. Consumed by PSIRT, security engineering leads, product security architects.
- Operational (hours–days): live IoCs, malicious firmware hashes, compromised OTA signing certificates, observed CAN-frame anomaly baselines, exploit code in public repositories. Consumed by the Vehicle SOC and incident response.
The sources that actually matter
A workable automotive TI integration draws from a layered stack:
- Auto-ISAC — coordinated OEM/Tier-1 sharing (TLP:AMBER and below), monthly threat reports, supplier-incident cross-correlation. The closest thing to a sector-specific operational TI feed.
- NVD, GHSA, OSV, CNVD — software vulnerability databases. Necessary but not sufficient; CWEs and CVSS vectors need an automotive-context enrichment layer before they're useful for triage.
- CISA ICS-CERT — embedded and ICS advisories. The closest fit for ECU firmware classes; many advisories cover SoCs and microcontrollers shared with automotive.
- ENISA — NIS2 transport sector reports, EU regulatory landscape. Strategic input for compliance roadmapping.
- Commercial vehicle-specific TI — paid feeds covering connected-vehicle threat campaigns, supplier-incident data, and named-actor research. Mostly operational and tactical; coverage and quality vary widely between vendors — trial them against your own SBOM before subscribing.
- Internal field telemetry — the OEM's own VSOC anomaly stream, P1–P4 prioritised events. The most underused source: your own fleet has more automotive-grounded TI than any external vendor.
Why CWE alone is not enough
A generic CVE feed will tell you that a component has a CWE-120 buffer overflow with CVSS 7.8. It will not tell you whether the vulnerable code path is reachable on an Ethernet diagnostic port, a CAN bus accessible from the OBD connector, or only on an in-flash bootloader path that requires physical disassembly. The same CWE in two different ECUs has two different attack feasibility scores. Automotive TI ingestion needs an enrichment layer with at least these fields:
Show example: automotive enrichment fields appended to a CVE / indicator
x_automotive_component_class: ECU | gateway | sensor | HMI | TCU
x_automotive_protocol: CAN | CAN-FD | LIN | FlexRay | Ethernet | MQTT
x_automotive_attack_surface: physical | local-net | OBD | Bluetooth | V2X | OTA | cloud
x_automotive_ota_exposure: true | false (can OTA mitigate without recall?)
x_automotive_safety_relevance: ASIL-A..D | QM
x_automotive_csms_clause: R155 §7.2.2.x | ISO 21434 §13/§15

Without those fields, every CVE becomes a manual research project. With them, the same CVE auto-routes to the right SOC queue, the right severity band, and the right regulatory clause.
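Once those fields are present, the routing becomes mechanical. A minimal Python sketch of the queue-and-severity derivation; the thresholds, queue names, and priority bands here are illustrative assumptions, not a normative scheme:

```python
# Illustrative triage derivation from the x_automotive_* enrichment fields.
# Queue names, bands, and escalation rules are assumptions for the sketch.

def triage(indicator: dict) -> dict:
    """Derive queue and severity band from automotive enrichment fields."""
    surface = indicator.get("x_automotive_attack_surface", "physical")
    asil = indicator.get("x_automotive_safety_relevance", "QM")
    ota = indicator.get("x_automotive_ota_exposure", False)

    # Remotely reachable surfaces escalate; physical-only access de-escalates.
    remote = surface in {"OTA", "cloud", "V2X", "Bluetooth"}
    # ASIL-C/D relevance forces the safety-relevant pathway.
    safety_critical = asil in {"ASIL-C", "ASIL-D"}

    if safety_critical and remote:
        band, queue = "P1", "vsoc-safety"
    elif remote or safety_critical:
        band, queue = "P2", "psirt"
    else:
        band, queue = "P3", "architecture-review"

    return {"band": band, "queue": queue, "ota_mitigatable": ota}

example = {
    "x_automotive_attack_surface": "OTA",
    "x_automotive_safety_relevance": "ASIL-D",
    "x_automotive_ota_exposure": True,
}
```

The value of the sketch is the shape, not the rules: every branch reads directly off enrichment fields, so the routing policy is testable and reviewable instead of living in an analyst's head.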
Loop 1 — faster patching and mitigation
The incident-response side of the loop is the easier sell to budget owners: it has measurable MTTR. Done well, an OEM goes from "we read about this CVE in a newsletter on Tuesday and started impact analysis on Thursday" to "we had a draft OTA package staged by lunchtime". The architecture is not exotic; it just has to be wired together properly.
The ingestion pipeline
The inbound side is a multi-source pipeline that normalises everything into an internal indicator schema, regardless of feed format:
- STIX 2.1 over TAXII 2.1 for Auto-ISAC, vendor TI, and any sharing community. Pull on a 5–15 minute interval; subscribe to the relevant collections only (over-subscribing is the most common cause of alert fatigue).
- NVD JSON 2.0 feed for CVE base data, polled hourly. Persist the CPE matchpoints and CVSS vectors verbatim; layer the automotive enrichment on top, never overwriting the upstream record.
- GHSA, OSV, CNVD for ecosystem-specific coverage (npm/PyPI/Maven, OSS-style aggregations, China-specific advisories that NVD doesn't carry).
- MISP federation for collaborative intelligence with peer OEMs and Tier-1 partners; MISP's event/attribute model handles automotive enrichments cleanly via custom taxonomies.
- Internal VSOC stream as a first-class TI source. Field anomalies are timestamped, ECU-attributed, and richer than any external feed for your fleet's own attack surface.
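The poll-and-deduplicate discipline is the same for every feed in the list and can be sketched independently of any client library. Here the envelope is whatever the real TAXII or MISP client returns, and the dedup key is an assumption for the sketch:

```python
# Minimal poll-and-dedup sketch for a TAXII-style collection.
# `ingest` receives an envelope of STIX-like dicts; the dedup key
# (hash of id + pattern) is an illustrative choice, not a standard.
import hashlib

seen: set[str] = set()

def dedup_key(obj: dict) -> str:
    # A stable hash over the indicator id and pattern, so re-polled
    # envelopes don't produce duplicate internal records.
    basis = obj.get("id", "") + obj.get("pattern", "")
    return hashlib.sha256(basis.encode()).hexdigest()

def ingest(envelope: list[dict]) -> list[dict]:
    """Return only the objects not seen in any earlier poll."""
    fresh = []
    for obj in envelope:
        key = dedup_key(obj)
        if key not in seen:
            seen.add(key)
            fresh.append(obj)
    return fresh
```

In a real pipeline `seen` would be a persistent store keyed per collection, but the invariant is the same: a re-polled object must never become a second internal indicator.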
Each ingested indicator gets pinned to a stable internal ID and timestamped at ingestion, dispatch, and resolution. That timestamp chain is what turns a "we patched it" claim into evidence that satisfies UNECE R155 7.2.2.3's "without undue delay" language.
SBOM matching with VEX disposition
The matchpoint between TI and your fleet is the SBOM. CycloneDX 1.5 and SPDX v3 both support the structures needed; CycloneDX VEX is the disposition record that closes each match. A worked match looks like this:
Show example: CycloneDX 1.5 VEX disposition (JSON)
{
  "bom-ref": "pkg:generic/openssl@1.1.1q",
  "id": "CVE-2024-XXXX",
  "source": {"name": "NVD"},
  "ratings": [{"severity": "high", "score": 7.5, "method": "CVSSv3_1"}],
  "analysis": {
    "state": "not_affected",
    "justification": "code_not_reachable",
    "detail": "TLS handshake path not exposed on this ECU; firmware build excludes the affected cipher suite."
  },
  "affects": [
    {"ref": "pkg:generic/openssl@1.1.1q"}
  ],
  "x_firmware_containing_component": "ecu-tcu-firmware-v3.4.2"
}

The affects[].ref field must point at a bom-ref from the SBOM itself; the per-firmware impact is carried as a custom property (or a separate dependency-graph traversal in the consuming tool). The CycloneDX justification vocabulary defines a fixed set of values — code_not_reachable, code_not_present, requires_configuration, requires_dependency, requires_environment, protected_by_compiler, protected_at_runtime, protected_at_perimeter, protected_by_mitigating_control, inline_mitigations_already_exist — and your tooling should reject free-text values to keep the disposition machine-readable.
Three things matter operationally. First, every SBOM-CVE match must produce a VEX disposition — not_affected, affected, under_investigation, or fixed — with a justification code from the CycloneDX vocabulary. Implicit decisions ("we looked at it and moved on") cannot be evidence in an audit. Second, version-string matching alone is fragile; two builds of the same library version with different compile flags are not equivalent attack surfaces. Third, VEX is not a one-shot; the disposition can change as new exploit data lands, and that history is what an auditor will ask to see. SBOM management tooling has to record each disposition update with timestamp and rationale.
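Rejecting free-text dispositions is easy to enforce mechanically. A sketch of that validator, using the state and justification vocabularies exactly as quoted above:

```python
# Validator for the fixed disposition vocabularies quoted in the text.
# Rejecting free text is what keeps VEX records machine-readable.
VEX_STATES = {"affected", "not_affected", "under_investigation", "fixed"}
VEX_JUSTIFICATIONS = {
    "code_not_reachable", "code_not_present", "requires_configuration",
    "requires_dependency", "requires_environment", "protected_by_compiler",
    "protected_at_runtime", "protected_at_perimeter",
    "protected_by_mitigating_control", "inline_mitigations_already_exist",
}

def validate_disposition(analysis: dict) -> list[str]:
    """Return a list of validation errors; empty list means valid."""
    errors = []
    if analysis.get("state") not in VEX_STATES:
        errors.append(f"unknown state: {analysis.get('state')!r}")
    # A not_affected claim must carry a vocabulary justification,
    # never analyst prose.
    if (analysis.get("state") == "not_affected"
            and analysis.get("justification") not in VEX_JUSTIFICATIONS):
        errors.append(
            f"non-vocabulary justification: {analysis.get('justification')!r}")
    return errors
```

Wired in at ingestion time, this turns "we looked at it and moved on" into a hard pipeline error instead of a silent audit gap.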
Blast-radius assessment and affected-VIN derivation
A patch decision is fundamentally the question "which vehicles?". Your TI and SBOM together produce the answer mechanically. The chain is: CVE → affected component versions → ECU firmware builds containing those versions → vehicle programs deploying those firmware builds → production VIN ranges. If any link in that chain requires manual lookup, the analyst-hours dominate the response budget.
Once the affected-VIN set is derived, exposure modifiers refine it: which VINs are reachable via OTA in the next deployment window, which require dealership intervention, which are out of the support window entirely. The output of this step is a triage view with three numbers: total affected, OTA-mitigatable, and recall-risk. That triage view is what regulatory teams need on day one.
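The whole chain reduces to a handful of set operations once the mappings exist. A toy sketch with stand-in dictionaries for the SBOM and production databases:

```python
# CVE -> components -> firmware builds -> programs -> VINs, as set algebra.
# All mapping dicts are toy stand-ins for SBOM and production databases.

def blast_radius(cve, cve_components, firmware_sboms,
                 program_firmware, program_vins, ota_reachable):
    """Return the three triage numbers: total, OTA-mitigatable, recall-risk."""
    components = cve_components[cve]
    # Firmware builds whose SBOM contains any affected component version.
    firmwares = {fw for fw, sbom in firmware_sboms.items()
                 if components & set(sbom)}
    # Programs deploying those builds, then their production VIN ranges.
    programs = {p for p, fw in program_firmware.items() if fw in firmwares}
    affected = (set().union(*(program_vins[p] for p in programs))
                if programs else set())
    ota = affected & ota_reachable
    return {"total": len(affected),
            "ota_mitigatable": len(ota),
            "recall_risk": len(affected - ota)}
```

Every step is a join, which is exactly why a manual lookup anywhere in the chain is the bottleneck: the computation itself takes milliseconds.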
SOC ticket enrichment: a worked example
An ingested indicator that turns into an enriched ticket looks materially different from a raw CVE alert:
CVE-2024-XXXX · OpenSSL 1.1.1 · CVSSv3.1 7.5 · CISA KEV: yes (added 2026-04-26)
Affected SBOMs: tcu-firmware-v3.4.2, infotainment-v8.2.0
Affected programs: Program-A (812k VIN), Program-B (240k VIN, EOL)
OTA-mitigatable: 712k of 1052k (67%)
Public exploit: yes, GitHub PoC published 2026-04-22, ~$0 tooling cost
Attack surface: TLS handshake path; reachable on Ethernet diagnostic port and infotainment cellular interface
Regulatory: R155 7.2.2.3 disclosure clock running; CISA KEV adds federal reporting urgency
Suggested triage: P1; mitigate via OTA in 14-day window; recall-track the EOL program
The analyst still makes the decision. They just no longer spend the first six hours of their response building this slide.
What the MTTR numbers actually look like
Public benchmarks for automotive incident MTTR are scarce and noisy — vendors quote big reductions, but the baselines vary wildly. The honest framing, drawn from public reporting and customer engagements: triage time (CVE published → first SOC decision) typically lands in the 4–12 hour band when the pipeline above is end-to-end deployed, against a 2–5 day baseline at organisations doing it manually — though actual results vary with incident complexity, analyst availability, and how mature the SBOM coverage is. Mitigation deployment time (decision → first OTA wave) is bounded by safety-validation gates that TI cannot speed up; what it changes is queue depth, because the patch package can be pre-staged from disclosure day. Recall-track decisions on EOL programs benefit most: regulatory teams get an affected-VIN export on disclosure day instead of three weeks later. None of these numbers should be quoted as a guaranteed outcome; they are an order-of-magnitude expectation when the structural work is done.
Loop 2 — design-phase AI memory
Now the harder, more strategic half. Loop 1 makes incidents cheaper to handle. Loop 2 makes the next vehicle program structurally more secure by feeding everything Loop 1 learned into the place where the next TARA is drafted. This is the part most TI integrations ignore, and it is the part that ISO/SAE 21434 §15 implicitly demands.
From event to knowledge: what gets persisted
An incident is, in the moment, a ticket. After the patch ships, most organisations close the ticket and move on. The institutional knowledge in the response — which component class failed, how the attacker got in, how long the exploit took to develop, which supplier disclosed when — either gets lost in unstructured ticket prose or, at best, lives in a tribal-knowledge wiki nobody reads during a TARA. The fix is not a culture fix. It is a schema fix: persist the lessons as structured updates to the same artifacts the next TARA will start from.
Concretely, every closed incident emits writes into:
- The security catalog — the versioned threats, damage scenarios, controls, requirements, and goals library that all future TARAs draw from.
- The component-risk baseline — per-component-class incident frequency and feasibility-score distributions.
- The supplier trust ledger — disclosure latency, coordinated-disclosure adherence, and SBOM accuracy per Tier-2.
- The STRIDE/PASTA template library — canonical threats with concrete attack-path examples instead of abstract definitions.
- The compliance evidence index — timestamped traceability from incident to TARA-update to control change, satisfying ISO/SAE 21434 §15 and R155 Annex 5.
Six concrete feedback loops
1. Component incident frequency → baseline risk elevation. If a specific microcontroller family appears in three exploited-in-the-wild CVEs over 36 months, the next TARA that lists that microcontroller as an asset starts with an elevated baseline-risk attribute. The threat-modelling AI auto-flags the relevant STRIDE categories with higher prior probability instead of treating them as blank-slate hypotheses.
2. Documented attack path → security-goal proposal. A field-observed attack path on CAN-FD — arbitration-ID spoofing into a legacy gateway, then bootloader entry — gets canonicalised as a reusable threat scenario. Next program's TARA, scanning a similar architecture, has the AI propose the relevant security goals and counter-controls (frame authentication, time-windowed nonces) as a starting point instead of an analyst-drafted shortlist.
3. Supplier disclosure pattern → Tier-2 trust scoring. A supplier whose CVEs are typically disclosed 6–14 months after first in-the-wild sighting carries a lower trust score in the next program's procurement and TARA flows. The scoring is semi-automatable: disclosure-latency histograms and third-party-research-driven-disclosure ratios can be computed from incident data, but evaluating actual adherence to ISO/IEC 29147 (vulnerability disclosure) and ISO/IEC 30111 (vulnerability handling), and weighing SBOM-accuracy deltas, requires periodic analyst review. The automated component is the latency distribution; the trust-score weighting stays analyst-assessed. Suppliers with a documented coordinated-disclosure policy and a shorter mean-time-to-disclosure shrink the window in which the fleet is exposed. The trust score then influences SLAs, contract terms, and architectural risk acceptance on the next program.
4. Field exploit metadata → attack feasibility scoring. ISO 21434 attack-feasibility factors (elapsed time, expertise, equipment, knowledge of the target, window of opportunity) are usually estimated by analyst judgement. With persisted field data, four of the five become empirical: "elapsed time to develop a working exploit on this attack class was 15 minutes in the most recent case; equipment cost: $200 (commodity CAN analyzer); knowledge: public PoC; expertise: layperson". The fifth factor — window of opportunity — remains analyst-assessed because it depends on the deployed vehicle's connectivity profile, parking patterns, and regional exposure rather than on the exploit itself; field data narrows the analyst's estimate but does not replace it. Feasibility scores still become measurements where they were guesses, with one factor explicitly flagged as the human-judgement remainder.
5. Real-world TTPs → STRIDE catalog enrichment. Generic STRIDE categories are easy to enumerate and hard to defend against in the abstract. Real campaigns — supplier CI/CD compromise, OTA signing-key theft, lateral movement from supplier VPN into firmware build pipelines — become first-class threats in the catalog with specific mitigations. This loop is analyst-curated, not automated: an architect or PSIRT lead converts an observed campaign into a canonical threat scenario with mapped mitigations; the AI then surfaces it as a recommendation in the next TARA pass and records the architect's accept/decline back into the catalog. The next program's TARA reasoning starts from "this supply-chain TTP has been observed twice; the relevant mitigations are HSM-backed signing keys and supplier network segmentation" rather than "we should consider supply-chain attacks generally".
6. Compliance evidence as a side-effect. Strictly speaking this is not a feedback loop in the same shape as 1–5; it is the audit-trail by-product of running 1–5 with discipline. ISO/SAE 21434 §15 requires that TARA stays current with field experience and threat intelligence. R155 Annex 5 Part A & B require demonstrating that threats and mitigations are kept current. With the loops wired, the evidence is generated automatically: every incident produces a timestamped update to the security catalog, every catalog change is propagated into the affected programs' TARAs as a "consider this in the next baseline review" item, and the entire chain — TI source → incident → catalog update → TARA delta → control change → verification — is auditable as a single trace.
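Of the six, loop 4 is the most directly computable. A sketch of deriving feasibility factors from persisted field-exploit metadata; the point bands and thresholds are invented for illustration and are not the normative ISO/SAE 21434 attack-potential tables:

```python
# Illustrative feasibility scoring from field-exploit metadata (loop 4).
# Bands, point values, and cut-offs are assumptions for the sketch,
# NOT the normative ISO/SAE 21434 attack-potential tables.

def feasibility_from_field(elapsed_minutes: int,
                           equipment_cost_usd: int,
                           knowledge: str) -> dict:
    # Faster exploit development and cheaper tooling mean higher
    # feasibility, expressed here as lower attack-potential points.
    time_pts = 0 if elapsed_minutes < 60 else (
        1 if elapsed_minutes < 60 * 24 * 7 else 4)
    equip_pts = 0 if equipment_cost_usd < 1_000 else (
        4 if equipment_cost_usd < 50_000 else 7)
    know_pts = {"public_poc": 0, "restricted": 3, "confidential": 7}.get(
        knowledge, 11)
    total = time_pts + equip_pts + know_pts
    band = "high" if total < 5 else ("medium" if total < 14 else "low")
    # Window of opportunity stays analyst-assessed, per the text.
    return {"points": total, "feasibility": band, "window": "analyst-assessed"}
```

The "$200 commodity CAN analyzer, public PoC, 15 minutes" case from the text scores as high feasibility with zero analyst estimation; only the window factor remains a judgement call.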
How the AI uses captured intelligence
The persisted catalog is what makes "AI threat modelling" meaningfully different from a smarter STRIDE template. The model context for an AI threat-recommendation pass during design includes the system architecture (assets, protocols, interfaces) and the relevant cumulative knowledge:
system:
  - asset: ecu_tcu (uC: ARM-Cortex-M7-class)
  - interface: ethernet_diag (1Gbps, dev-mode-locked)
  - protocol: secure_ota_v2 (HSM-signed)
knowledge_graph_context:
  - prior_incidents_for_uC_class: 3 in 36mo
  - documented_attack_paths_for_secure_ota_v2: 1 (signing-key theft via Tier-2 CI/CD)
  - supplier_trust_score(Tier-2_X): 0.62 (median disclosure latency 8mo)
  - field_feasibility_for_eth_diag_attacks: avg 18min, $200 commodity tooling
recommended_threats:
  - "Spoofing of OTA signing identity via Tier-2 CI/CD lateral movement"
    (confidence: high, derived_from: incident-2025-Q3-014)
  - "Tampering with TCU firmware via Ethernet diagnostic interface"
    (confidence: medium, derived_from: feasibility-baseline-eth-diag-2026)
  - "Information disclosure via uC peripheral side-channel"
    (confidence: medium, derived_from: cve-2024-XXXX cluster)
The architect reviews and accepts, declines, or refines each recommendation. Every accept/decline is itself a write back into the catalog with a structured rationale — tracked for future iterations of the recommendation logic and as audit evidence that the AI's outputs were reviewed by a human. The recommendations are explicitly a starting point for architect discussion, not a standalone decision; their value is in grounding the conversation on prior-program incidents and supplier-specific data instead of taxonomic STRIDE templates. This is how the loop closes — not as a marketing claim about "AI", but as a structured feedback signal between design-time and field experience that an audit can reconstruct.
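The write-back itself is a small, structured record. A sketch with an assumed record shape; in practice the rationale would be structured fields rather than a free string:

```python
# Sketch of the accept/decline write-back described above. The record
# shape is an assumption; the essentials are the structured decision,
# the rationale, and the timestamp that make the human review auditable.
from datetime import datetime, timezone

catalog_feedback: list[dict] = []   # stand-in for the versioned catalog store

def record_review(threat_id: str, decision: str, rationale: str) -> dict:
    if decision not in {"accept", "decline", "refine"}:
        raise ValueError(f"unknown decision: {decision!r}")
    entry = {
        "threat_id": threat_id,
        "decision": decision,
        "rationale": rationale,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    catalog_feedback.append(entry)   # write back into the catalog
    return entry
```

Each entry is simultaneously a training signal for the recommendation logic and the audit evidence that a human reviewed the AI's output.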
Integration architecture and standards
Nothing above requires bespoke standards. The interoperability stack is mature; the gap is the automotive-specific glue.
STIX 2.1 with automotive extensions
STIX 2.1 SDOs (indicator, vulnerability, course-of-action, threat-actor, attack-pattern) cover everything needed; the only addition is custom properties for automotive context. A Tier-1's TAXII feed indicator might look like this after enrichment:
Show example: STIX 2.1 indicator with automotive extensions (JSON)
{
  "type": "indicator",
  "spec_version": "2.1",
  "id": "indicator--a3f12c4e-...-1234abcd", // UUID truncated for brevity
  "pattern": "[file:hashes.'SHA-256' = 'b1e5a0d7c3f9...']", // hash truncated
  "pattern_type": "stix",
  "valid_from": "2026-04-22T08:00:00Z",
  "indicator_types": ["malicious-activity"],
  "labels": ["malicious-firmware"],
  "x_automotive_component_class": "gateway",
  "x_automotive_protocol": "ethernet",
  "x_automotive_attack_surface": "ota",
  "x_automotive_ota_exposure": true,
  "x_automotive_csms_clause": "R155 §7.2.2.3",
  "x_automotive_feasibility": {
    "elapsed_time_minutes": 18,
    "equipment_cost_usd": 200,
    "knowledge_required": "public_poc"
  }
}

TAXII subscriptions and MISP federation
TAXII 2.1 collections are the right level of granularity: subscribe to vendor-specific channels, not "all of Auto-ISAC". MISP fills the federated-collaboration gap when peers want bidirectional sharing — the event/attribute model maps cleanly to STIX and supports custom taxonomies for automotive enrichments. The non-negotiable hygiene rules: deduplicate on indicator hash, retain all attribution metadata, and never overwrite upstream records (always layer enrichment on top).
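The never-overwrite rule can be enforced structurally rather than by convention. A sketch using a read-through overlay, so the upstream record stays byte-identical while enrichment resolves on top:

```python
# Layering enrichment over an upstream record without mutating it.
# ChainMap resolves enriched keys first; the upstream dict is never
# touched, which is exactly the hygiene rule in the text.
from collections import ChainMap

def enrich(upstream: dict, enrichment: dict) -> ChainMap:
    """Overlay enrichment on an upstream TI record, refusing to shadow it."""
    illegal = set(enrichment) & set(upstream)
    if illegal:
        raise ValueError(f"enrichment would overwrite upstream keys: {illegal}")
    return ChainMap(enrichment, upstream)
```

Anything downstream reads the overlay like a plain dict, but re-serialising the upstream half for Auto-ISAC reciprocal sharing always yields the original record.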
The automotive normalisation layer
Generic IT TI normalisation pipelines are not enough on their own. The automotive layer adds the fields enumerated earlier (component class, protocol, attack surface, OTA exposure, ASIL relevance, CSMS clause), plus a routing rule set: which indicator types go to which queues (PSIRT, VSOC, architecture review), which trigger automatic SBOM-VEX matching, and which require Auto-ISAC reciprocal sharing. This routing layer is the single place where automotive-specific operational policy lives; it should be versioned and reviewed quarterly.
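Expressing that routing rule set as versioned data rather than code is what makes the quarterly review a diff instead of a deployment. A minimal sketch with invented queue names and rules:

```python
# The routing rule set as versioned, ordered data. Rules and queue
# names are illustrative; the design point is that policy changes are
# a reviewable table edit, not a code change.
ROUTING_RULES_VERSION = "2026-Q2"
ROUTING_RULES = [
    # (predicate on the enriched indicator, destination queue)
    (lambda i: i.get("x_automotive_attack_surface") == "ota", "psirt"),
    (lambda i: i.get("x_automotive_safety_relevance", "QM") != "QM", "vsoc"),
    (lambda i: True, "architecture-review"),        # catch-all default
]

def route(indicator: dict) -> str:
    """First matching rule wins; the catch-all guarantees a destination."""
    return next(queue for pred, queue in ROUTING_RULES if pred(indicator))
```

In production the predicates would themselves be declarative (field, operator, value triples) so the quarterly review can diff the table without reading lambdas.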
Anti-patterns and pitfalls
Most failed TI integrations fail in identifiable ways:
- Alert fatigue contaminating the AI. If analysts are forced to triage 1000 indicators a day, their decisions become heuristic and inconsistent. An AI trained on those decisions inherits the inconsistency. Fix: stratified routing, with operational TI flowing to the SOC and strategic/tactical TI queued for architecture-review cadences.
- SBOM-CVE false positives. Naive version-string matching against CPE patterns generates a high false-positive rate — commonly cited in the order of one third to one half of raw matches once compile flags, build feature sets, and patched-but-same-version cases are considered. Fix: VEX is mandatory for every match; track justification codes; treat the CPE as a starting point, not an answer.
- IT-centric feeds without enrichment. A Log4j-class CVE applied mechanically to ECU firmware produces noise, because most embedded code paths don't load Log4j. Fix: gate generic feeds through an automotive-applicability filter before they reach the SOC queue.
- IoCs that don't map to ECU boundaries. A network IoC ("malicious IP X") on a CAN-only ECU is meaningless. Fix: enrich every IoC with the protocol/layer it can be observed at; reject indicators that fail the enrichment.
- No supplier cross-correlation. A Tier-2 disclosing a CVE to one OEM but not others creates blind spots. Fix: Auto-ISAC supply-chain queries on every supplier-attributable CVE; raise it as a coordinated-disclosure issue if scope is unclear.
- Temporal blindness. Field anomalies can precede public disclosure by days to weeks — rarely longer for high-severity exploits, occasionally longer for stealthy supply-chain campaigns. Fix: behavioural anomaly detection runs independently of TI; TI confirms intent, but detection is firmware-first.
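The fix for the IT-centric-feed anti-pattern is a gate that runs before the SOC queue ever sees the advisory. A sketch keyed on package-URL type, where the embedded-ecosystem allowlist is an assumption for illustration:

```python
# Automotive-applicability gate for generic feeds (anti-pattern fix).
# The allowlist of purl types plausibly present in ECU firmware is an
# assumption; tune it per program from the actual SBOM contents.
EMBEDDED_ECOSYSTEMS = {"c", "cpp", "conan", "generic"}

def applicable_to_firmware(advisory: dict, sbom_purls: set[str]) -> bool:
    """Gate on purl ecosystem, then require an actual SBOM hit."""
    purl = advisory.get("purl", "")
    # Extract the purl type, e.g. "npm" from "pkg:npm/log4js@1.0".
    ptype = purl.split(":", 1)[-1].split("/", 1)[0] if purl.startswith("pkg:") else ""
    if ptype and ptype not in EMBEDDED_ECOSYSTEMS:
        return False               # npm/PyPI-only CVE: noise for firmware
    return purl in sbom_purls      # only real SBOM matches reach the queue
```

A Log4j-class advisory dies at the first check for a CAN-only ECU; an OpenSSL advisory passes only if the exact purl actually appears in a deployed firmware SBOM.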
Regulatory anchors
Threat intelligence integration is not a "nice to have" in the major automotive cybersecurity regulations — it is a load-bearing requirement. Three frameworks in particular impose specific obligations.
UNECE R155
R155 is the most prescriptive. Paragraph 7.2.2.2 requires the CSMS to include processes for the post-production phase that identify vulnerabilities and respond to cyber threats. Paragraph 7.2.2.3 requires that those vulnerabilities are managed and mitigations deployed "without undue delay" — a clause that has no defensible answer without a structured TI feed and a timestamped triage record. Paragraph 7.3 explicitly mandates ongoing monitoring of cyber threats including known and emerging vulnerabilities. Annex 5 Part A (Threats and corresponding Mitigations to Vehicle Types) and Part B (Mitigations relating to vehicle threats) require a demonstrated, current catalog of threats and mitigations — which is exactly the artifact Loop 2 produces. R156 (Software Updates) is the natural complement: it mandates secure OTA delivery, integrity verification, and rollback capability, so the TI → SBOM-VEX → OTA pipeline in Loop 1 is part of a coordinated R155/R156 compliance posture, not an R155-only concern. Continuous monitoring evidence under R155 covers the audit-side mechanics in detail.
ISO/SAE 21434
Section 13 (Continuous Activities) is where TI integration lives explicitly. §13.1.1 (cybersecurity monitoring) implies external threat-intelligence feeds; §13.1.2 (vulnerability management) requires ingestion from public databases and threat feeds; §13.1.3 (incident handling) closes the loop by feeding incident data back into the TARA. Section 15 (TARA updates) is the legal hook for Loop 2 specifically: TARA must be updated based on field experience and threat intelligence. Section 11 (cybersecurity case) requires evidence of those updates; the timestamped trace from Loop 1 is exactly that evidence.
EU CRA
The EU Cyber Resilience Act adds an EU-wide overlay. Article 10 requires reporting "significant cyber incidents" to ENISA on tight timelines — a TI integration that is already producing structured, timestamped, severity-tagged incident records makes that reporting mechanical instead of forensic. Annex I requires "considering the evolving threat landscape" in the threat model — which is restated R155 7.3 in EU statutory language. EU CRA vs UNECE R155 covers the overlapping obligations and where they diverge.
How ThreatZ implements the loop
The two loops are not theoretical; the building blocks are in ThreatZ today. Loop 1 runs out of the SBOM and Operations pillars: NVD, GHSA, OSV, and CNVD scanning against CycloneDX/SPDX imports; threat-chatter scoring with source breakdown across forums, dark-web telemetry, social, and TI feeds; CISA KEV status tracking; 90-day CVE risk forecasting (a forward-looking severity-trajectory signal that combines disclosure-pattern history with chatter-volume change to flag "Escalated", "Likely", or "Stable" risk); and weighted indicator matching that correlates incidents back to TARA threat scenarios. TARA-to-runtime-detection covers the matching mechanics.
Loop 2 runs through the Governance and TARA pillars. Versioned security catalogs are the persistence layer that holds field-derived knowledge; trusted-vulnerability-source policies anchor what feeds the catalog; risk regression dashboards (new / fixed / regressed / severity-changed) surface what's drifted across program baselines. The TARA module's AI threat recommendations and damage-scenario suggestions are STRIDE-grounded but enriched by the catalog state, and accept/decline feedback is itself written back, tightening recommendations over time. The CSMS audit preparation guide covers how the resulting evidence chain holds up under audit.
The loop is end-to-end traceable: indicator ingested → CVE/SBOM matched with VEX disposition → incident handled with timestamped evidence → catalog update emitted → affected programs' TARAs flag the change for review → AI recommendations update on the next design cycle → verification evidence closes the §15 audit chain.
See the loop running on your own SBOM
ThreatZ matches your SBOM against multi-source TI in real time and writes captured intelligence back into the catalogs that drive the next program's TARA.
Explore ThreatZ

Key takeaways
- Automotive threat intelligence has two loops, not one. Loop 1 compresses incident MTTR; Loop 2 turns each incident into design-phase memory the AI uses on the next program.
- Generic IT TI feeds need an automotive enrichment layer (component class, protocol, attack surface, OTA exposure, ASIL relevance) before they're useful for triage.
- The matchpoint between TI and your fleet is the SBOM. CycloneDX VEX is the disposition record that turns matches into audit evidence.
- Loop 2 is what ISO/SAE 21434 §15 and UNECE R155 Annex 5 implicitly demand: TARA must be kept current with field experience and threat intelligence, and the evidence trail has to be reconstructable.
- Persistence beats process. Six structured writes per incident — security catalog, component baseline, supplier trust ledger, STRIDE library, compliance evidence index, and feasibility-scoring distribution — make the loop auditable.
- The AI's value at design time is not in “smart STRIDE”; it's in grounding recommendations on the cumulative catalog of field-observed attacks, supplier behaviours, and feasibility measurements specific to your fleet.
- Anti-patterns are predictable: alert fatigue, naive SBOM matching, IT-centric feeds, IoC misalignment, supplier-correlation blindness, and temporal blindness. All are fixable with discipline at the normalisation layer.