Why KernelScan?
Linux runs inside almost every commercial product with digital elements — network gear, industrial controllers, medical devices, automotive ECUs, IoT, edge appliances. The Linux kernel community runs a sophisticated, mature security process: kernel.org operates as a CVE Numbering Authority (CNA), publishes structured CVE records, and tracks fixes per LTS branch with commit-level precision.
Commercial product developers don't speak that dialect. Their SBOM scanners, vulnerability management tools, and compliance platforms are built around the NVD-driven, CPE-keyed feed — which is downstream from the kernel community's process, manually triaged, and chronically backlogged. By the time a kernel CVE shows up in a commercial scanner, the upstream community has already shipped a fix, mailing-list discussions have moved on, and adversaries have had a head start.
KernelScan exists to close that gap — a kernel-native security pipeline that ingests every CVE the moment kernel.org publishes it, then re-emits it through the formats your existing scanners and compliance tooling already understand.
The regulatory pressure
Authorities on both sides of the Atlantic now require manufacturers to do exactly what the Linux community already does — but with the rigor, traceability, and timeliness expected of a commercial supply chain.
European Union — Cyber Resilience Act (CRA)
Regulation (EU) 2024/2847, adopted October 2024 and enforceable from late 2027, applies to every product with digital elements placed on the EU market — including industrial gear, IoT, medical, automotive, and consumer hardware running Linux.
- Vulnerability identification and management across all components, including third-party dependencies — the kernel is the dominant one for most embedded products.
- Mandatory SBOM accompanying every product, structured and machine-readable.
- 24-hour reporting to ENISA / CSIRTs for actively exploited vulnerabilities; 72-hour reporting for severe vulnerabilities.
- Security updates without undue delay across the support period (typically the longer of 5 years or the expected product lifetime).
- Penalties up to €15M or 2.5% of global annual turnover, whichever is higher.
United States — federal cyber requirements
- Executive Order 14028 (2021) — software supply chain security: SBOMs, attestations, secure development practices.
- NIST SP 800-218 (SSDF) — the Secure Software Development Framework that mandates ongoing vulnerability identification and remediation in components.
- OMB M-22-18 / M-23-16 — federal agencies collect self-attestations from software producers against the SSDF; vendors selling to government must comply.
- CISA Known Exploited Vulnerabilities (KEV) catalog — sets a 14-day federal patch deadline once a CVE is added.
- Sector-specific mandates — FDA refuse-to-accept policy on medical-device cybersecurity (since 2023), TSA pipeline directives, NERC CIP for the bulk power grid.
The kernel-shaped hole
Both regulatory frameworks expect manufacturers to monitor and act on vulnerabilities in their dependencies continuously. For commercial Linux products, the dominant dependency is the kernel — and the kernel is precisely where mainstream NVD-driven tooling has its largest blind spot. Most kernel CVEs are visible to the kernel community for weeks before they appear in scanners. That window is the difference between a defensible compliance posture and a regulator question you can't answer.
KernelScan is the kernel-native bridge between the Linux community's security process and the commercial software supply chain. It pulls directly from kernel.org's CVE feed, processes every record through a deterministic pipeline as it lands, and re-publishes it in the formats your scanners and SBOM tools already consume — OSV for Trivy / Grype / Renovate, CycloneDX VDR for Dependency-Track and richer pipelines.
What that means for you
- A vulnerability monitoring story regulators will accept — kernel CVEs surface as soon as upstream publishes them, not weeks later.
- A defensible audit trail — per-CVE, per-config verdicts with the upstream commit references, branch fix versions, and (on Pro) deployment-context exposure analysis.
- A scanner-native delivery format — drop-in OSV mirror for Dependency-Track / Trivy / Grype, or CycloneDX VDR for richer pipelines. No new tools to procure.
- Direct evidence for SBOM and SSDF attestations — every record carries provenance, attribution, and a CC-BY-4.0 license that survives redistribution.
What is KernelScan?
KernelScan turns a Linux kernel .config into a CycloneDX VEX report — a per-CVE verdict (exploitable, not_affected, in_triage) calibrated to your kernel version and build options. Pro accounts add a deployment-context layer: an AI assessor reasons about your product's interfaces, network exposure, and hardening to rule out CVEs that simply can't reach the device in the field.
Three layers of analysis stack on top of each other:
- Layer 1 — Config analysis. A deterministic engine maps each CVE's fix commits to CONFIG_* dependencies. If the option isn't compiled in, the CVE is not_affected. No LLM involved.
- Layer 2 — Factor assessment. For exploitable CVEs on products that declare deployment context, an LLM evaluates the CVE description against the relevant subset of factors (e.g. "USB on enclosure + sealed cabinet") and may downgrade the verdict, annotate it as mitigated, or flag it as increased risk. Layer 2 only runs on the Pro tier.
- Layer 3 — VEX generation. Layers 1 and 2 are folded into a single CycloneDX 1.6 VEX file with full provenance — the .config hash, the engine version, and the reasoning behind every status change.
Layer 2 can only refine exploitable CVEs. If config analysis already cleared a CVE, factors can't undo that — the code isn't in the build.
Where the data comes from
KernelScan is a continuous pipeline. Every new CVE published by the Linux kernel CNA is ingested, enriched, mapped, and re-published — directly off kernel.org, well ahead of when the same CVE eventually surfaces in NVD-driven tools.
| Source | What it provides |
|---|---|
| kernel.org CVE feed (Linux CNA) | CVE record, description, introduced-in version, per-LTS-branch fix versions, fix commit hashes. The primary source — KernelScan tracks it directly. |
| NVD (NIST) | CVSS v3 / v4 scoring, CWE classification, vendor references — folded in when NIST's enrichment lands. NVD typically trails kernel.org by weeks for kernel CVEs. |
| Linux kernel git tree (torvalds + stable) | Source of truth for the CVE → CONFIG_* mapper (Makefile rules + fix-commit diffs) and for the kconfig symbol catalog. |
| KernelScan enrichment | Calculated CVSS + CWE for kernel CVEs while NVD enrichment is pending; risk summary & vulnerability analysis on Basic+; factor-aware exploitability verdicts on Pro+. |
Merge logic preserves fields per source — NVD doesn't overwrite kernel.org branch fixes, kernel.org doesn't overwrite NVD CVSS. Re-running an importer is always safe; it upserts.
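The per-source field ownership can be sketched in a few lines. This is an illustrative model only; the field names and storage shape are hypothetical, not KernelScan's schema.

```python
# Illustrative sketch of per-source field ownership during merge: each importer
# only writes the fields its source owns, so re-running it is an idempotent upsert.
FIELD_OWNER = {
    "description": "kernel.org",
    "branch_fixes": "kernel.org",
    "cvss": "nvd",
    "cwe": "nvd",
}

def upsert(record: dict, source: str, incoming: dict) -> dict:
    """Merge incoming data into the stored record, touching only owned fields."""
    for field, value in incoming.items():
        if FIELD_OWNER.get(field) == source:
            record[field] = value
    return record

rec = upsert({}, "kernel.org", {"branch_fixes": {"6.6": "6.6.30"}})
rec = upsert(rec, "nvd", {"cvss": 7.8, "branch_fixes": {}})  # NVD can't clobber branch fixes
```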
The Linux kernel community publishes CVEs weeks ahead of NVD enrichment. KernelScan rides that pipeline directly so your scanners see kernel CVEs as soon as upstream does — not a month later.
CVSS scoring & the gap KernelScan fills
Severity scores (CVSS) are how the rest of the software industry decides what to patch first, what counts as "critical", and which CVEs trip a regulatory threshold. Kernel CVEs sit in an awkward position: the upstream community has principled reasons to refuse to score them, the body that would normally fill the gap (NVD) is chronically backlogged, and downstream consumers still need a number. KernelScan is where that compromise is reconciled.
Why the Linux kernel community doesn't score CVEs
kernel.org operates as a CNA — it assigns CVE IDs, publishes structured records, and tracks fixes per LTS branch. It explicitly does not assign CVSS scores, by policy. The reasoning, articulated repeatedly by kernel maintainers, comes down to three points:
- Severity is heavily context-dependent in the kernel. A use-after-free in a rarely-built USB driver is a fundamentally different attack surface from one in core memory management or networking — but a single CVSS score doesn't carry that context. Encoding it honestly per-CVE is impossible without knowing the consumer's build and deployment.
- The kernel ships fixes per CVE, not severity assessments. Maintainers focus on producing correct, minimal patches and stable-branch backports. Adding a CVSS triage step at the CNA level would slow CVE publication and create false precision the community explicitly doesn't endorse.
- CVSS encourages selective patching. The kernel community's stance is that everything fixable should be fixed on a stable branch — not that customers should be triaged into "High" vs "Low" buckets and gamble on the low end. A single number tends to push consumers in exactly the opposite direction.
The result: every kernel CVE arrives at downstream consumers with no severity, no vector, no CWE — just the description, the affected ranges, and the fix.
Why NVD enrichment is slow
NVD has historically been the institution that filled the scoring gap — NIST analysts manually triage each CVE to add CVSS, CWE, and CPE data. That model has run into hard scaling problems:
- Every record is hand-triaged. A small analyst team is responsible for the entire CVE corpus, across every ecosystem. Throughput is gated on people, not pipelines.
- The kernel firehose is hostile to that model. kernel.org's CNA assigns thousands of CVEs per year — orders of magnitude more than any other major CNA — and the rate is rising as the community gets more aggressive with assignment.
- NVD has publicly acknowledged its backlog. Lengthy delays are common; some records sit in the "Awaiting Analysis" state indefinitely. For a manufacturer relying solely on NVD, the kernel is effectively a blind spot during the window between CNA publication and NIST analysis.
But commercial product developers still need a score
Whatever the merits of the upstream position, here's how vulnerability management actually runs in industry:
- SBOM scanners and vulnerability platforms are CVSS-keyed. Filters, alerts, dashboards, and reporting all sort by severity. An unscored CVE is effectively invisible to most workflows.
- Compliance regimes reference severity thresholds. CRA's "severe vulnerability" reporting trigger, CISA's KEV catalog with its 14-day federal patch deadline, internal patch SLAs ("all High and Critical within X days") — they all depend on a score being present.
- Customer security questionnaires and audits ask for it. "How many High-severity CVEs are open in your product?" is the standard form. "Unscored" is not an acceptable answer.
- Engineering prioritization needs a comparable metric. Without a number, every CVE looks the same; with one, teams can rank and plan releases.
The kernel community's principled refusal to score is defensible. The commercial software supply chain's need for a score is real. The two positions are incompatible — and waiting weeks for NVD isn't a viable answer when a CRA reporting clock starts ticking the moment a vulnerability is publicly known.
How KernelScan steps in
KernelScan computes a CVSS v3.1 score — vector, base score, severity rating, and a matched CWE — for every kernel CVE while NVD enrichment is pending. The score is derived from the published CVE record, the fix patch, and the affected subsystem context. Every record is stamped so the score's provenance is unambiguous:
- NVD's score, when it eventually lands, is preserved. KernelScan's calculated score lives in a namespaced field, and the canonical NVD score takes precedence in scanner workflows that pick one source per scalar (Dependency-Track among them). KernelScan's score surfaces when — and only when — there is no NVD score yet.
- Per-LTS-branch fix tracking is published alongside the score. Severity is one signal; knowing exactly which stable branches contain the fix and which don't is what turns "patch the High ones" into a concrete release plan.
- Provenance and license travel with every record. Source organization (KernelScan / NVD / kernel.org), CC-BY-4.0 license, and machine-readable attribution — so audit trails and SBOM artefacts can show exactly where each piece of data came from.
The compromise: the upstream community keeps its principled position and is never asked to produce a number it can't honestly defend; manufacturers get a score that exists at the moment the CVE is published, not weeks later when NVD catches up.
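The precedence rule is simple enough to state as code. The sketch below is illustrative; the field names are hypothetical, not KernelScan's actual record schema.

```python
# Illustrative sketch of score precedence: the canonical NVD score always wins
# once it lands; the KernelScan-calculated score surfaces only while NVD
# enrichment is pending. Field names are hypothetical.
def effective_cvss(record: dict):
    """Return (score, source) by precedence, or None if the CVE is unscored."""
    if record.get("nvd_cvss") is not None:
        return record["nvd_cvss"], "nvd"                 # canonical score takes precedence
    if record.get("kernelscan_cvss") is not None:
        return record["kernelscan_cvss"], "kernelscan"   # gap-filler until NVD catches up
    return None
```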
Live CVE feed (OSV + CycloneDX)
KernelScan re-publishes the entire kernel CVE corpus as a public, machine-readable feed in two formats. This is the "NVD lag killer for kernel CVEs" — subscribers see new kernel CVEs the same day they land instead of the same month.
Two formats
- OSV 1.6 — what Trivy, Grype, OSV-Scanner, Renovate, and Dependency-Track's OSV mirror consume natively.
- CycloneDX 1.6 VDR (Vulnerability Disclosure Report) — native to Dependency-Track and richer for our AI-extended fields.
Endpoints
Two parallel surfaces — Bearer-authenticated for scripts and CI, and tokenized URLs for tools like Dependency-Track that don't forward Authorization headers.
| Method | Path | Purpose |
|---|---|---|
| GET | /feed/index.json | Manifest — counts, freshness, license. Public, no auth. |
| GET | /feed/osv/all.json | Bulk OSV — every kernel CVE. Free account+. |
| GET | /feed/osv/{cve_id}.json | Single OSV record. Free account+. |
| GET | /feed/cyclonedx/all.json | Bulk CycloneDX VDR. |
| GET | /feed/cyclonedx/{cve_id}.json | Single-vulnerability VDR. |
| GET | /feed/t/{token}/... | Same surfaces with the credential in the URL path — for Dependency-Track. |
Tier semantics
The same URL serves every tier; the body changes based on the authenticating account.
| Field | Free | Basic+ |
|---|---|---|
| Bulk endpoints — coverage window | Last 60 days | All tracked CVEs |
| Per-CVE endpoints — coverage | Unrestricted | Unrestricted |
| NVD CVSS / CWE | ✓ | ✓ |
| AI-calculated CVSS, severity, CWE | ✓ | ✓ |
| AI risk summary | — | ✓ |
| AI vulnerability analysis | — | ✓ |
Per-CVE lookups (/feed/osv/CVE-2024-XXXX.json) are unrestricted at every tier — a free account can always look up any specific CVE by ID. Only the bulk endpoints are windowed.
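The windowing rule can be sketched as follows. The field names and list shapes here are hypothetical, chosen only to illustrate the tier semantics described above.

```python
from datetime import date, timedelta

# Illustrative sketch of feed windowing: free-tier bulk responses cover only
# the last 60 days, while per-CVE lookups are never windowed.
WINDOW_DAYS = 60

def bulk_window(cves: list[dict], tier: str, today: date) -> list[dict]:
    """Apply the free-tier 60-day window to a bulk listing."""
    if tier != "free":
        return cves                                  # paid tiers: all tracked CVEs
    cutoff = today - timedelta(days=WINDOW_DAYS)
    return [c for c in cves if c["published"] >= cutoff]

def single_lookup(cves: list[dict], cve_id: str):
    """Per-CVE lookup: unrestricted at every tier."""
    return next((c for c in cves if c["id"] == cve_id), None)
```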
CVE IDs and aliasing
OSV records use a KernelScan-namespaced ID (KSCAN-CVE-2024-1086) and carry the canonical CVE ID in aliases. This mirrors how GHSA records relate to CVE — Dependency-Track registers KernelScan as a distinct vulnerability source and collapses our record into the same finding as NVD's via the alias. One row per CVE in the findings view, both sources visible, one audit trail.
CycloneDX VDR records use the canonical CVE-… ID directly; CDX has no equivalent of OSV's namespace-id model.
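A minimal OSV-shaped stub shows the id/alias relationship. Only the two relevant fields are sketched here; real records carry much more, and this is illustrative rather than KernelScan's exact output.

```python
# Minimal OSV-shaped stub of the namespaced-id + alias model: the primary id
# is KernelScan-namespaced, and the canonical CVE id travels in aliases, which
# is what lets Dependency-Track collapse the record into the same finding as NVD's.
def to_osv_stub(cve_id: str) -> dict:
    return {
        "id": f"KSCAN-{cve_id}",   # KernelScan-namespaced primary id
        "aliases": [cve_id],       # canonical CVE id, used for finding collapse
    }
```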
License
Feed contents are CC-BY-4.0. Free for redistribution, commercial use, and integration into SBOM / vulnerability scanning pipelines. Attribution travels in-band on every record so downstream tools surface it automatically.
Setting up Dependency-Track
The fastest path from a self-hosted Dependency-Track instance to fresh kernel CVEs is to configure KernelScan as DT's OSV mirror. KernelScan publishes a curated ecosystems.txt so DT's admin UI shows KernelScan as a first-class ecosystem checkbox alongside PyPI, npm, Debian, etc.
Why bother
- OSV's stock Linux stream is empirically broken. Sample-checked counts show under 1% coverage of the kernel.org CNA's 2024 corpus. KernelScan ships the missing records.
- Earlier kernel CVEs. KernelScan re-publishes weeks ahead of when the same CVE lands in NVD-driven scanners.
- Per-LTS-branch fix tracking. One OSV range per LTS branch (mainline, 6.6, 6.1, …) so DT's affected-version matcher gets data NVD doesn't ship.
- No bandwidth tax for the rest of OSV. Tick PyPI in DT and KernelScan returns a 302 to Google's bucket — DT pulls directly. KernelScan never proxies the bytes; your token never reaches Google.
Step 1 — Generate a tokenized URL
DT 4.x doesn't forward Authorization headers to its OSV mirror loader, so KernelScan issues a per-user URL-embedded token (kf_pub_…). On /account, click Generate URL in the Dependency-Track URL card. Copy it immediately — it's shown exactly once. KernelScan stores only the hash; if you lose it, click Rotate URL.
Step 2 — Verify it works
URL='https://kernelscan.io/feed/t/kf_pub_…/osv/KernelScan/all.zip'
curl -sS -o /tmp/kscan.zip -w 'HTTP %{http_code}, %{size_download} bytes\n' "$URL"
unzip -l /tmp/kscan.zip | tail -1 # ~13K KSCAN-CVE-*.json files
Invalid tokens return 404 (never 401) so the URL space can't be enumerated.
Step 3 — Configure Dependency-Track
- Sign in to DT as an admin.
- Open Administration → Vulnerability Sources → OSV.
- Set the OSV Mirror URL to the /osv/ base of your token URL: https://kernelscan.io/feed/t/<your-token>/osv/
- Save, then reload — DT fetches our ecosystems.txt and re-populates the ecosystem list. KernelScan appears at the top.
- Tick KernelScan + any other ecosystems your SBOMs need.
- Set sync cadence to ≥ 60 minutes. Faster polling buys nothing — KernelScan re-publishes upstream changes promptly — and risks throttling.
- Click Mirror OSV to trigger an immediate sync.
Step 4 — Verify ingestion
After DT's first sync completes, open a known kernel CVE in DT (one that already had an NVD entry is a safe choice). You should see:
- Title — the KernelScan record id (e.g. KSCAN-CVE-2024-1086). KernelScan deliberately doesn't synthesise summaries from description prose, matching NVD / osv.dev convention so DT's title column stays clean.
- Source — KernelScan appears alongside NVD in the sources list. Per-record package.ecosystem remains Linux so DT's component matcher works unchanged.
- Severity / CVSS — DT picks one source per scalar field by precedence, so an NVD score will usually win. Where NVD has no score yet, KernelScan's AI-calculated CVSS surfaces instead.
- Affected ranges — multiple ranges. KernelScan contributes one per LTS branch; the Affected Version Attribution view shows which source supplied each.
- References — kernel.org patch URLs, the KernelScan landing page for the CVE, and vendor advisories alongside NVD's references.
For a stronger signal, pick a CVE that is in cvelistV5 but not yet in NVD. The DT finding only exists thanks to KernelScan — without this integration it would only show up after NVD enrichment landed weeks later.
Throughput & throttling
Each tokenized URL is rate-shaped per-token. Normal state serves at 2 MB/s — the 19 MB OSV mirror zip in ~10 seconds. If a misconfigured DT polls aggressively (> 60 requests/hour), the URL flips to 64 KB/s until the customer rotates. Recovery is rotation-only by design — auto-recovery against a broken poll loop just oscillates.
The Account page shows the current state in real time (yellow banner when throttled). HTTP responses also carry X-KernelScan-Feed-Throttled and X-KernelScan-Feed-Requests-Hour so external monitoring can pick the state up. To recover after a throttle event, click Rotate URL on /account, then update DT's mirror URL to the new value — the old URL stops working immediately.
Operational notes
- One active URL per account. Generating or rotating produces a fresh URL and atomically invalidates the previous one. There is no multi-token mode — if you need separate mirrors for staging and prod DT instances, mint them under separate accounts.
- Rotation cadence. No hard requirement, but rotate periodically as you would any long-lived credential. Generate → update DT's mirror URL → DT re-syncs cleanly.
- Revocation. Click Revoke on /account to invalidate the URL without minting a replacement. DT's next sync attempt returns 404.
- One alias per record. KernelScan ships a KSCAN-CVE-… id with the canonical CVE id in aliases. That single alias is what lets DT collapse our record into the same finding as NVD's. Make sure your DT instance has alias sync enabled for the OSV source — otherwise you get two findings per CVE.
- CI / scripts use a different credential. The URL token (kf_pub_…) is read-only, scoped to feed access, and throughput-shaped — meant for tools that can't send Authorization headers. CI scripts and your own automation should use a Bearer API key (ks_live_…) against the flat URLs instead. The two are minted independently and rotate independently.
Troubleshooting
| Symptom | What to check |
|---|---|
| Project shows 0 findings even though DT ingested ~13K KernelScan records. | Almost always a CPE-only kernel component in your SBOM. OSV is a PURL-only schema, so DT's OSV matcher cannot hit a component that has only a CPE. Add a PURL alongside the CPE, e.g. pkg:generic/linux/kernel@6.6.67, and re-run vulnerability analysis. The fix belongs in whatever produces the SBOM (Syft, Trivy, hand-written CycloneDX); for a one-off check you can also edit the component's PURL inline in DT's project view. |
| DT returns 404 on every fetch. | The token has been rotated, revoked, or the URL is wrong. Re-mint from /account and update DT's mirror URL. |
| KernelScan checkbox doesn't appear in DT's Ecosystems list. | DT caches ecosystems.txt per OSV-config save. Save the mirror URL, reload the admin page, and the checkbox should appear. If it still doesn't, hit <base>/ecosystems.txt directly with curl to confirm DT can reach it from its network. |
| DT shows 0 KernelScan vulnerabilities after sync. | Check DT's logs for OSV download activity. Confirm your DT version honours vuln-source.google.osv.base.url (DT 4.10+) or vuln-source.osv.mirror.url (older). |
| Findings show "no affected ranges". | Records use package.ecosystem = "Linux" for DT's matcher. If a finding has no ranges, the issue is DT's Linux-ecosystem matcher, not the KernelScan brand on the zip. |
| Duplicate findings appear (NVD + KernelScan side-by-side). | OSV alias sync is disabled on your DT instance. Re-enable it so the canonical CVE id in aliases collapses our record into NVD's. |
| PyPI / npm / etc. mirrors fail with cross-origin redirect errors. | DT's HTTP client follows 302s by default and the GCS bucket is publicly readable, so this should "just work". If your DT runs in an egress-restricted network, allowlist osv-vulnerabilities.storage.googleapis.com so DT can follow the redirect. |
| Stale data. | DT's sync interval gates freshness. KernelScan can be at most ~4 hours behind cvelistV5; DT adds its own sync delay on top. Lower DT's cadence if freshness matters, but keep it ≥ 60 minutes. |
| Older CVE missing from DT after sync (free tier). | If the CVE was published more than 60 days ago, the free-tier bulk feed deliberately omits it. Hit the per-CVE endpoint directly (/feed/t/<token>/osv/CVE-2023-XXXX.json) — that endpoint is unrestricted within tier — or upgrade if your DT instance needs the full historical corpus in bulk. |
| Throttle banner showing yellow on /account. | DT polled too aggressively (> 60 req/h) and the URL is now serving at 64 KB/s. Lower DT's sync cadence to ≥ 60 minutes, then click Rotate URL to reset. |
MCP endpoint — KernelScan for agents
KernelScan is a service for humans and for agents. The web UI and the REST API are how people interact; the Model Context Protocol endpoint at /mcp is how Claude Desktop, Cursor, and any other MCP-speaking client interact. Same database, same tier rules, same per-user API keys — just a different entry point.
Plug your KernelScan account into your AI tooling and ask things like "which of my products have unfixed KEV CVEs this month?" or "summarise the high-severity kernel CVEs published in Q1 affecting the BPF subsystem". The agent calls KernelScan tools directly and answers from your live data — no copy-paste, no screenshots, no stale exports.
What you can do over MCP
Eight tools cover the main read and write paths. Read tools work for every plan (free callers see only CVEs published more than 60 days ago, mirroring how the public feed is windowed). Write tools require a paid plan and run the same analysis pipeline as the web app.
| Tool | What it does | Plan |
|---|---|---|
| whoami | Identity, plan, quota state, and effective rate limits — call this first. | All |
| search_cves | Search by query, severity, CVSS minimum, or publish date. Newest first. | All* |
| get_cve | Full single-CVE detail — NVD + AI scoring, fix versions, references, CISA KEV. | All* |
| list_products | Your products with denormalised stats (total / affected / KEV / severity histogram). | Basic+ |
| get_product | One product plus the parsed CVE breakdown (affected, not_affected, in_triage, KEV). | Basic+ |
| get_product_vex | The CycloneDX 1.6 VEX document straight from the 24 h cache. | Basic+ |
| create_product | Upload a kernel .config, run analysis, return stats. Same code path as POST /api/products. | Basic+ |
| update_product | Edit a product. Re-runs analysis when kernel_version, arch, or config_text changes. | Basic+ |
* Free callers get a 60-day publish lag on CVE reads — the same window the public feed applies to bulk endpoints, applied here to search_cves and get_cve.
Plan matrix
| Capability | Free | Basic | Pro / Enterprise |
|---|---|---|---|
| CVE read tools (60-day lag for free) | ✓ | full | full |
| AI risk summary & vulnerability analysis on get_cve | — | — | ✓ |
| Read your product analyses | — | ✓ | ✓ |
| Create & update products via agent | — | up to 3 | 10 / unlimited |
| Security factors on write tools | — | — | ✓ |
Authentication
Every request carries the same ks_live_… API key the REST surface accepts:
Authorization: Bearer ks_live_<your_personal_api_key>
Mint or revoke keys on /account. Validation is hash-based — KernelScan only stores the SHA-256; the full key is shown exactly once. JWT browser sessions and tokenised feed URLs (kf_pub_…) are not accepted on /mcp — only the API-key surface, so MCP traffic is consistently identifiable.
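The hash-only storage model can be sketched in a few lines. The storage shape here is illustrative, not KernelScan's actual implementation; only the SHA-256-comparison idea is taken from the text above.

```python
import hashlib

# Illustrative sketch of hash-based key validation: only the SHA-256 digest of
# each ks_live_... key is persisted, so a database leak never exposes a usable
# credential. The dict stands in for whatever store the real service uses.
def store_key(db: dict, key: str, user: str) -> None:
    db[hashlib.sha256(key.encode()).hexdigest()] = user

def authenticate(db: dict, presented: str):
    """Return the owning user for a presented key, or None if unknown."""
    return db.get(hashlib.sha256(presented.encode()).hexdigest())
```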
Connecting Claude Desktop
Add a server entry to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json on macOS, equivalent path on Windows / Linux):
{
"mcpServers": {
"kernelscan": {
"transport": "http",
"url": "https://kernelscan.io/mcp",
"headers": {
"Authorization": "Bearer ks_live_..."
}
}
}
}
Restart Claude Desktop. The KernelScan tools appear in the tool picker. First-time check: ask "Use the KernelScan whoami tool" — the response shows your plan and quota, confirming auth is working.
Connecting Cursor and other MCP clients
Any client speaking Streamable HTTP MCP works. Point it at https://kernelscan.io/mcp with the same Bearer header. The server advertises a KernelScan server name on the initialize handshake.
Verification
For a bare-metal smoke check outside any agent, use @modelcontextprotocol/inspector:
npx @modelcontextprotocol/inspector
# In the UI:
# Transport: HTTP
# URL: https://kernelscan.io/mcp
# Header: Authorization: Bearer ks_live_...
Expected: whoami returns your account state, the eight tools above are listed, and a free-tier get_cve call against a CVE published in the last 60 days returns not found. A get_product call against a product_id belonging to another user also returns not found — never a permission error and never the product itself — so existence isn't leaked across accounts.
Rate limits — bounded by class, per user
MCP tools are grouped into three cost classes, with per-user limits scaled to your plan. The whoami tool returns your current limits.
| Class | Tools | Free | Basic | Pro | Enterprise | Window |
|---|---|---|---|---|---|---|
| light | whoami, get_cve, list_products | 240 | 600 | 600 | 600 | 60 s |
| medium | search_cves, get_product, get_product_vex | 30 | 120 | 240 | 240 | 60 s |
| heavy | create_product, update_product (when re-analysis runs) | 0 | 5 | 15 | 30 | 10 min |
When a limit trips, the tool error tells the agent how many seconds to back off — well-behaved clients pace themselves automatically. search_cves additionally enforces a 3-200 character query length to avoid pathological full-table scans.
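A well-behaved client loop might look like the sketch below. The error shape (a retry_after_seconds field) is hypothetical, standing in for however a given MCP client surfaces the back-off hint.

```python
import time

# Illustrative sketch of paced retries: when a rate-limit error reports how
# long to wait, sleep for exactly that long instead of hammering the endpoint.
def call_with_backoff(call, max_attempts=3, sleep=time.sleep):
    for _ in range(max_attempts):
        result = call()
        retry_after = result.get("retry_after_seconds") if isinstance(result, dict) else None
        if retry_after is None:
            return result          # success, or a non-rate-limit response
        sleep(retry_after)         # back off as long as the server asked
    raise RuntimeError("rate limit persisted after retries")
```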
Why this exists
Vulnerability triage is a natural fit for agents: the inputs are structured, the questions are repetitive, and the people asking them ("did anything ship this month I need to patch?") often don't speak CVE-CVSS-CWE fluently. An MCP endpoint lets KernelScan plug straight into the agent workstation — Claude Desktop, Cursor, Continue, your in-house tool — without anyone writing a wrapper integration. The same ks_live_… key works everywhere, the same tier rules apply, the same data is served. KernelScan for humans, KernelScan for agents.
CVE → CONFIG_* mapper
The mapper is the deterministic engine that decides "this CVE is not_affected because CONFIG_FOO is disabled in your build". No AI involved — it's pure Makefile and git diff parsing.
The algorithm
- Parse all kernel Makefiles (per series + patch level) to build a file → CONFIG_* map. The mapper understands the whole vocabulary: obj-$(CONFIG_FOO) += bar.o, subdirectory gating (obj-$(CONFIG_X) += subdir/), conditional descents, and Kbuild fragments.
- For each CVE with a fix commit, run git diff-tree to enumerate the changed .c files.
- Trace each changed file back through the Makefile rules and any subdirectory gating to collect the CONFIG_* options that gate it into the build.
- Resolve kconfig dependency chains: CONFIG_A depends on B depends on C — if C is off, A is unreachable too.
- Produce a DNF expression ("disjunctive normal form" — sums of products) over CONFIG_* options. A typical expression looks like (CONFIG_NETFILTER && CONFIG_NF_CONNTRACK) || CONFIG_BRIDGE_NETFILTER.
How exclusion works at analysis time
When you upload a .config, the engine evaluates each CVE's DNF expression against your enabled symbols:
- Expression evaluates true → CVE is exploitable (the vulnerable code is in the build).
- Expression evaluates false → CVE is not_affected with justification code_not_present (CycloneDX standard).
- No mapping exists for this CVE in this series — typical for very recent CVEs the mapper hasn't processed yet, or CVEs that touch core code with no CONFIG_* guard → in_triage.
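The evaluation step can be sketched directly. The DNF representation below (a list of AND-clauses that are OR'ed together) is illustrative, not KernelScan's internal data structure.

```python
# Illustrative sketch of DNF evaluation against a .config symbol set: each
# inner list is one AND-clause; the clauses are OR'ed together.
def evaluate_dnf(dnf: list[list[str]], enabled: set[str]) -> bool:
    return any(all(opt in enabled for opt in clause) for clause in dnf)

def config_verdict(dnf, enabled: set[str]) -> str:
    if dnf is None:
        return "in_triage"      # no mapping yet for this CVE in this series
    return "exploitable" if evaluate_dnf(dnf, enabled) else "not_affected"

# The example expression from the list above:
# (CONFIG_NETFILTER && CONFIG_NF_CONNTRACK) || CONFIG_BRIDGE_NETFILTER
expr = [["CONFIG_NETFILTER", "CONFIG_NF_CONNTRACK"], ["CONFIG_BRIDGE_NETFILTER"]]
```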
Per-series, per-patch-level
The mapper runs once per kernel series (e.g. 6.6.x, 6.12.x) and stores the mapping per patch level. Makefile rules drift between series — a config option that gates foo.c in 6.1 may have been renamed or the file moved by 6.12. Per-series storage keeps the mapping precise.
100% deterministic. No model, no probability — given the same kernel source tree and the same fix commit, the mapper produces the same DNF expression every time.
Three-layer VEX analysis
VEX (Vulnerability Exploitability eXchange) is the standard for asserting "this CVE does not apply to me, here's why". KernelScan emits CycloneDX 1.6 VEX with full per-CVE provenance.
Per-CVE statuses
| Status | Meaning | Source layer |
|---|---|---|
| exploitable | The vulnerable code is compiled in and the deployment context (if any) doesn't rule out the attacker. | L1 + L2 |
| not_affected | The vulnerable code isn't in the build (config) or the deployment context eliminates the attack vector (factors). | L1 or L2 downgrade |
| in_triage | No mapping yet for this CVE in this kernel series. We can't decide. You should review manually. | L1 default |
| fixed | The kernel version analyzed already contains the fix. | L1 (version compare) |
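The version comparison behind the fixed status can be sketched as below. This is a naive dotted-number compare for illustration only; real kernel version strings have more edge cases (release candidates, branch suffixes) that a production mapper must handle.

```python
# Naive sketch of the L1 version compare behind "fixed": the analyzed kernel
# already contains the branch's fix when its version is at or past the fix version.
def parse_version(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))

def is_fixed(analyzed_version: str, branch_fix_version: str) -> bool:
    """True when the analyzed kernel already contains the branch's fix."""
    return parse_version(analyzed_version) >= parse_version(branch_fix_version)
```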
What's in the VEX file
- Components — the kernel as an SBOM component, with version, architecture, and the .config hash.
- Vulnerabilities — every relevant CVE with description, references, CVSS scores, CWE.
- Analysis blocks — per-CVE: state, justification (e.g. code_not_present), and response recommendations.
- Detail — the human-readable reasoning. For Layer 1: which CONFIG_* options were missing. For Layer 2: the LLM's argument referencing your factors.
- Properties — kernelscan:engine_version, kernelscan:config_hash, kernelscan:factor_set_hash, attribution.
Layer 2 verdicts (Pro tier)
When a product has security factors, the LLM produces one of four verdicts per (CVE, factor combination):
| Verdict | VEX effect |
|---|---|
| `not_affected` | Downgrade from `exploitable`. Justification includes the LLM's reasoning. |
| `mitigates` | Stays `exploitable`; adds a `kernelscan:mitigating_context` property. |
| `increases_risk` | Stays `exploitable`; adds a `kernelscan:risk_context` property. |
| `neutral` | No change. |
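Folding a verdict into a per-CVE record then looks roughly like this. The record shape is simplified for illustration (CycloneDX stores properties as a name/value list, flattened to a dict here); the property names are the documented ones:

```python
# Sketch of the verdict table as code. Record shape is simplified;
# only the kernelscan:* property names come from the documentation.

def apply_verdict(record: dict, verdict: str, reasoning: str) -> dict:
    if verdict == "not_affected":
        record["state"] = "not_affected"   # downgrade, reasoning recorded
        record["detail"] = reasoning
    elif verdict == "mitigates":
        record.setdefault("properties", {})["kernelscan:mitigating_context"] = reasoning
    elif verdict == "increases_risk":
        record.setdefault("properties", {})["kernelscan:risk_context"] = reasoning
    # "neutral" falls through: no change
    return record

rec = apply_verdict({"state": "exploitable"}, "mitigates", "sealed enclosure")
print(rec["state"])  # → exploitable (only not_affected downgrades)
```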
Security factor taxonomy
The factor taxonomy is what makes KernelScan more than a CVE list. It is a structured way to describe your specific product — where it lives, what it can be touched with, what runs on it, who interacts with it — so that each kernel CVE can be evaluated against your reality and not against an abstract worst case. This section is your modelling guide. Read it before you fill in the New Product Wizard.
Why deployment context decides reachability
The same kernel runs on a public-kiosk payment terminal, a sealed datacenter appliance, a residential router, an industrial controller, and a connected car. The same kernel CVE has wildly different real-world impact across those products. A use-after-free in a USB driver is a critical issue on a publicly-accessible kiosk and a near-non-issue on a sealed appliance in a locked cabinet. Severity scores don't capture this; deployment context does.
KernelScan asks two questions for every CVE:
1. Is the vulnerable code in the build? Answered deterministically by the config mapper — a function of your `.config`.
2. Can a real-world attacker actually reach it on your product? Answered by your factor taxonomy. This is what this section is about.
If the answer to (1) is no, the CVE is `not_affected` immediately, no matter what factors you set. If the answer to (1) is yes, the factor taxonomy decides what happens next.
The four verdict outcomes
For each CVE that survives the config check, KernelScan produces one of four verdicts based on your factors:
| Verdict | What it means | VEX effect |
|---|---|---|
| `not_affected` | Your factors eliminate every plausible path to the vulnerable code. The attacker required by this CVE cannot exist near your product. | Downgrade — the CVE drops out of "must patch" lists. Reasoning is recorded for audit. |
| `mitigates` | Your hardening or deployment context meaningfully raises the bar for exploitation but does not eliminate it. The CVE remains exploitable; the mitigating context is documented. | Stays `exploitable`. A `kernelscan:mitigating_context` property is added explaining what reduces the risk. |
| `increases_risk` | Your factors make the CVE more dangerous than the description alone suggests — e.g. internet-facing + multi-tenant + a kernel network CVE. | Stays `exploitable`. A `kernelscan:risk_context` property flags the elevated risk. |
| `neutral` | Your factors are relevant to this CVE but do not change the verdict in either direction. | No change. |
Reasoning about the threat actor, not the slugs
Verdicts always argue about which kind of attacker a CVE requires — local user, network attacker, physically adjacent person, USB-wielding visitor, malicious tenant, supply-chain insider — and whether your product's factors permit that actor to plausibly exist near the device. A CVE that needs physical USB access is not_affected on a sealed appliance in a locked datacenter cage because no actor at that exposure tier can reach the port. The reasoning is about the threat model the CVE describes, not about whether usb-enclosure happens to be in your slug list.
This is why honest factor selection matters more than checking every box. Over-stating exposure — selecting Wi-Fi when there is no radio, exposed USB when ports are sealed, internet-facing when the device is on a private LAN — produces conservative but useless assessments where everything stays exploitable. Under-stating exposure produces a comfortable VEX that doesn't survive an audit. Be precise, the way you would be in front of a regulator.
Scoping: which factors apply to which CVE
Most CVEs only intersect with a subset of the taxonomy. A Wi-Fi management-frame CVE has nothing to say about CAN bus exposure; a CAN bus CVE has nothing to say about Wi-Fi. KernelScan narrows each CVE to the categories that actually bear on its threat model before producing a verdict — a USB driver CVE looks at physical interfaces and user access; a TCP stack CVE looks at network exposure and hardening. You don't need to worry about this scoping yourself. Fill out every category that genuinely applies to your product; the system will use only the parts that are relevant to each CVE.
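A minimal sketch of that scoping step, with an invented subsystem-to-category mapping (the real mapping is internal to KernelScan):

```python
# Hypothetical scoping sketch: narrow a CVE to the factor categories that
# bear on its threat model. SCOPE and the category keys are invented.

SCOPE = {
    "usb": ["physical-interfaces", "user-access"],
    "tcp": ["network-exposure", "security-hardening"],
    "wifi": ["wireless"],
}

def relevant_factors(subsystem: str, factors: dict[str, list[str]]) -> dict:
    categories = SCOPE.get(subsystem, list(factors))  # unknown: use everything
    return {c: factors[c] for c in categories if c in factors}

profile = {"network-exposure": ["lan-only"], "wireless": []}
print(relevant_factors("tcp", profile))  # → {'network-exposure': ['lan-only']}
```

A TCP-stack CVE never sees your Wi-Fi answers, and vice versa — you model the whole product once, and each CVE consumes only its slice.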
The 10 categories at a glance
| # | Category | Factors | What it captures |
|---|---|---|---|
| 1 | Deployment Environment | 7 | Where the device physically lives — sets the baseline pool of plausible attackers. |
| 2 | Physical Protection | 4 | The barrier between a nearby attacker and the device's actual interfaces. |
| 3 | Interfaces: On PCB | 8 | Interfaces only reachable by opening the device. |
| 4 | Interfaces: On Enclosure | 8 | Interfaces on the chassis, inside the device's physical security perimeter. |
| 5 | Interfaces: Exposed | 6 | Interfaces accessible without crossing any physical security boundary. |
| 6 | Network Exposure | 8 | The network reachability surface — separate from the physical Ethernet interface tier. |
| 7 | Wireless | 5 | Radios attached to the device. |
| 8 | Execution Context | 6 | What runs on the device and who can put code on it. |
| 9 | Security Hardening | 8 | Active mitigations that change exploitability rather than reachability. |
| 10 | User Access | 5 | Who can interact with the system, how, and at what privilege level. |
The remaining subsections walk through each category in detail. Every factor lists what it actually means and when you should select it for your product.
1. Deployment Environment
The single biggest determinant of which threat actors can plausibly exist near the device. A residential gateway, a hospital infusion pump, and a roadside cabinet face different attacker populations even if they ship the same kernel. Pick the option that best describes the typical install location for the product. If the product ships into multiple environments, model the worst realistic one — that is the deployment your VEX has to defend.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
env-private | Private / residential | Device sits inside a private home or trusted residence. The plausible-attacker pool is family members, guests, and (in some threat models) household-network neighbours. | Consumer routers, smart-home hubs, NAS appliances sold for home use. |
env-corporate | Corporate / access-controlled | Device is in a badge-controlled office or campus. Casual physical attackers are excluded; insider risk and visiting-contractor risk remain. | Enterprise networking, office printers, conference-room AV systems, employee laptops. |
env-public-indoor | Public indoor | Anyone passing through the building can approach the device. Physical-access CVEs become realistic threats. | Retail-floor terminals, hospital lobby kiosks, airport-gate gear, museum displays. |
env-public-outdoor | Public outdoor | Even less controlled than public indoor — weather plus persistent unsupervised physical access. | Pole-mounted cameras, traffic cabinets, parking-meter gateways, smart-streetlight controllers. |
env-datacenter | Data center | Controlled server room. Physical access is restricted to operators with key-card or escort; the dominant threat is insider or remote. | Rack-mounted servers, storage arrays, datacenter switches, carrier edge gear. |
env-industrial | Industrial / OT | Factory floor or operational-technology environment with a defined operator population. Often combined with industrial fieldbus interfaces (CAN, Modbus, RS-485) and long-lived deployments. | PLCs, HMIs, industrial gateways, factory-floor controllers, SCADA RTUs. |
env-vehicle | Vehicle / mobile platform | Installed in a vehicle. CAN bus adjacency, OBD-II port exposure, and unattended-vehicle attacks become relevant. | Automotive ECUs, fleet telematics, in-vehicle infotainment, agricultural-vehicle controllers. |
2. Physical Protection
Pairs with Deployment Environment. Once a plausible attacker is near the device, the question is what stands between them and the actual interfaces. Sealed-and-locked is the strongest combination; tamper-evident-only offers forensic value, not prevention. Pick exactly one — the strongest barrier actually standing between an attacker and the interfaces determines the choice. A locked cabinet around an open chassis is still effectively "locked cabinet"; a sealed enclosure not in a cabinet is still effectively "sealed".
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
phys-sealed | Sealed / tamper-resistant enclosure | Opening the device requires tools and produces observable damage. PCB-tier interfaces (JTAG, internal headers) become impractical for casual attackers. | Industrial gateways with epoxy or ultrasonic-welded housings, automotive ECUs, sealed appliance products. |
phys-locked-cabinet | Locked cabinet / rack | An additional key-or-card barrier around the device. Combines well with sealed enclosure. | Rack-mounted gear in colocation, network closets, locked equipment rooms. |
phys-tamper-evident | Tamper-evident only | Seals reveal that the device has been opened, but do not prevent it. Useful for incident response, weak as a real-time mitigation. | Gear with frangible warranty seals, single-use tamper labels, no enforcement enclosure. |
phys-open | Fully open / no enclosure | Dev board, open chassis, bare PCB. Every interface tier collapses to its most exposed form — PCB headers might as well be public ports. | Reference designs, evaluation kits, dev-board-as-product offerings. |
Interface exposure tiers — the same port at three different risk levels
The next three categories all describe physical interfaces, but split them into three exposure tiers because the same kind of port has fundamentally different attack-surface implications depending on where it sits. A USB port on a PCB header inside a sealed enclosure in a locked cabinet is not the same threat as a USB port on a public kiosk's faceplate.
| Tier | Where it sits | Reachable by | Mitigated by Physical Protection? | Examples |
|---|---|---|---|---|
| PCB (internal) | Inside the device. Requires disassembly to reach. | Anyone who can open the enclosure — i.e. an attacker willing to break a seal or pick a lock. | Yes — heavily. A sealed enclosure plus a locked cabinet effectively eliminates this tier for casual attackers. | JTAG / SWD headers, GPIO / I²C / SPI buses, on-board USB headers, internal CAN headers. |
| Enclosure | On the chassis, inside the device's physical security perimeter. | Anyone who can reach the device itself — operators, visitors, or anyone with chassis access. | Partially — combines with Deployment Environment to determine actual reachability. A chassis USB port in a locked rack is operator-only; the same port on a desktop appliance is anyone in the room. | USB ports, RJ-45 Ethernet jacks, serial consoles, SD card slots, chassis CAN connectors. |
| Exposed | Beyond the physical security perimeter. Touchable without crossing any boundary. | Anyone — by definition. Any plausible attacker the deployment environment admits. | No. By definition there is no physical barrier; Physical Protection cannot mitigate this tier. | Kiosk USB, public Ethernet jack, OBD-II port, fleet field-wiring CAN, exposed sensor probes. |
Be deliberate. If your product has Ethernet on the back panel and Ethernet running into the field on a different connector, both apply. If a USB port is on the chassis but behind a screwed-on faceplate, model it as enclosure-tier and select the appropriate Physical Protection.
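The tier table reduces to a small decision, sketched here with the documented Physical Protection slugs; the returned strings are informal summaries for illustration, not API values:

```python
# Sketch of the tier logic above: who can plausibly reach a port, given
# its exposure tier and the Physical Protection factor.

def reachable_by(tier: str, protection: str) -> str:
    if protection == "phys-open":
        tier = "exposed"              # open chassis collapses every tier
    if tier == "exposed":
        return "anyone the environment admits"
    if tier == "enclosure":
        return "anyone with chassis access"
    # PCB tier: the attacker must first get inside the enclosure
    if protection in ("phys-sealed", "phys-locked-cabinet"):
        return "effectively nobody (casual attackers excluded)"
    return "anyone willing to open the device"

print(reachable_by("pcb", "phys-sealed"))
```

Note how `phys-tamper-evident` leaves the PCB tier reachable: seals record the intrusion but do not prevent it.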
3. Interfaces: On PCB (internal)
Interfaces only reachable by opening the device. CVEs in subsystem drivers — JTAG, USB, serial, peripheral buses — become near-irrelevant for products with strong Physical Protection if the corresponding interface only exists at this tier. Select what is physically present on the board, even if you do not advertise it.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
jtag-pcb | JTAG / SWD debug | Hardware debug header on the PCB. | Production boards still carry the JTAG / SWD pads — even if disabled in firmware, the connector is still physically present. |
gpio-i2c-spi-pcb | GPIO / I²C / SPI buses | Low-level peripheral buses exposed on internal headers or test points. | Boards expose I²C / SPI / GPIO for sensors, EEPROMs, secure elements, or expansion modules. |
usb-pcb | USB header (internal) | USB header on the PCB, only reachable by opening the device. | Internal USB connectors used for cellular modules, secure dongles, or factory provisioning. |
serial-pcb | Serial / UART test points | Serial debug pads or pin-header on the PCB. Very common in embedded designs. | Any board with a UART debug header (most do, even if depopulated in production). |
pcie-pcb | PCI / PCIe (on-board) | PCIe components or M.2 slots inside the enclosure. | On-board PCIe-attached storage, internal M.2 modems, on-board accelerator cards. |
can-pcb | CAN bus header (internal) | Internal CAN connector. Driver-level CAN CVEs (CONFIG_CAN) need this. | Automotive ECUs and industrial gear that route CAN through internal harnesses. |
onewire-pcb | 1-Wire header (internal) | Internal 1-Wire bus connector (CONFIG_W1). | Boards using DS18B20 thermometers, DS2401 ID chips, or other 1-Wire peripherals on internal headers. |
iio-pcb | IIO sensors (ADC/DAC) | Industrial I/O subsystem sensors on the PCB (CONFIG_IIO). | Boards with on-PCB ADCs, DACs, or IIO-driven sensor arrays. |
4. Interfaces: On Device Enclosure
Chassis-level interfaces inside whatever physical security perimeter applies. A chassis USB port in a locked datacenter cabinet is reachable only by operators; the same port on a desktop appliance in a corporate office is reachable by anyone who walks into the room. The category captures presence; Deployment Environment + Physical Protection captures who can actually reach it.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
usb-enclosure | USB port | USB port on the chassis. | Any externally-visible USB-A / USB-C connector on the device. |
ethernet-enclosure | Ethernet port | RJ-45 on the chassis. Network-stack CVEs requiring local-LAN attachment interact here. | Any chassis Ethernet jack, including LAN, WAN, and management ports. |
serial-enclosure | Serial console (RS-232) | RS-232 console port on the chassis — often the legacy management backdoor on industrial gear. | DB-9 / RJ-45 console ports, serial-over-USB management interfaces. |
rs485-enclosure | RS-485 / RS-422 port | Industrial serial bus connector on the chassis. | Modbus-RTU controllers, SCADA RTUs, building-automation gateways. |
sd-mmc-enclosure | SD / MMC card slot | Removable storage slot on the chassis. Storage-driver CVEs requiring physical media interact here. | Devices with externally-accessible SD slots — cameras, embedded systems, single-board appliances. |
pcie-enclosure | PCI / PCIe slot (hotplug) | Hotpluggable PCIe interface on the chassis (Thunderbolt, ExpressCard). DMA-attack CVEs become relevant. | Workstations, servers, or industrial PCs with externally-pluggable PCIe / Thunderbolt. |
can-enclosure | CAN bus connector | CAN connector (DB9 / terminal block) on the chassis. | Industrial CAN gateways, in-cabin automotive gear with externally-routed CAN. |
onewire-enclosure | 1-Wire connector | 1-Wire connector or probe port on the chassis. | HVAC, building-automation, or sensor-aggregation gear with externally-pluggable 1-Wire. |
5. Interfaces: Exposed Beyond Perimeter
Interfaces accessible without crossing any physical security boundary. This is the most attacker-favourable tier — Physical Protection cannot mitigate it because there is, by definition, no protection. The presence of an exposed-tier interface generally turns a "sealed appliance, not_affected" verdict into "exploitable" for any CVE in the corresponding driver. Be honest here: a kiosk USB port behind a removable cover is still usb-exposed; OBD-II in a vehicle is can-exposed regardless of how the vehicle is otherwise secured.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
usb-exposed | USB port (public-facing) | USB port that anyone can physically touch — kiosk, charging station, public-terminal USB. | Payment terminals, public kiosks, transit ticket machines, charging-station chassis. |
ethernet-exposed | Ethernet port (public-facing) | Ethernet jack reachable from outside the security perimeter — public wall-jack, room-terminal jack. | Hotel-room data jacks, conference-room walls, public-area wall plates wired into the device's LAN. |
rs485-exposed | RS-485 / RS-422 (field wiring) | Industrial serial bus running through field wiring beyond the device's perimeter — physical-layer attacks become realistic. | Field-bus deployments where RS-485 cables run unsecured between buildings, fields, or zones. |
sd-mmc-exposed | SD / MMC slot (public-facing) | User-accessible card slot — payment card slot, photo-printing kiosk, public utility meter. | Public-facing devices with end-user-accessible SD slots. |
can-exposed | CAN bus (field wiring) | CAN bus reachable from outside the secure perimeter — OBD-II port, fleet telematics, exposed agricultural CAN. | Vehicles with OBD-II ports, agricultural / construction equipment with field-routed CAN, fleet telematics gateways. |
onewire-exposed | 1-Wire probes (external) | 1-Wire sensor probes physically reachable outside the enclosure. | External 1-Wire temperature / humidity probes used in food-cold-chain or HVAC monitoring. |
6. Network Exposure
The network reachability surface — distinct from the physical Ethernet interface tier. Network-stack CVEs (TCP, IP, IPsec, NFS, …) are dominated by this category. An air-gapped device is immune to entire CVE classes regardless of which network drivers are compiled in; an internet-facing bridge router is the worst case. Select the actual reachability of the device in production, not what the lab prototype could do.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
internet-facing | Internet-facing | Device is directly reachable from the public internet. Worst case for any network-stack CVE. | Public-IP edge routers, VPN concentrators, public-cloud-exposed services, residential gateways. |
lan-only | Internal network only | Device sits on a private network with no inbound internet path. A network attacker must already be on the LAN. | Internal industrial controllers, office printers, corporate-LAN-only appliances. |
air-gapped | Air-gapped | No network connectivity at all. Eliminates network CVEs entirely. | Truly disconnected gear — some defence, classified, or process-control systems. |
tcp-services | Exposes TCP services | Listens on TCP ports. Required precondition for any "remote unauthenticated listener" CVE class. | Any device exposing SSH, HTTP, telnet, SMB, NFS, RDP, custom-protocol listeners — anything inbound. |
nfs-smb | NFS / SMB file sharing | Device exports or mounts NFS / SMB / CIFS shares. NFS / CIFS server CVEs are gated on this. | NAS appliances, file servers, NFS-mounting industrial gear. |
bridge-routing | Bridge / routing mode | Device forwards packets between segments. Bridge / Netfilter CVEs apply. | Routers, layer-2 bridges, transparent firewalls, virtual-network gateways. |
ipv6-enabled | IPv6 enabled | IPv6 stack is active and routable. IPv6-specific stack CVEs require this. | Anything with native IPv6 connectivity — most modern carrier-attached and cloud-hosted gear. |
vpn-endpoint | VPN endpoint | Device terminates VPN tunnels (IPsec, WireGuard, OpenVPN). Tunnel-stack and key-exchange CVEs apply. | Site-to-site / road-warrior VPN gateways, secure-remote-access concentrators. |
7. Wireless
Radios attached to the device. Wireless-stack CVEs (Wi-Fi management frames, Bluetooth pairing, BLE GATT, NFC, baseband-adjacent kernel code) are gated on the corresponding radio being present and active. A device without a Wi-Fi radio is immune to Wi-Fi-stack CVEs even if CONFIG_CFG80211 is compiled in. Select only what is physically present and operational.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
wifi-enabled | Wi-Fi enabled | 802.11 wireless networking is active. Required for any Wi-Fi-stack CVE to be reachable. | The device has a Wi-Fi radio that is enabled in the shipped product (not disabled in firmware). |
bluetooth-enabled | Bluetooth enabled | Bluetooth Classic or BLE is active. Required for BlueZ-stack and pairing CVEs. | Device has an active BT radio — phones, IoT pairing peripherals, BLE-controlled industrial gear. |
cellular-lte | Cellular / LTE modem | Carrier modem present. Some baseband-adjacent kernel CVEs apply (USB / serial bridge to the modem). | Cellular routers, IoT gateways with embedded modems, fleet-telematics gear. |
nfc-interface | NFC interface | NFC controller active. NFC-stack CVEs require the radio to be reachable. | Payment terminals, access-control readers, ID-card scanners. |
lora-zigbee | LoRa / Zigbee / 802.15.4 | Low-power wide-area or mesh radio is present. | LPWAN gateways, smart-home hubs, industrial mesh-sensor concentrators. |
8. Execution Context
What runs on the device and who can put code on it. Privilege-escalation CVEs require an attacker who can already execute code; container-escape CVEs require a hostile container; hypervisor-escape CVEs require a hostile VM. A bare-metal appliance with no untrusted users gives an attacker no foothold to escalate from; a multi-tenant container host is the polar opposite. Select all that apply — these are not mutually exclusive.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
untrusted-code | Runs untrusted user code | The system executes code from untrusted or semi-trusted users — apps, plugins, scripts, browser content with kernel reach. | Hosted runtimes, customer-installable plugin platforms, browsers/JIT in privileged contexts. |
container-host | Container host | Runs Docker, Podman, LXC, or similar. Container-escape CVEs (overlayfs, cgroups, namespaces) interact strongly. | Anything running customer or third-party container workloads — edge-compute platforms, build runners, K8s nodes. |
vm-hypervisor | VM hypervisor host | Runs KVM, Xen, or similar. Hypervisor-escape CVEs interact. | VM-hosting servers, embedded virtualization platforms. |
bare-metal | Bare metal single-purpose | Dedicated appliance running a single workload with no user code. The strongest "no foothold" mitigator. | Sealed industrial appliances, single-purpose firewalls, dedicated function devices. |
multi-tenant | Multi-tenant | Multiple isolated tenants share the kernel. Cross-tenant kernel CVEs apply. | Public-cloud hosts, shared-hosting platforms, MSP-managed multi-customer gear. |
realtime-preempt | Real-time (PREEMPT_RT) | Uses the PREEMPT_RT real-time scheduling patch. Some race-window CVEs behave differently. | Real-time control systems, audio-processing gear, robotics, motion controllers. |
9. Security Hardening
Active mitigations that change exploitability rather than reachability. Hardening rarely turns a CVE from exploitable to not_affected; it more often turns it into mitigates — the CVE remains theoretically exploitable but the practical bar is raised. KASLR raises the cost of memory-disclosure CVEs; SELinux can block lateral movement after initial compromise; secure boot constrains persistence. Select honestly — a feature compiled in but not enforced doesn't count.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
selinux-apparmor | SELinux / AppArmor enforcing | Mandatory access control is in enforcing mode (not permissive, not disabled). | The default policy ships in enforcing mode and the attack surface is meaningfully constrained by it. |
seccomp-enabled | Seccomp enabled | seccomp-bpf system-call filtering is active around exposed processes. | Network-facing daemons, container runtimes, or sandboxed services are running under seccomp profiles. |
kernel-lockdown | Kernel lockdown mode | Lockdown LSM is active — even root cannot modify the running kernel (load arbitrary modules, write /dev/mem, kexec to unsigned kernels). | Secure-boot products that activate kernel lockdown, hardened distros, locked-down appliances. |
secure-boot | Secure boot chain | UEFI Secure Boot or equivalent verified boot is active end-to-end. | The bootloader, kernel, and (often) initramfs are signed and verified at boot. |
dm-verity | dm-verity / integrity | Block-level integrity verification is in use for the rootfs or critical partitions. | Android-style verified-boot, ChromeOS-style verity, embedded OS images using dm-verity. |
readonly-rootfs | Read-only rootfs | Root filesystem is mounted read-only. Eliminates persistence vectors that rely on writing system files. | Embedded OS images, appliance products, immutable infrastructure platforms. |
kaslr-enabled | KASLR enabled | Kernel address-space layout randomization is active. | Standard kernels with KASLR shipped enabled — the default for most modern distros. |
stack-protector | Stack protector enabled | Compiler stack-buffer-overflow protection is active in the kernel build. | The kernel was built with stack-protector / fortify-source enabled. |
10. User Access
Who can interact with the system, how, and at what privilege level. Most local-attacker kernel CVEs require an attacker with at least an unprivileged shell. An appliance with no interactive users gives them no starting point; an SSH-listening server with operator accounts is a different threat model entirely. Pick all that apply.
| Slug | Factor | What it means | Pick this when |
|---|---|---|---|
root-shell | Root shell access | Operators, customers, or technicians have root-level shell access. Many "local privilege escalation" CVEs become low-impact (the attacker is already root), but local-to-kernel attack CVEs become realistic vehicles for kernel-level persistence. | Devices documented to grant operators root shell — many embedded gateways, dev boards, and self-administered appliances. |
ssh-remote | SSH remote access | SSH server is running for remote login. Remote-after-credential-theft CVEs become realistic. | Anything exposing SSH for management — most servers, network gear, Linux-based appliances. |
console-only | Local console only | The only interactive access is via the physical console. Local CVEs require physical access — combines strongly with environment + protection. | Sealed appliances with serial-console-only management, devices with no remote shell. |
web-management | Web management interface | Device is administered via a web UI. Web-app CVEs become primary; kernel exposure is mediated through the web stack. | Routers with admin pages, IoT devices with web UIs, network appliances with browser-based management. |
no-interactive-users | No interactive users | Appliance mode with no human login at all. The strongest user-access mitigation — local-attacker CVEs require the attacker to first establish a foothold by some other means. | Sealed appliances with no shell access, automated headless devices, disposable container instances. |
Industrial interfaces — cross-tier reference
Kernel-supported industrial buses appear in multiple interface-tier categories so they can be modelled at the right exposure level. Use this table to find the right slug for your bus + exposure combination:
| Interface | PCB | Enclosure | Exposed | Kernel option |
|---|---|---|---|---|
| CAN bus | can-pcb | can-enclosure | can-exposed (OBD-II) | CONFIG_CAN |
| 1-Wire | onewire-pcb | onewire-enclosure | onewire-exposed | CONFIG_W1 |
| IIO (ADC/DAC) | iio-pcb | — | — | CONFIG_IIO |
| RS-485 / RS-422 | — | rs485-enclosure | rs485-exposed | serial drivers |
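The table also ties back to Layer 1: an exposure slug for a bus only matters if that bus's kernel option is in the build at all. A sketch, using the option mapping documented above (the `.config` contents are invented):

```python
# Exposure slugs for a bus are moot if the bus driver isn't compiled in.
# BUS_OPTION follows the cross-tier table; the enabled sets are invented.

BUS_OPTION = {"can": "CONFIG_CAN", "onewire": "CONFIG_W1", "iio": "CONFIG_IIO"}

def bus_relevant(bus: str, enabled_options: set[str]) -> bool:
    opt = BUS_OPTION.get(bus)
    return opt is not None and opt in enabled_options

print(bus_relevant("can", {"CONFIG_CAN", "CONFIG_IIO"}))  # → True
print(bus_relevant("onewire", {"CONFIG_CAN"}))            # → False
```

If `CONFIG_CAN` is absent, every CAN CVE is already `not_affected` at Layer 1 and your `can-*` slugs never come into play.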
Modelling principles in summary
- Be precise about exposure tier. The same port at three tiers produces three different verdicts. Model what is actually true for the shipping product.
- Pick the worst realistic deployment. If the same product ships into multiple environments, model the most exposed one. Your VEX has to defend that case.
- Don't pad the hardening list. "Compiled in but not enforced" doesn't count as enforcing. Be honest the way an auditor would be.
- Treat absence honestly. If a radio isn't physically present, leave the wireless factors empty. Adding factors "just in case" produces conservative-but-meaningless verdicts.
- Update when the product changes. A firmware release that disables a previously-shipped radio, hardens defaults, or seals an enclosure changes the factor model. Re-run the assessment.
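Putting the principles together, a hypothetical sealed industrial CAN gateway might be modelled like this. The slugs are the documented ones; the product and the category key names are invented for the example:

```python
# Worked example (hypothetical product; category keys invented for clarity).
# Sealed industrial CAN gateway, LAN-only, no radios, no interactive users.
profile = {
    "deployment-environment": ["env-industrial"],
    "physical-protection": ["phys-sealed"],
    "interfaces-pcb": ["serial-pcb", "can-pcb"],
    "interfaces-enclosure": ["ethernet-enclosure", "can-enclosure"],
    "interfaces-exposed": [],      # nothing reachable outside the perimeter
    "network-exposure": ["lan-only"],
    "wireless": [],                # no radio fitted: honestly left empty
    "execution-context": ["bare-metal"],
    "security-hardening": ["readonly-rootfs", "dm-verity", "stack-protector"],
    "user-access": ["no-interactive-users"],
}
```

Empty lists are assertions too: they are what lets a Wi-Fi or kiosk-USB CVE be argued away for this product.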
FAQ
How secure is my data? Aren't .config files and product factor profiles confidential?
The legal floor. KernelScan is operated under German law and is subject to the GDPR (DSGVO). All traffic is over TLS and accounts are isolated from one another. Technical and organisational measures are documented in the privacy policy. If a personal-data breach occurs, we are required by Art. 33 GDPR to notify the competent supervisory authority within 72 hours of becoming aware of it, and by Art. 34 GDPR to notify affected users directly, without undue delay, where the breach is likely to result in a high risk to their rights and freedoms.
Defense in depth on your side. A .config by itself is rarely uniquely identifying — most are derived from public kernel defconfigs. The commercially revealing combination is product name + factor profile + .config; the factor profile in particular fingerprints the device (deployment environment, interfaces, hardening). If your product details are highly sensitive, model your products under codenames. A record called "Edge Gateway 2026 Q3" with factors {env-public-outdoor, ethernet-exposed, cellular-lte, …} is an obvious fingerprint; the same record under "Project Aurora-7" cannot be tied to a specific device in your portfolio — even in the worst case where the platform itself is compromised. Two layers of protection are better than one.
Why does KernelScan publish its own CVSS score? Won't that conflict with NVD?
The kernel community deliberately does not assign CVSS scores, and NVD's manual enrichment trails CNA publication by weeks or months. Without a score, kernel CVEs are invisible to severity-keyed scanners and miss compliance thresholds. KernelScan's calculated score fills that gap. It lives in a namespaced field and carries its own provenance — when NVD's score eventually lands, downstream tools that pick one source per scalar (Dependency-Track among them) prefer NVD. KernelScan's score surfaces only when there is no NVD score yet.
Does KernelScan help me meet EU CRA / US SSDF compliance requirements?
KernelScan provides the kernel side of the picture: continuous identification of CVEs in the kernel component of your product, machine-readable SBOM-compatible output (CycloneDX VDR, OSV), per-version fix tracking, and CC-BY-4.0 redistribution rights so the data can sit inside your own compliance artefacts. It is not a full compliance solution — you still need an SBOM tool, a vulnerability management process, and (for CRA) reporting workflows — but it eliminates the largest blind spot in the standard NVD-driven stack: kernel CVEs that haven't yet been enriched.
Do I need to upload my .config to use the CVE database?
No. The CVE database is browseable for free with an account — sign-up is free and unlocks the full kernel CVE corpus and the public CVE feed. Without an account, browsing is limited. .config upload is only required for VEX analysis (the per-CVE per-build verdict).
Why does KernelScan publish a KSCAN-CVE-… ID instead of the canonical CVE-…?
So Dependency-Track and other aliasing-aware consumers can treat KernelScan as a distinct vulnerability source without colliding with NVD on the same record. Both sources show up under one finding (via the alias) with one audit trail. CycloneDX VDR records use the canonical CVE-… ID directly because CDX has no namespace-id model.
How fresh is the CVE data?
KernelScan ingests every new CVE record through a continuous pipeline fed directly off the kernel.org CVE feed — typically weeks before the same CVE is enriched and visible in NVD-driven scanners. For kernel CVEs awaiting NVD enrichment, a calculated CVSS score and CWE classification are published alongside the core record.
Can the AI overrule the deterministic config analysis?
No. Layer 2 (factor assessment) can only refine exploitable verdicts. If Layer 1 already says not_affected because the vulnerable code isn't in the build, the LLM never sees that CVE — there's nothing to assess. The deterministic floor is always the deterministic floor.
What if a CVE has no CONFIG_* mapping yet?
It's reported as in_triage in the VEX. This usually means the CVE is very recent and the per-series mapper hasn't yet processed the latest patch level, or the fix touches core code that has no CONFIG_* guard. The mapper resumes automatically and the verdict is upgraded once the mapping lands.
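The two answers above describe one decision flow. A minimal sketch, using the verdict labels from the text; the Layer-2 refinement is an assumed callback, and a missing CONFIG_* mapping is modeled as `None`:

```python
from typing import Callable, Optional

def vex_verdict(
    in_build: Optional[bool],
    refine: Optional[Callable[[str], str]] = None,
) -> str:
    """Two-layer verdict flow (sketch, not the actual implementation).
    Layer 1 (deterministic config analysis) is the floor; Layer 2
    (factor assessment) may only refine 'exploitable' verdicts.
    `in_build` is None when no CONFIG_* mapping exists yet."""
    if in_build is None:
        return "in_triage"       # mapper hasn't covered this CVE yet
    if not in_build:
        return "not_affected"    # vulnerable code absent; Layer 2 never runs
    # Only here does Layer 2 get a say, and only to refine 'exploitable'
    return refine("exploitable") if refine else "exploitable"
```

The structure makes the guarantee mechanical: a `not_affected` or `in_triage` verdict is decided before any AI assessment is reachable.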
Is the CVE feed compatible with Trivy / Grype / Renovate?
Yes — the OSV 1.6 endpoints (/feed/osv/...) follow the OSV schema natively, so any OSV-aware scanner can consume them with a Bearer token. Dependency-Track has its own dedicated tokenized URL surface because it can't forward the Authorization header to its OSV mirror loader.
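From a script, consuming the feed reduces to a standard authenticated GET with a Bearer header. A sketch — the host and exact feed path are hypothetical stand-ins for the real /feed/osv/... routes, and the key placeholder must come from your account:

```python
import urllib.request

def osv_feed_request(url: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET for an OSV feed endpoint — the same
    shape any OSV-aware scanner uses with its Bearer token."""
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )

# Hypothetical host and feed path; substitute your real /feed/osv/...
# route and a key minted on the /account page.
req = osv_feed_request(
    "https://kernelscan.example/feed/osv/feed.json",
    "ks_live_XXXX",  # placeholder — never commit real keys
)
```

Pass the request to `urllib.request.urlopen(req)` (or the equivalent in your HTTP client) and parse the response as OSV JSON.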
What's the difference between the API key (ks_live_…) and the DT URL token (kf_pub_…)?
The API key is for general programmatic access — CI scripts, Bearer auth, the full API surface. The DT URL token is read-only, scoped to feed access, throughput-shaped per-token, and rotatable independently. You only need the latter if you're wiring KernelScan into Dependency-Track. Both are minted on the /account page.
Is the data licensed for commercial use?
Yes — feed contents are CC-BY-4.0. Free for redistribution, commercial use, and integration into SBOM / vulnerability scanning pipelines. Attribution is required and ships in-band on every record (database_specific.kernelscan.attribution for OSV, kernelscan.io:attribution for CycloneDX).
What kernel versions are covered?
The version list is dynamic — it tracks whichever kernel sources have been imported. The free tier covers the current LTS series; Basic+ extends to all tracked versions. Mainline plus the active stable LTS branches (6.6, 6.1, …) are always indexed.