diff --git a/docs/draft/certificates.md b/docs/draft/certificates.md
new file mode 100644
index 000000000..873d2dd6d
--- /dev/null
+++ b/docs/draft/certificates.md
@@ -0,0 +1,555 @@
+# Certificate Chain Validation in wolfHSM
+
+## 1. Overview
+
+wolfHSM provides a server-resident X.509 certificate manager that lets clients
+provision trusted root anchors into NVM and then verify candidate certificate
+chains against those anchors over the standard wolfHSM client/server protocol.
+The chain walk, signature checks, anchor selection, and any custom verify
+callbacks all run inside the trusted server environment; the client only ever
+ships DER bytes and trust-anchor identifiers, never private key material or
+root certificates that have been provisioned with the non-exportable flag.
+
+The feature set is layered. Each layer below adds capability without
+invalidating the layer above it, and each is independently gated by a
+compile-time configuration macro.
+
+| Capability | Macro | Notes |
+|-----------------------------------------|----------------------------------------------------|-------|
+| Trusted-root NVM CRUD + chain verify | `WOLFHSM_CFG_CERTIFICATE_MANAGER` | Base feature. Requires crypto. |
+| Multi-root chain verify | (always available with the base feature) | Bounded by `WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS`. |
+| Trusted CA verify-result cache | `WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE` | Per-server cache by default. |
+| Cross-client (global) verify cache | `WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL` | Layered on top of the verify cache. |
+| Cache leaf public key after verify      | `WH_CERT_FLAGS_CACHE_LEAF_PUBKEY` request flag      | Every chain-verify variant (not ACERT). |
+| Attribute-certificate (X.509 ACERT) | `WOLFHSM_CFG_CERTIFICATE_MANAGER_ACERT` | Single-root verify only. |
+| DMA transport for large chains | `WOLFHSM_CFG_DMA` | Available on every cert API. |
+| User-supplied verify callback | `whServerCertConfig.verifyCb` (server-side only) | Applied per cert manager. |
+
+The remaining sections walk the client API for each operation, then dive into
+the multi-root and trusted-cache features in detail and describe the precise
+semantics that result when both are enabled together.
+
+## 2. Build Configuration
+
+### Required
+
+- `WOLFHSM_CFG_CERTIFICATE_MANAGER` — enables every API in this document.
+ Requires `!WOLFHSM_CFG_NO_CRYPTO` (the implementation depends on
+ `WOLFSSL_CERT_MANAGER` and the wolfCrypt ASN.1 decoder).
+
+### Optional
+
+- `WOLFHSM_CFG_DMA` — enables the `*Dma*` variants that pass the candidate
+ chain by client address rather than copying it through the comm buffer.
+- `WOLFHSM_CFG_CERTIFICATE_MANAGER_ACERT` — enables `wh_Client_CertVerifyAcert`
+ / `wh_Client_CertVerifyAcertDma`. Requires wolfSSL built with `WOLFSSL_ACERT`
+ and `WOLFSSL_ASN_TEMPLATE`.
+- `WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE` — enables the trusted CA verify cache
+  (Section 6). Pulls in `wh_Client_CertVerifyCacheClear` on the client.
+- `WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL` — relocates the verify cache
+ from the per-server context into the shared NVM context so it is reused
+ across every client connected to the server. Requires
+ `WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE`.
+
+### Bounds
+
+- `WOLFHSM_CFG_MAX_CERT_SIZE` — maximum DER size of any single certificate
+ read from or written to NVM. Defaults to `WOLFHSM_CFG_COMM_DATA_LEN` when
+ DMA is off, `4096` when DMA is on.
+- `WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS` — upper bound on the number of trusted
+  root NVM IDs that may be supplied to a single multi-root verify call.
+  Defaults to `8`. This bound also sizes the inline root-id array in the
+  multi-root DMA request and the per-slot root binding in the verify cache;
+  the build fails a static assert if the resulting DMA request struct would
+  exceed `WOLFHSM_CFG_COMM_DATA_LEN`.
+- `WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT` — number of slots in the verify
+ cache (FIFO ring). Defaults to `16`.
+
+## 3. Common Concepts
+
+### 3.1 Trust anchors live in NVM
+
+Every trusted root is a regular NVM object identified by a `whNvmId`. The
+client provisions roots with `wh_Client_CertAddTrusted`, removes them with
+`wh_Client_CertEraseTrusted`, and reads them back with
+`wh_Client_CertReadTrusted`. Verification operations take the NVM ID(s) of
+the root(s) to anchor against — the root certificate bytes themselves are
+never sent inline with a verify request.
+
+Roots respect normal NVM access and flag policy. A root provisioned with
+`WH_NVM_FLAGS_NONEXPORTABLE` cannot be read back via
+`wh_Client_CertReadTrusted` (the server returns `WH_ERROR_ACCESS`) but is
+still usable as a verify anchor.
+
+### 3.2 Verification flags (`whCertFlags`)
+
+Defined in `wolfhsm/wh_common.h`:
+
+- `WH_CERT_FLAGS_NONE` — verify only.
+- `WH_CERT_FLAGS_CACHE_LEAF_PUBKEY` — on a successful verify, extract the
+ leaf certificate's `SubjectPublicKeyInfo` and cache it in the server's
+ key cache so subsequent crypto operations can address it by `whKeyId`.
+
+### 3.3 Cached leaf key id
+
+Verify variants whose name contains `AndCacheLeafPubKey` take an `inout_keyId`
+argument. On entry, supply either an explicit `whKeyId` or `WH_KEYID_ERASED`
+to let the server pick a unique id; on success, the caller-side id is updated
+with the value the server actually used. Failed verifies leave the prior id
+contents undisturbed and do not populate the key cache.
+
+`cachedKeyFlags` carries the NVM usage flags applied to the cached key —
+typically `WH_NVM_FLAGS_USAGE_VERIFY` for a leaf certificate's public key.
+
+### 3.4 Async (request/response) split
+
+Every verify and trusted-root mutation API has three forms:
+
+- A single blocking call (e.g. `wh_Client_CertVerify`).
+- A non-blocking `*Request` call that returns as soon as the request is on
+ the wire.
+- A non-blocking `*Response` call that returns `WH_ERROR_NOTREADY` until the
+ server has replied, then yields `out_rc`.
+
+The blocking forms loop on `WH_ERROR_NOTREADY` internally. Use the split
+pair when the calling thread needs to remain responsive (for example, to
+service a separate request).
+
+### 3.5 Return-code conventions
+
+All client functions return a wolfHSM transport-layer `int`:
+`WH_ERROR_OK` if the request and response cycle completed, or a negative
+error code if the comm layer itself failed.
+
+The server's verify result is returned separately via `out_rc`:
+
+| `out_rc` | Meaning |
+|-------------------------|----------------------------------------------------------------------|
+| `WH_ERROR_OK` (0) | Chain anchored successfully. |
+| `WH_ERROR_CERT_VERIFY` | Chain did not anchor (signature, expiry, or path failure). |
+| `WH_ERROR_NOTFOUND` | (Multi-root only) every supplied root id was absent from NVM. |
+| `WH_ERROR_BADARGS` | Argument shape or wire-payload size violation. |
+| `WH_ERROR_ACCESS` | Read-trusted on a non-exportable cert. |
+| Other negative codes | Underlying NVM, transport, or cert-manager environment errors. |
+
+This separation lets callers distinguish a real trust failure
+(`WH_ERROR_CERT_VERIFY`) from "the trust store itself is empty"
+(`WH_ERROR_NOTFOUND`) and from infrastructure errors.
+
+## 4. Client API
+
+All prototypes below live in `wolfhsm/wh_client.h`. The `*Request` /
+`*Response` split forms are omitted from the listing for brevity but exist
+for every blocking entry point shown.
+
+### 4.1 Initialization
+
+```c
+int wh_Client_CertInit(whClientContext* c, int32_t* out_rc);
+```
+
+Initializes the server's certificate manager subsystem. Required once per
+server before any other cert call. When the trusted-cert verify cache is
+enabled in per-client mode, `CertInit` clears the calling client's cache
+(see Section 6.5).
+
+### 4.2 Trusted root provisioning
+
+```c
+int wh_Client_CertAddTrusted(whClientContext* c, whNvmId id,
+ whNvmAccess access, whNvmFlags flags,
+ uint8_t* label, whNvmSize label_len,
+ const uint8_t* cert, uint32_t cert_len,
+ int32_t* out_rc);
+
+int wh_Client_CertEraseTrusted(whClientContext* c, whNvmId id, int32_t* out_rc);
+
+int wh_Client_CertReadTrusted(whClientContext* c, whNvmId id, uint8_t* cert,
+ uint32_t* cert_len, int32_t* out_rc);
+```
+
+`CertAddTrusted` writes a DER root certificate into NVM under the supplied
+`whNvmId` with the given access and flag policy. `CertEraseTrusted` removes
+it. `CertReadTrusted` reads it back, with `*cert_len` updated to the actual
+stored size on success (or, on `WH_ERROR_BUFFER_SIZE`, the size needed).
+
+When the verify cache is enabled, both `AddTrusted` and `EraseTrusted` also
+trigger a cache eviction for the affected root id (Section 6.4).
+
+DMA variants:
+
+```c
+int wh_Client_CertAddTrustedDma(whClientContext* c, whNvmId id,
+ whNvmAccess access, whNvmFlags flags,
+ uint8_t* label, whNvmSize label_len,
+ const void* cert, uint32_t cert_len,
+ int32_t* out_rc);
+
+int wh_Client_CertReadTrustedDma(whClientContext* c, whNvmId id, void* cert,
+ uint32_t cert_len, int32_t* out_rc);
+```
+
+### 4.3 Single-root chain verify
+
+```c
+int wh_Client_CertVerify(whClientContext* c, const uint8_t* cert,
+ uint32_t cert_len, whNvmId trustedRootNvmId,
+ int32_t* out_rc);
+```
+
+Walks the chain in `cert` (concatenated DER in leaf-last certificate order),
+anchoring against the single root identified by `trustedRootNvmId`. The server
+constructs a fresh `WOLFSSL_CERT_MANAGER` for the call, loads the root, walks
+the chain via `wolfSSL_CertManagerVerifyBuffer`, and returns the result via
+`out_rc`.
+
+DMA variant:
+
+```c
+int wh_Client_CertVerifyDma(whClientContext* c, const void* cert,
+ uint32_t cert_len, whNvmId trustedRootNvmId,
+ int32_t* out_rc);
+```
+
+### 4.4 Single-root verify with leaf-key caching
+
+```c
+int wh_Client_CertVerifyAndCacheLeafPubKey(
+ whClientContext* c, const uint8_t* cert, uint32_t cert_len,
+ whNvmId trustedRootNvmId, whNvmFlags cachedKeyFlags, whKeyId* inout_keyId,
+ int32_t* out_rc);
+```
+
+Same chain walk as `wh_Client_CertVerify`, plus on success the leaf
+certificate's public key is copied into the server's key cache under
+`*inout_keyId` (or a server-chosen id if the input was `WH_KEYID_ERASED`)
+with `cachedKeyFlags` as its NVM usage policy. Subsequent crypto operations
+can address the key by id.
+
+DMA variant: `wh_Client_CertVerifyDmaAndCacheLeafPubKey`.
+
+### 4.5 Multi-root chain verify
+
+```c
+int wh_Client_CertVerifyMultiRoot(whClientContext* c, const uint8_t* cert,
+ uint32_t cert_len,
+ const whNvmId* trustedRootNvmIds,
+ uint16_t numRoots, int32_t* out_rc);
+```
+
+Identical to the single-root call, except the server loads up to
+`numRoots` roots (`1 .. WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS`) into a single
+cert manager and the chain succeeds if it anchors to *any* of them. See
+Section 5 for the full semantics.
+
+DMA variant: `wh_Client_CertVerifyMultiRootDma`.
+
+### 4.6 Multi-root verify with leaf-key caching
+
+```c
+int wh_Client_CertVerifyMultiRootAndCacheLeafPubKey(
+ whClientContext* c, const uint8_t* cert, uint32_t cert_len,
+ const whNvmId* trustedRootNvmIds, uint16_t numRoots,
+ whNvmFlags cachedKeyFlags, whKeyId* inout_keyId, int32_t* out_rc);
+```
+
+DMA variant: `wh_Client_CertVerifyMultiRootDmaAndCacheLeafPubKey`.
+
+### 4.7 Attribute certificate verify
+
+```c
+int wh_Client_CertVerifyAcert(whClientContext* c, const void* cert,
+ uint32_t cert_len, whNvmId trustedRootNvmId,
+ int32_t* out_rc);
+```
+
+Verifies an X.509 attribute certificate's signature against the public key
+of the trusted root identified by `trustedRootNvmId`. Available only when
+the server is built with `WOLFHSM_CFG_CERTIFICATE_MANAGER_ACERT`. There is
+no multi-root or leaf-cache variant — attribute certificates are signed
+directly by an attribute authority and the call carries a single anchor.
+
+A signature mismatch is reported as `WH_ERROR_CERT_VERIFY` in `out_rc`, the
+same convention as the standard verify path.
+
+DMA variant: `wh_Client_CertVerifyAcertDma`.
+
+### 4.8 Verify-cache management
+
+Available only when `WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE` is enabled:
+
+```c
+int wh_Client_CertVerifyCacheClear(whClientContext* c, int32_t* out_rc);
+
+int wh_Client_CertVerifyCacheSetEnabled(whClientContext* c, uint8_t enable,
+                                        int32_t* out_rc);
+```
+
+`CertVerifyCacheClear` drops every entry from the server's verify cache. In
+per-client mode this clears only the calling client's cache; in global mode
+(Section 6.5) it clears the shared cache for all clients. Subsequent verifies
+fall back to running the full wolfSSL signature path until the cache is
+repopulated.
+
+`CertVerifyCacheSetEnabled` enables or disables the verify cache at runtime;
+the cache defaults to enabled.
+
+## 5. The Multi-Root Feature
+
+### 5.1 Why it exists
+
+The single-root entry point couples each verify to exactly one trust anchor.
+Callers needing to validate a chain against any of several acceptable roots
+otherwise have to either fold every acceptable root under a single super-root
+(operationally awkward when the root infrastructures are independent) or
+loop over `wh_Client_CertVerify` per root, parsing the chain again each
+attempt and inferring at the application layer whether a per-anchor failure
+should trigger a retry against the next anchor.
+
+`wh_Client_CertVerifyMultiRoot` collapses both of those into a single
+request: hand the server an array of up to `WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS`
+NVM ids, the server loads each one as a CA into a single cert manager, and
+the chain is walked exactly once. If it anchors to any of the supplied
+roots the verify succeeds; otherwise it fails with `WH_ERROR_CERT_VERIFY`.
+
+### 5.2 Order independence
+
+The cert manager picks an issuer for each child cert by subject/issuer
+matching during chain walk, not by load order. Listing root A before root B
+does not "prefer" A.
+
+### 5.3 Mixed-failure semantics
+
+Multi-root distinguishes three failure modes via `out_rc`:
+
+| Outcome | `out_rc` |
+|-----------------------------------------------------------|-------------------------|
+| Chain anchors to ≥ 1 loaded root | `WH_ERROR_OK` |
+| ≥ 1 anchor loaded; chain does not anchor to any of them | `WH_ERROR_CERT_VERIFY` |
+| Every supplied root id is absent from NVM | `WH_ERROR_NOTFOUND` |
+| Any non-absent failure reading or loading a supplied root | underlying error code |
+
+Roots that are absent from NVM are skipped silently — they do not abort the
+operation and do not count against the chain's chance of anchoring. A read
+or load failure on an *existing* root, by contrast, is treated as an
+environment error and aborts the call.
+
+## 6. The Trusted Verify Cache
+
+### 6.1 Overview
+
+When `WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE` is enabled, the server keeps a
+fixed-size FIFO ring of slots, each holding:
+
+- A SHA-256 hash of a successfully-verified DER-encoded **CA** certificate.
+- The set of trusted root NVM ids that were loaded into the cert manager
+ when that cert was verified.
+
+On a subsequent verify, before the server invokes
+`wolfSSL_CertManagerVerifyBuffer` for a CA cert in the chain, it hashes
+that cert and looks the hash up in the cache. A hit short-circuits the
+public-key signature check; the rest of the chain walk (CA decode, store
+load for downstream certs, leaf pubkey extract) continues unchanged.
+
+Only CA certs are ever inserted. Leaves are deliberately excluded.
+
+The verify cache is never stored in NVM and does not persist across power
+cycles.
+
+This feature can provide a substantial performance improvement by eliminating
+redundant, potentially expensive public-key verification operations, but it
+does so at the expense of security in some scenarios. If deploying this
+feature in production, it is paramount that the nuances of the trust-anchor
+consequences are fully understood and align with the threat model of the
+application. **This feature should be used with caution and is NOT
+recommended for most scenarios.**
+
+### 6.2 Internals
+
+The wolfHSM trusted certificate cache binds each entry to the *set* of
+trusted roots that were actually loaded when the verify occurred, and lookups
+require the cached set to be a **subset of the caller's currently loaded set**.
+
+The soundness argument rests on the monotonicity of X.509 verification: adding
+more trusted roots should never invalidate a previously successful verify, so a
+chain that validated under set `S` still validates under any superset `T ⊇ S`.
+A cache hit therefore implies the cached verify's anchor (whichever root in `S`
+actually closed the chain) is currently trusted, regardless of which element of
+`S` it was, since every element of `S` is known to be in `T`.
+
+### 6.3 Hits, misses, and recording the loaded set
+
+Crucially, the *loaded* set is recorded — not the caller-supplied set. If a
+caller passes three roots but only two are present in NVM, the cache slot
+records the two-element loaded set. Forwarding the three-element supplied
+set instead would let a stale entry under the missing root match a verify
+whose effective trust store does not contain that root.
+
+Insertion is deduplicated on exact `(set, hash)` match under the cache lock, so
+concurrent inserts of the same verify collapse to a single slot. Two
+entries with the same hash but different sets coexist: each is an
+independent claim about a distinct verify, and dropping either could lose
+hit coverage for callers whose loaded set is a superset of one but not the
+other.
+
+The ring overwrites using a FIFO pattern once full.
+
+### 6.4 Cache lifecycle and eviction
+
+Three mutation paths interact with the cache:
+
+- **`wh_Client_CertAddTrusted`** evicts every cache slot whose stored set
+ contains the affected root id. `AddObject` supersedes any prior object at
+ that id, so cached verifies anchored at the previous root would otherwise
+ short-circuit a verify under the new (different) root resident at that id.
+- **`wh_Client_CertEraseTrusted`** evicts the same way. Without this, a
+ later `AddTrusted` reusing the freed id would inherit phantom cache hits
+ from the now-departed root.
+- **`wh_Client_CertVerifyCacheClear`** drops every slot.
+
+Eviction happens on success only. Otherwise, a failed `AddTrusted` or
+`EraseTrusted` leaves the prior root and any cache entries bound to it in
+place.
+
+### 6.5 Per-client vs global mode
+
+By default the cache lives in `whServerCertContext` and is per-server (and
+therefore per-client connection). Each client connection sees its own slots
+and its own hit rate; a verify under client A does not warm client B's
+cache.
+
+`WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL` relocates the cache into the
+shared NVM context, where every connected client shares one
+`whCertVerifyCacheContext`. Hits then apply across client boundaries: once
+any client has verified a CA against root R, every client whose loaded root
+set contains R hits the cache for that CA.
+
+Global mode adds a dedicated lock embedded in the cache so cache operations
+do not serialize behind general NVM I/O. In per-client mode the cache
+piggybacks on the NVM lock — adequate given the cache is private to one
+server and `CertInit` resets it on each (re)connect.
+
+### 6.6 Interaction with the user-supplied verify callback
+
+Server builds may register a `VerifyCallback` via
+`whServerCertConfig.verifyCb` (or replace it at runtime via
+`wh_Server_CertSetVerifyCb`). The callback is installed on the per-request
+cert manager and runs inside `wolfSSL_CertManagerVerifyBuffer`. **Verify
+cache hits short-circuit that path and so deliberately do not invoke the
+callback.**
+
+## 7. Multi-Root and the Verify Cache Together
+
+When both features are compiled in, the cache participates in both
+single-root and multi-root verifies. The combination preserves the
+single-root behavior exactly while extending hit semantics to the larger
+trust-set landscape multi-root callers create. The interaction has three
+corner cases worth being explicit about.
+
+### 7.1 Cache entries record which roots were actually used, not which were asked for
+
+A multi-root verify request lists the trusted roots the caller is willing
+to trust, but some of those roots may not currently exist in NVM. The
+server skips any missing roots and only loads the ones it finds, so the
+trust store the chain is actually verified against can be smaller than the
+list the caller supplied.
+
+When a successful verify produces a new cache entry, the entry remembers
+that smaller, real trust store — not the original request. For example, a
+caller that asks for eight roots but only has three present in NVM
+produces a three-root cache entry, not an eight-root one.
+
+This matters because cache lookups use the subset rule: an entry hits only
+when its recorded roots are all present in the looking-up caller's
+currently-loaded set. Recording roots that were never actually loaded
+would let a future verify hit an entry under a root that wasn't part of
+the trust store when the cached chain originally validated, and the
+subset rule's soundness argument would no longer hold.
+
+### 7.2 Single-root verifies populate the cache too — and produce the most reusable entries
+
+The single-root path is implemented as a one-element multi-root call.
+Successful single-root verifies therefore insert one-element entries
+(`{R}`) — the narrowest possible set. Under the subset rule, those entries
+are also the most reusable: any future multi-root call whose loaded set
+contains `R` (e.g. `{R, R₂}`, `{R, R₂, R₃}`) hits.
+
+Multi-root entries with larger sets (`{R₁, R₂, R₃}`) have correspondingly
+narrower reuse — only future verifies whose loaded set is a superset
+(`{R₁, R₂, R₃}` itself, or `{R₁, R₂, R₃, R₄}`, etc.) will hit. They are
+still useful: they capture verifies that pure single-root traffic would
+not generate.
+
+A practical consequence: if a deployment runs both single-root traffic
+against `R₁` and multi-root traffic against `{R₁, R₂}`, the single-root
+verifies populate `{R₁}`-bound entries that the multi-root traffic also
+hits, while the multi-root verifies populate `{R₁, R₂}`-bound entries that
+do *not* serve future single-root `{R₁}` traffic. The cache is therefore
+biased toward maximizing reuse from single-root callers.
+
+### 7.3 A single root rotation invalidates entries across both paths
+
+`AddTrusted` and `EraseTrusted` call `CertVerifyCache_EvictRoot(id)`, which
+drops every slot whose recorded set *contains* `id`. This does the right
+thing for both single- and multi-root populated entries:
+
+- A `{R₁}` entry is dropped on a rotation of `R₁` and is unaffected by
+ rotations of any other root — exactly what monotonicity demands.
+- A `{R₁, R₂}` entry is dropped on a rotation of either `R₁` or `R₂`. The
+ original verify may have anchored at the rotated root, and the remaining
+ set is no longer a sound claim about which stores still validate the
+ chain. Stripping just the rotated id from the set would leave a slot
+ that falsely claims `{R_other}` validated this chain on its own.
+
+A multi-root caller's "live" cache footprint therefore depends on the
+stability of every root in its supplied sets, not just the one that
+ultimately anchored. This is intrinsic to the soundness argument — the
+cache cannot identify which anchor closed any given chain after the fact —
+and is the trade-off paid for cross-anchor cache reuse.
+
+### 7.4 Cache miss falls back to the regular multi-root path
+
+A miss does not change semantics relative to a no-cache build: the server
+runs `wolfSSL_CertManagerVerifyBuffer` against the populated cert manager
+just as it would have without the cache. There is no path by which a miss
+weakens the verify; the cache is a pure performance optimization.
+
+### 7.5 Recommendations
+
+- Provision long-lived roots with stable NVM ids when targeting a high
+ cache hit rate. Frequent rotations will keep the cache cold.
+- Prefer per-client mode (the default) when client trust stores diverge
+ significantly. Prefer global mode when most clients verify against the
+ same set of roots (e.g. fleet-uniform PKI).
+- Single-root callers benefit from the cache without any additional design.
+ Multi-root callers benefit most when the supplied set is reasonably
+ stable across calls — the recorded set is what determines hit eligibility
+ for downstream traffic.
+
+## 8. Worked Example
+
+Provision two roots, verify a chain against either, and cache the leaf
+public key for subsequent signing-key lookup:
+
+```c
+whClientContext* c = /* ... */;
+whNvmId rootIds[2] = { 100, 101 };
+int32_t rc;
+whKeyId leafKeyId = WH_KEYID_ERASED;
+
+/* One-time provisioning (can also be done offline) */
+wh_Client_CertInit(c, &rc);
+wh_Client_CertAddTrusted(c, rootIds[0], WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONE,
+ (uint8_t*)"primary", 7,
+ primary_root_der, primary_root_len, &rc);
+wh_Client_CertAddTrusted(c, rootIds[1], WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONE,
+ (uint8_t*)"backup", 6,
+ backup_root_der, backup_root_len, &rc);
+
+/* Verify a chain against either root and cache the leaf public key */
+int ret = wh_Client_CertVerifyMultiRootAndCacheLeafPubKey(
+ c, chain_der, chain_len, rootIds, 2,
+ WH_NVM_FLAGS_USAGE_VERIFY, &leafKeyId, &rc);
+
+if (ret == WH_ERROR_OK && rc == WH_ERROR_OK) {
+ /* leafKeyId now refers to the leaf cert's public key in the server's
+ * key cache; subsequent crypto operations can use it by id. With the verify
+ * cache enabled, a repeat verify of this chain against this root set hits
+ * the cache for every CA in the chain and skips the wolfSSL signature path.
+ */
+}
+```
+
diff --git a/src/wh_client_cert.c b/src/wh_client_cert.c
index 26b2f3cae..ff8ce0fd2 100644
--- a/src/wh_client_cert.c
+++ b/src/wh_client_cert.c
@@ -669,6 +669,136 @@ int wh_Client_CertVerifyMultiRootAndCacheLeafPubKey(
inout_keyId, out_rc);
}
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+int wh_Client_CertVerifyCacheClearRequest(whClientContext* c)
+{
+ if (c == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+ return wh_Client_SendRequest(c, WH_MESSAGE_GROUP_CERT,
+ WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_CLEAR, 0,
+ NULL);
+}
+
+int wh_Client_CertVerifyCacheClearResponse(whClientContext* c, int32_t* out_rc)
+{
+ int rc;
+ uint16_t group;
+ uint16_t action;
+ uint16_t size;
+ whMessageCert_SimpleResponse resp;
+
+ if (c == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+
+ rc = wh_Client_RecvResponse(c, &group, &action, &size, &resp);
+ if (rc == WH_ERROR_OK) {
+ if ((group != WH_MESSAGE_GROUP_CERT) ||
+ (action != WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_CLEAR) ||
+ (size != sizeof(resp))) {
+ rc = WH_ERROR_ABORTED;
+ }
+ else if (out_rc != NULL) {
+ *out_rc = resp.rc;
+ }
+ }
+ return rc;
+}
+
+int wh_Client_CertVerifyCacheClear(whClientContext* c, int32_t* out_rc)
+{
+ int rc = WH_ERROR_OK;
+
+ if (c == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+
+ do {
+ rc = wh_Client_CertVerifyCacheClearRequest(c);
+ } while (rc == WH_ERROR_NOTREADY);
+
+ if (rc == WH_ERROR_OK) {
+ do {
+ rc = wh_Client_CertVerifyCacheClearResponse(c, out_rc);
+ } while (rc == WH_ERROR_NOTREADY);
+ }
+
+ return rc;
+}
+
+int wh_Client_CertVerifyCacheSetEnabledRequest(whClientContext* c,
+ uint8_t enable)
+{
+ whMessageCert_SetEnabledRequest* req = NULL;
+
+ if (c == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+
+ req = (whMessageCert_SetEnabledRequest*)wh_CommClient_GetDataPtr(c->comm);
+ if (req == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+ memset(req, 0, sizeof(*req));
+ req->enable = enable ? 1 : 0;
+
+ return wh_Client_SendRequest(
+ c, WH_MESSAGE_GROUP_CERT,
+ WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_SET_ENABLED, sizeof(*req),
+ (uint8_t*)req);
+}
+
+int wh_Client_CertVerifyCacheSetEnabledResponse(whClientContext* c,
+ int32_t* out_rc)
+{
+ int rc;
+ uint16_t group;
+ uint16_t action;
+ uint16_t size;
+ whMessageCert_SimpleResponse resp;
+
+ if (c == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+
+ rc = wh_Client_RecvResponse(c, &group, &action, &size, &resp);
+ if (rc == WH_ERROR_OK) {
+ if ((group != WH_MESSAGE_GROUP_CERT) ||
+ (action != WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_SET_ENABLED) ||
+ (size != sizeof(resp))) {
+ rc = WH_ERROR_ABORTED;
+ }
+ else if (out_rc != NULL) {
+ *out_rc = resp.rc;
+ }
+ }
+ return rc;
+}
+
+int wh_Client_CertVerifyCacheSetEnabled(whClientContext* c, uint8_t enable,
+ int32_t* out_rc)
+{
+ int rc = WH_ERROR_OK;
+
+ if (c == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+
+ do {
+ rc = wh_Client_CertVerifyCacheSetEnabledRequest(c, enable);
+ } while (rc == WH_ERROR_NOTREADY);
+
+ if (rc == WH_ERROR_OK) {
+ do {
+ rc = wh_Client_CertVerifyCacheSetEnabledResponse(c, out_rc);
+ } while (rc == WH_ERROR_NOTREADY);
+ }
+
+ return rc;
+}
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
#ifdef WOLFHSM_CFG_DMA
int wh_Client_CertAddTrustedDmaRequest(whClientContext* c, whNvmId id,
diff --git a/src/wh_message_cert.c b/src/wh_message_cert.c
index 07617b72c..05fc9bcca 100644
--- a/src/wh_message_cert.c
+++ b/src/wh_message_cert.c
@@ -45,6 +45,18 @@ int wh_MessageCert_TranslateSimpleResponse(
return 0;
}
+int wh_MessageCert_TranslateSetEnabledRequest(
+ uint16_t magic, const whMessageCert_SetEnabledRequest* src,
+ whMessageCert_SetEnabledRequest* dest)
+{
+ (void)magic;
+ if ((src == NULL) || (dest == NULL)) {
+ return WH_ERROR_BADARGS;
+ }
+ dest->enable = src->enable;
+ return 0;
+}
+
int wh_MessageCert_TranslateAddTrustedRequest(
uint16_t magic, const whMessageCert_AddTrustedRequest* src,
whMessageCert_AddTrustedRequest* dest)
diff --git a/src/wh_nvm.c b/src/wh_nvm.c
index 371e4e79c..8e24cf05a 100644
--- a/src/wh_nvm.c
+++ b/src/wh_nvm.c
@@ -104,13 +104,33 @@ int wh_Nvm_Init(whNvmContext* context, const whNvmConfig* config)
memset(&context->globalCache, 0, sizeof(context->globalCache));
#endif
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ /* Initialize the global cert verify cache. Default to enabled so a fresh
+ * NVM context preserves pre-runtime-toggle behavior; clients can disable
+ * via wh_Client_CertVerifyCacheSetEnabled. */
+ memset(&context->globalCertVerifyCache, 0,
+ sizeof(context->globalCertVerifyCache));
+ context->globalCertVerifyCache.enabled = 1;
+#endif
+
#ifdef WOLFHSM_CFG_THREADSAFE
/* Initialize lock (NULL lockConfig = no-op locking) */
rc = wh_Lock_Init(&context->lock, config->lockConfig);
if (rc != WH_ERROR_OK) {
return rc;
}
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ /* Initialize the global cert verify cache lock. Distinct lock from the
+ * NVM lock so cert-cache traffic and NVM I/O don't serialize each other.
+ * NULL config => no-op locking, same as the NVM lock above. */
+ rc = wh_Lock_Init(&context->globalCertVerifyCache.lock,
+ config->certVerifyCacheLockConfig);
+ if (rc != WH_ERROR_OK) {
+ (void)wh_Lock_Cleanup(&context->lock);
+ return rc;
+ }
#endif
+#endif /* WOLFHSM_CFG_THREADSAFE */
if (context->cb != NULL && context->cb->Init != NULL) {
rc = context->cb->Init(context->context, config->config);
@@ -118,6 +138,9 @@ int wh_Nvm_Init(whNvmContext* context, const whNvmConfig* config)
context->cb = NULL;
context->context = NULL;
#ifdef WOLFHSM_CFG_THREADSAFE
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ (void)wh_Lock_Cleanup(&context->globalCertVerifyCache.lock);
+#endif
(void)wh_Lock_Cleanup(&context->lock);
#endif
}
@@ -140,6 +163,14 @@ int wh_Nvm_Cleanup(whNvmContext* context)
memset(&context->globalCache, 0, sizeof(context->globalCache));
#endif
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ /* Clear cache slots/writeIdx but keep the embedded lock intact until its
+ * own cleanup below. */
+ memset(context->globalCertVerifyCache.slots, 0,
+ sizeof(context->globalCertVerifyCache.slots));
+ context->globalCertVerifyCache.writeIdx = 0;
+#endif
+
/* No callback? Return ABORTED */
if (context->cb->Cleanup == NULL) {
rc = WH_ERROR_ABORTED;
@@ -149,6 +180,9 @@ int wh_Nvm_Cleanup(whNvmContext* context)
}
#ifdef WOLFHSM_CFG_THREADSAFE
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ (void)wh_Lock_Cleanup(&context->globalCertVerifyCache.lock);
+#endif
(void)wh_Lock_Cleanup(&context->lock);
#endif
diff --git a/src/wh_server.c b/src/wh_server.c
index 943e583c1..cdcaab41d 100644
--- a/src/wh_server.c
+++ b/src/wh_server.c
@@ -122,6 +122,20 @@ int wh_Server_Init(whServerContext* server, whServerConfig* config)
}
#endif /* WOLFHSM_CFG_DMA */
+#if defined(WOLFHSM_CFG_CERTIFICATE_MANAGER) && !defined(WOLFHSM_CFG_NO_CRYPTO)
+ /* Register the user-supplied verify callback, if any. The cache (if
+ * compiled in) is already zero-initialized by the memset above. */
+ if (config->certConfig != NULL) {
+ server->cert.verifyCb = config->certConfig->verifyCb;
+ }
+#if defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE) && \
+ !defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL)
+    /* The cache defaults to enabled so a fresh server behaves exactly as it
+     * did before the runtime toggle existed. Clients can disable it via
+     * wh_Client_CertVerifyCacheSetEnabled. */
+ server->cert.cache.enabled = 1;
+#endif
+#endif /* WOLFHSM_CFG_CERTIFICATE_MANAGER && !WOLFHSM_CFG_NO_CRYPTO */
+
/* Log the server startup */
WH_LOG(&server->log, WH_LOG_LEVEL_INFO, "Server Initialized");
diff --git a/src/wh_server_cert.c b/src/wh_server_cert.c
index 83b583d32..bf89f04ba 100644
--- a/src/wh_server_cert.c
+++ b/src/wh_server_cert.c
@@ -33,6 +33,7 @@
#include "wolfhsm/wh_error.h"
#include "wolfhsm/wh_server.h"
#include "wolfhsm/wh_server_cert.h"
+#include "wolfhsm/wh_server_cert_cache.h"
#include "wolfhsm/wh_server_nvm.h"
#include "wolfhsm/wh_server_keystore.h"
#include "wolfhsm/wh_message.h"
@@ -41,6 +42,303 @@
#include "wolfssl/wolfcrypt/types.h"
#include "wolfssl/ssl.h"
#include "wolfssl/wolfcrypt/asn.h"
+#include "wolfssl/wolfcrypt/sha256.h"
+
+
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+/* Resolve the verify cache for this server. In per-client mode the cache
+ * lives on the server context; in global mode it lives on the shared NVM
+ * context. Returns NULL if either pointer is missing. */
+static whCertVerifyCacheContext* _GetVerifyCache(whServerContext* server)
+{
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ if ((server == NULL) || (server->nvm == NULL)) {
+ return NULL;
+ }
+ return &server->nvm->globalCertVerifyCache;
+#else
+ if (server == NULL) {
+ return NULL;
+ }
+ return &server->cert.cache;
+#endif
+}
+
+/* Lock helpers compile to no-ops when the cache has no embedded lock (i.e.
+ * outside global+threadsafe builds). */
+static int _LockVerifyCache(whCertVerifyCacheContext* cache)
+{
+#if defined(WOLFHSM_CFG_THREADSAFE) && \
+ defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL)
+ return wh_Lock_Acquire(&cache->lock);
+#else
+ (void)cache;
+ return WH_ERROR_OK;
+#endif
+}
+
+static int _UnlockVerifyCache(whCertVerifyCacheContext* cache)
+{
+#if defined(WOLFHSM_CFG_THREADSAFE) && \
+ defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL)
+ return wh_Lock_Release(&cache->lock);
+#else
+ (void)cache;
+ return WH_ERROR_OK;
+#endif
+}
+
+/* Returns 1 if every element of `subset` appears in `superset`. The arrays
+ * are unsorted but bounded by WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS, so the
+ * O(N*M) scan is fine. */
+static int _IsSubsetOf(const whNvmId* subset, uint16_t subsetCount,
+ const whNvmId* superset, uint16_t supersetCount)
+{
+ uint16_t i, j;
+ int found;
+ for (i = 0; i < subsetCount; i++) {
+ found = 0;
+ for (j = 0; j < supersetCount; j++) {
+ if (subset[i] == superset[j]) {
+ found = 1;
+ break;
+ }
+ }
+ if (!found) {
+ return 0;
+ }
+ }
+ return 1;
+}
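The order-insensitive subset rule above, and the equal-size corollary the dedup scan relies on, can be sketched in isolation. This is a minimal standalone sketch: the type and function names are illustrative stand-ins, not the wolfHSM symbols.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for whNvmId. */
typedef uint16_t rootid_t;

/* Returns 1 if every element of subset appears in superset, order-free. */
static int id_set_is_subset(const rootid_t* subset, uint16_t subsetCount,
                            const rootid_t* superset, uint16_t supersetCount)
{
    uint16_t i, j;
    for (i = 0; i < subsetCount; i++) {
        int found = 0;
        for (j = 0; j < supersetCount; j++) {
            if (subset[i] == superset[j]) {
                found = 1;
                break;
            }
        }
        if (!found) {
            return 0;
        }
    }
    return 1;
}

/* Equal size + subset implies set equality, provided neither side holds
 * duplicate IDs (root NVM IDs are unique, so callers guarantee this). */
static int id_set_equal(const rootid_t* a, uint16_t na,
                        const rootid_t* b, uint16_t nb)
{
    return (na == nb) && id_set_is_subset(a, na, b, nb);
}
```

The equality predicate is exactly what the insert-time dedup scan needs, without sorting either array.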
+
+/* Internal slot scan, must be called with the cache lock held. Hit if any
+ * committed slot's stored root set is a subset of the supplied root set
+ * AND its hash matches. */
+static int _LookupSubsetUnlocked(const whCertVerifyCacheContext* cache,
+ const whNvmId* rootNvmIds, uint16_t numRoots,
+ const uint8_t* hash)
+{
+ int i;
+ for (i = 0; i < WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT; i++) {
+ const whCertVerifyCacheSlot* slot = &cache->slots[i];
+ if (slot->committed &&
+ (memcmp(slot->hash, hash, WH_CERT_VERIFY_CACHE_HASH_LEN) == 0) &&
+ _IsSubsetOf(slot->rootNvmIds, slot->numRoots, rootNvmIds,
+ numRoots)) {
+ return WH_ERROR_OK;
+ }
+ }
+ return WH_ERROR_NOTFOUND;
+}
+
+/* Internal exact-match scan for insert dedup. Root NVM IDs are unique within
+ * a set, so two sets of equal size are equal iff one is a subset of the
+ * other; reuse _IsSubsetOf with a size check rather than sorting. */
+static int _HasExactSlotUnlocked(const whCertVerifyCacheContext* cache,
+ const whNvmId* rootNvmIds, uint16_t numRoots,
+ const uint8_t* hash)
+{
+ int i;
+ for (i = 0; i < WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT; i++) {
+ const whCertVerifyCacheSlot* slot = &cache->slots[i];
+ if (slot->committed && (slot->numRoots == numRoots) &&
+ (memcmp(slot->hash, hash, WH_CERT_VERIFY_CACHE_HASH_LEN) == 0) &&
+ _IsSubsetOf(slot->rootNvmIds, slot->numRoots, rootNvmIds,
+ numRoots)) {
+ return 1;
+ }
+ }
+ return 0;
+}
+
+int wh_Server_CertVerifyCache_Lookup(whServerContext* server,
+ const whNvmId* rootNvmIds,
+ uint16_t numRoots, const uint8_t* hash)
+{
+ whCertVerifyCacheContext* cache;
+ int rc;
+ int found;
+
+ if ((server == NULL) || (hash == NULL) || (rootNvmIds == NULL) ||
+ (numRoots == 0)) {
+ return WH_ERROR_BADARGS;
+ }
+ cache = _GetVerifyCache(server);
+ if (cache == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+
+ rc = _LockVerifyCache(cache);
+ if (rc != WH_ERROR_OK) {
+ return rc;
+ }
+ if (!cache->enabled) {
+ /* Runtime-disabled cache: always miss, regardless of slot contents.
+ * Slots are cleared at disable time, so the scan would miss anyway,
+ * but short-circuiting keeps disabled-cache verify cost predictable. */
+ found = WH_ERROR_NOTFOUND;
+ }
+ else {
+ found = _LookupSubsetUnlocked(cache, rootNvmIds, numRoots, hash);
+ }
+ (void)_UnlockVerifyCache(cache);
+ return found;
+}
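The lookup path composes three gates: a disabled cache always misses, a hit requires a hash match, and the slot's stored root set must lie within the roots supplied by the caller. A minimal sketch of that decision, with single-root slots and a hypothetical integer tag standing in for the SHA-256 digest:

```c
#include <assert.h>
#include <stdint.h>

#define N_SLOTS  2
#define LK_OK        0   /* stand-in for WH_ERROR_OK */
#define LK_NOTFOUND (-1) /* stand-in for WH_ERROR_NOTFOUND */

typedef struct {
    uint32_t hashTag;  /* stand-in for the cert digest */
    uint16_t rootId;   /* single-root slot for brevity */
    uint8_t  committed;
} lk_slot;

typedef struct {
    lk_slot slots[N_SLOTS];
    uint8_t enabled;
} lk_cache;

/* Disabled cache always misses; enabled cache hits only when a committed
 * slot matches the hash and its root is among the supplied roots. */
static int cache_lookup(const lk_cache* c, const uint16_t* roots,
                        uint16_t numRoots, uint32_t hashTag)
{
    uint16_t i, j;
    if (!c->enabled) {
        return LK_NOTFOUND;
    }
    for (i = 0; i < N_SLOTS; i++) {
        if (!c->slots[i].committed || c->slots[i].hashTag != hashTag) {
            continue;
        }
        for (j = 0; j < numRoots; j++) {
            if (c->slots[i].rootId == roots[j]) {
                return LK_OK;
            }
        }
    }
    return LK_NOTFOUND;
}
```

Note the asymmetry: a slot cached under root 1 hits a later lookup supplying roots {1, 2} (superset), but not one supplying only root 2.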
+
+void wh_Server_CertVerifyCache_Insert(whServerContext* server,
+ const whNvmId* rootNvmIds,
+ uint16_t numRoots, const uint8_t* hash)
+{
+ whCertVerifyCacheContext* cache;
+ whCertVerifyCacheSlot* slot;
+ uint16_t idx;
+ uint16_t k;
+ int rc;
+
+ if ((server == NULL) || (hash == NULL) || (rootNvmIds == NULL) ||
+ (numRoots == 0) || (numRoots > WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS)) {
+ return;
+ }
+ cache = _GetVerifyCache(server);
+ if (cache == NULL) {
+ return;
+ }
+
+ rc = _LockVerifyCache(cache);
+ if (rc != WH_ERROR_OK) {
+ return;
+ }
+ /* Runtime-disabled cache: drop the insert silently. The slot array is
+ * already empty (cleared on disable) and stays that way until re-enable,
+ * so dropping here preserves "no new entries while disabled". */
+ if (!cache->enabled) {
+ (void)_UnlockVerifyCache(cache);
+ return;
+ }
+ /* Dedup on exact (set, hash) match under the lock so concurrent inserts
+ * of the same verify collapse to a single slot. Differing-set entries
+ * for the same hash coexist: each is an independent claim about a
+ * distinct verify, and dropping either could lose hit coverage. */
+ if (!_HasExactSlotUnlocked(cache, rootNvmIds, numRoots, hash)) {
+ idx = cache->writeIdx;
+ slot = &cache->slots[idx];
+ slot->numRoots = (uint8_t)numRoots;
+ for (k = 0; k < numRoots; k++) {
+ slot->rootNvmIds[k] = rootNvmIds[k];
+ }
+ memcpy(slot->hash, hash, WH_CERT_VERIFY_CACHE_HASH_LEN);
+ slot->committed = 1;
+ cache->writeIdx =
+ (uint16_t)((idx + 1) % WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT);
+ }
+ (void)_UnlockVerifyCache(cache);
+}
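The insert path above is a fixed-capacity FIFO ring: the write index advances modulo the slot count, the oldest entry is overwritten once the ring is full, and an exact duplicate consumes no slot. A sketch of just the ring arithmetic, with the payload collapsed to a hypothetical tag and an illustrative slot count:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT. */
#define RING_COUNT 4

typedef struct {
    uint32_t tag;      /* stand-in for the (root set, hash) payload */
    uint8_t  committed;
} ring_slot;

typedef struct {
    ring_slot slots[RING_COUNT];
    uint16_t  writeIdx;
} ring_cache;

/* Insert with exact-duplicate dedup; overwrites the oldest slot when full. */
static void ring_insert(ring_cache* c, uint32_t tag)
{
    uint16_t i;
    for (i = 0; i < RING_COUNT; i++) {
        if (c->slots[i].committed && c->slots[i].tag == tag) {
            return; /* already cached: identical inserts collapse */
        }
    }
    c->slots[c->writeIdx].tag       = tag;
    c->slots[c->writeIdx].committed = 1;
    c->writeIdx = (uint16_t)((c->writeIdx + 1) % RING_COUNT);
}
```

Because dedup happens before the slot write, a burst of identical verifies advances `writeIdx` only once, preserving capacity for distinct entries.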
+
+void wh_Server_CertVerifyCache_Clear(whServerContext* server)
+{
+ whCertVerifyCacheContext* cache;
+ int rc;
+
+ if (server == NULL) {
+ return;
+ }
+ cache = _GetVerifyCache(server);
+ if (cache == NULL) {
+ return;
+ }
+
+ rc = _LockVerifyCache(cache);
+ if (rc != WH_ERROR_OK) {
+ return;
+ }
+ /* Clear payload only; the embedded lock (when present) must survive a
+ * Clear, otherwise the next operation would acquire an uninitialized
+ * lock. */
+ memset(cache->slots, 0, sizeof(cache->slots));
+ cache->writeIdx = 0;
+ (void)_UnlockVerifyCache(cache);
+}
+
+int wh_Server_CertVerifyCache_SetEnabled(whServerContext* server,
+ uint8_t enable)
+{
+ whCertVerifyCacheContext* cache;
+ int rc;
+
+ if (server == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+ cache = _GetVerifyCache(server);
+ if (cache == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+
+ rc = _LockVerifyCache(cache);
+ if (rc != WH_ERROR_OK) {
+ return rc;
+ }
+ /* Flush on transition to disabled so a future re-enable starts from a
+ * clean state rather than reviving entries that pre-dated the disable.
+ * Mirrors the payload-only clear done by wh_Server_CertVerifyCache_Clear:
+ * the embedded lock (when present) must survive. */
+ if (!enable) {
+ memset(cache->slots, 0, sizeof(cache->slots));
+ cache->writeIdx = 0;
+ }
+ cache->enabled = enable ? 1 : 0;
+ (void)_UnlockVerifyCache(cache);
+ return WH_ERROR_OK;
+}
+
+void wh_Server_CertVerifyCache_EvictRoot(whServerContext* server,
+ whNvmId rootNvmId)
+{
+ whCertVerifyCacheContext* cache;
+ int rc;
+ int i;
+
+ if (server == NULL) {
+ return;
+ }
+ cache = _GetVerifyCache(server);
+ if (cache == NULL) {
+ return;
+ }
+
+ rc = _LockVerifyCache(cache);
+ if (rc != WH_ERROR_OK) {
+ return;
+ }
+    /* Drop any slot whose stored root set contains the evicted root. We
+     * cannot safely strip the root from the set and keep the entry: the
+     * original verify may have anchored at the now-departed root, so the
+     * remaining set is no longer a sound claim. writeIdx is left alone:
+     * the FIFO ring becomes sparse but stays well-formed, and adjusting it
+     * here would require compacting entries that belong to other roots. */
+ for (i = 0; i < WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT; i++) {
+ whCertVerifyCacheSlot* slot = &cache->slots[i];
+ if (slot->committed) {
+ uint16_t k;
+ for (k = 0; k < slot->numRoots; k++) {
+ if (slot->rootNvmIds[k] == rootNvmId) {
+ memset(slot, 0, sizeof(*slot));
+ break;
+ }
+ }
+ }
+ }
+ (void)_UnlockVerifyCache(cache);
+}
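Eviction drops the whole slot whenever its stored root set mentions the departing root; it never rewrites the set in place. A standalone sketch of that all-or-nothing rule (types and bounds are illustrative, not the wolfHSM definitions):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLOT_COUNT 3
#define MAX_ROOTS  4

typedef struct {
    uint16_t rootIds[MAX_ROOTS];
    uint16_t numRoots;
    uint8_t  committed;
} verify_slot;

/* Zero every committed slot whose root set contains rootId. Slots bound
 * only to other roots are untouched, so the ring goes sparse but stays
 * valid. */
static void evict_root(verify_slot* slots, uint16_t count, uint16_t rootId)
{
    uint16_t i, k;
    for (i = 0; i < count; i++) {
        if (!slots[i].committed) {
            continue;
        }
        for (k = 0; k < slots[i].numRoots; k++) {
            if (slots[i].rootIds[k] == rootId) {
                memset(&slots[i], 0, sizeof(slots[i]));
                break;
            }
        }
    }
}
```

Dropping the whole slot is the conservative choice: a multi-root entry may have anchored at the evicted root, so shrinking its set could leave a claim the remaining roots never actually backed.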
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
+int wh_Server_CertSetVerifyCb(whServerContext* server, VerifyCallback cb)
+{
+ if (server == NULL) {
+ return WH_ERROR_BADARGS;
+ }
+ server->cert.verifyCb = cb;
+ return WH_ERROR_OK;
+}
/* Replicates GetSequence, which is WOLFSSL_LOCAL. */
@@ -61,20 +359,28 @@ static int DerNextSequence(const uint8_t* input, uint32_t maxIdx,
}
-static int _verifyChainAgainstCmStore(whServerContext* server,
- WOLFSSL_CERT_MANAGER* cm,
- const uint8_t* chain, uint32_t chain_len,
- whCertFlags flags,
- whNvmFlags cachedKeyFlags,
- whKeyId* inout_keyId)
+static int
+_verifyChainAgainstCmStore(whServerContext* server, WOLFSSL_CERT_MANAGER* cm,
+ const uint8_t* chain, uint32_t chain_len,
+ const whNvmId* trustedRootNvmIds, uint16_t numRoots,
+ whCertFlags flags, whNvmFlags cachedKeyFlags,
+ whKeyId* inout_keyId)
{
int rc = 0;
const uint8_t* cert_ptr = chain;
uint32_t remaining_len = chain_len;
int cert_len = 0;
word32 idx = 0;
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ uint8_t certHash[WH_CERT_VERIFY_CACHE_HASH_LEN];
+ int hashed = 0;
+#else
+ (void)trustedRootNvmIds;
+ (void)numRoots;
+#endif
- if (cm == NULL || chain == NULL || chain_len == 0) {
+ if (cm == NULL || chain == NULL || chain_len == 0 ||
+ trustedRootNvmIds == NULL || numRoots == 0) {
return WH_ERROR_BADARGS;
}
@@ -82,6 +388,9 @@ static int _verifyChainAgainstCmStore(whServerContext* server,
while (remaining_len > 0) {
/* Reset index for each certificate */
idx = 0;
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ hashed = 0;
+#endif
/* Get the length of the current certificate */
rc = DerNextSequence(cert_ptr, remaining_len, &idx, &cert_len);
@@ -94,9 +403,37 @@ static int _verifyChainAgainstCmStore(whServerContext* server,
return WH_ERROR_ABORTED;
}
- /* Verify the current certificate */
- rc = wolfSSL_CertManagerVerifyBuffer(cm, cert_ptr, cert_len + idx,
- WOLFSSL_FILETYPE_ASN1);
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ /* Hash the DER cert and check the verify cache. A hit short-circuits
+ * the public-key signature check; the cert is otherwise treated as if
+ * it had verified normally so the rest of the loop (CA decode, store
+ * load, leaf pubkey extract) continues unchanged. */
+ rc = wc_Sha256Hash_ex(cert_ptr, (word32)(cert_len + idx), certHash,
+ NULL, server->devId);
+ if (rc != 0) {
+ return rc;
+ }
+ hashed = 1;
+ {
+ int hit = (wh_Server_CertVerifyCache_Lookup(
+ server, trustedRootNvmIds, numRoots, certHash) ==
+ WH_ERROR_OK);
+ if (hit) {
+ rc = WOLFSSL_SUCCESS;
+ }
+ else {
+ /* Verify the current certificate */
+ rc = wolfSSL_CertManagerVerifyBuffer(
+ cm, cert_ptr, cert_len + idx, WOLFSSL_FILETYPE_ASN1);
+ }
+ }
+#else
+ {
+ /* Verify the current certificate */
+ rc = wolfSSL_CertManagerVerifyBuffer(cm, cert_ptr, cert_len + idx,
+ WOLFSSL_FILETYPE_ASN1);
+ }
+#endif
/* If this is not the leaf certificate and it's trusted, add it to the
@@ -169,6 +506,29 @@ static int _verifyChainAgainstCmStore(whServerContext* server,
return rc;
}
}
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ /* Insert only CA certs into the verify cache. Leaves are not
+ * cached: a cache hit on a leaf during a future "leaf alone"
+ * verify would short-circuit the wolfSSL signature check that
+ * would otherwise have failed (the leaf's issuer is not in the
+ * cert manager when the leaf is supplied without its
+ * intermediates). CA caching is sound because the chain walk
+ * loads each verified CA into the cert manager before the next
+ * cert is processed.
+ *
+ * The slot's binding is the loaded root set passed in. Under
+ * subset-lookup semantics, a future verify hits this entry
+ * only when its loaded set is a superset, which by X.509
+ * verify monotonicity guarantees the cached chain still
+ * validates. Single-root callers produce one-element entries
+ * (broadest reuse); multi-root callers produce wider entries
+ * that are still useful when later traffic presents at least
+ * the same roots. */
+ if (hashed && dc.isCA) {
+ wh_Server_CertVerifyCache_Insert(server, trustedRootNvmIds,
+ numRoots, certHash);
+ }
+#endif
wc_FreeDecodedCert(&dc);
}
else {
@@ -190,7 +550,18 @@ int wh_Server_CertInit(whServerContext* server)
#ifdef DEBUG_WOLFSSL
wolfSSL_Debugging_ON();
#endif
+#if defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE) && \
+ !defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL)
+ /* Per-client cache is owned by the server context and zeroed on each
+ * server init. Under _GLOBAL the cache lives in the NVM context and is
+ * initialized exactly once in wh_Nvm_Init — clearing it here would wipe
+ * entries populated by other clients. */
+ if (server != NULL) {
+ wh_Server_CertVerifyCache_Clear(server);
+ }
+#else
(void)server;
+#endif
return WH_ERROR_OK;
}
@@ -225,6 +596,17 @@ int wh_Server_CertAddTrusted(whServerContext* server, whNvmId id,
rc = wh_Nvm_AddObject(server->nvm, &metadata, cert_len, cert);
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ /* Cache entries are bound to the trusted root by NVM ID. AddObject
+ * supersedes any prior object at this ID, so cached verifies anchored at
+ * the previous root must be evicted lest they short-circuit a verify
+ * under the new (different) root. Evict on success only — a failed add
+ * leaves the prior root in place. */
+ if (rc == WH_ERROR_OK) {
+ wh_Server_CertVerifyCache_EvictRoot(server, id);
+ }
+#endif
+
return rc;
}
@@ -241,6 +623,15 @@ int wh_Server_CertEraseTrusted(whServerContext* server, whNvmId id)
id_list[0] = id;
rc = wh_Nvm_DestroyObjects(server->nvm, 1, id_list);
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ /* See AddTrusted: stale cache entries against the now-erased root must
+ * not survive, otherwise a future AddTrusted at the same ID would inherit
+ * a phantom cache hit. Evict on success only. */
+ if (rc == WH_ERROR_OK) {
+ wh_Server_CertVerifyCache_EvictRoot(server, id);
+ }
+#endif
+
return rc;
}
@@ -286,9 +677,14 @@ int wh_Server_CertVerifyMultiRoot(whServerContext* server, const uint8_t* cert,
WOLFSSL_CERT_MANAGER* cm = NULL;
uint8_t root_cert[WOLFHSM_CFG_MAX_CERT_SIZE];
uint32_t root_cert_len;
- int rc = WH_ERROR_OK;
- int anchorsLoaded = 0;
- uint16_t i;
+ int rc = WH_ERROR_OK;
+ /* Track only the roots that were actually loaded into the cert manager.
+ * Forwarding the full caller-supplied set into the cache lookup would let
+ * a stale entry under a missing root match a verify whose effective trust
+ * store does not contain that root. */
+ whNvmId loadedRootNvmIds[WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS];
+ uint16_t loadedRootCount = 0;
+ uint16_t i;
if ((server == NULL) || (cert == NULL) || (cert_len == 0) ||
(trustedRootNvmIds == NULL) || (numRoots == 0) ||
@@ -308,6 +704,13 @@ int wh_Server_CertVerifyMultiRoot(whServerContext* server, const uint8_t* cert,
return WH_ERROR_ABORTED;
}
+ /* Apply the user-supplied verify callback, if registered. wolfSSL invokes
+ * it during wolfSSL_CertManagerVerifyBuffer; cache hits short-circuit that
+ * path and so deliberately do not invoke the callback. */
+ if (server->cert.verifyCb != NULL) {
+ wolfSSL_CertManagerSetVerify(cm, server->cert.verifyCb);
+ }
+
/* Load each root anchor. Absent roots are silently skipped; any other
* read or load failure is fatal and reported. */
for (i = 0; i < numRoots; i++) {
@@ -330,17 +733,20 @@ int wh_Server_CertVerifyMultiRoot(whServerContext* server, const uint8_t* cert,
(void)wolfSSL_CertManagerFree(cm);
return WH_ERROR_ABORTED;
}
- anchorsLoaded++;
+ loadedRootNvmIds[loadedRootCount++] = trustedRootNvmIds[i];
}
/* If no anchors were loaded, the trust store is empty */
- if (anchorsLoaded == 0) {
+ if (loadedRootCount == 0) {
(void)wolfSSL_CertManagerFree(cm);
return WH_ERROR_NOTFOUND;
}
- /* Verify the chain against the populated trust store */
- rc = _verifyChainAgainstCmStore(server, cm, cert, cert_len, flags,
+ /* Verify the chain against the populated trust store. Pass only the
+ * loaded root set so cache lookups cannot match entries bound to a root
+ * that is not actually in cm. */
+ rc = _verifyChainAgainstCmStore(server, cm, cert, cert_len,
+ loadedRootNvmIds, loadedRootCount, flags,
cachedKeyFlags, inout_keyId);
if (rc != WH_ERROR_OK) {
rc = WH_ERROR_CERT_VERIFY;
@@ -681,6 +1087,62 @@ int wh_Server_HandleCertRequest(whServerContext* server, uint16_t magic,
*out_resp_size = sizeof(resp);
}; break;
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ case WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_CLEAR: {
+ whMessageCert_SimpleResponse resp = {0};
+
+#ifndef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ /* Per-client cache piggybacks on the NVM lock for serialization.
+ * Under _GLOBAL the cache has its own lock acquired internally by
+ * wh_Server_CertVerifyCache_Clear, so the NVM lock isn't needed
+ * (and acquiring it would needlessly block NVM I/O on cache
+ * clears). */
+ rc = WH_SERVER_NVM_LOCK(server);
+ if (rc == WH_ERROR_OK) {
+ wh_Server_CertVerifyCache_Clear(server);
+ (void)WH_SERVER_NVM_UNLOCK(server);
+ }
+#else
+ wh_Server_CertVerifyCache_Clear(server);
+ rc = WH_ERROR_OK;
+#endif
+ resp.rc = rc;
+
+ wh_MessageCert_TranslateSimpleResponse(
+ magic, &resp, (whMessageCert_SimpleResponse*)resp_packet);
+ *out_resp_size = sizeof(resp);
+ }; break;
+
+ case WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_SET_ENABLED: {
+ whMessageCert_SetEnabledRequest req = {0};
+ whMessageCert_SimpleResponse resp = {0};
+
+ if (req_size != sizeof(req)) {
+ resp.rc = WH_ERROR_ABORTED;
+ }
+ else {
+ wh_MessageCert_TranslateSetEnabledRequest(
+ magic, (whMessageCert_SetEnabledRequest*)req_packet, &req);
+#ifndef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ /* Same locking rationale as VERIFY_CACHE_CLEAR above. */
+ rc = WH_SERVER_NVM_LOCK(server);
+ if (rc == WH_ERROR_OK) {
+ rc = wh_Server_CertVerifyCache_SetEnabled(server,
+ req.enable);
+ (void)WH_SERVER_NVM_UNLOCK(server);
+ }
+#else
+ rc = wh_Server_CertVerifyCache_SetEnabled(server, req.enable);
+#endif
+ resp.rc = rc;
+ }
+
+ wh_MessageCert_TranslateSimpleResponse(
+ magic, &resp, (whMessageCert_SimpleResponse*)resp_packet);
+ *out_resp_size = sizeof(resp);
+ }; break;
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
#ifdef WOLFHSM_CFG_DMA
case WH_MESSAGE_CERT_ACTION_ADDTRUSTED_DMA: {
whMessageCert_AddTrustedDmaRequest req = {0};
diff --git a/src/wh_server_she.c b/src/wh_server_she.c
index 283de85e5..3615dce16 100644
--- a/src/wh_server_she.c
+++ b/src/wh_server_she.c
@@ -1145,7 +1145,7 @@ static int _EncEcb(whServerContext* server, uint16_t magic, uint16_t req_size,
void* resp_packet)
{
int ret = 0;
- uint32_t field;
+ uint32_t field = 0;
uint32_t keySz;
uint8_t* in;
uint8_t* out;
@@ -1221,7 +1221,7 @@ static int _EncCbc(whServerContext* server, uint16_t magic, uint16_t req_size,
void* resp_packet)
{
int ret = 0;
- uint32_t field;
+ uint32_t field = 0;
uint32_t keySz;
uint8_t* in;
uint8_t* out;
@@ -1304,7 +1304,7 @@ static int _DecEcb(whServerContext* server, uint16_t magic, uint16_t req_size,
void* resp_packet)
{
int ret = 0;
- uint32_t field;
+ uint32_t field = 0;
uint32_t keySz;
uint8_t* in;
uint8_t* out;
@@ -1387,7 +1387,7 @@ static int _DecCbc(whServerContext* server, uint16_t magic, uint16_t req_size,
void* resp_packet)
{
int ret = 0;
- uint32_t field;
+ uint32_t field = 0;
uint32_t keySz;
uint8_t* in;
uint8_t* out;
diff --git a/test/Makefile b/test/Makefile
index de09e7d04..f674cd3c0 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -165,6 +165,18 @@ ifeq ($(AUTH),1)
DEF += -DWOLFHSM_CFG_ENABLE_AUTHENTICATION
endif
+# Support trusted-cert verify-result cache
+ifeq ($(CERT_VERIFY_CACHE),1)
+ DEF += -DWOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+endif
+
+# Use the cross-client (global) variant of the trusted-cert verify cache.
+# Implies CERT_VERIFY_CACHE.
+ifeq ($(CERT_VERIFY_CACHE_GLOBAL),1)
+ DEF += -DWOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ DEF += -DWOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+endif
+
## Project defines
# Option to build wolfcrypt tests
ifeq ($(TESTWOLFCRYPT),1)
@@ -295,8 +307,13 @@ $(BUILD_DIR)/%.o: %.s
@echo "Compiling ASM file: $(notdir $<)"
$(CMD_ECHO) $(AS) $(ASFLAGS) $(DEF) $(INC) -c -o $@ $<
-# Add additional flag here to avoid pragma
-$(BUILD_DIR)/wh_test_check_struct_padding.o: CFLAGS+=-Wpadded -DWOLFHSM_CFG_NO_CRYPTO
+# Wire-format struct-padding audit. -Wpadded flags any implicit struct
+# padding so spurious padding surfaces at build time. WH_PADDING_CHECK is an
+# internal sentinel honored by wh_settings.h that suppresses external
+# dependencies (wolfSSL etc.) so the audit doesn't drag in third-party source
+# whose layout could perturb the result. Distinct from WOLFHSM_CFG_NO_CRYPTO
+# so it doesn't disable user features (e.g. the cert verify cache).
+$(BUILD_DIR)/wh_test_check_struct_padding.o: CFLAGS+=-Wpadded -DWH_PADDING_CHECK
$(BUILD_DIR)/%.o: %.c
@echo "Compiling C file: $(notdir $<)"
diff --git a/test/wh_test_cert.c b/test/wh_test_cert.c
index 80175ffaa..deedc9304 100644
--- a/test/wh_test_cert.c
+++ b/test/wh_test_cert.c
@@ -121,7 +121,9 @@ int whTest_CertServerCfg(whServerConfig* serverCfg)
server, RAW_CERT_CHAIN_B, RAW_CERT_CHAIN_B_len, rootCertB,
WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
- /* attempt to verify invalid chains, should fail */
+ /* attempt to verify invalid chains, should fail. Cache entries are scoped
+ * to the trusted root NVM ID, so prior positive verifies under the true
+ * root cannot bypass these cross-root checks. */
WH_TEST_PRINT("Attempting to verify invalid certificate chains...\n");
WH_TEST_ASSERT_RETURN(WH_ERROR_CERT_VERIFY ==
wh_Server_CertVerify(server, RAW_CERT_CHAIN_A,
@@ -253,6 +255,439 @@ int whTest_CertServerCfg(whServerConfig* serverCfg)
WH_TEST_PRINT("Test completed successfully\n");
return rc;
}
+
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+/* Exercises the trusted-cert verify cache directly through the server API:
+ * - repeat-verify of the same chain under the same root stays successful
+ * - cache entries are bound to the trusted root NVM ID: chain A under root B
+ * must fail even after chain A has been cached by a verify under root A
+ * (regression test against cross-root cache bypass)
+ * - clearing the cache leaves the cross-root case still failing */
+static int whTest_CertServerVerifyCache(whServerConfig* serverCfg)
+{
+ whServerContext server[1] = {0};
+ const whNvmId rootCertA = 1;
+ const whNvmId rootCertB = 2;
+
+ WH_TEST_PRINT("=== Server cert verify-cache test ===\n");
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_Init(server, serverCfg));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertInit(server));
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertAddTrusted(
+ server, rootCertA, WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONMODIFIABLE, NULL,
+ 0, ROOT_A_CERT, ROOT_A_CERT_len));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertAddTrusted(
+ server, rootCertB, WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONMODIFIABLE, NULL,
+ 0, ROOT_B_CERT, ROOT_B_CERT_len));
+
+ /* 1. Repeat-verify hit: verify chain A twice under root A; both succeed. */
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+
+ /* 2. Cache is bound to root NVM ID: chain A under root B must fail even
+ * though every cert in chain A was cached under root A by step 1. The
+ * cache hit must not let an unsigned-by-rootB chain through. */
+ WH_TEST_ASSERT_RETURN(WH_ERROR_CERT_VERIFY ==
+ wh_Server_CertVerify(server, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len, rootCertB,
+ WH_CERT_FLAGS_NONE,
+ WH_NVM_FLAGS_USAGE_ANY, NULL));
+
+ /* 3. Clear: the same cross-root verify still fails cold. */
+ wh_Server_CertVerifyCache_Clear(server);
+ WH_TEST_ASSERT_RETURN(WH_ERROR_CERT_VERIFY ==
+ wh_Server_CertVerify(server, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len, rootCertB,
+ WH_CERT_FLAGS_NONE,
+ WH_NVM_FLAGS_USAGE_ANY, NULL));
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertEraseTrusted(server, rootCertA));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertEraseTrusted(server, rootCertB));
+ WH_TEST_PRINT("Server cert verify-cache test PASSED\n");
+ return WH_ERROR_OK;
+}
+
+/* Counts verify-callback invocations so we can tell a cold verify (callback
+ * fires for every cert) from a warm verify (callback skipped on cache hits).
+ * Used by the SetEnabled test below to confirm that disabling actually
+ * suppresses cache hits. */
+static int s_setEnabledCb_count = 0;
+static int whTest_setEnabledVerifyCb(int preverify,
+ WOLFSSL_X509_STORE_CTX* store)
+{
+ (void)store;
+ s_setEnabledCb_count++;
+ return preverify;
+}
+
+/* Exercises wh_Server_CertVerifyCache_SetEnabled:
+ * - disable flushes existing entries and suppresses subsequent cache hits
+ * (a re-verify after disable runs cold, callback fires the full count)
+ * - re-enable resumes caching (a verify after re-enable populates, the next
+ * re-verify warms — callback count drops back down)
+ * - default state is enabled (covered implicitly by the cold/warm counts) */
+static int whTest_CertServerVerifyCacheSetEnabled(whServerConfig* serverCfg)
+{
+ whServerContext server[1] = {0};
+ whServerCertConfig certCfg = {.verifyCb = whTest_setEnabledVerifyCb};
+ whServerCertConfig* savedCertConfig;
+ const whNvmId rootCertA = 1;
+ int coldCount;
+ int warmCount;
+ int afterDisableCount;
+
+ WH_TEST_PRINT("=== Server cert verify-cache set-enabled test ===\n");
+
+    /* Inject the counting callback so cache hits can be detected by the
+     * absence of callback invocations. Restore on exit. */
+ savedCertConfig = serverCfg->certConfig;
+ serverCfg->certConfig = &certCfg;
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_Init(server, serverCfg));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertInit(server));
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertAddTrusted(
+ server, rootCertA, WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONMODIFIABLE, NULL,
+ 0, ROOT_A_CERT, ROOT_A_CERT_len));
+
+ /* Start cold under the global-shared cache mode where prior tests may
+ * have populated entries. Per-client mode is already clean. */
+ wh_Server_CertVerifyCache_Clear(server);
+
+ /* 1. Cold verify under default-enabled cache: callback fires for every
+ * cert in the chain. */
+ s_setEnabledCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ coldCount = s_setEnabledCb_count;
+ WH_TEST_ASSERT_RETURN(coldCount > 0);
+
+ /* 2. Re-verify with cache still enabled: CA cache hits skip the callback,
+ * so the count is strictly less than cold. */
+ s_setEnabledCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ warmCount = s_setEnabledCb_count;
+ WH_TEST_ASSERT_RETURN(warmCount > 0);
+ WH_TEST_ASSERT_RETURN(warmCount < coldCount);
+
+ /* 3. Disable the cache. Entries from steps 1-2 must be flushed and new
+ * inserts suppressed: the next verify should be cold again (count back
+ * up to coldCount). */
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerifyCache_SetEnabled(server, 0));
+ s_setEnabledCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ afterDisableCount = s_setEnabledCb_count;
+ WH_TEST_ASSERT_RETURN(afterDisableCount == coldCount);
+
+ /* 4. Second verify while still disabled — also cold (no caching). */
+ s_setEnabledCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_setEnabledCb_count == coldCount);
+
+ /* 5. Re-enable. The post-disable verify did not populate the cache, so
+ * the first re-enabled verify is still cold (and populates). */
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerifyCache_SetEnabled(server, 1));
+ s_setEnabledCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_setEnabledCb_count == coldCount);
+
+ /* 6. Subsequent verify warms, matching step 2's warmCount. */
+ s_setEnabledCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_setEnabledCb_count == warmCount);
+
+ /* Clean up: leave the cache empty for downstream tests. */
+ wh_Server_CertVerifyCache_Clear(server);
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertEraseTrusted(server, rootCertA));
+ serverCfg->certConfig = savedCertConfig;
+ WH_TEST_PRINT("Server cert verify-cache set-enabled test PASSED\n");
+ return WH_ERROR_OK;
+}
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
+#if defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE) && \
+ defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL)
+/* Counts callback invocations to detect cross-client cache hits. The verify
+ * callback fires only when wolfSSL actually runs the verify; a global-cache
+ * hit short-circuits the wolfSSL verify path and bypasses the callback. Two
+ * server contexts that share the same NVM context must share the cache, so
+ * the second context's verify of an already-cached chain must increment this
+ * counter fewer times than the cold verify (only the uncached leaf fires). */
+static int s_globalCacheCb_count = 0;
+static int whTest_globalCacheVerifyCb(int preverify,
+ WOLFSSL_X509_STORE_CTX* store)
+{
+ (void)store;
+ s_globalCacheCb_count++;
+ /* Mirror wolfSSL's verdict so cross-root verifies still fail. Returning a
+ * hard 1 would mask signature mismatches and break the cross-root
+ * regression check below. */
+ return preverify;
+}
+
+/* Cross-client cache hit test. Two whServerContext instances, both backed by
+ * the single whNvmContext owned by the test driver, must share the trusted
+ * cert verify cache: a chain verified on serverA must short-circuit when
+ * verified again on serverB. */
+static int whTest_CertServerVerifyCacheGlobalShared(whServerConfig* serverCfg)
+{
+ whServerContext serverA[1] = {0};
+ whServerContext serverB[1] = {0};
+ const whNvmId rootCertA = 1;
+ const whNvmId rootCertB = 2;
+ int beforeCount;
+
+ WH_TEST_PRINT(
+ "=== Server cert verify-cache global cross-client test ===\n");
+
+ /* Two independent server contexts, both pointing at the same NVM
+ * context via serverCfg. The cache lives on the NVM context in global
+ * mode, so both servers see the same slots. */
+ WH_TEST_RETURN_ON_FAIL(wh_Server_Init(serverA, serverCfg));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertInit(serverA));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_Init(serverB, serverCfg));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertInit(serverB));
+
+ /* Register the same counting callback on both servers so we can detect
+ * which verify path actually executed wolfSSL's signature check vs.
+ * which one short-circuited via the global cache. */
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Server_CertSetVerifyCb(serverA, whTest_globalCacheVerifyCb));
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Server_CertSetVerifyCb(serverB, whTest_globalCacheVerifyCb));
+
+ /* Trust both roots so the cross-root regression below has somewhere to
+ * land. */
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertAddTrusted(
+ serverA, rootCertA, WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONMODIFIABLE, NULL,
+ 0, ROOT_A_CERT, ROOT_A_CERT_len));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertAddTrusted(
+ serverA, rootCertB, WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONMODIFIABLE, NULL,
+ 0, ROOT_B_CERT, ROOT_B_CERT_len));
+
+ /* Make sure we start cold even if a prior test populated the global
+ * cache. wh_Server_CertInit no longer clears under _GLOBAL. */
+ wh_Server_CertVerifyCache_Clear(serverA);
+
+ /* 1. Cold verify on A populates the global cache. */
+ s_globalCacheCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ serverA, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_globalCacheCb_count > 0);
+
+ /* 2. Same chain re-verified on B hits the cache populated by A for the
+ * CA certs — those callback invocations are skipped. The leaf is not
+ * cached (caching it would let an isolated "leaf alone" verify falsely
+ * succeed via cache hit), so the leaf's callback still fires. The
+ * re-verify therefore invokes the callback fewer times than the cold
+ * verify but still at least once. */
+ beforeCount = s_globalCacheCb_count;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ serverB, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_globalCacheCb_count > beforeCount);
+ WH_TEST_ASSERT_RETURN(s_globalCacheCb_count - beforeCount < beforeCount);
+
+    /* 3. Cross-root: chain A under rootB must still fail on B even though
+     * chain A was cached under rootA. Cache entries are scoped to the
+     * trusted-root set in effect at verify time; a hit recorded under one
+     * root must not satisfy a verify under another. */
+ WH_TEST_ASSERT_RETURN(WH_ERROR_CERT_VERIFY ==
+ wh_Server_CertVerify(serverB, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len, rootCertB,
+ WH_CERT_FLAGS_NONE,
+ WH_NVM_FLAGS_USAGE_ANY, NULL));
+
+ /* 4. Clear via serverA wipes the shared cache; serverB now cold-verifies
+ * again and the callback fires. */
+ wh_Server_CertVerifyCache_Clear(serverA);
+ beforeCount = s_globalCacheCb_count;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ serverB, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_globalCacheCb_count > beforeCount);
+
+ /* Reset cache so subsequent tests in the driver get a clean slate. */
+ wh_Server_CertVerifyCache_Clear(serverA);
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertEraseTrusted(serverA, rootCertA));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertEraseTrusted(serverA, rootCertB));
+ WH_TEST_PRINT("Server cert verify-cache global cross-client test PASSED\n");
+ return WH_ERROR_OK;
+}
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE && \
+ WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL */
+
+/* State for the user-injectable verify callback test */
+static int s_verifyCb_count = 0;
+static int s_verifyCb_lastPreverify = -1;
+static int s_verifyCb_returnVal = 1;
+
+static int whTest_recordingVerifyCb(int preverify,
+ WOLFSSL_X509_STORE_CTX* store)
+{
+ (void)store;
+ s_verifyCb_count++;
+ s_verifyCb_lastPreverify = preverify;
+ return s_verifyCb_returnVal;
+}
+
+/* Exercises the user-injectable verify callback configured through
+ * whServerCertConfig. Confirms:
+ * - the callback is invoked during chain verification with preverify=1
+ * - returning zero from the callback fails the verify
+ * - cache hits on CA certs bypass the callback (when the verify cache
+ * is enabled). Leaf certs are intentionally not cached, so the leaf's
+ * signature is re-verified (and the callback re-invoked) on every
+ * verify call. */
+static int whTest_CertServerVerifyCallback(whServerConfig* serverCfg)
+{
+ int rc;
+ whServerContext server[1] = {0};
+ whServerCertConfig certCfg = {.verifyCb = whTest_recordingVerifyCb};
+ whServerCertConfig* savedCertConfig;
+ const whNvmId rootCertA = 1;
+
+ WH_TEST_PRINT("=== Server cert verify-callback test ===\n");
+
+ /* Inject our cert config; restore on exit. */
+ savedCertConfig = serverCfg->certConfig;
+ serverCfg->certConfig = &certCfg;
+
+ s_verifyCb_count = 0;
+ s_verifyCb_lastPreverify = -1;
+ s_verifyCb_returnVal = 1;
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_Init(server, serverCfg));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertInit(server));
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertAddTrusted(
+ server, rootCertA, WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONMODIFIABLE, NULL,
+ 0, ROOT_A_CERT, ROOT_A_CERT_len));
+
+ /* 1. Callback is invoked on a successful verify with preverify=1. */
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_verifyCb_count > 0);
+ WH_TEST_ASSERT_RETURN(s_verifyCb_lastPreverify == 1);
+
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ {
+ /* 2. Cache hits on CA certs bypass the callback. Leaves are not
+ * cached (caching them would let an isolated "leaf alone" verify
+ * falsely succeed via cache hit), so the leaf's callback fires on
+ * every re-verify. The re-verify therefore invokes the callback
+ * fewer times than the cold verify but still at least once. */
+ int firstRunCount = s_verifyCb_count;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_verifyCb_count > firstRunCount);
+ WH_TEST_ASSERT_RETURN(s_verifyCb_count - firstRunCount < firstRunCount);
+
+ /* Clear cache so the next verify re-enters wolfSSL and the cb. */
+ wh_Server_CertVerifyCache_Clear(server);
+ }
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
+ /* 3. Returning zero from the callback forces verify failure. */
+ s_verifyCb_returnVal = 0;
+ s_verifyCb_count = 0;
+ WH_TEST_ASSERT_RETURN(WH_ERROR_CERT_VERIFY ==
+ wh_Server_CertVerify(server, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE,
+ WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_verifyCb_count > 0);
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertEraseTrusted(server, rootCertA));
+ serverCfg->certConfig = savedCertConfig;
+ rc = WH_ERROR_OK;
+ WH_TEST_PRINT("Server cert verify-callback test PASSED\n");
+ return rc;
+}
+
+/* Exercises wh_Server_CertSetVerifyCb: register, replace, and unregister the
+ * verify callback after the server is already initialized (i.e. without
+ * supplying it via whServerCertConfig). */
+static int whTest_CertServerVerifyCallbackRuntime(whServerConfig* serverCfg)
+{
+ int rc;
+ whServerContext server[1] = {0};
+ whServerCertConfig* savedCertConfig;
+ const whNvmId rootCertA = 1;
+
+ WH_TEST_PRINT("=== Server cert verify-callback runtime test ===\n");
+
+ /* Force NULL certConfig so registration must come from the runtime API. */
+ savedCertConfig = serverCfg->certConfig;
+ serverCfg->certConfig = NULL;
+
+ s_verifyCb_count = 0;
+ s_verifyCb_lastPreverify = -1;
+ s_verifyCb_returnVal = 1;
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_Init(server, serverCfg));
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertInit(server));
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertAddTrusted(
+ server, rootCertA, WH_NVM_ACCESS_ANY, WH_NVM_FLAGS_NONMODIFIABLE, NULL,
+ 0, ROOT_A_CERT, ROOT_A_CERT_len));
+
+ /* 1. No callback registered: verify succeeds, counter stays 0. */
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_verifyCb_count == 0);
+
+ /* 2. Register at runtime; cb must fire on the next cold verify. */
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Server_CertSetVerifyCb(server, whTest_recordingVerifyCb));
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ wh_Server_CertVerifyCache_Clear(server);
+#endif
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_verifyCb_count > 0);
+ WH_TEST_ASSERT_RETURN(s_verifyCb_lastPreverify == 1);
+
+ /* 3. Unregister at runtime; verify still succeeds, counter stays 0. */
+ s_verifyCb_count = 0;
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertSetVerifyCb(server, NULL));
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ wh_Server_CertVerifyCache_Clear(server);
+#endif
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertVerify(
+ server, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertA,
+ WH_CERT_FLAGS_NONE, WH_NVM_FLAGS_USAGE_ANY, NULL));
+ WH_TEST_ASSERT_RETURN(s_verifyCb_count == 0);
+
+ /* 4. NULL server is rejected. */
+ WH_TEST_ASSERT_RETURN(WH_ERROR_BADARGS ==
+ wh_Server_CertSetVerifyCb(NULL, NULL));
+
+ WH_TEST_RETURN_ON_FAIL(wh_Server_CertEraseTrusted(server, rootCertA));
+ serverCfg->certConfig = savedCertConfig;
+ rc = WH_ERROR_OK;
+ WH_TEST_PRINT("Server cert verify-callback runtime test PASSED\n");
+ return rc;
+}
#endif /* WOLFHSM_CFG_ENABLE_SERVER */
#ifdef WOLFHSM_CFG_ENABLE_CLIENT
@@ -323,7 +758,9 @@ int whTest_CertClient(whClientContext* client)
client, RAW_CERT_CHAIN_B, RAW_CERT_CHAIN_B_len, rootCertB_id, &out_rc));
WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
- /* attempt to verify invalid chains, should fail */
+ /* attempt to verify invalid chains, should fail. Cache entries are scoped
+ * to the trusted root NVM ID, so prior positive verifies under the true
+ * root cannot bypass these cross-root checks. */
WH_TEST_PRINT("Attempting to verify invalid certificate chains...\n");
WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerify(
client, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertB_id, &out_rc));
@@ -497,6 +934,90 @@ int whTest_CertClient(whClientContext* client)
/* Test non-exportable flag enforcement */
WH_TEST_RETURN_ON_FAIL(whTest_CertNonExportable(client));
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ /* Verify-cache scenarios over the full client/server RPC: cross-root
+ * recognition, the clear RPC, and re-verify behavior after clear. */
+ {
+ whNvmId rootCertA_id_c = 1;
+ whNvmId rootCertB_id_c = 2;
+
+ WH_TEST_PRINT("=== Client cert verify-cache test ===\n");
+
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Client_CertAddTrusted(client, rootCertA_id_c, WH_NVM_ACCESS_ANY,
+ WH_NVM_FLAGS_NONMODIFIABLE, NULL, 0,
+ ROOT_A_CERT, ROOT_A_CERT_len, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Client_CertAddTrusted(client, rootCertB_id_c, WH_NVM_ACCESS_ANY,
+ WH_NVM_FLAGS_NONMODIFIABLE, NULL, 0,
+ ROOT_B_CERT, ROOT_B_CERT_len, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+
+ /* Start from a known-empty cache. */
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerifyCacheClear(client, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+
+ /* Warm the cache by verifying chain A under its true root. */
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerify(client, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len,
+ rootCertA_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+
+ /* Cache entries are bound to the trusted root NVM ID: chain A under
+ * root B fails even though every cert in chain A is cached under
+ * root A. The cache hit must not bypass the cross-root check. */
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerify(client, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len,
+ rootCertB_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_CERT_VERIFY);
+
+ /* After clear, the cross-root verify still fails cold. Exercises the
+ * clear RPC path. */
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerifyCacheClear(client, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerify(client, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len,
+ rootCertB_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_CERT_VERIFY);
+
+ /* Exercise the SetEnabled RPC path. The cross-root-must-fail
+ * invariant has to hold regardless of cache state, so re-run it
+ * after disable and after re-enable to confirm the RPC neither
+ * crashes nor breaks correctness. Behavioral observation (cache
+ * hits vs misses) is covered by whTest_CertServerVerifyCacheSetEnabled;
+ * here we just confirm the wire path round-trips. */
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Client_CertVerifyCacheSetEnabled(client, 0, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerify(client, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len,
+ rootCertA_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerify(client, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len,
+ rootCertB_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_CERT_VERIFY);
+
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Client_CertVerifyCacheSetEnabled(client, 1, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+ WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerify(client, RAW_CERT_CHAIN_A,
+ RAW_CERT_CHAIN_A_len,
+ rootCertA_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+
+ /* Cleanup */
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Client_CertEraseTrusted(client, rootCertA_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+ WH_TEST_RETURN_ON_FAIL(
+ wh_Client_CertEraseTrusted(client, rootCertB_id_c, &out_rc));
+ WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
+ WH_TEST_PRINT("Client cert verify-cache test PASSED\n");
+ }
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
WH_TEST_PRINT("Certificate client test completed successfully\n");
return rc;
@@ -635,7 +1156,9 @@ int whTest_CertClientDma_ClientServerTestInternal(whClientContext* client)
client, RAW_CERT_CHAIN_B, RAW_CERT_CHAIN_B_len, rootCertB_id, &out_rc));
WH_TEST_ASSERT_RETURN(out_rc == WH_ERROR_OK);
- /* attempt to verify invalid chains, should fail */
+ /* attempt to verify invalid chains, should fail. Cache entries are scoped
+ * to the trusted root NVM ID, so prior positive verifies under the true
+ * root cannot bypass these cross-root checks. */
WH_TEST_PRINT("Attempting to verify invalid certificate chains...\n");
WH_TEST_RETURN_ON_FAIL(wh_Client_CertVerifyDma(
client, RAW_CERT_CHAIN_A, RAW_CERT_CHAIN_A_len, rootCertB_id, &out_rc));
@@ -998,6 +1521,47 @@ int whTest_CertRamSim(whTestNvmBackendType nvmType)
WH_ERROR_PRINT("Certificate server config tests failed: %d\n", rc);
}
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+ if (rc == WH_ERROR_OK) {
+ rc = whTest_CertServerVerifyCache(s_conf);
+ if (rc != WH_ERROR_OK) {
+ WH_ERROR_PRINT("Cert verify-cache tests failed: %d\n", rc);
+ }
+ }
+ if (rc == WH_ERROR_OK) {
+ rc = whTest_CertServerVerifyCacheSetEnabled(s_conf);
+ if (rc != WH_ERROR_OK) {
+ WH_ERROR_PRINT("Cert verify-cache set-enabled tests failed: %d\n",
+ rc);
+ }
+ }
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ if (rc == WH_ERROR_OK) {
+ rc = whTest_CertServerVerifyCacheGlobalShared(s_conf);
+ if (rc != WH_ERROR_OK) {
+ WH_ERROR_PRINT("Cert verify-cache global cross-client tests "
+ "failed: %d\n",
+ rc);
+ }
+ }
+#endif
+#endif
+
+ if (rc == WH_ERROR_OK) {
+ rc = whTest_CertServerVerifyCallback(s_conf);
+ if (rc != WH_ERROR_OK) {
+ WH_ERROR_PRINT("Cert verify-callback tests failed: %d\n", rc);
+ }
+ }
+
+ if (rc == WH_ERROR_OK) {
+ rc = whTest_CertServerVerifyCallbackRuntime(s_conf);
+ if (rc != WH_ERROR_OK) {
+ WH_ERROR_PRINT("Cert verify-callback runtime tests failed: %d\n",
+ rc);
+ }
+ }
+
/* Cleanup NVM */
wh_Nvm_Cleanup(nvm);
#ifndef WOLFHSM_CFG_NO_CRYPTO
diff --git a/test/wh_test_lock.c b/test/wh_test_lock.c
index 6ba42c187..7dd7225df 100644
--- a/test/wh_test_lock.c
+++ b/test/wh_test_lock.c
@@ -174,9 +174,12 @@ static int testNvmRamSimWithLock(whLockConfig* lockConfig)
.config = flashCfg,
};
- /* NVM context with lock */
- whNvmContext nvm = {0};
- whNvmConfig nvmCfg;
+ /* NVM context with lock. Zero-init nvmCfg so any conditionally-compiled
+ * fields (e.g. certVerifyCacheLockConfig under
+ * WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL) start as NULL = no-op
+ * locking, rather than as indeterminate stack garbage. */
+ whNvmContext nvm = {0};
+ whNvmConfig nvmCfg = {0};
whNvmMetadata meta;
uint8_t testData[] = "Hello, NVM with lock!";
diff --git a/wolfhsm/wh_client.h b/wolfhsm/wh_client.h
index e6b65f46b..89b5df002 100644
--- a/wolfhsm/wh_client.h
+++ b/wolfhsm/wh_client.h
@@ -2780,6 +2780,80 @@ int wh_Client_CertVerifyMultiRootAndCacheLeafPubKey(
const whNvmId* trustedRootNvmIds, uint16_t numRoots,
whNvmFlags cachedKeyFlags, whKeyId* inout_keyId, int32_t* out_rc);
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+/**
+ * @brief Send a request to clear the server's trusted certificate verify cache.
+ *
+ * Subsequent verification of any certificate will re-run the public-key
+ * signature check until that cert is verified again and re-cached.
+ *
+ * @param[in] c Pointer to the client context.
+ * @return int Returns 0 on success, or a negative error code on failure.
+ */
+int wh_Client_CertVerifyCacheClearRequest(whClientContext* c);
+
+/**
+ * @brief Receive the response to a verify-cache clear request.
+ *
+ * @param[in] c Pointer to the client context.
+ * @param[out] out_rc Pointer to store the response code from the server.
+ * @return int Returns 0 on success, or a negative error code on failure.
+ */
+int wh_Client_CertVerifyCacheClearResponse(whClientContext* c, int32_t* out_rc);
+
+/**
+ * @brief Synchronous helper to clear the server's trusted certificate verify
+ * cache.
+ *
+ * @param[in] c Pointer to the client context.
+ * @param[out] out_rc Pointer to store the response code from the server.
+ * @return int Returns 0 on success, or a negative error code on failure.
+ */
+int wh_Client_CertVerifyCacheClear(whClientContext* c, int32_t* out_rc);
+
+/**
+ * @brief Send a request to enable or disable the server's trusted certificate
+ * verify cache at runtime.
+ *
+ * Disabling clears all existing cache entries and suppresses both subsequent
+ * lookups and inserts until the cache is re-enabled. Enabling resumes normal
+ * caching from an empty state. The cache defaults to enabled at server init
+ * so this call is only needed to opt out (or re-enable after opting out).
+ *
+ * In per-client cache mode the toggle is scoped to this client's server. With
+ * WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL the toggle is shared across all
+ * clients connected to the same NVM context.
+ *
+ * @param[in] c Pointer to the client context.
+ * @param[in] enable Non-zero to enable caching, zero to disable.
+ * @return int Returns 0 on success, or a negative error code on failure.
+ */
+int wh_Client_CertVerifyCacheSetEnabledRequest(whClientContext* c,
+ uint8_t enable);
+
+/**
+ * @brief Receive the response to a verify-cache enable/disable request.
+ *
+ * @param[in] c Pointer to the client context.
+ * @param[out] out_rc Pointer to store the response code from the server.
+ * @return int Returns 0 on success, or a negative error code on failure.
+ */
+int wh_Client_CertVerifyCacheSetEnabledResponse(whClientContext* c,
+ int32_t* out_rc);
+
+/**
+ * @brief Synchronous helper to enable or disable the server's trusted
+ * certificate verify cache.
+ *
+ * @param[in] c Pointer to the client context.
+ * @param[in] enable Non-zero to enable caching, zero to disable.
+ * @param[out] out_rc Pointer to store the response code from the server.
+ * @return int Returns 0 on success, or a negative error code on failure.
+ */
+int wh_Client_CertVerifyCacheSetEnabled(whClientContext* c, uint8_t enable,
+ int32_t* out_rc);
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
#ifdef WOLFHSM_CFG_DMA
diff --git a/wolfhsm/wh_message_cert.h b/wolfhsm/wh_message_cert.h
index 8dbb05886..3de8440f7 100644
--- a/wolfhsm/wh_message_cert.h
+++ b/wolfhsm/wh_message_cert.h
@@ -31,22 +31,23 @@
#include "wolfhsm/wh_common.h"
#include "wolfhsm/wh_comm.h"
#include "wolfhsm/wh_message.h"
-#include "wolfhsm/wh_nvm.h"
#include "wolfhsm/wh_utils.h"
enum WH_MESSAGE_CERT_ACTION_ENUM {
- WH_MESSAGE_CERT_ACTION_INIT = 0x1,
- WH_MESSAGE_CERT_ACTION_ADDTRUSTED = 0x2,
- WH_MESSAGE_CERT_ACTION_ERASETRUSTED = 0x3,
- WH_MESSAGE_CERT_ACTION_READTRUSTED = 0x4,
- WH_MESSAGE_CERT_ACTION_VERIFY = 0x5,
- WH_MESSAGE_CERT_ACTION_VERIFY_MULTI_ROOT = 0x6,
- WH_MESSAGE_CERT_ACTION_ADDTRUSTED_DMA = 0x22,
- WH_MESSAGE_CERT_ACTION_READTRUSTED_DMA = 0x24,
- WH_MESSAGE_CERT_ACTION_VERIFY_DMA = 0x25,
- WH_MESSAGE_CERT_ACTION_VERIFY_ACERT = 0x26,
- WH_MESSAGE_CERT_ACTION_VERIFY_ACERT_DMA = 0x27,
- WH_MESSAGE_CERT_ACTION_VERIFY_MULTI_ROOT_DMA = 0x28,
+ WH_MESSAGE_CERT_ACTION_INIT = 0x1,
+ WH_MESSAGE_CERT_ACTION_ADDTRUSTED = 0x2,
+ WH_MESSAGE_CERT_ACTION_ERASETRUSTED = 0x3,
+ WH_MESSAGE_CERT_ACTION_READTRUSTED = 0x4,
+ WH_MESSAGE_CERT_ACTION_VERIFY = 0x5,
+ WH_MESSAGE_CERT_ACTION_VERIFY_MULTI_ROOT = 0x6,
+ WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_CLEAR = 0x7,
+ WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_SET_ENABLED = 0x8,
+ WH_MESSAGE_CERT_ACTION_ADDTRUSTED_DMA = 0x22,
+ WH_MESSAGE_CERT_ACTION_READTRUSTED_DMA = 0x24,
+ WH_MESSAGE_CERT_ACTION_VERIFY_DMA = 0x25,
+ WH_MESSAGE_CERT_ACTION_VERIFY_ACERT = 0x26,
+ WH_MESSAGE_CERT_ACTION_VERIFY_ACERT_DMA = 0x27,
+ WH_MESSAGE_CERT_ACTION_VERIFY_MULTI_ROOT_DMA = 0x28,
};
/* Simple reusable response message */
@@ -59,6 +60,19 @@ int wh_MessageCert_TranslateSimpleResponse(
uint16_t magic, const whMessageCert_SimpleResponse* src,
whMessageCert_SimpleResponse* dest);
+/* VerifyCacheSetEnabled Request */
+typedef struct {
+ uint8_t enable; /* 1 = enable, 0 = disable */
+ uint8_t WH_PAD[7];
+} whMessageCert_SetEnabledRequest;
+
+int wh_MessageCert_TranslateSetEnabledRequest(
+ uint16_t magic, const whMessageCert_SetEnabledRequest* src,
+ whMessageCert_SetEnabledRequest* dest);
+
+/* VerifyCacheSetEnabled Response */
+/* Use SimpleResponse */
+
/* Init Request/Response */
/* Empty request message */
/* Use SimpleResponse */
diff --git a/wolfhsm/wh_message_crypto.h b/wolfhsm/wh_message_crypto.h
index ac3417743..a5a3ef8da 100644
--- a/wolfhsm/wh_message_crypto.h
+++ b/wolfhsm/wh_message_crypto.h
@@ -149,14 +149,14 @@ int wh_MessageCrypto_TranslateRngResponse(
/*
* AES
*/
-/* AES CTR Request */
+/* AES CTR Request - fields ordered by size to keep padding trailing */
typedef struct {
uint32_t enc; /* 1 for encrypt, 0 for decrypt */
uint32_t keyLen; /* Length of key in bytes */
uint32_t sz; /* Size of input data */
- uint16_t keyId; /* Key ID if using stored key */
uint32_t left; /* unused bytes left from last call */
- uint8_t WH_PAD[2]; /* Padding for alignment */
+ uint16_t keyId; /* Key ID if using stored key */
+ uint8_t WH_PAD[2];
/* Data follows:
* uint8_t in[sz]
* uint8_t key[keyLen]
@@ -1068,13 +1068,16 @@ typedef struct {
uint32_t inSz;
} whMessageCrypto_Sha512DmaRequest;
-/* SHA2 DMA Response - carries updated state or final hash inline */
+/* SHA2 DMA Response - carries updated state or final hash inline.
+ * Fields ordered by size (8-byte-aligned struct first) to keep padding
+ * trailing. */
typedef struct {
+ whMessageCrypto_DmaAddrStatus
+ dmaAddrStatus; /* 8-byte aligned, place first */
+ uint8_t hash[64]; /* big enough for all SHA2 variants */
uint32_t hiLen;
uint32_t loLen;
- uint8_t hash[64]; /* big enough for all SHA2 variants */
uint32_t hashType;
- whMessageCrypto_DmaAddrStatus dmaAddrStatus;
uint8_t WH_PAD[4];
} whMessageCrypto_Sha2DmaResponse;
@@ -1138,6 +1141,7 @@ typedef struct {
typedef struct {
whMessageCrypto_DmaAddrStatus dmaAddrStatus;
uint32_t outSz;
+ uint8_t WH_PAD[4]; /* Round struct to 8-byte alignment */
} whMessageCrypto_AesEcbDmaResponse;
/* AES-ECB DMA translation functions */
@@ -1168,6 +1172,7 @@ typedef struct {
typedef struct {
whMessageCrypto_DmaAddrStatus dmaAddrStatus;
uint32_t outSz;
+ uint8_t WH_PAD[4]; /* Round struct to 8-byte alignment */
/* Trailing data: uint8_t iv[AES_IV_SIZE] */
} whMessageCrypto_AesCbcDmaResponse;
diff --git a/wolfhsm/wh_message_nvm.h b/wolfhsm/wh_message_nvm.h
index bf5a706fa..023158dd4 100644
--- a/wolfhsm/wh_message_nvm.h
+++ b/wolfhsm/wh_message_nvm.h
@@ -32,7 +32,6 @@
#include "wolfhsm/wh_common.h"
#include "wolfhsm/wh_comm.h"
#include "wolfhsm/wh_message.h"
-#include "wolfhsm/wh_nvm.h"
enum WH_MESSAGE_NVM_ACTION_ENUM {
WH_MESSAGE_NVM_ACTION_INIT = 0x1,
diff --git a/wolfhsm/wh_nvm.h b/wolfhsm/wh_nvm.h
index e41f7f7c0..91506f148 100644
--- a/wolfhsm/wh_nvm.h
+++ b/wolfhsm/wh_nvm.h
@@ -60,6 +60,9 @@
#include "wolfhsm/wh_common.h" /* For whNvm types */
#include "wolfhsm/wh_keycache.h" /* For whKeyCacheContext */
#include "wolfhsm/wh_lock.h"
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+#include "wolfhsm/wh_server_cert_cache.h" /* For whCertVerifyCacheContext */
+#endif
/**
* @brief NVM backend callback table.
@@ -137,6 +140,12 @@ typedef struct whNvmContext_t {
#if !defined(WOLFHSM_CFG_NO_CRYPTO) && defined(WOLFHSM_CFG_GLOBAL_KEYS)
whKeyCacheContext globalCache; /**< Global key cache (shared keys) */
#endif
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ whCertVerifyCacheContext globalCertVerifyCache; /**< Global cross-client
+ * trusted cert verify
+ * cache. Carries its own
+ * dedicated lock. */
+#endif
#ifdef WOLFHSM_CFG_THREADSAFE
whLock lock; /**< Lock for serializing NVM and global cache operations */
#endif
@@ -154,6 +163,16 @@ typedef struct whNvmConfig_t {
#ifdef WOLFHSM_CFG_THREADSAFE
whLockConfig*
lockConfig; /**< Lock configuration (NULL for no-op locking) */
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL
+ whLockConfig* certVerifyCacheLockConfig; /**< Lock config for the global
+ * cert verify cache. Independent
+ * from lockConfig — pass a
+ * separate platform context (e.g.
+ * a distinct posixLockContext) so
+ * the two locks back distinct
+ * mutexes. NULL for no-op
+ * locking. */
+#endif
#endif
} whNvmConfig;
diff --git a/wolfhsm/wh_server.h b/wolfhsm/wh_server.h
index f57cf2d7f..e332434a2 100644
--- a/wolfhsm/wh_server.h
+++ b/wolfhsm/wh_server.h
@@ -39,6 +39,7 @@ typedef struct whServerContext_t whServerContext;
#include "wolfhsm/wh_common.h"
#include "wolfhsm/wh_comm.h"
#include "wolfhsm/wh_keycache.h"
+#include "wolfhsm/wh_server_cert_cache.h"
#include "wolfhsm/wh_nvm.h"
#ifdef WOLFHSM_CFG_ENABLE_AUTHENTICATION
#include "wolfhsm/wh_auth.h"
@@ -160,6 +161,9 @@ typedef struct whServerConfig_t {
#ifdef WOLFHSM_CFG_LOGGING
whLogConfig* logConfig;
#endif /* WOLFHSM_CFG_LOGGING */
+#if defined(WOLFHSM_CFG_CERTIFICATE_MANAGER) && !defined(WOLFHSM_CFG_NO_CRYPTO)
+ whServerCertConfig* certConfig; /* optional; NULL = no verify callback */
+#endif /* WOLFHSM_CFG_CERTIFICATE_MANAGER && !WOLFHSM_CFG_NO_CRYPTO */
} whServerConfig;
@@ -186,6 +190,9 @@ struct whServerContext_t {
#ifdef WOLFHSM_CFG_LOGGING
whLogContext log;
#endif /* WOLFHSM_CFG_LOGGING */
+#if defined(WOLFHSM_CFG_CERTIFICATE_MANAGER) && !defined(WOLFHSM_CFG_NO_CRYPTO)
+ whServerCertContext cert; /* verify callback + verify cache */
+#endif /* WOLFHSM_CFG_CERTIFICATE_MANAGER && !WOLFHSM_CFG_NO_CRYPTO */
};
diff --git a/wolfhsm/wh_server_cert.h b/wolfhsm/wh_server_cert.h
index 7b9396655..7c56fe535 100644
--- a/wolfhsm/wh_server_cert.h
+++ b/wolfhsm/wh_server_cert.h
@@ -131,6 +131,26 @@ int wh_Server_CertVerifyMultiRoot(whServerContext* server, const uint8_t* cert,
whNvmFlags cachedKeyFlags,
whKeyId* inout_keyId);
+#if defined(WOLFHSM_CFG_CERTIFICATE_MANAGER) && !defined(WOLFHSM_CFG_NO_CRYPTO)
+/**
+ * @brief Register a verify callback at runtime.
+ *
+ * Replaces the callback previously set via whServerCertConfig.verifyCb (or by
+ * a prior call to this function). Pass NULL to unregister.
+ *
+ * The callback is applied to the per-request WOLFSSL_CERT_MANAGER created by
+ * wh_Server_CertVerify, so it participates in chain verification the same way
+ * a callback registered with wolfSSL_CertManagerSetVerify would. Verify-cache
+ * hits (when WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE is enabled) bypass the
+ * callback because they bypass wolfSSL's verify path entirely.
+ *
+ * @param server The server context.
+ * @param cb The callback to register, or NULL to unregister.
+ * @return WH_ERROR_OK on success, WH_ERROR_BADARGS if server is NULL.
+ */
+int wh_Server_CertSetVerifyCb(whServerContext* server, VerifyCallback cb);
+#endif /* WOLFHSM_CFG_CERTIFICATE_MANAGER && !WOLFHSM_CFG_NO_CRYPTO */
+
#if defined(WOLFHSM_CFG_CERTIFICATE_MANAGER_ACERT)
/**
* @brief Verifies an attribute certificate against a trusted root certificate
diff --git a/wolfhsm/wh_server_cert_cache.h b/wolfhsm/wh_server_cert_cache.h
new file mode 100644
index 000000000..0623631ac
--- /dev/null
+++ b/wolfhsm/wh_server_cert_cache.h
@@ -0,0 +1,235 @@
+/*
+ * Copyright (C) 2025 wolfSSL Inc.
+ *
+ * This file is part of wolfHSM.
+ *
+ * wolfHSM is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 3 of the License, or
+ * (at your option) any later version.
+ *
+ * wolfHSM is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with wolfHSM. If not, see <https://www.gnu.org/licenses/>.
+ */
+
+/*
+ * wolfhsm/wh_server_cert_cache.h
+ *
+ * Server-side cert subsystem types embedded in whServerContext:
+ * - whServerCertContext / whServerCertConfig: hold the user-injectable
+ * verify callback and (optionally) the trusted-cert verify cache.
+ * - whCertVerifyCacheContext: trusted-cert verify-result cache. Records
+ * SHA-256 hashes of DER-encoded CA certificates that have already been
+ * successfully verified, scoped to the set of trusted-root NVM IDs
+ * that were loaded when the verify ran. Hits apply across clients
+ * but require the cached root set to be a subset of the caller's
+ * currently-loaded root set.
+ *
+ * Only CA certs are inserted. Caching a leaf would let a future
+ * "leaf alone" verify falsely succeed via cache hit, because the
+ * cache hit bypasses the wolfSSL signature check that would otherwise
+ * have failed (the leaf's issuer is not in the cert manager when the
+ * leaf is supplied without its intermediates). CA caching is sound
+ * because the chain walk loads each verified CA into the cert manager
+ * before the next cert is processed.
+ *
+ * Soundness of the subset rule rests on X.509 verify monotonicity:
+ * adding more trusted roots can never invalidate a previously
+ * successful verify, so a chain that validated under set S still
+ * validates under any superset T ⊇ S. A cache hit therefore implies
+ * the cached verify's anchor (whichever root in S actually closed
+ * the chain) is currently trusted, regardless of which element of S
+ * it was — every element of S is in T by hypothesis.
+ *
+ * Both single-root and multi-root verifies populate the cache.
+ * Single-root entries have one-element sets (maximum reuse, since
+ * any later caller whose loaded set contains that root will hit).
+ * Multi-root entries have larger sets (narrower reuse — only later
+ * callers whose loaded set is a superset will hit) but capture
+ * verifies that pure single-root traffic would not generate.
+ *
+ * Lives in its own header to avoid circular dependencies between wh_server.h
+ * and wh_server_cert.h.
+ */
+
+#ifndef WOLFHSM_WH_SERVER_CERT_CACHE_H_
+#define WOLFHSM_WH_SERVER_CERT_CACHE_H_
+
+/* Pick up compile-time configuration */
+#include "wolfhsm/wh_settings.h"
+
+#if defined(WOLFHSM_CFG_CERTIFICATE_MANAGER) && !defined(WOLFHSM_CFG_NO_CRYPTO)
+
+#include <stdint.h> /* for uint8_t, uint16_t */
+
+#include "wolfhsm/wh_common.h" /* for whNvmId */
+#include "wolfhsm/wh_lock.h" /* for whLock (global cache lock) */
+
+#include "wolfssl/ssl.h" /* for VerifyCallback */
+
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+
+#ifndef WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT
+#define WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT 16
+#endif
+
+#define WH_CERT_VERIFY_CACHE_HASH_LEN 32 /* SHA-256 digest size */
+
+typedef struct whCertVerifyCacheSlot {
+ uint8_t committed; /* 0 = empty, 1 = valid */
+ uint8_t numRoots; /* count of valid entries in rootNvmIds */
+ uint8_t WH_PAD[2];
+ /* Set of trusted root NVM IDs loaded when this cert was verified. A
+ * lookup hits when this set is a subset of the caller's currently
+ * loaded set (verify monotonicity makes the over-approximation safe). */
+ whNvmId rootNvmIds[WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS];
+ uint8_t hash[WH_CERT_VERIFY_CACHE_HASH_LEN];
+} whCertVerifyCacheSlot;
+
+typedef struct whCertVerifyCacheContext {
+ whCertVerifyCacheSlot slots[WOLFHSM_CFG_CERT_VERIFY_CACHE_COUNT];
+ uint16_t writeIdx; /* FIFO ring write position */
+    /* Runtime enable flag. When zero, Lookup always misses and Insert is a
+     * no-op, regardless of slot contents. Toggled by
+     * wh_Server_CertVerifyCache_SetEnabled (also reachable from clients via
+     * WH_MESSAGE_CERT_ACTION_VERIFY_CACHE_SET_ENABLED). Defaults to 1, set
+     * explicitly at server/NVM init, so a freshly zero-initialized context
+     * behaves as disabled until init runs. */
+ uint8_t enabled;
+ uint8_t WH_PAD[5];
+#if defined(WOLFHSM_CFG_THREADSAFE) && \
+ defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL)
+ /* Dedicated lock for the global verify cache. Independent from the NVM
+ * lock so cert-cache operations do not serialize behind NVM I/O. Only
+ * present when the cache lives in the shared NVM context. */
+ whLock lock;
+#endif
+} whCertVerifyCacheContext;
+
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
+/* Per-server cert subsystem config, supplied via whServerConfig.certConfig.
+ * The verify callback signature matches wolfSSL's VerifyCallback so the same
+ * callback registered with wolfSSL_CertManagerSetVerify can be used here. */
+typedef struct {
+ VerifyCallback verifyCb; /* user-supplied; NULL = no callback */
+} whServerCertConfig;
+
+/* Per-server cert subsystem context, embedded by value in whServerContext.
+ * Holds the registered verify callback and (optionally) the per-client verify
+ * cache. When WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL is defined the cache
+ * is relocated into the shared whNvmContext, so the per-client copy is
+ * omitted. */
+typedef struct {
+ VerifyCallback verifyCb;
+#if defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE) && \
+ !defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL)
+ whCertVerifyCacheContext cache;
+#endif
+} whServerCertContext;
+
+/* Forward declaration to avoid pulling in wh_server.h */
+struct whServerContext_t;
+
+#ifdef WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE
+/**
+ * @brief Look up a cert hash in the verify cache against a set of loaded
+ * trusted roots.
+ *
+ * Hits when there exists a committed slot whose stored root set is a
+ * subset of the supplied set and whose hash matches. By verify
+ * monotonicity, a previously successful verify under the slot's set
+ * remains valid under any superset, so the hit is sound.
+ *
+ * @param server The server context.
+ * @param rootNvmIds Array of trusted root NVM IDs currently loaded
+ * (presented set).
+ * @param numRoots Number of entries in rootNvmIds (must be > 0).
+ * @param hash Pointer to a SHA-256 (32-byte) digest of the DER cert.
+ * @return WH_ERROR_OK on hit, WH_ERROR_NOTFOUND on miss,
+ * WH_ERROR_BADARGS on invalid arguments.
+ */
+int wh_Server_CertVerifyCache_Lookup(struct whServerContext_t* server,
+ const whNvmId* rootNvmIds,
+ uint16_t numRoots, const uint8_t* hash);
+
+/**
+ * @brief Insert a cert hash into the verify cache, recording the supplied
+ * root set as the entry's binding.
+ *
+ * No-op if a slot with the same hash and the same root set already
+ * exists. Uses FIFO ring overwrite when full.
+ *
+ * The supplied set must be the set of roots actually loaded into the
+ * cert manager at the time of the verify (post-filtering of any roots
+ * absent from NVM); recording roots that were not actually loaded would
+ * widen the entry's required-trust set without justification.
+ *
+ * @param server The server context.
+ * @param rootNvmIds Array of trusted root NVM IDs loaded for the verify.
+ * @param numRoots Number of entries in rootNvmIds (must be > 0 and
+ * <= WOLFHSM_CFG_CERT_MAX_VERIFY_ROOTS).
+ * @param hash Pointer to a SHA-256 (32-byte) digest of the DER cert.
+ */
+void wh_Server_CertVerifyCache_Insert(struct whServerContext_t* server,
+ const whNvmId* rootNvmIds,
+ uint16_t numRoots, const uint8_t* hash);
+
+/**
+ * @brief Clear all entries from the verify cache.
+ *
+ * In per-client mode (default) clears this server's cache only. In global
+ * mode (WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL) clears the shared
+ * cache for every connected client.
+ *
+ * @param server The server context.
+ */
+void wh_Server_CertVerifyCache_Clear(struct whServerContext_t* server);
+
+/**
+ * @brief Enable or disable the trusted cert verify cache at runtime.
+ *
+ *        Disabling clears all existing entries; until the cache is
+ *        re-enabled, Lookup always misses and Insert is a no-op. Enabling
+ *        an already-enabled cache (or disabling an already-disabled one) is
+ *        a no-op aside from acquiring the lock.
+ *
+ * In per-client mode (default) this affects this server's cache only. In
+ * global mode (WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL) this affects
+ * the shared cache observed by every connected client.
+ *
+ * @param server The server context.
+ * @param enable 1 to enable caching, 0 to disable (and flush).
+ * @return WH_ERROR_OK on success, WH_ERROR_BADARGS if server is invalid.
+ */
+int wh_Server_CertVerifyCache_SetEnabled(struct whServerContext_t* server,
+ uint8_t enable);
+
+/**
+ * @brief Evict every cache entry whose stored root set contains the
+ * supplied trusted-root NVM ID.
+ *
+ * Must be invoked whenever the trusted root at rootNvmId changes (add or
+ * erase). Without this, re-using a freed ID for a different root would
+ * let stale cache hits short-circuit verifies under a trust anchor that
+ * is no longer present at that ID.
+ *
+ * Entries whose stored set contains the evicted root are dropped
+ * entirely rather than stripped of that one root, because the original
+ * verify may have been anchored at the now-departed root.
+ *
+ * @param server The server context.
+ * @param rootNvmId NVM ID of the trusted root whose cache entries to drop.
+ */
+void wh_Server_CertVerifyCache_EvictRoot(struct whServerContext_t* server,
+ whNvmId rootNvmId);
+#endif /* WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE */
+
+#endif /* WOLFHSM_CFG_CERTIFICATE_MANAGER && !WOLFHSM_CFG_NO_CRYPTO */
+
+#endif /* !WOLFHSM_WH_SERVER_CERT_CACHE_H_ */
diff --git a/wolfhsm/wh_settings.h b/wolfhsm/wh_settings.h
index 18d23bd57..47b656a6c 100644
--- a/wolfhsm/wh_settings.h
+++ b/wolfhsm/wh_settings.h
@@ -169,7 +169,13 @@
#include
-#ifndef WOLFHSM_CFG_NO_CRYPTO
+/* WH_PADDING_CHECK is an internal sentinel set only by the wire-format
+ * struct-padding audit (test/wh_test_check_struct_padding.c). It suppresses
+ * external dependencies (wolfSSL headers, etc.) so that audit can compile
+ * without dragging in third-party source whose layout could perturb -Wpadded.
+ * It is NOT a public configuration flag — do not use it from application
+ * code. */
+#if !defined(WOLFHSM_CFG_NO_CRYPTO) && !defined(WH_PADDING_CHECK)
#ifdef WOLFSSL_USER_SETTINGS
#include "user_settings.h"
#else
@@ -181,7 +187,7 @@
#if defined(WOLFHSM_CFG_DEBUG) || defined(WOLFHSM_CFG_DEBUG_VERBOSE)
#define WOLFHSM_CFG_HEXDUMP
#endif
-#endif /* !WOLFHSM_CFG_NO_CRYPTO */
+#endif /* !WOLFHSM_CFG_NO_CRYPTO && !WH_PADDING_CHECK */
/* Platform system time access */
#if !defined WOLFHSM_CFG_NO_SYS_TIME && !defined(WOLFHSM_CFG_PORT_GETTIME)
@@ -388,7 +394,9 @@
#endif
/** Configuration checks */
-#ifndef WOLFHSM_CFG_NO_CRYPTO
+/* Skipped under WH_PADDING_CHECK because the wolfSSL feature macros
+ * referenced below are only defined when wolfssl/options.h is pulled in. */
+#if !defined(WOLFHSM_CFG_NO_CRYPTO) && !defined(WH_PADDING_CHECK)
/* Crypto Cb is mandatory */
#ifndef WOLF_CRYPTO_CB
#error "wolfHSM requires wolfCrypt built with WOLF_CRYPTO_CB"
@@ -439,27 +447,55 @@
#endif /* !WOLFSSL_ACERT || !WOLFSSL_ASN_TEMPLATE */
#endif /* WOLFHSM_CFG_CERTIFICATE_MANAGER_ACERT */
-#endif /* !WOLFHSM_CFG_NO_CRYPTO */
+#endif /* !WOLFHSM_CFG_NO_CRYPTO && !WH_PADDING_CHECK */
#if defined(WOLFHSM_CFG_NO_CRYPTO) && defined(WOLFHSM_CFG_KEYWRAP)
#error "WOLFHSM_CFG_KEYWRAP is incompatible with WOLFHSM_CFG_NO_CRYPTO"
#endif
+/* Trusted cert verify cache requires the certificate manager and crypto.
+ * Enforce here so downstream code can gate on
+ * WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE alone instead of repeating the full
+ * dependency chain at every site. */
+#if defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE) && \
+ !defined(WOLFHSM_CFG_CERTIFICATE_MANAGER)
+#error \
+ "WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE requires WOLFHSM_CFG_CERTIFICATE_MANAGER"
+#endif
+
+#if defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE) && \
+ defined(WOLFHSM_CFG_NO_CRYPTO)
+#error \
+ "WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE is incompatible with WOLFHSM_CFG_NO_CRYPTO"
+#endif
+
+/* The global cross-client verify cache is a layered option on top of the
+ * per-client cache. Enforce the dependency so downstream code can gate on
+ * WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL alone. */
+#if defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL) && \
+ !defined(WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE)
+#error \
+ "WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE_GLOBAL requires WOLFHSM_CFG_CERTIFICATE_VERIFY_CACHE"
+#endif
+
/** Cache flushing and memory fencing synchronization primitives */
/* Create a full sequential memory fence to ensure compiler memory ordering */
#ifndef XMEMFENCE
- #ifndef WOLFHSM_CFG_NO_CRYPTO
- #include "wolfssl/wolfcrypt/wc_port.h"
- #define XMEMFENCE() XFENCE()
- #else
- #if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
- #include <stdatomic.h>
- #define XMEMFENCE() atomic_thread_fence(memory_order_seq_cst)
- #elif defined(__GNUC__) || defined(__clang__)
- #define XMEMFENCE() __atomic_thread_fence(__ATOMIC_SEQ_CST)
- #else
- /* PPC32: __asm__ volatile ("sync" : : : "memory") */
- #define XMEMFENCE() do { } while (0)
+#if !defined(WOLFHSM_CFG_NO_CRYPTO) && !defined(WH_PADDING_CHECK)
+#include "wolfssl/wolfcrypt/wc_port.h"
+#define XMEMFENCE() XFENCE()
+#else
+#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) && \
+ !defined(__STDC_NO_ATOMICS__)
+#include <stdatomic.h>
+#define XMEMFENCE() atomic_thread_fence(memory_order_seq_cst)
+#elif defined(__GNUC__) || defined(__clang__)
+#define XMEMFENCE() __atomic_thread_fence(__ATOMIC_SEQ_CST)
+#else
+/* PPC32: __asm__ volatile ("sync" : : : "memory") */
+#define XMEMFENCE() \
+ do { \
+ } while (0)
#warning "wolfHSM memory transports should have a functional XMEMFENCE"
#endif
#endif