Compliance & Regulation

What the EU Cyber Resilience Act Means for RISC-V Hardware Teams


The EU Cyber Resilience Act (CRA) officially entered its transition period in late 2024, with full enforcement beginning in late 2027. For most software companies, the conversation has centered on patch management, software bills of materials, and vulnerability disclosure timelines. For hardware teams — especially those building on RISC-V — the conversation needs to start somewhere more fundamental: the silicon.

This piece is not a legal guide. For that, consult your compliance counsel. What this is: a technical perspective on what the CRA's requirements mean in practice for hardware architects designing RISC-V-based products destined for European markets, and why getting this right at the chip design stage is far less expensive than retrofitting compliance later.

What the CRA Actually Requires

The CRA applies to products with digital elements — which covers virtually every connected embedded device. The core obligations relevant to hardware teams fall into three categories:

Security by design: The CRA mandates that products be designed with security as a first principle, not a feature bolted on post-development. In practice, security requirements must be incorporated into the hardware design phase, not just addressed in firmware updates after the fact. For RISC-V system designers, that means the security architecture conversation needs to happen at the SoC or module design stage, before tape-out.

Integrity and authenticity: The CRA requires that connected products implement mechanisms to ensure the integrity of installed software and firmware. In hardware terms, this means verified boot — a cryptographically enforced boot chain that guarantees only authenticated firmware runs on the device. Without a hardware root of trust anchoring the boot chain, this requirement cannot be met in a meaningful way. Software-only signatures are not sufficient because they can be bypassed by an attacker with physical access or supply-chain compromise.
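The chain-of-trust idea behind verified boot can be sketched in a few lines. This is an illustrative model only, with hypothetical image contents and plain SHA-256 digests standing in for the asymmetric signatures a real design would use; in silicon, the anchor digest or public-key hash lives in ROM or fuses, and the checks run before each stage executes.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Hypothetical firmware stages; later stages are verified by earlier ones.
bootloader = b"stage-1 bootloader image"
firmware = b"application firmware image"

# At manufacturing: immutable hardware (ROM/fuses) stores the digest of the
# bootloader; the bootloader image in turn embeds the digest of the firmware.
rom_anchor = sha256(bootloader)        # fused into hardware, unchangeable
embedded_fw_digest = sha256(firmware)  # carried inside the trusted bootloader

def boot(bootloader_img: bytes, firmware_img: bytes) -> bool:
    # Step 1: ROM verifies the bootloader against the hardware trust anchor.
    if sha256(bootloader_img) != rom_anchor:
        return False  # refuse to run an untrusted stage 1
    # Step 2: the now-trusted bootloader verifies the application firmware.
    if sha256(firmware_img) != embedded_fw_digest:
        return False  # refuse to run tampered firmware
    return True

assert boot(bootloader, firmware)
assert not boot(bootloader, b"tampered firmware image")
```

The key property is that trust flows in one direction: each stage is checked by something already trusted, and the first link is hardware that software cannot rewrite.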

Incident handling and updateability: Products must support secure firmware updates delivered with cryptographic integrity protection. This is tightly coupled to the verified boot requirement: a device that can verify firmware integrity at boot must use the same cryptographic infrastructure to verify incoming firmware updates. Ad-hoc update mechanisms without hardware-anchored verification do not satisfy this requirement.
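The coupling between boot verification and update verification can be illustrated with a small sketch. The key name and image contents here are hypothetical, and an HMAC stands in for brevity; a real device would hold only a public key and verify an asymmetric signature produced by an offline signing key.

```python
import hashlib
import hmac

# Hypothetical device-provisioned key material (stand-in for a public key).
signing_key = b"factory-provisioned-update-key"

def sign_update(image: bytes) -> bytes:
    # Performed by the manufacturer's signing infrastructure, not the device.
    return hmac.new(signing_key, image, hashlib.sha256).digest()

def verify_and_stage(image: bytes, tag: bytes) -> bool:
    # The device applies the same cryptographic discipline as its boot chain:
    # reject the update unless its integrity tag verifies against the anchor.
    expected = hmac.new(signing_key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

update = b"firmware v2 image"
tag = sign_update(update)

assert verify_and_stage(update, tag)            # authentic update accepted
assert not verify_and_stage(update + b"x", tag)  # modified image rejected
```

Because the same key infrastructure backs both boot-time and update-time checks, an update that passes verification will also pass the next verified boot, which is exactly the coherence the CRA's update requirement implies.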

Why Software-Only Approaches Fall Short

It is tempting for embedded teams under schedule pressure to reach for a software-only compliance answer: add a bootloader that checks a signature, use TLS for update delivery, and call it done. This approach will fail scrutiny for several reasons.

First, software signatures are only as trustworthy as the environment that verifies them. If an attacker can compromise the bootloader itself — either during manufacturing, through a supply-chain attack, or via a firmware vulnerability — the signature check becomes meaningless. The CRA's requirement for security by design implies that the trust anchor must be hardware-rooted, not software-rooted.

Second, the CRA requires manufacturers to demonstrate compliance, not just assert it. Hardware root of trust mechanisms are designed to be auditable in ways that software security configurations are not. An evaluator can verify that a hardware security module's key storage and attestation logic is correct from a specification review; verifying that a software bootloader configuration is secure requires a complete dynamic analysis of every execution path.

Third, supply-chain attacks — explicitly called out in the CRA's threat model — are a hardware problem first. A device that arrives at a customer facility with compromised firmware cannot detect that compromise using only software mechanisms, because the attacker controls the software layer. Only a hardware root of trust that is established during manufacturing can provide meaningful supply-chain integrity guarantees.

The RISC-V Opportunity

For teams building on RISC-V, the CRA creates both a challenge and an opportunity. The challenge is obvious: compliance requires hardware security infrastructure that many existing RISC-V designs lack. The opportunity is that RISC-V's open and extensible architecture is exceptionally well-suited to implementing that infrastructure correctly.

RISC-V's Physical Memory Protection (PMP) mechanism and its hardware-enforced isolation primitives provide a foundation for implementing secure enclaves and trusted execution environments that can anchor a hardware root of trust. The RISC-V ISA's extensibility allows silicon designers to add security-specific instructions and hardware acceleration for cryptographic operations without the compatibility constraints that limit security innovation on proprietary architectures.
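As a concrete flavor of what PMP configuration involves, the sketch below computes the register values for locking a power-of-two memory region using the NAPOT address encoding from the RISC-V privileged specification. The memory map is hypothetical, and the actual CSR writes would happen in M-mode firmware; this only derives the values.

```python
def pmp_napot_addr(base: int, size: int) -> int:
    """Encode a naturally aligned power-of-two region for a pmpaddr CSR."""
    assert size >= 8 and size & (size - 1) == 0, "size must be a power of two >= 8"
    assert base % size == 0, "base must be aligned to the region size"
    # pmpaddr holds (base >> 2) with the low bits set to mark the region size.
    return (base >> 2) | ((size >> 3) - 1)

# pmpcfg field bits per the RISC-V privileged spec:
PMP_R = 0x01      # read permission
PMP_W = 0x02      # write permission
PMP_X = 0x04      # execute permission
PMP_NAPOT = 0x18  # A field = 0b11 (NAPOT matching)
PMP_LOCK = 0x80   # L bit: entry locked until reset, enforced even in M-mode

# Example: lock a 4 KiB read/write data region at 0x8000_0000 (hypothetical map).
addr = pmp_napot_addr(0x8000_0000, 0x1000)
cfg = PMP_R | PMP_W | PMP_NAPOT | PMP_LOCK

assert addr == 0x2000_01FF
assert cfg == 0x9B
```

The lock bit is what makes PMP useful for a root of trust: early boot code can wall off its secrets and the protection cannot be undone by any later software, including machine-mode code, until the next reset.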

Most importantly, the open-source nature of the RISC-V ecosystem means that the security architecture of a RISC-V-based product can be published, reviewed, and independently audited — which is exactly what the CRA's transparency requirements favor. A hardware security architecture that is fully documented and has been reviewed by the open-source community provides a far stronger compliance basis than a proprietary black-box approach, regardless of how sophisticated the proprietary implementation might be.

Practical Steps for RISC-V Teams

If you are designing a RISC-V-based product that will be sold in Europe after 2027, here is where to focus your attention now:

Audit your boot chain today. Map every step from power-on reset to application startup. Identify where cryptographic integrity checks occur and what hardware backs them up. If your boot chain does not begin from a hardware-anchored root of trust, that is your first problem to solve — and solving it at design time is far less costly than retrofitting it into a product already in production.

Assess your key management infrastructure. CRA compliance requires that cryptographic keys used for firmware signing and update verification be managed with appropriate security controls. If your current key management is spreadsheets and manually rotated signing keys, you are not in a position to demonstrate the systematic controls the CRA expects.

Plan your attestation story. The CRA does not explicitly require remote attestation, but device integrity verification — the ability to demonstrate to a relying party that a device is in a known-good state — is increasingly a practical prerequisite for meeting the CRA's security by design mandate, particularly for devices deployed in enterprise and critical infrastructure contexts. Building attestation in from the start is significantly easier than adding it to a fielded product.
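A minimal sketch of the attestation flow helps show why it is easier to design in than to bolt on. The device key, component names, and nonce below are hypothetical, the measurement uses a TPM-style extend operation, and an HMAC stands in for the asymmetric signature a fielded device would produce with a hardware-held identity key.

```python
import hashlib
import hmac

# Hypothetical device identity key, provisioned into hardware at manufacture.
device_key = b"device-unique-attestation-key"

def extend(measurement: bytes, component: bytes) -> bytes:
    # TPM-style extend: fold each boot component into a running digest, so the
    # final value commits to the entire boot sequence and its order.
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def quote(measurement: bytes, nonce: bytes) -> bytes:
    # Sign the current state plus a verifier-chosen nonce to prevent replay.
    return hmac.new(device_key, measurement + nonce, hashlib.sha256).digest()

# Each boot stage measures the next before handing off control.
state = b"\x00" * 32
for component in (b"boot ROM", b"bootloader", b"firmware v2"):
    state = extend(state, component)

# A relying party challenges the device and checks the evidence against the
# measurement it expects for known-good software.
nonce = b"verifier-challenge"
evidence = quote(state, nonce)

expected = b"\x00" * 32
for component in (b"boot ROM", b"bootloader", b"firmware v2"):
    expected = extend(expected, component)

assert hmac.compare_digest(evidence, quote(expected, nonce))
```

The pieces this depends on — a device-unique key, measured boot stages, a signing primitive — are exactly the hardware root-of-trust infrastructure the earlier requirements already demand, which is why attestation is cheap at design time and expensive afterward.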

Document your threat model. The CRA requires that manufacturers conduct risk assessments. Document your hardware threat model now, even if it is incomplete. The documentation discipline forces clear thinking about your attack surface, and the resulting artifact is useful not just for CRA compliance but for every engineering decision that follows.

The 2027 Horizon

Late 2027 may seem distant from where most hardware teams sit today. It is not. A RISC-V SoC designed today for products shipping in 2026 and 2027 will be in the CRA enforcement window for the majority of its production lifecycle. Compliance decisions made during the current design cycle will determine whether those products can access European markets at all — and whether a competitor who made better security decisions in 2025 has an advantage that is nearly impossible to close after tape-out.

The CRA is one of several converging regulatory pressures — alongside the US executive orders on software supply chain security, NIST's IoT guidance, and sector-specific requirements from healthcare and critical infrastructure regulators — that are collectively making hardware security infrastructure a baseline requirement rather than an optional enhancement. Teams that treat this as a compliance checkbox will find themselves constantly behind. Teams that treat it as a design philosophy will increasingly find it a competitive advantage.

Hardware security done correctly does not add cost — it removes the much larger costs of field remediation, compliance failures, and security incidents. The CRA is, if nothing else, a forcing function that aligns the economics of hardware security with the engineering reality that has always been true: you cannot retrofit trust into silicon.

Questions about CRA compliance for your RISC-V hardware design? Get in touch with the zeroRISC team.