Every few years, a proprietary hardware security scheme gets reverse-engineered and published to the internet. The pattern is always the same: a semiconductor company ships millions of devices with a proprietary, undocumented security architecture. Researchers spend months, sometimes years, picking it apart. Then the scheme turns out to have had structural weaknesses that, had it been open to review, would likely have been caught and corrected before the devices ever shipped.
This is not a hardware-specific problem. The principle that security through obscurity is not real security is as old as Kerckhoffs's principle, formulated in 1883. But hardware teams continue to fall into this trap more persistently than their software counterparts — and the consequences in hardware are more severe, because you cannot push a patch to 50 million deployed microcontrollers the way you can push a software update.
At zeroRISC, our entire platform is built on the opposite premise: that open, community-reviewable security architecture produces better security than proprietary alternatives. Here is the case for why that premise is correct, and what it means practically for hardware teams.
The Obscurity Trap
The appeal of proprietary hardware security is understandable. If adversaries do not know how your security scheme works, they cannot attack it — right? The problem with this reasoning has several layers.
Reverse engineering is asymmetric. Defending a secret costs continuous effort: documentation must be withheld, employees must be bound by NDAs, supply-chain partners must be constrained. Attacking a secret requires only a single success. One disassembly, one leaked specification, one ex-employee who decides to publish — and years of obscurity investment collapse instantly. Worse, the collapse often happens after millions of devices are already in the field.
Proprietary architectures accumulate undiscovered vulnerabilities. Security flaws in open architectures get found by the community and fixed. Security flaws in proprietary architectures get found by adversaries and exploited. The community finding and fixing a vulnerability is a one-time cost. Adversaries discovering and exploiting a vulnerability is an ongoing liability. The historical record of proprietary hardware security schemes is not encouraging: when they are eventually reversed, the vulnerabilities found are typically not subtle or novel — they are basic design errors that a competent review would have caught.
Obscurity is incompatible with compliance. Modern hardware security compliance requirements — whether from NIST, IEC 62443, or the EU CRA — increasingly require that manufacturers demonstrate their security architecture is sound, not merely assert it. A proprietary scheme that cannot be reviewed cannot be independently evaluated. As compliance requirements tighten, proprietary obscurity becomes a legal and market liability as well as a technical one.
What OpenTitan Proved
The OpenTitan project, launched by Google in 2019 as the first open-source silicon root of trust, is the most direct evidence for the superiority of open hardware security architecture. OpenTitan was designed openly from the start: the RTL is public, the threat model is published, the cryptographic interfaces are documented, and the security review history is visible.
The result has been a security architecture that has received more rigorous scrutiny than any proprietary root of trust design ever has. Researchers, customers, and potential adversaries all have access to the same information. The vulnerabilities that have been found have been found by researchers doing legitimate security work, documented, and fixed — publicly, with the entire community benefiting from the remediation.
The security posture of a device running an OpenTitan root of trust can be reasoned about precisely because the architecture is known. Customers can evaluate it. Auditors can certify it. Security researchers can find weaknesses before adversaries do. None of that is possible with a proprietary black box, regardless of how competent the engineers who designed it were.
The zeroRISC platform builds directly on OpenTitan's approach, extending it natively to the RISC-V ecosystem. Our threat model is published. Our security architecture documentation is public. We believe that publishing this information makes us more secure, not less — because it means the people most likely to find problems are the ones working openly to fix them, not the ones who have reverse-engineered us in private.
The Counter-Argument, Addressed
The most common pushback against open security architecture in hardware is: "But we have a lot of proprietary IP in our design. We cannot afford to open-source our security architecture without exposing trade secrets."
This conflates two separate things. The security architecture — the cryptographic protocols, the key management scheme, the attestation flow, the verified boot chain — does not need to be the same thing as the implementation details that constitute trade secrets. You can publish a complete and accurate description of your security architecture without revealing your differential pair routing or your analog IP.
The question is whether your security architecture is sound, independently of your implementation. If it is, publishing it costs you nothing and gains you credibility. If it is not, the fact that you have not published it does not make it more secure — it just means the people who find the flaws will be adversaries rather than researchers.
There is also a talent argument: the best hardware security engineers do not want to work on security schemes they cannot talk about. Open security architecture makes it possible to hire and retain the people most capable of building it well.
Building for Auditability
If you are designing a new RISC-V system and thinking about the security architecture, here are the practical implications of the open security model:
Design to be explained. If you cannot describe your security architecture in a threat model document that a competent external reviewer could evaluate, you probably do not understand it well enough yourself. The discipline of writing the threat model clarifies the architecture. If the threat model reveals gaps, that is valuable — far more valuable to discover during design than after deployment.
Prefer standard cryptographic interfaces over custom ones. Custom cryptographic schemes — even ones that seem clever — have a poor track record. Standard algorithms (AES, SHA-256, ECDSA, and their post-quantum successors) are battle-tested precisely because they have been open to scrutiny for years. The same logic applies to hardware security protocols: established standards like DICE and RATS are preferable to bespoke attestation schemes for the same reason.
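The layered, standard-primitive approach that DICE takes can be sketched with nothing more exotic than HMAC-SHA-256. This is an illustrative simplification, not OpenTitan's actual key manager API; the function and variable names are hypothetical, and real implementations use hardware-backed secrets and richer derivation contexts:

```python
import hashlib
import hmac

def derive_cdi(parent_secret: bytes, firmware_image: bytes) -> bytes:
    """DICE-style layered derivation (simplified sketch).

    Each boot stage derives a Compound Device Identifier (CDI) that
    binds the parent stage's secret to a measurement (hash) of the
    code it is about to hand control to.
    """
    measurement = hashlib.sha256(firmware_image).digest()
    # HMAC serves as the key-derivation step: the same parent secret
    # and the same code always yield the same CDI, while any change
    # to the firmware yields an unrelated value.
    return hmac.new(parent_secret, measurement, hashlib.sha256).digest()

# Hypothetical two-stage boot chain rooted in a unique device secret.
uds = b"\x00" * 32  # unique device secret (placeholder for illustration)
cdi_l0 = derive_cdi(uds, b"first-stage boot image bytes")
cdi_l1 = derive_cdi(cdi_l0, b"owner firmware image bytes")
```

The point of the pattern is that every downstream identity and attestation key is a deterministic function of the code that was actually booted, so a modified image cannot impersonate a genuine one. All of this is publishable; only the device secret is secret.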
Plan for the security architecture to become known. Design your system assuming that adversaries will eventually have complete knowledge of how your security architecture works. If your system is still secure under that assumption, you have a real security architecture. If it is not, the obscurity is a false comfort that will eventually be revealed as such.
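Kerckhoffs's criterion can be restated as an executable check: with every algorithm, protocol, and comparison strategy public, authentication must still hold because the only thing the adversary lacks is the key. The sketch below uses HMAC as a stdlib-only stand-in for the asymmetric signatures (e.g. ECDSA) a real verified-boot chain would use; the names are illustrative:

```python
import hashlib
import hmac

# The ONLY secret in the system. Everything else below is assumed to
# be fully known to the adversary, per Kerckhoffs's principle.
SIGNING_KEY = b"device-unique signing key (placeholder)"

def sign_firmware(image: bytes) -> bytes:
    # The algorithm (HMAC-SHA-256) is completely public; security
    # rests entirely on the key, not on hiding the scheme.
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def boot_accepts(image: bytes, tag: bytes) -> bool:
    # Even the comparison strategy is public: a constant-time compare,
    # so timing reveals nothing about the expected tag.
    return hmac.compare_digest(sign_firmware(image), tag)

image = b"firmware v1.2 image bytes"
tag = sign_firmware(image)

assert boot_accepts(image, tag)                    # genuine image boots
assert not boot_accepts(b"malicious image", tag)   # forgery is rejected
```

If a design passes this test — it remains secure when the adversary can read every line of it — then publishing the architecture costs nothing. If it fails, the obscurity was the only defense, and it is a temporary one.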
The Competitive Angle
Open security architecture is not just a philosophical stance — it is increasingly a competitive advantage. As hardware security compliance requirements tighten globally, customers are increasingly asking for evidence of security claims, not just assertions. An open, publicly documented security architecture that has been independently reviewed is a differentiator in enterprise sales cycles, not a liability.
For RISC-V hardware teams in particular, the open-source heritage of the RISC-V ecosystem creates an expectation of transparency. Customers who chose RISC-V specifically because of its open, auditable architecture are not going to accept a proprietary black-box security scheme from their silicon vendor. Openness all the way down to the security architecture is increasingly what the RISC-V market expects.
Kerckhoffs was right more than 140 years ago, and the hardware industry is still learning the lesson. Security by obscurity is not security. The faster embedded hardware teams internalize that, the better the security of the systems they ship.
Interested in the zeroRISC open security architecture? Read our security model documentation or contact the team.