ENISA's Secure by Design Playbook: What It Means for Product Teams Under the CRA

ENISA's Security by Design and Default Playbook (v0.4, March 2026) gives SMEs 22 practical checklists for CRA compliance. We break down the principles, lifecycle activities, threat modelling process, and CRA mapping.

CRA Evidence Team
March 25, 2026
24 min read
ENISA Secure by Design and Default Playbook, a practical guide for product teams under the CRA

Summary

  • ENISA published a Security by Design and Default Playbook (v0.4, March 2026), the first official EU guidance translating CRA security requirements into practical engineering checklists for SMEs
  • Covers the full product lifecycle: from requirements through decommissioning
  • Defines 22 security principles organised into Secure by Design (14) and Secure by Default (8)
  • Each principle has a one-page playbook with checklist, minimum evidence, and release gate criteria
  • Includes 8 risk management activities and a 5-step threat modelling process designed for lean teams
  • Introduces Machine-Readable Security Manifests (MRSM), a new concept for verifiable, machine-readable compliance evidence
  • Maps all 22 principles to CRA Annex I essential requirements (Annex C)
  • Currently a draft open for public consultation

What Is the ENISA Secure by Design Playbook?

On 19 March 2026, the European Union Agency for Cybersecurity (ENISA) published the Security by Design and Default Playbook, version 0.4, released as a draft for public consultation.

It is the first official EU guidance that translates CRA security requirements into concrete engineering checklists aimed at SMEs. The document is not legal guidance. It provides practical, technically grounded approaches that product teams can apply during design, build, and deployment phases.

The playbook targets SMEs (defined as organisations with fewer than 250 employees and annual turnover below EUR 50 million) that manufacture products with digital elements. This includes embedded software, IoT devices, connected systems, standalone software, and hardware with programmable components.

ENISA developed the playbook based on an analysis of existing security frameworks published by ENISA and other EU-based cybersecurity agencies, as well as guidance from NIST and OWASP. Common requirements and implementation patterns were identified and evaluated against SME capabilities to determine feasibility and adaptation requirements.

Annex C of the playbook maps all 22 principles directly to CRA Annex I essential requirements, providing a traceable link between engineering practices and regulatory obligations.

Important: This is a draft for public consultation (v0.4). ENISA is actively seeking feedback. The final version may differ.

Who Is This Playbook For?

The playbook identifies four primary groups (Section 1.3):

  • Software Developers & Engineers: people writing code who need practical ways to build security in without slowing down delivery
  • Technical Product Managers: people balancing feature work against security requirements and trying to make both fit
  • SME Security Leads: people translating enterprise-grade frameworks into something that works with limited budgets and small teams
  • System Architects: people designing systems who want security baked in from the start, not bolted on later

The common challenge ENISA acknowledges: most SMEs have no dedicated security team, limited budget for security tooling, and security work that constantly competes with feature delivery.

The playbook's response: structured checklists that help teams identify quick-win security controls, implement them in a realistic way, and build a repeatable baseline they can improve over time.

Security Across the Product Lifecycle

ENISA Security by Design, product lifecycle with security activities per phase

Security must be considered end-to-end, regardless of the development model used (V-model, Agile, or other). The playbook defines six phases, each with specific security actions and concrete deliverables.

Key principles from the document:

  • Use small, reusable artefacts (one-page context notes, simple diagrams, checklists)
  • Prefer automated controls in CI/CD over manual reviews, reserving deep review for high-risk changes
  • Introduce fast security gates aligned to existing agile ceremonies (Definition of Ready/Done, PR checks, release checklist)

The six phases (Table 1):

  • Requirements: Define product context (users, environments, data), "non-negotiable" security defaults, and top risks/abuse cases; establish clear criteria for addressing risks. Deliverable: 1-page Security Context & Assumptions + Security Requirements Checklist.
  • Design: Maintain one architecture diagram with trust boundaries, do a lightweight threat model on the top 5-10 abuse cases, decide critical design controls. Deliverable: Architecture and trust-boundary diagram + top threats & mitigations.
  • Development / Implementation: Build secure defaults into code/config, enforce dependency hygiene, require PR review for security-sensitive changes, automate SAST/SCA in CI. Deliverable: CI evidence (pipeline logs) + lightweight secure coding / PR checklist.
  • Testing & Acceptance: Run automated security checks (SAST/dependency, basic DAST where relevant), validate default config, run a targeted pen test when risk triggers hit. Deliverable: Release security checklist (pass/fail + exceptions + known issues/residual risk).
  • Deployment & Integration: Ensure secure provisioning/enrolment, least-privilege runtime config, "health/security" indicators, and controlled change management. Deliverable: Deployment hardening checklist + rollback plan + monitoring/alert list.
  • Maintenance & Disposal: Define patch intake + SLAs, vulnerability monitoring, incident handling, and an end-of-support/EOL plan; ensure secure disposal (data erasure, credential revocation). Deliverable: Vuln & patch process + EOL/disposal note + maintained risk register.

Each phase produces a concrete deliverable. This is not abstract guidance.

Tip: The playbook recommends keeping lifecycle artefacts lightweight: a one-page security context note, a simple architecture diagram, and a release checklist are enough to start.

What Risk Management Activities Does ENISA Recommend?

Risk management activities provide the foundation for all Secure by Design and Default decisions. The playbook does not propose a heavyweight formal framework. Instead, it defines a minimum set of activities that can drive security decisions without creating heavy process (Section 2.2).

The document defines 8 activities (Table 2):

  1. Product context & scope: Define intended use, deployment environments, user/admin roles, data types/sensitivity, and key external dependencies. Deliverable: 1-2 page "Product Security Context" note (scope, assumptions, dependencies).
  2. Asset & harm identification: List top data, hardware, or function assets (credentials, customer data, PII, device control) and the key harm outcomes (privacy breach, takeover, outage, fraud, safety impact). Deliverable: Asset list + "Top harms" list (one page).
  3. Lightweight threat modelling: See the threat modelling section below.
  4. Risk register: Record 10-30 risks with likelihood/impact (simple scale), owner, treatment, status. Link high risks to backlog items/controls. Deliverable: Living risk register (spreadsheet or ticket board).
  5. Risk acceptance criteria: Define a set of non-negotiable risk conditions. For example: misuse of software updates, unauthorised administrative access, or exploitation of default credentials is NOT acceptable. Establish criteria for accepting residual risks, and ensure accepted risks do not undermine essential cybersecurity requirements. Deliverable: 1-page Risk Acceptance & Exceptions policy.
  6. Security requirements baseline: Translate top risks into testable "must" requirements (authn/authz, secure defaults, secrets, encryption, logging, updates). Deliverable: SME security requirements checklist (testable controls).
  7. Release risk review gate: Formal pre-release gate: confirm checklist met, defaults verified, known vulns triaged, high risks treated/accepted with rationale. Decide go/no-go. Deliverable: Release security review record + documented exceptions.
  8. Change-triggered reassessment: Re-run context/threat/risk steps when major changes occur (architecture, auth model, critical dependency/supplier, deployment environment, after incidents). Deliverable: Updated context note, threat shortlist, and risk register entries (with date).

Note: Risk management is iterative, not one-time. The playbook specifies it must be revisited at defined lifecycle gates and triggered by significant events (major release, supplier change, new deployment context, incident learnings).
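The risk register (activity 4) really can live in a spreadsheet, but the same structure is trivial to keep in code alongside CI checks. A minimal sketch; the field names, the 1-3 scale, and the high-risk threshold are our illustrative choices, not prescribed by the playbook:

```python
# Minimal risk register sketch: score risks on a simple 1-3 scale and
# surface the high risks that should be linked to backlog items.
# Field names and the threshold are illustrative, not mandated by ENISA.

RISKS = [
    {"id": "R1", "risk": "Default credentials left enabled", "likelihood": 3, "impact": 3,
     "owner": "platform", "treatment": "eliminate", "status": "open"},
    {"id": "R2", "risk": "Outdated TLS library in firmware", "likelihood": 2, "impact": 3,
     "owner": "firmware", "treatment": "mitigate", "status": "in-progress"},
    {"id": "R3", "risk": "Verbose error pages leak stack traces", "likelihood": 2, "impact": 1,
     "owner": "backend", "treatment": "accept", "status": "accepted"},
]

def high_risks(risks, threshold=6):
    """Return risks whose likelihood x impact score meets the threshold."""
    return [r for r in risks if r["likelihood"] * r["impact"] >= threshold]

if __name__ == "__main__":
    for r in high_risks(RISKS):
        print(f'{r["id"]}: {r["risk"]} (owner: {r["owner"]})')
```

Keeping the register as data makes the "link high risks to backlog items" step a one-liner in whatever tracker API you use.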

How Should SMEs Approach Threat Modelling?

ENISA threat modelling 5-step process for SMEs

The playbook builds on the STRIDE methodology (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) as the foundation for threat identification and classification (Section 2.3).

It explicitly warns against common anti-patterns: treating threat modelling as a one-off compliance exercise, over-engineering models that do not influence design or secure-by-default decisions, and failing to review the model following substantial product modifications or changes in the threat landscape.

For SMEs, particularly those developing products intended for non-critical or lower-risk environments, the objective is a "minimum viable" model: fast to produce, easy to refresh, and tightly coupled to delivery (architecture decisions, default configuration, and release gates).

The 5 steps (Table 3)

  1. Define scope, assumptions, and security objectives: Time-box the scoping step. Capture what's in/out of scope, the deployment context, and the assumptions you are relying on (e.g., "customer network is untrusted", "cloud APIs are internet-exposed"). State the security objectives that matter for this product (confidentiality, integrity, availability, plus privacy/safety if applicable). Identify the "crown jewels": what must not fail. Deliverable: 1-page "Threat Model Scope & Objectives".

  2. Model the system at a useful level of abstraction: Produce a single, simple architecture or data-flow diagram. Show main components, external entities, data stores, and the key entry points and data flows. A DFD-style diagram is the fastest high-value approach. The document says "don't overthink it". Deliverable: Diagram covering main components, external entities, data stores, entry points.

  3. Mark trust boundaries and privileged paths; identify key assets: Annotate the diagram with trust boundaries (internet-backend, device-cloud, user-admin, tenant-tenant) and the highest-privilege operations (firmware/OTA update, remote admin, key provisioning, identity issuance). This step turns "architecture" into "security-relevant architecture". Deliverable: Diagram with trust boundaries, privileged paths, top assets.

  4. Identify and prioritise top threats (5-10 abuse cases): Generate a short list of realistic abuse cases mapped to entry points and boundaries (e.g., "credential stuffing → account takeover", "malicious update", "API authorisation bypass", "MITM on onboarding"). Rank them with a lightweight scheme (High/Med/Low) based on impact and plausibility. OWASP describes threat identification and ranking as a core step in most threat modelling approaches. Deliverable: Top threats table with 5-10 abuse cases, impact + likelihood, "top risks" list.

  5. Define mitigations, secure defaults, and verification; set refresh triggers: For each top threat, specify the mitigation strategy, the required control(s), and the secure-by-default setting that should ship (e.g., "admin interface disabled by default", "no default passwords", "signed updates enforced", "least privilege roles", "authentication attempts rate-limited"). Map each control to a verification method (CI checks, tests, configuration validation, release gate). Define the triggers that require re-running the model (new internet-exposed interface, auth model change, new sensitive data, new critical dependency, major architecture change). Deliverable: Controls, Defaults, and Verification checklist.

Tip: Even 2 hours of collaborative threat modelling with your team produces actionable results. The document emphasises "minimum viable". You can always refine later.
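Step 4's lightweight ranking takes only a few lines to automate. A sketch, assuming impact and plausibility are each scored 1-3; the mapping from combined score to High/Med/Low is our own illustration, not ENISA's:

```python
# Rank abuse cases High/Med/Low from impact and plausibility (1-3 each).
# The score-to-rank mapping is an illustrative choice, not from the playbook.

def rank(impact: int, plausibility: int) -> str:
    score = impact * plausibility
    if score >= 6:
        return "High"
    if score >= 3:
        return "Med"
    return "Low"

ABUSE_CASES = [
    ("Credential stuffing -> account takeover", 3, 3),
    ("Malicious update via unsigned OTA", 3, 2),
    ("API authorisation bypass", 3, 2),
    ("MITM on onboarding", 2, 2),
    ("Port scan of local interface", 1, 2),
]

if __name__ == "__main__":
    # Print the top-threats table, highest-scoring first.
    for name, impact, plaus in sorted(ABUSE_CASES, key=lambda c: c[1] * c[2], reverse=True):
        print(f"{rank(impact, plaus):4} {name}")
```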

What Are the 22 Security Principles?

ENISA Secure by Design and Secure by Default, all 22 principles in two pillars

The document defines 22 security principles (Section 3), each of which gets its own one-page playbook in Section 4. The playbooks are the document's core deliverable. Each one distils a single principle into an execution-focused guide with a checklist, minimum evidence, and release gate criteria. The principles are organised into two pillars: Secure by Design (how the system is engineered, 14 principles) and Secure by Default (how the product arrives and behaves when first turned on, 8 principles). Each pillar is further divided into two groups.

Architectural Foundations (6 principles)

These address how the system is structurally designed and built:

  1. Trust boundaries and threat modelling: Make trust explicit. Define where data, identities, and execution contexts cross from trusted to untrusted zones. Threat model to identify what could go wrong at those boundaries.
  2. Least privilege: Grant only the minimum access required. Apply consistently across user accounts, service accounts, APIs, and admin roles. Elevate only when needed, for the shortest duration.
  3. Strong identity and authentication architecture: Clear approach for how identities are created, verified, and managed for users, devices, services, and administrators. Resistant to credential stuffing, replay, and session hijacking.
  4. Attack surface minimisation: Reduce complexity. Remove default accounts, uninstall unused packages, close nonessential ports, limit exposed management interfaces. Ongoing vulnerability scanning.
  5. Defence in depth: Layered controls so failure of one does not mean full compromise. Preventive, detective, and corrective controls. Diverse and independent, not all relying on the same technology or trust assumption.
  6. Open Design (avoiding obscurity): Do not depend on the secrecy of the design for protection. Use well-studied algorithms and protocols, clear documentation, and designs that withstand scrutiny. Security should rest on protected keys, strong authentication, and robust implementation, not hidden mechanisms.

Operational Integrity (8 principles)

These address how the system is managed and maintained:

  1. Lifecycle management: Security extends beyond development. Maintain, update, and retire in a controlled manner. Apply secure by design from development through decommissioning.
  2. User centric design: Security must be usable by everyday users. Poor usability leads to insecure workarounds. Simple setup, automatic encryption, guided flows.
  3. Secure coding practices: Follow established secure coding standards. SAST tools, SCA for dependencies, DAST before deployment. Early identification, not after release.
  4. Logging, monitoring, and alerting: Generate security-relevant logs, retain for a defined period, and protect from tampering. Detect suspicious behaviours (failed auth, privilege escalation, unexpected config changes).
  5. Configuration and change management: Configurations must be controlled, consistent, and auditable. Baseline hardening, infrastructure-as-code, change process with review/testing/approval/rollback.
  6. Incident response and recovery: Prepared for vulnerabilities, compromised code, malicious updates, product misuse. Defined roles, escalation paths, documented playbooks, customer communication.
  7. Vulnerability and patch management: Practical, repeatable, risk-prioritised. Simple intake channel (security email + disclosure process), internal triage process, clear SLAs.
  8. Supply chain controls: Protect product integrity at the highest-impact points: code repositories, build systems, signing keys, distribution channels. At minimum: limited CI/CD access, MFA, peer review for security-critical changes, SBOMs.

Default Hardening (4 principles)

These ensure products start in a secure and restrictive state:

  1. Minimisation of default services: Non-essential features and services disabled by default. User must explicitly opt in.
  2. Restrictive initial access: Eliminate universal "admin/admin" credentials. Enforce unique passwords and mandatory password change on first boot.
  3. Secure communication by default: All external communications encrypted and authenticated from first connection. Strictly enforce TLS 1.3 or SSH. No HTTP/Telnet fallbacks.
  4. Unique device identity and secrets by default: Ship with unique per-device credentials and cryptographic identity. No shared keys or certificates across products. Protected against extraction.
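"Secure communication by default" is one of the easiest principles to pin down in code. A minimal sketch using Python's standard library ssl module to refuse anything below TLS 1.3 and always verify certificates; whether TLS 1.3 alone fits your product is a design decision, and the playbook's wording is the authority:

```python
import ssl

def make_client_context() -> ssl.SSLContext:
    """Build a TLS client context with no plaintext fallback: certificates
    are verified, hostnames checked, and versions below TLS 1.3 rejected."""
    ctx = ssl.create_default_context()            # secure defaults: cert + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and older
    return ctx

if __name__ == "__main__":
    ctx = make_client_context()
    print(ctx.minimum_version == ssl.TLSVersion.TLSv1_3)  # True
    print(ctx.verify_mode == ssl.CERT_REQUIRED)           # True
```

Note that `ssl.create_default_context()` already ships secure defaults; the sketch only tightens the minimum protocol version.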

Guided Protection (4 principles)

These support users in maintaining security after deployment:

  1. Mandatory security onboarding: Critical security features must be part of the initial setup wizard (MFA, encryption key, admin account setup). Do not hide in settings. Block operation until complete.
  2. Automated maintenance and updates: Automatic security updates enabled by default. Separate security updates from feature updates. Cryptographically verified. Safe failure modes (do not brick the device). Notify users.
  3. Transparent security posture: Clearly show current security state. Warn when the user reduces security. Explain impact in plain language. Offer one-click path to restore secure baseline.
  4. Secure recovery and ownership lifecycle: Guided recovery (credential reset, account recovery, factory reset, ownership transfer). Simple for users but resistant to account takeover and social engineering. Factory reset must fully remove previous user access.
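The "mandatory security onboarding" principle amounts to a gate: the product refuses normal operation until every required setup step is done. A minimal sketch; the step names are illustrative examples of the setup-wizard idea, not a list from the playbook:

```python
# Block normal operation until mandatory security onboarding is complete.
# The required steps below are illustrative examples only.

REQUIRED_STEPS = {"change_admin_password", "enable_mfa", "set_encryption_key"}

class OnboardingGate:
    def __init__(self) -> None:
        self.completed: set[str] = set()

    def complete(self, step: str) -> None:
        if step not in REQUIRED_STEPS:
            raise ValueError(f"unknown onboarding step: {step}")
        self.completed.add(step)

    @property
    def operational(self) -> bool:
        """The device may only serve traffic once all steps are done."""
        return REQUIRED_STEPS <= self.completed

if __name__ == "__main__":
    gate = OnboardingGate()
    gate.complete("change_admin_password")
    print(gate.operational)  # False: MFA and encryption key still pending
    gate.complete("enable_mfa")
    gate.complete("set_encryption_key")
    print(gate.operational)  # True
```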

CRA Link: Annex C of the playbook maps each of these 22 principles to specific CRA Annex I essential requirements, showing exactly which engineering practices support which legal obligations.

How Do the Playbooks Work?

ENISA's 22 playbooks at a glance, grouped by category

The playbook format

Section 4 is the most extensive part of the document. It provides a practical, lightweight way for SMEs to implement Security by Design and Default without creating a heavy governance burden. Each playbook distils a single security principle into a one-page, execution-focused guide that teams can apply repeatedly across releases and product lines (Section 4, p28).

The intent is to translate security principles from abstract aspirations into concrete engineering and operational actions, with clear expectations, verifiable outcomes, and a consistent "definition of done" for security. Each playbook follows the same five-section format:

  • Principle name: The security concept being implemented
  • Objective: What the principle is trying to achieve and what failure modes it reduces
  • Checklist: The highest-impact actions to implement (designed to be achievable by lean teams)
  • Minimum evidence: The smallest set of artefacts, logs, or configurations that demonstrate the checklist was implemented
  • Release gate: A copy/paste set of pass/fail criteria that can be used in a release review or CI/CD to prevent regressions

Important: This structure is deliberately aligned to how SMEs operate: short cycles, shared responsibilities, limited specialist capacity, and a need for high signal-to-noise guidance.

Using the playbooks

  • Treat each playbook's release gate as a standard agenda item in release readiness reviews
  • Implement the minimum evidence as repository artefacts and CI outputs wherever possible
  • Allow exceptions only with documented rationale, owner, and review date
  • Refresh playbooks periodically based on incident learnings, vulnerability trends, and product changes
  • The contents should be treated as a baseline, not a final state. Review and update as products evolve.
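A release gate of this kind can be wired into CI as a small script: each criterion is pass/fail, and a failure is only tolerated with a documented exception carrying an owner and review date, as the exception rule above requires. A sketch with illustrative criteria and field names:

```python
# Evaluate release-gate criteria with documented exceptions.
# The gate passes if every criterion either passed or carries an
# exception with an owner and a review date. Criteria names and the
# exception schema are illustrative, not defined by the playbook.

CRITERIA = {
    "sast_scan_clean": True,
    "dependency_scan_clean": False,   # failed in this release
    "secure_defaults_verified": True,
}

EXCEPTIONS = {
    "dependency_scan_clean": {
        "rationale": "Medium-severity finding in a test-only dependency",
        "owner": "alice",
        "review_date": "2026-06-01",
    },
}

def gate_passes(criteria: dict, exceptions: dict) -> bool:
    for name, passed in criteria.items():
        if passed:
            continue
        exc = exceptions.get(name)
        if not exc or not exc.get("owner") or not exc.get("review_date"):
            return False  # undocumented failure blocks the release
    return True

if __name__ == "__main__":
    print("GO" if gate_passes(CRITERIA, EXCEPTIONS) else "NO-GO")  # GO
```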

All 22 playbooks at a glance

Architectural Foundations:

  • 4.1 Trust boundaries & threat modelling: Draw system diagram, mark boundaries, identify 5-10 abuse cases, define mitigations
  • 4.2 Least privilege: Minimum permissions per role/service, no shared admin, JIT access, privilege hygiene
  • 4.3 Strong identity & auth architecture: Authoritative identity sources, unique identities, MFA for privileged actions
  • 4.4 Attack surface minimisation: List exposed interfaces, default-deny, remove dev tooling from prod, minimal deps
  • 4.5 Defence in depth: Layer controls per critical asset, assume failure, multi-layer detection, diverse controls
  • 4.6 Open Design: Document security decisions, proven standards, SBOM, VDP, security-sensitive PR review

Operational Integrity:

  • 4.7 Lifecycle management: Support commitments, update mechanism + rollback, vuln tracking, decommissioning plan
  • 4.8 User centric design: Safe defaults, guided onboarding, clear messaging, role-based access matching workflows
  • 4.9 Secure coding practices: Coding baseline, banned unsafe patterns, SAST/SCA in CI, negative tests for critical endpoints
  • 4.10 Logging, monitoring & alerting: Must-log events, structured audit logs, centralised collection, high-signal alerts
  • 4.11 Configuration & change management: Version + review config (IaC), harden defaults, separate environments, rollback plans
  • 4.12 Incident response & recovery: IR roles + escalation, runbooks with scenario checklists, containment tools, tabletop exercises
  • 4.13 Vulnerability & patch management: Intake channels, consistent triage with SLAs, dependency patching, secure release process
  • 4.14 Supply chain controls: Dependency inventory + SBOM, CI scanning, pipeline hardening, supplier baseline expectations

Default Hardening:

  • 4.15 Minimisation of default services: Core-only enabled by default, explicit opt-in required, security implications disclosed
  • 4.16 Restrictive initial access: No default credentials, unique credentials per device, secure setup enforced before access
  • 4.17 Secure communication by default: Encrypt from first connection, no plaintext fallback, modern protocols only
  • 4.18 Unique device identity & secrets: Per-device crypto identity, no shared secrets, secrets protected at rest, revocation supported

Guided Protection:

  • 4.19 Mandatory security onboarding: Security steps enforced in setup wizard, cannot be skipped, blocks operation until complete
  • 4.20 Automated maintenance & updates: Auto security updates by default, separate from features, cryptographically verified, safe failure
  • 4.21 Transparent security posture: Show current state, warn on security reduction, explain impact, one-click restore to baseline
  • 4.22 Secure recovery & ownership lifecycle: Guided recovery/transfer, strong verification, factory reset fully clears prior access

Deep dive: Playbook 4.13, Vulnerability & patch management

To show the practical depth of the format, here is Playbook 4.13 in full detail as it appears in the document:

Principle: Vulnerability and patch management should be practical, repeatable, and prioritised by risk. Manufacturers need a simple way for customers and researchers to report issues, and an internal process to triage findings quickly and decide what needs urgent action.

Objective: Identify, prioritise, and remediate vulnerabilities fast enough to reduce real-world exposure, across your code, dependencies, infrastructure, and (if applicable) devices/firmware. The focus is a simple intake-to-fix workflow, clear SLAs, and an update mechanism that makes patching reliable.

Checklist:

  • Establish intake channels (don't miss issues): dependency scanning, SAST/DAST findings, supplier advisories, customer reports, a security email, etc. Assign a single owner for triage and tracking.
  • Triage and prioritise consistently: use a lightweight severity approach (e.g., Critical/High/Med/Low) plus "internet-exposed?" and "known exploited?" flags. Decide quickly: fix now, mitigate, accept (time-bound), or defer (with rationale).
  • Patch dependencies and third parties proactively: maintain a regular cadence (e.g., weekly/monthly) for dependency updates. Pin versions; remove unused dependencies; track transitive dependencies.
  • Fix, test, and release with a secure process: ensure fixes are reviewed and tested; verify no regressions in auth/authz, input validation, and critical workflows. For devices/IoT: ensure a secure OTA/update path and safe rollback where feasible.
  • Communicate and close the loop: track affected versions, customers/environments, and mitigation guidance. Publish security release notes or advisories as appropriate. Verify rollout completion and update the risk register.
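The triage rule above (a severity level plus "internet-exposed?" and "known exploited?" flags) is easy to encode so every finding gets a consistent decision. A sketch; the escalation mapping is our illustrative reading of the checklist, not a rule from the playbook:

```python
# Map severity + exposure flags to a triage decision.
# The escalation rules below are an illustrative policy, not ENISA's.

def triage(severity: str, internet_exposed: bool, known_exploited: bool) -> str:
    """Return one of: 'fix now', 'mitigate', 'accept (time-bound)', 'defer'."""
    if known_exploited:
        return "fix now"                      # exploited in the wild: always urgent
    if severity == "Critical":
        return "fix now"
    if severity == "High":
        return "fix now" if internet_exposed else "mitigate"
    if severity == "Med":
        return "mitigate" if internet_exposed else "accept (time-bound)"
    return "defer"

if __name__ == "__main__":
    print(triage("High", internet_exposed=True, known_exploited=False))   # fix now
    print(triage("Low", internet_exposed=False, known_exploited=True))    # fix now
    print(triage("Med", internet_exposed=False, known_exploited=False))   # accept (time-bound)
```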

Minimum evidence:

  • Vuln tracking board/register: issue, severity, affected components/versions, owner, status, target date
  • Defined SLAs (example): Critical triage within 48 hours; remediation/release target within X days (set to your reality)
  • Scanning evidence: CI outputs for dependency scanning + SAST (and DAST if applicable)
  • Proactive dependency patches: SBOM or dependency inventory per release (at minimum for shipped artifacts)
  • Patch release record: link from vuln ticket to PR(s) to tests to release version to rollout confirmation
  • Exception log: accepted risks have owner + expiry/review date and compensating controls (if any)
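The 48-hour Critical triage SLA in the minimum evidence can be checked mechanically against the tracking board. A sketch using only the standard library; the ticket schema and timestamps are illustrative:

```python
from datetime import datetime, timedelta

# Flag Critical findings whose triage is overdue against a 48-hour SLA.
# The ticket fields and SLA value mirror the example evidence above, but
# a real board would supply its own schema and targets.

TRIAGE_SLA = {"Critical": timedelta(hours=48)}

def overdue(tickets, now):
    """Return IDs of untriaged tickets that have blown their triage SLA."""
    late = []
    for t in tickets:
        sla = TRIAGE_SLA.get(t["severity"])
        if sla and t.get("triaged_at") is None and now - t["reported_at"] > sla:
            late.append(t["id"])
    return late

if __name__ == "__main__":
    now = datetime(2026, 3, 25, 12, 0)
    tickets = [
        {"id": "V-101", "severity": "Critical", "reported_at": now - timedelta(hours=72), "triaged_at": None},
        {"id": "V-102", "severity": "Critical", "reported_at": now - timedelta(hours=12), "triaged_at": None},
        {"id": "V-103", "severity": "High", "reported_at": now - timedelta(hours=90), "triaged_at": None},
    ]
    print(overdue(tickets, now))  # ['V-101']
```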

Release gate:

  • Dependency and SAST scans executed for the release; Critical/High findings addressed or documented exception (owner + expiry)
  • SBOM (or dependency inventory) generated/updated and stored for the release
  • Known vulnerabilities affecting shipped components are triaged with severity, owner, and target date
  • Patch process validated: fix reviewed, tests passed, and release notes updated as needed
  • For internet-exposed components: mitigations or patches for Critical/High are in place before release
  • OTA/update (if applicable) validated for secure delivery; rollback/recovery documented
  • Accepted residual risk is time-bound and tracked to closure or review date

What Are Machine-Readable Security Manifests?

ENISA MRSM four-layer model, from identity to evidence

Section 5 of the playbook introduces a new concept: the shift from static, document-heavy compliance to machine-readable, verifiable security attestations.

A machine-readable security attestation is a digital claim in JSON or YAML asserting that a specific security control, process, or property has been met. Unlike static PDF reports, these attestations can be generated and consumed by automated systems, enabling frequent updates and automated validation. When attestation generation is embedded in the development pipeline, security becomes intrinsic rather than a post-development checkbox.

Four key properties

  • Demonstrability: Proactive capacity to provide machine-readable evidence that security requirements have been implemented, a shift from "claiming" to "showing"
  • Verifiability: An independent party can programmatically authenticate and validate the integrity of security claims; attestations are transparent, tamper-evident, and mapped to a recognised root of trust
  • Reusability: Use existing attestations to build on, integrate into the development cycle, and include in agile quality gates
  • Reliability: Rely on attestations for third-party due diligence, simplifying supply chain trust

The four-layer model

The playbook illustrates a hierarchical data model where every high-level security claim is backed by granular technical evidence:

  1. Metadata & Attestation (Identity domain): Product identity, versioning, manufacturer's cryptographic signature
  2. Control Layer (Governance domain): Structured security objectives aligned with requirements, principles, and regulations
  3. Implementation Layer / Threat-Mitigation Map (Operational domain): Maps specific threats to implemented mitigations, design principles, default settings, and human-readable descriptions
  4. Assessment & Verification Layer (Evidence domain): Machine-readable pass/fail results from automated gates, with links to SBOMs

The document also describes access control layers: a Public-Facing JSON providing high-level claims, and a Restricted Technical Overlay containing encrypted detailed tool configurations and test telemetry.
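To make the four-layer idea concrete, here is a sketch of what such a manifest might look like and how a consumer could recompute an evidence hash. All field names are our own illustration of the model; MRSM defines no schema, and a real manifest would also carry a verifiable cryptographic signature rather than a placeholder:

```python
import hashlib
import json

# Illustrative MRSM-style manifest following the four-layer model.
# Every field name is a sketch of ours, not a schema from the playbook.

evidence = b"sast_report: 0 critical findings\n"

manifest = {
    "metadata": {"product": "example-controller", "version": "1.4.2",
                 "signature": "<manufacturer-signature placeholder>"},
    "controls": [{"id": "CTRL-AUTH-1", "objective": "No default credentials",
                  "maps_to": "ANNEX-1.PT1.2.d"}],
    "implementation": [{"threat_id": "T1", "mitigation_control": "CTRL-AUTH-1",
                        "secure_by_default_setting": "unique per-device password"}],
    "assessment": [{"gate": "release-gate-auth", "result": "pass",
                    "evidence_hash": hashlib.sha256(evidence).hexdigest()}],
}

def verify_evidence(manifest: dict, evidence: bytes) -> bool:
    """Recompute the evidence hash and compare, as a third party could."""
    expected = manifest["assessment"][0]["evidence_hash"]
    return hashlib.sha256(evidence).hexdigest() == expected

if __name__ == "__main__":
    print(json.dumps(manifest["assessment"], indent=2))
    print(verify_evidence(manifest, evidence))  # True
```

The point of the hash link is exactly the "demonstrability" property: the high-level pass/fail claim stays small and public, while the detailed evidence can be disclosed selectively and still be checked.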

Existing ecosystem

The playbook situates MRSM within the existing standards landscape:

  • OSCAL (NIST): "Compliance as Code", standardised security control catalogues, system security plans, assessment results
  • CycloneDX CDXA (OWASP/ECMA-424): Originally an SBOM format, expanded to a full transparency standard. CDXA attestations for security claims, VEX for exploitability, CBOM for cryptographic assets
  • OpenSSF: Security Insights (machine-readable security facts in YAML), Scorecard (automated best-practice assessment)
  • OWASP ASVS: Application Security Verification Standard, underlying requirements. MLSVS extending to AI/ML
  • TC54 (Ecma International): Transparency Exchange API, standardising how SBOMs and attestations are discovered and shared

The SafeGate-X1 worked example

The document includes a complete scenario (pages 56-61) showing how a fictional hardware controller manufacturer would implement MRSM: a threat model with 5 threats (RCE via web API, privilege escalation, port scanning, credential stuffing, binary tampering), controls mapped to principles, and a JSON manifest showing how each threat_id maps to a principle, mitigation_control, secure_by_default_setting, and verification_gate with evidence_hash. It also includes a third-party verification table showing what auditors can spot-check.

Note: MRSM is an illustrative concept, not a proposed standard. But it signals where CRA compliance is heading: from static PDF evidence folders to verifiable, machine-readable artifacts that your CI/CD pipeline and customers can automatically verify.

How Do the Principles Map to CRA Requirements?

Annex C of the playbook provides a complete mapping of all 22 principles to specific CRA Annex I essential requirements. This is the engineering bridge between the playbook's guidance and your legal obligations.

CRA Annex I is divided into two parts:

  • Part 1 (ANNEX-1.PT1): Product security requirements, 14 requirements in total, including cybersecurity risk assessment, secure defaults, updates, access control, data protection, integrity, data minimisation, availability, attack surface limitation, incident mitigation, logging, and secure data erasure
  • Part 2 (ANNEX-1.PT2): Vulnerability handling requirements, covering 8 requirements: SBOM, timely remediation, testing, disclosure, coordinated VDP, vulnerability intake, secure distribution of fixes, and timely dissemination

Each principle maps to multiple CRA requirements. Here are selected examples from Annex C:

  • Trust boundaries & threat modelling (ANNEX-1.PT1.1, PT1.2.d, PT1.2.e, PT1.2.f, PT1.2.j): Supports risk assessment, access control, confidentiality, integrity, and attack surface limitation by making trust assumptions and boundaries explicit
  • Vulnerability & patch management (ANNEX-1.PT2.1, PT2.2, PT2.4, PT2.5, PT2.6, PT2.7, PT2.8): Supports SBOM, timely remediation, disclosure, coordinated VDP, vulnerability intake, secure patch distribution, and timely dissemination
  • Supply chain controls (ANNEX-1.PT1.2.a, PT2.1, PT2.7): Supports release without known exploitable vulnerabilities, SBOM generation, and secure distribution through protected build channels
  • Automated maintenance & updates (ANNEX-1.PT1.2.b, PT1.2.c, PT2.2, PT2.7): Supports secure-by-default configuration, automatic security updates, timely remediation, and secure distribution of updates
  • Least privilege (ANNEX-1.PT1.2.d, PT1.2.f, PT1.2.g): Supports protection from unauthorised access, integrity protection, and data minimisation
  • Logging, monitoring & alerting (ANNEX-1.PT1.2.d, PT1.2.l): Supports detection of unauthorised access attempts and recording/monitoring of security-relevant internal activity
CRA Link: The playbook is not a legal compliance checklist, but it provides the engineering bridge to CRA Annex I. If you can demonstrate adherence to these 22 principles, you have substantial evidence supporting your CRA conformity assessment.
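Teams assembling conformity evidence can keep this mapping in code, so each release-gate result can be tagged with the CRA requirements it supports. A sketch using a few of the Annex C rows quoted above; the dict covers only those rows, not the full annex:

```python
# Partial Annex C mapping (only rows quoted in this article).
# Looking up a principle yields the CRA Annex I requirement IDs it supports.

ANNEX_C = {
    "least_privilege": ["ANNEX-1.PT1.2.d", "ANNEX-1.PT1.2.f", "ANNEX-1.PT1.2.g"],
    "logging_monitoring_alerting": ["ANNEX-1.PT1.2.d", "ANNEX-1.PT1.2.l"],
    "supply_chain_controls": ["ANNEX-1.PT1.2.a", "ANNEX-1.PT2.1", "ANNEX-1.PT2.7"],
}

def requirements_for(principles):
    """Union of CRA requirement IDs covered by the given principles."""
    reqs = set()
    for p in principles:
        reqs.update(ANNEX_C.get(p, []))
    return sorted(reqs)

if __name__ == "__main__":
    print(requirements_for(["least_privilege", "logging_monitoring_alerting"]))
```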

What Should You Do Next?

  1. Start with Section 2: Identify your current lifecycle phase and produce the deliverables from Table 1. Even a one-page Security Context note and a basic architecture diagram put you ahead of most teams.

  2. Run through the 8 risk management activities (Table 2): Most SMEs can produce the outputs in 1-2 focused days. Start with product context, asset/harm identification, and your risk acceptance criteria.

  3. Do a lightweight threat model (Table 3): Even 2 hours with your team, using a whiteboard and STRIDE, produces actionable results. Focus on the 5-10 abuse cases that matter most.

  4. Pick the 3-5 playbooks most relevant to your next release and use the checklists. Common starting points: 4.9 (Secure coding), 4.13 (Vulnerability management), 4.2 (Least privilege), 4.16 (Restrictive initial access).

  5. Use the release gate criteria as your pre-release security review agenda. This is the fastest path from "no security review process" to "documented, repeatable security gates".

  6. Download the full ENISA playbook: This is a v0.4 draft. Submit your feedback during the consultation period.

Tip: Start small. Pick one upcoming release, apply 3 playbooks, and use the release gates. You will have concrete evidence of Secure by Design practices you can build on.


This article is for informational purposes only and does not constitute legal advice. For specific compliance guidance, consult with qualified legal counsel.
