ENISA's Technology and Innovation Radar: what the methodology means for CRA manufacturers
ENISA's April 2026 Technology and Innovation Radar explains how cybersecurity technologies move from "recognise" to "implement." Here is what the scoring means.
In this article
- Summary
- What ENISA means by a "signal"
- How ENISA builds and cleans its signal list
- How ENISA scores every signal: strength and momentum
- The five radar zones for strong signals
- Tracking early-stage technologies: the weak signal approach
- Where the data comes from
- The fast-track mechanism and signal lifecycle
- What the radar means for your CRA compliance programme
- Frequently Asked Questions
ENISA published its Technology and Innovation Radar (TIR) methodology in April 2026 (ISBN 978-92-9204-790-0, DOI 10.2824/2390334). Before the first edition of the radar goes live, the complete scoring framework is public. It explains exactly how ENISA will classify every cybersecurity technology on a five-zone scale, from "recognise" (not yet suitable for deployment) to "implement" (deploy now and scale).
For CRA manufacturers, this matters directly. Article 13 of the Cyber Resilience Act requires you to keep products secure throughout their lifecycle, reflecting the state of the art. The TIR will become the closest thing to an official EU map of what "state of the art" security looks like in practice. Knowing how the scoring works before the first results publish gives you a head start.
Here is what the methodology contains and what you need to understand.
Summary
- The TIR is part of ENISA's 2025-2027 single programming document, designed to systematically measure the impact of emerging technologies on cybersecurity by tracking key trends, assessing technological maturity, and mapping trajectories from research to market adoption.
- A "signal" is a tangible manifestation of novelty: an observable indicator of change in cybersecurity, classified as a tool, platform, technique, or trend.
- Every signal is scored on two composite dimensions: strength (how mature and established it is, anchored by Technology Readiness Level at 50% weight) and momentum (year-on-year growth in attention across academic, patent, and news sources).
- Weak signals go through a foresight evaluation on development speed and market potential, using the Technological Innovation Systems (TIS) framework by Markard and Truffer (2008).
- Strong signals go through an adoption likelihood survey grounded in the Unified Theory of Acceptance and Use of Technology (UTAUT, Venkatesh et al., 2003), covering users, technology suppliers, and institutional actors.
- Five radar zones classify strong signals: Recognise, Observe, Trial, Plan, and Implement.
- Four quadrants classify weak signals: Exploration, Market Consolidation, Tech Consolidation, and Transition Ready.
- Three primary data sources feed the scoring: Scopus (over 25.5 million open-access documents), PATSTAT (European Patent Office patent database), and the Europe Media Monitor (300,000 news articles per day in up to 70 languages), all accessed through the JRC TIM analytics platform.
- A public open-access dashboard is planned, where any organisation can explore signal classifications and the underlying indicators.
Source: ENISA Technology and Innovation Radar Methodology, Version 1.1, April 2026.
What ENISA means by a "signal"
A signal, in foresight practice, is a tangible manifestation of novelty: an observable indicator that something is emerging or changing. ENISA's TIR does not track broad categories like "artificial intelligence" or "zero trust." It tracks specific, named technological applications at a defined level of abstraction.
The methodology uses a four-tier taxonomy to classify every candidate signal before it enters the scoring pipeline. Signals that cannot be placed in one of these four categories are flagged as too vague and sent back for revision.
Tools are concrete software products or utilities that perform a specific task within a development or operations lifecycle. A network protocol analyser is a tool.
Platforms are foundational ecosystems or runtime environments that provide infrastructure, services, and integration capabilities for building, deploying, and scaling applications. Security orchestration, automation and response (SOAR) systems are platforms.
Techniques are systematic methods, patterns, or approaches used to design, test, and evolve software systems. Post-quantum cryptography is a technique.
Trends are emerging shifts in cybersecurity paradigms, frameworks, or ecosystem practices that influence how technologies are applied. Zero trust architecture is a trend.
A signal classified as too broad must be re-expressed before it can proceed. "AI in cybersecurity" is too vague. "Machine learning for behavioural threat detection" is specific enough to assess. ENISA applies a four-question decision tree to assign each signal: Can an organisation concretely deploy it? Does it perform a specific task or provide a foundational ecosystem? Is it a structured way of doing something rather than a product? Does it represent a broad security philosophy? The answers lead to one of the four categories, or a "too vague: review" outcome.
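One way to read the decision tree is as a short classifier. The question ordering below is our interpretation of the text, not ENISA's published tree, and the boolean inputs stand in for expert judgement calls.

```python
def classify_signal(deployable: bool, performs_specific_task: bool,
                    structured_method: bool, broad_philosophy: bool) -> str:
    # Walks the four questions in one plausible order; a signal that
    # fits no category comes back as "too vague: review".
    if broad_philosophy:
        return "trend"
    if structured_method:
        return "technique"
    if deployable:
        return "tool" if performs_specific_task else "platform"
    return "too vague: review"
```

Under this reading, "machine learning for behavioural threat detection" would answer yes to the structured-method question and land in the technique category, while "AI in cybersecurity" answers no to everything and gets flagged for review.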
Each signal entry in ENISA's centralised repository must carry a minimum set of metadata:
- Signal name and short description
- Technology Readiness Level (TRL), either from the source or derived through best-available estimation
- Current adoption level (innovators, early adopters, early majority, late majority, laggards)
- Type of source (market report, policy paper, white paper, expert interview)
- Title of document or name of interviewed expert
- Publication or interview date
- Author, publishing organisation, or expert affiliation
- Domain and sector, if specified
This structured metadata is what makes the scoring reproducible and auditable across editions.
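As a sketch, one repository entry could be modelled like this. The field names and types are illustrative; ENISA's internal schema is not published.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SignalEntry:
    name: str                     # signal name
    description: str              # short description
    trl: int                      # 1-9, from the source or best-available estimate
    adoption_level: str           # innovators .. laggards
    source_type: str              # market report, policy paper, white paper, interview
    source_title: str             # document title or interviewed expert's name
    date: str                     # publication or interview date
    author: str                   # author, organisation, or expert affiliation
    domain: Optional[str] = None  # domain and sector, if specified
```

Making the domain field optional mirrors the methodology's "if specified" wording: every other field is mandatory before a signal can enter the scoring pipeline.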
Why the taxonomy matters for Annex VII
This classification is compatible with how CRA technical documentation describes product security architecture. If you are documenting your security controls under Annex VII, organising them by signal type (which tools you use, which platforms they run on, which techniques guide your development, and which architectural trends you follow) creates a vocabulary that maps directly onto the framework ENISA will use in the radar. That alignment will make it easier to reference the radar when it publishes. See our guide on Annex VII technical documentation for the specific requirements.
How ENISA builds and cleans its signal list
Before any scoring takes place, ENISA runs a two-phase collection and cleaning process.
Primary signal collection draws from authoritative sources. ENISA applies four criteria when selecting sources: reputation and impartiality (the entity should be broadly recognised for subject-matter expertise and independence), methodological rigour (reports should demonstrate the use of analytical or empirical methods), recency (only reports published within the last one to two years), and transparency (the source should be clear about the data or evidence base used).
ENISA identifies ten types of authoritative sources relevant to the TIR, including:
| Type | Examples |
|---|---|
| Market analysts and consultancies | Gartner, IDC, Forrester, McKinsey, Boston Consulting Group |
| International organisations and standards bodies | OECD, ITU, ISO, World Economic Forum, ENISA |
| Universities and research centres | University of Oxford, Harvard, Fraunhofer Society, MIT, Joint Research Centre |
| Industry associations and technical alliances | ETSI, IEEE, Cloud Security Alliance, ECSO |
| Regulatory and legal institutions | Commission reports, national cybersecurity agencies |
| Investment banks and venture funds | Goldman Sachs, JP Morgan, PitchBook, CB Insights |
| Think tanks and policy institutes | RAND, Chatham House, Carnegie Endowment for International Peace |
| Tech news outlets | Wired, The Register, TechCrunch, Dark Reading |
| Technology companies and integrators | IBM, Cisco, Microsoft, Palo Alto Networks |
Supplementary signal collection uses expert input through online submission forms via EU Survey and facilitated group workshops with breakout groups of 10 to 20 participants each. ENISA plans to establish an Ad Hoc Working Group (AHWG) of up to 30 experts selected through an open call, covering technology vendors, integrators, service operators, end users, critical infrastructure representatives, conformity assessment bodies, auditors, and testing laboratories.
Signal cleaning follows collection. Each entry in the centralised repository is reviewed to eliminate duplicates and resolve ambiguities. Near-identical applications expressed in different wording, such as "zero trust security" and "zero trust architecture," are consolidated into a single harmonised formulation. Vendor-specific signals are excluded unless they can be generalised into a widely adopted technology class. Vague or overly broad entries are flagged and either revised or removed.
Signal clustering organises the cleaned signals using two levels. The first and compulsory level assigns each signal to one of the four abstraction types (tool, platform, technique, trend). The second, optional level assigns signals by domain or sector using the JRC Cybersecurity Taxonomy published by the European Commission in 2022. ENISA also notes that the ECSO taxonomy and the NIST Cybersecurity Framework may alternatively be used, but recommends selecting one standard at the outset and applying it across all editions to allow longitudinal comparisons.
How ENISA scores every signal: strength and momentum
Each signal that passes initial expert validation is scored on two composite dimensions.
Strength measures how mature and established a technology is. It reflects both development readiness (TRL) and recognition across authoritative sources, academic research, the patent landscape, and media coverage. The composite is a weighted sum: strength = 0.50 × TRL + 0.125 × authoritative mentions + 0.125 × academic publications + 0.125 × patents + 0.125 × news trends, with every component scored on a 1-to-5 scale.
TRL carries half the total weight because it most directly reflects whether a technology is deployable, not just discussed. TRL 1 to 2 (experimental research) earns a score of 1, TRL 5 to 6 (technology validated in tests) earns 3, and TRL 9 (mature technology with proven performance in operational conditions) earns 5. The four supporting indicators are normalised to percentile bands across the full signal dataset: values in the bottom 20th percentile score 1, values above the 80th percentile score 5.
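The weighting can be sketched as follows. The 50% TRL weight, the 12.5% shares, and the percentile rule are stated in the methodology; the mapping for intermediate TRL levels and the exact quintile arithmetic are our interpolation.

```python
from bisect import bisect_right
from statistics import quantiles

def trl_band(trl: int) -> int:
    # TRL 1-2 -> 1, TRL 3-4 -> 2, TRL 5-6 -> 3, TRL 7-8 -> 4, TRL 9 -> 5
    return min(5, (trl + 1) // 2)

def percentile_band(value: float, dataset: list[float]) -> int:
    # Quintile banding across the full signal dataset:
    # bottom 20th percentile -> 1, above the 80th percentile -> 5.
    cuts = quantiles(dataset, n=5)  # 20th, 40th, 60th, 80th percentile cuts
    return bisect_right(cuts, value) + 1

def strength(trl: int, mentions: float, papers: float,
             patents: float, news: float,
             dataset: dict[str, list[float]]) -> float:
    # 50% TRL, 12.5% for each of the four supporting indicators.
    supporting = (percentile_band(mentions, dataset["mentions"])
                  + percentile_band(papers, dataset["papers"])
                  + percentile_band(patents, dataset["patents"])
                  + percentile_band(news, dataset["news"]))
    return 0.50 * trl_band(trl) + 0.125 * supporting
```

A TRL 9 technology sitting in the top quintile on all four indicators reaches the maximum strength of 5.0; the same indicators at TRL 2 cap the score at 3.0, which is why heavily discussed research projects cannot dominate the radar.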
Momentum measures how quickly a technology is gaining attention, capturing velocity rather than volume. The composite mirrors the structure of strength: momentum = 0.50 × current adoption level + 0.50 × year-on-year growth in attention, with the growth component drawn from academic, patent, and news trends.
Current adoption level anchors momentum at 50%, paralleling TRL's role in strength. Year-on-year growth thresholds determine the remaining scores: 20% or more YoY growth earns 5. Growth between 11% and 20% earns 4. Growth between 6% and 10% earns 3. Growth between 0% and 5% earns 2. Negative growth earns 1.
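The growth bands above translate directly into a scoring function. The band thresholds and the 50% adoption weight are stated in the methodology; splitting the remaining 50% equally across the three year-on-year indicators, and treating the adoption level as a pre-assigned 1-to-5 score, are our assumptions.

```python
def yoy_band(growth: float) -> int:
    # Map year-on-year growth (0.12 = 12%) onto the stated 1-5 bands.
    if growth < 0:
        return 1
    if growth <= 0.05:
        return 2
    if growth <= 0.10:
        return 3
    if growth < 0.20:
        return 4
    return 5  # 20% or more

def momentum(adoption_score: int, academic_yoy: float,
             patent_yoy: float, news_yoy: float) -> float:
    # 50% current adoption level, 50% averaged YoY growth bands
    # (assumed equal split across the three attention sources).
    yoy = (yoy_band(academic_yoy) + yoy_band(patent_yoy)
           + yoy_band(news_yoy)) / 3
    return 0.50 * adoption_score + 0.50 * yoy
```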
The two scores place each signal in one of four macro-categories on a strength/momentum matrix:
Low strength, high momentum. Gaining visibility and traction but technically immature. Promising but still in early stages of technological readiness.
Low strength, low momentum. Weak on both axes. Speculative or overhyped. Too immature for immediate strategic attention but could evolve over time.
High strength, low momentum. Technically solid but not yet achieving broad attention or uptake. May need policy or market interventions to unlock its value.
High strength, high momentum. Both mature and gaining widespread traction. High-priority areas for strategic monitoring, investment, or adoption decisions.
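A minimal classifier for the matrix might look like this. The 3.0 midpoint threshold is illustrative only: ENISA sets its thresholds relative to the distribution of each edition's full signal dataset.

```python
def macro_category(strength_score: float, momentum_score: float,
                   threshold: float = 3.0) -> str:
    # Illustrative midpoint split; the real thresholds are derived
    # from the signal population of each radar edition.
    high_s = strength_score >= threshold
    high_m = momentum_score >= threshold
    if high_s and high_m:
        return "high strength, high momentum"
    if high_s:
        return "high strength, low momentum"
    if high_m:
        return "low strength, high momentum"
    return "low strength, low momentum"
```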
The methodology explicitly states that its scoring thresholds should not be altered arbitrarily between editions, unless substantive feedback is received or context-specific requirements emerge. This preserves year-on-year comparability and allows ENISA to build a longitudinal dataset showing how technologies move across the matrix over time.
The five radar zones for strong signals
Strong signals go through adoption likelihood scoring before placement on the radar. ENISA uses a survey instrument grounded in the UTAUT framework, adapted to cover three distinct respondent groups: technology users (assessing impact on security workflows and KPIs), technology suppliers and R&D entities (assessing deployment ease, client demand, and market intent), and institutional actors such as regulators and standards development organisations (assessing governance feasibility, policy alignment, and societal value).
Each group rates five constructs on a five-point Likert scale.
Performance expectancy: does using this technology help achieve security objectives? Does it make it easier to meet KPIs?
Effort expectancy: how easy is it to implement without major technical challenges or extensive training?
Social influence: are competitors and key partners already adopting or recommending it?
Facilitating conditions: are the budget, infrastructure, regulatory frameworks, and staff capabilities in place?
Behavioural intention: does the organisation plan to continue or expand investment in this technology?
The 15 items per group contribute equally to a group adoption likelihood score. The three group scores are averaged into an overall adoption likelihood score. This composite score positions each strong signal in one of five concentric zones on the radar chart:
Recognise. Not currently recommended for adoption. May be due to immaturity, unresolved risks, lack of regulatory clarity, or misalignment with current cybersecurity needs. Monitor cautiously. Avoid active investment or deployment until further validation is available.
Observe. Worth investigating but not yet ready for scaled deployment. Early pilots or proofs of concept may be underway. Investigate potential use cases. Evaluate technical feasibility. Monitor for further maturation.
Trial. Suitable for controlled experimentation in real-world conditions. Sufficient maturity and promise demonstrated. Initiate pilot projects. Collect evidence. Build internal readiness for future scaling.
Plan. Proven potential and approaching operational relevance. No longer purely experimental. Develop integration roadmaps. Secure resources. Align governance or procurement mechanisms to support upcoming adoption.
Implement. Mature and strategically relevant for wide adoption. Proven value in cybersecurity operations. Has successfully come through trial phases. Supported by a growing ecosystem. Actively pursue integration, scale-up, and long-term deployment.
The radar chart divides signals into four quadrants by abstraction type: Tool, Platform, Technique, and Trend. The five concentric zones run from the outermost ring (Recognise) to the innermost (Implement). A technology positioned in the Implement ring of the Technique quadrant tells you, at a glance, that it is a mature, systematically applicable method that the European cybersecurity ecosystem considers ready for broad deployment.
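The two-stage averaging behind zone placement can be sketched as below. The 15-items-per-group structure and the equal averaging of group scores follow the text; the zone cut-offs are hypothetical equal-width bands, since the methodology does not publish the actual boundaries.

```python
from bisect import bisect_right

ZONES = ["Recognise", "Observe", "Trial", "Plan", "Implement"]

def adoption_likelihood(users: list[int], suppliers: list[int],
                        institutions: list[int]) -> float:
    # Each group's 15 Likert items (1-5) contribute equally to a group
    # score; the three group scores are averaged into the overall score.
    groups = (users, suppliers, institutions)
    return sum(sum(g) / len(g) for g in groups) / len(groups)

def radar_zone(score: float) -> str:
    # Hypothetical equal-width cut-offs on the 1-5 scale; ENISA's
    # real zone boundaries are not published in the methodology.
    cuts = [1.8, 2.6, 3.4, 4.2]
    return ZONES[bisect_right(cuts, score)]
```

Note the averaging across groups means a technology cannot reach the inner rings on user enthusiasm alone: weak supplier readiness or institutional concerns pull the composite back toward the outer zones.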
Article 13 of the Cyber Resilience Act requires manufacturers to address vulnerabilities and maintain product security throughout the lifecycle, reflecting the state of the art. Technologies in the "Plan" or "Implement" zones will represent what ENISA considers mature and adoption-ready, based on evidence from users, suppliers, and institutional actors across the EU. If a security control your competitors have moved to "Trial" still sits at "Recognise" in your architecture, that gap belongs in your technical documentation with an explanation of why and a migration path.
Tracking early-stage technologies: the weak signal approach
Weak signals are technologies that score low on strength but may carry disruptive potential in the medium to long term. They do not appear on the main radar chart. Instead, ENISA positions them on a separate foresight chart along two dimensions.
Development speed uses the Technological Innovation Systems (TIS) framework by Markard and Truffer (2008). Expert panels score seven functions that reflect whether an innovation ecosystem is actively forming around the technology:
- Knowledge development and diffusion: is new research emerging and being shared through publications, conferences, or collaborations?
- Entrepreneurial experimentation: are start-ups, firms, or institutions testing the technology in real-world settings, with pilots, prototypes, or early use cases?
- Development of positive externalities: does the signal create network effects or synergies with other technologies or sectors?
- Guidance of the search: does it appear in strategic documents, policies, or organisational roadmaps?
- Market formation: are early markets or niche applications forming, even if broader adoption barriers remain?
- Resource mobilisation: is financial support, skilled labour, or infrastructure available to sustain its development?
- Creation of legitimacy: do institutional actors view the signal positively, and is broad acceptance growing?
Each function scores 1 to 5. The composite development speed score is the arithmetic mean of all seven.
Market potential is scored on three sub-dimensions: sector penetration (does the signal stay within one sector or spread broadly across many?), domain usefulness (how many of the 15 cybersecurity functions in the JRC European Cybersecurity Taxonomy does it address?), and type of adopter (highly specialised actors only, or accessible to SMEs and the general public?).
This produces a 2x2 foresight chart with four quadrants:
- Exploration: low development speed and low market potential. Speculative signals with limited current evidence but potential visionary value.
- Market Consolidation: high market potential but lower development speed. Early adopters are beginning to explore the signal. More structured development and support may follow.
- Tech Consolidation: high development speed but still limited market traction. Advancing in research and experimentation, but commercial uptake remains uncertain.
- Transition Ready: high on both dimensions. Closest to becoming strong signals. Technology foundations are consolidating and cross-sector interest is visibly increasing. Early policy discussions can help anticipate their impact and integration.
Weak signals do not feature on the main radar visualisation, but they are an important input for future editions. Transition Ready signals in particular are candidates for reclassification as strong signals in the next radar cycle.
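The foresight scoring reduces to a seven-function mean plus a 2x2 placement. The arithmetic mean over the TIS functions is stated in the methodology; the midpoint threshold used for quadrant placement here is illustrative.

```python
def development_speed(tis_scores: list[int]) -> float:
    # Arithmetic mean of the seven TIS function scores, each rated 1-5.
    assert len(tis_scores) == 7, "one score per TIS function"
    return sum(tis_scores) / 7

def foresight_quadrant(dev_speed: float, market_potential: float,
                       threshold: float = 3.0) -> str:
    # Illustrative midpoint threshold for the 2x2 foresight chart.
    fast = dev_speed >= threshold
    broad = market_potential >= threshold
    if fast and broad:
        return "Transition Ready"
    if fast:
        return "Tech Consolidation"
    if broad:
        return "Market Consolidation"
    return "Exploration"
```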
Where the data comes from
ENISA's scoring relies on three primary quantitative data sources, all accessed through the JRC TIM analytics platform, an automated text and data mining system developed by the European Commission's Joint Research Centre.
Scopus, maintained by Elsevier, covers academic journals, preprints, books, and conference proceedings, including over 25.5 million open-access documents. It provides academic publication counts and year-on-year trends for signal strength and momentum scoring.
PATSTAT contains bibliographical and legal event patent data from EU Member States, extracted from the European Patent Office's database. It provides patent filing counts and year-on-year changes.
Europe Media Monitor (EMM) gathers approximately 300,000 news articles per day in up to 70 languages. It provides news and search trend counts and year-on-year changes.
The keyword construction that drives all three database queries is controlled through a centralised dictionary. The dictionary is validated by the ENISA Core Radar Team and frozen for each edition of the radar. A keyword set for "Extended Detection and Response" might include terms like 'XDR', 'Extended Detection and Response', and 'Advanced threat detection platform', combined with a publication year filter in a Boolean query structure. Any future modification to the dictionary requires documented justification, because changes affect score comparability across editions.
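A frozen keyword set translates into a Boolean query roughly like this. The PUBYEAR-style year filter follows Scopus query conventions; the exact syntax used inside the JRC TIM platform is not public, so this shows only the Boolean structure described above.

```python
def build_query(keywords: list[str], year_from: int, year_to: int) -> str:
    # OR-combine the frozen keyword set, then AND a publication
    # year range (Scopus-style strict comparisons).
    terms = " OR ".join(f'"{k}"' for k in keywords)
    return f"({terms}) AND PUBYEAR > {year_from - 1} AND PUBYEAR < {year_to + 1}"
```

Because the dictionary is frozen per edition, the same call with the same keyword list reproduces the same counts, which is what makes cross-edition score comparisons meaningful.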
This approach makes the scoring reproducible. A manufacturer reviewing a technology's radar position can trace it back to the specific keywords, database queries, and normalisation thresholds used.
The fast-track mechanism and signal lifecycle
When EU institutions or Commission priorities flag a specific technology as strategically urgent, ENISA can activate a fast-track procedure. The requesting entity provides a baseline dataset: signal name and description, an estimated TRL, current adoption level, abstraction level classification, and sector and domain context. An internal team then conducts an initial screening to check for duplication and taxonomy coherence.
If the signal passes screening, a micro-panel of three to five experts completes a rapid validation in approximately 10 days. Experts provide quantitative strength and momentum estimates on a 1 to 5 scale and qualitative comments on strategic impact. The signal then advances directly to the evaluation phase.
Fast-track signals are marked with a distinct label on the public dashboard. They remain subject to full validation in the next regular update cycle. The methodology is explicit about the trade-off: fast-track signals lack the benchmarking depth of the standard process and cannot be directly compared with signals that went through full qualification.
Signal lifecycle management follows a parallel logic. Each new edition reviews all active signals. Signals that no longer exhibit sufficient strength and momentum are removed; this removal threshold is assessed on a three-year basis. Signals that have reached technological maturity and widespread adoption are also phased out: they are acknowledged as having moved beyond the foresight scope of the radar and are tracked through other operational mechanisms instead. For manufacturers, a technology exiting the radar is a signal in itself: it has become a baseline expectation rather than a differentiating capability.
The public dashboard will be accompanied by a detailed methodological note, clearly stating how each signal was identified, how each indicator was measured or estimated, and any limitations or assumptions applied during the process. The technical format of the dashboard (PowerBI or equivalent) will be defined at a later stage of the project.
What the radar means for your CRA compliance programme
The TIR does not create new legal obligations. But it will become authoritative evidence of what the EU considers mature and adoption-ready across the cybersecurity ecosystem. That has practical consequences for three areas of your CRA compliance work.
Technical documentation under Annex VII. Annex VII requires manufacturers to document the security design of the product with digital elements, including the security solutions applied and the processes put in place for vulnerability handling. The TIR's four-tier signal taxonomy (tool, platform, technique, trend) provides a structured vocabulary for describing your security architecture. Documenting which radar zone your critical security controls occupied at the time of product design creates a timestamped record of your state-of-the-art assessment. See our guide on Annex VII technical documentation.
Vulnerability management under Article 13. Article 13(6) requires manufacturers to address vulnerabilities with no undue delay and to apply patches or mitigations in a timely manner. The radar will track the adoption lifecycle of vulnerability management tools and techniques specifically. A tool moving from "Observe" to "Trial" in consecutive radar editions is a quantified signal that the ecosystem is converging on it. Manufacturers who track these movements can make proactive tooling decisions rather than reactive ones. See our guide on ENISA's 24-hour vulnerability reporting obligations.
Secure by design decisions. The secure by design principles in ENISA's Security by Design and Default Playbook (v0.4, March 2026) describe practices rather than specific technology choices. The TIR fills the gap by naming which concrete tools and techniques are ready for those practices. A manufacturer implementing secure boot, for example, would use the radar to assess the maturity of the specific firmware signing infrastructure they are considering. The ENISA Secure by Design playbook covers the principles. The TIR will cover the technology implementations.
One practical note on timing. The first radar edition has not yet been published. Until it is, the relevant evidence for state-of-the-art assessments comes from the same source categories ENISA uses for signal collection: Gartner, Fraunhofer, ETSI, ECSO, BSI, NCSC, and NIST publications. The TIR methodology makes those source categories explicit and provides a framework for weighting them.
The CRA's essential requirements in Annex I refer to security properties, not to specific standards. Harmonised standards such as EN 18031, BSI TR-03183, and IEC 62443 translate those properties into technical requirements. The TIR sits at a different level: it tracks whether the underlying technologies that implement those technical requirements are mature and adoption-ready. A standard can mandate that you use encryption. The radar will tell you which encryption platform or technique the ecosystem considers ready for the "Plan" or "Implement" zone.
Frequently Asked Questions
When will the first ENISA technology radar publish?
The April 2026 document describes the methodology only, not the results of the first radar edition. The TIR is part of ENISA's 2025-2027 single programming document. No specific publication date is given in the methodology. ENISA states that at least four distinct visualisation prototypes are planned before the public dashboard launches. Monitor the ENISA publications page for release announcements.
Does the radar directly tell me which security technologies I must use under CRA?
No. The CRA does not mandate specific technologies. Article 13 requires you to address security based on the state of the art, but it does not name particular tools or platforms. The TIR will provide EU-level evidence of what is mature and adoption-ready across the cybersecurity ecosystem. Technologies in the "Plan" or "Implement" zones are what ENISA considers strategically relevant for deployment. Your decisions about which technologies to include in your product architecture, and how you document those choices under Annex VII, remain yours to make. The radar is evidence to reference, not a checklist to follow.
What is Technology Readiness Level and why does it carry 50% of the strength score?
TRL is a scale from 1 (basic research) to 9 (mature technology with proven performance in operational conditions), used across EU-funded innovation programmes including Horizon Europe. ENISA assigns TRL half the strength weight because it most directly reflects whether a technology is deployable, not just discussed. A technology generating substantial patent and media attention at TRL 2 or 3 is a research project, not a candidate for enterprise deployment. The other four indicators (authoritative mentions, academic publications, patents, news trends) each contribute 12.5% to contextualise how widely the technology's maturity is recognised across the wider ecosystem.
Can manufacturers or industry bodies contribute signals to the radar?
Yes. The methodology includes a supplementary signal collection mechanism through expert workshops and online submission forms, including EU Survey. ENISA plans to establish an Ad Hoc Working Group (AHWG) of up to 30 experts selected through an open call, covering technology vendors, integrators, service operators, end users, critical infrastructure representatives, conformity assessment bodies, auditors, certification bodies, and testing laboratories. The open call has not been announced yet. The planned public dashboard will also include a feedback channel for proposing additional sources or flagging discrepancies. Manufacturers with active security research programmes and domain expertise in emerging cybersecurity technologies are directly relevant candidates for AHWG participation.
How is a "weak signal" different from a "strong signal" in this framework?
The distinction comes from the qualification step. Each candidate signal receives composite scores for strength (technological maturity, based on TRL and absolute counts from Scopus, PATSTAT, and EMM) and momentum (rate of change in attention, based on year-on-year trends from the same sources). Signals that cross defined thresholds on both axes are classified as strong and proceed to UTAUT-based adoption likelihood assessment. Those below the threshold are classified as weak and proceed to TIS-based foresight evaluation instead. The thresholds are set relative to the distribution of the full signal dataset for each edition, so the classification boundary shifts as the signal population changes. Both tracks feed into different visualisations on the public dashboard.
How does the ENISA radar relate to existing frameworks like Gartner's Hype Cycle or the JRC Innovation Radar?
ENISA conducted a desk research review of existing technology radars and foresight frameworks before designing its methodology (documented in Annex 0 of the April 2026 publication). The TIR borrows the five-zone ring structure from the Thoughtworks Technology Radar and adapts the JRC Innovation Radar's approach to assessing high-potential innovations. It differs from Gartner's Hype Cycle in that it relies on quantitative bibliometric and patent data rather than analyst opinion alone, and it explicitly separates weak signal foresight (TIS framework) from strong signal adoption assessment (UTAUT). All quantitative sources are accessed through EU data infrastructure: Scopus, PATSTAT, and EMM via the JRC TIM analytics platform. See our analysis of ENISA's Secure by Design playbook for how other ENISA methodologies connect to CRA compliance.
What happens to a technology once it exits the radar?
Technologies that have reached widespread adoption are phased out of the TIR. The methodology acknowledges them as having moved beyond the foresight scope of the radar and states they are tracked through other operational or implementation-focused mechanisms instead. The threshold for removal is assessed on a three-year basis, based on whether signals still exhibit sufficient strength and momentum relative to the current dataset. A technology exiting the radar has become a baseline expectation rather than an emerging differentiating capability. For CRA purposes, a technology that has exited because it is "too mainstream" is no longer a discretionary choice. It is part of the state of the art you are expected to reflect.
This article is for informational purposes only and does not constitute legal advice. For specific compliance guidance, consult with qualified legal counsel.
Related Articles
Does the CRA apply to your product?
Answer 6 simple questions to find out if your product falls under the EU Cyber Resilience Act scope. Get your result in under 2 minutes.
Ready to achieve CRA compliance?
Start managing your SBOMs and compliance documentation with CRA Evidence.