Artificial Intelligence and Cybersecurity Research

Artificial Intelligence (AI) is a typical dual-use technology, in which malicious actors and innovators constantly try to outdo one another. This is common for technologies used to prepare strategic intelligence and support decision-making in critical areas. Malicious actors are learning to make their attacks more efficient by using AI to find and exploit vulnerabilities in ICT systems.

To take this initial statement one step further: with the help of AI, malicious actors can introduce new capabilities that prolong or even expand long-established cyber threat practices. With AI, these capabilities are gradually becoming automated and harder to detect. This study explores some of these capabilities from a research perspective.

In this study, two dimensions of AI have been considered (the categorisation is explained in Section 4): (a) ensuring secure and trustworthy AI and preventing its malicious use (‘AI-as-a-crime-service’ or ‘AI to harm’) and (b) the use of AI in cybersecurity (‘AI use cases’ or ‘AI to protect’).

The use cases of AI in cybersecurity are numerous and growing. Listing them exhaustively is beyond the scope of this study, as research in this area is constantly evolving. However, we present examples of some of these use cases throughout the report to better explain ongoing research efforts in this technology and explore areas where further research is needed.

The aim of this study is to identify research needs on AI for cybersecurity and on securing AI, as part of ENISA’s work in fulfilling its mandate under Article 11 of the Cybersecurity Act. This report is one of the outputs of that task. In it we present the results of the work carried out in 2021 and subsequently validated in 2022 and 2023 with stakeholders, experts and community members such as the ENISA Ad-Hoc Working Group (AHWG) on Artificial Intelligence. ENISA will make its contribution by identifying five key research needs that will be shared and discussed with stakeholders as proposals for future policy and funding initiatives at the level of the EU and the Member States.

No prioritisation of research needs is presented in this report. ENISA conducts its annual prioritisation exercise taking into account the overall status of cybersecurity research and innovation (R&I) in the EU, policy and funding initiatives for cybersecurity R&I in the Union, and technical analysis of specific topics and technologies. The priorities for 2022 can be found in the ENISA Research and Innovation Brief Report.

Furthermore, in 2022, ENISA conducted a study reviewing the work of 44 research projects, programmes and initiatives on cybersecurity and AI, most of which were funded by the EU’s framework programmes over the period 2014 to 2027. The importance of this inventory lies in the specific role played by AI in the cybersecurity research field, given its continuous and intensifying interplay with other technology families. The fundamental question driving that study was whether investments in cybersecurity R&I on AI, especially those backed by EU funds, have enabled Europe to make progress in this area. The findings can also be found in the ENISA Research and Innovation Brief Report 2022.

While we recognise the immense potential of AI for innovation in cybersecurity and the many requirements needed to improve its security, we also acknowledge that much work remains to fully uncover and describe these requirements. This report is only an initial assessment of where we stand and where we need to look further in these two important facets of the technology.

Furthermore, according to the results of the ENISA study on EU-funded research projects on cybersecurity and AI mentioned earlier, the majority of the projects reviewed focused on machine learning (ML) techniques. This can be interpreted in two ways: either the market for such solutions particularly appreciates the potential benefits of ML compared with other fields of AI, or, for some reason, research and development in the other fields of AI is not being adequately considered by public funders despite their recognised potential. In this study, we highlight the need not only to further explore the use of ML in cybersecurity but also to investigate other AI concepts.

ENISA followed the steps below to identify the research needs presented in Chapter 7.2 of this report.

• Identification from existing research papers of functions and use cases where AI is being used to support cybersecurity activities.
• Identification from existing research papers of areas where cybersecurity is needed to secure AI.
• Review of AI use cases.
• Analysis of open issues, challenges and gaps.
• Identification of areas where further knowledge is required.

These steps were carried out by experts who contributed to this report mainly through desk research, and the results were validated by members of the R&I community.

ENISA prepares these studies as a tool for developing advice on cybersecurity R&I and presenting it to stakeholders. These stakeholders are the main target audience of this report and include members of the wider R&I community (academics, researchers and innovators), industry, the European Commission (EC), the European Cybersecurity Competence Centre (ECCC) and the National Coordination Centres (NCCs).

Conclusions and Next Steps
AI is gaining attention in most sectors of society and the economy, as it affects people’s daily lives and plays a key role in the ongoing digital transformation through its automated decision-making capabilities. AI is also seen as an important enabler of cybersecurity innovation for two main reasons: its ability to detect and respond to cyber threats and the need to secure AI-based applications.

The EU has long considered AI a technology of strategic importance and refers to it in various policy and strategy documents. ENISA is contributing to these EU efforts with technical studies on cybersecurity and AI. For example, its cyber threat landscape for AI raised awareness of the opportunities and challenges of this technology. The Agency has already published two studies on this topic, and this report is the third publication aiming to provide a research and innovation perspective on cybersecurity and AI. In preparing these studies, the Agency is supported by the R&I community and has established an ad-hoc working group with experts and stakeholders from different fields and domains.

This study makes recommendations to address some of the challenges through research and identifies key areas to guide stakeholders driving research and development on AI and cybersecurity. These recommendations constitute ENISA’s advice, in particular to the EC and the ECCC, using its prerogative as an observer on the ECCC Governing Board and advisor to the Centre. The findings were used to produce an assessment of the current state of cybersecurity research and innovation in the EU and contribute to the analysis of research and innovation priorities for 2022, presented in a separate report.

In this context and as next steps, ENISA will:
1. present and discuss the research and innovation priorities identified in 2022 with members of the ECCC Governing Board and NCCs;
2. develop a roadmap and establish an observatory for cybersecurity R&I where AI is a key technology; and
3. continue identifying R&I needs and priorities as part of ENISA’s mandate (Article 11 of the CSA).

The European Union Agency for Cybersecurity, ENISA, is the Union’s agency dedicated to achieving a high common level of cybersecurity across Europe.

EDITORS
Corina Pascu (ENISA), Marco Barros Lourenco (ENISA)
AUTHORS
Dr. Stavros NTALAMPIRAS, University of Milan, IT; Dr. Gianluca MISURACA, Co-Founder and VP, Inspiring Futures, ES; Dr. Pierre ROSSEL, President, Inspiring Futures, CH