Artificial Intelligence: How to make Machine Learning Cyber Secure
Machine learning (ML) is currently the most developed and the most promising subfield of artificial intelligence for industrial and government infrastructures. By providing new opportunities to solve decision-making problems intelligently and automatically, artificial intelligence (AI) is applied in almost all sectors of our economy.
While the benefits of AI are significant and undeniable, the development of AI also induces new threats and challenges, identified in the ENISA AI Threat Landscape.
How can machine learning cyberattacks be prevented? How can controls be deployed without hampering performance? The European Union Agency for Cybersecurity answers these machine learning cybersecurity questions in a recently published report.
Machine learning algorithms are used to give machines the ability to learn from data in order to solve tasks without being explicitly programmed to do so. However, such algorithms need extremely large volumes of data to learn, and because they rely on that data, they are also exposed to specific cyber threats.
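As a minimal illustration of this idea (not drawn from the report itself), the sketch below fits a simple linear model to example data: the rule mapping input to output is recovered from samples rather than hand-coded, which is exactly why the quality and integrity of the training data matter.

```python
# Minimal illustration: a model "learns" a rule from data instead of
# being explicitly programmed with it. Ordinary least squares recovers
# the slope and intercept from example points alone.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Training data generated by the hidden rule y = 2x + 1;
# the program is never told that rule.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 1.0 (the rule learned from the data)
```

Because the learned parameters are determined entirely by the training samples, an attacker who can tamper with those samples can steer the model's behaviour, which is the intuition behind data-poisoning threats.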
The Securing Machine Learning Algorithms report presents a taxonomy of ML techniques and core functionalities. The report also includes a mapping of the threats targeting ML techniques and the vulnerabilities of ML algorithms. It provides a list of relevant security controls recommended to enhance cybersecurity in systems relying on ML techniques. One of the challenges highlighted is how to select the security controls to apply without jeopardising the expected level of performance.
The mitigation controls for ML specific attacks outlined in the report should in general be deployed during the entire lifecycle of systems and applications making use of ML.
Machine Learning Algorithms Taxonomy
Based on desk research and interviews with the experts of the ENISA AI ad-hoc working group, a total of 40 of the most commonly used ML algorithms were identified. The taxonomy developed is based on the analysis of these algorithms.
The non-exhaustive taxonomy is intended to support the identification of the specific threats targeting ML algorithms, their associated vulnerabilities, and the security controls needed to address those vulnerabilities.
The EU Agency for Cybersecurity continues to play a growing role in the assessment of Artificial Intelligence (AI) by providing key input for future policies. To this end, the Agency takes part in the open dialogue with the European Commission and EU institutions on AI cybersecurity and regulatory initiatives.