Perspective: Artificial Intelligence

Dr. Ron Martin, Professor of Practice, Capitol Technology University
The European Union (EU) reached a significant milestone in Artificial Intelligence governance this March by releasing a draft of the EU Artificial Intelligence Act. This 458-page document outlines how AI may be deployed and used and categorizes initiatives by risk, setting a standard for the responsible use of AI in the EU. Its size and detail warrant a thorough review, and I look forward to sharing my perspective once my research reaches a conclusive stage.
My journey into the realm of Artificial Intelligence (AI) has been marked by a series of challenges, the first and foremost being understanding its definition. Viewpoints range from describing AI as an ability to framing it as a theory, which makes it a complex subject to pin down. To align my lectures and perspective, I turned to the definition in the United States Code, a source that provides a relevant and authoritative understanding of AI. In 15 U.S.C. 9401(3), AI is defined as:
a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments, abstract such perceptions into models through analysis in an automated manner, and use model inference to formulate options for information or action.
When I tried to categorize the recent Executive Order (EO) 14110, signed in the United States on October 30, 2023, this statutory definition proved the most appropriate lens. Against this background, I will present my perspective on the premise of AI and the directive. Even with the statutory definition in hand, I needed a more practical working understanding of AI, and in developing one I found that AI rests on two main supporting sublevels.
AI broadly entails enhancing a system’s functionality through computational procedures. Machine learning (ML), the first sublevel, builds the procedures and strategies that let computers learn from data and draw conclusions on their own.
The next level within ML is Deep Learning (DL). DL applies artificial neural networks arranged in layers, which allow the model to learn and extract high-level feature representations from the data. In layman’s terms, a neural network is a construct within ML that loosely mimics the way the human brain processes information to reach conclusions.
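To make the idea of “layers” concrete, here is a minimal sketch, written for this discussion rather than taken from any framework, of how a deep learning model stacks simple transformations. The weights and inputs are arbitrary illustrative numbers; a real model would learn them from data.

```python
def dense_layer(inputs, weights, biases):
    """One fully connected layer: each neuron computes a weighted sum
    of the inputs plus a bias, passed through a ReLU activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, z))  # ReLU: keep positive signal only
    return outputs

def tiny_network(x):
    """Two stacked layers: raw input -> hidden features -> single score.
    Stacking layers is what puts the 'deep' in deep learning."""
    hidden = dense_layer(x, weights=[[0.5, -0.2], [0.3, 0.8]], biases=[0.1, 0.0])
    out = dense_layer(hidden, weights=[[1.0, 0.5]], biases=[0.0])
    return out[0]

score = tiny_network([1.0, 2.0])
```

Each layer re-describes its input at a higher level of abstraction; with many layers and learned weights, this is the mechanism by which DL extracts high-level features from raw data.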
As context for the directive provided to the Executive Branch of the U.S. Government, the EO concentrates on the safe, secure, and trustworthy application of AI. The key protective measures outlined in the order are:
1. Risk Management Framework: NIST released its AI Risk Management Framework to establish a structure for addressing the risks that accompany Artificial Intelligence, such as privacy, security, and ethical issues.
2. Transparency and Accountability: AI developers must make the decision-making processes of their algorithms and approaches explicit and take responsibility for the outcomes their AI systems produce.
3. Data Protection: Requires rigorous measures to protect the information processed by AI systems, including any personal information used in them.
4. Bias Mitigation: Provides approaches for reducing bias in AI systems, enabling companies to produce fair and equitable AI algorithms.
5. Collaboration and Standards: A companion document to the EO, a roadmap from the Cybersecurity and Infrastructure Security Agency, promotes active partnership among government, industry, and academia in establishing and following standards and best practices for the timely industrial application of AI.
6. Public Awareness and Education: Organizations should provide clear instructions on how to use AI and disclose when they intend to use it.
Together, these measures seek to establish the values needed for safe and ethical AI technologies to be developed and implemented.
Machine Learning and Deep Learning are the subcategories of AI. Supervised learning algorithms let computers learn from data and make predictions; deep learning algorithms, built on deep neural networks, are effective on complicated and unstructured data. In cybersecurity, both ML and DL are applied to improve threat identification, anomaly detection, and the overall performance of a security system. Every organization and individual “MUST” undertake their own AI research according to their proposed application and justification. Today, both the United States and the European Union offer a relatively high level of governance over AI development and use.
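As a simple illustration of the anomaly-detection idea mentioned above, here is a minimal sketch, not drawn from any particular security product, that flags data points whose z-score deviates sharply from the rest. The traffic figures and threshold are invented for the example; production systems would use far richer models.

```python
import statistics

def flag_anomalies(values, threshold=3.0):
    """Return the indexes of values whose z-score exceeds the threshold.
    A z-score measures how many standard deviations a point sits from
    the mean of the sample."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    return [i for i, v in enumerate(values)
            if stdev and abs(v - mean) / stdev > threshold]

# Hypothetical hourly request counts; the spike at index 6 is suspicious.
traffic = [120, 118, 125, 119, 122, 121, 950, 117]
suspicious = flag_anomalies(traffic, threshold=2.0)  # -> [6]
```

Statistical baselining of this kind is the simplest form of anomaly detection; ML and DL approaches extend the same principle by learning what “normal” looks like from many features at once.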
Dr. Ron Martin is a Professor of Practice at Capitol Technology University, specializing in the functional areas of Critical Infrastructure, Industrial Control System Security, and Identity, Credential, and Access Management. Ron is an IEEE Senior Member, an active contributor at the Cloud Security Alliance, and a member of the International Association of CIP Professionals.