Research | The use of AI in Offensive Security | Annotated Bibliography
I am working on a research paper on the use of AI in Offensive Security for a school project.
What are annotated bibliographies used for?
An annotated bibliography provides an overview or a brief account of the available research on a given topic. Every research paper has its own bibliography, and I am using this annotated version to keep track of the contents of each of the sources I am studying and learning from. “As a researcher, you have become an expert on your topic: you have the ability to explain the content of your sources, assess their usefulness, and share this information with others who may be less familiar with them.” 1
Here are my ten annotated bibliography entries:
1. Zurowski, S., et al. (2022, August 1). A quantitative analysis of offensive cyber operation (OCO) automation tools. Proceedings of the 17th International Conference on Availability, Reliability and Security. Association for Computing Machinery, Article 42, 1–11. https://dl.acm.org/doi/abs/10.1145/3538969.3544414
- This article analyzes the phases of an Offensive Cyber Operation (OCO) that can be automated using Artificial Intelligence (AI). Based on that analysis, the researchers curated a dataset of tools used in OCOs and studied them quantitatively. One of the key tools examined is DeepExploit, a fully automated penetration testing tool that utilizes Machine Learning (ML). The paper suggests that AI- or ML-based tools may themselves be vulnerable to adversarial attacks, that is, attacks that target weaknesses in the AI algorithms (a small, hypothetical sketch of this kind of attack appears after the bibliography). A significant finding is that the majority of OCO tools rely on basic rule-based automation, which suggests a promising research avenue for leveraging AI and ML in future OCO tool development.
2. Auricchio, N., et al. (2022, August 28). An automated approach to web offensive security. Computer Communications, 195, 248–261. https://doi.org/10.1016/j.comcom.2022.08.018
- This article focuses on explaining a Web Application Penetration Testing (WAPT) framework that allows for the integration and orchestration of several types of attacks. The framework consists of two primary components: an Executor, responsible for executing attacks, and an Orchestrator, which coordinates those attacks across sequential phases (a toy sketch of this Orchestrator/Executor split appears after the bibliography). The authors acknowledge that cybersecurity professionals use open-source tools, personal experience, and intuition to orchestrate a set of automated tools when performing penetration testing. The article relates to article #1 because both list the types of attacks used in penetration testing engagements, including open-source and private tools.
3. Mirsky, Y., et al. (2022, November 6). The threat of offensive AI to organizations. Computers & Security, 124. https://www.sciencedirect.com/science/article/pii/S0167404822003984
- This article explores the various ways Artificial Intelligence (AI) is utilized in cybersecurity, including both beneficial and harmful applications, as well as offensive and defensive purposes. The authors elaborate on the question “Does AI benefit the attacker more than the defender?” and highlight the threats that companies face today and the potential advantages attackers gain while performing penetration testing. The article identifies 32 offensive AI capabilities that adversaries can use to improve their penetration testing attacks. This article relates to article #2 because both sources discuss automated tools built on AI algorithms, and to article #1 because the authors present an attack methodology.
4. Malatji, M., et al. (2024, February 15). Artificial intelligence (AI) cybersecurity dimensions: A comprehensive framework for understanding adversarial and offensive AI. AI and Ethics. Springer. https://link.springer.com/article/10.1007/s43681-024-00427-4
- This paper delves into the complex aspects of AI-powered cyberattacks, providing insights into their implications, strategies for mitigation, underlying motivations, and significant societal impacts. Defensive AI utilizes machine learning (ML) and other AI techniques to enhance the security and resilience of computer systems and networks against cyberattacks. Offensive AI, or attacks leveraging AI, involves using AI for malicious purposes, such as creating new attack vectors or automating the exploitation of existing vulnerabilities. Adversarial AI involves the abuse and misuse of AI systems, targeting their vulnerabilities to induce incorrect predictions. Like papers #1 and #2, this paper also presents a framework: the AI Cybersecurity Dimensions (AICD) Framework.
5. Jacobsen, J. T., & Liebetrau, T. (2023, May 11). Artificial intelligence and military superiority: How the ’cyber… (Chapter 8). Taylor & Francis, 1–22. https://www.taylorfrancis.com/chapters/oa-edit/10.4324/9781003284093-8/artificial-intelligence-military-superiority-jeppe-jacobsen-tobias-liebetrau
- This article explores the consequences of implementing AI in offensive and defensive cyber operations, as well as AI-powered tools for defensive or offensive engagements, providing a detailed analysis of their functionalities and potential applications. The authors argue that the ongoing efforts to enhance both cyber offense and defense with AI present significant potential and risks for military dominance in the information space. This article provides a military perspective on the utilization of AI and ML cybersecurity tools. It relates to article #3 because both argue that cybersecurity is currently dominated by the offensive side.
6. Aiyanyo, I. D., et al. (2020, August 22). A systematic review of defensive and offensive cybersecurity with machine learning. Applied Sciences (MDPI), 10(17), 5811. https://www.mdpi.com/2076-3417/10/17/5811
- This article consolidates over one hundred research papers on machine learning (ML) in defensive and offensive cybersecurity into one main review. The findings identify the ML methods most frequently used within supervised, unsupervised, and semi-supervised learning, and the methods that have shown the best results in tackling various threats in defensive and offensive cybersecurity. This article and article #5 argue that the strongest defense against attacks, and the best way to keep networks safe, is to know the ins and outs of the network. The authors engaged in cyberattacks, acting as malicious hackers would, to try to gain unauthorized access for the purpose of gathering data and exploring ML and AI techniques. Furthermore, the paper establishes a foundational reference point for readers, encompassing machine learning techniques, their objectives, and their efficacy in cybersecurity, and it delves into current challenges and outlines future trajectories for the application of machine learning in cybersecurity.
7. Sarker, I., et al. (2021, March 26). AI-driven cybersecurity: An overview, security intelligence modeling and research directions. Springer, 2(173). https://link.springer.com/chapter/10.1007/978-3-031-15030-2
- In this paper, the author elaborates on various cybersecurity issues and aims to solve them intelligently using Artificial Intelligence (AI) algorithms. The paper presents a comprehensive exploration of “AI-driven cybersecurity,” leveraging popular AI techniques such as Machine Learning (ML) and deep learning, and incorporating concepts from natural language processing, knowledge representation and reasoning, and knowledge- or rule-based expert systems modeling. This paper relates to article #6 because both sources discuss the use of natural language processing when attempting to create smart automations. The researchers aim to create a point of reference and a guideline describing these AI and ML methods for building automated and intelligent processes.
8. Francois, M., et al. (2021, August 28). Artificial intelligence & cybersecurity: A preliminary study of automated pentesting with offensive artificial intelligence. SpringerLink, 425, 131–138. https://link.springer.com/chapter/10.1007/978-3-030-85977-0_10
- In this paper, the authors aim to define a new framework for industrializing penetration testing engagements. The research aims to provide companies that have the necessary resources with a way to establish a penetration testing task force capable of autonomously testing any system. It seeks to develop fully automated procedures for penetration testing and to establish effective communication channels and support mechanisms for risk assessment reporting. This article relates to article #4 because both develop a new framework to categorize penetration testing automation. The research utilizes artificial intelligence to automate penetration testing and make it autonomous; by integrating AI algorithms such as machine learning and deep learning, the process becomes more efficient, allowing organizations to identify and address vulnerabilities effectively across diverse systems and networks.
9. Åslaugson, Å., et al. (2021, September 3). Simulating SQL injection vulnerability exploitation using Q-learning reinforcement learning agents. Journal of Information Security and Applications, 61, 102903. https://doi.org/10.1016/j.jisa.2021.102903
- This paper explores the field of Machine Learning (ML) in cybersecurity, focusing on SQL injection with ML. SQL injections are a critical vulnerability that poses a significant threat to web applications. The authors advocate for a structured approach to exploiting SQL injection vulnerabilities and propose a defensive strategy: rather than utilizing ML purely for offensive purposes, they focus on training ML models to detect and patch vulnerabilities. This defensive stance is valuable for ethical hackers and legitimate penetration testers aiming to secure systems; by using ML to proactively identify and mitigate vulnerabilities, organizations can strengthen their defenses against potential cyber threats. This article differs from the others because the researchers focus solely on exploiting and patching one vulnerability rather than relying on the broad automated tools used in the other articles (a toy sketch of the Q-learning technique named in the title appears after the bibliography).
10. Confido, A., Ntagiou, et al. (2022, August 10). Reinforcing penetration testing using AI. IEEE, 1–15. https://ieeexplore.ieee.org/abstract/document/9843459
- This article analyzes the approach and results of applying Machine Learning (ML) and Artificial Intelligence (AI) algorithms to PenBox, the penetration testing prototype developed by the European Space Agency. The researchers aim to improve the overall success of the framework by improving its efficiency in reporting vulnerabilities and by optimizing the process in terms of performance and cost-effectiveness. This paper relates to paper #8 because both use deep learning with complex neural networks to introduce randomization. The article also investigates how the PenBox prototype can be automated to help reduce human interaction during the penetration testing engagement and the reporting phase.
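To make a few of the techniques named above more concrete, I have added some small illustrative sketches below. They are my own toy examples written for this post, not code from any of the cited papers.

The first sketch relates to entry #1's mention of adversarial attacks against AI/ML tools. Under purely hypothetical weights and a made-up input, it shows how an attacker could nudge a feature vector against a simple linear classifier so that a flagged sample slips past it (an FGSM-style perturbation; the model, the sample, and the epsilon budget are all invented for illustration).

```python
import numpy as np

# Hypothetical pre-trained linear model: score = w . x + b; score > 0 => "malicious".
w = np.array([1.5, -0.8, 2.0, 0.4])   # made-up learned weights
b = -1.0

def classify(x: np.ndarray) -> str:
    return "malicious" if np.dot(w, x) + b > 0 else "benign"

# A sample the model currently flags as malicious.
x = np.array([1.0, 0.2, 0.9, 0.5])
print("original:", classify(x))            # -> malicious

# For a linear model, the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) lowers the score (FGSM-style evasion).
epsilon = 0.6                              # attacker's perturbation budget (made up)
x_adv = x - epsilon * np.sign(w)
print("perturbation:", x_adv - x)
print("adversarial:", classify(x_adv))     # -> benign once epsilon is large enough
```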
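The second sketch relates to entry #2's Executor/Orchestrator split. It is a hypothetical skeleton, not the authors' framework: the Orchestrator decides which attack modules run in each sequential phase, and the Executor actually runs them and collects results. The module names, phases, and placeholder findings are all invented.

```python
from typing import Callable, Dict, List

# An "attack module" here is just a function taking a target and returning findings.
AttackModule = Callable[[str], List[str]]

def port_scan(target: str) -> List[str]:
    return [f"{target}: port 80 open (placeholder result)"]

def sqli_probe(target: str) -> List[str]:
    return [f"{target}: login form may be injectable (placeholder result)"]

class Executor:
    """Runs individual attack modules and gathers their output."""
    def run(self, module: AttackModule, target: str) -> List[str]:
        return module(target)

class Orchestrator:
    """Chooses which modules to run in each sequential phase."""
    def __init__(self, executor: Executor, phases: Dict[str, List[AttackModule]]):
        self.executor = executor
        self.phases = phases

    def run_engagement(self, target: str) -> Dict[str, List[str]]:
        report: Dict[str, List[str]] = {}
        for phase_name, modules in self.phases.items():
            findings: List[str] = []
            for module in modules:
                findings.extend(self.executor.run(module, target))
            report[phase_name] = findings
        return report

if __name__ == "__main__":
    phases = {"reconnaissance": [port_scan], "exploitation": [sqli_probe]}
    orchestrator = Orchestrator(Executor(), phases)
    print(orchestrator.run_engagement("test.example.local"))
```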
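The last sketch relates to entry #9's use of Q-learning agents. Everything here (the payload list, the simulated response, the reward values) is a made-up stand-in so the shape of the Q-learning update is visible; it is not the authors' simulation environment. With only a single state the update degenerates into a multi-armed-bandit style rule, but the same Q-learning formula applies.

```python
import random

# Hypothetical action space: candidate payloads the agent can try.
PAYLOADS = ["' OR '1'='1", "admin'--", "'; DROP TABLE users;--", "normal_input"]
VULNERABLE_TO = "' OR '1'='1"   # the simulated app only "falls" for this one

def simulated_attempt(payload: str) -> float:
    """Reward: +1 if the simulated login is bypassed, -0.1 for a failed try."""
    return 1.0 if payload == VULNERABLE_TO else -0.1

# Single-state problem, so the Q-table is just one value per action.
q_values = {payload: 0.0 for payload in PAYLOADS}
alpha, epsilon, episodes = 0.1, 0.2, 500   # learning rate, exploration rate, trials

for _ in range(episodes):
    if random.random() < epsilon:                  # explore a random payload
        action = random.choice(PAYLOADS)
    else:                                          # exploit the best-known payload
        action = max(q_values, key=q_values.get)
    reward = simulated_attempt(action)
    # Q-learning update (no next state here, so the target is just the reward).
    q_values[action] += alpha * (reward - q_values[action])

print("Learned payload ranking:")
for payload, value in sorted(q_values.items(), key=lambda kv: -kv[1]):
    print(f"  {value:+.2f}  {payload}")
```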
Let’s connect on LinkedIn!