Mitigating Risks of AI-driven Automation in Cybersecurity
Abstract
AI-driven automation has transformed cybersecurity by improving threat detection, response times, and vulnerability management. While these advances bring significant gains in efficiency and protection, they also introduce substantial risks, most notably over-reliance on algorithms, vulnerabilities within the AI systems themselves, and adversarial machine learning attacks. Over-reliance reduces human oversight and control, leaving systems exposed to failure when algorithms misinterpret or overlook critical data; in addition, AI models can be exploited through adversarial attacks that manipulate input data and induce incorrect decisions that undermine security measures. This paper examines these challenges and presents strategies for mitigating them. It emphasizes maintaining human involvement through human-in-the-loop systems, continuous monitoring, and routine testing to detect anomalies, and it discusses techniques such as adversarial training and explainable AI (XAI) that strengthen system resilience and make decision-making processes transparent. By combining human intervention with robust technical defenses, organizations can better protect their AI-powered cybersecurity systems against these risks. The paper proposes a framework for the responsible integration of AI in cybersecurity that balances efficiency against vulnerability, and its findings offer cybersecurity professionals actionable methods for improving the robustness and reliability of AI systems. As AI continues to evolve, ongoing research will be needed to address emerging threats and further strengthen the resilience of AI-driven cybersecurity defenses.
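The paper itself provides no implementation details; as a brief illustration of adversarial training, one of the defensive techniques the abstract mentions, the sketch below hardens a hypothetical PyTorch classifier against FGSM-style input perturbations. The model, data loader, and perturbation budget epsilon are assumptions introduced here for illustration, not artifacts of the paper.

```python
# Illustrative sketch only (not from the paper): FGSM-based adversarial training
# for a hypothetical PyTorch classifier. `model`, `loader`, `optimizer`, and
# `epsilon` are assumed inputs.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Craft adversarial examples by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Shift each input in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.05):
    """Train on a mix of clean and adversarially perturbed batches."""
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()
        # The combined loss keeps accuracy on clean data while hardening the
        # model against perturbed inputs.
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Training on both clean and perturbed batches is a common way to retain clean-data accuracy while gaining robustness; a stronger iterative attack (for example, PGD) could replace FGSM in a more demanding defense.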