Identifiers
Title variants
AI system in the context of: threat modeling, its risk management and regulatory requirements
Publication languages
Abstracts
The work presents a comprehensive approach to threat modelling and risk management in AI (Artificial Intelligence) systems, providing methods, tools and guidance that help organisations build threat-resilient, legally compliant AI systems. The proposed systematic approach enables the identification and minimisation of the consequences of threats in AI systems, opening up new research directions in the field of artificial intelligence security.
Journal
Year
Volume
Pages
24--33
Physical description
Bibliography: 37 items, figures, tables
Authors
author
- Kazimierz Wielki University, Faculty of Computer Science, Kopernika 1, 85-074 Bydgoszcz
Bibliography
- 1. Sangwan R., Badr Y., Srinivasan S. Cybersecurity for AI systems: A survey. Journal of Cybersecurity and Privacy, 2023, 3(2), 166-190.
- 2. Bogdanov D., Etti P., Kamm L., Ostrak A., Pern T., Stomakhin F., Toomsalu M., Valdma S.M., Veldre A. Risks and controls for artificial intelligence and machine learning systems. Version 1.0 [Report]. Estonian Research Institute at Tallinn University of Technology (RIA), 2024. Retrieved November 13, 2024, from https://www.ria.ee/sites/default/files/documents/2024-05/Risks-and-controls-for-artificial-intelligence-and-machine-learning-systems.pdf
- 3. Knockaert M., Everarts de Velp S., Norouzian M.R., Palacios C., Martínez C., Orduña R., Etxeberria X., Gil A., Pawlicki M., Choras M. D7.1: AI systems threat analysis mechanisms and tools [Report]. SPARTA project number 830892, 2021. Retrieved November 13, 2024, from https://www.sparta.eu/assets/deliverables/SPARTA-D7.1-AI-systems-threat-analysis-mechanisms-and-tools-PU-M18_v1.1.pdf
- 4. ISO/IEC 22989:2022(E) Information technology - Artificial intelligence - Concepts and terminology [Standard]. International Organization for Standardization, 2022.
- 5. European Parliament and the Council. Artificial Intelligence Act, 2024. Retrieved November 13, 2024, from https://eur-lex.europa.eu/legal-content/PL/TXT/HTML/?uri=OJ:L_202401689
- 6. Liebl A., Klein T. AI Act: Risk classification of AI systems from a practical perspective [Report]. Applied AI Initiative, 2023. Retrieved November 13, 2024, from https://aai.frb.io/assets/files/AI-Act-Risk-Classification-Study-appliedAI-March-2023.pdf
- 7. Targowski A. Informatyka: modele systemów i rozwoju. Warszawa: Państwowe Wydawnictwo Ekonomiczne, 1980. Retrieved November 13, 2024, from https://bcpw.bg.pw.edu.pl/dlibra/doccontent?id=1702
- 8. ISO/IEC 42001:2023 Information technology - Artificial intelligence - Management system [Standard]. International Organization for Standardization, 2023.
- 9. ISO/IEC 31000 Risk management - Guidelines [Standard]. International Organization for Standardization, 2018.
- 10. ISO/IEC 27005:2022 Information security, cybersecurity and privacy protection - Information security management systems - Requirements [Standard]. International Organization for Standardization, 2022.
- 11. ISO/IEC 27090 Cybersecurity - Artificial Intelligence - Guidance for addressing security threats to artificial intelligence systems [Standard]. International Organization for Standardization. Retrieved November 13, 2024, from https://www.iso.org/standard/56581.html
- 12. ISO/IEC 27091 Cybersecurity and privacy - Artificial Intelligence - Privacy protection [Standard]. International Organization for Standardization. Retrieved November 13, 2024, from https://www.iso.org/standard/56582.html
- 13. ISO/IEC 5338:2023(E) Information technology - Artificial intelligence - AI system life cycle processes [Standard]. International Organization for Standardization, 2023.
- 14. ISO/IEC 8183:2023 Information technology - Artificial intelligence - Data life cycle framework [Standard]. International Organization for Standardization, 2023.
- 15. Pape N., Mansour C. PASTA Threat Modeling for Vehicular Networks Security. In 2024 7th International Conference on Information and Computer Technologies (ICICT) (pp. 474-478). IEEE, 2024. https://doi.org/10.1109/ICICT62343.2024.00083
- 16. Stingelová B., Thrakl C.T., Wrońska L., Jedrej-Szymankiewicz S., Khan S., Svetinovic D. User-Centric Security and Privacy Threats in Connected Vehicles: A Threat Modeling Analysis Using STRIDE and LINDDUN. In 2023 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) (pp. 0690-0697). IEEE, 2023. https://doi.org/10.1109/DASC/PiCom/CBDCom/Cy59711.2023.10361381
- 17. Azam N., Michala L., Ansari S., Truong N.B. Data Privacy Threat Modeling for Autonomous Systems: A Survey From the GDPR's Perspective. IEEE Transactions on Big Data, 2023, 9(2), 388-414.
- 18. Mauri L., Damiani E. Modeling Threats to AI-ML Systems Using STRIDE. Sensors, 2022, 22(1), 1.
- 19. Tete S. Threat Modeling and Risk Analysis for Large Language Model (LLM)-Powered Applications. arXiv, 2024.
- 20. von der Assen J., Sharif J., Feng C., Killer C., Bovet G., Stiller B. Asset-Centric Threat Modeling for AI-Based Systems. In 2024 IEEE International Conference on Cyber Security and Resilience (CSR) (pp. 437-444). IEEE, 2024.
- 21. Mauri L., Damiani E. STRIDE-AI: An Approach to Identifying Vulnerabilities of Machine Learning Assets. In 2021 IEEE International Conference on Cyber Security and Resilience (CSR) (pp. 147-154). IEEE, 2021. https://doi.org/10.1109/CSR51186.2021.9527917
- 22. Sharif J. Design and Implementation of a Threat Modeling Approach for AI-based Systems (Master's thesis). University of Zurich, Zurich, Switzerland, 2023.
- 23. Tarandach I., Coles M.J. Threat Modeling: A Practical Guide for Development Teams. O'Reilly Media, Inc., 2021.
- 24. Shostack A. Threat Modeling: Designing for Security. John Wiley & Sons, Inc., 2014.
- 25. Sportelli M. The AI Act - A Policy Exploration. 2024. DOI: 10.13140/RG.2.2.11397.15847/1.
- 26. Ehsan U., Riedl M.O. Explainability pitfalls: Beyond dark patterns in explainable AI. Patterns, 2024, 5(6), 100971. https://doi.org/10.1016/j.patter.2024.100971
- 27. Simchon A., Edwards M., Lewandowsky S. The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus, 2024, 3(2), pgae035. https://doi.org/10.1093/pnasnexus/pgae035
- 28. Loefflad C., Grossklags J. How the Types of Consequences in Social Scoring Systems Shape People's Perceptions and Behavioral Reactions. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24) (pp. 1515-1530). Association for Computing Machinery, 2024. https://doi.org/10.1145/3630106.3658986
- 29. Mitka A. The use of “real-time” remote biometric identification systems for law enforcement: comments in light of legislative work on the Artificial Intelligence Act. Problemy Współczesnego Prawa Międzynarodowego, Europejskiego i Porównawczego, 2023, 21, 183-202. https://doi.org/10.26106/q3ta-bv90
- 30. Nair A., Greeshma M.R. Mastering Information Security Compliance Management: A Comprehensive Handbook on ISO/IEC 27001:2022. Packt Publishing Ltd., 2023.
- 31. ISO/IEC 23894:2023 Information technology - Artificial intelligence - Guidance on risk management [Standard]. International Organization for Standardization, 2023.
- 32. Ebers M. Truly Risk-based Regulation of Artificial Intelligence: How to Implement the EU's AI Act. European Journal of Risk Regulation, 2024, 1-20. doi:10.1017/err.2024.78
- 33. Novelli C., Casolari F., Rotolo A., Taddeo M., Floridi L. AI risk assessment: A scenario-based, proportional methodology for the AI Act. Digital Society, 2024, 3(1), 13. https://doi.org/10.1007/s44206-024-00095-1
- 34. Muller B., Roth D., Kreimeyer M. Survey of the Role of Domain Experts in Recent AI System Life Cycle Models. In NORDDESIGN 2024 (pp. 256-265).
- 35. Steidl M., Golendukhina V., Felderer M., Ramler R. Automation and Development Effort in Continuous AI Development: A Practitioners' Survey. In 2023 IEEE Symposium on Software Engineering for AI (SEAA) (pp. 120-127). IEEE, 2023. https://doi.org/10.1109/SEAA60479.2023.00027
- 36. Dev J., Akhuseyinoglu N., Kayas G., Rashidi B., Garg V. Building Guardrails in AI Systems with Threat Modeling. Digital Government: Research and Practice, 2024. https://doi.org/10.1145/3674845
- 37. National Institute of Standards and Technology (NIST). Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations. 2024. Retrieved November 13, 2024, from https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf
Remarks
Record prepared with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. POPUL/SP/0154/2024/02, under the programme "Social Responsibility of Science II" - module: Popularisation of science and promotion of sport (2025).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-64c5263d-8309-4d5d-bf70-f72a5db32a1b