Article title
When to Trust AI: Advances and Challenges for Certification of Neural Networks

Identifiers
Title variants
Publication languages
EN
Abstracts
EN
Artificial intelligence (AI) has been advancing at a fast pace and is now poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing. Early adoption of AI technology for real-world applications has not been without problems, particularly for neural networks, which may be unstable and susceptible to adversarial examples. In the longer term, appropriate safety assurance techniques need to be developed to reduce potential harm due to avoidable system failures and to ensure trustworthiness. Focusing on certification and explainability, this paper provides an overview of techniques that have been developed to ensure the safety of AI decisions and discusses future challenges.
Year
Volume
Pages
25–37
Physical description
Bibliography: 61 items, illustrations, formulas, charts
Bibliography
  • 1. M. Wicker, X. Huang, and M. Kwiatkowska, “Feature-guided black-box safety testing of deep neural networks,” in International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 2018, pp. 408–426.
  • 2. X. Huang, M. Kwiatkowska, S. Wang, and M. Wu, “Safety verification of deep neural networks,” in 29th International Conference on Computer Aided Verification, ser. Lecture Notes in Computer Science, vol. 10426. Springer, 2017. http://dx.doi.org/10.1007/978-3-319-63387-9_1 pp. 3–29.
  • 3. G. Katz, C. W. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer, “Reluplex: An efficient SMT solver for verifying deep neural networks,” in 29th International Conference on Computer Aided Verification, ser. Lecture Notes in Computer Science, vol. 10426. Springer, 2017. http://dx.doi.org/10.1007/978-3-319-63387-9_5 pp. 97–117.
  • 4. C. Brix, M. N. Müller, S. Bak, T. T. Johnson, and C. Liu, “First three years of the international verification of neural networks competition (VNN-COMP),” CoRR, vol. abs/2301.05815, 2023. http://dx.doi.org/10.48550/arXiv.2301.05815
  • 5. R. Falconmore, “On the role of explainability and uncertainty in ensuring safety of AI applications,” Ph.D. dissertation, University of Oxford, UK, 2022.
  • 6. B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Srndic, P. Laskov, G. Giacinto, and F. Roli, “Evasion attacks against machine learning at test time,” in European Conference on Machine Learning and Knowledge Discovery in Databases, ser. Lecture Notes in Computer Science, vol. 8190. Springer, 2013. http://dx.doi.org/10.1007/978-3-642-40994-3_25 pp. 387–402.
  • 7. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in 2nd International Conference on Learning Representations, 2014. [Online]. Available: http://arxiv.org/abs/1312.6199
  • 8. B. Biggio and F. Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2018. http://dx.doi.org/10.1145/3243734.3264418 pp. 2154–2156.
  • 9. B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in 29th International Conference on Machine Learning. icml.cc / Omnipress, 2012.
  • 10. M. Wu, M. Wicker, W. Ruan, X. Huang, and M. Kwiatkowska, “A game-based approximate verification of deep neural networks with provable guarantees,” Theoretical Computer Science, vol. 807, pp. 298–329, 2020. http://dx.doi.org/10.1016/j.tcs.2019.05.046
  • 11. K. Leino, Z. Wang, and M. Fredrikson, “Globally-robust neural networks,” in 38th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 139. PMLR, 2021, pp. 6212–6222.
  • 12. M. Du, N. Liu, and X. Hu, “Techniques for interpretable machine learning,” Commun. ACM, vol. 63, no. 1, pp. 68–77, 2020. http://dx.doi.org/10.1145/3359786
  • 13. C. Molnar, Interpretable machine learning. Lulu.com, 2020.
  • 14. S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek, “On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation,” PloS one, vol. 10, no. 7, p. e0130140, 2015.
  • 15. M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in International Conference on Machine Learning. PMLR, 2017, pp. 3319–3328.
  • 16. M. T. Ribeiro, S. Singh, and C. Guestrin, “"Why should I trust you?": Explaining the predictions of any classifier,” in 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016. http://dx.doi.org/10.1145/2939672.2939778 pp. 1135–1144.
  • 17. M. T. Ribeiro, S. Singh, and C. Guestrin, “Anchors: High-precision model-agnostic explanations,” in Thirty-Second AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
  • 18. A. Ignatiev, N. Narodytska, and J. Marques-Silva, “Abduction-based explanations for machine learning models,” in Thirty-Third AAAI Conference on Artificial Intelligence. AAAI Press, 2019. http://dx.doi.org/10.1609/aaai.v33i01.33011511 pp. 1511–1519.
  • 19. E. L. Malfa, R. Michelmore, A. M. Zbrzezny, N. Paoletti, and M. Kwiatkowska, “On guaranteed optimal robust explanations for NLP models,” in Thirtieth International Joint Conference on Artificial Intelligence. ijcai.org, 2021. http://dx.doi.org/10.24963/ijcai.2021/366 pp. 2658–2665.
  • 20. A. Janusz, D. Slezak, S. Stawicki, and K. Stencel, “A practical study of methods for deriving insightful attribute importance rankings using decision bireducts,” Inf. Sci., vol. 645, p. 119354, 2023. http://dx.doi.org/10.1016/j.ins.2023.119354
  • 21. B. Wang, S. Webb, and T. Rainforth, “Statistically robust neural network classification,” in Uncertainty in Artificial Intelligence. PMLR, 2021, pp. 1735–1745.
  • 22. T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and M. T. Vechev, “AI2: safety and robustness certification of neural networks with abstract interpretation,” in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2018. http://dx.doi.org/10.1109/SP.2018.00058 pp. 3–18.
  • 23. S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana, “Efficient formal safety analysis of neural networks,” in Annual Conference on Neural Information Processing Systems, 2018, pp. 6369–6379.
  • 24. G. Singh, T. Gehr, M. Püschel, and M. T. Vechev, “An abstract domain for certifying neural networks,” Proc. ACM Program. Lang., vol. 3, no. POPL, pp. 41:1–41:30, 2019. http://dx.doi.org/10.1145/3290354
  • 25. S. Wang, H. Zhang, K. Xu, X. Lin, S. Jana, C. Hsieh, and J. Z. Kolter, “Beta-CROWN: Efficient bound propagation with per-neuron split constraints for neural network robustness verification,” in Annual Conference on Neural Information Processing Systems, 2021, pp. 29909–29921.
  • 26. K. Matoba and F. Fleuret, “Exact preimages of neural network aircraft collision avoidance systems,” in Proceedings of the Machine Learning for Engineering Modeling, Simulation, and Design Workshop at Neural Information Processing Systems, 2020, pp. 1–9.
  • 27. S. Dathathri, S. Gao, and R. M. Murray, “Inverse abstraction of neural networks using symbolic interpolation,” in Thirty-Third AAAI Conference on Artificial Intelligence. AAAI Press, 2019. http://dx.doi.org/10.1609/aaai.v33i01.33013437 pp. 3437–3444.
  • 28. S. Kotha, C. Brix, Z. Kolter, K. Dvijotham, and H. Zhang, “Provably bounding neural network preimages,” CoRR, vol. abs/2302.01404, 2023. http://dx.doi.org/10.48550/arXiv.2302.01404
  • 29. X. Zhang, B. Wang, and M. Kwiatkowska, “On preimage approximation for neural networks,” CoRR, vol. abs/2305.03686, 2023. http://dx.doi.org/10.48550/arXiv.2305.03686
  • 30. A. Janusz, A. Zalewska, L. Wawrowski, P. Biczyk, J. Ludziejewski, M. Sikora, and D. Slezak, “BrightBox - A rough set based technology for diagnosing mistakes of machine learning models,” Appl. Soft Comput., vol. 141, p. 110285, 2023. http://dx.doi.org/10.1016/j.asoc.2023.110285
  • 31. E. Wong and J. Z. Kolter, “Provable defenses against adversarial examples via the convex outer adversarial polytope,” in 35th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 5283–5292.
  • 32. P. Cousot and R. Cousot, “Abstract interpretation frameworks,” J. Log. Comput., vol. 2, no. 4, pp. 511–547, 1992. http://dx.doi.org/10.1093/logcom/2.4.511
  • 33. M. Wu and M. Kwiatkowska, “Robustness guarantees for deep neural networks on videos,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition. Computer Vision Foundation / IEEE, 2020. http://dx.doi.org/10.1109/CVPR42600.2020.00039 pp. 308–317.
  • 34. H. Zhang, T. Weng, P. Chen, C. Hsieh, and L. Daniel, “Efficient neural network robustness certification with general activation functions,” in Annual Conference on Neural Information Processing Systems, 2018, pp. 4944–4953.
  • 35. K. Xu, H. Zhang, S. Wang, Y. Wang, S. Jana, X. Lin, and C. Hsieh, “Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers,” in 9th International Conference on Learning Representations. OpenReview.net, 2021.
  • 36. P. Cousot and R. Cousot, “Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints,” in Fourth ACM Symposium on Principles of Programming Languages. ACM, 1977. http://dx.doi.org/10.1145/512950.512973 pp. 238–252.
  • 37. M. Mirman, T. Gehr, and M. T. Vechev, “Differentiable abstract interpretation for provably robust neural networks,” in 35th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 80. PMLR, 2018, pp. 3575–3583.
  • 38. G. Singh, T. Gehr, M. Mirman, M. Püschel, and M. T. Vechev, “Fast and effective robustness certification,” in Annual Conference on Neural Information Processing Systems, 2018, pp. 10825–10836.
  • 39. A. Albarghouthi, “Introduction to neural network verification,” Found. Trends Program. Lang., vol. 7, no. 1-2, pp. 1–157, 2021. http://dx.doi.org/10.1561/2500000051
  • 40. R. Ehlers, “Formal verification of piece-wise linear feed-forward neural networks,” in Automated Technology for Verification and Analysis. Springer International Publishing, 2017, pp. 269–286.
  • 41. S. Dutta, S. Jha, S. Sankaranarayanan, and A. Tiwari, “Output range analysis for deep feedforward neural networks,” in 10th International Symposium on NASA Formal Methods, ser. Lecture Notes in Computer Science, vol. 10811. Springer, 2018. http://dx.doi.org/10.1007/978-3-319-77935-5_9 pp. 121–138.
  • 42. M. Fischetti and J. Jo, “Deep neural networks and mixed integer linear optimization,” Constraints An Int. J., vol. 23, no. 3, pp. 296–309, 2018. http://dx.doi.org/10.1007/s10601-018-9285-6
  • 43. V. Tjeng, K. Y. Xiao, and R. Tedrake, “Evaluating robustness of neural networks with mixed integer programming,” in 7th International Conference on Learning Representations. OpenReview.net, 2019.
  • 44. J. P. Vielma, “Mixed integer linear programming formulation techniques,” SIAM Rev., vol. 57, no. 1, pp. 3–57, 2015. http://dx.doi.org/10.1137/130915303
  • 45. R. Bunel, J. Lu, I. Turkaslan, P. H. S. Torr, P. Kohli, and M. P. Kumar, “Branch and bound for piecewise linear neural network verification,” J. Mach. Learn. Res., vol. 21, pp. 42:1–42:39, 2020.
  • 46. S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana, “Formal security analysis of neural networks using symbolic intervals,” in 27th USENIX Security Symposium. USENIX Association, 2018, pp. 1599–1614.
  • 47. M. Sotoudeh, Z. Tao, and A. V. Thakur, “SyReNN: A tool for analyzing deep neural networks,” Int. J. Softw. Tools Technol. Transf., vol. 25, no. 2, pp. 145–165, 2023. http://dx.doi.org/10.1007/s10009-023-00695-1
  • 48. A. Albarghouthi and K. L. McMillan, “Beautiful interpolants,” in 25th International Conference on Computer Aided Verification, ser. Lecture Notes in Computer Science, vol. 8044. Springer, 2013. http://dx.doi.org/10.1007/978-3-642-39799-8_22 pp. 313–329.
  • 49. W. Ruan, M. Wu, Y. Sun, X. Huang, D. Kroening, and M. Kwiatkowska, “Global robustness evaluation of deep neural networks with provable guarantees for the Hamming distance,” in Twenty-Eighth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2019, pp. 5944–5952.
  • 50. T. Baluta, Z. L. Chua, K. S. Meel, and P. Saxena, “Scalable quantitative verification for deep neural networks,” in 43rd IEEE/ACM International Conference on Software Engineering. IEEE, 2021. http://dx.doi.org/10.1109/ICSE43902.2021.00039 pp. 312–323.
  • 51. P. Yang, R. Li, J. Li, C. Huang, J. Wang, J. Sun, B. Xue, and L. Zhang, “Improving neural network verification through spurious region guided refinement,” in 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, ser. Lecture Notes in Computer Science, vol. 12651. Springer, 2021, pp. 389–408.
  • 52. E. L. Malfa, M. Wu, L. Laurenti, B. Wang, A. Hartshorn, and M. Kwiatkowska, “Assessing robustness of text classification through maximal safe radius computation,” in Findings of the Association for Computational Linguistics, ser. Findings of ACL, vol. EMNLP 2020. Association for Computational Linguistics, 2020. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.266 pp. 2949–2968.
  • 53. E. L. Malfa and M. Kwiatkowska, “The king is naked: On the notion of robustness for natural language processing,” in Thirty-Sixth AAAI Conference on Artificial Intelligence. AAAI Press, 2022, pp. 11047–11057.
  • 54. E. Benussi, A. Patanè, M. Wicker, L. Laurenti, and M. Kwiatkowska, “Individual fairness guarantees for neural networks,” in Thirty-First International Joint Conference on Artificial Intelligence. ijcai.org, 2022. http://dx.doi.org/10.24963/ijcai.2022/92 pp. 651–658.
  • 55. R. Michelmore, M. Wicker, L. Laurenti, L. Cardelli, Y. Gal, and M. Kwiatkowska, “Uncertainty quantification with statistical guarantees in end-to-end autonomous driving control,” in IEEE International Conference on Robotics and Automation. IEEE, 2020. http://dx.doi.org/10.1109/ICRA40945.2020.9196844 pp. 7344–7350.
  • 56. M. Wicker, L. Laurenti, A. Patane, and M. Kwiatkowska, “Probabilistic safety for Bayesian neural networks,” in Thirty-Sixth Conference on Uncertainty in Artificial Intelligence, ser. Proceedings of Machine Learning Research, vol. 124. AUAI Press, 2020, pp. 1198–1207.
  • 57. M. Wicker, L. Laurenti, A. Patane, N. Paoletti, A. Abate, and M. Kwiatkowska, “Certification of iterative predictions in Bayesian neural networks,” in Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, ser. Proceedings of Machine Learning Research, vol. 161. AUAI Press, 2021, pp. 1713–1723.
  • 58. P. Gourdeau, V. Kanade, M. Kwiatkowska, and J. Worrell, “On the hardness of robust classification,” in Advances in Neural Information Processing Systems, 2019, pp. 7444–7453.
  • 59. P. Gourdeau, V. Kanade, M. Kwiatkowska, and J. Worrell, “On the hardness of robust classification,” Journal of Machine Learning Research, vol. 22, 2021.
  • 60. P. Gourdeau, V. Kanade, M. Kwiatkowska, and J. Worrell, “Sample complexity bounds for robustly learning decision lists against evasion attacks,” in Thirty-First International Joint Conference on Artificial Intelligence. ijcai.org, 2022. http://dx.doi.org/10.24963/ijcai.2022/419 pp. 3022–3028.
  • 61. P. Gourdeau, V. Kanade, M. Kwiatkowska, and J. Worrell, “When are local queries useful?” in Advances in Neural Information Processing Systems, 2022.
Notes
1. This project received funding from the ERC under the European Union’s Horizon 2020 research and innovation programme (FUN2MODEL, grant agreement No. 834115) and ELSA: European Lighthouse on Secure and Safe AI project (grant agreement No. 101070617 under UK guarantee).
2. Main Track Invited Contributions
3. Record prepared with funds from MEiN (Ministry of Education and Science), agreement no. SONP/SP/546092/2022, under the "Społeczna odpowiedzialność nauki" (Social Responsibility of Science) programme, module: popularization of science and promotion of sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-58b2280e-43fa-4748-895b-09c3013e7add