Article title

Measuring Trustworthiness in Neuro-Symbolic Integration

Publication languages
EN
Abstracts
EN
Neuro-symbolic integration of symbolic and subsymbolic techniques is a fast-growing AI trend aimed at mitigating the issues of neural networks in terms of decision processes, reasoning, and interpretability. Several state-of-the-art neuro-symbolic approaches aim at improving performance, most of them focusing on proving their effectiveness in terms of raw predictive performance and/or reasoning capabilities. Meanwhile, few efforts have been devoted to increasing model trustworthiness, interpretability, and efficiency, mostly because of the difficulty of effectively measuring improvements in trustworthiness and interpretability. This is why, in this work, we analyse and discuss the need for ad-hoc trustworthiness metrics for neuro-symbolic techniques. We focus on two popular paradigms mixing subsymbolic computation and symbolic knowledge, namely: (i) symbolic knowledge extraction (SKE), aimed at mapping subsymbolic models into human-interpretable knowledge bases; and (ii) symbolic knowledge injection (SKI), aimed at forcing subsymbolic models to adhere to given symbolic knowledge. We first emphasise the need for assessing neuro-symbolic approaches from a trustworthiness perspective, highlighting the research challenges linked with this evaluation and the need for ad-hoc definitions of trust. We then summarise recent developments in SKE and SKI metrics, focusing specifically on trustworthiness pillars such as the interpretability, efficiency, and robustness of neuro-symbolic methods. Finally, we highlight open research opportunities towards reliable and flexible trustworthiness metrics for neuro-symbolic integration.
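To make the notion of an ad-hoc trustworthiness metric more concrete, the following is a minimal, illustrative Python sketch (not taken from the paper) of two toy measures for the paradigms mentioned in the abstract: a fidelity score for SKE, i.e. how often the extracted rules reproduce the network's predictions, and a violation rate for SKI, i.e. how often the trained model's predictions contradict the injected knowledge. All function names and signatures are hypothetical placeholders.

    # Illustrative sketch only; every name below is a hypothetical placeholder.
    from typing import Callable, Sequence

    def ske_fidelity(
        inputs: Sequence,
        network_predict: Callable[[object], int],
        rules_predict: Callable[[object], int],
    ) -> float:
        """SKE: fraction of inputs on which the extracted rules reproduce the
        predictions of the underlying neural network (higher is better)."""
        if not inputs:
            raise ValueError("fidelity is undefined on an empty input set")
        agreements = sum(1 for x in inputs if rules_predict(x) == network_predict(x))
        return agreements / len(inputs)

    def ski_violation_rate(
        inputs: Sequence,
        network_predict: Callable[[object], int],
        satisfies_knowledge: Callable[[object, int], bool],
    ) -> float:
        """SKI: fraction of predictions that contradict the injected symbolic
        knowledge (lower is better)."""
        if not inputs:
            raise ValueError("violation rate is undefined on an empty input set")
        violations = sum(1 for x in inputs if not satisfies_knowledge(x, network_predict(x)))
        return violations / len(inputs)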
Pages
1--10
Physical description
Bibliography: 44 items
Authors
  • Alma Mater Studiorum, Università di Bologna, Italy
  • Alma Mater Studiorum, Università di Bologna, Italy
Bibliography
  • 1. Z.-Q. Zhao, P. Zheng, S.-t. Xu, and X. Wu, “Object detection with deep learning: A review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3212–3232, 2019. [Online]. Available: https://ieeexplore.ieee.org/document/8627998
  • 2. A. Agiollo, G. Ciatto, and A. Omicini, “Shallow2Deep: Restraining neural networks opacity through neural architecture search,” in Explainable and Transparent AI and Multi-Agent Systems, ser. Lecture Notes in Computer Science, D. Calvaresi, A. Najjar, M. Winikoff, and K. Främling, Eds. Cham: Springer, 2021, vol. 12688, pp. 63–82. [Online]. Available: http://link.springer.com/10.1007/978-3-030-82017-6_5
  • 3. D. W. Otter, J. R. Medina, and J. K. Kalita, “A survey of the usages of deep learning for natural language processing,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 604–624, 2021. [Online]. Available: https://ieeexplore.ieee.org/document/9075398
  • 4. A. Agiollo, L. C. Siebert, P. K. Murukannaiah, and A. Omicini, “The quarrel of local post-hoc explainers for moral values classification in natural language processing,” in Explainable and Transparent AI and Multi-Agent Systems, ser. Lecture Notes in Computer Science, D. Calvaresi, A. Najjar, A. Omicini, R. Aydoǧan, R. Carli, G. Ciatto, Y. Mualla, and K. Främling, Eds. Springer, 2023, vol. 14127, ch. 6. [Online]. Available: http://link.springer.com/10.1007/978-3-031-40878-6_6
  • 5. Z. Zhang, P. Cui, and W. Zhu, “Deep learning on graphs: A survey,” IEEE Transactions on Knowledge and Data Engineering, vol. 34, no. 1, pp. 249–270, 2022. [Online]. Available: https://ieeexplore.ieee.org/document/9039675
  • 6. A. Agiollo and A. Omicini, “GNN2GNN: Graph neural networks to generate neural networks,” in Uncertainty in Artificial Intelligence, ser. Proceedings of Machine Learning Research, J. Cussens and K. Zhang, Eds., vol. 180. ML Research Press, Aug. 2022. ISSN 2640-3498 pp. 32–42, proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, UAI 2022, 1-5 August 2022, Eindhoven, The Netherlands. [Online]. Available: https://proceedings.mlr.press/v180/agiollo22a.html
  • 7. A. Agiollo, E. Bardhi, M. Conti, R. Lazzeretti, E. Losiouk, and A. Omicini, “GNN4IFA: Interest flooding attack detection with graph neural networks,” in 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), IEEE Computer Society. Los Alamitos, CA, USA: IEEE Computer Society, Jul. 2023. ISBN 978-1-6654-6512-0 pp. 615–630. [Online]. Available: https://www.computer.org/csdl/proceedings-article/eurosp/2023/651200a615
  • 8. K. Benidis, S. S. Rangapuram, V. Flunkert, Y. Wang, D. C. Maddix, A. C. Türkmen, J. Gasthaus, M. Bohlke-Schneider, D. Salinas, L. Stella, F. Aubet, L. Callot, and T. Januschowski, “Deep learning for time series forecasting: Tutorial and literature survey,” ACM Computing Surveys, vol. 55, no. 6, pp. 121:1–121:36, 2023. [Online]. Available: https://dl.acm.org/doi/10.1145/3533382
  • 9. A. Agiollo, M. Conti, P. Kaliyar, T. Lin, and L. Pajola, “DETONAR: Detection of routing attacks in RPL-based IoT,” IEEE Transactions on Network and Service Management, vol. 18, no. 2, pp. 1178–1190, 2021. [Online]. Available: https://ieeexplore.ieee.org/document/9415869
  • 10. J. Zhang and C. Li, “Adversarial examples: Opportunities and challenges,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 7, pp. 2578–2593, 2020. [Online]. Available: https://ieeexplore.ieee.org/document/8842604
  • 11. A. C. Serban, E. Poll, and J. Visser, “Adversarial examples on object recognition: A comprehensive survey,” ACM Computing Surveys, vol. 53, no. 3, pp. 66:1–66:38, 2021. [Online]. Available: https://dl.acm.org/doi/10.1145/3398394
  • 12. C. Novelli, M. Taddeo, and L. Floridi, “Accountability in artificial intelligence: what it is and how it works,” AI & SOCIETY, pp. 1–12, 2023. [Online]. Available: https://link.springer.com/10.1007/s00146-023-01635-y
  • 13. M. M. A. de Graaf and B. F. Malle, “How people explain action (and autonomous intelligent systems should too),” in 2017 AAAI Fall Symposia, Arlington, Virginia, USA, November 9-11, 2017. AAAI Press, 2017, pp. 19–26. [Online]. Available: https://aaai.org/ocs/index.php/FSS/FSS17/paper/view/16009
  • 14. C. Huang and B. Mutlu, “Robot behavior toolkit: generating effective social behaviors for robots,” in International Conference on Human-Robot Interaction, HRI’12, Boston, MA, USA - March 05 - 08, 2012, H. A. Yanco, A. Steinfeld, V. Evers, and O. C. Jenkins, Eds. ACM, 2012, pp. 25–32. [Online]. Available: https://dl.acm.org/doi/10.1145/2157689.2157694
  • 15. Z. Li, X. Wang, E. Stengel-Eskin, A. Kortylewski, W. Ma, B. V. Durme, and A. L. Yuille, “Super-CLEVR: A virtual benchmark to diagnose domain robustness in visual reasoning,” CoRR, vol. abs/2212.00259, 2022. [Online]. Available: https://arxiv.org/abs/2212.00259
  • 16. C. W. Wu, A. C. Wu, and J. Strom, “DeepTune: Robust global optimization of electronic circuit design via neuro-symbolic optimization,” in IEEE International Symposium on Circuits and Systems, ISCAS 2021, Daegu, South Korea, May 22-28, 2021. IEEE, 2021, pp. 1–5. [Online]. Available: https://ieeexplore.ieee.org/document/9401488
  • 17. A. Liu, H. Xu, G. Van den Broeck, and Y. Liang, “Out-of-distribution generalization by neural-symbolic joint training,” in AAAI Conference on Artificial Intelligence, vol. 37, no. 10, 2023, pp. 12252–12259. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/26444
  • 18. M. I. Nye, M. H. Tessler, J. B. Tenenbaum, and B. M. Lake, “Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning,” in Advances in Neural Information Processing Systems 34 (NeurIPS 2021), M. Ranzato, A. Beygelzimer, Y. N. Dauphin, P. Liang, and J. W. Vaughan, Eds., 2021, pp. 25192–25204. [Online]. Available: https://proceedings.neurips.cc/paper/2021/hash/d3e2e8f631bd9336ed25b8162aef8782-Abstract.html
  • 19. X. Xie, K. Kersting, and D. Neider, “Neuro-symbolic verification of deep neural networks,” in Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, L. De Raedt, Ed. ijcai.org, 2022, pp. 3622–3628. [Online]. Available: https://www.ijcai.org/proceedings/2022/503
  • 20. E. Marconato, G. Bontempo, E. Ficarra, S. Calderara, A. Passerini, and S. Teso, “Neuro symbolic continual learning: Knowledge, reasoning shortcuts and concept rehearsal,” CoRR, vol. abs/2302.01242, 2023. [Online]. Available: https://arxiv.org/abs/2302.01242
  • 21. C. Yang and S. Chaudhuri, “Safe neurosymbolic learning with differentiable symbolic execution,” in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [Online]. Available: https://openreview.net/forum?id=NYBmJN4MyZ
  • 22. M. R. Vilamala, T. Xing, H. Taylor, L. Garcia, M. Srivastava, L. M. Kaplan, A. D. Preece, A. Kimmig, and F. Cerutti, “DeepProbCEP: A neuro-symbolic approach for complex event processing in adversarial settings,” Expert Systems with Applications, vol. 215, pp. 119376:1–26, 2023. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0957417422023946
  • 23. G. Ibarra-Vázquez, G. Olague, M. Chan-Ley, C. Puente, and C. Soubervielle-Montalvo, “Brain programming is immune to adversarial attacks: Towards accurate and robust image classification using symbolic learning,” Swarm and Evolutionary Computation, vol. 71, p. 101059, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S2210650222000311
  • 24. M. Denil and T. P. Trappenberg, “Overlap versus imbalance,” in Advances in Artificial Intelligence, ser. Lecture Notes in Computer Science, A. Farzindar and V. Keselj, Eds., vol. 6085. Springer, 2010, pp. 220–231. [Online]. Available: https://link.springer.com/10.1007/978-3-642-13059-5_22
  • 25. A. C. Lorena, L. P. F. Garcia, J. Lehmann, M. C. P. de Souto, and T. K. Ho, “How complex is your classification problem?: A survey on measuring classification complexity,” ACM Computing Surveys, vol. 52, no. 5, pp. 107:1–107:34, 2019. [Online]. Available: https://dl.acm.org/doi/10.1145/3347711
  • 26. C. G. Northcutt, L. Jiang, and I. L. Chuang, “Confident learning: Estimating uncertainty in dataset labels,” Journal of Artificial Intelligence Research, vol. 70, pp. 1373–1411, 2021. [Online]. Available: https://jair.org/index.php/jair/article/view/12125
  • 27. Y. Lu, Y. Cheung, and Y. Y. Tang, “Bayes imbalance impact index: A measure of class imbalanced data set for classification problem,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 9, pp. 3525–3539, 2020. [Online]. Available: https://ieeexplore.ieee.org/document/8890005
  • 28. D. C. Corrales, J. C. Corrales, and A. Ledezma, “How to address the data quality issues in regression models: A guided process for data cleaning,” Symmetry, vol. 10, no. 4, p. 99, 2018. [Online]. Available: https://www.mdpi.com/2073-8994/10/4/99
  • 29. M. K. Sarker, L. Zhou, A. Eberhart, and P. Hitzler, “Neuro-symbolic artificial intelligence,” AI Communications, vol. 34, no. 3, pp. 197–209, 2021. [Online]. Available: https://content.iospress.com/articles/ai-communications/aic210084
  • 30. R. R. Hoffman, S. T. Mueller, G. Klein, and J. Litman, “Metrics for explainable AI: challenges and prospects,” CoRR, vol. abs/1812.04608, 2018. [Online]. Available: http://arxiv.org/abs/1812.04608
  • 31. A. Nguyen and M. R. Martínez, “On quantitative aspects of model interpretability,” CoRR, vol. abs/2007.07584, 2020. [Online]. Available: https://arxiv.org/abs/2007.07584
  • 32. A. Holzinger, A. M. Carrington, and H. Müller, “Measuring the quality of explanations: The system causability scale (SCS),” KI - Künstliche Intelligenz, vol. 34, no. 2, pp. 193–198, 2020. [Online]. Available: https://link.springer.com/10.1007/s13218-020-00636-z
  • 33. A. Holzinger, G. Langs, H. Denk, K. Zatloukal, and H. Müller, “Causability and explainability of artificial intelligence in medicine,” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, vol. 9, no. 4, pp. e1312:1–13, 2019. [Online]. Available: https://wires.onlinelibrary.wiley.com/doi/full/10.1002/widm.1312
  • 34. H. Lakkaraju, E. Kamar, R. Caruana, and J. Leskovec, “Interpretable & explorable approximations of black box models,” CoRR, vol. abs/1707.01154, 2017. [Online]. Available: http://arxiv.org/abs/1707.01154
  • 35. F. Chierichetti, R. Kumar, S. Lattanzi, and S. Vassilvitskii, “Matroids, matchings, and fairness,” in 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, ser. Proceedings of Machine Learning Research, K. Chaudhuri and M. Sugiyama, Eds., vol. 89. PMLR, 2019, pp. 2212–2220. [Online]. Available: http://proceedings.mlr.press/v89/chierichetti19a.html
  • 36. R. Calegari, G. G. Castañé, M. Milano, and B. O’Sullivan, “Assessing and enforcing fairness in the AI lifecycle,” in 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023). Macau, China: IJCAI, August 19–25 2023.
  • 37. B. Wagner and A. d’Avila Garcez, “Neural-symbolic integration for fairness in AI,” in AAAI-MAKE 2021 – Combining Machine Learning and Knowledge Engineering, ser. CEUR Workshop Proceedings, A. Martin, K. Hinkelmann, H. Fill, A. Gerber, D. Lenat, R. Stolle, and F. van Harmelen, Eds., vol. 2846. CEUR-WS.org, 2021. [Online]. Available: https://ceur-ws.org/Vol-2846/paper5.pdf
  • 38. S. Badreddine, A. S. d’Avila Garcez, L. Serafini, and M. Spranger, “Logic tensor networks,” Artificial Intelligence, vol. 303, pp. 103649:1–39, 2022. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0004370221002009
  • 39. X. Gao, J. Zhai, S. Ma, C. Shen, Y. Chen, and Q. Wang, “FairNeuron: improving deep neural network fairness with adversary games on selective neurons,” in 44th International Conference on Software Engineering, ICSE 2022. ACM, 2022, pp. 921–933. [Online]. Available: https://dl.acm.org/doi/10.1145/3510003.3510087
  • 40. A. Agiollo, A. Rafanelli, and A. Omicini, “Towards quality-of-service metrics for symbolic knowledge injection,” in WOA 2022 – 23rd Workshop “From Objects to Agents”, ser. CEUR Workshop Proceedings, A. Ferrando and V. Mascardi, Eds., vol. 3261. Sun SITE Central Europe, RWTH Aachen University, 2022. ISSN 1613-0073 pp. 30–47. [Online]. Available: http://ceur-ws.org/Vol-3261/paper3.pdf
  • 41. A. Agiollo, A. Rafanelli, M. Magnini, G. Ciatto, and A. Omicini, “Symbolic knowledge injection meets intelligent agents: QoS metrics and experiments,” Autonomous Agents and Multi-Agent Systems, vol. 37, no. 2, pp. 27:1–27:30, Jun. 2023. [Online]. Available: https://link.springer.com/10.1007/s10458-023-09609-6
  • 42. J. Mao, C. Gan, P. Kohli, J. B. Tenenbaum, and J. Wu, “The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision,” in 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. [Online]. Available: https://openreview.net/forum?id=rJgMlhRctm
  • 43. Q. Zhang, L. Wang, S. Yu, S. Wang, Y. Wang, J. Jiang, and E. Lim, “NOAHQA: Numerical reasoning with interpretable graph question answering dataset,” in Findings of the Association for Computational Linguistics: EMNLP 2021, M. Moens, X. Huang, L. Specia, and S. W. Yih, Eds. ACL, 2021, pp. 4147–4161. [Online]. Available: https://aclanthology.org/2021.findings-emnlp.350/
  • 44. B. Škrlj, M. Martinc, N. Lavrač, and S. Pollak, “autoBOT: evolving neuro-symbolic representations for explainable low resource text classification,” Machine Learning, vol. 110, no. 5, pp. 989–1028, 2021. [Online]. Available: https://link.springer.com/article/10.1007/s10994-021-05968-x
Notes
1. This work was partially supported by PNRR – M4C2 – Investimento 1.3, Partenariato Esteso PE00000013 – “FAIR - Future Artificial Intelligence Research” – Spoke 8 “Pervasive AI”, funded by the European Commission under the NextGenerationEU programme, and by the CHIST-ERA IV project “EXPECTATION” (CHIST-ERA-19-XAI-005), co-funded by the EU and the Italian MUR (Ministry for University and Research).
2. Main Track Invited Contributions
3. The record was prepared using funds from MEiN (Ministry of Education and Science), agreement no. SONP/SP/546092/2022, under the programme “Social Responsibility of Science” – module: Popularisation of science and promotion of sport (2024).
Document type
YADDA identifier
bwmeta1.element.baztech-b8a6eda6-96d8-47c2-b6b9-da52c88920c4