Integrated neuro-symbolic architecture of user mental models for personalized explanations of intelligent systems decisions
References
Kingma, D. P., & Welling, M. Auto-Encoding Variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR 2014). Available at: https://arxiv.org/abs/1312.6114 (Accessed: 12 December 2025).
Badreddine, S., d’Avila Garcez, A., Serafini, L., & Spranger, M. Logic Tensor Networks. Artificial Intelligence, 2022, vol. 303, article no. 103649. DOI: 10.1016/j.artint.2021.103649.
Miller, T. Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence, 2019, vol. 267, pp. 1–38. DOI: 10.1016/j.artint.2018.07.007.
Chromik, M., Eiband, M., Buchner, F., Krüger, A., & Butz, A. I Think I Get Your Point, AI! The Illusion of Explanatory Depth in Explainable AI. In Proceedings of the 26th International Conference on Intelligent User Interfaces. New York, ACM, 2021, pp. 307–317. DOI: 10.1145/3397481.3450644.
Nimmo, R., Constantinides, M., Zhou, K., Quercia, D., & Stumpf, S. User Characteristics in Explainable AI: The Rabbit Hole of Personalization? In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2024). New York, ACM, 2024. DOI: 10.1145/3613904.3642352.
Pearl, J. Causality: Models, Reasoning, and Inference. 2nd edn. Cambridge, Cambridge University Press, 2009. ISBN 978-0521895606.
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. Causability and Explainability of Artificial Intelligence in Medicine. WIREs Data Mining and Knowledge Discovery, 2019, vol. 9, iss. 4, article no. e1312. DOI: 10.1002/widm.1312.
European Commission. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). EUR-Lex, 2021. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (Accessed: 12 December 2025).
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 2020, vol. 58, pp. 82–115. DOI: 10.1016/j.inffus.2019.12.012.
Ribeiro, M. T., Singh, S., & Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, ACM, 2016, pp. 1135–1144. DOI: 10.1145/2939672.2939778.
Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K.-R. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Proceedings of the IEEE, 2021, vol. 109, iss. 3, pp. 247–278. DOI: 10.1109/JPROC.2021.3060483.
Prenosil, G. A., Weitzel, T. K., Afshar-Oromieh, A., et al. Neuro-Symbolic AI for Auditable Cognitive Information Extraction from Medical Reports. Communications Medicine, 2025, vol. 5, article no. 491. DOI: 10.1038/s43856-025-01194-x.
DOI: https://doi.org/10.32620/aktt.2026.1.12
