Ontology-Driven Neurosymbolic Method for Building Personalized Mental Models of Intelligent System Decisions
Serhii Fedorovych Chalyi, Iryna Oleksandrivna Leshchynska
Abstract
The subject of the research is the construction of personalized mental models of intelligent system decisions based on the integration of neural network and symbolic approaches, with the aim of adapting the complexity of explanations to the user's level of expertise. The goal is to increase the comprehensibility of explanations and user trust in the recommendations of intelligent systems in critical domains, including aviation, by developing an ontology-driven neurosymbolic method for building personalized mental models. Tasks to be solved: to conduct a critical analysis of approaches to building explanations for intelligent system decisions, and to develop an ontology-driven neurosymbolic method for building personalized mental models of intelligent system decisions. The following results were obtained. The practical advantage of the developed method lies in its ability to automatically adapt the complexity and level of detail of explanations to the user's mental model, and to align generated explanations with users' internal representations of the logic of system decision-making, without involving domain experts in the generation of personalized explanations. Conclusions. The study developed an ontology-driven neurosymbolic method for building personalized mental models of intelligent system decisions, based on the integration of neural network classification of user expertise levels with an ontological formalization of domain knowledge and causal inference rules for the automated adaptation of explanation complexity to users' cognitive capabilities.
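The adaptation step described above can be illustrated with a minimal sketch. All layer names, expertise levels, and the mapping between them are hypothetical placeholders, not the authors' implementation: the point is only to show how a vertical composition of mental-model layers can select explanation depth from a classified expertise level.

```python
# Illustrative sketch (hypothetical names): vertical composition of
# mental-model layers, where the user's classified expertise level
# determines how many layers of the explanation are exposed.

# Ordered layers of a hypothetical mental model, from most abstract
# to most detailed.
MODEL_LAYERS = [
    "decision outcome",         # what the system decided
    "key influencing factors",  # which inputs mattered most
    "causal inference chain",   # domain rules linking factors to the outcome
    "model internals",          # feature weights, thresholds, confidence
]

# Hypothetical mapping from expertise level to explanation depth.
DEPTH_BY_LEVEL = {"novice": 1, "intermediate": 2, "advanced": 3, "expert": 4}

def compose_explanation(level: str) -> list[str]:
    """Return the subset of layers matching the user's expertise level."""
    depth = DEPTH_BY_LEVEL.get(level, 1)  # unknown levels fall back to the simplest view
    return MODEL_LAYERS[:depth]

print(compose_explanation("novice"))  # only the decision outcome
print(compose_explanation("expert"))  # the full layered mental model
```

In this reading, personalization reduces to choosing a cut point in an ordered stack of explanation layers, so no expert involvement is needed once the layers themselves are formalized in the ontology.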
The scientific novelty of the obtained results lies in the development of an ontology-driven neurosymbolic method for building personalized mental models of intelligent system decisions, which combines neural network classification of user expertise levels based on analysis of behavioral interaction trajectories, automatic transformation of neural network representations into symbolic rules through decision tree induction, formalization of an ontological domain model with causal inference rules, and vertical composition of mental model layers to adapt explanation complexity to the determined user expertise level. The method increases the relevance of explanations through personalized mental models that reflect users' representations of intelligent system decisions. Experimental verification on e-commerce system data demonstrated the effectiveness of the developed method through an increase in explanation relevance.
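The neural-to-symbolic transformation step can be sketched in miniature. The classifier, its single behavioral feature, and the data below are synthetic stand-ins (the abstract does not specify them); a one-level decision tree (stump) is induced over the classifier's own predictions, which is the simplest instance of extracting a symbolic rule from a subsymbolic model by decision tree induction.

```python
# Illustrative sketch (hypothetical names, synthetic data): approximating a
# "neural" expertise classifier with a symbolic IF-THEN rule by inducing a
# one-level decision tree (stump) over its own predictions.

def neural_classifier(clicks_per_session: float) -> str:
    """Stand-in for a trained network scoring one behavioral-trajectory feature."""
    return "expert" if clicks_per_session > 12.0 else "novice"

def induce_stump(samples):
    """Find the feature threshold that best reproduces the classifier's labels."""
    labeled = [(x, neural_classifier(x)) for x in samples]
    best_threshold, best_fidelity = None, -1.0
    # Candidate thresholds: midpoints between consecutive sorted feature values.
    xs = sorted(x for x, _ in labeled)
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2
        fidelity = sum(
            ("expert" if x > t else "novice") == y for x, y in labeled
        ) / len(labeled)
        if fidelity > best_fidelity:
            best_threshold, best_fidelity = t, fidelity
    return best_threshold, best_fidelity

threshold, fidelity = induce_stump([3, 5, 8, 11, 13, 16, 20, 25])
print(f"IF clicks_per_session > {threshold:.1f} THEN expert ELSE novice "
      f"(fidelity {fidelity:.0%})")
# -> IF clicks_per_session > 12.0 THEN expert ELSE novice (fidelity 100%)
```

A full implementation would induce a multi-level tree over many trajectory features and then ground the extracted rules in the ontological domain model, but the surrogate-fidelity principle is the same.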
Keywords
personalized mental model, ontology-driven neurosymbolic method, explainable artificial intelligence, ontological model, causal inference rules, expertise level classification, behavioral trajectory, decision tree induction, explanation relevance, mental
References
Ribeiro M.T., Singh S., Guestrin C. "Why Should I Trust You?" Explaining the Predictions of Any Classifier // Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016. P. 1135–1144. DOI: 10.1145/2939672.2939778.
Norman D.A. Some Observations on Mental Models // Mental Models / D. Gentner, A.L. Stevens (eds.). Hillsdale: Lawrence Erlbaum Associates, 1983. P. 7–14.
Chalyi S., Leshchynskyi V. Method of constructing explanations for recommender systems based on the temporal dynamics of user preferences // EUREKA: Physics and Engineering. 2020. Vol. 3. P. 43–50. DOI: 10.21303/2461-4262.2020.001228.
Garcez A.S., Gori M., Lamb L.C., Serafini L., Spranger M., Tran S.N. Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning // Journal of Applied Logics. 2019. Vol. 6, No. 4. P. 611–632.
Quinlan J.R. Induction of Decision Trees // Machine Learning. 1986. Vol. 1, No. 1. P. 81–106. DOI: 10.1007/BF00116251.
Gruber T.R. A Translation Approach to Portable Ontology Specifications // Knowledge Acquisition. 1993. Vol. 5, No. 2. P. 199–220. DOI: 10.1006/knac.1993.1008.
DOI: https://doi.org/10.32620/oikit.2026.107.18