Analysis of approaches to explainable semantic verification of financial data mining results

Dmytro Baraniei, Tetiana Filimonchuk

Abstract


The study examines methods of explainability and semantic verification for financial data mining outcomes in computer decision support systems, particularly in high-risk industries such as aerospace. The purpose of the article is to analyze modern explainability methods and approaches to verifying the results of intelligent systems within a financial context, identify their limitations, and justify an approach to explainable semantic verification based on a combination of xAI methods, ontological knowledge representation, and formal verification procedures. Tasks include: analyzing modern methods of explainability and approaches to verifying the functioning of intelligent systems; identifying the limitations of existing solutions in ensuring the logical admissibility, semantic consistency, and reliability of data mining results; and developing an approach to explainable semantic verification specifically for financial data. The study employs methods of analyzing and generalizing scientific sources, systemic and comparative analysis, and approaches to ontological modeling and the semantic interpretation of machine learning results. The findings indicate that modern xAI approaches provide interpretations of machine learning outputs but do not guarantee their logical admissibility, semantic consistency, or regulatory acceptability in the financial sphere. The feasibility of integrating xAI methods with ontological knowledge models and formal verification procedures is substantiated, allowing for expanded quality control of analytical conclusions. The proposed approach involves the sequential implementation of analytical result formation, explanation generation, semantic mapping, logical verification, and result reliability assessment. The scientific novelty of the obtained results lies in the justification of the proposed approach to explainable semantic verification of financial data mining results. Unlike existing methods, this approach provides not only a meaningful interpretation of the output, but also its logical verification and semantic consistency with domain-specific knowledge and industry constraints.
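The five-stage pipeline described in the abstract (result formation, explanation generation, semantic mapping, logical verification, reliability assessment) can be sketched in code. The following is a minimal illustration under stated assumptions: the ontology fragment, feature names, concept labels, and the `verify` helper are hypothetical, and the feature attributions stand in for output that would in practice come from an xAI method such as SHAP or LIME.

```python
from dataclasses import dataclass

# Hypothetical ontology fragment: maps model feature names to domain
# concepts and states a simple admissibility constraint per feature.
ONTOLOGY = {
    "debt_to_income": {"concept": "CreditRisk", "max_attribution": 0.9},
    "transaction_volume": {"concept": "LiquidityIndicator", "max_attribution": 0.9},
}

@dataclass
class VerifiedResult:
    prediction: str
    explanation: dict   # feature -> attribution weight (stage 2 output)
    concepts: dict      # feature -> ontology concept (stage 3)
    violations: list    # features failing logical checks (stage 4)
    reliability: float  # share of explanation mass that passed (stage 5)

def verify(prediction: str, attributions: dict) -> VerifiedResult:
    # Stage 3, semantic mapping: attach each feature to a domain concept.
    concepts = {f: ONTOLOGY[f]["concept"] for f in attributions if f in ONTOLOGY}
    # Stage 4, logical verification: flag features with no ontological
    # grounding or with attributions breaching the admissibility constraint.
    violations = [f for f in attributions
                  if f not in ONTOLOGY
                  or abs(attributions[f]) > ONTOLOGY[f]["max_attribution"]]
    # Stage 5, reliability: fraction of total attribution mass that is
    # semantically grounded and logically admissible.
    total = sum(abs(w) for w in attributions.values()) or 1.0
    ok = sum(abs(w) for f, w in attributions.items() if f not in violations)
    return VerifiedResult(prediction, attributions, concepts, violations, ok / total)
```

For example, verifying a prediction whose explanation leans on a feature absent from the ontology would yield that feature in `violations` and a reliability score below 1.0, signaling that the conclusion is not fully consistent with domain knowledge.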

Keywords


financial intelligent systems; explainable verification; explainable artificial intelligence; ontological models; semantic mapping; information technology; algorithmic accountability


References


Kabir, S., Hossain, M. S., & Andersson, K. A Review of Explainable Artificial Intelligence from the Perspectives of Challenges and Opportunities. Algorithms, 2025, vol. 18, no. 9, Article no. 556. DOI: 10.3390/a18090556.

Chen, X.-Q., Ma, C.-Q., Ren, Y.-S., Lei, Y.-T., Huynh, N. Q. A. & Narayan, S. Explainable artificial intelligence in finance: A bibliometric review. Finance Research Letters, 2023, vol. 56, Article no. 104145. DOI: 10.1016/j.frl.2023.104145.

Kostopoulos, G., Davrazos, G. & Kotsiantis, S. Explainable Artificial Intelligence-Based Decision Support Systems: A Recent Review. Electronics, 2024, vol. 13, no. 14, Article no. 2842. DOI: 10.3390/electronics13142842.

Klein, T. & Walther, T. Editorial: Advances in Explainable Artificial Intelligence (xAI) in Finance. Finance Research Letters, 2024, vol. 70, Article no. 106358. DOI: 10.1016/j.frl.2024.106358.

Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, 2024. Available at: http://data.europa.eu/eli/reg/2024/1689/oj (accessed 28 January 2026).

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation, GDPR). Official Journal of the European Union, 2016. Available at: http://data.europa.eu/eli/reg/2016/679/oj (accessed 28 January 2026).

Pileggi, S. F. Ontology in Hybrid Intelligence: A Concise Literature Review. Future Internet, 2024, vol. 16, no. 8, Article no. 268. DOI: 10.3390/fi16080268.

Guizzardi, G. & Guarino, N. Explanation, semantics, and ontology. Data & Knowledge Engineering, 2024, vol. 153, Article no. 102325. DOI: 10.1016/j.datak.2024.102325.

Coussement, K., Lessmann, S. & Verstraeten, G. Explainable AI for enhanced decision-making. Decision Support Systems, 2024, vol. 184, Article no. 114276. DOI: 10.1016/j.dss.2024.114276.

Reis, M. I., Gonçalves, J. N. C., Cortez, P., Carvalho, M. S. & Fernandes, J. M. A context-aware decision support system for selecting explainable artificial intelligence methods in business organizations. Computers in Industry, 2025, vol. 165, Article no. 104233. DOI: 10.1016/j.compind.2024.104233.

Aljunaid, S. K., Almheiri, S. J., Dawood, H., & Khan, M. A. Secure and Transparent Banking: Explainable AI-Driven Federated Learning Model for Financial Fraud Detection. Journal of Risk and Financial Management, 2025, vol. 18, no. 4, Article no. 179. DOI: 10.3390/jrfm18040179.

Armary, P., El-Vaigh, C. B., Narsis, O. L. & Nicolle, C. Ontology learning towards expressiveness: A survey. Computer Science Review, 2025, vol. 56, Article no. 100693. DOI: 10.1016/j.cosrev.2024.100693.

Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R. & Herrera, F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 2020, vol. 58, pp. 82–115. DOI: 10.1016/j.inffus.2019.12.012.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F. & Pedreschi, D. A survey of methods for explaining black box models. ACM Computing Surveys, 2018, vol. 51, no. 5, Article no. 93, pp. 1–42. DOI: 10.1145/3236009.

Longo, L., Brcic, M., Cabitza, F., Choi, J., Confalonieri, R., Del Ser, J., Guidotti, R., Hayashi, Y., Herrera, F., Holzinger, A., Jiang, R., Khosravi, H., Lecue, F., Malgieri, G., Páez, A., Samek, W., Schneider, J., Speith, T. & Stumpf, S. Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Information Fusion, 2024, vol. 106, Article no. 102301. DOI: 10.1016/j.inffus.2024.102301.

Saarela, M. & Podgorelec, V. Recent Applications of Explainable AI (XAI): A systematic literature review. Applied Sciences, 2024, vol. 14, no. 19, Article no. 8884. DOI: 10.3390/app14198884.

Altukhi, Z. M., Pradhan, S. & Aljohani, N. A systematic literature review of the latest advancements in XAI. Technologies, 2025, vol. 13, no. 3, Article no. 93. DOI: 10.3390/technologies13030093.

Ribeiro, M. T., Singh, S. & Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144. DOI: 10.1145/2939672.2939778.

Lundberg, S. & Lee, S.-I. A Unified Approach to Interpreting Model Predictions. Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 1–10. DOI: 10.48550/arXiv.1705.07874.

Černevičienė, J. & Kabašinskas, A. Explainable artificial intelligence (XAI) in finance: a systematic literature review. Artificial Intelligence Review, 2024, vol. 57, Article no. 216, pp. 1–45. DOI: 10.1007/s10462-024-10854-8.

Hogan, A., Blomqvist, E., Cochez, M., D'Amato, C., De Melo, G., Gutierrez, C., Kirrane, S., Gayo, J. E. L., Navigli, R., Neumaier, S., Ngonga Ngomo, A.-C., Polleres, A., Rashid, S. M., Rula, A., Schmelzeisen, L., Sequeda, J., Staab, S. & Zimmermann, A. Knowledge Graphs. ACM Computing Surveys, 2021, vol. 54, no. 4, Article no. 71, pp. 1–37. DOI: 10.1145/3447772.

Rožanec, J. M., Fortuna, B. & Mladenić, D. Knowledge graph-based rich and confidentiality preserving Explainable Artificial Intelligence (XAI). Information Fusion, 2022, vol. 81, pp. 91–102. DOI: 10.1016/j.inffus.2021.11.015.

Waltersdorfer, L. & Sabou, M. Leveraging Knowledge Graphs for AI System Auditing and Transparency. Journal of Web Semantics, 2025, vol. 84, Article no. 100849. DOI: 10.1016/j.websem.2024.100849.

Kuiper, O., Van Den Berg, M., Van Der Burgt, J. & Leijnen, S. Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities. In: Artificial Intelligence and Machine Learning. CCIS, vol. 1530. Springer, 2022, pp. 105–119. DOI: 10.1007/978-3-030-93842-0_6.

Yildiz, K. & Cil, A. E. A systematic literature review on applications of explainable artificial intelligence in the financial sector. Internet of Things, 2025, vol. 33, Article no. 101696. DOI: 10.1016/j.iot.2025.101696.

Tafech, A., & Rabhi, F. CEDAR: An Ontology-Based Framework Using Event Abstractions to Contextualise Financial Data Processes. Electronics, 2026, vol. 15, no. 1, Article no. 145. DOI: 10.3390/electronics15010145.

Nawaz, U., Anees-ur-Rahaman, M. & Saeed, Z. A review of neuro-symbolic AI integrating reasoning and learning for advanced cognitive systems. Intelligent Systems with Applications, 2025, vol. 26, Article no. 200541. DOI: 10.1016/j.iswa.2025.200541.




DOI: https://doi.org/10.32620/aktt.2026.2.10