Basic model of non-functional characteristics for assessment of artificial intelligence quality
References
Islam, M. R., Ahmed, M. U., Barua, S., Begum, S. A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks. Applied Sciences, 2022, vol. 12, article id: 1353. DOI: 10.3390/app12031353.
European Commission, High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. Available at: https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf. (accessed 10.03.2022).
European Commission, High-Level Expert Group on Artificial Intelligence. The Assessment List for Trustworthy Artificial Intelligence (ALTAI). Available at: https://airegio.ems-carsa.com/nfs/programme_5/call_3/call_preparation/ALTAI_final.pdf. (accessed 10.03.2022).
ISO/IEC TR 24372:2021. Information technology. Artificial intelligence. Overview of computational approaches for AI systems. Available at: https://www.iso.org/standard/78508.html. (accessed 10.03.2022).
ISO/IEC TR 24028:2020. Information technology. Artificial intelligence. Overview of trustworthiness in artificial intelligence. Available at: https://www.iso.org/standard/77608.html. (accessed 10.03.2022).
ISO/IEC TR 24027:2021. Information technology. Artificial intelligence. Bias in AI systems and AI aided decision making. Available at: https://www.iso.org/standard/77607.html. (accessed 10.03.2022).
IEEE 2941-2021. Standard for Artificial Intelligence (AI) Model Representation, Compression, Distribution, and Management. Available at: https://ieeexplore.ieee.org/document/6922153. (accessed 10.03.2022).
Phillips, P. J., Hahn, C. A., Fontana, P. C., Broniatowski, D. A., Przybocki, M. A. Four Principles of Explainable Artificial Intelligence: Draft NISTIR 8312. Gaithersburg, National Institute of Standards and Technology, 2020. 24 p. DOI: 10.6028/NIST.IR.8312.
Schwartz, R., Down, L., Jonas, A., Tabassi, E. Towards a Standard for Identifying and Managing Bias in Artificial Intelligence: NIST Special Publication 1270. Gaithersburg, National Institute of Standards and Technology, 2021. 77 p. DOI: 10.6028/NIST.SP.1270.
Stanton, B., Jensen, T. Trust and Artificial Intelligence: Draft NISTIR 8332. Gaithersburg, National Institute of Standards and Technology, 2022. 23 p. DOI: 10.6028/NIST.IR.8332-draft.
OECD. Tools for Trustworthy AI: A Framework to Compare Implementation Tools. Available at: https://www.oecd.org/science/tools-for-trustworthy-ai-008232ec-en.htm. (accessed 10.03.2022).
UNESCO. Recommendation on the Ethics of Artificial Intelligence. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000381137. (accessed 10.03.2022).
Christoforaki, M., Beyan, O. AI Ethics—A Bird’s Eye View. Applied Sciences, 2022, vol. 12, article id: 4130. DOI: 10.3390/app12094130.
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., Zhu, J. Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges. Natural Language Processing and Chinese Computing: collective monograph, edited by J. Tang, M. Y. Kan, D. Zhao, S. Li, H. Zan. Berlin/Heidelberg: Springer International Publishing, 2019, vol. 11839, pp. 563-574. DOI: 10.1007/978-3-030-32236-6_51.
Chatila, R., Dignum, V., Fisher, M., Giannotti, F., Morik, K., Russell, S., Yeung, K. Trustworthy AI. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): collective monograph, edited by B. Braunschweig, M. Ghallab. Cham: Springer International Publishing, 2021, vol. 12600, pp. 13-39. DOI: 10.1007/978-3-030-69128-8.
Gordieiev, O., Kharchenko, V. IT-oriented software quality models and evolution of the prevailing characteristics. Proc. of 9th Int. Conf. on Dependable Systems, Services and Technologies (DESSERT), 2018, pp. 375-380. DOI: 10.1109/DESSERT.2018.8409162.
Gordieiev, O., Kharchenko, V., Fusani, M. Software quality standards and models evolution: greenness and reliability issues. Information and communication technologies in education, research, and industrial applications: collective monograph, edited by V. Yakovyna, H. C. Mayr, M. Nikitchenko, G. Zholtkevych, A. Spivakovsky, S. Batsakis. Berlin/Heidelberg: Springer International Publishing, 2016, pp. 38-55. DOI: 10.1007/978-3-319-30246-1_3.
Gerstlacher, J., Groher, I., Plösch, R. Green and Sustainable Software in the Context of Software Quality Models. HMD Praxis der Wirtschaftsinformatik, 2021, article id: 554. DOI: 10.1365/s40702-021-00821-0.
Lenarduzzi, V., Lomio, F., Moreschini, S., Taibi, D., Tamburri, D. A. Software Quality for AI: Where We Are Now? Lecture Notes in Business Information Processing: collective monograph, edited by D. Winkler, S. Biffl, D. Mendez, M. Wimmer, J. Bergsmann. Cham: Springer International Publishing, 2021, vol. 404, pp. 43-53. DOI: 10.1007/978-3-030-65854-0_4.
Smith, A. L., Clifford, R. Quality characteristics of artificially intelligent systems. CEUR Workshop Proceedings (CEUR-WS), 2020, vol. 2800, pp. 1-6.
ISO/IEC 25010:2011. Systems and software engineering. Systems and software Quality Requirements and Evaluation (SQuaRE). System and software quality models. Available at: https://www.iso.org/standard/35733.html. (accessed 10.03.2022).
Gordieiev, O. Software individual requirement quality model. Radioelectronic and Computer Systems, 2020, vol. 2, pp. 48-58. DOI: 10.32620/reks.2020.2.04.
The Industrial Internet of Things. Trustworthiness Framework Foundations. An Industrial Internet Consortium Foundational Document. Version V1.00 – 2021-07-15. Available at: https://www.iiconsortium.org/pdf/Trustworthiness_Framework_Foundations.pdf. (accessed 10.03.2022).
Cambridge Dictionary. Acceptability. Available at: https://dictionary.cambridge.org/dictionary/english/acceptability. (accessed 10.03.2022).
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 2020, vol. 58, pp. 82-115. DOI: 10.1016/j.inffus.2019.12.012.
Adadi, A., Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 2018, vol. 6, pp. 52138-52160. DOI: 10.1109/ACCESS.2018.2870052.
Burciaga, A. Six Essential Elements of a Responsible AI Model. Available at: https://www.forbes.com/sites/forbestechcouncil/2021/09/01/six-essential-elements-of-a-responsible-ai-model/?sh=21ebcb8456cf. (accessed 10.03.2022).
Holzinger, A., Langs, G., Denk, H., Zatloukal, K., Müller, H. Causability and Explainability of Artificial Intelligence in Medicine. WIREs Data Mining and Knowledge Discovery, 2019, vol. 9, pp. 1-13. DOI: 10.1002/widm.1312.
Cambridge Dictionary. Comprehensibility. Available at: https://dictionary.cambridge.org/dictionary/english/comprehensibility. (accessed 10.03.2022).
Ghajargar, M., Bardzell, J., Renner, A. S., Krogh, P. G., Höök, K., Cuartielles, D., Boer, L., Wiberg, M. From “Explainable AI” to “Graspable AI”. Proc. of 15th Int. Conf. on Tangible, Embedded, and Embodied Interaction (TEI), 2021, pp. 1-4. DOI: 10.1145/3430524.3442704.
Baum, K., Mantel, S., Schmidt, E., Speith, T. From Responsibility to Reason-Giving Explainable Artificial Intelligence. Philosophy & Technology, 2022, vol. 35, article id: 12. DOI: 10.1007/s13347-022-00510-w.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., Kagal, L. Explaining explanations: An overview of interpretability of machine learning. Proc. of 5th Int. Conf. on Data Science and Advanced Analytics (DSAA), 2018, pp. 80-89. DOI: 10.1109/DSAA.2018.00018.
Wright, D. Understanding "Trustworthy" AI: NIST Proposes Model to Measure and Enhance User Trust in AI Systems. Available at: https://www.jdsupra.com/legalnews/understanding-trustworthy-ai-nist-6387341. (accessed 10.03.2022).
Mora-Cantallops, M., Sánchez-Alonso, S., García-Barriocanal, E., Sicilia, M.-A. Traceability for Trustworthy AI: A Review of Models and Tools. Big Data and Cognitive Computing, 2021, vol. 5, iss. 2, article id: 20. DOI: 10.3390/bdcc5020020.
Zhang, C., Wang, J., Yen, G. G., Zhao, C., Sun, Q., Tang, Y., Qian, F., Kurths, J. When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey. Patterns, 2020, vol. 1, iss. 4, pp. 1-28. DOI: 10.1016/j.patter.2020.100050.
Patil, K. R., Heinrichs, B. Verifiability as a Complement to AI Explainability: A Conceptual Proposal [Preprint]. Available at: http://philsci-archive.pitt.edu/20297. (accessed 10.03.2022).
Moskalenko, V., Zaretskyi, M., Moskalenko, A., Korobov, A., Kovalskyi, Y. Bahatoetapnyi metod hlybynnoho navchannia z poperednim samonavchanniam dlia klasyfikatsiinoho analizu defektiv stichnykh trub [Multi-stage deep learning method with self-supervised pretraining for sewer pipe defects classification]. Radioelectronic and Computer Systems, 2021, vol. 4, pp. 71-81. DOI: 10.32620/reks.2021.4.06.
Kuchuk, H., Podorozhniak, A., Liubchenko, N., Onischenko, D. System of license plate recognition considering large camera shooting angles. Radioelectronic and Computer Systems, 2021, vol. 4, pp. 82-91. DOI: 10.32620/reks.2021.4.07.
Felderer, M., Ramler, R. Quality Assurance for AI-Based Systems: Overview and Challenges. Lecture Notes in Business Information Processing: collective monograph, edited by D. Winkler, S. Biffl, D. Mendez, M. Wimmer, J. Bergsmann. Cham: Springer International Publishing, 2021, vol. 404, pp. 33-42. DOI: 10.1007/978-3-030-65854-0_3.
Bloomfield, R., Netkachova, K., Stroud, R. Security-informed safety: If it’s not secure, it’s not safe. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): collective monograph, edited by S. Tonetta, E. Schoitsch, F. Bitsch. Cham: Springer International Publishing, 2013, vol. 8166, pp. 17-32. DOI: 10.1007/978-3-642-40894-6_2.
Potii, O., Illiashenko, O., Komin, D. Advanced security assurance case based on ISO/IEC 15408. Advances in Intelligent Systems and Computing: collective monograph, edited by W. Zamojski, J. Mazurkiewicz, J. Sugier, T. Walkowiak, J. Kacprzyk. Cham: Springer International Publishing, 2015, vol. 365, pp. 391-401. DOI: 10.1007/978-3-319-19216-1_37.
Illiashenko, O. O., Kolisnyk, M. A., Strielkina, A. E., Kotsiuba, I. V., Kharchenko, V. S. Conception and application of dependable Internet of Things based systems. Radio Electronics, Computer Science, Control, 2020, vol. 4, pp. 139-150. DOI: 10.15588/1607-3274-2020-4-14.
Tetskyi, A. G., Kharchenko, V. S., Uzun, D. D., Nechausov, A. S. Architecture and Model of Neural Network Based Service for Choice of the Penetration Testing Tools. International Journal of Computing, 2021, vol. 20, iss. 4, pp. 513-518. DOI: 10.47839/ijc.20.4.2438.
DOI: https://doi.org/10.32620/reks.2022.2.11