Intelligent code analysis in automated grading

Denys Seliutin, Elena Yashyna

Abstract


Grading programming assignments remains difficult because students can solve the same problem in many different ways; the primary factors are the coexistence of several technological frameworks and a wide range of coding styles. The subject matter of this article is the process of intelligent evaluation of students' knowledge based on the code they write during regular practical work. The goal is to develop an approach to intelligent code analysis that can be easily implemented and integrated into the most widespread grading systems. The tasks to be solved are: formalization of code representation for intelligent analysis by applications; analysis of the current state of research and development in the field of automated analysis and evaluation of software code; and introduction of a technique that offers substantial feedback by integrating intelligent code analysis through code decomposition, giving grading systems an "understanding" of program logic. The research subjects are methods of programming code evaluation during distance learning. The methods used are tree-based classification analysis of code and graph-traversal methods adapted for tree linearization. The following results were obtained. 1. An examination of the current state of automated software code analysis and evaluation reveals that the issue is intricate because programming projects are difficult to assess manually; these difficulties are further exacerbated by the complexity of the code, the subjectivity of judgment, and the need to adapt to various technical structures. Consequently, there is an urgent demand for automated assessment methods in educational settings. 2. Representing the code structure as syntax trees was employed to create an automated tool for analyzing software code. This enabled the decomposition of the code into interrelated logical modules and the analysis of both the structure of these modules and the relationships between them. 3. The described methodologies and techniques were applied to the analysis of Java code. Syntactic analysis enabled the detection of problematic and erroneous code blocks and the identification of fraudulent attempts (manipulating the program's output instead of implementing the algorithm). Conclusions. Most current systems for the automatic evaluation of student work rely on testing, that is, on comparing the program's inputs and outputs. In contrast, the approach presented in this study examines the syntactic structure of the program, which enables precise identification of the position and type of mistakes. Intelligent examination of the gathered data makes it possible to formulate precise suggestions that help students improve their coding skills. The suggested instruments can be incorporated into an Intelligent Tutoring System designed for IT majors.
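To make the approach concrete, the following minimal sketch illustrates the two mechanisms named above on a small Java submission: each method (a "logical module") is extracted from the syntax tree, linearized by a pre-order traversal into a sequence of node-type labels, and checked with a crude heuristic that flags methods lacking control flow as candidates for hard-coded output. The article does not name a specific parser; the sketch assumes the open-source JavaParser library (com.github.javaparser), and the AstLinearizer class and the control-flow heuristic are illustrative assumptions rather than the authors' implementation.

import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.Node;
import com.github.javaparser.ast.body.MethodDeclaration;
import java.util.ArrayList;
import java.util.List;

public class AstLinearizer {

    // Pre-order traversal linearizes the syntax tree into a sequence of
    // node-type labels, so two submissions can be compared structurally.
    static void linearize(Node node, List<String> out) {
        out.add(node.getClass().getSimpleName());
        for (Node child : node.getChildNodes()) {
            linearize(child, out);
        }
    }

    public static void main(String[] args) {
        String submission =
            "class Solution {\n"
          + "    int sum(int[] a) {\n"
          + "        int s = 0;\n"
          + "        for (int x : a) s += x;\n"
          + "        return s;\n"
          + "    }\n"
          + "}\n";

        CompilationUnit cu = StaticJavaParser.parse(submission);

        // Decompose the submission into logical modules (here: methods)
        // and linearize each of them separately.
        for (MethodDeclaration m : cu.findAll(MethodDeclaration.class)) {
            List<String> sequence = new ArrayList<>();
            linearize(m, sequence);
            System.out.println(m.getNameAsString() + " -> " + sequence);

            // Crude fraud heuristic (an assumption, not the paper's rule):
            // a method whose body contains no loops or branches is flagged,
            // since hard-coded output typically skips the algorithm's
            // control flow entirely.
            boolean hasControlFlow = sequence.stream().anyMatch(
                t -> t.contains("For") || t.contains("While") || t.contains("If"));
            if (!hasControlFlow) {
                System.out.println("  [flag] no control flow: possible hard-coded output");
            }
        }
    }
}

In a grading pipeline, the linearized sequence of a student's method could then be compared with that of a reference solution (for example, by edit distance) to localize the position and type of a mistake, which is the kind of feedback the abstract describes.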

Keywords


data processing; intelligent data analysis; intelligent assessment systems; software code analysis; dynamic analysis of software code; feedback generation


References


Conejo, R., Barros, B., & Bertoa, M. F. Automated assessment of complex programming tasks using SIETTE. IEEE Transactions on Learning Technologies, 2019, vol. 12, no. 4, pp. 470–484. DOI: 10.1109/tlt.2018.2876249.

Bertagnon, A., & Gavanelli, M. MAESTRO: a semi-autoMAted Evaluation SysTem for pROgramming assignments. Proceedings of the 2020 international conference on computational science and computational intelligence (CSCI), Las Vegas, NV, USA, IEEE, 2020, pp. 953–958. DOI: 10.1109/csci51800.2020.00177.

Ala-Mutka, K. M. A survey of automated assessment approaches for programming assignments. Computer Science Education, 2005, vol. 15, iss. 2, pp. 83–102. DOI: 10.1080/08993400500150747.

Ball, T. The concept of dynamic analysis. ACM SIGSOFT Software Engineering Notes, 1999, vol. 24, iss. 6, pp. 216–234. DOI: 10.1145/318774.318944.

Coore, D., & Fokum, D. Facilitating course assessment with a competitive programming platform. Proceedings of the SIGCSE '19: the 50th ACM technical symposium on computer science education, New York, NY, USA, Association for Computing Machinery, 2019, pp. 449–455. DOI: 10.1145/3287324.3287511.

Ayewah, N., Pugh, W., Hovemeyer, D., Morgenthaler, J. D., & Penix, J. Using static analysis to find bugs. IEEE Software, 2008, vol. 25, no. 5, pp. 22–29. DOI: 10.1109/ms.2008.130.

Restrepo-Calle, F., Ramirez-Echeverry, J., & González, F. Using an interactive software tool for the formative and summative evaluation in a computer programming course: an experience report. Global Journal of Engineering Education, 2020, vol. 22, no. 3, pp. 174–185. Available at: https://www.researchgate.net/publication/346004432_Using_an_interactive_software_tool_for_the_formative_and_summative_evaluation_in_a_computer_programming_course_an_experience_report (accessed 09 June 2024).

Le, D. M. Model-based automatic grading of object-oriented programming assignments. Computer Applications in Engineering Education, 2021, vol. 30, iss. 2, pp. 435–457. DOI: 10.1002/cae.22464.

Liénardy, S., Leduc, L., Verpoorten, D., & Donnet, B. Café: Automatic Correction and Feedback of Programming Challenges for a CS1 Course. Proceedings of the ACE'20: twenty-second Australasian computing education conference, New York, NY, USA, Association for Computing Machinery, 2020, pp. 95–104. DOI: 10.1145/3373165.3373176.

Ahire, P., & Abraham, J. Perceive core logical blocks of a C program automatically for source code transformations. Proceedings of the 18th Intelligent Systems Design and Applications conference, Springer, Cham, 2019, pp. 386–400. DOI: 10.1007/978-3-030-16657-1_36.

De Silva, D., Samarasekara, P., & Hettiarachchi, R. A comparative analysis of static and dynamic code analysis techniques. TechRxiv, 2023. DOI: 10.36227/techrxiv.22810664.v1. (unpublished).

Narayanan, S., & Simi, S. Source code plagiarism detection and performance analysis using fingerprint based distance measure method. Proceedings of the 2012 7th international conference on computer science & education (ICCSE 2012), Melbourne, VIC, Australia, 2012, pp. 1065–1068. DOI: 10.1109/iccse.2012.6295247.

Xu, W., & Ouyang, F. The application of AI technologies in STEM education: a systematic review from 2011 to 2021. International Journal of STEM Education, 2022, vol. 9, article no. 59. DOI: 10.1186/s40594-022-00377-5.

Barros, J. P. Assessment for computer programming courses: a short guide for the undecided teacher. Proceedings of the 14th international conference on computer supported education, Online Streaming, SciTePress, 2022, pp. 549–554. DOI: 10.5220/0011095800003182.

Samoaa, H. P., Bayram, F., Salza, P., & Leitner, P. A systematic mapping study of source code representation for deep learning in software engineering. IET Software, 2022, vol. 16, iss. 4, pp. 351–385. DOI: 10.1049/sfw2.12064.

Paiva, J., Leal, J., & Figueira, Á. Comparing semantic graph representations of source code: the case of automatic feedback on programming assignments. Computer Science and Information Systems, 2024, vol. 21, no. 1, pp. 117–142. DOI: 10.2298/csis230615004p.

Wojszczyk, R., Hapka, A., & Królikowski, T. Performance analysis of extracting object structure from source code. Procedia Computer Science, 2023, vol. 225, pp. 4065–4073. DOI: 10.1016/j.procs.2023.10.402.

Nguyen, A. T., & Hoang, V. D. Development of code evaluation system based on abstract syntax tree. Journal of Technical Education Science, 2024, vol. 19, no. 1, pp. 15–24. DOI: 10.54644/jte.2024.1514.

Ortin, F., Facundo, G., & Garcia, M. Analyzing syntactic constructs of Java programs with machine learning. Expert Systems with Applications, 2023, vol. 215, article no. 119398. DOI: 10.1016/j.eswa.2022.119398.




DOI: https://doi.org/10.32620/reks.2024.4.06
