Method of adaptive content generation in mobile applications based on personalized deep learning models

Oleksandr Vdovitchenko, Dmitro Rudenok

Abstract


The subject matter of this article is the adaptive generation and personalization of multimedia content in mobile information systems operating under constraints of limited computing resources and intermittent connectivity. Particular attention is paid to crew information support systems and onboard in-flight entertainment systems, where autonomy of operation, minimization of power consumption, and protection of flight data privacy are critical requirements. This study aims to develop a comprehensive method for adaptive content generation that combines the global knowledge of large-scale neural models with local personalization and real-time contextual adaptation to enhance the information relevance and energy efficiency of autonomous mobile devices. The tasks to be solved are as follows: (1) to analyze existing approaches to automatic content generation and identify their architectural limitations in the targeted mobile environments; (2) to develop a formal stochastic model of the system comprising a global level of knowledge generalization, a local level of personalization, and a level of contextual adaptation; (3) to justify the choice of methods for structural optimization of deep neural networks (specifically, knowledge distillation and quantization) to ensure offline operation; (4) to develop a method of adaptive generation based on personalized federated learning and contextual optimization algorithms; and (5) to implement an experimental model of the system and verify its effectiveness on open text datasets simulating query processing under limited-context conditions.
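The federated updating named in task (4) is commonly realized as a FedAvg-style weighted average of client model parameters, so that raw data never leaves the device. The sketch below is a minimal illustration of that aggregation step; the function and variable names are illustrative, not taken from the paper's implementation, and the paper's personalized variant additionally fine-tunes the aggregated model on each client's local data.

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each client's parameters,
    weighted by its local dataset size (McMahan et al., 2017).

    client_weights: list of parameter vectors (lists of floats),
    one per client; client_sizes: number of local training samples
    per client. Only model weights are exchanged, never raw data.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client contributes proportionally to its data volume.
            global_w[i] += (n / total) * w[i]
    return global_w
```

A client holding three quarters of the total samples pulls the global model three quarters of the way toward its local weights, which is what makes the subsequent per-device personalization step necessary for minority users.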
The methods used are as follows: deep learning methods for building compact generative transformer models such as DistilGPT; federated learning methods for decentralized updating of model weights without transferring raw data to a server; reinforcement learning methods, specifically contextual multi-armed bandit algorithms, for dynamic adaptation of the generation strategy to the current flight phase or user behavior; and neural network compression methods for reducing computational load. The following results were obtained. A method of adaptive generation has been developed and implemented in software, integrating personalized federated learning with a contextual optimization mechanism. Experimental studies conducted on text datasets simulating technical documentation and crew queries established that the proposed approach improves content generation quality by 38% in terms of the BLEU metric compared with the baseline centralized model. Optimization of the neural network architecture reduced response latency by 34% and decreased the power consumption of the mobile device by 10–15%, a critical indicator for autonomous onboard systems. A high level of data privacy protection (privacy loss ε < 1) was confirmed when differential privacy mechanisms were used. Conclusions. The proposed method solves the scientific and applied problem of deploying intelligent generative services in closed ecosystems with high privacy and autonomy requirements, such as aviation mobile applications. The integration of federated learning with local adaptation creates the conditions for autonomous self-learning of the system without constant access to cloud computing resources.
The scientific novelty of the results obtained is as follows: for the first time, a method of adaptive content generation is formalized as a unified stochastic system with contextual self-adaptation that dynamically optimizes the generation strategy through a generalized loss function accounting for quality, personalization, and the contextual response of the environment, in contrast to existing disparate solutions.
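One plausible form of such a generalized loss, written under assumed notation (the symbols, the decomposition into three terms, and the weighting coefficients are illustrative, not quoted from the paper), is a weighted sum of a generation-quality term, a personalization term on the user's local data, and an expected contextual reward:

```latex
\mathcal{L}(\theta_u) \;=\;
\mathcal{L}_{\mathrm{gen}}(\theta_u)
\;+\; \lambda_{p}\,\mathcal{L}_{\mathrm{pers}}(\theta_u;\, D_u)
\;-\; \lambda_{c}\,\mathbb{E}_{c \sim \pi}\!\big[\, r(c,\, a_{\theta_u}(c)) \,\big]
```

Here $\theta_u$ denotes the parameters of user $u$'s local model, $D_u$ the on-device data, $c$ the operational context (e.g., flight phase), $a_{\theta_u}(c)$ the chosen generation strategy, and $r$ the contextual reward; the minus sign turns reward maximization into loss minimization, and $\lambda_p, \lambda_c$ trade off the three objectives.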

Keywords


adaptive content generation; personalization; federated learning; deep learning; mobile applications; Electronic Flight Bag; contextual adaptation; reinforcement learning; stochastic optimization; on-device artificial intelligence

DOI: https://doi.org/10.32620/aktt.2025.6.09