Comparative Analysis of Parameter-Efficient Fine-Tuning and Full Fine-Tuning Approaches for Indonesian Dialogue Summarization Using mBART
(1) Universitas Gunadarma
(2) Universitas Gunadarma
Abstract
This study addresses the urgent need for efficient Indonesian dialogue summarization systems in remote working contexts by adapting the multilingual mBART-large-50 model. The DialogSum dataset was translated into Indonesian using Opus-MT, and two fine-tuning approaches, full fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) with LoRA, were evaluated. Experiments on 1,500 test samples revealed that full fine-tuning achieved superior performance (ROUGE-1: 0.3726), while PEFT reduced energy consumption by 68.7% with a moderate accuracy trade-off (ROUGE-1: 0.2899). A Gradio-based interface demonstrated practical utility, enabling direct comparison of baseline, fine-tuned, and PEFT models. Critical findings include translation-induced terminology inconsistencies (e.g., "Hebes" vs. "Hebei") and context retention challenges in long dialogues. This work contributes a scalable framework for low-resource language NLP and provides actionable insights for optimizing computational efficiency in real-world applications.
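To make the PEFT setup concrete, the sketch below shows how LoRA adapters can be attached to mBART-large-50 with the Hugging Face transformers and peft libraries. This is a minimal illustration under assumed settings: the rank, scaling factor, and target modules are placeholder choices for the sketch, not the hyperparameters reported in the paper.

    # Minimal sketch (assumed setup): LoRA adapters on mBART-large-50 for
    # Indonesian dialogue summarization. Hyperparameters are illustrative,
    # not the values reported in this study.
    from transformers import MBart50TokenizerFast, MBartForConditionalGeneration
    from peft import LoraConfig, TaskType, get_peft_model

    model_name = "facebook/mbart-large-50"
    tokenizer = MBart50TokenizerFast.from_pretrained(
        model_name, src_lang="id_ID", tgt_lang="id_ID"  # Indonesian in mBART-50
    )
    model = MBartForConditionalGeneration.from_pretrained(model_name)

    # LoRA freezes the base weights and trains small low-rank matrices
    # injected into the attention projections; this is the source of the
    # energy savings relative to full fine-tuning.
    lora_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        r=16,                                 # assumed adapter rank
        lora_alpha=32,                        # assumed scaling factor
        lora_dropout=0.1,
        target_modules=["q_proj", "v_proj"],  # mBART attention projections
    )
    peft_model = get_peft_model(model, lora_config)
    peft_model.print_trainable_parameters()  # typically well under 1% trainable

With such a configuration, training proceeds as in full fine-tuning (for example with Seq2SeqTrainer), except that only the adapter weights receive gradient updates.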
References
Zhang, Y., Sun, S., Galley, M., Chen, Y.-C., Brockett, C., Gao, X., Gao, J., Liu, J., and Dolan, B. (2020). DialoGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics.
Angelia, D. (2022). Platform Online Meeting Terpopuler Selama Pandemi Covid-19 [The most popular online meeting platforms during the Covid-19 pandemic]. [Accessed: 07 January 2025].
Lavinda (2023). Survei Populix: Mayoritas orang Indonesia gunakan Zoom dan Google [Populix survey: Most Indonesians use Zoom and Google]. [Accessed: 07 January 2025].
Lundberg, C., Sánchez Viñuela, L., and Biales, S. (2022). Dialogue summarization using BART. In Shaikh, S., Ferreira, T., and Stent, A., editors, Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges, pages 121–125, Waterville, Maine, USA and virtual meeting. Association for Computational Linguistics.
Chen, Y., Liu, Y., and Zhang, Y. (2021b). DialogSum challenge: Summarizing real-life scenario dialogues. In Belz, A., Fan, A., Reiter, E., and Sripada, Y., editors, Proceedings of the 14th International Conference on Natural Language Generation, pages 308–313, Aberdeen, Scotland, UK. Association for Computational Linguistics.
Zaheer, M., Guruganesh, G., Dubey, K. A., Ainslie, J., Alberti, C., Ontanon, S., Pham, P., Ravula, A., Wang, Q., Yang, L., et al. (2020). Big Bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33:17283–17297.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.
Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. (2020). BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Uthus, D., Ontañón, S., Ainslie, J., and Guo, M. (2023). mLongT5: A multilingual and efficient text-to-text transformer for longer sequences. arXiv preprint arXiv:2305.11129.
Gusev, I. (2020). Dataset for Automatic Summarization of Russian News, pages 122–134. Springer International Publishing.
Taunk, D. and Varma, V. (2023). Summarizing Indian languages using multilingual transformers based models. arXiv preprint arXiv:2303.16657.
Nguyen, H., Phan, L., Anibal, J., Peltekian, A., and Tran, H. (2021). VieSum: How robust are transformer-based models on Vietnamese summarization? arXiv preprint arXiv:2110.04257.
Nguyen, T.-H. and Do, T.-N. (2022). Text summarization on large-scale Vietnamese datasets. Journal of Information and Communication Convergence Engineering, 20(4):309–316.
Zheng, K. and Zheng, W. (2022). Deep neural networks algorithm for Vietnamese word segmentation. Scientific Programming, 2022:1–11.
Wang, Y., Du, J., Kuang, J., Chen, C., Li, M., and Wang, J. (2023). Two-scaled identification of landscape character types and areas: A case study of the Yunnan–Vietnam railway (Yunnan section), China. Sustainability, 15(7):6173.
Li, Y., Su, H., Shen, X., Li, W., Cao, Z., and Niu, S. (2017). DailyDialog: A manually labelled multi-turn dialogue dataset. arXiv preprint arXiv:1710.03957.
Sun, K., Yu, D., Chen, J., Yu, D., Choi, Y., and Cardie, C. (2019). DREAM: A challenge data set and models for dialogue-based reading comprehension. Transactions of the Association for Computational Linguistics, 7:217–231.
Cui, L., Wu, Y., Liu, S., Zhang, Y., and Zhou, M. (2020). MuTual: A dataset for multi-turn dialogue reasoning. arXiv preprint arXiv:2004.04494.
Chen, Y., Liu, Y., Chen, L., and Zhang, Y. (2021a). DialogSum: A real-life scenario dialogue summarization dataset. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Association for Computational Linguistics.
Tiedemann, J. and Thottingal, S. (2020). OPUS-MT – building open translation services for the world. In Martins, A., Moniz, H., Fumega, S., Martins, B., Batista, F., Coheur, L., Parra, C., Trancoso, I., Turchi, M., Bisazza, A., Moorkens, J., Guerberof, A., Nurminen, M., Marg, L., and Forcada, M. L., editors, Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 479–480, Lisboa, Portugal. European Association for Machine Translation.
Tiedemann, J., Aulamo, M., Bakshandaeva, D., Boggia, M., Grönroos, S.-A., Nieminen, T., Raganato, A., Scherrer, Y., Vázquez, R., and Virpioja, S. (2023). Democratizing neural machine translation with OPUS-MT. Language Resources and Evaluation, 58(2):713–755.

Published by: ICSE (Institute of Computer Sciences and Engineering)
Website: http://icsejournal.com/index.php/JCSE/
Email: jcse@icsejournal.com
This journal is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.