Algorithms for Question Answering on Factoid Questions
(1) University of Gunadarma
(2) University of Gunadarma
Abstract
Keywords