Document Type : Original Article
Authors
Shahab Danesh University, Pardisan, Qom, Iran
Abstract
Intent detection and slot filling are crucial for understanding human language and are essential for building intelligent virtual assistants, chatbots, and other interactive systems that interpret user queries accurately. Recent advances, especially in transformer-based architectures and large language models (LLMs), have significantly improved the effectiveness of intent detection and slot filling. This paper proposes a method for effectively exploiting a small volume of fine-tuning data to enhance the natural-language comprehension of lightweight language models, yielding a nimble and efficient approach. Our approach augments new data samples while increasing the number of model layers to improve understanding of the target intents and slots. We explored various synonym-replacement methods as well as data samples generated by prompting large language models. To avoid disturbing semantic meaning, we established a lexical retention list containing non-O slot tokens, which preserves each sentence's core meaning. This strategy improves the model's slot precision, recall, F1-score, and exact-match metrics by 1.41%, 1.8%, 1.61%, and 3.81%, respectively, compared to not using it. The impact of adding model layers was studied under different layer-arrangement scenarios. Our results show that the proposed solution outperforms the baseline by 10.95% and 4.89% on the exact-match and slot F1-score evaluation metrics, respectively.
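The lexical retention list described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synonym table, function names, and example sentence are hypothetical, and a real system might draw synonyms from WordNet or an LLM prompt. The key idea is that only tokens with the `O` slot label are eligible for replacement, while slot-bearing tokens are retained verbatim.

```python
import random

# Hypothetical synonym table; in practice this could come from WordNet,
# embedding neighbors, or LLM-generated paraphrases.
SYNONYMS = {
    "book": ["reserve", "schedule"],
    "cheap": ["inexpensive", "affordable"],
    "show": ["display", "list"],
}

def augment(tokens, slot_labels, p=0.5, rng=None):
    """Replace only tokens labeled 'O' with a synonym.

    Tokens carrying non-O slot labels form the lexical retention
    list and are copied through unchanged, so the sentence's
    slot-bearing words (and hence its core meaning) are preserved.
    """
    rng = rng or random.Random(0)
    out = []
    for tok, label in zip(tokens, slot_labels):
        if label == "O" and tok in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[tok]))
        else:
            out.append(tok)  # retained: slot token or no synonym found
    return out

# Toy ATIS-style example (labels are illustrative).
tokens = ["book", "a", "cheap", "flight", "to", "boston"]
labels = ["O", "O", "O", "O", "O", "B-toloc.city_name"]
augmented = augment(tokens, labels, p=1.0)
```

With `p=1.0`, every `O`-labeled token that has an entry in the synonym table is swapped, yet `boston` survives untouched because its `B-toloc.city_name` label places it on the retention list.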
Keywords
Main Subjects