TY - JOUR
T1 - Uncertainty-Aware Active Meta-Learning for Few-Shot Text Classification
AU - Seo, Sanghyun
AU - Dingeto, Hiskias
AU - Kim, Juntae
N1 - Publisher Copyright:
© 2025 by the authors.
PY - 2025/4
Y1 - 2025/4
N2 - Low-resource natural language understanding is a central challenge in machine learning. As natural language processing and natural language understanding take center stage in machine learning, this challenge demands solutions more than ever. This paper introduces Uncertainty-Aware Active Meta-Learning (UA-AML), a methodology designed to improve model efficiency in low-resource natural language understanding tasks, where limited data availability is a common obstacle. UA-AML selects high-quality tasks from the diverse pool of task data available during learning. By quantifying the model's prediction uncertainty on the input data, we provide a loss function and learning strategy that adjust each input's influence on the model's learning. This approach ensures that the most relevant and informative data are utilized during training, optimizing learning efficiency and model performance. We apply this meta-learning technique to low-resource natural language understanding tasks such as few-shot relation classification and few-shot sentiment classification. Experimental results on the Amazon Review Sentiment Classification (ARSC) and FewRel datasets demonstrate that the technique yields low-resource natural language understanding models with improved performance, providing a robust solution for tasks with limited data. This research extends meta-learning techniques beyond their traditional application in computer vision, demonstrating their potential in natural language processing. Our findings suggest that the methodology can be applied effectively in a wider range of areas, opening new avenues for future research in low-resource natural language understanding.
AB - Low-resource natural language understanding is a central challenge in machine learning. As natural language processing and natural language understanding take center stage in machine learning, this challenge demands solutions more than ever. This paper introduces Uncertainty-Aware Active Meta-Learning (UA-AML), a methodology designed to improve model efficiency in low-resource natural language understanding tasks, where limited data availability is a common obstacle. UA-AML selects high-quality tasks from the diverse pool of task data available during learning. By quantifying the model's prediction uncertainty on the input data, we provide a loss function and learning strategy that adjust each input's influence on the model's learning. This approach ensures that the most relevant and informative data are utilized during training, optimizing learning efficiency and model performance. We apply this meta-learning technique to low-resource natural language understanding tasks such as few-shot relation classification and few-shot sentiment classification. Experimental results on the Amazon Review Sentiment Classification (ARSC) and FewRel datasets demonstrate that the technique yields low-resource natural language understanding models with improved performance, providing a robust solution for tasks with limited data. This research extends meta-learning techniques beyond their traditional application in computer vision, demonstrating their potential in natural language processing. Our findings suggest that the methodology can be applied effectively in a wider range of areas, opening new avenues for future research in low-resource natural language understanding.
KW - meta-learning
KW - natural language processing
KW - natural language understanding
KW - uncertainty quantification
UR - http://www.scopus.com/inward/record.url?scp=105002280761&partnerID=8YFLogxK
U2 - 10.3390/app15073702
DO - 10.3390/app15073702
M3 - Article
AN - SCOPUS:105002280761
SN - 2076-3417
VL - 15
JO - Applied Sciences (Switzerland)
JF - Applied Sciences (Switzerland)
IS - 7
M1 - 3702
ER -