Academic News

NLP Group has 1 paper accepted by AAAI 2024

Time: 2023-12-20

In December 2023, the NLP group had one paper accepted by AAAI 2024. AAAI 2024 is the Thirty-Eighth AAAI Conference on Artificial Intelligence, one of the top conferences in artificial intelligence, held annually by the Association for the Advancement of Artificial Intelligence (AAAI). AAAI 2024 will take place in Vancouver, Canada, from February 20 to February 27, 2024.

The accepted paper is summarized as follows:

- TA&AT: Enhancing Task-Oriented Dialog with Turn-Level Auxiliary Tasks and Action-Tree Based Scheduled Sampling (Longxiang Liu, Xiuxing Li, Yang Feng)

- AAAI Main Conference, long paper

Abstract: Task-oriented dialog systems have witnessed substantial progress due to conversational pre-training techniques. Yet, two significant challenges persist. First, most systems primarily utilize the latest turn's state label for the generator. This practice overlooks the comprehensive value of state labels in boosting the model's understanding for future generation. Second, an overreliance on generated policy often leads to error accumulation, resulting in suboptimal responses when adhering to incorrect actions. To combat these challenges, we propose turn-level multi-task objectives for the encoder. With the guidance of essential information from labeled intermediate states, we establish a more robust representation for both understanding and generation. For the decoder, we introduce an action tree-based scheduled sampling technique. Specifically, inspired by SPACE, we model the hierarchical policy as trees and utilize the similarity between trees to sample negative policies under scheduled sampling, encouraging the model to generate invariant responses under perturbations. This method simulates potential pitfalls by sampling similar negative policies, bridging the gap between task-oriented dialog training and inference. Among methods without continual pre-training, our approach achieved state-of-the-art (SOTA) performance on the MultiWOZ dataset series and was also competitive with pre-trained SOTA methods.
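The action-tree based scheduled sampling idea in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the nested-dict tree representation (domain → act → slots), the Jaccard tree-similarity measure, and the inverse-sigmoid decay schedule are all assumptions made here for concreteness.

```python
import math
import random

def flatten(tree, prefix=()):
    """Flatten a nested-dict action tree (domain -> act -> slots) into a
    list of root-to-node paths, so two trees can be compared as path sets."""
    paths = []
    for key, sub in tree.items():
        path = prefix + (key,)
        paths.append(path)
        if isinstance(sub, dict):
            paths.extend(flatten(sub, path))
        else:  # leaf level: a list of slot names
            for leaf in sub:
                paths.append(path + (leaf,))
    return paths

def tree_similarity(a, b):
    """Jaccard overlap between the path sets of two action trees
    (an illustrative stand-in for the paper's tree-similarity measure)."""
    fa, fb = set(flatten(a)), set(flatten(b))
    return len(fa & fb) / len(fa | fb)

def sampling_prob(step, k=1000.0):
    """Inverse-sigmoid decay schedule (assumed here): early in training the
    gold action is kept; later, negatives are mixed in more often."""
    return 1.0 - k / (k + math.exp(step / k))

def sample_policy(gold, candidates, step, rng=random):
    """With schedule-dependent probability, replace the gold action tree by a
    *similar* negative one, simulating the perturbed policies the decoder
    will face at inference time."""
    if rng.random() < sampling_prob(step):
        weights = [tree_similarity(gold, c) for c in candidates]
        if sum(weights) > 0:
            return rng.choices(candidates, weights=weights, k=1)[0]
    return gold
```

Because the negatives are sampled in proportion to tree similarity, the perturbations stay close to the gold policy (e.g., a missing slot rather than a wholly unrelated domain), which is what lets the decoder learn responses that are invariant to realistic policy errors.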


