In December 2022, the lab's tutorial "Non-Autoregressive Models for Fast Sequence Generation" was accepted at the EMNLP 2022 conference.
This three-hour tutorial was given by researchers Yang Feng and Chenze Shao and covered the latest research progress on non-autoregressive sequence generation models. A non-autoregressive model generates all tokens of the output sequence in parallel rather than one at a time, which significantly accelerates sequence generation and has attracted widespread attention in fields such as machine translation, speech recognition, and speech synthesis.
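As background (this is the standard formulation of the problem, not a reproduction of the tutorial's slides): an autoregressive model factorizes the target distribution left to right, each token conditioned on the prefix, while a non-autoregressive model assumes the target tokens are conditionally independent given the source x (with the target length T predicted separately), which is precisely what allows every position to be decoded in parallel:

```latex
p_{\mathrm{AR}}(y \mid x) \;=\; \prod_{t=1}^{T} p\!\left(y_t \mid y_{<t},\, x\right),
\qquad
p_{\mathrm{NAR}}(y \mid x) \;=\; p(T \mid x)\, \prod_{t=1}^{T} p\!\left(y_t \mid x\right).
```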
The tutorial comprehensively explained the multimodality problem that non-autoregressive models face in sequence generation (because target tokens are predicted independently, the model may mix fragments of several equally valid outputs) and the current mainstream solutions, including knowledge distillation, enhancing model expressiveness, latent-variable modeling, improved training objectives, and improved decoding strategies. It also detailed the progress of non-autoregressive models on various sequence generation tasks and the commonalities and differences in their application across tasks, and looked ahead to future directions for non-autoregressive generation.
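To make the speed contrast concrete, below is a minimal, hypothetical PyTorch sketch (toy_model, the vocabulary size, and the token ids are illustrative stand-ins, not the tutorial's code): the non-autoregressive decoder fills every output position with a single forward pass, while the autoregressive decoder needs one sequential pass per token.

```python
import torch

torch.manual_seed(0)
VOCAB = 32  # toy vocabulary size (illustrative)

# Stand-in for a trained decoder: returns random logits so the example runs.
# A real model would condition on the source sequence `src`.
def toy_model(src, tgt):
    return torch.randn(len(tgt), VOCAB)  # (len(tgt), VOCAB)

def nar_decode(model, src, tgt_len):
    """Non-autoregressive: one forward pass, every position predicted at once."""
    logits = model(src, torch.zeros(tgt_len))  # (tgt_len, VOCAB)
    return logits.argmax(dim=-1)               # parallel argmax per position

def ar_decode(model, src, max_len, bos=1, eos=2):
    """Autoregressive: up to max_len sequential passes, each conditioned on the prefix."""
    ys = [bos]
    for _ in range(max_len):
        logits = model(src, torch.tensor(ys))  # (len(ys), VOCAB)
        nxt = int(logits[-1].argmax())         # next token from the last position
        ys.append(nxt)
        if nxt == eos:
            break
    return torch.tensor(ys[1:])

src = torch.tensor([5, 9, 3])                  # toy source token ids
print(nar_decode(toy_model, src, tgt_len=5))   # all 5 tokens in one pass
print(ar_decode(toy_model, src, max_len=5))    # one token per pass
```

The parallel pass is what yields the speedup, and the per-position independence it relies on is the source of the multimodality problem the tutorial's solutions target.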