Can we build robust Spoken Language Understanding (SLU) systems that recognize intents from utterances directly, without involving Automatic Speech Recognition (ASR) during training or evaluation? In this paper, the Omilia R&D team addresses this problem by introducing novel end-to-end architectures for SLU. In parallel, the limitations of end-to-end SLU approaches are presented and discussed by evaluating the system on wordings unseen during model training. https://arxiv.org/pdf/1910.10599.pdf

Elisavet Palogiannidi, Ioannis Gkinis, George Mastrapas, Petr Mizera, Themos Stafylakis

Abstract

Spoken Language Understanding (SLU) is the problem of extracting the meaning from speech utterances. It is typically addressed as a two-step problem, where an Automatic Speech Recognition (ASR) model is employed to convert speech into text, followed by a Natural Language Understanding (NLU) model to extract meaning from the decoded text. Recently, end-to-end approaches have emerged, aiming to unify ASR and NLU into a single deep neural SLU architecture, trained using combinations of ASR- and NLU-level recognition units. In this paper, we explore a set of recurrent architectures for intent classification, tailored to the recently introduced Fluent Speech Commands (FSC) dataset, where intents are formed as combinations of three slots (action, object, and location). We show that by combining deep recurrent architectures with standard data augmentation, state-of-the-art results can be attained without using ASR-level targets or pretrained ASR models. We also investigate the model's generalizability to new wordings, and we show that it can perform reasonably well on wordings unseen during training.
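To make the setup concrete, below is a minimal sketch of such an end-to-end recurrent intent classifier in PyTorch: acoustic features go directly into a recurrent encoder, and three classification heads predict the action, object, and location slots jointly, with no ASR transcript involved. The layer sizes and slot vocabulary sizes (6 actions, 14 objects, 4 locations, roughly matching FSC) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class RecurrentSLU(nn.Module):
    """Minimal end-to-end SLU sketch: audio features -> three slot predictions.

    Hypothetical configuration; hidden sizes and slot vocabularies are
    illustrative, not the paper's exact setup.
    """

    def __init__(self, n_mels=40, hidden=256,
                 n_actions=6, n_objects=14, n_locations=4):
        super().__init__()
        # Stacked bidirectional GRU encoder over frame-level acoustic features.
        self.encoder = nn.GRU(
            input_size=n_mels, hidden_size=hidden,
            num_layers=2, bidirectional=True, batch_first=True,
        )
        # One linear head per slot; the intent is the (action, object, location) triple.
        self.action_head = nn.Linear(2 * hidden, n_actions)
        self.object_head = nn.Linear(2 * hidden, n_objects)
        self.location_head = nn.Linear(2 * hidden, n_locations)

    def forward(self, feats):
        # feats: (batch, time, n_mels) log-Mel filterbank features.
        encoded, _ = self.encoder(feats)
        # Mean-pool over time to get a fixed-length utterance embedding.
        pooled = encoded.mean(dim=1)
        return (
            self.action_head(pooled),
            self.object_head(pooled),
            self.location_head(pooled),
        )

# Toy forward pass: a batch of 8 utterances, 200 frames of 40-dim features each.
model = RecurrentSLU()
action_logits, object_logits, location_logits = model(torch.randn(8, 200, 40))
# Training would sum one cross-entropy loss per slot head; note that no
# ASR-level targets (transcripts, phonemes) appear anywhere in the pipeline.
```

Because each slot gets its own head, the model can generalize to unseen combinations of slot values, which is what the evaluation on unseen wordings probes.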

View the full paper here: https://arxiv.org/pdf/1910.10599.pdf