Tags: Publications

RCT: Random Consistency Training for Sound Event Detection

[Accepted by Interspeech 2022] We introduce a semi-supervised learning method for sound event detection (SED) and evaluate it on the DESED dataset. It achieves state-of-the-art performance among CRNN-based models, surpassing the SCT and ICT methods.

Authors

Nian Shao; Erfan Loweimi; Xiaofei Li
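As a rough illustration of the consistency-training idea underlying this line of work (not the exact RCT recipe), the sketch below shows a mean-teacher-style step: a student model sees a randomly perturbed input and is penalized for disagreeing with a teacher's prediction on the clean input. All names, sizes, and the noise augmentation here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, w):
    """Toy frame-level event detector: sigmoid over a linear map."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def consistency_step(x_unlabeled, w_student, w_teacher, noise_std=0.1):
    """One semi-supervised step: the student sees a randomly perturbed
    input and is pulled toward the teacher's prediction on the clean
    input (illustrative only; not the exact RCT method)."""
    x_aug = x_unlabeled + rng.normal(0.0, noise_std, x_unlabeled.shape)
    student_pred = model(x_aug, w_student)
    teacher_pred = model(x_unlabeled, w_teacher)  # held fixed in practice
    return float(np.mean((student_pred - teacher_pred) ** 2))

x = rng.standard_normal((16, 8))   # 16 unlabeled frames, 8 features
w = rng.standard_normal((8, 4))    # 4 hypothetical sound-event classes
loss = consistency_step(x, w, w)   # small but nonzero: noise alone causes disagreement
```

In an actual training loop the consistency loss is added to a supervised loss on the labeled subset, which is the core of most semi-supervised SED setups.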

Mixhead: Breaking the low-rank bottleneck in multi-head attention language models

[Accepted by KBS] We propose a novel solution to the representation bottleneck in multi-head attention. Through theoretical analysis, we verify the existence of the bottleneck and increase the representational capacity of the Transformer model. Experiments cover language modeling and GLUE tasks on Transformer and BERT models.

Authors

Zhong Zhang; Nian Shao; Chongming Gao; Rui Miao; Qinli Yang; Junming Shao
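The low-rank bottleneck the paper targets can be seen numerically: each head's attention-logit matrix factors through the per-head dimension d_head = d_model / n_heads, so its rank is capped at d_head regardless of sequence length. The sizes below are hypothetical, chosen only to make the cap visible.

```python
import numpy as np

# Hypothetical sizes for illustration; not the paper's configuration.
d_model, n_heads, seq_len = 512, 8, 128
d_head = d_model // n_heads  # 64 dimensions per head

rng = np.random.default_rng(0)
X = rng.standard_normal((seq_len, d_model))   # token representations
Wq = rng.standard_normal((d_model, d_head))   # one head's query projection
Wk = rng.standard_normal((d_model, d_head))   # one head's key projection

# The logits factor as (X @ Wq) @ (X @ Wk).T, a product through d_head,
# so rank(logits) <= d_head even though the matrix is seq_len x seq_len.
logits = (X @ Wq) @ (X @ Wk).T  # shape (128, 128), rank at most 64
```

Because 64 < 128 here, a single head cannot express an arbitrary full-rank attention pattern over the sequence; this is the bottleneck that MixHead-style constructions aim to break.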