
Learning FOFE based FNN-LMs with noise contrastive estimation and part-of-speech features

Citation Author(s):
Junfeng Hou, Shiliang Zhang, Lirong Dai
Submitted by:
Junfeng Hou
Last updated:
14 October 2016 - 4:49am
Document Type:
Poster
Document Year:
2016
Event:
Presenters:
Junfeng Hou
Paper Code:
P2-8

A simple but powerful class of language models, fixed-size ordinally-forgetting encoding (FOFE) based feedforward neural network language models (FNN-LMs), has been proposed recently. Experimental results have shown that FOFE based FNN-LMs can outperform not only the standard FNN-LMs but also the popular recurrent neural network language models (RNN-LMs). In this paper, we extend FOFE based FNN-LMs in several ways. First, we propose a new method to further improve the performance of FOFE based FNN-LMs by adding transitions of part-of-speech (POS) tags as additional features. Second, we investigate how to speed up the learning of FOFE based FNN-LMs by using noise contrastive estimation (NCE). As a result, we can dramatically speed up the learning of FOFE based FNN-LMs while still achieving very competitive results on the Large Text Compression Benchmark (LTCB).
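For readers unfamiliar with the encoding, the sketch below illustrates FOFE in plain Python: a word history is folded into a single fixed-size vector via z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of the t-th word and 0 < alpha < 1 is a forgetting factor. The function name, the toy vocabulary, and the choice alpha = 0.5 are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of fixed-size ordinally-forgetting encoding (FOFE).
# Older words are discounted exponentially, so the whole history fits
# into one vocabulary-sized vector that a feedforward LM can consume.
import numpy as np

def fofe_encode(word_ids, vocab_size, alpha=0.5):
    """Encode a sequence of word ids into one fixed-size FOFE vector."""
    z = np.zeros(vocab_size)
    for w in word_ids:
        z = alpha * z        # discount the encoding of earlier words
        z[w] += 1.0          # add the one-hot vector of the current word
    return z

# Example: history "A B A" with vocabulary {A: 0, B: 1, C: 2} and alpha = 0.5
# yields [0.25 + 1.0, 0.5, 0.0] = [1.25, 0.5, 0.0]
print(fofe_encode([0, 1, 0], vocab_size=3, alpha=0.5))
```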
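Likewise, a minimal sketch of the NCE objective referred to above, assuming the standard binary-classification formulation in which the observed word is discriminated from k noise samples drawn from a noise distribution q (e.g., the unigram distribution); the function names and the example numbers are hypothetical, not taken from the paper.

```python
# Hedged sketch of a per-word NCE loss: the model only has to score the
# observed word and k noise samples, so no softmax over the full vocabulary
# is required during training.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(score_data, scores_noise, log_q_data, log_q_noise, k):
    """
    score_data   : model score (unnormalized log-probability) of the observed word
    scores_noise : model scores of the k sampled noise words (array of length k)
    log_q_data   : log-probability of the observed word under the noise distribution
    log_q_noise  : log-probabilities of the noise samples under the noise distribution
    k            : number of noise samples per observed word
    """
    # P(data | w) = sigmoid(score - log(k * q(w))); maximize it for the true
    # word and minimize it for the noise samples.
    pos = np.log(sigmoid(score_data - np.log(k) - log_q_data))
    neg = np.log(sigmoid(-(scores_noise - np.log(k) - log_q_noise)))
    return -(pos + neg.sum())

# Example with one observed word and k = 3 noise samples (arbitrary numbers)
print(nce_loss(score_data=2.0,
               scores_noise=np.array([-1.0, 0.5, -2.0]),
               log_q_data=np.log(0.01),
               log_q_noise=np.log(np.array([0.05, 0.02, 0.1])),
               k=3))
```

Because each update touches only the observed word and the k noise samples, the output layer never has to be normalized over the full vocabulary, which is where the reported training speedup comes from.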
