Optimizing CNN model inference on CPUs
The popularity of Convolutional Neural Network (CNN) models and the ubiquity of CPUs
imply that better performance of CNN model inference on CPUs can deliver significant gain …
Uncertainty estimation in deep learning with application to spoken language assessment
A Malinin - 2019 - repository.cam.ac.uk
Since convolutional neural networks (CNNs) achieved top performance on the ImageNet
task in 2012, deep learning has become the preferred approach to addressing computer …
A two-timescale duplex neurodynamic approach to biconvex optimization
This paper presents a two-timescale duplex neurodynamic system for constrained biconvex
optimization. The two-timescale duplex neurodynamic system consists of two recurrent …
Recurrent neural network language model adaptation for multi-genre broadcast speech recognition
Recurrent neural network language models (RNNLMs) have recently become increasingly
popular for many applications including speech recognition. In previous research RNNLMs …
Bidirectional recurrent neural network language models for automatic speech recognition
Recurrent neural network language models have enjoyed great success in speech
recognition, partially due to their ability to model longer-distance context than word n-gram …
CUED-RNNLM—An open-source toolkit for efficient training and evaluation of recurrent neural network language models
In recent years, recurrent neural network language models (RNNLMs) have become
increasingly popular for a range of applications including speech recognition. However, the …
Recurrent neural network language model training with noise contrastive estimation for speech recognition
In recent years recurrent neural network language models (RNNLMs) have been
successfully applied to a range of tasks including speech recognition. However, an …
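The entry above concerns training RNNLMs with noise contrastive estimation (NCE), which avoids computing the full softmax normalizer over the vocabulary. A minimal NumPy sketch of the standard NCE objective follows; the function name and inputs are illustrative, not taken from the paper, and scores are assumed to be unnormalized model logits while `q` is the noise distribution from which `k` samples are drawn per target word.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(target_score, noise_scores, target_noise_prob, noise_probs, k):
    """Per-word NCE loss (illustrative sketch).

    target_score: unnormalized model score s(w) for the true next word
    noise_scores: array of s(x_i) for the k sampled noise words
    target_noise_prob, noise_probs: noise-distribution probabilities q(.)
    The classifier distinguishes data from noise via s(.) - log(k * q(.)).
    """
    pos = np.log(sigmoid(target_score - np.log(k * target_noise_prob)))
    neg = np.sum(np.log(1.0 - sigmoid(noise_scores - np.log(k * noise_probs))))
    return -(pos + neg)
```

In practice the appeal for speech recognition is that, with the normalizer treated as a constant, evaluating a word's probability at test time needs only its own output score rather than a sum over the whole vocabulary.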
Feedforward sequential memory networks: A new structure to learn long-term dependency
In this paper, we propose a novel neural network structure, namely feedforward
sequential memory networks (FSMN), to model long-term dependency in time series …
The fixed-size ordinally-forgetting encoding method for neural network language models
In this paper, we propose the new fixed-size ordinally-forgetting encoding (FOFE) method,
which can almost uniquely encode any variable-length sequence of words into a fixed-size …
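FOFE, as described in the entry above, is simple enough to state in a few lines: each word contributes a one-hot vector, and earlier words are down-weighted geometrically by a forgetting factor. A minimal sketch, assuming integer word ids and a chosen factor `alpha` (the function name is illustrative):

```python
import numpy as np

def fofe_encode(word_ids, vocab_size, alpha=0.5):
    """Fixed-size ordinally-forgetting encoding of a word-id sequence.

    Recurrence: z_0 = 0;  z_t = alpha * z_{t-1} + one_hot(w_t),
    with forgetting factor alpha in (0, 1), so older words decay
    geometrically while word order remains recoverable.
    """
    z = np.zeros(vocab_size)
    for w in word_ids:
        z = alpha * z          # decay everything seen so far
        z[w] += 1.0            # add the current word's one-hot
    return z

# For example, the sequence [0, 1, 0] over a 3-word vocabulary with
# alpha = 0.5 yields [1.25, 0.5, 0.0].
```

The near-uniqueness claim in the abstract rests on alpha being in (0, 1): distinct orderings of the same words then produce distinct codes for suitable alpha.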
Training language models for long-span cross-sentence evaluation
While recurrent neural networks can motivate cross-sentence language modeling and its
application to automatic speech recognition (ASR), corresponding modifications of the …