This item is available under a Creative Commons License for non-commercial use only
Computer Sciences, General language studies, Linguistics
This paper presents a short introduction to neural networks and how they are used for machine translation, and concludes with some discussion of the current research challenges being addressed by neural machine translation (NMT) research. The primary goal of this paper is to give a no-tears introduction to NMT to readers who do not have a computer science or mathematical background. The secondary goal is to provide the reader with a deep enough understanding of NMT that they can appreciate the strengths and weaknesses of the technology. The paper starts with a brief introduction to standard feed-forward neural networks (what they are, how they work, and how they are trained); this is followed by an introduction to word embeddings (vector representations of words), and then we introduce recurrent neural networks. Once these fundamentals have been introduced, we focus on the components of a standard neural machine translation architecture, namely: encoder networks, decoder language models, and the encoder-decoder architecture.
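The components the abstract names can be sketched in a few lines of toy code: an embedding lookup turns each word into a vector, a recurrent encoder compresses the source sentence into a fixed-length context vector, and a decoder step maps that vector to a probability distribution over the target vocabulary. This is an illustrative sketch only; the vocabulary, dimensions, and randomly initialised weights are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and dimensions (hypothetical values for illustration)
vocab = {"the": 0, "cat": 1, "sat": 2, "<eos>": 3}
embed_dim, hidden_dim = 4, 5

E = rng.normal(size=(len(vocab), embed_dim))      # word-embedding matrix
W_xh = rng.normal(size=(embed_dim, hidden_dim))   # input-to-hidden weights
W_hh = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
W_hy = rng.normal(size=(hidden_dim, len(vocab)))  # hidden-to-vocab weights

def encode(tokens):
    """Simple (Elman-style) recurrent encoder: the final hidden state
    serves as a fixed-length context vector for the sentence."""
    h = np.zeros(hidden_dim)
    for tok in tokens:
        x = E[vocab[tok]]                  # embedding lookup
        h = np.tanh(x @ W_xh + h @ W_hh)   # recurrent update
    return h

def decode_step(h):
    """One decoder step: map a hidden state to a probability
    distribution over the target vocabulary via softmax."""
    scores = h @ W_hy
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

context = encode(["the", "cat", "sat"])
probs = decode_step(context)
```

In a real NMT system the weights are learned from parallel text and the decoder runs repeatedly, feeding each predicted word back in until it emits an end-of-sentence token; here a single step suffices to show the data flow.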
Kelleher, John D., "Fundamentals of Machine Learning for Neural Machine Translation". Presented at the "Translating Europe Forum 2016: Focusing on Translation Technologies". Organised by the European Commission Directorate-General for Translation. (2016), doi:10.21427/D78012