# Computational Model: Contextual Word Embeddings - Trained on English Wikipedia Corpora

This archive contains a collection of computational models called word embeddings: vectors of numbers that represent words. They have been trained on real language sentences collected from the English Wikipedia (specifically, the corpora prepared for the [Polyglot project](https://sites.google.com/site/rmyeid/projects/polyglot)). As such, they capture contextual (thematic) knowledge about words, rather than taxonomic knowledge.

### Resource description and methodology

We trained a separate embedding model on each of the 20 Wikipedia corpora used in our experiments, and thus make 20 different embedding models available. For training we used an off-the-shelf [pytorch](https://pytorch.org) implementation and changed no major parameters, essentially using it 'as is'. Each model was trained for 30 epochs.

Each of the 20 models is saved in a separate folder, which contains two files:

* `word2idx.dat` - a mapping of every word in the model's vocabulary to its index
* `idx2vec-e30.dat` - a mapping of each word's index to its embedding (numeric vector); we only provide the result of the final epoch of model training, epoch 30

Both files are needed to successfully use the embeddings.

Each model's folder is named after the corpus the model was trained on. As the corpus files differ in size, the size is also reflected in the folder name, expressed either as a number of tokens (i.e. words) or as a percentage of the full original Wikipedia corpus. For example:

* The folder `wiki-corpus.004k.txt` contains a model trained on a corpus of 4 000 tokens (words).
* The folder `wiki-corpus.005m.txt` contains a model trained on a corpus of 5 000 000 tokens (words).
* The folder `wiki-corpus.05pc.txt` contains a model trained on a corpus that makes up a 5% chunk of the full Wikipedia corpus.

This naming convention applies to all provided models.

The models provided here are compressed into a gzip archive. Before use they first need to be extracted, which can be done with most standard archive managers (e.g. 7-Zip, WinRAR, etc.). Once extracted, the models can be used from a programming language (we recommend Python 3.6) with the appropriate Python packages. On our [GitHub page](https://github.com/GreenParachute/wordnet-randomwalk-python) you can find code that uses the provided word embedding models to measure word similarity or word relatedness; a minimal loading sketch is also given at the end of this README.

### Contact and citation

If you have any questions, feel free to:

* read the paper below, which describes the nature and use of these resources in more detail
* contact us with any questions or concerns, and we'll be happy to discuss our work

E-mail: filip.klubicka@adaptcentre.ie

If you use any of the data or code in your research, please cite the following paper:

```
@article{maldonado2019size,
author="Maldonado, Alfredo and Klubi{\v{c}}ka, Filip and Kelleher, John D.",
title="Size Matters: The Impact of Training Size in Taxonomically-Enriched Word Embeddings",
journal="Open Computer Science",
publisher="De Gruyter",
year="2019",
link=" "
}
```

You can download the paper [here](https://www.degruyter.com/downloadpdf/j/comp.2019.9.issue-1/comp-2019-0009/comp-2019-0009.pdf).
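
### Usage example

Below is a minimal sketch of how the two files in a model folder could be loaded and used to compute word similarity. It assumes the `.dat` files are Python pickles containing a word-to-index dictionary and an index-to-vector mapping, which is a common serialisation choice for pytorch-based training scripts; if the files were serialised differently, only the two loading lines need to change. The folder name and the example words are illustrative; for the full, tested pipeline, please refer to the code on our GitHub page.

```python
import pickle
import numpy as np

# Illustrative path: substitute the folder of the model you extracted.
MODEL_DIR = "wiki-corpus.005m.txt"

# Assumption: both .dat files are Python pickles (word -> index, index -> vector).
with open(f"{MODEL_DIR}/word2idx.dat", "rb") as f:
    word2idx = pickle.load(f)
with open(f"{MODEL_DIR}/idx2vec-e30.dat", "rb") as f:
    idx2vec = pickle.load(f)

def vector(word):
    """Return the embedding vector for a word, or None if it is out of vocabulary."""
    idx = word2idx.get(word)
    return None if idx is None else np.asarray(idx2vec[idx])

def cosine_similarity(word1, word2):
    """Cosine similarity between the embeddings of two words."""
    v1, v2 = vector(word1), vector(word2)
    if v1 is None or v2 is None:
        raise KeyError("one of the words is not in the model vocabulary")
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

print(cosine_similarity("car", "vehicle"))
```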