Pre-trained word vectors of 30+ languages
This project has two purposes. First, I'd like to share some of my experience with NLP tasks such as segmentation and word vectors. Second, and more importantly, many people are probably searching for pre-trained word vector models for non-English languages. Alas! English has received far more attention than any other language. Check this to see how easily you can get a variety of pre-trained English word vectors without much effort. I think it's time to turn our eyes to a multilingual version of this.
Near the end of this work, I learned that a similar project named polyglot already exists. I strongly encourage you to check out that great project. How embarrassing! Nevertheless, I decided to release this project anyway. You will find that my work has its own flavor, after all.
Requirements
- nltk >= 1.11.1
- regex >= 2016.6.24
- lxml >= 3.3.3
- numpy >= 1.11.2
- konlpy >= 0.4.4 (Only for Korean)
- mecab (Only for Japanese)
- pythai >= 0.1.3 (Only for Thai)
- pyvi >= 0.0.7.2 (Only for Vietnamese)
- jieba >= 0.38 (Only for Chinese)
- gensim >= 0.13.1
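The language-specific packages above are needed because not every language delimits words with spaces. As a rough illustration (not the exact preprocessing used in this repo), a minimal segmentation sketch with jieba, MeCab, and nltk might look like this:

```python
# Minimal segmentation sketch. The sample sentences are illustrative;
# konlpy (Korean), pythai (Thai), and pyvi (Vietnamese) play the same
# role for their respective languages.
import jieba                              # Chinese
import MeCab                              # Japanese (mecab binding)
from nltk.tokenize import word_tokenize   # space-delimited languages

# Chinese: jieba segments a string into a list of words.
print(jieba.lcut("我来到北京清华大学"))
# ['我', '来到', '北京', '清华大学']

# Japanese: MeCab in wakati mode outputs space-separated tokens.
tagger = MeCab.Tagger("-Owakati")
print(tagger.parse("私は学生です").split())
# ['私', 'は', '学生', 'です']

# English and other space-delimited languages: nltk's tokenizer.
# (Run nltk.download('punkt') once beforehand if needed.)
print(word_tokenize("This is a sentence."))
# ['This', 'is', 'a', 'sentence', '.']
```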
Background / References
- Check this to learn what a word embedding is.
- Check this to quickly get a picture of Word2vec.
- Watch this to really understand what's happening under the hood of Word2vec.
- Go get various English word vectors here if needed.
- Check out a more ambitious project here.
Work Flow
- STEP 1. Download the Wikipedia database backup dump of the language you want.
- STEP 2. Extract running text from the downloaded file to build a corpus.
- STEP 3. Preprocess the corpus.
- STEP 4. Run Word2Vec. (A minimal sketch of STEPs 2-4 follows this list.)
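Here is a minimal sketch of STEPs 2-4, assuming the dump from STEP 1 has already been reduced to one plain-text file (e.g. with WikiExtractor). The file names, toy preprocessing, and hyperparameters are illustrative, not the exact ones used in this repo:

```python
# Minimal sketch of STEPs 2-4, under the assumptions stated above.
import re
from gensim.models import Word2Vec

def preprocess(line):
    # STEP 3 (toy version): lowercase and strip non-alphanumerics.
    # Real preprocessing is language-specific; see the segmentation
    # sketch in the Requirements section.
    return re.sub(r"[^a-z0-9\s]", " ", line.lower()).split()

class Corpus(object):
    """Streams the corpus line by line so it never has to fit in memory."""
    def __init__(self, path):
        self.path = path
    def __iter__(self):
        with open(self.path, encoding="utf-8") as f:
            for line in f:
                tokens = preprocess(line)
                if tokens:
                    yield tokens

# STEP 4: train with negative sampling (hs=0 plus negative > 0), matching
# the training algorithm column in the table below. In gensim 4+ the
# `size` argument is named `vector_size`.
model = Word2Vec(Corpus("corpus.txt"), size=100, window=5,
                 min_count=10, negative=5, workers=4)
model.save("model.bin")                     # gensim model file (.bin)
model.wv.save_word2vec_format("model.txt")  # plain-text vectors (.txt)
# (Very old gensim exposes save_word2vec_format on the model itself.)
```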
Pre-trained models
Click the name of a language to download its pre-trained word vectors. Each zip file contains two files: a .bin file (the gensim Word2Vec model) and a .txt file (the word vectors in plain text). A loading sketch follows the table. Any contributions are welcome.
| Language | ISO 639-1 | Vector Size | Corpus Size | Vocabulary Size | Training Algorithm |
| --- | --- | --- | --- | --- | --- |
| Norwegian Nynorsk | nn | 100 | 114M | 10036 | negative sampling |
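A minimal loading sketch follows. The file names and query word are hypothetical; it assumes the .bin was saved with gensim's model.save() (as in the training sketch above) and the .txt is in the standard word2vec text format:

```python
# Minimal loading sketch, under the assumptions stated above.
from gensim.models import Word2Vec, KeyedVectors

# Full model from the .bin file (can be trained further).
model = Word2Vec.load("nn.bin")
print(model.wv.most_similar("konge", topn=5))  # "konge" is a hypothetical query word

# Vectors only, from the .txt file (lighter, read-only use).
wv = KeyedVectors.load_word2vec_format("nn.txt", binary=False)
print(wv["konge"])  # a 100-dimensional vector, per the table above
```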