Deep learning for Natural Language Processing: learnings, part 1

Dec 30, 2018 • Gaurav Dhingra

A summary of what I did over the weekend while learning word2vec.

  • word2vec is one of the really good ways to solve the problem of “word embedding”; it was developed by researchers at Google (Tomas Mikolov and colleagues). A minimal usage sketch follows below.
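
As a rough sketch of what using word2vec looks like in practice, here is a toy example. It assumes the gensim library (version 4.x, where the dimension parameter is called `vector_size`); gensim is my choice for illustration, not something the papers themselves use.

```python
# Minimal word2vec sketch using gensim (assumed; gensim 4.x API).
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences.
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["dogs", "and", "cats", "are", "animals"],
]

# sg=1 selects the skip-gram architecture; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

# Each word is now a dense 50-dimensional vector.
print(model.wv["king"].shape)                  # (50,)
print(model.wv.similarity("king", "queen"))    # cosine similarity
```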

  • GloVe (Global Vectors for Word Representation) is another really good way to compute word embeddings, developed at Stanford; a sketch of loading its pretrained vectors follows below.
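
For comparison, a small sketch of loading pretrained GloVe vectors and comparing two words. It assumes the glove.6B.100d.txt file from the Stanford GloVe page has been downloaded to the working directory.

```python
# Sketch: load pretrained GloVe vectors into a dict and compare two words.
# Assumes glove.6B.100d.txt (from https://nlp.stanford.edu/projects/glove/)
# is in the current directory.
import numpy as np

embeddings = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        embeddings[parts[0]] = np.array(parts[1:], dtype=np.float32)

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["king"], embeddings["queen"]))
```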

  • The original research papers (by Tomas Mikolov and colleagues) on word2vec: [1] Efficient Estimation of Word Representations in Vector Space, [2] Distributed Representations of Words and Phrases and their Compositionality.

    Reading these two papers drained the hell out of me (probably because I don’t have much background in neural networks).

    Instead I found myself going through [3] word2vec Explained: Deriving Mikolov et al.’s Negative-Sampling Word-Embedding Method, which was much easier to follow; the objective it derives is sketched below.
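
For reference, the negative-sampling objective derived in [3] can be written compactly as follows, where D is the set of observed (word, context) pairs, D' is the set of randomly sampled “negative” pairs, and σ is the logistic sigmoid:

```latex
% Negative-sampling objective as derived in [3] (Goldberg & Levy):
% maximize agreement on observed pairs D, disagreement on sampled pairs D'.
\arg\max_{\theta} \;
    \sum_{(w,c) \in D}  \log \sigma(v_c \cdot v_w)
  + \sum_{(w,c) \in D'} \log \sigma(-v_c \cdot v_w)
```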

  • I also went through lecture 7 (skipping lectures 4, 5 and 6) to get an idea of what deep learning frameworks like TensorFlow are (lecture 7 is an introduction to it). It gives a basic idea of why one would use TensorFlow and how to perform basic operations with it; a tiny sketch follows below.
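
A tiny sketch of the kind of basic operations the lecture covers, assuming the TensorFlow 1.x API that was current at the time (with its build-the-graph-then-run-a-session model; in TF 2.x the same ops run eagerly without a Session):

```python
# Basic TensorFlow 1.x operations: build a graph, then run it in a session.
import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [1.0]])

c = tf.matmul(a, b)  # nothing is computed yet; this only builds the graph

with tf.Session() as sess:
    print(sess.run(c))  # [[3.], [7.]]
```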

  • All in all, this weekend was focused on NLP-related things. I am hopeful that in the coming year I will be able to take a few more steps towards learning NLP.