Project dense vector representations of texts onto a 2D plane to better understand the neural models applied to NLP. Since the famous word2vec, embeddings are everywhere in NLP (and in neighboring areas such as IR). The main idea behind embeddings is to represent texts (made of characters, words, sentences, or even larger blocks) as numeric vectors. This works very well and provides capabilities that are out of reach for the classic bag-of-words (BoW) approach. However, embeddings (i.e. vector representations) are difficult for humans to understand, analyze, and debug, because they typically have far more than three dimensions.
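
To make this concrete, here is a minimal sketch (not this project's actual pipeline) of the underlying idea: projecting high-dimensional embeddings onto a 2D plane with t-SNE and plotting the result. The random 300-dimensional vectors and the `word_i` labels are placeholders standing in for real word embeddings.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Hypothetical data: 100 word embeddings of 300 dimensions each.
# In practice these would come from a trained model such as word2vec.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 300))
words = [f"word_{i}" for i in range(100)]

# Project the high-dimensional vectors onto a 2D plane.
points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)

# Scatter plot: with real embeddings, nearby points should
# correspond to semantically similar texts.
plt.scatter(points[:, 0], points[:, 1], s=10)
for word, (x, y) in zip(words, points):
    plt.annotate(word, (x, y), fontsize=6)
plt.title("2D projection of word embeddings")
plt.show()
```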