  • Towards 2: kawine.github.io/blog/nlp/2019/06/21/word-analogies.html – Commented Dec 24, 2020 at 16:21
  • I suspect that 1 is mostly a coincidence. There are trillions of concepts you might think of as defining some linear scale, millions of which are partially captured by the embedding space, and some of those are bound to (completely by chance) lie principally along one of the dimensions of the embedding. (But that's all speculation, so I'll leave this as a comment.) – Commented Dec 26, 2020 at 4:41
  • @BenReiniger: Yeah, there are a lot of follow-up questions one could ask related to 1. For example, if we change the dimension from 300 to something else, would concepts still be captured in individual dimensions? I guess one could answer this through explicit experimentation if one has the bandwidth. – Commented Dec 26, 2020 at 8:53
  • One possible explanation for 1 (useful for 2 as well) is that word2vec and similar tools extract features through a process that checks inter-relations between words in a corpus (e.g., how often one word appears close to another). But exactly these same inter-relations are what express meaning in natural language (e.g., natural language grammars). For example, "water" would be distant on average from concepts/words such as "king" or "queen", and close on average to "thirsty", "wet", and so on. – Commented Dec 26, 2020 at 17:57
  • @NikosM.: Good point – what you said covers why words occurring in the same contexts have close representations. But the fact that vector representations can nicely follow arithmetic rules is *such* a strong claim (and this claim is made widely) that it's hard to digest. Not sure this fact is empirically proven: given words $w_1, w_2, w_3, w_4$, let $w_1$ and $w_2$ be related by some *non-mathematical, language-defined* relation (e.g., $w_1$ is the female version of $w_2$). Also let $w_3, w_4$ be related by that same language-defined relation, i.e. $w_3$ is the female version of $w_4$ ... (cont'd) – Commented Dec 26, 2020 at 20:43
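
For anyone who wants to poke at the arithmetic-rule claim from the last two comments empirically, here is a minimal sketch using gensim's downloadable GloVe vectors. The model name and the `offset_similarity` helper are illustrative choices for this sketch, not anything endorsed in the thread:

```python
# A minimal sketch, assuming the gensim library and its downloadable
# "glove-wiki-gigaword-300" GloVe vectors (any pretrained KeyedVectors
# model would work the same way).
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-300")  # ~376 MB download on first use

# The classic analogy check: king - man + woman should land near queen
# if "female version of" acts like a constant offset vector.
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

def offset_similarity(w1, w2, w3, w4):
    """Cosine similarity between the offsets w2 - w1 and w4 - w3.

    A value near 1 would mean the language-defined relation tying
    (w1, w2) and (w3, w4) corresponds to (roughly) the same vector.
    """
    a = model[w2] - model[w1]
    b = model[w4] - model[w3]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(offset_similarity("man", "woman", "king", "queen"))
```

In practice the offset similarity tends to come out high but noticeably below 1, which is consistent with the skepticism above: the relation behaves only approximately like a constant offset, a caveat the kawine.github.io post linked in the first comment goes into in more detail.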