- That's all clear. What I don't understand is why the learning process produces good word vectors. Gradient descent trains the whole model, and it just so happens that the first part of the model produces word vectors with a wonderfully benevolent quality (similarity, as mentioned above). Why? — toughkip, Aug 2, 2022 at 7:03
- The weights are distributed thanks to hyperparameters (mainly window size and learning rate), and the good results probably come from much trial and error, as with many DL models. If you want to know precisely how it works step by step, you can look at this notebook: github.com/chiaminchuang/A-Neural-Probabilistic-Language-Model/… — Nicolas Martin, Aug 2, 2022 at 14:07
- Does that answer your question? Please let me know if you need more details. — Nicolas Martin, Aug 3, 2022 at 15:32
- This doesn't answer my question, unfortunately. I'm still gathering information and reading a few papers, such as the spiritual successors to the NNLM above (Word2Vec, GloVe, ...). I expect to find relevant information about my question there. — toughkip, Aug 5, 2022 at 6:39
- Sorry about that, and please let me know if you find any relevant answers. Although a mathematical understanding might be possible, many DL topics start with a conceptual idea that is improved through trial and error rather than rigorous logic. I might be wrong about the NNLM, but the best way to understand such an algorithm is to ask the authors themselves or to re-implement it. — Nicolas Martin, Aug 5, 2022 at 7:31
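The effect discussed in these comments can be seen even in a toy model. Below is a minimal sketch (not the NNLM from the thread, but a skip-gram-style model with a full softmax, written in NumPy) trained by plain gradient descent on a tiny made-up corpus. Because "cat" and "dog" occur in interchangeable contexts, minimizing the prediction loss pushes their input embeddings toward similar directions; the window size and learning rate are the hyperparameters mentioned above. All names and the corpus are illustrative assumptions.

```python
import numpy as np

# Toy corpus: "cat" and "dog" appear in interchangeable contexts,
# so a context-prediction model should learn similar vectors for them.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the mat".split(),
    "the cat ate the food".split(),
    "the dog ate the food".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 10  # vocabulary size, embedding dimension

# (center, context) pairs with window size 1 -- one of the
# hyperparameters mentioned in the comments.
pairs = [(idx[s[i]], idx[s[j]])
         for s in corpus for i in range(len(s))
         for j in (i - 1, i + 1) if 0 <= j < len(s)]

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))   # input embeddings: the "word vectors"
W_out = rng.normal(0, 0.1, (D, V))  # output (softmax) weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def total_loss():
    # Cross-entropy of predicting each context word from its center word.
    return -sum(np.log(softmax(W_in[c] @ W_out)[o]) for c, o in pairs)

lr = 0.05                               # learning rate (second hyperparameter)
loss_before = total_loss()
for _ in range(200):                    # plain SGD over all pairs
    for c, o in pairs:
        p = softmax(W_in[c] @ W_out)    # predicted context distribution
        p[o] -= 1.0                     # gradient of cross-entropy wrt logits
        grad_in = W_out @ p
        grad_out = np.outer(W_in[c], p)
        W_in[c] -= lr * grad_in
        W_out -= lr * grad_out
loss_after = total_loss()

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim = cos(W_in[idx["cat"]], W_in[idx["dog"]])
print(f"loss {loss_before:.2f} -> {loss_after:.2f}, cos(cat, dog) = {sim:.2f}")
```

The intuition: the loss only cares about predicting contexts, so any two words with the same context distribution receive the same gradient pressure and end up with nearby embeddings. That is the distributional hypothesis at work, not a property engineered into the first layer directly.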
How to Edit
- Correct minor typos or mistakes
- Clarify meaning without changing it
- Add related resources or links
- Always respect the author’s intent
- Don’t use edits to reply to the author
How to Format
- create code fences with backticks ` or tildes ~
  ```
  like so
  ```
- add a language identifier to highlight code
  ```python
  def function(foo):
      print(foo)
  ```
- put returns between paragraphs
- for a linebreak, add 2 spaces at the end of the line
- _italic_ or **bold**
- indent code by 4 spaces
- backtick escapes `like _so_`
- quote by placing > at the start of a line
- to make links (use https whenever possible): <https://example.com>, [example](https://example.com), or <a href="https://example.com">example</a>
- MathJax equations: $\sin^2 \theta$
How to Tag
A tag is a keyword or label that categorizes your question with other, similar questions. Choose one or more (up to 5) tags that will help answerers to find and interpret your question.
- complete the sentence: my question is about...
- use tags that describe things or concepts that are essential, not incidental to your question
- favor using existing popular tags
- read the descriptions that appear below the tag
If your question is primarily about a topic for which you can't find a tag:
- combine multiple words into a single hyphenated word (e.g. machine-learning), up to a maximum of 35 characters
- creating new tags is a privilege; if you can't yet create a tag you need, post the question without it and ask the community to create it for you