  • Thanks for your answer. As part of the project, I will be using Word2Vec embeddings, which I expect to produce the best results. However, I would like to investigate the difference in performance compared to a naive encoding like one-hot. The problem is that one-hot encoding at this dimensionality becomes too expensive with my data, even with very small batches. Commented Feb 28, 2023 at 19:19
  • I meant trainable embeddings, not pre-trained ones. Multiplying a one-hot vector by a matrix is equivalent to an embedding layer, except that the embedding layer avoids the memory spent on the one-hot vectors (see the sketch after this thread). Commented Feb 28, 2023 at 19:33
  • Sorry, I don't think I can follow. How does your suggestion (multiplying a one-hot vector by a matrix) differ from the linear-layer approach in my question? Can you elaborate? Commented Feb 28, 2023 at 19:45
  • You save the memory of the one-hot tensor, which has dimensions 190000 × seq_length × batch_size. Commented Feb 28, 2023 at 20:09
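To make the equivalence concrete, here is a minimal sketch (PyTorch assumed, since the thread discusses linear and embedding layers; the toy batch and sequence sizes are illustrative, not from the thread). It shows that an embedding lookup produces exactly the same output as multiplying a one-hot tensor by the same weight matrix, without ever materializing the vocab_size × seq_length × batch_size one-hot tensor:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 190_000, 300   # 190000 matches the dimension in the thread
batch_size, seq_len = 2, 5             # small toy values for illustration

token_ids = torch.randint(0, vocab_size, (batch_size, seq_len))

# Trainable embedding layer: a (vocab_size, embed_dim) weight matrix,
# indexed directly by token id.
embedding = nn.Embedding(vocab_size, embed_dim)
out_lookup = embedding(token_ids)      # shape (batch, seq, embed_dim)

# Equivalent one-hot route: materializes a (batch, seq, vocab_size) tensor
# just to select the same rows of the weight matrix via matmul.
one_hot = nn.functional.one_hot(token_ids, num_classes=vocab_size).float()
out_matmul = one_hot @ embedding.weight

assert torch.allclose(out_lookup, out_matmul, atol=1e-6)
```

The lookup indexes rows of the weight matrix directly, so its activation memory scales with batch_size × seq_length × embed_dim, while the one-hot route scales with batch_size × seq_length × 190000; the gradients with respect to the weight matrix are identical in both cases.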