I'm working through TensorFlow's wide_n_deep_tutorial these days, and I'm a little confused by tf.contrib.layers.embedding_column. How does TensorFlow implement the embedding column?
For example, suppose I have a sparse input with dimension 1000 and I want to embed it into a dense feature with dimension 10. Does it hold a fully connected layer with 1000*10 parameters and train them with backpropagation? Or does it use some other technique, such as FM (factorization machines), to map the 1000-dim vector to a 10-dim vector?
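To make the question concrete, here is a small NumPy sketch of the equivalence I'm asking about: whether an embedding lookup is just the fully-connected (one-hot times weight matrix) view in disguise. The sizes and the variable names are from my hypothetical 1000-to-10 example, not from TensorFlow's actual implementation.

```python
import numpy as np

# Hypothetical sizes from my example: 1000-dim sparse input, 10-dim embedding.
vocab_size, embed_dim = 1000, 10
rng = np.random.default_rng(0)

# An embedding is just a matrix of shape [vocab_size, embed_dim]
# (trainable parameters in the real setting).
embedding_matrix = rng.normal(size=(vocab_size, embed_dim))

# Sparse input: the index of the single active feature.
sparse_id = 42

# "Lookup" view: select the row for the active id.
lookup = embedding_matrix[sparse_id]

# Equivalent "fully connected" view: one-hot vector times the matrix.
one_hot = np.zeros(vocab_size)
one_hot[sparse_id] = 1.0
matmul = one_hot @ embedding_matrix

assert np.allclose(lookup, matmul)
print(lookup.shape)  # (10,)
```

So my question is essentially whether embedding_column amounts to this lookup (trained end-to-end by backprop) or something structurally different.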