
  • $\begingroup$ Why do you think cosine would be good on such data? Doesn't the denominator cause unwanted distortion here - these are not text documents. $\endgroup$ Commented Nov 6, 2017 at 2:54
  • $\begingroup$ I'm not an expert on clustering or similarity metrics; it just seemed like a good place to start. My understanding is that cosine similarity works better than things like Euclidean distance on high-dimensional data. You seem to be a lot more qualified than I am on clustering -- why does the denominator introduce distortion? I thought that because all of the features are on the same scale (0.0-1.0) it would be fine. $\endgroup$ Commented Nov 6, 2017 at 9:33
  • 1
    $\begingroup$ Cosine is equivalent to Euclidean on L2-normalized vectors, so it cannot have a systematic advantage for high-dimensional data. L2 normalization is a good idea to account for different document lengths, but here it means you scale percentage values (which have a well-defined scale) by some odd aggregate scaling factor (the inverse root of the sum of squares). $\endgroup$ Commented Nov 6, 2017 at 20:32
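
The equivalence mentioned in the last comment follows from the identity $\|u - v\|^2 = \|u\|^2 + \|v\|^2 - 2\langle u, v\rangle = 2(1 - \cos(u, v))$ for unit vectors, so ranking by cosine similarity and ranking by Euclidean distance agree after L2 normalization. A minimal sketch checking this numerically with NumPy (the vectors here are arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(10)
v = rng.random(10)

# L2-normalize both vectors to unit length
u_n = u / np.linalg.norm(u)
v_n = v / np.linalg.norm(v)

# Cosine similarity of the originals equals the dot product of the normalized vectors
cos_sim = u_n @ v_n

# Squared Euclidean distance between the normalized vectors
eucl_sq = np.sum((u_n - v_n) ** 2)

# Identity for unit vectors: ||u - v||^2 = 2 * (1 - cos(u, v))
print(np.isclose(eucl_sq, 2 * (1 - cos_sim)))
```

Note that the normalization step is exactly the "aggregate scaling factor" the comment objects to: each feature of `u` is divided by `np.linalg.norm(u)`, which mixes all features together even when each one already has a well-defined 0.0-1.0 scale.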