The Adjusted Rand Index can quantify the agreement between two cluster labelings, even if the labels themselves don't match. scikit-learn has a good implementation of it. The original paper describing this index is Hubert and Arabie, 1985 [1].

This might be a good point to start your investigation:

http://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html#sklearn.metrics.adjusted_rand_score
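As a minimal sketch of how adjusted_rand_score behaves (the labelings below are made-up toy data for illustration): the score is invariant to how the cluster IDs are named, so two labelings that induce the same partition score 1.0, while a labeling that groups the points differently scores near zero or below.

```python
from sklearn.metrics import adjusted_rand_score

# Two labelings of the same six points. The cluster IDs differ
# (0/1/2 vs. 2/0/1), but the induced partitions are identical,
# so the ARI is 1.0.
labels_a = [0, 0, 1, 1, 2, 2]
labels_b = [2, 2, 0, 0, 1, 1]
print(adjusted_rand_score(labels_a, labels_b))  # 1.0

# A labeling that groups the points differently scores much lower;
# values around 0 (or negative) indicate chance-level agreement.
labels_c = [0, 1, 2, 0, 1, 2]
print(adjusted_rand_score(labels_a, labels_c))  # negative here (-0.25)
```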

[1] Hubert, Lawrence, and Phipps Arabie. 1985. “Comparing Partitions.” Journal of Classification 2 (1). Springer-Verlag: 193–218.