
Let me explain what I'm trying to do. I have a plot of an image's points/pixels in RGB space, and I am trying to find elongated clusters in that space. I'm fairly new to clustering techniques and maybe I'm not doing things correctly. I'm clustering with MATLAB's built-in k-means, but it appears that this is not the best approach in this case.

What I need to do is find "color clusters".
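Roughly, I'm doing something like this (the file name and number of clusters are just placeholders):

    % Flatten the image into an N-by-3 list of RGB points.
    im = im2double(imread('image.png'));   % placeholder file name
    X  = reshape(im, [], 3);

    % Cluster the pixel colors with MATLAB's built-in k-means.
    k = 5;                                 % placeholder number of clusters
    [idx, centers] = kmeans(X, k, 'Replicates', 3);

    % Rebuild a label image to visualize the clusters.
    labels = reshape(idx, size(im, 1), size(im, 2));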

This is what I get after applying K-means on an image:

[image: K-means clustering result]

This is what it should look like:

[image: desired clustering result]

for an image like this:

[image: source image]

Can someone tell me where I'm going wrong, and what I can do to improve my results?


Note: Sorry for the low-res images, these are the best I have.

  • If it would help, I can refer you to slides and Matlab code about various clustering techniques in Matlab. Commented Nov 17, 2013 at 8:01
  • Sure, I'll take what I can get. Commented Nov 17, 2013 at 8:55
  • OK, please mail me: gil.levi100 "at" gmail.com and I'll send you back what I have. Commented Nov 17, 2013 at 12:02

3 Answers


Are you trying to replicate the results of this paper? I would say just do what they did.

However, I will add to this, since there are some issues with the current answers.

1) Yes, your clusters are not spherical, which is an assumption k-means makes. DBSCAN and MeanShift are two of the more common methods for handling such data, as they can handle non-spherical clusters. However, your data appears to have one large central clump that spreads outward in a few directions.

For DBSCAN, this means it will either put everything into one cluster or make every point its own cluster, since DBSCAN assumes roughly uniform density within clusters and requires that clusters be separated by some margin.
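To make the parameter sensitivity concrete, this is roughly what a call to MATLAB's dbscan (R2019a+, Statistics and Machine Learning Toolbox) would look like; epsilon and minpts below are placeholder values you would have to sweep, and on data like this no setting is likely to give a good split:

    % X is the N-by-3 matrix of pixel colors.
    epsilon = 0.05;   % neighborhood radius: too large merges everything,
                      % too small marks most points as noise (label -1)
    minpts  = 50;     % minimum neighbors for a core point
    idx = dbscan(X, epsilon, minpts);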

MeanShift will likely have difficulty because everything seems to be coming from one central lump, so that will be the area of highest density that the points shift toward, and they will converge to one large cluster.

2) My advice would be to change color spaces. RGB has issues, and the assumptions most algorithms make will probably not hold up well in it. Which clustering algorithm you should use will then likely change in the different feature space, but hopefully it will make the problem easier to handle.
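For example (just a sketch; L*a*b* is only one option, and the number of clusters is a placeholder):

    % Convert to L*a*b*, which separates lightness from chromaticity.
    lab = rgb2lab(im);                       % im is the RGB image, values in [0,1]
    ab  = reshape(lab(:, :, 2:3), [], 2);    % cluster on the a*, b* channels only

    % Whatever algorithm you pick, it now sees color differences more directly.
    idx = kmeans(ab, 5, 'Replicates', 3);    % placeholder: 5 clusters
    labels = reshape(idx, size(im, 1), size(im, 2));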


11 Comments

That is the paper I am trying to implement as part of another paper, but I'm having serious difficulty understanding what the author(s) are doing.
What you asked is not the same thing as what is done in that paper. If you have questions or parts you don't understand in the paper, that would be a different set of questions. Knowing the background also helps, but the paper is pretty straightforward. While the exact parameter values used are not stated, they give a step-by-step outline in section 3.
I seem to be misinterpreting the paper then; I thought section 3 was applied after having obtained the orthogonal clusters.
You should speak with your advisor; you need help learning how to read papers and understanding the prerequisite knowledge. There is nothing about orthogonality in that paper, and no step could be described as finding any components orthogonal to each other.
Seek a new advisor. I've answered the question that you did ask. If you don't understand what's going on in the paper, you need more than just getting a question or two answered. If your advisor thought k-means was going to replicate the paper's results, they either 1) didn't read it, 2) don't care, or 3) don't know what they are talking about.

k-means basically assumes clusters are approximately spherical. In your case they are definitely NOT. Try fitting a Gaussian to each cluster with a non-spherical covariance matrix. Basically, you will be following the same expectation-maximization (EM) steps as in k-means, with the only exception that you will be modeling and fitting the covariance matrix as well.

Here's an outline of the algorithm (a MATLAB sketch follows the list):

  1. Init: assign each point at random to one of the k clusters.
  2. For each cluster, estimate its mean and covariance.
  3. For each point, estimate its likelihood of belonging to each cluster.
    Note that this likelihood is based not only on the distance to the center (mean) but also on the shape of the cluster as encoded by the covariance matrix.
  4. Repeat steps 2 and 3 until convergence or until a pre-defined number of iterations is exceeded.
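A minimal MATLAB sketch of the above, using the built-in fitgmdist (which runs this EM procedure with full covariances) rather than hand-rolled code; the number of clusters and the regularization value are placeholders:

    % X is the N-by-3 matrix of pixel colors, k the number of clusters.
    k  = 5;                                          % placeholder
    gm = fitgmdist(X, k, ...
                   'CovarianceType', 'full', ...     % full, non-spherical covariances
                   'RegularizationValue', 1e-5, ...  % keeps covariances well-conditioned
                   'Replicates', 3);

    % Hard-assign each point to its most likely Gaussian.
    idx = cluster(gm, X);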



Take a look at density-based clustering algorithms, such as DBSCAN and MeanShift. If you are doing this for segmentation, you might want to add pixel coordinates to your feature vectors.
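As a rough sketch (the spatial weight is a made-up parameter you would have to tune):

    % Build [color, x, y] feature vectors so nearby pixels tend to cluster together.
    [h, w, ~] = size(im);
    [xx, yy]  = meshgrid(1:w, 1:h);

    colors = reshape(im, [], 3);
    coords = [xx(:), yy(:)] / max(h, w);   % scale coordinates to roughly [0, 1]

    spatialWeight = 0.5;                   % placeholder trade-off parameter
    features = [colors, spatialWeight * coords];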

