Non-Parametric Methods Like K-Nearest-Neighbours in High Dimensional Feature Space

The main idea of k-Nearest-Neighbour is to take the $k$ nearest points into account and decide the classification of the data by majority vote. If so, then it should not have problems with higher-dimensional data, because methods like locality sensitive hashing can efficiently find nearest neighbours.
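For concreteness, here is a minimal brute-force sketch of that voting rule (plain NumPy, no LSH index; the function name and toy data are illustrative, not from any particular library):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=5):
    """Classify x_query by majority vote among its k nearest training points.

    Brute-force Euclidean distances; an LSH index would replace this full
    distance scan to find (approximate) nearest neighbours more efficiently.
    """
    # Distance from the query to every training point.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k smallest distances.
    nearest = np.argpartition(dists, k)[:k]
    # Majority vote among the labels of those neighbours.
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy usage: two well-separated clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(knn_predict(X, y, np.array([4.5, 5.2]), k=5))  # expected: 1
```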

In addition, feature selection with Bayesian networks can reduce the dimension of the data and make learning easier.
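As a rough illustration of what such feature selection buys (using mutual-information scores from scikit-learn as a simple stand-in for the Bayesian-network criterion, so this is only a sketch of the general idea), a 50-dimensional problem can be cut down to its few informative coordinates:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# 500 samples, 50 features, only 5 of which actually carry class information.
X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)

# Keep the 5 features with the highest estimated mutual information with y.
selector = SelectKBest(mutual_info_classif, k=5)
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)  # (500, 50) -> (500, 5)
```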

However, this review paper by John Lafferty in statistical learning points out that non-parametric learning in high-dimensional feature spaces is still a challenge and remains unsolved.

What's going wrong?