
In $k$-nearest neighbors, the value of $k$ strongly affects classification accuracy. What are the pros and cons of choosing a smaller value of $k$ versus a larger one?


2 Answers


There is no simple answer. The standard approach to choosing $k$ is to try different values and see which yields the best accuracy on your particular data set, using cross-validation or hold-out sets (i.e., a training-validation-test split).
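For concreteness, here is a minimal sketch of that search using scikit-learn's `GridSearchCV`; the library, the iris data set, and the range of candidate $k$ values are my own illustrative assumptions, not part of the original answer:

```python
# A minimal sketch: pick k by 5-fold cross-validated accuracy.
# scikit-learn, the iris data, and the range k = 1..30 are assumptions
# made for illustration only.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try each candidate k and keep the one with the best mean CV accuracy.
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": list(range(1, 31))},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

In a full training-validation-test workflow you would additionally hold out a final test set and evaluate the selected $k$ on it once, so the reported accuracy is not biased by the search itself.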

Intuitively, $k$-nearest neighbors tries to approximate a locally smooth function; larger values of $k$ provide more "smoothing", which may or may not be desirable.


This is a matter of parameter tuning. Sweep $k$ from small to large values and record the accuracy at each setting. On the whole, a small $k$ makes the model predict more locally (each prediction depends on only a few neighbors, so it is sensitive to noise), while a large $k$ makes it predict more globally (each prediction is averaged over many neighbors, smoothing the decision boundary).
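A hedged sketch of that sweep is below; scikit-learn, the iris data, and the 70/30 split are illustrative choices I am assuming, not details from the answer:

```python
# A minimal sketch: sweep k and track validation accuracy at each value.
# The data set and split proportions are assumptions for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Small k: very local, noise-sensitive decisions.
# Large k: smoother, more global decisions.
for k in range(1, 16):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k={k:2d}  validation accuracy={model.score(X_val, y_val):.3f}")
```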

