In K-Nearest Neighbor, the value of k determines the accuracy of classification. What are the pros and cons of choosing a smaller value of k versus a larger one?
2 Answers
There is no simple answer. The standard approach is to try different values of $k$ and see which gives the best accuracy on your particular data set, using cross-validation or hold-out sets (i.e., a training-validation-test split).
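For illustration, here is a minimal sketch of such a sweep using scikit-learn; the dataset, the fold count, and the range of candidate $k$ values are placeholders you would replace with your own:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)  # placeholder dataset

# Sweep over candidate k values, recording mean cross-validated accuracy.
candidate_ks = range(1, 31)
scores = []
for k in candidate_ks:
    knn = KNeighborsClassifier(n_neighbors=k)
    # 5-fold cross-validation; take the mean accuracy across folds.
    scores.append(cross_val_score(knn, X, y, cv=5).mean())

best_k = candidate_ks[int(np.argmax(scores))]
print(f"best k = {best_k}, CV accuracy = {max(scores):.3f}")
```

Note that the best $k$ on one data set tells you nothing about another; the sweep has to be repeated per problem.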
Intuitively, $k$-nearest neighbors tries to approximate a locally smooth function; larger values of $k$ provide more "smoothing", which may or may not be desirable.
This is a matter of parameter tuning. You should vary the value of k from low to high and keep track of the accuracy at each value. On the whole, if you choose a small value of k, your kNN model will learn to predict more locally, whereas with a large value of k it will learn to predict more globally.
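As a rough illustration of this local-versus-global trade-off, the sketch below uses an arbitrary noisy synthetic dataset and arbitrary k values (1 and 51) to compare the two regimes; typically k=1 memorizes the training points, noise included, while the larger k smooths over them:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data with label noise (flip_y), so a k=1 model can overfit it.
X, y = make_classification(n_samples=1000, n_features=10,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 51):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    # k=1: perfect training accuracy (very local), weaker test accuracy.
    # k=51: training and test accuracy move closer together (more global).
    print(f"k={k:2d}: train acc={knn.score(X_train, y_train):.3f}, "
          f"test acc={knn.score(X_test, y_test):.3f}")
```

The gap between training and test accuracy at k=1 is the cost of predicting too locally; as k grows the gap shrinks, at the price of washing out genuinely local structure.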