In some articles, it's said that KNN uses Hamming distance for one-hot encoded categorical variables. Does the scikit-learn implementation of KNN follow the same approach?
Also, are there any other ways to handle categorical input variables when using KNN?
As stated in the docs, the KNeighborsClassifier from scikit-learn uses the Minkowski distance by default (with p=2, which is equivalent to Euclidean distance), not Hamming distance.
Other metrics can be selected via the `metric` parameter; the docs for scikit-learn's DistanceMetric class give a good overview of the available options, which include Hamming distance.
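For example, a minimal sketch of switching KNeighborsClassifier to Hamming distance on one-hot encoded data (the data here is a made-up toy example):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy one-hot encoded data: each row encodes a single categorical
# feature with three levels (hypothetical example).
X = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
])
y = np.array([0, 1, 1, 0])

# The default metric is Minkowski with p=2 (Euclidean); passing
# metric='hamming' makes the classifier use Hamming distance instead.
clf = KNeighborsClassifier(n_neighbors=1, metric="hamming")
clf.fit(X, y)

# The query row is identical to two training rows labelled 0,
# so its Hamming distance to them is 0 and the prediction is 0.
print(clf.predict([[1, 0, 0]]))  # → [0]
```

Note that on purely one-hot encoded data, Euclidean and Hamming distances produce the same neighbour rankings (they are monotonic transformations of each other there), so the choice of metric matters most when you mix one-hot columns with continuous features.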