
**K-nearest neighbour** is a classic, simple, and well-known classification algorithm. It is a supervised learning algorithm.

Let's suppose we have a set of *Q* points placed in an *n-dimensional* space. In this space, the distance between the points (elements) we need to classify is measured using some distance metric, such as the Euclidean or Manhattan distance.
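As a minimal sketch, two of the most common distance metrics could be written like this (the function names are illustrative, not from the original article):

```python
import math

def euclidean(p, q):
    # Straight-line distance between two n-dimensional points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # Sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(p, q))
```

Which metric fits best depends on the data; Euclidean distance is the usual default.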

When a new element *E* needs to be classified, the algorithm uses the following idea: find the K nearest neighbours of *E* and assign *E* to the class held by the majority of them.

In case of a tie, we can break it in several ways: discard the furthest of the K neighbours until the tie is broken, use an evaluation function to choose the class (a preferential class, for example), or apply some other implementation-dependent rule.
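The idea above can be sketched in a few lines of Python. This is an assumed implementation using Euclidean distance and the first tie-breaking rule (dropping the furthest neighbour); the function name and data layout are hypothetical:

```python
from collections import Counter
import math

def knn_classify(training, new_point, k):
    # training: list of (point, label) pairs
    # Order the training points by Euclidean distance to the new point
    by_distance = sorted(training, key=lambda item: math.dist(item[0], new_point))
    neighbours = [label for _, label in by_distance[:k]]
    while True:
        counts = Counter(neighbours).most_common()
        # Majority vote; on a tie, drop the furthest neighbour and revote
        if len(counts) == 1 or counts[0][1] > counts[1][1]:
            return counts[0][0]
        neighbours.pop()
```

For example, with training data `[((1, 1), "A"), ((2, 2), "A"), ((8, 8), "B"), ((9, 9), "B")]` and K = 3, the point `(2, 1)` would be assigned to class `"A"`, since two of its three nearest neighbours carry that label.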

The algorithm is very simple and gives good results in practice. However, it is sensitive to outliers, and points lying far away from the training set are not classified very well.

Of course, K can vary according to data size, computational power, the time available for classification, or other criteria.
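One common way to pick K, not described in the original but sketched here as an assumption, is to score each candidate value on held-out points. The example below uses leave-one-out evaluation with a simple majority-vote predictor; all names and the sample data are hypothetical:

```python
from collections import Counter
import math

def predict(training, point, k):
    # Majority vote among the k nearest training points (Euclidean distance)
    nearest = sorted(training, key=lambda t: math.dist(t[0], point))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def best_k(data, candidates):
    # Leave-one-out accuracy for each candidate K; return the best scorer
    def accuracy(k):
        hits = sum(
            predict(data[:i] + data[i + 1:], p, k) == label
            for i, (p, label) in enumerate(data)
        )
        return hits / len(data)
    return max(candidates, key=accuracy)
```

This also illustrates the outlier sensitivity mentioned above: with a single mislabelled outlier sitting inside a cluster, K = 1 copies the outlier's label onto its neighbours, while a larger K votes it down.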