---
title: "kNN for Classification: A Primer"
params:
  category: 3
  stacks: 0
  number: 409
  time: 90
  level: beginner
  tags: _kNN_,machine learning,classification,caret,class
  description: "Explains the _kNN_ (k Nearest Neighbor) machine learning
                algorithm for predicting a categorical target variable
                (classification). Demonstrates how to train and evaluate
                a _kNN_ model in R."
date: "<small>`r Sys.Date()`</small>"
author: "<small>Martin Schedlbauer</small>"
email: "m.schedlbauer@neu.edu"
affiliation: "Northeastern University"
output: 
  bookdown::html_document2:
    toc: true
    toc_float: true
    collapsed: false
    number_sections: false
    code_download: true
    theme: lumen
    highlight: tango
---

---
title: "<small>`r params$category`.`r params$number`</small><br/><span style='color: #2E4053; font-size: 0.9em'>`r rmarkdown::metadata$title`</span>"
---

```{r code=xfun::read_utf8(paste0(here::here(),'/R/_insert2DB.R')), include = FALSE}
```

------------------------------------------------------------------------

## Objectives

Upon completion of this lesson, you will be able to:

-   define the *kNN* algorithm
-   know when to use *kNN*
-   tune the value of the hyper-parameter *k*
-   appreciate the need for feature scaling
-   recognize why outliers can be problematic for *kNN*

------------------------------------------------------------------------

## Introduction

The k-Nearest Neighbors (*kNN*) algorithm is a simple yet powerful supervised learning method used for classification and regression tasks. It is a **non-parametric** and **instance-based** learning algorithm, meaning that it does not assume a specific functional form for the data and instead memorizes the training data to make predictions.

The fundamental idea behind *kNN* is that similar data points tend to be found in close proximity to one another. The algorithm classifies or predicts the target value of a new data point by considering the target values of its nearest neighbors in the training data.

k-Nearest Neighbors (*k-NN* or *kNN*) is used for both classification (predicting a categorical variable) and regression (predicting a numeric variable). It is called 'lazy learning' because it does not build a model during a training phase but instead memorizes the training dataset.

Applications of *kNN* include recommendation systems, image recognition, and text categorization, among others.

This lesson will only consider *kNN* for classification tasks.

## Overview of the Algorithm

The basic steps involved in the k-NN algorithm are:

1.  **Choose the value of *k* and a distance metric**: The number of neighbors (*k*) and the distance metric are chosen to suit the problem context. For example, you could choose *k = 3* and use Euclidean distance as the metric.

2.  **Find the *k* nearest neighbors of the sample that you want to classify**: For each data point that you want to make a prediction for, you identify the *k* points in the training dataset that are "closest" according to the chosen distance metric. While Euclidean distance is the most common choice, other distance metrics (*e.g.*, Manhattan or Minkowski distance) can be used depending on the data.

3.  **Assign the label**: For a classification problem, the new data point is assigned the class that holds the majority among its *k* nearest neighbors. For a regression problem, the new data point is typically assigned the mean (or median) of the target values of its *k* nearest neighbors.

*kNN* is a simple algorithm that is easy to understand and implement. It works well with smaller datasets that have fewer dimensions (features), and where the data points are clearly separable. However, it may not be suitable for large, high-dimensional datasets, as the computation of distances can become very expensive. It is also sensitive to irrelevant or correlated features, which can impact the calculation of "closest" neighbors. The model does not perform an explicit training phase; it simply stores the dataset and makes predictions dynamically based on the stored instances.

An important aspect of *kNN* is the choice of *k*. A small value of *k* can make the algorithm sensitive to noise, while a large value of *k* makes the boundaries between classes less distinct. Typically, the optimal value of *k* is selected using techniques such as cross-validation.
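
As a quick sketch of such tuning (an illustrative hold-out evaluation rather than full cross-validation), we can score several candidate values of *k* on a validation split. The `knn()` function from the **class** package and the 80/20 split used here are arbitrary choices for illustration; the *iris* features are all in centimeters, so scaling is skipped in this sketch:

```{r}
library(class)

# Illustrative hold-out tuning: try several odd k and compare accuracy
set.seed(42)
idx   <- sample(nrow(iris), 0.8 * nrow(iris))
train <- iris[idx, ]
valid <- iris[-idx, ]

for (k in c(1, 3, 5, 7, 9)) {
  pred <- knn(train[, 1:4], valid[, 1:4], cl = train$Species, k = k)
  cat("k =", k, " accuracy =", round(mean(pred == valid$Species), 3), "\n")
}
```

Odd values of *k* are commonly tried because they reduce the chance of ties in two-class problems.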

To illustrate how *kNN* works, consider the following dataset of points plotted on a two-dimensional plane:

$$
\begin{array}{|c|c|c|}
\hline
x_1 & x_2 & \text{Class} \\
\hline
1 & 2 & A \\
2 & 3 & A \\
3 & 1 & B \\
5 & 4 & B \\
4 & 3 & A \\
\hline
\end{array}
$$

Suppose we need to classify a new data point at (3,2). If $k = 3$, we find the three closest points using the Euclidean distance between two points $p = (p_1, p_2)$ and $q = (q_1, q_2)$:

$$
d(p, q) = \sqrt{(p_1 - q_1)^2 + (p_2 - q_2)^2}
$$

After computing distances, we identify the closest three points and assign the most frequent class.
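
The arithmetic can be checked with a few lines of R (a standalone sketch of the toy example above, not part of the lesson's later walkthrough):

```{r}
# Toy training data from the table above
pts <- data.frame(x1    = c(1, 2, 3, 5, 4),
                  x2    = c(2, 3, 1, 4, 3),
                  class = c("A", "A", "B", "B", "A"))
new <- c(3, 2)

# Euclidean distances from (3,2) to each training point
d <- sqrt((pts$x1 - new[1])^2 + (pts$x2 - new[2])^2)
round(d, 2)                     # 2.00 1.41 1.00 2.83 1.41

# Majority vote among the k = 3 nearest neighbors
nn <- pts$class[order(d)[1:3]]  # classes of the three closest points
names(which.max(table(nn)))     # "A"
```

The three nearest neighbors are (3,1), (2,3), and (4,3) with classes B, A, and A, so the majority class A is assigned.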

In the narrated presentation below, Khoury Boston's Dr. Schedlbauer provides an introduction to the *kNN* machine learning algorithm for predicting categorical target variables.

<iframe src="https://player.vimeo.com/video/830397363?h=4b41fa001a&amp;title=0&amp;byline=0&amp;portrait=0&amp;speed=0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen title="Understanding _kNN_" data-external="1">

</iframe>

## Required Data Shaping

Prior to applying *kNN*, the data must be shaped:

1.  remove outliers and extreme values
2.  impute or remove missing feature values
3.  normalize numeric features to a common scale
4.  encode categorical features

For more information on how to shape the data for use with *kNN*, consult these lessons:

1.  [3.203 -- Detecting and Managing Outliers](http://artificium.us/lessons/03.ml/l-3-203-outliers/l-3-203.html)
2.  [3.204 -- Managing Missing Values in Data](http://artificium.us/lessons/03.ml/l-3-204-missing-values/l-3-204.html)
3.  [3.206 -- Normalizing Numeric Features for Machine Learning Algorithms](http://artificium.us/lessons/03.ml/l-3-206-feature-normalization/l-3-206.html)
4.  [3.207 -- Encoding Categorical Features](http://artificium.us/lessons/03.ml/l-3-207-categorical-encoding/l-3-207.html)

Step (3), normalizing all features to a common scale, is essential for *kNN* and cannot be omitted.

## Feature Scaling

Feature scaling, also known as *normalization* or *standardization*, is the process of transforming numerical features in a dataset so that they have a consistent scale. This is critical in many machine learning algorithms, including **k-Nearest Neighbors (kNN)**, to prevent features with larger numerical ranges from dominating those with smaller ranges.

Feature scaling is necessary in machine learning models that rely on distance-based calculations, such as k-Nearest Neighbors (kNN), Support Vector Machines (SVM), Principal Component Analysis (PCA), and Gradient Descent-based algorithms (*e.g.*, Regression, Neural Networks).

Without scaling, features with larger ranges (*e.g.*, "Annual Salary" in thousands vs. "Age" in years) can disproportionately influence distance calculations, leading to biased results. Scaling generally improves *classification accuracy* in distance-based algorithms.
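
A tiny numeric illustration (with made-up age and salary values) shows the effect:

```{r}
# Two hypothetical people: a 35-year age gap vs. a $1,000 salary gap
a <- c(age = 25, salary = 50000)
b <- c(age = 60, salary = 51000)

# Unscaled Euclidean distance is dominated by the salary feature
sqrt(sum((a - b)^2))   # about 1000.6 -- the age difference barely registers
```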

There are two primary methods of feature scaling:

1.  Min-Max Normalization (Rescaling to [0,1] or [-1,1])
2.  Z-Score Standardization (Mean = 0, Standard Deviation = 1)

### Min-Max Normalization

Min-max normalization (or rescaling) scales the features to a fixed range, typically [0,1] or [-1,1]. This ensures that all values fall within the same range.

The formula for min-max normalization is:

$$
X_{\text{scaled}} = \frac{X - X_{\text{min}}}{X_{\text{max}} - X_{\text{min}}}
$$

In R, we can normalize a dataset by building a simple function:

```{r}
# Define a min-max normalization function
min_max_normalize <- function(x) {
  return((x - min(x)) / (max(x) - min(x)))
}

# apply scaling to numeric features of the 'iris' dataset
data(iris)
iris_normalized <- as.data.frame(lapply(iris[, 1:4], min_max_normalize))

# View scaled data
head(iris_normalized)
```

### Z-Score Standardization

Z-score standardization (or standard scaling) transforms the feature distribution to have a *mean of 0* and *standard deviation of 1*.

The formula for standardization is:

$$
X_{\text{standardized}} = \frac{X - \mu}{\sigma}
$$

where:

-   $\mu$ is the mean of the feature
-   $\sigma$ is the standard deviation

In R, we can use the Base R `scale()` function for standardization or write our own function:

```{r}
# Standardize the dataset using scale()
iris_standardized <- as.data.frame(scale(iris[, 1:4]))

# View standardized data
head(iris_standardized)
```
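
A hand-rolled version (a sketch equivalent to applying `scale()` column by column; the function name `z_standardize` is our own) might look like this:

```{r}
# Z-score standardization by hand: subtract the mean, divide by the sd
z_standardize <- function(x) {
  (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
}

# apply to the numeric features of iris
iris_std2 <- as.data.frame(lapply(iris[, 1:4], z_standardize))
head(iris_std2)
```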

*Standardization* is preferred when data follows a *normal distribution* and distance-based algorithms (*e.g.*, *kNN* or regression) are used. However, since there is a presumption of normality, we must ensure that the features are reasonably normally distributed. Use a visual method such as a Q-Q Plot or a statistical test such as the Shapiro-Wilk Test to verify the assumption.
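
Base R provides both kinds of check, shown here on a single feature for illustration (interpret the Shapiro-Wilk p-value with the usual caveats about sample size):

```{r}
# Visual check: points should fall near the line if the feature is normal
qqnorm(iris$Sepal.Width)
qqline(iris$Sepal.Width)

# Statistical check: a small p-value (e.g., < 0.05) suggests non-normality
shapiro.test(iris$Sepal.Width)
```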

> Since *kNN* relies on distance, **scaling is necessary**. All features must be scaled using the same approach.

### Selecting Scaling Method

-   Use **Min-Max Normalization** when data **does not follow a normal distribution** or when feature values must be within a fixed range.
-   Use **Z-Score Standardization** when data **follows a normal distribution** and algorithms rely on distance metrics.
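
The guarantees of the two methods can be verified directly (a quick sketch on one feature):

```{r}
x  <- iris$Sepal.Length
mm <- (x - min(x)) / (max(x) - min(x))   # min-max: bounded range
zs <- (x - mean(x)) / sd(x)              # z-score: mean 0, sd 1

range(mm)            # 0 1
c(mean(zs), sd(zs))  # approximately 0, and 1
```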

## Classifying with *kNN* in R

To demonstrate *kNN* in R, we will use the well-known built-in *iris* dataset. The dataset consists of measurements of **sepal length, sepal width, petal length, and petal width**, along with the species of the iris flower as the target variable. The target variable takes one of several pre-defined values, so it is categorical. Predicting a categorical variable is referred to as a *classification task*, as we want to "classify" which type of iris flower a plant most likely is based on these four measurements.

We will perform the following steps:

1.  Load the dataset and explore it. The data set is most commonly in a CSV file, but it must be tabular and have each case on a row. One of the columns must be the target variable. The data generally comes from observation and the assignment of classes is often done manually.
2.  Shape the data. kNN makes specific assumptions about the data. The data should not have outliers, no missing values, and the scales of all features must be normalized. In addition, all of the features must be numeric, so any categorical features must be encoded.
3.  Split the data into training and testing sets. This is an essential step in machine learning. We choose a random subset of the data to build the model and then use the held-back data to determine how well the model works by comparing the predictions made by the model against the actual values of the target variable. This allows us to calculate metrics such as accuracy, which measures how often the algorithm made the correct prediction. By doing this we can evaluate the effectiveness of the classification model and also compare it to other models.
4.  Train a *kNN* model using the `class::knn()` function. There are numerous implementations of *kNN* found in a variety of packages. Other packages that include an implementation of *kNN* are **caret** and **FNN**.
5.  Make predictions on the test data. This is the validation step where the algorithm is used to make predictions for each of the held-back cases and comparing them to the actual observed value which allows calculating metrics such as accuracy.
6.  Evaluate the model by calculating accuracy for classification. Evaluating classification models, including k-Nearest Neighbors (*kNN*), involves using various performance metrics to assess how well the model distinguishes between classes. These metrics provide insights into the accuracy, robustness, and reliability of the model's predictions. Accuracy is the proportion of correctly predicted values for the target variable in the held-back test data. It is generally reported as a percentage. Other evaluation metrics include precision, recall, Kappa Statistic, and the *F1*-Score.

### 1: Load Data

There are a number of packages that are needed and that must first be installed and then loaded:

```{r loadPackgs, warning=F, message=F}
library(class)
library(caret)
```

### 2: Inspect Structure

First, we load the *iris* dataset and inspect its structure. The *iris* dataset is "built-in" and thus does not have to be loaded from an external file.

```{r}
# Load the iris dataset
data(iris)
```

Let's check the structure of the dataset:

```{r}
# Check the structure of the dataset
str(iris)
```

And let's look at the first few rows:

```{r}
# View first few rows
head(iris, 4)
```

So, we can see that *iris* has 150 observations (rows or cases), with four numeric features and one categorical variable (*Species*), which is our target variable.

### 3: Shape Data

#### A: Identify Outliers

The first step is to identify whether any of the numeric features have outliers or extreme values. The most common rule of thumb is to consider any value that is more than 3 standard deviations from the mean an outlier. Outliers are then either removed or treated as missing values. We need to be careful to ignore missing values in the calculation of the mean and standard deviation as those have not yet been dealt with.

```{r}
## copy data so we do not pollute the original data
df <- iris

## identify the numeric columns
numeric.cols <- which(sapply(df, is.numeric))

for (c in numeric.cols) {
  ## find outliers in the numeric column: values more than
  ## 3 standard deviations from the mean (ignoring NAs)
  m <- mean(df[,c], na.rm = T)
  s <- sd(df[,c], na.rm = T)
  
  outliers <- which(abs((df[,c] - m) / s) > 3.0)
  
  if (length(outliers) > 0) {
    ## report the outliers; they could be replaced with NA and imputed later
    cat("Found outliers in column '", names(df)[c], "': \n")
    cat("   --> ", df[outliers,c], "\n\n")
  }
}
```

There is one outlier, but it is not too far from the mean, so we will simply ignore it.

Lesson [3.203 -- Detecting and Managing Outliers](http://artificium.us/lessons/03.ml/l-3-203-outliers/l-3-203.html) provides a more detailed treatment of this subject.

#### B: Manage Missing Values

Missing values must be addressed either by eliminating any observation (row) that has a missing value in any of its features or by imputing the missing value with an estimate. For numeric features, a common strategy is to impute the missing value with the mean or median, while for categorical features, the mode is often used.

The first step is to check each feature for missing values. While we can certainly check each feature manually, iterating over all columns is more practical. For numeric features we can check for *NA* with the function `is.na()`, while for character (text) features we also need to check whether the value is the empty string ("").

```{r checkMissingValues}
num.Rows <- nrow(df)
num.Cols <- ncol(df)

found <- F

for (c in 1:num.Cols){
  missing.Values <- which(is.na(df[,c]) | df[,c] == "")
  num.Missing.Values <- length(missing.Values)
  if (num.Missing.Values > 0) {
    print(paste0("Column '", names(df)[c], "' has ", num.Missing.Values, " missing values"))
    found <- T
  }
}

if (!found) {
  print("no missing values detected")
}
```

There are no missing values in this dataset. If you want to know more about how to deal with missing values, study [Lesson 3.204 -- Managing Missing Values in Data](http://artificium.us/lessons/03.ml/l-3-204-missing-values/l-3-204.html).
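Had there been missing numeric values, a simple median-imputation sketch (hypothetical here, since *iris* is complete) could look like this:

```{r}
# sketch: impute missing numeric values with the column median
# (hypothetical for iris, which has no missing values)
df.imp <- iris

for (col in names(df.imp)) {
  if (is.numeric(df.imp[[col]])) {
    missing <- is.na(df.imp[[col]])
    if (any(missing)) {
      df.imp[missing, col] <- median(df.imp[[col]], na.rm = TRUE)
    }
  }
}

sum(is.na(df.imp))
```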

#### C: Encode Categorical Features

So far, we have primarily dealt with the numeric features. Let's turn our attention to the categorical features. Binary categorical features (where there are only two possible values) should be turned into a 0/1 encoding. Multi-level categorical features are often encoded with one-hot dummy codes, their frequency, or one of several other methods. At the end of the day, we must convert categorical features to numeric features. Lesson [3.207 - Encoding Categorical Features](http://artificium.us/lessons/03.ml/l-3-207-categorical-encoding/l-3-207.html) provides a more detailed treatment of this subject.

There are no categorical features in this dataset, so we can move on to the next step.
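For datasets that do contain multi-level categorical predictors, a minimal one-hot encoding sketch uses `model.matrix()` from base R; here *Species* is encoded purely for illustration (as the target variable, it would normally remain a factor):

```{r}
# sketch: one-hot (dummy) encode a categorical feature with model.matrix();
# the "- 1" drops the intercept so each level gets its own 0/1 column
dummies <- model.matrix(~ Species - 1, data = iris)
head(dummies, 3)
```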

#### D: Scale Numeric Features

Numeric features must be scaled for *kNN* to perform well. We will use *z-score standardization* as our method for scaling and use the `scale()` function from Base R to do so:

```{r}
# standardize the numeric features using scale()
iris.scaled <- as.data.frame(scale(iris[, 1:4]))

# add the species into the scaled dataframe
iris.scaled$Species <- iris$Species

# view standardized data
head(iris.scaled,4)
```

Going forward, we will use the scaled dataset in `iris.scaled` rather than the original unscaled dataset.
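A quick sanity check confirms that the standardization behaved as expected, *i.e.*, every scaled feature now has a mean of (approximately) 0 and a standard deviation of 1 (the scaling is recomputed here so the chunk stands on its own):

```{r}
# sanity check: each standardized feature should have mean ~0 and sd 1
scaled.check <- scale(iris[, 1:4])
round(colMeans(scaled.check), 2)
round(apply(scaled.check, 2, sd), 2)
```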

### 4: Split Data

To evaluate our model, we split the data into a *training set (80%)* and a *test set (20%)*. The proportion held back for testing is generally 20%, but for smaller datasets we may choose 10% or 15%. When the target classes do not all occur in the same proportion, we may end up with class imbalance, *i.e.*, in our case, an iris species might appear in the testing data but not in the training data because it has too few occurrences.

When we have class imbalance we need to select the training and test data differently, using methods such as stratified sampling or SMOTE. Lesson [3.224 - Managing Class Imbalance](http://artificium.us/lessons/03.ml/l-3-224-managing-class-imbalance/l-3-224.html) provides an introduction to these methods.

Now, *kNN* involves two elements of "randomness": the selection of the subset sample for training and dealing with ties during prediction. So, if we use random numbers then every time we run the code we may get different results. To avoid this, we need to set a "seed" for the random number generator so it generates the same random numbers every time. And, naturally, we choose 42 as the seed as that is the answer to the universe[^1].

[^1]: The number 42 holds a special place in computer science and geek culture primarily due to its reference in Douglas Adams' novel *The Hitchhiker’s Guide to the Galaxy*. In the book, a group of hyper-intelligent beings builds a supercomputer named *Deep Thought* to find the "Answer to the Ultimate Question of Life, the Universe, and Everything." After seven and a half million years of computation, the machine finally reveals the answer: "42." The problem, however, is that nobody knows anymore what the actual question was.

```{r}
# Set seed for reproducibility
set.seed(42)
```

So, now we can split the data by selecting a random subset for training (to make predictions) and the held-back data to evaluate the accuracy and performance of the model. While there are many ways to do this, we are using the `createDataPartition()` function from the **caret** package. This function is commonly used for splitting data into training and test sets (without replacement) while maintaining the distribution of the target variable (stratified sampling). It is particularly useful for classification tasks to ensure that both sets contain representative proportions of each class.

```{r}
# Split the dataset into training and testing sets
trainIndex <- createDataPartition(iris.scaled$Species, p = 0.8, list = FALSE)
trainData <- iris.scaled[trainIndex, ]
testData <- iris.scaled[-trainIndex, ]
```

Just a quick check of the dimensions to be sure it all went well.

```{r}
# Check the dimensions
dim(trainData)
dim(testData)
```

At this point, the training set contains approximately 120 observations, while the test set contains 30.
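We can also confirm that `createDataPartition()` stratified the split, *i.e.*, each species keeps roughly the same 80/20 proportion in both sets (the split is repeated here with the same seed so the chunk stands on its own):

```{r}
library(caret)

set.seed(42)
idx <- createDataPartition(iris$Species, p = 0.8, list = FALSE)

# each of the three species should contribute roughly 40 training
# and 10 test cases
table(iris$Species[idx])
table(iris$Species[-idx])
```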

### 5: Train Model

Since *kNN* does not require explicit training like regression, we simply prepare the features and use the `knn()` function from the **class** package. The function requires a vector of the known target variables (this is the key to "supervised machine learning") to be separate from the predictor features.

```{r}
# Extract features (predictors) and target variable
trainX <- trainData[, -5]       # predictor features (drop the Species column)
trainY <- trainData$Species     # target variable

testX <- testData[, -5]         # predictor features (drop the Species column)
testY <- testData$Species       # target variable

# Choose k (e.g., k = 5)
k_value <- 5

# Perform _kNN_ classification on the test data using the training data
predictions <- class::knn(train = trainX, test = testX, cl = trainY, k = k_value)
```

The predictions for the observations in the testing data, made using the training data, are stored in the variable `predictions`.

The choice of *k*, *i.e.*, how many of the "closest" neighbors are considered when determining the predicted class, is arbitrarily set to 5 in this example. Of course, we need to find the "optimal" *k* to get the "best" predictions by trying various values of *k* on the testing data and on different training samples. There are methods for this, such as k-fold cross-validation or bagging, but those are beyond the scope of this primer. In practice, a common rule of thumb is to start with a *k* near the square root of the number of observations in the training data, preferring an odd value to reduce ties.
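As a quick sketch of how one might compare candidate values (proper tuning would use cross-validation rather than a single test set), we can loop over odd values of *k* and record the test-set accuracy for each:

```{r}
library(class)
library(caret)

set.seed(42)

# rebuild the standardized data and the split so the chunk stands on its own
scaled <- as.data.frame(scale(iris[, 1:4]))
scaled$Species <- iris$Species
idx <- createDataPartition(scaled$Species, p = 0.8, list = FALSE)

# try odd values of k and record the accuracy on the held-back test data
ks <- seq(1, 21, by = 2)
acc <- sapply(ks, function(k) {
  pred <- knn(train = scaled[idx, 1:4], test = scaled[-idx, 1:4],
              cl = scaled$Species[idx], k = k)
  mean(pred == scaled$Species[-idx])
})

data.frame(k = ks, accuracy = acc)
```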

### 6: Evaluate Model

To measure how well our model performs, we compute the accuracy by comparing the predicted and actual values. Accuracy measures the percentage of correct predictions, *i.e.*, in our case, the fraction of test observations for which the algorithm correctly predicted the iris species.

```{r}
# Calculate accuracy
accuracy <- sum(predictions == testY) / length(testY)
```

This results in a model accuracy of `r round(accuracy * 100, 2)`%. Naturally, the accuracy must be substantially better than random guessing, which for three balanced classes would be correct only about a third of the time.
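Accuracy alone does not reveal *which* species are confused with one another; a confusion matrix does. The sketch below rebuilds the prediction with the same seed and *k* = 5 so the chunk stands on its own; off-diagonal counts are misclassifications:

```{r}
library(class)
library(caret)

set.seed(42)

scaled <- as.data.frame(scale(iris[, 1:4]))
scaled$Species <- iris$Species
idx <- createDataPartition(scaled$Species, p = 0.8, list = FALSE)

pred <- knn(train = scaled[idx, 1:4], test = scaled[-idx, 1:4],
            cl = scaled$Species[idx], k = 5)

# rows are predicted species, columns are actual species
table(Predicted = pred, Actual = scaled$Species[-idx])
```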

### 7: Make Prediction

To make a prediction for a new unseen instance, we create a new data frame with a sample observation.

Let's assume we have been provided with the data for an iris flower with the following measurements and we want to determine to which species it likely belongs:

-   Sepal Length: 5.8
-   Sepal Width: 3.0
-   Petal Length: 4.2
-   Petal Width: 1.2

We construct this case as a data frame, standardize it using the same parameters as the training data (the means and standard deviations of the original features), and pass it to the "trained" *kNN* model using a *k* of 5. Without this scaling step, the raw measurements would not be comparable to the standardized training data and the prediction would be unreliable.

```{r}
# create dataframe containing the new observation to be classified
new.case <- data.frame(Sepal.Length = 5.8, Sepal.Width = 3.0, 
                       Petal.Length = 4.2, Petal.Width = 1.2)

# standardize the new case with the means and standard deviations of the
# original features, since the model was trained on standardized data
new.case.scaled <- as.data.frame(scale(new.case,
                                       center = colMeans(iris[, 1:4]),
                                       scale = apply(iris[, 1:4], 2, sd)))

# predict species for the new case using kNN with the training data
predicted.species <- knn(train = trainX, test = new.case.scaled, cl = trainY, k = 5)
```

Note that the prediction result is a variable of type "factor" which can be converted to a character string as follows:

```{r}
species <- as.character(predicted.species)
```

### 8: Report Result

*kNN* predicts (based on finding the 5 closest neighbors and seeing to which species they belong) that the new case is most likely of the '`r species`' species. Based on our measured accuracy, we estimate that this prediction is correct with a probability of about `r round(accuracy, 2)`.

## Summary

The k-Nearest Neighbors (*kNN*) algorithm is a versatile and intuitive method for both classification and regression tasks. Unlike other machine learning models, *kNN* does not explicitly learn a function; instead, it memorizes the training data and makes predictions dynamically based on similarity measures. Choosing the right value of **k** is important, as small values lead to high variance, while large values increase bias.

By implementing *kNN* in R, we illustrated how to shape and scale the data, split it into training and test sets, perform classification, and evaluate model performance.

------------------------------------------------------------------------

## Files & Resources

```{r zipFiles, echo=FALSE}
zipName = sprintf("LessonFiles-%s-%s.zip", 
                 params$category,
                 params$number)

textALink = paste0("All Files for Lesson ", 
               params$category,".",params$number)

# downloadFilesLink() is included from _insert2DB.R
knitr::raw_html(downloadFilesLink(".", zipName, textALink))
```

------------------------------------------------------------------------

## See Also

-   [3.101 -- Overview of the Machine Learning Pipeline](http://artificium.us/lessons/03.ml/l-3-101-intro-ml/l-3-101.html)
-   [3.208 -- Distance Measures](http://artificium.us/lessons/03.ml/l-3-208-distance-measures/l-3-208.html)
-   [3.203 -- Detecting and Managing Outliers](http://artificium.us/lessons/03.ml/l-3-203-outliers/l-3-203.html)
-   [3.224 -- Managing Class Imbalance](http://artificium.us/lessons/03.ml/l-3-224-managing-class-imbalance/l-3-224.html)
-   [3.207 - Encoding Categorical Features](http://artificium.us/lessons/03.ml/l-3-207-categorical-encoding/l-3-207.html)
-   [3.411 -- Simple Implementation of *kNN* in R](http://artificium.us/lessons/03.ml/l-3-411-_kNN_-implementation-r/l-3-411.html)

## References

No references.

## Errata

None collected yet. [Let us know](https://form.jotform.com/212187072784157){target="_blank"}.
