
## What is Boosting?

Boosting is an ensemble method that combines a set of weak learners into a single strong learner.

Example:

Suppose we want to classify an email as "spam" or "safe".

There are several simple rules we could apply:

- **Rule 1:** The email contains only links to websites. Decision: it is spam.
- **Rule 2:** The email comes from an official email address, e.g. [email protected]. Decision: it is not spam.
- **Rule 3:** The email requests private bank details, e.g. a bank account number or a father's/mother's name. Decision: it is spam.

Now the question is: are the three rules above enough to classify an email as "spam" or not?

**Answer:** No, these three rules are not enough. Each rule on its own is a weak learner, so we need to boost them into a stronger learner. Boosting does this by combining the weak learners and assigning a weight to each one.
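The weighted combination described above can be sketched in code. The following is a minimal illustration, not a real spam filter: the rule implementations, the hand-picked weights, and the email fields (`sender`, `body`) are all assumptions made for the example; in practice the weights would be learned by a boosting algorithm.

```python
def rule_only_links(email):
    # Rule 1: the body is nothing but links -> spam (+1)
    words = email["body"].split()
    return 1 if words and all(w.startswith("http") for w in words) else -1

def rule_official_sender(email):
    # Rule 2: the sender looks like an official address -> not spam (-1)
    # The "@company.com" domain is an illustrative assumption.
    return -1 if email["sender"].endswith("@company.com") else 1

def rule_asks_bank_details(email):
    # Rule 3: the email asks for private bank details -> spam (+1)
    return 1 if "bank account" in email["body"].lower() else -1

# Hand-picked weights for illustration; boosting would learn these
# from training data based on each rule's accuracy.
weighted_rules = [
    (0.5, rule_only_links),
    (0.8, rule_official_sender),
    (1.0, rule_asks_bank_details),
]

def classify(email):
    # Weighted vote: each weak rule contributes its weight times
    # its vote (+1 spam, -1 not spam); the sign of the total decides.
    score = sum(w * rule(email) for w, rule in weighted_rules)
    return "spam" if score > 0 else "not spam"
```

For example, an email from an unknown sender asking for a bank account number trips Rules 2 and 3, and the weighted vote labels it spam even though Rule 1 voted the other way.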

Boosting generally achieves higher accuracy than bagging.

**Types of boosting algorithm:**

Three main types of boosting algorithm are as follows:

- AdaBoost algorithm
- Gradient tree boosting algorithm
- XGBoost algorithm
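To show how an algorithm such as AdaBoost actually assigns the weights mentioned above, here is a toy implementation in plain Python. The 1-D dataset, the threshold "stump" learners, and the candidate thresholds are all made-up assumptions for illustration; real use would rely on a library implementation (e.g. scikit-learn's `AdaBoostClassifier`).

```python
import math

# Toy 1-D training data: points <= 3 are class +1, the rest -1.
X = [1, 2, 3, 4, 5, 6]
y = [1, 1, 1, -1, -1, -1]

def stump(threshold):
    # A decision stump: the weakest useful learner.
    return lambda x: 1 if x <= threshold else -1

def adaboost(X, y, thresholds, rounds=3):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform sample weights
    ensemble = []              # list of (alpha, stump) pairs
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        best = min(thresholds,
                   key=lambda t: sum(wi for wi, xi, yi in zip(w, X, y)
                                     if stump(t)(xi) != yi))
        h = stump(best)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        err = max(err, 1e-10)  # avoid division by zero / log(0)
        # alpha is the weight this weak learner gets in the final vote:
        # the lower its error, the larger its say.
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Re-weight samples: misclassified ones gain weight so the
        # next round focuses on them; then renormalize.
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    # Final strong learner: weighted vote of all the weak stumps.
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

model = adaboost(X, y, thresholds=[1.5, 2.5, 3.5, 4.5, 5.5])
```

The key boosting idea is visible in the loop: each round fits a weak learner to the *current* sample weights, gives it a vote proportional to its accuracy, and shifts weight onto the examples it got wrong.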
