List of clustering algorithms in data mining


In this tutorial, we will learn the basics of clustering algorithms in data mining. A list of common clustering algorithms is given below:

  1. K-Means Clustering
  2. Agglomerative Hierarchical Clustering
  3. Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
  4. Expectation–Maximization (EM) Clustering using Gaussian Mixture Models (GMM)
  5. Mean-Shift Clustering
Figure: List of clustering algorithms in data mining

1. K-Means Clustering

K-Means Clustering is a technique in which each data point is assigned to the nearest of k centroids on the basis of similarity (distance), and the centroids are updated iteratively.

Step 1: Choose k initial centroids at random.

Step 2: Assign each data point to the cluster of its nearest centroid.

Step 3: Recompute the centroids and repeat the process again and again. With every iteration, the total error (the sum of distances of points to their centroids) changes. When we observe that the error no longer changes, we can stop and finalize the clusters, as sketched below.
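As a rough illustration, here is a minimal K-Means sketch using scikit-learn's KMeans; the sample points and the choice of k = 2 are assumptions made up for this example, not values from the tutorial.

  import numpy as np
  from sklearn.cluster import KMeans

  # A small, made-up 2-D data set with two visible groups.
  X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
                [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

  # Step 1: k initial centroids are chosen (KMeans handles the initialization).
  # Steps 2-3: points are assigned to the nearest centroid and the centroids
  # are recomputed until the total within-cluster error stops changing.
  kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

  print("Cluster labels:", kmeans.labels_)
  print("Centroids:", kmeans.cluster_centers_)
  print("Total within-cluster error (inertia):", kmeans.inertia_)

Here inertia_ plays the role of the total error described in Step 3: once it stops decreasing between iterations, the algorithm stops.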

2. Agglomerative Hierarchical Clustering

Hierarchical clustering is also known as hierarchical cluster analysis. In this type of clustering, we build a hierarchy of clusters. There are two types of strategies for hierarchical clustering:

  1. Agglomerative Strategies 
  2. Divisive Strategies

Agglomerative Strategies

In Agglomerative Strategies, each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy. This is known as a bottom-up strategy.

Divisive Strategies

In Divisive Strategies, all observations start in one cluster, and the clusters are split recursively as one moves down the hierarchy.

These splits and merges are generally determined in a greedy manner.
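As a rough sketch of the bottom-up (agglomerative) strategy, the example below uses scikit-learn's AgglomerativeClustering; the sample points and n_clusters = 2 are illustrative assumptions.

  import numpy as np
  from sklearn.cluster import AgglomerativeClustering

  # Two made-up groups of points.
  X = np.array([[1, 2], [1, 4], [1, 0],
                [10, 2], [10, 4], [10, 0]])

  # Each observation starts in its own cluster; the closest pair of clusters
  # is merged greedily, step by step, until only n_clusters remain.
  model = AgglomerativeClustering(n_clusters=2, linkage="ward").fit(X)
  print("Cluster labels:", model.labels_)

The choice of linkage (here Ward) is exactly the kind of distance-metric decision that, as noted in the disadvantages below, can change the resulting clusters.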

Advantages of Agglomerative Hierarchical Clustering

  • Hierarchical clustering is very helpful for ordering the objects in a way that is informative for data display.
  • When we generate smaller clusters, it becomes easier for us to discover information.

Disadvantages of Agglomerative Hierarchical Clustering

  1. Objects that were wrongly grouped at an early stage cannot be relocated later. The result should therefore be examined in detail to ensure that it gives accurate information.
  2. Using different distance metrics to measure the distance between clusters may produce different results.

3. Density-Based Spatial Clustering of Applications with Noise (DBSCAN)

DBSCAN was proposed by Martin Ester, Hans-Peter Kriegel, Xiaowei Xu, and Jörg Sander in 1996.

Given a set of points, DBSCAN groups together points that have many nearby neighbors (dense regions) and marks as outliers the points that lie far away in low-density regions.
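The short sketch below runs scikit-learn's DBSCAN on a made-up data set; the values of eps and min_samples are illustrative assumptions and normally have to be tuned for the data at hand.

  import numpy as np
  from sklearn.cluster import DBSCAN

  # Two dense groups plus one isolated point.
  X = np.array([[1.0, 2.0], [1.2, 1.9], [0.9, 2.1],
                [8.0, 8.0], [8.1, 7.9], [25.0, 80.0]])

  # Points with at least min_samples neighbors within eps form clusters;
  # the isolated point [25, 80] gets the label -1, i.e. it is marked as noise.
  db = DBSCAN(eps=0.5, min_samples=2).fit(X)
  print("Cluster labels:", db.labels_)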

4. Expectation-Maximization (EM) Clustering using Gaussian Mixture Models (GMM)

Gaussian mixture models (GMM) are well known for their use in data clustering. Given a fitted GMM, clustering assigns each query point to the component yielding the highest posterior probability. When we assign a data point to exactly one cluster, this kind of clustering is called hard clustering.
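As a small illustration, the sketch below fits a two-component Gaussian mixture with scikit-learn; the data and the number of components are assumptions made for the example.

  import numpy as np
  from sklearn.mixture import GaussianMixture

  # Two made-up groups of points.
  X = np.array([[1.0, 2.0], [1.3, 1.8], [0.8, 2.2],
                [7.5, 8.0], [8.2, 7.7], [7.9, 8.3]])

  # EM fits the mixture parameters (means, covariances, weights).
  gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

  # Hard clustering: each point goes to the component with the highest
  # posterior probability.
  print("Hard assignments:", gmm.predict(X))
  # The underlying posterior probabilities (the "soft" view).
  print("Posterior probabilities:\n", gmm.predict_proba(X))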

5. Mean-Shift Clustering

Mean-Shift Clustering is a simple and flexible clustering technique that has several advantages compared with other approaches. First of all, we need to represent our data in a mathematical manner. This method follows the concept of kernel density estimation (KDE).

KDE is a technique for estimating the underlying distribution of a set of data.

This method works by placing a kernel on each data point in the data set. A kernel is simply a weighting function. There are many types of kernels, but the Gaussian kernel is a common choice to place on each data point.
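A minimal Mean-Shift sketch with scikit-learn is given below; the sample points are made up, and note that scikit-learn's MeanShift uses a flat kernel rather than a Gaussian one, so this is only an approximation of the KDE-based description above.

  import numpy as np
  from sklearn.cluster import MeanShift, estimate_bandwidth

  # Two made-up groups of points.
  X = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],
                [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])

  # The bandwidth plays the role of the kernel width in the KDE view.
  bandwidth = estimate_bandwidth(X, quantile=0.5)

  # Each point is shifted towards the densest nearby region until the shifts
  # converge; points that converge to the same peak form one cluster.
  ms = MeanShift(bandwidth=bandwidth).fit(X)
  print("Cluster labels:", ms.labels_)
  print("Cluster centres:", ms.cluster_centers_)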
