Clustering refers to finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups.
K-means algorithm:
- K-means is a flat clustering algorithm (http://nlp.stanford.edu/IR-book/html/.../k-means-1.html#sec:kmeans).
- K-means is an iterative algorithm whose first step is to select k initial centroids (also called seeds), which are randomly selected data points.
- In the second step, k clusters are formed by assigning all points to their closest centroid, and the centroid of each cluster is recomputed.
- This is done iteratively until a stopping criterion is met (for example, the centroids no longer change).
- The "closeness" can be measured by various distance measures, or similarity measures or correlation. Different distance measures were experimented for our case, and gave similar results. Thus, results were generated with Euclidean as the chosen distance measure.
- Another parameter to the k-means algorithm is the number of clusters. Since our data is small (a matrix of size 39×49), we selected two as the number of clusters. The output of the k-means clustering was validated against domain knowledge and gave meaningful results; a short sketch of the procedure follows this list.
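As a rough illustration of the steps above, the following sketch runs scikit-learn's KMeans (which uses Euclidean distance) with k = 2. The random matrix is a hypothetical stand-in for our 39×49 data, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in for the 39x49 data matrix described above.
rng = np.random.default_rng(0)
X = rng.normal(size=(39, 49))

# k = 2 clusters; seeds are chosen from the data, then point assignment and
# centroid recomputation alternate until the assignments stabilize.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)           # cluster assignment for each of the 39 rows
print(kmeans.cluster_centers_)  # the two final centroids
```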
Anomaly Detection
A typical problem definition of anomaly detection (http://www.dtc.umn.edu/publications/r.../2008_16.pdf) is: given a dataset D, find all the data points x in D having the top-n largest anomaly scores as calculated by a predetermined function f(x). It is associated with numerous challenges, such as:
- How does one define normal behavior, and deviations from it?
- How many outliers are there?
- How can detected anomalies be validated as real rather than noise?
- How does one determine when all the anomalies have been caught?
Some of the application areas include credit card fraud detection, telecommunication fraud detection, network intrusion detection, and fault detection.
The general steps for any anomaly detection algorithm are:
- Build a profile of the “normal” behavior. An example of a profile could be a pattern or a characteristic.
- Use the “normal” profile to detect anomalies. Anomalies are generally defined as data points whose characteristics "differ" significantly from the rest of the data.
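To make these two steps concrete, here is a minimal sketch assuming a very simple profile (the mean and standard deviation of a 1-D sample) and a crude deviation rule; the synthetic data and the 3-sigma threshold are illustrative assumptions only.

```python
import numpy as np

# Hypothetical 1-D sample with two planted outliers.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 200), [8.0, -9.5]])

# Step 1: build a profile of the "normal" behavior.
mu, sigma = data.mean(), data.std()

# Step 2: flag points that differ significantly from the profile
# (here, more than 3 standard deviations from the mean).
anomalies = data[np.abs(data - mu) > 3 * sigma]
print(anomalies)  # recovers the planted outliers
```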
Some popular types of anomaly detection methods are:
Distance-based Method:
- Here data is represented as a vector of features.
- The three major approaches in this method include:
Nearest-Neighbor Based Approach (http://dl.acm.org/citation.cfm?id=335437)
- In this approach, the distance of each data point to every other point is computed, and outliers are defined based on a chosen metric. Some metrics could be:
- Find points which have less than m neighboring points within a distance D.
- Find points whose distance to the kth nearest neighbor is greatest.
- Find points whose average distance to the k nearest neighbors is greatest.
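The last two metrics above are easy to compute with scikit-learn's NearestNeighbors; the sketch below does so on hypothetical 2-D data with one planted outlier (the data, the value of k, and the planted point are assumptions for illustration).

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical 2-D data with one planted outlier at (6, 6).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)), [[6.0, 6.0]]])

k = 5
# Ask for k+1 neighbors because each point is its own nearest neighbor.
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dist, _ = nn.kneighbors(X)

knn_score = dist[:, -1]               # distance to the k-th nearest neighbor
avg_score = dist[:, 1:].mean(axis=1)  # average distance to the k nearest neighbors

print(np.argmax(knn_score), np.argmax(avg_score))  # both flag the planted point (index 100)
```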
Density-based: LOF approach (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.8948...)
- For each point, compute the density of its local neighborhood.
- Compute the local outlier factor (LOF) of a sample p as the average ratio of the density of p's nearest neighbors to the density of p itself.
- Outliers are points with the largest LOF values.
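scikit-learn ships an implementation of this method; a minimal sketch follows (the data and the n_neighbors value are assumptions):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical 2-D data with one planted outlier at (6, 6).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)), [[6.0, 6.0]]])

lof = LocalOutlierFactor(n_neighbors=20)
lof.fit(X)

# scikit-learn stores the negated LOF, so larger values here mean more anomalous.
scores = -lof.negative_outlier_factor_
print(np.argmax(scores))  # index of the point with the largest LOF (100)
```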
Clustering-Based (http://dl.acm.org/citation.cfm?id=335437)
- Run any clustering algorithm to cluster the data into groups/clusters.
- Choose a metric for defining outliers, for example:
- Points which lie far from their cluster centroids.
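A minimal sketch of this idea using k-means (the data, the number of clusters, and the "three farthest points" cutoff are all illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D data with one planted outlier at (6, 6).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)), [[6.0, 6.0]]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# kmeans.transform gives each point's distance to every centroid;
# the minimum is the distance to the point's own cluster centroid.
dist_to_centroid = kmeans.transform(X).min(axis=1)

print(np.argsort(dist_to_centroid)[-3:])  # the three points farthest from any centroid
```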
Parzen Window Density based (http://ssg.mit.edu/cal/abs/2000_sprin.../parzen62.pdf)
- Non-parametric density estimation: such methods attempt to estimate the density directly from the data, without assuming a particular form for the underlying distribution.
- A simple example is a histogram: divide the sample space into a number of bins and approximate the density at the center of each bin by the fraction of training points that fall into the corresponding bin.
- The Parzen window estimate resembles the histogram, with the exception that the bin locations are determined by the data.
- Assume that the region R that encloses the k examples is a hypercube with sides of length h centered at x.
- Volume: V=h^d, where d is the number of dimensions.
- To find the number of examples that fall within this region, we define a kernel function φ(u) that equals 1 if |u_j| ≤ 1/2 for every coordinate j = 1, …, d, and 0 otherwise. The number of samples inside the hypercube centered at x is then k = Σ_i φ((x − x_i)/h), giving the density estimate p(x) = k/(nV).
- This kernel, which corresponds to a unit hypercube centered at the origin, is known as a Parzen window or the naïve estimator.
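A small sketch of the hypercube-kernel estimate defined above (the standard-normal sample and the window width h are assumptions; h must be tuned in practice):

```python
import numpy as np

def parzen_density(x, data, h):
    """Parzen-window density estimate at x with a unit-hypercube kernel."""
    n, d = data.shape
    u = (x - data) / h
    # phi(u) = 1 when u lies inside the unit hypercube centered at the origin.
    inside = np.all(np.abs(u) <= 0.5, axis=1)
    k = inside.sum()             # number of samples in the hypercube around x
    return k / (n * h ** d)      # p(x) = k / (n * V), with V = h^d

# Hypothetical 1-D standard-normal sample; the true density at 0 is about 0.399.
rng = np.random.default_rng(0)
data = rng.normal(0, 1, size=(500, 1))
print(parzen_density(np.array([0.0]), data, h=0.5))
```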
Non-classical Multidimensional Scaling (MDS) (http://www.mathworks.com/help/stats/e.../non-classical-multidimensional-scaling.html): a way to visualize dissimilarity
- Non-classical Multidimensional Scaling (MDS) takes a dissimilarity matrix and creates a configuration of points in 1D, 2D, or 3D whose inter-point distances are close to the original dissimilarities between the points. The dissimilarity could be physical or abstract.
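As an illustration, scikit-learn's MDS (a stress-minimizing, SMACOF-based implementation, used here as a stand-in for the MATLAB routine the link above describes) can embed a precomputed dissimilarity matrix in 2-D; the random points generating the matrix are an assumption.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

# Hypothetical dissimilarity matrix: pairwise distances between 10 random 5-D points.
rng = np.random.default_rng(0)
points = rng.normal(size=(10, 5))
D = squareform(pdist(points))  # symmetric 10x10 dissimilarity matrix

# Embed in 2-D so that inter-point distances approximate the dissimilarities.
mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(D)
print(coords)  # 10 points in 2-D, ready to plot
```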