Outlier Detection
Introduction to Data Mining
Instructor: Abdullah Mueen
LECTURE 9: OUTLIER DETECTION
Chapter 12. Outlier Analysis
  Outlier and Outlier Analysis
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
What Are Outliers?
Outlier: A data object that deviates significantly from the normal objects as if it were generated by a
different mechanism
 ◦ Ex.: Unusual credit card purchase; sports: Michael Jordan, Wayne Gretzky, ...
Outliers are different from the noise data
 ◦ Noise is random error or variance in a measured variable
 ◦ Noise should be removed before outlier detection
Outliers are interesting: they violate the mechanism that generates the normal data
Outlier detection vs. novelty detection: a novelty may appear as an outlier at an early stage, but is later merged into the normal model
Applications:
 ◦ Credit card fraud detection
 ◦ Telecom fraud detection
 ◦ Customer segmentation
 ◦ Medical analysis
Types of Outliers (I): Global Outliers
Global outlier: a data object that deviates significantly from the rest of the data set
 ◦ Ex. Intrusion detection in computer networks
Types of Outliers (II)
 Collective Outliers
  ◦ A subset of data objects collectively deviate significantly from the whole data set, even if the
    individual data objects may not be outliers
  ◦ Applications: E.g., intrusion detection:
    ◦ When a number of computers keep sending denial-of-service packets to each other
   Detection of collective outliers
 Consider not only the behavior of individual objects, but also that of groups of objects
     Requires background knowledge of the relationship among data objects,
      such as a distance or similarity measure on objects
A data set may have multiple types of outliers
 Challenges of Outlier Detection
Modeling normal objects and outliers properly
   Hard to enumerate all possible normal behaviors in an application
 The border between normal and outlier objects is often a gray area
   Choice of distance measure among objects and the model of relationship among objects are often
    application-dependent
   E.g., in clinical data a small deviation could be an outlier, while in marketing analysis much larger fluctuations may still be normal
   Noise may distort the normal objects and blur the distinction between normal objects and outliers. It
    may help hide outliers and reduce the effectiveness of outlier detection
Understandability
   Specify the degree of an outlier: the unlikelihood of the object being generated by a normal
    mechanism
Chapter 12. Outlier Analysis
  Outlier and Outlier Analysis
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
Outlier Detection I: Supervised Methods
Two ways to categorize outlier detection methods:
 ◦ Based on whether user-labeled examples of outliers can be obtained:
   ◦ Supervised, semi-supervised vs. unsupervised methods
 ◦ Based on assumptions about normal data and outliers:
   ◦ Statistical, proximity-based, and clustering-based methods
Outlier Detection I: Supervised Methods
 ◦ Modeling outlier detection as a classification problem
   ◦ Samples examined by domain experts used for training & testing
 ◦ Methods for learning a classifier for outlier detection effectively:
   ◦ Model normal objects & report those not matching the model as outliers, or
   ◦ Model outliers and treat those not matching the model as normal
 ◦ Challenges
    ◦ Imbalanced classes, i.e., outliers are rare: boost the outlier class and generate some artificial outliers
    ◦ Catch as many outliers as possible, i.e., recall is more important than accuracy (i.e., not mislabeling normal
      objects as outliers); see the sketch below
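A minimal, hypothetical sketch (Python with scikit-learn; not from the lecture) of treating outlier detection as imbalanced classification: class_weight boosts the rare outlier class, and recall on that class is the metric to watch.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (980, 2)),   # synthetic "normal" objects
                   rng.normal(6, 1, (20, 2))])   # synthetic rare outliers
    y = np.array([0] * 980 + [1] * 20)           # 1 = outlier; heavily imbalanced labels

    clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X, y)
    print(recall_score(y, clf.predict(X)))       # recall on the outlier class matters most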
Outlier Detection II: Unsupervised Methods
Assume the normal objects are somewhat "clustered" into multiple groups, each having some distinct features
An outlier is expected to be far away from any groups of normal objects
Weakness: Cannot detect collective outliers effectively
 ◦ Normal objects may not share any strong patterns, but the collective outliers may share high similarity in a
   small area
Ex. In some intrusion or virus detection, normal activities are diverse
 ◦ Unsupervised methods may have a high false positive rate but still miss many real outliers.
 ◦ Supervised methods can be more effective, e.g., identify attacks on key resources
Many clustering methods can be adapted for unsupervised methods
 ◦ Find clusters, then outliers: not belonging to any cluster
 ◦ Problem 1: Hard to distinguish noise from outliers
 ◦ Problem 2: Costly, since clustering must be done first, even though there are far fewer outliers than normal objects
   ◦ Newer methods: tackle outliers directly
Outlier Detection III: Semi-Supervised Methods
Situation: In many applications, the amount of labeled data is often small: labels could be on outliers only, normal objects only, or both
If only some labeled outliers are available, a small number of labeled outliers may not cover the possible outliers well
 ◦ To improve the quality of outlier detection, one can get help from models for normal objects
   learned from unsupervised methods
Outlier Detection (1): Statistical Methods
 Statistical methods (also known as model-based methods) assume that the normal data follow
 some statistical model (a stochastic model)
  ◦ The data not following the model are outliers.
Example (right figure): First use Gaussian distribution to model the normal data
    For each object y in region R, estimate gD(y), the probability density of y under the fitted Gaussian distribution
    If gD(y) is very low, y is unlikely to have been generated by the Gaussian model, and is thus an outlier
Outlier Detection (2): Proximity-Based Methods
An object is an outlier if the nearest neighbors of the object are far away, i.e., the proximity of
the object deviates significantly from the proximity of most of the other objects in the same
data set
   Example (right figure): Model the proximity of an object using its 3 nearest neighbors
       Objects in region R are substantially different from other objects in the data set.
    Thus the objects in R are outliers
Proximity-based methods often have difficulty finding a group of outliers that stay close to each other
Outlier Detection (3): Clustering-Based Methods
Normal data belong to large and dense clusters, whereas outliers belong to
small or sparse clusters, or do not belong to any clusters
 Example (right figure): two clusters
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
Statistical Approaches
Statistical approaches assume that the objects in a data set are generated by a stochastic process (a generative
model)
Idea: learn a generative model fitting the given data set, and then identify the objects in low probability regions
of the model as outliers
Methods are divided into two categories: parametric vs. non-parametric
Parametric method
 ◦ Assumes that the normal data is generated by a parametric distribution with parameter θ
 ◦ The probability density function of the parametric distribution f(x, θ) gives the probability that object x is
   generated by the distribution
 ◦ The smaller this value, the more likely x is an outlier
Non-parametric method
 ◦ Does not assume an a priori statistical model; instead, determines the model from the input data
 ◦ Not completely parameter-free, but the number and nature of the parameters are flexible and not
   fixed in advance
 ◦ Examples: histogram and kernel density estimation
Parametric Methods I: Detection of Univariate Outliers Based on a Normal Distribution
Univariate data: A data set involving only one attribute or variable
Often assume that data are generated from a normal distribution, learn the
parameters from the input data, and identify the points with low probability as
outliers
Ex: Avg. temp.: {24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4}
 ◦ Use the maximum likelihood method to estimate μ and σ; 24.0 then lies roughly 3σ from the estimated mean and is flagged as an outlier (see the sketch below)
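A minimal sketch (Python with NumPy/SciPy, not from the lecture) of this univariate approach: fit a Gaussian by maximum likelihood and flag points with very low density under the model; the 0.01 cutoff is purely illustrative.

    import numpy as np
    from scipy.stats import norm

    temps = np.array([24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4])
    mu, sigma = temps.mean(), temps.std()          # maximum likelihood estimates
    density = norm.pdf(temps, loc=mu, scale=sigma) # density of each point under the fitted model
    print(temps[density < 0.01])                   # 24.0 has a very low density and is flagged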
Parametric Methods I: The Grubb’s Test
Univariate outlier detection: The Grubb's test (maximum normed residual test) ─ another
statistical method under normal distribution
 ◦ For each object x in a data set, compute its z-score z = |x − x̄| / s; x is an outlier if
   z ≥ ((N − 1)/√N) · √( t² / (N − 2 + t²) )
   where N is the number of objects and t is the critical value of the t-distribution with N − 2 degrees
   of freedom at significance level α/(2N) (see the sketch below)
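A minimal sketch (Python with SciPy, not from the lecture) of the two-sided Grubbs' test under the normality assumption; alpha is the significance level.

    import numpy as np
    from scipy.stats import t

    def grubbs_outliers(data, alpha=0.05):
        data = np.asarray(data, dtype=float)
        n = len(data)
        z = np.abs(data - data.mean()) / data.std(ddof=1)           # normed residuals
        t_crit = t.ppf(1 - alpha / (2 * n), n - 2)                  # t critical value, N-2 dof
        threshold = ((n - 1) / np.sqrt(n)) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
        return data[z > threshold]

    print(grubbs_outliers([24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4]))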
Parametric Methods II: Detection of Multivariate Outliers
Multivariate data: A data set involving two or more attributes or variables
Transform the multivariate outlier detection task into a univariate outlier detection problem
 ◦ E.g., compute the Mahalanobis distance of each object to the data mean, then apply a univariate test (such as Grubbs' test or a χ² cutoff) to the resulting distances (see the sketch below)
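A minimal sketch (Python with NumPy/SciPy, not from the lecture) of that reduction, assuming approximate multivariate normality, so the squared Mahalanobis distance can be compared against a χ² cutoff.

    import numpy as np
    from scipy.stats import chi2

    def mahalanobis_outliers(X, alpha=0.001):
        X = np.asarray(X, dtype=float)
        diff = X - X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis distances
        return d2 > chi2.ppf(1 - alpha, df=X.shape[1])        # d2 ~ chi-square under normality

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 3)), [[8.0, 8.0, 8.0]]])
    print(np.where(mahalanobis_outliers(X))[0])               # the injected point is flagged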
    Parametric Methods III: Using Mixture of Parametric
    Distributions
     Assuming the data are generated by a single normal distribution can sometimes be an oversimplification
    Example (right figure): The objects between the two clusters cannot be captured as outliers
    since they are close to the estimated mean
    To overcome this problem, assume the normal data are generated by two
     normal distributions. For any object o in the data set, the probability that
     o is generated by the mixture of the two distributions is given by
     Pr(o | θ1, θ2) = fθ1(o) + fθ2(o),
     where fθ1 and fθ2 are the probability density functions of θ1 and θ2
   Then use EM algorithm to learn the parameters μ1, σ1, μ2, σ2 from data
   An object o is an outlier if it does not belong to any cluster
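A minimal sketch (Python with scikit-learn's GaussianMixture, an EM implementation; not the lecture's own code): fit a two-component mixture and report the lowest-density objects as outliers; the 2% quantile cutoff is illustrative.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)),     # cluster 1
                   rng.normal(8, 1, (100, 2))])    # cluster 2

    gm = GaussianMixture(n_components=2, random_state=0).fit(X)   # EM learns the component parameters
    log_density = gm.score_samples(X)                             # log Pr(o) under the mixture
    outliers = X[log_density < np.quantile(log_density, 0.02)]    # lowest-density 2% of objects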
Non-Parametric Methods: Detection Using Histogram
 The model of normal data is learned from the input data
 without any a priori structure.
 Often makes fewer assumptions about the data, and thus
 can be applicable in more scenarios
 Outlier detection using a histogram (see the sketch below):
  ◦ Step 1: Construct a histogram of the data
  ◦ Step 2: An object is an outlier if it falls into an empty or very low-frequency bin; alternatively, score each object by the inverse of the frequency of its bin
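A minimal sketch (Python with NumPy, not from the lecture) of histogram-based scoring: points in rare bins get high outlier scores. The bin width (here via bins=5) is a sensitive, application-dependent choice.

    import numpy as np

    def histogram_outlier_scores(x, bins=10):
        x = np.asarray(x, dtype=float)
        counts, edges = np.histogram(x, bins=bins)
        idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)   # bin index of each point
        freq = counts[idx] / len(x)                               # relative frequency of that bin
        return 1.0 - freq                                         # higher score = rarer bin

    print(histogram_outlier_scores([1.1, 1.2, 1.3, 1.2, 1.1, 1.25, 9.0], bins=5))  # 9.0 scores highest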
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
Proximity-Based Approaches: Distance-Based vs. Density-Based Outlier
Detection
Intuition: Objects that are far away from the others are outliers
Assumption of proximity-based approach: The proximity of an outlier deviates
significantly from that of most of the others in the data set
Two types of proximity-based outlier detection methods
 ◦ Distance-based outlier detection: An object o is an outlier if its neighborhood
   does not have enough other points
◦ Density-based outlier detection: An object o is an outlier if its density is
  relatively much lower than that of its neighbors
Distance-Based Outlier Detection
For each object o, examine the # of other objects in the r-neighborhood of o, where r is a user-specified distance threshold
An object o is an outlier if most (taking π as a fraction threshold) of the objects in D are far away from o, i.e., not in the r-neighborhood of o
Equivalently, one can check the distance between o and its k-th nearest neighbor ok, where k = ⌈π·|D|⌉; o is an outlier if dist(o, ok) > r
Efficiency: In practice the CPU time is not O(n²) but roughly linear in the data set size, since for most non-outlier objects the inner loop terminates early (see the sketch below)
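A minimal sketch (Python with NumPy, not from the lecture) of the nested-loop DB(r, π) test with the early-termination trick: scanning stops as soon as enough r-neighbors are found to rule the object out as an outlier.

    import numpy as np

    def distance_based_outliers(X, r, pi):
        X = np.asarray(X, dtype=float)
        n = len(X)
        need = int(np.floor(n * pi)) + 1              # neighbors within r needed to be "normal"
        flags = []
        for i in range(n):
            count, is_outlier = 0, True
            for j in range(n):
                if i != j and np.linalg.norm(X[i] - X[j]) <= r:
                    count += 1
                    if count >= need:                 # inner loop terminates early
                        is_outlier = False
                        break
            flags.append(is_outlier)
        return np.array(flags)                        # True marks DB(r, pi)-outliers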
Distance-Based Outlier Detection: A Grid-Based Method
  Why is efficiency still a concern? When the complete set of objects cannot be
  held in main memory, there is substantial I/O swapping cost
  The major cost: (1) each object is tested against the whole data set; why not only against its
  close neighbors? (2) objects are checked one by one; why not group by group?
  Grid-based method (CELL): Data space is partitioned into a multi-D grid. Each cell
  is a hypercube with diagonal length r/2
Density-Based Outlier Detection
    In the figure, o1 and o2 are local outliers relative to C1, o3 is a global outlier, but o4 is not an outlier. However,
    proximity-based (distance-based) detection cannot find that o1 and o2 are outliers (e.g., comparing with o4).
   Intuition (density-based outlier detection): The density around an outlier
    object is significantly different from the density around its neighbors
    Method: Use the relative density of an object against its neighbors as
     the indicator of the degree to which the object is an outlier
   k-distance of an object o, distk(o): distance between o and its k-th NN
   k-distance neighborhood of o, Nk(o) = {o’| o’ in D, dist(o, o’) ≤ distk(o)}
        Nk(o) could be bigger than k since multiple objects may have
         identical distance to o
Local Outlier Factor: LOF
      The lower the local reachability density of o, and the higher the local
       reachability densities of the kNN of o, the higher the LOF value
      This captures a local outlier whose local density is relatively low
       compared to the local densities of its kNN (see the sketch below)
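A minimal sketch (Python with scikit-learn's LocalOutlierFactor, not the lecture's own code): negative_outlier_factor_ stores −LOF, so more negative values mean stronger local outliers.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.5, (100, 2)),   # dense cluster
                   rng.normal(5, 3.0, (100, 2)),   # sparse cluster
                   [[10.0, 10.0]]])                # an isolated point

    lof = LocalOutlierFactor(n_neighbors=20)
    labels = lof.fit_predict(X)                    # -1 marks predicted outliers
    scores = -lof.negative_outlier_factor_         # the LOF values themselves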
Chapter 12. Outlier Analysis
  Outlier and Outlier Analysis
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
Clustering-Based Outlier Detection (1 & 2):
Not belonging to any cluster, or far from the closest one
 An object is an outlier if (1) it does not belong to any cluster, (2) there is a large
 distance between the object and its closest cluster, or (3) it belongs to a small or
 sparse cluster
    Case I: Does not belong to any cluster
       Ex. Identify animals not part of a flock, using a density-based clustering method such as DBSCAN
    Ex. In the figure, o is an outlier since its closest large cluster is C1, but the
     similarity between o and C1 is small. For any point in C3, its closest
     large cluster is C2, but its similarity to C2 is low; in addition, |C3| = 3 is small (see the sketch below)
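A minimal sketch (Python with scikit-learn, not from the lecture), assuming k-means as the clustering step: cluster the data, then score each object by its distance to the nearest cluster center; the 99% quantile cutoff is illustrative.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)),
                   rng.normal(10, 1, (100, 2)),
                   [[5.0, 20.0]]])                            # a point far from both clusters

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    dist_to_center = km.transform(X).min(axis=1)              # distance to the closest centroid
    outliers = X[dist_to_center > np.quantile(dist_to_center, 0.99)]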
Clustering-Based Method: Strength and Weakness
Strength
 ◦ Detect outliers without requiring any labeled data
 ◦ Work for many types of data
 ◦ Clusters can be regarded as summaries of the data
 ◦ Once the clusters are obtained, one need only compare any object against the clusters to determine
   whether it is an outlier (fast)
Weakness
 ◦ Effectiveness depends highly on the clustering method used—they may not be optimized for
   outlier detection
 ◦ High computational cost: Need to first find clusters
 ◦ A method to reduce the cost: Fixed-width clustering (see the sketch below)
   ◦ A point is assigned to a cluster if the center of the cluster is within a pre-defined distance
     threshold from the point
   ◦ If a point cannot be assigned to any existing cluster, a new cluster is created and the distance
     threshold may be learned from the training data under certain conditions
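A minimal sketch (Python with NumPy, not from the lecture) of fixed-width clustering: each point joins the first cluster whose center lies within the width w, otherwise it starts a new cluster; very small clusters then suggest outliers.

    import numpy as np

    def fixed_width_clusters(X, w):
        centers, sizes = [], []
        for x in np.asarray(X, dtype=float):
            for i, c in enumerate(centers):
                if np.linalg.norm(x - c) <= w:    # close enough to an existing cluster
                    sizes[i] += 1
                    break
            else:
                centers.append(x)                 # point starts its own cluster
                sizes.append(1)
        return np.array(centers), np.array(sizes)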
Chapter 12. Outlier Analysis
  Outlier and Outlier Analysis
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
    Classification-Based Method I: One-Class Model
    Idea: Train a classification model that can distinguish “normal” data from outliers
    A brute-force approach: Consider a training set that contains samples labeled as “normal” and
    others labeled as “outlier”
     ◦ But, the training set is typically heavily biased: # of “normal” samples likely far exceeds # of
       outlier samples
  ◦ Cannot detect unseen anomalies
     One-class model: A classifier is built to describe only the normal class.
        Learn the decision boundary of the normal class using a classification method (e.g., a one-class SVM)
        on the training set; objects falling outside the boundary are reported as outliers. It is, however,
        often difficult to obtain representative and high-quality training data (see the sketch below)
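A minimal sketch (Python with scikit-learn's OneClassSVM, one possible one-class model; not the lecture's own code) trained only on data assumed to be normal; nu roughly bounds the fraction of training points treated as outliers.

    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X_train = rng.normal(0, 1, (500, 2))             # training data assumed to be all normal
    X_test = np.vstack([rng.normal(0, 1, (10, 2)), [[6.0, 6.0]]])

    ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
    print(ocsvm.predict(X_test))                     # +1 = normal, -1 = outlier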
Chapter 12. Outlier Analysis
  Outlier and Outlier Analysis
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
Mining Contextual Outliers I: Transform into
Conventional Outlier Detection
If the contexts can be clearly identified, transform it to conventional outlier detection
  1. Identify the context of the object using the contextual attributes
  2. Calculate the outlier score for the object in the context using a conventional outlier detection method
Ex. Detect outlier customers in the context of customer groups
 ◦ Contextual attributes: age group, postal code
 ◦ Behavioral attributes: # of trans/yr, annual total trans. amount
Steps: (1) locate c's context, (2) compare c with the other customers in the same group, and (3) use a
conventional outlier detection method (see the sketch below)
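A minimal sketch (Python with pandas, hypothetical column names; not from the lecture) of these three steps: group customers by the contextual attributes, then apply a simple z-score to the behavioral attribute within each group.

    import pandas as pd

    df = pd.DataFrame({
        "age_group":    ["20s"] * 8 + ["30s"] * 8,
        "postal_code":  ["A"] * 8 + ["B"] * 8,
        "annual_spend": [500, 520, 510, 490, 505, 515, 495, 5100,   # 5100 is unusual for its context
                         800, 790, 810, 805, 795, 798, 802, 806],
    })

    grouped = df.groupby(["age_group", "postal_code"])["annual_spend"]
    df["z_in_context"] = (df["annual_spend"] - grouped.transform("mean")) / grouped.transform("std")
    df["contextual_outlier"] = df["z_in_context"].abs() > 2         # illustrative cutoff
    print(df[df["contextual_outlier"]])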
If the context contains very few customers, generalize contexts
  ◦ Ex. Learn a mixture model U on the contextual attributes, and another mixture model V of the data on the
    behavior attributes
  ◦ Learn a mapping p(Vi|Uj): the probability that a data object o belonging to cluster Uj on the contextual
    attributes is generated by cluster Vi on the behavior attributes
  ◦ Outlier score:
Mining Contextual Outliers II: Modeling Normal Behavior with
Respect to Contexts
In some applications, one cannot clearly partition the data into contexts
 ◦ Ex. if a customer suddenly purchased a product that is unrelated to those she recently browsed, it
    is unclear how many products browsed earlier should be considered as the context
Model the “normal” behavior with respect to contexts
 ◦ Using a training data set, train a model that predicts the expected behavior attribute values with
   respect to the contextual attribute values
 ◦ An object is a contextual outlier if its behavior attribute values significantly deviate from the values
   predicted by the model
Using a prediction model that links the contexts and behavior, these methods avoid the explicit
identification of specific contexts
Methods: A number of classification and prediction techniques can be used to build such models,
such as regression, Markov models, and finite-state automata (see the sketch below)
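A minimal sketch (Python with scikit-learn, synthetic data; not from the lecture) of this prediction-based view: learn the expected behavior from a contextual attribute and flag objects with unusually large residuals; the 3-sigma cutoff is illustrative.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    context = rng.uniform(20, 60, (200, 1))                     # e.g., customer age
    behavior = 10 * context[:, 0] + rng.normal(0, 20, 200)      # e.g., annual spend
    behavior[0] = 5000                                          # inject one contextual outlier

    model = LinearRegression().fit(context, behavior)
    residual = np.abs(behavior - model.predict(context))
    contextual_outliers = residual > 3 * residual.std()         # objects that deviate from prediction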
Mining Collective Outliers I: On the Set of “Structured Objects”
Objects form a collective outlier if, as a group, they deviate significantly from the entire data set
Need to examine the structure of the data set, i.e., the relationships between multiple data
objects
   Each of these structures is inherent to its respective type of data
      For temporal data (such as time series and sequences), we explore
        the structures formed by time, which occur in segments of the time
        series or subsequences
      For spatial data, explore local areas
   Difference from the contextual outlier detection: the structures are often
    not explicitly defined, and have to be discovered as part of the outlier
    detection process.
    Collective outlier detection methods: two categories
       Reduce the problem to conventional outlier detection: identify structure units and treat each unit as a data object
       Model the expected behavior of the structure units directly
Chapter 12. Outlier Analysis
  Outlier and Outlier Analysis
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
Challenges for Outlier Detection in High-Dimensional Data
Interpretation of outliers
 ◦ Detecting outliers without saying why they are outliers is not very useful in high-D data, since many
   features (or dimensions) are involved in a high-dimensional data set
 ◦ E.g., the subspaces that manifest the outliers, or an assessment of the "outlier-ness" of
   the objects
Data sparsity
 ◦ Data in high-D spaces are often sparse
 ◦ The distance between objects becomes heavily dominated by noise as the dimensionality
   increases
Data subspaces
 ◦ Adaptive to the subspaces signifying the outliers
 ◦ Capturing the local behavior of data
Scalable with respect to dimensionality
 ◦ # of subspaces increases exponentially
Approach I: Extending Conventional Outlier Detection
Method 1: Detect outliers in the full space, e.g., HilOut Algorithm
 ◦ Find distance-based outliers, but use the ranks of distance instead of the absolute distance in
   outlier detection
 ◦ For each object o, find its k-nearest neighbors: nn1(o), . . . , nnk(o)
 ◦ The weight of object o: w(o) = dist(o, nn1(o)) + ... + dist(o, nnk(o)); objects are ranked by weight and
   those with the largest weights are reported as outliers (see the sketch below)
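A minimal sketch (Python with scikit-learn, not the lecture's own code) of the kNN-weight idea behind HilOut, without the Hilbert-curve approximation the full algorithm uses for efficiency.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_weights(X, k=5):
        nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1 because each point is its own NN
        dist, _ = nbrs.kneighbors(X)
        return dist[:, 1:].sum(axis=1)                      # weight = sum of distances to k NNs

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 2)), [[9.0, 9.0]]])
    print(np.argsort(knn_weights(X))[-3:])                  # indices of the largest-weight objects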
Approach II: Finding Outliers in Subspaces
Extending conventional outlier detection: Hard for outlier interpretation
Find outliers in much lower dimensional subspaces: easy to interpret why and to what extent the object is an outlier
 ◦ E.g., find outlier customers in certain subspace: average transaction amount >> avg. and purchase frequency << avg.
Ex. A grid-based subspace outlier detection method
 ◦ Project data onto various subspaces to find an area whose density is much lower than average
 ◦ Discretize the data into a grid with φ equi-depth (why?) regions
 ◦ Search for regions that are significantly sparse
     ◦ Consider a k-d cube: k ranges on k dimensions, with n objects
     ◦ If objects are independently distributed, the expected number of objects falling into a k-dimensional region is
       (1/φ)^k · n = f^k · n, and the standard deviation of the count is √(f^k (1 − f^k) n); regions whose counts fall far
       below this expectation are reported (see the sketch below)
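A minimal sketch (Python with NumPy, not from the lecture) of the sparsity coefficient of one grid cell: the observed count is standardized against the count expected under independence; strongly negative values indicate suspiciously sparse regions.

    import numpy as np

    def sparsity_coefficient(cell_count, n, k, phi):
        # cell_count: objects in the cell; n: total objects;
        # k: dimensionality of the subspace; phi: equi-depth bins per dimension
        f = 1.0 / phi
        expected = (f ** k) * n
        std = np.sqrt((f ** k) * (1 - f ** k) * n)
        return (cell_count - expected) / std

    print(sparsity_coefficient(cell_count=2, n=10000, k=3, phi=10))   # well below 0 => sparse cell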
Approach III: Modeling High-Dimensional Outliers
Develop new models for high-dimensional outliers directly
Avoid proximity measures and adopt new heuristics that do not deteriorate in high-dimensional data
Chapter 12. Outlier Analysis
  Outlier and Outlier Analysis
Statistical Approaches
Proximity-Based Approaches
Clustering-Based Approaches
Classification Approaches
Summary
Summary
Types of outliers
 ◦ global, contextual & collective outliers
Outlier detection
 ◦ supervised, semi-supervised, or unsupervised
Proximity-based approaches
Clustering-based approaches
Classification approaches