Chapter ML:I
I. Introduction
- Examples of Learning Tasks
- Specification of Learning Problems
Examples of Learning Tasks
Car Shopping Guide
Which criteria form the basis of a decision?
Examples of Learning Tasks
Risk Analysis for Credit Approval
                     Customer 1      ...   Customer n
house owner          yes                   no
income (p.a.)        51 000 EUR            55 000 EUR
repayment (p.m.)     1 000 EUR             1 200 EUR
credit period        7 years               8 years
SCHUFA entry         no                    no
age                  37                    ?
married              yes                   yes
...                  ...                   ...
Learned rules:

IF   (income > 40 000 AND credit_period < 3) OR house_owner = yes
THEN credit_approval = yes

IF   SCHUFA_entry = yes OR (income < 20 000 AND repayment > 800)
THEN credit_approval = no
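
The rules above translate directly into a small decision procedure. The following Python sketch assumes a customer record is given as a dict with the attributes from the table; the key names and the fallback value are illustrative, not part of the lecture.

    def credit_approval(customer):
        """Apply the learned rules to one customer record."""
        # IF (income > 40 000 AND credit_period < 3) OR house_owner = yes THEN yes
        if (customer["income"] > 40_000 and customer["credit_period"] < 3) \
                or customer["house_owner"]:
            return "yes"
        # IF SCHUFA_entry = yes OR (income < 20 000 AND repayment > 800) THEN no
        if customer["schufa_entry"] or \
                (customer["income"] < 20_000 and customer["repayment"] > 800):
            return "no"
        return "unknown"   # the two rules do not cover every customer

    print(credit_approval({"income": 51_000, "credit_period": 7, "house_owner": True,
                           "schufa_entry": False, "repayment": 1_000}))   # -> yes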
Examples of Learning Tasks
Image Analysis
[Mitchell 1997]
[Figure: a neural network with a 30x32 sensor input retina, 4 hidden units, and 30 output units encoding steering directions from Sharp Left over Straight Ahead to Sharp Right.]
Specification of Learning Problems
Definition 1 (Machine Learning [Mitchell 1997])
A computer program is said to learn
- from experience,
- with respect to some class of tasks and a performance measure,
if its performance at the tasks improves with the experience.
Remarks:
- Example chess:
  task = playing chess
  performance measure = number of games won during a world championship
  experience = possibility to play against itself
- Example optical character recognition:
  task = isolation and classification of handwritten words in bitmaps
  performance measure = percentage of correctly classified words
  experience = collection of correctly classified, handwritten words
- A corpus with labeled examples forms a kind of compiled experience.
- Consider the different corpora that are exploited for different learning tasks in the webis group. [www.uni-weimar.de/medien/webis/corpora]
Specification of Learning Problems
Learning Paradigms
1. Supervised learning
   Learn a function from a set of input-output pairs. An important branch of supervised learning is automated classification. Example: optical character recognition.

2. Unsupervised learning
   Identify structures in data. Important subareas of unsupervised learning include automated categorization (e.g. via cluster analysis), parameter optimization (e.g. via expectation maximization), and feature extraction (e.g. via factor analysis). (A code sketch contrasting 1 and 2 follows after this list.)

3. Reinforcement learning
   Learn, adapt, or optimize a behavior strategy in order to maximize its own benefit by interpreting feedback that is provided by the environment. Example: development of behavior strategies for agents in a hostile environment.
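
The contrast between the first two paradigms can be made concrete in a few lines of Python with scikit-learn; the data below is made up and serves only as an illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = np.array([[1.0, 1.2], [0.9, 1.1], [3.0, 3.1], [3.2, 2.9]])

    # Supervised learning: input-output pairs (X, labels) are given.
    labels = np.array([0, 0, 1, 1])
    clf = LogisticRegression().fit(X, labels)
    print(clf.predict([[1.1, 1.0]]))         # predicted class for a new input

    # Unsupervised learning: only X is given; structure (clusters) is identified.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)                        # cluster assignment for each row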
Specification of Learning Problems
Example Chess: Kind of Experience    [Mitchell 1997]

1. Feedback
   - direct: for each board configuration the best move is given.
   - indirect: only the final result is given after a series of moves.

2. Sequence and distribution of examples
   - A teacher presents important example problems along with a solution.
   - The learner chooses from the examples; e.g., it picks a board for which the best move is unknown.
   - The selection of examples to learn from should follow the (expected) distribution of future problems.

3. Relevance under a performance measure
   - How far can we get with experience?
   - Can we master situations in the wild?
     (Playing against itself will not be enough to become world class.)
Specification of Learning Problems
Example Chess: Ideal Target Function    [Mitchell 1997]

(a) γ : Boards → Moves
(b) γ : Boards → R

A recursive definition of γ, following a kind of means-ends analysis. Let o ∈ Boards:

1. γ(o) = 100, if o represents a final board state that is won.
2. γ(o) = −100, if o represents a final board state that is lost.
3. γ(o) = 0, if o represents a final board state that is drawn.
4. γ(o) = γ(o*) otherwise, where o* denotes the best final state that can be reached if both sides play optimally.

Related: game playing, minimax strategy, α-β pruning.
[Study course on Search Algorithms, Stein 1998-2015]
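
Case 4 is the minimax principle: the value of a non-final board is the value of the best final state that optimal play by both sides can reach. A minimal sketch, in which a "board" is modeled as a toy game tree (a terminal value or a list of successor boards) instead of a real chess position:

    def gamma(board, maximizing=True):
        """Game-theoretic value of a board under optimal play by both sides."""
        if isinstance(board, int):            # final state: won (100), lost (-100), drawn (0)
            return board
        values = [gamma(b, not maximizing) for b in board]
        return max(values) if maximizing else min(values)

    # Toy tree: one line ends in a draw, the other lets the opponent force a loss.
    print(gamma([[0, 100], [-100, 100]]))     # -> 0, i.e. best play yields a draw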
Specification of Learning Problems
Example Chess: From the Real World to a Model World

    γ(o) ≈ y(α(o)) = y(x)

    y(x) = w0 + w1·x1 + w2·x2 + w3·x3 + w4·x4 + w5·x5 + w6·x6

where

    x1 = number of black pawns on board o
    x2 = number of white pawns on board o
    x3 = number of black pieces on board o
    x4 = number of white pieces on board o
    x5 = number of black pieces threatened on board o
    x6 = number of white pieces threatened on board o

Other approaches to formulate y:
- case base
- set of rules
- neural network
- polynomial function of board features
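
Written as code, the feature computation is the model formation function α and the evaluation is a weighted sum. The board representation, dict keys, and weights below are illustrative assumptions, not from the lecture.

    def alpha(board):
        """Map a board o to the feature vector x = (x1, ..., x6) listed above."""
        return [board["black_pawns"], board["white_pawns"],
                board["black_pieces"], board["white_pieces"],
                board["black_threatened"], board["white_threatened"]]

    def y(x, w):
        """Linear evaluation y(x) = w0 + w1*x1 + ... + w6*x6."""
        return w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))

    w = [0.0, -1.0, 1.0, -3.0, 3.0, -0.5, 0.5]   # made-up weights favoring white material
    board = {"black_pawns": 8, "white_pawns": 8, "black_pieces": 16,
             "white_pieces": 16, "black_threatened": 1, "white_threatened": 0}
    print(y(alpha(board), w))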
Remarks:
- The ideal target function γ interprets the real world, say, a real-world object o, to compute γ(o). This computation can be operationalized by a human or by some other (even arcane) mechanism of the real world.
- To simulate the interesting aspects of the real world by means of a computer, we define a model world. This model world is restricted to particular, typically easily measurable, features x that are derived from o, with x = α(o). In the model world, y(x) is the formalized counterpart of γ(o).
- y is called model function or model; α is called model formation function.
- The key difference between an ideal target function γ and a model function y lies in the size and the representation of their respective domains. Examples:
  - A chess grand master assesses a board o in its entirety, both intuitively and analytically; a chess program is restricted to particular features x, x = α(o).
  - A human mushroom picker assesses a mushroom o with all her skills (intuitively, analytically, by tickled senses); a classification program is restricted to a few surface features x, x = α(o).
Remarks (continued):
- For automated chess playing a real-valued assessment function is needed; such problems are regression problems. If only a small number of values is to be considered (e.g. school grades), we are given a classification problem. A regression problem can be transformed into a classification problem by means of domain discretization (see the sketch after these remarks).
- Regression problems and classification problems often differ with regard to assessing the achieved accuracy or goodness of fit. For regression problems the sum of the squared residuals may be a sensible criterion; for classification problems the number of misclassified examples may be more relevant.
- For classification problems, the ideal target function γ is also called ideal classifier; similarly, the model function y is called classifier.
- Decision problems are classification problems with two classes.
- The halting problem for Turing machines is an undecidable classification problem.
Specification of Learning Problems
How to Construct a Classifier y

Characterization of the real world:
- O is a set of objects.
- C is a set of classes.
- γ : O → C is the ideal classifier for O.

Classification problem:
- Given some o ∈ O, determine its class γ(o) ∈ C.

Acquisition of classification knowledge:
1. Build a database of examples of the form (o, γ(o)), o ∈ O_D, O_D ⊆ O.
2. Abstract the objects o ∈ O_D towards feature vectors x ∈ X, with x = α(o).
3. Compute (x, c(x)), with x = α(o) and c(x) defined as γ(o), o ∈ O_D.
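
Steps 1-3 can be mirrored in a few lines: collect labeled real-world objects, abstract each object o to a feature vector x = α(o), and keep the pairs (x, c(x)). The object representation and the feature map below are illustrative assumptions only.

    # toy real-world objects o together with their true classes gamma(o)
    objects = [({"income": 51_000, "house_owner": True},  "yes"),
               ({"income": 18_000, "house_owner": False}, "no")]

    def alpha(o):
        """Model formation function: abstract an object o to a feature vector x."""
        return [o["income"] / 1000.0, 1.0 if o["house_owner"] else 0.0]

    # example set D, where c(x) is defined as gamma(o)
    D = [(alpha(o), gamma_o) for o, gamma_o in objects]
    print(D)   # [([51.0, 1.0], 'yes'), ([18.0, 0.0], 'no')]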
Specification of Learning Problems
How to Construct a Classifier y (continued)

Characterization of the model world:
- X is a set of feature vectors, also called feature space.
- C is a set of classes.
- c : X → C is the ideal classifier for X.
- D = {(x1, c(x1)), ..., (xn, c(xn))} ⊆ X × C is a set of examples.

Machine learning problem:
- Based on D, approximate the ideal classifier c by a classifier y.
- Formulate a model function y : X → C, x ↦ y(x).
- Apply statistics, as well as theory and algorithms from the field of machine learning, to maximize the goodness of fit between c and y.
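
For classification, one simple measure of the goodness of fit between c and y is the misclassification rate on D. A minimal sketch with a hand-made placeholder classifier standing in for a learned y:

    D = [([51.0, 1.0], "yes"), ([18.0, 0.0], "no"), ([30.0, 0.0], "no")]

    def y(x):
        """Placeholder classifier; a learned model would take its place."""
        return "yes" if x[1] == 1.0 or x[0] > 40.0 else "no"

    errors = sum(1 for x, c_x in D if y(x) != c_x)
    print(errors / len(D))   # misclassification rate; 0.0 on this toy D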
Specification of Learning Problems
How to Construct a Classifier y (continued)

[Figure: the set of objects O, the set of classes C, and the feature space X; α maps objects to feature vectors, γ maps objects to classes, and both c and y map feature vectors to classes.]

Semantics:
- γ : Ideal classifier for real-world objects.
- α : Model formation function.
- c : Ideal classifier for vectors from the feature space.
- y : Classifier.
- c ≈ y : c is approximated by y.
Remarks:
- The feature space X comprises vectors x1, x2, ..., which can be considered as abstractions of real-world objects o1, o2, ..., and which have been computed according to our view of the real world.
- The model formation function α determines the level of abstraction between o and x, x = α(o). I.e., α determines the representation fidelity, exactness, quality, or simplification.
- Though α models an object o ∈ O only imperfectly as x = α(o), c(x) must be considered as ideal classifier, since c(x) is defined as γ(o) and hence yields the true real-world classes. I.e., c and γ have different domains, but they return the same images.
- c(x) is often termed ground truth (for x and the underlying classification problem). Observe that this term is justified by the fact that c(x) ≡ γ(o).
Specification of Learning Problems
LMS Algorithm for Fitting y    [IGD Algorithm]

Algorithm: LMS (Least Mean Squares)
Input:    D     Training examples of the form (x, c(x)) with target function value c(x) for x.
          η     Learning rate, a small positive constant.
Internal: y(D)  Set of y(x)-values computed from the elements x in D given some w.
Output:   w     Weight vector.

LMS(D, η)
1.  initialize_random_weights((w0, w1, ..., wp))
2.  REPEAT
3.    (x, c(x)) = random_select(D)
4.    y(x) = w0 + w1·x1 + ... + wp·xp
5.    error = c(x) − y(x)
6.    FOR j = 0 TO p DO
7.      Δwj = η · error · xj      // each x ∈ D is extended by a constant component x0 ≡ 1
8.      wj = wj + Δwj
9.    ENDDO
10. UNTIL(convergence(D, y(D)))
11. return((w0, w1, ..., wp))
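
A direct transcription of the pseudocode into Python; the fixed iteration count stands in for the convergence test, the toy data is made up, and each x carries the constant component x0 = 1.

    import random

    def lms(D, eta=0.05, iterations=10_000):
        """Least mean squares: D is a list of (x, c_x) pairs with x[0] == 1."""
        p = len(D[0][0]) - 1
        w = [random.uniform(-0.5, 0.5) for _ in range(p + 1)]   # initialize_random_weights
        for _ in range(iterations):                             # stands in for UNTIL(convergence)
            x, c_x = random.choice(D)                           # random_select(D)
            error = c_x - sum(wj * xj for wj, xj in zip(w, x))  # c(x) - y(x)
            w = [wj + eta * error * xj for wj, xj in zip(w, x)]
        return w

    # toy data generated by c(x) = 2 + 3*x1
    D = [([1.0, x1], 2 + 3 * x1) for x1 in [0.0, 1.0, 2.0, 3.0]]
    print(lms(D))   # approximately [2.0, 3.0]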
Specification of Learning Problems
Design of Learning Systems    [p.12, Mitchell 1997]

[Figure: design cycle of the learning system. problem_generation() supplies a problem (a chess board) to the chess program; compute_performance() produces a solution trace (the moves, γ(o*)); evaluation() performs a move analysis (accept / improve) and yields training examples (x1, c(x1)), ..., (xn, c(xn)); generalize(), realized here by the LMS algorithm, produces the hypothesis (w0, ..., wp).]

Important design aspects:
1. kind of experience
2. fidelity of the model formation function α : O → X
3. class or structure of the model function y
4. learning method for fitting y
Specification of Learning Problems
Related Questions
Model functions y:
- What are important classes of model functions?
- What are methods to fit (= learn) model functions?
- What are measures to assess the goodness of fit?
- How does the number of examples affect the learning process?
- How does noise affect the learning process?
Specification of Learning Problems
Related Questions (continued)

Generic learnability:
- What are the theoretical limits of learnability?
- How can we use nature as a model for learning?

Knowledge acquisition:
- How can we integrate background knowledge into the learning process?
- How can we integrate human expertise into the learning process?