
Web Information Retrieval

Lecture 6
Vector Space Model
Recap of the last lecture
 Parametric and field searches
 Zones in documents
 Scoring documents: zone weighting
 Index support for scoring
 tfidf and vector spaces
This lecture
 Vector space model
 Efficiency considerations
 Nearest neighbors and approximations
Documents as vectors
 At the end of Lecture 5 we said:
 Each doc j can now be viewed as a vector of tf-idf values, one component for each term
 So we have a vector space
 terms are axes
 docs live in this space
 even with stemming, may have 20,000+ dimensions
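As a concrete sketch, here is one way to build such vectors in Python. The toy corpus and the particular tf-idf variant (raw tf times idf) are illustrative assumptions, not the lecture's exact weighting:

```python
import math
from collections import Counter

# Toy corpus (illustrative assumption): each document is a bag of words.
docs = {
    "d1": "caesar brutus caesar mercy".split(),
    "d2": "brutus brutus caesar".split(),
    "d3": "mercy mercy noble".split(),
}

N = len(docs)
# Document frequency: in how many docs each term appears.
df = Counter(t for words in docs.values() for t in set(words))
# idf_t = log(N / df_t): rare terms get high weight.
idf = {t: math.log(N / d) for t, d in df.items()}

def tfidf_vector(words):
    # One component per term: raw tf times idf (one common variant).
    tf = Counter(words)
    return {t: c * idf.get(t, 0.0) for t, c in tf.items()}

vectors = {name: tfidf_vector(words) for name, words in docs.items()}
print(vectors["d1"])
```

A dictionary keyed by term is just a sparse representation of the 20,000+-dimensional vector: absent terms have component 0.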
Example

term      Antony and Cleopatra  Julius Caesar  The Tempest  Hamlet  Othello  Macbeth
Brutus    3.0                   8.3            0.0          1.0     0.0      0.0
Caesar    2.3                   2.3            0.0          0.5     0.3      0.3
mercy     0.5                   0.0            0.7          0.9     0.9      0.3
Why turn docs into vectors?
 First application: Query-by-example
 Given a doc D, find others “like” it.
 Now that D is a vector, find vectors (docs) “near” it.
Intuition
[Figure: documents d1–d5 plotted as points in a vector space whose axes are terms t1, t2, t3]

Postulate: Documents that are “close together” in the vector space talk about the same things.
The vector space model
Query as vector:
 We regard the query as a short document
 We return the documents ranked by the closeness of their vectors to the query, also represented as a vector.
Desiderata for proximity
 If d1 is near d2, then d2 is near d1.
 If d1 is near d2, and d2 is near d3, then d1 is not far from d3.
 No doc is closer to d than d itself.
First cut
 Distance between d1 and d2 is the length of the vector
|d1 – d2|.
 Euclidean distance
 Why is this not a great idea?
 We still haven’t dealt with the issue of length
normalization
 However, we can implicitly normalize by looking at
angles instead
Why distance is a bad idea
 The Euclidean distance between q and d2 is large even though the distribution of terms in the query q and the distribution of terms in the document d2 are very similar.
Use angle instead of distance
 Thought experiment: take a document d and append it to itself. Call this document d′.
 “Semantically” d and d′ have the same content
 The Euclidean distance between the two documents can be quite large
 The angle between the two documents is 0, corresponding to maximal similarity.
 Key idea: Rank documents according to angle with query.
From angles to cosines
 The following two notions are equivalent.
 Rank documents in decreasing order of the angle between query and document
 Rank documents in increasing order of cosine(query, document)
 Cosine is a monotonically decreasing function on the interval of interest, [0°, 90°]
From angles to cosines

 But how – and why – should we be computing cosines?


Cosine similarity
 Distance between vectors d1 and d2 is captured by the cosine of the angle θ between them.
 Note – this is a similarity, not a distance

[Figure: vectors d1 and d2 in term space (axes t1, t2, t3), separated by angle θ]
Cosine similarity
 A vector can be normalized (given a length of 1) by
dividing each of its components by its length – here
we use the L2 norm
x2 x x
i
2
i

 This maps vectors onto the unit sphere:




M
 Then, dj  i 1
wi , j  1
 Longer documents don’t get more weight
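A minimal sketch of this normalization, assuming the sparse dict-of-weights representation used above:

```python
import math

def l2_normalize(vec):
    # Divide every component by the vector's L2 length,
    # mapping the vector onto the unit sphere.
    length = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / length for t, w in vec.items()}

unit = l2_normalize({"brutus": 3.0, "caesar": 2.3, "mercy": 0.5})
# The normalized vector has length 1, so long documents
# no longer dominate simply by being long.
assert abs(sum(w * w for w in unit.values()) - 1.0) < 1e-9
```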
Cosine similarity
$$\mathrm{sim}(d_j, d_k) = \cos(\vec{d}_j, \vec{d}_k) = \frac{\vec{d}_j \cdot \vec{d}_k}{|\vec{d}_j|\,|\vec{d}_k|} = \frac{\sum_{i=1}^{M} w_{i,j}\, w_{i,k}}{\sqrt{\sum_{i=1}^{M} w_{i,j}^2}\,\sqrt{\sum_{i=1}^{M} w_{i,k}^2}}$$

 Cosine of angle between two vectors
 The denominator involves the lengths of the vectors (normalization).
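The formula transcribes directly into code; a sketch over the same sparse representation:

```python
import math

def cosine(a, b):
    # Numerator: sum of products over shared terms (the dot product).
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    # Denominator: the two vector lengths (the normalization).
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b)

print(cosine({"brutus": 3.0, "mercy": 0.5}, {"brutus": 1.0, "caesar": 0.5}))
```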
Normalized vectors
 For normalized vectors, the cosine is simply the dot
product:
$$\cos(\vec{d}_j, \vec{d}_k) = \vec{d}_j \cdot \vec{d}_k$$
Cosine similarity exercises
 Exercise: Rank the following by decreasing cosine
similarity:
 Two docs that have only frequent words (the, a, an, of)
in common.
 Two docs that have no words in common.
 Two docs that have many rare words in common
(wingspan, tailfin).
Exercise
 Euclidean distance between vectors:

$$|\vec{d}_j - \vec{d}_k| = \sqrt{\sum_{i=1}^{M} (d_{i,j} - d_{i,k})^2}$$

 Show that, for normalized vectors, Euclidean distance gives the same proximity ordering as the cosine measure
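A hint rather than a full solution: for unit vectors the squared distance expands via the dot product,

$$|\vec{d}_j - \vec{d}_k|^2 = |\vec{d}_j|^2 + |\vec{d}_k|^2 - 2\,\vec{d}_j \cdot \vec{d}_k = 2\bigl(1 - \cos(\vec{d}_j, \vec{d}_k)\bigr),$$

so distance decreases exactly when cosine increases, and the two measures induce the same ordering.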
Example
 Docs: Austen’s Sense and Sensibility, Pride and Prejudice; Brontë’s Wuthering Heights

Term counts:
            SaS   PaP    WH
affection   115    58    20
jealous      10     7    11
gossip        2     0     6

After weighting and length normalization:
            SaS    PaP    WH
affection  0.996  0.993  0.847
jealous    0.087  0.120  0.466
gossip     0.017  0.000  0.254

 cos(SaS, PaP) ≈ 0.996 × 0.993 + 0.087 × 0.120 + 0.017 × 0.000 ≈ 0.999
 cos(SaS, WH) ≈ 0.996 × 0.847 + 0.087 × 0.466 + 0.017 × 0.254 ≈ 0.889
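These products can be checked in a couple of lines of Python:

```python
# Normalized weights copied from the table above.
sas = {"affection": 0.996, "jealous": 0.087, "gossip": 0.017}
pap = {"affection": 0.993, "jealous": 0.120, "gossip": 0.000}
wh  = {"affection": 0.847, "jealous": 0.466, "gossip": 0.254}

# The vectors are already length-normalized, so cosine = dot product.
dot = lambda a, b: sum(w * b[t] for t, w in a.items())
print(round(dot(sas, pap), 3))  # 0.999
print(round(dot(sas, wh), 3))   # 0.888 (the 0.889 above reflects
                                # rounding of the table entries)
```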
Queries as vectors
 Key idea 1: Do the same for queries: represent them
as vectors in the space
 Key idea 2: Rank documents according to their
proximity to the query in this space
 proximity = similarity of vectors
Cosine(query,document)
$$\cos(\vec{q}, \vec{d}) = \frac{\vec{q} \cdot \vec{d}}{|\vec{q}|\,|\vec{d}|} = \frac{\vec{q}}{|\vec{q}|} \cdot \frac{\vec{d}}{|\vec{d}|} = \frac{\sum_{i=1}^{M} q_i d_i}{\sqrt{\sum_{i=1}^{M} q_i^2}\,\sqrt{\sum_{i=1}^{M} d_i^2}}$$

 The numerator is the dot product of q and d; the middle form rewrites it as the dot product of the corresponding unit vectors.
 cos(q, d) is the cosine similarity of q and d or, equivalently, the cosine of the angle between q and d.
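Putting the earlier sketches together, ranking reduces to a few lines (reusing the hypothetical tfidf_vector, cosine, and vectors defined in the sketches above):

```python
def rank(query_text, doc_vectors):
    # Key idea 1: treat the query as a very short document.
    q = tfidf_vector(query_text.split())
    # Key idea 2: rank documents by cosine proximity to the query.
    scores = {name: cosine(q, d) for name, d in doc_vectors.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank("brutus caesar", vectors):
    print(name, round(score, 3))
```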


Summary: What’s the real point of using vector spaces?
 Key: A user’s query can be viewed as a (very) short document.
 Query becomes a vector in the same space as the docs.
 Can measure each doc’s proximity to it.
 Natural measure of scores/ranking – no longer Boolean.
 Queries are expressed as bags of words
 Other similarity measures: see http://www.lans.ece.utexas.edu/~strehl/diss/node52.html for a survey
Interaction: vectors and phrases
 Phrases don’t fit naturally into the vector space world:
 “hong kong” “new york”
 Positional indexes don’t capture tf-idf information for “hong kong”
 Biword indexes treat certain phrases as terms
 For these, we can pre-compute tf-idf.
 A hack: we cannot expect end-users formulating queries to know which phrases are indexed
Vectors and Boolean queries
 Vectors and Boolean queries really don’t work
together very well
 We cannot express AND, OR, NOT just by summing term frequencies
Vector spaces and other operators
 Vector space queries are apt for no-syntax, bag-of-
words queries
 Clean metaphor for similar-document queries
 Not a good combination with Boolean, positional
query operators, phrase queries, …
 But …
Query language vs. scoring
 May allow user a certain query language, say
 Freetext basic queries
 Phrase, wildcard etc. in Advanced Queries.
 For scoring (oblivious to the user) we may use all of the above, e.g. for a freetext query:
 Highest-ranked hits have the query as a phrase
 Next, docs that have all query terms near each other
 Then, docs that have some query terms, or all of them spread out, with tf-idf weights for scoring (see the sketch below)
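A hedged sketch of this tiering in Python; the window size and the match predicates are illustrative stand-ins for real positional-index lookups, not the lecture's method:

```python
from itertools import product

def is_phrase_match(terms, doc):
    # Exact phrase: the query terms appear consecutively, in order.
    return any(doc[i:i + len(terms)] == terms
               for i in range(len(doc) - len(terms) + 1))

def all_terms_near(terms, doc, window=5):
    # Crude proximity test: some occurrence of every query term
    # falls inside a `window`-token span (illustrative choice).
    pos = [[i for i, t in enumerate(doc) if t == term] for term in terms]
    if any(not p for p in pos):
        return False
    return any(max(c) - min(c) < window for c in product(*pos))

def tier(terms, doc):
    # Higher tier = stronger match; within a tier, rank by tf-idf.
    if is_phrase_match(terms, doc):
        return 3
    if all_terms_near(terms, doc):
        return 2
    return 1 if any(t in doc for t in terms) else 0

doc = "to be or not to be that is the question".split()
print(tier(["or", "not"], doc))       # 3: exact phrase
print(tier(["be", "question"], doc))  # 2: terms within the window
```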
Exercises
 How would you augment the inverted index built in
lectures 1–3 to support cosine ranking computations?
 What information do we need to store?
 Walk through the steps of serving a query.
 The math of the vector space model is quite
straightforward, but being able to do cosine ranking
efficiently at runtime is nontrivial
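For the first exercise, one possible design (a sketch, not the only answer): store a tf-idf weight alongside each posting and precompute every document's vector length at index time, so query processing walks only the query terms' postings. All weights below are made up for illustration:

```python
from collections import defaultdict

# Postings augmented with per-document term weights:
# term -> list of (doc_id, w_td).
index = {
    "brutus": [(1, 1.2), (2, 0.9)],
    "caesar": [(1, 0.8), (3, 1.5)],
}
doc_length = {1: 2.0, 2: 1.1, 3: 1.8}  # precomputed L2 lengths

def cosine_scores(query_weights):
    # Accumulate dot products term-at-a-time, touching only the
    # postings lists of the query's terms.
    scores = defaultdict(float)
    for term, w_tq in query_weights.items():
        for doc_id, w_td in index.get(term, []):
            scores[doc_id] += w_tq * w_td
    # Divide by document length; the query's own length is the same
    # for every document, so it cannot change the ranking.
    return sorted(((s / doc_length[d], d) for d, s in scores.items()),
                  reverse=True)

print(cosine_scores({"brutus": 1.0, "caesar": 0.5}))
```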
Resources
 IIR, Sections 6.3 and 7.3
