2 DistDesign

The document outlines the design and architectural issues related to distributed and parallel database management systems (DBMS), focusing on data distribution, fragmentation, query processing, and transaction management. It discusses various design approaches, including top-down and bottom-up methodologies, and details fragmentation strategies such as horizontal and vertical fragmentation. Additionally, it addresses the correctness of fragmentation, allocation alternatives, and the information requirements necessary for effective database distribution and querying.

Outline

n Introduction & architectural issues


q Data distribution
q Fragmentation
q Data Allocation
q Distributed query processing
q Distributed query optimization
q Querying multidatabase systems
q Distributed transactions & concurrency control
q Distributed reliability
q Database replication
q Parallel database systems
q Database integration & querying
q Advanced topics

CS742 – Distributed & Parallel DBMS M. Tamer Özsu Page 2. 1

Design Problem

n In the general setting :


Making decisions about the placement of data and
programs across the sites of a computer network as well as
possibly designing the network itself.

n In Distributed DBMS, the placement of


applications entails
l placement of the distributed DBMS software; and

l placement of the applications that run on the database

Distribution Design

n Top-down
l mostly in designing systems from scratch

l mostly in homogeneous systems

n Bottom-up
l when the databases already exist at a number of sites


Top-Down Design

The design process flows as follows:
Requirements Analysis produces the system objectives.
View Design (driven by user input) yields the external schemas (ES's);
Conceptual Design yields the global conceptual schema (GCS); the two are
reconciled through View Integration.
Distribution Design maps the GCS, using access information and further user
input, onto the local conceptual schemas (LCS's).
Physical Design maps each LCS onto a local internal schema (LIS).
Distribution Design Issues

n Why fragment at all?
n How to fragment?
n How much to fragment?
n How to test correctness?
n How to allocate?
n Information requirements?

Fragmentation
n Can't we just distribute relations?
n What is a reasonable unit of distribution?
l Relation
u Views are subsets of relations è locality
u Extra communication
l Fragments of relations (sub-relations)
u Concurrent execution of a number of transactions that
access different portions of a relation
u Views that cannot be defined on a single fragment will
require extra processing
u Semantic data control (especially integrity
enforcement) more difficult

Fragmentation Alternatives – Horizontal

PROJ1: projects with budgets less than $200,000
PROJ2: projects with budgets greater than or equal to $200,000

PROJ
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal
P2   Database Develop.  135000  New York
P3   CAD/CAM            250000  New York
P4   Maintenance        310000  Paris
P5   CAD/CAM            500000  Boston

PROJ1
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal
P2   Database Develop.  135000  New York

PROJ2
PNO  PNAME              BUDGET  LOC
P3   CAD/CAM            250000  New York
P4   Maintenance        310000  Paris
P5   CAD/CAM            500000  Boston

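The two horizontal fragments above can be sketched as simple selections; the relation is modeled here as a list of Python dicts, an illustrative layout rather than anything from the course material:

```python
# PROJ modeled as a list of rows; column names follow the slide.
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

def select(relation, predicate):
    """A horizontal fragment: the tuples of `relation` satisfying `predicate`."""
    return [t for t in relation if predicate(t)]

PROJ1 = select(PROJ, lambda t: t["BUDGET"] < 200000)   # budgets under $200,000
PROJ2 = select(PROJ, lambda t: t["BUDGET"] >= 200000)  # budgets of $200,000 or more
```

Because the two predicates are complementary, every tuple lands in exactly one fragment.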

Fragmentation Alternatives – Vertical

PROJ1: information about project budgets
PROJ2: information about project names and locations

PROJ
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal
P2   Database Develop.  135000  New York
P3   CAD/CAM            250000  New York
P4   Maintenance        310000  Paris
P5   CAD/CAM            500000  Boston

PROJ1
PNO  BUDGET
P1   150000
P2   135000
P3   250000
P4   310000
P5   500000

PROJ2
PNO  PNAME              LOC
P1   Instrumentation    Montreal
P2   Database Develop.  New York
P3   CAD/CAM            New York
P4   Maintenance        Paris
P5   CAD/CAM            Boston

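The vertical split can likewise be sketched as projections that each carry the key PNO, so the original relation is recoverable by a join on that key (again an illustrative Python layout):

```python
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

def project(relation, attrs):
    """A vertical fragment: keep only `attrs` of each tuple. The key PNO is
    repeated in both fragments so the relation can be rebuilt by joining."""
    return [{a: t[a] for a in attrs} for t in relation]

PROJ1 = project(PROJ, ["PNO", "BUDGET"])        # information about budgets
PROJ2 = project(PROJ, ["PNO", "PNAME", "LOC"])  # names and locations

# Reconstruction: a natural join on the shared key PNO.
joined = {t["PNO"]: dict(t) for t in PROJ1}
for t in PROJ2:
    joined[t["PNO"]].update(t)
```

The join recovers every original tuple, which is the reconstruction requirement for vertical fragmentation.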
Degree of Fragmentation

There is a finite number of alternatives, ranging from individual tuples
(or attributes) at one extreme to entire relations at the other.
The problem is finding the suitable level of partitioning
within this range.

Correctness of Fragmentation
n Completeness
l Decomposition of relation R into fragments R1, R2, ..., Rn is
complete if and only if each data item in R can also be
found in some Ri
n Reconstruction
l If relation R is decomposed into fragments R1, R2, ..., Rn,
then there should exist some relational operator∇such
that
R = ∇1≤i≤nRi
n Disjointness
l If relation R is decomposed into fragments R1, R2, ..., Rn,
and data item di is in Rj, then di should not be in any
other fragment Rk (k ≠ j ).

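For a horizontal decomposition the three rules can be checked mechanically; the helper names below are hypothetical, and rows are plain tuples for easy comparison:

```python
# A tiny stand-in relation: rows as (PNO, BUDGET) tuples.
R = [("P1", 150000), ("P2", 135000), ("P3", 250000)]
R1 = [t for t in R if t[1] < 200000]
R2 = [t for t in R if t[1] >= 200000]

def is_complete(relation, fragments):
    """Completeness: every data item of the relation appears in some fragment."""
    return all(any(t in f for f in fragments) for t in relation)

def reconstructs(relation, fragments):
    """Reconstruction: for horizontal fragments the operator ∇ is set union."""
    return sorted(t for f in fragments for t in f) == sorted(relation)

def is_disjoint(fragments):
    """Disjointness: no data item occurs in more than one fragment."""
    seen = set()
    for f in fragments:
        for t in f:
            if t in seen:
                return False
            seen.add(t)
    return True
```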
Allocation Alternatives

n Non-replicated
l partitioned : each fragment resides at only one site

n Replicated
l fully replicated : each fragment at each site
l partially replicated : each fragment at some of the sites

n Rule of thumb:

read-only queries

If ≥ 1 replication is advantageous,
update quries

otherwise replication may cause problems


Information Requirements

n Four categories:
l Database information
l Application information
l Communication network information
l Computer system information

Fragmentation

n Horizontal Fragmentation (HF)


l Primary Horizontal Fragmentation (PHF)

l Derived Horizontal Fragmentation (DHF)

n Vertical Fragmentation (VF)

n Hybrid Fragmentation (HF)


PHF – Information Requirements


n Database information
l Relationship
PAY"

TITLE,"SAL"

L 1"
EMP" PROJ"

ENO,"ENAME, TITLE" PNO, PNAME, BUDGET, LOC"

L 2" L 3"
ASG"

ENO, PNO, RESP, DUR"

l Cardinality of each relation: card(R)

PHF - Information Requirements
n Application Information
l simple predicates : Given R[A1, A2, …, An], a simple predicate pj
is
pj : Ai θ Value
where θ ∈{=,<,≤,>,≥,≠}, Value ∈Di and Di is the domain of Ai.
For relation R we define Pr = {p1, p2, …,pm}
Example :
PNAME = "Maintenance"
BUDGET ≤ 200000
l minterm predicates : Given R and Pr = {p1, p2, …,pm}
define M = {m1,m2,…,mr} as

M = { mi|mi = ∧ pj∈Pr pj* }, 1≤j≤m, 1≤i≤z


where pj* = pj or pj* = ¬(pj).


PHF – Minterm Examples


n Simple predicates on PROJ (partial)

p1: LOC = “Montreal" p2: LOC=“New York"

p3: LOC = “Paris" p4: BUDGET ≤ 200000

p5: BUDGET ≤ 200000

n Minterm predicates on PROJECT (Partial)

m1: LOC = "Montreal" ∧ BUDGET ≤ 200000

m2: NOT(LOC="Montreal") ∧ BUDGET ≤ 200000

m3: LOC = "Montreal” ∧ NOT(BUDGET ≤ 200000)

m4: NOT(LOC = "Montreal") ∧ NOT(BUDGET ≤ 200000)

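Enumerating minterms is a direct product over "keep or negate" choices for each simple predicate; the sketch below rebuilds the four minterms above from two simple predicates (the predicate strings are only labels here):

```python
from itertools import product

# The two simple predicates of the example; the strings are only labels.
simple = ['LOC = "Montreal"', "BUDGET <= 200000"]

# Each minterm takes every simple predicate either as-is or negated.
minterms = []
for signs in product([True, False], repeat=len(simple)):
    conj = " AND ".join(p if keep else "NOT(" + p + ")"
                        for p, keep in zip(simple, signs))
    minterms.append(conj)
```

With m simple predicates this yields 2^m candidate minterms, which is why a later step eliminates the contradictory ones.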
PHF – Information Requirements
n Application information.
l minterm selectivities: sel(mi).
u The number of tuples of the relation that would
be accessed by a user query which is specified
according to a given minterm predicate mi.
l access frequencies: acc(qi).
u The frequency with which a user application qi
accesses data.
u Access frequency for a minterm predicate can
also be defined.


Primary Horizontal Fragmentation


Definition :
Rj = σFj(R), 1 ≤ j ≤ w
where Fj is a selection formula, which is (preferably) a
minterm predicate.
Therefore,
A horizontal fragment Ri of relation R consists of all
the tuples of R which satisfy a minterm predicate mi.




Given a set of minterm predicates M, there are as
many horizontal fragments of relation R as there are
minterm predicates.
Set of horizontal fragments also referred to as minterm
fragments.

PHF – Algorithm
Given: A relation R, the set of simple predicates Pr
Output: The set of fragments of R, FR = {R1, R2,
…,Rw} that obey the fragmentation rules.

Preliminaries :
l Pr should be complete
l Pr should be minimal


Completeness of Simple Predicates


n A set of simple predicates Pr is said to be complete
if and only if the accesses to the tuples of the
minterm fragments defined on Pr requires that two
tuples of the same minterm fragment have the
same probability of being accessed by any
application.
n Example :
l Assume PROJ(PNO,PNAME,BUDGET,LOC) has two
applications defined on it.
l Find the budgets of projects at each location. (1)
l Find projects with budgets less than or equal to $200000. (2)

Completeness of Simple Predicates
According to (1),
Pr={LOC=“Montreal”,LOC=“New York”,LOC=“Paris”}
which is not complete with respect to (2).
Modify
Pr ={LOC=“Montreal”,LOC=“New York”,LOC=“Paris”,
BUDGET≤200000,BUDGET>200000}
which is complete.


Minimality of Simple Predicates


n If a predicate influences how fragmentation is
performed, (i.e., causes a fragment f to be
further fragmented into, say, fi and fj) then
there should be at least one application that
accesses fi and fj differently.
n In other words, the simple predicate should be
relevant in determining a fragmentation.
n If all the predicates of a set Pr are relevant,
then Pr is minimal.
acc(mi)/card(fi) ≠ acc(mj)/card(fj)

Minimality of Simple Predicates
Example :
Pr ={LOC=“Montreal”,LOC=“New York”, LOC=“Paris”,
BUDGET≤200000,BUDGET>200000}

is minimal (in addition to being complete).


However, if we add
PNAME = “Instrumentation”

then Pr is not minimal.


COM_MIN Algorithm
Given: a relation R and a set of simple
predicates Pr
Output: a complete and minimal set of simple
predicates Pr' for Pr

Rule 1: a relation or fragment is partitioned into


at least two parts which are accessed
differently by at least one application.

COM_MIN Algorithm
1. Initialization:
l find a pi ∈ Pr such that pi partitions R according to Rule 1
l set Pr' = {pi}; Pr ← Pr – {pi}; F ← {fi}

2. Iteratively add predicates to Pr' until it is complete:
l find a pj ∈ Pr such that pj partitions some fragment fk defined
according to a minterm predicate over Pr', according to Rule 1
l set Pr' = Pr' ∪ {pj}; Pr ← Pr – {pj}; F ← F ∪ {fj}
l if ∃pk ∈ Pr' which is nonrelevant then
Pr' ← Pr' – {pk}
F ← F – {fk}

PHORIZONTAL Algorithm
Makes use of COM_MIN to perform fragmentation.
Input: a relation R and a set of simple
predicates Pr
Output: a set of minterm predicates M according
to which relation R is to be fragmented

1. Pr' ← COM_MIN (R, Pr)
2. determine the set M of minterm predicates
3. determine the set I of implications among pi ∈ Pr'
4. eliminate the contradictory minterms from M
PHF – Example
n Two candidate relations : PAY and PROJ.
n Fragmentation of relation PROJ
l Applications:
u Find the name and budget of projects given their location
s Issued at three sites
u Access project information according to budget
s one site accesses <200000 other accesses ≥200000
l Simple predicates
l For application (1)
p1 : LOC = “Montreal”
p2 : LOC = “New York”
p3 : LOC = “Paris”
l For application (2)
p4 : BUDGET ≤ 200000
p5 : BUDGET > 200000
l Pr = Pr' = {p1,p2,p3,p4,p5}


PHF – Example
n Fragmentation of relation PROJ continued
l Minterm fragments left after elimination
m1 : (LOC = “Montreal”) ∧ (BUDGET ≤ 200000)

m2 : (LOC = “Montreal”) ∧ (BUDGET >200000)


m3 : (LOC = “New York”) ∧ (BUDGET ≤ 200000)

m4 : (LOC = “New York”) ∧ (BUDGET >200000)


m5 : (LOC = “Paris”) ∧ (BUDGET ≤ 200000)

m6 : (LOC = “Paris”) ∧ (BUDGET > 200000)

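The elimination step can be sketched by enumerating all 2^5 sign vectors over p1..p5 and discarding the contradictory ones. The two implications encoded below (each project is at exactly one of the three locations; p5 is the negation of p4) are assumptions matching the example, and exactly the six minterms above survive:

```python
from itertools import product

# Sign vectors over (p1..p5): p1..p3 are the three location predicates,
# p4 is BUDGET <= 200000 and p5 is BUDGET > 200000.
survivors = []
for p1, p2, p3, p4, p5 in product([True, False], repeat=5):
    if [p1, p2, p3].count(True) != 1:  # a project is at exactly one location
        continue
    if p4 == p5:                       # p5 is the negation of p4
        continue
    survivors.append((p1, p2, p3, p4, p5))
```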
PHF – Example

PROJ1
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal

PROJ2
PNO  PNAME              BUDGET  LOC
P2   Database Develop.  135000  New York

PROJ4
PNO  PNAME              BUDGET  LOC
P3   CAD/CAM            250000  New York

PROJ6
PNO  PNAME              BUDGET  LOC
P4   Maintenance        310000  Paris

PHF – Correctness
n Completeness
l Since Pr' is complete and minimal, the selection predicates are
complete

n Reconstruction
l If relation R is fragmented into FR = {R1,R2,…,Rr}

R = ∪∀Ri ∈FR Ri

n Disjointness
l Minterm predicates that form the basis of fragmentation should be
mutually exclusive.

Vertical Fragmentation
n Has been studied within the centralized
context
l design methodology
l physical clustering

n More difficult than horizontal, because more


alternatives exist.
Two approaches :
l grouping
u attributes to fragments
l splitting
u relation to fragments


Vertical Fragmentation
n Overlapping fragments
l grouping

n Non-overlapping fragments
l splitting

We do not consider the replicated key attributes


to be overlapping.
Advantage:
Easier to enforce functional dependencies
(for integrity checking etc.)

VF – Information Requirements

n Application Information
l Attribute affinities
u a measure that indicates how closely related the attributes are
u This is obtained from more primitive usage data
l Attribute usage values
u Given a set of queries Q = {q1, q2,…, qq} that will run on the
relation R[A1, A2,…, An],

use(qi, Aj) = 1 if attribute Aj is referenced by query qi,
              0 otherwise

use(qi,•) can be defined accordingly


VF – Definition of use(qi,Aj)
Consider the following 4 queries for relation PROJ
q1: SELECT BUDGET q2: SELECT PNAME,BUDGET
FROM PROJ FROM PROJ
WHERE PNO=Value
q3: SELECT PNAME q4: SELECT SUM(BUDGET)
FROM PROJ FROM PROJ
WHERE LOC=Value WHERE LOC=Value
Let A1= PNO, A2= PNAME, A3= BUDGET, A4= LOC
A1 A2 A3 A4
q1 1 0 1 0
q2 0 1 1 0
q3 0 1 0 1
q4 0 0 1 1
VF – Affinity Measure aff(Ai,Aj)

The attribute affinity measure between two attributes Ai
and Aj of a relation R[A1, A2, …, An] with respect to the set
of applications Q = {q1, q2, …, qq} is defined as follows:

aff(Ai, Aj) = Σ (query access)
              over all queries that access both Ai and Aj

where

query access = Σ (access frequency of the query * access executions)
               over all sites

VF – Calculation of aff(Ai, Aj)

Assume each query in the previous example accesses
the attributes once during each execution.
Also assume the access frequencies:

      S1  S2  S3
q1    15  20  10
q2     5   0   0
q3    25  25  25
q4     3   0   0

Then

aff(A1, A3) = 15*1 + 20*1 + 10*1 = 45

and the attribute affinity matrix AA is

      A1  A2  A3  A4
A1    45   0  45   0
A2     0  80   5  75
A3    45   5  53   3
A4     0  75   3  78
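The affinity matrix can be computed directly from the usage matrix and the per-site frequencies; the sketch below reproduces the AA matrix above (data copied from the slides):

```python
# Attribute usage use(qi, Aj) and per-site access frequencies acc(qi),
# both copied from the slides (A1=PNO, A2=PNAME, A3=BUDGET, A4=LOC).
use = [
    [1, 0, 1, 0],  # q1
    [0, 1, 1, 0],  # q2
    [0, 1, 0, 1],  # q3
    [0, 0, 1, 1],  # q4
]
acc = [
    [15, 20, 10],  # q1 at sites S1, S2, S3
    [5, 0, 0],     # q2
    [25, 25, 25],  # q3
    [3, 0, 0],     # q4
]

n = 4
AA = [[0] * n for _ in range(n)]
for q in range(len(use)):
    freq = sum(acc[q])  # each execution accesses the attributes once
    for i in range(n):
        for j in range(n):
            if use[q][i] and use[q][j]:
                AA[i][j] += freq
```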
VF – Clustering Algorithm
n Take the attribute affinity matrix AA and
reorganize the attribute orders to form clusters
where the attributes in each cluster
demonstrate high affinity to one another.
n Bond Energy Algorithm (BEA) has been used
for clustering of entities. BEA finds an
ordering of entities (in our case attributes)
such that the global affinity measure is
maximized.
AM = Σi Σj (affinity of Ai and Aj with their neighbors)


Bond Energy Algorithm

Input: The AA matrix
Output: The clustered affinity matrix CA, which
is a perturbation of AA

1. Initialization: Place and fix one of the columns
of AA in CA.
2. Iteration: Place the remaining n–i columns in
the remaining i+1 positions in the CA matrix.
For each column, choose the placement that
makes the most contribution to the global
affinity measure.
3. Row order: Order the rows according to the
column ordering.
Bond Energy Algorithm
"Best" placement? Define the contribution of a
placement:

cont(Ai, Ak, Aj) = 2bond(Ai, Ak) + 2bond(Ak, Aj) – 2bond(Ai, Aj)

where

bond(Ax, Ay) = Σz=1..n aff(Az, Ax) aff(Az, Ay)

BEA – Example
Consider the following AA matrix and the corresponding CA matrix,
where A1 and A2 have been placed. Place A3:

AA =
      A1  A2  A3  A4
A1    45   0  45   0
A2     0  80   5  75
A3    45   5  53   3
A4     0  75   3  78

CA (columns A1, A2 placed) =
      A1  A2
A1    45   0
A2     0  80
A3    45   5
A4     0  75

Ordering (0-3-1):
cont(A0, A3, A1) = 2bond(A0, A3) + 2bond(A3, A1) – 2bond(A0, A1)
                 = 2*0 + 2*4410 – 2*0 = 8820
Ordering (1-3-2):
cont(A1, A3, A2) = 2bond(A1, A3) + 2bond(A3, A2) – 2bond(A1, A2)
                 = 2*4410 + 2*890 – 2*225 = 10150
Ordering (2-3-4):
cont(A2, A3, A4) = 1780
BEA – Example
n Therefore, the CA matrix has the form

      A1  A3  A2
      45  45   0
       0   5  80
      45  53   5
       0   3  75

n When A4 is placed, the final form of the CA matrix
(after row organization) is

      A1  A3  A2  A4
A1    45  45   0   0
A3    45  53   5   3
A2     0   5  80  75
A4     0   3  75  78

VF – Algorithm
How can you divide a set of clustered attributes
{A1, A2, …, An} into two (or more) sets {A1, A2, …, Ai}
and {Ai+1, …, An} such that there are no (or minimal)
applications that access both (or more than one) of
the sets?

Conceptually, the clustered affinity matrix CA is split along its main
diagonal into a top-left block TA over {A1, …, Ai} and a bottom-right
block BA over {Ai+1, …, An}.
VF – Algorithm
Define
TQ = set of applications that access only TA
BQ = set of applications that access only BA
OQ = set of applications that access both TA and BA
and
CTQ = total number of accesses to attributes by applications
that access only TA
CBQ = total number of accesses to attributes by applications
that access only BA
COQ = total number of accesses to attributes by applications
that access both TA and BA
Then find the point along the diagonal that maximizes
z = CTQ * CBQ – COQ²

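The split-point search can be sketched over the clustered order from the BEA example; the usage matrix and total frequencies are taken from the earlier slides, and the objective is the CTQ*CBQ – COQ² measure above:

```python
# Clustered attribute order from the BEA example (0-indexed): A1, A3, A2, A4.
order = [0, 2, 1, 3]
# use(qi, Aj) and total access frequencies acc(qi) from the earlier slides.
use = [[1, 0, 1, 0], [0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]]
acc = [45, 5, 75, 3]

def split_value(i):
    """CTQ*CBQ - COQ**2 for the split after position i of the clustered order."""
    TA, BA = set(order[:i + 1]), set(order[i + 1:])
    CTQ = CBQ = COQ = 0
    for q, row in enumerate(use):
        attrs = {a for a, bit in enumerate(row) if bit}
        if attrs <= TA:
            CTQ += acc[q]      # query touches only the top fragment
        elif attrs <= BA:
            CBQ += acc[q]      # query touches only the bottom fragment
        else:
            COQ += acc[q]      # query straddles both fragments
    return CTQ * CBQ - COQ ** 2

# Best split: after A3, giving {A1, A3} (PNO, BUDGET) and {A2, A4} (PNAME, LOC).
best = max(range(len(order) - 1), key=split_value)
```

The winning split matches the vertical fragmentation shown earlier: budgets with the key in one fragment, names and locations in the other.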

VF – Algorithm
Two problems :
 Cluster forming in the middle of the CA matrix
l Shift a row up and a column left and apply the algorithm to
find the “best” partitioning point
l Do this for all possible shifts
l Cost O(m2)

 More than two clusters


l m-way partitioning
l try 1, 2, …, m–1 split points along diagonal and try to find
the best point for each of these
l Cost O(2m)

VF – Correctness
A relation R, defined over attribute set A and key K,
generates the vertical partitioning FR = {R1, R2, …, Rr}.
n Completeness
l The following should be true for A:
A = ∪ AR i

n Reconstruction
l Reconstruction can be achieved by
R = ⋈K Ri, ∀Ri ∈ FR

n Disjointness
l TID's are not considered to be overlapping since they are maintained
by the system
l Duplicated keys are not considered to be overlapping

Fragment Allocation
n Problem Statement
Given
F = {F1, F2, …, Fn} fragments
S ={S1, S2, …, Sm} network sites
Q = {q1, q2,…, qq} applications
Find the "optimal" distribution of F to S.
n Optimality
l Minimal cost
u Communication + storage + processing (read & update)
u Cost in terms of time (usually)
l Performance
Response time and/or throughput
l Constraints
u Per site constraints (storage & processing)

Information Requirements
n Database information
l selectivity of fragments
l size of a fragment

n Application information
l access types and numbers
l access localities

n Communication network information


l unit cost of storing data at a site
l unit cost of processing at a site

n Computer system information


l bandwidth
l latency
l communication overhead

Allocation Model
General Form
min(Total Cost)
subject to
response time constraint
storage constraint
processing constraint

Decision Variable

xij = 1 if fragment Fi is stored at site Sj,
      0 otherwise

Allocation Model
n Total Cost

Σ(all queries) query processing cost +
Σ(all sites) Σ(all fragments) cost of storing a fragment at a site

n Storage Cost (of fragment Fj at site Sk)

(unit storage cost at Sk) * (size of Fj) * xjk

n Query Processing Cost (for one query)

processing component + transmission component
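A minimal sketch of the allocation model: brute-force search over non-replicated assignments of fragments to sites, minimizing storage plus remote-access cost. All numbers here are invented toy values, and the cost function is a simplified stand-in for the model's processing and transmission components:

```python
from itertools import product

# Toy instance (invented numbers): 2 fragments, 2 sites, each fragment
# stored at exactly one site (non-replicated allocation).
size = [10, 20]                  # size of each fragment
store = [1.0, 2.0]               # unit storage cost at each site
reads = [[100, 40], [10, 90]]    # reads[f][s]: accesses to fragment f from site s
transfer = 0.5                   # cost per remote access

def total_cost(assign):
    """assign[f] = site holding fragment f; storage plus remote-access cost."""
    cost = 0.0
    for f, s in enumerate(assign):
        cost += store[s] * size[f]                # storage component
        for src in range(len(store)):
            if src != s:
                cost += transfer * reads[f][src]  # transmission component
    return cost

best = min(product(range(len(store)), repeat=len(size)), key=total_cost)
```

Exhaustive search is exponential in the number of fragments; real allocation models resort to heuristics, which is exactly why the chapter frames allocation as a constrained optimization problem.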