
Containment Algorithms for Nonconvex Polygons with Applications to Layout

Karen McIntosh Daniels
TR-12-95
May 1995

Center for Research in Computing Technology
Harvard University
Cambridge, Massachusetts
Containment Algorithms for Nonconvex Polygons
with Applications to Layout

A thesis presented
by
Karen McIntosh Daniels
to
The Division of Applied Sciences
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy
in the subject of
Computer Science

Harvard University
Cambridge, Massachusetts

May, 1995

© 1995 by Karen McIntosh Daniels
All rights reserved.
Table of Contents

List of Figures ... 7
List of Tables ... 10

1 Introduction ... 13
  1.1 Background ... 13
    1.1.1 Containment, Packing, and Layout ... 14
    1.1.2 Thesis Statement and Overview of Chapter ... 16
  1.2 Overview of Containment Work ... 17
    1.2.1 Related Work ... 17
    1.2.2 Our Containment Work ... 20
  1.3 Overview of Layout Work ... 22
    1.3.1 Related Work ... 23
    1.3.2 Convex Items ... 24
    1.3.3 Nonconvex Items ... 25
    1.3.4 Assignment ... 29
    1.3.5 Our Layout Work ... 29
  1.4 Project Background ... 32
    1.4.1 Compaction ... 32
    1.4.2 Panel Placement ... 34
    1.4.3 Trim Placement ... 34
  1.5 Benchmarks ... 34
  1.6 Contribution ... 34
  1.7 Overview ... 35

I Translational Containment ... 37

2 Preliminaries ... 38
  2.1 Introduction ... 38
    2.1.1 Overview of Chapter ... 41
  2.2 Notation ... 41
  2.3 Hardness ... 44
  2.4 The Restrict/Evaluate/Subdivide Paradigm ... 45
  2.5 Approach to Running Time Analysis ... 46
  2.6 Related Work ... 47
  2.7 Containment Characterization ... 48
  2.8 Overview of Part One ... 50

3 Configuration Space Restrictions ... 51
  3.1 Introduction ... 51
    3.1.1 Overview ... 51
    3.1.2 Notation ... 52
  3.2 Properties of Valid Restrictions ... 52
    3.2.1 Boundary Intersection ... 52
    3.2.2 Loose Fits ... 56
    3.2.3 Annular Configuration Spaces ... 56
    3.2.4 Size and s-Analysis ... 56
    3.2.5 Identical Items ... 58
  3.3 Steady-State Geometric Restriction ... 58
    3.3.1 Avnaim and Boissonnat's Solutions to 2NN and 3NP ... 59
    3.3.2 Steady-State Restriction ... 59
    3.3.3 Running Time ... 61
    3.3.4 Example ... 61
    3.3.5 Effectiveness ... 63
  3.4 Steady-State Linear Programming Restriction ... 73
    3.4.1 Example ... 73
    3.4.2 Effectiveness ... 75
  3.5 Subset Restriction ... 77
  3.6 Union Restriction ... 77
  3.7 Characterization-Based Restriction ... 78
  3.8 Symmetry Breaking and Subset Substitution Restrictions ... 79
  3.9 Closed Operators ... 80
  3.10 Summary and Conclusions ... 81

4 Evaluation and Subdivision ... 82
  4.1 Introduction ... 82
    4.1.1 Evaluation ... 82
    4.1.2 Subdivision ... 83
    4.1.3 Overview ... 83
  4.2 Greedy Evaluation ... 84
    4.2.1 Algorithm ... 84
    4.2.2 Heuristics for Selecting uij ... 85
    4.2.3 Comparing Heuristics for Selecting uij ... 88
  4.3 Overlap Reduction Evaluation ... 89
  4.4 Evaluation of a Restricted U ... 91
  4.5 Subdivision ... 92
    4.5.1 Definition and Goals ... 92
    4.5.2 Size-Based Method ... 93
    4.5.3 Combinatorially-Based Methods ... 93
    4.5.4 Distance-Based Method ... 96
  4.6 Summary and Conclusions ... 98

5 Algorithms ... 100
  5.1 Introduction ... 100
    5.1.1 Overview ... 101
  5.2 An Approximate Containment Algorithm ... 101
    5.2.1 Introduction ... 101
    5.2.2 Analysis ... 104
    5.2.3 Results ... 105
  5.3 A Hybrid Containment Algorithm ... 113
    5.3.1 Introduction ... 113
    5.3.2 Results ... 115
    5.3.3 Conclusion ... 119
  5.4 Summary and Conclusions ... 119

II Applications of Containment to Layout ... 122

6 Preliminaries ... 123
  6.1 Introduction ... 123
    6.1.1 Overview of Chapter ... 124
  6.2 Overview of Container Transformation Work ... 124
  6.3 Overview of Minimal Container Work ... 124
    6.3.1 Related Work ... 125
    6.3.2 Our Minimal Container Work ... 126
  6.4 Overview of Maximal Item Work ... 126
    6.4.1 Our Maximal Item Work ... 126
  6.5 Overview of Part Two ... 127

7 Limited Gaps ... 128
  7.1 Introduction ... 128
    7.1.1 Decomposing the Container into Gaps ... 129
    7.1.2 Overview ... 130
  7.2 The Challenge of Constructing Gap Sets ... 130
    7.2.1 Compaction ... 130
    7.2.2 Partitioning Squares ... 130
    7.2.3 Other Methods ... 131
  7.3 Limited Gaps ... 131
    7.3.1 Reachable Regions ... 131
    7.3.2 Defining Limited Gaps using Probability of Interference ... 132
  7.4 Efficient Computation of Limited Gaps ... 133
    7.4.1 Breaking Polygons ... 133
    7.4.2 Inking Polygons ... 133
  7.5 MAAPRs ... 136
    7.5.1 Summary of MAAPR Results ... 136
    7.5.2 MAAPR Characterization and Naive Algorithm ... 137
  7.6 Summary and Conclusions ... 143

8 Minimal Container Problems ... 145
  8.1 Introduction ... 145
    8.1.1 Overview ... 146
  8.2 Strip Packing ... 147
  8.3 Minimal Square Enclosures ... 149
  8.4 Minimal Rectangular Enclosures ... 153
  8.5 Tight Convex and Nonconvex Enclosures ... 157
  8.6 Summary and Conclusions ... 158

9 Maximal Item Problems ... 159
  9.1 Introduction ... 159
    9.1.1 Overview ... 159
  9.2 Maximally Scaled Items ... 160
  9.3 An Elimination Technique ... 167
  9.4 Randomized Subset Construction ... 168
  9.5 Pre-packing ... 170
    9.5.1 Assignment Strategy ... 171
    9.5.2 Results ... 173
  9.6 Summary and Conclusions ... 180

III Conclusion and Appendix ... 181

10 Conclusion and Future Work ... 182
  10.1 Conclusion ... 182
  10.2 Future Work ... 183
    10.2.1 Containment ... 183
    10.2.2 Applications of Containment to Layout ... 185

A Containment File Format ... 186
Bibliography ... 188
List of Figures

1.1 Translational containment problem for six items ... 14
1.2 The marker-making task in the apparel industry ... 33
2.1 (B + t) ∩ A ≠ ∅ if and only if t ∈ A ⊕ −B ... 42
2.2 (B + t) ⊆ A if and only if t ∈ A ⊖ −B ... 42
3.1 Boundary intersection example 1: the containment problem ... 53
3.2 Boundary intersection example 1: two one-component U0j's, with U0j*'s ... 53
3.3 Boundary intersection example 1: two two-component Uij's, with Uij*'s ... 54
3.4 Boundary intersection example 2: the containment problem ... 54
3.5 Boundary intersection example 2: a one-component U0j, with U0j*'s ... 55
3.6 Boundary intersection example 2: a two-component Uij, with Uij*'s ... 55
3.7 An annular configuration space ... 57
3.8 Geometric restriction ... 59
3.9 SSGR containment problem: place 3 convex items ... 62
3.10 A one-component U0j, with U0j* for the SSGR of U ... 62
3.11 A two-component Uij, with Uij* for the SSGR of U ... 63
3.12 A 6NN example with 1.8% SSGR restriction ... 65
3.13 "L"-shaped container and 3 items ... 65
3.14 A 5NN example with 52% average SSGR restriction ... 66
3.15 An infeasible 3CN example ... 67
3.16 A 3CC example with an unrestrictive container ... 67
3.17 A 6NC example with 53% average SSGR restriction ... 68
3.18 A 5CC example with 48% average SSGR restriction ... 69
3.19 U23 ∩ (U21 ⊕ U13) ... 69
3.20 A 9CC example with strong SSGR restriction ... 70
3.21 A highly restrictive 6NN example ... 70
3.22 A 6NN example: effect of SSGR on two one-component U0j's ... 71
3.23 Counterexample for A ⊆ A ⊕ −A ... 72
3.24 Selected Uij's for 6NN example ... 73
3.25 SSLPR containment problem: place 3 nonconvex items ... 74
3.26 Two two-component U0j's, with U0j*'s for SSLPR of U ... 74
3.27 A one-component Uij, with Uij*'s for SSLPR of U ... 75
3.28 Effect of SSLPR on Uij's when a single component of a Uij is selected ... 76
3.29 Example for which SSLPR is more powerful than SSGR ... 77
3.30 Regularization example ... 80
3.31 Two rectangles with degenerate intersection ... 80
4.1 3CN example which does not yield a common intersection of translated U0j's ... 85
4.2 No common intersection of translated U0j's ... 86
4.3 No common intersection after maximizing intersection ... 90
4.4 Common intersection after maximizing convexity of the union ... 91
4.5 Distance-based subdivision ... 96
4.6 Common situation for p = tj − ti ... 98
5.1 3NN example: time = 6 seconds ... 105
5.2 3NN example: time = 5 seconds ... 106
5.3 4NN example: time = 90 seconds ... 107
5.4 5NN example: time = 60 seconds ... 107
5.5 4NN feasible example solved using partial characterization ... 109
5.6 4NN example: time = 15 seconds ... 118
5.7 7NN example: time = 176 seconds ... 118
7.1 Trim placement task ... 129
7.2 A partitioning square which fragments the container ... 131
7.3 A neck which is usable although items cannot pass through it ... 131
7.4 Algorithmic results for a variety of polygon types ... 136
7.5 Edge contact types for a determining set ... 138
7.6 Slope lemma ... 139
7.7 1-Parameter problems with two and three dependent sliding contacts ... 140
7.8 Determining sets of contacts for the MAAPR ... 141
7.9 Limited gaps example ... 144
8.1 Incremental layout task ... 147
8.2 Loose layout ... 148
8.3 Final layout ... 148
8.4 Compaction solution ... 149
8.5 Minimal square enclosure for 3 nonconvex items ... 150
8.6 Minimal square enclosure for 4 items ... 151
8.7 Minimal square enclosure for 5 nonconvex items ... 152
8.8 Tight rectangular enclosure for 3 convex items ... 154
8.9 Tight rectangular enclosure for 3 nearly identical nonconvex items ... 155
8.10 Tight rectangular enclosure for 3 nonconvex items ... 156
8.11 Tight nonconvex enclosure for 3 items ... 157
9.1 Scaling circles ... 161
9.2 Scaling up items in a greedy fashion ... 164
9.3 Scaling up items in an enhanced balanced fashion ... 165
9.4 Scaling up two items ... 166
9.5 Find a "large" subset which fits ... 167
9.6 A "large" subset which fits ... 168
9.7 Items and container for randomized construction task ... 169
9.8 Randomly constructed 3-item group ... 169
9.9 Randomly constructed 5-item group ... 169
9.10 Trim placement task for 37188c.mrk ... 175
9.11 First-Fit for 37188c.mrk ... 176
9.12 Assignment for 37188c.mrk ... 176
9.13 Assignment plus First-Fit for 37188c.mrk ... 177
9.14 Trim placement task for 41132b.mrk ... 177
9.15 First-Fit for 41132b.mrk ... 178
9.16 Assignment for 41132b.mrk ... 178
9.17 Assignment plus First-Fit for 41132b.mrk ... 179
List of Tables

3.1 SSGR applied to infeasible examples ... 64
4.1 Number of solutions found without subdivision ... 89
5.1 Running times in seconds ((*) denotes infeasible example) ... 106
5.2 Running time, in seconds, and number of hypotheses examined by approximate algorithm ... 108
5.3 Number of hypotheses examined by sequential algorithm ((*) denotes infeasible example) ... 110
5.4 Number of hypotheses examined for identical items ((*) denotes infeasible example) ... 111
5.5 Exact vs. approximate running times for 3CN examples (100 calls each) ... 113
5.6 Running times for feasible examples in seconds ... 116
5.7 Running times for infeasible examples in seconds ... 120
5.8 Number of hypotheses examined ((*) denotes infeasible example) ... 120
5.9 Number of hypotheses evaluated for restriction options ((*) denotes infeasible example) ... 121
5.10 Number of hypotheses evaluated for infeasible problems (no hypothesis restriction) ... 121
9.1 Pre-packing results ... 174
Abstract

Layout and packing are NP-hard geometric optimization problems of practical importance for which finding a globally optimal solution is intractable if P ≠ NP. Such problems appear in industries such as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construction. At their core, layout and packing problems have the common geometric feasibility problem of containment: find a way of placing a set of items into a container. In this thesis, we focus on containment and its applications to layout and packing problems. We demonstrate that, although containment is NP-hard, it is fruitful to: 1) develop algorithms for containment, as opposed to heuristics, 2) design containment algorithms so that they say "no" almost as fast as they say "yes", 3) use geometric techniques, not just mathematical programming techniques, and 4) maximize the number of items for which the algorithms are practical.

Our approach to containment is based on a new restrict/evaluate/subdivide paradigm. We develop new theory and practical techniques for the operations within the paradigm. The techniques are appropriate for two-dimensional containment problems in which the items and container may be irregular (i.e. nonconvex) polygons and have multiple components, and in which the items may be translated, but not rotated. Our techniques can be combined to form a variety of two-dimensional translational containment algorithms. The paradigm is designed so that, unlike existing iteration-based algorithms, containment algorithms based on the paradigm are adept at saying "no", even for slightly infeasible problems. Infeasible problems occur frequently in practice. We present two algorithms based on our paradigm. We obtain the first practical running times for NP-complete two-dimensional translational containment problems for up to ten nonconvex items in a nonconvex container. Most of our examples are from apparel manufacturing. Typically, each item has from 4 to 100 vertices and the container has from 100 to 300 vertices.

We demonstrate that viewing containment as a feasibility problem has many benefits for packing and layout problems. For example, we present an effective method for finding minimal enclosures which uses containment to perform binary search on a parameter. Compaction techniques can accelerate the search. We also use containment to develop the first practical pre-packing strategy for a multi-stage pattern layout problem in apparel manufacturing. Pre-packing is a layout method which packs items into a collection of containers by first generating groups of items which fit into each container and then assigning groups to containers.
Acknowledgments

My research has been carried out under the supervision of my advisor, Victor Milenkovic, Assistant Professor of Computer Science at Harvard, who is now an Associate Professor at the University of Miami. His guidance and support of my research have been superb. I gratefully acknowledge the advice provided by my entire thesis committee: Professor Victor Milenkovic, Professor Fred Abernathy and Professor Harry Lewis. In addition, I thank Professor Les Valiant, who served on my qualifier committee.

This thesis research is part of a project funded by the Sloan Foundation within the Harvard Center for Textile and Apparel Research. The principal investigator is Professor Fred Abernathy, who has helped us forge relationships with industry. Don Walker of the Harvard Office of Technology and Licensing has helped us formalize our relationships with industry. Microdynamics, Inc. and Gerber Garment Technologies have helped us transfer our work to the apparel manufacturing industry.

Special thanks are due to two former Harvard graduate students: Dr. Zhenyu Li of GTE Labs and Dr. Dan Roth. Zhenyu Li was an integral member of our research team while working on his dissertation at Harvard. His comments on drafts of the thesis were very helpful. I am grateful to Dan Roth for his collaboration on finding the maximum-area axis-parallel rectangle in a polygon.

Our research team has involved a number of Harvard undergraduates over the years: Rajarshi Bhattachyarya, Lee Wexler, Jacqueline Huang, Jackie Chang, Sanjoy Dasgupta, Venkatesh Reddy, Shivashish Chatterjee, Kirat Singh, Eric Wilfrid, Sanjay Madan, Matt LaMantia, John Immel, Ben Westbrook, Ed Han, and Iraklis Kourtidis. I also thank future Harvard student Umesh Shankar for his work while at the MIT Summer Research Institute.

Administrative assistance from Carol Harlow, Sheila Toomey, Baiba Menke, Pauline Mitchell, and the rest of the staff has been wonderful. The constant support of the Aiken computer systems staff has kept our machines up and running through years of research and many demonstrations. I convey long-distance thanks to Irina Kaliman at the University of Miami for her support of our software and for her translations of Russian layout work.

On a personal note, I thank my family for their encouragement and patience: my daughters, Theresa and Michelle, and my husband Murray Daniels. My mother, Jane McIntosh, has also provided constant support. I dedicate this thesis to my husband, Murray, whose user interface code is an integral part of our project's software, and whose love and support have made it possible for me to pursue this research.
Chapter 1

Introduction

1.1 Background

Many industrial problems involve placing objects into containers so that no two objects overlap each other. The general goal is to either minimize the size of the container or find a maximal[1] collection of placed objects. These problems are known by a variety of names, such as layout, packing, loading, placement, and nesting. A number of industries apply layout techniques when cutting new parts from stock material. In apparel manufacturing, for instance, pattern pieces are arranged on cloth. The goal is to find a non-overlapping arrangement which uses the least amount of cloth. In sheet metal layout, objects are cut from metal stock sheets. The goal here is typically to cut as many objects as possible from a given sheet. In shoe manufacturing, each hide is a different shape, and the goal is to cut as many objects from each hide as possible.

[1] One possible maximality criterion is the number of objects. Another is the total area of the objects.

Layout applications are not limited to cutting operations, nor to processes where the goal is to minimize the waste of material. In VLSI layout, for example, rectangular modules are arranged on a chip in such a way as to best meet the two competing goals of minimizing chip area and minimizing the length of interconnections between modules on the chip. In furniture layout an arrangement of furniture usually must satisfy a set of aesthetic criteria. When putting together a jigsaw puzzle, the goal is simply to find a feasible solution.

Placement problems also appear in robotics. The simplest problem is to find a position for a robot amidst a collection of obstacles. This is generalized to planning a motion for the robot through the obstacles. This can be further generalized to problems in assembly planning in which motions must be planned not only for the robot, but also for parts which the robot manipulates.

People typically do not think of all of these problems as similar, as they correspond to entirely different fields of research. However, some work in robot motion planning and computational geometry has noted the correspondence between placement and motion planning problems [Cha83, CK89, AB88, Avn89]. For example, Avnaim [Avn89] solves placement and displacement (motion planning) problems, noting that the former type is useful in layout applications and the latter in robotics. The relationship between cutting, packing, and motion planning was also observed by Cuninghame-Green in a survey paper [CG89]. At their core, all these problems have the common problem of containment: find a way of placing a set of objects into a container.

In this thesis, we focus on containment and its applications to layout and packing problems. In Section 1.1.1 we define containment, packing and layout, and show how these form a hierarchy of problems. We also briefly discuss the state-of-the-art with respect to these problems. Section 1.1.2 states our main thesis and provides an overview of the remainder of this chapter.
1.1.1 Containment, Packing, and Layout

In the first part of Section 1.1.1 we define containment and show how it fits into a hierarchy of layout and packing problems. We then discuss how difficult these problems are to solve (their "hardness") and the current state-of-the-art in solving them in the second part of Section 1.1.1.

Hierarchy of Problems

We define containment as follows: given a container, a set of items, and a set of allowable transformations of the items, find transformations of the items which result in a non-overlapping arrangement of the items within the container. Figure 1.1 shows an example of a two-dimensional containment problem for six nonconvex polygonal items and a nonconvex polygonal container consisting of the complement of some other, already placed, items within the rectangle. The items are pattern pieces from a layout problem in apparel manufacturing. The pieces are allowed to translate but not rotate, and so this is a translational containment problem.

Figure 1.1: Translational containment problem for six items


A packing problem is an extension of a containment problem. Its solution is a non-overlapping arrangement that also optimizes some criteria applied to either the container or the items. Typically the goal is to either find the "minimal"[2] container into which a given set of items fit, or to find the "maximal" collection of items which fit into a given container. A packing problem is therefore an optimization problem over a range of containers or collections of items. A containment problem considers only the feasibility of placing a given collection into a given container. Thus, containment is the feasibility problem corresponding to the packing optimization problem.

[2] One possible minimality criterion is area.

One example of a "minimal container" type of packing problem is the classical one-dimensional bin-packing problem. People often think of this as a specialized two-dimensional packing problem for which a set of rectangular (axis-parallel) bins and a set of rectangular items are given. The set of bins is infinite; the set of items is finite. Bins and items all have unit width. Each item has height ≤ c, where c is the common bin height. The task is to minimize the number of bins required to pack all the items using only translation. An example of the "maximal collection of items" type of packing problem is the three-dimensional task of cargo load planning for aircraft, where the goal is to efficiently utilize the available three-dimensional cargo space.
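To make the bin-packing problem just defined concrete, here is a minimal sketch of the classical First-Fit Decreasing heuristic (illustrative only, not part of the thesis; the theoretical literature on such heuristics is surveyed in [JGJ84]):

    def first_fit_decreasing(heights, c):
        """Pack items of height <= c into unit-width bins of height c."""
        bins = []  # each bin is the list of item heights packed into it
        for h in sorted(heights, reverse=True):
            for b in bins:                  # first bin with enough room left
                if sum(b) + h <= c:
                    b.append(h)
                    break
            else:
                bins.append([h])            # no open bin fits: open a new one
        return bins

    # Example: first_fit_decreasing([0.5, 0.7, 0.3, 0.4, 0.6], c=1.0)
    # packs into 3 bins: [0.7, 0.3], [0.6, 0.4], [0.5].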
The terms packing and layout are sometimes used interchangeably in the literature. Often, however, a layout problem refers to a packing problem which either has additional constraints or multiple goals. For example, as mentioned earlier, a VLSI layout problem has the two competing goals of minimizing chip area and minimizing the length of interconnections between modules. This type of problem also has constraints such as a requirement to maintain a certain minimum separation between modules. For other types of layout problems, the constraints are sometimes in the form of aesthetic criteria, as in the case of laying out furniture in a room.

Definitions vary, but we consider containment, packing, and layout to be a hierarchy of problems. Loading generally refers to a specific type of three-dimensional layout problem that arises in shipping. In VLSI design, placement is that aspect of the layout task concerned with the positions of modules, whereas routing deals with interconnections between modules. The term nesting is associated with packing tasks for irregularly shaped items, in particular ones which can be placed inside of others.
State-of-the-Art

Work on containment problems for small numbers of items (three or fewer) appears occasionally in the literature on computational geometry and has primarily been motivated by problems in robotics. Section 1.2.1 gives a detailed account of this work.

By comparison, the literature on packing and layout problems is vast. This work appears in the literature on management sciences, engineering sciences, information and computer science, mathematics, and operations research. Section 1.3.1 reviews this work, which employs a broad range of techniques.

For rectangular shapes, the layout and packing literature and algorithms are well-developed. For example, some special types of rectangular problems lend themselves to linear and/or dynamic programming; in particular, when guillotine cuts[3] are used exclusively [DD92]. Ground-breaking work in this area was done in the 1960's by Gilmore and Gomory [GG65]. There also exists a rich literature on bin-packing for rectangles that contains much theoretical work on the effectiveness of simple heuristics [JGJ84].

[3] A guillotine cut is a horizontal or vertical cut which completely partitions the remaining material.
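The guillotine restriction is what makes dynamic programming possible: any guillotine pattern with more than one piece is split by its first cut into two independent subproblems. The textbook recursion below sketches this for integer plate dimensions (an illustration of the idea only, not the Gilmore-Gomory column-generation method itself):

    from functools import lru_cache

    def guillotine_best(W, H, pieces):
        """Best total value cuttable from a W-by-H plate; pieces: (w, h, value)."""
        @lru_cache(maxsize=None)
        def f(w, h):
            # Option 1: devote the sub-plate to a single piece that fits.
            best = max((v for pw, ph, v in pieces if pw <= w and ph <= h),
                       default=0)
            # Option 2: make a vertical or horizontal guillotine cut and recurse.
            for x in range(1, w // 2 + 1):
                best = max(best, f(x, h) + f(w - x, h))
            for y in range(1, h // 2 + 1):
                best = max(best, f(w, y) + f(w, h - y))
            return best
        return f(W, H)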
For irregular shapes, packing and cutting algorithms have yet to match human performance. In industry, such problems are usually solved either by humans or by computer-based heuristics. Expert humans construct nearly optimal layouts in some domains. One very large clothing company has humans create 1500 layouts per week, averaging about 1.5 hours of labor for a layout of 150 items. The automatic generation of production-quality layouts of irregular items is an active area of research.

What is primarily missing from work on packing and layout is the ability to quickly detect an infeasible problem, that is, to say "no" to the containment question. Infeasible problems arise frequently in practice and this shortcoming significantly limits the types of techniques which can be used in layout and packing.

Saying "no" is difficult. Except in the simplest of cases, heuristics and even humans cannot detect infeasibility. For instance, the example in Figure 1.1 is (not obviously) infeasible. A human could try for a long time to place the items but would never succeed. Furthermore, this failure would not constitute a proof of infeasibility until every possible position had been tried.
Containment in its most general form is NP-hard and therefore computationally intractable if P ≠ NP. Current containment algorithms have worst-case running times which are exponential in the number of items being placed. Furthermore, current containment algorithms attain their worst-case running times when faced with any infeasible problem. Multiple resolution techniques for approximating polygonal boundaries can be used to improve the ability to say "no". However, algorithms using these techniques still perform slowly when faced with problems which are just barely infeasible[4]. Figure 1.1 is an example of such a problem. This type of problem arises frequently in practice because a tight packing is the goal of a packing problem.

[4] In this case, the problem is infeasible and the global overlap minimum is small but nonzero.

Another shortcoming of current practical containment algorithms is that they place at most three items. Why has the computational geometry community not produced practical containment algorithms for larger numbers of items? Since containment is NP-hard, perhaps their view is that computational geometry techniques are not powerful enough for such problems and that mathematical programming techniques offer the only chance for success.
1.1.2 Thesis Statement and Overview of Chapter

This thesis provides two-dimensional translational containment algorithms which overcome both of the disadvantages of existing containment algorithms. Our algorithms are designed to place an arbitrary number of items[5]. The algorithms can say both "no" and "yes" quickly. They do not necessarily perform more slowly on "slightly" infeasible inputs. One of our algorithms is practical for up to ten items. The implementation of the other algorithm is currently fast for up to four items. The slowness of our current implementation of polygon set operations makes this algorithm slow for more than four items. The ability to solve containment for up to ten items is powerful. Previously unsolved packing and layout problems for hundreds of items can now be solved using containment as the feasibility oracle inside other methods.

[5] Our examples are drawn primarily from the apparel industry. Each item is represented as a polygon which typically has from 4 to 100 vertices. The container is also represented as a polygon; it typically has from 100 to 300 vertices. The items and container may be nonconvex.
Our ability to say "no" well comes primarily from our new geometric techniques. Our ability to say "yes" for so many items is partly due to our geometric techniques and partly due to our application of and improvements to existing linear programming-based techniques. Thus, we demonstrate that a computational geometry approach can contribute significantly to practical algorithms for NP-hard geometric problems.

Our main thesis is that, although containment is NP-hard, it is fruitful to: 1) develop algorithms for containment, as opposed to heuristics, 2) design containment algorithms so that they say "no" almost as fast as they say "yes", 3) use geometric techniques, not just mathematical programming techniques, and 4) maximize the number of items for which the algorithms are practical.

The remainder of the thesis is divided into two main parts. Part I describes our work on containment algorithms. Part II presents applications of containment to layout. The applications validate our approach to containment.

The remainder of this chapter is organized as follows. Section 1.2 discusses work from the containment literature and then gives an overview of our own work on containment. Section 1.3 describes work from the layout literature and discusses our applications of containment to layout problems. Section 1.4 describes the research project which has motivated the work in this thesis. Section 1.5 discusses benchmarks. Section 1.6 highlights the main contributions of the thesis. Finally, Section 1.7 provides a road map for the remainder of the thesis.

1.2 Overview of Containment Work


Containment problems are, in general, NP-hard. As we show in Section 2.3, containment
is NP-complete if the items are allowed to translate but not rotate. Part I of the thesis
discusses our work on the NP-complete two-dimensional translational containment problem
for polygons. The polygons and container may be nonconvex. We present new theory
leading to practical algorithms for up to ten polygons.
The following two subsections provide an overview of related work on containment and of
our work on containment. Section 1.2.1 surveys the literature on containment. Section 1.2.2
summarizes our work on containment.
1.2.1 Related Work

The term containment appears as early as 1983 in a seminal paper by Chazelle on two-dimensional polygon containment [Cha83]. Subsequent work on containment has appeared in literature on robot motion planning, computational geometry, and operations research. The layout and packing communities do not seem to use this containment work. Either they are unaware of it, or they have encountered difficulties when implementing it[6]. In layout, the equivalent problem is sometimes called "arrangement"[7].

The next two parts of this subsection survey related work on containment. The first part discusses work on rotational containment. The second part treats work on translational containment. For these discussions, we assume that the container is a polygon with n vertices and that a polygonal item to be placed has at most m vertices.

[6] Some of the containment research has focused more on combinatorial issues than on implementation details, and so this is not surprising.
[7] The term arrangement has a very different meaning in the computational geometry community.
Rotational Containment

Chazelle [Cha83] considers fitting a single polygon P into another polygon C, where both translation and rotation of P are allowed. He observes that, if one only seeks a single solution, it suffices to examine (stable) placements of P in C for which P's three degrees of freedom are removed. He gives an O(mn^2) time algorithm for solving the problem when C is convex, and an O(m + n) time algorithm if C is convex and only translations are allowed. He gives a naive algorithm for the general case requiring O(m^3 n^3 (m + n) log(m + n)) time. Avnaim [Avn89] and Avnaim et al. [AB88] give an O(m^3 n^3 log mn) time algorithm for a single nonconvex polygon in a nonconvex container. Their algorithm yields all the solutions.

Grinde and Cavalier [GC93] use mathematical programming to solve the rotational containment problem for a convex polygon in a convex container. Their algorithm requires O(bmn) time, where b is the number of two-contact sets obtained as P rotates through 2π. As b = O(m + n) when the polygon and container are convex, their algorithm requires O(m^2 n + mn^2) time.
Translational Containment

In order to discuss related work on translational containment, we use the configuration space concept from robotics. In robotics, a point in the configuration space is a configuration. The components of the configuration are the values of the joint angles and any other parameters necessary to specify the robot position and orientation. For our approach (as in [AB87, Avn89, Dev90]), an essential type of configuration space is the set of relative positions for a pair of polygons. For k polygons and one container, there are (k+1 choose 2) such configuration spaces. A configuration in this type of configuration space is a (k+1)-tuple (t0, t1, ..., tk), where ti is a translation of the ith polygon and t0 is a translation of the container, usually set to (0, 0). If applying these translations to the polygons yields a non-overlapping arrangement, the (k+1)-tuple is a valid configuration.
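This definition of a valid configuration translates directly into a check. The sketch below uses the shapely library for the polygon operations (purely illustrative; the algorithms discussed here reason about the configuration spaces themselves rather than testing candidate tuples one at a time):

    from shapely.geometry import Polygon
    from shapely.affinity import translate

    def is_valid_configuration(container, items, translations, eps=1e-9):
        """container: Polygon; items: list of Polygon; translations: (x, y) pairs."""
        placed = [translate(item, xoff=tx, yoff=ty)
                  for item, (tx, ty) in zip(items, translations)]
        # Every translated item must lie in the container (boundary contact allowed).
        if not all(container.covers(p) for p in placed):
            return False
        # No two placed items may overlap in more than a negligible area.
        return all(placed[i].intersection(placed[j]).area <= eps
                   for i in range(len(placed)) for j in range(i + 1, len(placed)))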
In the following discussion, we use a shorthand notation to describe various types of translational containment problems. The shorthand is based on the two definitions below.

Definition 1.1 The kNN problem is that of finding a valid configuration of k nonconvex polygons in a nonconvex container. Variations replace N with either C (convex), R (rectangle), or P (parallelogram).

Definition 1.2 The (r,k)NN problem is that of finding all subsets of size k out of a set of r nonconvex polygons such that the k polygons can be simultaneously placed in a nonconvex container. The variations of Definition 1.1 apply.
We know of at least one industrial solution to translational containment, and it is used in an interactive setting. That solution appears to be based on inner and outer approximations of polygonal boundaries, and the running time increases with the tightness of the fit. Published algorithms for translational containment for small k have asymptotic running times which tend to depend on a high power of n. For arbitrary k and nonconvex polygons and container, the only known algorithms have running time which is exponential in k. This is not surprising, since kNN is NP-complete.

A "naive" kNN algorithm requires O(k^2 (mn)^{2k} log mn) time. It searches for a valid configuration by iterating over all choices of 2k contacting vertex/edge pairs among the polygons and container. More details appear in Section 2.6. The naive algorithm has running time in Ω((mn)^{2k}) because it must iterate over all possible choices of contacts whenever the problem is infeasible.
Fortune [For85] gives a solution to 1CN by computing the Minkowski sum[8] using a generalized Voronoi diagram. This O(mn log mn) result is an improvement over the O((n^2 + mn) log mn) time algorithm for 1CN given by Baker et al. [BFM86]. Avnaim and Boissonnat [AB87, Avn89] use the Minkowski sum and convex decomposition to solve 1NN and 2NN, and Avnaim [Avn89] gives an algorithm for 3NN which iterates over the edges of the configuration space polygons. Devillers [Dev90] gives faster algorithms for 2CN and 3CN. These running times are summarized below.

  k   kCN                        kNN
  1   O(mn log mn) [For85]       O(m^2 n^2 log mn) [AB87, Avn89]
  2   O(m^2 n^2 log m) [Dev90]   O(m^4 n^4 log mn) [AB87, Avn89]
  3   O(m^3 n^3 log m) [Dev90]   O(m^{14} n^6 log mn) [Avn89]
Avnaim and Boissonnat also give a solution to the 3NP problem, three nonconvex polygons in a parallelogram container, using time in O(m^{60} log m) [AB87, Avn89]. Their 3NP algorithm is based on polygon unions, intersections, and Minkowski sum operations. Even though the asymptotic running time of the 3NP algorithm contains a polynomial whose degree is high compared to the others, in practice the 3NP algorithm is faster than the 3NN algorithm, which is, in turn, faster than the naive algorithm. (We defer further discussion of this until Section 2.6.) Unfortunately, Avnaim shows that there is no formula for a solution to 3NN purely based on polygon union, intersection, and Minkowski sums [Avn89]. This means that, for k > 2, any kNN algorithm must use iteration over edges, or subdivision, or some technique other than set operations on polygonal regions. The fact that Avnaim's algorithm for 3NN uses iteration gives it the Ω(m^{14} n^6) lower bound.
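Several of the algorithms above are assembled from polygon unions, intersections, and Minkowski sums, where A ⊕ B = {a + b | a ∈ A, b ∈ B}. For convex polygons the Minkowski sum has a simple O(m + n) construction by merging edge vectors in slope order; the sketch below illustrates it (nonconvex inputs, which these algorithms need, are handled by convex decomposition):

    import math

    def minkowski_sum_convex(A, B):
        """Vertices of A + B for convex polygons A, B given as CCW (x, y) lists."""
        def edges(P):
            return [(P[(i + 1) % len(P)][0] - P[i][0],
                     P[(i + 1) % len(P)][1] - P[i][1]) for i in range(len(P))]
        def start(P):  # lowest vertex, ties broken toward the left
            return min(P, key=lambda p: (p[1], p[0]))
        (ax, ay), (bx, by) = start(A), start(B)
        # The lowest-leftmost vertex of the sum is the sum of the lowest-leftmost
        # vertices of A and B; its edges are the edges of A and B merged by angle.
        merged = sorted(edges(A) + edges(B),
                        key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))
        verts = [(ax + bx, ay + by)]
        for ex, ey in merged[:-1]:          # the final edge closes the polygon
            x, y = verts[-1]
            verts.append((x + ex, y + ey))
        return verts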
Milenkovic et al. [DML94] offer three approaches to translational containment. The first method operates on convex polygons. It iterates over combinations of orientations for the lines which separate the convex polygons. This approach yields an algorithm for 3CN with running time in O(m^3 n log mn), an improvement of a factor of O(n^2) over previous algorithms (in particular, [Dev90]). The second method uses a MIP (mixed integer programming) model for kNN. This approach is described in detail in [Li94]. It takes one or two minutes on a typical workstation for two or three polygons, but is slow for four or more polygons (it can easily take more than an hour to detect an infeasible 4CN problem). The third method is an algorithm for kNN which finds an ε-approximate configuration[9] using time in O(((1/ε) log(1/ε))^k k^6 s log s), where s is the maximum number of vertices in a polygon which is created through successive applications of a certain collection of polygon set operations (see the discussion of size analysis in Section 3.2.4). This algorithm is introduced in [DM, DM95], and appears in Chapter 5 of this thesis.

[8] For polygons A and B, the Minkowski sum is defined as A ⊕ B = {a + b | a ∈ A, b ∈ B}. See Section 2.2 for more details.
[9] For an ε-approximate configuration, no point of any polygon is more than 2ε inside of the boundary of any other polygon when the polygons are laid out according to the configuration. For some cutting applications, CAD vendors round polygon vertex coordinates to integers. In such cases, an overlap of up to one unit (corresponding to .01 inches) is acceptable.
Milenkovic [Mil, DM95] improves upon the 3CN algorithm of [DML94] by giving an algorithm which has the same asymptotic running time but is simpler to implement. He also introduces an exact kNN algorithm based on linear programming, and approximation and subdivision of the configuration spaces, which runs in O((mn)^{2k+1} LP(2k, O(kmn + k^2 m^2))) time, where LP(a, b) is the time to solve a linear program with a variables and b constraints. This algorithm is practical for k ≤ 3.
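The linear-programming ingredient can be pictured as follows: replace each configuration space U_ij by a convex outer approximation {u : A u ≤ b}, and the joint requirements t_j − t_i ∈ approx(U_ij) become a single linear feasibility problem over the 2k translation coordinates. The sketch below conveys the idea using scipy (the interfaces and data layout are hypothetical; it is shown only to illustrate the all-pairs-at-once flavor of such methods):

    import numpy as np
    from scipy.optimize import linprog

    def lp_feasible(halfplanes, k):
        """halfplanes: dict (i, j) -> (A, b) encoding A (t_j - t_i) <= b; t_0 = 0."""
        rows, rhs = [], []
        for (i, j), (A, b) in halfplanes.items():
            for a_row, b_val in zip(A, b):         # a_row is a length-2 normal
                row = np.zeros(2 * k)              # variables t_1 .. t_k in R^2
                if j > 0:
                    row[2 * (j - 1): 2 * j] = a_row
                if i > 0:
                    row[2 * (i - 1): 2 * i] = -a_row   # subtract t_i's contribution
                rows.append(row)
                rhs.append(b_val)
        res = linprog(c=np.zeros(2 * k), A_ub=np.vstack(rows), b_ub=np.array(rhs),
                      bounds=[(None, None)] * (2 * k), method="highs")
        return res.success  # True: the outer approximations admit a configuration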
1.2.2 Our Containment Work

Existing two-dimensional translational containment algorithms developed by other researchers are practical only for k ≤ 3. We present new theory leading to algorithms which handle any k. One of our algorithms is practical for k ≤ 10; the implementation of the other algorithm is currently fast for k ≤ 4. As stated in Section 1.1.2, our examples are drawn primarily from the apparel industry. Each item is represented as a polygon which typically has from 4 to 100 vertices. The container is also represented as a polygon; it typically has from 100 to 300 vertices. The items and container may be nonconvex. Under these conditions, we can solve containment problems quickly, typically in several minutes or less on a 50 MHz SPARCstation[10]. Our work is therefore practical for solving a number of layout-related problems for nonconvex polygons.

[10] SPARCstation is a trademark of SPARC, International, Inc., licensed exclusively to Sun Microsystems, Inc.

Although containment is NP-hard in general, we believe that it is valuable to make the practical value of k as large as possible. Thus, we continue the "bottom-up" progression (1CC, 1NN, 2CN, 2NN, etc.) of containment work which has taken place in robotics and computational geometry. Our ability to say "no" well comes primarily from the use of our new geometric techniques. Our ability to say "yes" for so many items is partly due to our geometric techniques and partly due to our application of and improvements to existing linear programming-based[11] techniques. Thus, we demonstrate that a computational geometry approach can contribute significantly to practical algorithms for NP-hard geometric problems.

[11] We use the CPLEX 2.1 linear programming system (CPLEX Optimization, Inc.).

Saying "no" quickly is important. Iteration-based algorithms for containment attain their worst-case running time when the containment problem is infeasible. This situation occurs frequently in practice. We replace iteration with subdivision. Our algorithms are specifically designed to detect infeasibility quickly, and we have some theoretical justification for their behavior, which we discuss in more detail later in the thesis.
We present a new paradigm for containment algorithms. Our containment algorithms apply three operations, in order, to the configuration spaces: 1) restriction, which prunes the spaces and helps detect infeasibility, 2) evaluation, which tries to find a solution, and 3) subdivision, which partitions one of the configuration spaces into two parts. The algorithm then recurses on each part while keeping the other configuration spaces constant.
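Rendered as code, the paradigm is a short recursion (a schematic sketch only: restrict, evaluate, and subdivide stand in for the concrete techniques developed in Chapters 3 and 4, and the polygon interface is hypothetical):

    def contain(U):
        """U: the collection of configuration-space polygons (the Uij's)."""
        U = restrict(U)                          # 1) prune the spaces
        if any(space.is_empty for space in U):
            return None                          # restriction proved infeasibility: "no"
        solution = evaluate(U)                   # 2) try to find a valid configuration
        if solution is not None:
            return solution                      # "yes", with placements
        left, right = subdivide(U)               # 3) split one space into two parts
        # Recurse on each part while the other spaces stay fixed.
        return contain(left) or contain(right)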
We introduce new theory and practical techniques for each of these types of operations for the case of two-dimensional containment problems in which the items and container may be irregular (i.e. nonconvex) and have multiple components, and in which the items may be translated, but not rotated. Our techniques can be combined in various ways to form two-dimensional translational containment algorithms which follow the restrict/evaluate/subdivide paradigm. The paradigm is designed so that, unlike existing iteration-based algorithms, containment algorithms based on the paradigm are adept at saying "no", even for slightly infeasible problems.
At present there appear to be two approaches to containment within the restrict/evaluate/subdivide framework: direct and indirect. Direct methods act on the configuration spaces directly, and these methods tend to be based on computational geometry. Indirect methods act on convex approximations to the configuration spaces: either inner or outer approximations. Indirect methods use linear programming or other types of mathematical programming, which is why they can only act on convex approximations. Direct methods have a more accurate view of the "world", but they can only operate on a few (three, in our case) configuration spaces at a time. Indirect methods can act on all the configuration spaces simultaneously in a single operation, but they use only approximations to the configuration spaces.

The thesis presents two new algorithms based on our paradigm. They are briefly described below. The first is a direct algorithm. We call the second a hybrid algorithm because it uses an indirect evaluation method, and both direct and indirect methods for restriction.
Direct Algorithm

This thesis presents a new direct algorithm for containment. It is an approximate algorithm which produces an ε-approximate configuration whose overlap is below a given tolerance. As mentioned earlier, for some cutting applications, CAD vendors round polygon vertex coordinates to integers. In such cases, an overlap of up to one unit, corresponding to .01 inches, is acceptable. Our approximate algorithm uses our new direct techniques for restriction, evaluation, and subdivision. It appears in [DM95, DM] as well as in Section 5.2.

Our direct restrictions are based on polygon set operations, and so we call them geometric restrictions. They are powerful tools for detecting infeasibility. We define two general types of restrictions: valid restrictions, which preserve all solutions to a containment problem, and semi-valid restrictions, which preserve a single solution, if one exists. We present a variety of restrictions and several general properties of valid restrictions. We examine, in detail, the effectiveness of our geometric restrictions, which are valid restrictions. Our characterization of the types of containment problems for which these restrictions are effective is based on theoretical as well as experimental results.
The evaluation method uses restriction together with a geometric method for maximizing the intersection of translations of the configuration spaces. The evaluation method often finds a containment solution without any subdivision. The subdivision method is a size-based one. Its goal is to partition a configuration space into two parts, each smaller than the unsubdivided configuration space.
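One elementary way to realize such a size-based subdivision is to cut a configuration space along the vertical line through the midpoint of its bounding box, so that each part is smaller than the original (an illustrative sketch using shapely; the thesis's actual size-based method is the subject of Section 4.5.2):

    from shapely.geometry import Polygon, box

    def split_by_midline(space):
        """Partition a configuration-space polygon into two smaller pieces."""
        minx, miny, maxx, maxy = space.bounds
        midx = (minx + maxx) / 2.0
        left = space.intersection(box(minx, miny, midx, maxy))
        right = space.intersection(box(midx, miny, maxx, maxy))
        return left, right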
Our approximate algorithm has an extra layer of structure on top of the restrict/evaluate/subdivide paradigm. This layer has a set of containment subproblems to which coarse-grained parallelism could be applied in the future. We create the subproblems by characterizing a valid configuration for translational containment.

The current implementation of this algorithm is fast for up to four items. For k > 4, the current implementation of our geometric restrictions, based on polygon set operations, slows the algorithm down considerably. Increasing the speed of our polygon set operations is a subject of future work (see Section 10.2.1). In our experiments, our approximate algorithm performs fewer subdivisions than the faster, hybrid algorithm, and so we believe the approximate algorithm has the potential to be competitive with the hybrid algorithm.
Hybrid Algorithm
In addition to our direct algorithm, we also present significant improvements to the restriction, evaluation, and subdivision techniques of Milenkovic's exact kNN algorithm [Mil], which is an indirect algorithm. Milenkovic's indirect algorithm, in its original form, was slower than our direct algorithm for infeasible problems, and its practical upper bound on k was k ≤ 3. The improved indirect algorithm is currently our fastest containment algorithm for 4 ≤ k < 10.
Our first improvement to the indirect algorithm is a proof that restrictions, when used in the indirect algorithm, do not increase the algorithm's worst-case running time if the algorithm bases certain decisions on the unrestricted configuration spaces. This is important because restrictions can add vertices to the configuration spaces, and so they might have increased the worst-case running time. The proof allows us to add our new direct (geometric) restrictions to the algorithm without increasing the worst-case number of subdivisions. It also allows us to apply a new linear programming-based restriction method, developed by Milenkovic.
The second improvement to the indirect algorithm is related to the evaluation method. We apply the work of Li and Milenkovic on overlap resolution [Li94, ML, LM93c, LM93a] to improve the evaluation step. Their overlap resolution algorithm is an indirect algorithm. It uses inner convex approximations, an approach which complements the outer approximations of the indirect containment algorithm. Our third improvement is our new practical subdivision technique which attempts to maximize the distance from the current evaluation to the convex approximations of the subdivided configuration spaces.
We call the resulting algorithm a hybrid algorithm because it uses indirect methods for evaluation, developed by Li and Milenkovic, and both direct and indirect methods for restriction. The hybrid algorithm is described in [MD95] as well as in Section 5.3.
The hybrid algorithm, in addition to being a formidable containment algorithm, has the property that a partial containment solution from it is valuable. This is because each evaluation step produces either a non-overlapping solution or an overlapping configuration for which a linear program has reached a local minimum. As a result, the hybrid algorithm, when terminated early, is the most powerful heuristic known for generating a local overlap minimum.

1.3 Overview of Layout Work


Part II of the thesis discusses our application of containment to layout problems. Layout and packing are NP-hard problems for which finding a globally optimal solution is generally intractable. The more restricted problem of packing squares onto a rectangle without rotation (as in VLSI applications) is NP-complete; even the one-dimensional bin-packing problem is NP-complete [JGJ84]. In general, translational packing is NP-complete.
In this section we survey related work in layout and then describe our applications of
containment to layout. Section 1.3.1 discusses related work at a high level. Section 1.3.2

reviews work on layout of convex items. Section 1.3.3 treats work on nonconvex items.
1.3.1 Related Work
Work on layout is spread across a broad range of literature. Relevant areas include [Dyc90]: management sciences, engineering sciences, information and computer science, mathematics, and operations research.
Work on layout, cutting, packing, loading, and nesting has been surveyed in [Hin80, SP92, Dyc90, DF92, DD92, DD]. Sweeney and Paternoster give an application-oriented bibliography for cutting and packing problems in [SP92]. Dowsland and Dowsland survey research on packing problems in [DD92]. In [DD], the same authors review work on irregular nesting problems, emphasizing the different types of approaches which have been taken. Dyckhoff analyses the structure of cutting and packing problems and presents a classification in [Dyc90]; this forms the basis for Dyckhoff and Finke's meta-survey of the cutting and packing literature [DF92].
In [DF92], Dyckhoff and Finke distinguish four major types of cutting and packing problems based on the way items are assigned to containers and on the assortment of items. They observe that, in the layout literature, problems of the same type tend to be solved using the same technique. The four types are: 1) bin-packing, in which heterogeneous items are assigned to a selection of containers, 2) cutting stock, which is similar to bin-packing, but the items can be partitioned into a small set of equivalence classes, based on shape, 3) knapsack, in which all the containers must be used and the items are heterogeneous, and 4) pallet loading, in which all the containers must be used but the items are homogeneous. These bin-packing and knapsack definitions are more general than typical usage in the literature.
Because of their difficulty, bin-packing problems are typically solved sub-optimally using fast, simple heuristics such as First-Fit. The small number of distinct shapes involved in cutting stock problems often allows them to be solved using repeating patterns. Knapsack problems are frequently solved using branch-and-bound or dynamic programming. For a pallet loading problem, the containers are often homogeneous. In this case, a solution for one container can be applied to all the others.
Layout techniques cover a wide range of methods. In some cases exact techniques from mathematical programming, such as linear or dynamic programming, are appropriate for all or part of the problem. Integer and mixed integer programming are still considered too slow for even moderate sized problems. Some researchers are experimenting with AI approaches, such as heuristic search, knowledge-based systems, case-based reasoning, and neural networks. Other researchers use improvement strategies based on "meta-heuristics" such as simulated annealing, genetic algorithms, and tabu search.[12] Other techniques include: database/substitution, greedy heuristics, Monte Carlo placement, lattice packings, and clustering methods.
Our discussion of layout work treats only two-dimensional layout problems because the containment problem we solve is two-dimensional. We begin in the first part of Section 1.3.2 with the simplest of situations in which the items are rectangular. Next, we consider arbitrary convex items in the second part of Section 1.3.2. Section 1.3.3 discusses nonconvex items. The amount of published research on nonconvex items in nonconvex containers is quite small compared with the amount of research on convex items. Finally, we discuss in
[12] Tabu search is a (usually deterministic) neighborhood search technique with mechanisms for managing restrictions on "taboo" regions of the search space and for keeping historical information [Ree93].

Section 1.3.4 how the layout literature treats the assignment problem, that is, the problem of assigning items to different components of a container. The assignment problem is a combinatorial optimization problem which can arise as a subproblem of containment when the container consists of multiple (non-identical) components.
1.3.2 Convex Items
Here we review layout literature related to convex items. The first part of Section 1.3.2 surveys work on rectangular items, and the second part of Section 1.3.2 discusses work on convex items which need not be rectangular.
Rectangular Items
Dyckhoff [Dyc90] observes that most problems in the cutting and packing literature deal with shapes which can be specified using a small number of parameters, such as the rectangle. Research into problems for which both the container and items are rectangular is reviewed in [DF92, DD92, JGJ84].
Coffman et al. [JGJ84] survey approximation[13] algorithms for bin-packing. The survey focuses on approximation algorithms which are effective in practice, and simple enough so that their worst-case or average-case performance has been analyzed in the literature. These include, for example, First-Fit, Next-Fit, Best-Fit, and their variations. One-dimensional bin-packing was defined above in Section 1.1. In two-dimensional strip-packing, a semi-infinite vertical strip of unit width is given, along with a set of rectangular items of width ≤ 1. The goal is to pack items on the strip while minimizing the height of the strip. The classical two-dimensional bin-packing problem [JL91] is a variant on strip-packing in which horizontal boundaries are added to the strip at integer heights. Few published analyses exist for the case where the bins are not identical.
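To give the flavor of these heuristics, the following is a minimal sketch of First-Fit Decreasing for one-dimensional bin-packing; the function name and data representation are our own, and each size is assumed to be at most the bin capacity.

    def first_fit_decreasing(sizes, capacity):
        # Sort items by decreasing size; place each in the first bin with room.
        bins, remaining = [], []   # remaining[i] is the unused capacity of bin i
        for s in sorted(sizes, reverse=True):
            for i in range(len(bins)):
                if s <= remaining[i]:
                    bins[i].append(s)
                    remaining[i] -= s
                    break
            else:
                bins.append([s])                # open a new bin
                remaining.append(capacity - s)
        return bins

First-Fit (without the initial sort), Next-Fit, and Best-Fit differ only in the rule for choosing the bin that receives the next item.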
Israni and Sanders [IS82] present heuristics for packing non-identical rectangles. One uses decreasing length, perpendicular strip placement (DLPER), and another uses decreasing height, perpendicular strip placement (DHPER). These heuristics essentially place the rectangles in nested "L"-shaped strips. In [IS85], Israni and Sanders compare the performance of DLPER and DHPER with other bin-packing heuristics.
Some special cases of rectangular two-dimensional packing for cutting stock problems lend themselves to linear and/or dynamic programming; for example, when guillotine cuts[14] are used exclusively [DD92]. Gilmore and Gomory [GG65] solve some restricted types of guillotine cutting problems using a linear programming formulation whose speed relies on solving knapsack problems (the knapsack problems can be solved using dynamic programming). Dagli and Tatoglu [DT87] observe that if each item has a "weight" and if at least one dimension of the items and containers is equal, the two-dimensional problem reduces to a one-dimensional single container problem which can be solved with dynamic programming.
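To make the role of dynamic programming concrete, the following is a minimal sketch of the one-dimensional knapsack recurrence on which such formulations rest. The 0/1 variant shown here is only illustrative (cutting stock column generation typically uses the unbounded variant, which iterates capacities upward), all names are ours, and sizes are assumed to be integral, e.g., rounded to cutting units.

    def knapsack_best_value(values, sizes, capacity):
        # best[c] = maximum total value achievable within capacity c
        best = [0] * (capacity + 1)
        for v, s in zip(values, sizes):
            # iterate capacities downward so each item is used at most once
            for c in range(capacity, s - 1, -1):
                best[c] = max(best[c], best[c - s] + v)
        return best[capacity]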
[13] An approximation algorithm for an optimization (minimization) problem has the property that the value of its output is 1 + ε times the optimal (minimum) value. This is different from the definition of an approximate algorithm, in which the solution generated by the algorithm is accurate to within a tolerance ε. For example, a containment algorithm which produces an ε-approximate configuration is an approximate algorithm because the configuration may contain small overlaps.
[14] We remind the reader that a guillotine cut is a horizontal or vertical cut which completely partitions the remaining material.

In [AA76], Adamowicz and Albano lay out rectangles by first forming candidate strips, and then selecting a subset of candidate strips using dynamic programming. Haims and Freeman [HF70] also use dynamic programming to place rectangles; they use linear programming and two-dimensional dynamic programming and take a multi-stage approach.
Convex Items
Non-rectangular packing results have appeared primarily for problems that pack identical copies of a convex polygon. It is well known that the densest packing by translates of a convex polygon is given by a lattice packing [Rog51]. In a lattice packing, the polygons are placed at points on a lattice. A lattice is determined by two linearly independent vectors; a point on the lattice is an integral linear combination of the two vectors. The densest lattice packing of an n-vertex convex polygon can be found in O(n) time [MS90b]. A double-lattice packing is the union of two lattice packings, such that a rotation of either of the two lattice packings by π produces the other lattice packing. The densest double-lattice packing can be found in O(n) time for any convex polygon that is allowed to rotate by π [Mou91]. The density[15] of such a packing is at least √3/2 [KK90]. Dori and Ben-Bassat [DBB84] show how to find an appropriate circumscribing hexagon for a convex polygon in order to find a lattice packing.
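Computationally, a lattice packing is determined entirely by the two basis vectors: every translate of the polygon sits at an integral combination of them. The following minimal sketch (the names and the deliberately crude index bounds are ours) enumerates the placements falling in a rectangular window.

    def lattice_placements(u, v, xmax, ymax, index_bound=50):
        # u, v: linearly independent basis vectors (x, y);
        # returns translations t = i*u + j*v lying in [0, xmax] x [0, ymax]
        points = []
        for i in range(-index_bound, index_bound + 1):
            for j in range(-index_bound, index_bound + 1):
                x = i * u[0] + j * v[0]
                y = i * u[1] + j * v[1]
                if 0 <= x <= xmax and 0 <= y <= ymax:
                    points.append((x, y))
        return points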
1.3.3 Nonconvex Items
Dowsland and Dowsland [DD] survey work on irregular nesting problems, highlighting the different types of approaches which have been taken. No fast, high quality solutions have emerged to date for layout problems involving a variety of nonconvex items. Many algorithms for packing nonconvex objects are unpublished; [QS87] suggests this may be due to "commercial confidentiality". Published work with nonconvex items appears to be narrowly focused on the particular application under consideration.
Published methods for packing nonconvex objects can be classified (as in [AS80]) as: 1) indirect (or two-stage), where nonconvex items are first enclosed in convex polygons, and techniques for packing convex objects are then applied, and 2) direct, where the nonconvex polygons themselves are manipulated. In the following discussion, we restate (1) more generally in terms of clustering: an item or collection of items is replaced by an approximation, and placement is performed on the approximations. We further subdivide (1) as follows: (a) preprocessing: clustering occurs only as a preprocessing step, and (b) integration: clustering is integrated into the layout algorithm.
Clustering
Preprocessing: One popular preprocessing method encloses polygons in rectangles. The advantage of this is the abundance of literature on packing rectangles; the disadvantage is that the resultant waste typically makes this approach unacceptable in practice. For rectangular containers, Adamowicz and Albano [AA76], Haims and Freeman [HF70], Amaral, et al. [ABJ90], Dagli and Tatoglu [DT87], and Bailleul et al. [BTS83] pack nonconvex items into rectangles to form composite items, and then place the rectangles. In [AA76], Adamowicz and Albano give an algorithm for pairwise clustering of irregular shapes in a rectangle, and describe limited clustering options for more than two items. They assume the shapes can rotate. The rectangles are placed in strips using dynamic programming. (Linear and/or dynamic programming are not perceived as feasible options unless nonconvex items are enclosed first within rectangles.) Amaral, et al. [ABJ90] adopt the approach of [AA76] for clustering items.

[15] Packing density is the ratio of used area to total area.
Bohme and Graham [BG79] enclose each item within another polygon, and then use Monte Carlo methods to place each polygon onto rectangular metal plates. It has been suggested that a nonconvex polygon be approximated by its convex hull, if the waste is not too large [CG89, DBB84]. If the waste is large, Dori and Ben-Bassat [DBB84] suggest that small groups of nonconvex items be combined into a convex object and then convex packing techniques can be applied. This approach is also followed by Karoupi and Loftus [KL91]. However, this leaves open the problem of how to combine nonconvex items automatically into a convex (or nearly convex) one. Heistermann and Lengauer [HL92] observe that it is not known what conditions a composite part must satisfy in order to be useful in nesting. If the composite convex objects are different from each other, it is not clear that algorithms for packing non-identical convex objects have advanced enough to make such an approach practical [DD92].
If a large set of nonconvex items to be placed on a rectangle (or rectangular strip) contains many identical items, an efficient layout can often be created using repeating patterns. For some item shapes, lattice or double-lattice packings are appropriate. However, it is not necessarily true that the densest packing of identical nonconvex polygons is lattice-based [BK87].
Prasad and Somasundaram [PS91] and Babaev [Bab82] follow the repeating pattern approach for nesting sheet-metal blanks. They form an efficient cluster of items, then build a repeating pattern. Ismail and Hon [IH92] use a genetic algorithm to cluster two similar shapes together.
To reduce waste, an indirect method can approximate each polygon using a collection of polygons. For example, Qu and Sanders [QS87] approximate nonconvex items by non-overlapping rectangles, then use a greedy layout algorithm to arrange the rectilinear items into layers of descending staircase patterns. They assume the container is a set of rectangular stock sheets.
Tanaka and Wachi [TW73] place horizontally convex approximations of garment pieces on a rectangular strip using heuristic search. They report material waste of 28% to 39%. Nakajima and Hayashi [NH84] improve upon the algorithm of [TW73] using several techniques; they obtain slightly less waste than [TW73].
Heistermann and Lengauer [HL92] use a flexible approximate representation for nonconvex items and a nonconvex container. To compute the approximation, they first construct a set of triangles along the boundary, then iteratively smooth the contour, selecting, at each step, the smallest area triangle. This technique could be used in our containment work.
Integration: Some work rests on the observation that good local nesting often produces good global nesting. Dagli and Tatoglu [DT87] cluster pairs of items in a greedy fashion. They first cluster two items, make them a unit, cluster this unit with another item, and so on. Babaev [Bab82] also places non-identical nonconvex items this way. Dighe and Jakiela [DJ95] present a hierarchical genetic algorithm. The algorithm uses a binary tree in which each item (together with values for its three parameters: x, y, and θ) constitutes a leaf, and a node represents a search for the minimal enclosing rectangle of its two children. Thus, the genetic algorithm seeks the best binary tree for a set of items. They note that "genetic algorithms rely on the hypothesis that combining two good building blocks leads to another good building block".
Milenkovic, et al. [MDL92] take advantage of the large/small item dichotomy and natural equivalence classes in pants layout problems. To place large items, they build column clusters of four items and form a grid-like layout consisting of a set of columns. An algorithm based on dynamic programming (and allowing backtracking) adds one column at a time to the layout. The technique of clustering items into columns also appeared in the work of Gurel at IBM in the 1960's [Gur68b, Gur68a, Gur69].
Heckmann and Lengauer [HL95] use dynamic approximations for items. As temperature decreases in their simulated annealing process, they use tighter approximations. They report that this approach can reduce running time by up to 15%. This idea of using tighter approximations later in the process is similar to the use of convex hulls in [Mil] and in our hybrid containment algorithm; as subdivision proceeds, the convex hulls become better approximations to the subdivided portions of the configuration spaces.
Direct
Dealing directly with an assortment of nonconvex polygons is an extremely difficult problem. If many items are similar in shape and size, it can be useful to partition the set of items into equivalence classes. For example, if a large/small item dichotomy exists, it is common to place all the large items first and then place the small ones [Bab82, MDL92, ABJ90].
In some applications the problem definition specifies equivalence classes. For wood [OF93] and sheet-metal cutting [PS91, BG79, Bab82], there is often more than one copy of each item, and so an equivalence class might contain all identical items. Items in apparel manufacturing are usually similar in shape and size because a layout typically contains a mixture of sizes for the same garment style. Heistermann and Lengauer [HL92] use topology classes of nonconvex polygons to help select the next item to be placed on a leather hide using a greedy strategy. One of the ways Nakajima and Hayashi [NH84] improve upon the algorithm of Tanaka and Wachi [TW73] is to apply the Group Technology manufacturing principle to classify items according to a criterion such as size. Group Technology is defined by Moon and Kao [MK93] as "a manufacturing principle and practice of identifying similarities among diverse parts, forming part families, classifying new parts into part families, and efficiently storing and retrieving information about part families." See [HW89] for more details. We note that grouping together similar items can have benefits for containment; see the discussion of symmetry breaking and subset substitution restrictions in Section 3.8.
Some of the earliest work in direct placement of nonconvex items was done by IBM
and used a greedy approach to placement (see, for example [MH69]). An early nongreedy
approach was taken by Adamowicz [Ada69]. Albano and Sappupo [AS80] note that the
experimental system in [Ada69] involves iterative application of linear programming. For
problems of nontrivial size, techniques of mathematical programming have been viewed as
too slow even in cases where they can be applied. Integer and mixed integer formulations
(for exact solutions) have, to date, led to programs whose running time is unacceptably
slow. AI and meta-heuristic techniques can produce layouts with low waste, but, again, at
the cost of much computation time (e.g. several hours even for 20 or 30 items).
Some researchers are experimenting with AI approaches, such as heuristic search, case-
based reasoning, knowledge-based systems, and neural networks. For example, the Clavier

system, described in [HT94], uses case-based reasoning to arrange composite parts in a convection autoclave (oven) for a curing process. Yuzu et al. [YLW87] present an expert system for placing nonconvex polygons in the garment industry. Albano and Sappupo [AS80] pack assortments of nonconvex polygons using heuristic search techniques. They allow nonconvex items to rotate by π and they pack them onto a rectangle. They control the number of nodes expanded in their search tree by, for example, introducing a waste parameter. They present results for the value of 20% waste for some very small apparel examples containing no more than 30 items. They assume the container is a single rectangle large enough to contain all the items.
Other researchers use improvement strategies for nonconvex polygons based on meta-heuristics such as simulated annealing, genetic algorithms, and tabu search. A central question for such methods is how to reconcile the two competing goals of minimizing waste and eliminating overlaps. Some researchers explicitly forbid overlap; others allow it but assign a penalty for it. For example, the Dowsland survey discusses the tabu search algorithm of Blazewicz et al. [BHW93]; this algorithm forbids overlaps. Both approaches rely on fast overlap detection. The latter also requires powerful overlap reduction which, until the recent work of Li and Milenkovic [Li94, ML, LM93c, LM93a], has not been available. This thesis offers an even more powerful overlap reduction heuristic (recall our discussion of the overlap reduction evaluation method within the hybrid containment algorithm in Section 1.2.2).
The Dowsland survey [DD] discusses some simulated annealing work, such as that of Oliveira and Ferreira [OF93]. We augment the Dowsland list with two more examples. Mangen and Lasudry use simulated annealing for marker-making[16] in the garment industry [ML91]. They forbid overlaps. Heckmann and Lengauer [HL95] apply simulated annealing to textile manufacturing. They allow overlaps and use a penalty term. In both of these cases, items are nonconvex and the container is a rectangular strip. (In the latter case, the nonconvex polygons which are laid out are nonconvex approximations of the original items.) Heckmann and Lengauer obtain waste levels comparable to human levels for small numbers of items (15-25). Achieving this requires multiple runs of their algorithm and from one to five hours of computing time. Heckmann and Lengauer also use simulated annealing to solve a fifteen-piece puzzle in seven hours; in this case the container is a rectangle. Heckmann and Lengauer compare their simulated annealing results to results they obtain using search techniques closely related to simulated annealing, such as Threshold Accepting (TA). The primary differences between TA and simulated annealing are: 1) TA has only one parameter, and 2) "it accepts every new configuration which is not much worse than the old one" [Due93].
Dighe and Jakiela [DJ95] survey the published work on genetic algorithms for layout
and introduce genetic algorithms capable of placing irregular shapes. They forbid overlaps
between items. In addition to the hierarchical clustering algorithm described earlier, Dighe
and Jakiela give an alternate genetic algorithm which is based on ordering the items and
adding them one at a time to the layout without overlap, using a leftmost heuristic. In this
case, the genetic algorithm seeks the best ordering of the items.
Li and Milenkovic [Li94, ML, LM93c, LM93a] introduce a substitution-based approach to pants marker-making for the garment industry. Given a database of markers, a new marker can be created by finding a template marker in the database whose items are similar to the new ones, arranging the new items in the same pattern as those of the template, and then eliminating overlaps. This method is appropriate for applications in which there is a large collection of previous layouts, and the items and containers are sufficiently similar from one layout to another. It is not appropriate, in general, for sheet metal work, where the layouts are different enough from each other that humans find it difficult to develop expertise. It is also not appropriate for leather nesting problems, because each hide is different with respect to both shape and the location of defects [HL92].

[16] Marker-making is the process of laying out pattern pieces on cloth.
1.3.4 Assignment
The assignment problem is a combinatorial optimization problem which can arise as
a subproblem of containment when the container consists of multiple components which
are not identical and are considered separately. If the layout algorithm treats components
individually, it must decide which items to assign to each component of the container. Arbel
[Arb93] identifies two different assignment strategies: 1) pre-packing, where feasible groups for each container are generated, and then assignment is performed, and 2) post-packing, where groups are assigned based on some estimate of efficiency, without determining their
feasibility.
There is a rich literature on combinatorial optimization. See [CMTS79, PS82, Law81,
LLKS85] for discussions of exact methods, and [Ree93] for a treatment of modern heuristics.
In spite of this, discussions of assignment in the literature on nonconvex layout are rare.
Often the issue is either ignored, or else resolved using a greedy strategy. Dagli and Tatoglu
[DT87] observe that most researchers do not handle allocation of items among multiple
containers. Arbel [Arb93] dismisses pre-packing as prohibitively expensive, and therefore
uses post-packing to pack nonconvex items on rectangular stock sheets. His post-packing
method uses integer programming. Bohme and Graham [BG79], who use Monte Carlo
methods to place items onto metal plates, state that the set of candidate plates is selected
by a preprocessor; neither this algorithm nor the method of selecting the next plate to
pack is described. Qu and Sanders [QS87], who approximate nonconvex items by non-
overlapping rectangles, do not indicate how their algorithm selects the next sheet to be
packed. Heistermann and Lengauer [HL92], for whom a nonconvex container is partitioned
into quality zones, use a greedy algorithm which treats one region at a time.
To our knowledge, no successful pre-packing strategies appear in the literature.
1.3.5 Our Layout Work
We claim that the ability to quickly solve containment problems leads to improved methods for solving a variety of layout problems. Recall that our containment approach is a "bottom-up" one in which we increase the practical value of k. Clustering techniques for layout also use a bottom-up approach. Some of them even allow the items to rotate. However, the methods which cluster more than two arbitrary nonconvex polygons are greedy heuristics. Our containment-based solutions to minimal container problems do not allow the items to rotate, but, in contrast to the heuristics, we establish an actual lower bound on the size of the container for multiple items.
In addition, we believe that some potentially promising layout techniques have been abandoned by the layout community because they lack strong bottom-up methods. The primary example of this is pre-packing. If the layout world had a practical method for generating groups of items which fit into each container, this, plus an appropriate assignment strategy, would make pre-packing viable. We demonstrate, through our containment-based solutions to maximal item problems, that pre-packing can work.
Before describing our applications in more detail, we pause to argue that, in general, feasibility problems can help to solve optimization problems. We also show how to construct geometric optimization problems from a containment problem.
One need only look as far as the Russian ellipsoid method in linear programming [Kha79] to find a powerful example of how solving feasibility problems can facilitate solving optimization problems.[17] In addition, a feasibility problem is a decision problem which can be used as an oracle in order to solve problems involving a parameter. For example, one can perform binary search on the value of the parameter, or, in some cases, one can use Megiddo's parametric search technique [Meg83]. The binary search approach has been suggested by Chazelle [Cha83] in the computational geometry literature as a way to find the maximum scaling factor which allows a polygon to fit inside another polygon. In the literature on combinatorial optimization, Reeves [Ree93] discusses binary search and comments: "However, we would not usually try to solve an instance of an optimization problem by this means!". Our belief is consistent with both of these views. Binary search is valuable, but pure binary search alone is insufficient. The best approach, in practice, accelerates the binary search by using an improvement strategy (typically a heuristic) to help update the upper bound.
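As a sketch of this accelerated search, assume a containment oracle fits(items, size) that returns a placement or None, and an improvement heuristic compact(placement) that returns a smaller feasible size for that placement; both names are hypothetical stand-ins for the routines discussed in this thesis.

    def minimize_container(items, lo, hi, fits, compact, eps=0.01):
        # invariant: no solution exists at size lo; a solution is known at size hi
        best = None
        while hi - lo > eps:
            mid = (lo + hi) / 2.0
            placement = fits(items, mid)
            if placement is None:
                lo = mid                   # infeasible: raise the lower bound
            else:
                best = placement
                hi = min(mid, compact(placement))  # heuristic tightens the upper bound
        return hi, best

On termination, lo certifies a lower bound on any feasible size, which is exactly what a pure placement heuristic cannot provide.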
This method provides one way to establish a lower bound for a solution to a minimization problem. The bound can be compared with a value obtained by a heuristic. The ability to solve a feasibility problem thus makes it possible to evaluate the effectiveness of a heuristic, whether or not the heuristic is automated. Evaluating the effectiveness of heuristics is a challenging and important task. Other techniques exist for establishing lower bounds for some types of optimization problems. For example, in the area of combinatorial optimization, lower bounds for minimization problems can be found using relaxation techniques[18] [Ree93].
We construct two different types of geometric optimization problems from a containment problem. The first type is a minimal container problem; the second is a maximal item problem. Given a collection of items and a particular type of container, a minimal container problem asks for the minimal[19] container of that type which contains the given items. Similarly, given a container and a particular collection of item characteristics, a maximal item problem asks for the maximal collection of items which possesses those characteristics and which also fits into the container.
The thesis gives examples of layout problems of each type which can be solved using two-dimensional translational containment for irregular polygons. We give four examples of minimal container problems: 1) strip packing, 2) finding minimal square enclosures for items, 3) finding minimal rectangular enclosures, and 4) finding tight convex and nonconvex enclosures.
[17] The Russian ellipsoid method is not currently regarded as practical, but it was the first polynomial-time algorithm for linear programming.
[18] Reeves et al. [Ree93] discuss two types of relaxation methods: 1) linear programming relaxation for integer or mixed-integer programming, which can either be solved heuristically using dual ascent or exactly, and 2) Lagrangean relaxation.
[19] One possible minimality criterion is area.

From the discussion of related work in layout in Section 1.3.3, we see that solutions to
minimal enclosure problems are often required for the first (clustering) stage of a multi-stage layout problem. Minimal container problems also arise when finding the best stock sheet size for a given set of items, or the best cloth width in garment manufacturing. Section 6.3.1 discusses computational geometry work on various types of enclosure problems for single items. Current solutions to minimal enclosure problems for more than two items are rare and heuristic. This work appears in the layout literature (see Section 1.3.3 and Section 6.3.1). The problem is typically solved either incrementally by adding one item at a time [DT87], or by placing small items in gaps inside the bounding box of two large items [AA76].
Our general approach to finding minimal enclosures is to find an optimal value for a parameter via binary search combined with containment. The binary search can be accelerated by using compaction techniques, such as those developed by Li and Milenkovic [Li94, ML, LM93c, LM93a], to help update the upper bound (i.e. to improve a feasible solution found by containment). Our hybrid containment algorithm is appropriate for minimal enclosure problems. It is extremely fast for loose fits, and so the initial stages of the search are fast.
For the problem of finding tight convex and nonconvex enclosures, we show that removing unreachable portions of a tight rectangular enclosure can yield tight convex and nonconvex enclosures which preserve all the containment solutions for the rectangle.
We also give four examples of maximal item problems. For this discussion, let kpractical be the largest number of polygons for which current containment algorithms are practical. Our problems are: 1) for a feasible containment problem, finding maximally scaled copies of items which fit in the container, 2) given a container and a collection of ≤ kpractical items for which the sum of the item areas does not exceed the area of the container, finding a "large" sub-collection which fits in the container, 3) given a container and a collection of items for which either the sum of the item areas exceeds the area of the container or the number of items is > kpractical, finding a "large" sub-collection of ≤ kpractical items which fits in the container, and 4) problem (3) for a collection of containers.
For problem (1) we use nested binary search combined with containment. The key to solving problem (2) is to carefully choose which item to eliminate from an infeasible subset. We reject the most "troublesome" item, which is identified using the results of the overlap elimination evaluation method of the hybrid containment algorithm. For problem (3), we randomly construct a set of items and solve problem (2). This yields a rich collection of sets which fit into the container. For each set of size n which fits, we automatically have 2^n subsets which fit and which need not be individually represented. This work provides the first practical method for constructing sets of up to ten polygons which fit in a container.
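A minimal sketch of the implicit representation just described (names are ours): only the sets found to fit are recorded, and a candidate group is known to fit whenever it is a subset of a recorded set.

    def known_to_fit(group, feasible_sets):
        # group: frozenset of item ids; feasible_sets: iterable of sets shown to fit.
        # Every subset of a feasible set is feasible, so subsets need no storage.
        return any(group <= s for s in feasible_sets)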
For problem (4), we solve problem (3) for each container, and then apply a look-ahead assignment strategy. This is a pre-packing strategy. Containment thus allows us to transform a geometric optimization problem into a combinatorial optimization problem. Our results challenge the existing belief that pre-packing methods are impractical for layout [Arb93].
In a multi-stage layout problem, problem (3) might involve a large, highly nonconvex container. Highly nonconvex containers also appear in industries in which defects in material cause the material to be partitioned into "quality zones" (see Heistermann and Lengauer's work with leather [HL92]). We show how to decompose such a container in a natural fashion, transforming this problem into problem (4). The transformation removes portions of the container which are clearly not reachable by any item. Our decomposition method uses an algorithm we developed for finding the maximum-area axis-parallel rectangle (MAAPR) in a polygon. Our theoretical work on this problem gives the fastest known asymptotic running time for a MAAPR algorithm for polygons.

1.4 Project Background


The work in this thesis is part of a project whose goal is to automate the marker-making process in the U.S. apparel industry. Marker-making is the process of laying out pattern pieces on cloth so as to minimize cloth waste. As an example, Figure 1.2 depicts a multi-stage marker-making problem from the apparel industry. At the top is a collection of large pants panels and a set of smaller trim pieces. The marker-making task is a strip-packing problem: arrange these items on a semi-infinite strip of material while minimizing the length of the strip.
Our marker-making research has been carried out under the direction of Victor J. Milenkovic, Assistant Professor of Computer Science at Harvard University.[20] The research is part of a larger project, funded by the Sloan Foundation. The principal investigator of this project is Frederick H. Abernathy, Professor of Mechanical Engineering at Harvard University and director of the Harvard Center for Apparel and Textile Research.
Humans generally place the large panels first and then place the trim in the gaps formed by neighboring panels. Thus, they first solve a panel placement problem followed by a trim placement problem. A layout of panels is shown in the figure, followed by the complete layout produced by a human expert. In this example, the items may translate and may also rotate by a discrete set of angles.
The goal of our project is to meet or exceed human performance for the pants marker-making problem. One way to do this is to apply an improvement strategy to an existing layout produced by a human. Another way is to generate a new layout from scratch using, for example, a two-stage approach such as the one adopted by human marker-makers. Accordingly, we initially partitioned our project into three subprojects: 1) compaction, 2) panel placement, and 3) trim placement. Below we briefly describe these three parts of our project.
1.4.1 Compaction
Li and Milenkovic's work on compaction has been a significant part of our project [Li94, ML, LM93c, LM93a]. Given a layout, their algorithms use linear programming to improve the layout through continuous motion of the pieces. A position-based linear programming model is at the heart of the algorithms. The algorithms accomplish a variety of tasks. One task is to reduce the length of a marker. This type of compaction can be used to accelerate binary search in our work on minimal enclosures. Another task is to reduce overlaps among pieces. This is an integral part of the substitution-based approach to pants marker-making described in Section 1.3.3. Thus, compaction can serve both as an improvement method for existing layouts and as the basis of a method for constructing layouts. The overlap reduction evaluation method we apply within our hybrid containment algorithm is an extension, by Milenkovic, of the overlap reduction work.

[20] Victor Milenkovic is now an Associate Professor at the University of Miami.

Figure 1.2: The marker-making task in the apparel industry. The figure's panels show, from top to bottom: the set of polygonal parts; the strip of fixed width and unknown length; the placement of the large pieces; and a human layout with 180 degree rotations and xy-flips allowed (marker 37457c: width 59.75 in, length 269.04 in, 108 pieces, 89.54% efficiency).
1.4.2 Panel Placement
Section 1.3.3 summarized the work of Milenkovic [MDL92] on placing large items by
building column clusters. To place large items, he builds column clusters of four items and
forms a grid-like layout consisting of a set of columns. An algorithm based on dynamic
programming adds one column at a time to the layout. Backtracking is allowed.
1.4.3 Trim Placement
The trim placement task falls within the two-dimensional bin-packing category, according to Dyckhoff and Finke's typology [DF92]. One might argue that it also belongs in the cutting stock category because the items can be partitioned into equivalence classes. However, the items in each class are not congruent; they are graded[21] copies of each other. Trim placement is not solved using repeating patterns, and so it does not fit Dyckhoff and Finke's cutting stock type. However, approaches to trim placement can certainly take advantage of the similarity between pieces in the same equivalence class.
Trim placement can be viewed as a containment problem for which the container is the complement of the placed panels within the original container. The container is highly nonconvex and has multiple components. The number of items far exceeds the number which can currently be handled by any practical containment algorithm. However, practical algorithms for solving the associated packing problem (i.e. place as many pieces as possible) can be developed if we first decompose the container into a collection of containers, where each container's capacity is appropriate for a containment algorithm.
This trim placement problem provided the original motivation for our work on contain-
ment. Most of the containment examples in this thesis are drawn from various marker-
making problems in the apparel industry.
Surprisingly, we found that the problem of decomposing a complex container is a chal-
lenge in its own right. We therefore discuss container decomposition in a separate chapter,
Chapter 7, within the applications part of the thesis.

1.5 Benchmarks
As we have already discussed, our test data comes primarily from apparel manufacturing. We welcome data from other sources. We propose that a set of benchmarks be constructed so that researchers may compare the performance of their translational containment algorithms. We offer to provide data for our containment examples to interested researchers, provided that the data is not used for commercial benefit. To facilitate this exchange while protecting the source of the data, we offer our data using a simple file format, which is given in Appendix A.

1.6 Contribution
The primary contribution of this thesis is our demonstration that, although containment is NP-hard, it is fruitful to: 1) develop algorithms for containment, as opposed to heuristics, 2) design containment algorithms so that they say "no" almost as fast as they say "yes", 3) use geometric techniques, not just mathematical programming techniques, and 4) maximize the number of items for which the algorithms are practical.

[21] In apparel manufacturing, grading is a combination of scaling and morphing.
Our approach to containment has a number of important features. First, we introduce a restrict/evaluate/subdivide paradigm. Second, we develop new theory and practical techniques for the operations within the paradigm. The techniques are appropriate for two-dimensional containment problems in which the items and container may be irregular (i.e. nonconvex) and have multiple components, and in which the items may be translated, but not rotated. Our techniques can be combined to form a variety of two-dimensional translational containment algorithms which follow the restrict/evaluate/subdivide paradigm. The paradigm is designed so that, unlike existing iteration-based algorithms, containment algorithms based on the paradigm are adept at saying "no", even for slightly infeasible problems. Infeasible problems occur frequently in practice.
We develop geometric restrictions which are powerful configuration space pruning techniques. They often detect infeasibility without any evaluation or subdivision. We give two algorithms which use geometric restrictions and are based on the restrict/evaluate/subdivide paradigm. We obtain the first practical running times for NP-complete two-dimensional translational containment problems for up to ten nonconvex items in a nonconvex container. Our examples are drawn primarily from the apparel industry. Each item is represented as a polygon which typically has from 4 to 100 vertices. The container is also represented as a polygon; it typically has from 100 to 300 vertices.
We demonstrate that viewing containment as a feasibility problem has many benefits for layout optimization problems. We present an effective method for finding minimal enclosures which uses containment to perform binary search on a parameter. We note that compaction can be used to accelerate the search. This work represents the first practical approach to finding minimal enclosures for multiple nonconvex items. Clustering is used often in layout, but current clustering techniques are weak. Although they often allow items to rotate, they typically use greedy heuristics, and therefore cannot provide a lower bound on the size of the container. Our minimal enclosure work should lead to more powerful clustering methods, and is an example of how strong "bottom-up" techniques from computational geometry can help the layout community.
We believe that some potentially promising layout techniques have been abandoned by
the layout community because they lack strong bottom-up methods. The primary example
of this is pre-packing. We challenge the view that pre-packing is impractical for multiple-
container packing problems by generating a rich collection of groups for each container,
using containment, and then applying an assignment strategy.
Our advances in containment will help automate layout for industrial processes that rely
on two-dimensional packing of objects. Our results are particularly relevant to industries
which deal with irregularly shaped items. This includes industries such as aerospace, ship
building, apparel and shoe manufacturing, furniture production, and steel construction.

1.7 Overview
The remainder of the thesis is divided into two main parts. Part I describes our work on translational containment for nonconvex items in a nonconvex container. This part contains four chapters. The first chapter introduces our restrict/evaluate/subdivide paradigm for containment and provides the notational and technical background required by our containment work. The second chapter describes our configuration space restrictions. The third chapter discusses evaluation and subdivision techniques. The fourth chapter presents two different algorithms which can be formed via specific choices of restriction, evaluation, and subdivision techniques. Portions of our containment work also appear in [DML94, DM, DM95, MD95].
Part II of the thesis gives examples of how containment can be applied to solve various layout problems. This part contains four chapters. The first chapter discusses preliminaries. The second shows how to transform a container so that regions not reachable by any item are eliminated and the remaining reachable regions are decomposed in a natural way. Our container decomposition work also appears in [DM94]. Our theoretical work on finding the maximum-area axis-parallel rectangle in a polygon, which is used for the decomposition, is summarized in that chapter and also appears in [DMR93, DMR]. The third chapter solves four minimal container problems. The fourth solves four maximal item problems.
Conclusions, future work, and an appendix containing a file format for containment examples appear in Part III of the thesis.

Part I

Translational Containment

Chapter 2
Preliminaries

2.1 Introduction
In this part of the thesis we consider the two-dimensional translational containment problem for polygons. The problem is stated as follows: given a container and a set of items, find translations of the items which result in a non-overlapping arrangement of the items within the container. The items and container may be nonconvex.
Existing two-dimensional translational containment algorithms developed by other researchers are practical only for k ≤ 3. We present new theory leading to the first practical running times for two-dimensional translational containment problems for up to ten nonconvex items in a nonconvex container. For 3 ≤ k ≤ 10, one of our algorithms typically runs in several minutes or less on a 50 MHz SPARCstation, and it is therefore practical for solving a number of layout-related problems. Most of our examples are from apparel manufacturing. Usually, each item has from 4 to 100 vertices, and the container has from 100 to 300 vertices.
Although containment is NP-hard in general, we believe that it is valuable to increase the practical value of k as much as possible. Thus, we continue the "bottom-up" progression (1CC, 1NN, 2CN, 2NN, etc.) of containment work which has taken place in robotics and computational geometry. Since containment is NP-hard and all known containment algorithms have running time which is exponential in k, an increase in the practical value of k from three to ten is significant.
Our ability to say "no" well comes primarily from the use of our new geometric techniques. Our ability to say "yes" for so many items is partly due to our geometric techniques and partly due to our application of and improvements to existing linear programming-based techniques. Thus, we demonstrate that a computational geometry approach can contribute significantly to practical algorithms for NP-hard geometric problems.
Saying "no" quickly is important. Current translational containment algorithms are iteration-based. Iteration-based algorithms for containment attain their worst-case running time when the containment problem is infeasible. This situation occurs frequently in practice. We replace iteration with subdivision. Our subdivision-based algorithms are specifically designed to detect infeasibility quickly.

We present a new paradigm for containment algorithms. The paradigm is based on operations on two-dimensional configuration spaces. As discussed in Section 1.2.1, each of these spaces represents the set of relative positions for a pair of polygons. The algorithm applies three types of operations, in order, to the configuration spaces: 1) restriction, which prunes the spaces and helps detect infeasibility, 2) evaluation, which tries to find a solution, and 3) subdivision. Subdivision partitions one of the configuration spaces into two parts. The algorithm then recurses on each part while keeping the other configuration spaces constant.
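Schematically, one node of the resulting search tree can be sketched as follows; the restrict, evaluate, subdivide, and is_empty operations are placeholders for the techniques developed in the following chapters, and the representation of the configuration spaces is left abstract.

    def contain(spaces, restrict, evaluate, subdivide, is_empty):
        # one node of the restrict/evaluate/subdivide search tree (schematic)
        spaces = restrict(spaces)
        if any(is_empty(u) for u in spaces):
            return None        # an empty configuration space proves infeasibility
        solution = evaluate(spaces)
        if solution is not None:
            return solution    # a valid placement of all items was found
        left, right = subdivide(spaces)    # partition ONE space into two parts
        return (contain(left, restrict, evaluate, subdivide, is_empty)
                or contain(right, restrict, evaluate, subdivide, is_empty))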
We introduce new theory and techniques for each of these types of operations for the case of two-dimensional containment problems in which the items and container may be irregular (i.e. nonconvex) and have multiple components, and the items may be translated, but not rotated. Our techniques can be combined to form a variety of two-dimensional translational containment algorithms which follow the restrict/evaluate/subdivide paradigm. The paradigm is designed so that, unlike existing iteration-based algorithms, containment algorithms based on the paradigm are adept at saying "no", even for slightly infeasible problems.
At present there appear to be two approaches to containment within the restrict/evaluate/subdivide framework: direct and indirect. Direct methods act on the configuration spaces directly, and these methods tend to be based on computational geometry. Indirect methods act on convex approximations to the configuration spaces: either inner or outer approximations. Indirect methods use linear programming or other types of mathematical programming, which is why they can only act on convex approximations. Direct methods have a more accurate view of the "world", but they can only operate on a few configuration spaces at a time (three, in our case). Indirect methods can act on all the configuration spaces simultaneously in a single operation, but they use only approximations to the configuration spaces.
This thesis presents two containment algorithms which follow our paradigm. The first operates directly on the configuration spaces and is therefore a direct algorithm. We call the second a hybrid algorithm because it uses an indirect evaluation method, and both direct and indirect methods for restriction. Our direct algorithm is also an approximate algorithm because it produces a configuration whose overlap is below a given tolerance. Ironically, our hybrid algorithm, whose indirect evaluation method operates on approximations of the configuration spaces, is an exact algorithm because it produces a configuration with no overlap. We briefly discuss each algorithm individually below.
Direct Algorithm
Our new direct algorithm for containment is an approximate algorithm which produces a solution whose overlap is below a given tolerance. As mentioned in Section 1.2.1, for some cutting applications, CAD vendors round polygon vertex coordinates to integers. In such cases, an overlap of less than one unit, corresponding to .01 inches, is acceptable. For the applications we consider, the ratio of the largest value to the smallest nonzero value is usually under 10,000; this makes an approximate algorithm a practical choice. Our approximate algorithm uses our new direct techniques for restriction, evaluation, and subdivision. The algorithm appears in [DM95, DM] as well as in Section 5.2.
Our direct restrictions are based on polygon set operations, and so we call them geometric restrictions. They are powerful tools for detecting infeasibility. We define two general types of restrictions: valid restrictions, which preserve all solutions to a containment problem, and semi-valid restrictions, which preserve a single solution, if one exists. We present a variety of restrictions and several general properties of valid restrictions. We examine, in detail, the effectiveness of our geometric restrictions, which are valid restrictions. Our characterization of the types of containment problems for which these restrictions are effective is based on theoretical as well as experimental results.
The evaluation method uses restriction together with a geometric method for maximizing the intersection of translations of the configuration spaces. The evaluation method often finds a containment solution without any subdivision. The subdivision method is a size-based one. Its goal is to partition a configuration space into two parts such that each has less area than the configuration space. Our approximate algorithm has an extra layer of structure on top of the restrict/evaluate/subdivide paradigm. This layer has a set of containment subproblems to which coarse-grained parallelism could be applied in the future. We create the subproblems by characterizing a valid configuration for translational containment.
The current implementation of this algorithm is fast for up to four items. For k > 4, the current implementation of our geometric restrictions, based on polygon set operations, slows the algorithm down considerably. Increasing the speed of our polygon set operations is a subject of future work (see Section 10.2.1). In our experiments, our approximate algorithm performs fewer subdivisions than the faster, hybrid algorithm, and so we believe the approximate algorithm has the potential to be competitive with the hybrid algorithm.
Hybrid Algorithm
In addition to our direct algorithm, we also present significant improvements to the restriction, evaluation, and subdivision techniques of an existing indirect algorithm of Milenkovic [Mil]. Milenkovic's indirect algorithm, in its original form, was slower than our direct algorithm for infeasible problems, and its practical upper bound on k was k ≤ 3. The improved indirect algorithm is currently our fastest containment algorithm for 4 ≤ k < 10.
Our first improvement to the indirect algorithm is a proof that restrictions, when used in the indirect algorithm, do not increase the algorithm's worst-case running time if the algorithm bases certain decisions on the unrestricted configuration spaces. This is important because restrictions can add vertices to the configuration spaces, and so they might have increased the worst-case running time. The proof allows us to add our new direct (geometric) restrictions to the algorithm. It also allows us to apply a new restriction method, developed by Milenkovic, which is based on linear programming.
The second improvement to the indirect algorithm is related to the evaluation method. We apply the work of Li and Milenkovic on overlap resolution [Li94, ML, LM93c, LM93a] to improve the evaluation step. Their overlap resolution algorithm is an indirect algorithm. It uses inner convex approximations, an approach which complements the outer approximations of the indirect containment algorithm. Our third improvement is our new practical subdivision technique, which attempts to maximize the distance from the current evaluation to the convex approximations of the subdivided configuration spaces.
We call the resulting algorithm a hybrid algorithm because it uses indirect methods for evaluation, developed by Li and Milenkovic, and both direct and indirect methods for restriction. The hybrid algorithm is described in [MD95] as well as in Section 5.3.
The hybrid algorithm, in addition to being a formidable containment algorithm, has the property that a partial containment solution from it is valuable. This is because each evaluation step produces either a non-overlapping solution or an overlapping configuration for which a linear program has reached a local minimum. As a result, the hybrid algorithm, when terminated early, is the most powerful heuristic known for generating a local overlap minimum.
2.1.1 Overview of Chapter
In this chapter, we first give notation and technical background in Section 2.2. Section 2.3 establishes the NP-completeness of translational containment. Section 2.4 describes our restrict/evaluate/subdivide paradigm. Section 2.5 introduces the concept of size analysis. Section 2.6 supplements the discussion of related work in containment which appears in Section 1.2.1. Section 2.7 presents a characterization of translational containment problems. Section 2.8 outlines the contents of the remaining chapters in this part of the thesis.
2.2 Notation
The goal of translational containment is to translate k polygonal regions P1, P2, …, Pk into a polygonal container without overlap. If we denote by P0 the complement of the container region C, then containment is equivalent to the placement of k + 1 polygons P0, P1, P2, …, Pk in non-overlapping positions.
Definition 2.1 ([Min03, GRS83, Ser82, Ser88]) The Minkowski sum of two point-sets (of R² in our case) is:

A ⊕ B = { a + b | a ∈ A, b ∈ B } = ⋃_{b∈B} (A + b).

The Minkowski difference is:

A ⊖ B = ⋂_{b∈B} (A − b).

For a point-set A, let ∁A denote the set complement of A, and define −A = { −a | a ∈ A }. For a vector t, define A + t = { a + t | a ∈ A } and A − t = { a − t | a ∈ A }. Note that A + t = A ⊕ {t}.
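As a concrete illustration (ours, not part of the formal development), the operations of Definition 2.1 can be written down directly for finite point sets in a few lines of Python; polygonal operands require the geometric algorithms discussed later, but the definitions are the same.

    from functools import reduce

    def minkowski_sum(A, B):
        # A (+) B = { a + b : a in A, b in B }
        return {(ax + bx, ay + by) for (ax, ay) in A for (bx, by) in B}

    def negate(A):
        # -A = { -a : a in A }
        return {(-x, -y) for (x, y) in A}

    def translate(A, t):
        # A + t = A (+) {t}
        return {(x + t[0], y + t[1]) for (x, y) in A}

    def minkowski_diff(A, B):
        # A (-) B = intersection over b in B of (A - b)
        return reduce(set.intersection,
                      (translate(A, (-bx, -by)) for (bx, by) in B))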
Theorem 2.2.1 ([Ser82]) ∁(A ⊖ B) = ∁A ⊕ B.
Theorem 2.2.1 states that the Minkowski sum and difference are duals with respect to complementation.
Theorem 2.2.2 ([GRS83, Ser82]) (B + t) ∩ A ≠ ∅ if and only if t ∈ A ⊕ (−B).
Figure 2.1 illustrates Theorem 2.2.2. Readers familiar with mathematical morphology will recognize this as the "hit" transformation [Ser82]. There is an analogous "miss" transformation involving ⊖, which is illustrated in Figure 2.2 and stated in Theorem 2.2.3.
Theorem 2.2.3 ([Ser82]) (B + t) ⊆ A if and only if t ∈ A ⊖ (−B).
Figure 2.1: (B + t) ∩ A ≠ ∅ if and only if t ∈ A ⊕ (−B)
Figure 2.2: (B + t) ⊆ A if and only if t ∈ A ⊖ (−B)
Definition 2.2 ([Ser82]) The opening of A with respect to B is (A ⊖ (−B)) ⊕ B.
If B fits into A, then the opening of A with respect to B consists of all points of A which are reachable by some part of B via translation.
The symbol Uij, 0 ≤ i, j ≤ k, i ≠ j, denotes the set of displacements from Pi to Pj such that they do not overlap each other. For translations ti and tj, Pi + ti and Pj + tj do not overlap if and only if tj − ti ∈ Uij. It follows that Uij = −Uji. Since Pi cannot be displaced with respect to itself, we define Uii = {(0, 0)}. Expressed in terms of set operations and Minkowski sums,

Uij = ∁(Pi ⊕ (−Pj)),  0 ≤ i, j ≤ k, i ≠ j.  (2.1)

Equation 2.1 follows directly from Theorem 2.2.2. The set Uij is the two-dimensional free configuration space for placing Pj with respect to Pi. Note that Uij is unbounded unless i = 0 or j = 0, assuming that the container and P1, …, Pk are bounded.
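By Equation 2.1, computing a configuration space reduces to a Minkowski sum (followed by a complement). For convex operands the sum can be computed in linear time by merging the two edge sequences in slope order; the Python sketch below (our illustration, a standard edge-merge assuming counterclockwise convex vertex lists) shows the core computation. Note that −Pj is obtained by negating Pj's vertices and reversing their order to restore counterclockwise orientation; nonconvex inputs are typically handled by decomposing into convex pieces, summing pairwise, and taking unions.

    def convex_minkowski_sum(P, Q):
        # P and Q are convex polygons given as counterclockwise vertex lists.
        def bottom_first(R):
            # Rotate the list so it starts at the bottom-most (then left-most) vertex.
            i = min(range(len(R)), key=lambda t: (R[t][1], R[t][0]))
            return R[i:] + R[:i]
        P, Q = bottom_first(P), bottom_first(Q)
        P, Q = P + P[:2], Q + Q[:2]   # sentinels so edge indices never wrap mid-merge
        result, i, j = [], 0, 0
        while i < len(P) - 2 or j < len(Q) - 2:
            result.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
            ex, ey = P[i + 1][0] - P[i][0], P[i + 1][1] - P[i][1]
            fx, fy = Q[j + 1][0] - Q[j][0], Q[j + 1][1] - Q[j][1]
            cross = ex * fy - ey * fx      # compare the two edge directions by angle
            if cross >= 0 and i < len(P) - 2:
                i += 1
            if cross <= 0 and j < len(Q) - 2:
                j += 1
        return result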
Let P and U denote the set of all Pi and Uij, respectively. (Strictly speaking, these are lists; even if two elements, such as U27 and U36, are congruent, they are maintained as separate elements.)
Definition 2.3 A configuration of P is a (k + 1)-tuple (t0, t1, …, tk) where ti is a translation (xi, yi) of Pi and t0 is arbitrarily set to (0, 0). A valid configuration of P is a configuration of P which satisfies

tj − ti ∈ Uij,  0 ≤ i < j ≤ k.  (2.2)

A valid configuration is an exact solution to a translational containment problem. It corresponds to a point in 2k-dimensional space. The set of points corresponding to all valid configurations for a given P is the 2k-dimensional free configuration space for P.
Definition 2.4 An ε-approximate configuration of P is a configuration such that the distance of tj − ti from Uij is ≤ 2ε for 0 ≤ i < j ≤ k.
In an ε-approximate configuration, no point of any polygon in P is more than 2ε inside of the boundary of any other polygon when the polygons are laid out according to the configuration.
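For illustration, the following Python sketch tests a candidate configuration against Definition 2.3 (with eps = 0) or Definition 2.4, assuming bounded Uij polygons are available as shapely geometries; shapely is our assumption here, not part of the thesis machinery. A shapely distance of zero means tj − ti lies inside Uij.

    from shapely.geometry import Point, Polygon

    def is_eps_approximate(U, t, eps=0.0):
        # U[i][j]: shapely Polygon for Uij (i < j); t[i]: translation (x_i, y_i) of P_i.
        # eps = 0 gives the exact test of Definition 2.3 (Equation 2.2).
        k = len(t) - 1
        for i in range(k + 1):
            for j in range(i + 1, k + 1):
                d = Point(t[j][0] - t[i][0], t[j][1] - t[i][1]).distance(U[i][j])
                if d > 2 * eps:
                    return False
        return True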
Definition 2.5 Given Uij ∈ U for an instance of translational containment, Uij* is the set of all values of tj − ti such that ti and tj are translations of Pi and Pj, respectively, in a valid configuration of P.
In other words, Uij* is the projection of the set of all 2k-dimensional valid configurations into Uij. Clearly Uij* ⊆ Uij. Also, Uij = −Uji implies Uij* = −Uji*. Let U* denote the set of all Uij*.
In Section 1.2.1, we defined the kNN problem, the (k, r)NN problem, and their variations (see Definition 1.1 and Definition 1.2). We refer to the entire collection of two-dimensional translational containment problems (including kNN, kCN, etc.) as translational containment problems. Note that we do not require the set of all valid configurations; a single valid configuration suffices. Each problem has an approximate version, which is to find an ε-approximate configuration. Each containment problem is an off-line problem because the entire set P is available at the start of the containment problem.
Let the number of vertices of Pi be mi. We define m = max_{1≤i≤k} mi. We use m0 frequently, and it tends to be larger than the other mi. Hence, we distinguish m0 by denoting it by n. In general, we express the running time of a containment algorithm in terms of m, n, k, and r.
This is not the most general containment problem, since it is restricted to two dimensions, does not allow rotation, and the representation of items is limited to polygons. However, this problem is still of great practical importance. Industries dealing with two-dimensional polygonal items include aerospace, ship building, apparel, shoe and furniture manufacturing, and steel construction. In applications such as apparel, shoe, and furniture manufacturing, only a discrete set of rotations is typically allowed; this can be dealt with by iterating over the angles, as sketched below. In contrast, metal pieces are often represented using curves as boundaries, and continuous rotation is frequently allowed.
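For instance, a discrete rotation set can be handled by wrapping a translational solver in an iteration over angle assignments, as in the hypothetical Python sketch below; solve_translational_containment() stands for any of the containment algorithms in this thesis, and the iteration is exponential in k when every item admits several angles (in apparel, often just two).

    from itertools import product
    from math import cos, sin

    def rotate(polygon, theta):
        # Rotate a vertex list about the origin by theta radians.
        return [(x * cos(theta) - y * sin(theta),
                 x * sin(theta) + y * cos(theta)) for (x, y) in polygon]

    def contain_with_rotations(container, items, allowed_angles):
        # allowed_angles[i] is the discrete set of angles permitted for items[i].
        for angles in product(*allowed_angles):
            placed = solve_translational_containment(   # hypothetical kNN solver
                container, [rotate(p, a) for p, a in zip(items, angles)])
            if placed is not None:
                return angles, placed
        return None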
2.3 Hardness
The main result of this section is that kNN is NP-complete. Before proving this, we establish two lemmas. The first shows that kRR, a containment problem for rectangular items in a rectangular container, is NP-hard.
Lemma 2.3.1 The kRR problem is NP-hard.
Proof: Li [Li94] examines the two-dimensional coordinated motion planning problem of strip compaction, in which the motion of the polygons is restricted to a strip of fixed width and arbitrary length, and the goal is to minimize the length of the strip. He shows this problem is NP-hard when all the polygons are rectangles and only translations are allowed. He reduces the NP-complete PARTITION problem to strip compaction. We use a similar reduction here to establish that kRR is NP-hard. In an instance of PARTITION [GJ79], we are given a finite set A and a size s(a) ∈ Z⁺ for each a ∈ A. We ask: Is there a subset A′ ⊆ A such that Σ_{a∈A′} s(a) = Σ_{a∈A\A′} s(a)? We reduce PARTITION to kRR. For each a ∈ A we construct a rectangle of unit height and width s(a). We also construct a container of height 2 and width (Σ_{a∈A} s(a))/2. PARTITION has a solution if and only if there is a valid configuration of the rectangular items inside the rectangular container. This establishes that kRR is NP-hard. □
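The construction in this proof is easy to make concrete; the short Python sketch below maps a PARTITION instance to the corresponding kRR instance (dimensions only, under our convention that rectangles are (width, height) pairs).

    def partition_to_kRR(sizes):
        # One rectangle of unit height and width s(a) per element a in A ...
        items = [(s, 1) for s in sizes]
        # ... and a container of height 2 and width (sum of all sizes) / 2.
        container = (sum(sizes) / 2, 2)
        # PARTITION is solvable iff the items fit in the container without overlap.
        return container, items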
Lemma 2.3.2 Two-dimensional translational containment is in NP.
Proof: Milenkovic, et al. [MDL91] show that the marker-making problem, in the absence of rotations, is in NP. The main idea of their argument applies to translational containment. They show that, if a solution exists, then there exists a (possibly different) solution which is uniquely determined by its set of vertex-edge contacts. We observe that there are only 2k degrees of freedom. Therefore, we can (nondeterministically) select the correct 2k vertex-edge contacts, solve the set of linear equations, and verify the solution. This is a polynomial-time process because the linear system has a polynomial number of equations, and a polynomial number of linear equations can be solved in time which is polynomial in the number of equations. □
Theorem 2.3.3 Two-dimensional translational containment is NP-complete.
Proof: Lemma 2.3.1 shows that kRR is NP-hard. Since every translational containment problem is at least as hard as kRR, this proves that translational containment is NP-hard. Lemma 2.3.2 shows that translational containment is in NP, which completes the proof. □
2.4 The Restrict/Evaluate/Subdivide Paradigm
The set U encodes the entire containment problem, and one can think of it as the input to and current state of a containment algorithm. We refer to U as the hypothesized solution space or hypothesis. A solution within the current hypothesis consists of a valid configuration for the current set U.
Our paradigm uses three operations on the hypothesis:
Restriction: prune the hypothesis by applying restrictions to the current set U. A restriction is an operation on U which replaces some or all Uij s by subsets of themselves. Naturally, if any Uij ∈ U is replaced by the empty set, then there is no solution.
Evaluation: try to find a solution within the given hypothesis. The result is that either: 1) yes, a valid configuration exists within the hypothesis, 2) no, a valid configuration does not exist within the hypothesis, or 3) maybe a valid configuration exists, but it has not yet been found.
Subdivision: divide the current hypothesis into two sub-hypotheses such that if a solution exists within the current hypothesis, then a solution must exist within one of the two sub-hypotheses.
The pseudo-code below describes the flow of a general containment algorithm which abides by our paradigm. We present it in the form of a decision procedure which returns YES or NO. Of course, if a valid configuration is found, the configuration can be stored in a global data structure. The routines RESTRICT(), EVALUATE(), and SUBDIVIDE() perform the functions described in the list of operations given above. RESTRICT() returns a restriction of U. EVALUATE() returns a status flag indicating whether it decides YES, NO, or MAYBE. SUBDIVIDE() partitions one of the configuration spaces Uij ∈ U into two parts Uij′ and Uij″. The algorithm then recurses on each part while keeping the other configuration spaces constant. That is, it creates U′ = (U \ {Uij}) ∪ {Uij′} and U″ = (U \ {Uij}) ∪ {Uij″} and then recurses on U′ and U″.
CONTAIN(U, k)
    U ← RESTRICT(U, k)
    if (some Uij ∈ U is empty) return NO
    status ← EVALUATE(U, k)
    if (status = YES) return YES
    if (status = NO) return NO
    U′, U″ ← SUBDIVIDE(U, k)
    return CONTAIN(U′, k) or CONTAIN(U″, k)
A containment algorithm can be designed by selecting a restriction method, an evaluation method, and a subdivision method. The correctness of the algorithm depends on the correctness of the chosen methods. Termination of the algorithm depends on the compatibility of the methods. For example, suppose we have an algorithm which has a certain worst-case running time based on a subdivision method for which the number of reflex vertices of Uij′ and Uij″ is less than the number of reflex vertices of Uij. Now, suppose we add restrictions to this algorithm. The restriction process may increase the number of reflex vertices in the hypothesis. This can increase the worst-case running time of the algorithm.
A mismatch between evaluation and subdivision methods can also be problematic. For instance, consider an algorithm which uses the subdivision technique mentioned above for reducing the number of reflex vertices, together with a size-based evaluator. That is, the evaluator says "no" if it cannot find a valid configuration in U and the size¹ of U is below a tolerance. Subdivision will eventually produce hypotheses containing convex polygons, but those polygons will not necessarily have size less than the tolerance. If a valid configuration exists within U in this case, the algorithm will not terminate unless the evaluator happens to find it.
Coarse-grained parallelism can be applied to this paradigm, and is a subject for future work. In order to parallelize the general containment algorithm, one would maintain a queue of hypotheses instead of performing recursion. Each processor would remove a hypothesis from the queue, restrict it, and evaluate it. If no solution were found, the hypothesis would be subdivided and the two resulting sub-hypotheses would be added to the queue. Each hypothesis represents a "self-contained" containment problem, and so processors could work on hypotheses independently. Of course, the enqueueing and dequeueing of hypotheses should be implemented as atomic operations.
One limitation of this approach is that the subproblems would not necessarily all be available at the start of the algorithm. Some processors might remain idle until a sufficient number of hypotheses were created.
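The following sequential Python sketch outlines this queue-based variant; restrict(), evaluate(), subdivide(), is_empty(), and the iteration over a hypothesis's configuration spaces are stand-ins for any compatible choice of methods and data structures. Replacing the loop body with worker processes (with atomic enqueue/dequeue) would give the coarse-grained parallel version.

    from collections import deque

    def contain_queue(U, k):
        queue = deque([U])
        while queue:
            hypothesis = queue.popleft()
            hypothesis = restrict(hypothesis, k)       # may empty some Uij
            if any(is_empty(Uij) for Uij in hypothesis.values()):
                continue                               # no solution here; discard
            status = evaluate(hypothesis, k)
            if status == "YES":
                return "YES"
            if status == "MAYBE":                      # "NO" hypotheses are dropped
                queue.extend(subdivide(hypothesis, k))
        return "NO"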
2.5 Approach to Running Time Analysis
In order to analyze the worst-case time complexity of a containment algorithm which follows our paradigm, one must know the time complexity of RESTRICT(), EVALUATE(), and SUBDIVIDE(). In addition, one needs an upper bound on the number of times CONTAIN() is invoked; this is determined by the conditions for which EVALUATE() decides "no". Finally, an upper bound on the total number of vertices of U is required.
Calculating the total number of vertices of U is complicated by the possibility that RESTRICT() and SUBDIVIDE() may introduce new vertices into U, in spite of the fact that each replaces some Uij s by subsets of themselves. One way to treat this is to estimate the worst-case upper bound on the number of new vertices. However, this upper bound may bear little resemblance to the actual worst-case size which is attainable in practice.
For example, suppose that RESTRICT() and SUBDIVIDE() use only polygon set operations such as intersection and union. In theory, for polygons A and B, |A ∪ B|, |A ∩ B| ∈ O(|A| · |B|), but, in practice, |A ∪ B|, |A ∩ B| ≈ |A| + |B|. On the one hand, new vertices are created by edge-edge intersections, but, on the other hand, many vertices are discarded when the union or intersection boundary is computed.
We introduce the concept of size analysis in order to capture the practical running time of a containment algorithm which follows our paradigm. Let s be the maximum number of vertices produced by a possibly infinite sequence of polygon operations which replaces U by a subset of U. (In Chapter 3 we are more specific about the operations.) In some cases, one can prove that s is finite. Although s can be exponential in the original size of U, in practice, it is usually not more than quadratic. Since our analysis is based on s, we refer to size analysis as s-analysis in the remainder of the thesis.

¹ One measure of the "size" of U is the total area of all Uij in U.
2.6 Related Work
In Section 1.2.1 we reviewed the literature on containment. Here we fill in some of the technical details omitted by that discussion.
A "naive" kNN algorithm requires O(k²(mn)^{2k} log mn) time. It searches for a valid configuration by iterating over all choices of 2k contacting pairs among the polygons and container. There are O((kmn + k²m²)^{2k}) = O((mn)^{2k}) possible choices of 2k vertex-edge contacts. For each choice we can solve a set of 2k linear equations in O(k³) time (the cost of Gaussian elimination) to obtain ti, 1 ≤ i ≤ k. The resulting configuration can be checked for overlap in O(k² log mn) time, assuming the following preprocessing. Precompute U according to Equation 2.1. The number of vertices in each of these polygons has an upper bound of O(m²n²). Preprocess the polygons in U so that a point-in-polygon test can be performed in O(log mn) time [Kir83]. To check a layout, perform a point-in-polygon test for each tj − ti in Uij. This costs O(k² log mn) time. The naive algorithm has running time in Ω((mn)^{2k}) because it must iterate over all possible sets of contacts whenever the answer is "no."
The following running times were given in Section 1.2.1.

k    kCN                       kNN
1    O(mn log mn) [For85]      O(m²n² log mn) [AB87, Avn89]
2    O(m²n² log m) [Dev90]     O(m⁴n⁴ log mn) [AB87, Avn89]
3    O(m³n³ log m) [Dev90]     O(m¹⁴n⁶ log mn) [Avn89]
Avnaim and Boissonnat also give a solution to the 3NP problem, three nonconvex polygons in a parallelogram container, using time in O(m⁶⁰ log m) [AB87, Avn89]. Even though the asymptotic running time of the 3NP algorithm contains a polynomial whose degree is high compared to the others, the 3NP algorithm is actually more practical than some of the other lower-degree algorithms. Avnaim's 3NN algorithm, in particular, iterates² over all choices of edges from U01, U02, U03, U12, and U13. (Avnaim uses a sweep-line algorithm to avoid iterating over the edges of U23.) The worst-case running time of the 3NN algorithm is therefore proportional to the product of the sizes of these sets. To determine that a configuration is impossible, the algorithm must iterate over all these choices. Its O(m¹⁴n⁶ log mn) running time is worse than the O((mn)⁶ log n) running time of the naive algorithm for k = 3. For reasons outlined below, it is faster than the naive algorithm in practice.
The m⁶⁰ factor for 3NP arises from the worst-case bound on the number of vertices of the outputs of several consecutive Minkowski sums and polygon unions/intersections. In practice, the worst case is usually a gross overestimate, and unlike the Ω(m¹⁴n⁶) lower bound for the 3NN algorithm, the factor of m⁶⁰ may not be attainable. At each step, the running time of the 3NP algorithm is sensitive to the output of the previous step, and so the algorithm takes advantage of the fact that the number of vertices almost certainly grows much more slowly than the estimate. Recall our discussion of s-analysis in Section 2.5.

² See Section 2.7 for a description of the 3NN characterization which allows this iteration.
An observation similar to the one made about polygon intersection and union holds for the Minkowski sum. Using s-analysis, the 3NN algorithm has running time in O(s⁵ log s), whereas the 3NP algorithm has running time in O(s log s). The naive algorithm has running time in O(s^{4k} log s); for k = 3 this is O(s¹² log s). With this analysis, 3NP is faster than 3NN, and both are faster than the naive algorithm.
2.7 Containment Characterization
Here we take advantage of the fact that the goal of our containment problem is to find a single valid configuration as opposed to finding the entire set of valid configurations. We characterize a containment solution by showing that if a valid configuration τ exists for a given containment problem, then a valid configuration τ′ must exist, where τ′ can be reached from τ by a non-overlapping motion of the pieces, and τ′ possesses characteristics which facilitate the search for a solution. In particular, τ′ is a vertex of a component of the free configuration space, and this component contains τ. In this case we can solve the containment problem by finding τ′. This type of approach has been used for containment by Chazelle to solve 1NN with rotation [Cha83], by Avnaim to solve 3NN with translation only [Avn89], and by Milenkovic, et al. to solve 3CN [DML94].
Our characterization of a solution to kNN stems from a small number of simple observations about contacts and degrees of freedom. First, we observe that because kNN allows translation in x and y, each polygon has two degrees of freedom. Second, the collection of k polygons has 2k degrees of freedom.
There are two types of contacts. The first type involves a vertex and an edge. A vertex-edge contact is a single contact because it implies a single linear constraint. The second type involves two vertices. A vertex-vertex contact is a type of double contact because it implies two linear constraints. A linear constraint reduces the number of degrees of freedom by one if and only if it is linearly independent of the existing system of constraints.
In terms of U, a single contact of Pi with Pj means tj − ti is on the boundary of Uij, and a double contact means tj − ti is at a vertex of Uij.
In our characterization below, let I be an instance of kNN.
Theorem 2.7.1 If I has a valid configuration, then I has a valid configuration with the following characteristics:
• ≥ 2k contacts;
• ≥ 2 contacts per polygon Pi, 0 ≤ i ≤ k.
Proof: Assume I has a valid configuration. We first show that there is a valid configuration with ≥ 2k contacts by arguing as in [MDL91]. Fewer than 2k linear constraints cannot determine the value of 2k variables, and therefore there must be at least one degree of freedom. There must therefore be some simultaneous motion of the polygons Pi, 1 ≤ i ≤ k, for which at least some subset of the polygons move along lines while all polygons retain their contacts. If some subset of the polygons are moving along lines, then eventually some new contact must form. One of three events must occur: 1) two polygons collide, neither of which is P0, or 2) two polygons collide, one of which is P0, or 3) a new contact occurs between two already contacting polygons. The occurrence of one of these events is guaranteed by the fact that the complement of P0 (the container region C) is bounded. If two polygons Pi and Pj collide, then a new vertex-edge contact is formed. If Pi and Pj already have a vertex-edge contact, then that vertex may slide to the end of the edge and form a vertex-vertex contact. We stop the motion when a new contact occurs. Repeating this process forms a set of 2k contacts.
To establish the second part of the claim, we observe that if a valid configuration for I exists and a polygon has fewer than two contacts, then we can translate that polygon until it loses both degrees of freedom. "Translating" P0 is equivalent to fixing P0 and translating the remaining Pi by the same amount in the opposite direction. This implies a valid configuration in which each polygon has at least two contacts. □
A containment problem can be converted to a set of independent containment subproblems using Theorem 2.7.1. A collection of independent subproblems can be solved in parallel if sufficient processors are available. In the special case where some of the Pi are identical, equality of some of the Uij s reduces the number of subproblems.
The cost-effectiveness of iterating over a particular set of subproblems should be carefully evaluated when designing a containment algorithm. Let "contacting pair" refer to a pair of polygons which is in contact. The characterization tells us which contacting pairs can occur. The number of possible contacting pairs is purely a function of k, not m or n. Testing the characterization requires solving a subproblem for each possible contacting pair. If the pair is in single contact, we solve a boundary-restricted subproblem by replacing Uij by its boundary. If the pair is in double contact, we solve a vertex-restricted subproblem by replacing Uij by its vertices. Since k is a "constant", the characterization does not increase the asymptotic running time in terms of m or n. Nevertheless, the number of ways to choose contacting pairs is exponential in k, roughly k^{4k}.
For k = 3 the implication of the first part of the characterization (≥ 2k contacts) is particularly nice. If there exists a solution to 3NN, then there exists a solution in which a set of 2k = 6 determining contacts contains either a double contact or all single contacts. Testing for a double contact solution requires solving a vertex-restricted subproblem for each Uij. For a solution where six pairs of polygons are in contact, each (Pi, Pj) pair must be in contact. This implies a subproblem in which every polygon in U is boundary-restricted. Avnaim [Avn89] takes advantage of this to eliminate a factor of m⁴ from the running time of his 3NN algorithm.
Unfortunately, for k > 3 we must choose 2k out of the (k + 1)k/2 possible contacting pairs. The number of ways to choose 2k pairs is exponential in k. Both a generalization of Avnaim's 3NN algorithm and the naive algorithm must iterate over all ways of choosing 2k single contacts. Using the second part of the characterization (≥ 2 contacts per polygon) also generates an exponential number of subproblems if we apply it to every polygon. Applying it to only a single polygon Pi requires that we check two cases:
1. a double contact with at least one Pj, 0 ≤ j ≤ k, i ≠ j, or
2. a single contact with at least one Pj, 0 ≤ j ≤ k, i ≠ j, and a single contact with at least one Pj′, 0 ≤ j′ ≤ k, i ≠ j ≠ j′.
Testing the first case requires k vertex-restricted subproblems (one for each Uij, 0 ≤ j ≤ k, i ≠ j). Each subproblem is vertex-restricted for Uij. The second case requires k(k − 1)/2 subproblems. Each subproblem is boundary-restricted for two Uij s. This approach therefore requires solving O(k²) subproblems, as sketched below.
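The bookkeeping for this single-polygon case is simple; the Python sketch below enumerates, for a chosen Pi, the k vertex-restricted subproblems of the first case and the k(k − 1)/2 doubly boundary-restricted subproblems of the second (indices only; a solver would replace the named Uij by its vertices or boundary before recursing).

    def subproblems_for(i, k):
        # Case 1: P_i in double contact with some P_j: vertex-restrict U_ij.
        cases = [("vertex-restrict", (i, j)) for j in range(k + 1) if j != i]
        # Case 2: P_i in single contact with two distinct polygons P_j, P_j':
        # boundary-restrict both U_ij and U_ij'.
        others = [j for j in range(k + 1) if j != i]
        cases += [("boundary-restrict", (i, j), (i, jp))
                  for a, j in enumerate(others) for jp in others[a + 1:]]
        return cases   # len(cases) == k + k * (k - 1) // 2, i.e., O(k^2)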
Applying part of this characterization leads to a semi-valid³ configuration space restriction in Chapter 3. Partial characterization is also used in Chapter 5 for our approximate containment algorithm.
2.8 Overview of Part One
The remainder of the containment part of the thesis contains three chapters. Chapter 3 describes our configuration space restrictions. Chapter 4 discusses evaluation and subdivision techniques. Chapter 5 gives two different algorithms which can be formed via specific choices of restriction, evaluation, and subdivision techniques. The first of the two algorithms is an approximate and direct algorithm; the second is our exact, hybrid algorithm. Portions of our containment work also appear in [DML94, DM, DM95, MD95].
³ A semi-valid restriction is guaranteed to preserve at least one valid configuration, if one exists.
Chapter 3
Configuration Space Restrictions

3.1 Introduction
Here we show how to restrict certain two-dimensional configuration spaces associated with translational containment. Recall from Section 2.4 that a restriction of U replaces some or all Uij s by subsets of themselves. Naturally, if any Uij ∈ U is replaced by the empty set, then there is no solution for the hypothesis U. We define two general types of restrictions: valid restrictions, which preserve all solutions to a containment problem, and semi-valid restrictions, which preserve a single solution, if one exists. Restrictions can prune away invalid regions from the configuration spaces before a containment algorithm is applied. They can help an algorithm quickly detect infeasibility. Restrictions can also be integrated into the body of an existing containment algorithm.
This chapter makes several general observations about restrictions of configuration spaces for translational containment. We present a variety of valid and semi-valid restrictions. We examine, in detail, the effectiveness of our geometric restrictions, which are valid restrictions. Our characterization of the types of containment problems for which these restrictions are effective is based on theoretical as well as experimental results. Most of our restrictions are based on closed¹, rather than regularized, set operations, and so we discuss this important implementation issue.
3.1.1 Overview
Section 3.1.2 provides notation for this chapter. Section 3.2 makes several general observations about valid restrictions. The next three sections present valid restrictions. In Section 3.3 we present our geometric restriction, give an example, and discuss the effectiveness of this restriction. In Section 3.4 we describe a linear programming restriction, give an example, and discuss the effectiveness of this restriction. Section 3.5 gives a restriction which is valid when one item is a subset of another. Section 3.6 gives a parallelizable, valid restriction based on unions. Section 3.7 gives a parallelizable restriction based on the characterization in Section 2.7. Section 3.8 gives a restriction for identical items. Section 3.9 discusses closed operators, on which many of these restrictions are based. Section 3.10 summarizes this chapter and presents conclusions.

¹ Regularized operations replace each polygon by the closure of its interior. Our closed operations simply replace each polygon by its closure. See Section 3.9.
Note: Figures depicting Uij polygons appear throughout this chapter to illustrate the difference between Uij s before and after a restriction is applied to U. A Uij, for 1 ≤ i < j ≤ k, is unbounded in its original form, as defined by Equation 2.1, so we perform the following bounding operation before displaying a Uij:

Uij ← Uij ∩ (Ui0 ⊕ U0j)  (3.1)

This operation is a valid restriction, as established in Section 3.3.
3.1.2 Notation
Definition 3.1 A restriction is an operation on U that replaces some or all Uij ∈ U by subsets of themselves. A valid restriction preserves the invariant Uij* ⊆ Uij.
Recall from Section 2.2 that Uij* is the set of all values of tj − ti which belong to a valid configuration. If we denote by Uij⁻ the replacement of Uij in a valid restriction of U, then Uij* ⊆ Uij⁻ ⊆ Uij. We also denote the set of all Uij⁻ for a particular valid restriction by U⁻.
Definition 3.2 A semi-valid restriction of U is a restriction of U having the property that if there exists a valid configuration in U, then there exists a valid configuration in the restriction of U.
When it is important to refer to an original Uij, as defined by Equation 2.1, we use the notation Uij^orig, and U^orig for the set of all Uij^orig.
3.2 Properties of Valid Restrictions
Here we discuss several general properties of any valid restriction for translational containment.
3.2.1 Boundary Intersection
Consider the following two restriction examples². The first example, shown in Figure 3.1, is of four nearly rectangular items to be placed in a nearly convex container. Figure 3.2 shows two one-component U0j polygons for this example, with U0j⁻ polygons shaded in black. Figure 3.3 shows two two-component Uij polygons with Uij⁻ polygons.
The second example, shown in Figure 3.4, is of four oblong items to be placed in a nonconvex container. Figure 3.5 shows a one-component U0j polygon for this example, with U0j⁻ polygons shaded in black. Figure 3.6 shows a two-component Uij polygon, with Uij⁻ polygons.
These examples have the property that each component of Uij⁻ intersects the boundary of Uij^orig. That is, they share part of the boundary in common. This behavior occurs frequently in practice because of the characterization of translational containment problems in [DM, DM95], summarized by Theorem 2.7.1. The following theorem formalizes the relationship between Theorem 2.7.1 and the boundary intersection behavior.

² The algorithms which generate these examples appear later in this chapter.
Figure 3.1: Boundary intersection example 1: the containment problem
Figure 3.2: Boundary intersection example 1: two one-component U0j s, with U0j⁻ s
Figure 3.3: Boundary intersection example 1: two two-component Uij s, with Uij⁻ s
Figure 3.4: Boundary intersection example 2: the containment problem
Figure 3.5: Boundary intersection example 2: a one-component U0j, with U0j⁻ s
Figure 3.6: Boundary intersection example 2: a two-component Uij, with Uij⁻ s
Theorem 3.2.1 If an instance of translational containment has a valid configuration, then, for each j, 0 ≤ j ≤ k, there exists i ≠ j, 0 ≤ i ≤ k, such that at least one component of Uij⁻ shares part of its boundary with Uij^orig.
Proof: Part (2) of Theorem 2.7.1 implies the weaker condition that, if a solution exists, then a solution exists such that, for each j, 0 ≤ j ≤ k, there is some i ≠ j, 0 ≤ i ≤ k, such that Pi and Pj are in (at least single) contact. A contact corresponds to a point on the boundary of Uij^orig; this point remains after any valid restriction because it is associated with a valid configuration. Thus, a component of a replacement of Uij in a valid restriction of U must intersect the boundary of Uij^orig for some i, 0 ≤ i ≤ k. □
This explains components of U0j that cling to its boundary, as in Figure 3.2 and Figure 3.5. It also explains components of Uij, 1 ≤ i ≤ k, that cling to the "inner" boundary, which is the boundary of Pi ⊕ (−Pj) (see the right-hand two-component Uij of Figure 3.3).
Theorem 3.2.1 is an important consideration in designing a containment algorithm. The boundary intersection behavior occurs frequently in practice, and so it is unreasonable to expect, for example, that the convex hull of a replacement polygon is strictly inside the original polygon.
3.2.2 Loose Fits
It is clear that little restriction can be expected if the containment problem has a "loose-fitting" solution. In the extreme case of no container, no restriction is possible at all.
3.2.3 Annular Configuration Spaces
For a given set of items, as the container becomes less restrictive, each replacement of a Uij polygon in a restriction of U, for 1 ≤ i ≤ k, assumes an annular shape. Here we use the terms annular and annulus in the topological sense (see Figure 3.7).
This behavior occurs because, when the fit is loose enough, each point on the boundary of Pi ⊕ (−Pj) is a valid relative displacement. That is, each possible contact between Pi and Pj is legal. An example is shown in Figure 3.7. In this example the outer boundary of the annulus is formed by our geometric restriction (see Section 3.3).
The annular shape presents a challenge for certain types of containment algorithms. For example, if the algorithm uses external approximations to the Uij s, then it will consider a configuration to be valid if every tj − ti is within the approximation to Uij. However, if some tj − ti is within the inner hole of the annulus, then Pi and Pj overlap and the configuration is not valid.
3.2.4 Size and s-Analysis
Restriction reduces the area of U, but it is not guaranteed to reduce the number of vertices of U. This can cause problems in analyzing the running time of a restriction or a containment algorithm. For this reason, for a given set of restriction operations, we choose to express the running time of such an algorithm in terms of the maximum number of vertices s generated by applying any sequence of the restriction operations to U [DM95]. Expressing running time in terms of s is what we call size analysis, or s-analysis, of the algorithm, as introduced in Section 2.5. The use of s-analysis gives a measure of the practical running time of a containment algorithm.

Figure 3.7: An annular configuration space
Suppose U has a finite number of vertices. Some sets of restriction operations on U have finite s, even when applied an infinite number of times. One example is intersections using vertical (or horizontal) half-planes. Intersections with arbitrary half-planes, however, cause s to be infinite. The question of whether or not the operations we use in our valid restrictions cause s to be infinite is an open problem, and a subject of future work (see Section 10.2.1). We show below that if all³ possible valid restrictions are performed on U, the resulting number of vertices is finite.
Theorem 3.2.2 Uij* has a finite number of vertices and a finite number of connected components.
Proof: We first show that Uij* has a finite number of vertices. It then follows that the number of connected components is also finite. Recall from Section 2.2 that m = max_{1≤i≤k} mi. Here we let M = max_{0≤i≤k} mi. We establish a rough upper bound on the number of vertices in Uij* by first finding an upper bound on the number of vertices of the 2k-dimensional free configuration space S. Each vertex of S is the intersection of 2k hyper-planes. Let Q be an upper bound on the number of hyper-planes; then the number of vertices of S is no more than Q^{2k}, because it is well-known that the size of an arrangement of n hyper-planes in d dimensions is Θ(n^d) [Ede87]. Each hyper-plane corresponds to a contact between two polygons Pi and Pj. For each polygon pair there are at most M² contacts, and there are (k + 1)k/2 pairs, so Q = M²(k + 1)k/2. Thus, an upper bound on the size of the free configuration space is given by O((M²(k + 1)k/2)^{2k}) = O(M^{4k}((k + 1)k)^{2k}). Now, Uij* is a projection of S. The projection step can introduce new vertices, and so the number of vertices in Uij* is bounded by the size of an arrangement of the hyper-planes. Thus, the number of vertices in Uij* is bounded by O((M^{4k}((k + 1)k)^{2k})^{2k}) = O(M^{8k²}((k + 1)k)^{4k²}). Therefore, Uij* has a finite number of vertices and connected components. □

³ Knowing all possible valid restrictions is NP-hard.
We note that tighter bounds have been obtained for the size of the free configuration space in special cases. For example, Avnaim and Boissonnat [AB87] show that for two nonconvex polygons and a nonconvex container, the free configuration space is of size O(M²⁰). As another example, Aronov and Sharir [AS94] recently showed that, for a convex polyhedron B translating in 3-space among k convex polyhedral obstacles A1, …, Ak with disjoint interiors, the free configuration space for B has size O(Mk log² k), where M is the total size of the Minkowski sums Pi = Ai ⊕ (−B), for i = 1, …, k.
The number of connected components of the restricted U is closely related to the number of vertices. We simply remark here that restriction can fragment U into many small connected components, and that this should be taken into account when designing a containment algorithm.
3.2.5 Identical Items
If Pi and Pj are identical (with no rotations allowed), then U0i = U0j, so U0i⁻ and U0j⁻ can replace each other. In this case, Pi ⊕ (−Pj) is symmetric about the origin⁴. If any point p in Uij, 1 ≤ i ≤ k, is removed by a restriction, then −p can also be removed. Thus, Uij⁻ is symmetric with respect to the origin.
This symmetry reflects the fact that, given a valid configuration, identical items produce valid equivalent configurations⁵. If an algorithm expects to deal often with identical items, it is wise to take advantage of the equivalent configurations (see Section 3.8).
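A cheap way to exploit this symmetry in an implementation is to intersect each restricted Uij with its own reflection through the origin; for identical Pi and Pj this is itself a valid restriction, since every surviving point p must have −p surviving as well. In the sketch below, negate() and closed_intersection() are stand-ins for polygon negation and the closed intersection operator of Section 3.9.

    def symmetrize(Uij):
        # For identical items, Uij* is symmetric about the origin, so keeping a
        # point p of a restricted Uij is justified only if -p is also kept.
        return closed_intersection(Uij, negate(Uij))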
3.3 Steady-State Geometric Restriction
In [DM, DM95] we present restrictions which operate directly on U. The restrictions use the Minkowski sum and a closed intersection operator (see Section 3.9). Each element of U is replaced using operations that include other elements of U as input. Theorem 3.3.1 below encompasses all three restrictions from [DM, DM95]. Figure 3.8 illustrates the theorem.
Theorem 3.3.1 Uij ← Uij ∩ (Uih ⊕ Uhj), 0 ≤ h, i, j ≤ k, h ≠ i ≠ j, is a valid restriction.
Proof: The theorem states that a valid displacement from Pi to Pj must be a valid displacement from Pi to Ph plus a valid displacement from Ph to Pj. □
⁴ Guibas, et al. note [GRS83] that, if A is convex, then A ⊕ (−A) is symmetric with respect to the origin. The extension to arbitrary A is trivial.
⁵ Two configurations are equivalent if their ti s correspond to a permutation of identical items.
Figure 3.8: Geometric restriction (tj − ti ∈ Uij; th − ti ∈ Uih; tj − th ∈ Uhj)
3.3.1 Avnaim and Boissonnat's Solutions to 2NN and 3NP
We show that the operation of Theorem 3.3.1 can be viewed as a generalization of Avnaim and Boissonnat's [AB87] solutions to 2NN and 3NP.
Avnaim and Boissonnat show for 2NN that the set of all valid relative positions of Pj with respect to Pi is given by (in our notation): Uij* = Uij ∩ (Ui0 ⊕ U0j). This is an exact solution. Each element of the set Uij* corresponds to a valid configuration. We observe that, for kNN, each valid displacement for two polygons must also be a solution for 2NN.
Avnaim and Boissonnat observe that, for 3NP, a valid translation t1 exists for P1 only if:

t1 ∈ U01 ∩ (U02 − (t2 − t1)) ∩ (U03 − (t3 − t1)) ≠ ∅  (3.2)

where (t2 − t1) ∈ U12 and (t3 − t1) ∈ U13. Thus, U01 ← U01 ∩ (U02 ⊕ U21) ∩ (U03 ⊕ U31) is a valid restriction for 3NP.
For their solution to 3NP, Avnaim and Boissonnat show that any vector in U12 ∩ (U13 ⊕ U32) is guaranteed to have an associated displacement for P2 with respect to P1 corresponding to a solution to 3NP. The proof relies on their application of Helly's Theorem. Suppose we have three parallelograms which are the same affine transformation of (different but axis-parallel) rectangles. If the three parallelograms intersect pairwise, then all three have a common point. We observe that, in the general case, Uij* ⊆ Uih ⊕ Uhj for h ≠ i ≠ j.
In our notation, Avnaim and Boissonnat's solution to 3NP applies Uij ← Uij ∩ (Ui0 ⊕ U0j), 1 ≤ i < j ≤ 3, followed by Uij ← Uij ∩ (Uih ⊕ Uhj) for 1 ≤ h ≠ i ≠ j ≤ 3. Each element of Uij then corresponds to a solution to 3NP.
3.3.2 Steady-State Restriction
One would hope that repeated applications of the restriction in Theorem 3.3.1 would yield the set of Uij* in the limit. Actually, this is guaranteed only for k = 2. For k = 3, one can construct counterexamples, but only with difficulty. For k > 3, it is easy to construct counterexamples. We iteratively apply the valid restriction of Theorem 3.3.1 until no more significant shrinking occurs. We denote the resulting restriction by SSGR (steady-state geometric restriction). In the pseudo-code below, Empty_U is a set containing k(k + 1)/2 copies of the empty set.
SSGR(U, k)
    do
        for (i = 0 to k)
            for (j = i + 1 to k)
                for (h = 0 to k)
                    if (h ≠ i and h ≠ j)
                        Uij ← Uij ∩ (Uih ⊕ Uhj)
                        if (any Uij ∈ U is empty) return Empty_U
                        else Uji ← −Uij
        S ← maximum fractional area shrinkage in U
        d ← maximum diameter of a polygon in U
    while (S > α and d > ε)
    return U
The parameter ε is specified by the user. It is used to help judge when to terminate the restriction process. The parameter α is not specified by the user. It is the required relative area shrinkage, 0 ≤ α ≤ 1. For α = 0, the program achieves a "true" steady state (if the maximum polygon diameter is greater than ε and the maximum area is greater than ε²) in which additional restrictions cannot shrink the polygons further. However, this can require an infinite number of iterations. We use α = 0.1 and ε = 0.5 in our implementation; these are satisfactory for our applications. Using nonzero values of α and ε gives us a bound on the number of restriction iterations (see Section 3.3.3).
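A direct Python transcription of the SSGR pseudo-code is sketched below; minkowski_sum(), closed_intersection(), negate(), is_empty(), area(), and diameter() are stand-ins for the polygon primitives discussed in this chapter (see Section 3.9), so this is an outline, not a tuned implementation.

    ALPHA, EPS = 0.1, 0.5   # alpha: required fractional shrinkage; eps: user tolerance

    def ssgr(U, k):
        # U[i][j] holds the configuration-space polygon Uij for all i != j.
        while True:
            max_shrink = 0.0
            for i in range(k + 1):
                for j in range(i + 1, k + 1):
                    for h in range(k + 1):
                        if h == i or h == j:
                            continue
                        before = area(U[i][j])
                        U[i][j] = closed_intersection(
                            U[i][j], minkowski_sum(U[i][h], U[h][j]))
                        if is_empty(U[i][j]):
                            return None            # hypothesis is infeasible
                        U[j][i] = negate(U[i][j])
                        if before > 0:
                            max_shrink = max(max_shrink,
                                             1.0 - area(U[i][j]) / before)
            d = max(diameter(U[i][j])
                    for i in range(k + 1) for j in range(i + 1, k + 1))
            if max_shrink <= ALPHA or d <= EPS:    # while (S > alpha and d > eps)
                return U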
The steady-state restriction process is quite powerful. We know that Uij* ⊆ Uih ⊕ Uhj for h ≠ i ≠ j. However, it is also true that Uij* ⊆ Uih ⊕ Uhq ⊕ Uqj for q, h ≠ i ≠ j. In order to generalize this result, we define the notion of a list restriction. We then show in Theorem 3.3.2 that SSGR satisfies all list restrictions.
Definition 3.3 Given a list of indices π = i, π1, π2, π3, …, πq−1, πq, j, the list restriction of Uij corresponding to π is the restriction:

Uij ← Uij ∩ (Uiπ1 ⊕ Uπ1π2 ⊕ Uπ2π3 ⊕ ⋯ ⊕ Uπq−1πq ⊕ Uπqj)

In the following theorem, our use of the term \steady state" refers to the \true" steady
state described above, in which = 0.
Theorem 3.3.2 If multiple applications of SSGR reach a steady state, then all possible list
restrictions are satis ed.
Proof: The proof is by induction on the length of a list, which is the number of indices
in the list. The base case, for a path of length two, is true because SSGR satis es all list
restrictions of length two. The induction hypothesis is that the list restriction for every list
of length q + 1 is satis ed. Consider the list  = i; 1; 2; : : :; a; b ; c; : : :; q ; j of length
q + 2. We show that the list restriction for  is satis ed. First, by the inductive hypothesis,
the list restriction for the path  0 = i; 1; 2; : : :; a; c ; : : :; q ; j of length q + 1 is satis ed,
so
Uij  Uij \ (Ui1      Ua c      Uq j )

60
Now, because a steady state has been achieved, applying SSGR to Ua c does not change
Uac . This means
Ua c  Ua c \ (Uab  Ubc )
and hence Ua c  Ua b  Ub c . Substitution yields
Uij  Uij \ (Ui1      U   U       U j )
a b b c q

which is the list restriction for  . Thus, the list restriction for  is satis ed. 2
3.3.3 Running Time
Here we calculate the asymptotic running time of SSGR. We assume:
• s is the largest number of vertices of any polygon generated by a sequence of intersection and Minkowski sum operations on polygons in U;
• ε is a real number representing a lower bound on the maximum diameter of a polygon in U;
• α is the fractional shrinkage in area required by each iteration of SSGR.

Theorem 3.3.3 The running time of SSGR is in O(log_{1/(1−α)}(1/ε) · k³ s log s).
Proof: Let r be the number of iterations of the while loop in SSGR. We have:

r = O(log_{1−α} ε) = O(log_{1/(1−α)}(1/ε))

We perform the operation Uij ← Uij ∩ (Uih ⊕ Uhj) inside of three nested loops. Each operation takes O(s log s) time. Each loop executes O(k) times. Therefore, the running time is O(k³ r s log s), and the total running time is O(log_{1/(1−α)}(1/ε) · k³ s log s). □
3.3.4 Example
Figure 3.9 shows an example from the apparel industry. Three "trim" pieces are to be placed. The nearly triangular shape displayed in Figure 3.10 is a scaled version of the U0j polygon for the rectangular trim piece with respect to the container; on it the U0j⁻ polygon for the SSGR of U is shown as a black filled region. More than 95% of the area is eliminated by the restriction process; all that remains is the tip. Figure 3.11 shows the two-component Uij polygon for the rectangular trim piece with respect to one of the other two trim pieces. The Uij⁻ polygon for the SSGR of U is shown as a black filled region. Again, more than 95% of the area is removed by our restrictions. SSGR takes three seconds to execute this example on a 50 MHz SPARCstation.
Figure 3.9: SSGR containment problem: place 3 convex items
Figure 3.10: A one-component U0j, with U0j⁻ for the SSGR of U
Figure 3.11: A two-component Uij, with Uij⁻ for the SSGR of U
3.3.5 Effectiveness
In order to gauge the effectiveness of SSGR for 3 ≤ k ≤ 8, we applied it to more than 100 containment problems. Some of these were infeasible examples, and some were feasible. Most of these examples were drawn from apparel manufacturing. Typical ranges for m and n were 4 ≤ m ≤ 100 and 100 ≤ n ≤ 300. For our apparel examples, we produced each feasible containment problem by removing a group of neighboring items from an existing layout. To create an infeasible problem, we moved some of the remaining items a small distance into the container. In all cases, the area of the container was at least as large as the sum of the item areas. In addition, each pair of items fit into the container, so that all Uij s were non-null before applying SSGR.
The actual running time of SSGR, in practice, depends on the values of k, m, n, α, ε, and the speed of the underlying set operations. We currently use ε = 0.5 and α = 0.1. Although, in theory, SSGR can continue for many rounds before reaching a steady state, we find that if we terminate SSGR when no polygon's area decreases by at least 10%, one or two rounds is usually sufficient. For these values of ε and α, 3 ≤ k ≤ 10, our current implementation of set operations, and the typical ranges for m and n given above, our SSGR runs approximately ten times more slowly than the steady-state linear programming restriction of Section 3.4. On a 50 MHz SPARCstation, SSGR running times tend to range from less than one second for small values of k, m, and n to several minutes for larger values of k near 10, m near 100, and n near 300. For more details, see the running times for our approximate algorithm in Section 5.2.3; this algorithm uses SSGR.
Our focus here is on how well SSGR restricts U. We measure effectiveness as the percentage of area removed by SSGR, averaged over all Uij s. Since the Uij^orig s are unbounded and infinite in area (for i, j ≠ 0), we apply Equation 3.1 before measuring performance. As expected, little or no restriction occurs for loose fits.
Infeasible Examples
Table 3.1 shows results for infeasible examples. We use the kCN and kNN notation introduced in Section 1.2.1. For each case where SSGR does not detect infeasibility, we give the average relative percentage area restriction over all the Uij s. In all the failure cases, this figure is less than 30%.
In slightly more than 50% of the 15 cases, SSGR detects infeasibility. This suggests that SSGR can be valuable as a preprocessing step for a containment algorithm.
Type   Detected?   %        Type   Detected?   %
3CN    yes                  4CN    yes
4CN    no          20%      4CN    yes
4CN    no          29%      4CN    no          29%
4NN    yes                  4NN    yes
4NN    yes                  4NN    yes
5NN    no          0%       5NN    yes
6CN    no          14%      6NN    no          3%
7CN    no          0%

Table 3.1: SSGR applied to infeasible examples
Feasible Examples
For the 85 feasible examples, 3 ≤ k ≤ 8. For 72% of the feasible examples, the average relative percentage area restriction over all the Uij s is below 30%. For 16% of the examples it is above 60%, and for 10% it is above 90%. This data shows a pattern similar to the infeasible examples: for our test cases, it is rare to see a modest amount of SSGR restriction of around 50%.
Constructing Restrictive Examples
Our goal here is to provide insight into the types of situations in which SSGR performs well. Naively, one might think that a tight-fitting example whose only solutions are highly interlocking would have high restriction. However, the requirements are more subtle. The only valid configurations for the 6NN example in Figure 3.12 are tight-fitting and highly interlocking, yet the average SSGR percentage for this example is only 1.8%.
In order to create a "restrictive" example, we want Uij ⊆ (Uih ⊕ Uhj) to be false as much as possible. That is, we desire restrictive triples, as defined below in Definition 3.4. We show below some ways of creating restrictive triples.
Definition 3.4 If Uij ⊄ (Uih ⊕ Uhj), then (Pi, Pj, Ph) is a restrictive triple.
Figure 3.12: A 6NN example with 1.8% SSGR restriction
Figure 3.13: "L"-shaped container and 3 items
A simple "L"-shaped container can be used to induce restriction because the base of the "L" can be used to trap an item. Consider the example in Figure 3.13. Restrictions will reduce the area of U01 and U02, because P3 can only fit in the base of the "L"; this prohibits P1 and P2 from being in the base of the "L". The limited possibilities for P3 will also cause restrictions to reduce the area of U23. SSGR for this example reduces the area of U01 and U02 each by more than 50%, and the area of U12 by 80%.
Figure 3.14 shows a 5NN example which has an average SSGR restriction of 52%. The shape of the container greatly restricts the positions for the narrow pointed item (in the middle of the column of unplaced items). This means there are many restrictive triples. Swapping this item with the rectangular item at the bottom right corner of the layout produces an example with average SSGR restriction of only 11%.

Figure 3.14: A 5NN example with 52% average SSGR restriction
In both Figure 3.13 and Figure 3.14, the shape of the container limits the possible positions of one item Pi; this, in turn, removes placement possibilities for the remaining items. Thus, U0j ⊄ (U0i ⊕ Uij) for some i ≠ j.
We define a "restrictive" container as follows. Consider U^orig. Apply Equation 3.1 so that each Uij is bounded. If, for the resulting U, there is some U0j ⊄ (U0i ⊕ Uij), then the container is restrictive. Note that, except for applying Equation 3.1, we do not use replacement polygons in this definition. Under these assumptions, if there is a restrictive triple which includes P0, then the container is restrictive. Clearly, if a containment problem has no restrictive triple, then the container is not restrictive.
Figure 3.15 shows a different type of situation. Here the container is complicated, and the containment problem is infeasible. SSGR detects the infeasibility. However, the container is not restrictive because the placement positions for a single item within the container are not limited by adding a second item to the container. In this example, it is restrictions of the form Uij ← Uij ∩ (Uih ⊕ Uhj), i ≠ j ≠ h ≠ 0, which are instrumental in detecting the infeasibility.
Figure 3.15: An infeasible 3CN example

If we choose an unrestrictive container, we must select the items carefully in order to induce strong restriction. For example, given a rectangular container of height h and width w, we can form three items of height h/3 and width w (see Figure 3.16, in which the items are shown placed in the container). In this case, each U0j^orig is a vertical line segment of length 2h/3. Each Uij, 1 ≤ i ≠ j ≤ 3, after applying Equation 3.1, is a vertical line segment of length 4h/3 with a segment of length 2h/3 removed from its center. Restrictions shrink each Uij, 1 ≤ i ≠ j ≤ 3, down to four points, which are the endpoints of the two line segments. These are propagated to U0j, 1 ≤ j ≤ 3, leaving three points on its vertical line, spaced at intervals of h/3.
Figure 3.16: A 3CC example with an unrestrictive container
This example does not work for k > 3 because, in that case, there is no restrictive triple
which excludes P0 .
Theorem 3.3.4 Given a rectangular container of height h and width w, and an integer k > 3, for the translational containment problem with k rectangular items, each of height h/k and width w, the Uij, 1 ≤ i ≠ j ≤ k, are unaffected by restrictions of the form Uij ← Uij ∩ (Uih ⊕ Uhj) for which neither i, j, nor h is 0.
Proof: The Uij, 1 ≤ i ≠ j ≤ k, are identical because the items are identical. Hence, each replacement of such a Uij is the same. Recall from Section 2.2 that Uij = −Uji; each restriction therefore reduces to Uij ← Uij ∩ (Uij ⊕ Uij). This has no effect if Uij ⊆ Uij ⊕ Uij. Each Uij, after applying Equation 3.1, is a vertical line segment of length 2h(k − 1)/k with a segment of length 2h/k removed from its center. This forms two vertical line segments; let a, b, c, and d be their endpoints, from top to bottom. Uij + a covers all of segment ab except for a segment of length h/k, extending downwards from a. Uij + b covers all of segment ab except for a segment of length h/k, extending upwards from b. Because each of the two segments comprising Uij is of length ≥ 2h/k, the union of these sums completely covers segment ab. Similarly, the union of the sums Uij + c and Uij + d covers segment cd. Thus, Uij is a subset of Uij ⊕ Uij, and therefore the Uij s, 1 ≤ i ≠ j ≤ k, are unaffected by restrictions of the form Uij ← Uij ∩ (Uih ⊕ Uhj) for which neither i, j, nor h is 0. □
A containment problem which satisfies the conditions of Theorem 3.3.4 clearly also has no restrictive triples which include P0. Thus, SSGR applied to such a problem produces no restriction.

Figure 3.17: A 6NC example with 53% average SSGR restriction
The previous example might suggest that a rectangular container cannot be restrictive for k > 3. This is not true. It is easy to create a restrictive triple involving a rectangular container for k > 3 using nonconvex items. Figure 3.17 shows one example for which the items are placed in the container. SSGR provides average restriction of 53% in this case, with the area of some Uij s reduced by more than 74%. One of the restrictive triples in this example is (P0, P1, P6).
An example for k > 3 with a rectangular restrictive container and convex items can be constructed with only slightly more effort. It can be done using rectangles and the idea of guillotine cuts⁶. An example is given in Figure 3.18; the items are shown placed in the container. This example has 48% average SSGR restriction. Item 1 participates in several restrictive triples. The amount of restriction is controlled by the width of item 1. In order to prevent items 2–5 from fitting side by side, the width of item 1 should be less than half the width of the container.

⁶ A guillotine cut is a common term in the cutting and layout world. It refers to a method of applying a sequence of horizontal and vertical cuts to an axis-parallel rectangle. Each cut goes all the way through the remaining rectangle. (See, for example, [DD92].)
Theorem 3.3.5 Given a rectangular container of height h and width w, and an integer k ≥ 3, for the translational containment problem with k rectangular items, one of height h and width z < w/2, and k − 1 of height h/(k − 1) and width w − z, the Uij, 1 ≤ i ≠ j ≤ k, are affected by restrictions of the form Uij ← Uij ∩ (Uih ⊕ Uhj) for which neither i, j, nor h is 0.
Proof: Assume, w.l.o.g., that the item of width z is item number 1. We show that (U23 ∩ (U21 ⊕ U13)) ⊂ U23, which implies that the restriction U23 ← U23 ∩ (U21 ⊕ U13) removes area from U23. Refer to Figure 3.19, in which the origin is marked "o". The two shaded rectangles represent U23. Each rectangle has width 2z and height (k − 3)h/(k − 1). The rectangles are separated by a vertical distance of h/(k − 1). The two vertical dashed lines represent U21 = U13. Each line has length h, and is at a distance w/2 from the origin. The Minkowski sum U21 ⊕ U13 consists of three vertical lines of length 2h; adjacent lines are separated by a distance of w. Two of them are at a distance of w from the origin, and hence cannot intersect U23. One of the vertical lines runs through the origin. Because it extends a length of h above and below the origin, and each rectangle of U23 ends at a distance of (k − 2)h/(k − 1) above or below the origin, the intersection of U23 with this vertical line consists of two vertical line segments (shown as thick lines in Figure 3.19). This intersection is clearly a proper subset of U23. □

Figure 3.19: U23 ∩ (U21 ⊕ U13)


Figure 3.20 shows another guillotine-like layout with the polygons placed in the con-
tainer. In such an example, the area of some Uij polygons is reduced by as much as 90%.
For example, the area of U02 is greatly reduced due to items 1 and 4.
Our final example of how to construct a restrictive containment problem is shown in Figure 3.21, which depicts a highly restrictive 6NN example. In this case, the large nonconvex item together with the two large ovals forms a restrictive triple. Figure 3.22 shows the effect of SSGR on two one-component U0j s in this case; each has more than 90% of its area removed by the restrictions.
Figure 3.20: A 9CC example with strong SSGR restriction

Figure 3.21: A highly restrictive 6NN example

Figure 3.22: A 6NN example: effect of SSGR on two one-component U0j s

Barriers to Restriction
Restrictions of the form Uij ← Uij ∩ (Uih ⊕ Uhj) are ineffective when Uij ⊆ Uih ⊕ Uhj or, equivalently, when Uij ⊆ Uih ⊕ −Ujh. The following theorem provides a condition for which this is true.
Theorem 3.3.6 Let A and B be polygons and let the origin be denoted by O. If either O ∈ B, or A is connected and no translated copy of A lies entirely within the component of the exterior of B that contains O, then A ⊆ A ⊕ −B.
Proof: Consider a point p ∈ A. To prove that A ⊆ A ⊕ −B, it suffices to show that p = a − b, where a ∈ A and b ∈ B. Translate A by −p, and denote this Ap. If B contains O, then p = p − O, so we're done. Otherwise, let B̄ be the component of the exterior of B which contains O. No translated copy of A lies entirely within B̄; this guarantees the existence of q − p ∈ Ap such that q − p ∉ B̄. If q − p ∈ B, then q − p = b ∈ B implies p = q − b, so we're done. Now assume q − p ∉ B. Ap is connected, and so there exists a connected path γ from O ∈ Ap to q − p ∈ Ap. The endpoints of γ are in different components of the exterior of B, so γ must intersect B at some point b ∈ B. Then because γ ⊆ Ap, there exists a − p ∈ Ap such that b = a − p. This implies p = a − b, which completes the proof. □
The following corollary is immediate from the above theorem.

Corollary 3.3.7 Let A be a simple, bounded, connected polygon such that the origin is contained within the interior of the outer boundary of A. Then A ⊆ A ⊕ −A.

Note: It is not true, in general, that A ⊆ A ⊕ −A, even with the additional constraint that A = −A. Figure 3.23 presents a counterexample. In the figure, A consists of two points, and A = −A. However, A ⊕ −A consists of three points which are not part of A. Therefore, A ⊄ A ⊕ −A.

Figure 3.23: Counterexample for A ⊆ A ⊕ −A: (a) A; (b) A ⊕ (−A)


The next corollary follows immediately from transitivity.

Corollary 3.3.8 Let A and B be polygons satisfying the conditions of Theorem 3.3.6, and let C be a subset of A. Then C ⊆ A ⊕ −B.
In practice, we observe that it is rare for SSGR to remove any area when all items are identical. Even when only some items are identical, we observe limited restriction. This makes sense in light of the above results. For, if all the items are identical, then Uij = Uih = Uhj = −Ujh. If, in addition, Uij satisfies the conditions of Theorem 3.3.6 on A and B, then Uij ⊆ Uih ⊕ −Ujh, and therefore no restriction will occur. Although SSGR can be ineffective when items are identical, Milenkovic has recently designed a restriction to deal effectively with identical items (see the brief description of his symmetry breaking restriction in Section 3.8). In addition, we remind the reader that when items are identical the number of subproblems created by using the translational containment characterization of Section 2.7 is reduced.
When the boundary of a Uij, for 1 ≤ i ≤ k, is a closed curve, it always contains the origin. In this case the Uij satisfies a necessary condition for Theorem 3.3.6 and its corollaries.

We tested the hypothesis that the extent to which the Uij s, 1 ≤ i ≠ j ≤ k, are closed curves is a key factor in the effectiveness of SSGR. For each of the more than 80 feasible cases, we estimated the "closure" by shooting rays from the origin at equally spaced angles about the origin. Let τ be the fraction of rays which hit the Uij. We recorded the average τ over all Uij s, as well as the maximum and minimum τ, for each example. We then calculated correlation coefficients for each of these with respect to the total restriction percentage. We hoped to find high negative correlation, indicating a linear relationship in which high τ and a small amount of restriction were related.
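The closure estimate itself is simple to reproduce. The sketch below (assuming the shapely library; the thesis implementation uses its own polygon operators) shoots equally spaced rays from the origin and reports the fraction τ that hit a given Uij; an annular Uij yields τ = 1.

    import math
    from shapely.geometry import LineString, Polygon

    def closure_fraction(uij, num_rays=360, reach=1e6):
        # Fraction of rays from the origin that intersect uij.
        hits = 0
        for s in range(num_rays):
            theta = 2.0 * math.pi * s / num_rays
            ray = LineString([(0, 0),
                              (reach * math.cos(theta), reach * math.sin(theta))])
            if ray.intersects(uij):
                hits += 1
        return hits / num_rays

    # A square annulus around the origin hits every ray:
    annulus = Polygon([(-2, -2), (2, -2), (2, 2), (-2, 2)],
                      holes=[[(-1, -1), (1, -1), (1, 1), (-1, 1)]])
    print(closure_fraction(annulus))   # 1.0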
We did find that whenever the average τ was high, SSGR effectiveness was low. For τ ≥ 0.9, effectiveness was ≤ 18%, and for τ ≥ 0.95, effectiveness was ≤ 7%. The interlocking 6NN example of Figure 3.12 has average τ = 0.83 and SSGR restriction of less than 2%.
Some of its Uij s are shown in Figure 3.24.
However, effectiveness is sometimes low even when average τ is not high. This tends to occur when several items are identical. The correlation coefficient for average τ is −0.55, meaning only 30% of the variation in the restriction results can be predicted using average τ. The correlation coefficient for maximum τ is −0.48, and for minimum τ, −0.49. Thus, τ alone is not a sufficient predictor of SSGR effectiveness.

Figure 3.24: Selected Uij s for 6NN example

3.4 Steady-State Linear Programming Restriction


A different method of restriction, based on linear programming, was developed by Milenkovic and is described in [DM, MD95]. (Programming work on the linear programming restriction was performed by Umesh Shankar while at the MIT Summer Research Institute [Sha94].) This steady-state linear programming restriction technique (SSLPR) is an indirect one which uses the convex hulls of the configuration spaces. It works as follows. Temporarily replace every element Uij ∈ U by its convex hull H(Uij). The set of constraints

tj − ti ∈ H(Uij), 1 ≤ i < j ≤ k,

can be expressed as a linear program which is called the CLP (constraint linear program). A polynomial-time algorithm is given to project the 2k-dimensional feasible space F into any element Uij of U. If the projection is denoted Fij, then the replacement

Uij ← Uij ∩ Fij

is a valid restriction.
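For concreteness, the CLP can be assembled as an ordinary feasibility LP once each H(Uij) is described by half-planes n · x ≤ d. The sketch below is a minimal illustration using scipy (an assumption; the thesis implementation is separate code): the variables are t1, ..., tk, with t0 pinned at the origin, and each hull edge contributes one row n · (tj − ti) ≤ d.

    import numpy as np
    from scipy.optimize import linprog

    def clp_feasible(k, hull_halfplanes):
        # hull_halfplanes: dict (i, j) -> list of (nx, ny, d), i < j,
        # encoding n . (tj - ti) <= d for each edge of H(Uij).
        A, b = [], []
        for (i, j), planes in hull_halfplanes.items():
            for (nx, ny, d) in planes:
                row = np.zeros(2 * k)            # order: [x1, y1, ..., xk, yk]
                if j >= 1:
                    row[2 * (j - 1)] += nx
                    row[2 * (j - 1) + 1] += ny
                if i >= 1:                       # i = 0 contributes nothing: t0 = (0, 0)
                    row[2 * (i - 1)] -= nx
                    row[2 * (i - 1) + 1] -= ny
                A.append(row)
                b.append(d)
        res = linprog(np.zeros(2 * k), A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(None, None)] * (2 * k))
        return res.x.reshape(k, 2) if res.success else None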
3.4.1 Example
We present an example of SSLPR from the apparel industry. Figure 3.25 depicts three "v"-shaped items which are to be placed in a nonconvex gap in a clothing marker. Figure 3.26 shows two two-component U0j polygons for this example. Their replacements are shaded in black. Figure 3.27 shows two Uij polygons and their replacements. The one-component polygon on the left is unaffected by restriction. The area of the three-component polygon on the right, however, is visibly reduced. This example takes less than 1 second to execute on a 50 MHz SPARCstation.

Figure 3.25: SSLPR containment problem: place 3 nonconvex items

Figure 3.26: Two two-component U0j s, with U0j⁻ s for SSLPR of U

Figure 3.27: A one-component Uij, with Uij⁻ s for SSLPR of U

3.4.2 Effectiveness
Here we discuss the effectiveness of SSLPR. We also compare the effectiveness of SSLPR with that of SSGR. Preliminary results on the effectiveness of the linear programming restriction appear in [Sha94].

SSLPR has the property that it cannot increase the number of reflex vertices in U. Of course, SSGR has no such guarantee.

SSLPR approximates each Uij by its convex hull. If all the Uij s are convex, then SSLPR can give an exact solution to the containment problem. If a Uij is very different from its convex hull, then many invalid points will be considered valid by the linear program. Therefore, as Shankar observes [Sha94], SSLPR provides better restriction when the Uij polygons are similar to their convex hulls than when the Uij s and their convex hulls are dissimilar.
Unfortunately, most of the Uij s, 1 ≤ i ≠ j ≤ k, are quite different from their convex hulls immediately after applying Equation 3.1, because each becomes the difference between two polygons. For this reason, we observe very little SSLPR restriction for our collection of more than 100 test cases. Only 10 of the feasible examples have any SSLPR restriction. SSLPR is not able to detect infeasibility in any of the infeasible examples. For all the cases where SSLPR achieves some restriction, the SSLPR of U is a superset of the SSGR of U.

SSLPR clearly cannot be expected to function well for examples having many annular Uij s. This includes loose fits and identical items.
The examples in which SSLPR is effective have restrictive containers that force the U0j of some item to consist of a small, single, nearly convex component. For example, for the "L"-shaped container of Figure 3.13 and its three items, SSLPR achieves an average of 62% restriction. SSLPR does not do well in situations for which the container is unrestrictive. For example, Figure 3.20, where SSGR reduces the area of some Uij s by up to 90%, has no SSLPR restriction.
Based on the success of SSLPR on examples like that of Figure 3.13, it is reasonable to expect SSLPR to be effective in propagating the effect of selecting a single (nearly convex) component of a Uij for consideration. An example is shown in Figure 3.28, where a single component of a Uij (on the far right) is selected, causing the area of the two two-component Uij s on the left to be reduced. This "selection" is useful in a containment algorithm which relies on subdivision (see Section 5.3.2). For our current implementation, SSLPR is roughly ten times faster than SSGR; this makes it a practical type of restriction to use within a subdivision containment algorithm.

Figure 3.28: Effect of SSLPR on Uij s when a single component of a Uij is selected
Although our tests suggest that SSGR is more powerful than SSLPR, there are, in theory, cases for which SSLPR can provide more restriction than SSGR. Also, in our hierarchical containment algorithm, we have occasionally seen SSLPR remove area not removed by SSGR (see Section 5.3.2).
The following is an example for which SSLPR is more powerful than SSGR. Consider the container and three items in Figure 3.29. The container is the union of the boundaries of two equilateral triangles. Each triangle has sides of length s. The points a, b, and c at alternating reflex vertices of the container form an equilateral triangle. Each of the three items has length s. Each also has two holes; each hole is 1/3 of the way from an endpoint. The local origin of each item is indicated in the figure. The reader may verify that U01 = {b, c}, U02 = {c, a}, and U03 = {a, b}. Also, U12 = {0, a − b}, U23 = {0, b − c}, and U13 = {0, a − c}. This example is infeasible because any simultaneous placement of the three items would force their endpoints to collide. SSGR is unable to detect this infeasibility, because the Uij s satisfy Uij ⊆ Uih ⊕ Uhj. However, there is also no feasible solution to the CLP using the convex hulls of the Uij s, and so SSLPR can detect the infeasibility.
It is easy to construct an example for which SSGR removes a region which SSLPR
cannot. Consider again the containment problem of Theorem 3.3.5. In that case, each U1j
is a pair of parallel vertical line segments. The convex hull of these segments will cause
the CLP to allow solutions in which item 1 overlaps another item. This, in turn, will cause
points of the Uij for an identical pair of rectangles (which SSGR removes) to be valid for
the CLP. Thus, points will be removed by SSGR that are not removed by SSLPR.
We conclude that SSGR and SSLPR are complementary restrictions.
Figure 3.29: Example for which SSLPR is more powerful than SSGR

3.5 Subset Restriction


If one item Pi is a subset of another item Pj, then the following theorem can be used. The theorem assumes that when the local origins of the two items are aligned, Pi ⊆ Pj. Below, ∁(X) denotes the closed complement of a set X.

Theorem 3.5.1 If Pi ⊆ Pj, then, for any Ph, Ujh ⊆ ∁(Pi ⊕ −Ph) and Uhj ⊆ ∁(Ph ⊕ −Pi).

Proof: Pi ⊆ Pj implies that Pi ⊕ (−Ph) ⊆ Pj ⊕ (−Ph). Since P ⊆ Q implies that ∁(Q) ⊆ ∁(P), we have ∁(Pj ⊕ (−Ph)) ⊆ ∁(Pi ⊕ (−Ph)), so Ujh ⊆ ∁(Pi ⊕ −Ph). For the second part, Ujh = −Uhj and Pi ⊕ −Ph = −(Ph ⊕ −Pi) applied to Ujh ⊆ ∁(Pi ⊕ −Ph) yield −Uhj ⊆ ∁(−(Ph ⊕ −Pi)). This immediately gives Uhj ⊆ ∁(Ph ⊕ −Pi). □

Corollary 3.5.2 If Pi ⊆ Pj, then Ujh ← Ujh ∩ ∁(Pi ⊕ −Ph) and Uhj ← Uhj ∩ ∁(Ph ⊕ −Pi) are valid restrictions.
We call the valid restrictions given by Corollary 3.5.2 subset restrictions. Subset restrictions can be useful when initially calculating U. If a Uhi is null, then Uhj need not be calculated.
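As a concrete illustration, the sketch below applies Corollary 3.5.2 using shapely (an assumption; the thesis uses its own robust operators). It exploits the fact that, for convex polygons, the Minkowski sum is the convex hull of the pairwise vertex sums; nonconvex items would require a convex decomposition first. A plain polygon difference stands in here for intersection with the closed complement, subject to the closed-operator caveats of Section 3.9.

    from shapely.geometry import MultiPoint, Polygon

    def convex_minkowski(P, Q):
        # Minkowski sum of two convex shapely polygons.
        pts = [(px + qx, py + qy)
               for (px, py) in P.exterior.coords[:-1]
               for (qx, qy) in Q.exterior.coords[:-1]]
        return MultiPoint(pts).convex_hull

    def subset_restrict(U_jh, P_i, P_h):
        # Ujh <- Ujh intersected with the complement of Pi (+) -Ph.
        neg_Ph = Polygon([(-x, -y) for (x, y) in P_h.exterior.coords[:-1]])
        return U_jh.difference(convex_minkowski(P_i, neg_Ph))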

3.6 Union Restriction


This section introduces a valid restriction which assumes that a particular Uij is the union of a collection of polygons. First, we argue that if A ⊆ Uij, then restrictions on A are well-defined if we replace Uij in U by A. Recall from Section 2.2 that Uij* is the set of all values of tj − ti which belong to a valid configuration, and recall from Section 3.1.2 that Uij⁻ is the replacement of Uij in a valid restriction of U. Analogously, A* is the set of all values of tj − ti in A which belong to a valid configuration, and A⁻ is the replacement of A in a valid restriction.
Lemma 3.6.1 If A ⊆ Uij, then A* ⊆ Uij* and A* = A ∩ Uij*.

Proof: The first part follows immediately from A ⊆ Uij. To obtain the second part: by definition, A* ⊆ A; this together with the first part of the lemma implies A* ⊆ A ∩ Uij*. To obtain equality, observe that if a point tj − ti is in A ∩ Uij* (and hence in Uij*), it is part of a valid configuration; therefore tj − ti must also be in A*. □
Theorem 3.6.2 If Uij = ⋃h Ah, then Uij* ⊆ ⋃h (Ah⁻).

Proof: By definition of a valid restriction, Ah* ⊆ Ah⁻. This implies ⋃h Ah* ⊆ ⋃h Ah⁻. Now, since Ah ⊆ Uij, Lemma 3.6.1 implies Ah* = Ah ∩ Uij*. Substituting this into ⋃h Ah* ⊆ ⋃h Ah⁻ yields ⋃h (Ah ∩ Uij*) ⊆ ⋃h Ah⁻. This implies Uij* ∩ (⋃h Ah) ⊆ ⋃h Ah⁻. Since Uij = ⋃h Ah, this simplifies to Uij* ⊆ ⋃h (Ah⁻). □
Theorem 3.6.2 immediately yields a valid restriction, which we call a union restriction. Coarse-grained parallelism can be used to implement this restriction. In the pseudo-code below, let A be the set of all the Ah polygons whose union is Uij, and suppose there are l such polygons. U^h represents a copy of U for the hth processor, and Uij^h is processor h's configuration space for Pj with respect to Pi. Empty_U is a set containing k(k + 1)/2 copies of the empty set.
UNION-RESTRICT(U, k, i, j, A, l)
    for (h = 1 to l) do in parallel
        U^h ← U
        Uij^h ← Ah
        U^h ← ANY-VALID-RESTRICTION(U^h, k)
    U ← Empty_U
    for (s = 1 to k) do
        for (t = 1 to k) do
            for (h = 1 to l) do
                Ust ← Ust ∪ Ust^h
    return U
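A direct rendering of this pseudo-code in a conventional language might look like the following sketch (Python threads and shapely are assumptions; the thesis does not prescribe a runtime). Here a hypothesis U is a dictionary mapping index pairs to polygons, and restrict is any valid restriction.

    from concurrent.futures import ThreadPoolExecutor
    from shapely.ops import unary_union

    def union_restrict(U, i, j, pieces, restrict):
        # One restriction pass per piece Ah of Uij, then a union of results.
        def one(piece):
            Uh = dict(U)           # this "processor's" copy of U
            Uh[(i, j)] = piece     # substitute Uij := Ah
            return restrict(Uh)
        with ThreadPoolExecutor() as pool:
            restricted = list(pool.map(one, pieces))
        return {key: unary_union([Uh[key] for Uh in restricted]) for key in U}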

3.7 Characterization-Based Restriction


We present here a restriction which does not satisfy the criteria for a valid restriction. That is, it does not preserve the invariant Uij* ⊆ Uij⁻. However, the restriction is useful when only a single valid configuration is required, because if a valid configuration exists, a valid configuration also exists after the restriction. Thus, the restriction is a semi-valid restriction (see Definition 3.2).

The restriction is based on the containment characterization given in Section 2.7. We use a partial characterization based on the second part of Theorem 2.7.1, which guarantees that if a valid configuration exists, then one also exists for which there are at least two contacts per polygon. We apply this to P0, and use the weaker condition that the polygon has at least one contact. This means that if a valid configuration exists, there must exist a valid configuration for which some th − t0 is on the boundary of its U0h. Therefore, we preserve the existence of such a configuration if we replace each U0h by its boundary, apply any valid restriction, and then set U equal to the union of the restricted sets. Coarse-grained parallelism can be used here.

As in the previous section, in the pseudo-code below, U^h represents a copy of U for the hth processor, and Uij^h is processor h's configuration space for Pj with respect to Pi. Empty_U is a set containing k(k + 1)/2 copies of the empty set.
BOUNDARY-RESTRICT(U, k)
    for (h = 1 to k) do in parallel
        U^h ← U
        U0h^h ← BOUNDARY-OF(U0h)
        U^h ← ANY-VALID-RESTRICTION(U^h, k)
    U ← Empty_U
    for (i = 1 to k) do
        for (j = 1 to k) do
            for (h = 1 to k) do
                Uij ← Uij ∪ Uij^h
    return U

3.8 Symmetry Breaking and Subset Substitution Restrictions
Section 3.2.5 observes that identical items produce equivalent configurations. These pose a special challenge for SSGR, as discussed in Section 3.3.5. In recent work [Mil95b], Milenkovic has developed a new form of restriction to overcome the difficulty that SSGR and SSLPR have with identical items. His idea is to arbitrarily choose a vector v, and then impose the constraint that ti · v ≤ tj · v if items Pi and Pj are identical. This has the effect of a 50% reduction in the area of Uij. A "resymmetrizing" step is ultimately necessary to make the symmetry breaking restriction a valid restriction; without this it is only a semi-valid restriction.
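In a linear-programming setting such as the CLP of Section 3.4, the symmetry breaking constraint is a single extra half-plane per identical pair. The sketch below (a hypothetical helper, matching the variable order of the earlier clp_feasible sketch) emits rows encoding (ti − tj) · v ≤ 0 for identical items Pi, Pj with i < j.

    import numpy as np

    def symmetry_rows(k, identical_pairs, v=(1.0, 0.0)):
        # One row per identical pair, in variable order [x1, y1, ..., xk, yk].
        rows = []
        for (i, j) in identical_pairs:     # i < j, both >= 1
            row = np.zeros(2 * k)
            row[2 * (i - 1)], row[2 * (i - 1) + 1] = v[0], v[1]
            row[2 * (j - 1)], row[2 * (j - 1) + 1] = -v[0], -v[1]
            rows.append(row)
        return rows, [0.0] * len(rows)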
The symmetry breaking restriction can also handle non-identical items through a valid restriction called a subset substitution restriction. The main idea of this restriction is as follows. If a polygon A is a subset of each of a collection of polygons, then we can substitute a copy of A for each of the polygons in the collection, calculate U using these copies of A, perform restrictions on U for the copies of A, and then use these restrictions as valid restrictions for the U involving the original items. Note that this is not the same as the subset restriction of Section 3.5; in that case both Pi and Pj are to be placed simultaneously in the same layout, and Pi is not substituted for Pj. In the most general form of the subset substitution restriction, one can replace each polygon Pi by a subset Pi′. The Pi′ s can all be the same (as above), or they can all be different, or some the same and some different.

3.9 Closed Operators
Here we discuss an important design issue related to the type of set operations used to
implement restrictions.
Consider a general polygon, i.e. a collection of polygonal components, where each com-
ponent is connected.
Definition 3.5 A general polygon P is regularized if it is equal to the closure of its interior.
Figure 3.30b depicts a regularization of the non-regularized general polygon of Figure 3.30a. Regularized polygons are not sufficient for our application. Using only regularized set operations can cause the algorithm to incorrectly conclude during a restriction that no solution exists. This can occur if the intersection of two polygons is either a line segment or a point. In fact, this situation can occur often for tight fits. One example is a stack of rectangles inside a rectangular container, where each rectangle to be placed has the same height as the container (see Figure 3.31).

Figure 3.30: Regularization example: (a) non-regularized general polygon; (b) regularized general polygon


A non-regularized polygon does allow isolated points and line segments (see Figure 3.30a). Unfortunately, it also allows isolated points or line segments to be removed from its interior. Thus, a non-regularized polygon need not be closed, i.e., equal to its closure: its boundary plus the closure of its interior. Since the configuration space we operate on is closed, we must deal with closed sets. We therefore need closed polygons. Union and intersection operations on closed sets are defined in the normal way, because the union and intersection of two closed sets are closed. However, the complement of a closed set is open, so we take the closure of the complement of a closed polygon.
Correct implementation of the algorithm requires the use of robust, closed set operations. The set operations must be implemented robustly in order to minimize error accumulation during the steady-state restriction process. Closed operators are important to both the geometric and linear programming restrictions. They are more crucial to geometric restrictions, since these rely more heavily on set operations.

Figure 3.31: Two rectangles with degenerate intersection
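The degenerate intersection of Figure 3.31 is easy to reproduce. In the sketch below (shapely assumed), two abutting rectangles meet in a vertical segment; a closed intersection retains the segment, while regularization, mimicked here by a zero-width buffer, discards it and would wrongly report an empty fit.

    from shapely.geometry import box

    A = box(0.0, 0.0, 1.0, 1.0)
    B = box(1.0, 0.0, 2.0, 1.0)           # abuts A along x = 1

    closed = A.intersection(B)            # LINESTRING (1 0, 1 1): the shared edge
    regularized = closed.buffer(0)        # closure of the interior: empty
    print(closed.geom_type, closed.length)    # LineString 1.0
    print(regularized.is_empty)               # True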

For our first implementation of SSGR, we used robust intersection, union, and complement operators which handled only regularized general polygons. Our robust set operators are based on geometric rounding [LM93b, MN89, Mil93a, Mil93b]. In order to shrink polygons to their boundaries, we added operators which can handle line segments and points as well as lists of polygons. However, our underlying polygon operators were still regularized. We are currently redesigning these operators so that we will have a complete set of operators for closed polygons.

3.10 Summary and Conclusions


In this chapter, we have made several general observations about restrictions for translational containment. For example, we have observed that replacement polygons often have components which intersect the boundary of the original polygon, and shown when and why this occurs. These properties represent important considerations for designers of translational containment algorithms.
We have discussed the valid restriction SSGR in detail. We have evaluated the effectiveness of SSGR and of Milenkovic's SSLPR. We conclude that SSGR is often much more effective on containment examples than SSLPR. SSLPR performs best when the configuration spaces are similar to their convex hulls. SSGR performs best when a restrictive triple of items exists. Both restrictions perform poorly for loose fits, identical items, and other circumstances that cause the configuration spaces to be closed curves containing the origin. Both restrictions are effective in propagating the effects of selecting a single (nearly convex) component of a polygon for consideration. SSGR and SSLPR are complementary: there are cases for which SSGR can restrict more than SSLPR, and cases for which SSLPR can restrict more than SSGR.
We have also introduced subset, union, and characterization-based restrictions, and
discussed Milenkovic's symmetry-breaking restriction.

Chapter 4
Evaluation and Subdivision

4.1 Introduction
In this chapter we present evaluation and subdivision methods. Recall from Chapter 2 that, given a restricted hypothesis, the evaluation step tries to find a valid configuration within the hypothesis. If no solution is found by the evaluation step, then the subdivision step divides the current hypothesis into two sub-hypotheses such that if a solution exists within the current hypothesis, then a solution must exist within one of the two sub-hypotheses. To divide the current hypothesis, we partition one of the configuration spaces into two parts. The algorithm then recurses on each part while keeping the other configuration spaces constant.
Section 4.1.1 gives an overview of our work on evaluation. Section 4.1.2 outlines our
work on subdivision. Section 4.1.3 gives an overview of this chapter.
4.1.1 Evaluation
In order to evaluate a hypothesis, we can invoke an exact algorithm for kNN at this point, or perhaps a MIP algorithm such as that of [DML94]. However, these alternatives are costly. For the purposes of our general containment algorithm given in Section 2.4, all we really require is a fast test which generates a configuration and tests to see if it is a valid configuration: tj − ti ∈ Uij, 0 ≤ i < j ≤ k. If the test succeeds, then we have a solution.

As explained in Section 2.4, the evaluator decides "yes", "no", or "maybe". It decides "yes" if it finds a valid configuration within the current hypothesis. It decides "no" if it determines that there cannot be a valid configuration within the current hypothesis. It decides "maybe" if it fails to find a valid configuration in the current hypothesis but is unable to rule out the possibility that one exists. In the "maybe" case, additional subdivision is required.
Our first evaluation method is a greedy one which operates directly on the configuration spaces. It shrinks each Uij to a point, in turn, then propagates the effects of that shrinking via SSGR. If any Uij becomes empty during this process, then that combination of Uij points is not part of a valid configuration, and so the evaluator does not find a solution. In this case, it returns "maybe". If, however, no Uij becomes empty, then the collection of single-point Uij s determines a valid configuration and the evaluator returns "yes". The key to the effectiveness of this procedure is a good choice of a shrinking point in a Uij. We present several selection methods. One of them produces valid configurations, without using subdivision, in 66% of our test cases. This method maximizes the intersection of translations of the configuration spaces.
The disadvantage of the greedy evaluation method is that it cannot operate simultaneously on the configuration spaces. As k increases, this becomes a more serious limitation. An evaluation method such as the indirect one of Milenkovic, which is based on a containment linear program (CLP), overcomes this limitation (see the discussion of the CLP in Section 3.4 and [Mil, DM95]). It approximates the Uij s by their convex hulls and uses linear programming to check if there exists a configuration which is valid with respect to the convex hulls. If not, then the current U cannot have a valid configuration. Thus, this evaluator is able to say "no", in contrast to the greedy evaluator. If a valid configuration is found with respect to the convex hulls, and if each point found by the linear program is within its associated Uij, then the configuration is a valid one. Otherwise, the configuration is an overlapping one. We apply the work of Li and Milenkovic on overlap reduction [Li94, ML, LM93c, LM93a] to improve this evaluation method.
Both of these evaluation techniques are powerful enough so that they can, in combination
with restrictions, sometimes solve containment problems without any subdivision.
A containment algorithm based on CLP evaluation can solve a containment problem using a single CLP evaluation if every Uij ∈ U is convex. We present a theorem which specifies a condition under which a single CLP evaluation can solve a containment problem for a restricted U. This result has important implications. It provides a test which can be used to decide when to switch from a direct to an indirect approach within a containment algorithm. It also leads, in Section 4.5.3, to a theorem which guarantees that the worst-case number of subdivisions in certain containment algorithms is not increased by the use of restrictions.
4.1.2 Subdivision
In this chapter we also discuss subdivision methods. We define a set of possible goals for subdivision methods, and then discuss several subdivision methods which have one or more of these goals. One goal is to reduce the "size" of a Uij, where size can be interpreted in a variety of ways. We present a size-based subdivision method which reduces the maximum dimension of the bounding box of a Uij. Another goal is to reduce the number of combinatorial possibilities for a valid configuration. We discuss two such combinatorially-based subdivision methods which partition a Uij by extending one of its edges. These methods "move away" from the current configuration if it is overlapping. A different way to move away is offered by our new distance-based subdivision method.
4.1.3 Overview
Section 4.2 presents our greedy evaluation strategy. Section 4.3 describes the overlap reduction evaluation method. Section 4.4 gives a condition for which a single CLP evaluation can solve a containment problem for a restricted U. Section 4.5 presents subdivision techniques. Section 4.5.1 defines subdivision and discusses goals of subdivision. Section 4.5.2 discusses a size-based subdivision technique. Section 4.5.3 describes two combinatorially-based subdivision techniques. Section 4.5.4 discusses our distance-based subdivision method. Section 4.6 summarizes this chapter and states conclusions.

4.2 Greedy Evaluation


4.2.1 Algorithm
The method we present here is a greedy strategy which shrinks each Uij to a point, in turn, then propagates the effects of that shrinking via SSGR. If any Uij becomes empty during this process, then the chosen combination of single Uij points cannot be part of a valid solution. If, on the other hand, we succeed in shrinking each Uij to a point and then applying SSGR without any Uij becoming empty, then a valid configuration exists (see the proof of Theorem 4.2.1). Section 4.2.2 gives heuristics for selecting a shrinking point for each Uij. In the pseudo-code below, the tolerance test varies depending on the subdivision method used.
GREEDY-EVALUATE(U, k)
    if (U fails tolerance test)
        return BUILD-CONFIGURATION(U, k)
    while (no Uij ∈ U is empty and some Uij has not been shrunken to a point)
        select smallest Uij ∈ U which is not a point
        select a point uij ∈ Uij
        Uij ← {uij}
        U ← SSGR(U, k)
    if (no Uij ∈ U is empty)
        return BUILD-CONFIGURATION(U, k)
    return MAYBE

BUILD-CONFIGURATION(U, k)
    t0 ← (0, 0)
    for (j = 1 to k)
        select tj ∈ U0j
    record the configuration (t0, t1, …, tk)
    return YES

Theorem 4.2.1 If each Uij has been shrunken to a point and a subsequent application of SSGR does not cause any Uij to be empty, then GREEDY-EVALUATE yields a valid configuration for P.

Proof: By definition, a valid configuration for P is a set of translations ti, 0 ≤ i ≤ k, such that tj − ti ∈ Uij, 0 ≤ i < j ≤ k. Since each U0j ≠ ∅, 1 ≤ j ≤ k, and consists of a single point, let tj be that point. Since t0 = (0, 0), we have U0j = {tj − t0}. Now, for Uij, 1 ≤ i ≠ j ≤ k, SSGR(U, k) ≠ ∅ implies Uij ⊆ Uih ⊕ Uhj. In particular, for h = 0, uij = u0j − u0i, where uij ∈ Uij, u0j ∈ U0j, and u0i ∈ U0i. Since u0j = tj − t0 and u0i = ti − t0, u0j − u0i = tj − ti. Thus, the points of U constitute a valid configuration, and further restrictions cannot alter that fact. □
4.2.2 Heuristics for Selecting uij
We present several heuristics for choosing a shrinking point uij ∈ Uij. In Section 4.2.3 we compare the effectiveness of these methods in practice. The most naive heuristic is to select a point at random. Our other heuristics are based on the following claim, which gives a necessary but insufficient condition for the existence of a valid configuration.

Claim 4.2.2 If a configuration {t1, …, tk} for P is valid, then

⋂_{j≠i} ( Uhi ∩ (Uhj + (ti − tj)) ) ≠ ∅.

Proof: For fixed h, for each i, Uhi + th − ti contains the origin. The claim follows by translating each Uhj + th − tj, j ≠ i, by (ti − th). □
Claim 4.2.2 means that a solution exists only if there are choices for the uij which cause the translated Uij s to intersect. SSGR enforces pairwise intersection of the translated Uij s. However, it does not guarantee the multiple intersection in Claim 4.2.2. This is illustrated by Figure 4.1 and Figure 4.2. Figure 4.1 shows a containment problem for three small convex items. Figure 4.2 depicts the three U0j polygons for this example (unrestricted and scaled up). U01 is not translated, U02 is translated by a vector in U21, and U03 is translated by a vector in U31. In this case, a solution is not found because the polygons do not have a common intersection.

Figure 4.1: 3CN example which does not yield a common intersection of translated U0j s
Avnaim and Boissonnat's 3NP algorithm works because whenever the container is a
parallelogram the U0j s are parallelograms with sides parallel to the container. Avnaim
and Boissonnat prove that for parallelograms with parallel corresponding sides, pairwise
intersection implies multiple intersection. This version of Helly's Theorem holds even if
k > 3.
Figure 4.2: No common intersection of translated U0j s

Note: One might be tempted to conclude from this that our algorithm provides an exact polynomial-time solution for kNP. However, this is highly unlikely to be true, because kNP is NP-complete (see Section 2.3). The key to this paradox is that Claim 4.2.2 is necessary but not sufficient. For 3NP, we can select u12 and know that u23 and u31 exist and satisfy Claim 4.2.2. Unfortunately, for k > 3, we can't necessarily obtain a self-consistent set of uij s for which uij = uih + uhj for all h ≠ i ≠ j.
Convexity of Union
Theorem 4.2.3 gives sufficient conditions for a set of k polygons (convex or nonconvex) to have a common intersection. (The generalization from 3 to k polygons was suggested by Dan Roth.)

Theorem 4.2.3 Given a set of polygons P = {P1, P2, …, Pk}, if Pi ∪ Pj is convex for 1 ≤ i < j ≤ k, then

⋂_{i=1}^{k} Pi ≠ ∅.
Proof: If Pk is nonconvex, let p be a point which is on the boundary of Pk but is not on the boundary of the convex hull of Pk, which we denote by H(Pk). We claim p ∈ Pi, 1 ≤ i ≤ k. Since Pk is a closed polygon it contains its boundary; thus p ∈ Pk. Now, let Q = H(Pk) ∖ Pk. Because Pi ∪ Pk is convex, H(Pk) ⊆ (Pi ∪ Pk). This implies that Q ⊆ Pi. Since Pi is a closed polygon it contains the boundary of any subset of itself. Therefore Pi contains the boundary of Q, which contains p. This establishes the claim that p ∈ Pi, 1 ≤ i ≤ k.
Now suppose Pk is convex. We prove this case by induction. The base case for k = 1 is trivially true. The induction hypothesis is that our claim is true for k − 1. Now we consider Pk. We claim that Pi ∪ Pk being convex implies that Pi ∩ Pk ≠ ∅ for 1 ≤ i ≤ k. This is true because of the more general fact that if the union of two closed sets is connected, then they have a point in common. To establish this fact, we argue as follows. Let Pi ∪ Pk be connected, pi ∈ Pi, and pk ∈ Pk. If either pi ∈ Pk or pk ∈ Pi, we are done. Otherwise, since Pi ∪ Pk is connected, there exists a path γ(t), 0 ≤ t ≤ 1, where γ(0) = pi and γ(1) = pk, and either γ(t) ∈ Pi or γ(t) ∈ Pk for 0 < t < 1. Consider t₊ = sup{t | γ(t) ∈ Pi}. The point γ(t₊) is a limit point of Pi ∩ γ as t increases from 0, so γ(t₊) ∈ Pi because Pi is closed. Let t₋ = inf{t | γ(t) ∈ Pk}. We claim t₋ ≤ t₊. For, if t₊ < t₋, then there exists t′ such that t₊ < t′ < t₋ and γ(t′) does not belong to either Pi or Pk, a contradiction. Since γ(t) ∈ Pk for all t > t₊ and Pk is closed, γ(t₊) ∈ Pk as well. This establishes Pi ∩ Pk ≠ ∅ for 1 ≤ i ≤ k.

Now we want to apply the induction hypothesis to the set of k − 1 nonempty polygons {(P1 ∩ Pk), …, (Pk−1 ∩ Pk)} to establish that ⋂_{i=1}^{k} Pi ≠ ∅. We claim we can do this because (Pi ∩ Pk) ∪ (Pj ∩ Pk) is convex for 1 ≤ i < j < k. This is true because

(Pi ∩ Pk) ∪ (Pj ∩ Pk) = Pk ∩ (Pi ∪ Pj).

This is the intersection of two convex sets and is therefore itself convex. This completes the induction and establishes the theorem. □
Corollary 4.2.4 Suppose P is a set of polygons satisfying the conditions of Theorem 4.2.3. Let P0 be convex and intersect all the other polygons: P0 ∩ Pi ≠ ∅, 1 ≤ i ≤ k. It follows that

⋂_{i=0}^{k} Pi ≠ ∅.

Proof: In the proof of Theorem 4.2.3 for the case where one of the polygons is convex, we only used the property that Pi ∩ Pk ≠ ∅. That property is satisfied here because of the conditions of the theorem. □
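The hypothesis of Theorem 4.2.3 is straightforward to test computationally: a union is convex exactly when it fills its own convex hull, so an area comparison (with a tolerance for roundoff) suffices. A sketch using shapely (assumed):

    from itertools import combinations
    from shapely.ops import unary_union

    def pairwise_unions_convex(polys, tol=1e-9):
        # If every pairwise union is convex, Theorem 4.2.3 guarantees
        # that the polygons share a common point.
        for P, Q in combinations(polys, 2):
            u = unary_union([P, Q])
            if u.convex_hull.area - u.area > tol:
                return False
        return True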
Note that we have an exact solution to 3NN if V1, V2 + u21, and V3 + u31 satisfy the conditions of Theorem 4.2.3 or Corollary 4.2.4. However, these results cannot give an exact solution to kNN (see the note in the previous section). The results are still useful because they suggest a heuristic we can use in our kNN algorithm for selecting uij. Theorem 4.2.3 suggests we choose the uij which maximizes the convexity of U0i ∪ (U0j + uji). The problem of finding the set of translations which maximizes the convexity of the union of a set of polygons is an open one (see Section 10.2.1). In our implementation we find an approximate answer (see Section 4.2.3).
Maximizing Intersection Area
In GREEDY-EVALUATE, shrinking a particular Uij to a single point {uij} fixes the translation of Uhj with respect to Uhi. Subsequent applications of SSGR can therefore cause the associated polygons in U to shrink. A containment solution can only be reached, however, if each polygon in U can be shrunk to a point without causing any polygon in U to vanish.

Suppose we choose the shrinking point for a Uij to be the point which shrinks Uhi and Uhj the least. We might then have a better chance of preventing the polygons in U from vanishing. This suggests the following heuristic for selecting the shrinking point: choose uij to be the point in Uij such that the area of Uhi ∩ (Uhj + uji) is maximized. If Uhi and Ujh are simple polygons with a and b vertices, respectively, then the optimal uij can be found exactly in polynomial time using a brute-force configuration space method, by solving O(a²b²) two-dimensional convex optimization problems [Mit94].
Solutions to some closely related problems often provide large intersection area in practice. One possibility is to align the centroids of the polygons. Another is to minimize the Hausdorff distance between the polygons. An inexpensive approximation to the latter is given in [ABB91]. Their result states that if you superimpose the lower left corner of the bounding boxes of two polygons, the Hausdorff distance between them is ≤ (1 + √2)δ, where δ is the minimum Hausdorff distance possible for the two polygons. Of course, there is nothing special about the lower left corner. The proof works for any corresponding pair of bounding box corners.

In practice, we use a method which (approximately) maximizes the area of Uhi ∩ (Uhj + uji). We overlay a grid on each component of Uij and, at each grid point uij, we measure the area of Uhi ∩ (Uhj + uji). Section 4.2.3 describes this in more detail.
4.2.3 Comparing Heuristics for Selecting uij
We experimented with three different heuristics for selecting a shrinking point uij ∈ Uij: (1) a random point, (2) a point which (approximately) maximizes the area of U0i ∩ (U0j + uji), and (3) a point which (approximately) maximizes the convexity of U0i ∪ (U0j + uji).
Heuristics (2) and (3) use a two-dimensional grid to find an approximate maximum. For (2), we overlay a grid on each component of Uij and, at each grid point uij, measure the area of U0i ∩ (U0j + uji). We choose a grid resolution for each component which gives ten grid points along the longest (x or y) dimension of the component. For each component, we find the average area and the maximum area. From the component with the highest average value, we select the point yielding the maximum value.

For (3), we again overlay a grid on each component of Uij. In this case, however, we measure the area of H(U0i ∪ (U0j + uji)) ∖ (U0i ∪ (U0j + uji)). For each component, we find the average area and the minimum area. From the component with the lowest average value, we select the point yielding the minimum value.
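The grid analysis of heuristic (2) can be sketched as follows (shapely assumed; for brevity this version takes a global maximum over all components rather than first choosing the component with the highest average, as described above).

    import numpy as np
    from shapely.geometry import Point
    from shapely.affinity import translate

    def best_shrink_point(U0i, U0j, Uij_components, res=10):
        # Grid point uij maximizing area(U0i /\ (U0j + uji)), with uji = -uij.
        best, best_area = None, -1.0
        for comp in Uij_components:
            minx, miny, maxx, maxy = comp.bounds
            step = max(maxx - minx, maxy - miny) / res
            for x in np.arange(minx, maxx + 0.5 * step, step):
                for y in np.arange(miny, maxy + 0.5 * step, step):
                    if not comp.intersects(Point(x, y)):
                        continue                         # uij must lie in Uij
                    area = U0i.intersection(
                        translate(U0j, xoff=-x, yoff=-y)).area
                    if area > best_area:
                        best, best_area = (x, y), area
        return best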
The goal of our experiments was to see which of the three heuristics was able to solve the most containment problems using only geometric restrictions and greedy evaluation. We performed several experiments. Our early experiments with 3NN examples, using geometric restrictions but not a steady-state loop, suggested that choices of uij leading to valid configurations often have regions of large intersection area between the translated Uij s. For that experiment we tested 30 4CN and 4NN examples, using method (2). A solution was found without subdividing in 66% of the 30 examples.

Once we had the steady-state restriction process in place, we ran another experiment using 30 test examples for 4 ≤ k ≤ 6. The results appear in Table 4.1.

Table 4.1 shows that maximizing the area of intersection (heuristic (2)) performs best, solving 60% of the problems without subdivision. Maximizing convexity (heuristic (3)) solved 36% of the problems, and randomization (heuristic (1)) only 20%.

Although the intersection method usually outperforms the convexity method, there are cases in which convexity is able to find a solution without subdivision when intersection cannot. For example, Figure 4.3 shows three U0j polygons for the example of Figure 4.1.
Type     Number of Examples   Random   Intersection   Convexity
4CC               6              2           6             2
4CN               1              0           1             0
4NN              13              4           5             5
5CC               2              0           2             0
5CN               2              0           1             1
5NN               5              0           2             3
6NN               1              0           1             0
Totals:          30              6          18            11

Table 4.1: Number of solutions found without subdivision

The three polygons almost have a triple intersection when uij is chosen by the intersection method. When the convexity method is used for the same example, Figure 4.4 shows that a triple intersection results. Note in Figure 4.4 that the algorithm has matched each pair of Uij polygons along one side in an attempt to maximize the convexity of the union of each pair.

The grid analysis method for finding an approximate maximum for intersection appears to be effective. However, many set operations are performed in order to choose a single uij shrinking point. We would like a method which is as effective but is faster (see Section 10.2.1). Our initial experiments with inexpensive methods for encouraging large intersection area, such as aligning centroids and bounding box corners (see Section 4.2.2), suggest that these faster methods are not as effective as grid analysis.

4.3 Overlap Reduction Evaluation


The disadvantage of the greedy evaluation method of Section 4.2 is that it cannot act simultaneously on the Uij s. Thus, our choices of shrinking points for the Uij s do not necessarily produce a consistent set of points leading to a valid configuration. This limitation can be overcome by approximating the configuration spaces by their convex hulls.

The method we discuss here is based on Milenkovic's containment linear program, called the CLP (see Section 3.4 and [Mil, DM95]). To create the CLP, temporarily replace every element Uij of U by its convex hull H(Uij). The set of constraints

tj − ti ∈ H(Uij), 1 ≤ i < j ≤ k,

can be expressed as a linear program.
The use of the CLP as an evaluator predates the use of the CLP for restriction. If the CLP is infeasible, then no solution exists. However, if the CLP is feasible, then an arbitrary feasible solution may or may not be a valid configuration. If Uij ∈ U is not convex, then it is possible that tj − ti lies in the convex hull H(Uij), but not in Uij itself. In that case, the configuration is feasible with respect to the CLP, yet Pi + ti and Pj + tj overlap.

Figure 4.3: No common intersection after maximizing intersection

We suspected that applying an overlap reduction algorithm to the overlapping configuration would greatly improve this evaluation method. We tested this conjecture by directly invoking an overlap reduction module which was developed for another task within
our project. Li and Milenkovic have a method for overlap reduction based on linear programming [Li94, ML, LM93c, LM93a]. This was recently generalized and improved by Milenkovic. (The implementation was done by Iraklis Kourtidis, a Harvard undergraduate.) We decided to apply this algorithm to improve the performance of the "naive" CLP. Thus, our first attempt directly invoked the overlap reduction module for each overlapping configuration. This was a breakthrough for our containment work. Previously, the CLP algorithm had only been fast for three or fewer polygons. We found that reducing overlaps allowed this algorithm to solve containment, for up to ten polygons, often without subdivision.
However, there was still room for improvement. The overlap reduction module operates directly on the original (i.e., unrestricted) Uij s, and so it cannot take into account the restrictions that SSGR or SSLPR find for U. Our initial success motivated Milenkovic to develop a new overlap resolution algorithm, as described in [MD95]. We briefly summarize the key features of this algorithm here. Full details appear in [MD95].

Finding an overlap minimum, even a local minimum, is a nonconvex problem. Fortunately, it is possible to rapidly find a local minimum in practice through iterated linear programs. For the problems we encounter in practice, minimization appears to require at most two or three iterations. The linear program used in this case is an extension of the CLP called the overlap linear program (OLP). Building the OLP requires constructing a set of inner convex approximations to the Uij s, in a fashion similar to Li's use of a locality heuristic for compaction [Li94]. The OLP minimizes the maximum distance from the Uij to those inner approximations, while simultaneously satisfying the CLP.

Figure 4.4: Common intersection after maximizing convexity of the union

This evaluation technique is very powerful. Like the greedy evaluation method, it can sometimes solve containment problems without any subdivision. It is especially adept, for obvious reasons, at finding a valid configuration when a "loose-fitting" one exists. We defer further discussion of its performance to Section 5.3.

4.4 Evaluation of a Restricted U


A containment algorithm based on CLP evaluation can solve a containment problem using a single CLP evaluation if every Uij ∈ U is convex. In this section we specify a condition under which a single CLP evaluation can solve a containment problem for a restricted U.

Following the notation of Chapter 3, in Theorem 4.4.1 below, we denote the replacement of Uij in a valid restriction of U by Uij⁻, and the collection of Uij⁻ s by U⁻. We also denote the convex hull of Uij by H(Uij), and we use Uij^orig to denote Uij before any restriction has been applied to U.
Theorem 4.4.1 If, for every Uij⁻ ∈ U⁻, H(Uij⁻) ⊆ Uij^orig, then a single CLP evaluation solves the containment problem for U⁻.

Proof: If the CLP is infeasible, then the containment problem for U⁻ is infeasible. If the CLP is feasible, then because H(Uij⁻) ⊆ Uij^orig for every Uij⁻ ∈ U⁻, every point of the form tj − ti returned by the CLP is within Uij^orig. In this case, the configuration (t0, t1, …, tk) is valid. □
This result has two important implications. We show in Section 4.5.3 that Theorem 4.4.1 can be used to guarantee that incorporating restrictions into a CLP-based algorithm does not increase the worst-case number of subdivisions.

In addition, Theorem 4.4.1 provides a test which can be used to decide when to switch from a direct to an indirect approach within a restrict/evaluate/subdivide containment algorithm. Inside any evaluator, one can test if the conditions of Theorem 4.4.1 hold. If so, one can call a CLP evaluator which tests tj − ti against Uij^orig to solve the containment problem for the current hypothesis.
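Given polygon predicates, this switch test is a one-liner. A sketch with shapely (assumed), where restricted and original map each index pair to its Uij⁻ and Uij^orig respectively:

    def single_clp_suffices(restricted, original):
        # Theorem 4.4.1: if every restricted hull fits inside the original
        # region, one CLP evaluation settles the hypothesis.
        return all(restricted[key].convex_hull.within(original[key])
                   for key in restricted)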

4.5 Subdivision
We now turn to a discussion of subdivision techniques. Section 4.5.1 defines subdivision and discusses goals of subdivision. Subsequent sections discuss specific subdivision methods. Section 4.5.2 describes our size-based method. Section 4.5.3 discusses combinatorially-based methods. Section 4.5.4 gives a new distance-based method.
4.5.1 Definition and Goals
Subdivision divides the hypothesis into two sub-hypotheses. Specifically, subdivision splits one Uij ∈ U into two parts Uij′ and Uij″. It creates two sub-hypotheses by replacing Uij with Uij′ or Uij″. That is, it creates U′ = (U ∖ {Uij}) ∪ {Uij′} and U″ = (U ∖ {Uij}) ∪ {Uij″}.

Subdivision can have a variety of goals:

1. reduce the "size" of the Uij s, which can be interpreted as the number of vertices, the number of reflex vertices, the number of components, the area, or the maximum dimension of the bounding box, for example;

2. make the convex hull H(Uij) "stick out" of Uij^orig less, where Uij^orig is Uij before restrictions have been applied to U. In particular, if we define the maximum overlap of H(Uij) to be

   max_{u ∈ H(Uij)} dist(u, Uij^orig),

   then subdivision should make the maximum overlap of H(Uij′) and H(Uij″) smaller;

3. move away from the current configuration;

4. reduce the number of combinatorial possibilities for a valid configuration.
Depending on the containment algorithm, one or more of the goals applies.
The first goal can be accomplished by almost any reasonable splitting algorithm. The goal of reducing the maximum dimension of the bounding box is appropriate for a direct algorithm. Reducing the number of reflex vertices is appropriate for an indirect algorithm which approximates the Uij s by their convex hulls. We call a subdivision method which focuses on the first goal a size-based subdivision method.
The last three goals make sense if the subdivision method is used together with an evaluation method which always produces a configuration. If the configuration is an overlapping one, then subdivision is invoked. The direct, greedy evaluation method of Section 4.2 does not produce a configuration when it answers "maybe". However, the indirect overlap reduction evaluation method of Section 4.3 does produce an overlapping configuration when its answer is "maybe".
The second goal is appropriate for an indirect algorithm which approximates the Uij s by their convex hulls. This goal arises because if H(Uij) ⊆ Uij^orig, then Uij need not be split further. Recall that Theorem 4.4.1 guarantees that if this condition holds, then a single CLP evaluation solves the containment problem for the current hypothesis. Recall also that Theorem 4.4.1 can be used within any evaluator to identify a situation in which a single CLP evaluation solves the containment problem for the current hypothesis. This implies that the second goal of subdivision can be viewed as an appropriate one for any containment algorithm within our restrict/evaluate/subdivide paradigm.
Both the third and fourth goals have the effect of driving the evaluator "away" from the current overlapping configuration. Without this goal there is no guarantee that a single subdivision will produce sub-hypotheses which are easier to evaluate. Evaluation might investigate the same overlapping tentative solution in one of the sub-hypotheses. Metaphorically, evaluation may get "stuck in a rut," and a large number of subdivisions may be required to move it out of the "rut."
A subdivision method which emphasizes the third goal by maximizing the distance from
the current con guration is a distance-based method. A method based on the fourth goal
is a combinatorially-based method.
4.5.2 Size-Based Method
As discussed above, a size-based subdivision method reduces the "size" of the Uij s, which can be interpreted as the number of vertices, the number of reflex vertices, the number of components, the area, or the maximum (x or y) dimension of the bounding box, for example.

We present here one size-based method which reduces the maximal dimension of the bounding boxes of the Uij s. Our method operates on the V ∈ {U0j | 1 ≤ j ≤ k} which has the maximum bounding box dimension. It cuts V with either a vertical or horizontal line, depending on the aspect ratio of V.
SIZE-SUBDIVIDE(U, k)
    V ← element of {U0j | 1 ≤ j ≤ k} with maximum bounding box dimension
    B ← bounding box of V
    /* assume x is the longest dimension of B */
    U′ ← {V ∩ left half of B} ∪ (U ∖ {V})
    U″ ← {V ∩ right half of B} ∪ (U ∖ {V})
    return U′, U″
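A compact rendering of SIZE-SUBDIVIDE (shapely assumed, with a hypothesis represented as a dict from index pairs to polygons):

    from shapely.geometry import box

    def size_subdivide(U):
        # Pick the U0j with the largest bounding-box dimension.
        key = max((p for p in U if p[0] == 0),
                  key=lambda p: max(U[p].bounds[2] - U[p].bounds[0],
                                    U[p].bounds[3] - U[p].bounds[1]))
        minx, miny, maxx, maxy = U[key].bounds
        if maxx - minx >= maxy - miny:        # cut with a vertical line
            halves = (box(minx, miny, (minx + maxx) / 2, maxy),
                      box((minx + maxx) / 2, miny, maxx, maxy))
        else:                                 # cut with a horizontal line
            halves = (box(minx, miny, maxx, (miny + maxy) / 2),
                      box(minx, (miny + maxy) / 2, maxx, maxy))
        U1, U2 = dict(U), dict(U)
        U1[key] = U[key].intersection(halves[0])
        U2[key] = U[key].intersection(halves[1])
        return U1, U2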
4.5.3 Combinatorially-Based Methods
Here we discuss two subdivision methods which satisfy the third and fourth goals. Each moves away from the current configuration, and does so in a way that reduces the number of combinatorial possibilities. Both can be used with an indirect evaluation method which approximates the Uij s by their convex hulls. The first part of Section 4.5.3 discusses a method which extends an edge of a Uij. This method was designed by Milenkovic to work with the CLP evaluator [Mil, DM95]. The second part of Section 4.5.3 describes an edge extension technique, developed by Milenkovic [MD95], which depends on feedback from an overlap reduction evaluator.
Each of these combinatorially-based methods has an associated worst-case number of subdivisions which depends on the number of vertices in the Uij s. If restrictions are applied to a hypothesis, then the number of vertices in U may increase. In the third part of Section 4.5.3 we show that if we use the edges of the Uij^orig s instead of the Uij⁻ s within the evaluator and subdivision methods, then any valid restriction may be applied to a hypothesis without increasing the worst-case number of subdivisions.
Extending an Edge
The subdivision method of [Mil, DM95] selects a Uij ∈ U and extends an edge e of Uij into a line Le. The line Le is used to cut the polygon. This method is used with the CLP linear programming evaluator. In the absence of restrictions, the worst-case number of subdivisions is given by Theorem 4.5.1. In Theorem 4.5.1 (and later in Theorem 4.5.2), LP(a, b) is the time required to solve a linear program with a variables and b constraints.

Theorem 4.5.1 ([DM95]) If subdivision is implemented using cutting lines of the form Le described above, then a kNN algorithm which uses CLP evaluation runs in

O((mn)^(2k+1) LP(2k, O(2kmn + k²m²)))

time.
Milenkovic's proof of Theorem 4.5.1 rests on the fact that there are only O(mn) cutting lines of the form Le. The number of hypotheses which must be evaluated is shown to be the product of the number of cutting lines (each of which corresponds to a cutting plane in the 2k-dimensional configuration space for the hypothesis) times the number of different possible vertices in the 2k-dimensional configuration space. The number of different possible vertices is in O((mn)^(2k)) because each vertex is the intersection of 2k cutting planes.

The configurations associated with a hypothesis are contained within a convex cell of the 2k-dimensional configuration space. The cell is bounded by the cutting planes for the hypothesis. This subdivision method produces two sub-hypotheses associated with combinatorially different configurations, because their configurations are inside convex cells bounded by different sets of cutting planes.
Extending the Closest Edge
This section summarizes a different subdivision method, designed by Milenkovic [MD95], that drives the evaluator to a configuration which is combinatorially different. This algorithm, in combination with SSLPR and a modification of the OLP of Section 4.3, gives containment a provable running time bound, but it sacrifices practicality for the sake of this bound.

The technique receives a point umax from the OLP evaluator. The point umax corresponds to the pair of polygons with the most overlap. The cutting line is chosen to be the extension of the edge of the Uij which is closest to umax.

This type of combinatorially-based subdivision requires some modifications to evaluation. The modified OLP has only a single distance constraint for each Uij, and this constraint is determined by the closest edge of Uij to the current value of tj − ti inside the convex hull H(Uij). We impose an arbitrary constant bound on the number of iterations of overlap minimization to prevent it from solving an exponential number of OLPs.
With this subdivision method, the containment algorithm requires

O( ((6kmn + k²m²)^(2k+1) / k!) · LP(1 + 2k + k(k + 1)/2, O(kmn + k²m²)) )

time [MD95]. Here O((6kmn + k²m²)^(2k+1) / k!) is the number of local minima visited by the algorithm, and LP(1 + 2k + k(k + 1)/2, O(kmn + k²m²)) is the time for an evaluation.
If we modify the containment algorithm, we can claim the following running time, which has an exponent of 2k instead of 2k + 1:

Theorem 4.5.2 ([MD95]) There exists a kNN algorithm which uses combinatorially-based subdivision and requires

O( ((6kmn + k²m²)^(2k) / (k − 2)!) · LP(2k, 6kmn + k²m²) )

time.
Milenkovic's proof is sketched in [MD95]. For the proof, the containment algorithm's notion of a hypothesis is modified. Instead of using U as a hypothesis, it uses F = {Fij, 1 ≤ i < j ≤ k}, where Fij begins as the convex hull of the original Uij. All restriction and subdivision acts on F without modifying the Uij polygons. The restriction is basically the SSLPR of Section 3.4, but we do not replace Fij by H(Fij ∩ Uij), and, as a result, restriction automatically terminates after one iteration. Finally, when we split a hypothesis, we immediately restrict both sub-hypotheses before recursing. Evaluation acts on the Fij ∩ Uij polygons, and the "splitting edge" is selected from one of these (as usual).
This may seem like a small improvement over the "simple"

O((mn)^(2k+1) LP(2k, O(kmn + k²m²)))

LP-based algorithm [DM95]: the exponent on mn is smaller but the number of variables is larger. However, the new algorithm visits only local overlap minima (inside convex cells bounded by cutting lines). There are reasons to expect the number of such minima to be independent of m and n and to depend only on an exponential function of k.
Incorporating Restrictions
A containment algorithm which uses either of these combinatorially-based methods has
an associated worst-case number of subdivisions which depends on the number of vertices
in the Uijs. If restrictions are applied to a hypothesis, then the number of vertices in U
may increase, thus increasing the worst-case number of subdivisions. We show here that,
in each case, the same small modifications to evaluation and subdivision allow us to use
restrictions without increasing the worst-case number of subdivisions.
These modifications are based on Theorem 4.4.1. The first modification is within the
linear programming evaluator. We test each tj − ti for inclusion in Uij^orig, not Uij^−. The
second modification is that the cutting lines used for the subdivision are extensions of the
edges of the Uij^origs, not the Uij^−s. These modifications preserve the running times of the
algorithms, as shown by the following theorem and corollary.
Theorem 4.5.3 Theorem 4.5.1 holds for U^− if the evaluator tests each tj − ti for inclusion
in Uij^orig, and subdivision uses extensions of the edges of the Uij^origs.

Proof: Consider nonconvex polygon Uij^orig. In the algorithm referred to in Theorem 4.5.1,
in the worst case, Uij^orig is subdivided at every reflex vertex. The result is a set of convex
polygons. Now use the same partitioning lines to subdivide Uij^− ⊆ Uij^orig. Each subpolygon
of Uij^− is inside exactly one convex subpolygon of Uij^orig; hence the convex hull of each
subpolygon of Uij^− is in Uij^orig. Now Theorem 4.4.1 guarantees that a CLP evaluation which
tests tj − ti for inclusion in Uij^orig solves the containment problem for U^−. Thus, the number
of subdivisions for the worst case is the same. □
Corollary 4.5.4 Theorem 4.5.2 holds for U^− if the evaluator tests each tj − ti for inclusion
in Uij^orig and subdivision uses extensions of the edges of the Uij^origs.
Theorem 4.5.3 provides a way to incorporate restrictions into the exact, indirect kNN
algorithm of [DM95, Mil] without affecting the worst-case running time. The subdivision
method which results from applying Corollary 4.5.4 to the method associated with Theorem
4.5.2 appears in [MD95].
4.5.4 Distance-Based Method

Figure 4.5: Distance-based subdivision
Extending the closest edge gives containment a provable running time bound, but it
sacrifices practicality for the sake of this bound. Here we give a new cutting algorithm that
moves the configuration the maximum possible distance from the current configuration.
Because it fragments the regions much less, this algorithm is much more practical, although
we do not have a provable running time bound.
Distance-based subdivision attempts to move the tentative configuration as far as possible
from the current configuration. As before, let Pi + ti and Pj + tj be a pair of polygons
with maximum overlap under the current configuration. The subdivision algorithm selects a
line L through tij = tj − ti and uses L to cut Uij into Uij′ and Uij″. It selects the line Lmax that
maximizes the minimum distance d from tij to H(Uij′) and H(Uij″). This guarantees that
all points within distance d of tij are removed from further consideration in the algorithm
(see Figure 4.5). In other words, it eliminates some of the overlapping configurations which
are "close to" the current overlapping one. The line Lmax can be computed in O(nij log nij)
time, where nij = |Uij|, using a fairly intricate sweep algorithm. We summarize here a
simpler and more practical approximate algorithm with essentially the same order running
time.
First compute the visibility polygon³ V for H(Uij) ∩ Uij with respect to tij. If tij cannot
see the boundary of H(Uij) in any direction, then the maximum distance is zero, and
the algorithm chooses the line L through the nearest point to tij on the boundary of Uij.
Otherwise, let ab be an arc of the boundary of H(Uij) that is visible from tij. Parameterize
ab by a piecewise linear function γ(θ) for θ ∈ [0, 1]. Let the cut line L(θ) be the line through
γ(θ) and tij; it partitions Uij into Uij′ and Uij″. Let Uij′ be the portion of Uij containing a,
and Uij″ be the portion of Uij containing b. For each value of θ, let d′(θ) and d″(θ) be the
distances of tij to H(Uij′) and H(Uij″), respectively.
Theorem 4.5.5 For δ ≥ 0 and θ ∈ [0, 1], d″(θ + δ) ≤ d″(θ) and d′(θ + δ) ≥ d′(θ).
Proof: Let p be the first point beyond tij where the ray from γ(θ) through tij intersects V.
Similarly, let q be the first point beyond tij where the ray from γ(θ + δ) through tij intersects
V. We show that d″(θ + δ) > d″(θ) cannot occur. For θ, let r be the closest point of H(Uij″)
to tij. We show that r ∈ H(Uij″) for θ + δ. Now, all points of Uij on or to the right of L(θ + δ)
are in H(Uij″) for θ + δ. The points of Uij on or to the right of L(θ) are on or to the right of
the polyline γ(θ) tij q. Let w1 be the set of points on or to the right of the polyline
γ(θ) tij γ(θ + δ). Because tij can see all points on the arc ab, no points of Uij are in the
interior of w1. Therefore, all points of Uij on or to the right of L(θ) are also on or to the
right of L(θ + δ). If r is a point of Uij, then it must therefore be in H(Uij″) for θ + δ. If r is
not a point of Uij, then it is on a line between two points of Uij, and thus is also in H(Uij″)
for θ + δ. Therefore, d″(θ + δ) ≤ d″(θ). A similar argument establishes that d′(θ + δ) ≥ d′(θ). □
Because d′(θ) increases monotonically and d″(θ) decreases monotonically, the maximum
distance can be found to any degree of accuracy by binary search on θ. The running time
of the practical algorithm is O(nij log nij + narcs log(1/δ) log nij), where δ is the accuracy and
the number narcs of arcs is at most the number of components of Uij, which is generally much
less than nij.
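As an illustration of this binary search (ours, not the thesis implementation), the following sketch assumes hypothetical callables d_a and d_b that evaluate d′(θ) and d″(θ) for a candidate cut line; by Theorem 4.5.5 the maximum of min(d′, d″) occurs where the two monotone curves cross.

    # A minimal sketch of the binary search on theta (an illustration,
    # not the thesis code). d_a(theta) = d'(theta) is nondecreasing and
    # d_b(theta) = d''(theta) is nonincreasing (Theorem 4.5.5), so
    # max over theta of min(d_a(theta), d_b(theta)) lies at the crossing.
    def best_cut_parameter(d_a, d_b, accuracy=1e-3):
        lo, hi = 0.0, 1.0
        while hi - lo > accuracy:
            mid = 0.5 * (lo + hi)
            if d_a(mid) < d_b(mid):
                lo = mid   # crossing (and hence the maximum) lies to the right
            else:
                hi = mid   # crossing lies to the left
        return 0.5 * (lo + hi)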
Subdivision is only invoked if the OLP fails to resolve all overlaps; in this case, it is
often true in practice that tj − ti is very close to the boundary of Uij. It is often true that
the overlap is between two polygons, as opposed to between a polygon and the container.
Thus, the situation often resembles that shown in Figure 4.6. In this figure, the shaded
regions are the components of Uij^−.
In Section 5.3.2 we show that distance-based subdivision, together with the overlap
reduction evaluation technique, allows our hybrid algorithm to detect infeasibility better
than the naive CLP-based exact algorithm.
³ This can be found in O(n) time [GA81, Lee83, JS87], or in O(n log n) time if the polygon has holes
[O'R87]. (Restrictions tend to fragment U, so an individual Uij may have many components; this means
holes may be encountered.)

Figure 4.6: Common situation for p = tj − ti

4.6 Summary and Conclusions


In this chapter, we presented several evaluation and subdivision methods. We offer two
evaluation methods. Our greedy evaluator operates directly on the configuration spaces and
employs SSGR. The key to its effectiveness is a good choice of a shrinking point for each
Uij. Selecting a point which maximizes the intersection area of translated Uijs appears to
work well; in our experiments this strategy solves 60% of the containment problems without
any subdivision.
The other evaluation method is an overlap reduction evaluator. This is based on
improvements to an indirect containment linear program (CLP) evaluator. A configuration
which is feasible for the CLP evaluator may be overlapping. If the overlaps can be eliminated,
a valid configuration can be found. An overlap reduction evaluator is sometimes able
to solve containment problems without any subdivision. It is especially adept at finding a
valid configuration when a "loose-fitting" one exists.
Our discussion of evaluation methods concludes with the observation that, if the convex
hull of every Uij is within its unrestricted Uij , then a single CLP evaluation solves the
containment problem for the associated hypothesis. This test can be used to decide when
to stop subdividing a hypothesis inside any containment algorithm which is based on our
paradigm.
We offer four different possible subdivision goals: 1) reduce the "size" of the Uijs, 2)
make the convex hull of a Uij protrude less from its original Uij (this is motivated by the
observation of the previous paragraph), 3) move away from the current configuration, and
4) reduce the number of combinatorial possibilities for a valid configuration. We present
a size-based subdivision technique which abides by the first goal and reduces the maximal
dimension of the bounding boxes of the Uijs. We discuss two combinatorially-based methods,
developed by Milenkovic, which satisfy the third and fourth goals. We show that a
containment algorithm which uses either of these can use restrictions without increasing
the worst-case number of subdivisions. We introduce a distance-based subdivision method
which satisfies the third goal. This practical method eliminates some of the overlapping
configurations which are "close to" the current overlapping one.

Chapter 5
Algorithms

5.1 Introduction
Our techniques can be combined to form a variety of two-dimensional translational
containment algorithms which follow the restrict/evaluate/subdivide paradigm introduced
in Section 2.4. As we noted in Section 2.4, termination of the algorithm depends on the
compatibility of the techniques. In this chapter we present two ways of combining techniques
into kNN containment algorithms.
Our first algorithm is an approximate algorithm. That is, it finds an ε-approximate
configuration for kNN (refer to Definition 2.4 in Section 2.2). The algorithm uses the
geometric restrictions (SSGR) of Section 3.3, the greedy evaluator of Section 4.2, and the
size-based subdivision method of Section 4.5.2. All of these methods are direct, and so this
algorithm is a direct, approximate algorithm.
The approximate algorithm has additional structure beyond our paradigm. It uses
the containment characterization of Section 2.7 to create a set of k boundary-restricted
containment subproblems. Each of the subproblems is solved using restriction, evaluation,
and subdivision. Our approximate algorithm appears in [DM95, DM] as well as in this
chapter.
The second algorithm is an exact algorithm. It uses both the geometric restrictions of
Section 3.3 and the linear programming restrictions (SSLPR) of Section 3.4. It uses the
overlap reduction evaluation of Section 4.3. It employs the distance-based subdivision of
Section 4.5.4, so that when the polygons of a hypothesis are subdivided into more convex
pieces, this is done with the goal of moving the configuration as far as possible from the
current configuration. We consider this exact algorithm to be a hybrid algorithm because
it uses an indirect evaluation method, and both direct and indirect methods for restriction.
Our hybrid algorithm is also described in [MD95]. This algorithm currently yields our best
practical results, primarily due to its powerful evaluation method.
Ironically, the approximate algorithm operates directly on the actual Uij s, whereas the
exact algorithm uses approximations of the Uij s.
Both of our algorithms have practical running times for small numbers of items. The
container and items may be nonconvex. Most of our examples are drawn from apparel

manufacturing. For our apparel examples, the number of vertices in the container ranges
from 100 to 300, and the number of vertices in each item ranges from 4 to 100. We produce
a feasible containment problem from an apparel layout by removing a group of neighboring
items from the layout. To create an infeasible problem, we move some of the remaining
items a small distance into the container.
For ten or fewer items, the hybrid algorithm typically requires several minutes or less
on a 50 MHz SPARCstation. The current implementation of the approximate algorithm is fast
for up to four items. For k > 4, the current implementation of SSGR, which is based on
polygon set operations, slows the algorithm down considerably. Increasing the speed of our
SSGR implementation is a subject of future work (see Section 10.2.1). In our experiments,
the approximate algorithm evaluates fewer hypotheses than the faster hybrid algorithm, and so we believe the
approximate algorithm has the potential to be competitive with the hybrid algorithm.
Implementation note: Both of our algorithms require robust geometric techniques [LM93b,
MN89, Mil93a, Mil93b] in order to function well in practice.
5.1.1 Overview
This chapter contains one section for each of our two containment algorithms. Section 5.2
discusses our approximate algorithm and gives experimental results. Section 5.3 describes
our hybrid algorithm and presents experimental results. Section 5.4 summarizes this chapter
and states conclusions.

5.2 An Approximate Containment Algorithm


5.2.1 Introduction
We present a numerically approximate algorithm for finding an ε-approximate configuration
for kNN (refer to Definition 2.4 in Section 2.2). The value of ε is an input to
the algorithm. In industrial cutting applications, the containment solution acts as a guide
to a machine cutting out polygonal shapes from a sheet of material. If one chooses ε to
be a fraction of the cutter's accuracy, then the solution to the approximate containment
problem is sufficient for industrial purposes. In fact, for some cutting applications, CAD
vendors round polygon vertex coordinates to integers. In such cases, an overlap of up to
one unit, corresponding to .01 inches, is acceptable. For the applications we consider, the
ratio of the largest value to the smallest nonzero value is usually under 10,000; this makes
an approximate algorithm a practical choice.
Given a containment problem, we use our characterization of translational containment
to create a collection of k containment subproblems (see Section 2.7). We solve each subproblem
by first applying our steady-state geometric restrictions (see SSGR in Section 3.3)
and then using the greedy evaluator of Section 4.2. If necessary, we subdivide the configuration
space, using size-based subdivision, to generate new subproblems. The running time
of our algorithm is O((1/ε)^k log(1/α) k^6 s log s), where s is the largest number of vertices in
a polygon restricted by SSGR (see the discussion of size analysis in Section 3.2.4) and α is
the fractional area shrinkage per SSGR iteration. In the
worst case s can be exponential in the size of the input, but, in practice, it is usually not
more than quadratic.
In a sense, our algorithm substitutes subdivision for the iteration used in Avnaim's exact
3NN algorithm (see [Avn89] and the discussion of related work in Section 2.6). In prac-

tice, much iteration is required in an exact algorithm before hitting the right combination
of edges. If the answer is "no", one must check all combinations of edges. Our SSGR
alone can often detect infeasibility (see Section 3.3.5). SSGR together with our greedy
evaluation method can often find a valid configuration without requiring the algorithm to
use subdivision (see Section 5.2.3). While restriction and subdivision often perform better
than iteration, iteration can have a much better theoretical worst-case running time
bound. Even when subdivision is unnecessary, the running times of our algorithm are in
O(m^28 n^24 log mn) for 3CN and in O(m^60 n^40 log mn) for 3NN. These seem much worse than
the iteration-based methods, but in terms of the size s of the largest polygon generated by
SSGR, the running times are both in O(s log s) as opposed to O(s^5 log s) for the iteration-based
algorithms. As mentioned in Section 2.6, it is not clear that the worst-case results
for consecutive polygon operations are attainable. The tests we have run indicate that the
running times of our algorithm are much faster than the best times we would estimate for
the iteration-based solutions to 3NN.
The high-level description of our algorithm in the second part of this subsection draws
its structure from our characterization of containment, because the characterization allows
us to decompose kNN into a set of subproblems. The algorithm solves each subproblem
by first restricting U using SSGR, testing for a solution using the greedy evaluation of
Section 4.2, and subdividing the configuration space if a solution is not found. Section 5.2.2
establishes the correctness and running time of our algorithm. Section 5.2.3 discusses our
implementation and also presents containment examples.
Applying the Characterization
Our philosophy is to avoid iteration as much as possible because iteration contributes
to the worst-case cost of the algorithm, particularly in terms of s-analysis. For instance,
Avnaim and Boissonnat's algorithm for 3NP [AB87, Avn89] has running time O(s log s),
and Avnaim's algorithm for 3NN [Avn89] has running time O(s^5 log s). The algorithm for
3NP is faster because it avoids iterating over the edges and vertices of polygons. Like the
algorithm for 3NP, we replace iteration with restriction as much as possible.
There is another type of iteration one must consider. This iteration is based on the
characterization of Theorem 2.7.1. To apply the entire characterization, one needs to choose
contacting pairs in a number of ways which is exponential in k, roughly k^{4k}. If one were to
generalize Avnaim's algorithm for 3NN [Avn89] to an algorithm for kNN, the factor of k^{4k}
would not be a significant part of the cost compared to the factor dependent on m and n
(roughly m^{2k}(mn)^{2k}). For our approach, however, a factor of k^{4k} is significant.
It is possible to use restriction and subdivision and avoid iterating over any contacting
pairs. In fact, this is what we initially implemented. However, we feel that considering
contacting pairs is potentially very useful. For instance, if we invoke a condition weaker
than the second part of Theorem 2.7.1 to assume that a particular Pj is in contact with P0 ,
then we can replace U0j with its boundary. Note that this boundary restriction covers the
cases where P0 has either a single or a double contact with some Pj . Although we restrict
U0j to its boundary, we are not iterating over vertices or edges of U0j , unlike Avnaim's 3NN
algorithm.
This partial characterization implies a significant restriction. However, the disadvantage
is that we must iterate over all k choices of j by solving k subproblems¹. Our current
¹ However, as noted in Section 2.7, these subproblems can be solved in parallel. In our current
implementation we solve them sequentially.
conjecture is that iterating over O(k) choices of (partial) characterization will allow us to
avoid subdivision often enough to offset the extra factor of k in cost. We present data in
Section 5.2.3 to support this conjecture in the case of k = 4.
The boundary restriction does not add an extra factor of k in cost for containment
problems in which all the items are identical. In this case, all k subproblems are identical,
and so only one subproblem must be solved.
The Algorithm
In the algorithm below, P and U are set initially to the values given in Section 2.2. If
at any point a valid configuration is found, execution terminates with success. We assume
that the tolerance test TOO-SMALL() is used within GREEDY-EVALUATE(). The ε used
in TOO-SMALL() is the same ε used within SSGR. BUILD-CONFIGURATION is given in
Section 4.2.1. Note that in order to perform a boundary restriction on a U0j, we need only
intersect it with the boundary of U0j^orig. U^j represents a copy of U for the jth processor,
and U0j^j is processor j's configuration space for Pj with respect to P0.
APPROX-CONTAIN(U, k)
    U^orig ← U
    U ← SSGR(U, k)
    if (some Uij ∈ U = ∅) return NO
    if (TOO-SMALL(U, k))
        return BUILD-CONFIGURATION(U, k)
    /* Search for a boundary-restricted solution */
    for (j = 1 to k) do in parallel
        U^j ← U
        U0j^j ← U0j ∩ BOUNDARY-OF(U0j^orig)
        return SOLVE-SUBPROBLEM(U^j, k)

SOLVE-SUBPROBLEM(U, k)
    U ← SSGR(U, k)
    if (some Uij ∈ U = ∅) return NO
    status ← GREEDY-EVALUATE(U, k)
    if (status = YES) return YES
    if (status = NO) return NO
    U′, U″ ← SIZE-SUBDIVIDE(U, k)
    return SOLVE-SUBPROBLEM(U′, k) or SOLVE-SUBPROBLEM(U″, k)

TOO-SMALL(U, k)
    V ← element of {U0j | 1 ≤ j ≤ k} with maximum bounding box dimension
    B ← bounding box of V
    if (height(B) < ε/√2 and width(B) < ε/√2) return YES
    return NO
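A brief note on the ε/√2 threshold in TOO-SMALL: once the height and width of the bounding box B are both below ε/√2, the diameter of B (and hence of every U0j) is below ε, which is exactly what the argument for BUILD-CONFIGURATION in Claim 5.2.1 below requires:

$$\mathrm{diam}(B) \;=\; \sqrt{\mathrm{height}(B)^2 + \mathrm{width}(B)^2} \;<\; \sqrt{\varepsilon^2/2 + \varepsilon^2/2} \;=\; \varepsilon.$$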

103
5.2.2 Analysis
Correctness
Claim 5.2.1 If a valid configuration exists, then the algorithm will find an ε-approximate
solution.
Proof: The algorithm converges because each subdivision reduces the height or width of
the bounding box of V by a factor of two, stopping when the height and width reach the
tolerance ε/√2.

If a solution exists, then our characterization guarantees that one of the boundary-restricted
subproblems has a solution. We claim that our algorithm does not throw away
any solutions to these subproblems. This is because Theorem 3.3.1 guarantees that SSGR
is a valid restriction.

Now we argue that any solution which is found is an ε-approximate solution. If GREEDY-EVALUATE
finds a solution, then Theorem 4.2.1 guarantees that solution contains no overlap;
hence it is a valid configuration. A valid configuration is an ε-approximate solution.

The only other way the algorithm can find a solution is through a call to BUILD-CONFIGURATION.
This occurs when the height and width of the bounding box of V
are both < ε/√2. In this case, since V has the maximum bounding box dimension, the
diameter of every U0j is less than ε. In that case, we argue that no point of any polygon
is more than 2ε inside of any other polygon's boundary. None of the polygons overlap the
container, because each tj chosen in BUILD-CONFIGURATION is in U0j. Thus, we need
only bound the overlap between a pair of polygons Pi and Pj, 1 ≤ i ≠ j ≤ k.

Any uij ∈ Uij, having survived SSGR, can be expressed as tj − ti, where ti ∈ U0i and
tj ∈ U0j. If Pi is placed at ti and Pj is placed at tj, Pi and Pj do not overlap. Now let
vi and vj be the points of U0i and U0j chosen in BUILD-CONFIGURATION. Let D(p, q)
denote the distance between points p and q. Since the diameter of any U0i is less than
ε, we have D(vi, ti) < ε and D(vj, tj) < ε. Translating Pi from ti to vi can therefore cause
a point of Pi to be at most a distance of ε inside Pj. Translating Pj from tj to vj can at
most double this overlap. Therefore, the maximum overlap between two polygons in the
configuration {v1, v2, ..., vk} is < 2ε. □
Running Time
Here we calculate the asymptotic running time of our algorithm. We assume:
• s is the largest number of vertices of any polygon generated by SSGR (see the discussion
in Section 3.2.4),
• 2ε is the maximum overlap between two polygons,
• α is the fractional shrinkage in area required by each iteration of SSGR.

Theorem 5.2.2 The running time of our algorithm is in O((1/ε)^k log(1/α) k^6 s log s).
Proof: Let S be the number of subdivisions and R be the running time of SSGR. The
running time is dominated by S k³ R. To find S, we first assume that the maximum height
or width of the bounding box of V is q. The height of the subdivision tree is therefore
O(k(log₂ q − log₂ ε)). The k term corresponds to the total number of polygons that are
subdivided in the worst case. The number of nodes in the subdivision tree is therefore

$$S \;=\; O\big(2^{k\log_2(q/\varepsilon)}\big) \;=\; O\big((q/\varepsilon)^k\big) \;=\; O\big((1/\varepsilon)^k\big).$$

From Section 3.3.3, R is in O(log(1/α) k³ s log s). The total running time is therefore

$$S\,k^3\,R \;=\; O\big((1/\varepsilon)^k \log(1/\alpha)\, k^6\, s \log s\big). \qquad \Box$$
5.2.3 Results
This section describes our implementation of the algorithm for approximate containment.
We set α equal to 0.1 and ε to 0.5. SSGR is used for k > 3. We have found that for k ≤ 3 the
algorithm runs so quickly and so rarely requires subdivision that one iteration of geometric
restriction is all that is needed.
General Results
Table 5.1 contains running times for our algorithm for a variety of examples for 3 ≤
k ≤ 10. All running times given in the table are to the nearest second on a 28 MHz
SPARCstation². The setup time to calculate Uij polygons is not included in the running
time. Infeasible examples are identified by (*). All infeasible examples are ones for which
each pair of polygons fits into the container, and the area of the container is larger than
the total area of the polygons. Our restriction process enables the algorithm to often detect
infeasibility very quickly.
Figure 5.1 through Figure 5.4 show configurations achieved for four examples using our
algorithm. The tightness of the layouts is typical. In each figure, the items placed by our
containment algorithm have dark shading. Figure 5.1 and Figure 5.2 are 3NN examples.
Figure 5.3 is a 4NN example. Figure 5.4 is a 5NN example.

Figure 5.1: 3NN example: time = 6 seconds


² SPARCstation is a trademark of SPARC International, Inc., licensed exclusively to Sun Microsystems,
Inc.

Type Time Type Time Type Time
3CC 1 3CC 1 3CC 0
3CC* 0 3CC 1 3CN 3
3CN 1 3CN 1 3CN 1
3CN 2 3CN 3 3CN 2
3CN 2 3CN 1 3CN 1
3CN 4 3CN 1 3CN* 2
3CN* 1 3CN* 0 3CN* 2
3CN* 1 3CN* 0 3CN* 1
3CN 2 3CN 2 3CN 3
3CN 4 3CN 2 3CN 1
3CN 1 3CN 2 3CN 2
3CN 2 3CN 2 3CN 3
3CN 2 3CN 2 3CN* 27
3CN 2 3NC 1 3NN 5
3NN 5 4CN 26 4CN 30
4CN* 3 4CN 5 4CN 19
4CN 192 4CN 43 4NN 111
4NN 111 4NN 90 4NN* 10
5CC 126 5NN 60 5NN* 26
6NN 93 6NN 300 6NN* 49
7CC 823 10CC 600
Table 5.1: Running times in seconds ((*) denotes infeasible example)

Figure 5.2: 3NN example: time = 5 seconds

Figure 5.3: 4NN example: time = 90 seconds

Figure 5.4: 5NN example: time = 60 seconds

Table 5.2 below presents running times, in seconds, for the approximate algorithm on
a different, small collection of examples for 5 ≤ k ≤ 7. These tests are for a 50 MHz
SPARCstation. For each example, we also give the number of hypotheses examined by the
approximate algorithm. Judging the approximate algorithm purely by its running time can
be deceptive. Although the algorithm takes anywhere from 3 to 45 minutes in these examples,
it is only examining 1 or 2 hypotheses. Most of the running time is consumed by SSGR,
whose current implementation is roughly ten times slower than that of SSLPR. Faster
implementations of SSGR will directly lead to faster running times for the approximate
algorithm. In fact, for an experiment in Section 5.3.2, the number of hypotheses examined
by our fastest (hybrid) algorithm is greater than the number of hypotheses examined by
the approximate algorithm. We therefore believe that, given fast enough polygon set operations,
the approximate algorithm has the potential to be competitive with the hybrid
algorithm.

Type Running Time Number of Hypotheses


5NN 428 1
5NN 189 1
5NN 317 1
6CN 2540 2
6CN 2238 1
7CC 823 1
Table 5.2: Running time, in seconds, and number of hypotheses examined by approximate
algorithm

Solving Subproblems
In our comparison of evaluation heuristics for choosing a Uij shrinking point, we reported
in Table 4.1 that the method of maximizing the intersection area solved 60% of the 30
examples without subdivision. Here we maximize the intersection area of translated Uijs,
and we compare the effect of using k subproblems versus not using subproblems. That is,
we evaluate the effectiveness of applying a partial containment characterization.
Our examples use k = 4. Figure 5.5 shows an example of a feasible 4NN problem solved
using the characterization. The darkly shaded items numbered 5, 62, 63, and 61 were
placed. Note that three of the four items are in contact with the boundary of the container.
We first compare the sequential cost, in terms of the number of hypotheses examined, for
infeasible as well as feasible examples. Results are shown in Table 5.3. For the 46 examples,
the subproblem approach examines a total of 162 hypotheses, whereas the non-subproblem
approach examines 345 hypotheses. These results suggest that, for k = 4, applying the
partial characterization to the sequential algorithm is beneficial.
When all the items are identical, the k subproblems generated by the characterization
are all identical, and so we need only solve one of them. The results above contain 21
problems for which all items are identical. The remaining 25 problems are for non-identical
items. The results for identical item problems are listed separately in Table 5.4. The total
number of hypotheses examined in the characterization case is 115, whereas the number
Figure 5.5: 4NN feasible example solved using partial characterization

Type Subproblems No Subproblems Type Subproblems No Subproblems
4CC 1 1 4CC 1 1
4CC 1 1 4CN 1 1
4NN 1 2 4NN 1 1
4NN 1 1 4NN 2 1
4NN 3 1 4CC 1 1
4NN 1 1 4CN 1 1
4CN 3 1 4CN 8 1
4NN 1 1 4CN 1 116
4CN 1 1 4CN 4 1
4CN 8 26 4CN 1 1
4CN 1 6 4CN 1 1
4CN 1 1 4CN 1 1
4CN 1 1 4CN 3 2
4CN 2 1 4CN 3 1
4CN 1 4 4NN 4 1
4NN 2 1 4CN 1 1
4CN* 5 5 4CN* 2 2
4CN* 7 14 4CN* 5 11
4CN* 1 11 4CN* 2 7
4NN* 3 3 4NN* 1 1
4CC* 1 1 4NN* 1 1
4NN* 1 1 4CC* 65 93
4NN* 2 2 4CN* 3 11
Table 5.3: Number of hypotheses examined by sequential algorithm ((*) denotes infeasible
example)

examined in the non-characterization case is 174. For the non-identical item problems,
the total number of hypotheses examined in the characterization case is 47, whereas the
number examined in the non-characterization case is 171. Thus, for identical as well as
non-identical item problems, the characterization examines fewer hypotheses for k = 4 in
our experiments.

Type Subproblems No Subproblems Type Subproblems No Subproblems


4CN 1 1 4CN 1 1
4CN 1 1 4CN 3 2
4CN 2 1 4CN 3 1
4CN 1 4 4NN 4 1
4NN 2 1 4CN 1 1
4CN* 5 5 4CN* 2 2
4CN* 7 14 4CN* 5 11
4CN* 1 11 4CN* 2 7
4NN* 3 3 4NN* 1 1
4NN* 2 2 4CN* 3 11
4CC* 65 93
Table 5.4: Number of hypotheses examined for identical items ((*) denotes infeasible ex-
ample)

Intuitively, we expect that a single boundary-restricted subproblem will not examine
more hypotheses than the non-subproblem approach because we are eliminating so much
of the search space. Often this is the case. However, our experiments reveal that it is
possible for a boundary-restricted subproblem to evaluate (a few) more hypotheses than a
problem in which the boundary restriction is not applied. We believe that this phenomenon
is related to the evaluation method. Recall that our evaluation method maximizes the
intersection area of Uij polygons. If a general polygon contains polygonal regions plus
isolated edges and points, the intersection heuristic only considers the polygonal regions.
If no polygonal regions are present, we currently resort to a simple alternative heuristic.
Thus, if the boundary restriction causes many of the Uij s to consist of only isolated edges
and points, we currently lose the power of the evaluation heuristic. Section 10.2.1 suggests
a possible remedy for this.
Assessing the cost-effectiveness of applying the partial characterization for k > 4 is
a subject for future work. We believe that, as k increases, the partial characterization
will continue to be cost-effective for identical item problems because only one subproblem
must be examined. The partial characterization is less likely to have an advantage in
sequential tests for non-identical items as k increases, due to the increase in the number of
subproblems. Once the critical value of k is identified for which this advantage is lost, a
detailed comparison of the parallel cost will be necessary in order to justify using the partial
characterization for values of k larger than the critical one. Section 2.4 discussed one way
in which coarse-grained parallelism can be applied to our algorithm. A queue of hypotheses
can be maintained. Each hypothesis represents an independent containment subproblem
which is created via subdivision. Whenever a processor is available, it operates on the
next hypothesis in the queue. The subproblems generated by the characterization are also
independent from each other, and so the hypotheses they generate are independent and can
be added to the queue. Note that applying the characterization creates subproblems which
are all available at the start of the algorithm.
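To make the queue discipline concrete, here is a small sketch (ours, not the thesis implementation) of coarse-grained parallel hypothesis processing; solve_hypothesis is a hypothetical stand-in for one restrict/evaluate/subdivide step on a single hypothesis.

    # A minimal sketch of coarse-grained parallel hypothesis processing
    # (an illustration, not the thesis implementation). solve_hypothesis
    # is hypothetical: it restricts and evaluates one hypothesis and
    # returns ("YES", config), ("NO", None), or ("SPLIT", [h1, h2]).
    from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

    def parallel_contain(initial_hypotheses, solve_hypothesis, workers=4):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            pending = {pool.submit(solve_hypothesis, h) for h in initial_hypotheses}
            while pending:
                done, pending = wait(pending, return_when=FIRST_COMPLETED)
                for f in done:
                    status, payload = f.result()
                    if status == "YES":
                        return payload            # valid configuration found
                    if status == "SPLIT":
                        for h in payload:         # enqueue sub-hypotheses
                            pending.add(pool.submit(solve_hypothesis, h))
        return None                               # all hypotheses infeasible

Note that the subproblems produced by the characterization can seed initial_hypotheses directly, since they are all available at the start.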
We remark that the number of subproblems which must be examined is smaller, in
theory, for kCN examples than for kNN. This is due to the following theorem and corollary.

Theorem 5.2.3 If an instance of kCN for k ≥ 3 has a valid configuration, then that
instance has a valid configuration in which at least three items are in contact with the
boundary of the container.
Proof: Consider a valid configuration for an instance of kCN, for k ≥ 3. Milenkovic
shows [Mil] that, given a set P of non-overlapping polygons, at least one of them can move
unobstructed to infinity in the direction (−1, 0). This "disassembly lemma" works for any
direction. Thus, for every direction d, there is some polygon P ∈ P such that P can move
to infinity in direction d. Let S be the subset of polygons in P which can move to infinity
in at least one direction. Let S′ be a minimal subset of S: one which covers all directions
using the smallest number of polygons. We now show that S′ contains at least two polygons.
First, we observe that either every polygon in P can move to infinity in some direction (in
which case we are done), or at least one polygon Q cannot. It is easy to show that one
convex polygon alone cannot "surround" another convex polygon, and so it takes at least
two polygons to prevent Q from moving to infinity. Thus, S′ contains at least two polygons.
Since S′ contains at least two polygons and all directions are covered by polygons in S′, at
least two polygons P1 and P2 in S′ share a common direction d. This means that we can
move both P1 and P2 in direction d until each hits the boundary of the container. Because
the direction −d is independent of direction d and there are at least three polygons, we can
apply the disassembly lemma to P \ ({P1} ∪ {P2}) to move a third polygon P3 in direction
−d until it hits the container. □
We note that three items is "tight", because there are sets of four convex polygons for
which only three can be moved to infinity.
Corollary 5.2.4 When solving an instance of kCN using the partial characterization,
examining k − 2 subproblems is sufficient.
Corollary 5.2.4 implies that examining one subproblem is sufficient for 3CN examples.
This, however, does not yield the fastest known algorithm for 3CN. Milenkovic [Mil]
observed that, for a valid 3CN configuration, each of the three polygons can be moved to the
boundary of the container. This forms the basis of his O(m³n log mn) time 3CN algorithm.
Below we compare the performance of this very fast 3CN algorithm with our approximate
algorithm on 3CN examples.
Comparison with Exact Algorithm for Convex Items
Table 5.5 compares running times for Milenkovic's 3CN exact algorithm [DML94, Mil]
with our approximate algorithm on 3CN examples. Infeasible examples are identified by
(*). The exact algorithm tests separating line orientations between pairs of convex polygons.
Because both algorithms are so fast, timing comparisons are based on 100 calls to
each. For feasible examples, the exact algorithm performs about seven times faster than
the approximate algorithm, on average. For infeasible examples, the exact algorithm is only
about twice as fast, on average. This is no accident. When the exact algorithm was first
designed, the approximate algorithm's powerful restriction process gave it a considerable
advantage over the exact algorithm for infeasible examples. The exact algorithm was
redesigned with the goal of detecting infeasibility quickly in mind. The major part of the
redesign was a multiple resolution method for evaluating sets of separating lines.

Example Exact Approximate Example Exact Approximate


1 11 79 6* 15 34
2 14 107 7* 21 68
3 26 171 8* 16 35
4 22 230 9* 15 27
5 10 55 10* 30 38
Table 5.5: Exact vs. approximate running times for 3CN examples (100 calls each)

Subdivision
The vertical and horizontal cut method of Section 4.5.2 has been implemented. For our
applications, CAD vendors round polygon coordinates to integers, so an ε of 0.5 units is
sufficiently small. It is rare for the algorithm to subdivide down to tolerance. Often, no
subdivision is required at all to find a valid configuration. This is because we have a heuristic
for selecting a shrinking point for each Uij which works well in practice (see Section 4.2.3).
Subdivision is rarely required for infeasible situations, because our restriction process often
detects infeasibility early in the algorithm.

5.3 A Hybrid Containment Algorithm


5.3.1 Introduction
Our hybrid algorithm is the result of significant improvements to the restriction,
evaluation, and subdivision techniques of an existing indirect algorithm of Milenkovic [Mil].
Milenkovic's indirect algorithm, in its original form, was slower than our direct algorithm
for infeasible problems, and its practical upper bound on k was k ≤ 3. The improved
indirect algorithm is currently the fastest containment algorithm for 4 ≤ k < 10.
Our first improvement to the indirect algorithm improves its ability to detect infeasibility
by proving that restrictions, when used in the indirect algorithm, do not increase the
algorithm's worst-case running time if we perform an appropriate test within the evaluation
step and subdivide using an appropriate set of cutting lines. This improvement is discussed
in detail in Section 4.5.3. It allows us to use both SSGR (Section 3.3) and
SSLPR (Section 3.4) in the algorithm without increasing the worst-case running time. We
use some SSGR restriction at the top level, and SSLPR at the hypothesis level.
The second improvement to the indirect algorithm improves the ability of the evaluator
to locate a valid configuration. We apply the work of Li and Milenkovic on overlap
resolution [Li94, ML, LM93c, LM93a] to improve the evaluation step. Their overlap resolution
algorithm is an indirect algorithm. It uses inner convex approximations, an approach which
complements the outer approximations of the indirect containment algorithm. The success
of this experiment led Milenkovic to develop the overlap reduction evaluator described in
Section 4.3. We use that evaluator in the hybrid algorithm.
A combinatorially-based subdivision algorithm yields a running time of

$$O\left(\frac{(6kmn + k^2m^2)^{2k}}{(k-2)!}\,\mathrm{LP}\big(2k,\; 6kmn + k^2m^2\big)\right)$$

where LP(a, b) is the time to solve a linear program with a variables and b constraints,
although the actual running time depends on the number of local minima, which is likely
to be far smaller than (mn)^{2k}. An alternative heuristically-based subdivision algorithm
(the distance-based subdivision method of Section 4.5.4) gives containment an even better
running time in practice, and is guaranteed to not create new local overlap minima.
Our practical distance-based subdivision technique is our third improvement to the
indirect algorithm. This method attempts to maximize the distance from the current evaluation
to the convex approximations of the subdivided configuration spaces. This technique is
described in Section 4.5.4. It moves the evaluator away from the current configuration so that
evaluation does not get "stuck in a rut". It does this by eliminating some of the overlapping
configurations which are "close to" the current overlapping one.
We call our improved algorithm a hybrid algorithm because it uses indirect methods
for evaluation, developed by Li and Milenkovic, and both direct and indirect methods for
restriction. This algorithm is also described in [MD95].
The hybrid algorithm, in addition to being a formidable containment algorithm, has
the property that a partial containment solution from it is valuable. This is because each
evaluation step produces either a non-overlapping solution or an overlapping arrangement
for which a linear program has reached a local minimum. As a result, the hybrid algorithm,
when terminated early, is the most powerful heuristic known for generating a local overlap
minimum.
In our experiments, the new algorithm clearly outperforms purely geometric containment
algorithms, such as the approximate algorithm of Section 5.2. However, it does this by
combining rather than replacing geometric techniques with linear programming. This "hybrid"
algorithm demonstrates the manner in which linear programming can greatly increase the
power of geometric algorithms.
High-Level Algorithm Description
The algorithm starts with a queue Q of hypotheses containing a single hypothesis: the
set of original Uij, 0 ≤ i < j ≤ k, as calculated using Equation 2.1. We assume that this
hypothesis has been restricted using some geometric restrictions.
HYBRID-CONTAIN(Q, k)
    while (Q is not empty)
        U ← DEQUEUE(Q)
        U ← SSLPR(U, k)
        if (some Uij ∈ U = ∅) continue while loop
        status ← OVERLAP-EVALUATE(U, k)
        if (status = YES) return YES
        if (status = NO) continue while loop
        U′, U″ ← DISTANCE-BASED-SUBDIVIDE(U, k)
        ENQUEUE(U′, Q)
        ENQUEUE(U″, Q)
    return NO
5.3.2 Results
The examples we selected are primarily from the apparel industry. We use SSLPR only
at the deepest levels of hypothesis splitting, for two reasons. The first is cost. Although
SSLPR is several times faster than SSGR, it is not necessarily cost-effective to invoke it
prior to evaluating each hypothesis. The second is that SSLPR restricts polygons more as
their convex hulls become better approximations to them.
Comparison with Other Algorithms
We compared the performance of our hybrid algorithm with two algorithms: the approximate
algorithm of Section 5.2 and the naive exact kNN algorithm of [DM95, Mil], which is
based on the CLP. Table 5.6 gives running times for feasible containment problems. Running
times are expressed in seconds on a 50 MHz SPARCstation. Table 5.7 gives running
times for infeasible containment problems. Our new algorithm clearly outperforms both of
the other algorithms. For five or more polygons, the difference is dramatic. Figure 5.6 and
Figure 5.7 depict configurations obtained by our algorithm.

The approximate algorithm uses SSGR, whose current implementation runs about ten
times more slowly than SSLPR. A fairer comparison between the approximate and
hybrid algorithms counts the number of hypotheses examined instead of simply the running
time. Table 5.8 shows the results of such an experiment. The approximate algorithm
examines a total of 76 hypotheses, whereas the hybrid algorithm examines 100. From
this perspective, the approximate algorithm appears more powerful. For the 7CC example,
both algorithms find a solution without any subdivision; each evaluates only one hypothesis.
However, the approximate algorithm requires 823 seconds, whereas the hybrid algorithm
finds the solution in only 7 seconds. The time difference here is due to the difference in the
speed of SSGR and SSLPR, and to the fact that the greedy evaluator invokes SSGR each
time a Uij is shrunk to a point.
SSGR vs. SSLPR
Here we discuss the effectiveness of SSGR and SSLPR within our new algorithm.
We tested our algorithm on 21 different feasible and infeasible containment examples for
3 ≤ k ≤ 8. For each example, we varied the type of restriction used: 1) no restriction, 2)
Type APPROX NAIVE LP HYBRID
3NN 19 22 9
3NN 2 4 4
4CC 25 1 2
4CC 21 10 11
4CN 113 84 4
4NN 155 19 33
4NN 80 24 35
4NN 35 43 15
4NN 319 89 10
5CC 126 6 3
5NN > 5min. > 5min. 24
5NN 428 ? 16
5NN 189 ? 36
5NN 317 ? 44
6CC 75 53 5
6CN > 5min. > 5min. 184
6CN 2540 ? 372
6CN 2238 ? 1031
6NN > 5min. > 5min. 86
6NN > 15min. > 15min. 593
7CC 823 162 7
7CN > 5 min. > 5min. 72
7NN > 10 min. > 10min. 176
8CC > 5 min. > 5min. 34
10CC ? ? 14
10NN ? ? 183
Table 5.6: Running times for feasible examples in seconds

SSGR, 3) SSLPR, and 4) SSGR followed by SSLPR. In all cases, we began the containment
algorithm by applying Equation 3.1, which bounds the unbounded Uij s.
Let N be the number of hypothesis evaluations required if we use no restrictions at the
individual hypothesis level. Let L be the number of evaluations if we use SSLPR for each
hypothesis. Let G be the number of evaluations if we use SSGR for each hypothesis. Finally,
let GL be the number of evaluations if we follow SSGR by SSLPR for each hypothesis.
Table 5.9 shows values of N, L, and G for some of our examples.
The table omits GL because it is almost always true that G = GL. However, in one case,
SSLPR eliminated something SSGR could not eliminate. This occurred after many levels
of subdivision.

In all 21 cases, min(G, L, GL) ≤ N. This is true of both feasible and infeasible cases.
The difference is often dramatic (e.g. for one 4NN infeasible example, N = 237, L = 21, and
G = 7). It is always the case that both G ≤ N and L ≤ N. It is often true that |L − G| is
small. In 17 of 21 cases, G < L. The total of G for the 21 examples is only 46, whereas the
total for L is 117.
Sometimes G is slightly larger than L. We believe that this phenomenon is due to a
subdivision method we experimented with for these tests. This method isolates an individual
component of a Uij. If restriction produces many components, just isolating one at a time
can create many hypotheses. In this case, a cutting algorithm which moves the configuration
the maximum possible distance from the current configuration might be preferable. It is
possible that the best subdivision strategy might depend on which type of restriction is
used at the hypothesis level. For example, since SSLPR is most effective when a polygon is
similar to its convex hull, subdivision in this case should maximize the convexity of Uij′ and
Uij″. For SSGR, perhaps subdivision should minimize the amount of "closure" of the two
subpolygons. We are still evaluating subdivision strategies for our hierarchical containment
algorithm.
Our results suggest that both SSGR and SSLPR can dramatically reduce the number
of hypotheses which must be evaluated. Using SSLPR causes the algorithm to evaluate
more hypotheses than SSGR in our experiments. However, for our current implementations
of SSGR and SSLPR, SSLPR is the more cost-effective restriction in this context because
SSLPR is approximately ten times faster than SSGR.
Subdivision
In order to gauge the success of our subdivision strategy, we compared the number of
hypotheses evaluated by the naive exact algorithm versus the hybrid algorithm on infeasible
examples. For this comparison, we disabled the high-level geometric restriction as well as
the hypothesis-level restrictions. The subdivision method we tested uses distance-based
subdivision when the Uij consists of a single component. If the Uij to be subdivided has
multiple components, we isolate the component which is nearest to the "bad point". The
results are shown in Table 5.10. In every case the hybrid algorithm requires fewer evaluations
than the naive exact algorithm. This suggests that a subdivision strategy which uses the
information from a local overlap minimum can be effective.

Figure 5.6: 4NN example: time = 15 seconds

Figure 5.7: 7NN example: time = 176 seconds

5.3.3 Conclusion
We now have both geometric and linear programming-based methods for restriction,
evaluation, and subdivision. Each appears to have strengths in different areas, and they
can be combined to achieve good results. In particular, geometric restrictions are very
helpful at the "top level" when the Uij are very nonconvex, and we definitely apply them
there. This work in no way shows the superiority of either approach, but instead shows how
well they can work together.

5.4 Summary and Conclusions


This chapter has presented two kNN algorithms based on our restrict/subdivide/evaluate
paradigm. The first is a numerically approximate algorithm because it produces an ε-approximate
configuration. It uses a partial containment characterization to produce a set
of k subproblems. It uses SSGR, a greedy evaluator, and size-based subdivision to solve
each subproblem.

The second algorithm is a hybrid algorithm. It uses both geometric and linear programming
restrictions, an overlap reduction evaluator, and distance-based subdivision. SSLPR
is used at the hypothesis level. Our experiments suggest that distance-based subdivision
helps the hybrid algorithm detect infeasibility. This algorithm is superior to the original
CLP-based algorithm of Milenkovic.
We give running times for both of these algorithms for a variety of kCC, kCN, and
kNN containment examples, for 3 ≤ k ≤ 10. The hybrid algorithm is currently faster
than the approximate algorithm for k > 4. The hybrid algorithm typically solves our
containment examples in several minutes or less on a 50 MHz SPARCstation, for 3 ≤ k ≤ 10,
4 ≤ m ≤ 100, and 100 ≤ n ≤ 300. We believe that the evaluation methods of the
two algorithms are of roughly the same effectiveness, and that this difference in speed is
primarily due to the fact that the current implementation of SSLPR is roughly ten times
faster than the current implementation of SSGR. This conclusion is supported by the fact
that the number of hypotheses evaluated by the approximate algorithm in our experiment
is less than the number of hypotheses evaluated by the hybrid algorithm. Speeding up our
implementation of SSGR is a subject of future work (see Section 10.2.1). We believe that,
given fast enough polygon set operations, the approximate algorithm has the potential to
be competitive with the hybrid algorithm.

Type APPROX NAIVE LP HYBRID
4NN 47 152 135
4NN 138 3037 493
4NN > 10 min. > 10 min. 416
5NN > 10 min. > 10 min. 103
Table 5.7: Running times for infeasible examples in seconds

Type APPROX HYBRID


4CC 1 1
4CC 1 1
4CC 1 1
4CN 1 2
4CN 1 5
4CN* 13 15
4CN* 23 13
4NN 1 3
4NN* 1 5
4NN* 1 5
4NN* 23 15
5CN 2 1
5NN 1 1
5NN 1 4
5NN 1 4
6CN 2 10
6CN 1 13
7CC 1 1
Table 5.8: Number of hypotheses examined ((*) denotes infeasible example)

Type N L G Type N L G
3NN 5 3 1 4NN 3 2 1
3CN* 61 3 1 4NN 26 3 1
4CN 3 3 3 4NN 1 1 1
4CN 3 2 3 4NN* 257 5 1
4CN 21 4 2 4NN* 237 19 7
4CN* 25 3 1 4NN* 81 7 1
4CN* 15 7 1 5CN 10 3 4
4CN* 205 27 3 5CN 73 2 2
4CN* 85 15 1 6CN 1 1 1
4NN 13 3 5 8CC 4 3 4
4NN 18 1 2
Table 5.9: Number of hypotheses evaluated for restriction options ((*) denotes infeasible
example)

Type NAIVE LP HYBRID Type NAIVE LP HYBRID


4CN 32 27 4CN 42 11
4NN 493 153 4NN 1736 311
4NN 153 103 4NN 436 323
Table 5.10: Number of hypotheses evaluated for infeasible problems (no hypothesis restric-
tion)

Part II

Applications of Containment to
Layout

Chapter 6
Preliminaries

6.1 Introduction
This part of the thesis gives examples of layout problems which can be solved using
two-dimensional translational containment for irregular polygons. The problems are of two
types: 1) minimal container, and 2) maximal item. These applications support our claim
that being able to quickly solve containment problems for up to ten items allows a variety
of layout problems to be solved more effectively.
Recall that our containment approach is a "bottom-up" one in which we maximize the
number of items for which algorithms are practical. Clustering techniques for layout also
use a bottom-up approach. Some of them even allow the items to rotate. However, the
methods which cluster more than two arbitrary nonconvex polygons are greedy heuristics.
Our containment-based solutions to minimal container problems do not allow the items to
rotate, but, in contrast to the heuristics, we establish an actual lower bound on the size of
the container for multiple items.
In addition, we believe that some potentially promising layout techniques have been
abandoned by the layout community because they lack strong bottom-up methods. The
primary example of this is pre-packing. Recall that a pre-packing strategy generates feasible
groups for each container and then assigns groups to containers. If the layout world had a
practical method for generating groups of items which fit into each container, this, plus an
appropriate assignment strategy, would make pre-packing viable. We demonstrate, through
our containment-based solutions to maximal item problems, that pre-packing can work.
We also solve a problem which arises in multi-stage layout problems, and which must be
solved before a strategy such as pre-packing can be applied. The problem is to transform a
single, highly nonconvex container into a collection of containers.
We remark that, although the primary use of containment is for automatic solutions
to layout problems, containment can also be part of a collection of computer-aided layout
tools for humans as well as assist in training humans who solve layout tasks manually.

6.1.1 Overview of Chapter
Section 6.2 discusses our work on the container transformation problem. Section 6.3
gives an overview of our work on minimal container problems. Section 6.4 discusses our
work on maximal item problems. Section 6.5 explains the structure of this part of the thesis.

6.2 Overview of Container Transformation Work


The motivation for our container transformation work is the trim placement problem in
apparel manufacturing (see Section 1.4). Trim placement is the second stage of a two-stage
layout process for pants markers. The items to be placed are either large pattern pieces or
small trim pieces. The first stage places the large pattern pieces. The "container" for the
second stage is the complement of the large placed pieces within the first stage container.
It is typically highly nonconvex. Rather than operate on this unwieldy container directly,
we transform it into a collection of containers. The trim placement phase places as many
of the small trim pieces as possible into the containers.
An ideal transformation of the container both limits and decomposes it. The limiting
process removes unnecessary regions which are not reachable by any of the items to be
placed. We use restrictions (see Chapter 3) to remove regions which are clearly unreachable.
The decomposition process produces a collection of containers which may overlap each other.
Overlap among containers makes it possible for placed items to overlap each other. We
show how to balance the conflicting goals of maximizing the number of containers while
minimizing the overlap between them. Maximizing the number of containers produces
manageable containment problems. We allow the user to select the probability that the
containers will be overlapping.
For efficiency purposes, we introduce the notion of a breaking polygon. This is a polygon
which can be translated to fit inside any of the items to be placed. In order to find a breaking
polygon, we find the maximum area axis-parallel rectangle (MAAPR) in a polygon. Our
theoretical work on the MAAPR problem uses fast matrix searching techniques from the
computer science literature and gives the fastest known asymptotic running time for this
problem: O(n log² n) time for an n-vertex polygon.
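For intuition only, the following sketch (ours; the thesis MAAPR algorithm is the O(n log² n) matrix-searching method, not this) approximates the MAAPR by rasterizing the polygon onto a grid and applying the classic largest-rectangle-in-a-histogram scan. The polygon representation and the grid resolution res are assumptions.

    # A naive grid-based approximation to the MAAPR (for intuition only;
    # the thesis uses an O(n log^2 n) matrix-searching algorithm instead).
    def point_in_polygon(px, py, poly):
        # Even-odd ray casting; poly is a list of (x, y) vertices.
        inside = False
        n = len(poly)
        for i in range(n):
            (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
            if (y1 > py) != (y2 > py):
                if px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
                    inside = not inside
        return inside

    def max_histogram_rect(heights):
        # Largest rectangle under a histogram, via an increasing stack.
        stack, best = [], 0
        for i, h in enumerate(list(heights) + [0]):   # sentinel bar
            start = i
            while stack and stack[-1][1] >= h:
                idx, hh = stack.pop()
                best = max(best, hh * (i - idx))
                start = idx
            stack.append((start, h))
        return best

    def approx_maapr_area(poly, res=50):
        # Rasterize the bounding box into res x res cells; a cell counts
        # as "inside" if its center lies inside the polygon (approximate).
        xs, ys = [p[0] for p in poly], [p[1] for p in poly]
        dx, dy = (max(xs) - min(xs)) / res, (max(ys) - min(ys)) / res
        heights = [0] * res
        best = 0
        for row in range(res):
            cy = min(ys) + (row + 0.5) * dy
            for col in range(res):
                cx = min(xs) + (col + 0.5) * dx
                heights[col] = heights[col] + 1 if point_in_polygon(cx, cy, poly) else 0
            best = max(best, max_histogram_rect(heights))
        return best * dx * dy   # convert cell count to area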
Our work on container decomposition is applicable, in general, to multi-stage layout
problems because the second stage of a multi-stage layout problem may involve a large,
highly nonconvex container which needs to be decomposed. Highly nonconvex containers
also appear in industries in which defects in material cause the material to be partitioned
into "quality zones" (see Heistermann and Lengauer's work with leather [HL92]).

6.3 Overview of Minimal Container Work


From the discussion of related work in layout in Section 1.3.3, we see that solutions to
minimal enclosure problems are often required for the first stage of a multi-stage layout
problem. Minimal container problems also arise when finding the best stock sheet size for
a given set of items, or the best cloth width in garment manufacturing. Current solutions
to minimal enclosure problems for more than two arbitrary nonconvex items often allow
the items to rotate, but they use greedy heuristics. The heuristics cannot produce a lower
bound on the size of the enclosure.

We give four examples of minimal container problems: 1) strip packing, 2) finding minimal
square enclosures, 3) finding minimal rectangular enclosures, and 4) finding tight convex
and nonconvex enclosures. Before discussing our work on these problems in Section 6.3.2,
we summarize related work on enclosure problems in Section 6.3.1.
6.3.1 Related Work
Attempts to treat multiple polygon enclosure problems in the literature typically use
greedy heuristics. Adamowicz and Albano [AA76], Haims and Freeman [HF70], Amaral,
et al. [ABJ90], Dagli and Tatoglu [DT87], and Bailleul et al. [BTS83] pack nonconvex
items into rectangles to form composite items, and then place the rectangles. In [AA76],
Adamowicz and Albano give an algorithm for pairwise clustering of irregular items in a
rectangle, and describe limited clustering options for more than two items. They allow
polygons to rotate. They perform pairwise clustering, using an object called the No Fit
Polygon (NFP)1. They also perform limited clustering for more than two items, by rst
clustering two large items and then placing small items within the bounding box of that
cluster. Amaral, et al. [ABJ90] adopt the approach of [AA76] for clustering items.
Dagli and Tatoglu [DT87] cluster pairs of items in a greedy fashion and allow the items
to rotate. They first cluster two items, make them a unit, cluster this unit with another
item, and so on. Prasad and Somasundaram [PS91] and Babaev [Bab82] form an efficient
cluster of items, then build a repeating pattern in order to nest sheet-metal blanks. Ismail
and Hon [IH92] use a genetic algorithm to cluster two similar items together; the items are
allowed to rotate. Dighe and Jakiela [DJ95] use a hierarchical genetic algorithm to cluster
items. The items are allowed to rotate. Their algorithm uses a binary tree in which each
item (together with values for its three parameters: x, y, and θ) constitutes a leaf, and a
node represents a search for the minimal enclosing rectangle of its two children. Thus, the
genetic algorithm seeks the best binary tree for a set of items. Milenkovic, et al. [MDL92]
solve the specialized problem of building 4-item stacks of pants panels for layouts in apparel
manufacturing.
In computational geometry, algorithmic work appears on various types of enclosure
problems for single items. Finding a minimal enclosure for a single polygon can also be
viewed as a polygon approximation problem, as noted in [CY86].
Chang and Yap define inclusion and enclosure problems [CY86]. They define an enclosure
problem as follows: Enc(P, Q, μ): Given P ∈ P, find the μ-smallest Q ∈ Q that
encloses P, where P and Q are families of polygons, and μ is a real function on polygons
such that:

    ∀Q, Q′ ∈ Q: Q′ ⊆ Q ⇒ μ(Q′) ≤ μ(Q)
Chang and Yap [CY86] review work on a variety of such enclosure problems. We briefly
mention two examples of enclosure work. Melissaratos and Souvaine [MS90a] find the min-
imum concave quadrilateral circumscribed about a simple polygon, and O'Rourke [O'R85]
finds the minimal box circumscribing a set of three-dimensional points2.
1 The NFP of polygon A with respect to polygon B is really just B ⊕ (−A) (see Section 2.2).
2 Chang and Yap's definition of P could be extended to point sets for this case.

6.3.2 Our Minimal Container Work
Our containment-based solutions to minimal container problems do not allow the items
to rotate, but, in contrast to heuristics, we establish an actual lower bound on the size of the
container for multiple items. Our general approach to finding minimal enclosures is to find
an optimal value for a parameter via binary search. We remark that the binary search can be
accelerated by using compaction techniques, such as those developed by Li and Milenkovic
[Li94, ML, LM93c, LM93a], to improve a feasible solution and thereby help update the
upper bound. Our approach to these problems follows the common operations research
approach of tightening upper and lower bounds to obtain an optimal value. Containment
provides the lower bounds in our case. Our hybrid containment algorithm is appropriate
for minimal enclosure problems. It is extremely fast for loose fits, and so the initial stages
of the search are fast.
For the problem of finding tight convex and nonconvex enclosures, we show that re-
moving unreachable portions of a tight rectangular enclosure can yield tight convex and
nonconvex enclosures which preserve all the containment solutions for the rectangle.

6.4 Overview of Maximal Item Work


We give four examples of maximal item problems. For this discussion, let kpractical
be the largest number of items for which current containment algorithms are feasible. Our
problems are: 1) given a container and a collection of items, finding maximally scaled copies
of items which fit in the container, 2) given a container and a collection of ≤ kpractical items
for which the sum of the item areas does not exceed the area of the container, finding a
"large" sub-collection which fits in the container, 3) given a container and a collection of
items for which either the sum of the item areas exceeds the area of the container or the
number of items is > kpractical, finding a "large" sub-collection of ≤ kpractical items which
fits in the container, and 4) problem (3) for a collection of containers.
Section 1.3 contains an extensive survey of the relevant literature on layout and packing
problems. It also reviews the limited layout literature on the assignment problem, which
arises as part of problem (4).
We discuss our work on maximal item problems below in Section 6.4.1.
6.4.1 Our Maximal Item Work
For problem (1) we use binary search combined with containment. In problem (2) the
given items cannot be scaled; they can only be included in or excluded from the set. The
key to solving this problem is to carefully choose which item to eliminate from an infeasible
subset. We reject the most "troublesome" item, which is identified using the results of the
overlap reduction evaluation method of the hybrid containment algorithm. For problem (3),
we randomly construct a set of items and solve problem (2). This yields a rich collection of
sets which fit into the container. For each set of size n which fits, we automatically have 2^n
subsets which fit and which need not be individually represented. This work provides the
first practical method for constructing sets of up to ten polygons which fit in a container.
For problem (4), we solve problem (3) for each container, and then apply a look-ahead
assignment strategy. This is a pre-packing strategy. Containment thus allows us to trans-
form a geometric optimization problem into a combinatorial optimization problem. Our

results challenge the existing belief that pre-packing methods are impractical for layout
[Arb93].

6.5 Overview of Part Two


Chapter 7 discusses our work on container decomposition. Our container decomposition
work also appears in [DM94]. Our theoretical work on the MAAPR problem is summarized
in Chapter 7, and also appears in [DMR93, DMR]. Chapter 8 solves four minimal container
problems. Chapter 9 solves four maximal item problems, including pre-packing.

Chapter 7
Limited Gaps

7.1 Introduction
In this chapter we consider the following problem: given a complex container and a set
of items to be placed in the container, transform the container by limiting and decomposing
it. The limiting process removes unnecessary regions which are not reachable by any of
the items to be placed. The decomposition process produces a collection of containers.
Maximizing the number of containers produces manageable containment problems.
The motivation for our container transformation work is the trim placement problem in
apparel manufacturing (see Section 1.4). Trim placement is the second stage of a two-stage
layout process for pants markers. The items to be placed are either large pattern pieces or
small trim pieces. The first stage places the large pattern pieces. The "container" for the
second stage is the complement of the large placed pieces within the first stage container.
This container is highly nonconvex and may have many components. An example is shown
in Figure 7.1. In that figure, the darkly shaded regions together form the second stage
container. They represent gaps formed by neighboring pants panels. The items to be
placed during the second stage appear stacked at the bottom of the figure. These items are
trim pieces. Rather than operate on this unwieldy container directly, we transform it into a
collection of containers. The trim placement phase places as many of the small trim pieces
as possible into the containers.
Our work on container decomposition is applicable to multi-stage layout problems, in
general, because the second stage may involve a large, highly nonconvex container. Highly
nonconvex containers also appear in industries in which defects in material cause the ma-
terial to be partitioned into \quality zones" (see Heistermann and Lengauer's work with
leather [HL92]).
An ideal transformation of the container both limits and decomposes it. We show how
the container can be limited with the help of restrictions (see Chapter 3). The limiting
process removes regions which are clearly not reachable by any item. (Identifying all un-
necessary regions is an NP-hard problem.) The need for eliminating unreachable regions has
been suggested by Amaral, et al. [ABJ90]: \It is ... necessary to implement algorithms that
identify those gaps which cannot contain any shape, due to its dimensions or geometry."

Figure 7.1: Trim placement task

Karasic [Kar89] conjectures that humans perform this function intuitively when they place
items in a container.
The decomposition process produces a collection of containers which may overlap each
other. Overlap among containers makes it possible for placed items to overlap each other.
We show how to balance the conflicting goals of maximizing the number of containers while
minimizing the overlap between them. As mentioned earlier, maximizing the number of
containers produces manageable containment problems. Our method allows the user to
select the probability that the containers will be overlapping.
For efficiency purposes, we introduce the notion of a breaking polygon. This is a polygon
which can be translated to fit inside any of the items to be placed. In order to find a breaking
polygon, we find the maximum area axis-parallel rectangle (MAAPR) in a polygon. Our
theoretical work on the MAAPR problem uses fast matrix searching techniques from the
computer science literature and gives the fastest known asymptotic running time for this
problem: O(n log² n) time for an n-vertex polygon.
This chapter describes our efforts to provide a sound mathematical basis and a set of
algorithms for container limiting and decomposition. We call each component of the limited
and decomposed container a limited gap.1
7.1.1 Decomposing the Container into Gaps
Given a collection of k polygonal items P and a container C, we seek a gap set G =
{g1, g2, …, gl}, such that ∪i gi = C, with the containment property: if p ∈ P is placed inside
the container, then it is contained entirely inside some g ∈ G. Smaller gaps and more gaps
are desirable; overlap among gaps is undesirable.
The set of connected components of the container has the containment property, but
it usually has too few elements to improve tractability much in practice. We will define a
subset RP of the container called the reachable region of P. The connected components of
RP form a much better gap set than the container itself, but RP still is often insufficiently
decomposed. However, it is the largest cardinality gap set we know of with non-overlapping
gaps.
If we permit gaps to overlap, then it is possible for polygons placed in different gaps
to overlap. However, if we construct the gaps properly, then the probability of overlap
will be small. We will give a rigorous notion of this probability. Furthermore, for any
given probability τ, we will show how to construct a gap set, called limited gaps, such
that the probability that items in different gaps overlap is less than τ, assuming a uniform
distribution.

1 After creating the term limited gap, we noticed the double connection of the name to the retail side of
the apparel business.
There is a tradeoff between the number and size of gaps and the probability. Many small
gaps imply small k for a containment problem for an individual gap, but the probability of
overlap between items in different gaps is higher. Since τ is a parameter to the construction
of the limited gaps, we allow the possibility of finding the best value of τ in the tradeoff.
7.1.2 Overview
Section 7.2 describes the container decomposition methods we experimented with before
arriving at the notion of the limited gaps. Section 7.3 defines the limited gaps in terms of
the probability τ and gives an algorithm for constructing them. Section 7.4 presents two
methods for improving the efficiency of limited gap construction. These methods introduce
the new concepts of breaking polygons and inking polygons. Section 7.5 summarizes our
work on finding the maximum area axis-parallel rectangle in a polygon. This is used to
construct a breaking polygon. Section 7.6 summarizes this chapter and states conclusions.

7.2 The Challenge of Constructing Gap Sets


Before defining the limited gaps, we review the other container decomposition methods
we experimented with. Our methods assume that the container is the product of the first
stage of a two-stage layout problem. The first stage places a collection of items which
become obstacles in the second stage. That is, the container for the second stage is the
region within the original container which is the complement of the obstacles. We also
assume that a transformation of the obstacles which improves the first-stage layout is legal.
7.2.1 Compaction
Li and Milenkovic have developed a compaction tool [MDL92, LM93d, Li94, ML, LM93c,
LM93a] which plans a motion for a set of placed items under a set of applied \forces." We
attempted to eliminate holes and narrow \necks" in the container by applying a leftward
force to each obstacle polygon. In theory, this process is guaranteed to eliminate holes
because each obstacle will be in contact with at least one other obstacle. However, we
found that this strategy has three shortcomings: 1) it fails to close all the narrow necks,
2) although the container components become simply connected, unnaturally large regions
still exist, and 3) contact detection suffers from numerical instability. Through the use of
a configuration space approach, the compaction algorithm avoids the question of contact
detection in order to run quickly and numerically robustly. However, in order to generate
a good decomposition, it is necessary to explicitly detect contacts, and this proves to be
numerically unstable.
7.2.2 Partitioning Squares
Our second attempt involved choosing a narrowness tolerance δ. We formed a partitioning
square with side of length δ and used this to partition the container.

Figure 7.2: A partitioning square which fragments the container

The partitioning process is conceptually analogous to centering a partitioning square at
each vertex of each obstacle, and then removing those squares for which the intersection
with the set of obstacles is connected (see Figure 7.2; this illustrates a square which is not
removed).
The difficulties with this approach are: 1) it introduces artifacts (pieces of squares) into the
layout, 2) it fragments the container in a way that leaves many small regions which are
not usable by the set of items to be placed, and 3) choosing a universal δ is inherently
impossible. The fragmentation issue can be resolved by testing each region, but this is
computationally expensive2. Choosing δ is not possible because P does not naturally give
rise to a narrowness value. Adopting a value corresponding to regions which no item can
fit through is insufficient, because an item might be able to reach into an arbitrarily narrow
neck from both sides, thus making the neck usable (see Figure 7.3). The only "natural"
solution is to let the left and right regions overlap.

Figure 7.3: A neck which is usable although items cannot pass through it

7.2.3 Other Methods


We considered other methods, such as computing a Voronoi diagram of the obstacles, us-
ing edges of the obstacles as sites [For87]. However, such methods also rely on a narrowness
tolerance δ, which is a fundamentally flawed approach.

7.3 Limited Gaps


We first define the set of limited gaps for P when Pi = Pj for 1 ≤ i < j ≤ k. Then
we give a straightforward extension of the theory to noncongruent shapes. For ease of
notation, we denote Pj by p. We also denote U′0j by Vp (recall from Section 3.1.2 that U′0j
is the replacement of U0j in a valid restriction of U).
7.3.1 Reachable Regions
Definition 7.1 The set Rp = Vp ⊕ p is the reachable region of p with respect to P.
2 One could use a breaking polygon (see Section 7.4.1) to reduce the expense.

Note that if we use U0j^orig, the unrestricted U0j, in place of Vp, the reachable region
becomes what is known in mathematical morphology and image analysis as the opening of
a point set [Ser82, Ser88]. At the other extreme, if we use U0j in place of Vp, we obtain Vp.
The set Vp ⊕ p is a tight upper bound on the reachable region under set inclusion.
A gap set GR consisting of the connected components of Rp has the containment prop-
erty, and the components have the added advantage of being non-overlapping. Unfortu-
nately, components of Rp can contain arbitrarily narrow necks. Ideally, each component
of Rp is generated by a single component of Vp . Unfortunately, dilation by p can blend
together two components of Vp for cases in which p cannot fit through a narrow neck of the
container but two copies of p can touch if they reach into the neck from different sides, as
in Figure 7.3.
By dilating each individual component of Vp we can create a gap set GV with higher
cardinality than GR and with no narrow necks. Unfortunately, these gaps can be highly
overlapping. The limited gap set will be somewhere \in between" GV and GR.
One advantage of Rp is that it provides a nontrivial upper bound on the efficiency3 of a
container, as follows. The efficiency of packing copies of p in a container C cannot exceed
the ratio of area(Rp) to area(C). Most packing density bounds in the literature apply only
when the polygons are convex, but this one works for any type of polygon.
Karasic [Kar89] conjectures that human layout experts employ the opening as a heuristic
criterion. He notes that the opening can be used not only to evaluate the quality of a
polygon's placement, but also for selecting the next polygon to place. The area of Rp can
also be used within a greedy layout algorithm as a criterion for judging the placement of
polygons. Larger Rp is better in this case, and the algorithm chooses a placement position
for each item which results in the most usable space for the remaining unplaced items.
7.3.2 Defining Limited Gaps using Probability of Interference
Let v, v′ ⊆ Vp be two connected components. If v ⊕ p and v′ ⊕ p do not intersect, then
we can independently place copies of p into the container using translations from v and
v′ without interference. If (v ⊕ p) ∩ (v′ ⊕ p) ≠ ∅, then we want to compute a probability
Prob(v, v′) that (p + t) ∩ (p + t′) ≠ ∅. We assume a uniform distribution on the connected
components of Vp, so t ∈ v and t′ ∈ v′ are chosen uniformly. Assuming we can compute
Prob(v, v′), we can finally define limited gaps. Given a probability τ, components v and v′
interfere with each other if Prob(v, v′) ≥ τ. Partition the components of Vp according to
the transitive closure of the interference relation into interference sets. Finally, dilate each
interference set by p to generate a gap. The resulting set GL(τ) of gaps is the limited gap
set for probability τ. There are variations on this definition, but the point is that one can
select the probability τ of interference for the gap set.
We calculate Prob(v, v′) as follows. For each t ∈ v, the set of t′ ∈ v′ such that
(p + t) ∩ (p + t′) ≠ ∅ is ((p + t) ⊕ (−p)) ∩ v′, and the probability that this choice of t results
in interference is Prob({t}, v′) = area(((p + t) ⊕ (−p)) ∩ v′)/area(v′). The integral of
Prob({t}, v′) over all t ∈ v gives us Prob(v, v′).
We do not have an exact algorithm for this integral, but we can approximate it by
sampling t ∈ v. We can improve the efficiency by sampling only the region v ∩ ((v′ ⊕ p) ⊕ (−p))
that can result in a non-zero value of Prob({t}, v′). After computing the average value of
this probability, we normalize by multiplying it by area(v ∩ ((v′ ⊕ p) ⊕ (−p)))/area(v).

3 Efficiency is the ratio of used area to total area.
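Where computing the Minkowski sums in this integral is inconvenient, Prob(v, v′) can also be estimated by sampling both translations. The following is a minimal Monte Carlo sketch, assuming the shapely library; the double-sampling scheme and all names are our own illustrative choices, not the implementation used here.

import random
from shapely.geometry import Point, Polygon
from shapely.affinity import translate

def sample_uniform(region, rng):
    # Rejection-sample a point uniformly from a polygonal region.
    minx, miny, maxx, maxy = region.bounds
    while True:
        q = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
        if region.contains(q):
            return q

def interference_probability(p, v, v2, trials=2000, seed=0):
    # Estimate Prob(v, v2): the chance that copies of p translated by
    # uniform t in v and t2 in v2 overlap.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        t = sample_uniform(v, rng)
        t2 = sample_uniform(v2, rng)
        if translate(p, t.x, t.y).intersects(translate(p, t2.x, t2.y)):
            hits += 1
    return hits / trials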
Finally, if P contains noncongruent shapes, then we can similarly define and compute
Prob(v, v′) for v ⊆ Vp and v′ ⊆ Vp′. Let VP be the set of all components of the Vp regions.
We partition VP according to interference. In this case, v and v′ can only interfere if they
come from different Vp and Vp′. For each partition set S ⊆ VP, we dilate each component
v ∈ S by the appropriate p ∈ P, and then take the union ∪v∈S (v ⊕ p) to generate the gap
corresponding to S.
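Once pairwise probabilities are available, the partition into interference sets is the set of equivalence classes of the transitive closure, which a union-find structure computes directly. A sketch, assuming a list of components and a prob(v, v′) estimator such as the one above (all names illustrative):

def interference_sets(components, prob, tau):
    # Group components by the transitive closure of the relation
    # prob(v, v2) >= tau, using union-find with path halving.
    parent = list(range(len(components)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(components)):
        for j in range(i + 1, len(components)):
            if prob(components[i], components[j]) >= tau:
                parent[find(i)] = find(j)
    groups = {}
    for i, c in enumerate(components):
        groups.setdefault(find(i), []).append(c)
    return list(groups.values())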

7.4 Ecient Computation of Limited Gaps


In order to efficiently construct the limited gaps, we want to quickly partition VP into
interference sets. We present two helpful partitioning techniques. The first technique involves
finding a breaking polygon: a polygon B that can be translated to fit inside any p ∈ P.4
The second technique uses the notion of inking polygons: q inks p if the reachable region of
q inside p is all of p. Section 7.4.1 discusses the breaking polygon technique. Section 7.4.2
introduces the more powerful concept of inking polygons.

4 We use the term breaking polygon because VB "breaks", or partitions, VP.
7.4.1 Breaking Polygons
Let us denote a breaking polygon of P by B. To simplify the presentation, we assume
that each p ∈ P has its origin positioned so that B ⊆ p without translation. Since B ⊆ p,
it follows that every polygon in VP is a subset of some polygon in VB.
We can use B to quickly group together components of VP which might intersect each
other. Clearly, two components of VP cannot intersect each other unless they are in the same
component of VB. Furthermore, since each connected component of VP is entirely within
a connected component of VB, the inclusion test consists only of a single point-in-polygon
test.
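The grouping step can therefore be as simple as the following sketch (assuming shapely, whose representative_point method returns a point guaranteed to lie inside its polygon; the function name and data layout are our own):

def group_by_breaking_region(vp_components, vb_components):
    # Each component of V_P lies entirely inside one component of V_B,
    # so a single point-in-polygon test per component suffices.
    groups = {i: [] for i in range(len(vb_components))}
    for v in vp_components:
        pt = v.representative_point()
        for i, big in enumerate(vb_components):
            if big.contains(pt):
                groups[i].append(v)
                break
    return groups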
Since we want to maximize the number of components of VB , we want to maximize the
area of B. Finding the polygon of maximum area which can be translated to lie inside of
each p ∈ P is an interesting open problem which we do not solve here. In practice, we find
the maximum area axis-parallel rectangle (MAAPR) of each p ∈ P. Finding the MAAPR
quickly is a challenging problem in its own right and is the subject of [DMR93, DMR]. We
briefly summarize our MAAPR work in Section 7.5.
We reposition each p so that its origin is at the center of its MAAPR. We choose B
to be the intersection of all p 2 P . This guarantees that B contains a rectangle with the
minimum width and minimum height of all the MAAPRs. This, in turn, provides a lower
bound on the area of the best breaking polygon.
Unfortunately, the reachable region RB of the breaking polygon is not necessarily a
superset of Rp for each p ∈ P. Consequently, we are not guaranteed that if two components
v1, v2 ∈ VP are in different connected components of VB, then they do not interfere. In
fact, there can be a significant amount of interference.
7.4.2 Inking Polygons
We say polygon q inks polygon p if and only if (p ⊖ (−q)) ⊕ q = p. A set of polygons
Q is an inking set of a set of polygons P if, for every p ∈ P, ∪q∈Q ((p ⊖ (−q)) ⊕ q) = p.

An inking set is valuable because RQ = ∪q∈Q Rq is a superset of each reachable region Rp.
Therefore, for p, p′ ∈ P, if v ⊆ Vp and v′ ⊆ Vp′ are connected components, then v ⊕ p and
v′ ⊕ p′ can intersect (and hence Prob(v, v′) > 0) only if they lie in the same component of
RQ. Furthermore, since v ⊕ p either lies entirely inside or entirely outside each component
of RQ, we can simply perform a point-in-polygon test of any point of v ⊕ p. It is easy to
generate this point without computing all of v ⊕ p.
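Although our construction is exact, the inking condition itself is easy to prototype on a raster approximation: the morphological opening of p by q should reproduce p. The sketch below assumes numpy and scipy and a symmetric structuring element (which sidesteps the reflection of q in the exact formula); it is an illustration, not our implementation.

import numpy as np
from scipy.ndimage import binary_opening

def inks_raster(p_mask, q_mask):
    # q "inks" p when eroding p by q and then dilating by q gives p back
    # (up to the raster approximation).
    opened = binary_opening(p_mask, structure=q_mask)
    return bool(np.array_equal(opened, p_mask.astype(bool)))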
For an inking set to be useful, RQ should not be too much larger than RP . On the
other hand, RQ should be easier to compute. The first condition implies that we desire
the inking set whose smallest member has maximum possible area. The second condition
implies that we desire the inking set of minimum complexity (total number of vertices).
The inking set of maximal area is the set P itself. The inking set of minimum complexity
is a single point, but if we specify non-zero area, then this is an open question. We desire
the answer to a doubly open question: of the inking sets of minimum complexity, find the
one whose smallest member has maximum area. The next section gives a lower bound on
the complexity of an inking set based on the notion of a hitting set.
Hitting Sets
Let a, b, c be three consecutive vertices of some polygon p ∈ P such that angle abc opens
into the interior of p. If an inking set Q inks p, then there must be a set of consecutive
vertices a′b′c′ on some q ∈ Q such that angle a′b′c′ is a subset of abc: if q is translated to
place b′ on top of b, then a′ and c′ are on or in the interior of angle abc. We say that angle
a′b′c′ hits angle abc.
We can map angle abc to an interval (arc) on the unit circle by translating b to the
center of the circle and taking the arc in the interior of abc. Clearly, a′b′c′ hits abc if and
only if its interval is a subset. We refer to the set of angles or intervals of P as that set of
arcs generated by all triples of consecutive vertices abc of polygons p ∈ P.
Given a set of intervals on the unit circle, a hitting point-set S is a set of points such
that each interval contains at least one point of S in its interior. Clearly the complexity of
Q is at least as great as the size of the minimum hitting point-set of the angles of P .
In the following discussion, let M be the total number of vertices in P .
Theorem 7.4.1 A minimal hitting point-set for M intervals on the unit circle can be found
in O(M²) time.
Proof: We first give an O(M log M) time algorithm for finding the minimal hitting set
for a set of M intervals of the real line. Then we show how to use this algorithm as part of
an O(M²) time algorithm for intervals on a circle.
The first step in the line interval algorithm is to initialize the hitting set H to ∅. We
then sort the intervals by increasing x-coordinate of their right endpoints, and process them
in this order. For each interval i, if H contains a point that hits i, then we do not change
H. Otherwise, we add the right endpoint5 of i to H.
This algorithm runs in O(M log M ) time because we sort the intervals. It can be shown
that each step of the algorithm yields the minimal hitting set (for the current set of intervals)
whose rightmost point is furthest to the right.
5 Actually, to avoid degeneracy, we add a point just slightly to the left of the right endpoint of i.

Our algorithm for intervals on a circle first sorts the interval endpoints, creating a set
of 2M event points and 2M "sub-intervals" (between event points). The algorithm has an
outer loop which considers each of these 2M sub-intervals, in turn. We add a single hitting
point to a sub-interval, and thus we can eliminate from consideration any interval that
contains this sub-interval. We can now cut the circle anywhere in this sub-interval and then
apply the line algorithm. This gives the minimal hitting set for this choice of sub-interval.
After considering all 2M sub-intervals, we know which choice yields the minimum hitting
set. We only have to sort once, so the running time is in O(M²). □
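For reference, the line-interval step of this proof is the classical greedy stabbing algorithm. A minimal sketch (ignoring the slight leftward perturbation of footnote 5) follows; the circular version calls it once per candidate cut.

def min_hitting_points(intervals):
    # Greedy: process intervals by increasing right endpoint; add a new
    # point (the right endpoint) only when no chosen point hits the interval.
    points = []
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < left:
            points.append(right)
    return points

# e.g. min_hitting_points([(0, 2), (1, 3), (4, 5)]) returns [2, 5]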
We can generalize the notion of hitting point-set to hitting interval-set: recall an interval
of Q hits an interval of P if it is a subset. Clearly, we can replace each point in the
hitting point-set by the sub-interval in which that point lies. However, this might not
yield the hitting interval-set whose minimum length interval is maximized. Since we do not
want \skinny" inking polygons (to avoid inking narrow necks of the container), we want to
maximize this minimum angle. This will also tend to maximize the minimum area of an
inking polygon.
For a given arc length L, we can find the minimum hitting interval-set whose members
are of length L. We can do this in O(M²) time using a modification of the point-set
algorithm. The algorithm on the line is exactly the same, but "unraveling" the circle into
a line becomes more intricate. For any given hitting set size h, we can do binary search on
the value of L to find the smallest value of L whose minimum hitting interval-set has size h or
greater. Since L can take on at most 2M(2M − 1) values, we only have to run the O(M²)
hitting interval-set algorithm O(lg M) times.
Forming Inking Sets using Hitting Sets
Given the minimal sized hitting interval-set Q of h intervals for P (with maximum
minimum length L), we show here how to construct an inking set consisting of a set of h
isosceles triangles and a single square. The isosceles triangles ink regions near the vertices
of P, and the square inks the remaining area. If hmin is the minimum complexity of the
inking set, then this algorithm yields an inking set of complexity 3hmin + 4.
We first discuss the isosceles triangles. We associate an isosceles triangle with each
hitting set interval i as follows. Consider the set Si of angles abc of polygons p ∈ P which
are hit by i. For each angle abc, find the maximum height of a "corner" isosceles triangle
whose top is b and whose sides are aligned with ba and bc such that the triangle is in the
interior of p. Let Hi be the minimum height over all vertices in Si. Now, form an inking
set polygon Ti which is the isosceles triangle of height Hi/2 and top angle determined by i.
Ti can ink a "corner" isosceles triangle for abc of height Hi/2.
The size of the square s is constrained by two factors: 1) the narrowest neck in P, and
2) the narrowest (and most diagonal) angle in P. To treat the first constraint, we build a
Voronoi diagram of each polygon in P, under the L∞ metric [Lee80]. If we surround each
site with the largest square which fits, then s must be smaller than the smallest of these
squares. Let w1 be the resulting width. The second constraint requires that we calculate
w2 = min(Hi sin(α/2)/√2) over all abc in P with angle α and hitting interval i. The width
of s is min(w1, w2).
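As a small arithmetic sketch of this width bound (symbol names mirror the text; the (Hi, αi) pairs and w1 are assumed to have been computed already):

import math

def square_width(w1, corners):
    # corners: iterable of (H_i, alpha_i) pairs, with alpha_i in radians.
    w2 = min(H * math.sin(alpha / 2.0) / math.sqrt(2.0) for H, alpha in corners)
    return min(w1, w2)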

7.5 MAAPRs
This section considers the geometric optimization problem of finding the maximum area
axis-parallel rectangle (MAAPR) in an n-vertex general polygon. This rectangle problem
arises naturally in applications where a quick internal approximation to a polygon is useful.
One of its applications is finding breaking polygons (see Section 7.4.1).
Section 7.5.1 summarizes our MAAPR work from [DMR93, DMR]. This work gives an
O(n log² n) time algorithm for an n-vertex general polygon. Section 7.5.2 describes a naive
algorithm6 which we use in practice to find a breaking polygon.
7.5.1 Summary of MAAPR Results
Here we summarize our work on MAAPRs, which is described in detail in [DMR93,
DMR]. We present the fastest algorithmic results for general polygons with holes: an
O(n log² n) time algorithm. We also prove a lower bound for this type of polygon of time in
Ω(n log n). In order to develop this algorithm, we first characterize the MAAPR for general
polygons by considering different cases based on the types of contacts between the rectangle
and the polygon.
We present a general framework for solving the two-contact case of the MAAPR problem,
which dominates the running time for a variety of polygon types. The framework involves
transforming the polygon, via vertex projection and inner rectilinear approximation, into
another polygon for which we can solve a related problem. In the related problem, we are
given a point set S and two subsets L and R of S. The task is to find the largest rectangle
containing no point of S which has lower-left corner in L and upper-right corner in R. This
problem can be solved by transforming it into a monotone7 matrix searching problem. Fast
methods exist in the literature for finding the maximum element of such matrices.

Figure 7.4: Algorithmic results for a variety of polygon types

    polygon type                                  running time
    xy-monotone                                   Θ(n)
    orthogonally convex                           O(n α(n))
    horizontally convex                           O(n α(n) log n)
    vertically separated horizontally convex      O(n log n)
    general                                       O(n log² n)


Using these methods leads to the following MAAPR results: 1) Θ(n) time for an xy-
monotone polygon8, 2) O(n α(n)) time for an orthogonally convex polygon9, where α(n)
is the slowly growing inverse of Ackermann's function, 3) O(n α(n) log n) for horizontally
(vertically) convex polygons, 4) O(n log n) for a special type of horizontally convex polygon,
consisting of two y-monotone chains on opposite sides of a vertical line, and 5) O(n log² n)
for general polygons.

6 We thank Jacqueline Huang for implementing this algorithm while she was an undergraduate at Harvard
University.
7 A matrix is monotone if the column index of the row maximum moves to the right as the row index
increases.
8 A simple polygon consisting of two xy-monotone chains is an xy-monotone polygon. A chain is xy-
monotone if it is monotone with respect to both the x and y axes. A chain is monotone with respect to a
line l if a line orthogonal to l intersects the chain in exactly one point [PS85].
9 An orthogonally convex polygon is both horizontally and vertically convex. This class contains the class
of convex polygons.
7.5.2 MAAPR Characterization and Naive Algorithm
Here we characterize the MAAPR contained in a general polygon P by considering
different cases based on the types of contacts between the MAAPR and the boundary of P,
and outline a naive algorithm for finding the MAAPR based on this characterization. Others
have used contact classification for algorithmic development (see, for example, [MS90a,
MOS85, DKO87]). The first part of this subsection classifies the types of contacts between
the MAAPR and the boundary of P. The second part introduces the notion of a determining
set of contacts. The third part shows how to solve maximization problems associated with
certain determining sets of contacts. The fourth part presents our characterization theorem
for MAAPRs. The fifth part gives a naive algorithm based on the characterization theorem.
Types of Contacts
Intuitively, if an axis-parallel rectangle is inside P, it has four degrees of freedom (parameters)
and can "grow" until each of its four sides is stopped by contact with the boundary
of P. Contacts between the MAAPR and P are of two types: 1) a side of the MAAPR
with a vertex of P, and 2) a corner of the MAAPR with an edge of P. In order to discuss
the first type, we require the notion of a reflex extreme vertex, introduced in [SRW91].
Definition 7.2 A vertex v of P is a vertical reflex extreme vertex if there exists ε > 0
such that, for an open ball B of radius ε about v and S = exterior(P) ∩ B, either no point of
S has y-coordinate above v or no point of S has y-coordinate below v. A horizontal reflex
extreme vertex is defined similarly.
For type 1 contacts, a reflex extreme vertex of P touches a side of the MAAPR and
stops growth in one direction; we call this a reflex contact. Each reflex contact can remove
one degree of freedom. Two reflex contacts with adjacent sides of the MAAPR fix a corner
of the MAAPR. For type 2 contacts, a corner of the MAAPR touches an edge of P forming
an edge contact.
Determining Sets of Contacts
Definition 7.3 A set of contacts C is a determining set of contacts if the MAAPR R
satisfying C has finite area and if the MAAPR R′ satisfying any proper subset C′ ⊂ C has
greater or infinite area.
For example, a set of four reflex contacts, one on each side of the rectangle, is a determining
set.
Note: A determining set determines the area of the MAAPR, but it does not necessarily
determine a unique rectangle or MAAPR.
Within a determining set, we distinguish between two different subtypes of edge contacts.
An edge contact is fixed if the set of constraints uniquely determines the point of contact
with the rectangle (not necessarily the MAAPR). Otherwise, it is a sliding contact.
A fixed contact can arise when there is no freedom to slide along an edge because a
reflex contact fixes a coordinate. For example, in Figure 7.5(a), the reflex contact of the
determining set fixes the x-coordinate of the edge contact, which completely determines the
location of the edge contact. An edge contact with an adjacent side that has either a reflex
or fixed contact must also be a fixed contact.

Figure 7.5: Edge contact types for a determining set: (a) fixed contact, (b) sliding contact,
(c) two dependent sliding contacts
Two sliding edge contacts are dependent if the position of one determines the position
of the other; otherwise they are independent. An independent sliding contact requires that
the two adjacent sides of the MAAPR do not have any contact with P (see Figure 7.5(b)).
A sliding contact adjacent to another sliding contact is dependent, because the two contacts
must share a coordinate (see Figure 7.5(c)).
Maximization Problems
Here we examine maximization problems associated with certain determining sets of
contacts. Finding the MAAPR associated with a determining set of contacts requires solving
a maximization problem if the set contains a sliding contact. For a given set of contacts, the
number of degrees of freedom is the number of undetermined parameters of the rectangle.
Degrees of freedom within a determining set can only arise from sliding contacts because
any other degree of freedom would result in a rectangle of infinite area, and therefore the
contacts would not form a determining set. It follows that if a determining set consists of
only reflex or fixed edge contacts, no maximization is required. For each independent sliding
contact in the set, we can parameterize the associated edge. The maximization problems
can then be classified based on the number of parameters.
1-Parameter Problems:
The set of 1-parameter maximization problems can be further subdivided according to
the number of dependent sliding contacts.
The Basic 1-Parameter Problem
The simplest 1-parameter problem involves no dependent sliding contacts, just a single
independent one. This is the basic 1-parameter problem, and it arises when one corner of
the MAAPR has a sliding contact and the opposite corner is fixed. The basic 1-parameter
problem can be solved by parameterizing the edge associated with the sliding contact and
maximizing a quadratic in one variable. This can be solved in O(1) time.
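A sketch of that constant-time computation follows: one corner of the rectangle is fixed at (x0, y0), the opposite corner slides along segment p1p2, and the area is a quadratic in the parameter t, maximized at an interior critical point or at an endpoint of [0, 1]. The names are illustrative.

def basic_one_parameter(x0, y0, p1, p2):
    # area(t) = (a + b t)(c + d t) for the sliding corner p1 + t (p2 - p1);
    # |area| on [0, 1] is maximized at an endpoint or where area'(t) = 0.
    (x1, y1), (x2, y2) = p1, p2
    a, b = x1 - x0, x2 - x1
    c, d = y1 - y0, y2 - y1
    candidates = [0.0, 1.0]
    if b * d != 0.0:
        t_star = -(a * d + b * c) / (2.0 * b * d)
        if 0.0 < t_star < 1.0:
            candidates.append(t_star)
    return max(((t, abs((a + b * t) * (c + d * t))) for t in candidates),
               key=lambda pair: pair[1])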
An alternate constant-time solution to the 1-parameter problem is based on the following
lemma, which demonstrates that the slope of the MAAPR diagonal depends only on the
slope of the polygon edge. We assume here that the edge is neither vertical nor horizontal.
Lemma 7.5.1 (Slope Lemma) Given a point p and a line L with slope s, the MAAPR
with one corner at p and opposite corner at point q on L has diagonal pq, where the slope
of pq is −s.
In other words, if one corner of a MAAPR is incident on L, the slope of the MAAPR's
diagonal is the negative of L's slope.
Figure 7.6: Slope lemma: the line L: ax + by = d (slope s = −a/b) is tangent at
(xc, yc) = q to the hyperbola xy = c, whose branches correspond to rectangles of constant
area (area increases away from the origin).

Proof: Assume p is at the origin, and that the line L is given by ax + by = d (see
Figure 7.6). W.l.o.g. assume L intersects the +x and +y axes. (Note: we ignore the
degenerate cases of horizontal and vertical lines, for which the MAAPR is undefined.) The
family of hyperbolas given by xy = r represents curves corresponding to rectangles of
constant area. Moving away from the origin into the first quadrant, the hyperbola branch
which is tangent to L provides the area c of the MAAPR associated with L. Let (xc, yc) be
the coordinates of the point at which L is tangent to the hyperbola xy = c.
Now, the tangent to xy = c at (xc, yc) is equal to L, so the normal to xy = c at (xc, yc)
is equal to the normal to L at (xc, yc). The normal to xy = c is (Fx, Fy) = (y, x); evaluated
at (xc, yc) it is (yc, xc). The normal to L at (xc, yc) is (a, b). Therefore,

    slope(pq) = yc/xc = a/b = −slope(L).  □
Note that, as a consequence of this lemma, if p moves downward, q moves downward
along its edge.
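A quick numeric check of the lemma (our own sketch): with p at the origin and L: ax + by = d for a, b, d > 0, maximizing xy on L puts the opposite corner at (xc, yc) = (d/2a, d/2b), whose diagonal slope is yc/xc = a/b = −slope(L).

def slope_lemma_point(a, b, d):
    xc, yc = d / (2.0 * a), d / (2.0 * b)   # ax = by = d/2 at the optimum
    assert abs(yc / xc - a / b) < 1e-12     # slope(pq) = a/b = -slope(L)
    for eps in (1e-3, -1e-3):               # nudging along L shrinks xy
        x = xc + eps
        assert x * (d - a * x) / b <= xc * yc
    return xc, yc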
Two Dependent Sliding Contacts
Consider a pair of dependent sliding contacts which share a coordinate (w.l.o.g. y), and
have an opposite reflex contact which determines one coordinate of a side (w.l.o.g. y′) (see
Figure 7.7(a)).
To find the MAAPR, we parameterize edge p2p1 by t, yielding the following quadratic
in t to maximize:

    F(t) = (x′ − x)(y − y′) = (A + Bt)(C + Dt)        (7.1)

Figure 7.7: 1-parameter problems with two and three dependent sliding contacts: (a) pair
of constrained sliding contacts, (b) triple of constrained sliding contacts

where:

    A = (y2 − b)/m − x2          B = (y1 − y2)/m + (x2 − x1)
    C = y2 − y′                  D = y1 − y2
    m = (y4 − y3)/(x4 − x3)      b = y4 − m·x4

Note that x′ is obtained by solving the line equation associated with edge p4p3.
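A sketch of this computation, with names mirroring the formulas above (edge p4p3 is assumed neither horizontal nor vertical, and t is restricted to [0, 1], i.e. to the segment p2p1):

def two_dependent_sliding(p1, p2, p3, p4, y_prime):
    # Coefficients of F(t) = (A + B t)(C + D t) from equation (7.1).
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    m = (y4 - y3) / (x4 - x3)
    b = y4 - m * x4
    A = (y2 - b) / m - x2
    B = (y1 - y2) / m + (x2 - x1)
    C = y2 - y_prime
    D = y1 - y2
    candidates = [0.0, 1.0]
    if B * D != 0.0:
        t_star = -(A * D + B * C) / (2.0 * B * D)  # where F'(t) = 0
        if 0.0 < t_star < 1.0:
            candidates.append(t_star)
    return max(((t, (A + B * t) * (C + D * t)) for t in candidates),
               key=lambda pair: pair[1])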
Three Dependent Sliding Contacts
For the case of three dependent sliding contacts (see Figure 7.7(b)), we again parameterize
edge p2p1 and express x′ in terms of y for edge p4p3. Similarly, we express y′ in terms of x
for edge p5p6. We again obtain a quadratic in t to maximize. The difference between this
and the previous case is in the values of C and D:

    C = y2 − m′x2 − b′           D = y1 − y2 + m′(x2 − x1)
    m′ = (y6 − y5)/(x6 − x5)     b′ = y5 − m′x5

The 2-Parameter Problem:


There is only one type of 2-parameter problem. It has two independent sliding contacts.
The following lemma allows us to reduce a 2-parameter problem to a set of 1-parameter
problems.
Lemma 7.5.2 Let e1 and e2 be non-intersecting line segments. Consider the set E of
empty axis-parallel rectangles which have diagonally opposite corners on e1 and e2 . There
is a maximum area rectangle in E with at least one corner at an endpoint of e1 or e2 .
Proof: One way to prove the claim is to parameterize each edge to obtain a quadratic in
two variables. It is easily shown that the resulting surface is a saddle surface, so that the
maximum is achieved along the boundary of the unit patch, implying that the MAAPR has
a corner at a vertex of one of the edges.

A geometric argument is more intuitive, however. Let c1 on e1 and c2 on e2 be opposite
corners of a MAAPR, let r be the diagonal connecting c1 and c2, and, by way of contra-
diction, assume that neither c1 nor c2 is at a vertex of e1 or e2 . Now replace e1 and e2 by
the lines l1 and l2 containing them. Consider the set S of all line segments that connect
l1 and l2 and are parallel to r. The length of these line segments as a function of their
distance from r is monotonic. If l1 and l2 are parallel, the length is constant. Otherwise
the length increases moving away from r in one direction, and decreases in the opposite
direction. Move away from r in the direction of increasing length (either direction if l1 is
parallel to l2) until either c1 or c2 is at a vertex of e1 or e2, and let r′ be the new diagonal.
Since |r′| ≥ |r|, the area of the rectangle whose diagonal is r′ is at least as large as the
area of the rectangle whose diagonal is r, contradicting the assumption that the latter is a
MAAPR.
□
Having established that the MAAPR has a corner at a vertex in this case, we can
find the MAAPR by considering, in turn, each of the four endpoints of e1 and e2, solving
the associated 1-parameter problems, and then comparing the four resulting 1-parameter
MAAPR areas.
Characterization Theorem
To characterize the MAAPR, we examine the possible determining sets of contacts. By
enumerating the reflex contacts between the MAAPR and P, we derive the set of five cases
shown in Figure 7.8.
Figure 7.8: Determining sets of contacts for the MAAPR (RC = reflex contact, FC = fixed
contact, ISC = independent sliding contact, DSC = dependent sliding contact). With four
reflex contacts no edge contacts are needed; the remaining cases are:

    # RC   edge contacts in the determining set
    3      1 FC
    2      2 FC; 2 FC; 1 ISC; 2 FC; 2 FC
    1      3 FC; 3 FC; 1 FC + 1 ISC; 2 DSC
    0      2 ISC; 3 DSC; 4 FC


Theorem 7.5.3 The determining set of the MAAPR of a general polygon P conforms (up
to symmetry) to one of the five cases in Figure 7.8.

Proof: A determining set has, by definition, at most one reflex contact with each side of
the MAAPR. For each of the possible numbers of reflex contacts, we show that the determining
set of a MAAPR of that type conforms (up to symmetry) to one of the configurations
shown in Figure 7.8. In each case, we must eliminate degrees of freedom beyond those of
sliding contacts. We observe that a determining set cannot contain both two adjacent reflex
contacts and the fixed contact between them; this would be redundant.
Case 4: In this case there is one reflex contact with each side of the MAAPR. Since each
reflex contact removes one degree of freedom from the rectangle, this set of contacts
is sufficient to determine the MAAPR. Removing any one reflex contact allows the
rectangle to grow, so all four contacts are necessary.
Case 3: Three reflex contacts are not sufficient to determine the MAAPR, since the fourth
side can move outward. We must add an edge contact which is fixed because it is
adjacent to a reflex contact. There is only one choice of position for this contact, up
to symmetry. It fixes the remaining side of the rectangle.
Case 2: Two reflex contacts can be either adjacent or diagonally opposite. In both cases, edge
contacts are needed to determine the MAAPR. Since two degrees of freedom remain,
the determining set contains at most two edge contacts.
In the adjacent case, we examine the possibility of edge contacts at the corners of the
rectangle, excluding the corner between the two reflex contacts. There are two ways
that two edge contacts can appear. In both cases both contacts must be fixed, so they
fix the remaining sides of the rectangle. If only one edge contact appears, it must
be a sliding contact; otherwise a degree of freedom remains. The sliding contact is
diagonally opposite to the corner fixed by the reflex contacts. This contact is sufficient
because it represents a basic 1-parameter problem.
In the opposite case, any edge contact is fixed; hence there must be two of them.
There are two possible ways they can be configured. In both cases, all four sides of
the rectangle are fixed.
Case 1: One reflex contact is not sufficient to determine the MAAPR. We can choose to
place edge contacts at any three of the four corners of the rectangle (four would be
redundant). If there are three edge contacts, they are all fixed and they completely
determine the MAAPR. There are two ways to configure three such contacts. If
there are two edge contacts, they can be adjacent or diagonally opposite. If they are
adjacent, they must both be opposite the reflex contact; otherwise a degree of freedom
remains. These adjacent contacts are dependent sliding contacts, and they determine
the MAAPR because they represent a 1-parameter problem. If they are diagonally
opposite, one is fixed, and the other is a sliding contact; this is also a 1-parameter
problem. One edge contact is not sufficient, together with the one reflex contact, to
determine the MAAPR.
Case 0: In this case there are no reflex contacts. One sliding contact is not sufficient to
determine the MAAPR. If there are two sliding contacts, they must be opposite if
they are to determine the MAAPR. This is a 2-parameter problem. If there are
three edge contacts, they are all dependent sliding contacts. They are sufficient to
determine the MAAPR because they represent a 1-parameter problem. If there are
four edge contacts, they yield four equations in four unknowns, for which the MAAPR
is completely determined. This therefore forms a set of fixed contacts.
□
Corollary 7.5.4 Given a determining set C for a MAAPR of a general polygon, it follows
that 2 ≤ |C| ≤ 4.
A Naive Algorithm
Based on the above characterization, we can find the MAAPR in a general polygon by
finding the MAAPR under the constraints of each of the five cases and selecting the largest
one.
Theorem 7.5.5 For each determining set of contacts in Figure 7.8, the MAAPR can be
found in constant time.
Proof: A reflex contact yields, in constant time, one of the four parameters of the rectangle,
as does a fixed contact. In all cases we can process these contacts first. The remaining
situations are all either 1-parameter problems or 2-parameter problems, which can be solved
in constant time because they only require maximizing a constant number of quadratic
forms. □
A naive MAAPR algorithm can use Theorem 7.5.5 and supply it, in each case, all
possible determining sets for P. These can be identified using an algorithm with up to four
nested loops, one for each element of the determining set. For each MAAPR candidate, we
can check if it is empty (i.e. contains no edges or vertices of P ) in O(n) time. We conclude:
Theorem 7.5.6 The MAAPR of an n-vertex general polygon can be found in O(n⁵) time.
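For intuition, here is a hedged brute-force sketch of just one of the five cases (Case 4 of Figure 7.8, four reflex contacts): every candidate rectangle has each side passing through a vertex coordinate, and shapely's within method supplies the emptiness test. The other cases require the 1- and 2-parameter solvers above, and hole vertices are ignored here for brevity.

from itertools import combinations
from shapely.geometry import Polygon, box

def naive_maapr_case4(poly):
    # Enumerate rectangles whose four sides pass through vertex coordinates
    # and keep the largest one contained in the polygon.
    xs = sorted({x for x, y in poly.exterior.coords})
    ys = sorted({y for x, y in poly.exterior.coords})
    best = None
    for x0, x1 in combinations(xs, 2):
        for y0, y1 in combinations(ys, 2):
            r = box(x0, y0, x1, y1)
            if r.within(poly) and (best is None or r.area > best.area):
                best = r
    return best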
7.6 Summary and Conclusions
This chapter has discussed our container limiting and decomposition work. Limiting is
based on the reachable regions of the items. A reachable region is found by using restrictions
combined with the opening morphological operator. Our decomposition process allows the
user to control the probability that gaps will overlap each other. For efficiency reasons,
we introduce the notion of breaking polygons and inking polygons. We have implemented
the breaking polygon approach and found it to be practical for forming limited gaps. Our
implementation uses a naive algorithm for finding the maximum-area axis-parallel rectangle
(MAAPR) in a polygon. Our theoretical work on the MAAPR problem yields the fastest
known asymptotic running time for this problem: O(n log² n) time for an n-vertex poly-
gon. Figure 7.9 is a screen dump which shows limited gaps for an example from apparel
manufacturing.
We note that in applications which operate on Uij s and do not directly operate on the
containers themselves, the methods we present for clustering together Uij s can be used
without carrying out the final construction of the limited gaps themselves.

Figure 7.9: Limited gaps example

Chapter 8
Minimal Container Problems

8.1 Introduction
In this chapter we address four minimal container problems: 1) strip packing, 2) finding
minimal square enclosures, 3) finding minimal rectangular enclosures, and 4) finding tight
convex and nonconvex enclosures.
In Chapter 6 we cited the following definition of an enclosure problem given by Chang
and Yap [CY86]. Enc(P, Q, μ): Given P ∈ P, find the μ-smallest Q ∈ Q that encloses P,
where P and Q are families of polygons, and μ is a real function on polygons such that:

    ∀Q, Q′ ∈ Q: Q′ ⊆ Q ⇒ μ(Q′) ≤ μ(Q)
This definition is not sufficient for our purposes. We need a more general definition of
an enclosure problem. Our definition replaces P with P̄, where P̄ is a tuple of elements of
P. We also let M be a set of polygon transformations. We define:
Enc(P̄, M, Q, μ): Given P̄ = (P1, P2, …, Pk), where Pi ∈ P, find the μ-smallest Q ∈
Q that encloses P̄M, where P and Q are families of polygons, M is a set of polygon
transformations, P̄M is the transformation of each item in P̄ by a (possibly different)
transformation in M, and μ is a real function on collections of polygons such that:

    ∀Q, Q′ ∈ Q: Q′ ⊆ Q ⇒ μ(Q′) ≤ μ(Q)
Work on minimal enclosures in computational geometry uses the identity transformation
for M and does not allow a collection of polygons. The particular two-dimensional enclosure
problems we address here let P̄ be a collection of polygons, M be translations, μ be the
area function, and Q vary depending on the problem. For example, in strip packing, Q is
the set of axis-parallel rectangles of fixed width.
From the discussion of related work in layout, we see that solutions to minimal enclosure
problems are often required for the rst stage of a multi-stage layout problem. Minimal
container problems also arise when nding the best stock sheet size for a given set of items,
or the best cloth width in garment manufacturing. Section 6.3.1 discusses computational
geometry work on various types of enclosure problems for single items. Current solutions to

minimal enclosure problems for more than two arbitrary nonconvex items often allow the
items to rotate. However, because they use greedy heuristics, they cannot provide a lower
bound on the size of the enclosure. Popular approaches either place small items in gaps
inside the bounding box of two large items [AA76], or solve the problem incrementally by
adding one item at a time [DT87, DJ95].
Our containment-based solutions to minimal container problems do not allow the items
to rotate, but, in contrast to heuristics, we establish an actual lower bound on the size of
the container for multiple items. Our approach to finding minimal enclosures is to find an
optimal value for a parameter via binary search combined with containment. The idea of
solving a geometric optimization problem using binary search on a parameter is found, for
example, in Chazelle's seminal paper on polygon containment [Cha83]. We observe that the
binary search can be accelerated by using compaction techniques, such as those developed
by Li and Milenkovic [Li94, ML, LM93c, LM93a], to improve a feasible solution and hence
update the upper bound. Our hybrid containment algorithm is appropriate for minimal
enclosure problems. It is extremely fast for loose fits, and so the initial stages of the search
are fast.
We propose the following general approach to minimal enclosure problems involving a
single parameter. Our approach is a binary search, with compaction1 used to update the
upper bound obtained by containment. Note that the binary search approach works only
if the function being minimized decreases monotonically as the parameter decreases. In
this binary search, the \function" can be thought of as a binary one which returns 1 if the
containment problem is feasible and 0 otherwise.
MINIMAL-ENCLOSURE(P, k)
    λmax ← large initial value
    λmin ← 0
    ε ← tolerance value
    while (λmax − λmin > ε)
        λcurrent ← (λmax + λmin)/2
        Q ← container of size λcurrent
        if (CONTAINMENT(P, k, Q))
            place polygons according to the configuration found
            λmax ← COMPACTION(P, k, Q)
        else
            λmin ← λcurrent
    return λmax
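A runnable Python rendering of the same search, with CONTAINMENT and COMPACTION reduced to stubs (the oracle below is a toy; the only property the search relies on is monotone feasibility):

def minimal_enclosure(feasible, lam_max=1000.0, eps=1e-3, compact=None):
    # feasible(lam) plays the role of CONTAINMENT; compact, if supplied,
    # plays the role of COMPACTION and may tighten the upper bound.
    lam_min = 0.0
    while lam_max - lam_min > eps:
        lam = (lam_max + lam_min) / 2.0
        if feasible(lam):
            lam_max = compact(lam) if compact else lam
        else:
            lam_min = lam
    return lam_max

# Toy oracle whose smallest feasible size is 77.09:
assert abs(minimal_enclosure(lambda lam: lam >= 77.09) - 77.09) < 0.01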
8.1.1 Overview
In Section 8.2 we use binary search on length together with containment to solve a two-
dimensional strip packing problem. In Section 8.3, binary search on length is combined with
containment to find tight (axis-parallel) square enclosures for collections of items. We extend
this approach in Section 8.4 to finding tight (axis-parallel) rectangular enclosures, which is
a common preprocessing step in many layout algorithms which pack non-rectangular items.
It can also be applied to the problem of finding the best stock sheet size for a given set
1 Our current implementations do not use compaction; adding it is a subject of future work.

of items, or the best cloth width in garment manufacturing. In higher dimensions, this
technique could help determine the best packaging for a product. In Section 8.5, we show
that removing unreachable portions of a tight rectangular enclosure can yield tight convex
and nonconvex enclosures which preserve all the containment solutions for the rectangle.
Section 8.6 summarizes this chapter and states conclusions.

8.2 Strip Packing


Here we use binary search on length together with containment to solve a two-dimensional
strip packing problem. We illustrate how containment can be used for strip packing by solv-
ing an incremental layout task from the apparel industry. Figure 8.1 shows an incremental
layout task. A garment containing 8 items has already been laid out, and the task is to
add a second garment to this layout while minimizing the length of the strip. This type
of problem is becoming more important as the apparel industry moves in the direction of
smaller lot sizes and customized garments.
Figure 8.1: Incremental layout task


Let the length be λ. Figure 8.2 shows a very loose initial layout corresponding to λ =
101.24 inches. Note that only the six large items are placed. The two small ones can easily
be inserted into the final layout in a number of places, so our search on length ignores them.
Figure 8.3 shows the tightest layout obtained for this problem; λ = 77.09 inches.
One alternative to this approach is to use an improvement strategy. Improvement meth-
ods for layouts of nonconvex items are surveyed by Dowsland and Dowsland [DD]. \Meta-
heuristic" techniques such as simulated annealing, genetic algorithms, and tabu search can
be applied. However, they are unable to establish a lower bound on λ. Another improvement
strategy uses the compaction techniques of Li and Milenkovic [Li94, ML, LM93c, LM93a,
MDL92, LM93d]. Compaction is also unable to establish a lower bound. Compaction alone
can be applied in a given direction to simulate a gravitational force. However, this technique
can only produce a continuous motion of the items. It is unlikely to produce the tightest
possible strip packing, especially if the items are nonconvex. In order to produce a tight

Figure 8.2: Loose layout (multnew1: width 23.50 in, length 101.24 in, 16 pieces, efficiency
56.35%)

Figure 8.3: Final layout (multnew4: width 23.50 in, length 77.09 in, 16 pieces, efficiency
74.93%)

packing, we must allow the items to move discontinuously as λ decreases. This "jumping"
behavior can be obtained using forms of compaction based on mixed integer programming,
but current implementations are too slow to be practical.
Figure 8.4: Compaction solution (multiple: width 23.50 in, length 85.48 in, 16 pieces,
efficiency 67.58%)


We compare our solution based on binary search and containment with the solution
obtained by first compacting the layout of Figure 8.2 and then placing the two small items.
Compaction gives λ = 85.48 (see Figure 8.4), whereas λ = 77.09 for the containment
approach.
Although the binary search technique obtains a tighter layout than compaction alone,
compaction can still play an important role in this problem, and in all our enclosure prob-
lems. It can certainly help to update an upper bound quickly, thereby accelerating the
binary search, as noted in Section 8.1.

8.3 Minimal Square Enclosures


In our second problem, binary search on a scaling factor can be combined with containment to find the minimal enclosing (axis-parallel) square for a set of polygonal items, where the orientation of each polygon is fixed. One parameter suffices to characterize the square. We denote this by σ; it is equal to 1/2 the length of a side of the square. We use binary search on σ combined with containment.
We begin with a square which is guaranteed to be large enough to hold the items. Our first guess consists of a square whose side is the maximum of the height and width of the bounding box of the following shape:

$(((P_1 \oplus -P_2) \oplus P_2) \oplus \cdots \oplus -P_k) \oplus P_k$

The k items can certainly fit into this square.
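As an illustration, the initial guess can be computed as in the Python sketch below. The helper minkowski_sum(A, B) is assumed to be supplied by the caller (any standard polygon Minkowski-sum routine will do), and for simplicity the sketch treats each intermediate result as a single vertex list, although in general these sums can have multiple components.

    def negate(P):
        """Reflect a polygon, given as a vertex list, through the origin."""
        return [(-x, -y) for (x, y) in P]

    def initial_square_side(items, minkowski_sum):
        """Side length of a square certain to hold all k items."""
        # Build (((P1 (+) -P2) (+) P2) ... (+) -Pk) (+) Pk, then take the
        # larger dimension of its bounding box.
        shape = items[0]
        for P in items[1:]:
            shape = minkowski_sum(minkowski_sum(shape, negate(P)), P)
        xs = [x for (x, y) in shape]
        ys = [y for (x, y) in shape]
        return max(max(xs) - min(xs), max(ys) - min(ys))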
Because the area function area(σ) decreases monotonically as σ decreases, minimizing σ yields the minimum area square enclosing the items. The same is true if we apply a pure scaling to any polygon. Thus, the same method can be used, for example, to find the minimal regular hexagon (of fixed orientation) enclosing a collection of items.

Figure 8.5: Minimal square enclosure for 3 nonconvex items

Figure 8.5 shows the minimal square enclosure for three items. An improvement method which only allows continuous motion of the items is incapable of finding this square enclosure because there is no continuous motion of the items which can "nest" the rectangle inside the "C"-shaped item.

Figure 8.6 shows a minimal square enclosure for four nonconvex apparel pattern pieces. This runs in 122 seconds on a 50 MHz SPARCstation. Each of the four polygons has 65 vertices. The pieces are identical, except that two are rotated by π. The configuration produced by the containment algorithm is a repeating pattern. Indeed, one possible application of our minimal enclosure work is to identify good repeating patterns for identical nonconvex objects (we thank Matt Buonomano of Gerber Garment Technologies for suggesting this application). Although much work has been done on lattice packings for convex objects, there is still room for work on finding good repeating patterns for identical nonconvex objects. It would be interesting to experiment with different types of regular polygonal enclosures (e.g., square, hexagon) for a given set of identical items to see which gives the best result. Figure 8.7 shows the minimal square for five items; this takes 50 seconds on a 50 MHz SPARCstation.

Figure 8.6: Minimal square enclosure for 4 items


Figure 8.7: Minimal square enclosure for 5 nonconvex items

Implementation note: an appropriate form of compaction for the square enclosure problem is one which gravitates items toward the center of the square.

8.4 Minimal Rectangular Enclosures


We extend this approach to finding tight rectangular enclosures, which is a common preprocessing step in many current algorithms which pack non-rectangular items. It can also be applied to the problem of finding the best stock sheet size for a given set of items, or the best cloth width in garment manufacturing. For layout algorithms which build strips [AA76] or columns [MDL92] of items, this approach can help determine the best cluster. In higher dimensions, this technique could help determine the best packaging for a product.

Characterizing a rectangle requires two parameters. One could parameterize by aspect ratio a and a scaling factor σ. Unfortunately, although area(σ) is monotonic with respect to σ, it is not monotonic with respect to a. Thus, we cannot perform binary search on a. Instead, we can sample a.
We choose instead to parameterize the rectangle by height h and width w. In this case as well, the area function is only monotonic with respect to one of the parameters. We choose to sample w and perform binary search on h for each sampled value of w. It is straightforward to obtain upper and lower bounds on w. Let hmax be the maximum height of any item, and wmax be the maximum width. Then, one bound on w can be obtained by letting the width of the container be wmax, and using binary search on height combined with containment to find the minimum associated height. The other bound is achieved by letting the height of the container be hmax, and using binary search on width combined with containment to find the minimum width wmin. The minimal rectangular enclosure algorithm contains an outer loop which samples w between the two bounds. For each w, it invokes a procedure which performs binary search on h for a particular value of w.
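The nested structure of the rectangle search is sketched below in Python, again assuming a containment oracle fits(items, w, h); the sampling count and tolerance are illustrative values, not the ones used in our experiments.

    def min_height(items, fits, w, h_hi, tol=0.01):
        """Binary search on h for a fixed width w; h_hi must be feasible."""
        lo, hi = 0.0, h_hi
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if fits(items, w, mid):
                hi = mid
            else:
                lo = mid
        return hi

    def min_rectangle(items, fits, w_min, w_max, h_max, samples=20):
        """Sample w from w_max down to w_min; keep the min-area rectangle."""
        best = None
        for i in range(samples + 1):
            w = w_max - i * (w_max - w_min) / samples
            h = min_height(items, fits, w, h_max)
            if best is None or w * h < best[0] * best[1]:
                best = (w, h)
        return best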
Implementation note: there are two ways to use compaction to accelerate the minimal rectangle algorithm. First of all, downward compaction can be used within the binary search on h for fixed w. The second application of compaction relies on the fact that the minimal h as a function of w increases monotonically as w decreases. "Steps" in the function can be identified using compaction, as follows. After the minimal h is obtained for a given w, place the items within a w by h rectangle and apply leftward compaction to the right-hand boundary of the rectangle. Hopefully, compaction will move w beyond the next sample point so that a binary search is avoided. For this reason, we suggest that w be sampled in decreasing fashion.
We present three examples of finding minimal rectangles. The first example is three rectangles in Figure 8.8. This was found in seven seconds on a 50 MHz SPARCstation. The second is three nearly identical nonconvex apparel pattern pieces in Figure 8.9. This required less than five minutes. The third is three nonconvex polygons in Figure 8.10.

Milenkovic has recently suggested [MD95] an alternative method for finding minimal rectangular enclosures of multiple items. He poses the following problem: given shapes P1, P2, ..., Pk and area A, is there a rectangle of area A which contains translated copies of the shapes? He solves the problem by modifying the hybrid containment algorithm. With the modification, one can find the minimum area numerically by binary search.

Figure 8.8: Tight rectangular enclosure for 3 convex items

Figure 8.9: Tight rectangular enclosure for 3 nearly identical nonconvex items

Figure 8.10: Tight rectangular enclosure for 3 nonconvex items

8.5 Tight Convex and Nonconvex Enclosures

Figure 8.11: Tight nonconvex enclosure for 3 items


Section 8.3 observed that one-parameter binary search together with containment can not only find the minimal enclosing square, but can also find the minimal scaling factor applied to any polygonal enclosure.

In this section we introduce a different way to obtain tight convex and nonconvex enclosures. One can easily generate tight enclosures from a configuration of items within a tight square or rectangular enclosure C. For example, one can use the convex hull of the items to obtain a convex enclosure.

However, if we want the enclosure to preserve all the valid configurations of C, we need a different approach. We use the reachable region, as defined in Chapter 7, to form a container. The procedure is simple. First, find U for the items with respect to C. Next, apply restrictions to U. Then form:

$C' = \bigcup_{1 \leq j \leq k} (U_{0j} \oplus P_j)$

This C' is a container which has all the valid configurations associated with C. It reveals portions of C which cannot be reached by any of the items when they are placed simultaneously in C.
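A sketch of this construction in Python is given below, using the Shapely library only for the union step; the restricted U0j's and the items are assumed to be available as polygons, and minkowski_sum(A, B) must be supplied by the caller (Shapely has no Minkowski-sum primitive).

    from shapely.ops import unary_union

    def tight_container(U0, items, minkowski_sum):
        """C' = union over j of (U0j (+) Pj): each restricted U0j sweeps
        its item Pj over exactly the region the item can reach."""
        return unary_union([minkowski_sum(U0j, Pj)
                            for U0j, Pj in zip(U0, items)])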
Figure 8.11 shows a tight nonconvex container obtained from the rectangular enclosure of Figure 8.10. In Figure 8.11, the container is the black filled region inside the rectangle, and the outlines of the original items are shown to the right for reference. This container was formed using reachable regions, which were, in turn, obtained from the U0j's. All of the white space inside the rectangle and outside the black region is unreachable, and can be removed with no consequences. In the figure, we can easily see that three of the four corners of the rectangle are superfluous for the three given items. The convex hull of this container is a convex container which has the same valid configurations as the rectangle.

This technique can be applied to container design. In some cases, it can be used to help humans solve containment problems. Observe that the container and items of Figure 8.11 immediately suggest a legal configuration to the human eye.

8.6 Summary and Conclusions


This chapter has addressed four minimal container problems: 1) strip packing, 2) finding minimal square enclosures, 3) finding minimal rectangular enclosures, and 4) finding tight convex and nonconvex enclosures. Our general approach to solving these problems is to use binary search combined with containment. The binary search can be accelerated using compaction techniques.

Previous work on minimal enclosures in computational geometry considers only one item. Work in the layout literature allows multiple items, but is heuristic (and typically greedy) for more than two items. Our work represents the first non-heuristic multiple-item minimal enclosure work for more than two items. Our containment-based solutions to minimal container problems do not allow the items to rotate, but, in contrast to heuristics, we establish an actual lower bound on the size of the container for multiple items.

Chapter 9
Maximal Item Problems

9.1 Introduction
In this chapter we give four examples of maximal item problems: 1) given a container and a collection of items, finding maximally scaled copies of items which fit in the container, 2) given a container and a collection of ≤ kpractical items for which the sum of the item areas does not exceed the area of the container, finding a "large" sub-collection which fits in the container, 3) given a container and a collection of items for which either the sum of the item areas exceeds the area of the container or the number of items is > kpractical, finding a "large" sub-collection of ≤ kpractical items which fits in the container, and 4) problem (3) for a collection of containers.

For problem (1) we use binary search combined with containment. In problem (2) the given items cannot be scaled; they can only be included in or excluded from the set. The key to solving problem (2) is to carefully choose which item to eliminate from an infeasible subset. We reject the most "troublesome" item, which is identified using the results of the overlap elimination evaluation method of the hybrid containment algorithm. For problem (3), we randomly construct a set of items and solve problem (2). This yields a rich collection of sets which fit into the container. For each set of size n which fits, we automatically have 2^n subsets which fit and which need not be individually represented. This work provides the first practical method for constructing sets of up to ten polygons which fit in a container. For problem (4), we solve problem (3) for each container, and then apply a look-ahead assignment strategy. This is a pre-packing strategy. Containment thus allows us to transform a geometric optimization problem into a combinatorial optimization problem. Our results challenge the existing belief that pre-packing methods are impractical for layout [Arb93].
9.1.1 Overview
Section 9.2 solves problem (1). Section 9.3 solves problem (2). Section 9.4 solves problem (3). Section 9.5 solves problem (4). Section 9.6 summarizes this chapter and states conclusions.

9.2 Maximally Scaled Items
In this section we are concerned with positive homothets of items. A positive homothet of an item Pi is a scaled copy of Pi with scaling factor βi ≥ 1. We do not allow rotation of the homothets. Given a container, a set of k items which has a valid configuration in the container, and a real number τ, we pose the following general problem: find a valid configuration of positive homothets of the items in the container for which the sum of the areas of the homothets is at least τ. We call this the maximal positive homothet problem (MAX-HOMOTHET) for a container. Maximizing the value of τ yields a "tight" packing of the container.
Claim 9.2.1 The maximal positive homothet problem is NP-hard.

Proof: We reduce an instance of the NP-complete PARTITION problem to an instance of MAX-HOMOTHET. An instance P of PARTITION has a set A of items. Each a ∈ A has a positive integer size s(a). Let $\sum_{a \in A} s(a) = Q$. We build a two-component container. Each component is a rectangle of height 1 and width Q/2. For each a ∈ A, construct a rectangle of height 1/2 and width s(a)/2. Now, if |A| > 1, scale each of these rectangles by (1 + ε), where 0 < ε < 1 and

$\sum_{a \in A \setminus \{\alpha\}} s(a)(1 + \varepsilon) \leq Q,$

where α is the item in A of minimum width.

This guarantees that the scaled rectangles fit into the container. Let τ = Q. Now solve MAX-HOMOTHET. We claim MAX-HOMOTHET has a solution if and only if PARTITION has a solution. For, if PARTITION has a solution, then there exists a valid configuration of rectangles of height 1 and width s(a), for a ∈ A. This configuration has area Q, and it represents a scaling of at least 1 for each item we constructed for MAX-HOMOTHET, so it is a solution to our instance of MAX-HOMOTHET.

Now we argue that if MAX-HOMOTHET finds a solution, then that solution is a solution to PARTITION. Because we scaled by (1 + ε), no two items can be stacked vertically within a component of the container. This means that, in order to completely fill the container, each item must be restored to its full height. This yields a solution to PARTITION. □
We examine heuristics for obtaining a good, but not necessarily optimal, solution to the problem of maximizing the value of τ for MAX-HOMOTHET. One possible approach is to force all items to have the same scaling factor. This is easy to solve using binary search on the scaling factor. It has the advantage that a solution can be found either by scaling the container or the pieces. If the number of vertices in the container is much smaller than the sum of the number of vertices in the items, then it is appropriate to scale the container.

Another possibility is a greedy approach. A greedy scaling algorithm would fix the scaling factor of one item during each iteration. The fixed item would be the one whose positive homothet can have maximum area.

Each of these strategies can be appropriate under different circumstances. To illustrate this, we consider a maximal homothet problem for a collection of identical circular items and a circular container. Suppose the container has a radius of 3, and there are 7 items. It is easy to show that if each of the given circles has radius less than 0.4, then the area of the positive homothets is greatest under a greedy strategy. If, however, each circle has radius greater than 0.4, the greedy approach is inferior to applying the same scaling factor to each circle. The left portion of Figure 9.1 shows a common scaling factor applied to the circles, and the right portion shows a greedy scaling.

Figure 9.1: Scaling circles
Yet another approach is an enhancement to the common scaling factor approach, suggested by Milenkovic [Mil95a]. In this approach one maintains a set Q of items whose scaling factors are fixed, and a set S of items whose scaling factors are not yet fixed. While S ≠ ∅, the algorithm scales items in S up as much as possible, by the same scaling factor β. (The scaled items of S must fit into the container along with the items in Q.) Then, for each item of S, if it cannot be scaled by more than β, we remove it from S and add it to Q.

We present an approach to the maximal homothet problem which has most of the advantages of both the greedy and the enhanced common scaling factor approaches and is superior to both on some inputs. As in the enhancement to the common scaling factor algorithm, we maintain a set Q of items whose scaling factors are fixed, and a set S of items whose scaling factors are not yet fixed. We use the area of SSGR(U0i) as a rough measure of the "freedom" of an item Pi. We associate a parameter µ with the area of these U0i's. The value of µ is a real number whose minimum and maximum values are the minimum and maximum areas, respectively, over all SSGR(U0i), for 1 ≤ i ≤ k. For a given value of µ, we define a subset S(µ) of S: S(µ) = {Pi ∈ S | area(SSGR(U0i)) ≥ µ}. As stated, this causes both Pi and Pj to be in S(µ) if area(SSGR(U0i)) = area(SSGR(U0j)) ≥ µ. Thus, whenever an item is in S(µ), all identical copies of it are also in S(µ). To avoid this difficulty, we modify the area function slightly so that, for i ≠ j, area(SSGR(U0i)) ≠ area(SSGR(U0j)).

As µ ranges from µmax to µmin, S(µ) changes k times. For each of these k events, we scale S(µ) up as much as possible, using a common scaling factor for the items in S(µ). The maximum common scaling factor is found using binary search on a parameter β. For the maximum common scaling factor, we record the total area of the scaled (and unscaled) items. Let βbest be the scaling factor associated with the event of maximum total area, and S(µbest) be the associated subset. We scale items in S(µbest) by βbest. If an item in S(µbest) cannot be scaled by more than βbest, we remove it from S and add it to Q. We then update the set of events and repeat the process.
Pseudo-code for the complete algorithm is given below. The tolerance for the binary search on β is ε. We denote the scaling of each element of a set S by a scaling factor β as β · S.

MAX-HOMOTHET(P)
    Q ← ∅
    S ← P \ {P0}
    µmax ← max over Pi ∈ S of area(SSGR(U0i))
    µmin ← min over Pi ∈ S of area(SSGR(U0i))
    Form event list
    while (S ≠ ∅)
        βbest ← 1; µbest ← µmax
        for each event µ in the event list
            S(µ) ← {Pi ∈ S | area(SSGR(U0i)) ≥ µ}
            β ← SCALABLE(S(µ), P0, Q ∪ (S \ S(µ)))
            if (total area for β > total area for βbest)
                βbest ← β; µbest ← µ
        S(µbest) ← {Pi ∈ S | area(SSGR(U0i)) ≥ µbest}
        S(µbest) ← βbest · S(µbest)
        S(µbest) ← {Pi ∈ S(µbest) | SCALABLE({Pi}, P0, Q ∪ (S \ {Pi})) = 1}
        Q ← Q ∪ S(µbest)
        S ← S \ S(µbest)

SCALABLE(S', C, S'')
    βmax ← βinit
    βmin ← 1
    while (βmax − βmin > ε)
        βcurrent ← (βmax + βmin)/2
        if (βcurrent · S' ∪ S'' fits in C)
            βmin ← βcurrent
        else
            βmax ← βcurrent
    return βmin

Here is a simple way to establish βinit: set βinit = (area(C) / Σ over Pi ∈ S' ∪ S'' of area(Pi)) + ε.
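For concreteness, here is a direct Python rendering of SCALABLE; fits and scale are caller-supplied stand-ins for the containment test and for scaling a polygon about its reference point, and the names are ours.

    def scalable(S_free, container, S_fixed, fits, scale, beta_init, eps=1e-3):
        """Largest common factor (>= 1) by which S_free can be scaled up
        while S_free and S_fixed together still fit in the container."""
        lo, hi = 1.0, beta_init
        while hi - lo > eps:
            beta = (lo + hi) / 2.0
            if fits([scale(P, beta) for P in S_free] + S_fixed, container):
                lo = beta     # beta works: try a larger factor
            else:
                hi = beta
        return lo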
We stated above that this algorithm possesses most of the advantages of the greedy and the enhanced balanced strategies. Here we elaborate on this point. We first show how our algorithm is similar to the greedy algorithm. Our event list contains an event which corresponds to scaling up a single item. If we included an event for scaling up each Pi individually, we would be testing all the possibilities considered by the greedy algorithm. In contrast, our algorithm has only one single-item scaling event. However, the scaled item is the Pi whose U0i has the largest area. This is a rough measure of the "freedom" of an item. When all items are identical, our algorithm clearly makes the same choice as the greedy algorithm. We expect that in many cases, in practice, it will make the same choice as the greedy algorithm. Figure 9.2 gives an example of MAX-HOMOTHET behaving in a greedy fashion for three items. In this example, only one of the items is scaled up; its scaling factor is 1.97.

Figure 9.2: Scaling up items in a greedy fashion

We now show how our algorithm is similar to the enhanced balanced strategy. Our event list contains an event which corresponds to scaling up all items simultaneously by the same scaling factor. This is also done by the enhanced balanced strategy. In addition, we only fix the scaling factor for an item if it cannot be scaled up further. This is also a feature of the enhanced balanced algorithm. Figure 9.3 gives an example of MAX-HOMOTHET behaving like the enhanced balanced strategy. In this example, all of the items are scaled up by a factor of 1.09.

Figure 9.3: Scaling up items in an enhanced balanced fashion


There exist inputs for which our algorithm obtains a better solution than the greedy algorithm. A simple example is the previously mentioned circle problem, in which the container is a circle of radius 3, and each of 7 items is a circle of radius greater than 0.4. In general, if the solution found by the enhanced balanced strategy is better than the solution found by the greedy strategy for a given input, then our algorithm's solution will be better than the greedy algorithm's solution for that input.

There also exist inputs for which our algorithm obtains a better solution than the enhanced balanced algorithm. One such input consists of a circular container of radius 3 and 7 circular items, each of radius less than 0.4. Because the items are identical, our algorithm makes the same choice as the greedy algorithm in this case.

Our algorithm not only combines the power of the greedy and enhanced balanced strategies. It also considers possibilities not considered by either of these two strategies. These possibilities correspond to events which scale up j items simultaneously, for 1 < j < k. Figure 9.4 is an example in which MAX-HOMOTHET scales up two items by the same factor of 1.47; the third item is not scaled.

Figure 9.4: Scaling up two items


Implementation note: In applications where a scaling factor is involved, one can use the
subset restriction of Section 3.5 to attempt to speed up the containment algorithm.

9.3 An Elimination Technique
In this section we consider the following problem: given a container and a collection of k ≤ kpractical items for which the sum of the item areas does not exceed the area of the container, find a "large" subset of the items which fits in the container (one measure of size is the number of items; another is the total area of the items). An example of such a problem is illustrated by Figure 9.5. In this figure, we are given nine items and must find a subset of them which fits into the container. The container is depicted as a black filled region. The given items cannot be scaled; they can only be included in or excluded from the set.
We present here a heuristic approach to this problem which is based on an elimination technique, and is therefore a top-down method. The flow of the procedure is shown below, for a container C and an initial collection of items S. The advantage of this top-down approach is that a subset is found using at most k calls to containment. Of course, once a subset of size n is found, we obtain 2^n subsets for free, and need not represent them separately.

GENERATE-SUBSET(S, C)
    while S does not fit into C
        S ← S \ {CHOOSE(S)}
    return S

However, nothing is ever truly for free. The price we pay here is that we must resolve the difficult issue of how to choose an item to remove from the subset. If this is not done well, the following scenario can easily occur. Suppose we always eliminate the item of smallest area. If a small number of the largest items in the subset cannot fit together into the container, the subset produced by our procedure might only contain one or two items. This might not pack the container tightly.

Figure 9.5: Find a "large" subset which fits


Intuitively, it makes sense to ask why a given infeasible subset does not fit. If the bottleneck can be found, we can eliminate one of the items involved in the bottleneck. We know how to obtain a good answer to this question, because when the hybrid containment algorithm fails, it can give us information about the local overlap minimum it has obtained. Recall that our distance-based subdivision method of Section 4.5.4 selects a Uij to subdivide based on feedback from the evaluation step. In that case, we subdivide the Uij which is associated with the greatest overlap between two items. These two items are, in some sense, the most "troublesome". In the current application, we can eliminate one of these two items from the subset. (We choose to eliminate the smaller of the two.) This approach works well. Figure 9.6 shows a solution to the subset problem of Figure 9.5. Our approach finds a collection of seven items which fit into the container. This packing is fairly tight.

Figure 9.6: A "large" subset which fits

Implementation note: To control running time, it is possible to limit the number of hypotheses examined by the hybrid algorithm. This causes it to give up early in some cases. However, because every hypothesis evaluation produces a local overlap minimum, we still obtain useful overlap information when the algorithm terminates early.
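A sketch of the elimination loop in Python follows. The oracle try_containment stands in for the hybrid algorithm: it is assumed to return a success flag and, on failure, the most overlapping pair of items from its best local overlap minimum (this interface is ours).

    def polygon_area(P):
        """Shoelace formula for a simple polygon given as a vertex list."""
        return abs(sum(x1 * y2 - x2 * y1
                       for (x1, y1), (x2, y2) in zip(P, P[1:] + P[:1]))) / 2.0

    def generate_subset(items, container, try_containment):
        S = list(items)
        while S:
            ok, worst_pair = try_containment(S, container)
            if ok:
                return S                 # S fits: at most k oracle calls
            a, b = worst_pair            # the two most "troublesome" items
            S.remove(min(a, b, key=polygon_area))   # drop the smaller one
        return S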

9.4 Randomized Subset Construction


Here we address the following problem: given a container and a collection of items for which either the sum of the item areas exceeds the area of the container or the number of items is > kpractical, find a "large" sub-collection of ≤ kpractical items which fits in the container.

Whenever the total area of the items exceeds the area of the container, the containment problem is infeasible, and there is no need to run a containment algorithm. Whenever the number of items exceeds kpractical, it is also not appropriate to run a containment algorithm. In both of these cases, our goal is to find a subset S of the given set of items which satisfies the conditions of the problem addressed in Section 9.3. The elimination technique of Section 9.3 can then be applied to S.

One way to produce S is to use randomness. Using randomness has the advantage that we can create a diverse collection of subsets by repeating the process. We then apply the elimination technique to each subset. We select the resulting subset which has the largest area and therefore provides the tightest packing of the container. One can randomly exclude items from the collection until the subset satisfies the requirements of S. Alternatively, one can select items randomly in a bottom-up fashion. When the number of items is large (i.e., much greater than kpractical), as in our applications, the latter approach is more efficient.

We offer two examples of the success of this randomized approach. In both examples, some of the items can be reflected in the x-axis or rotated by a multiple of π/4. Our construction process takes these possibilities into account by randomly selecting orientations. Figure 9.7 portrays a collection of items and a container (filled in black). Figure 9.8 shows a collection of three items which pack the container tightly.

Figure 9.7: Items and container for randomized construction task

Figure 9.8: Randomly constructed 3-item group

Figure 9.9: Randomly constructed 5-item group

We generated the following ten groups for this container: (21 37 57) (63 39 56) (34 17) (33 46) (36 43) (34 44) (36 54) (33 65) (34) (32). Figure 9.9 gives another example. In this case, a collection of five items is placed in the container. We generated 150 groups for this container.
Pseudo-code for a group generation procedure for a container C and a collection of items P is given below. We maintain a "fit" list G and a "misfit" list M, which are both initially empty. (We did not implement the misfit list, so it is not clear whether or not it is cost-effective in general.) To generate a group g which fits, we first initialize the current group g = ∅. Then, while the total area of items in g does not exceed the container area and the number of items in g is ≤ kpractical, we randomly add items to g. Let random(P) return a randomly selected item Pi, i ≠ 0, from P. Now we attempt to disqualify g by checking if any group in the misfit list is a subset of g. If g is not disqualified, we obtain a subset of g which fits by invoking GENERATE-SUBSET from Section 9.3. We add the subset to G. Let Ω be a stopping condition for generating groups for a container.

GENERATE-GROUPS(P, C)
    G ← ∅
    while Ω not satisfied
        g ← ∅
        while Σ over Pi ∈ g of area(Pi) ≤ area(C) and |g| ≤ kpractical
            g ← g ∪ {random(P)}
        if there exists m ∈ M such that m ⊆ g
            M ← M ∪ {g}
        else
            g ← GENERATE-SUBSET(g, C)
            Add g to G
    return G
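A Python sketch of the construction loop is shown below, with the misfit-list pruning omitted (as in our implementation) and the stopping condition Ω taken to be a fixed group count; items are assumed to carry a precomputed area attribute.

    import random

    def generate_groups(items, container, container_area, generate_subset,
                        k_practical=10, n_groups=150):
        """Randomly build candidate groups, then shrink each until it fits."""
        groups = []
        for _ in range(n_groups):
            pool = list(items)
            g = []
            while (pool and len(g) <= k_practical
                   and sum(P.area for P in g) <= container_area):
                g.append(pool.pop(random.randrange(len(pool))))
            groups.append(generate_subset(g, container))
        return groups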
The question of what constitutes an effective stopping criterion is an open one. A running time/packing density tradeoff clearly exists. Our experience suggests that a good balance between these two factors can be achieved if the number of groups is inversely proportional to the capacity of the container. We have tested our group generation procedure on hundreds of examples, and verified that our stopping criterion produces tight packings. Ultimately, the best stopping condition can depend on what the groups will be used for. We therefore defer further discussion of this to the next section.

9.5 Pre-packing
Given a packing problem for a collection of containers and a collection of items, a pre-packing approach has two steps: 1) generate groups of items for each container, and 2) assign groups to containers. The goal is to place a "maximal" subset of the items (area is one possible measure). Section 9.4 offers a group generation procedure which uses containment. This solves problem (1). Problem (2) is a combinatorial optimization problem. There is a rich literature on combinatorial optimization.

The optimal solution to the assignment problem can be obtained using a matching model based on mixed integer programming. We developed one such model. A Boolean variable gkj in our model represents the assignment of group k to container j. A Boolean variable pij represents the assignment of item i to container j. The model has three types of constraints.
The first type of constraint is of the form:

$\sum_{k} g_{kj} \leq 1$

There is one constraint of this type for each container. Each constraint prevents a container from having more than one group assigned to it. The second type of constraint is:

$\sum_{j} p_{ij} \leq 1$

There is one constraint of this type for each item. Each constraint prevents an item from being assigned to more than one group. The third type of constraint forces all items in a group to be assigned together. This is of the form:

$p_{ij} - g_{kj} \geq 0$

There is one constraint of this type for each item in each group. One possible objective function is to maximize the following:

$\sum_{k,j} g_{kj}$
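For readers who want to experiment with the model, here is a sketch using the PuLP modeling library in Python (which, of course, postdates this work); the data layout and all names are ours.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    def build_model(groups, n_items):
        """groups[j] is the list of groups generated for container j;
        each group is a collection of item indices."""
        n_containers = len(groups)
        prob = LpProblem("prepacking", LpMaximize)
        g = {(k, j): LpVariable("g_%d_%d" % (k, j), cat="Binary")
             for j in range(n_containers) for k in range(len(groups[j]))}
        p = {(i, j): LpVariable("p_%d_%d" % (i, j), cat="Binary")
             for i in range(n_items) for j in range(n_containers)}
        prob += lpSum(g.values())              # one possible objective
        for j in range(n_containers):          # <= one group per container
            prob += lpSum(g[k, j] for k in range(len(groups[j]))) <= 1
        for i in range(n_items):               # item in <= one container
            prob += lpSum(p[i, j] for j in range(n_containers)) <= 1
        for j in range(n_containers):          # a group brings all its items
            for k, grp in enumerate(groups[j]):
                for i in grp:
                    prob += p[i, j] - g[k, j] >= 0
        return prob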
We originally intended to use this model under the assumption that each group contains at most three items. If we allow up to ten items in a group, it is not clear that this approach is practical, given the current state-of-the-art in mixed integer programming. Consider, for example, a typical pre-packing application in apparel marker making. A typical pants marker has approximately 50 containers. If containment produces a 10-item group for each container, this yields 2^10 subgroups for each container. The number of Boolean variables could be as large as 50,000 in this case.

At the other extreme, a greedy algorithm can be used for assignment. There are many possible approaches in between mixed integer programming and a greedy strategy. Finding the ultimate assignment technique for pre-packing is beyond the scope of this containment-oriented thesis. However, we offer an intermediate approach which runs quickly, provides look-ahead, and is quite flexible. Our results challenge the belief that pre-packing strategies are impractical [Arb93].
9.5.1 Assignment Strategy
Our assignment strategy requires three functions. The first is a comparison function f(g1, g2) which accepts two groups g1 and g2, returns g1 if g1 is preferred to g2, and returns g2 otherwise. Let C be a collection of containers. The second function is an ordering function f'(C1, C2) for containers. The third is an evaluation function h(C, g) which accepts a container C ∈ C and a group g and returns a real number ρ.

The first step of our assignment process is to sort the groups for each container according to f. Then, while some item remains unassigned, perform the following. For each container, select the first group in its ordering for which there are sufficient unassigned items. Apply h(C, g) to obtain a value ρ for this group. Select the (C, g) pair with the best ρ value, and perform this assignment.
The evaluation function h uses a greedy look-ahead strategy which requires the ordering function f' for containers. For a given container and its first group, tentatively perform this assignment. Now examine each remaining container according to the container ordering. For each container, tentatively assign the highest group (according to f) whose items are currently (tentatively) unassigned. When tentative assignments have been performed for all containers, return ρ, which is the total area of all tentatively assigned items.

In the pseudo-code below, C is a collection of containers and G is the output of GENERATE-GROUPS for all the containers. Ci ∈ C is an individual container, and Gi ∈ G is the collection of groups produced by GENERATE-GROUPS for Ci. An available group g ∈ Gi is one whose items are all available (i.e., unassigned).
ASSIGN(P, C, G)
    for each container Ci ∈ C
        sort Gi ∈ G according to f
    while some Pi ∈ P, i ≠ 0, is available
        for each container Ci ∈ C
            g ← first available group of Gi
            ρi ← h(Ci, g)
        BestI ← index of the container with the best ρ
        assign the items of g for CBestI to CBestI
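A Python sketch of the look-ahead evaluator h is given below; the data model (groups as lists of item indices, an item_area map, containers pre-ordered per f' and groups pre-sorted per f) is invented for illustration.

    def evaluate(c0, g0, order, groups, item_area, assigned):
        """Greedy look-ahead value rho of assigning group g0 to container c0."""
        taken = set(assigned) | set(g0)
        rho = sum(item_area[i] for i in g0)
        for c in order:                     # remaining containers, per f'
            if c == c0:
                continue
            for g in groups[c]:             # groups pre-sorted per f
                if taken.isdisjoint(g):     # all of g's items still free
                    taken |= set(g)
                    rho += sum(item_area[i] for i in g)
                    break                   # tentatively assign first fit
        return rho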

9.5.2 Results
To justify our claim that pre-packing can be practical, we present the results of our experiments with applying pre-packing to a trim placement problem for pants markers in apparel manufacturing (see Section 1.4 for a description of the trim placement problem). Table 9.1 presents results of our group generation and assignment procedures for 18 examples. For each example we give the name of the marker (layout), the number of trim items which must be placed, and results for three trim placement strategies. The three strategies are: 1) First-Fit, 2) our assignment strategy (called "Match" in the table), and 3) our assignment strategy followed by First-Fit on the unplaced items. For each strategy, we give the number of items left unplaced and the total area of the unplaced items.

Our First-Fit strategy is a greedy heuristic which places each item at the leftmost available position. The items are sorted by decreasing area and are placed in that order. The heuristic has two additional features which allow it to produce layouts whose waste percentage is, on the average, only about 3% larger than what experienced humans achieve. The first additional feature tries all legal flips and discrete rotations (by multiples of π/4) for each item and selects the best one. The second feature uses a variation on compaction to "bump" each placed item in a particular direction in an effort to reduce fragmentation of the available space.

There are a number of ways to interpret this data. Comparing the total number of items left unplaced, assignment plus First-Fit places 53% more items than First-Fit alone. Averaging the percentage of the items which are left unplaced shows that assignment plus First-Fit leaves only an average of 10% of the trim items unplaced, whereas First-Fit leaves 19% of them unplaced; assignment plus First-Fit is 9% better by this measure.
                      First-Fit           Match               Match+First-Fit
marker name    ntrim  nleft  area left    nleft  area left    nleft  area left
37504c            70      0  0               17  4.3e+6           2  5.9e+5
41132b            56     15  2.3e+6           7  1.6e+6           2  7.7e+5
41132b1           56     16  3.1e+6           8  2.9e+6           5  2.4e+6
45941b            56      5  7.2e+5          12  2.8e+6           4  1.4e+6
42461d            56     12  1.9e+6           9  2.7e+6           7  2.4e+6
37188c            42     10  1.9e+6           5  1.5e+6           2  1.1e+6
2275k1t001        24      5  2.6e+6           3  2.3e+6           2  2.1e+6
2275k1t016        24      3  2.3e+6           4  2.5e+6           3  2.4e+6
2275k1t016a       24      7  3.0e+6           4  2.4e+6           4  2.4e+6
2215k2t013        22      4  2.5e+6           5  2.8e+6           2  2.2e+6
37969a            18      2  3.0e+5           1  1.4e+5           0  0
37969a1           18      2  1.1e+6           2  3.2e+5           2  3.2e+5
41487a            18      6  1.3e+6           2  7.3e+5           1  5.6e+5
41487a1           18      3  4.7e+5           1  5.6e+5           1  5.6e+5
2413i5t001        18      2  1.2e+6           2  1.2e+6           2  1.2e+6
2413i5t001a       18      2  1.2e+6           3  1.5e+6           3  1.5e+6
2413i5t122        18      3  1.4e+6           3  1.5e+6           3  1.5e+6
340065233         18      0  0                1  7.2e+5           1  7.2e+5

It is also valuable to compare the area of the unplaced items. A raw comparison of total area shows that assignment plus First-Fit placed 12% more area than First-Fit. To remove any differences in scale, averaging the percentage of the area which is left unplaced shows that assignment plus First-Fit leaves only an average of 23% of the area unplaced, whereas First-Fit leaves 26% of the area unplaced; assignment plus First-Fit is 3% better by this measure. All of these comparisons show the quality of assignment plus First-Fit to be superior to First-Fit alone.
We illustrate our results by showing layouts for two examples. For the first example, Figure 9.10 shows the trim placement task, Figure 9.11 shows the layout obtained by First-Fit, Figure 9.12 shows the layout obtained by our assignment procedure, and Figure 9.13 shows the layout after assignment followed by First-Fit. Figure 9.14, Figure 9.15, Figure 9.16, and Figure 9.17 show the same progression for our second example.
Figure 9.10: Trim placement task for 37188c.mrk (width 59.75 in, length 130.28 in, 54 pieces, efficiency 76.45%)


Additional quality comes at the cost of higher running time. Our First-Fit strategy requires under five minutes, whereas our new strategy currently requires one to two hours for a 56-item example (on a 50 MHz SPARCstation). In apparel manufacturing, a human takes about 1/2 hour for an example of this size. One to two hours is within the realm of practicality. For our current implementations of containment and assignment, about 95% of the running time is consumed by group generation, and of this 95%, almost all the time is spent in the containment algorithm. This means that a dramatic increase in the speed of our hybrid containment algorithm will also have a dramatic impact on the speed of pre-packing.
Figure 9.11: First-Fit for 37188c.mrk (efficiency 86.78%)


Figure 9.12: Assignment for 37188c.mrk (efficiency 87.27%)

Figure 9.13: Assignment plus First-Fit for 37188c.mrk (efficiency 87.82%)

Figure 9.14: Trim placement task for 41132b.mrk (width 59.75 in, length 182.13 in, 72 pieces, efficiency 78.65%)

Figure 9.15: First-Fit for 41132b.mrk (efficiency 89.04%)

Figure 9.16: Assignment for 41132b.mrk (efficiency 89.67%)

Figure 9.17: Assignment plus First-Fit for 41132b.mrk (efficiency 90.41%)

Our best trim placement strategy for pants markers in apparel manufacturing is neither First-Fit nor pre-packing. It places 56 items in approximately ten minutes and its waste is in the production quality range. Neither First-Fit nor our current pre-packing implementation generates results which are consistently in the production quality range. Our best strategy employs special heuristics to place the largest items first, uses containment to construct small groups of at most three items, a naive greedy assignment strategy, and compaction-based techniques combined with First-Fit to place any remaining items.

Although it achieves excellent results for pants markers, its applicability to other domains is limited. In contrast, our pre-packing approach is completely general. It can be applied to any domain. The same is true of First-Fit.
Our best pants strategy also does not have much growth potential. That is, we do not expect that adjustments to parameters of this algorithm will significantly reduce the waste of the associated layouts. The same is true of First-Fit. However, our pre-packing approach has tremendous growth potential. Its initial results are good, and improvements in the assignment strategy or the group generation process will improve the quality of its results.

Future research will benefit from determining whether the current group generation process or the assignment strategy is the limiting factor. We have attempted to perform assignment manually using the groups we generate, and our initial results show that, surprisingly, the assignment strategy is less limiting than group generation.
There are many parts of the group generation procedure which can be altered: the random procedure for building a group, the elimination technique, the containment algorithm, and the number of groups. Our initial experiments have imposed a number of artificial limitations. Our current implementation limits the size of a subset to ten; our waste figures would probably improve with a higher limit. We also limit the number of hypotheses evaluated by the hybrid algorithm to less than 20, and we build no more than 150 groups for a container.

This returns us to the stopping condition issue raised in Section 9.4. It is impractical to generate all possible groups for each container. How do we determine if a given collection of groups is satisfactory for the assignment phase without running the assignment algorithm? This is beyond the scope of the thesis. Investigating ways to obtain a lower bound on waste for a given collection of groups is a subject of future work. Given a lower bounding technique, one could iteratively determine a bound and selectively generate more groups until the desired waste level was achieved.

9.6 Summary and Conclusions


This chapter has addressed the following maximal item problems: 1) given a container and a collection of items, finding maximally scaled copies of items which fit in the container, 2) given a container and a collection of ≤ kpractical items for which the sum of the item areas does not exceed the area of the container, finding a "large" sub-collection which fits in the container, 3) given a container and a collection of items for which either the sum of the item areas exceeds the area of the container or the number of items is > kpractical, finding a "large" sub-collection of ≤ kpractical items which fits in the container, and 4) problem (3) for a collection of containers.

For problem (1), we offer a heuristic which uses binary search combined with containment. The heuristic has most of the advantages of the greedy and the enhanced balanced strategies. It beats both of these approaches on some inputs.

The remaining three problems are all related to our pre-packing strategy. We use an elimination strategy for problem (2), which uses feedback from the hybrid containment algorithm to identify "troublesome" items. For problem (3), we randomly construct a subset of items and then solve problem (2). This process considers multiple (discrete) orientations for items. For problem (4), we find groups of items which can fit into each container, and then apply a look-ahead assignment strategy. We demonstrate what we believe is the first practical application of pre-packing. We are able to generate a sufficiently rich collection of groups for each container so that our assignment strategy, followed by a First-Fit strategy, can outperform a First-Fit strategy. This process takes one or two hours for a 56-item example, using the current implementation of our hybrid containment algorithm. There are many ways to improve our group generation and assignment processes. We believe that, in addition to demonstrating the feasibility of pre-packing, this approach has the potential to eventually match the production-quality results of our best algorithm for trim placement in apparel manufacturing.

Part III

Conclusion and Appendix

Chapter 10
Conclusion and Future Work

10.1 Conclusion
This thesis demonstrates that, although containment is NP-hard, it is fruitful to: 1) develop algorithms for containment, as opposed to heuristics, 2) design containment algorithms so that they say "no" almost as fast as they say "yes", 3) use geometric techniques, not just mathematical programming techniques, and 4) maximize the number of items for which the algorithms are practical.
Our approach to containment has a number of important features. First, we introduce a restrict/evaluate/subdivide paradigm. Second, we develop new theory and practical techniques for the operations within the paradigm. The techniques are appropriate for two-dimensional containment problems in which the items and container may be irregular (i.e., nonconvex) and have multiple components, and in which the items may be translated, but not rotated. Our techniques can be combined to form a variety of two-dimensional translational containment algorithms which follow the restrict/evaluate/subdivide paradigm. The paradigm is designed so that, unlike existing iteration-based algorithms, containment algorithms based on the paradigm are adept at saying "no", even for slightly infeasible problems. Infeasible problems occur frequently in practice.

We develop geometric restrictions which are powerful configuration space pruning techniques. They often detect infeasibility without any evaluation or subdivision. We give two algorithms which use geometric restrictions and are based on the restrict/evaluate/subdivide paradigm. We obtain the first practical running times for NP-complete two-dimensional translational containment problems for up to ten nonconvex items in a nonconvex container. Our examples are drawn primarily from the apparel industry. Each item is represented as a polygon which typically has from 4 to 100 vertices. The container is also represented as a polygon; it typically has from 100 to 300 vertices. The items and container may be nonconvex.

We demonstrate that viewing containment as a feasibility problem has many benefits for layout optimization problems. We present an effective method for finding minimal enclosures which uses containment to perform binary search on a parameter. We note that compaction can be used to accelerate the search. This work represents the first practical approach to finding minimal enclosures for multiple nonconvex items. Clustering is used often in layout, but current clustering techniques for more than two items are heuristic. Our minimal enclosure work should lead to more powerful clustering methods, and is an example of how strong "bottom-up" techniques from computational geometry can help the layout community.
We believe that some potentially promising layout techniques have been abandoned by the layout community because they lack strong bottom-up methods. The primary example of this is pre-packing. We challenge the view that pre-packing is impractical for multiple-container packing problems by generating a rich collection of groups for each container, using containment, and then applying an assignment strategy.

Our advances in containment will help automate layout for industrial processes that rely on two-dimensional packing of objects. Our results are particularly relevant to industries which deal with irregularly shaped items. This includes industries such as aerospace, ship building, apparel and shoe manufacturing, furniture production, and steel construction.

10.2 Future Work


Section 1.5 invites other researchers to exchange containment data with us. We hope this will result in a broad set of containment benchmarks which can be used to evaluate the effectiveness of different containment algorithms. In addition to soliciting containment problems from a variety of sources, we plan to automate the process of creating containment problems from existing layouts.

Here we offer directions for future research on containment problems and applications of containment to layout. Section 10.2.1 describes problems related to containment algorithms, and Section 10.2.2 discusses applications.
10.2.1 Containment
There are several directions for future work in containment. One direction is to increase the dimensionality of the containment problems we address. For instance, many important three-dimensional packing and layout problems could be solved using three-dimensional containment algorithms. Another way to increase the dimensionality of the containment problem is to allow rotation. Rotational containment adds an extra degree of freedom to each item by allowing it to rotate continuously within a range of angles. Many layout problems allow continuous rotation of items, particularly problems related to sheet metal work. Section 1.2.1 reviewed work in this area. There is clearly room for much work on rotational containment involving more than one item.

There are ways rotation could be dealt with in a heuristic fashion. The first way involves our minimal enclosure work. Recall that compaction can be used to update the upper bound. Li and Milenkovic have rotational as well as translational compaction algorithms. Rotational compaction could be applied to the translational solution as a final, post-processing step. In our hybrid translational containment algorithm, one could develop a rotational overlap reduction evaluator. Using this evaluator, the hybrid algorithm would try to produce a valid rotational configuration from each overlapping translational configuration found by the linear program.

A second direction for future work involves improving our algorithms for translational containment so that larger numbers of items (more than ten) may be placed in a practical amount of time. One way to obtain a speed-up is to apply coarse-grained parallelism. Throughout the thesis we have identified parts of our work which are natural candidates for coarse-grained parallelism.
Some layout applications represent objects using circular arcs in addition to polygons, and so enriching our underlying object representation is a worthwhile goal.

There is a considerable literature on rectangle packing and integer programming that remains untapped for general kNN containment. Integer programming requires a great deal of finesse to apply correctly, and we have been disappointed so far with results we have obtained using state-of-the-art mixed integer programming packages. Our work on combining computational geometry and linear programming is a large step towards the ultimate goal of usefully combining computational geometry and integer programming.

Exact covering problems, such as solving jigsaw puzzles, can be solved using containment algorithms. For the non-exact case, we hope to develop algorithms which decide if a collection of items can cover another item.

We plan to develop more valid and semi-valid restrictions for translational containment. We will continue to explore the question of which restriction operations allow s, the maximum number of vertices in a restricted polygon, to remain finite even when an infinite number of operations is applied.
The following problems are associated with our greedy evaluation method:

• Given polygons A and B, is there a translation t for which H((A + t) ∪ B) is convex?
• Find t which maximizes the convexity of (A + t) ∩ B.
• Solve these problems for convex A and B.
• Given a collection of polygons P, how fast can we find the translation tp for each p ∈ P such that the area of ∩ over p ∈ P of (p + tp) is maximized? This is also useful for finding the maximum area breaking polygon.

The shrinking-point heuristic which maximizes the area of intersection of translated Uij's works well unless the Uij's consist only of isolated edges and points. This case can occur when we form subproblems using a boundary restriction. In this case, we propose using a distance-based heuristic instead of an area-based one.
These problems arise in relation to subdivision:

• Subdivide in a way that encourages translations of Uij's to intersect.
• Given a set of polygons P, how fast can we find the line L which partitions P into P' and P'', such that the area of (H(P') ∪ H(P'')) ∩ H(P) is minimized?

Potential improvements to the approximate algorithm include:

• Use multiple resolution techniques.
• Invoke a linear programming-based algorithm when the convex hulls of the restricted Uij's are all inside their respective unrestricted Uij's.
• Extend the size-based subdivision used in this algorithm so it subdivides Uij's for which i ≠ 0.

We plan to evaluate the cost-effectiveness of using a partial characterization to produce k subproblems for k > 4. Our current experiments suggest that it is beneficial for k = 4, even when the subproblems are solved sequentially. We expect that, for non-identical item problems, a critical value of k exists beyond which the sequential advantage is lost. A detailed comparison of the parallel cost will be necessary in order to justify using the partial characterization for values of k larger than the critical one. Coarse-grained parallelism can be applied to our algorithm by maintaining a queue of hypotheses. Each hypothesis represents an independent containment subproblem which is created via subdivision. Whenever a processor is available, it operates on the next hypothesis in the queue. The subproblems generated by the characterization are also independent from each other, and so the hypotheses they generate are independent and can be added to the queue. We plan to construct a program which uses a queue and simulates the parallel execution of our algorithm.
10.2.2 Applications of Containment to Layout
In our work on container transformation, we currently approximate the area integral of Prob({t}, v') for t ∈ v via sampling. We seek efficient algorithms for computing this integral exactly.

Pursuing more efficient algorithms for finding the exact MAAPR is certainly one direction for future work. We are interested in faster algorithms or lower bounds for these polygon types:

• Orthogonally convex
• Horizontally convex
• Vertically separated, horizontally convex

Another direction of practical importance is to find a fast approximate MAAPR algorithm. Such an algorithm would be very helpful in our applications.
There are many open problems associated with the concept of inking sets. The problem of minimizing the number of vertices while maximizing the minimum area of inking set pieces is open, even for convex polygons. Given a collection of polygons, the problem of finding a polygon representing the maximal common subset of translations of these polygons is useful not only for inking sets, but also for the subset substitution restriction.

Future work on our minimal enclosure applications includes implementing our idea of using compaction to accelerate binary search. We also plan to experiment with using our minimal enclosure algorithms for finding repeating patterns of irregular objects.

Future work on maximal item problems includes improving our algorithm for solving the maximal homothet problem. Our pre-packing work can be improved by obtaining lower bounds for assignment and feeding these results back into the group generation process.

Appendix A
Containment File Format

This appendix describes the simple file format we propose for two-dimensional polygonal containment data. The format first describes the polygonal container and then each of the polygonal items to be placed. For each polygon, the number of vertices precedes the list of x and y vertex coordinates. Data for each vertex appears on a separate line. The vertices are listed in counterclockwise order, so that the inside of the polygon is to the left of each directed edge.

The small example below is for a 3CC problem for which our approximate algorithm finds a valid configuration in one second on a 28 MHz SPARCstation. In this example, the container is a triangle, and each of the three items is also a triangle.
container vertices:3
0.0 30.0
200.0 30.0
100.0 200.0
items:3
item vertices:3
0.0 0.0
0.0 100.0
-80.0 -60.0
item vertices:3
0.0 0.0
80.0 -60.0
0.0 100.0
item vertices:3
0.0 0.0
-70.0 -60.0
70.0 -60.0
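As a hedged illustration (the Point and Polygon types and the readContainmentFile routine are hypothetical names, not part of our software), a minimal C++ reader for this format might look like the following; it assumes well-formed input.

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Point   { double x, y; };
struct Polygon { std::vector<Point> vertices; };

// Read the integer after the ':' in a header such as "container vertices:3".
static int countAfterColon(const std::string& line) {
    return std::stoi(line.substr(line.find(':') + 1));
}

// Read n vertices, one "x y" pair per line, in counterclockwise order.
static Polygon readPolygon(std::istream& in, int n) {
    Polygon p;
    for (int i = 0; i < n; ++i) {
        Point v;
        in >> v.x >> v.y;
        p.vertices.push_back(v);
    }
    return p;
}

// Parse a whole containment file: the container first, then each item.
bool readContainmentFile(std::istream& in, Polygon& container,
                         std::vector<Polygon>& items) {
    std::string line;
    if (!std::getline(in, line)) return false;      // "container vertices:N"
    container = readPolygon(in, countAfterColon(line));
    if (!(in >> std::ws) || !std::getline(in, line)) return false;  // "items:K"
    int k = countAfterColon(line);
    for (int i = 0; i < k; ++i) {
        if (!(in >> std::ws) || !std::getline(in, line)) return false;
        items.push_back(readPolygon(in, countAfterColon(line)));
    }
    return static_cast<bool>(in);
}
```

Reading the example above through an std::ifstream yields one triangular container and three triangular items.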