
Visibility-Guided Simplification

Eugene Zhang and Greg Turk

GVU Center and College of Computing, Georgia Institute of Technology

e-mail: {zhange,turk}@cc.gatech.edu

ABSTRACT

For some graphics applications, object interiors and hard-to-see regions contribute little to the final images and need not be processed. In this paper, we define a view-independent visibility measure on mesh surfaces based on the visibility function between the surfaces and a surrounding sphere of cameras. We demonstrate the usefulness of this measure with a visibility-guided simplification algorithm.

Mesh simplification reduces the polygon counts of 3D models and speeds up the rendering process. Many mesh simplification algorithms are based on sequences of edge collapses that minimize geometric and attribute errors. By combining the surface visibility measure with a geometric error measure, we obtain simplified models with improvement proportional to the amount of low visibility regions in the original models.

Keywords: Visualization, Visibility, Mesh Simplification, Rendering

1 INTRODUCTION

Visibility is important and has been well studied in computer graphics. In general, visibility refers to determining which surfaces are unoccluded from certain camera positions in an environment.

In this paper, we are primarily interested in describing how some surface points are difficult to see due to object self-occlusions. For instance, the interiors of an object are invisible from any outside viewpoint. Some exterior regions are more difficult to see than others. To describe this view-independent property, we define a surface visibility measure which depends on the visibility function between the surface and a surrounding sphere of cameras (the camera space). To calculate the surface-camera visibility function, we render the object from a dense set of camera poses in the camera space. For a point on the surface, the visibility measure is the percentage of the camera space from which this point is visible, where the camera space is weighted by the dot product between the point's surface normal and the viewing directions. We use this measure to help mesh simplification.

Mesh simplification algorithms reduce the polygon count of a model while maintaining its overall shape and appearance. This is important for reducing the model storage cost and subsequent processing time. Many mesh simplification algorithms are based on a sequence of edge collapses. At each step, one edge is collapsed into a vertex, reducing the polygon count by two. The sequence of edge collapse operations is designed to minimize geometric and appearance errors. In our study, we observe that many CAD models and medical imaging data sets contain large interiors and concavities, which contribute little to the final images from any outside viewpoint when rendered as opaque objects. In these regions, our visibility-guided algorithm allows greater geometric and attribute errors.

The remainder of the paper is organized as follows. In Section 2 we review existing methods for visibility calculation and mesh simplification algorithms. We present the definition of our surface visibility measure in Section 3 and then describe how we calculate this measure in Section 4. In Section 5 we present our visibility-guided simplification algorithm, which combines the surface visibility measure with a well-known geometric measure, the quadric measure. Section 6 provides a summary and discusses some future work.

2 PREVIOUS WORK

In this section, we review previous work in visibility and mesh simplification.

2.1 Visibility

Visibility issues appear in many aspects of graphics. Here, we review some areas that are related to our work.

Visible Surface Determination Problem: The Visible Surface Determination Problem (VSD), also called the Hidden Surface Removal Problem, is the task of deciding which parts of the opaque objects in a scene are visible from a given viewpoint. In their 1974 survey [22], Sutherland et al. classify existing VSD algorithms into those that perform calculations in object space, those that perform calculations in image space, and those that work partly in both (list-priority). They further point out that these algorithms differ in how they perform sorting and what local coherence information is used to reduce the recalculation cost. The local coherence information used may include face coherence [20, 24], scan-line coherence and edge coherence [2, 25], depth coherence [24], etc. Catmull develops the depth-buffer or z-buffer image-precision algorithm, which uses depth coherence [3]. Myers later incorporates the depth-buffer algorithm into the scan-line algorithm [16]. Fuchs et al. use BSP trees to establish scene visibility [7]. Appel [1], Weiler and Atherton [27], and Whitted [28] develop ray tracing algorithms, which transform the VSD into ray-object intersection tests.

Aspect Graph: The visibility of a static scene often remains constant if viewpoints are restricted to be inside a limited region. This has led Koenderink and Van Doorn to propose the aspect graph to record where visibility changes occur [13]. In this graph, each node represents a general view as seen from a region of viewpoint space. Two neighboring nodes are linked by an edge to represent a visual event (visibility change) when the viewpoint moves from one region to another. Algorithms have been developed for computing the aspect graph for 3D convex objects using orthographic views [18] and perspective views [21, 26]. Gigus et al. propose algorithms for computing the aspect graph for 3D polyhedra under orthographic views [9]. Unfortunately, computing aspect graphs is expensive. For general polyhedra with n supporting planes, the complexity of computing the aspect graph using orthographic views is O(n^6). One often uses sampling techniques to generate approximations [6, 10]. However, the sampling rate is difficult to set.

2.2 Mesh Simplification

Mesh simplification, too, is a well-studied problem in computer graphics. Since the literature in this area is extensive, we review only a few of the most relevant methods.
Hoppe proposes the framework of the progressive mesh representation [11] to address the issues of progressive transmission, selective refinement and geomorphing. Under this scheme, the polygon count is reduced by a sequence of edge collapses. All edges are put on a priority queue, which is sorted by some error measure. At each step, the edge with the least error is collapsed into a single vertex, thereby removing two polygons. The location of the new vertex and the choice of the error measure are the keys to determining the quality of the simplified models.
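This priority-queue-driven edge-collapse framework, which both the quadric-based method and our visibility-guided method build on, can be summarized by the following sketch. It is illustrative only: mesh, edge_error, and collapse are hypothetical stand-ins for a concrete mesh library, not our implementation.

    import heapq
    import itertools

    def simplify(mesh, edge_error, collapse, target_polygons):
        """Greedy edge-collapse simplification driven by a priority queue."""
        tie = itertools.count()   # tie-breaker so edges with equal error are never compared
        heap = [(edge_error(e), next(tie), e) for e in mesh.edges()]
        heapq.heapify(heap)

        while heap and mesh.polygon_count() > target_polygons:
            err, _, e = heapq.heappop(heap)
            if not mesh.is_valid(e):          # edge already removed by an earlier collapse
                continue
            # Collapsing one edge into a vertex removes two triangles; the edges
            # around the new vertex get fresh errors and are re-pushed.
            for affected in collapse(e):
                heapq.heappush(heap, (edge_error(affected), next(tie), affected))
        return mesh

A full implementation would also detect and discard stale queue entries whose error has changed since they were pushed.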
Ronfard and Rossignac [19] measure the geometric errors by using the maximum distance of the new vertex location to the supporting planes of the original edge's 1-neighborhood. Garland and Heckbert use similar geometric information, namely the quadric measure [8], as their error measure. In this measure, determining the location of the new vertex is internally linked to the error measure, which is defined as the squared sum of the distances of the new vertex location to the supporting planes that contain at least one triangle incident to the edge. The quadric measure is a geometry-based error. Hoppe later extends this to handle attributes such as colors and texture coordinates [12]. The original quadric measure does not use visibility information.

Lindstrom and Turk define a different type of error measure, namely the image-driven measure [15]. Instead of measuring the geometric deviation caused by the edge collapse operations, they measure the image deviation, that is, the visual differences between the model before and after a certain edge collapse. By creating images of both the original and partially simplified models from a number of different camera poses (such as the centers of the faces of an icosahedron), the method determines the order of the edges based on the visual difference that these edges contribute. This measure indirectly takes into account which portions of an object are visible, and it greatly reduces the number of polygons used to represent interior details. However, the processing time required for calculating the image deviation is substantially more than that for the geometric deviation.

3 VISIBILITY MEASURE DEFINITION

Due to scene occlusions, a point p is not always visible from a camera pose c. Figure 1 illustrates this object-camera visibility function. In the left image, an object M consisting of line segments is observed from inward-looking orthographic cameras on a surrounding circle S with infinite radius. The center of S coincides with the center of the bounding box of M. Note that, to draw both M and S in the same image, their relative sizes are distorted. The cameras are drawn as small line segments pointing toward the center of the circle. p is a point on M. c1, c2 and c3 are camera poses on the circle. p is visible from both c1 and c2, and invisible from c3 due to self-occlusion. c1 has a head-on view of the region near p, while c2 views the same region at a poor angle.

Figure 1: An object (left) and its visibility diagram F(p, c) (right). In the visibility diagram, white indicates that surface point p and camera pose c are mutually occluded. Green indicates that they are mutually visible. The intensity of greenness is related to the dot product between the surface normal at p and the viewing direction of c. Head-on views are in lighter greens and side views are in darker greens. For instance, point p is visible from both c1 and c2, but occluded from c3. Furthermore, c1 has a better viewing angle for the surface near p than c2 does.

The right image in Figure 1 is a visualization of F(p, c), which we call the visibility diagram of M. The x-axis represents points on the perimeter of the shape, as traversed counter-clockwise. The y-axis represents camera poses on the surrounding circle, also traversed counter-clockwise. In the visibility diagram, the color at point (p, c) encodes the visibility between point p on the object M and camera pose c. Green means they are mutually visible, and white means they are mutually occluded. The intensity of greenness is proportional to the dot product between N(p) and R(c), the surface normal at p and the viewing direction of c, respectively. Lighter green indicates better views. The overall visibility of p from outside views is defined as:

    V(p) = \frac{\int_S F(p, c)\,(R(c) \cdot N(p))\,dc}{\int_S (R(c) \cdot N(p))\,dc}    (3.1)

V(p) measures the percentage of the camera space that can "see" p, giving more weight to views at better angles. The portion of S over which we integrate is actually a half-sphere, based on the surface normal at p. V(p) is between 0 and 1. For example, any point on a convex object achieves the maximum value. Using the terms from radiosity [5] and under the assumption that there is no scattering or energy loss during light transport, F(p, c)(R(c) · N(p)) is the form factor between an infinitesimal surface around p and an infinitesimal surface around c, i.e., the fraction of light which leaves c that reaches p. Therefore, V(p) measures the fraction of light leaving a sphere infinitely far away from the object that can directly reach p. Furthermore, V(p) is related to the measure used by Nooruddin and Turk [17] for surface interior/exterior classification and visualization. For their applications, their measure is a binary measure and all camera views are weighted equally.

Figure 2 shows the measure V(p) for some of our test models. The color coding is as follows: 0-1/3 (interpolating between white and red), 1/3-2/3 (interpolating between red and yellow), 2/3-1 (interpolating between yellow and green). The overall visibility of mesh M is defined as:

    V(M) = \frac{\int_M V(p)\,dp}{\int_M dp}    (3.2)

This measure is 1 for convex objects. Table 1 shows the overall surface visibility of some test models. The Stanford Bunny model has a large convex body with the ears and other parts attached, so this model has a high overall visibility. The Motor and Blade models contain large numbers of interior polygons, resulting in a low overall visibility.

4 VISIBILITY MEASURE CALCULATION

Calculating the exact mesh visibility function for large models is computationally prohibitive. Nooruddin and Turk [17] have used a sampling approach in both the object space M and the camera space S for interior/exterior classification. Here, we use a similar approach. First, we subdivide the mesh surface until all triangles are small. Next, we choose a finite number of camera positions that are evenly distributed in the camera space. Finally, we render M from each of these camera positions with the help of graphics hardware to quickly compute a table of visibility between the camera positions and the surface triangles. This table is a discrete version of the visibility diagram (Figure 1).

Figure 2: This image illustrates the visibility measures for some test models: the Utah Teapot, a foot bone model, Happy Buddha, Dragon,
and a CAD model of three interlocking tori.

To obtain uniformly spaced camera poses, we construct a tessellation of the camera space S by subdividing the faces of an octahedron three times and placing sample cameras at every vertex of the resulting mesh. We assume a camera pose v sees a triangle t if and only if at least part of t is visible from v.
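A minimal sketch of this camera sampling is given below, assuming unit direction vectors are all that is needed per camera (NumPy for convenience; the vertex counts match the 6, 18, 66, 258, 1026 and 4098 camera patterns discussed later in this section). It is an illustration, not our implementation.

    import numpy as np

    def octahedron_cameras(levels=3):
        """Evenly spaced directions from a subdivided octahedron.
        levels = 0..5 gives 6, 18, 66, 258, 1026, 4098 vertices."""
        verts = [np.array(v, dtype=float) for v in
                 [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
        faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
                 (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

        for _ in range(levels):
            cache = {}
            def midpoint(i, j):
                key = (min(i, j), max(i, j))
                if key not in cache:
                    m = verts[i] + verts[j]
                    verts.append(m / np.linalg.norm(m))   # project midpoint onto the sphere
                    cache[key] = len(verts) - 1
                return cache[key]
            new_faces = []
            for a, b, c in faces:
                ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
                new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
            faces = new_faces

        # Unit vectors; for an inward-looking camera placed at a vertex, the viewing
        # direction R(v) points back toward the object center.
        return np.array(verts)

    cameras = octahedron_cameras(3)   # three subdivisions: the 258-camera pattern
    assert len(cameras) == 258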
We now adapt all our definitions in Section 3 as follows. F(t, v) is defined as 0 if t is entirely invisible from v, and 1 otherwise. N(t) is the normal of t, and R(v) is the viewing direction of v. We assume the tessellation of the camera space is even; thus, area(v) is the same for all v.

    V(t) = \frac{\sum_{v \in S} F(t, v)\,(R(v) \cdot N(t))\,\mathrm{area}(v)}{\sum_{v \in S} (R(v) \cdot N(t))\,\mathrm{area}(v)}    (4.3)
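As a concrete reading of Equation 4.3, the sketch below computes V(t) for every triangle from a precomputed visibility table. F, normals, to_camera, and area are hypothetical arrays rather than the data structures of our implementation, and the sign convention (directions pointing from the object toward the cameras, with negative dot products clamped to zero) is an assumption made here to realize the half-sphere restriction mentioned in Section 3.

    import numpy as np

    def triangle_visibility(F, normals, to_camera, area):
        """Discrete visibility measure of Equation 4.3 (a sketch).

        F         -- (T, C) table of 0/1 values: F[t, v] = 1 if triangle t leaves
                     at least one pixel in the ID image rendered from camera v
        normals   -- (T, 3) unit triangle normals N(t)
        to_camera -- (C, 3) unit directions from the object toward each camera
        area      -- (C,) area associated with each camera sample (constant for
                     an even tessellation)
        """
        cosines = normals @ to_camera.T                  # (T, C) dot products
        weights = np.clip(cosines, 0.0, None) * area     # keep only the front half-sphere
        seen = (F * weights).sum(axis=1)                 # numerator of Eq. 4.3
        total = weights.sum(axis=1)                      # denominator of Eq. 4.3
        return np.divide(seen, total, out=np.zeros_like(seen), where=total > 0)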
Here, we make the assumption that the visibility between a triangle t and a view triangle v is constant across both t and v. In general this is not true. However, when the triangles in both M and S are small enough, the error introduced in the above formula becomes negligible. From each viewpoint v, we render the mesh object with a color encoding of the polygon ID using graphics hardware. Then we record F(t, v) = 1 if at least one pixel has color t in the color buffer.
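The ID rendering relies on packing a polygon index into an RGB color and recovering it from pixels read back from the frame buffer. A minimal sketch of that bookkeeping, assuming an 8-bit-per-channel buffer and an arbitrarily chosen reserved background color (both assumptions, not details of our implementation):

    def id_to_rgb(poly_id):
        """Pack a polygon index into an RGB triple (supports up to 2**24 polygons)."""
        return ((poly_id >> 16) & 0xFF, (poly_id >> 8) & 0xFF, poly_id & 0xFF)

    def rgb_to_id(r, g, b):
        """Recover the polygon index from a pixel read back from the color buffer."""
        return (r << 16) | (g << 8) | b

    BACKGROUND = (255, 255, 255)   # reserved color for "no polygon" (an assumption)

    def mark_visible(F, pixels, camera_index):
        """Set F[t, camera_index] = 1 for every triangle whose ID appears among
        `pixels`, an iterable of (r, g, b) tuples from one rendering."""
        for r, g, b in pixels:
            if (r, g, b) == BACKGROUND:
                continue
            F[rgb_to_id(r, g, b), camera_index] = 1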
This approach has two problems that need attention: triangles that are too large, and triangles that are too small or sliver-shaped. Large triangles increase the error since F(t, v) is far from being constant across them. Small triangles result in aliasing, the so-called popping effect: when they are viewed from rather poor angles, depending on how the scan conversion is done in the rendering system, the presence of pixels for a particular triangle in the image is not consistent.

To handle triangles that are too large, we subdivide them such that the length of the longest edge of each triangle after subdivision is limited by a threshold l. l is calculated based on the aspect ratio and worst viewing angle (we use 75 degrees, where the dot product between the surface normal and light rays is 0.25). To perform the subdivision, we add vertices to the middle of any edge that is longer than l. For each triangle, based on the number of new vertices added to its edges, we divide it into sub-triangles. This process is repeated until all mesh edges are shorter than l.
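The sketch below illustrates the idea of bounding edge lengths by repeated splitting. For brevity it works on a triangle soup and bisects the longest edge of one triangle at a time, which is simpler than the conforming per-edge scheme described above but yields the same bound on edge length; it is an illustration, not our implementation.

    import numpy as np

    def subdivide_soup(triangles, l):
        """Split triangles until every edge is no longer than l.
        `triangles` is a list of (3, 3) arrays, one row per vertex."""
        out, stack = [], list(triangles)
        while stack:
            tri = stack.pop()
            lengths = [np.linalg.norm(tri[(i + 1) % 3] - tri[i]) for i in range(3)]
            i = int(np.argmax(lengths))               # index of the longest edge
            if lengths[i] <= l:
                out.append(tri)                       # all edges are short enough
                continue
            a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
            m = 0.5 * (a + b)                         # midpoint of the longest edge
            stack.append(np.array([a, m, c]))         # two children replace the parent
            stack.append(np.array([m, b, c]))
        return out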
While mesh subdivision removes large triangles, it maintains or even creates small and sliver triangles, which are subject to sampling problems. This affects the accuracy of F(t, v) more for the side views than for the head-on views. Since V(t) is defined to favor the head-on views, it is less sensitive to these sampling problems. Nonetheless, we alleviate the situation by storing a depth buffer along with the color buffer for each camera pose. To determine F(t, v) for a small triangle t, we compare the depths of its vertices to the depths of their respective neighbor pixels. Even without pixels in the color buffer indicating that t is visible, our algorithm considers it visible if the depth at any of its vertices is within a tolerance of the depth of a neighbor pixel. With this method, we are able to use a relatively low resolution (480 × 480) during the rendering process.
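This depth-based rescue test can be sketched as follows. The project helper (mapping a vertex to pixel coordinates plus depth for a given camera), the per-camera depth_buffer, the 3 × 3 pixel neighborhood, and the tolerance value are all assumptions made for illustration.

    import numpy as np

    def rescue_small_triangle(vertices, camera, project, depth_buffer, tol=1e-3):
        """Return True if any vertex of the triangle lies within `tol` of the
        depth stored near its projected pixel, i.e. the triangle should be
        counted as visible even though no pixel carries its ID."""
        h, w = depth_buffer.shape
        for v in vertices:
            x, y, depth = project(v, camera)          # pixel position and depth
            x, y = int(round(x)), int(round(y))
            x0, x1 = max(x - 1, 0), min(x + 2, w)     # clamp a 3x3 neighborhood
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            if np.any(np.abs(depth_buffer[y0:y1, x0:x1] - depth) < tol):
                return True
        return False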
The accuracy of our algorithm depends on the sampling pattern in the camera space. In general, more cameras means more accurate results. On the other hand, more cameras means longer calculation time. Figure 3 shows the relation between the visibility errors and the number of cameras used for the Motor and Happy Buddha models. Here, we subdivide an octahedron up to 5 times to generate 6 camera sampling patterns, namely 6, 18, 66, 258, 1026 and 4098 cameras evenly spaced on a sphere.

[Figure 3 plot: visibility error versus number of cameras for the Motor (140000 polygons) and Buddha (1087123 polygons) models.]

Figure 3: This diagram shows the tradeoff between the visibility errors and the number of cameras (calculation time) for the Motor and the Buddha models. The visibility is calculated with 6, 18, 66, 258 and 1026 cameras, and is compared to the visibility calculated with 4096 cameras. The X-axis represents the number of cameras used for visibility calculation, and the Y-axis represents the visibility error.

Surface Visibility Measure

Model          Size (# polygons)   Visibility   Processing Time (mm:ss)
Bunny          69,451              0.958        6:07
Teapot         28,744              0.890        4:58
Dragon         871,414             0.804        24:38
Buddha         1,087,416           0.764        30:14
Locking Tori   9,708               0.728        4:35
Foot Bone      8,790               0.724        4:33
Blade          1,688,933           0.466        58:19
Skull          1,169,608           0.444        37:44
Motor          140,113             0.264        10:56

Table 1: This table shows the surface visibility measure for several models along with the processing time. The timing measurements were taken on an SGI Octane with a 195 MHz CPU.

Assuming the visibility is accurate using 4098 cameras, we obtain the visibility errors for the other sampling patterns by calculating the area-weighted average of the visibility difference. As one can see, the visibility errors quickly converge, and we find that 258 cameras seem to be a good compromise between time and accuracy for all test models.
5 VISIBILITY-GUIDED SIMPLIFICATION

5.1 Algorithm

We observe that many applications do not require processing invisible and low visibility concavity regions. We can be less concerned with the geometry errors at those parts of the surface. To put this into practice, we combine our surface visibility measure with a well-known geometric error measure called the quadric measure [8], which is defined for each edge in the mesh object. Let e be the next edge to collapse into a point v, represented in homogeneous coordinates as (x_0, y_0, z_0, 1)^T. Let T be all the triangles in M that are adjacent to at least one vertex of e, i.e., T is the union of the 1-ring neighborhoods of both vertices of edge e, allowing the triangles in both neighborhoods to be counted twice. Each triangle t has a plane equation

    A_t x + B_t y + C_t z + D_t = 0    (5.4)
The quadric measure is then defined as

    E_q(e) = \sum_{t \in T} (\mathrm{distance}(v, t))^2    (5.5)

i.e.,

    E_q(e) = \sum_{t \in T} (A_t x_0 + B_t y_0 + C_t z_0 + D_t)^2 = v^T Q v    (5.6)

where

    Q = \sum_{t \in T} \begin{pmatrix} A_t^2 & A_t B_t & A_t C_t & A_t D_t \\ A_t B_t & B_t^2 & B_t C_t & B_t D_t \\ A_t C_t & B_t C_t & C_t^2 & C_t D_t \\ A_t D_t & B_t D_t & C_t D_t & D_t^2 \end{pmatrix}    (5.7)
To combine our visibility measure with the quadric measure, we note that the quadric measure is the sum of the squared distances from a point to many planes. If edge e is adjacent to some triangles with low visibility, then the distances from v to those planes make less visual impact than the distances from v to high visibility triangles when the geometric errors are the same. Our visibility-guided error measure is defined as

    E_v(e) = \sum_{t \in T} (\mathrm{distance}(v, t)\,V(t))^2    (5.8)

E_v(e) guides which edges are collapsed, that is, this measure is used to order the priority queue.

Recalling the meaning of V(t) as the weighted sum of dot products between a triangle's normal and the incoming ray directions, our visibility-guided error measure for one triangle is the weighted average projected distance over all viewing directions. This means edges with higher geometric errors can be chosen for removal if they are situated in extremely low visibility regions, such as interiors and creases. We use the original quadric matrix to select the best new vertex location for the collapsed edge, as described in [8]. For computational purposes, our measure is written as

    E_v(e) = v^T Q_v v    (5.9)

where

    Q_v = \sum_{t \in T} \begin{pmatrix} A_t^2 & A_t B_t & A_t C_t & A_t D_t \\ A_t B_t & B_t^2 & B_t C_t & B_t D_t \\ A_t C_t & B_t C_t & C_t^2 & C_t D_t \\ A_t D_t & B_t D_t & C_t D_t & D_t^2 \end{pmatrix} V(t)^2    (5.10)
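Equation 5.10 only changes the per-triangle weight, so a sketch can reuse the quadric helper above; the per-triangle visibilities V(t) come from Section 4 (again a sketch under the same assumptions, not our implementation):

    import numpy as np

    def visibility_weighted_quadric(planes, visibilities):
        """Eq. 5.10: each plane's outer product is scaled by V(t)^2, so
        low-visibility triangles contribute little to the edge error."""
        return sum(V * V * np.outer(p, p) for p, V in zip(planes, visibilities))

    def visibility_guided_error(planes, visibilities, v):
        """Eq. 5.9: E_v(e) = v^T Q_v v, equal to sum_t (distance(v, t) * V(t))^2."""
        return float(v @ visibility_weighted_quadric(planes, visibilities) @ v)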
5.2 Results

To compare the quality of the two simplification methods, we select the following image-based root-mean-squared (RMS) error, based on the method of Lindstrom and Turk [15]. For the original model M_0 and the simplified model M_i, we render both models from the twenty vertices of a surrounding dodecahedron using flat shading. The RMS "image" error between the images is calculated as:

    \mathrm{RMS}(M_i, M_0) = \sqrt{\sum_{n=1}^{20} D_n^i}    (5.11)

Here, D_n^i is the squared sum of the pixel-wise intensity differences between the n-th images of M_i and M_0. Essentially, we are evaluating how similar the original and simplified models appear when rendered.
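Equation 5.11 can be read as the following sketch, where render(model, viewpoint) returning a grayscale image as a NumPy array and the list of twenty dodecahedron viewpoints are hypothetical helpers:

    import numpy as np

    def rms_image_error(original, simplified, render, viewpoints):
        """Eq. 5.11: accumulate D_n, the squared sum of pixel-wise intensity
        differences, over all viewpoints and take the square root."""
        total = 0.0
        for vp in viewpoints:                        # e.g. the 20 dodecahedron vertices
            img0 = render(original, vp).astype(float)
            img1 = render(simplified, vp).astype(float)
            total += np.sum((img1 - img0) ** 2)      # D_n for this viewpoint
        return float(np.sqrt(total))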
For each of the six test models, we select seven target polygon counts and apply both the quadric-based (QB) method [8] and our visibility-guided (VG) method. Figure 4 shows the comparisons between the image errors and the geometric errors, the latter obtained using the Metro program [4], for simplified models of the same polygon counts. Results obtained with the QB method are painted using blue lines, and those for the VG method are painted using red lines. The image errors are painted using regular lines with diamonds, and the geometric errors are painted using wider lines. This figure shows that the VG method in general generates smaller image errors but incurs greater geometric errors than the QB method. The greater geometric errors occur in low visibility regions.

Figure 5 shows the visual comparisons for the Motor model, a car engine model with 140,113 polygons (middle). This model has a large number of invisible polygons with high curvatures occluded by its main exterior feature, a box. The exterior also contains several mechanical parts. For the simplified models (QB on the left, and VG on the right) with the same polygon count of 15,000, the VG method has 51.77% less image error than the QB method. In fact, the image error for the VG method at a polygon count of 12,000 is about equal to that of the QB method at 27,000 polygons (Figure 4). This is not surprising, since the quadric errors for the original Motor model's interior edges are higher than those of most exterior features. When the regions with low quadric errors have been simplified, the QB method starts simplifying the exterior features. The VG method simplifies the interior regions despite their relatively high quadric errors.

Also shown in Figure 6 is the Blade model, created from CT data of an actual turbine blade (middle: original model with 1,688,933 polygons, left: QB method, and right: VG method, both with 15,000 polygons). It also contains a large number of interior polygons. Again, the VG method performs better than the QB method. Note the difference at the letters on the base (bottom row) and the features along both sides of the blade (both rows).

Medical imaging data sets often contain large interiors and concavities. The Skull model, created from 124 CT scan slices, is shown in Figure 7 (middle: the original model with 1,169,608 polygons, left: QB method, and right: VG method, both with 10,000 polygons). To remove the contour lines that are artifacts of the reconstruction algorithm, we performed Taubin's smoothing method [23] before simplification. This model does not have many invisible polygons, but it has a large number of polygons with low visibility. The VG method maintains better triangulations than the QB method around the regions of the teeth and their roots, as well as the forehead.

[Figure 4 plots: six panels (Motor, Skull, Blade, Locomotive, Dragon, Buddha), each showing Image(QB), Image(VG), Geom(QB) and Geom(VG) curves against the number of polygons.]

Figure 4: Image error (diamond) and geometric error (wider line) comparisons between the quadric simplification method (blue) and our visibility-guided method (red) at 7 levels in the progressive meshes. The X-axis represents the number of polygons at each level. The left Y-axis represents the image error and the right Y-axis represents the geometric error. The Motor, Skull, Blade and Locomotive models show significant improvement due to their large percentage of interiors. The Buddha and Dragon models show small but noticeable improvement due to their large percentage of low visibility regions.

Simplification comparisons (running time in minutes:seconds)

Model        Size (# polygons)   Visibility   Visibility Calc. Time   Simplification Time (quadric and visibility)   Image Error Reduction with VG vs. QB (flat shading)
Motor        140,113             0.264        10:56                   0:24                                           51.77%
Skull        1,169,608           0.444        37:44                   4:18                                           11.51%
Blade        1,688,933           0.446        58:19                   6:36                                           10.22%
Locomotive   183,450             0.538        11:14                   0:32                                           7.41%
Dragon       871,414             0.804        24:38                   2:31                                           2.44%
Buddha       1,087,416           0.764        30:14                   3:06                                           2.12%

Table 2: This table shows the comparison between the six test models, with their polygon counts, visibility measure, the calculation time for the visibility measure, the processing time for simplification, and the image error reduction. The timing measurements were taken on an SGI Octane with a 195 MHz CPU.

To understand the range of applicability of our method, we tested it against models that have a relatively small amount of low visibility regions. The Buddha model and the Dragon model (not shown), created using range scans and surface reconstruction, belong to this category. As shown in Figure 4, the visibility-guided method consistently performs better, although the improvement is smaller than for the other models. Figure 8 shows the visual comparisons for the Buddha model (bottom middle: original model with 1,087,416 polygons, bottom left: QB method, and bottom right: VG method, both with 20,000 polygons). The main difference is around the face. Note that the features in this region are better maintained using our VG method than using the QB method.

From the analysis above, it appears that the amount of improvement is related to the overall visibility measure of the surface; see Table 2. The last column lists the average percentage difference in the image errors incurred using the VG method versus the QB method over the seven levels. The table also lists the processing time for each test model, including the time to calculate the visibility measure and the time to perform the visibility-guided simplification. Note that the times required for the QB method and the VG method differ very little (< 1%). Therefore, if a model's visibility measure has been pre-computed, visibility-guided simplification does not require more time than is needed by other edge-collapse mesh simplification algorithms.

6 CONCLUSION AND FUTURE WORK

In this paper, we defined a view-independent visibility measure to classify mesh surface regions based on how easy they are to see from the outside. This visibility measure depends on the visibility function between the mesh surface and a surrounding sphere of cameras.
We combined our visibility measure with the quadric measure to perform mesh simplification. The visibility-guided method produces improvements (measured according to image differences) that are related to the amount of low-visibility regions in the mesh.

As a possible next step, we would like to find algorithms to calculate the visibility function more accurately. One possibility is to allow the visibility function to take values between 0 and 1 as a probability for views at poor angles or insufficient resolutions. Also, we would like to perform out-of-core visibility calculations for large models such as those obtained through the Digital Michelangelo project [14].

We are also exploring other applications for our visibility measure, including shape matching and texture mapping.

The visibility function and the visibility measure describe the self-occlusion properties of mesh objects. Therefore, it is possible that the distribution of the visibility measure can be used in shape matching and feature recognition.

In texture mapping, the surface of an object is often divided into patches. Every patch is independently unfolded onto a plane before all the patches are packed into the texture map. Since mesh interiors do not contribute to the final images for opaque objects, we do not need to assign texture space for them. Also, regions with a low visibility measure will be viewed from poor angles, allowing us to be less concerned about their stretch during the texture unfolding process.

7 ACKNOWLEDGEMENTS

We would like to thank the following people and groups for the 3D models they provided: Will Schroeder, Ken Martin, Bill Lorensen, Bruce Teeter, Terry Yoo, Mark Levoy and the Stanford Graphics Group. Also, we would like to thank Michael Garland for the QSlim code. Finally, we thank the anonymous reviewers for their excellent suggestions.

This work is funded by NSF grant ACI 0083836.

REFERENCES

[1] APPEL, A., Some Techniques for Shading Machine Renderings of Solids, SJCC, 1968, pp. 37-45.
[2] BOUKNIGHT, W.J., A Procedure for Generation of Three-Dimensional Half-toned Computer Graphics Representations, Comm. ACM, 13(9), September 1969.
[3] CATMULL, E., A Subdivision Algorithm for Computer Display of Curved Surfaces, Ph.D. Thesis, Report UTEC-CSc-74-133, Computer Science Department, University of Utah, Salt Lake City, UT, December 1974.
[4] CIGNONI, P., ROCCHINI, C., and SCOPIGNO, R., Metro: Measuring Error on Simplified Surfaces, Computer Graphics Forum, 17(2), June 1998, pp. 167-174.
[5] COHEN, M.F., and GREENBERG, D.P., The Hemi-Cube: A Radiosity Solution for Complex Environments, Computer Graphics Proceedings, 19(3), 1985, pp. 31-40.
[6] FEKETE, G., and DAVIS, L.S., Property Spheres: A New Representation for 3-D Object Recognition, Proceedings of the Workshop on Computer Vision: Representation and Control, Annapolis, MD, April 30-May 2, 1984, pp. 192-201.
[7] FUCHS, H., KEDEM, Z.M., and NAYLOR, B.F., On Visible Surface Generation by A Priori Tree Structures, SIGGRAPH 80, 1980, pp. 124-133.
[8] GARLAND, M., and HECKBERT, P.S., Surface Simplification Using Quadric Error Metrics, Computer Graphics Proceedings, Annual Conference Series (SIGGRAPH 97), pp. 209-216.
[9] GIGUS, Z., and MALIK, J., Computing the Aspect Graph for the Line Drawings of Polyhedral Objects, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(2), February 1990.
[10] HEBERT, M., and KANADE, T., The 3-D Profile Method for Object Recognition, Proceedings, CVPR '85 (IEEE Computer Society Conference on Computer Vision and Pattern Recognition), San Francisco, CA, June 1985, pp. 458-463.
[11] HOPPE, H., Progressive Meshes, Computer Graphics (SIGGRAPH 1996 Proceedings), pp. 99-108.
[12] HOPPE, H., New Quadric Metric for Simplifying Meshes with Appearance Attributes, IEEE Visualization 1999, October 1999, pp. 59-66.
[13] KOENDERINK, J.J., and VAN DOORN, A.J., The Singularities of the Visual Mapping, Biological Cybernetics, 24(1), 1976, pp. 51-59.
[14] LEVOY, M., PULLI, K., CURLESS, B., RUSINKIEWICZ, S., KOLLER, D., PEREIRA, L., GINZTON, M., ANDERSON, S., DAVIS, J., GINSBERG, J., SHADE, J., and FULK, D., The Digital Michelangelo Project: 3D Scanning of Large Statues, SIGGRAPH 2000 Proceedings, 2000, pp. 131-144.
[15] LINDSTROM, P., and TURK, G., Image-Driven Simplification, ACM Transactions on Graphics, 19(3), July 2000, pp. 204-241.
[16] MYERS, A.J., An Efficient Visible Surface Program, Report to the National Science Foundation, Computer Graphics Research Group, Ohio State University, Columbus, OH, July 1975.
[17] NOORUDDIN, F.S., and TURK, G., Interior/Exterior Classification of Polygonal Models, Visualization 2000 Conference, Salt Lake City, Utah, Oct. 8-13, 2000, pp. 415-422.
[18] PLANTINGA, W.H., and DYER, C.R., An Algorithm for Constructing the Aspect Graph, 27th Annual Symposium on Foundations of Computer Science, Los Angeles, CA, USA, October 1986, pp. 123-131.
[19] RONFARD, R., and ROSSIGNAC, J., Full-Range Approximations of Triangulated Polyhedra, Proceedings of Eurographics 96, Computer Graphics Forum, 15(3), August 1996, pp. C-67.
[20] SCHUMACKER, R.A., BRAND, B., GILLILAND, M., and SHARP, W., Study for Applying Computer Generated Images to Visual Simulation, AFHRL-TR-69-14, US Air Force Human Resources Laboratory, September 1969.
[21] STEWMAN, J., and BOWYER, K., Direct Construction of the Perspective Projection Aspect Graph of Convex Polyhedra, Computer Vision, Graphics and Image Processing, 51(1), July 1990, pp. 20-37.
[22] SUTHERLAND, I.E., SPROULL, R.F., and SCHUMACKER, R.A., A Characterization of Ten Hidden-Surface Algorithms, ACM Computing Surveys, 6(1), March 1974, pp. 1-55.
[23] TAUBIN, G., A Signal Processing Approach to Fair Surface Design, Computer Graphics Proceedings (SIGGRAPH 1995 Proceedings), August 1995, pp. 351-358.
[24] WARNOCK, J.E., A Hidden-Surface Algorithm for Computer-Generated Halftone Pictures, Computer Science Department, University of Utah, TR 4-15, June 1969.
[25] WATKINS, G.S., A Real-Time Visible Surface Algorithm, Computer Science Department, University of Utah, UTECH-CSC-70-101, June 1970.
[26] WATTS, N.A., Calculating the Principal Views of a Polyhedron, Ninth International Conference on Pattern Recognition, Rome, Italy, November 1988, pp. 316-322.
[27] WEILER, K., and ATHERTON, P., Hidden Surface Removal Using Polygon Area Sorting, SIGGRAPH 77, pp. 214-222.
[28] WHITTED, T., An Improved Illumination Model for Shaded Display, Comm. ACM, 23(6), June 1980, pp. 343-349.

Figure 5: Visual comparisons between the original Motor model (Middle, 140,113 polygons), and the simplified versions using the quadric-
only method (Left, 15,000 polygons) and the visibility-guided method (Right, 15,000 polygons). All images are rendered using flat shading.
Compare the overall shape of the trunk and mechanical parts.

Figure 6: Visual comparisons between the original Blade model (Middle, 1,688,933 polygons), and the simplified versions using the quadric-
only method (Left, 15,000 polygons) and the visibility-guided method (Right, 15,000 polygons). All images are rendered using flat shading.
Compare features such as the letters on the base (bottom row), the column of rectangular vents along the right edge, and the small holes along
the left edge (both rows).

Figure 7: Visual comparisons between the original Skull model (Middle, 1,169,608 polygons), and the simplified versions using both the
quadric-only method (Left, 10,000 polygons) and the visibility-guided method (Right, 10,000 polygons). All images are rendered using flat
shading. Compare features such as the teeth and forehead.

Figure 8: Visual comparisons between the original Buddha model (Bottom Middle, 1,087,416 polygons), and the simplified versions using
the quadric-only method (Bottom Left, 20,000 polygons) and the visibility-guided method (Bottom Right, 20,000 polygons). All images are
rendered using flat shading. The top row images use the red channel to encode image differences between the bottom row images. Note that
the top left image (difference between the bottom center image and the bottom left image) has more areas of red than the top right image
(difference between the bottom center image and the bottom right image).
