
Texture Sprites: Texture Elements Splatted on Surfaces

Sylvain Lefebvre∗ Samuel Hornus∗ Fabrice Neyret∗


GRAVIR / IMAG-INRIA


Figure 1: From left to right: (a) Lapped textures. (b-c) Animated sprites. (d) Blobby painting. (e) Voronoi blending of sprites. (f) Octree texture.
The bottom line shows the texture patterns used (which are the only user defined textures stored in video memory). All these bunnies are rendered in one
pass, in real-time, using our composite texture representation. The texture sprites can be edited interactively. The original mesh is unmodified.

Abstract

We present a new interactive method to texture complex geometries at very high resolution, while using little memory and without the need for a global planar parameterization. We rely on small texture elements, the texture sprites, locally splatted onto the surface to define a composite texture. The sprites can be arbitrarily blended to create complex surface appearances. Their attributes (position, size, texture id) can be dynamically updated, thus providing a convenient framework for interactive editing and animated textures. We demonstrate the flexibility of our method by creating new surface aspects difficult to achieve with other methods.

Each sprite is described by a small set of attributes which is stored in a hierarchical structure surrounding the object's surface. The patterns supported by the sprites are stored only once. The whole data structure is compactly encoded into GPU memory. At run time, it is accessed by a fragment program which computes the final appearance of a surface point from all the sprites covering it. The overall memory cost of the structure is very low compared to the resulting texturing resolutions. Rendering is done in real-time. The resulting texture is linearly interpolated and filtered.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture

Keywords: texturing, texture sprites, octree textures, decals, real-time rendering, graphics hardware

∗ email: FirstName.Name@imag.fr
URL: http://www-evasion.imag.fr/Publications/2005/LHN05

1 Introduction

Textures are crucial for the realism of scenes. They let us add details without increasing the geometric complexity in real-time applications. Games and special effects now rely on multiple texture layers on objects. However, texturing 3D models at very high resolutions creates two major difficulties. First, the storage cost of high resolution texture maps can easily exceed the available texture memory. The difficulty of creating planar parameterizations further worsens this problem, since memory can be wasted by unused space in the texture maps or by distorted areas. Second, creating high resolution textures proves to be a difficult and tedious task.

A current trend in computer graphics is to synthesize large textures from image samples [De Bonet 1997; Wei and Levoy 2000]. Indeed, many surfaces have a homogeneous appearance which can be well captured by a small sample. Many of these algorithms produce a larger texture by combining patches taken from the texture sample [Efros and Freeman 2001; Kwatra et al. 2003]. However, because of the lack of an efficient representation for patch-based textures, they often explicitly store the resulting texture in a large image, which wastes memory. Another approach is to introduce new geometry to position the patches directly onto the surface [Praun et al. 2000; Dischler et al. 2002]. While this approach greatly reduces texture memory consumption, it also increases the geometric cost, for texturing purposes only. Moreover, this hinders geometric optimizations such as triangle strips and geometric levels of detail.

Besides homogeneous appearances, textures are also a solution to encode scattered objects like footprints, bullet impacts [Miller 2000], or drops [Neyret et al. 2002; Lefebvre 2003]. These are likely to appear dynamically during a video game. In this situation also, the lack of a representation for textures composed of sparse elements creates difficulties. Storing scattered details in a large texture covering the mesh wastes a lot of memory, since the texture then contains large empty areas. The common solution is to use decals instead, i.e., to put the texture elements on extra small textured transparent quads. However, such marks do not stick correctly to curved surfaces, and intersection between decals yields various artifacts such as discontinuities and flickering due to Z-fighting. All this gets worse if the underlying surface is animated, since all the decals must be updated at each time step.

We describe a new representation for textures composed of various texture elements. Since the texture elements live on the object
surface and can be updated dynamically, we refer to them as texture sprites. Our representation provides a general and convenient scheme for sprite-based texturing, ranging from homogeneous appearances obtained by overlapping many sprites to the interactive addition of local details. It does not require the mesh to conform to any constraint. In particular, it requires neither computing a global planar parameterization nor modifying the initial mesh: the geometry and the composite texture are independent.

The contributions of this paper are:
- A parameterization-free texture representation allowing efficient storage and filtered rendering of composite textures.
- An efficient solution to dynamic decal management and animated texture elements.
- New appealing texturing effects made possible by our texture representation, as illustrated in Figures 1 and 9.
- A complete GPU implementation that makes our method usable both in texture authoring tools and real-time applications.

Our key idea is to store the mapping parameters of the sprites along the mesh surface, using a hierarchical structure similar to an octree (see Figure 2). At run-time, the renderer determines which sprites cover the rasterized pixel, and computes the final color by combining the sprites' textures. Customizable blending operators allow overlapping sprites to be treated according to the user's requirements. The resulting composite texture is accessed from a fragment program using 3D texture coordinates.

Figure 2: The attributes of the sprites are stored in an octree structure surrounding the mesh surface.

2 Previous work and texturing issues

2.1 High resolution textures

When creating detailed textures, one has to cope with two issues: authoring the texture content and making it small enough to fit into texture memory.

A solution to save memory while keeping high resolution is to rely on repeated patterns using tiles (see Figure 3). Games usually use quad tiles for terrain textures. Recently, Cohen et al. [2003] proposed an approach to avoid regularity with a small set of precomputed tiles. This approach was ported to the GPU by Wei [2004]. Lefebvre and Neyret [2003] also proposed a similar method to dynamically instantiate texture patterns at arbitrary locations in a 2D texture space. Arbitrarily large textures are thus created at low memory cost. These approaches do not require modifications of the mesh geometry. However, this process does not transfer well to arbitrary surfaces, where the parameterization introduces distortions on the tiles. Therefore, these methods cannot represent composite textures on arbitrary surfaces and suffer from the same artifacts as traditional mapping on curved geometry.

Figure 3: A tiling saves memory by instancing a small set of patterns.

Another convenient approach to produce highly detailed textures is to automatically synthesize large textures from an image sample [De Bonet 1997; Wei and Levoy 2000; Efros and Freeman 2001; Kwatra et al. 2003]. The result is stored in a 2D texture map: a parameterization of the object is mandatory. The distortions and discontinuities often introduced by the parameterization may lead to visual artifacts. In particular, it may be necessary to increase the texture size to avoid loss of resolution in distorted areas. Since the synthesized texture is also composed of similar patterns, the representation is not memory efficient. Other algorithms proceed directly on the surface to avoid parameterization [Praun et al. 2000; Turk 2001; Wei and Levoy 2001; Soler et al. 2002; Dischler et al. 2002]. To store the result they often modify the input geometry, which breaks the independence between the texture and the mesh. This can produce an overly sampled geometry and hinders the use of geometric levels of detail or the dynamic update of the texture content. Authors proposing better solutions for the storage either generate a soup of small textured polygons [Dischler et al. 2002] or redraw several times the faces on which several texture elements are mapped [Praun et al. 2000]. As explained in this last paper, instancing instead of duplicating texture elements allows one to obtain a far better texture resolution with less memory. However, drawing several superimposed faces increases the geometry processing cost and yields various blending issues and Z-buffer conflicts. Moreover, the dynamic creation and update of these small geometry patches is difficult when geometry is complex or animated.

2.2 Filtering issues

It is important to realize that GPU rendered scenes are anti-aliased mostly thanks to texture filtering mechanisms, and despite aliasing in the rasterized geometry. However, this works only under the assumption that the texture mapping function is continuous along the surface. Each time a mapping discontinuity is defined on the surface, the filtering is no longer correct, for both linear interpolation and MIP-mapping. Note that oversampling decreases but does not remove these artifacts.

The new powerful features of GPUs also yield new filtering issues, since numerous non-linear or discontinuous effects can now occur within a face: indirections can be used to cover two nearby areas with different texture regions; pixel shaders can modify or procedurally determine the pixel color. In both cases, estimating a correctly filtered value is no longer possible using an embedded mechanism. It is the programmer's responsibility to handle filtering. Even so, detecting and fixing the problem is not straightforward [Kraus and Ertl 2002; Lefebvre and Neyret 2003; Wei 2004].

3 Sprite-Based Textures

This section describes our texturing method dedicated to the editing and rendering of sprite-based textures. Sprites are local texture elements that are dynamically projected onto the surface.

Our key idea is to store the sprite attributes (position, orientation, etc.) in a 3D hierarchical structure surrounding the mesh. Until
recently, volumetric representations have been used only for solid texturing [Perlin 1985; Peachey 1985; Worley 1996] and volume rendering. We follow the idea of octree textures [DeBry et al. 2002; Benson and Davis 2002], which relies on an octree to efficiently store color information along a surface. However, instead of storing colors we store the sprite attributes in the leaves of the hierarchical grid (see Figure 2). The entire data structure is compact: many sprites share a same texture pattern, and information is stored only around the surface.

The structure is encoded into GPU texture memory and is accessed from a fragment program using 3D texture coordinates. Thus, the original mesh geometry is not modified and can be optimized for rendering (triangle strips, levels of detail) independently from the texture. The composite texture (composed of all the sprites) is linearly interpolated and MIP-mapped.

Sprite-based textures are easy to author: the user clicks on the surface to add sprites, drags and drops them, and interactively changes their size or orientation. The system is very flexible: sprite ordering is controllable, and overlapping sprites can be mixed together to achieve various appealing effects (Figure 1). Each sprite can be independently animated, for instance to react to surface deformations as shown in Figure 9.

Our representation is interesting for high quality software renderers, to reduce memory requirements of composite textures and to avoid rendering artifacts. Since it is amenable to GPU implementation despite the more constrained context, we focus on the description of a GPU implementation. It is easy to adapt it to a software framework.

Several difficulties have to be solved: Section 4 describes how to manage the hierarchical 3D grid on the GPU so that it can be accessed from a fragment program. Section 5 shows how we use the hierarchical grid to store and retrieve sprite attributes. Section 6 explains how we filter the composite texture and blend overlapping sprites together. Section 7 presents various appealing texturing effects made possible by our composite textures, as well as performance results. We conclude and discuss future work in Section 8.

4 Implementing a 3D hierarchical grid on a GPU

We now explain how to store the hierarchical grid in texture memory, and how to retrieve data from the grid at a given 3D location. This is implemented in a fragment program (or pixel shader) executed per-pixel on the GPU.

Hierarchical grids such as octrees are often used to compactly store sparse data (e.g. images or volumes). They are easy to implement using arrays. The texture indirection ability of modern GPUs is quite similar to the notion of an array, and the programmability of these GPUs enables the programming of the structure access. However, it is crucial to think about the consequences of each choice in terms of performance, and precision issues complicate the task (these include the low dynamics of values and the fact that indices are floating point). The reader can also refer to [Lefebvre et al. 2005] for details on how to implement and use octree textures on the GPU.

[Kraus and Ertl 2002] introduced the first attempt (using a first generation programmable GPU) to skip the empty space in a target texture by packing blocks of data so as to only store relevant texture data in texture memory. They used an indirection method based on a regular grid: to encode a virtual texture T containing sparse data, the space is first subdivided into n tiles. Then the non-empty tiles are packed in a texture P containing s tiles, and an indirection map I of resolution n is used to recover the pixel color of any given location: T(x) = P( (I(x) + frac(x·n)) / s ) (where frac(x) denotes the fractional part of x).

We extend these two ideas to create a hierarchical 3D grid storing sprite positioning information. We call this structure an N^3-tree because each internal node has N^3 children (N in each dimension). An octree corresponds to N = 2 (in our implementation we use N = 4). See Section 4.3 for a discussion on the choice of N.

4.1 N^3-tree memory layout

Tree structure
Figure 4 shows the grid structure: each node represents an indirection grid consisting of N^3 cells; the cell content (whose type is determined by a flag stored in the A-channel) is either data if it is a leaf (A = 1), or a pointer to another node (A = 1/2). The third possibility (A = 0) indicates that the cell is empty.

To optimize for the limited range of texture indices, we store our structure in a 3D texture called the indirection pool, in which both data values and pointer values are stored as RGB-triples.

Note that this vectorial organization does not add any cost, since the GPU hardware does vectorial operations on 4D vectors. In the following we use vector notations (operations are done component by component).

Notations
Let N × N × N be the resolution of the indirection grid corresponding to each tree node. The resolution of the virtual 3D grid at level l is N^l × N^l × N^l. At level l, a point M in space (M ∈ [0,1)^3) lies in a cell located at Ml,i = ⌊M·N^l⌋ / N^l and has a local coordinate Ml,f = frac(M·N^l) / N^l within this cell. We have M = Ml,i + Ml,f.

Nodes are packed in the indirection pool (see Figure 4, right). Let Su, Sv and Sw be the number of allocated nodes in the u, v and w directions. We define S = (Su, Sv, Sw) as the size of the indirection pool using the vectorial notation. The resolution of the texture hosting the indirection pool is therefore N·Su × N·Sv × N·Sw. A texture value in the indirection pool is indexed by a real-valued P ∈ [0,1)^3, as follows: P can be decomposed into Pi + Pf with Pi = ⌊P·S⌋ / S and Pf = frac(P·S) / S. Pi identifies the node, which is stored in the area [Pi, Pi + 1/S) of the indirection pool. Pi is called the node coordinates. Pf is the local coordinate of P within the node's indirection grid.

Note that packing the cells does not change the local coordinates: at level l, if the value corresponding to the point M is stored at the coordinate Pl in the indirection pool, then Ml,f · N^l = Pl,f · S. Thus:

    Pl,f = frac(M · N^l) / S        (1)

Figure 4: An example tree. The five nodes A, B, C, D and E are packed in the indirection pool (right). In this example each node is an indirection grid of size 2^3 (N = 2). The cells of the indirection grids contain either data (light, green cells) or an indirection (i.e. a pointer) to a child node (dark, blue cells).
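The coordinate decomposition of Section 4.1 can be checked with a short CPU-side sketch. The following Python fragment is illustrative only (it is not the paper's shader code); the values of N and S and the helper name decompose are assumptions chosen for the example.

```python
# Sketch of the Section 4.1 coordinate arithmetic on the CPU, assuming
# N = 2 (an octree) and an indirection pool of S = 4 nodes per axis.
import math

N = 2   # children per dimension (N^3-tree)
S = 4   # nodes per axis in the indirection pool (Su = Sv = Sw)

def decompose(m, level):
    """Split a 1D coordinate m in [0,1) at a tree level into the
    cell coordinate Ml,i and the local coordinate Ml,f."""
    res = N ** level                               # virtual grid resolution N^l
    m_i = math.floor(m * res) / res                # Ml,i = floor(M*N^l) / N^l
    m_f = (m * res - math.floor(m * res)) / res    # Ml,f = frac(M*N^l) / N^l
    return m_i, m_f

m = 0.7
for level in range(4):
    m_i, m_f = decompose(m, level)
    assert abs((m_i + m_f) - m) < 1e-12            # M = Ml,i + Ml,f holds

# Equation (1): the local coordinate inside the packed node is
# Pl,f = frac(M * N^l) / S, independent of where the node is stored.
level = 2
f = (m * N**level) - math.floor(m * N**level)
p_f = f / S
print(p_f)
```

The assertion loop verifies the decomposition identity at several levels; the final value illustrates Equation (1) for one coordinate.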
4.2 Retrieving values for a given location: tree lookup

The virtual 3D texture in which the data is stored is aligned with the bounding box of the object to be textured. For rendering, we need to retrieve the data corresponding to a given location M in space (more precisely, on the object surface). This is done per-pixel in a fragment program.

We start from level 0 in the tree. The indirection grid associated with the tree root is always stored at coordinates (0, 0, 0) in the indirection pool. At level l, let Pl,i be the coordinate of the current node in the indirection pool. The RGBA value corresponding to M is found in the current node's indirection grid at the local coordinate Pl,f, obtained by Equation (1). This corresponds to the coordinate Pl = Pl,i + Pl,f in the indirection pool.

The value in A tells us whether we are in a leaf, an empty node or an internal node. If we are in a leaf, the data RGB is returned. If we are in an empty node, the fragment is discarded (i.e. the shader aborts). Otherwise RGB contains a pointer to a child node, and we decode the next node coordinate Pl,i from the RGB value. Then we continue to the next level (l + 1). Our algorithm is described in Figure 5.

float4 rgba = float4(0,0,0,0);
for (float i = 0; i < TREE_MAX_DEPTH; i++) {
    // child location
    float3 P = (frac(M) + rgba.xyz * 255.0) * inv_S;
    rgba = (float4)tex3D(PoolTex, P);  // access indirection pool
    if (rgba.w > 0.9) break;           // type of node = leaf
    if (rgba.w < 0.1) discard;         // type of node = empty
    M = M * N;                         // go to next level
}
return rgba;

Figure 5: The pseudo-code of our tree lookup. M is provided by the CPU via the 3D texture coordinates. inv_S = 1/S. As the break statement is not available on all hardware, our real pixel shader is a bit more complicated.

4.3 Implementation details

Addressing precision and storage issues
To avoid precision issues and minimize storage requirements we store S · Pl,i instead of Pl,i in the indirection cells. Recall that Pl,i ∈ [0,1) is the coordinate of a node's indirection grid within the indirection pool. Therefore, S · Pl,i is an integer, and we can store it as such, which has two advantages. First, it optimizes the encoding: we can choose the number of bits to be stored in the RGB channels to exactly represent at most S indices. Second, the values of the pointers do not depend on the size of the indirection pool. The size of the indirection pool can therefore be changed without recomputing the pointers.

Precision limitations
Due to the limited precision of 32 bit floating point internal registers there is a limit on the depth, resolution and number of nodes of our hierarchical grid. To avoid precision issues we use power of two values for N and S. Recall that S controls the size of the indirection pool and thus the maximum number of nodes that can compose the N^3-tree. Writing N = 2^eN, S = (2^eS, 2^eS, 2^eS) and using standard floating point values with a mantissa of 23 bits, the deepest reachable depth is dmax = ⌊(23 − eS) / eN⌋ (see Figure 6).

N    S (maximum number of nodes)    dmax    max. reachable resolution
2    16 × 16 × 16                   19      (2^19)^3
4    16 × 16 × 16                   9       (2^18)^3
8    16 × 16 × 16                   6       (2^18)^3
2    32 × 32 × 32                   18      (2^18)^3
4    32 × 32 × 32                   9       (2^18)^3
2    128 × 128 × 128                16      65536^3
4    128 × 128 × 128                8       65536^3

Figure 6: Typical values of N, S and corresponding limits. The maximum resolution of the resulting 3D texture is N^dmax × N^dmax × N^dmax. The maximal number of nodes the indirection pool can store is 2^(3·eS).

Increasing the size N of the tree nodes allows more information to be stored at a given tree depth. Usually, for a same set of sprites this reduces the tree depth, resulting in better performance (fewer texture lookups are required to go down the tree). An application limited by storage would rather choose a small N value to pack texture data as tightly as possible. However, for applications where rendering performance is crucial, a larger value of N would be preferable to limit per-fragment computations while still offering high resolution in the deepest tree levels.

Increasing the maximum number of nodes S that can be stored in the indirection pool allows larger structures to be created. An increase of S reduces dmax, but only by a small amount.

5 Managing the texture sprites

Each sprite is associated with various parameters (including positioning parameters described in Section 5.3) and a bounding volume V defining the spatial limit of the sprite's influence on the composite texture. This bounding volume is used to determine which cells are covered by the sprite and to detect whether two sprites overlap.

5.1 Sprite storage

Storing sprite parameters
The parameters associated with a sprite must be stored in the tree leaves. These parameters are a set of p floating point numbers. Since a given sprite is likely to cover several leaves, it is not a good idea to store this set directly in each leaf: this would waste memory and complicate the update of sprites. Instead we introduce an indirection: the sprite parameter table is a texture containing all the parameters for all the sprites. Each sprite corresponds to an index into this texture; each texel encodes a parameter set. We store the index of the sprite parameters in the tree leaf data field: the RGB value encodes a pointer to the sprite parameter table just as it encodes pointers to child nodes.

The sprite parameter table has to be allocated in texture memory with a chosen size based on the expected number of sprites M. If more are added, the table must be reallocated and old values must be copied.

Dealing with overlapping sprites
Not only can each sprite cover several cells, but several sprites are also likely to share the same cell. Therefore we must be able to store a vector of sprite parameters in each leaf.

We address this by introducing a new indirection map: the leaf vectors table (see Figure 7). This 3D texture implements a table of vectors storing the sprite parameters associated with a leaf. Two dimensions are used to access the vector corresponding to a given leaf. The third dimension is used to index the vector of sprite parameters. Each voxel of this 3D texture therefore encodes a pointer to the sprite parameter table. We use a special value to mark unused entries in the vectors. The maximum size Omax of a vector controls the maximum number of sprites allowed per leaf.

The tree also has to be modified: instead of directly storing the index of one sprite in the leaves, we now store an index to the leaf vector.
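The two indirections of Section 5.1 (tree leaf → leaf vector → sprite parameter table) can be sketched on the CPU with plain lists standing in for textures. This Python sketch is illustrative only; the names (sprite_params, leaf_vectors, UNUSED) and the tiny Omax bookkeeping are assumptions, not the paper's API.

```python
# CPU-side sketch of the Section 5.1 storage scheme, using Python lists
# in place of the GPU textures. Names are illustrative.
O_MAX = 8          # Omax: maximum number of sprites per leaf
UNUSED = -1        # special value marking unused vector entries

sprite_params = []     # sprite parameter table: index -> parameter set
leaf_vectors = []      # leaf vectors table: index -> list of sprite indices

def add_sprite(params):
    """Append a parameter set; the returned index is what leaves point to."""
    sprite_params.append(params)
    return len(sprite_params) - 1

def new_leaf_vector():
    """Allocate a fixed-size vector; its index is stored in the tree leaf."""
    leaf_vectors.append([UNUSED] * O_MAX)
    return len(leaf_vectors) - 1

def insert_in_leaf(vec_id, sprite_id):
    vec = leaf_vectors[vec_id]
    slot = vec.index(UNUSED)      # raises ValueError if the leaf is full
    vec[slot] = sprite_id

def sprites_in_leaf(vec_id):
    """What the shader does per fragment: walk the leaf vector and fetch
    each referenced parameter set from the sprite parameter table."""
    return [sprite_params[i] for i in leaf_vectors[vec_id] if i != UNUSED]

# Two sprites share one leaf; one of them also covers a second leaf,
# yet its parameters are stored only once.
s0 = add_sprite(("pattern_a", 0.1))
s1 = add_sprite(("pattern_b", 0.5))
v0, v1 = new_leaf_vector(), new_leaf_vector()
insert_in_leaf(v0, s0); insert_in_leaf(v0, s1); insert_in_leaf(v1, s0)
print(sprites_in_leaf(v0))
```

Note how updating a sprite only touches its single entry in sprite_params, regardless of how many leaves reference it, which is the point of the indirection.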
Figure 7: Each indirection cell stores the index of the corresponding leaf vector. Each cell of the leaf vector stores the indices of the sprite parameters.

5.2 Adding a sprite to the tree

The influence of a sprite is limited to its bounding volume. The sprite must therefore be present in all the leaves colliding with this volume. Our insertion algorithm (run by the CPU) is as follows:

addSprite(node n, sprite s):
    if (n is a leaf)
        if (number of sprites in n < Omax)
            insert s in n
        else
            if (s overlaps with all the sprites in n)
                error(Omax too small!)
            else {
                if (max depth reached)
                    error(Max depth reached!)
                split n
                addSprite(n, s)
            }
    else
        forall children c of n colliding with s
            addSprite(c, s)

The leaves are not split until the limit of Omax sprites is reached. Leaf splitting occurs only in full leaves. In our implementation we used Omax = 8. Generally, when a leaf is filled with Omax sprites, the maximum number of sprites really overlapping at a given point is Cmax < Omax. When inserting a new sprite, it is then possible to recursively subdivide the node so that Cmax is kept smaller than Omax in the new leaves (see Figure 8). This may locally generate a set of small nodes if the sprite regions are hard to 'separate'. However, this scheme tends to produce a tree of small depth, since most leaves will contain several packed sprites.

Figure 8: A new sprite (dark, blue) is inserted in a full leaf. As the sprite does not overlap with all the sprites (light, green), there is a level of subdivision at which the new sprite can be inserted. Subdividing lets us keep the sprite count less than or equal to Omax.

Insertion failures
Omax should be chosen greater than the maximum number of sprites allowed to contribute to a given pixel. A painting application might choose to discard some sprites in overcrowded cells to keep this number reasonable.

The maximum depth level is reached only when more than Omax sprites are crowded in the same very small region (i.e., they cannot be separated by recursive splitting).

Ordering of sprites
The ordering of overlapping sprites might be meaningful in some cases (e.g. snake scales). We ensure that the shader will visit them in the right order, simply by ordering the sprites in the leaf vectors table.

5.3 Positioning the sprites on the surface

At this stage we know which sprites are in each cell surrounding the object surface, but no more. In this section we describe how to encode and recover their precise position and orientation. The pattern associated with the sprite must be mapped to the surface, i.e., we need to define a projection from texture space to surface space. In practice the opposite is done: the rasterization produces a 3D point and we must project it back to texture space, onto the sprite.

We associate a frame to each sprite and use a parallel projection to the surface. The positioning parameters are:
- a center point c (sprite handle),
- two vectors l, h defining the plane, scale and orientation of the pattern,
- a normal vector n defining the direction of projection.

Let M denote the matrix (l, h, n). Once the sprite parameters are fetched for a given point P, we compute in the pixel shader U = M^-1 · (P − c), with U = (u, v, w), to get the texture coordinates within the pattern. w is useful for volumetric or solid textures, but also in the 2D case: when two faces of the mesh are very close to each other (e.g., a folded surface) we need to distinguish which texture applies to which fold. Instead of forcing the tree cell to subdivide, we can simply define a narrow region where the projection applies by tuning the scale of n. This solves the ambiguous color assignment with thin or flat two-sided objects described in [DeBry et al. 2002; Benson and Davis 2002].

The sprite projection on the surface is equivalent to defining a local linear parameterization. If the distortion is too high for a given sprite, when a geometric feature is smaller than the sprite, it is possible to split it into multiple sub-sprites to minimize distortion. Each sub-sprite displays only a sub-region of the initial sprite texture. Our representation supports such decompositions, but the calling application is in charge of computing the sub-sprite positioning. In practice, to avoid visible projection artifacts we attenuate the sprite contribution according to the angle between the sprite and surface normals.

Deformation-proof texturing
Our tree structure allows storing and retrieving data from 3D locations. However, it is meant to associate this information to an object surface. If the object is rotated, rescaled or animated we want the texture data to stick to the surface. This is exactly equivalent to the case of solid textures [Perlin 1985]. The usual solution is to rely on a 3D parameterization (u, v, w) stored at each vertex and interpolated as usual for any fragment. This parameterization can be seen as the reference or rest state of the object and can conveniently be chosen in [0, 1]^3.

Note that the reference mesh does not need to be the same as the rendered mesh, as long as they can be textured by the same N^3-tree. For instance, a subdivision surface can be subdivided further without requiring one to update the texture. The (u, v, w) of newly created vertices just have to be interpolated linearly.

Since our representation defines a 3D texture, the rendered mesh does not even need to be a surface. In particular, point-based representations and particle systems can be textured conveniently. Finally, a high resolution volume could even be defined by 3D sprites and sliced as usual for volume rendering.
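The projection of Section 5.3 can be sketched on the CPU. The following Python fragment assumes numpy and an axis-aligned sprite frame for simplicity; c, l, h and n follow the paper's positioning parameters, while the helper name project and all numeric values are illustrative.

```python
# Sketch of the Section 5.3 sprite projection, U = M^-1 (P - c).
# Assumes numpy; the frame below is axis-aligned only for readability.
import numpy as np

c = np.array([0.5, 0.5, 0.5])   # sprite center (handle)
l = np.array([0.2, 0.0, 0.0])   # pattern axis: scale + orientation
h = np.array([0.0, 0.2, 0.0])   # second pattern axis
n = np.array([0.0, 0.0, 0.1])   # projection direction; its scale bounds w

M = np.column_stack((l, h, n))  # the matrix (l, h, n)
M_inv = np.linalg.inv(M)

def project(p):
    """Map a rasterized 3D point back into sprite texture space:
    U = M^-1 (P - c), with U = (u, v, w)."""
    return M_inv @ (np.asarray(p, dtype=float) - c)

u, v, w = project([0.55, 0.45, 0.5])
# (u, v) index the pattern; |w| measures distance along the projection
# direction, so shrinking n narrows the region where the projection
# applies (the folded-surface disambiguation described above).
print(u, v, w)
```

For the point above, which lies on the sprite plane, w comes out to zero; a point on a nearby fold would get a large |w| and could be rejected.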
5.4 Blending sprites

When multiple sprites overlap, the resulting color is computed by blending together their contributions. Various ways of compositing sprites can be defined. The texturing methods that rely on multipass rendering are limited to basic frame buffer blending operations; in most cases, this is transparency blending. Since our blending is performed in a fragment program, we do not suffer from such limitations. Our model relies on a customizable blending component to blend the contributions of the overlapping sprites.

We implemented non-standard blending modes such as blobby painting (Figure 1(d)) or cellular textures [Worley 1996] (i.e., Voronoi, Figure 1(e)). The first effect corresponds to an implicit surface defined by the sprite centers. The second effect selects the color defined by the closest sprite. Both rely on a distance function that can be implemented simply by using a pattern containing a radial gradient in the alpha value A (i.e., a tabulated distance function).

6 Filtering

We can distinguish three filtering cases: linear interpolation for close viewpoints (mag filter), MIP-mapping of sprites (min filter), and MIP-mapping of the N^3-tree.

Linear interpolation
Linear interpolation of the texture of each sprite is handled naturally by the standard texture units of the GPU. As long as the blending equation between the sprites is linear, the final color is correctly interpolated.

MIP-mapping of sprites
The min filter is used for faces that are either distant or tilted with respect to the viewpoint. The MIP-mapping of the texture of each sprite can be handled naturally by the texture unit of the GPU. As long as the blending equation between the sprites is linear, filtering of the composite texture remains correct: Each sprite is filtered independently, and the result of the linear blending still corresponds to the correct average color. However, since we explicitly compute the (u, v) texture coordinates within the fragment program, the GPU does not know the derivatives relative to screen space and thus cannot evaluate the MIP-map level. To achieve correct filtering we compute the derivatives explicitly before accessing the texture (we rely on the ddx and ddy derivative instructions of the HLSL and Cg languages).

MIP-mapping of the N^3-tree
If the textured object is seen from a very distant viewpoint, multiple cells of the tree may be projected into the same pixel. Aliasing will occur if the cells contain different color statistics. Tree filtering can be achieved similarly to [DeBry et al. 2002; Benson and Davis 2002] (i.e., defining node values that are the average of the child values, which corresponds to standard MIP-mapping). In our case we first need to evaluate the average color of the leaves from the portions of the sprites they contain. However, cell aliasing does not occur often in practice: First, the cell size does not depend on the sprite size; in particular, small sprites are stored in large cells. Second, our insertion algorithm presented in Section 5.2 tends to minimize the tree depth to avoid small cells. Finally, small neighboring cells are usually covered by the same sprites and therefore have the same average color. Thus we did not need to implement MIP-mapping of the N^3-tree for our demos. Apart from very distant viewpoints (for which the linearity hypothesis assumed by every texturing approach fails), the only practical case where cell aliasing occurs is when two different sprites are close to each other and cannot be inserted inside the same leaf. The two sprites then have to be separated by splitting the tree. As a result, small cells containing different sprites are generated. These cells are likely to alias if seen from a large distance.

7 Applications and Results

7.1 Examples

We have created various examples to illustrate our system, shown in Figure 1, Figure 9, and in our video (available at http://www-evasion.imag.fr/Publications/2005/LHN05).

Texture authoring (Figure 1(c) and video)
In this example, the user interactively pastes texture elements onto a surface. After having provided a set of texture patterns, the user can simply click on the mesh to create a texture sprite. The latter can then be interactively scaled, rotated, or moved above or below the already existing sprites. The night-sky bunny was textured in a few minutes.

This typically illustrates how an application can use our representation: Here, the application is responsible for implementing the user interface, placing the sprites (a simple picking task), and orienting them. Requests are sent to our texture sprites API to delete and insert sprites as they move.

Note that sprites can overlap, but large surface parts can also remain uncovered. This permits the use of an ordinary texture (or a second layer of composite texture) on the exposed surface. In particular, this provides a way to overcome the overlapping limit by using multipass rendering.

Lapped texture approximation (Figure 1(a) and video)
This example was created using the output of the Lapped Textures algorithm [Praun et al. 2000] as an input to our texturing system. Our sprite-based representation fits well with the approach of this texture synthesis algorithm, in which small texture elements are pasted on the mesh surface. Our representation stores such textures efficiently: The sample is stored only once at full resolution, and the N^3-tree minimizes the memory space required for positioning information. Moreover, rendering does not suffer from the filtering issues created by atlases or geometrical approaches (see video), and we use the initial low resolution mesh. Since lapped textures involve many overlapping sprites, in our current implementation we use two separate composite textures to overcome hardware limitations (the maximum numbers of registers and instructions limit the maximum number of overlapping sprites).

Animated sprites (Figure 1(b,c) and video)
Sprites pasted on a 3D model can be animated in two ways. First, the application can modify the positioning parameters (position, orientation, scaling) at every frame, which is not time consuming. Particle animation can be simulated as well to move the sprites (e.g., drops). In Figure 1(b), the user has interactively placed gears. The sprite rotation angle is then modified at each frame (clockwise for sprites with an even id and counter-clockwise for odd ones). Second, the pattern bound to a sprite can cycle over a set of texture patterns, simulating an animation in a cartoon-like fashion. The patterns of Figure 1(c) are animated this way. See Table 1 for frame rate measurements.

Snake scales (Figure 9 and video)
As explained above, each sprite can be independently scaled and rotated. This can even be done by the GPU, as long as a formula is available and the sprite bounding volume remains unchanged. For illustration we emulated the behavior of rigid scales: usually the texture deforms when the underlying surface is deformed (Figure 9, middle). We estimate the local geometric distortion and scale the sprites accordingly to compensate for the deformation of the animated mesh. Our example is an undulating snake covered by scales: one can see (especially in the video) that the scales keep their size and slide over each other in concave areas. Note that this has similarities with the cellular textures of Fleischer
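The blobby and Voronoi modes can be sketched per pixel as follows. This is an illustrative Python sketch run over the list of overlapping contributions; in the actual system this logic runs in the fragment program, and the function names and the iso threshold are ours.

```python
# Each overlapping sprite contributes (rgb, a), where the alpha channel A
# stores a radial falloff (1 at the sprite center, 0 at its border), i.e.,
# a tabulated distance function.

def blend_over(contribs):
    """Standard transparency blending; contributions ordered back to front."""
    out, cov = (0.0, 0.0, 0.0), 0.0
    for rgb, a in contribs:
        out = tuple(a * c + (1.0 - a) * o for c, o in zip(rgb, out))
        cov = a + (1.0 - a) * cov
    return out, cov

def blend_voronoi(contribs):
    """Cellular look: keep the color of the closest sprite,
    i.e., the contribution with the largest falloff value."""
    rgb, _ = max(contribs, key=lambda c: c[1])
    return rgb, 1.0

def blend_blobby(contribs, iso=1.0):
    """Implicit surface defined by the sprite centers: the summed falloffs
    form a scalar field; the pixel is covered where the field exceeds iso."""
    field = sum(a for _, a in contribs)
    if field < iso:
        return (0.0, 0.0, 0.0), 0.0
    rgb = tuple(sum(a * c[i] for c, a in contribs) / field for i in range(3))
    return rgb, 1.0
```

blend_over reproduces what frame-buffer blending already offers; the other two are only expressible because the compositing runs per fragment.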
Figure 9: Undulating snake mapped with 600 overlapping texture sprites whose common pattern (color + bump) has a 512 × 512 resolution. The virtual composite texture thus has a 30720 × 5120 resolution. One can see the correct filtering at sprite edges. This figure demonstrates the independent tuning of each scale's aspect ratio in order to simulate rigid scales. Middle: Without stretch compensation; the texture is stretched depending on the curvature. Right: With stretch compensation; the scales slide over each other and overlap differently depending on the curvature (see also the video).
et al. [Fleischer et al. 1995]: Sprites have their own life and can interact. But in our case no extra geometry is added: Everything occurs in texture space. We really define a textural space in which the color can be determined at any surface location, so we do not have to modify the mesh to be textured.

Octree textures (Figure 1(f) and video)
We have reimplemented the octree textures of DeBry et al. [DeBry et al. 2002] with our system in order to benchmark the efficiency of our GPU N^3-tree model. In this application no sprite parameter is needed, so we directly store color data in the leaf cells of the indirection grids. The octree texture can be created by a paint application or by a procedural tool. Here we created the nodes at high resolution by recursively subdividing the N^3-tree nodes intersecting the object surface. Then we evaluated a Perlin marble noise to set the color of each leaf. For filtering, we implemented a simple tri-linear interpolation scheme by querying the N^3-tree at 8 locations around the current fragment. The bunny model of Figure 1(f) is textured with an octree texture of depth 9 (maximal resolution of 512^3). We obtain about 33 fps at 1600 × 1200 screen resolution, displaying the bunny model with the same viewpoint as Figure 1. The timing of the software implementation of DeBry et al. [2002] was about one minute to render a 2048 × 1200 still image on a 1 GHz Pentium III. This shows that this approach benefits especially well from our GPU implementation.

7.2 Performance

Rendering time
Performance for the examples presented in the paper is summarized in Table 1. Measurements were done on a GeForce 6800 GT, without using the dynamic branching feature. The system is entirely implemented in Nvidia Cg. The performance of our N^3-tree allows for fast rendering of color octree textures, but the complete texture sprite system usually performs at a lower frame rate. The main bottleneck comes from the maximum number of overlapping sprites allowed. Note that we did not try to optimize GPU register usage: The code is directly compiled using the Cg compiler (v1.3).

On hardware without true branching in fragment programs, a lookup in our composite texture always involves as many computations as if Omax sprites were stored in the deepest leaves of the tree. (Another consequence is that the rendering cost remains constant independently of the number of sprites stored.)

On hardware allowing branching we are in a favorable case. Indeed, the tree leaves enclose large surface areas and thus neighboring pixels are likely to follow the same branching path.

Note that since the cost is in the fragment shader, the rendering cost is mostly proportional to the number of pixels drawn: The rendering of an object that is far away or partly occluded costs less; i.e., you pay only for what you see.

                       number      node  tree   max.     FPS
                       of sprites  size  depth  overlap  800 × 600
Lapped                 536         4     4      16       27
Gears                  50          4     2      8        80
Stars                  69          4     2      8        26
Octree (nearest mode)  none        2     9      none     365
Blobby                 125         4     3      10       70
Voronoi                132         4     3      12       56

Table 1: Performance for examples of Figure 1.

Memory usage
Our textures require little memory in comparison to the standard textures needed to obtain the same amount of detail. Our tests show that texturing the Stanford bunny with an atlas automatically generated by modeling software (see video) would require a 2048 × 2048 texture (i.e., 16 MB) to obtain an equivalent visual quality. Moreover, we show in the video how atlas discontinuities generate artifacts. The memory size used by our various examples is summarized in Table 2. Note that since textures must have power-of-two dimensions in video memory, the allocated memory size is usually greater than the size of the structure. The last column of Table 2 includes the size of the 2D texture patterns used by the demonstrated application.

              size of the  allocated  total
              structure    memory     memory
Lapped        1 MB         1.6 MB     1.9 MB
Gears         0.012 MB     0.278 MB   0.442 MB
Stars         0.007 MB     0.285 MB   5.7 MB
Octree        16.8 MB      32.5 MB    32.5 MB
Blobby paint  0.141 MB     0.418 MB   0.434 MB
Voronoi       0.180 MB     0.538 MB   0.554 MB

Table 2: Storage requirements for examples of Figure 1.

8 Conclusions and future work

We have introduced a new representation to texture 3D models with composite textures. The final appearance is defined by the blending of overlapping texture elements, the texture sprites, locally applied onto the surface. We thus reach very high texturing resolution at low memory cost, and without the need for a global planar parameterization. The sprites' attributes are efficiently stored in a hierarchical grid surrounding the object's surface. Since this truly defines a 3D texture sampled per pixel, no modification of the textured geometry is required.
The system is flexible in many ways: Each sprite can be independently animated, moved, scaled, and rotated. This offers natural support for many existing methods, such as interactive painting on surfaces, lapped texture rendering, and the dynamic addition of local details, all available within the same texturing system with better quality and using less memory. We also showed how our new representation can be used to create new texturing effects, such as animated textures and textures reacting to mesh deformations.

We described a complete GPU implementation of our texturing method, which achieves real-time performance. Moreover, if the performance is not good enough for a given application, the composite texture can be baked into a standard 2D texture using an existing parameterization (as shown in the video).

Future work
In this paper we have demonstrated several types of usage of our system. However, the possibilities are endless and we would like to explore other kinds of textures enabled by this sprite instantiation scheme. In particular, approaches like painterly rendering [Meier 1996] could probably benefit from our texture representation. Among the possible improvements, we would like to define sprite projector functions better than simple planar mapping in order to minimize distortion. We also showed that, by relying on a spatial structure rather than a surface parameterization, the textured objects are no longer required to be meshes. In particular, our approach could prove interesting for point-based and volumetric representations.

9 Acknowledgments

We would like to thank John Hugues, Laks Raghupathi, Adrien Treuille, and Marie-Paule Cani for proof-reading an early version of this paper. Thanks to Emil Praun and Hugues Hoppe for providing us with the Lapped Textures result used in Figure 1, and to Nvidia for providing us with the GeForce 6800 used for this work. Also, many thanks to Laure Heïgéas and Gilles Debunne for their help in creating the accompanying video, and to our reviewers for their help in improving this paper.

References

Benson, D., and Davis, J. 2002. Octree textures. In Proceedings of ACM SIGGRAPH 2002, 785–790.

Cohen, M. F., Shade, J., Hiller, S., and Deussen, O. 2003. Wang tiles for image and texture generation. In Proceedings of ACM SIGGRAPH 2003, 287–294.

De Bonet, J. S. 1997. Multiresolution sampling procedure for analysis and synthesis of texture images. In Proceedings of ACM SIGGRAPH 1997, 361–368.

DeBry, D., Gibbs, J., Petty, D. D., and Robins, N. 2002. Painting and rendering textures on unparameterized models. In Proceedings of ACM SIGGRAPH 2002, 763–768.

Dischler, J., Maritaud, K., Lévy, B., and Ghazanfarpour, D. 2002. Texture particles. In Proceedings of the Eurographics Conference 2002, 401–410.

Efros, A. A., and Freeman, W. T. 2001. Image quilting for texture synthesis and transfer. In Proceedings of ACM SIGGRAPH 2001, 341–346.

Fleischer, K. W., Laidlaw, D. H., Currin, B. L., and Barr, A. H. 1995. Cellular texture generation. In Proceedings of ACM SIGGRAPH 1995, 239–248.

Kraus, M., and Ertl, T. 2002. Adaptive texture maps. In Proceedings of the ACM SIGGRAPH / Eurographics Conference on Graphics Hardware 2002, 7–15.

Kwatra, V., Schödl, A., Essa, I., Turk, G., and Bobick, A. 2003. Graphcut textures: Image and video synthesis using graph cuts. In Proceedings of ACM SIGGRAPH 2003.

Lefebvre, S., and Neyret, F. 2003. Pattern based procedural textures. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics 2003, 203–212.

Lefebvre, S., Hornus, S., and Neyret, F. 2005. GPU Gems II: Programming Techniques for High-Performance Graphics and General-Purpose Computation. Addison-Wesley, ch. Octree Textures on the GPU. ISBN 0-32133-559-7.

Lefebvre, S. 2003. ShaderX2: Shader Programming Tips & Tricks. Wordware Publishing, ch. Drops of water texture sprites, 190–206. ISBN 1-55622-988-7.

Meier, B. J. 1996. Painterly rendering for animation. In Proceedings of ACM SIGGRAPH 1996, 477–484.

Miller, N. 2000. Decals explained. http://www.flipcode.com/articles/article_decals.shtml.

Neyret, F., Heiss, R., and Senegas, F. 2002. Realistic rendering of an organ surface in real-time for laparoscopic surgery simulation. The Visual Computer 18, 3, 135–149.

Peachey, D. R. 1985. Solid texturing of complex surfaces. In Proceedings of ACM SIGGRAPH 1985, 279–286.

Perlin, K. 1985. An image synthesizer. In Proceedings of ACM SIGGRAPH 1985, 287–296.

Praun, E., Finkelstein, A., and Hoppe, H. 2000. Lapped textures. In Proceedings of ACM SIGGRAPH 2000, 465–470.

Soler, C., Cani, M.-P., and Angelidis, A. 2002. Hierarchical pattern mapping. In Proceedings of ACM SIGGRAPH 2002, 673–680.

Turk, G. 2001. Texture synthesis on surfaces. In Proceedings of ACM SIGGRAPH 2001, 347–354.

Wei, L.-Y., and Levoy, M. 2000. Fast texture synthesis using tree-structured vector quantization. In Proceedings of ACM SIGGRAPH 2000, 479–488.

Wei, L.-Y., and Levoy, M. 2001. Texture synthesis over arbitrary manifold surfaces. In Proceedings of ACM SIGGRAPH 2001, 355–360.

Wei, L.-Y. 2004. Tile-based texture mapping on graphics hardware. In Proceedings of the ACM SIGGRAPH / Eurographics Conference on Graphics Hardware 2004, 55–64.

Worley, S. P. 1996. A cellular texturing basis function. In Proceedings of ACM SIGGRAPH 1996, 291–294.
