Interactive Texture Sprites Method
Figure 1: From left to right: (a) Lapped textures. (b-c) Animated sprites. (d) Blobby painting. (e) Voronoi blending of sprites. (f) Octree texture. The bottom row shows the texture patterns used (which are the only user-defined textures stored in video memory). All these bunnies are rendered in one pass, in real time, using our composite texture representation. The texture sprites can be edited interactively. The original mesh is unmodified.
Abstract

We present a new interactive method to texture complex geometries at very high resolution, while using little memory and without the need for a global planar parameterization. We rely on small texture elements, the texture sprites, locally splatted onto the surface to define a composite texture. The sprites can be arbitrarily blended to create complex surface appearances. Their attributes (position, size, texture id) can be dynamically updated, thus providing a convenient framework for interactive editing and animated textures. We demonstrate the flexibility of our method by creating new surface aspects that are difficult to achieve with other methods.

Each sprite is described by a small set of attributes stored in a hierarchical structure surrounding the object's surface. The patterns used by the sprites are stored only once. The whole data structure is compactly encoded into GPU memory. At run time, it is accessed by a fragment program which computes the final appearance of a surface point from all the sprites covering it. The overall memory cost of the structure is very low compared to the resulting texturing resolution. Rendering is done in real time. The resulting texture is linearly interpolated and filtered.

CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture

Keywords: texturing, texture sprites, octree textures, decals, real-time rendering, graphics hardware

1 Introduction

Textures are crucial for the realism of scenes. They let us add details without increasing the geometric complexity in real-time applications. Games and special effects now rely on multiple texture layers on objects. However, texturing 3D models at very high resolution creates two major difficulties. First, the storage cost of high resolution texture maps can easily exceed the available texture memory. The difficulty of creating planar parameterizations further worsens this problem, since memory can be wasted by unused space in the texture maps or by distorted areas. Second, creating high resolution textures is a difficult and tedious task.

A current trend in computer graphics is to synthesize large textures from image samples [De Bonet 1997; Wei and Levoy 2000]. Indeed, many surfaces have a homogeneous appearance which can be well captured by a small sample. Many of these algorithms produce a larger texture by combining patches taken from the texture sample [Efros and Freeman 2001; Kwatra et al. 2003]. However, because of the lack of an efficient representation for patch-based textures, they often explicitly store the resulting texture in a large image, which wastes memory. Another approach is to introduce new geometry to position the patches directly onto the surface [Praun et al. 2000; Dischler et al. 2002]. While this greatly reduces texture memory consumption, it also increases the geometric cost, for texturing purposes only. Moreover, it hinders geometric optimizations such as triangle strips and geometric levels of detail.
Besides homogeneous appearances, textures are also a solution for encoding scattered objects such as footprints, bullet impacts [Miller 2000], or drops [Neyret et al. 2002; Lefebvre 2003]. These are likely to appear dynamically during a video game. In this situation as well, the lack of a representation for textures composed of sparse elements creates difficulties. Storing scattered details in a large texture covering the mesh wastes a lot of memory, since the texture then contains large empty areas. The common solution is to use decals instead, i.e., to put the texture elements on small additional textured transparent quads. However, such marks do not stick correctly to curved surfaces, and intersections between decals yield various artifacts such as discontinuities and flickering due to Z-fighting. All this gets worse if the underlying surface is animated, since all the decals must be updated at each time step.

We describe a new representation for textures composed of various texture elements. Since the texture elements live on the object
floating points). The reader can also refer to [Lefebvre et al. 2005] for details on how to implement and use octree textures on the GPU. [Kraus and Ertl 2002] introduced the first attempt (using a first

Figure 8: A new sprite (dark blue) is inserted in a full leaf. As the sprite does not overlap with all the sprites (light green), there is a level of subdivision at which the new sprite can be inserted. Subdividing lets us keep the sprite count less than or equal to Omax.

Insertion failures

Omax should be chosen greater than the maximum number of sprites allowed to contribute to a given pixel. A painting application might choose to discard some sprites in overcrowded cells to keep this number reasonable.

The maximum depth level is reached only when more than Omax sprites are crowded in the same very small region (i.e., they cannot be separated by recursive splitting).

5.4 Blending sprites

When multiple sprites overlap, the resulting color is computed by blending their contributions together. Various ways of compositing sprites can be defined. Texturing methods that rely on multipass rendering are limited to basic frame buffer blending operations, in most cases transparency blending. Since our blending is performed in a fragment program, we do not suffer from such limitations. Our model relies on a customizable blending component to blend the contributions of the overlapping sprites.

We implemented non-standard blending modes such as blobby painting (Figure 1(d)) or cellular textures [Worley 1996] (i.e., Voronoi, Figure 1(e)). The first effect corresponds to an implicit surface defined by the sprite centers. The second effect selects the color defined by the closest sprite. Both rely on a distance function that can be implemented simply by using a pattern containing a radial gradient in the alpha value A (i.e., a tabulated distance function).
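A minimal HLSL/Cg-style sketch of one possible blending component is given below. It is only illustrative: the OMAX bound, the function names and the calling convention are assumptions, not our actual shader code. The per-sprite colors are assumed to have been fetched beforehand from the patterns, with the tabulated radial distance stored in the alpha channel as described above.

// Illustrative sketch (hypothetical names): two non-standard blending modes
// operating on the colors of the sprites covering the current fragment.
// Each sample.rgb is a sprite color; sample.a is the tabulated radial
// distance (0 at the sprite center, 1 at its border).

#define OMAX 8   // assumed bound on overlapping sprites per fragment

float4 blendVoronoi(float4 samples[OMAX], int count)
{
    // Cellular / Voronoi mode: keep the color defined by the closest sprite.
    float4 best = float4(0.0, 0.0, 0.0, 1.0);   // background, distance 1
    for (int i = 0; i < count; i++)
    {
        if (samples[i].a < best.a)
            best = samples[i];
    }
    return float4(best.rgb, 1.0);
}

float4 blendBlobby(float4 samples[OMAX], int count, float threshold)
{
    // Blobby mode: sum a field derived from the distances (an implicit
    // surface defined by the sprite centers), threshold it, and average
    // the colors of the contributing sprites. threshold is assumed > 0.
    float  field = 0.0;
    float3 color = float3(0.0, 0.0, 0.0);
    for (int i = 0; i < count; i++)
    {
        float w = saturate(1.0 - samples[i].a);
        field += w;
        color += w * samples[i].rgb;
    }
    if (field < threshold)
        return float4(0.0, 0.0, 0.0, 1.0);      // outside the blobs: background
    return float4(color / field, 1.0);
}

Any other compositing formula (for instance ordered transparency blending) can replace such functions without changing the rest of the pipeline.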
6 Filtering

We can distinguish three filtering cases: linear interpolation for close viewpoints (magnification filter), MIP-mapping of the sprites (minification filter), and MIP-mapping of the N³-tree.

Linear interpolation

Linear interpolation of the texture of each sprite is handled naturally by the standard texture units of the GPU. As long as the blending equation between the sprites is linear, the final color is correctly interpolated.

MIP-mapping of sprites

The minification filter is used for faces that are either distant or tilted with respect to the viewpoint. The MIP-mapping of the texture of each sprite can be handled naturally by the texture units of the GPU. As long as the blending equation between the sprites is linear, filtering of the composite texture remains correct: each sprite is filtered independently and the result of the linear blending still corresponds to the correct average color. However, since we explicitly compute the (u, v) texture coordinates within the fragment program, the GPU does not know their derivatives relative to screen space and thus cannot evaluate the MIP-map level. To achieve correct filtering we compute the derivatives explicitly before accessing the texture (we rely on the ddx and ddy derivative instructions of the HLSL and Cg languages).
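The corresponding fetch can be written as in the following sketch; spritePattern and spriteUV are illustrative names, and the coordinates are assumed to be the sprite (u, v) computed in the fragment program as described above.

// Illustrative sketch (hypothetical names): sampling a sprite pattern with
// explicitly computed screen-space derivatives so that the texture unit can
// select the proper MIP level even though the coordinates were derived in
// the fragment program.

sampler2D spritePattern;

float4 sampleSpriteFiltered(float2 spriteUV)
{
    // Derivatives of the computed coordinates with respect to screen x and y.
    float2 dUVdx = ddx(spriteUV);
    float2 dUVdy = ddy(spriteUV);

    // tex2Dgrad in HLSL (tex2D with derivative arguments in Cg) performs the
    // MIP level selection from the derivatives supplied by the shader.
    return tex2Dgrad(spritePattern, spriteUV, dUVdx, dUVdy);
}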
MIP-mapping of the N³-tree

If the textured object is seen from a very distant viewpoint, multiple cells of the tree may be projected into the same pixel. Aliasing will occur if the cells contain different color statistics. Tree filtering can be achieved similarly to [DeBry et al. 2002; Benson and Davis 2002], i.e., by defining node values that are the average of the child values, which corresponds to standard MIP-mapping. In our case we first need to evaluate the average color of the leaves from the portions of the sprites they contain. However, cell aliasing does not occur often in practice: first, the cell size does not depend on the sprite size; in particular, small sprites are stored in large cells. Second, our insertion algorithm presented in Section 5.2 tends to minimize the tree depth to avoid small cells. Finally, small neighboring cells are usually covered by the same sprites and therefore have the same average color. Thus we did not need to implement MIP-mapping of the N³-tree for our demos. Apart from very distant viewpoints (for which the linearity hypothesis assumed by every texturing approach fails), the only practical case where cell aliasing occurs is when two different sprites are close to each other and cannot be inserted inside the same leaf. The two sprites then have to be separated by splitting the tree. As a result, small cells containing different sprites are generated. These cells are likely to alias if seen from a large distance.

Deformation-proof texturing

Our tree structure allows us to store and retrieve data from 3D locations. However, it is meant to associate this information with an object surface. If the object is rotated, rescaled or animated, we want the texture data to stick to the surface. This is exactly equivalent to the case of solid textures [Peachey 1985; Perlin 1985]. The usual solution is to rely on a 3D parameterization (u, v, w) stored at each vertex and interpolated as usual for any fragment. This parameterization can be seen as the reference or rest state of the object and can conveniently be chosen in [0, 1]³.
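A minimal vertex-program sketch of this rest-state parameterization is given below; the structure and variable names are illustrative only. The fragment program then uses the interpolated coordinates to query the N³-tree.

// Illustrative sketch (hypothetical names): the rest-state coordinates
// (u, v, w), chosen in [0, 1]^3, are stored as a vertex attribute and
// interpolated by the rasterizer like any texture coordinate, so the
// texture sticks to the surface when the object is rotated, rescaled
// or animated.

float4x4 modelViewProj;

struct VSInput
{
    float4 position : POSITION;    // current (possibly animated) position
    float3 restUVW  : TEXCOORD0;   // reference / rest-state parameterization
};

struct VSOutput
{
    float4 position : POSITION;
    float3 restUVW  : TEXCOORD0;   // later used by the fragment program
};                                 // to query the N^3-tree

VSOutput vsMain(VSInput input)
{
    VSOutput output;
    output.position = mul(modelViewProj, input.position);
    output.restUVW  = input.restUVW;   // passed through unchanged
    return output;
}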
Note that the reference mesh does not need to be the same as the rendered mesh as long as they can be textured by the same N³-tree. For instance, a subdivision surface can be subdivided further without requiring an update of the texture: the (u, v, w) of newly created vertices just have to be interpolated linearly.

Since our representation defines a 3D texture, the rendered mesh does not even need to be a surface. In particular, point-based representations and particle systems can be textured conveniently. Finally, a high resolution volume could even be defined by 3D sprites and sliced as usual for volume rendering.
7 Applications and Results

7.1 Examples

We have created various examples to illustrate our system, shown in Figure 1, Figure 9 and in our video (available at http://www-evasion.imag.fr/Publications/2005/LHN05).

Texture authoring (Figure 1(c) and video)

In this example, the user interactively pastes texture elements onto a surface. After having provided a set of texture patterns, the user can simply click on the mesh to create a texture sprite. The latter can then be interactively scaled, rotated, or moved above or below the already existing sprites. The night-sky bunny was textured in a few minutes.

This typically illustrates how an application can use our representation: here, the application is responsible for implementing the user interface, placing the sprites (a simple picking task), and orienting them. Requests are sent to our texture sprite API to delete and insert sprites as they move.

Note that sprites can overlap, but large surface parts can also remain uncovered. This permits the use of an ordinary texture (or a second layer of composite texture) on the exposed surface. In particular, this provides a way to overcome the overlapping limit by using multipass rendering.

Lapped texture approximation (Figure 1(a) and video)

This example was created using the output of the Lapped Textures algorithm [Praun et al. 2000] as input to our texturing system. Our sprite-based representation fits well with the approach of this texture synthesis algorithm, in which small texture elements are pasted onto the mesh surface. Our representation stores such textures efficiently: the sample is stored only once at full resolution and the N³-tree minimizes the memory required for positioning information. Moreover, rendering does not suffer from the filtering issues created by atlases or geometrical approaches (see video), and we use the initial low resolution mesh. Since lapped textures involve many overlapping sprites, in our current implementation we use two separate composite textures to overcome hardware limitations (the maximum numbers of registers and instructions limit the maximum number of overlapping sprites).

Animated sprites (Figure 1(b,c) and video)

Sprites pasted on a 3D model can be animated in two ways. First, the application can modify the positioning parameters (position, orientation, scaling) at every frame, which is not time consuming. Particle animation can be simulated as well to move the sprites (e.g., drops). In Figure 1(b), the user has interactively placed gears. The rotation angle of each sprite is then modified at each frame (clockwise for sprites with an even id and counter-clockwise for odd ones). Second, the pattern bound to a sprite can cycle over a set of texture patterns, simulating an animation in a cartoon-like fashion. The patterns of Figure 1(c) are animated this way. See Table 1 for frame rate measurements.
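In our implementation the rotation angle is updated by the application; as a purely illustrative alternative, the same effect could be computed directly in the shader, for instance with a helper of the following form (all names are assumptions):

// Illustrative sketch (hypothetical names): rotate the sprite-local (u, v)
// around the pattern center by an angle growing with time, clockwise for
// even sprite ids and counter-clockwise for odd ones, as in the gears
// example.

float2 animatedSpriteUV(float2 uv, float spriteId, float time, float speed)
{
    float direction = (fmod(spriteId, 2.0) < 1.0) ? -1.0 : 1.0;
    float angle = direction * speed * time;

    float s, c;
    sincos(angle, s, c);

    float2 centered = uv - 0.5;                 // rotate around pattern center
    float2 rotated  = float2(c * centered.x - s * centered.y,
                             s * centered.x + c * centered.y);
    return rotated + 0.5;
}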
Snake scales (Figure 9 and video)

As explained above, each sprite can be independently scaled and rotated. This can even be done by the GPU, as long as a formula is available and the sprite bounding volume remains unchanged. For illustration we emulated the behavior of rigid scales: usually the texture deforms when the underlying surface is deformed (Figure 9, middle). We estimate the local geometric distortion and scale the sprites accordingly to compensate for the deformation of the animated mesh. Our example is an undulating snake covered by scales: one can see (especially in the video) that the scales keep their size and slide over each other in concave areas.

Figure 9: Undulating snake mapped with 600 overlapping texture sprites whose common pattern (color + bump) has a 512 × 512 resolution. The virtual composite texture thus has a 30720 × 5120 resolution. One can see the correct filtering at sprite edges. This figure demonstrates the independent tuning of each scale's aspect ratio in order to simulate rigid scales. Middle: without stretch compensation, the texture is stretched depending on the curvature. Right: with stretch compensation, the scales slide over each other and overlap differently depending on the curvature (see also the video).
Note that this has similarities with the cellular textures of Fleischer et al. [Fleischer et al. 1995]: sprites have their own life and can interact. But in our case no extra geometry is added: everything occurs in texture space. We really define a textural space in which the color can be determined at any surface location, so we do not have to modify the mesh to be textured.

Octree textures (Figure 1(f) and video)

We have reimplemented the octree textures of DeBry et al. [DeBry et al. 2002] with our system in order to benchmark the efficiency of our GPU N³-tree model. In this application no sprite parameter is needed, therefore we directly store color data in the leaf cells of the indirection grids. The octree texture can be created by a paint application or by a procedural tool. Here we created the nodes at high resolution by recursively subdividing the N³-tree nodes intersecting the object surface. Then we evaluated a Perlin marble noise to set the color of each leaf. For filtering, we implemented a simple tri-linear interpolation scheme by querying the N³-tree at 8 locations around the current fragment. The bunny model of Figure 1(f) is textured with an octree texture of depth 9 (maximal resolution of 512³). We obtain about 33 fps at 1600 × 1200 screen resolution, displaying the bunny model with the same viewpoint as in Figure 1. The software implementation of DeBry et al. [2002] took about one minute to render a 2048 × 1200 still image on a 1 GHz Pentium III. This shows that this approach benefits especially well from our GPU implementation.
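The tri-linear scheme can be sketched as follows; treeLookup stands in for the actual N³-tree traversal (here replaced by a placeholder 3D texture fetch to keep the sketch self-contained), and leafSize denotes the edge length of the finest cells. These names are illustrative and do not correspond to our actual code.

// Illustrative sketch (hypothetical names): tri-linear filtering of an octree
// texture by querying the tree at the 8 cell centers surrounding the
// fragment's (u, v, w) and blending the results with trilinear weights.

sampler3D colorVolume;   // placeholder standing in for the real traversal

float4 treeLookup(float3 uvw)
{
    // Stand-in for the N^3-tree traversal returning the color of the leaf
    // containing uvw; here a plain 3D texture fetch keeps the sketch runnable.
    return tex3D(colorVolume, uvw);
}

float4 trilinearTreeLookup(float3 uvw, float leafSize)
{
    // Position expressed in leaf units, shifted so that weights are measured
    // between neighboring cell centers.
    float3 cellPos = uvw / leafSize - 0.5;
    float3 base    = floor(cellPos);
    float3 w       = cellPos - base;             // trilinear weights in [0, 1]

    float4 result = float4(0.0, 0.0, 0.0, 0.0);
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
            {
                float3 corner = (base + float3(i, j, k) + 0.5) * leafSize;
                float3 f = lerp(1.0 - w, w, float3(i, j, k));  // per-axis weights
                result += f.x * f.y * f.z * treeLookup(corner);
            }
    return result;
}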
7.2 Performance

Rendering time

Performance figures for the examples presented in the paper are summarized in Table 1. Measurements were done on a GeForce 6800 GT, without using the dynamic branching feature. The system is entirely implemented in Nvidia Cg. The performance of our N³-tree allows for fast rendering of color octree textures, but the complete texture sprite system usually performs at a lower frame rate. The main bottleneck comes from the maximum number of overlapping sprites allowed. Note that we did not try to optimize GPU register usage: the code is directly compiled with the Cg compiler (v1.3).

On hardware without true branching in fragment programs, a lookup in our composite texture always involves as many computations as if Omax sprites were stored in the deepest leaves of the tree. (Another consequence is that the rendering cost remains constant independently of the number of sprites stored.)

On hardware allowing branching we are in a favorable case. Indeed, the tree leaves enclose large surface areas and thus neighboring pixels are likely to follow the same branching path.

Note that since the cost is in the fragment shader, the rendering cost is mostly proportional to the number of pixels drawn: the rendering of an object that is far away or partly occluded costs less; i.e., you pay only for what you see.

                        number      node   tree   max.      FPS
                        of sprites  size   depth  overlap   800 × 600
Lapped                  536         4      4      16        27
Gears                   50          4      2      8         80
Stars                   69          4      2      8         26
Octree (nearest mode)   none        2      9      none      365
Blobby                  125         4      3      10        70
Voronoi                 132         4      3      12        56

Table 1: Performance for the examples of Figure 1.

Memory usage

Our textures require little memory in comparison to the standard textures needed to obtain the same amount of detail. Our tests show that texturing the Stanford bunny with an atlas automatically generated by modeling software (see video) would require a 2048 × 2048 texture (i.e., 16 MB) to obtain an equivalent visual quality. Moreover, we show in the video how atlas discontinuities generate artifacts. The memory used by our various examples is summarized in Table 2. Note that since textures must have power-of-two dimensions in video memory, the allocated memory size is usually greater than the size of the structure. The last column of Table 2 includes the size of the 2D texture patterns used for the demonstrated application.

               size of the   allocated   total
               structure     memory      memory
Lapped         1 MB          1.6 MB      1.9 MB
Gears          0.012 MB      0.278 MB    0.442 MB
Stars          0.007 MB      0.285 MB    5.7 MB
Octree         16.8 MB       32.5 MB     32.5 MB
Blobby paint   0.141 MB      0.418 MB    0.434 MB
Voronoi        0.180 MB      0.538 MB    0.554 MB

Table 2: Storage requirements for the examples of Figure 1.

8 Conclusions and future work

We have introduced a new representation to texture 3D models with composite textures. The final appearance is defined by the blending of overlapping texture elements, the texture sprites, locally applied onto the surface. We thus reach very high texturing resolution at low memory cost, and without the need for a global planar parameterization. The sprite attributes are efficiently stored in a hierarchical grid surrounding the object's surface. Since this truly defines a 3D texture sampled per pixel, no modification of the textured geometry is required.
The system is flexible in many ways: each sprite can be independently animated, moved, scaled and rotated. This offers natural support for many existing methods such as interactive painting on surfaces, lapped texture rendering, and dynamic addition of local details, all available within the same texturing system with better quality and lower memory usage. We also showed how our new representation can be used to create new texturing effects, such as animated textures and textures reacting to mesh deformations.

We described a complete GPU implementation of our texturing method, which achieves real-time performance. Moreover, if the performance is not good enough for a given application, the composite texture can be baked into a standard 2D texture using an existing parameterization (as shown in the video).

Future work

In this paper we have demonstrated several types of usage of our system. However, the possibilities are endless and we would like to explore other kinds of textures enabled by this sprite instantiation scheme. In particular, approaches like painterly rendering [Meier 1996] could probably benefit from our texture representation. Among the possible improvements, we would like to define sprite projector functions better than simple planar mapping in order to minimize distortion. We also showed that, since we rely on a spatial structure and not on a surface parameterization, the textured objects are no longer required to be meshes. In particular, our approach could prove interesting for point-based and volumetric representations.

9 Acknowledgments

We would like to thank John Hughes, Laks Raghupathi, Adrien Treuille, and Marie-Paule Cani for proof-reading an early version of this paper. Thanks to Emil Praun and Hugues Hoppe for providing us with the Lapped Textures result used in Figure 1, and to Nvidia for providing us with the GeForce 6800 used for this work. Also, many thanks to Laure Heïgéas and Gilles Debunne for their help in creating the accompanying video, and to our reviewers for their help in improving this paper.
References

Benson, D., and Davis, J. 2002. Octree textures. In Proceedings of ACM SIGGRAPH 2002, 785–790.

Cohen, M. F., Shade, J., Hiller, S., and Deussen, O. 2003. Wang tiles for image and texture generation. In Proceedings of ACM SIGGRAPH 2003, 287–294.

De Bonet, J. S. 1997. Multiresolution sampling procedure for analysis and synthesis of texture images. In Proceedings of ACM SIGGRAPH 1997, 361–368.

DeBry, D., Gibbs, J., Petty, D. D., and Robins, N. 2002. Painting and rendering textures on unparameterized models. In Proceedings of ACM SIGGRAPH 2002, 763–768.

Dischler, J., Maritaud, K., Lévy, B., and Ghazanfarpour, D. 2002. Texture particles. In Proceedings of the Eurographics Conference 2002, 401–410.

Efros, A. A., and Freeman, W. T. 2001. Image quilting for texture synthesis and transfer. In Proceedings of ACM SIGGRAPH 2001, 341–346.

Fleischer, K. W., Laidlaw, D. H., Currin, B. L., and Barr, A. H. 1995. Cellular texture generation. In Proceedings of ACM SIGGRAPH 1995, 239–248.

Kraus, M., and Ertl, T. 2002. Adaptive Texture Maps. In Proceedings of the ACM SIGGRAPH / Eurographics Conference on Graphics Hardware 2002, 7–15.

Kwatra, V., Schödl, A., Essa, I., Turk, G., and Bobick, A. 2003. Graphcut textures: Image and video synthesis using graph cuts. In Proceedings of ACM SIGGRAPH 2003.

Lefebvre, S., and Neyret, F. 2003. Pattern based procedural textures. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics 2003, 203–212.

Lefebvre, S., Hornus, S., and Neyret, F. 2005. GPU Gems II: Programming Techniques for High-Performance Graphics and General-Purpose Computation. Addison-Wesley, ch. Octree Textures on the GPU. ISBN 0-32133-559-7.

Lefebvre, S. 2003. ShaderX2: Shader Programming Tips & Tricks. Wordware Publishing, ch. Drops of water texture sprites, 190–206. ISBN 1-55622-988-7.

Meier, B. J. 1996. Painterly rendering for animation. In Proceedings of ACM SIGGRAPH 1996, 477–484.

Miller, N. 2000. Decals explained. http://www.flipcode.com/articles/article_decals.shtml.

Neyret, F., Heiss, R., and Senegas, F. 2002. Realistic Rendering of an Organ Surface in Real-Time for Laparoscopic Surgery Simulation. The Visual Computer 18, 3, 135–149.

Peachey, D. R. 1985. Solid texturing of complex surfaces. In Proceedings of ACM SIGGRAPH 1985, 279–286.

Perlin, K. 1985. An image synthesizer. In Proceedings of ACM SIGGRAPH 1985, 287–296.

Praun, E., Finkelstein, A., and Hoppe, H. 2000. Lapped textures. In Proceedings of ACM SIGGRAPH 2000, 465–470.

Soler, C., Cani, M.-P., and Angelidis, A. 2002. Hierarchical pattern mapping. In Proceedings of ACM SIGGRAPH 2002, 673–680.

Turk, G. 2001. Texture synthesis on surfaces. In Proceedings of ACM SIGGRAPH 2001, 347–354.

Wei, L.-Y., and Levoy, M. 2000. Fast texture synthesis using tree-structured vector quantization. In Proceedings of ACM SIGGRAPH 2000, 479–488.

Wei, L.-Y., and Levoy, M. 2001. Texture synthesis over arbitrary manifold surfaces. In Proceedings of ACM SIGGRAPH 2001, 355–360.

Wei, L.-Y. 2004. Tile-based texture mapping on graphics hardware. In Proceedings of the ACM SIGGRAPH / Eurographics Conference on Graphics Hardware 2004, 55–64.

Worley, S. P. 1996. A cellular texturing basis function. In Proceedings of ACM SIGGRAPH 1996, 291–294.