Nimbus SDK: A Tiny Framework for C++ Real-
Time Volumetric Cloud Rendering, Animation
and Morphing
Carlos Jiménez de Parga
Apr 2022 · CC (ASA 3U)

A reusable Visual C++ framework for real-time volumetric cloud rendering, animation and morphing
Real-time volumetric cloud rendering is an attractive feature for multimedia applications such as computer
games and outdoor scenarios. Achieving realistic results requires cutting-edge software and the massive multi-
core graphics hardware used in the animation industry, which is expensive and often not real-time. This
article proposes a new approach for efficient real-time volumetric cloud rendering that abstracts the core
mathematical and physical complexity of atmospheric models and takes advantage of enhanced CPU/GPU
parallel programming paradigms.
Index

1. Introduction
2. Theoretical background
2.1 Ray-tracing, ray-casting and ray-marching
2.2 Volumetric rendering
2.3 Water vapour emulation
2.4 Pseudospheroids
3. The Nimbus framework
3.1 Class diagram
3.2 How to create a Gaussian cumulus
3.2.1 The main function
3.2.2 The render function
3.2.3 Simulating wind
3.2.4 Result
3.3 Bounding boxes
3.4 Creating a morphing effect
3.4.1 Linear interpolation
3.4.2 The main function
3.4.3 The render function
3.4.4 Result
4. Benchmarks
5. Conclusions and limitations
6. Further reading and references
7. Extended licenses
8. Download and documentation
9. Demo video
1. Introduction
This article is a review of my PhD thesis, titled "High-Performance Algorithms for Real-Time GPGPU Volumetric
Cloud Rendering from an Enhanced Physical-Math Abstraction Approach", presented at the National Distance
Education University in Spain (UNED) in October 2019 with summa cum laude distinction. The aim of this article
is to explain the main features of the Nimbus SDK developed during the research.
Real-time volumetric cloud rendering is a complex task for novice developers who lack the math/physics
knowledge. It is also a challenge for conventional computers without advanced 3D hardware capabilities. For
this reason, the current Nimbus SDK provides an efficient base framework for low-performance nVidia graphics
devices such as the nVidia GT 1030 and the nVidia GTX 1050.
This framework may be applied in computer games, virtual reality, outdoor landscapes for architectural design,
flight simulators, environmental scientific applications, meteorology, etc.
In the first sections, I will explain the current state of the art and the computer graphics background needed to
understand the main principles explained hereby. Finally, a complete description of the SDK usage will be
presented with examples.
2. Theoretical Background
2.1 Ray-Tracing, Ray-Casting and Ray-Marching
The SDK core is based on ray-tracing principles. Basically, ray-tracing consists of launching straight lines
(rays) from the camera view, where a frame buffer is located, towards the target scene: typically spheres, cubes, etc.,
as illustrated in Figure 1.
Figure 1. Ray-tracing layout.
The mathematical principle behind the straight line is its Euclidean equation. The objective is to determine the collision of
this line with the previously cited basic objects. Once the collisions have been determined, we can evaluate the
color and other material characteristics to produce the pixel color on the 2D frame buffer.
As ray-tracing is a brute force technique, other efficient solutions have been developed in the last decades. For
example, in ray-casting methods, the intersections are analytically computed by using geometrical calculations.
This approach is normally used along with other structures such as voxel grids and space partitioning
algorithms. The method is usually applied in direct volume rendering for scientific and medical visualization to
obtain a set of 2D slice images in magnetic resonance imaging (MRI) and computed tomography (CT).
In advanced real-time computer graphics, a widely used simplification of ray-tracing and ray-casting is known
as ray-marching. This method is a lightweight version of ray-casting in which samples are taken along a line
in a discrete way to detect intersections with a 3D volume.
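The discrete sampling idea can be sketched in a few lines of C++. This is a minimal illustration, not the SDK's implementation: the `density()` field (a soft sphere here) and all names are hypothetical stand-ins for a real volume.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical density field: a soft sphere of radius 1 at the origin.
static float density(const Vec3& p)
{
    float r = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return r < 1.0f ? 1.0f - r : 0.0f;
}

// March from 'origin' along normalized 'dir', taking 'steps' samples
// spaced 'dt' apart; returns the accumulated opacity, clamped to [0, 1].
float rayMarch(Vec3 origin, Vec3 dir, int steps, float dt)
{
    float opacity = 0.0f;
    for (int i = 0; i < steps; i++)
    {
        Vec3 p { origin.x + dir.x * i * dt,
                 origin.y + dir.y * i * dt,
                 origin.z + dir.z * i * dt };
        opacity += density(p) * dt;       // discrete absorption sample
        if (opacity >= 1.0f) return 1.0f; // early exit: fully opaque
    }
    return opacity;
}
```

A ray that crosses the sphere accumulates opacity; one that misses it returns zero, which is exactly the property a cloud raymarcher exploits to terminate early.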
2.2 Volumetric Rendering
Many visual effects are volumetric in nature and are difficult to model with geometric primitives, including
fluids, clouds, fire, smoke, fog and dust. Volume rendering is essential for medical and engineering applications
that require visualization of three-dimensional data sets. There are two families of volumetric rendering methods:
texture-based techniques and raycasting-based techniques. Texture-based volume rendering techniques
perform the sampling and compositing steps by rendering a set of 2D geometric primitives inside the volume,
as shown in Figure 2.
Figure 2. View-aligned slicing with three sampling planes.
Each primitive is assigned texture coordinates for sampling the volume texture. The proxy geometry is
rasterized and blended into the frame buffer in back-to-front or front-to-back order. In the fragment shading
stage, the interpolated texture coordinates are used for a data texture lookup step. Then, the interpolated data
values act as texture coordinates for a dependent lookup into the transfer function textures. Illumination
techniques may modify the resulting color before it is sent to the compositing stage of the pipeline [Iki+04].
In volume visualization with raycasting, high-quality images of solid objects are rendered, which allows
visualizing sampled functions of three-dimensional spatial data like fluids or medical imaging. Most raycasting
methods are based on Blinn/Kajiya models, as illustrated in Figure 3. Each point along the ray receives
illumination I(t) from the light source. Let P be a phase function to compute the scattered light along the ray
and D(t) be the local density of the volume. The illumination scattered along R from a distance t is:

$I(t)D(t)P(\cos\theta)$ (Eq. 1)

where $\theta$ is the angle between the view point and the light source.
The inclusion of the line integral from point (x,y,z) to the light source may be useful in applications where
internal shadows are desired.
Figure 3. A ray cast into a scalar function of a 3D volume.
The attenuation due to the density function along a ray can be calculated as Equation 2:

$e^{-\tau\int_{t_1}^{t_2} D(s)\,ds}$ (Eq. 2)

Finally, the intensity of the light arriving at the eye along direction R due to all the elements along the ray is
defined in Equation 3:

$B = \int_{t_1}^{t_2} e^{-\tau\int_{t_1}^{t} D(s)\,ds}\, I(t)D(t)P(\cos\theta)\,dt$ (Eq. 3)
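Equation 3 can be evaluated numerically with a simple midpoint rule. The following sketch is my own discretization for illustration (not SDK code); it assumes caller-supplied density and illumination functions and treats the phase term as a constant:

```cpp
#include <cmath>
#include <functional>

// Midpoint-rule discretization of Equation 3: each sample contributes
// I(t)D(t)P(cos theta) attenuated by exp(-tau * optical depth so far),
// where the optical depth is the inner integral of Equation 2.
float integrateRay(std::function<float(float)> D, // local density D(t)
                   std::function<float(float)> I, // illumination I(t)
                   float phase,                   // P(cos theta), constant here
                   float tau,                     // extinction coefficient
                   float t1, float t2, int n)
{
    float dt = (t2 - t1) / n;
    float opticalDepth = 0.0f; // integral of D(s) ds from t1 to current t
    float B = 0.0f;            // intensity arriving at the eye
    for (int i = 0; i < n; i++)
    {
        float t = t1 + (i + 0.5f) * dt; // midpoint sample
        float transmittance = std::exp(-tau * opticalDepth);
        B += transmittance * I(t) * D(t) * phase * dt;
        opticalDepth += D(t) * dt;
    }
    return B;
}
```

With zero extinction, unit density and unit illumination, the result collapses to the ray segment length, a quick sanity check on the discretization.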
Since raycasting is implemented as a raytracing variation, involving an extremely computationally
intensive process, one or more of the following optimizations are usually incorporated [Paw97]:

* Bounding boxes
* Hierarchical spatial enumeration (octrees, quadtrees, KD-trees)
2.3 Water Vapour Emulation
To simulate water vapour droplets, a 3D noise texture is raymarched, as seen in Figure 4. We can typically use
Perlin noise or uniform random noise to generate fBm (fractal Brownian motion). The implementation of this
SDK makes intensive use of fBm noise as a summation of weighted uniform noise. Thus, let w be the
octave scale factor, s the noise sampling factor and i the octave index; the fBm equation is defined as:

$fBm(x,y,z) = \sum_{i=1}^{n} w^i \cdot perlin(s^i x, s^i y, s^i z)$ (Eq. 4)

where w = 1/2 and s = 2.

Figure 4. The uniform noise in the colour scale plot shows the irregular
density of water droplets in a cloud raytraced hypertexture.
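Equation 4 translates almost directly into code. In this sketch a cheap hash-based value noise stands in for Perlin noise so the example stays self-contained; the SDK itself samples a real noise texture:

```cpp
#include <cmath>

// Placeholder noise in [0, 1): a hash-style stand-in for perlin().
static float noise3(float x, float y, float z)
{
    float n = std::sin(x * 12.9898f + y * 78.233f + z * 37.719f) * 43758.5453f;
    return n - std::floor(n);
}

// fBm per Equation 4 with w = 1/2 (octave weight) and s = 2 (lacunarity):
// sum of w^i * noise(s^i * p) for octaves i = 1..n.
float fbm(float x, float y, float z, int octaves)
{
    const float w = 0.5f, s = 2.0f;
    float wi = w, si = s, sum = 0.0f;
    for (int i = 1; i <= octaves; i++)
    {
        sum += wi * noise3(si * x, si * y, si * z);
        wi *= w; // next octave weight w^(i+1)
        si *= s; // next sampling factor s^(i+1)
    }
    return sum;
}
```

Because the weights form a geometric series summing below 1 and each noise sample lies in [0, 1), the result stays in [0, 1), which is convenient for density modulation.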
2.4 Pseudospheroids
One of the novel features provided by the Nimbus framework is the irregular, realistic shape of clouds, thanks
to the use of pseudospheroids. Basically, a pseudospheroid is a sphere whose radius is modulated by a noise
function, as illustrated in Figure 5:

Figure 5. Noise-modulated sphere radius.

The radius of the sphere follows Equation 5 during ray-marching:

$\gamma = e^{-\frac{\Vert rayPos - sphereCenter \Vert}{radius \cdot \left((1 - \kappa) + 2\kappa\, fBm(x,y,z)\right)}}$ (Eq. 5)
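The idea behind Equation 5 can be sketched as follows. This is an illustrative reading of the formula rather than the SDK's code, and `noiseAt()` is a hypothetical placeholder for a real fBm lookup:

```cpp
#include <cmath>

// Placeholder for an fBm sample in [0, 1].
static float noiseAt(float x, float y, float z)
{
    return 0.5f + 0.5f * std::sin(x * 7.0f + y * 5.0f + z * 3.0f);
}

// Density of a pseudospheroid following Equation 5: the base radius is
// modulated by noise with strength kappa in [0, 1], and the density falls
// off exponentially with the normalized distance to the sphere centre.
// dist: ||rayPos - sphereCenter||.
float pseudoSphereDensity(float dist, float radius, float kappa,
                          float x, float y, float z)
{
    float modulated = radius * ((1.0f - kappa) + 2.0f * kappa * noiseAt(x, y, z));
    return std::exp(-dist / modulated); // 1 at the centre, fading outward
}
```

With kappa = 0 this degenerates to a plain exponential sphere; increasing kappa lets the noise push the effective radius in and out, producing the irregular silhouette shown in Figure 5.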
3. The Nimbus Framework
3.1 Class Diagram
The hyperlink below depicts the class diagram with the set of nineteen related classes and two interfaces of the
SDK:
In the forthcoming sections, I will explain the usage of these classes step by step.

3.2 How to Create a Gaussian Cumulus
3.2.1 The Main Function
The class that creates a Gaussian cloud follows the density function of Equation 6:

$f(x) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$ (Eq. 6)

It will typically generate a 3D plot similar to the one below:
Figure 6. 3D Gaussian plot.
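Equation 6 is a plain one-dimensional Gaussian, transcribed below for reference (this is just the formula as code, not the Cumulus class itself):

```cpp
#include <cmath>

// Gaussian density of Equation 6: mean mu, standard deviation sigma.
float gaussian(float x, float mu, float sigma)
{
    const float pi = 3.14159265358979f;
    float z = (x - mu) / sigma;
    return (1.0f / (sigma * std::sqrt(2.0f * pi))) * std::exp(-0.5f * z * z);
}
```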
The code that creates the scene is shown below:
// Entry point for cumulus rendering
#ifdef CUMULUS
void main(int argc, char* argv[])
{
    // Initialize FreeGLUT and Glew
    initGL(argc, argv);
    initGLEW();
    try {
        // Cameras setup
        cameraFrame.setProjectionRH(30.0f, SCR_W / SCR_H, 0.1f, 2000.0f);
        cameraAxis.setProjectionRH(30.0f, SCR_W / SCR_H, 0.1f, 2000.0f);
        cameraFrame.setViewport(0, 0, SCR_W, SCR_H);
        cameraFrame.setLookAt(glm::vec3(0, 0, -SCR_Z), glm::vec3(0, 0, SCR_Z));
        cameraFrame.translate(glm::vec3(-SCR_W / 2.0, -SCR_H / 2.0, -SCR_Z));
        userCameraPos = glm::vec3(0.0, 0.4, 0.0);
        // Create fragment shader canvas
        canvas.create(SCR_W, SCR_H);
        // Create cloud base texture
        nimbus::Cloud::createTexture(TEXTSIZ);
#ifdef MOUNT
        mountain.create(800.0, false); // Create mountain
#endif
        axis.create(); // Create 3D axis
        // Create cumulus clouds
        myCloud.create(35, 2.0, glm::vec3(0.0, 5.0, 0.0), 0.0f, 3.0f,
                       0.0f, 1.0f, 0.0f, 3.0f, true, false);
        // Calculate guide points for cumulus
        myCloud.setGuidePoint(nimbus::Winds::EAST);
        // Load shaders
        // Main shader
        shaderCloud.loadShader(GL_VERTEX_SHADER, "../Nube/x64/data/shaders/canvasCloud.vert");
#ifdef MOUNT
        // Mountains shader for cumulus
        shaderCloud.loadShader(GL_FRAGMENT_SHADER,
            "../Nube/x64/data/shaders/clouds_CUMULUS_MOUNT.frag");
#endif
#ifdef SEA
        // Sea shader for cumulus
        shaderCloud.loadShader(GL_FRAGMENT_SHADER,
            "../Nube/x64/data/shaders/clouds_CUMULUS_SEA.frag");
#endif
        // Axis shaders
        shaderAxis.loadShader(GL_VERTEX_SHADER, "../Nube/x64/data/shaders/axis.vert");
        shaderAxis.loadShader(GL_FRAGMENT_SHADER, "../Nube/x64/data/shaders/axis.frag");
        // Create shader programs
        shaderCloud.createShaderProgram();
        shaderAxis.createShaderProgram();
        // Locate uniforms
#ifdef MOUNT
        mountain.getUniforms(shaderCloud);
#endif
        canvas.getUniforms(shaderCloud);
        nimbus::Cloud::getUniforms(shaderCloud);
        nimbus::Cumulus::getUniforms(shaderCloud);
        axis.getUniforms(shaderAxis);
        // Start main loop
        glutMainLoop();
    }
    catch (nimbus::NimbusException& exception)
    {
        exception.printError();
        system("pause");
    }
    // Free texture
    nimbus::Cloud::freeTexture();
}
#endif
Basically, the code above initializes the OpenGL and Glew libraries and places the frame buffer and coordinate
axis cameras. After this, we define the cloud texture size and create the coordinate axis, the optional mountains and
finally a Gaussian cumulus cloud with Cumulus::create(). Then we define the guide point for the wind direction
with the function Cumulus::setGuidePoint(windDirection). Before the rendering loop starts, we load the shaders and
allocate the uniform variables in the GLSL shader
using the Shader::loadShader(shader), Shader::createShaderProgram() and ::getUniforms() methods.
Finally, we start the rendering loop. Do not forget to free the textures by
calling nimbus::Cloud::freeTexture() before closing down the application.
3.2.2 The Render Function
We have just initialized OpenGL (FreeGLUT) and Glew and prepared a cumulus cloud. Then we tell FreeGLUT to
draw the scene through a display function that responds to wind advection and shadow precalculation. The
implementation of this function is as follows:
#ifdef CUMULUS
// Render function
void displayGL()
{
    if (onPlay)
    {
        // Clear back-buffer
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        ///////////////////////// CLOUDS /////////////////////////
        // Change coordinate system
        glm::vec2 mouseScale = mousePos / glm::vec2(SCR_W, SCR_H);
        // Rotate camera
        glm::vec3 userCameraRotatePos = glm::vec3(sin(mouseScale.x * 3.0),
                                        mouseScale.y, cos(mouseScale.x * 3.0));
        glDisable(GL_DEPTH_TEST);
        shaderCloud.useProgram();
        if (firstPass) // If first iteration
        {
            // Render cloud base texture
            nimbus::Cloud::renderTexture();
#ifdef MOUNT
            // Render mountain
            mountain.render();
#endif
            nimbus::Cloud::renderFirstTime(SCR_W, SCR_H);
        }
        // Render cloud base class uniforms
        nimbus::Cloud::render(mousePos, timeDelta, cloudDepth,
            skyTurn, cloudDepth * userCameraRotatePos, debug);
        // Calculate and apply wind
        (parallel) ? myCloud.computeWind(&windGridCUDA) :
                     myCloud.computeWind(&windGridCPU);
        // Wind setup
        nimbus::Cumulus::setWind(windDirection);
        // Render cumulus class
        myCloud.render(shaderCloud);
        if (precomputeTimeOut >= PRECOMPTIMEOUT) // Check for regular
                                                 // precompute light (shading)
        {
            clock_t start = clock();
            if (parallel)
            {
                if (skyTurn == 0) // If morning, sun is near; else sun is far (sunset)
                    nimbus::Cumulus::precomputeLight(precompCUDA, sunDir, 100.0f, 0.2f);
                else
                    nimbus::Cumulus::precomputeLight(precompCUDA, sunDir, 10000.0f, 1e-6f);
                cudaDeviceSynchronize();
            }
            else
            {
                if (skyTurn == 0)
                    nimbus::Cumulus::precomputeLight(precompCPU, sunDir, 100.0f, 0.2f);
                else
                    nimbus::Cumulus::precomputeLight(precompCPU, sunDir, 1000.0f, 1e-6f);
            }
            clock_t end = clock();
            // Render the shadow voxel texture
            nimbus::Cloud::renderVoxelTexture(0);
            precomputeTimeOut = 0;
            float msElapsed = static_cast<float>(end - start);
            std::cout << "PRECOMPUTING LIGHT TIME = "
                      << msElapsed / CLOCKS_PER_SEC << std::endl;
        }
        if (totalTime > TOTALTIME)
        {
            timeDelta += nimbus::Cumulus::getTimeDim();
            totalTime = 0;
        }
        totalTime++;
        precomputeTimeOut++;
        // User camera setup
        cameraSky.setLookAt(cloudDepth * userCameraRotatePos, userCameraPos);
        cameraSky.translate(userCameraPos);
        canvas.render(cameraFrame, cameraSky);
        //////////////////////////////////////////////////////////
        shaderAxis.useProgram();
        // Render axis
        cameraAxis.setViewport(SCR_W / 3, SCR_H / 3, SCR_W, SCR_H);
        cameraAxis.setLookAt(10.0f * userCameraRotatePos, userCameraPos);
        cameraAxis.setPosition(userCameraPos);
        axis.render(cameraAxis);
        // Restore landscape viewport
        cameraFrame.setViewport(0, 0, SCR_W, SCR_H);
        glutSwapBuffers();
        glEnable(GL_DEPTH_TEST);
        // Calculate FPS
        calculateFPS();
        firstPass = false; // First pass ended
    }
}
#endif
The first step of this function is updating the frame buffer camera and calculating the main scene camera that
responds to mouse motion. This camera acts as a panning camera driven by the mouse, but we can also
translate it by using the keypad arrows. We must start the rendering function with a call to the
main Shader::useProgram(). In the first iteration, the following methods must be
invoked: nimbus::Cloud::renderTexture(), optionally Mountain::render() and
finally nimbus::Cloud::renderFirstTime(SCR_W, SCR_H), in this order. During the normal operation of the
rendering loop, we must call nimbus::Cloud::render() to pass values to the shader uniforms. Now, it is
time to compute the wind motion using the fluid engine with calls
to nimbus::Cumulus::setWind(windDirection) and Cumulus::computeWind() according to the device
selection. The next step is precomputing shadows from time to time
using nimbus::Cumulus::precomputeLight() with a specific precomputer object depending on whether the
CPU or the GPU is selected. The last step is rendering the calculated textures by a call
to nimbus::Cloud::renderVoxelTexture(cloudIndex) to render the shadow cloud texture.
3.2.3 Simulating Wind

The Nimbus SDK fluid engine is based on an internal 3D grid of U, V, W wind forces that acts over an
automatically selected guide point for each cloud, as illustrated in Figure 7.
Figure 7. Example of guide points inside a 3D fluid grid.
We will use the OpenGL (FreeGLUT) idle function to recalculate the wind, as shown in the code below:
// Idle function
void idleGL()
{
    if (!onPlay) return;
    syncFPS();
#ifdef CUMULUS
    simCont++;
    if (simCont > FLUIDLIMIT) // Simulate fluid
    {
        applyWind();
        clock_t start = clock();
        if (parallel)
        {
            windGridCUDA.sendData();
            windGridCUDA.sim();
            windGridCUDA.receiveData();
            cudaDeviceSynchronize(); // For clock_t usage
        } else windGridCPU.sim();
        clock_t end = clock();
        float msElapsed = static_cast<float>(end - start);
        std::cout << "FLUID SIMULATION TIME = " << msElapsed / CLOCKS_PER_SEC << std::endl;
        simCont = 0;
    }
#endif
    glutPostRedisplay();
}
Basically, this idle function calls FluidCUDA::sendData() and FluidCUDA::sim() to send the 3D grid data to the
device before performing the simulation using CUDA. After the CUDA processing, a call
to FluidCUDA::receiveData() is mandatory to retrieve the last processed fluid data. In the CPU case, there is no
need to send data to the device, so we call FluidCPU::sim() directly.
The code above must be called from the OpenGL idle function to apply the wind to the clouds.
3.2.4 Result
The previously explained code will result in the following image:
Figure 8. A real-time cloud rendering.
3.3 Bounding Boxes
To avoid excessive tracing during raymarching, the framework wraps the different clouds inside marching
cubes called bounding boxes. With this technique, it is possible to render a high number of clouds by tracing only
the ones inside the camera frustum and excluding all clouds behind and outside the camera view. This method is
illustrated in Figure 9:

Figure 9. Two rectangular bounding boxes with clouds processing a ray.
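The kind of test a raymarcher can use to skip a cloud whose box the ray never enters is the classic slab method for ray/AABB intersection. This is a generic sketch of the technique, not the SDK's exact implementation:

```cpp
#include <algorithm>

struct AABB { float min[3]; float max[3]; };

// Slab-method ray/AABB intersection: intersect the ray with the three
// pairs of axis-aligned planes and keep the overlap of the intervals.
// origin: ray origin; invDir: precomputed 1/direction per axis.
// On a hit, tNear/tFar hold the entry and exit distances.
bool intersectAABB(const float origin[3], const float invDir[3],
                   const AABB& box, float& tNear, float& tFar)
{
    tNear = 0.0f;
    tFar  = 1e30f;
    for (int axis = 0; axis < 3; axis++)
    {
        float t0 = (box.min[axis] - origin[axis]) * invDir[axis];
        float t1 = (box.max[axis] - origin[axis]) * invDir[axis];
        if (t0 > t1) std::swap(t0, t1);   // order the slab distances
        tNear = std::max(tNear, t0);      // latest entry
        tFar  = std::min(tFar, t1);       // earliest exit
    }
    return tNear <= tFar; // all three slab intervals overlap = hit
}
```

When the test succeeds, the marcher only needs to sample the segment [tNear, tFar] instead of the whole ray, which is where the saving described above comes from.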
3.4 Creating a Morphing Effect
One of the most interesting features of the Nimbus SDK is the set of C++ classes related to the cloud morphing effect.
The implemented algorithm requires 90% decimation of the 3D meshes before the animation loop, by using
Blender or any other commercial 3D editor. The following sections explain the technical background needed to
understand the algorithm basis.

3.4.1 Linear Interpolation
The transformation between two 3D wireframe meshes is performed by moving each pseudoellipsoid barycenter (i.e.,
the vertex) of the source shape to the target shape through linear interpolation, as described in the following
equation in GLSL:

$f(x,y,\alpha) = x \cdot (1-\alpha) + y \cdot \alpha$ (Eq. 7)
where two situations may arise:
A) Barycenters in the source > barycenters in the target mesh: in this case, we assign a direct correspondence
between the barycenters in the source and target in iterative order. The excess barycenters in the source are
randomly distributed by overlapping them across the target barycenters, using the modulus of their numbers
of barycenters, as seen in Figure 10.

Figure 10. Case A. The excess barycenters in the hexagon are randomly
distributed and overlapped over the triangle barycenters.

B) Barycenters in the target > barycenters in the source mesh: the opposite operation implies a random
reselection of the excess source barycenters to duplicate them and generate a new interpolation motion to the
target mesh, as seen in Figure 11.

Figure 11. Case B. The required barycenters are added to the triangle for a
random reselection to the hexagon barycenters.
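The two cases above can be sketched with Equation 7 plus a modulus-based pairing. This is an illustrative reduction of the idea, not the SDK's classes: a deterministic `i mod M` stands in for the random distribution/reselection the text describes.

```cpp
#include <cstddef>
#include <vector>

// Equation 7 as code: per-component linear interpolation from source x
// to target y, driven by alpha in [0, 1].
float lerp(float x, float y, float alpha)
{
    return x * (1.0f - alpha) + y * alpha;
}

// Pairing for cases A and B: every source barycenter i maps to target
// barycenter i mod numTarget; when one mesh has more barycenters than
// the other, the excess indices wrap around and are reused (duplicated).
std::vector<std::size_t> mapBarycenters(std::size_t numSource,
                                        std::size_t numTarget)
{
    std::size_t pairs = (numSource > numTarget) ? numSource : numTarget;
    std::vector<std::size_t> targetIndex(pairs);
    for (std::size_t i = 0; i < pairs; i++)
        targetIndex[i] = i % numTarget; // overlap excess barycenters
    return targetIndex;
}
```

For a hexagon (6 barycenters) morphing into a triangle (3), the mapping wraps twice over the triangle's barycenters, matching the overlap shown in Figure 10.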
3.4.2 The Main Function

#ifdef MODEL
void main(int argc, char* argv[])
{
    // Initialize FreeGLUT and Glew
    initGL(argc, argv);
    initGLEW();
    try {
        // Cameras setup
        cameraFrame.setProjectionRH(30.0f, SCR_W / SCR_H, 0.1f, 2000.0f);
        cameraAxis.setProjectionRH(30.0f, SCR_W / SCR_H, 0.1f, 2000.0f);
        cameraFrame.setViewport(0, 0, SCR_W, SCR_H);
        cameraFrame.setLookAt(glm::vec3(0, 0, -SCR_Z), glm::vec3(0, 0, SCR_Z));
        cameraFrame.translate(glm::vec3(-SCR_W / 2.0, -SCR_H / 2.0, -SCR_Z));
        userCameraPos = glm::vec3(0.0, 0.4, 2.0);
        // Create fragment shader canvas
        canvas.create(SCR_W, SCR_H);
        // Create cloud base texture
        nimbus::Cloud::createTexture(TEXTSIZ);
#ifdef MOUNT
        mountain.create(360.0, false); // Create mountain
#endif
        axis.create(); // Create 3D axis
        model1.create(glm::vec3(-1.0, 7.0, 0.0), MESH, 1.1f);  // Create mesh 1
        model2.create(glm::vec3(1.0, 7.0, -3.0), MESH2, 1.1f); // Create mesh 2
        morphing.setModels(&model1, &model2, EVOLUTE); // Setup models for morphing
        // Load shaders
        // Main shader
        shaderCloud.loadShader
            (GL_VERTEX_SHADER, "../Nube/x64/data/shaders/canvasCloud.vert");
#ifdef MOUNT
        // Mountains shader for 3D mesh-based clouds
        shaderCloud.loadShader(GL_FRAGMENT_SHADER,
            "../Nube/x64/data/shaders/clouds_MORPH_MOUNT.frag");
#endif
#ifdef SEA
        // Sea shader for 3D mesh-based clouds
        shaderCloud.loadShader(GL_FRAGMENT_SHADER,
            "../Nube/x64/data/shaders/clouds_MORPH_SEA.frag");
#endif
        // Axis shaders
        shaderAxis.loadShader(GL_VERTEX_SHADER, "../Nube/x64/data/shaders/axis.vert");
        shaderAxis.loadShader(GL_FRAGMENT_SHADER, "../Nube/x64/data/shaders/axis.frag");
        // Create shader programs
        shaderCloud.createShaderProgram();
        shaderAxis.createShaderProgram();
        // Locate uniforms
        nimbus::Cloud::getUniforms(shaderCloud);
#ifdef MOUNT
        mountain.getUniforms(shaderCloud);
#endif
        canvas.getUniforms(shaderCloud);
        nimbus::Model::getUniforms(shaderCloud);
        axis.getUniforms(shaderAxis);
        // Start main loop
        glutMainLoop();
    }
    catch (nimbus::NimbusException& exception)
    {
        exception.printError();
        system("pause");
    }
    // Free texture
    nimbus::Cloud::freeTexture();
}
#endif
The initialization functions are the same as described for the cumulus section. It is also important to create the
frame buffer canvas with Canvas::create() and the cloud texture with a call
to nimbus::Cloud::createTexture(). Once this is done, we proceed to create the source and target mesh
models with calls to Model::create(), passing the 3D position, the OBJ file path and the scale as
arguments. Finally, we specify whether evolution (forward) or involution (backward) morphing is preferred with a call
to Morphing::setModels(). The rest of the code loads the GLSL shaders and locates the uniform variables
as seen before.
3.4.3 The Render Function
#ifdef MODEL
void displayGL()
{
    if (onPlay)
    {
        // Clear back-buffer
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Change coordinate system
        glm::vec2 mouseScale = mousePos / glm::vec2(SCR_W, SCR_H);
        // Rotate camera
        glm::vec3 userCameraRotatePos = glm::vec3(sin(mouseScale.x * 3.0),
                                        mouseScale.y, cos(mouseScale.x * 3.0));
        //////////////////////// MORPHING ////////////////////////
        glDisable(GL_DEPTH_TEST);
        shaderCloud.useProgram();
        if (firstPass) // If first iteration
        {
            nimbus::Cloud::renderFirstTime(SCR_W, SCR_H);
            clock_t start = clock();
#ifdef CUDA
            // Precompute light for meshes
            nimbus::Model::precomputeLight(precompCUDA, sunDir,
                100.0f, 1e-6f, model1.getNumEllipsoids(), model2.getNumEllipsoids());
#else
            nimbus::Model::precomputeLight(precompCPU, sunDir, 100.0f, 1e-6f,
                model1.getNumEllipsoids(), model2.getNumEllipsoids());
#endif
            clock_t end = clock();
            std::cout << "PRECOMPUTE LIGHT TIME = "
                      << end - start << std::endl;
            // Prepare for morphing
            (EVOLUTE) ? morphing.prepareMorphEvolute() : morphing.prepareMorphInvolute();
            alpha = alphaDir = 0.01f;
            // First morphing render
            nimbus::Model::renderFirstTime(model2.getNumEllipsoids(), EVOLUTE);
            // Render cloud base texture
            nimbus::Cloud::renderTexture();
#ifdef MOUNT
            // Render mountain
            mountain.render();
#endif
            // Render clouds precomputed light textures
            for (int i = 0; i < nimbus::Cloud::getNumClouds(); i++)
                nimbus::Cloud::renderVoxelTexture(i);
        }
        // Render cloud base class uniforms
        nimbus::Cloud::render(mousePos, timeDelta, cloudDepth, skyTurn,
            cloudDepth * userCameraRotatePos, debug);
        static bool totalTimePass = false;
        if (totalTime > TOTALTIME) // Check time for morphing animation
        {
            totalTimePass = true;
            timeDelta += timeDir;
            if (alpha < 1.0 && alpha > 0.0)
            {
                alpha += alphaDir; // Animate morphing
                (EVOLUTE) ? morphing.morphEvolute(alpha) : morphing.morphInvolute(alpha);
                morphing.morph(0.1); // Animation speed
            }
            totalTime = 0;
        }
        totalTime++;
        // Mesh renderer
        nimbus::Model::render(shaderCloud, (totalTimePass) ?
            morphing.getCloudPosRDst() : nimbus::Model::getCloudPosR(),
            morphing.getCloudPosDst(), alpha);
        // User camera setup
        cameraSky.setLookAt(cloudDepth * userCameraRotatePos, userCameraPos);
        cameraSky.translate(userCameraPos);
        canvas.render(cameraFrame, cameraSky);
        //////////////////////////////////////////////////////////
        shaderAxis.useProgram();
        // Render axis
        cameraAxis.setViewport(SCR_W / 3, SCR_H / 3, SCR_W, SCR_H);
        cameraAxis.setLookAt(10.0f * userCameraRotatePos, userCameraPos);
        cameraAxis.setPosition(userCameraPos);
        axis.render(cameraAxis);
        // Restore landscape viewport
        cameraFrame.setViewport(0, 0, SCR_W, SCR_H);
        glutSwapBuffers();
        glEnable(GL_DEPTH_TEST);
        // Calculate FPS
        calculateFPS();
        firstPass = false; // First pass ended
    }
}
#endif

The main part of the OpenGL rendering function is similar to that seen in the cumulus section, so I will not
insist on this. The essential difference lies in the function nimbus::Model::precomputeLight(), which
precomputes the light only once. Then we have to call
either Morphing::prepareMorphEvolute() or Morphing::prepareMorphInvolute() depending on the previously
selected option. After initializing the linear interpolation counters, alpha for the interpolation value
and alphaDir for its positive or negative increment, we call nimbus::Model::renderFirstTime() with the
number of target ellipsoids and the preferred progression option. Before exiting the first pass iteration, it is
required to call nimbus::Cloud::renderVoxelTexture(meshIndex) to pass the shadow voxel texture to the
shader. Then, during the normal animation loop, we feed the morphing effect with the
corresponding calls to Morphing::morphEvolute(alpha) or Morphing::morphInvolute(alpha) depending on the
selected option. As seen in the code above, it is straightforward to adjust the animation counters
depending on the speed we need.
3.4.4 Result
Figures 12 to 18 illustrate the morphing process of a hand into a rabbit:

Figure 12. Step 1. Figure 13. Step 3. Figure 14. Step 4.
Figure 15. Step 5. Figure 17. Step 7. Figure 18. Step 8.
4. Benchmarks

This section presents the benchmarks performed on the GPU/CPU for an nVidia GTX 1050 non-Ti (640
cores/Pascal) and a GTX 1070 non-Ti (1920 cores/Pascal) under a 64-bit Intel Core i7 860 CPU @ 2.80 GHz (first
generation, 2009) with 6 GB RAM. The cumulus dynamic tests were performed over a moving seascape with a
real sky function using 4 clouds in the scene with 35 spheres each (140 spheres in total). Two benchmark
versions were defined for each graphics card: a 10³ precomputed light grid size with a 100 x 20 x 40 fluid
volume, and a 40³ precomputed light grid size with a 100 x 40 x 40 fluid volume.
Figure 20. 100% of samples are above 30 FPS.

NVIDIA GeForce GTX 1050 non-Ti - 640 cores / 2 GB (1200 x 600)

Figure 21. CUDA speed-up for light precomputation and fluid simulation in nVidia GTX
1050 non-Ti.

NVIDIA GeForce GTX 1070 non-Ti - 1920 cores / 8 GB (1200 x 600)

Figure 22. CUDA speed-up for light precomputation and fluid simulation in nVidia GTX
1070 non-Ti.
5. Conclusions and Limitations
We can state that these algorithms are very good candidates for real-time cloud rendering with enhanced
performance, thanks to the abstraction of the atmospheric physical-math complexity and to taking advantage of
GPU/CPU multicore parallel technology to speed up the computation. The Nimbus SDK v1.0 is in beta stage, so some
bugs might arise during operation. The main features of the framework are incomplete and open to
improvement, especially those that could be adapted towards better object-oriented programming and design
patterns.
6. Further Reading and References
This article's contents are extracted from my PhD thesis, which can be found at
https://www.educacion.gob.es/teseo/mostrarRef.do?ref=1821765
and from the following journal articles:

* Jiménez de Parga, C.; Gómez Palomo, S.R. Efficient Algorithms for Real-Time GPU Volumetric Cloud
Rendering with Enhanced Geometry. Symmetry 2018, 10, 125. https://doi.org/10.3390/sym10040125 (*)
* Jiménez de Parga, C.; Gómez Palomo, S.R. Parallel Algorithms for Real-Time GPGPU Volumetric Cloud
Dynamics and Morphing. Journal of Applied Computer Science & Mathematics, Issue 1/2019, Pages 25-30,
ISSN: 2066-4273. https://doi.org/10.4316/JACSM.201901004

(*) If you find this work useful for your research, please don't forget to cite the former reference in your journal
paper.

Other references used in this article:

* [Iki+04] M. Ikits et al. GPU Gems. Addison-Wesley, 2004.
* [Paw97] J. Pawasauskas. "Volume Visualization With Ray Casting". In: CS563 - Advanced Topics
in Computer Graphics Proceedings. Feb. 1997.
7. Extended Licenses
* Non-physical-based atmospheric scattering is CC BY-NC-SA 3.0 by robobo1221
* Elevated is CC BY-NC-SA by Íñigo Quílez
* Seascape is CC BY-NC-SA by Alexander Alekseev aka TDM

8. Download and Documentation
The API documentation and Visual C++ SDK are available for reading and download
at: http://www.isometrica.net/thesis
and
https://github.com/maddanio/Nube
9. Demo Video
https://www.youtube.com/watch?v=OVqa_AGCaMk
License
This article, along with any associated source code and files, is licensed under The Creative Commons
Attribution-Share Alike 3.0 Unported License
Written By
Carlos Jiménez de Parga
Software Developer
Spain

I obtained my PhD degree in Computer Graphics at the National Distance Education University (UNED) in
October 2019. I also hold an M.Sc. degree in Software Engineering and Computer Systems and a B.Sc. degree in
Computer Science from the National Distance Education University (UNED).
I have been employed as a C++ software developer in several companies since the year 2000.
I have worked as a Tutor-Professor of Symbolic Logic, Discrete Math and Java Object-Oriented Programming
at UNED-Cartagena (Spain) since 2015.