nub is a simple, expressive, language-agnostic, and extensible visual computing library, featuring interaction, visualization and animation frameworks and supporting advanced (onscreen/offscreen) rendering techniques, such as view frustum culling.
nub is meant to be coupled with third-party real-time and non-real-time renderers. Our current release supports the 2D and 3D PGraphicsOpenGL (a.k.a. P2D and P3D, respectively) Processing renderers.
If you are looking for the API docs, check them here.
Readers unfamiliar with geometry transformations may first check the great Processing 2D transformations tutorial by J David Eisenberg and the affine transformations and scene-graphs presentations that discuss some related formal foundations.
Instantiate your on-screen scene in `setup()`:

```processing
// import all nub classes
import nub.primitives.*;
import nub.core.*;
import nub.processing.*;

Scene scene;

void setup() {
  scene = new Scene(this);
}
```

The Scene context() corresponds to the PApplet main PGraphics instance.
Off-screen scenes should be instantiated on a PGraphics object:

```processing
import nub.primitives.*;
import nub.core.*;
import nub.processing.*;

Scene offScreenScene;

void setup() {
  offScreenScene = new Scene(createGraphics(w, h / 2, P3D));
}
```

In this case, the offScreenScene context() corresponds to the PGraphics instantiated with createGraphics() (which is of course different from the PApplet main PGraphics instance).
A node may be translated, rotated and scaled (the order is important) and be rendered when it has a shape. Node instances define each of the nodes comprising the scene tree. To illustrate their use, suppose the following scene hierarchy is being implemented:
```
World
 ^
 |\
n1 eye
 ^
 |\
n2 n3
```

To set up the scene hierarchy of nodes, use code such as the following:
```processing
import nub.primitives.*;
import nub.core.*;
import nub.processing.*;

Scene scene;
Node n1, n2, n3;

void setup() {
  size(720, 480, P3D);
  // the scene object creates a default eye node
  scene = new Scene(this);
  // create a top-level node (i.e., a node whose reference is null) with:
  n1 = new Node();
  // whereas for the remaining nodes we pass any constructor taking a
  // reference node parameter, such as Node(Node referenceNode)
  n2 = new Node(n1) {
    // immediate-mode rendering procedure
    // defines the n2 visual representation
    @Override
    public void graphics(PGraphics pg) {
      Scene.drawTorusSolenoid(pg);
    }
  };
  // retained-mode rendering PShape
  // defines the n3 visual representation
  n3 = new Node(n1, createShape(BOX, 30));
  // translate the node to make it visible
  n3.translate(50, 50, 50);
}
```

Note that the hierarchy of nodes may be modified with setReference(Node), and the scene eye() may be set from an arbitrary node instance with setEye(Node). Calling setConstraint(Constraint) will apply a Constraint to a node to limit its motion; see the ConstrainedEye and ConstrainedNode examples.
A node's position, orientation and magnitude may be set with the following methods:

| Node localization | Position | Orientation | Magnitude |
|---|---|---|---|
| Globally | setPosition(vector) | setOrientation(quaternion) | setMagnitude(mag) |
| Locally | setTranslation(vector) | setRotation(quaternion) | setScaling(scl) |
| Incrementally | translate(vector, [inertia]) | rotate(quaternion, [inertia]), orbit(quaternion, center, [inertia]) | scale(amount, [inertia]) |
The optional inertia parameter should be a value in [0..1]: 0 means no inertia (the default) and 1 means no friction. Its implementation was inspired by the great PeasyCam damped actions and is done in terms of TimingTasks.
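To build intuition for that parameter, damped actions can be modeled as a velocity that is applied and then multiplied by the inertia factor every frame: with inertia 0 only the initial impulse is applied, while values close to 1 keep the action coasting. A toy model (hypothetical code, not the nub implementation):

```java
// Illustrative damped-action model: not nub code, just the damping idea.
class InertiaDemo {
    // Applies an initial impulse, then lets it decay by the inertia factor
    // each frame; returns the total displacement accumulated over `frames`.
    static float totalDisplacement(float impulse, float inertia, int frames) {
        float velocity = impulse;
        float displacement = 0;
        for (int i = 0; i < frames; i++) {
            displacement += velocity; // apply the current velocity
            velocity *= inertia;      // damp it for the next frame
        }
        return displacement;
    }

    public static void main(String[] args) {
        // inertia 0: only the initial impulse is applied
        System.out.println(totalDisplacement(10, 0.0f, 100));
        // inertia 0.8 (the default Scene.inertia): coasts toward 10 / (1 - 0.8) = 50
        System.out.println(totalDisplacement(10, 0.8f, 100));
    }
}
```

The geometric decay explains why an inertia of exactly 1 never stops: the velocity is never damped.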
Node shapes can be set from an immediate-mode rendering Processing procedure (see graphics(PGraphics)) or from a retained-mode rendering Processing PShape (see shape(PShape)). Shapes can be picked precisely using their projection onto the screen, see picking(). Note that even the eye can have a shape, which may be useful to depict the viewer in a first-person camera style.
The following Scene methods transform points (locations) and vectors (displacements) between screen space (a box of width * height * 1 dimensions where user interaction takes place), NDC and nodes (including the world, i.e., the null node):
| Space transformation | Points | Vectors |
|---|---|---|
| NDC to Screen | ndcToScreenLocation(point) | ndcToScreenDisplacement(vector) |
| Screen to NDC | screenToNDCLocation(pixel) | screenToNDCDisplacement(vector) |
| Screen to Node | location(pixel, node) | displacement(vector, node) |
| Node to Screen | screenLocation(point, node) | screenDisplacement(vector, node) |
| Screen to World | location(pixel) | displacement(vector) |
| World to Screen | screenLocation(point) | screenDisplacement(vector) |
Note that point, pixel and vector are Vector instances.
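For intuition, the NDC-to-screen conversions are simple affine maps. A self-contained sketch with hypothetical helper names (it assumes NDC spans [-1, 1] per axis, the screen spans [0, width] x [0, height], and it ignores the z coordinate for brevity):

```java
// Sketch of NDC <-> screen point conversions; assumes NDC spans [-1, 1]
// on each axis and the screen spans [0, width] x [0, height].
class SpaceTransforms {
    static float[] ndcToScreen(float x, float y, float width, float height) {
        return new float[] { (x + 1) / 2 * width, (y + 1) / 2 * height };
    }

    static float[] screenToNDC(float px, float py, float width, float height) {
        return new float[] { 2 * px / width - 1, 2 * py / height - 1 };
    }

    public static void main(String[] args) {
        // the NDC origin maps to the screen center
        float[] p = ndcToScreen(0, 0, 720, 480);
        System.out.println(p[0] + ", " + p[1]); // 360.0, 240.0
        // the two maps are inverses of each other
        float[] q = screenToNDC(p[0], p[1], 720, 480);
        System.out.println(q[0] + ", " + q[1]); // 0.0, 0.0
    }
}
```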
The following Node methods transform points (locations) and scalars / vectors / quaternions (displacements) between different node instances (including the world):

| Space transformation | Points | Scalars / Vectors / Quaternions |
|---|---|---|
| Node to (this) Node | location(point, node) | displacement(element, node) |
| World to (this) Node | location(point) | displacement(element) |
| (this) Node to World | worldLocation(point) | worldDisplacement(element) |

Note that point is a Vector instance and element is either a float (scalar), a Vector or a Quaternion.
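To see what these transformations do, here is a simplified 2D analogue of worldLocation(point) and location(point) for a single node defined by a translation, a rotation angle and a scaling (hypothetical code, not the nub API, which works in 3D with Vector and Quaternion):

```java
// Simplified 2D analogue of worldLocation(point) / location(point):
// a node defined by a translation, a rotation angle, and a scaling.
class Node2D {
    final float tx, ty, angle, scaling;

    Node2D(float tx, float ty, float angle, float scaling) {
        this.tx = tx; this.ty = ty; this.angle = angle; this.scaling = scaling;
    }

    // node space -> world space: scale, rotate, then translate
    float[] worldLocation(float x, float y) {
        float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
        return new float[] { (c * x - s * y) * scaling + tx,
                             (s * x + c * y) * scaling + ty };
    }

    // world space -> node space: the inverse map
    float[] location(float x, float y) {
        float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
        float dx = (x - tx) / scaling, dy = (y - ty) / scaling;
        return new float[] { c * dx + s * dy, -s * dx + c * dy };
    }

    public static void main(String[] args) {
        Node2D node = new Node2D(50, 50, (float) Math.PI / 2, 2);
        // (1, 0) scaled by 2 and rotated 90 degrees is (0, 2); translated: (50, 52)
        float[] world = node.worldLocation(1, 0);
        System.out.println(world[0] + ", " + world[1]);
        // mapping back recovers the original node-space point (1, 0)
        float[] local = node.location(world[0], world[1]);
        System.out.println(local[0] + ", " + local[1]);
    }
}
```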
Display the scene node hierarchy from its eye() point-of-view with:

```processing
void draw() {
  scene.display();
}
```

Render the scene node hierarchy from its eye() point-of-view with:

```processing
void draw() {
  // the subtree param is optional
  scene.render();
}
```

Note that the display and render commands are equivalent when the scene is onscreen. Observations:
- Call `scene.display(subtree)` and `scene.render(subtree)` to just display/render the scene subtree.
- Call `scene.display(pixelX, pixelY)` (or `scene.display(subtree, pixelX, pixelY)`) to display the off-screen scene with its upper-left corner at `(pixelX, pixelY)`.
- Enclose 2D screen-space stuff (such as GUI elements and text) between `scene.beginHUD()` and `scene.endHUD()` to render it on top of a 3D scene.
- Customize the rendering traversal algorithm by overriding the node `visit(graph)` method; see the ViewFrustumCulling example.
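Conceptually, rendering is a depth-first traversal of the node tree in which each node's visit may prune its whole subtree (which is how view frustum culling works). A minimal sketch of that traversal, with hypothetical names rather than the actual nub implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a depth-first scene-graph traversal with a cull hook,
// in the spirit of overriding visit(graph); not the actual nub code.
class Traversal {
    static class TreeNode {
        final String name;
        final List<TreeNode> children = new ArrayList<>();
        boolean culled; // set to true to skip this node and its subtree

        TreeNode(String name) { this.name = name; }

        TreeNode child(String childName) {
            TreeNode node = new TreeNode(childName);
            children.add(node);
            return node;
        }
    }

    // visits a node, then recurses into its children (pre-order)
    static void render(TreeNode node, List<String> visited) {
        if (node.culled)
            return; // view-frustum-culling style early out
        visited.add(node.name);
        for (TreeNode child : node.children)
            render(child, visited);
    }

    public static void main(String[] args) {
        TreeNode world = new TreeNode("world");
        TreeNode n1 = world.child("n1");
        n1.child("n2");
        n1.child("n3").culled = true; // prune n3's subtree
        List<String> visited = new ArrayList<>();
        render(world, visited);
        System.out.println(visited); // [world, n1, n2]
    }
}
```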
The Scene implements several static drawing functions that complement those already provided by Processing, such as: drawCylinder(PGraphics, int, float, float), drawHollowCylinder(PGraphics, int, float, float, Vector, Vector), drawCone(PGraphics, int, float, float, float, float), drawCone(PGraphics, int, float, float, float, float, float) and drawTorusSolenoid(PGraphics, int, int, float, float).
Drawing functions that take a PGraphics parameter (including the above static ones), such as beginHUD(PGraphics),
endHUD(PGraphics), drawAxes(PGraphics, float), drawCross(PGraphics, float, float, float) and drawGrid(PGraphics) among others, can be used to set a node shape.
Another scene's eye (different from this one) can be drawn with drawFrustum(Scene). Typical uses include interactive minimaps and visibility culling visualization and debugging.
The scene has several methods to position and orient the eye node, such as: lookAt(Vector), setFov(float), setViewDirection(Vector), setUpVector(Vector), fit() and fit(Node), among others.
The following scene methods implement eye motion actions particularly suited for input devices, possibly having several degrees-of-freedom (DOFs):
| Action | Generic input device | Mouse |
|---|---|---|
| Align | alignEye() | n.a. |
| Focus | focusEye() | n.a. |
| Translate | translateEye(dx, dy, dz, [inertia]) | mouseTranslateEye([inertia]) |
| Rotate | rotateEye(roll, pitch, yaw, [inertia]) | n.a. |
| Scale | scaleEye(delta, [inertia]) | n.a. |
| Spin | spinEye(pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) | mouseSpinEye([inertia]) |
| Move forward | moveForward(dz, [inertia]) | n.a. |
| Rotate CAD | rotateCAD(roll, pitch, [inertia]) | mouseRotateCAD([inertia]) |
| Look around | lookAround(deltaX, deltaY, [inertia]) | mouseLookAround([inertia]) |
Here n.a. doesn't mean the mouse action is unavailable, but that it can be implemented in several ways (see the code snippets below). The provided mouse actions are implemented unambiguously by simply passing the Processing pmouseX, pmouseY, mouseX and mouseY variables as parameters to their generic input device counterparts (e.g., mouseTranslateEye() is the same as translateEye(pmouseX - mouseX, pmouseY - mouseY, 0), and mouseSpinEye() is the same as spinEye(pmouseX, pmouseY, mouseX, mouseY)), hence their simpler signatures.
Mouse and keyboard examples:
```processing
// define a mouse-dragged eye interaction
void mouseDragged() {
  if (mouseButton == LEFT)
    scene.mouseSpinEye();
  else if (mouseButton == RIGHT)
    scene.mouseTranslateEye();
  else
    // drag along the x-axis: changes the scene field-of-view
    scene.scaleEye(scene.mouseDX());
}

// define a mouse-moved eye interaction
void mouseMoved(MouseEvent event) {
  if (event.isShiftDown())
    // move the mouse along the y-axis: roll
    // move the mouse along the x-axis: pitch
    scene.rotateEye(scene.mouseRADY(), scene.mouseRADX(), 0);
  else
    scene.mouseLookAround();
}

// define a mouse-wheel eye interaction
void mouseWheel(MouseEvent event) {
  if (scene.is3D())
    // move along the z-axis
    scene.moveForward(event.getCount() * 20);
  else
    // change the eye scaling
    scene.scaleEye(event.getCount() * 20);
}

// define a mouse-click eye interaction
void mouseClicked(MouseEvent event) {
  if (event.getCount() == 1)
    scene.alignEye();
  else
    scene.focusEye();
}

// define a key-pressed eye interaction
void keyPressed() {
  // roll with the 'x' key
  scene.rotateEye(key == 'x' ? QUARTER_PI / 2 : -QUARTER_PI / 2, 0, 0);
}
```

The SpaceNavigator and CustomEyeInteraction examples illustrate how to set up other hardware, such as a keyboard or a full-fledged 6-DOF device like the space-navigator.
To directly interact with a given node, call any of the following scene methods:
| Action | Generic input device | Mouse |
|---|---|---|
| Align | alignNode(node) | n.a. |
| Focus | focusNode(node) | n.a. |
| Translate | translateNode(node, dx, dy, dz, [inertia]) | mouseTranslateNode(node, [inertia]) |
| Rotate | rotateNode(node, roll, pitch, yaw, [inertia]) | n.a. |
| Scale | scaleNode(node, delta, [inertia]) | n.a. |
| Spin | spinNode(node, pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) | mouseSpinNode(node, [inertia]) |
Note that the mouse actions are implemented in the same manner as the eye actions above.
Mouse and keyboard examples:
```processing
void mouseDragged() {
  // spin n1
  if (mouseButton == LEFT)
    scene.spinNode(n1);
  // translate n3
  else if (mouseButton == RIGHT)
    scene.translateNode(n3);
  // scale n1
  else
    scene.scaleNode(n1, scene.mouseDX());
}

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP)
      scene.translateNode(n2, 0, 10);
    if (keyCode == DOWN)
      scene.translateNode(n2, 0, -10);
  }
}
```

Customize node behaviors by registering a user gesture data parser with the node setInteraction(Consumer) method, and then send gesture data to the node by calling one of the scene custom interaction invoking methods: interact(Node, Object...), interactTag(String, Object...) or interactTag(Object...). See the CustomNodeInteraction example.
Picking a node (which should be different from the scene eye) to interact with it is a two-step process:

1. Tag the node using an arbitrary name, either with tag(String, Node) or with ray casting:

   | Ray casting | Synchronously 🔹 | Asynchronously 🔸 |
   |---|---|---|
   | Generic | updateTag(tag, pixelX, pixelY) | tag(tag, pixelX, pixelY) |
   | Mouse | updateMouseTag(tag) | mouseTag(tag) |

   🔹 The tagged node (see node(String)) is returned immediately. 🔸 The tagged node is returned during the next call to the render() algorithm.

2. Interact with your tagged nodes using one of the following patterns:

   - Tagged node: `interactTag(tag, gesture...)`, which simply calls `interactNode(node(tag), gesture...)`, using node(String) to resolve the node parameter in the node methods above.
   - Tagged node or eye: `interact(tag, gesture...)`, which is the same as `if (!interactTag(tag, gesture...)) interactEye(gesture...)`, i.e., it either interacts with the node referred to by the given tag (pattern i.) or delegates the gesture to the eye (see above) when that tag is not in use.

   Generic actions:

   | Action | Tagged node | Tagged node or eye |
   |---|---|---|
   | Align | alignTag(tag) | align(tag) |
   | Focus | focusTag(tag) | focus(tag) |
   | Translate | translateTag(tag, dx, dy, dz, [inertia]) | translate(tag, dx, dy, dz, [inertia]) |
   | Rotate | rotateTag(tag, roll, pitch, yaw, [inertia]) | rotate(tag, roll, pitch, yaw, [inertia]) |
   | Scale | scaleTag(tag, delta, [inertia]) | scale(tag, delta, [inertia]) |
   | Spin | spinTag(tag, pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) | spin(tag, pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) |

   Mouse actions:

   | Action | Tagged node | Tagged node or eye |
   |---|---|---|
   | Translate | mouseTranslateTag(tag, [lag]) | mouseTranslate(tag, [lag]) |
   | Spin | mouseSpinTag(tag, [inertia]) | mouseSpin(tag, [inertia]) |
Observations:

- A node can have multiple tags, but a given tag cannot be assigned to more than one node. Since the null tag is allowed, signatures of all the above methods lacking the tag parameter are provided for convenience, e.g., `mouseTag()` is equivalent to calling `mouseTag(null)`, which in turn is equivalent to `tag(null, mouseX, mouseY)` (and `tag(mouseX, mouseY)`).
- Refer to picking() and enablePicking(int) for the different ray-casting node picking modes.
- To check if a given node would be picked with a ray cast at a given screen position, call tracks(Node, int, int) or mouseTracks(Node).
- To tag the nodes in a given array with ray casting, use updateTag(String, int, int, Node[]) and updateMouseTag(String, Node[]).
- In the case of `mouseTranslateTag(tag, [lag])` and `mouseTranslate(tag, [lag])`, a `lag` is used (instead of `inertia`): `0` responds immediately and `1` gives no response at all.
- Set `Scene.inertia` in [0..1] (`0`: no inertia, `1`: no friction) to change the default `inertia` value globally. It is initially set to `0.8` and it also affects the `lag` in `mouseTranslateTag(tag, [lag])` and `mouseTranslate(tag, [lag])`. See the CajasOrientadas example.
- Invoke custom node behaviors by calling the scene interact(Node, Object...), interactTag(String, Object...) or interactTag(Object...) methods. See the CustomNodeInteraction example.
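The tagging rules above behave like a plain map from tag to node: re-tagging moves the tag, so a tag never refers to more than one node, while several tags (including the null one) may point to the same node. A toy model with hypothetical names:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the tag -> node registry: re-tagging moves the tag, so a tag
// never refers to more than one node, while the same node may be the target
// of several tags (including the null tag). Not the nub implementation.
class TagRegistry {
    final Map<String, Object> tags = new HashMap<>();

    void tag(String tag, Object node) { tags.put(tag, node); }

    Object node(String tag) { return tags.get(tag); }

    public static void main(String[] args) {
        TagRegistry registry = new TagRegistry();
        Object n1 = new Object(), n2 = new Object();
        registry.tag(null, n1);   // the null tag is allowed
        registry.tag("key", n1);  // one node, several tags
        registry.tag("key", n2);  // "key" now refers to n2 only
        System.out.println(registry.node(null) == n1);  // true
        System.out.println(registry.node("key") == n2); // true
    }
}
```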
Mouse and keyboard examples:
```processing
// pick with mouse-moved
void mouseMoved() {
  scene.mouseTag();
}

// interact with mouse-dragged
void mouseDragged() {
  if (mouseButton == LEFT)
    // spin the picked node or the eye if no node has been picked
    scene.mouseSpin();
  else if (mouseButton == RIGHT)
    // translate the picked node or the eye if no node has been picked
    scene.mouseTranslate();
  else
    // scale the picked node or the eye if no node has been picked
    scene.scale(mouseX - pmouseX);
}
```

```processing
// pick with mouse-clicked
void mouseClicked(MouseEvent event) {
  if (event.getCount() == 1)
    // use the null tag to manipulate the picked node with mouse-moved
    scene.mouseTag();
  if (event.getCount() == 2)
    // use the "key" tag to manipulate the picked node with key-pressed
    scene.mouseTag("key");
}

// interact with mouse-moved
void mouseMoved() {
  // spin the node picked with one click
  scene.mouseSpinTag();
}

// interact with key-pressed
void keyPressed() {
  // focus the node picked with two clicks
  scene.focusTag("key");
}
```

Timing tasks are (non-)recurrent, (non-)concurrent (see isRecurrent() and isConcurrent(), respectively) callbacks defined by overriding execute(). For example:
```processing
Scene scene;

void setup() {
  scene = new Scene(this);
  TimingTask spinningTask = new TimingTask() {
    @Override
    public void execute() {
      scene.eye().orbit(new Vector(0, 1, 0), PI / 100);
    }
  };
  spinningTask.run();
}
```

This will run the timing task at 25 Hz (its default frequency()). See the ParticleSystem example.
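As a sanity check on the numbers: a recurrent task at 25 Hz fires once every 1000 / 25 = 40 ms. A tiny helper (hypothetical, not part of nub) makes the arithmetic explicit:

```java
// Back-of-the-envelope timing-task math: a recurrent task at a given
// frequency runs once per period = 1000 / frequency milliseconds.
class TimingMath {
    static long periodMillis(float frequencyHz) {
        return (long) (1000 / frequencyHz);
    }

    // how many times a recurrent task fires over a time span
    static long executions(float frequencyHz, long spanMillis) {
        return spanMillis / periodMillis(frequencyHz);
    }

    public static void main(String[] args) {
        System.out.println(periodMillis(25));     // 40: the default 25 Hz rate
        System.out.println(executions(25, 1000)); // 25 executions per second
    }
}
```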
An interpolator is a timing task that lets you define the position, orientation and magnitude a node (including the eye) should have at a particular moment in time, a.k.a. a key-frame. When the interpolator is run, the node is animated through a Catmull-Rom spline, matching in space-time the key-frames which define it. Use code such as the following:
```processing
Scene scene;
PShape pshape;
Node shape;
Interpolator interpolator;

void setup() {
  ...
  shape = new Node(pshape);
  interpolator = new Interpolator(shape);
  for (int i = 0; i < random(4, 10); i++)
    // addKeyFrame(node, elapsedTime) where elapsedTime is defined with
    // respect to the previously added key-frame and expressed in seconds
    interpolator.addKeyFrame(scene.randomNode(), i % 2 == 1 ? 1 : 4);
  interpolator.run();
}
```

This will create a shape interpolator containing [4..10] random key-frames. See the Interpolators example.
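For reference, between key-frames p1 and p2 (with neighbors p0 and p3) a uniform Catmull-Rom spline evaluates a cubic that passes through p1 at t = 0 and p2 at t = 1. A one-dimensional sketch (illustrative only; nub interpolates full node states, not scalars):

```java
// 1D uniform Catmull-Rom evaluation between key-frames p1 and p2, with
// p0 and p3 as their neighbors; t runs in [0, 1].
class CatmullRom {
    static float interpolate(float p0, float p1, float p2, float p3, float t) {
        float t2 = t * t, t3 = t2 * t;
        return 0.5f * ((2 * p1)
                + (-p0 + p2) * t
                + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                + (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
    }

    public static void main(String[] args) {
        // the spline passes through its key-frames: t = 0 gives p1, t = 1 gives p2
        System.out.println(interpolate(0, 10, 20, 30, 0)); // 10.0
        System.out.println(interpolate(0, 10, 20, 30, 1)); // 20.0
        // and blends smoothly in between
        System.out.println(interpolate(0, 10, 20, 30, 0.5f)); // 15.0
    }
}
```

Unlike a plain Bezier curve, a Catmull-Rom spline interpolates (rather than merely approximates) its control points, which is why the animated node hits every key-frame exactly.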
Import/update the library directly from your PDE. Otherwise, download the release and extract it into your sketchbook libraries folder.
Thanks goes to these wonderful people (emoji key):
Jean Pierre Charalambos
This project follows the all-contributors specification. Contributions of any kind welcome!