
nub


Description

nub is a simple, expressive, language-agnostic, and extensible visual computing library, featuring interaction, visualization and animation frameworks and supporting advanced (onscreen/offscreen) rendering techniques, such as view frustum culling.

nub is meant to be coupled with third party real and non-real time renderers. Our current release supports the 2D and 3D PGraphicsOpenGL (a.k.a. P2D and P3D, respectively) Processing renderers.

If looking for the API docs, check them here.

Readers unfamiliar with geometry transformations may first check the great Processing 2D transformations tutorial by J David Eisenberg and the affine transformations and scene-graphs presentations that discuss some related formal foundations.

Scene

Instantiate your on-screen scene in setup():

// import all nub classes
import nub.primitives.*;
import nub.core.*;
import nub.processing.*;

Scene scene;

void setup() {
  scene = new Scene(this);
}

The Scene context() corresponds to the PApplet main PGraphics instance.

Off-screen scenes should be instantiated upon a PGraphics object:

import nub.primitives.*;
import nub.core.*;
import nub.processing.*;

Scene offScreenScene;

void setup() {
  offScreenScene = new Scene(createGraphics(w, h / 2, P3D));
}

In this case, the offScreenScene context() corresponds to the PGraphics instantiated with createGraphics() (which is, of course, different from the PApplet main PGraphics instance).

Nodes

A node may be translated, rotated and scaled (the order is important) and be rendered when it has a shape. Node instances define each of the nodes comprising the scene tree. To illustrate their use, suppose the following scene hierarchy is being implemented:

World
  ^
  |\
 n1 eye
  ^
  |\
 n2 n3

To setup the scene hierarchy of nodes use code such as the following:

import nub.primitives.*;
import nub.core.*;
import nub.processing.*;

Scene scene;
Node n1, n2, n3;

void setup() {
  size(720, 480, P3D);
  // the scene object creates a default eye node
  scene = new Scene(this);
  // Create a top-level node (i.e., a node whose reference is null) with:
  n1 = new Node();
  // whereas for the remaining nodes we pass any constructor taking a
  // reference node parameter, such as Node(Node referenceNode)
  n2 = new Node(n1) {
    // immediate mode rendering procedure
    // defines n2 visual representation
    @Override
    public void graphics(PGraphics pg) {
      Scene.drawTorusSolenoid(pg);
    }
  };
  // retained-mode rendering PShape
  // defines n3 visual representation
  n3 = new Node(n1, createShape(BOX, 30));
  // translate the node to make it visible
  n3.translate(50, 50, 50);
}

Note that the hierarchy of nodes may be modified with setReference(Node) and the scene eye() set from an arbitrary node instance with setEye(Node). Calling setConstraint(Constraint) will apply a Constraint to a node to limit its motion, see the ConstrainedEye and ConstrainedNode examples.
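For instance, the hierarchy above may be rearranged at runtime. A minimal sketch (assuming the scene, n1, n2 and n3 objects declared earlier):

// re-parent n3 under n2, so that transforming n2 now also affects n3
n3.setReference(n2);
// make n3 a top-level (world) node again
n3.setReference(null);
// use n1 as the scene eye instead of the default one
scene.setEye(n1);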

Localization

A node position, orientation and magnitude may be set with the following methods:

| Node localization | Position | Orientation | Magnitude |
|---|---|---|---|
| Globally | setPosition(vector) | setOrientation(quaternion) | setMagnitude(mag) |
| Locally | setTranslation(vector) | setRotation(quaternion) | setScaling(scl) |
| Incrementally | translate(vector, [inertia]) | rotate(quaternion, [inertia]), orbit(quaternion, center, [inertia]) | scale(amount, [inertia]) |

The optional inertia parameter takes a value in [0..1], where 0 means no inertia (the default) and 1 means no friction. Its implementation was inspired by the great PeasyCam damped actions and is done in terms of TimingTasks.
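A minimal sketch of the three localization flavors (assuming the n1 and n2 nodes from the hierarchy above):

// globally: place n1 at a world position
n1.setPosition(new Vector(10, 20, 30));
// locally: place n2 relative to its reference node n1
n2.setTranslation(new Vector(50, 0, 0));
// incrementally: rotate n2 around its z-axis, with some inertia
n2.rotate(new Quaternion(new Vector(0, 0, 1), HALF_PI), 0.85);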

Shapes

Node shapes can be set from an immediate-mode rendering Processing procedure (see graphics(PGraphics)) or from a retained-mode rendering Processing PShape (see shape(PShape)). Shapes can be picked precisely using their projection onto the screen, see picking(). Note that even the eye can have a shape which may be useful to depict the viewer in first person camera style.

Space transformations

The following Scene methods transform points (locations) and vectors (displacements) between screen space (a box of width * height * 1 dimensions where user interaction takes place), NDC and nodes (including the world, i.e., the null node):

| Space transformation | Points | Vectors |
|---|---|---|
| NDC to Screen | ndcToScreenLocation(point) | ndcToScreenDisplacement(vector) |
| Screen to NDC | screenToNDCLocation(pixel) | screenToNDCDisplacement(vector) |
| Screen to Node | location(pixel, node) | displacement(vector, node) |
| Node to Screen | screenLocation(point, node) | screenDisplacement(vector, node) |
| Screen to World | location(pixel) | displacement(vector) |
| World to Screen | screenLocation(point) | screenDisplacement(vector) |

Note that point, pixel and vector are Vector instances.
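For example, a sketch (assuming the scene and the n1 node declared earlier) that projects a node origin onto the screen and casts a pixel back into the world:

// node to screen: project the n1 origin onto the screen
Vector pixel = scene.screenLocation(new Vector(0, 0, 0), n1);
// screen to world: cast the pixel back into world space
Vector world = scene.location(pixel);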

The following Node methods transform points (locations) and scalars / vectors/ quaternions (displacements) between different node instances (including the world):

| Space transformation | Points | Scalars / Vectors / Quaternions |
|---|---|---|
| Node to (this) Node | location(point, node) | displacement(element, node) |
| World to (this) Node | location(point) | displacement(element) |
| (this) Node to World | worldLocation(point) | worldDisplacement(element) |

Note that point is a Vector instance and element is either a float (scalar), Vector or Quaternion one.
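A sketch of point transformations among nodes (assuming the n2 and n3 sibling nodes from the hierarchy above):

// express the n3 origin in n2 coordinates
Vector p = n2.location(new Vector(0, 0, 0), n3);
// and take it back to world coordinates
Vector w = n2.worldLocation(p);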

Rendering

Display the scene node hierarchy from its eye() point-of-view with:

void draw() {
  scene.display();
}

Render the scene node hierarchy from its eye() point-of-view with:

void draw() {
  // the subtree param is optional
  scene.render();
}

Note that the display and render commands are equivalent when the scene is onscreen. Observations:

  1. Call scene.display(subtree) and scene.render(subtree) to just display / render the scene subtree.
  2. Call scene.display(pixelX, pixelY) (or scene.display(subtree, pixelX, pixelY)) to display the offscreen scene with its upper left corner at (pixelX, pixelY).
  3. Enclose 2D screen-space drawing (such as GUI elements and text) between scene.beginHUD() and scene.endHUD() to render it on top of a 3D scene.
  4. Customize the rendering traversal algorithm by overriding the node visit(graph) method, see the ViewFrustumCulling example.
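Observation 3 may be sketched as follows; the fill and text calls are standard Processing, drawn in screen space on top of the rendered scene:

void draw() {
  scene.display();
  // 2D screen-space overlay
  scene.beginHUD();
  fill(255);
  text("fps: " + frameRate, 10, 20);
  scene.endHUD();
}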

Drawing functionality

The Scene implements several static drawing functions that complement those already provided by Processing, such as: drawCylinder(PGraphics, int, float, float), drawHollowCylinder(PGraphics, int, float, float, Vector, Vector), drawCone(PGraphics, int, float, float, float, float), drawCone(PGraphics, int, float, float, float, float, float) and drawTorusSolenoid(PGraphics, int, int, float, float).

Drawing functions that take a PGraphics parameter (including the above static ones), such as beginHUD(PGraphics), endHUD(PGraphics), drawAxes(PGraphics, float), drawCross(PGraphics, float, float, float) and drawGrid(PGraphics) among others, can be used to set a node shape.

Another scene's eye (different from this one) can be drawn with drawFrustum(Scene). Typical uses include interactive minimaps and visibility culling visualization and debugging.

Interactivity

Eye

The scene has several methods to position and orient the eye node, such as: lookAt(Vector), setFov(float), setViewDirection(Vector), setUpVector(Vector), fit() and fit(Node), among others.

The following scene methods implement eye motion actions particularly suited for input devices, possibly having several degrees-of-freedom (DOFs):

| Action | Generic input device | Mouse |
|---|---|---|
| Align | alignEye() | n.a. |
| Focus | focusEye() | n.a. |
| Translate | translateEye(dx, dy, dz, [inertia]) | mouseTranslateEye([inertia]) |
| Rotate | rotateEye(roll, pitch, yaw, [inertia]) | n.a. |
| Scale | scaleEye(delta, [inertia]) | n.a. |
| Spin | spinEye(pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) | mouseSpinEye([inertia]) |
| Move forward | moveForward(dz, [inertia]) | n.a. |
| Rotate CAD | rotateCAD(roll, pitch, [inertia]) | mouseRotateCAD([inertia]) |
| Look around | lookAround(deltaX, deltaY, [inertia]) | mouseLookAround([inertia]) |

n.a. does not mean the mouse action is unavailable, but that it can be implemented in several ways (see the code snippets below). The provided mouse actions are implemented unambiguously by simply passing the Processing pmouseX, pmouseY, mouseX and mouseY variables to their generic input device counterparts (e.g., mouseTranslateEye() is the same as translateEye(pmouseX - mouseX, pmouseY - mouseY, 0), and mouseSpinEye() is the same as spinEye(pmouseX, pmouseY, mouseX, mouseY)), hence their simpler signatures.

Mouse and keyboard examples:

// define a mouse-dragged eye interaction
void mouseDragged() {
  if (mouseButton == LEFT)
    scene.mouseSpinEye();
  else if (mouseButton == RIGHT)
    scene.mouseTranslateEye();
  else
    // drag along x-axis: changes the scene field-of-view
    scene.scaleEye(scene.mouseDX());
}
// define a mouse-moved eye interaction
void mouseMoved(MouseEvent event) {
  if (event.isShiftDown())
    // move mouse along y-axis: roll
    // move mouse along x-axis: pitch
    scene.rotateEye(scene.mouseRADY(), scene.mouseRADX(), 0);
  else
    scene.mouseLookAround();
}
// define a mouse-wheel eye interaction
void mouseWheel(MouseEvent event) {
  if (scene.is3D())
    // move along z
    scene.moveForward(event.getCount() * 20);
  else
    // changes the eye scaling
    scene.scaleEye(event.getCount() * 20);
}
// define a mouse-click eye interaction
void mouseClicked(MouseEvent event) {
  if (event.getCount() == 1)
    scene.alignEye();
  else
    scene.focusEye();
}
// define a key-pressed eye interaction
void keyPressed() {
  // roll with 'x' key
  scene.rotateEye(key == 'x' ? QUARTER_PI / 2 : -QUARTER_PI / 2, 0, 0);
}

The SpaceNavigator and CustomEyeInteraction examples illustrate how to set up other hardware such as a keyboard or a full fledged 6-DOFs device like the space-navigator.

Nodes

To directly interact with a given node, call any of the following scene methods:

| Action | Generic input device | Mouse |
|---|---|---|
| Align | alignNode(node) | n.a. |
| Focus | focusNode(node) | n.a. |
| Translate | translateNode(node, dx, dy, dz, [inertia]) | mouseTranslateNode(node, [inertia]) |
| Rotate | rotateNode(node, roll, pitch, yaw, [inertia]) | n.a. |
| Scale | scaleNode(node, delta, [inertia]) | n.a. |
| Spin | spinNode(node, pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) | mouseSpinNode(node, [inertia]) |

Note that the mouse actions are implemented in a similar manner as for the eye.

Mouse and keyboard examples:

void mouseDragged() {
  // spin n1
  if (mouseButton == LEFT)
    scene.spinNode(n1);
  // translate n3
  else if (mouseButton == RIGHT)
    scene.translateNode(n3);
  // scale n1
  else
    scene.scaleNode(n1, scene.mouseDX());
}
void keyPressed() {
  if (key == CODED)
    if(keyCode == UP)
      scene.translateNode(n2, 0, 10);
    if(keyCode == DOWN)
      scene.translateNode(n2, 0, -10);
}

Customize node behaviors by registering a user gesture data parser with the node setInteraction(Consumer) method, and then send gesture data to the node by calling one of the scene custom interaction invoking methods: interact(Node, Object...), interactTag(String, Object...) or interactTag(Object...). See the CustomNodeInteraction example.

Picking

Picking a node (which should be different from the scene eye) to interact with it is a two-step process:

  1. Tag the node using an arbitrary name either with tag(String, Node) or ray-casting:

    | Ray casting | Synchronously 🔹 | Asynchronously 🔸 |
    |---|---|---|
    | Generic | updateTag(tag, pixelX, pixelY) | tag(tag, pixelX, pixelY) |
    | Mouse | updateMouseTag(tag) | mouseTag(tag) |

    🔹 The tagged node (see node(String)) is returned immediately. 🔸 The tagged node is returned during the next call to the render() algorithm.

  2. Interact with your tagged nodes using one of the following patterns:

    1. Tagged node: interactTag(tag, gesture...), which simply calls interactNode(node(tag), gesture), using node(String) to resolve the node parameter of the node methods above.
    2. Tagged node or eye: interact(tag, gesture...), which is the same as if (!interactTag(tag, gesture...)) interactEye(gesture...), i.e., either interact with the node referred to by the given tag (pattern 1) or delegate the gesture to the eye (see above) when that tag is not in use.

    Generic actions:

    | Action | Tagged node | Tagged node or eye |
    |---|---|---|
    | Align | alignTag(tag) | align(tag) |
    | Focus | focusTag(tag) | focus(tag) |
    | Translate | translateTag(tag, dx, dy, dz, [inertia]) | translate(tag, dx, dy, dz, [inertia]) |
    | Rotate | rotateTag(tag, roll, pitch, yaw, [inertia]) | rotate(tag, roll, pitch, yaw, [inertia]) |
    | Scale | scaleTag(tag, delta, [inertia]) | scale(tag, delta, [inertia]) |
    | Spin | spinTag(tag, pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) | spin(tag, pixel1X, pixel1Y, pixel2X, pixel2Y, [inertia]) |

    Mouse actions:

    | Action | Tagged node | Tagged node or eye |
    |---|---|---|
    | Translate | mouseTranslateTag(tag, [lag]) | mouseTranslate(tag, [lag]) |
    | Spin | mouseSpinTag(tag, [inertia]) | mouseSpin(tag, [inertia]) |

Observations:

  1. A node can have multiple tags, but a given tag cannot be assigned to more than one node. Since the null tag is allowed, signatures of all the above methods lacking the tag parameter are provided for convenience; e.g., mouseTag() is equivalent to mouseTag(null), which in turn is equivalent to tag(null, mouseX, mouseY) (and tag(mouseX, mouseY)).
  2. Refer to picking() and enablePicking(int) for the different ray-casting node picking modes.
  3. To check if a given node would be picked with a ray cast at a given screen position, call tracks(Node, int, int) or mouseTracks(Node).
  4. To tag the nodes in a given array with ray casting use updateTag(String, int, int, Node[]) and updateMouseTag(String, Node[]).
  5. In the case of mouseTranslateTag(tag, [lag]) and mouseTranslate(tag, [lag]) a lag is used instead of inertia: 0 responds immediately and 1 gives no response at all.
  6. Set Scene.inertia in [0..1] (0 means no inertia, 1 no friction) to change the default inertia value globally. It is initially set to 0.8 and also affects the lag in mouseTranslateTag(tag, [lag]) and mouseTranslate(tag, [lag]). See the CajasOrientadas example.
  7. Invoke custom node behaviors by either calling the scene interact(Node, Object...), interactTag(String, Object...) or interactTag(Object...) methods. See the CustomNodeInteraction example.

Mouse and keyboard examples:

// pick with mouse-moved
void mouseMoved() {
  scene.mouseTag();
}

// interact with mouse-dragged
void mouseDragged() {
  if (mouseButton == LEFT)
    // spin the picked node, or the eye if no node has been picked
    scene.mouseSpin();
  else if (mouseButton == RIGHT)
    // translate the picked node, or the eye if no node has been picked
    scene.mouseTranslate();
  else
    // scale the picked node, or the eye if no node has been picked
    scene.scale(mouseX - pmouseX);
}
// pick with mouse-clicked
void mouseClicked(MouseEvent event) {
  if (event.getCount() == 1)
    // use the null tag to manipulate the picked node with mouse-moved
    scene.mouseTag();
  if (event.getCount() == 2)
    // use the "key" tag to manipulate the picked node with key-pressed
    scene.mouseTag("key");
}

// interact with mouse-moved
void mouseMoved() {
  // spin the node picked with one click
  scene.mouseSpinTag();
}

// interact with key-pressed
void keyPressed() {
  // focus the node picked with two clicks
  scene.focusTag("key");
}

Timing

Timing tasks

Timing tasks are (non)recurrent, (non)concurrent (see isRecurrent() and isConcurrent() resp.) callbacks defined by overriding execute(). For example:

Scene scene;
void setup() {
  scene = new Scene(this);
  TimingTask spinningTask = new TimingTask() {
    @Override
    public void execute() {
      scene.eye().orbit(new Vector(0, 1, 0), PI / 100);
    }
  };
  spinningTask.run();
}

will run the timing-task at 25Hz (which is its default frequency()). See the ParticleSystem example.

Interpolators

An interpolator is a timing-task that lets you define the position, orientation and magnitude a node (including the eye) should have at a particular moment in time, a.k.a. a key-frame. When the interpolator is run, the node is animated through a Catmull-Rom spline, matching in space-time the key-frames that define it. Use code such as the following:

Scene scene;
PShape pshape;
Node shape;
Interpolator interpolator;
void setup() {
  ...
  shape = new Node(pshape);
  interpolator = new Interpolator(shape);
  for (int i = 0; i < random(4, 10); i++)
    // addKeyFrame(node, elapsedTime) where elapsedTime is defined relative
    // to the previously added key-frame and expressed in seconds.
    interpolator.addKeyFrame(scene.randomNode(), i % 2 == 1 ? 1 : 4);
  interpolator.run();
}

which will create a shape interpolator containing between 4 and 10 random key-frames. See the Interpolators example.

Installation

Import/update it directly from your PDE. Otherwise, download a release and extract it into your sketchbook libraries folder.

Contributors

Thanks goes to these wonderful people (emoji key):

Jean Pierre Charalambos

📝 🐛 💻 🎨 📖 📋 💡 💵 🔍 🤔 📦 🔌 💬 👀 📢 ⚠️ ✅ 📹

This project follows the all-contributors specification. Contributions of any kind welcome!
