Augmented and Virtual Reality (B19CS7041)
Unit-2: Multiple Models of Input and Output Interface in VR
VR Inputs:
 Tracker:
     Tracking devices are intrinsic components in any VR system. These devices communicate with the
      system's processing unit, telling it the orientation of a user's point of view. In systems that allow a
      user to move around within a physical space, trackers detect where the user is, the direction he is
      moving and his speed.
     There are several different kinds of tracking systems used in VR systems, but all of them have a few
      things in common. They can detect six degrees of freedom (6-DOF) -- these are the object's position
      within the x, y and z coordinates of a space and the object's orientation. Orientation includes an
      object's yaw, pitch and roll.
     From a user's perspective, this means that when you wear an HMD, the view shifts as you look up,
      down, left and right. It also changes if you tilt your head at an angle or move your head forward or
      backward without changing the angle of your gaze. The trackers on the HMD tell the CPU where
      you are looking, and the CPU sends the right images to your HMD's screens.
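      A minimal sketch, in Python, of how a 6-DOF pose reported by a tracker might be represented and turned into a view direction for the renderer (the field names and the 1.7 m head height are illustrative assumptions, not any particular SDK's API):

        import math
        from dataclasses import dataclass

        @dataclass
        class Pose6DOF:
            """Position (x, y, z) plus orientation (yaw, pitch, roll), angles in degrees."""
            x: float
            y: float
            z: float
            yaw: float    # looking left/right (rotation about the vertical axis)
            pitch: float  # looking up/down (rotation about the side axis)
            roll: float   # tilting the head (rotation about the forward axis)

        def view_direction(pose):
            """Convert yaw/pitch into a unit forward vector for the rendering camera."""
            yaw, pitch = math.radians(pose.yaw), math.radians(pose.pitch)
            return (math.cos(pitch) * math.sin(yaw),
                    math.sin(pitch),
                    math.cos(pitch) * math.cos(yaw))

        # Example: the tracker reports the user looking 30 degrees right and 10 degrees up.
        head = Pose6DOF(x=0.0, y=1.7, z=0.0, yaw=30.0, pitch=10.0, roll=0.0)
        print(view_direction(head))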
     Every tracking system has a device that generates a signal, a sensor that detects the signal and a
      control unit that processes the signal and sends information to the CPU. Some systems require you to
      attach the sensor component to the user (or the user's equipment). In that kind of system, you place
      the signal emitters at fixed points in the environment. Some systems are the other way around, with
      the user wearing the emitters while surrounded by sensors attached to the environment.
   The signals sent from emitters to sensors can take many forms, including electromagnetic signals,
    acoustic signals, optical signals, and mechanical signals. Each technology has its own set of
    advantages and disadvantages.
   Electromagnetic tracking systems measure magnetic fields generated by running an electric
    current sequentially through three coiled wires arranged in a perpendicular orientation to one
    another. Each small coil becomes an electromagnet, and the system's sensors measure how its
    magnetic field affects the other coils. This measurement tells the system the direction and orientation
    of the emitter. A good electromagnetic tracking system is very responsive, with low levels of
    latency. One disadvantage of this system is that anything that can generate a magnetic field can
     interfere with the signals sent to the sensors.
   Acoustic tracking systems emit and sense ultrasonic sound waves to determine the position and
    orientation of a target. Most measure the time it takes for the ultrasonic sound to reach a sensor.
    Usually, the sensors are stationary in the environment -- the user wears the ultrasonic emitters. The
    system calculates the position and orientation of the target based on the time it took for the sound to
    reach the sensors. Acoustic tracking systems have many disadvantages. Sound travels relatively
    slowly, so the rate of updates on a target's position is similarly slow. The environment can also
    adversely affect the system's efficiency because the speed of sound through air can change
    depending on the temperature, humidity, or barometric pressure in the environment.
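      A rough sketch of the time-of-flight calculation described above; the 343 m/s speed of sound and the travel times are assumed example values, and a real system would also correct for temperature and humidity and then trilaterate against the known sensor positions:

        SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (varies with conditions)

        def distance_from_travel_time(seconds):
            # distance = speed of sound * time the ultrasonic pulse took to arrive
            return SPEED_OF_SOUND * seconds

        # Example: one pulse from the tracked emitter reaches three fixed sensors after
        # these travel times; the three distances constrain the emitter's 3D position.
        travel_times = [0.0021, 0.0035, 0.0029]               # seconds
        print([round(distance_from_travel_time(t), 2) for t in travel_times])
        # -> [0.72, 1.2, 0.99]  (metres)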
   Optical tracking devices use light to measure a target's position and orientation. The signal emitter
    in an optical device typically consists of a set of infrared LEDs. The sensors are cameras that can
    sense the emitted infrared light. The LEDs light up in sequential pulses. The cameras record the
    pulsed signals and send information to the system's processing unit. The unit can then extrapolate the
     data to determine the position and orientation of the target. Optical systems have a fast update rate,
     which keeps latency issues to a minimum. The system's disadvantages are that the line of sight between
    a camera and an LED can be obscured, interfering with the tracking process. Ambient light or
    infrared radiation can also make a system less effective.
   Mechanical tracking systems rely on a physical connection between the target and a fixed
    reference point. A common example of a mechanical tracking system in the VR field is the BOOM
    display. A BOOM display is an HMD mounted on the end of a mechanical arm that has two points
    of articulation. The system detects the position and orientation through the arm. The update rate is
    very high with mechanical tracking systems, but the disadvantage is that they limit a user's range of
    motion.
 Sensor:
      Accelerometer: An accelerometer is an instrument used to measure acceleration of a moving or a
       vibrating body and is therefore used in VR devices to measure the acceleration along a particular
        axis. In our smartphones, for instance, the accelerometer lets the device know whether it is being
        held in landscape or portrait mode. Similarly, the primary role of the accelerometer in a VR headset
        is to sense gravity and motion, which helps determine how the user's head is tilted and which way it is turning.
      Gyroscope: A gyroscope is a device used to measure orientation. The device consists of a wheel or
       disc mounted so that it can spin rapidly about an axis which itself is free to alter in any direction.
       The orientation of the axis is not affected by tilting of the mounting, so gyroscopes can be used to
       provide stability or maintain a reference direction in navigation systems, automatic pilots, and
       stabilizers.
      Magnetometer: A magnetometer is a device used to measure magnetic forces, usually Earth’s
        magnetism, and thus tell which direction the device is facing. A compass is a simple type of magnetometer,
       one that measures the direction of an ambient magnetic field.
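       In practice the three sensors are fused: the gyroscope gives fast but drifting orientation updates, while the accelerometer (gravity) and magnetometer (magnetic north) provide slow but drift-free references. A minimal single-axis complementary-filter sketch, with made-up readings and an assumed blend factor:

        def complementary_filter(prev_angle, gyro_rate, reference_angle, dt, alpha=0.98):
            """Blend a fast, drifting gyroscope integration with a slow, absolute reference.

            prev_angle      -- last estimated angle (degrees)
            gyro_rate       -- angular velocity from the gyroscope (degrees/second)
            reference_angle -- absolute angle from accelerometer/magnetometer (degrees)
            dt              -- time step (seconds)
            alpha           -- how much to trust the gyroscope (close to 1.0)
            """
            gyro_estimate = prev_angle + gyro_rate * dt
            return alpha * gyro_estimate + (1.0 - alpha) * reference_angle

        # Example: head yaw estimated over a few 10 ms steps.
        yaw = 0.0
        for gyro_rate, magnetometer_yaw in [(50.0, 0.4), (50.0, 1.0), (45.0, 1.5)]:
            yaw = complementary_filter(yaw, gyro_rate, magnetometer_yaw, dt=0.01)
            print(round(yaw, 3))   # -> roughly 0.498, 0.998, 1.449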
 Digital Glove:
      VR gloves are not quite a brand-new technology, but these mostly focus on translating movements
       into digital commands. Taking the input flow in the other direction — so a user can “feel” their
       digital environment — remains a more limited technology, which has typically focused on
       transmitting a sense of texture. That’s already pretty cool, but this new glove actually transmits the
       details of a virtual object’s shape to a user’s fingertips.
      Like most such handwear, this VR glove uses sensors to tell the computer where the virtual hand
       should go, and actuators to provide some kind of sensation for the user’s (real life) hand. The sensors
       make use of piezoelectric technology, materials that produce an electric charge when squeezed. Line
       the glove with them, and every bend and flick of a finger produces a measurable electric pulse,
       which the software can translate into commands for the virtual hand.
      But it’s the actuators within the VR glove that the authors spend the most time describing, since they
       developed them specifically for this project. Each one is, basically, a flat little air bubble encased in
       a thin silicone skin. By using an electric current to change the shape of the silicone, the researchers
       could force the air inside into a tighter space that “popped up.” Varying the signal changed the
       height of the bubble, and they could turn it on and off nigh-instantly.
      It might not sound like much, but that little air bubble is the key. Put them in the VR glove’s
       fingertips, and suddenly the user’s hand is tricked into thinking it’s touching, or holding, something.
                                             [Figure: The VR glove in action.]
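       A toy sketch of the sensing side: turning a raw bend-sensor reading into a joint angle for the virtual hand. The calibration values and the 0 to 90 degree range are invented for illustration; a real glove calibrates per user and per finger:

        def sensor_to_joint_angle(raw_reading, flat_reading=0.05, full_bend_reading=0.85,
                                  max_angle=90.0):
            """Map a raw bend-sensor reading to a finger joint angle in degrees.

            flat_reading / full_bend_reading are hypothetical calibration values recorded
            with the finger straight and fully curled.
            """
            span = full_bend_reading - flat_reading
            t = (raw_reading - flat_reading) / span
            t = max(0.0, min(1.0, t))          # clamp to the calibrated range
            return t * max_angle

        # Example: readings from the index finger's sensor while the user curls it.
        for reading in (0.05, 0.45, 0.85):
            print(sensor_to_joint_angle(reading))   # 0.0, 45.0, 90.0 degrees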
 Movement Capture / Motion Tracking:
      Motion tracking, the process of digitizing your movements for use in computer software, is
       incredibly important for virtual reality. Without it your virtual self is paralyzed, unable to move its
       head or move around. At best you’d have to use an abstract control system like a gamepad instead
       which, while functional, does your sense of immersion and presence no favors.
      There are quite a few ways motion can be tracked, here we’ll break them down into two broad
       categories: optical and non-optical tracking. Optical tracking is where an imaging device is used
       to track body motion. Non-optical tracking uses a variety of sensors that are often attached to the
       body for the purpose of motion tracking, but can also involve magnetic fields or sound waves.
     Optical methods of motion tracking usually use cameras of one sort or another. The person being
       tracked wears optical markers, usually dots of highly reflective material placed at certain known points
       on their body or on equipment such as the HMD or handheld controllers. In professional contexts
      where motion is captured for use in animation an actor’s body may be covered in such markers, but
      commercial systems designed for personal use in virtual reality may use only a few strategic markers
      or even no markers at all. When a camera installation capable of calculating depth sees a marker it
      can map it to 3D space.
     For example, two cameras at known angles that both see the same dot allow for this mathematical
       calculation. One issue that does arise with this method is marker swap, where two cameras see
       different dots but think they are the same one. Obviously, this produces incorrect tracking data and
       inaccurate motion tracking.
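      A simplified two-camera triangulation sketch for the situation just described, reduced to 2D for brevity; the camera positions and bearing angles are made-up example values:

        import math

        def triangulate(cam1, angle1, cam2, angle2):
            """Intersect two bearing rays (angles from the +x axis, in degrees) cast from
            cameras at known 2D positions; returns the estimated marker position."""
            d1 = (math.cos(math.radians(angle1)), math.sin(math.radians(angle1)))
            d2 = (math.cos(math.radians(angle2)), math.sin(math.radians(angle2)))
            # Solve cam1 + t1*d1 == cam2 + t2*d2 for t1 using the 2D cross product.
            denom = d1[0] * d2[1] - d1[1] * d2[0]
            if abs(denom) < 1e-9:
                raise ValueError("rays are parallel; the marker cannot be triangulated")
            dx, dy = cam2[0] - cam1[0], cam2[1] - cam1[1]
            t1 = (dx * d2[1] - dy * d2[0]) / denom
            return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])

        # Example: two cameras 2 m apart both see the same reflective marker.
        print(triangulate(cam1=(0.0, 0.0), angle1=45.0, cam2=(2.0, 0.0), angle2=135.0))
        # -> approximately (1.0, 1.0)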
      The best known and most successful consumer motion tracking device is the Microsoft Kinect. This
       sensor package, designed for the Xbox 360 and Xbox One consoles as well as Windows computers, is
       capable of performing complex motion tracking without the use of any markers at all. The camera
      simply needs to see the subject. The latest version of the Kinect can track up to 6 individuals
      simultaneously. These capabilities are achieved through some very clever software methods
      combined with special infrared and depth sensing cameras, but still cannot attain the precision of
      professional marker-based systems as of yet.
     The Leap Motion is another example of a marker-free tracking system, but rather than full body
      tracking the Leap Motion creates high-resolution real time scans of objects in close proximity to it.
      In virtual reality contexts the device can be attached to the front of an HMD and provide precisely
      digitized versions of the user’s hands, which then allows for natural interaction.
 3D Menus / VR Menus:
      Skeuomorphic Menus: Skeuomorphic design is the idea of designing virtual things mimicking their
      real-world counterparts. For example, the note app on iOS looked like a real notebook with yellow
      pages and lines for writing.
     In VR this could mean that a menu for a photo app looks actually like a shelf with plenty of photo
      albums. You can select an album, open it and start flipping pages. There is no need for a VR photo
      app to look like that since we do not have the same constraints as in the real world. No need to store
       things on shelves; we can just let them appear out of thin air. The big plus of skeuomorphisms: you
       enable the user to leverage their real-world experience to operate the new menu.
     Flat Menus Mapped on Geometry: Flat menus are building upon interfaces we know from
      computers and especially tablets & phones. They rely on basic building blocks like cards & tiles and
      hierarchical folder structures. The apps on your smartphone for example are arranged in little tiles
      and you can group them together into a folder to create a hierarchy. Simple interfaces like that
      became very popular with the advent of the smartphone and one could argue that bringing them into
       VR already represents a skeuomorphism of its own kind, but let's not drift off into the philosophical
       here.
     In VR those flat, 2D interfaces can be mapped onto simple geometric shapes like a curved screen or
      the inside of a sphere around the user, something we see in many of the currently available apps
      including Oculus Home on GearVR.
     Real 3D interfaces: Real 3D interfaces so far are quite rare, although they seem like the most
       natural fit for VR. The problem: we have many years of experience and a developed design
      language for flat interfaces, but not so much for 3D ones – especially when it comes to menus.
     We see a glimmer of what is possible with thoughtfully designed 3D interfaces in VR drawing apps
      like Tilt Brush. They put the menu around the controllers, making a large number of functions
      immediately accessible while requiring little space. Having all items present all the time in 3D space
      (continuity & physicality) keeps the interface very explorable and easy to learn – you will never
      search through a number of sub menus for that one particular item. Hierarchy is solved by proximity
      and location.
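      A small sketch of the idea behind such a controller-anchored menu: items are laid out on a circle around the controller so everything stays visible and reachable. The item names, radius, and controller position are invented for illustration:

        import math

        def radial_menu_positions(center, radius, item_count):
            """Place menu items on a circle around the controller position (x, y, z),
            here in the x-y plane for simplicity."""
            positions = []
            for i in range(item_count):
                angle = 2.0 * math.pi * i / item_count
                positions.append((center[0] + radius * math.cos(angle),
                                  center[1] + radius * math.sin(angle),
                                  center[2]))
            return positions

        # Example: four tools arranged around the controller, 8 cm away from it.
        items = ["brush", "eraser", "color picker", "undo"]
        for name, pos in zip(items, radial_menu_positions((0.2, 1.1, -0.3), 0.08, len(items))):
            print(name, [round(c, 3) for c in pos])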
     Another intriguing outlook on 3D interfaces is the Leap Motion 3D Jam entry Lyra. With Lyra you
      create music using a 3D interface in VR, summoning different virtual elements and connecting them
      to sequences and loops with your hands. The result is an interactive 3D structure that represents the
      song playing.
 3D Scanner:
     As the demand increases for high-quality 3D content-rich environments, we see clients using Artec
      3D Scanners to capture assets for VR projects. For objects large and small an Artec 3D Scanner will
       be a great tool for many of your VR projects. With high resolution and strong texture-mapping
       capabilities, the Artec Eva, Artec Space Spider, and the Artec Leo are our go-to tools for VR and AR
      production studios.
          We have seen .obj and .wrl (VRML world files) used for texture-mapped models, while .stl
           (stereolithography, common in CAD and 3D printing) is typical for un-textured models. We are able to export
           many different file formats, so finding the right file format for your scan data won't be a problem.
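      For orientation, a tiny sketch of what the .obj structure boils down to: vertex positions (v), texture coordinates (vt), and faces (f) that index both. The file name and the single textured square are illustrative; real scan exports contain millions of vertices plus separate material and texture files:

        # Write a single textured square to a Wavefront .obj file (illustrative only).
        vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
        tex_coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
        faces = [(1, 2, 3), (1, 3, 4)]   # .obj indices start at 1

        with open("scan_preview.obj", "w") as f:
            for v in vertices:
                f.write("v %f %f %f\n" % v)
            for vt in tex_coords:
                f.write("vt %f %f\n" % vt)
            for a, b, c in faces:
                # "v/vt" pairs: vertex index / texture-coordinate index
                f.write("f %d/%d %d/%d %d/%d\n" % (a, a, b, b, c, c))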
     We can 3D scan and digitize your assets or products for use in augmented reality interactions, 360-
      degree product views or tours, and product visualization in real-world settings.
     SeekXR is an in-browser AR experience that will change the competitive landscape of how e-
      retailers present products online. With this platform, users can host a plethora of content and
      products in an app-less, no download, augmented reality format.
VR Outputs:
 Visual:
     Virtual reality relies heavily on the concept of immersion. In regard to vision, binocular (both eye)
      and stereoscopic (depth perception) cues are critical to "trick" the visual system. Although not
      actually VR, early work on stereo viewing was not only an important concept, but also a
      fundamental visual component for VR to function correctly.
     This is a super-quick refresher on how human vision works in general. The clear front window of the
       eye is called the cornea. Light passes through the cornea into the hole in the colored muscle tissue of
      the iris called the pupil. Right behind the pupil is the focusing mechanism of the eye called the lens.
      The lens flexes to focus light on the light-sensitive cells of the retina. Millions of light receptor cells
      called cones and rods transform light information into electrical signals and pass these signals
      through the optic nerve, through the thalamus, and ultimately into the visual cortex.
      What is amazing is the massive amount of information that is processed by the brain. In fact,
       nearly 1/3 of the human brain is dedicated to vision.
     Staring through a head mounted display is more or less like looking through a pair of binoculars
      except these binoculars provide you with a window into digital landscapes and worlds. The
       arrangement of having one lens per eye gives the user stereo vision, just as in real life.
       We humans use stereo vision to infer the depth and distance of objects.
      On top of stereo 3D vision, you also get one-to-one head tracking with virtual reality. This means that
       you can look around by physically moving your head and see the corresponding image in the virtual
       world. When all the necessary components are combined properly, virtual reality makes you feel like
       you are somewhere else; this feeling is called presence, and VR is the only medium that provides it.
       In many respects virtual reality is the closest thing we have to dream sharing.
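      A simplified sketch of the stereo and head-tracking idea: the scene is rendered once per eye from two positions separated by the interpupillary distance, and both eye positions rotate with the tracked head yaw. The 64 mm IPD and the coordinate convention (up is +y, forward is -z at zero yaw) are assumptions:

        import math

        IPD = 0.064   # interpupillary distance in metres (typical adult value; varies per user)

        def eye_positions(head_pos, head_yaw_deg):
            """Return (left_eye, right_eye) positions, half an IPD to each side of the head,
            rotated by the tracked head yaw so the eyes turn together with the head."""
            yaw = math.radians(head_yaw_deg)
            # Head's "right" direction in the horizontal plane (forward is -z at zero yaw).
            right = (math.cos(yaw), 0.0, -math.sin(yaw))
            half = IPD / 2.0
            left_eye = (head_pos[0] - right[0] * half, head_pos[1], head_pos[2] - right[2] * half)
            right_eye = (head_pos[0] + right[0] * half, head_pos[1], head_pos[2] + right[2] * half)
            return left_eye, right_eye

        # Example: every frame the renderer draws the scene once from each of these positions.
        left, right = eye_positions(head_pos=(0.0, 1.7, 0.0), head_yaw_deg=0.0)
        print(left, right)   # two viewpoints 64 mm apart, straddling the head position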
 Auditory:
   Auditory representation in VR systems is implemented by means of a sound system. However, in
    contrast to conventional sound systems, the auditory representation is non-stationary and interactive,
    i.e., among other things, dependent on listeners' actions. This implies, for the auditory
    representation, that very complex, physiologically adequate sound signals have to be delivered to the
    auditory systems of the listeners, namely to their eardrums.
   One possible technical way to accomplish this is via transducers positioned at the entrances to
    the ear canals (headphones). Headphones are fixed to the head and thus move simultaneously with
    it. Consequently, head and body movements do not modify the coupling between transducers and ear
    canals (so-called head-related approach to auditory representation) - in contrast to the case where the
    transducers, e.g., loudspeakers, are positioned away from the head and where the head and body can
     move in relation to the sound sources (room-related approach). In any real acoustical situation,
    the transmission paths from the sources to the ear-drums will vary as a result of the listeners'
    movements in relation to the sound sources- the actual variation being dependent on the directional
    characteristics of both the sound sources and the external ears (skull, pinna, torso) and on the
    reflections and reverberation present.
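    A toy illustration of why the head-related approach must track listener movement: the interaural time difference (how much sooner a sound reaches the nearer ear) depends on the source direction relative to the current head orientation, so it must be recomputed whenever the listener turns. The head radius is an assumed average, Woodworth's approximation is used for simplicity, and real systems use full head-related transfer functions:

        import math

        HEAD_RADIUS = 0.0875     # metres, an assumed average head radius
        SPEED_OF_SOUND = 343.0   # m/s

        def interaural_time_difference(source_azimuth_deg, head_yaw_deg):
            """Woodworth's approximation of the arrival-time difference between the ears
            for a distant source, with the azimuth re-expressed relative to the head."""
            relative = math.radians(source_azimuth_deg - head_yaw_deg)
            return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(relative) + relative)

        # Example: a source fixed 60 degrees to the side; as the listener turns toward it,
        # the time difference shrinks and the source is heard more centrally.
        for head_yaw in (0.0, 30.0, 60.0):
            print(round(interaural_time_difference(60.0, head_yaw) * 1e6, 1), "microseconds")
        # -> roughly 488.1, 261.1, 0.0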
                        [Figure: System architecture for auditory virtual environments.]
   The head position and orientation are frequently measured by head tracking hardware, usually based
    on modulated electromagnetic fields. Measured position and orientation data is passed on to the head
    renderer process that is used to buffer the data for immediate access by the central controller. Simple
    transformations, like offset additions, may be applied to the data in the head renderer process.
      The central controller implements the protocol dynamics of a virtual environment application. It
        accepts events from all afferent renderers (those reporting the user's actions), evaluates them, and reacts
        by sending appropriate events resulting from the evaluation to the efferent renderers (those presenting
        output to the user). For example, in a multimodal application with
       an integrated hand gesture renderer, the central controller could advise the auditory renderer to move
       sound sources represented by objects that can be grasped in the virtual world.
 Haptic Devices:
      Haptics is a recent enhancement to virtual environments allowing users to “touch” and feel the
       simulated objects with which they interact. Haptics is the science of touch. The word derives from
       the Greek haptikos meaning “being able to come into contact with”. The study of haptics emerged
       from advances in virtual reality.
      Virtual reality is a form of human-computer interaction (as opposed to keyboard, mouse, and
        monitor) that provides a virtual environment one can explore through direct interaction with the
        senses. To be able to interact with an environment, there must be feedback. For example, the user
       should be able to touch a virtual object and feel a response from it. This type of feedback is called
       haptic feedback.
      In Human-Computer Interaction (HCI), haptic feedback means both tactile and force feedback.
       Tactile, or touch feedback is the term applied to sensations felt by the skin. Tactile feedback allows
       users to feel things such as the texture of surfaces, temperature, and vibration. Force feedback
       reproduces directional forces that can result from solid boundaries, the weight of grasped virtual
       objects, mechanical compliance of an object and inertia.
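       A minimal sketch of how force feedback against a solid boundary is commonly computed: a penalty (spring) model that pushes back in proportion to how far the haptic probe has penetrated the virtual surface. The stiffness value is assumed; devices such as the PHANTOM run a loop like this roughly a thousand times per second:

        STIFFNESS = 500.0   # N/m, an assumed virtual-wall stiffness

        def wall_force(probe_depth_into_wall):
            """Penalty-based force feedback: no force while the haptic probe is outside the
            virtual wall, and a spring-like push-back proportional to penetration depth once
            it goes inside (Hooke's law, F = k * x)."""
            if probe_depth_into_wall <= 0.0:
                return 0.0
            return STIFFNESS * probe_depth_into_wall

        # Example: pushing 4 mm into a virtual wall makes the device push back with 2 N.
        for depth in (-0.002, 0.001, 0.004):
            print(wall_force(depth), "N")   # 0.0 N, 0.5 N, 2.0 N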
      Haptic devices (or haptic interfaces) are mechanical devices that mediate communication between
       the user and the computer. Haptic devices allow users to touch, feel and manipulate three-
       dimensional objects in virtual environments and tele-operated systems. Most common computer
        interface devices, such as basic mice and joysticks, are input-only devices, meaning that they track a
       user's physical manipulations but provide no manual feedback.
      Haptic devices are input-output devices, meaning that they track a user's physical manipulations
       (input) and provide realistic touch sensations coordinated with on-screen events (output). Examples
       of haptic devices include consumer peripheral devices equipped with special motors and sensors
       (e.g., force feedback joysticks and steering wheels) and more sophisticated devices designed for
       industrial, medical, or scientific applications (e.g., PHANTOM device).
      Accordingly, the paradigm of haptic HCI can be classified into three stages: desktop haptics, surface
       haptics, and wearable haptics.
End of Unit-2