4. Manipulating the View – OpenSceneGraph 3 Cookbook

Chapter 4. Manipulating the View

In this chapter, we will cover:

  • Setting up views on multiple screens

  • Using slave cameras to simulate a power-wall

  • Using depth partition to display huge scenes

  • Implementing the radar map

  • Showing the top, front, and side views of a model

  • Manipulating the top, front, and side views

  • Following a moving model

  • Using manipulators to follow models

  • Designing a 2D camera manipulator

  • Manipulating the view with joysticks

Introduction

This chapter is all about the camera and camera manipulation. No matter how beautiful or realistic your scene is, a bad navigation experience will still drive your users away. Your goal is to change the view and projection matrices, which represent the 3D space transformation, smoothly and conveniently, and thus change the world seen from the camera. But don't forget that manipulating with mouse and keyboard is actually a 2D interaction, with only two degrees-of-freedom (DOF) plus the mouse buttons. So you can hardly describe a complete 3D movement (six DOFs in total) with mouse-move and button-click events alone.

Fortunately, OSG provides us a list of handy in-built camera manipulators, such as the osgGA::TrackballManipulator. It also contains a good framework (including manipulator base class and event handlers) for implementing our own navigation strategy. We will cover all these features in this specialized chapter.

Another interesting topic here is the usage of single viewer and composite viewer, as well as the configuration of viewing attributes and multi-monitor options. You will also find examples of them in the later recipes.

Setting up views on multiple screens

Today, more and more people have multiple physical display devices, especially multiple monitors (or screens) at work. Modern graphics cards can usually drive at least two outputs, and most operating systems, including Linux, Mac OS X, and Microsoft Windows, fully support the simultaneous use of multiple screens.

However, programming for multiple screens is still challenging because developers have to handle the graphics buffer of each screen themselves, which is painful for many inexperienced developers. Fortunately, OSG encapsulates the platform-specific multi-monitor APIs and uses a context ID to identify each screen. It enables users to create and use one or more graphics contexts on each screen, and write OSG programs without any discomfort.

How to do it...

Carry out the following steps in order to complete the recipe:

  1. Include necessary headers:

    #include <osg/Camera>
    #include <osgDB/ReadFile>
    #include <osgGA/TrackballManipulator>
    #include <osgViewer/CompositeViewer>
    
  2. The createView() function will create a full-screen window on the specified screen:

    osgViewer::View* createView( int screenNum )
    {
    ...
    }
    
  3. Let us configure the desired rendering window attributes according to the current screen settings in the function. We can obtain the screen size and attributes through the WindowingSystemInterface class:

    unsigned int width = 800, height = 600;
    osg::GraphicsContext::WindowingSystemInterface* wsi =
        osg::GraphicsContext::getWindowingSystemInterface();
    if ( wsi )
    {
        wsi->getScreenResolution(
            osg::GraphicsContext::ScreenIdentifier(screenNum),
            width, height );
    }
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    traits->screenNum = screenNum;
    traits->x = 0;
    traits->y = 0;
    traits->width = width;
    traits->height = height;
    traits->windowDecoration = false;
    traits->doubleBuffer = true;
    traits->sharedContext = 0;
    
  4. Construct the graphics context and attach it to a camera to provide a usable rendering window here. The viewport of the camera should also receive the size parameters to display the scene in the full-screen range:

    Note

    Note that if the screen number, or any other Traits member set before, is invalid (for example, a nonexistent screen index), the osg::GraphicsContext object created will be NULL, and you should return here with an error report.

    osg::ref_ptr<osg::GraphicsContext> gc = osg::GraphicsContext::createGraphicsContext( traits.get() );
    if ( !gc ) return NULL;
    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setGraphicsContext( gc.get() );
    camera->setViewport( new osg::Viewport(0, 0, width, height) );
    camera->setProjectionMatrixAsPerspective( 30.0f,
        static_cast<double>(width)/static_cast<double>(height),
        1.0f, 10000.0f );
    GLenum buffer = traits->doubleBuffer ? GL_BACK : GL_FRONT;
    camera->setDrawBuffer( buffer );
    camera->setReadBuffer( buffer );
    
  5. Create a new view object, set up a default camera manipulator, and return it at the end:

    osg::ref_ptr<osgViewer::View> view = new osgViewer::View;
    view->setCamera( camera.get() );
    view->setCameraManipulator( new osgGA::TrackballManipulator );
    return view.release();
    
  6. In the main entry, we will make use of osgViewer::CompositeViewer class in this recipe:

    osgViewer::CompositeViewer viewer;
    
  7. Create a view with any model shown on the first screen:

    osgViewer::View* view1 = createView( 0 );
    if ( view1 )
    {
        view1->setSceneData( osgDB::readNodeFile("cessna.osg") );
        viewer.addView( view1 );
    }
    
  8. Create yet another view on the second screen. Make sure you have enough monitors and have already connected them to your graphics card:

    osgViewer::View* view2 = createView( 1 );
    if ( view2 )
    {
        view2->setSceneData( osgDB::readNodeFile("cow.osg") );
        viewer.addView( view2 );
    }
    
  9. Start the viewer:

    return viewer.run();
    
  10. If you have at least two monitors, you will see a Cessna on the main monitor and a cow model on the other one; otherwise, only the main scene will be rendered properly, and you may see some failure information in the console window indicating that the second view could not be initialized.

How it works...

In the createView() function, we can easily obtain the screen size and other attributes through the WindowingSystemInterface class, which has access to the platform-specific windowing APIs. Then we pass the acquired width and height values to a Traits object. Of course, do not forget the screenNum value, which is the key to setting up multiple views on multiple displays.

The only difference between single-screen and multi-screen programming in OSG is setting the screen index (screenNum) variable of the Traits class. Anything else related to a specific platform will be handled by OSG automatically, and screen information will be recorded by the WindowingSystemInterface class.

There's more...

You may retrieve the number of screens connected to the current computer by calling the getNumScreens() method of WindowingSystemInterface:

osg::GraphicsContext::WindowingSystemInterface* wsi =
osg::GraphicsContext::getWindowingSystemInterface();
if ( wsi ) numScreens = wsi->getNumScreens();

In order to obtain the detailed information of a screen (screen index is num):

osg::GraphicsContext::ScreenSettings settings;
osg::GraphicsContext::WindowingSystemInterface* wsi = osg::GraphicsContext::getWindowingSystemInterface();
if ( wsi ) wsi->getScreenSettings(
osg::GraphicsContext::ScreenIdentifier(num), settings );

You can get the screen width, height, refresh rate, and color buffer depth attributes from the settings variable.

The osgcamera example in the core source code demonstrates how to display a scene on single and multiple screens, using the screen number identifier.

Using slave cameras to simulate a power-wall

The PowerWall is a common virtual reality system that displays extremely high-resolution scenes on a tiled array of projectors or monitors sharing the same data. With a really large screen composed of many small screens, it is possible to see the world clearly and find detailed information effectively.

OSG also provides the slave camera feature to implement the same functionality. Each slave represents a small tile in the PowerWall. You can assign the slave cameras to any number of screens, but in this recipe, we will simply put them on one screen.

How to do it...

Carry out the following steps in order to complete the recipe:

  1. Include necessary headers:

    #include <osg/Camera>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>
    
  2. The createSlaveCamera() function is used to create cameras following the viewer's main camera. The creation process is similar to the previous recipe, Setting up views on multiple screens. An osg::Camera node will be returned, which is already attached to a rendering window:

    osg::Camera* createSlaveCamera( int x, int y, int width, int height )
    {
        osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
        traits->screenNum = 0;  // this can be changed for multi-display
        traits->x = x;
        traits->y = y;
        traits->width = width;
        traits->height = height;
        traits->windowDecoration = false;
        traits->doubleBuffer = true;
        traits->sharedContext = 0;

        osg::ref_ptr<osg::GraphicsContext> gc =
            osg::GraphicsContext::createGraphicsContext( traits.get() );
        if ( !gc ) return NULL;

        osg::ref_ptr<osg::Camera> camera = new osg::Camera;
        camera->setGraphicsContext( gc.get() );
        camera->setViewport( new osg::Viewport(0, 0, width, height) );
        GLenum buffer = traits->doubleBuffer ? GL_BACK : GL_FRONT;
        camera->setDrawBuffer( buffer );
        camera->setReadBuffer( buffer );
        return camera.release();
    }
    
  3. In the main entry, we will allow the user to decide the numbers of rows and columns of the PowerWall. The greater these two numbers are, the more displays you are going to have; the side effect is a less efficient system containing too many rendering windows:

    osg::ArgumentParser arguments( &argc, argv );
    int totalWidth = 1024, totalHeight = 768;
    arguments.read( "--total-width", totalWidth );
    arguments.read( "--total-height", totalHeight );
    int numColumns = 3, numRows = 3;
    arguments.read( "--num-columns", numColumns );
    arguments.read( "--num-rows", numRows );
    
  4. Read any model to be displayed on all the windows:

    osg::ref_ptr<osg::Node> scene = osgDB::readNodeFiles( arguments );
    if ( !scene ) scene = osgDB::readNodeFile("cessna.osg");
    
  5. Now we are going to add each created camera as a 'slave' to the viewer. A slave camera doesn't have independent view and projection matrices; it uses the main camera's matrices and adds an offset to each. The computation process will be explained later in the How it works... section:

    osgViewer::Viewer viewer;
    int tileWidth = totalWidth / numColumns;
    int tileHeight = totalHeight / numRows;
    for ( int row=0; row<numRows; ++row )
    {
        for ( int col=0; col<numColumns; ++col )
        {
            osg::Camera* camera = createSlaveCamera(
                tileWidth*col, totalHeight - tileHeight*(row+1),
                tileWidth-1, tileHeight-1 );
            osg::Matrix projOffset =
                osg::Matrix::scale(numColumns, numRows, 1.0) *
                osg::Matrix::translate(numColumns-1-2*col,
                                       numRows-1-2*row, 0.0);
            viewer.addSlave( camera, projOffset, osg::Matrix(), true );
        }
    }
    
  6. Add the scene to the viewer and start the simulation:

    viewer.setSceneData( scene );
    return viewer.run();
    
  7. Now, you will find nine windows making up a wall that displays one complete scene, as shown in the following screenshot. In practical use, the power-wall implementation can be much larger and offer really high-resolution results. However, that requires more screens, or even more computers collaborating together, which is beyond the scope of this book:

How it works...

The slave camera will read the main camera's view and projection matrices and multiply each with an offset matrix, that is:

slaveViewMatrix = mainViewMatrix * viewOffset;
slaveProjectionMatrix = mainProjectionMatrix * projectionOffset;

In order to compute the offset matrices here, we must first consider how cameras are arranged in a PowerWall. Let us have a look at the result picture again, with column and row number marked:

In order to render the scene correctly in every slave camera, we can just keep the view matrix as it is, and reset parameters of the frustum of a slave.

The projection matrix can be written as (this is the matrix produced by glFrustum):

    | 2*znear/(right-left)           0            A     0 |
    |          0            2*znear/(top-bottom)  B     0 |
    |          0                     0            C     D |
    |          0                     0           -1     0 |

with A = (right+left)/(right-left), B = (top+bottom)/(top-bottom), C = -(zfar+znear)/(zfar-znear), and D = -2*zfar*znear/(zfar-znear). Here left, right, top, and bottom specify the horizontal and vertical clipping planes of the frustum, and znear and zfar represent the near and far clipping planes. See the OpenGL documentation for details:

http://www.opengl.org/sdk/docs/man/xhtml/glFrustum.xml

In order to fit the row and column positions of a certain slave camera, we can scale the horizontal and vertical coordinates and reproduce the matrix formula with tile-specific parameters:

    | 2*znear/(right'-left')            0              A     0 |
    |           0             2*znear/(top'-bottom')   B     0 |
    |           0                       0              C     D |
    |           0                       0             -1     0 |

Here,

left' = left + j *(right - left) / numCols
right' = right - (numCols - j - 1)*(right - left) / numCols
bottom' = bottom + i *(top - bottom) / numRows
top' = top - (numRows - i - 1)*(top - bottom) / numRows
A = numCols - 2 * j - 1
B = numRows - 2 * i - 1

The variables left', right', top', and bottom' are the parameters of the slave's projection matrix; they have the same meanings as the OpenGL frustum variables, and this is exactly what the offset matrices built in the previous section produce.

There's more...

Slave cameras are allocated and managed by only one computer. This brings a new problem: how can one system afford so many displays and rendering tasks? You may prefer to use more computers, setting up a rendering cluster of PCs and workstations, each with its own monitors. For this case, have a look at the osgcluster example, and consider handling the synchronization between computers by yourself.

Using depth partition to display huge scenes

Is it possible to draw the real sun, earth, mars, and even the solar system in OpenGL and OSG? The answer is absolutely yes. But you may encounter some problems while actually working on this topic.

The most serious one is the computation of the near and far planes. As many early devices have only a 16-bit or 24-bit depth buffer, you may not be able to maintain a huge distance (on the scale of the solar system) between the near and the far plane. If you force the near plane to a small value and the far plane to a very large one, the rendering of nearby objects will suffer from the classic Z-fighting problem, because there is not enough precision to resolve the distance. An explanation of Z-fighting can be found at: http://en.wikipedia.org/wiki/Z-fighting.

The best solution is to buy a new graphics card supporting a 32-bit depth buffer. But to keep the program portable, we had better find other solutions, for instance, the depth partition algorithm in this recipe. To introduce this algorithm in one sentence: it partitions the scene into several parts and renders them separately. Every part has its own near and far values, and the distance between them is short enough to avoid the precision problem.

How to do it...

Carry out the following steps in order to complete the recipe:

  1. Include necessary headers:

    #include <osg/Texture2D>
    #include <osg/ShapeDrawable>
    #include <osg/Geode>
    #include <osg/MatrixTransform>
    #include <osgDB/ReadFile>
    #include <osgGA/TrackballManipulator>
    #include <osgViewer/Viewer>
    
  2. The earth and sun radii, as well as the astronomical unit (AU), are real-world data and should be set as constants. The distance between the earth and the sun is approximately 1 AU:

    const double radius_earth = 6378.137;
    const double radius_sun = 695990.0;
    const double AU = 149697900.0;
    
  3. The createScene() function will create the huge scene containing the sun and the earth. Compared to our previous scenes, such as a Cessna or a cow, this time it is really a vast expanse:

    osg::Node* createScene()
    {
    ...
    }
    
  4. Create the earth node and apply a texture to make it more realistic:

    osg::ref_ptr<osg::ShapeDrawable> earth_sphere = new osg::ShapeDrawable;
    earth_sphere->setShape( new osg::Sphere(osg::Vec3(), radius_earth) );

    osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
    texture->setImage(
        osgDB::readImageFile("Images/land_shallow_topo_2048.jpg") );

    osg::ref_ptr<osg::Geode> earth_node = new osg::Geode;
    earth_node->addDrawable( earth_sphere.get() );
    earth_node->getOrCreateStateSet()->setTextureAttributeAndModes(
        0, texture.get() );
    
  5. Create the sun sphere and set its color and radius. We will use a transformation node to move it far from the earth's position, that is, the point of origin:

    osg::ref_ptr<osg::ShapeDrawable> sun_sphere = new osg::ShapeDrawable;
    sun_sphere->setShape( new osg::Sphere(osg::Vec3(), radius_sun) );
    sun_sphere->setColor( osg::Vec4(1.0f, 0.0f, 0.0f, 1.0f) );

    osg::ref_ptr<osg::Geode> sun_geode = new osg::Geode;
    sun_geode->addDrawable( sun_sphere.get() );

    osg::ref_ptr<osg::MatrixTransform> sun_node = new osg::MatrixTransform;
    sun_node->setMatrix( osg::Matrix::translate(0.0, AU, 0.0) );
    sun_node->addChild( sun_geode.get() );
    
  6. Now, create the scene graph:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( earth_node.get() );
    root->addChild( sun_node.get() );
    return root.release();
    
  7. In the main entry, first we have to set the depth partition ranges: the near, middle, and far planes along the Z axis in eye coordinates. Note that there is an additional middle plane compared to the traditional OpenGL near/far mechanism, which divides the whole space into two parts:

    osg::ArgumentParser arguments(&argc,argv);
    double zNear = 1.0, zMid = 1e4, zFar = 2e8;
    arguments.read( "--depth-partition", zNear, zMid, zFar );
    
  8. The near/middle/far values are set to the osgViewer::DepthPartitionSettings object:

    osg::ref_ptr<osgViewer::DepthPartitionSettings> dps =
    new osgViewer::DepthPartitionSettings;
    // Use fixed numbers as the partition values.
    dps->_mode = osgViewer::DepthPartitionSettings::FIXED_RANGE;
    dps->_zNear = zNear;
    dps->_zMid = zMid;
    dps->_zFar = zFar;
    
  9. Apply the depth partition settings to the viewer. We also preset a home position for the camera manipulator, which helps us look at the earth at the beginning; otherwise, we would have to search for it in the wide universe ourselves:

    osgViewer::Viewer viewer;
    viewer.getCamera()->setClearColor( osg::Vec4(0.0f, 0.0f, 0.0f, 1.0f) );
    viewer.setSceneData( createScene() );
    viewer.setUpDepthPartition( dps.get() );
    viewer.setCameraManipulator( new osgGA::TrackballManipulator );
    viewer.getCameraManipulator()->setHomePosition(
        osg::Vec3d(0.0, -12.5*radius_earth, 0.0), osg::Vec3d(),
        osg::Vec3d(0.0, 0.0, 1.0) );
    return viewer.run();
    
  10. You will see the earth in front of you. If you rotate the camera slightly, you can see the sun, which is only a small red point, as shown in the following screenshot. Everything looks normal here:

  11. Now remove the line calling the viewer.setUpDepthPartition() method and rerun the recipe. What do you find this time? The earth seems to have been eaten by someone! If you zoom in, you will surprisingly find that the earth has disappeared. What happened when we stopped using the depth partition functionality?

How it works...

OSG enables automatic near/far plane computation by default. The basic idea is: calculate the Z value of each scene object in the eye coordinate frame, and take the maximum Z value, which can be approximately treated as the farthest depth value. After that, multiply the far plane value by a very small ratio (which must be less than 1.0) to get the near plane value:

zNear = zFar * nearFarRatio

And then apply the results to the camera's projection matrix.

The process can't be flipped, that is, we can't get the minimum Z value first, because there are always objects with negative Z values (behind or beside the viewer), and we can hardly decide the real near plane from these confusing values.

Now we can explain why the earth disappeared when we disabled the setUpDepthPartition() method. As the far plane is extremely distant, the calculated near plane is also too far away, and the resulting frustum doesn't contain the earth node.

There is more than one solution for this. Besides depth partitioning (which renders the scene multiple times and may, therefore, reduce efficiency), we could also provide an even smaller ratio to get a smaller near value. This is done by calling the setNearFarRatio() method:

camera->setNearFarRatio( 0.0001 );

By default, the near/far ratio is 0.0005. Be careful of the precision of your depth buffer when you alter it, because the OpenGL depth buffer usually only stores 16-bit or 24-bit values.

There's more...

You may read more about the depth buffer problem, or Z buffer problem at:

http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html.

Implementing the radar map

Traditional radar uses electromagnetic pulses to detect the positions and motions of objects. It maps the electromagnetic parameters onto a two-dimensional plane to form a radar map, which indicates all vehicles and aircraft appearing in a certain area.

In this recipe, we will simulate a very simple radar map by mapping all static and animated models in the main scene onto a HUD camera, and use colored marks to label them.

How to do it...

Carry out the following steps in order to complete this recipe:

  1. Include necessary headers.

    #include <osg/Material>
    #include <osg/ShapeDrawable>
    #include <osg/Camera>
    #include <osg/MatrixTransform>
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>
    
  2. We will define two mask constants and a macro providing random numbers for later use:

    const unsigned int MAIN_CAMERA_MASK = 0x1;
    const unsigned int RADAR_CAMERA_MASK = 0x2;
    // Use RAND_MAX+1.0f (not RAND_MAX+1, which overflows int).
    #define RAND(min, max) \
        ((min) + (float)rand()/((float)RAND_MAX+1.0f) * ((max)-(min)))
    
  3. First we can have a function for creating all kinds of scene objects. To make them visible in both the main view and the radar map, we have to do something special besides reading the model file with the osgDB::readNodeFile() method:

    osg::Node* createObject( const std::string& filename, const osg::Vec4& color )
    {
    ...
    }
    
  4. In the function, read the model from the file and set up a node mask indicating that it should be rendered by the main camera:

    float size = 5.0f;
    osg::ref_ptr<osg::Node> model_node = osgDB::readNodeFile(filename);
    if ( model_node.valid() )
        model_node->setNodeMask( MAIN_CAMERA_MASK );
    
  5. Create a marker to replace the model itself in the radar map.

    osg::ref_ptr<osg::ShapeDrawable> mark_shape = new osg::ShapeDrawable;
    mark_shape->setShape( new osg::Box(osg::Vec3(), size) );

    osg::ref_ptr<osg::Geode> mark_node = new osg::Geode;
    mark_node->addDrawable( mark_shape.get() );
    mark_node->setNodeMask( RADAR_CAMERA_MASK );
    
  6. Now add both the model and its marker to a group node and return it as a complete scene object. Apply a material here to paint the object in a specified color:

    osg::ref_ptr<osg::Group> obj_node = new osg::Group;
    obj_node->addChild( model_node.get() );
    obj_node->addChild( mark_node.get() );

    osg::ref_ptr<osg::Material> material = new osg::Material;
    material->setColorMode( osg::Material::AMBIENT );
    material->setAmbient( osg::Material::FRONT_AND_BACK,
                          osg::Vec4(0.8f, 0.8f, 0.8f, 1.0f) );
    material->setDiffuse( osg::Material::FRONT_AND_BACK, color*0.8f );
    material->setSpecular( osg::Material::FRONT_AND_BACK, color );
    material->setShininess( osg::Material::FRONT_AND_BACK, 1.0f );
    obj_node->getOrCreateStateSet()->setAttributeAndModes( material.get(),
        osg::StateAttribute::ON|osg::StateAttribute::OVERRIDE );
    return obj_node.release();
    
  7. The createStaticNode() function is convenient for creating objects at a fixed position:

    osg::MatrixTransform* createStaticNode( const osg::Vec3& center,
                                            osg::Node* child )
    {
        osg::ref_ptr<osg::MatrixTransform> trans_node = new osg::MatrixTransform;
        trans_node->setMatrix( osg::Matrix::translate(center) );
        trans_node->addChild( child );
        return trans_node.release();
    }
    
  8. The createAnimateNode() function is handy for creating objects moving on a specified path:

    osg::MatrixTransform* createAnimateNode( const osg::Vec3& center,
        float radius, float time, osg::Node* child )
    {
        osg::ref_ptr<osg::MatrixTransform> anim_node = new osg::MatrixTransform;
        anim_node->addUpdateCallback(
            osgCookBook::createAnimationPathCallback(radius, time) );
        anim_node->addChild( child );

        osg::ref_ptr<osg::MatrixTransform> trans_node = new osg::MatrixTransform;
        trans_node->setMatrix( osg::Matrix::translate(center) );
        trans_node->addChild( anim_node.get() );
        return trans_node.release();
    }
    
  9. In the main entry, we will first load and create some composite objects, each including the original model and a marker box:

    osg::Node* obj1 = createObject( "dumptruck.osg",
        osg::Vec4(1.0f, 0.2f, 0.2f, 1.0f) );
    osg::Node* obj2 = createObject( "dumptruck.osg.0,0,180.rot",
        osg::Vec4(0.2f, 0.2f, 1.0f, 1.0f) );
    osg::Node* air_obj2 = createObject( "cessna.osg.0,0,90.rot",
        osg::Vec4(0.2f, 0.2f, 1.0f, 1.0f) );
    
  10. Now we randomly place some static and animated objects within the XY range from (-100, -100) to (100, 100). This can be treated as the region shown in the radar map:

    osg::ref_ptr<osg::Group> scene = new osg::Group;
    for ( unsigned int i=0; i<10; ++i )
    {
        osg::Vec3 center1( RAND(-100, 100), RAND(-100, 100), 0.0f );
        scene->addChild( createStaticNode(center1, obj1) );
        osg::Vec3 center2( RAND(-100, 100), RAND(-100, 100), 0.0f );
        scene->addChild( createStaticNode(center2, obj2) );
    }
    for ( unsigned int i=0; i<5; ++i )
    {
        osg::Vec3 center( RAND(-50, 50), RAND(-50, 50), RAND(10, 100) );
        scene->addChild( createAnimateNode(center, RAND(10.0, 50.0),
                                           5.0f, air_obj2) );
    }
    
  11. Create a HUD camera representing the radar map screen:

    osg::ref_ptr<osg::Camera> radar = new osg::Camera;
    radar->setClearColor( osg::Vec4(0.0f, 0.2f, 0.0f, 1.0f) );
    radar->setRenderOrder( osg::Camera::POST_RENDER );
    radar->setAllowEventFocus( false );
    radar->setClearMask( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    radar->setReferenceFrame( osg::Transform::ABSOLUTE_RF );
    radar->setViewport( 0.0, 0.0, 200.0, 200.0 );
    
  12. Set up the view matrix, projection matrix, and culling mask of the HUD camera. The mask set here is the same as the one we set for the marker node (so the AND operation between them will result in true). That means only markers can be seen in the radar camera. The view and projection matrices here are used to describe the view direction and range of the radar map:

    radar->setViewMatrix( osg::Matrixd::lookAt(
        osg::Vec3(0.0f, 0.0f, 120.0f), osg::Vec3(), osg::Y_AXIS) );
    radar->setProjectionMatrix(
        osg::Matrixd::ortho2D(-120.0, 120.0, -120.0, 120.0) );
    radar->setCullMask( RADAR_CAMERA_MASK );
    radar->addChild( scene.get() );
    
  13. Add the radar and the main scene to the root node. Note that scene has already been added as a child of the radar camera, so it will always be traversed twice after being added as root's child here:

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( radar.get() );
    root->addChild( scene.get() );
    
  14. The MAIN_CAMERA_MASK constant set on the main camera makes it render only the model nodes created before. The lighting mode set here improves the light computation on model surfaces in this recipe:

    osgViewer::Viewer viewer;
    viewer.getCamera()->setCullMask( MAIN_CAMERA_MASK );
    viewer.setSceneData( root.get() );
    viewer.setLightingMode( osg::View::SKY_LIGHT );
    return viewer.run();
    
  15. Start the program and you will see a number of trucks and Cessnas in the main view, and only markers in the radar map view, as shown in the following screenshot. That is exactly what we wanted at the beginning:

How it works...

The cull mask determines which nodes can be rendered by a camera: a node is drawn if the AND operation between its node mask and the camera's cull mask results in a non-zero value, and culled if the result is zero. This check is performed before other scene culling operations such as view-frustum culling and small feature culling.

There are two extra methods, setCullMaskLeft() and setCullMaskRight(), besides the setCullMask() method. They are mainly used in stereo-display situations, and determine which nodes will be shown to the left eye and which to the right eye.

The setLightingMode() method controls the global lighting mechanism in OSG. By default the enumeration value is HEADLIGHT, which means the light is positioned near the eye and shines along the line of sight. You may change it to NO_LIGHT (no global light) or SKY_LIGHT (the light is fixed at a position in the world) if required. You may also use the setLight() method of the viewer to specify a user light object for global illumination.

Showing the top, front, and side views of a model

Open one of your favorite 3D scene editors, such as 3DS Max, Maya, Blender 3D, and so on. Most of them default to four views: the top view, side view, front view, and a fourth perspective view. You may change the first three to display the scene from the top, left, right, front, back, or bottom side, which makes it very convenient to examine and alter 3D models.

How to do it...

Carry out the following steps in order to complete this recipe:

  1. Include necessary headers:

    #include <osg/Camera>
    #include <osgDB/ReadFile>
    #include <osgGA/TrackballManipulator>
    #include <osgViewer/CompositeViewer>
    
  2. The create2DView() function will be used to create the top, front, and side views of the scene. Set up the view to look in a specific direction while keeping it focused on the scene node, and also configure the perspective projection matrix according to the window size:

    osgViewer::View* create2DView( int x, int y, int width, int height,
                                   const osg::Vec3& lookDir,
                                   const osg::Vec3& up,
                                   osg::GraphicsContext* gc,
                                   osg::Node* scene )
    {
        osg::ref_ptr<osgViewer::View> view = new osgViewer::View;
        view->getCamera()->setGraphicsContext( gc );
        view->getCamera()->setViewport( x, y, width, height );
        view->setSceneData( scene );

        osg::Vec3 center = scene->getBound().center();
        double radius = scene->getBound().radius();
        view->getCamera()->setViewMatrixAsLookAt(
            center - lookDir*(radius*3.0), center, up );
        view->getCamera()->setProjectionMatrixAsPerspective( 30.0f,
            static_cast<double>(width)/static_cast<double>(height),
            1.0f, 10000.0f );
        return view.release();
    }
    
  3. In the main entry, we will always read a renderable scene first. This time, it is a Cessna:

    osg::ArgumentParser arguments( &argc, argv );
    osg::ref_ptr<osg::Node> scene = osgDB::readNodeFiles( arguments );
    if ( !scene ) scene = osgDB::readNodeFile("cessna.osg");
    
  4. Read the screen size and create a new graphics context according to its traits. The same work is done in the first example of this chapter, so you should already be familiar with this process.

    unsigned int width = 800, height = 600;
    osg::GraphicsContext::WindowingSystemInterface* wsi =
        osg::GraphicsContext::getWindowingSystemInterface();
    if ( wsi )
    {
        wsi->getScreenResolution(
            osg::GraphicsContext::ScreenIdentifier(0), width, height );
    }

    osg::ref_ptr<osg::GraphicsContext::Traits> traits =
        new osg::GraphicsContext::Traits;
    traits->x = 0;
    traits->y = 0;
    traits->width = width;
    traits->height = height;
    traits->windowDecoration = false;
    traits->doubleBuffer = true;
    traits->sharedContext = 0;

    osg::ref_ptr<osg::GraphicsContext> gc =
        osg::GraphicsContext::createGraphicsContext( traits.get() );
    if ( !gc || !scene ) return 1;
    
  5. Now, we are going to make the four views share the same graphics context, that is, the same window. Three of them are 2D views (top, left, and front), and they are placed at different X and Y coordinates displaying the same scene:

    int w_2 = width/2, h_2 = height/2;
    osg::ref_ptr<osgViewer::View> top = create2DView(
    0, h_2, w_2, h_2,-osg::Z_AXIS, osg::Y_AXIS, gc.get(), scene.get());
    osg::ref_ptr<osgViewer::View> front = create2DView(
    w_2, h_2, w_2, h_2, osg::Y_AXIS, osg::Z_AXIS, gc.get(), scene.get());
    osg::ref_ptr<osgViewer::View> left = create2DView(
    0, 0, w_2, h_2, osg::X_AXIS, osg::Z_AXIS, gc.get(), scene.get());
    
  6. The main view does not use a fixed view matrix; instead, it applies a default manipulator so that users can navigate the camera freely:

    osg::ref_ptr<osgViewer::View> mainView = new osgViewer::View;
    mainView->getCamera()->setGraphicsContext( gc.get() );
    mainView->getCamera()->setViewport( w_2, 0, w_2, h_2 );
    mainView->setSceneData( scene.get() );
    mainView->setCameraManipulator( new osgGA::TrackballManipulator );
    
  7. Add the four views to the composite viewer:

    osgViewer::CompositeViewer viewer;
    viewer.addView( top.get() );
    viewer.addView( front.get() );
    viewer.addView( left.get() );
    viewer.addView( mainView.get() );
    
  8. Start the simulation. Here, we don't use the viewer.run() method, but instead write a simple loop that calls the frame() method continuously. That's because the run() method automatically applies a trackball manipulator to every view that doesn't already have a valid one. As we don't want the 2D views to be controlled improperly by manipulators, we have to avoid the run() method in this case:

    while ( !viewer.done() )
    {
    viewer.frame();
    }
    return 0;
    
  9. Ok, now we will have a four-view interface similar to that of some famous 3D software packages. The main view can still be rotated and scaled by dragging the mouse, but the 2D views are fixed and can't be moved, which is certainly inconvenient. We will continue to work on this issue in the next recipe.

How it works...

In the create2DView() function, we need the lookDir and up vectors to define the view matrix, and the gc variable to specify the graphics context attached to the camera.

There are two ways to set up the projection matrix when creating 2D views: perspective projection or orthographic projection. To represent a true 2D view of the world, orthographic projection is preferred. However, in this and the next recipe, we will use perspective projection matrices to make user interactions easier. It's up to you to change the code listed here to work as you wish.
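If you prefer the orthographic route, the bounds for the projection can be derived from the scene's bounding sphere. The following is a minimal, OSG-free sketch of that sizing math; the OrthoBounds struct and computeOrthoBounds() are hypothetical helpers introduced here for illustration, not part of OSG:

```cpp
#include <cassert>

// Symmetric orthographic bounds that enclose a scene of the given
// bounding-sphere radius while preserving the viewport aspect ratio.
struct OrthoBounds { double left, right, bottom, top; };

OrthoBounds computeOrthoBounds( double radius, int width, int height )
{
    double aspect = static_cast<double>(width) / static_cast<double>(height);
    double halfH = radius;           // fit the sphere vertically
    double halfW = radius * aspect;  // widen horizontally to keep the aspect
    return { -halfW, halfW, -halfH, halfH };
}
```

With OSG available, the result would feed a call such as view->getCamera()->setProjectionMatrixAsOrtho( b.left, b.right, b.bottom, b.top, 1.0, radius * 10.0 ) in place of the perspective setup in create2DView().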

There's more...

Now, it's time for us to summarize the relationships of views, cameras, and graphics contexts.

An OSG scene viewer can have only one view (osgViewer::Viewer) or multiple views (osgViewer::CompositeViewer). Every view has its own scene graph, camera manipulator, and event handlers. A view has one main camera, which is controlled by the manipulator, and may have slave cameras with customized offsets from the main one (see the second recipe in this chapter). Cameras can also be added as ordinary nodes in the scene graph (see the fourth recipe). Such a camera will multiply or reset the current view and projection matrices, and use the current graphics context or a different one to render its sub-graph.

The camera must be attached to a graphics context to create and enable the OpenGL rendering window. If different screen numbers are specified while creating the contexts, we can even make the cameras work in a multi-monitor environment (see the first recipe). Multiple cameras can share one graphics context and use setViewport() and setRenderOrder() to set their rendering ranges and orders.

Thus, to design an application with multiple windows and multiple scenes, we can either make use of the composite viewer or add more cameras in a single-viewer framework. The former provides event handlers and manipulators for each view to update user data, but the latter can implement similar functionality with node callbacks. In the end, it's up to you which view/camera framework you choose to create multi-window applications.
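As a small illustration of several cameras sharing one graphics context, the viewport arithmetic from step 5 can be factored out as follows. Rect and viewportForCell() are hypothetical stand-ins for the x, y, width, and height values handed to setViewport(); only the grid math is shown:

```cpp
#include <cassert>

// A viewport rectangle, as passed to camera->setViewport() in OSG.
struct Rect { int x, y, width, height; };

// Return the viewport for cell (col, row) of a 2x2 split of a window.
// Row 0 is the bottom row, matching OpenGL's lower-left origin.
Rect viewportForCell( int col, int row, int winWidth, int winHeight )
{
    int w2 = winWidth / 2, h2 = winHeight / 2;
    return { col * w2, row * h2, w2, h2 };
}
```

For example, cell (0, 1) yields the top-left quarter, which is where the recipe places the top view, and cell (1, 0) the bottom-right quarter used by the main view.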

Manipulating the top, front, and side views

Let us continue working on the last recipe. This time, we are going to handle some interactive events on the 2D views. To achieve this, we can simply add an event handler to each view and customize the behaviors while the user drags the mouse in the viewport. Code segments that are completely unchanged will not be listed here again.

How to do it...

Carry out the following steps in order to complete this recipe:

  1. The most important addition to the previous recipe is the AuxiliaryViewUpdater class. It allows users to pan the 2D view with the left mouse button and zoom in/out with the right button. The _distance variable defines the zoom factor; _offsetX and _offsetY record the offset while panning; and _lastDragX and _lastDragY record the mouse coordinates during a dragging action (otherwise they are set to -1).

    class AuxiliaryViewUpdater : public osgGA::GUIEventHandler
    {
    public:
    AuxiliaryViewUpdater()
    : _distance(-1.0), _offsetX(0.0f), _offsetY(0.0f),
    _lastDragX(-1.0f), _lastDragY(-1.0f)
    {}
    virtual bool handle( const osgGA::GUIEventAdapter& ea,
    osgGA::GUIActionAdapter& aa );
    protected:
    double _distance;
    float _offsetX, _offsetY;
    float _lastDragX, _lastDragY;
    };
    
  2. In the handle() method, we will obtain the view object and decide operations according to the event type:

    osgViewer::View* view = static_cast<osgViewer::View*>(&aa);
    if ( view )
    {
    switch ( ea.getEventType() )
    {
    ...
    }
    }
    return false;
    
  3. In the mouse dragging event, we change the panning offset or the zoom distance by computing the delta between the current and last mouse positions. In the mouse push event, we reset the dragging values for the next use:

    case osgGA::GUIEventAdapter::PUSH:
    _lastDragX = -1.0f;
    _lastDragY = -1.0f;
    break;
    case osgGA::GUIEventAdapter::DRAG:
    if ( _lastDragX>0.0f && _lastDragY>0.0f )
    {
    if ( ea.getButtonMask()==osgGA::GUIEventAdapter::LEFT_MOUSE_BUTTON )
    {
    _offsetX += ea.getX() - _lastDragX;
    _offsetY += ea.getY() - _lastDragY;
    }
    else if ( ea.getButtonMask()==
    osgGA::GUIEventAdapter::RIGHT_MOUSE_BUTTON )
    {
    float dy = ea.getY() - _lastDragY;
    _distance *= 1.0 + dy / ea.getWindowHeight();
    if ( _distance<1.0 ) _distance = 1.0;
    }
    }
    _lastDragX = ea.getX();
    _lastDragY = ea.getY();
    break;
    
  4. In the frame event, which occurs every frame, we apply the member variables to the view camera. The algorithm will be explained in the How it works... section of this recipe.

    case osgGA::GUIEventAdapter::FRAME:
    if ( view->getCamera() )
    {
    osg::Vec3d eye, center, up;
    view->getCamera()->getViewMatrixAsLookAt( eye, center,
    up );
    osg::Vec3d lookDir = center - eye; lookDir.normalize();
    osg::Vec3d side = lookDir ^ up; side.normalize();
    const osg::BoundingSphere& bs = view->getSceneData()->
    getBound();
    if ( _distance<0.0 ) _distance = bs.radius() * 3.0;
    center = bs.center();
    center -= (side * _offsetX + up * _offsetY) * 0.1;
    view->getCamera()->setViewMatrixAsLookAt( center-
    lookDir*_distance, center, up );
    }
    break;
    
  5. The create2DView() function does not have to be changed compared to the last example. Just leave it as it was earlier.

  6. The only change in the main function is to allocate and add the new AuxiliaryViewUpdater class to each 2D view. This must be done before the simulation loop begins:

    top->addEventHandler( new AuxiliaryViewUpdater );
    front->addEventHandler( new AuxiliaryViewUpdater );
    left->addEventHandler( new AuxiliaryViewUpdater );
    
  7. Now, try to drag in the 2D views with your left or right mouse button and see what happens, as shown in the following screenshot. It is now possible to adjust and observe the scene in all-around views, as if you were working with other 3D software such as Autodesk 3ds Max or Maya.

How it works...

The computation of the new view matrix in the AuxiliaryViewUpdater class is not hard to follow. First, we get the current look-at parameters with the getViewMatrixAsLookAt() method. From these we can easily obtain the look direction vector (center - eye) and the side vector (the cross product of the look direction and the up vector). The side vector is actually the X axis in eye coordinates, and the up vector is the Y axis.

Panning the scene in a 2D view means changing the X and Y values of the scene in the viewer's eye coordinates. Thus, we use _offsetX and _offsetY to move the view center along the side and up directions, and _distance to control the distance between the eye and the view center. That is what we have done in the handle() method's implementation.
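The panning math above can be checked in isolation. The sketch below re-implements it with a minimal Vec3 stand-in for osg::Vec3d (cross() mirrors the ^ operator used in the handler); panCenter() is a hypothetical helper reproducing the center update from the FRAME case:

```cpp
#include <cassert>

// Minimal 3D vector stand-in for osg::Vec3d.
struct Vec3 {
    double x, y, z;
    Vec3 operator-( const Vec3& v ) const { return { x-v.x, y-v.y, z-v.z }; }
    Vec3 operator+( const Vec3& v ) const { return { x+v.x, y+v.y, z+v.z }; }
    Vec3 operator*( double s ) const { return { x*s, y*s, z*s }; }
};

// Cross product, equivalent to osg::Vec3d::operator^.
Vec3 cross( const Vec3& a, const Vec3& b )
{
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// New view center after panning by (offsetX, offsetY) in eye coordinates,
// matching "center -= (side * _offsetX + up * _offsetY) * 0.1" in the handler.
Vec3 panCenter( const Vec3& center, const Vec3& lookDir, const Vec3& up,
                double offsetX, double offsetY )
{
    Vec3 side = cross( lookDir, up );  // eye-space X axis
    return center - ( side * offsetX + up * offsetY ) * 0.1;
}
```

Looking along -Y with up = +Z, for instance, the side vector comes out as -X, so a positive horizontal drag shifts the center along +X, exactly the behavior you observe in the front view.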

Following a moving model

It is common in games and simulation programs to orbit around a moving vehicle while keeping the focus on the vehicle's center or a specific point. The view center never moves away from the tracking point, whether the vehicle is animating or not; apart from that, rotation and zoom operations remain available. The result is that the viewer's camera follows the vehicle and can even work from a first-person perspective.

How to do it...

Carry out the following steps in order to complete this recipe:

  1. Include necessary headers:

    #include <osg/Camera>
    #include <osg/MatrixTransform>
    #include <osgDB/ReadFile>
    #include <osgGA/OrbitManipulator>
    #include <osgViewer/Viewer>
    
  2. In this recipe, we implement a node-following functionality in the FollowUpdater class, which is derived from osgGA::GUIEventHandler. It requires the target node pointer to be passed to the constructor:

    class FollowUpdater : public osgGA::GUIEventHandler
    {
    public:
    FollowUpdater( osg::Node* node ) : _target(node) {}
    virtual bool handle( const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter& aa );
    osg::Matrix computeTargetToWorldMatrix( osg::Node* node ) const;
    protected:
    osg::observer_ptr<osg::Node> _target;
    };
    
  3. In the handle() function, when encountering the FRAME event, which occurs every frame, we will try to retrieve the osgGA::OrbitManipulator object (which is the superclass of the default trackball manipulator) and place its center at the central point of the target node in world coordinates:

    osgViewer::View* view = static_cast<osgViewer::View*>(&aa);
    if ( !view || !_target ||
    ea.getEventType()!=osgGA::GUIEventAdapter::FRAME ) return false;
    osgGA::OrbitManipulator* orbit =
    dynamic_cast<osgGA::OrbitManipulator*>( view->
    getCameraManipulator() );
    if ( orbit )
    {
    osg::Matrix matrix = computeTargetToWorldMatrix(
    _target.get() );
    osg::Vec3d targetCenter = _target->getBound().center() *
    matrix;
    orbit->setCenter( targetCenter );
    }
    return false;
    
  4. You should already be familiar with the method of computing the local-to-world matrix in computeTargetToWorldMatrix(), which was introduced in Chapter 2, Designing the Scene Graph:

    osg::Matrix computeTargetToWorldMatrix( osg::Node* node ) const
    {
    osg::Matrix l2w;
    if ( node && node->getNumParents()>0 )
    {
    osg::Group* parent = node->getParent(0);
    l2w = osg::computeLocalToWorld( parent->
    getParentalNodePaths()[0] );
    }
    return l2w;
    }
    
  5. In the main entry, we load a moving Cessna and a terrain to form the scene. The Cessna, which circles in the sky and is worth following, will be specified as the target node of the following updater class:

    osg::Node* model =
    osgDB::readNodeFile("cessna.osg.0,0,90.rot");
    if ( !model ) return 1;
    osg::ref_ptr<osg::MatrixTransform> trans = new
    osg::MatrixTransform;
    trans->addUpdateCallback( osgCookBook::createAnimationPathCallback(100.0f, 20.0) );
    trans->addChild( model );
    osg::ref_ptr<osg::MatrixTransform> terrain = new
    osg::MatrixTransform;
    terrain->addChild( osgDB::readNodeFile("lz.osg") );
    terrain->setMatrix( osg::Matrix::translate(0.0f, 0.0f,-200.0f) );
    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( trans.get() );
    root->addChild( terrain.get() );
    
  6. Add the FollowUpdater instance to the viewer and start the simulation:

    osgViewer::Viewer viewer;
    viewer.addEventHandler( new FollowUpdater(model) );
    viewer.setSceneData( root.get() );
    return viewer.run();
    
  7. Now, you will find that the main camera is always focused on the target Cessna model, no matter where it is and how it is oriented. You can still rotate around the model with the left mouse button, and zoom in/out with the mouse wheel or right button. However, the middle button, which pans the camera by default, doesn't work anymore, which means the Cessna model will always be rendered at the center of the screen. This is what we call a node follower in this recipe:

How it works...

Here, we use the event handler to get the node's world center and set up the main camera. It can certainly be replaced by node callbacks: apply an update callback to the tracked node and pass the main camera object as a parameter of the callback, and you can implement the same functionality in the same way. Another solution is to use a camera manipulator, which is preferable if you need mouse and keyboard interactions with the main camera. We will introduce it in the next recipe.

Using manipulators to follow models

Does OSG already provide some functionality that implements node tracking features? The answer is yes. The osgGA::NodeTrackerManipulator class enables us to follow a static or moving node, which will be demonstrated in this recipe. OSG also provides the osgGA::FirstPersonManipulator class for walking, driving, and flight manipulations. You may explore them yourself after reading this chapter.

How to do it...

Let us start.

  1. Include necessary headers.

    #include <osg/Camera>
    #include <osg/MatrixTransform>
    #include <osgDB/ReadFile>
    #include <osgGA/KeySwitchMatrixManipulator>
    #include <osgGA/TrackballManipulator>
    #include <osgGA/NodeTrackerManipulator>
    #include <osgViewer/Viewer>
    
  2. Load the animating Cessna and the sample terrain. This is exactly the same as the previous example:

    osg::Node* model =
    osgDB::readNodeFile("cessna.osg.0,0,90.rot");
    if ( !model ) return 1;
    osg::ref_ptr<osg::MatrixTransform> trans = new
    osg::MatrixTransform;
    trans->addUpdateCallback(
    osgCookBook::createAnimationPathCallback(100.0f, 20.0) );
    trans->addChild( model );
    osg::ref_ptr<osg::MatrixTransform> terrain = new
    osg::MatrixTransform;
    terrain->addChild( osgDB::readNodeFile("lz.osg") );
    terrain->setMatrix( osg::Matrix::translate(0.0f, 0.0f,-200.0f)
    );
    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( trans.get() );
    root->addChild( terrain.get() );
    
  3. Create the node tracker manipulator and set the Cessna as the target. The home position is set here to make sure that the camera is initially not too far from the Cessna node:

    osg::ref_ptr<osgGA::NodeTrackerManipulator> nodeTracker = new
    osgGA::NodeTrackerManipulator;
    nodeTracker->setHomePosition( osg::Vec3(0, -10.0, 0),
    osg::Vec3(), osg::Z_AXIS );
    nodeTracker->setTrackerMode( osgGA::NodeTrackerManipulator::NODE_CENTER_AND_ROTATION );
    nodeTracker->setRotationMode(
    osgGA::NodeTrackerManipulator::TRACKBALL );
    nodeTracker->setTrackNode( model );
    
  4. We also create a key-switch manipulator to switch between the classic trackball manipulator and the tracker. The first parameter of the addMatrixManipulator() method indicates the key that can be pressed to change to the corresponding sub-manipulator:

    osg::ref_ptr<osgGA::KeySwitchMatrixManipulator> keySwitch = new
    osgGA::KeySwitchMatrixManipulator;
    keySwitch->addMatrixManipulator( '1', "Trackball", new
    osgGA::TrackballManipulator );
    keySwitch->addMatrixManipulator( '2', "NodeTracker",
    nodeTracker.get() );
    
  5. It's enough to start the viewer now:

    osgViewer::Viewer viewer;
    viewer.setCameraManipulator( keySwitch.get() );
    viewer.setSceneData( root.get() );
    return viewer.run();
    
  6. You may view the scene normally at first, then press the number 2 key to change to node-tracking mode. It fixes the camera just behind the moving Cessna and prevents you from panning the camera. Press 1 to change back to trackball mode at any time.

How it works...

OSG has a complete camera manipulator interface, defined by the osgGA::CameraManipulator abstract class. A manipulator works only on the main camera of a view, altering its view matrix every frame to implement camera animations and navigation by mouse and keyboard. As we are going to adjust the main camera to track a moving node, a manipulator class is a good choice. We can also change the tracking target and parameters on the fly by storing and retrieving the manipulator pointer.

You will find it easier to use the in-built node tracker manipulator to follow nodes here. You can also use the setHomePosition() method to give the manipulator a suitable start position. However, if you need a picture-in-picture effect or want to track the node in an auxiliary camera, callbacks are more suitable because they are more flexible and require fewer methods to override.

In fact, camera manipulators are also derived from the osgGA::GUIEventHandler class. However, they are treated separately from common event handlers, and have an extra interface for scene navigation purposes.

There's more...

In order to help you create your own manipulators in the future, we are going to introduce more about camera manipulators, as well as provide a simple manipulator example later in the chapter.

Designing a 2D camera manipulator

In the last few recipes, we have introduced how to control the main camera with event handlers and preset camera manipulators. The manipulator defines an interface, as well as some default functionalities, to control the main cameras of OSG views in response to user events.

The osgGA::CameraManipulator class is an abstract base class for all manipulators. Its subclasses include osgGA::TrackballManipulator, osgGA::NodeTrackerManipulator, osgGA::KeySwitchMatrixManipulator, and so on. In this recipe, it is time to create a manipulator of our own. To make things easier, we will aim at designing a two-dimensional manipulator, which can only view, pan, and scale (but not rotate) the scene as if it were projected onto the XOY plane.

How to do it...

Let us start.

  1. Include necessary headers.

    #include <osgDB/ReadFile>
    #include <osgGA/KeySwitchMatrixManipulator>
    #include <osgGA/TrackballManipulator>
    #include <osgViewer/Viewer>
    
  2. The osgGA::StandardManipulator class is a good starting point for designing our own manipulators. It handles user events such as mouse clicks and key presses, and dispatches the event content to different virtual methods according to the event type. There are also virtual methods called during the traversal for delivering data. Therefore, the most important work in creating a new manipulator is to derive from this class and override the necessary methods.

    class TwoDimManipulator : public osgGA::StandardManipulator
    {
    public:
    TwoDimManipulator() : _distance(1.0) {}
    virtual osg::Matrixd getMatrix() const;
    virtual osg::Matrixd getInverseMatrix() const;
    virtual void setByMatrix( const osg::Matrixd& matrix );
    virtual void setByInverseMatrix( const osg::Matrixd& matrix );
    // Leave empty as we don't need these here. They are used by other functions and classes to set up the manipulator directly.
    virtual void setTransformation( const osg::Vec3d&, const osg::Quat& ) {}
    virtual void setTransformation( const osg::Vec3d&, const osg::Vec3d&, const osg::Vec3d& ) {}
    virtual void getTransformation( osg::Vec3d&, osg::Quat& ) const {}
    virtual void getTransformation( osg::Vec3d&, osg::Vec3d&, osg::Vec3d& ) const {}
    virtual void home( double );
    virtual void home( const osgGA::GUIEventAdapter& ea, osgGA::GUIActionAdapter& us );
    protected:
    virtual ~TwoDimManipulator() {}
    virtual bool performMovementLeftMouseButton(
    const double eventTimeDelta, const double dx, const double dy );
    virtual bool performMovementRightMouseButton(
    const double eventTimeDelta, const double dx, const double dy );
    osg::Vec3 _center;
    double _distance;
    };
    
  3. The getMatrix() method returns the current position and attitude matrix of the manipulator. The getInverseMatrix() method returns the inverse of that matrix, which is typically treated as the view matrix of the camera. These two methods are the most important ones when implementing a custom manipulator, as they are the only interfaces through which the system retrieves and applies the view matrix. We will explain their implementations in the How it works... section.

    osg::Matrixd TwoDimManipulator::getMatrix() const
    {
    osg::Matrixd matrix;
    matrix.makeTranslate( 0.0f, 0.0f, _distance );
    matrix.postMultTranslate( _center );
    return matrix;
    }
    osg::Matrixd TwoDimManipulator::getInverseMatrix() const
    {
    osg::Matrixd matrix;
    matrix.makeTranslate( 0.0f, 0.0f,-_distance );
    matrix.preMultTranslate( -_center );
    return matrix;
    }
    
  4. The setByMatrix() and setByInverseMatrix() methods can be called from user-level code to set up the position matrix of the manipulator, or to set it with the inverse (view) matrix. The osgGA::KeySwitchMatrixManipulator object also makes use of them when switching between two manipulators. In our implementations, the _node variable, a member of the osgGA::StandardManipulator class, is used to compute the distance from the eye to the view center. It is set internally to point to the scene graph root.

    void TwoDimManipulator::setByMatrix( const osg::Matrixd& matrix )
    {
    setByInverseMatrix( osg::Matrixd::inverse(matrix) );
    }
    void TwoDimManipulator::setByInverseMatrix( const osg::Matrixd& matrix )
    {
    osg::Vec3d eye, center, up;
    matrix.getLookAt( eye, center, up );
    _center = center; _center.z() = 0.0f;
    if ( _node.valid() )
    _distance = fabs((_node->getBound().center() - eye).z());
    else
    _distance = (eye - center).length();
    }
    
  5. The home() methods are used to move the camera to its default (home) position. In most cases, the home position is computed automatically so that all scene objects fit in the view frustum with the Z axis upwards. If you need to change this behavior, use the setHomePosition() method to specify the default eye, center, and up vectors. However, in this recipe, the default home position is in fact ignored, because we compute suitable values directly in the home() method ourselves.

    void TwoDimManipulator::home( double )
    {
    if ( _node.valid() )
    {
    _center = _node->getBound().center();
    _center.z() = 0.0f;
    _distance = 2.5 * _node->getBound().radius();
    }
    else
    {
    _center.set( osg::Vec3() );
    _distance = 1.0;
    }
    }
    void TwoDimManipulator::home( const osgGA::GUIEventAdapter& ea,
    osgGA::GUIActionAdapter& us )
    { home( ea.getTime() ); }
    
  6. The performMovementLeftMouseButton() method will be invoked when the user is dragging the mouse with left button down. Pan the camera at this time!

    bool TwoDimManipulator::performMovementLeftMouseButton(
    const double eventTimeDelta, const double dx, const double
    dy )
    {
    _center.x() -= 100.0f * dx;
    _center.y() -= 100.0f * dy;
    return false;
    }
    
  7. The performMovementRightMouseButton() method will be invoked when the user is dragging with right button down. Perform zoom in/out actions now!

    bool TwoDimManipulator::performMovementRightMouseButton(
    const double eventTimeDelta, const double dx, const double
    dy )
    {
    _distance *= (1.0 + dy);
    if ( _distance<1.0 ) _distance = 1.0;
    return false;
    }
    
  8. In the main entry, we will read a sample terrain for testing the new 2D manipulator.

    osg::ref_ptr<osg::Group> root = new osg::Group;
    root->addChild( osgDB::readNodeFile("lz.osg") );
    
  9. Use a key-switch manipulator to switch between the default trackball manipulator and our new one. Here, we also set a home position so the trackball manipulator starts at a good place.

    osg::ref_ptr<osgGA::KeySwitchMatrixManipulator> keySwitch = new
    osgGA::KeySwitchMatrixManipulator;
    keySwitch->addMatrixManipulator( '1', "Trackball", new
    osgGA::TrackballManipulator );
    keySwitch->addMatrixManipulator( '2', "TwoDim", new
    TwoDimManipulator );
    const osg::BoundingSphere& bs = root->getBound();
    keySwitch->setHomePosition( bs.center()+osg::Vec3(0.0f, 0.0f,
    bs.radius()), bs.center(), osg::Y_AXIS );
    
  10. Finally, start the viewer.

    osgViewer::Viewer viewer;
    viewer.setCameraManipulator( keySwitch.get() );
    viewer.setSceneData( root.get() );
    return viewer.run();
    
  11. When the application starts, you will find the camera staying on top of the terrain. Press 2 to change to our customized manipulator, and drag the mouse with the left or right button down to browse the scene in 2D mode. Press 1 at any time to switch back to trackball mode.

How it works...

As we already know, OpenGL places the eye at the origin with the view direction along the negative Z axis by default. So if we want to emulate a 2D camera controller, we can simply move the position of the eye but keep the view direction unchanged. That is what we see in the getMatrix() method: the _center variable decides the X and Y coordinates of the eye, and _distance decides the Z value (using two variables makes it a little easier to handle changes in the setByMatrix() method).
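Since the manipulator never rotates, both matrices reduce to pure translations, and their inverse relationship can be verified with plain arithmetic. The following self-contained sketch mirrors the effect of getMatrix() and getInverseMatrix(); eyePosition() and toEyeSpace() are hypothetical helpers written for this check, and Vec3 stands in for osg::Vec3d:

```cpp
#include <cassert>

// Minimal 3D point stand-in for osg::Vec3d.
struct Vec3 { double x, y, z; };

// Eye position produced by getMatrix(): translate by _distance along Z,
// then by _center (makeTranslate + postMultTranslate).
Vec3 eyePosition( const Vec3& center, double distance )
{
    return { center.x, center.y, center.z + distance };
}

// Applying getInverseMatrix() to a world point: subtract _center first,
// then the Z offset (preMultTranslate(-_center) + makeTranslate(0,0,-d)).
// The eye itself must map back to the origin of eye space.
Vec3 toEyeSpace( const Vec3& p, const Vec3& center, double distance )
{
    return { p.x - center.x, p.y - center.y, p.z - center.z - distance };
}
```

Feeding the eye position through the inverse mapping and getting the origin back is exactly the property the viewer relies on when it uses getInverseMatrix() as the camera's view matrix.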

Manipulating the view with joysticks

In the last example of this chapter, we would like to add joystick support to the previous 2D manipulator, that is, to use a joystick to pan and scale the scene in view. OSG doesn't provide native joystick functionality, so it is time to rely on some external libraries. This time we will choose the famous DirectInput library, which is part of DirectX 8 and later versions. With some modifications to the last example's code, we can quickly add support for DirectInput devices and make use of them in camera manipulators, callbacks, and event handlers.

Note

Note that this recipe can only work under Windows systems.

A picture of the joystick used here is shown in the following screenshot:

Getting ready

If you don't have the DirectX SDK installed, find and download it from the Microsoft website:

http://msdn.microsoft.com/en-us/directx/

After that, configure CMake to find the library file dinput8.lib and headers for adding DirectInput support.

FIND_PATH(DIRECTINPUT_INCLUDE_DIR dinput.h)
FIND_LIBRARY(DIRECTINPUT_LIBRARY dinput7.lib dinput8.lib)
FIND_LIBRARY(DIRECTINPUT_GUID_LIBRARY dxguid.lib)
SET(EXTERNAL_INCLUDE_DIR "${DIRECTINPUT_INCLUDE_DIR}")
TARGET_LINK_LIBRARIES(${EXAMPLE_NAME}
${DIRECTINPUT_LIBRARY}
${DIRECTINPUT_GUID_LIBRARY}
)

How to do it...

  1. We need some more headers and definitions before the TwoDimManipulator class declaration.

    #define DIRECTINPUT_VERSION 0x0800
    #include <windows.h>
    #include <dinput.h>
    #include <osgViewer/api/Win32/GraphicsWindowWin32>
    
  2. Add two new virtual methods in the class.

    class TwoDimManipulator : public osgGA::StandardManipulator
    {
    public:
    ...
    virtual void init( const osgGA::GUIEventAdapter& ea,
    osgGA::GUIActionAdapter& us );
    virtual bool handle( const osgGA::GUIEventAdapter& ea,
    osgGA::GUIActionAdapter& us );
    ...
    };
    
  3. Use global variables to save DirectInput device and joystick objects.

    LPDIRECTINPUT8 g_inputDevice;
    LPDIRECTINPUTDEVICE8 g_joystick;
    
  4. The EnumJoysticksCallback() function will look for all usable joysticks and record the first valid one.

    static BOOL CALLBACK EnumJoysticksCallback( const
    DIDEVICEINSTANCE* didInstance, VOID* )
    {
    HRESULT hr = E_FAIL;  // initialized so FAILED(hr) is safe below
    if ( g_inputDevice )
    {
    hr = g_inputDevice->CreateDevice( didInstance->
    guidInstance, &g_joystick, NULL );
    }
    if ( FAILED(hr) ) return DIENUM_CONTINUE;
    return DIENUM_STOP;
    }
    
  5. In the TwoDimManipulator constructor, we will create a new input device and try to load an existing joystick object.

    TwoDimManipulator::TwoDimManipulator()
    : _distance(1.0)
    {
    HRESULT hr = DirectInput8Create( GetModuleHandle(NULL),
    DIRECTINPUT_VERSION, IID_IDirectInput8,
    (VOID**)&g_inputDevice, NULL );
    if ( FAILED(hr) || !g_inputDevice ) return;
    hr = g_inputDevice->EnumDevices( DI8DEVCLASS_GAMECTRL,
    EnumJoysticksCallback, NULL, DIEDFL_ATTACHEDONLY );
    }
    
  6. In the destructor, we release the joystick and device objects.

    TwoDimManipulator::~TwoDimManipulator()
    {
    if ( g_joystick )
    {
    g_joystick->Unacquire();
    g_joystick->Release();
    }
    if ( g_inputDevice ) g_inputDevice->Release();
    }
    
  7. The init() method is called when the manipulator is initialized at the first frame. This is the right place to bind the joystick to OSG window handle and set up necessary attributes.

    void TwoDimManipulator::init( const osgGA::GUIEventAdapter& ea,
    osgGA::GUIActionAdapter& us )
    {
    const osgViewer::GraphicsWindowWin32* gw =
    dynamic_cast<const osgViewer::GraphicsWindowWin32*>(
    ea.getGraphicsContext() );
    if ( gw && g_joystick )
    {
    DIDATAFORMAT format = c_dfDIJoystick2;
    g_joystick->SetDataFormat( &format );
    g_joystick->SetCooperativeLevel( gw->getHWND(),
    DISCL_EXCLUSIVE|DISCL_FOREGROUND );
    g_joystick->Acquire();
    }
    }
    
  8. The handle() method has the same meaning as the one in the osgGA::GUIEventHandler class. Here we will try to acquire joystick states for every frame, and parse and perform actions according to the selected joystick directions and buttons.

    Note

    Note that OSG has performMovementLeftMouseButton() and performMovementRightMouseButton() methods for performing mouse movements, which can be called here directly to perform joystick events.

    bool TwoDimManipulator::handle( const osgGA::GUIEventAdapter&
    ea, osgGA::GUIActionAdapter& us )
    {
    if ( g_joystick &&
    ea.getEventType()==osgGA::GUIEventAdapter::FRAME )
    {
    HRESULT hr = g_joystick->Poll();
    if ( FAILED(hr) ) g_joystick->Acquire();
    DIJOYSTATE2 state;
    hr = g_joystick->GetDeviceState( sizeof(DIJOYSTATE2),
    &state );
    if ( FAILED(hr) ) return false;
    ... // Please find details in the source code
    }
    return false;
    }
    
  9. The main entry has no changes. Now rebuild and run the example, and change to the 2D manipulator. You may press button 1 of the joystick while pressing the direction buttons to move the camera, or press button 2 with the directions to zoom in/out. Be aware that different joystick devices may not use exactly the same button codes, so you may have to configure your own joystick buttons according to the actual device.

How it works...

OSG by default uses platform-specific windowing APIs to handle user interactions, including mouse and keyboard events. Under Windows, for example, OSG will internally create a message queue, convert messages into OSG events, and send them to the event handlers; the converted events are the so-called osgGA::GUIEventAdapter objects. This works in many situations but does not support some advanced functionality such as joysticks and force feedback, and the message mechanism may not work properly when users press multiple keys at once. DirectInput, in contrast, returns each key and button's state and lets developers decide what to do at that time. It also provides complete functions for handling joysticks and gamepads. That is exactly why this recipe makes sense here, and it may also help in your future applications.

There's more...

DirectInput is a great dependency for applications and games requiring joystick support. However, it works only under Windows, so your program will be platform-specific. Look for other input libraries if you need to port to other platforms; Object Oriented Input System (OIS) may be a good choice. You may download the library source code at:

http://sourceforge.net/projects/wgois/