7. Visualizing the World – OpenSceneGraph 3 Cookbook

Chapter 7. Visualizing the World

In this chapter, we will cover:

  • Preparing the VirtualPlanetBuilder (VPB) tool

  • Generating a small terrain database

  • Generating a terrain database on the earth

  • Working with multiple imagery and elevation data

  • Patching an existing terrain database with newer data

  • Building NVTT support for device-independent generation

  • Using SSH to implement cluster generation

  • Loading and rendering terrain from the Internet


It is always exciting to create and view a large area, for example, the earth, in our OSG-based applications. A detailed terrain database that can be paged dynamically and rendered smoothly is essential for geographic information system (GIS) applications, and that is what we are going to discuss in this chapter.

Early OSG developers may remember a simple utility named osgdem that shipped with the core OSG releases of the time. It could build terrain data from original elevation and texture files and made the results easy to merge into the scene graph. There is even a BlueMarbleViewer project showing how to build earth models with NASA's BlueMarble imagery using OpenSceneGraph 1.2 at http://www.andesengineering.com/BlueMarbleViewer/.

The osgdem utility has since grown into a complete terrain-generation tool set named VirtualPlanetBuilder, which is also managed by Robert Osfield, the OSG team leader. There are some other very good terrain builders and renderers as well. We will introduce one of them in this chapter—the osgEarth project, which is maintained by developers at Pelican Mapping (http://pelicanmapping.com/).

Building terrain requires original data. Some low-resolution data can be downloaded freely from the Internet, but some of it cannot be used directly for commercial purposes. You may have to obtain data from commercial map-service providers and acquire permits first when developing paid software such as GIS systems and earth viewers.

Preparing the VirtualPlanetBuilder (VPB) tool

The VirtualPlanetBuilder (VPB) is the best-known terrain-creation tool based on OSG. It uses the famous GDAL library to read a wide range of geospatial imagery and elevation data formats, and builds paged terrain databases for real-time viewing and analysis.

VPB was first designed as a terrain-generation tool in OpenSceneGraph 1.2. As it developed so rapidly, it soon became a separate project focusing only on database creation. It now supports working in projected and earth coordinates, processing gigabyte- and terabyte-sized data, cluster building using SSH, and different database optimization methods.

At the time this book is being written, VPB is still on its way to a stable 1.0 release, so we will work with the latest trunk version while studying the next few recipes in this chapter.

You can read more about VPB at the following website:


And for more about the GDAL project and its derivatives, refer to the following link:


Getting ready

Before we build VPB from the source code and use it for terrain creation later, we should establish some prerequisites. One is to install OSG headers and libraries at a reachable location. Of course every reader of this book should be able to achieve this.

The other requirement is the GDAL library, which VPB uses heavily for reading and parsing original raster data. You may download and build it from source, but GDAL already provides downloadable binaries for various platforms and versions. Linux, Mac OS X, and Windows users, please see the download link for details:


And Debian and Ubuntu developers can also make use of the common apt-get command to obtain GDAL binaries and developer files:

# apt-get install gdal-bin
# apt-get install libgdal-dev

How to do it...

Let us start.

  1. You will have to check out the VPB source code with any Subversion tool:

    # svn checkout http://www.openscenegraph.org/svn/
    VirtualPlanetBuilder/trunk VirtualPlanetBuilder
  2. Start the cmake-gui executable and select the VPB source's root directory and a new folder to place the building-related files.

  3. Choose a suitable generator and start the configuration. Like the recipes in Chapter 1, there will be a few options for you to check and edit before really generating the makefiles or solution.

  4. The GDAL group and the OSG group are the most important, without which you will fail to make the VPB system work. Please open these two groups and check whether the include directories and libraries are set. Under Linux, this is usually done automatically because most developer files are located in the /usr and /usr/local directories. But Windows users may have to specify the folders themselves.

  5. Click on Generate to create the makefiles. Note that it is disabled until you choose Configure again to set up the options, as shown in the preceding screenshot. Next, you can compile VPB in the build folder immediately using the following command:

    # sudo make
    # sudo make install
  6. Windows users could choose Visual Studio as the generator. And Mac OS X users may use either an Xcode project or UNIX makefiles.

  7. Now you will find some more executables in the bin directory of your installation folder. Among them, vpbmaster is the most important one and the only one to be introduced in the remainder of this chapter.

How it works...

Most OSG-based projects, including VPB and some others introduced before (osgOcean and so on), use CMake as their build system, so it is important for them to find the various OSG libraries as dependencies. CMake provides an automatic search script which looks for OSG installations under the /usr and /usr/local directories, as well as the place indicated by the environment variable OSG_DIR. The CMake system will then try to find OSG's necessary header files in the include subdirectory of each of these folders, and library files in the lib subdirectory. If successful, it presets these locations as the default values before the user-configuration process. This, of course, brings convenience when there are too many options to set for the same dependency.

There is a similar solution for specifying the GDAL options in CMake, but it uses another environment variable, GDAL_DIR, which indicates where the GDAL binaries and libraries are installed.

Generating a small terrain database

Looking into the installation folder, there are at least four new applications in the bin directory:

  • vpbmaster: The main processor for terrain database generation. It is a command-line tool without any GUI.

  • vpbcache: A tool for creating cache or building re-projections of original source data.

  • vpbsizes: A convenient calculator for computing tile sizes of specified terrain width and height.

  • osgdem: The terrain-creating tool used internally for handling different terrain tiles. There may be multiple osgdem applications running in parallel when users use vpbmaster to build a huge database.

The next step is to create a small terrain database using the vpbmaster tool. Of course, the first thing is to look for some adequate original data.

Getting ready

The Large Geometric Models Archive project at Georgia Tech has some excellent terrain data that can be used here to show how VPB works with original geographic data. The project site is managed by Greg Turk and Brendan Mullins. You can visit it at http://www.cc.gatech.edu/projects/large_models/.

And we are mainly interested in the Grand Canyon data, which can be found at http://www.cc.gatech.edu/projects/large_models/gcanyon.html.

Download the BMP format of the elevation and texture maps. We will use them along with the vpbmaster tool soon.


You can't use these data for commercial purposes without their permission.

How to do it...

Let us start.

  1. First, we should put the downloaded BMP files in a suitable place. They can't be used for generation yet, as GDAL doesn't directly support this format, so we had better convert these raster data with gdal_translate, whose default output is the GeoTiff format, which allows geo-referenced information to be embedded within a TIFF file. Now open a new terminal and type the following commands:

    # gdal_translate data/gcanyon_color_4k2k.bmp data/gcanyon_color_4k2k.png
    # gdal_translate data/gcanyon_height.bmp data/gcanyon_height.png
  2. Here we assume that all terrain data are stored in the data folder of the working directory, and use relative paths to specify them.

  3. Use vpbmaster to build our first terrain database now. Here the argument -d will specify the digital elevation map to use, and -t decides the imagery used as textures. The option -o determines the output directory and root filename.

    # vpbmaster -d data/gcanyon_height.png -t data/
    gcanyon_color_4k2k.png -o output/out.osgb
  4. The generation process may take a while depending on your system, so you can just serve yourself a cup of tea while VPB is working. The output will be located in the output folder; it will be created automatically if it doesn't exist.

  5. After the building process is completed, run osgviewer to see the terrain model.

    # osgviewer output/out.osgb
  6. The entire output folder's size is over 150 MB, but it can be rendered and displayed smoothly. You can either view the canyon from a global perspective, or move close to one of the hills and valleys.

How it works...

Have a look at the generated directory. It includes a large number of files and folders sharing the same name infix—L[a]_X[b]_Y[c]. Here a is the level number, and b and c are range identifiers. Level 0 is the roughest level, and level 6 in this example is the most detailed. In the following screenshot, there is only one L0 subfolder and several L2 subfolders. So what do they mean here?

VPB will always try to split the input image into square sections; for instance, the 4096 x 2048 (L0) source is separated into two 2048 x 2048 tiles. They are named L1_X0_Y0 and L1_X1_Y0, and lie in the out_root_L0_X0_Y0 folder. The L1 sections will then be split again using the quad-tree structure, that is, each level's tile is divided into four pieces at the next level. So we can see in the output folder four L2 subfolders (X0_Y0 to X1_Y1) for L1_X0_Y0, and another four (X2_Y0 to X3_Y1) for L1_X1_Y0. All child levels will be placed in these L2 subfolders.

Thanks to OSG's paged LOD (level-of-detail) mechanism, a tile will only be replaced by its four sub-tiles when the viewer is near enough, which renders the tile's range at a higher resolution. The basic structure of a quad-tree LOD in terrain rendering is shown in the following diagram:

And the node structure can be described as follows:

osg::Group* nextLvGroup = new osg::Group;
nextLvGroup->addChild( pagedNextTile1 );
nextLvGroup->addChild( pagedNextTile2 );
nextLvGroup->addChild( pagedNextTile3 );
nextLvGroup->addChild( pagedNextTile4 );
osg::PagedLOD* thisTile = new osg::PagedLOD;
thisTile->addChild( dataOfThisLevel ); // The rough level
thisTile->addChild( nextLvGroup ); // The refined level

The nodes pagedNextTile1 to pagedNextTile4 are actually files with the prefix out_La'_Xb'_Yc'. In this case:

a' = a + 1
For tile1: b' = 2 * b, c' = 2 * c
For tile2: b' = 2 * b + 1, c' = 2 * c
For tile3: b' = 2 * b, c' = 2 * c + 1
For tile4: b' = 2 * b + 1, c' = 2 * c + 1

There's more...

As .osgb is a binary native format, it is nearly impossible to quickly read and understand the contents of the generated database. We can slightly change the command-line arguments of vpbmaster to support writing to ASCII files (.osg or .osgt), as well as writing out tile images (using --image-ext to set a valid image extension) at the same time:

# vpbmaster -d data/gcanyon_height.png -t data/
gcanyon_color_4k2k.png -o output/out.osg --image-ext bmp

By default, each VPB tile is formed by 64 x 64 vertices and mapped with a 256 x 256 texture. As the original elevation and texture size is 4096 x 2048, VPB must build at least seven levels to reach the highest data resolution. The entire generation may take too long, and the full resolution may not be necessary in some situations. In this case, we can manually control the number of levels to generate by specifying the -l parameter:

# vpbmaster -l 3 -d data/gcanyon_height.png -t data/
gcanyon_color_4k2k.png -o output/out.osgb

Two other useful arguments are --terrain (the default) and --polygonal. The option --terrain means using the osgTerrain::TerrainTile class to generate grid geometries based on height fields. The option --polygonal treats the data as triangle faces and must tessellate and simplify them while creating tiles, which is much slower and not as good for further analysis work. The two options can't co-exist in one terrain-generation process.

Generating a terrain database on the earth

The terrain we just generated comes from two simple bitmaps and is constructed with height values along the Z axis. We can say that it is computed in a projected coordinate system: the terrain is defined on a flat, two-dimensional surface (with height), and the two dimensions (x- and y-coordinates) determine the area covered by the terrain. This model can be easily understood and used in a scene, but it doesn't correspond to a geographic coordinate system or to an existing place on the earth. As we know, terrain data are often acquired by aircraft and satellites flying over a real region. So could we just build the data in the earth's own coordinate system? This kind of coordinate system is usually called the geographic coordinate system.

Getting ready

You may either download the earth's imagery from the TrueMarble or BlueMarble website:

TrueMarble: http://www.unearthedoutdoors.net/global_data/true_marble/download

BlueMarble: http://earthobservatory.nasa.gov/Features/BlueMarble/

We are going to make use of the free data from TrueMarble. Here is their copyright information:

"Unearthedoutdoors.net contains graphics, information, data, reviews, and other content accessible by any Internet user. All Content is owned and/or copyrighted by Unearthed Outdoors, LLC (unless otherwise explicitly noted), and may be used only in accordance with this limited use license.

Unearthed Outdoors, LLC is protected by copyright pursuant to U.S. copyright laws, international conventions, and other copyright laws."

You can't use these data for commercial purposes without permission from Unearthed Outdoors, LLC.

How to do it...

Let us start.

  1. To build databases in geographic coordinates, we can simply use the --geocentric option while executing vpbmaster. The complete command is:

    # vpbmaster -t data/TrueMarble.4km.10800x5400.tif --geocentric
    -o output/out.osgb
    • Don't doubt the arguments we used this time: yes, there is no -d option and thus no elevation map specified. As we have already chosen the geocentric system to build from the source, VPB will automatically use flat sea-level elevation data and construct the earth geometry according to the given GeoTiff imagery.

  2. After the generation process, you may view the terrain by calling osgviewer.

# osgviewer output/out.osgb

  • And you will find that the result is a six-level quad-tree structure, which simulates a realistic earth model with TrueMarble overlays, as shown in the following screenshot:


Note that the gcanyon data used in the last recipe are not suitable this time. Those data don't have a valid WKT (well-known text) coordinate system and must be re-projected first so that VPB can recognize them as a piece of ground in the real world.

How it works...

OSG has a special node type, osg::CoordinateSystemNode, which allows the viewer system to convert data between XYZ and latitude/longitude/height, and which also builds a local transformation matrix internally for node transformations in the scene graph. In polygonal mode (set with the option --polygonal), VPB will set it as the parent node of the entire terrain sub-graph to guide the creation of all sub-tiles on the earth. But in grid mode (--terrain), its derived class osgTerrain::Terrain will be used instead. Both classes store the earth's dimension and coordinate information. The --geocentric option here instructs VPB to use the center of the earth as the terrain's center and to define units directly in meters. This is in fact the ECEF (earth-centered, earth-fixed) coordinate system, which measures the earth's shape with an ellipsoid called the World Geodetic System 1984 (WGS-84). Refer to the following link for details:


There's more...

We can make use of some other coordinate systems by specifying the --cs option. It uses a PROJ4 format string to declare new coordinate systems, for instance:

# vpbmaster --cs "+proj=latlong +datum=WGS84" ...

More information about the coordinate system string format can be found at the following PROJ4 project website:


Working with multiple imagery and elevation data

It is impractical to put the whole earth's elevation or texture into only one file. That is because the original data may be terabyte-sized or even larger and, thus, not easy to maintain. Saving multiple images of different areas is more suitable, and convenient for outputting data from surveying equipment. This requires VPB to read data from multiple inputs, or from a subdirectory containing many smaller tile images, and merge them into one complete terrain model. Fortunately, this can be done directly with the -d and -t options.

Getting ready

We will continue working on the earth model and try to add some height fields at a certain longitude and latitude range. Thanks to the SRTM project, we can freely download and use the global elevation data along with the textures for non-commercial purposes. The download link is:



"Jarvis A., H.I. Reuter, A. Nelson, E. Guevara, 2008, Hole-filled seamless SRTM data V4, International Centre for Tropical Agriculture (CIAT), available from http://srtm.csi.cgiar.org."

You must read the disclaimer given in the following link before making use of SRTM's elevation data for any purposes:


How to do it...

Let us start.

  1. Let us first open the website and select a few areas we are interested in.

  2. Download the elevation files found by SRTM's data search engine.

  3. Put all TIFF files into a separate folder named srtm. Now it's time to start the vpbmaster tool again.

    # vpbmaster -d data/srtm -t data/TrueMarble.4km.10800x5400.tif
    --geocentric -o output/out.osgb
  4. View the final result with osgviewer. As VPB can automatically handle the assembly of multiple files, you may either specify a directory as the parameter of -d and -t, or use the same option more than once to add multiple files to the building process, for example:

    # vpbmaster -d data/srtm/srtm_54_05.tif -d data/srtm/
    srtm_55_05.tif -d data/srtm/srtm_54_06.tif -d data/srtm/
    srtm_55_06.tif -t data/TrueMarble.4km.10800x5400.tif
    --geocentric -o output/out.osgb
  5. The final result is shown in the following screenshot. You will find that there are higher-resolution elevation data around the British Isles.

How it works...

You can find in the working directory a series of new files and folders created and managed by VPB. The build_master.source file is an ASCII file wrapping up all the source data and build options. Open it with any text editor, and you will find that it looks like an OSG native scene file (.osgt) and may even be loaded with the osgDB::readNodeFile() function. It has an osgTerrain::TerrainTile node that saves the build options (output name, extents, levels, and others) via the vpb::DatabaseBuilder object, and saves child layers for the different input data.

The build_master.tasks file records all the sub-tiles to be generated during the whole process. The status of each sub-tile task can be found in the tasks folder. A standard status file may be automatically written, as shown in the following code block:

application : osgdem --run-path /usr/local/bin -s build_master.source
    --record-subtile-on-leaf-tiles -l 8 --subtile 3 0 0
    --task tasks/build_subtile_L3_X0_Y0.task
    --log logs/build_subtile_L3_X0_Y0.log
date : [building time]
duration : [building duration]
fileListBaseName : output/out_subtile_L3_X0_Y0/
hostname : [host name]
pid : [pid]
source : build_master.source
status : [pending/completed]

A pending task indicates that the sub-tile is not created yet; a completed task means there is no need to work on the sub-tile unless the user needs a complete rebuild. If the building process is canceled, or crashes for some system reason, you can make use of the task files' status and rerun vpbmaster with the --tasks argument:

# vpbmaster --tasks build_master.tasks

Tasks that are marked as completed will be skipped this time. If you simply executed vpbmaster again without --tasks, the finished data would be overwritten instead of the build being resumed.

Patching an existing terrain database with newer data

As you may see in Google Earth and some other 3D GIS explorers, newer and more refined images captured by satellites are sometimes added to the entire earth model, helping you take a closer look at places that were not distinct enough before. It might be important to integrate newly obtained data into the scene and distribute them to end users as soon as possible.

VPB supports patching an existing terrain database too. It requires the source file and all the original data for reference, and will add new raster and elevation data to update the database with higher-resolution patches. It is extremely useful if we need to make some changes to a generated terrain model or use higher-resolution images to replace the old ones.

How to do it...

To make use of the patching functionality, there are two prerequisites: first, you must have the new data; and second, you should keep the build_master.source file and all source files used to produce the old database, as they are needed for handling resolution and boundary problems. Sub-tiles that are not affected by the new patch will not be rebuilt anymore.

Let us start.

  1. We will patch the gcanyon data used in the first recipe of this chapter. Of course, there is no real patch available, so we have to create one ourselves. Open Photoshop or GIMP and create a new white-colored picture, then save it as a TIFF file (sub_gcanyon_height.tif, sized 1024 x 512 in this recipe). If we use this file as an elevation patch, the height field will be set to a very high value and the old values will be totally overwritten.

  2. We may have to create a world file (sub_gcanyon_height.wld) for specifying the resolution and range of the patch file. The content of this ASCII file can be simply written as shown in the following code block (the six values will be explained later):

    0.5
    0.0
    0.0
    -0.5
    1000.0
    500.0

  3. Now place the world file and the TIFF file together in the data directory, and start vpbmaster:

    # vpbmaster --patch build_master.source -d data/
  4. The build_master.source is the source file generated during the last build. The building process may take less time than creating a complete gcanyon terrain. As we specify the -d option this time, the height field will be recalculated to merge the effect of the patch.

  5. Use osgviewer to view the output model. The white-colored elevation map is constructed as a raised cube on the terrain, as shown in the following screenshot:

How it works...

The world file is an ASCII parameter file used for geo-referencing raster map images. It was first introduced by ESRI. Such files (with the extension .tfw, .tifw, or .wld) describe the location, scale, and rotation of the map with six lines (each containing a decimal number). When GDAL reads a TIFF file, it will automatically look for a world file with the same name and associate the two files for gathering the necessary information.

The meanings of the six lines are as follows:

  • Line 1: pixel size along X (0.5 here).

  • Line 2: rotation about X (0 in most cases).

  • Line 3: rotation about Y (0 in most cases).

  • Line 4: pixel size along Y. It's often a negative number because image data are stored from top to bottom (-0.5 here).

  • Line 5: center X of the upper-left pixel (1000 here).

  • Line 6: center Y of the upper-left pixel (500 here).

Because the newly-created patch file (sub_gcanyon_height.tif) doesn't contain any geographic information, we have to provide a world file to place it at an appropriate location and resolution. If you have GDAL installed, you will find an executable named gdalinfo. Let us use it to check the image with the associated .wld file:

# gdalinfo data/sub_gcanyon_height.tif

And you will get a report, as shown in the following code block:

Corner Coordinates:
Upper Left ( 999.750, 500.250)
Lower Left ( 999.750, 244.250)
Upper Right ( 1511.750, 500.250)
Lower Right ( 1511.750, 244.250)
Center ( 1255.750, 372.250)

Because the patch is 1024 x 512 pixels but only covers a 512 x 256 area (at 0.5 units per pixel), the maximum level will increase to 7. And you will see some L7 files in some of the subfolders of output, probably L2_X0_Y0 and L2_X1_Y0 in this recipe, as the new patch intersects with them.

In fact it is common to obtain geospatial data in the GeoTIFF format instead, which already has such metadata embedded.

Building NVTT support for device-independent generation

By default, VPB uses the osg::Image and osg::Texture classes to generate compressed data formats for internal use, and to create mipmaps if required. Both encapsulate OpenGL functions for this work, and thus must be used on systems where a graphics card capable of providing an OpenGL rendering context is available. This may not be a problem in most cases, but with the development of new compressed texture formats, it is still possible that your graphics card doesn't support these features, or that you simply don't have a graphics card supporting OpenGL at all (for example, on headless cluster or server computers). All of these may lead to missing functionality in VPB and a failure to build terrain databases on such machines.

Fortunately, we can make use of the NVIDIA Texture Tools (NVTT), which is an open source image-processing and texture-manipulation project. In this recipe, we will compile it and use it to configure a new OSG plugin named osgdb_nvtt, and make VPB depend on it to generate device-independent textures and terrain models.

Getting ready

You can download the latest NVTT source code from the following link:


Or you can use SVN to check it out:

# svn checkout http://nvidia-texture-tools.googlecode.com/
svn/branches/2.0/ nvtt

NVTT also provides CMake scripts for cross-platform building. But for Linux users, you can directly compile it by executing the following commands under the NVTT root directory:

# ./configure
# make
# make install

An important note for building NVTT with CMake: by default, CMake will generate makefiles for compiling static libraries, but these will not work with the corresponding OSG plugin. So you must add the definition NVTT_SHARED=1 to force generating shared libraries while running the cmake executable, that is:

# cmake /home/nvtt -DNVTT_SHARED=1

The cmake-gui tool can't be used here as it doesn't allow macros to be added as arguments.

How to do it...

Let us start.

  1. Now start the cmake-gui tool and select the OpenSceneGraph directory to configure. Find the NVTT group and set nvcore as the NVTT_LIBRARY value, and the directory containing nvtt/nvtt.h as the NVTT_INCLUDE_DIR value.

  2. Rebuild OpenSceneGraph now. If you have kept all the build files before, the building process will be much faster as only a few projects should be compiled.

  3. Make sure to run 'make install' and check that there is a new osgdb_nvtt plugin in the dynamic-library directory (lib for UNIX and bin for Windows).

    Do we have to rebuild VPB as well? The answer is no: VPB will automatically look for the NVTT plugin and make use of it for terrain generation, regardless of graphics contexts.

  4. Now, if you have an old enough computer, turn it on and try to run VPB to generate the gcanyon data again. You will see that VPB can work smoothly under such devices too.

How it works...

As we know, OSG uses the osgDB::ReaderWriter class as the base interface of all reader/writer plugins. Every file format is parsed within a certain plugin, which then returns an osg::Image or osg::Node pointer as the result (or saves to a specified filename when writing scene nodes and images). For instance, COLLADA 3D models are handled by osgdb_dae; you will see a ReaderWriterDAE class defined in the DAE plugin source code, which implements the concrete data reading and conversion.

But in this recipe, we meet another kind of OSG plugin: the image-processor plugin. It uses a base interface called osgDB::ImageProcessor, whose derived classes are implemented in plugins. This class has two important virtual methods to override:

virtual void compress(osg::Image& image,
    osg::Texture::InternalFormatMode compressedFormat,
    bool generateMipMap, bool resizeToPowerOfTwo,
    CompressionMethod method, CompressionQuality quality);
virtual void generateMipMap(osg::Image& image,
    bool resizeToPowerOfTwo, CompressionMethod method);
Re-implement these and the processor will be able to compress images to specific formats and generate mipmaps for them. That is exactly what the osgdb_nvtt plugin does with the external NVTT library.

Using SSH to implement cluster generation

The computer cluster is an increasingly common concept in modern development. It means a group of linked computers working together, often connected to each other through a Local Area Network (LAN). It improves performance and availability compared with a single computer, but of course it is much more costly.

Could we use VPB on such a cluster system and benefit from the high availability and speed? Of course. As we already know, building terrain with VPB is not a quick task, especially when the original data are extremely large, so a computer cluster used for computational purposes can be of great help. The original data can be stored using the Network File System (NFS) technique so that all computers can access data stored in the same place. And there should be one primary computer which takes care of task distribution and keeps in communication with all the other slave nodes, which build parts of the tiles simultaneously.

In the following section, we will mainly introduce the configuration under Linux. Windows and Mac OS X users may have trouble following the same steps; please first read the related instructions on the OpenSSH website and set up your own SSH environment.

How to do it...

Let us start.

  1. We can use the Secure Shell (SSH) protocol to communicate with a remote computer and send commands to it. Please make sure you have the OpenSSH (http://www.openssh.com/) service installed and enabled. Type the following command to connect as user user1 to a remote computer (of course, both are fictional here) and execute the vpbmaster tool without parameters:

    # ssh user1@ vpbmaster
  2. If you have already installed OSG and VPB on the remote computer, the command should work, but you may have to input the password before login. VPB will also try to execute ssh internally while working with cluster systems, so it is important to avoid having to input the password every time. Developers who are familiar with SSH can quickly do this by sending a public key to each remote host:

    # ssh-keygen -t rsa
    # ssh-copy-id user1@
  3. Create a new text file named machinepool.txt (or any other name) and provide all remote computer names and numbers of CPUs you want to use, as shown in the following code block:

    Machine {
      hostname user1@
      processes 1
    }
    Machine {
      hostname user2@
      processes 1
    }
    Machine {
      hostname user3@
      processes 1
    }
  4. Now let us start building the real global data with the --machines option (assuming they are stored in /nfs/data):

    # vpbmaster --machines machinepool.txt -d /nfs/data/srtm \
      -t /nfs/data/TrueMarble.4km.10800x5400.tif --geocentric \
      -o output/out.osgb
  5. Enjoy the process. Is it a much shorter journey this time?

How it works...

Do you remember the sub-tile task files in the tasks folder? Let us open some task files randomly this time and have a look at the hostname line:

fileListBaseName : output/out.osgb.task.0
hostname : user1@
fileListBaseName :
hostname : user2@
fileListBaseName :
hostname : user3@

You can see that VPB automatically divides the tasks and sends commands to the different hosts within the local network. It is impressive to watch a series of high-performance computers cooperate so smoothly on a huge generation job, and a real time-saver if you have such an environment!

There's more...

To build an NFS system, you can try GlusterFS at http://www.gluster.org/.

And for SSH implementations on different platforms, see the OpenSSH website (http://www.openssh.com/) for details.

Loading and rendering terrain from the Internet

VPB-generated terrain tiles are small enough to be easily transferred over the Internet or an intranet. And thanks to the osgdb_curl plugin, which depends on the cURL library, OSG can quickly read these files from remote servers through multiple protocols. These features are the foundation for loading and rendering terrain databases from the web.

OSG also provides a simple file-cache mechanism that writes files read from the web to local disk as temporary copies, and loads the disk copy directly when the same transfer request comes again. At present, it only works for paged nodes that are dynamically managed (loaded or removed according to the current viewpoint) by the osgDB::DatabasePager class. This solution prevents user applications from visiting remote websites and downloading unchanged data repeatedly, and thus saves bandwidth and loading time.
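Besides the OSG_FILE_CACHE environment variable used later in this recipe, the cache can also be enabled programmatically. The following is a minimal sketch, assuming a standard OSG 3.x installation; the cache path /home/cache is an arbitrary example:

```cpp
// Minimal sketch: enable OSG's file cache from code rather than the
// OSG_FILE_CACHE environment variable. Assumes a standard OSG 3.x
// installation; the cache path here is only an example.
#include <osgDB/FileCache>
#include <osgDB/Registry>

int main( int argc, char** argv )
{
    // Remote files fetched by the database pager will be mirrored
    // under this directory and reused on later runs.
    osgDB::Registry::instance()->setFileCache(
        new osgDB::FileCache("/home/cache") );

    // ... create the viewer and load the remote database as usual ...
    return 0;
}
```

Setting the cache on the global registry affects every subsequent paged read, which matches the behavior of the environment variable.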

How to do it...

Let us start.

  1. It is easy to get a quick taste of rendering terrain databases from a web server. First, you should have a website to store the terrain files that grants access to anonymous visitors. AppServ (http://www.appservnetwork.com/) is one way to create such a site.

  2. Copy all the files in the output directory to the site. These are the files generated by VPB in the previous recipes of this chapter.

  3. Make sure the server is enabled and running. Assume the hostname is localhost, and start osgviewer:

    # osgviewer
  4. Now you will be able to view the database created previously. Of course, it may be too simple to show off the powerful web support in OSG. So this time we will try to display a larger earth model from a real remote server:

    # osgviewer http://www.openscenegraph.org/data/
    • This 547 MB paged database is composed of NASA BlueMarble data and high-resolution imagery of the bay area of California, USA. You may navigate to that area to have a look at some very detailed data. If your Internet service provider doesn't give you good bandwidth, loading the sub-tiles may be slow, and it will be painful when you zoom in and out multiple times.

  5. Now it's time to use the file-cache mechanism. Just set a new environment variable and create a new folder for caching:

    # export OSG_FILE_CACHE=/home/cache
    # mkdir /home/cache
  6. You may specify any folder as the cache folder. Make sure you have read/write permissions there.

  7. Now try step 4 again. Enjoy the picture of the bay area, and exit the viewer program after a while.

  8. Go to the cache folder and you will find that it records the sub-tiles you have visited, which makes subsequent loads of the same files faster.

How it works...

The file cache is checked and reused while the database pager is processing requested paged nodes. First, it determines whether a new name is a remote filename (with a hostname at the beginning of the name string). If so, it tries to find the required hostname and filename in the cache folder. If that succeeds, the pager will mark the request as 'high latency' and directly read from the locally cached files later.

You can also decide if a file should be cached or not by specifying an osgDB::FileLocationCallback object. It has two virtual methods to override:

virtual Location fileLocation( const std::string& filename,
                               const Options* options );
virtual bool useFileCache() const;
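As a sketch of how these two methods might be overridden, a custom callback (the class name OwnLocationCallback is hypothetical) could treat every filename beginning with http:// as remote, so that only downloaded files become candidates for the cache. This assumes a standard OSG 3.x installation:

```cpp
// Hypothetical subclass of osgDB::FileLocationCallback, shown only
// as an illustration. Assumes a standard OSG 3.x installation.
#include <osgDB/Callbacks>

class OwnLocationCallback : public osgDB::FileLocationCallback
{
public:
    virtual Location fileLocation( const std::string& filename,
                                   const osgDB::Options* options )
    {
        // Treat HTTP addresses as remote; everything else is local
        // and will therefore never be added to the file cache.
        if ( filename.find("http://")==0 ) return REMOTE_FILE;
        return LOCAL_FILE;
    }

    virtual bool useFileCache() const { return true; }
};
```

An instance of such a class is what would be passed to setFileLocationCallback() in the snippet that follows.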

The first method returns whether the file is local (LOCAL_FILE) or remote (REMOTE_FILE); local files will not be added to the cache. The second method can quickly enable or disable the use of caching. You can specify your own location callback at any time by calling the setFileLocationCallback() method of the reading options object:

osg::ref_ptr<osgDB::Options> options = new osgDB::Options;
options->setFileLocationCallback( ownCallback );
pagedNode->setDatabaseOptions( options.get() );

There's more...

OSG also supports a revision mechanism that provides information about additions, removals, and modifications to a remote terrain database. The database pager can then decide whether the scene graph must be updated because of changes on the remote server. This functionality is not enabled by default at the time of writing this book, but you can try the osgdatabaserevisions example in the OSG source code to see how it works.