Immersive Visualization / IQ-Station Wiki

This site hosts information on virtual reality systems that are geared toward scientific visualization, and as such often toward VR on Linux-based systems. Thus, pages here cover various software (and sometimes hardware) technologies that enable virtual reality operation on Linux.

The original IQ-Station effort was to create low-cost (for the time) VR systems that used 3DTV displays to produce CAVE/Fishtank-style VR. That effort pre-dated the rise of consumer HMD VR systems; however, the realm of midrange-cost large-fishtank systems is still important, and it has transitioned from 3DTV-based displays to short-throw projectors.

Barney


Barney Volume Visualizer

Barney is a GPU-based rendering library designed to take advantage of multiple GPUs by dividing up the task of rendering a complex scene. Barney can handle mesh geometry as well as volume visualization. My primary interest is the volume-visualization aspect, though in the future a scene with meshes, perhaps combined with volumes, could also be of interest.

One important (new) feature of Barney is the ability to render 4-channel volumetric data. The colored volumetric rendering comes from the RGB channels, and the 4th channel is used to control the alpha value of each voxel -- which doesn't necessarily have to be the pre-assigned value of that 4th channel.

BANARI

Barney has a compile-time option to create an ANARI backend renderer that will work with the current ANARI-SDK (presently version 0.8.1). I haven't built or tested this feature yet.

One drawback of using the ANARI backend is that ANARI doesn't (yet) handle the 4-channel volume data.
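Since this feature hasn't been built here yet, the exact CMake switch for BANARI isn't documented on this page; one generic way to find it (a sketch, assuming the hayStack build described below exposes Barney's options in its CMake cache) is to list the cached variables from the Build directory and look for "anari":

% cd hayStack-<date>/Build
% cmake -LAH .. | grep -i anari
[enable whatever ANARI/BANARI option turns up, then re-run cmake and make]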

HayStack

HayStack is a desktop GUI tool that interfaces with the Barney library. HayStack provides the means to specify the scene, along with parameters of the scene (especially useful for volumes of raw data). It also provides a simple interface for viewing the scene and for modifying the colormap/opacity transfer functions used in volume rendering.

HayStack Command Line Arguments

The HayStack scene viewer ("hsViewer" & "hsViewerQT") takes a handful of command line arguments to control what will be viewed in the scene, and how data will be divided among GPUs.

  • -ndg <n> — number of different groups into which to divide the input pieces
  • --camera <From: x,y,z> <POI: x,y,z> <Up: x,y,z> <fovy> — how to set the initial camera view
  • -xf <file.xf> — a transfer function map
  • <data-source> — a URL-style file descriptor
    • <type>://[n@]/<path>[:option][:option]
      • <n>@ — partition the object into <n> parts
      • :format={uint8|uint16|...} — Type of data in a raw file
      • :dims=<x-size>,<y-size>,<z-size> — Dimensions of data in a raw file
      • :channels=n — Number of channels of data in each volume voxel


Examples:

  • % hsViewerQT raw:///path/skinO_700x800x965_uint8.raw:format=uint8:dims=700,800,965
  • % hsViewerQT raw:///path/skinO_700x800x965_uint8.raw:format=uint8:dims=700,800,965:channels=4
Note that although the filename itself indicates the size of each data dimension, that metadata is not parsed from the name and must still be specified through the file-descriptor syntax.

Note also: when "channels=4" is given, the specified file is the value/alpha component, and the red/green/blue channels are read from files with the same filename but with ".r", ".g", and ".b" appended (see the listing below).
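For example, with the "channels=4" command shown above, the loader would expect these four files (an illustration of the naming rule, not actual directory output):

  /path/skinO_700x800x965_uint8.raw     [value/alpha channel]
  /path/skinO_700x800x965_uint8.raw.r   [red channel]
  /path/skinO_700x800x965_uint8.raw.g   [green channel]
  /path/skinO_700x800x965_uint8.raw.b   [blue channel]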

Ingo Wald's explanation

Regarding the 2@ "snake egg": that's a hint to the loader that it should on-the-fly partition this object into N (in this case, 2) smaller parts.
The way this works is that barney can do data parallel rendering, but "somebody" has to decide what parts of the scene go where - eventually that'll be VisIt or ParaView deciding that, but at least for now it's haystack.
HayStack, in turn, has to know what the inputs are (ie, the files on the command line), but it also needs "some" sort of instructions on what to do with those parts - basically, into how many different parts to chop them, and how to distribute those to the different ranks and/or GPUs (if you have more than one)
This, in haystack at least, is controlled through two different flags:
a) There is a command line flag -ndg <n> to tell haystack into how many different pieces it should group all the inputs you provided. So if you pass it 20 different .obj files, and specify -ndg 4 it'll somehow magically/greedily group those 20 different obj files into a total of 4 groups, and with that it can then do 4-wide data parallel rendering (if you have four ranks or four GPUs).
Often, however, you only have a single huge input file, so it can't just create these ndg groups from that; so to allow that you can also, for each individual input, specify how many different pieces it can be chopped into while loading it. So basically:
b) The <type>://N@<location> tells the loader that whatever type you're loading there, simply split it on-the-fly into N smaller chunks, then treat those as if they had been loaded as N different pieces.
This way, if you specify raw://2@...bla.raw and -ndg 2 what it'll do is take that one .raw file, load it as two different (smaller) volumes, and then assign one to one rank and the other to the other rank. (actually, it's the respective ranks that decide that, and each rank will only load its half, but that's just an implementation detail).
If you have more ranks than the ndg value you asked for, it'll also do some additional round-robin thingy. So if you used 2@ to load your giant volume into 2 pieces (call them "A" and "B"), and also -ndg 2 to tell it to do two-wide data parallel rendering, but your machine actually has 8 GPUs, then barney will automatically assign A to GPUs 0-3, and B to 4-7 - but it can't do that unless haystack gives it the right number of different scene parts.
So basically, if you always use a single raw file, and never use -ndg and N@, then you'll always use data replicated rendering, with the same raw file replicated across every GPU. For the small raw file that won't be a problem, it'll easily fit. Once you go to larger raw files, at some point it'll be too big for one GPU, but if you have, say, 4 of them, you can still use raw://4@<filename> ... -ndg 4 and haystack will automatically split that raw file into four smaller parts (with the right scaling and translation, and ghost layer(s) to seamlessly place them next to each other).
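To make the interplay of the two flags concrete, a two-way split of one large volume would look something like this (the path and dimensions below are made up purely for illustration; the descriptor follows the raw:// syntax above):

% hsViewerQT -ndg 2 raw://2@/path/big_volume_1024x1024x1024_uint8.raw:format=uint8:dims=1024,1024,1024
[on an 8-GPU node, half "A" then goes to GPUs 0-3 and half "B" to GPUs 4-7, as described above]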


HayStack Keyboard/Mouse Controls

While running the HayStack viewer program ("hsViewer" or "hsViewerQT"), there are a number of mouse movements and keyboard inputs available to affect the view:

  • Left-Mouse Button — zoom in and out
  • Middle-Mouse Button — rotate the view
  • Right-Mouse Button — "strafe" the view (move laterally)
  • w,a,s,d — rotate the scene
  • c,e — move camera away from and toward the scene (respectively)
  • +,- — increase/decrease movement speed
  • i/I,f/F — Change movement mode between "inspect" and "fly" (changes center of rotation)
  • x,y,z,X,Y,Z — make that axis point up/down (toggles between the two)
  • ! — dump a screenshot
  • C — output the current camera coordinates to the terminal shell (including a matching command-line argument; see the example after this list)
  • T — dump the current transfer function as "hayMaker.xf"
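For instance, a view found interactively can be reproduced on a later run by pasting the "C" output back in as the --camera argument and pointing -xf at the dumped transfer function (the camera numbers below are placeholders; substitute the values actually printed):

% hsViewerQT --camera -1.5,2.0,3.0 0,0,0 0,1,0 60 -xf hayMaker.xf raw:///path/skinO_700x800x965_uint8.raw:format=uint8:dims=700,800,965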


Building Barney & HayStack

To use the HayStack application, you will first need to build Barney. And while HayStack's build process does the Barney building too, the Barney code has to be cloned separately, and it must be in a sibling directory named exactly barney (so the date can't be appended to the name).

Note that the CMake files for HayStack (specifically in the owl submodule) assume a static version of the CUDA library, but that is not generally available, so the CMake instructions are edited below to use the shared library instead.

Now do the build:

% cd Apps/Barney
% git clone https://github.com/ingowald/barney.git
% cd barney
% git submodule update --init --recursive
% cp -p submodules/owl/owl/CMakeLists.txt{,_orig}
% vi +160 submodules/owl/owl/CMakeLists.txt
[change "CUDA::cudart_static" to "CUDA::cudart"]
% cd ..
% git clone https://github.com/ingowald/hayStack.git hayStack-<date>
% cd hayStack-<date>
% git submodule update --init --recursive
% mkdir Build
% cd Build
% module load tbb optix [cuda] [mpich]
[on the pyrite system the host compiler needed to be specified explicitly:]
pyrite% cmake -DCMAKE_CXX_COMPILER=g++-10 -DCMAKE_CUDA_HOST_COMPILER=g++-10 -DCMAKE_BUILD_TYPE=Release ..
[otherwise:]
% cmake -DCMAKE_BUILD_TYPE=Release ..
% make

And testing it:

% ./hsViewerQT -ndg 4 raw://4@/mnt/hdd1/wrs1/Data/MayoClinic/skinO_700x800x965_uint8.raw:format=uint8:dims=700,800,965:channels=4

% ./hsViewerQT /mnt/hdd1/wrs1/Data/BarneySamples/AmazonLumberyard/amazon_lumberyard_bistro.mini

% ./hsViewerQT ~/Data/Models/Matterport/c570ad3123ed44a7a08041e27664f7e9.obj
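For a quick smoke test without any real dataset, a small random volume can be generated and loaded through the same raw:// syntax (it will just look like noise, but it exercises the raw-volume loader; the file name and location are arbitrary):

% head -c $((64*64*64)) /dev/urandom > /tmp/noise_64x64x64_uint8.raw
% ./hsViewerQT raw:///tmp/noise_64x64x64_uint8.raw:format=uint8:dims=64,64,64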