- Summary and Version Information
- Using Visit on HPC clusters
Summary and Version Information
|Description|VisIt Visualization and Graphical Analysis tool|
|Categories|Graphics, Programming/Development, Research|

|Version|Module|Availability*|
|1.11.2|visit/1.11.2|Non-HPC Glue systems|
|2.9.0|visit/2.9.0|Non-HPC Glue systems|
*: A package labelled as "available" on an HPC cluster can be used on the compute nodes of that cluster. Software not listed as available on an HPC cluster is generally still available on the login nodes of the cluster (assuming it is available for the appropriate OS version; e.g. RedHat Linux 6 for the two Deepthought clusters). This is because the compute nodes do not use AFS and instead have local copies of the AFS software tree, onto which we only install packages as requested. Contact us if you need a version listed as not available on one of the clusters.
In general, you need to prepare your Unix environment to be able to use this software. To do this, either run

tap TAPFOO

or

module load MODFOO

where TAPFOO and MODFOO are one of the tags in the tap and module columns above, respectively. The tap command will print a short usage text (use -q to suppress this; that is needed in startup dot files); you can get a similar text with module help MODFOO. For more information, see the documentation on the tap and module commands.
For packages that are libraries against which other codes are built, see the section on compiling codes for more help.
Tap/module commands listed with a version of current will set up what we consider the most current stable and tested version of the package installed on the system. The exact version is subject to change with little if any notice, and might be platform dependent. Versions labelled new represent a newer version of the package which is still being tested by users; if stability is not a primary concern, you are encouraged to use it. Versions labelled old set up an older version of the package; you should only use these if the newer versions are causing issues. Old versions may be dropped after a while. Again, the exact versions are subject to change with little if any notice.
In general, you can abbreviate the module tags. If no version is given, the default current version is used. For packages with compiler/MPI/etc. dependencies, if a compiler module or MPI library was previously loaded, the module command will try to load the build of the package matching those dependencies. If you specify the compiler/MPI dependency in the tag, it will attempt to load the corresponding compiler/MPI library for you if needed.
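As a sketch, assuming the module tags from the table above, setting up Visit might look like:

```shell
# Load the default ("current") version of Visit
module load visit

# Or load a specific version from the table above
module load visit/2.9.0

# Show the short usage text for this module
module help visit/2.9.0
```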
Using Visit on the HPC clusters
This section discusses various topics relating to using Visit on High Performance Computing (HPC) clusters.
One major concern when using visualization software such as Visit on HPC clusters is how to display the data. HPC clusters can generate large amounts of data, and visualization tools are useful in enabling researchers to understand the data that was produced. But researchers are generally not sitting anywhere near the HPC clusters, HPC clusters generally do not have displays attached, and users usually wish to view the data on their desktop workstations. While users can copy the data files from the HPC clusters to their workstations, this can be time consuming, as the data files are sometimes quite large. And that assumes there is room on the workstation disk.
In the remainder of this subsection, we will discuss some ways to view data sitting on disks attached to an HPC cluster on your desktop or similar system.
Remote Visualization using X
If you have a desktop with an X server available, the easiest solution might be simply to ssh to one of the login nodes and run visit on the login node with the X display tunnelled back to your desktop. The help pages on using X11 discuss the mechanics of this; basically, you ssh to the login node with X11 tunnelling enabled, and then run visit in the remote shell.
When this works, it can be the simplest way to view data remotely using Visit. However, even then it can be sluggish: the visit process on the HPC system is sending all of the graphics data back to your desktop for display, and things can become quite unresponsive at times. Furthermore, there can be quirks and incompatibilities between the version of X that Visit on the HPC cluster was built against and the X server running on your desktop, which can cause all sorts of issues. In general, if you encounter problems, it is probably easiest to just use Visit in client/server mode.
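The approach above can be sketched as follows; the username is a placeholder and the hostname is one of the Deepthought2 login nodes (substitute your own account and cluster):

```shell
# From your desktop: ssh to a login node with X11 forwarding enabled
# (username and hostname here are examples)
ssh -Y username@login-1.deepthought2.umd.edu

# Then, in the resulting shell on the login node, set up and launch
# Visit; its windows should appear on your local display
module load visit
visit
```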
Remote Visualization using Visit Client/Server mode
Visit supports a client/server mode wherein you launch Visit on your desktop but direct it to open data files on a remote system. You may have noticed that the "Open File" page includes a hostname field; that field can be used to read data from a remote system. This functionality can also be used to allow Visit to operate in parallel mode for processing large data sets.
Connecting to Visit Server on login nodes
We start by discussing how to run Visit on your local desktop in such a way that it can access files on the HPC cluster. We will not be doing any parallelization, just using Visit's client/server mode to access data without needing to copy it over to our local system.
First, one needs to configure the remote host profiles if one has not
done so previously. This only needs to be done once on each system you
sit at to run Visit (and for each remote system you wish to connect to from
that local system). You might be able to skip the manual setup described
below by downloading one of the following xml files and copying it to the
appropriate hosts file (usually
~/.visit/hosts on Unix systems):
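For example, on a Unix system the downloaded profile can be put in place as follows (the xml filename is hypothetical; use whichever file you downloaded):

```shell
# Create Visit's per-user hosts directory if it does not already exist
mkdir -p ~/.visit/hosts

# Copy the downloaded profile into it (filename is hypothetical)
# cp ~/Downloads/host_deepthought2.xml ~/.visit/hosts/
```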
If the xml files above are not working, you can create the profile manually. The following instructions are based on Visit 2.9, but things should be similar for later versions.
- Start by opening the Host Profiles page from the menu bar.
- You can either select an existing host you wish to edit from the Hosts area on the left, or click the new host button to create a new host entry. Either way, it will open the entry with fields on the right side.
- Host Nickname is the name that will be shown to you for the host profile. I suggest something like UMD Deepthought2 Cluster (Login Node).
- Remote hostname is the hostname that Visit will ssh to in order to start the remote Visit process. I would recommend that you give a specific login node, like:
  - login-1.deepthought.umd.edu for the Deepthought cluster
  - login-1.deepthought2.umd.edu for the Deepthought2 cluster
  - login-node01.marcc.jhu.umd.edu for the Bluecrab cluster
- You can leave
- Check both Maximum nodes and Maximum processors, and leave the default value of 1 for each. (Because you will be running Visit on the login node, we want to be considerate of other users.)
- For Path to Visit installation, use:
  - /cell_root/software/visit/2.9.0/gcc/4.6.1/openmpi/1.6.5/sys on the Deepthought clusters
  - /cm/shared/apps/visit/2.10.3 on the Bluecrab cluster.
- For Username, enter your username on the cluster. Remember that on Bluecrab, your username includes
- Click the box for Tunnel data connections through SSH.
- The other fields can be left to the defaults.
- Hit the Post button to save.
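Before relying on the profile, you may wish to confirm that the installation path exists on the remote side. A quick sanity check from a login node (the path is the Deepthought one from the list above; the exact directory layout may vary by version):

```shell
# The Visit installation directory given in the host profile
# should exist on the cluster
ls -d /cell_root/software/visit/2.9.0/gcc/4.6.1/openmpi/1.6.5/sys
```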
After the host profile is configured, the newly created profile should be available in the host dropdown of the File open window. Select the appropriate host (or localhost to open a local data file). You might be prompted for a password (depending on whether you have passwordless ssh configured for the system); if so, enter your password. You should now see the files available on the remote system.
Connecting to Visit Server on compute nodes
Sorry, this page is not ready yet.