Visit

Contents

  • Summary and Version Information
  • Using Visit on HPC clusters

    Summary and Version Information

    Package     Visit
    Description Visit Visualization and Graphical Analysis tool
    Categories  Graphics, Programming/Development, Research

    Version  Module tag              Availability*          GPU Ready  Notes
    2.9.0    visit/2.9.0             Non-HPC Glue systems;  N
                                     Deepthought HPCC;
                                     64bit-Linux
    2.10.2   visit/2.10.2/no-osmesa  Non-HPC Glue systems;  N
                                     Deepthought HPCC;
                                     64bit-Linux
    2.10.2   visit/2.10.2/osmesa     Non-HPC Glue systems;  N
                                     Deepthought HPCC;
                                     64bit-Linux

    Notes:
    *: Packages labelled as "available" on an HPC cluster can be used on the compute nodes of that cluster. Even software not listed as available on an HPC cluster is generally available on the login nodes of that cluster (assuming it is available for the appropriate OS version, e.g. RedHat Linux 6 for the two Deepthought clusters). This is because the compute nodes do not use AFS and instead have local copies of the AFS software tree, and we only install packages into those copies as requested. Contact us if you need a version listed as not available on one of the clusters.

    In general, you need to prepare your Unix environment to be able to use this software. To do this, either:

    • tap TAPFOO
    OR
    • module load MODFOO

    where TAPFOO and MODFOO are one of the tags in the tap and module columns above, respectively. The tap command will print a short usage text (use -q to suppress this, which is needed in startup dot files); you can get a similar text with module help MODFOO. For more information, see the documentation on the tap and module commands.
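    For example (using module tags from the table above; the tap tag is left as a placeholder since the tap column is site-specific):

    ```
    # Prepare the environment via tap (replace TAPFOO with the actual tag);
    # -q suppresses the usage text, as needed in startup dot files:
    tap -q TAPFOO

    # Or via module, using a tag from the table above:
    module load visit/2.10.2/osmesa
    module help visit/2.10.2/osmesa   # prints a short usage text
    ```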

    For packages which are libraries which other codes get built against, see the section on compiling codes for more help.

    Tap/module commands listed with a version of current will set up what we consider the most current stable and tested version of the package installed on the system. The exact version is subject to change with little if any notice, and might be platform dependent. Versions labelled new represent a newer version of the package which is still being tested by users; if stability is not a primary concern, you are encouraged to use it. Versions labelled old set up an older version of the package; you should only use these if the newer versions are causing issues. Old versions may be dropped after a while. Again, the exact versions are subject to change with little if any notice.

    In general, you can abbreviate the module tags. If no version is given, the default current version is used. For packages with compiler/MPI/etc. dependencies, if a compiler module or MPI library was previously loaded, module will try to load the build of the package matching that compiler/MPI combination. If you instead specify the compiler/MPI dependency as part of the tag, it will attempt to load the corresponding compiler/MPI library for you if needed.
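    A couple of illustrative examples of these shortcuts (exact behavior depends on what is installed):

    ```
    # No version given: loads the default current version of the package
    module load visit

    # Version without the full variant tag: module resolves the rest
    module load visit/2.10.2
    ```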

    Using Visit on the HPC clusters

    This section discusses various topics relating to using Visit on High Performance Computing (HPC) clusters.

    Remote Visualization

    One major concern when using visualization software such as Visit on HPC clusters is how to display the data. HPC clusters can generate large amounts of data, and visualization tools help researchers understand the data that was produced. But generally the researchers are not sitting anywhere near the HPC clusters, the clusters usually do not have displays attached, and users typically wish to view the data on their desktop workstations. While users can copy the data files from the HPC clusters to their workstations, this can be time consuming, as the data files are sometimes quite large. And that assumes there is room on the workstation disk.

    In the remainder of this subsection, we will discuss some ways to view data sitting on disks attached to an HPC cluster on your desktop or similar system.

    Remote Visualization using X

    If you have a desktop with an X server available, then the easiest solution might be to ssh to one of the login nodes and run visit there, with the X display tunnelled back to your desktop. The help pages on using X11 discuss the mechanics of this; basically, you ssh to the login node with X11 tunnelling enabled and then run visit in the remote shell.
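    Assuming a local X server, the session might look like the following (USERNAME is a placeholder for your cluster username):

    ```
    # From your desktop, with trusted X11 forwarding enabled:
    ssh -Y USERNAME@login.deepthought2.umd.edu

    # Then, in the remote shell on the login node:
    module load visit
    visit &
    ```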

    When this works, it can be the simplest way to view data remotely using Visit. However, even when it works, it can be sluggish. The visit process on the HPC system is sending all that graphics data back to your desktop for display, and things can become quite unresponsive at times. Furthermore, there can be quirks and incompatibilities between the version of X that Visit running on the HPC cluster was built against and the X server running on your desktop which can cause all sorts of issues. In general, if you encounter issues, it is probably easiest to just use Visit in client/server mode.

    Remote Visualization using Visit Client/Server mode

    Visit supports a client/server mode wherein you launch the Visit GUI on your workstation/desktop but the data processing is handled on one or more remote systems. Graphical processing is split between the workstation and the remote systems.

    This is particularly advantageous when working on High Performance Computing (HPC) clusters, as this mode of operation can:

    • enable you to work on large data sets on the HPC cluster, without needing to transfer many GBs (or TBs) of data back to your workstation.
    • allow you to leverage the CPU power of the HPC cluster to speed up the processing for visualization.

    NOTE: Although it should be possible to avail oneself of GPU enabled nodes for hardware accelerated processing of graphical data, this is NOT currently supported on the Deepthought clusters.

    Within Visit, this client/server mode is controlled by "Host Profiles". The following subsection deals with setting up these profiles (and includes some standard profiles for the Deepthought clusters). After that, we discuss using the profiles for visualization tasks.

    Defining Host Profiles

    Before you can do client/server visualization with Visit, you need to set up Host Profiles. You can probably do this fairly easily by just copying one or both of our standard profiles for the Deepthought clusters to the appropriate hosts directory on your workstation. The standard profiles can be downloaded at:

    These files should go into the appropriate "hosts" directory on your workstation. For Unix-like systems, this is usually ~/.visit/hosts. On Windows systems, I believe it is something like My Documents\VisIt VISIT_VERSION\hosts. After copying the files there, you will need to restart Visit for them to be detected.
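    On a Unix-like workstation this amounts to something like the following (the profile filename is a hypothetical example; use the name of the file you actually downloaded):

    ```shell
    # Create the hosts directory if it does not already exist:
    mkdir -p "$HOME/.visit/hosts"

    # Copy the downloaded profile into it (filename is a hypothetical
    # example; the || branch just reminds you if the file is not present):
    cp host_deepthought2.xml "$HOME/.visit/hosts/" 2>/dev/null \
      || echo "place your downloaded host profile file here"

    # Verify it landed:
    ls "$HOME/.visit/hosts"
    ```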

    If you use one of these files, you can probably skip the manual configuration described below and proceed to the section on using the profiles. However, the remainder of this subsection is still useful if you wish to customize the standard profiles.

    (The following instructions are based on Visit 2.10, but things should be similar for later versions.)

    1. Start by opening the Options | Host Profiles page from the menu bar.
    2. If you copied one of the standard host profiles, they should be visible in the Hosts area to the left, and you can select one of them to edit it. Or you can click the new host button to create a new host entry. Either way, this will open the entry with fields on the right side. There are two tabs on the right, Host Settings and Launch Profiles. We deal with Host Settings first.
    3. Host Nickname is the name that will be shown to you for the host profile. I suggest something like UMD Deepthought2 Cluster.
    4. Remote hostname is the hostname to which Visit will ssh in order to start the remote Visit process. Here you should give the appropriate hostname for the cluster, e.g.
      • login.deepthought.umd.edu for the Deepthought cluster
      • login.deepthought2.umd.edu for the Deepthought2 cluster
    5. In the Hostname aliases field, you should include the pattern that will match the hostnames for specific login nodes for the cluster. E.g.:
      • login-*.deepthought.umd.edu for Deepthought
      • login-*.deepthought2.umd.edu for Deepthought2
    6. Leave both Maximum nodes and Maximum processors unchecked
    7. For Path to Visit Installation, enter the value /cell_root/software/visit for the two Deepthought clusters. This will cause it to find custom wrapper scripts for these clusters which will ensure the correct environmental variables are set to run the compute engines, etc. on these clusters.
    8. For Username, enter your username on the cluster. Remember that on Bluecrab, your username includes @umd.edu.
    9. You will probably need to click the box for Tunnel data connections through SSH. This is required if your workstation has any sort of firewall on it, which is typically the case.
    10. The other fields can be left to the defaults.
    11. Now select the Launch Profiles tab. The previous tab gave basic information about connecting to the cluster; we now provide information about how to run on the cluster. You can select an existing launch profile and edit it below, or use the "New Profile" button to create a new profile. We are going to define three profiles:
      1. serial: this runs Visit in one process on the login node.
      2. parallel (debug partition): this will run Visit in a job submitted to the debug partition. I.e., a short job, but run at somewhat higher priority for better interactive use.
      3. parallel: this will run Visit in a more generic job. You can specify the number of cores/nodes/etc.
    12. The serial launch profile is easiest. Just click the "New Profile" button, and enter its name, e.g. serial. That's it.
    13. The two parallel profiles are defined similarly. Click the "New Profile" button and enter its name. Then select the Parallel tab, and:
      1. Click the Launch parallel engine checkbox.
      2. Click the Parallel launch method checkbox and select sbatch/mpirun in the drop down (probably the last entry).
      3. For the parallel (debug partition) profile, also click the Partition/Pool/Queue checkbox and enter debug in the text box. For the generic parallel profile, you are probably best just leaving this unchecked/blank.
      4. You can adjust the Number of processors value to the desired default value. You will be able to adjust this each time you use the profile, but this will be the default value. I recommend a value of 20 for Deepthought2 and 8 for Deepthought, as this is what is typically available on a single node.
      5. For the next 4 items (Number of nodes, Bank/Account, Time limit, and Machine file), if you check the checkbox you can set a default which can be modified each time you use the profile. If left unchecked, you will not be able to modify when using the profile, and it will default to whatever sbatch decides. I would recommend checking the boxes for Number of nodes, Bank/Account and Time Limit, but typically Machine File can be left unchecked.
    14. Once you have things as you like them, click the Apply button to make them effective. If you edited anything (i.e. created new profiles or changed a profile), you should select the new/modified host profiles and use the Export host button to export them, to ensure they are saved and available in your next Visit session.
    15. Click the Dismiss button to close the Host Profiles window.

    Using the Host Profiles

    In this section we will briefly describe how to use the profiles. I am assuming you have a Host Profile for one of the Deepthought clusters, with the three launch profiles described above, and that you have access to the HPC cluster the profile is for.

    In general, using Visit in client/server mode starts with opening the data file. Just start to open a file as usual, but in the Host dropdown at the top of the file open dialog there should be an option for each of the host profiles you have defined. Select the appropriate host profile. It will likely prompt you for a password; make sure the username given, if any, is correct, and correct it if not. (If no username is given, it assumes your username on the workstation is the same as on the remote cluster.) Within a few seconds you should see a file list corresponding to your home directory on the cluster. You can then select a file as usual.

    If multiple launch profiles exist for that host, you will be given an option of choosing which profile you wish to use, and what options you wish to use with that launch profile if it supports any. If there is only a single launch profile, you obviously cannot choose a different launch profile, but a pop up will appear if there are any options for that launch profile. Otherwise, visit will just launch the profile with the defaults.

    If you just wish to use Visit on a file that resides on the HPC cluster (without copying the file to your local workstation) but do not need (or cannot use) the parallel capabilities of Visit, the serial option is the easiest, and does not take additional options. Just select it and hit OK. It may take a couple of seconds to start the remote engine, but then it should return and you can visualize your data as if it were local.

    The parallel launch options offer more power, but are a bit more complicated to use, even though Visit does a good job of hiding most of the complexity from the user. Visit generally uses data-based parallelization, which means one will generally need a parallel mesh data set to make effective use of its parallelization.

    To use one of the parallel profiles, just select it after selecting the file. The parallel (debug partition) is good for a short interactive visualization, but is limited in number of processes/nodes and to 15 minutes. However, since it uses the debug partition, it generally will spend less time waiting in the queue. The generic parallel profile is less restrictive, but depends on jobs submitted via sbatch and can have significant wait times before the job starts running.
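    For reference, the parallel profiles submit jobs much like a manual sbatch request; the debug-partition profile corresponds very roughly to something like the following (the flags shown and the script name are illustrative assumptions, as Visit's wrapper scripts handle the actual submission):

    ```
    # Illustrative only: roughly the kind of request the parallel (debug
    # partition) profile makes on your behalf. Script name is hypothetical.
    sbatch --partition=debug --time=15 --ntasks=20 --nodes=1 \
           --account=your-allocation visit-engine-job.sh
    ```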

    When you select the profile, you typically will have the opportunity to change the defaults for wall time, number of nodes, and the allocation account to which the job is to be charged. NOTE: Visit seems to assume 8 processes per node by default, so e.g. if you request 20 processes on Deepthought2, it will try to spread them over 3 nodes. I strongly advise manually setting the number of nodes appropriately. Note also that the memory of a node is split evenly over all the Visit processes on the node, so you might need to adjust the node count to use more than the minimal number of nodes in cases where memory requirements are higher.
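    The node-count arithmetic above can be sketched as follows (the values are the Deepthought2 example from this paragraph):

    ```shell
    # Visit's apparent default of 8 processes per node means a request for
    # 20 processes gets spread over ceil(20/8) nodes:
    procs=20
    per_node=8
    nodes=$(( (procs + per_node - 1) / per_node ))   # ceiling division
    echo "$nodes"   # prints 3
    ```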

    When you finish updating options and hit "OK", your Visit GUI will ssh to the login node for the cluster and submit a batch job requesting the desired number of nodes/cores. Typically you will see a pop up showing that Visit is awaiting a connection from the compute engines; this will not occur until after the batch job starts. For batch jobs submitted to the debug partition, this should typically be within a minute or two, but it will likely be significantly longer for the generic parallel profile.

    When the job starts, after 20 seconds or so the connection should be made and the pop up will go away. At this point you can use Visit as normal.

    At some point, the scheduler may terminate your compute engines (e.g. due to exceeding walltime). You should be able to continue using the GUI, and when you try to do something that requires the compute engine, a pop up will appear allowing you to start up a new launch profile.