Ranger, a key resource on the National Science Foundation’s TeraGrid and the most powerful academic system in the world, is now even more attractive to the researchers who depend on it for advanced scientific analysis. With the integration of Spur, they can now visualize their data immediately.
“Spur is the first visualization resource of its kind to be tightly integrated with a high performance computing (HPC) system of Ranger’s magnitude,” Paul Navratil, Texas Advanced Computing Center (TACC) visualization scientist, said. “Each Spur node is more powerful than what a researcher will have on their desktop or laptop, and the remote visualization aspect allows the researcher to perform his or her analysis on a powerful machine from a remote location. Researchers no longer have to come to the graphics resources. We’re bringing the graphics resources to them.”
Spur, based on the Sun Visualization System, provides tight integration with Ranger, remote visualization access, and powerful visualization and data analysis capabilities directly to TeraGrid and Texas Higher Education research communities.
Based on state-of-the-art technology from Sun Microsystems, Advanced Micro Devices and NVIDIA, Spur began full production last October, and replaces the Maverick system as TACC’s primary remote visualization resource.
Researchers are using Spur for a variety of visualization tasks: 1) primary data analysis; 2) improvement of models and code; and 3) polished runs for publication.
“We’re also integrating Spur into our training procedures to teach people to perform scientific visualization,” Navratil said.
Madhu Pai at Stanford University is using Spur for primary data analysis.
“Spur provides the ability to visualize your results as they’re being produced, and that is very remarkable,” Pai said. “Our work is focused on detailed numerical simulations of primary break-up of fuel jets in a stationary or moving gas. Our goal is to understand how liquid fuel breaks up in the combustion chamber of internal combustion engines and gas turbines which ultimately dictates combustion efficiency and how pollutants are formed. Since these problems are so complex and computationally intensive, you want to visualize the results on the fly so that you are sure that the simulation is proceeding correctly. As the results were being produced on Ranger, I could visualize them on Spur almost immediately.”
Prior to the integration of Ranger and Spur, researchers had to transfer terabytes of data across an Internet connection to their local system or to other visualization resources. Even on a very fast Internet connection, it takes hours or even days to move this amount of data, which severely limits productivity. This transfer process also duplicates data across multiple resources, wasting bandwidth, time and disk space.
Because Spur’s master and visualization nodes are directly connected to the Sun Datacenter Switch 3456, the InfiniBand network fabric of Ranger, users can directly access Ranger’s high performance parallel file systems. This tight coupling lets users visualize terascale data sets without having to transfer them off Ranger to a separate visualization resource for post-processing.
“Ranger nodes can communicate with Spur nodes and they share their file system, so you can write data on Ranger and read it on Spur, and you’re not moving it over a wide area network or the Internet,” Navratil said. “Analyzing and visualizing data at the point of production: that’s why it’s so powerful.”
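To make that workflow concrete, the sketch below shows the kind of watch-and-render loop this shared file system enables: the simulation writes timestep files to the parallel file system, and a script running on a Spur node reads each new file in place and renders a quick-look image, with no data transfer step. The directory, file naming and array layout are illustrative assumptions, not actual Ranger or Spur conventions.

```python
# Minimal sketch of the "write on Ranger, read on Spur" pattern described above.
# Paths and file formats are hypothetical; the point is that the visualization
# script opens the simulation's output in place on the shared file system.
import glob
import os
import time

import numpy as np
import matplotlib
matplotlib.use("Agg")              # off-screen rendering on the vis node
import matplotlib.pyplot as plt

SHARED_DIR = "/scratch/project/jet_breakup"   # hypothetical shared scratch path
seen = set()

while True:                                   # poll as the simulation runs
    for path in sorted(glob.glob(os.path.join(SHARED_DIR, "step_*.npy"))):
        if path in seen:
            continue
        field = np.load(path, mmap_mode="r")  # read in place, nothing is copied
        plt.imshow(field[field.shape[0] // 2], cmap="viridis")  # mid-plane slice
        plt.colorbar(label="field value")
        plt.savefig(path.replace(".npy", ".png"), dpi=150)
        plt.close()
        seen.add(path)
    time.sleep(30)                            # wait for the next timestep to land
```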
According to Pai, the time from code execution to visualization of the resulting datasets has been reduced significantly with the introduction of Spur.
“As we move towards larger problems to investigate phenomena that have never been studied at the level of detail that we hope to achieve on Ranger,” Pai said, “availability of a visualization system like Spur is critical to achieving the project objectives in a timely manner and to the overall success of our research. I believe Spur will be an integral part of my current and future research.”
Professor Ken Jansen’s group at Rensselaer Polytechnic Institute used nearly one million hours on Ranger in 2008 and plans to use even more hours this year. In addition to mining the physics from production simulations, some of Jansen’s early use of Spur has been for local evaluation of models, algorithms and code, rapidly isolating, identifying and solving issues involved with the modeling of very complex problems.
“We’re not just running a production code,” said Jansen, a computational fluid dynamics specialist in unstructured grids and adaptive methods for turbulence simulation. “We’re trying to extend the code and improve its performance, accuracy, efficiency and ability to adapt to new physics. Because we’re making changes all of the time, we use visualization to uncover problems in the method. Spur is a major step forward over anything I’ve ever used before.”
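One simple form this visual debugging can take is comparing a snapshot from a trusted baseline run against the same timestep from the modified code and plotting where the two diverge. The sketch below illustrates the idea; the file names and array shapes are hypothetical, not Jansen’s actual workflow.

```python
# Minimal sketch: visualize the pointwise difference between two runs to see
# where a code change moved the solution. Inputs are hypothetical NumPy snapshots.
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

baseline = np.load("velocity_baseline.npy")   # snapshot from the trusted code
modified = np.load("velocity_modified.npy")   # same timestep from the changed code

diff = np.abs(modified - baseline)
print(f"max |difference| = {diff.max():.3e}") # quick scalar sanity check

# Show a 2-D field directly, or a mid-plane slice if the data are 3-D.
plane = diff if diff.ndim == 2 else diff[diff.shape[0] // 2]
plt.imshow(plane, cmap="magma")
plt.colorbar(label="|velocity difference|")
plt.savefig("difference_map.png", dpi=150)
```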
David Fuentes, a leading technologist at the M.D. Anderson Cancer Center, studies real-time laser surgery techniques that can sense how a patient is responding and make adjustments accordingly.
“I use Spur for remote, real-time visualization of finite element and anatomical MRI data,” Fuentes said. “We use the default virtual network computing (VNC) software to broadcast visualizations back and forth between Austin, Houston and San Antonio.”
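Remote VNC sessions of this kind are typically reached by forwarding the session’s port over SSH and pointing a local VNC client at the tunnel. The sketch below shows the general pattern only; the host names, port numbers and viewer command are placeholders, not TACC-specific instructions.

```python
# Minimal sketch of viewing a remote VNC desktop through an SSH tunnel.
# All names below are placeholders; consult the system documentation for the
# actual login host and the port assigned to your VNC session.
import subprocess

LOGIN_HOST = "username@login.example.edu"   # hypothetical login node
VIS_NODE = "vis-node-hostname"              # node where the VNC session runs
VNC_PORT = 5901                             # port reported by the VNC session
LOCAL_PORT = 5901

# Forward the vis node's VNC port to this workstation; -N keeps ssh tunnel-only.
tunnel = subprocess.Popen(
    ["ssh", "-N", "-L", f"{LOCAL_PORT}:{VIS_NODE}:{VNC_PORT}", LOGIN_HOST]
)

try:
    # Point any VNC client at the forwarded port (TigerVNC host::port form shown).
    subprocess.run(["vncviewer", f"localhost::{LOCAL_PORT}"], check=True)
finally:
    tunnel.terminate()                      # close the tunnel when the viewer exits
```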
Spur also helps Fuentes and his colleagues collaborate on research projects.
“It’s really nice to keep all of the data on a local repository so everyone can log into the same machine and look at the data simultaneously,” Fuentes said. “It’s a collaborative effort, and the funding agencies are looking to see this happen.”
How do I get an allocation on Spur?
Allocations on Spur are available to the Texas Higher Education and TeraGrid research communities.
Researchers from the Texas Higher Education community can apply for an allocation by selecting the “Allocations” tab on the TACC User Portal. Researchers applying through the TeraGrid should consult the TeraGrid website for the types of allocations available, deadlines for proposal submission and how to apply.
Please note the following additional allocation-related items:
- Ranger users will automatically be validated on Spur;
- New Ranger allocations will result in an automatic allocation on Spur; and
- New Spur allocations will not result in an automatic allocation on Ranger; users who want to apply only for a visualization resource may do so.
Spur Technical Specifications
Sun Fire™ X4600M2 server (master node) with:
- 8 dual-core CPUs (16 cores)
- 256 Gigabytes of Memory
- 2 NVIDIA Quadro® Plex 1000 Model IV S4 visual computing systems (VCS) (2 GPUs each)
Sun Fire X4440 server (display node) with:
- 4 quad-core CPUs (16 cores)
- 128 Gigabytes of Memory
- 2 NVIDIA Quadro Plex 1000 Model IV VCS (2 GPUs each)
6 Sun Fire X4440 servers (render nodes), each with:
- 4 quad-core CPUs (16 cores)
- 128 Gigabytes of Memory
- 1 NVIDIA Quadro Plex 2100 S4 VCS (4 GPUs)
Total system capability: 128 cores, more than 1 Terabyte of aggregate memory, and 32 GPUs.
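The totals follow directly from the per-node figures listed above, as the quick check below shows.

```python
# Arithmetic check of the system totals using the per-node figures listed above.
nodes = [
    {"count": 1, "cores": 16, "mem_gb": 256, "gpus": 4},   # master node
    {"count": 1, "cores": 16, "mem_gb": 128, "gpus": 4},   # display node
    {"count": 6, "cores": 16, "mem_gb": 128, "gpus": 4},   # render nodes
]
print(sum(n["count"] * n["cores"] for n in nodes), "cores")     # 128
print(sum(n["count"] * n["mem_gb"] for n in nodes), "GB RAM")   # 1152
print(sum(n["count"] * n["gpus"] for n in nodes), "GPUs")       # 32
```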
Please submit questions via the TACC Consulting System.