Video: Brunson Discusses Mission of Oklahoma HPC Center

Posted on April 10, 2017

In this video featured on InsideHPC.com, Dana Brunson from Oklahoma State University talks about the mission of the Oklahoma High Performance Computing Center.

The center is home to Cowboy, a supercomputer cluster built by Advanced Clustering Technologies comprising more than 250 compute nodes. Cowboy helps researchers and scientists in diverse fields including bioinformatics, engineering, geography and physics.

“By placing advanced technology in the hands of the academic population, research can be done more quickly, less expensively, and with greater certainty of success,” Brunson said.

The Oklahoma HPCC was founded in 2007 and facilitates computational and data-intensive research by students, faculty and staff. Brunson serves as director of the HPCC and is also assistant vice president for research cyberinfrastructure and an adjunct associate professor in the Computer Science Department and the Mathematics Department at Oklahoma State University (OSU).

Watch the video here: http://insidehpc.com/2017/04/cowboy-supercomputer-powers-research-oklahoma-state/

Visit the Oklahoma High Performance Computing Center online:
https://hpcc.okstate.edu/

The Cowboy cluster, acquired from Advanced Clustering Technologies, consists of:

  • 252 standard compute nodes, each with dual Intel Xeon E5-2620 “Sandy Bridge” hex-core 2.0 GHz CPUs and 32 GB of 1333 MHz RAM
  • Two “fat nodes,” each with 256 GB of RAM and an NVIDIA Tesla C2075 card
  • An aggregate peak speed of 48.8 TFLOPS, with 3048 cores and 8576 GB of RAM (see the first sketch following this list)
  • 92 TB of globally accessible high-performance disk provided by three shelves of Panasas ActivStor12. Each shelf holds 20x 2 TB drives and delivers peak speeds of 1500 MB/s read and 1600 MB/s write, for an aggregate of 4.5 GB/s read and 4.8 GB/s write.
  • Three interconnect networks: InfiniBand for message passing, Gigabit Ethernet for I/O, and an Ethernet management network. The InfiniBand fabric uses Mellanox ConnectX-3 QDR adapters in a 2:1 oversubscription, with 15x MIS5025Q switches providing both the leaf and spine layers; each leaf connects to 24 compute nodes and has 12x 40 Gb QDR uplinks to the spine. Point-to-point latency is approximately 1 microsecond. The Ethernet network includes 11 leaf gigabit switches, each connecting to 24 compute nodes and uplinked via 2x 10 Gb ports to a 64-port Mellanox MSX1016 10 Gigabit spine switch, for a 1.2:1 oversubscription (see the second sketch below).
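For readers curious how the headline numbers fall out of the per-node specifications, here is a minimal back-of-the-envelope sketch. It assumes the usual Sandy Bridge figure of 8 double-precision FLOPs per core per cycle (256-bit AVX) and counts only the CPU cores listed above; the exact accounting used for the official figures may differ slightly.

```python
# Back-of-the-envelope check of Cowboy's aggregate figures from the per-node specs.
# Assumption: 8 double-precision FLOPs/cycle per Sandy Bridge core (256-bit AVX).

STANDARD_NODES = 252       # dual hex-core E5-2620, 32 GB RAM each
FAT_NODES = 2              # dual hex-core, 256 GB RAM each
CORES_PER_NODE = 2 * 6     # two hex-core CPUs per node
CLOCK_GHZ = 2.0
FLOPS_PER_CYCLE = 8        # Sandy Bridge AVX, double precision (assumption)

total_cores = (STANDARD_NODES + FAT_NODES) * CORES_PER_NODE
total_ram_gb = STANDARD_NODES * 32 + FAT_NODES * 256
peak_tflops = total_cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0

print(f"cores:       {total_cores}")        # 3048
print(f"RAM (GB):    {total_ram_gb}")       # 8576
print(f"peak TFLOPS: {peak_tflops:.1f}")    # ~48.8

# Panasas storage: three ActivStor12 shelves at 1500 MB/s read / 1600 MB/s write each.
shelves = 3
print(f"aggregate read:  {shelves * 1500 / 1000:.1f} GB/s")   # 4.5
print(f"aggregate write: {shelves * 1600 / 1000:.1f} GB/s")   # 4.8
```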
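The oversubscription ratios quoted for the two fabrics likewise follow from the per-leaf port counts. The short sketch below reproduces them under the assumption that every link runs at its nominal line rate (40 Gb/s QDR on the InfiniBand side; 1 Gb/s down and 10 Gb/s up on the Ethernet side).

```python
# Oversubscription = total downlink bandwidth into a leaf / total uplink bandwidth out of it.

# InfiniBand leaf: 24 nodes at 40 Gb/s QDR down, 12x 40 Gb/s QDR uplinks to the spine.
ib_ratio = (24 * 40) / (12 * 40)
print(f"InfiniBand oversubscription: {ib_ratio:.1f}:1")   # 2.0:1

# Ethernet leaf: 24 nodes at 1 Gb/s down, 2x 10 Gb/s uplinks to the MSX1016 spine.
eth_ratio = (24 * 1) / (2 * 10)
print(f"Ethernet oversubscription:   {eth_ratio:.1f}:1")  # 1.2:1
```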