Expand your knowledge of hardware, software and supercomputing

Standard Cluster – InfiniBand Networking

This is the InfiniBand configuration for most of the HPC clusters we build.

Checking InfiniBand

If one of your machines has an InfiniBand device installed and you want to know what state the device is in, you can use the “ibstat” command. The output of “ibstat” shows a lot of information, but the two main lines you should look at are:

State: Active
Physical state: LinkUp

The “State” line can […]
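As a sketch of how this check might be scripted, the snippet below greps for the two key lines. The sample output is embedded so the script runs anywhere; on a live node you would capture the real output with status=$(ibstat) instead.

```shell
#!/bin/sh
# Minimal sketch: verify an IB port is healthy from ibstat output.
# The sample text stands in for a live "ibstat" call.
status="State: Active
Physical state: LinkUp"

if printf '%s\n' "$status" | grep -q 'State: Active' &&
   printf '%s\n' "$status" | grep -q 'Physical state: LinkUp'; then
    echo "IB port OK"
else
    echo "IB port problem"
fi
```

Any state other than Active/LinkUp (for example Down/Polling) is worth investigating before running jobs over the fabric.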

Checking and Clearing InfiniBand Errors

An easy way to check for errors on your entire cluster IB network is to run the command ‘ibcheckerrors.’ This will print any errors, which can range from a port being down (even one that was just unplugged temporarily) to transmission errors. After troubleshooting any errors you find, you can clear out the error counters with the command […]
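The check-and-clear cycle looks roughly like this. These commands come from the infiniband-diags package; exact tool availability can vary between OFED releases, and fabric-wide operations should be run as root.

```shell
# Scan the whole fabric for error counters above threshold
ibcheckerrors

# After fixing the underlying problem, reset the counters so the
# next scan starts from a clean baseline
ibclearerrors    # clears error counters fabric-wide
perfquery -R     # resets counters on a single local port
```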

MPI Over InfiniBand

To take full advantage of InfiniBand, an MPI implementation with native InfiniBand support should be used. Supported MPI types: MVAPICH2, MVAPICH, and Open MPI support InfiniBand directly. Intel MPI supports InfiniBand through an abstraction layer called DAPL. Take note that DAPL adds an extra step in the communication process and therefore has increased latency and […]
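As an example, Open MPI builds that include the openib BTL can be told to use the InfiniBand transport explicitly. The application name below is a placeholder, and on newer Open MPI releases the UCX PML replaces the openib BTL.

```shell
# Run 64 ranks over InfiniBand using Open MPI's openib BTL
# ("self" is needed so a rank can communicate with itself)
mpirun -np 64 --mca btl openib,self ./my_mpi_app
```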

InfiniBand Port States

The status for your InfiniBand Host Channel Adapter (HCA) can be found using the ‘ibstat’ command.

# ibstat
CA 'mlx4_0'
        CA type: MT4099
        Number of ports: 1
        Firmware version: 2.10.0
        Hardware version: 0
        Node GUID: 0x0002c9030031fdc0
        System image GUID: 0x0002c9030031fdc3
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 1
                LMC: 0
                SM […]

IPoIB – Using TCP/IP on an InfiniBand Network

Existing applications can take advantage of the higher bandwidth and lower latency of InfiniBand by use of IPoIB, Internet Protocol over InfiniBand. When the driver for IPoIB is loaded, virtual network interfaces are made visible to the operating system. These devices appear as if they were Ethernet devices and can be manipulated in the same […]
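Bringing up an IPoIB interface looks much like configuring an Ethernet device. The interface name ib0 and the address below are example values; pick a subnet that fits your cluster's addressing plan.

```shell
# Assign an address to the first IPoIB interface and bring it up
ip addr add 192.168.100.10/24 dev ib0
ip link set ib0 up

# Verify the interface state and address
ip addr show ib0
```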

InfiniBand Cable and Port Types

QSFP

QSFP cables and ports are used for DDR (20 Gbps), QDR (40 Gbps), and FDR (56 Gbps) InfiniBand links.

QSFP Cable
The connector on a QSFP cable is long and narrow. The connector slides into the port.

QSFP Port
QSFP ports are recessed openings. The QSFP cable slides into the port.

CX4
CX4 cables […]

Drivers: Distro vs OFED

Like all computer hardware, InfiniBand adapters need drivers in order to be used by the operating system. Most modern Linux distributions provide the kernel drivers, libraries, and support programs needed to have a functioning InfiniBand adapter. While functional, these may not be the best choice in all cases. When a new InfiniBand card, or firmware […]
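Two quick checks show which stack a node is running. Note the assumptions here: ofed_info is only present when OFED is installed, and the kernel module name depends on the adapter (mlx4_core is an example for the ConnectX family).

```shell
# Prints the OFED release string if Mellanox/OpenFabrics OFED is installed
ofed_info -s

# Shows the driver version shipped with the running kernel/distro
modinfo mlx4_core | grep -i version
```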

InfiniBand Types and Speeds

Since its release, InfiniBand has been made in five speeds and has used two types of connectors.

FDR
FDR InfiniBand provides a 56 Gbps link. The data encoding for FDR is different from the other InfiniBand speeds: for every 66 bits transmitted, 64 bits are data. This is called 64b/66b encoding. This provides actual […]
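The encoding overhead is easy to quantify: speeds up through QDR use 8b/10b encoding (8 data bits per 10 bits on the wire), while FDR's 64b/66b keeps 64 of every 66 bits. A quick back-of-the-envelope check:

```shell
# Usable bandwidth after encoding overhead (illustrative arithmetic)
awk 'BEGIN {
    printf "QDR: %.1f of 40 Gbps usable (8b/10b)\n",  40 * 8  / 10
    printf "FDR: %.1f of 56 Gbps usable (64b/66b)\n", 56 * 64 / 66
}'
```

So although FDR's nominal signaling rate is only 40% higher than QDR's, its usable data rate is roughly 70% higher once the lighter encoding is accounted for.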

Use our Breakin stress test and diagnostics tool to pinpoint hardware issues and component failures.
Check out our product catalog and use our Configurator to plan your next system and get a price estimate.

Request a Consultation from our team of HPC Experts

Would you like to speak to one of our HPC experts? We are here to help you. Submit your details, and we'll be in touch shortly.
