A customer recently asked, “When adding a new InfiniBand switch to an existing fabric, should the firmware on the existing switches be upgraded to the version of the firmware on the new switch before connecting the new switch?” It is not required for all switches in an InfiniBand network to have matching firmware. Since adding […]
This is the InfiniBand configuration for most of the HPC clusters we build.
If one of your machines has an InfiniBand device installed and you want to know what state the device is in, you can use the “ibstat” command. The output of “ibstat” shows a lot of information, but the two main lines you should look at are “State: Active” and “Physical state: LinkUp”. The “State” line can […]
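Those two lines can be turned into a script-friendly health check. A minimal sketch: the sample text below stands in for real `ibstat` output so the parsing logic is visible without live hardware; on an actual node you would pipe `ibstat` itself through the same awk filters.

```shell
# Sample of the two key ibstat lines (illustrative; replace with
# real `ibstat` output on a live node).
sample_output='State: Active
Physical state: LinkUp'

# Extract the value after each label.
state=$(printf '%s\n' "$sample_output" | awk -F': ' '/^State:/ {print $2}')
phys=$(printf '%s\n' "$sample_output" | awk -F': ' '/^Physical state:/ {print $2}')

if [ "$state" = "Active" ] && [ "$phys" = "LinkUp" ]; then
    echo "Port is up and active"
else
    echo "Port problem: State=$state Physical=$phys"
fi
```

This prints "Port is up and active" for the sample; any other combination (e.g. "Initializing" or "Polling") falls through to the diagnostic branch.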
An easy way to check for errors across your entire cluster IB network is to run the command ‘ibcheckerrors.’ This will report any errors, which can range from a port being down (even just unplugged temporarily) to transmission errors. After troubleshooting any errors you find, you can clear out the error counters with the command […]
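The check-then-clear cycle can be sketched as below. `ibcheckerrors` and its counter-reset companion `ibclearerrors` come from the standard infiniband-diags package; the guard keeps the script well-behaved on nodes where those tools are not installed.

```shell
# Guarded sketch of the check/clear cycle (assumes the
# infiniband-diags utilities; harmless on nodes without them).
if command -v ibcheckerrors >/dev/null 2>&1; then
    ibcheckerrors || true  # scan the fabric and report error counters
    # ...troubleshoot anything reported above, then reset the
    # counters so the next scan starts from zero:
    # ibclearerrors
    checked=yes
else
    echo "infiniband-diags not installed; skipping fabric check"
    checked=no
fi
```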
To take full advantage of InfiniBand, an MPI implementation with native InfiniBand support should be used. Supported MPI Types MVAPICH2, MVAPICH, and Open MPI support InfiniBand directly. Intel MPI supports InfiniBand through an abstraction layer called DAPL. Take note that DAPL adds an extra step in the communication process and therefore has increased latency and […]
The status for your InfiniBand Host Channel Adapter (HCA) can be found using the ‘ibstat’ command.

# ibstat
CA ‘mlx4_0’
        CA type: MT4099
        Number of ports: 1
        Firmware version: 2.10.0
        Hardware version: 0
        Node GUID: 0x0002c9030031fdc0
        System image GUID: 0x0002c9030031fdc3
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 1
                LMC: 0
                SM […]
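When a script needs a single field from that report, awk does the job. A small sketch, using a heredoc copy of the sample output above so it runs without hardware; on a live node, feed `ibstat` straight into the same awk filter.

```shell
# Extract the firmware version from ibstat-style output.
# The heredoc mirrors the sample above; replace it with `ibstat`
# itself on a real node.
fw=$(awk -F': ' '/Firmware version:/ {print $2}' <<'EOF'
CA 'mlx4_0'
        CA type: MT4099
        Number of ports: 1
        Firmware version: 2.10.0
        Port 1:
                State: Active
                Rate: 40
EOF
)
echo "HCA firmware: $fw"
```

The same pattern works for any labeled line (Rate, Base lid, Node GUID) by changing the match string.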
Existing applications can take advantage of the higher bandwidth and lower latency of InfiniBand by use of IPoIB, Internet Protocol over InfiniBand. When the driver for IPoIB is loaded, virtual network interfaces are made visible to the operating system. These devices appear as if they were Ethernet devices and can be manipulated in the same […]
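Bringing up such an interface looks just like configuring an Ethernet port. A minimal sketch, assuming the kernel module is `ib_ipoib` and the device appears as `ib0` (the addresses are placeholders); shown as comments because the commands need root and live hardware:

```shell
# IPoIB setup sketch (run as root on a node with an HCA):
#   modprobe ib_ipoib                     # load the IPoIB driver
#   ip addr add 192.168.100.10/24 dev ib0 # assign an IP (example address)
#   ip link set ib0 up                    # bring the interface up
#   ping -c 1 192.168.100.11              # reach a peer node over IPoIB
```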
QSFP QSFP cables and ports are used for DDR (20 Gbps), QDR (40 Gbps), and FDR (56 Gbps) InfiniBand links. QSFP Cable The connector on a QSFP cable is long and narrow. The connector slides into the port. QSFP Port QSFP ports are recessed openings. The QSFP cable slides into the port. CX4 CX4 cables […]
Like all computer hardware, InfiniBand adapters need drivers in order to be used by the operating system. Most modern Linux distributions provide the kernel drivers, libraries, and support programs needed to have a functioning InfiniBand adapter. While functional, these may not be the best choice in all cases. When a new InfiniBand card, or firmware […]
Since its release, InfiniBand has been made in 5 speeds and has used two types of connectors. FDR FDR InfiniBand provides a 56 Gbps link. The data encoding for FDR is different from the other InfiniBand speeds: for every 66 bits transmitted, 64 bits are data. This is called 64b/66b encoding. This provides actual […]
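The encoding overhead can be computed directly: with 64b/66b, 64 of every 66 bits on the wire carry data, while the earlier speeds (SDR, DDR, QDR) use 8b/10b encoding, where only 8 of every 10 bits are data. A quick check using the nominal link rates:

```shell
# Effective data rates after encoding overhead.
# FDR: 56 Gbps link, 64b/66b encoding -> 64/66 of the bits are data.
# QDR: 40 Gbps link, 8b/10b encoding  ->  8/10 of the bits are data.
fdr=$(awk 'BEGIN { printf "%.1f", 56 * 64 / 66 }')
qdr=$(awk 'BEGIN { printf "%.1f", 40 * 8 / 10 }')
echo "FDR effective data rate: ${fdr} Gbps"   # ~54.3 Gbps
echo "QDR effective data rate: ${qdr} Gbps"   # 32.0 Gbps
```

This is why FDR's real-world advantage over QDR is larger than the 56-vs-40 headline numbers suggest: far less of the link is spent on encoding overhead.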