MPI Over InfiniBand

To take full advantage of InfiniBand, an MPI implementation with native InfiniBand support should be used.
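Before choosing an implementation, it can help to confirm that the InfiniBand adapter and drivers are actually visible to user space. A quick check, assuming the standard libibverbs-utils and infiniband-diags packages are installed, is:

    ibv_devinfo    # lists the HCAs and their port attributes (state, link layer, firmware)
    ibstat         # summarizes port state and link rate

If these commands report no devices, or a port state other than Active, fix the fabric before troubleshooting MPI.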

Supported MPI Types

MVAPICH2, MVAPICH, and Open MPI support InfiniBand directly. Intel MPI supports InfiniBand through an abstraction layer called DAPL. Note that DAPL adds an extra layer to the communication path, so it has higher latency and lower bandwidth than the MPI implementations that use InfiniBand directly.
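As an illustration of the DAPL path, older Intel MPI releases typically select the fabric through the I_MPI_FABRICS environment variable (newer releases have moved to libfabric/OFI instead); the machine file and application names below are placeholders:

    export I_MPI_FABRICS=shm:dapl    # shared memory within a node, DAPL between nodes
    mpirun -n 16 -machinefile machines ./my_mpi_app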

Running MPI over InfiniBand

Running an MPI program over InfiniBand is essentially identical to running one over standard TCP/IP on Ethernet: the same hostnames are used in the machines file or in the queuing system.
Open MPI tries to choose the best available communication interface automatically and will fall back to TCP/IP if it fails to open the InfiniBand device. To prevent this fallback, add '--mca btl ^tcp' to your mpirun command line to exclude TCP/IP as a valid transport, as in the example below.
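As a sketch, an Open MPI launch that reuses an existing machines file and excludes the TCP/IP transport could look like this (the hostfile and application names are placeholders):

    # machines contains the same hostnames used for Ethernet runs, one per line
    mpirun -np 16 --hostfile machines --mca btl ^tcp ./my_mpi_app

With TCP/IP excluded, a failure to open the InfiniBand device causes the job to abort with an error rather than silently running over Ethernet.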