MPI Over InfiniBand
To take full advantage of InfiniBand, an MPI implementation with native InfiniBand support should be used.
Supported MPI Types
MVAPICH2, MVAPICH, and Open MPI support InfiniBand directly. Intel MPI supports InfiniBand through an abstraction layer called DAPL. Note that DAPL adds an extra step to the communication path and therefore has higher latency and lower bandwidth than the MPI implementations that use InfiniBand directly.
Running MPI over InfiniBand
Running an MPI program over InfiniBand is identical to running one using standard TCP/IP over Ethernet. The same hostnames are used in the machines file or in the queuing system.
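As a sketch of the point above, assuming Open MPI's mpirun and hypothetical hostnames node01 and node02, a launch over InfiniBand looks exactly like one over Ethernet:

```shell
# Machines file listing the same hostnames you would use for TCP/IP runs
# (node01 and node02 are hypothetical examples)
cat > machines.txt <<'EOF'
node01
node02
EOF

# Launch as usual; an InfiniBand-native MPI implementation selects the
# InfiniBand interface itself. Command shown for illustration only
# (./my_mpi_app is a placeholder):
# mpirun -np 8 --hostfile machines.txt ./my_mpi_app
```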
Open MPI tries to intelligently choose which communication interface to use and will fall back to TCP/IP if opening the InfiniBand device fails. To prevent this fallback, add "--mca btl ^tcp" to your command line to exclude TCP/IP as a valid communication interface.
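A minimal sketch of such a launch line, assuming Open MPI's mpirun and a hypothetical machines file and program:

```shell
# The leading "^" is Open MPI's exclusion operator for MCA component
# lists, so "btl ^tcp" means "any BTL component except tcp". With TCP
# excluded, the job aborts rather than silently falling back when the
# InfiniBand device cannot be opened.
# Command shown for illustration only (machines.txt and ./my_mpi_app
# are placeholders):
# mpirun -np 8 --hostfile machines.txt --mca btl ^tcp ./my_mpi_app
```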