
Multihomed MPI communication using OpenMPI

Today we tested something funny. We have an HS21 BladeCenter with two Cisco switches as I/O modules. The problem we faced was that the uplink speed between the BladeCenters was at most 4 Gb/s, because each Cisco switch had only four external ports and there was no way to build a trunk between the two I/O modules.



So we wanted to see whether MPI can use multiple interfaces on its own, and it seems it can. The SGE batch system uses the eth0 network for internal cluster communication. The OpenMPI documentation states that the node view and the network view are completely different.
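
For reference, OpenMPI's TCP transport can be steered onto specific interfaces with MCA parameters such as btl_tcp_if_include (or btl_tcp_if_exclude) passed to mpirun. Below is a rough sketch of the kind of ping-pong program one could use to check what aggregate bandwidth the TCP BTL actually achieves over the selected interfaces; the host names (blade01, blade02) and the interface list are only placeholders for illustration, not the exact test we ran.

/* mhtest.c - minimal MPI ping-pong sketch to exercise the TCP BTL.
 *
 * Build: mpicc -O2 -o mhtest mhtest.c
 * Run  : mpirun --mca btl tcp,self \
 *               --mca btl_tcp_if_include eth0,eth1 \
 *               -np 2 -host blade01,blade02 ./mhtest
 *
 * (host names and interface list are examples, adjust to your setup)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (32 * 1024 * 1024)  /* 32 MB per message      */
#define ITERS     20                  /* round trips to average */

int main(int argc, char **argv)
{
    int rank, size, i;
    char *buf;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(MSG_BYTES);
    if (buf == NULL)
        MPI_Abort(MPI_COMM_WORLD, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();

    /* rank 0 and rank 1 bounce a large buffer back and forth */
    for (i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    t1 = MPI_Wtime();

    if (rank == 0) {
        /* two transfers of MSG_BYTES per iteration (there and back) */
        double gbits = (2.0 * ITERS * MSG_BYTES * 8.0) / 1e9;
        printf("aggregate ~%.2f Gbit/s over %.2f s\n",
               gbits / (t1 - t0), t1 - t0);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

With btl_tcp_if_include listing both interfaces, Open MPI will stripe traffic across them, which is what makes a multihomed setup interesting when the per-switch uplink is the bottleneck.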


Question: should the eth1 interfaces on the blades reside on the same subnet or not? (Surely not, because of routing issues, but we wanted to test that anyway.)


So, on to the test: