InfiniBand is a high-bandwidth, low-latency network interconnect that has gained substantial market share in the High Performance Computing (HPC) community. It is a popular and widely used I/O fabric among HPC customers: universities and laboratories; life sciences; biomedical; oil and gas (seismic, reservoir, and modelling applications); computer-aided design and engineering; enterprise Oracle; and financial applications.
How Does It Help an End User?
InfiniBand was designed to meet the evolving needs of the high performance computing market. Computational science depends on InfiniBand to deliver:
Supports host connectivity of 40 Gbps with Quad Data Rate (QDR) and 56 Gbps with Fourteen Data Rate (FDR).
Accelerates the performance of HPC and enterprise computing applications by providing ultra-low latencies.
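The headline rates above are raw signaling rates; the usable data rate is lower because of line encoding. As a worked illustration (the per-lane rates and encodings below are the standard 4x link parameters, but the function is just illustrative arithmetic):

```python
# Worked example: effective data rates of 4x InfiniBand links after
# subtracting line-encoding overhead. QDR signals at 10 Gbps per lane with
# 8b/10b encoding (8 data bits per 10 line bits); FDR signals at 14.0625 Gbps
# per lane with the more efficient 64b/66b encoding.

def effective_rate_gbps(lanes, signal_rate_gbps, data_bits, line_bits):
    """Usable data rate of a link after line-encoding overhead."""
    return lanes * signal_rate_gbps * data_bits / line_bits

qdr = effective_rate_gbps(lanes=4, signal_rate_gbps=10.0, data_bits=8, line_bits=10)
fdr = effective_rate_gbps(lanes=4, signal_rate_gbps=14.0625, data_bits=64, line_bits=66)

print(f"QDR 4x: {qdr:.1f} Gbps usable")   # 32.0 Gbps of the 40 Gbps raw rate
print(f"FDR 4x: {fdr:.1f} Gbps usable")   # ~54.5 Gbps of the 56 Gbps raw rate
```

The jump from 8b/10b to 64b/66b encoding is why FDR delivers proportionally more usable bandwidth than its raw rate alone suggests.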
Superior Cluster Scaling
Point-to-point latency remains low as node and core counts scale (<1.2 µs), enabling excellent communication/computation overlap among nodes in a cluster.
InfiniBand supports reliable transport and Remote Direct Memory Access (RDMA) between interconnected hosts, allowing data to move directly between application buffers without involving the remote CPU, thereby increasing efficiency.
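The one-sided nature of RDMA can be sketched in miniature with POSIX shared memory. This is an analogy only, not the real API: an actual InfiniBand program would use the verbs interface (e.g. libibverbs' `ibv_reg_mr` to register memory and `ibv_post_send` to post an RDMA write). Here a second attachment to the same region plays the role of the remote adapter writing directly into pre-registered memory:

```python
# Illustrative analogy for RDMA's one-sided semantics using shared memory.
# The "target" registers a buffer; the "initiator" writes into it directly,
# and the target's CPU never copies the payload.
from multiprocessing import shared_memory

# Target side: "register" a memory region (analogue of ibv_reg_mr).
region = shared_memory.SharedMemory(create=True, size=64)

# Initiator side: attach to the same region by name and write into it
# directly (analogue of posting an RDMA write).
initiator = shared_memory.SharedMemory(name=region.name)
initiator.buf[:5] = b"hello"
initiator.close()

# The data is simply present in the target's buffer -- no receive call,
# no intermediate copy.
data = bytes(region.buf[:5])
print(data)  # b'hello'

region.close()
region.unlink()
```

The efficiency win in real RDMA comes from exactly this property: the receiving host's CPU is not in the data path, so cycles stay available for computation.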
Fabric Consolidation and Energy Savings
InfiniBand can consolidate networking, clustering, and storage data over a single fabric, which significantly lowers overall power, real estate, and management overhead in data centers. Enhanced Quality of Service (QoS) capabilities support running and managing multiple workloads and traffic classes.
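Consolidating traffic classes onto one link depends on class-based scheduling. The sketch below is a toy model, loosely analogous to InfiniBand's service levels mapped onto virtual lanes; the class names and priority values are illustrative, not from the specification:

```python
# Toy sketch of class-based link scheduling: higher-priority traffic classes
# are dequeued ahead of bulk traffic sharing the same physical link.
import heapq
from itertools import count

PRIORITY = {"clustering": 0, "storage": 1, "networking": 2}  # lower = served first
order = count()  # tie-breaker preserving FIFO order within a class

link_queue = []
for traffic_class, payload in [
    ("networking", "web response"),
    ("clustering", "MPI message"),
    ("storage", "block write"),
]:
    heapq.heappush(link_queue, (PRIORITY[traffic_class], next(order), traffic_class, payload))

# Latency-sensitive clustering traffic drains first despite arriving last.
served = [heapq.heappop(link_queue)[2] for _ in range(len(link_queue))]
print(served)  # ['clustering', 'storage', 'networking']
```

A strict-priority queue like this is the simplest model; real fabrics add arbitration weights so lower classes are never starved.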
Data Integrity and Reliability
InfiniBand provides the highest levels of data integrity by performing Cyclic Redundancy Checks (CRCs) at each fabric hop and end-to-end across the fabric to avoid data corruption. To meet the needs of mission-critical applications and high levels of availability, InfiniBand provides fully redundant, lossless I/O fabrics with automatic path failover and link-layer multi-pathing.
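The principle behind those checks can be shown with any CRC; the sketch below uses Python's `zlib.crc32` as a stand-in for InfiniBand's packet CRCs (the fabric actually carries both a per-hop CRC and an end-to-end CRC, which differ from this one in width and coverage):

```python
# Minimal sketch of CRC-based integrity checking, with zlib.crc32 standing in
# for the packet CRCs an InfiniBand fabric verifies at each hop and end-to-end.
import zlib

payload = b"mission critical data"
crc = zlib.crc32(payload)  # computed at the sender, carried with the packet

# An uncorrupted packet verifies at the receiver...
assert zlib.crc32(payload) == crc

# ...while even a single changed byte in flight yields a mismatch, so the
# corrupted packet can be dropped instead of silently delivered.
corrupted = b"mission critical dat0"
detected = zlib.crc32(corrupted) != crc
print(detected)  # True
```

Checking at every hop localizes the faulty link, while the end-to-end check catches corruption introduced inside a switch itself.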