As commercial enterprises and government agencies strive to reduce expenses, increase reliability, and expand services, they are finding that separate application-specific networks and dedicated, isolated data centers consume more equipment, power, management personnel, training, and maintenance resources than a common, converged network infrastructure with consolidated storage and server resources. Numerous interconnect technologies are available for migrating to a converged data center network; choosing one that is commercially viable and will scale with business needs is critical.
Deployed in thousands of production environments throughout the world, InfiniBand combines high-speed data movement between systems with ultra-low latency, reduced power consumption, and lossless, reliable transport to support highly scalable computing and storage over a single converged fabric. InfiniBand supports interfaces of up to 120 Gbps today, with an unprecedented industry-backed roadmap to deliver higher bandwidths as future business needs demand.
While InfiniBand provides many benefits within the data center, the challenge has been how to use it to link geographically separated data centers into a single unified network infrastructure for sharing compute and storage resources. InfiniBand's link-level flow control is credit-based: a sender may transmit only against buffer credits advertised by the receiver, so once a link's round-trip delay exceeds what the port's buffering can cover, throughput collapses. Until now, the inadequate port buffering of standard InfiniBand hardware has therefore made it unsuitable for deployment between geographically distributed sites.
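To make the limitation concrete, here is a minimal back-of-the-envelope sketch (not vendor code; the buffer sizes are hypothetical examples) of how far a given amount of credit buffering can sustain full 4X QDR rate, assuming roughly 5 µs/km one-way propagation in fiber and the standard 32 Gbps effective data rate of a 40 Gbps 8b/10b-encoded QDR link:

```python
# Illustrative sketch (not vendor code): why credit-based flow control
# caps native InfiniBand reach. A sender may transmit only against
# receive credits, so port buffering must cover the bandwidth-delay
# product of the link to keep it running at full rate.

FIBER_DELAY_S_PER_KM = 5e-6      # ~5 us/km one-way propagation in fiber
QDR_4X_DATA_RATE_BPS = 32e9      # 40 Gbps signaling, 8b/10b -> 32 Gbps data

def max_full_rate_distance_km(buffer_bytes: float, rate_bps: float) -> float:
    """Longest link a given receive buffer can drive at full rate.

    The buffer must cover a full round trip: data in flight outbound
    plus the wait for returning credit updates.
    """
    rtt_budget_s = (buffer_bytes * 8) / rate_bps
    one_way_s = rtt_budget_s / 2
    return one_way_s / FIBER_DELAY_S_PER_KM

# Hypothetical per-port credit buffer sizes, not measured values:
for kb in (8, 64, 1024):
    km = max_full_rate_distance_km(kb * 1024, QDR_4X_DATA_RATE_BPS)
    print(f"{kb:>5} KB of credits -> full QDR rate up to ~{km:.2f} km")
```

With only tens of kilobytes of credits, the link stalls waiting for credit returns beyond a few hundred meters, which is why unassisted InfiniBand has been confined to the machine room.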
The Intelligent Bandwidth Exchange (IBEx™) InfiniBand product family uses Bay Microsystems' proprietary packet and transport processing technology, along with enhanced credit buffering and end-to-end flow control, to reliably extend native InfiniBand over campus, metro, and wide area networks spanning from a few miles to thousands of miles.
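Inverting the same arithmetic shows the scale of buffering that a range-extension platform must supply to keep a QDR link full over long-haul spans (again an illustrative sketch under the same fiber-delay assumption, not IBEx specifications):

```python
# Illustrative sketch, not IBEx specifications: receive buffering
# needed to keep a 4X QDR link full over long-haul spans.

FIBER_DELAY_S_PER_KM = 5e-6      # ~5 us/km one-way propagation in fiber
QDR_4X_DATA_RATE_BPS = 32e9      # effective QDR data rate after 8b/10b

def buffer_needed_mb(distance_km: float, rate_bps: float) -> float:
    """Bandwidth-delay product for the span, in megabytes."""
    rtt_s = 2 * distance_km * FIBER_DELAY_S_PER_KM
    return rate_bps * rtt_s / 8 / 1e6

for km in (100, 1000, 5000):
    mb = buffer_needed_mb(km, QDR_4X_DATA_RATE_BPS)
    print(f"{km:>5} km span -> ~{mb:,.0f} MB of credit buffering per direction")
```

At 1,000 km the bandwidth-delay product alone is roughly 40 MB per direction, orders of magnitude beyond what standard host channel adapter or switch ports provide.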
The IBEx platform provides flexible connectivity options that deliver line-rate 4X InfiniBand QDR performance (up to 40 Gbps) over most wide area network technologies, including 10/40G Ethernet, IPv4/IPv6, SONET OC-192/768 / SDH STM-64/256, ITU-T G.709 OTU2/3, and dark fiber. This enables IT managers to maintain protocol continuity beyond a single site to virtually anywhere around the globe without modifying existing applications or the local InfiniBand network. In addition to InfiniBand, IBEx supports encapsulation of multiple 1/10G Ethernet links over the same wide area network connection, allowing management and other Ethernet traffic to pass transparently without the need for additional wide area network services between sites.
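For reference, the quoted 40 Gbps figure is the 4X QDR signaling rate; the payload capacity after InfiniBand's 8b/10b line encoding is 32 Gbps, as the standard arithmetic below shows (WAN-side framing overhead varies by transport and is not modeled):

```python
# Standard InfiniBand figures; WAN-side framing overhead varies by
# transport (Ethernet, SONET/SDH, OTN) and is not modeled here.

lanes = 4                        # 4X link width
lane_signaling_gbps = 10.0       # QDR signaling rate per lane
encoding_efficiency = 8 / 10     # QDR uses 8b/10b line encoding

signaling_gbps = lanes * lane_signaling_gbps
payload_gbps = signaling_gbps * encoding_efficiency
print(f"4X QDR: {signaling_gbps:.0f} Gbps signaling, {payload_gbps:.0f} Gbps payload")
```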
The IBEx platform is designed to work with all native InfiniBand protocols, merging disparate subnets into a single unified InfiniBand fabric so that applications can sustain high-performance Remote Direct Memory Access (RDMA) transfers between data centers at near line rate over virtually any distance. This is accomplished with IBEx's fully optimized packet processing architecture: ultra-low port-to-port latency enables efficient, unencumbered cut-through data transfers without adding device latency that could degrade application performance.
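Because the device forwards in cut-through mode, the dominant latency term at distance is simply fiber propagation delay, roughly 5 µs per kilometer one way. A short sketch of that irreducible floor (illustrative distances; no device-specific port-to-port latency figure is assumed):

```python
# The irreducible latency floor over distance is fiber propagation,
# roughly 5 us per kilometer one way; no device-specific port-to-port
# latency figure is assumed here.

FIBER_DELAY_US_PER_KM = 5.0

def one_way_latency_ms(distance_km: float) -> float:
    return distance_km * FIBER_DELAY_US_PER_KM / 1000.0

for km in (10, 100, 1000):       # campus, metro, and wide area spans
    print(f"{km:>5} km -> ~{one_way_latency_ms(km):.2f} ms one-way propagation")
```

Applications therefore see essentially the speed-of-light delay of the span itself, with RDMA semantics preserved end to end.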