InfiniBand Cards

The superior design of Mellanox adapter cards delivers the best performance on fabrics built from fast InfiniBand and 10/40 Gigabit Ethernet. The ConnectX-3 series, Mellanox's fifth-generation InfiniBand adapter cards, leads the industry in performance, throughput, and lowest latency.

Product Description


Mellanox InfiniBand Host Channel Adapters (HCA)

Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest-performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.

World-Class Performance

Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement, such as RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention, removing that overhead from the host. Application acceleration with CORE-Direct™ and GPU communication acceleration brings further levels of performance improvement. The advanced acceleration technology in Mellanox InfiniBand adapters enables higher cluster efficiency and scalability to tens of thousands of nodes.

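Offloaded RDMA is normally driven through the OpenFabrics verbs API. The following is a minimal sketch rather than a complete program: it assumes a queue pair that has already been connected and a memory region that has already been registered, and the remote address and rkey are placeholders that would normally be exchanged out of band.

```c
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Sketch: post a one-sided RDMA write on an already-connected queue pair.
 * The HCA moves the payload directly from local_buf into the remote buffer,
 * so neither CPU touches the data path.  Link with -libverbs. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *local_buf, uint32_t len,
                           uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,  /* local source buffer */
        .length = len,
        .lkey   = mr->lkey,              /* key of the registered local region */
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: no remote CPU involvement */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* ask for a completion on the local CQ */
    wr.wr.rdma.remote_addr = remote_addr;        /* advertised by the peer out of band */
    wr.wr.rdma.rkey        = rkey;

    return ibv_post_send(qp, &wr, &bad_wr);
}
```

The completion is later reaped from the associated completion queue with ibv_poll_cq; the host CPU only posts the descriptor and checks for the completion, while the adapter performs the transfer.
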
I/O Virtualization

Mellanox adapters utilizing Virtual Intelligent Queuing (Virtual-IQ) technology with SR-IOV provide dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization on InfiniBand gives data center managers better server utilization and LAN and SAN unification while reducing cost, power, and cable complexity.

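As an illustration of how SR-IOV virtual functions are commonly brought up on Linux, the sketch below writes a VF count to the generic PCI sysfs attribute. The interface name ib0 is a placeholder, whether this attribute is exposed depends on the kernel and driver version, and older mlx4-era drivers instead take a num_vfs module parameter.

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: request 4 SR-IOV virtual functions from the physical function that
 * backs the (placeholder) netdev "ib0".  Each VF can then be passed through
 * to a virtual machine.  Requires root and an SR-IOV-capable driver. */
int main(void)
{
    const char *path = "/sys/class/net/ib0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");

    if (f == NULL) {
        perror("sriov_numvfs");
        return EXIT_FAILURE;
    }
    fprintf(f, "4\n");           /* the driver creates 4 VFs; writing 0 removes them */
    if (fclose(f) != 0) {
        perror("sriov_numvfs");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```
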
Storage Accelerated

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Mellanox adapters support the SCSI, iSCSI, and NFS protocols.

Software Support

All Mellanox adapters are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. The adapters support OpenFabrics-based RDMA protocols and software, and the stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are compatible with configuration and management tools from OEMs and operating system vendors.

Virtual Protocol Interconnect® (VPI)

VPI flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. Each port can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over InfiniBand (EoIB) and RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

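Because a VPI port reports its current personality through the verbs API, a short self-contained check might look like the sketch below: it lists the RDMA devices found on the host and prints whether each port is configured as InfiniBand or Ethernet (RoCE). Device names and port numbering follow the standard libibverbs conventions.

```c
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

/* Sketch: enumerate the RDMA devices on the host and print whether each
 * port is currently running InfiniBand or Ethernet (RoCE), which is how a
 * VPI port's mode is visible to software. */
int main(void)
{
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);

    if (list == NULL) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        struct ibv_device_attr dev_attr;

        if (ctx == NULL)
            continue;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;

                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                printf("%s port %d: %s\n",
                       ibv_get_device_name(list[i]), port,
                       port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet (RoCE)" : "InfiniBand");
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```

Build with, for example, gcc -o vpi_ports vpi_ports.c -libverbs on a host with the rdma-core/OFED stack installed.
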
ConnectX-3

Mellanox's industry-leading ConnectX-3 InfiniBand adapters provide the highest-performing and most flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s of throughput across the PCI Express 3.0 host bus, achieves transaction latencies below 1 microsecond, and can deliver more than 90 million MPI messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. ConnectX-3 maximizes network efficiency, making it ideal for HPC or converged data centers operating a wide range of applications.


Connect-IB™

Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.


Complete End-to-End 56Gb/s InfiniBand Networking

ConnectX-3 adapters are part of Mellanox's full FDR 56Gb/s InfiniBand end-to-end portfolio for data centers and high-performance computing systems, which includes switches and cables. Mellanox's SwitchX family of FDR InfiniBand switches and Unified Fabric Management software incorporate advanced tools that simplify network management and installation, and provide the capabilities needed for the highest scalability and future growth. Mellanox's line of FDR copper and fiber cables ensures the highest interconnect performance. With a Mellanox end-to-end solution, IT managers can be assured of the highest-performing, most efficient network fabric.


InfiniBand Adapter Cards

Model          Speed   Ports    PCI-E   Encoding   Per-lane signaling rate   FEC
MHQH19B-XTR    QDR     Single   Gen2    8b/10b     10Gb/s                    Not supported
MHQH29C-XTR    QDR     Dual     Gen2    8b/10b     10Gb/s                    Not supported
MCX353A-QCBT   QDR     Single   Gen3    8b/10b     10Gb/s                    Not supported
MCX354A-QCBT   QDR     Dual     Gen3    8b/10b     10Gb/s                    Not supported
MCX353A-TCBT   FDR10   Single   Gen3    64b/66b    10Gb/s                    Supported
MCX354A-TCBT   FDR10   Dual     Gen3    64b/66b    10Gb/s                    Supported
MCX353A-FCBT   FDR     Single   Gen3    64b/66b    14Gb/s                    Supported
MCX354A-FCBT   FDR     Dual     Gen3    64b/66b    14Gb/s                    Supported

OS Support (all models): RHEL, SLES, Windows 2003/2008, FreeBSD, VMware ESX 3.5, vSphere 4.0/4.1
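
For reference, the speed grades in the table combine as follows, assuming the standard 4x port width: the usable data rate is roughly the per-lane signaling rate times four lanes, scaled by the encoding efficiency (the nominal FDR lane rate is 14.0625Gb/s, which is where the quoted 56Gb/s raw port rate comes from).

\text{QDR:}  \quad 4 \times 10\ \text{Gb/s} \times \tfrac{8}{10}  = 32\ \text{Gb/s}
\text{FDR10:}\quad 4 \times 10\ \text{Gb/s} \times \tfrac{64}{66} \approx 38.8\ \text{Gb/s}
\text{FDR:}  \quad 4 \times 14.0625\ \text{Gb/s} \times \tfrac{64}{66} \approx 54.5\ \text{Gb/s}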