Find Your Product
QuantaGrid S74G-2U
Breakthrough accelerated performance for giant-scale AI-HPC applications
- Introducing the first-generation NVIDIA® MGX™ architecture with modular infrastructure
- Powered by NVIDIA® Grace™ Hopper™ Superchip
- Coherent memory between CPU and GPU with NVLink®-C2C interconnect
- Optimized for memory-intensive inference and HPC performance
- Arm SystemReady
QuantaGrid D43N-3U
Optimized Accelerated Server
- Flexible accelerator card configuration optimized for both compute- and graphics-intensive workloads
- Up to 128 CPU cores with 8TB memory capacity to feed high-throughput accelerator cards
- Up to 2x HDR/200GbE networking for cluster computing
- Easy-maintenance design for minimal downtime
QuantaGrid D74A-7U
Accelerated Parallel Computing Performance for the Most Extreme AI-HPC Workloads
- Multi-GPU server for HPC and AI training (e.g., LLMs, NLP)
- Powered by 2x 4th Gen AMD EPYC™ 9004 Series processors, with compatibility for next-generation AMD EPYC™ processors
- NVIDIA HGX architecture with flexible support for 8x NVIDIA H100/H200 GPUs or 8x AMD Instinct™ MI300X GPUs
- 18x SFF all-NVMe drive bays for GPUDirect Storage and boot drive
- 10x OCP NIC 3.0 TSFF for GPUDirect RDMA
- Modularized design for easy serviceability
QuantaGrid D54U-3U
Endless Flexibility for Diverse Applications
- Powered by 5th/4th Gen Intel® Xeon® Scalable Processors
- Powered by NVIDIA® GPUs
- PCIe 5.0 & DDR5 platform ready
- Up to 4x double-width or 8x single-width accelerators
- Supports both active- and passive-cooled accelerators
- Up to 10x PCIe 5.0 NVMe drives to speed up data loading
- PCIe 5.0 400Gb networking for scale-out
- Enhanced serviceability with tool-less, hot-plug designs
QuantaGrid D74H-7U
Advanced Performance for the Most Extreme AI-HPC Workloads
- 2x top-bin 5th/4th Gen Intel® Xeon® Processors
- 18x SFF all-NVMe drive bays for GPUDirect Storage and boot drive
- 10x OCP NIC 3.0 TSFF for GPUDirect RDMA
- 8x NVIDIA Hopper H100 SXM5 GPU modules on an HGX baseboard
QuantaGrid D52G-4U
An all-in-one box that prevails over AI and HPC challenges
- Up to 8x NVIDIA® Tesla® V100 with NVLink™, supporting up to 300GB/s GPU-to-GPU communication
- Support for up to 10x dual-width 300W GPUs or 16x single-width 75W GPUs
- Diverse GPU topologies to conquer any type of parallel computing workload
- Up to 4x 100Gb/s high-bandwidth RDMA-enabled networking for efficient scale-out
- 8x NVMe drives to accelerate deep learning