NVIDIA H100 NVL
Dual-GPU PCIe accelerator with 94 GB HBM3 per card and an NVLink bridge delivering 600 GB/s GPU-to-GPU bandwidth. Purpose-built to supercharge LLM inference in mainstream PCIe servers.


The GPU that launched the AI era — 80 GB HBM3, 3.35 TB/s bandwidth, 3,958 TFLOPS FP8, NVLink 900 GB/s in an SXM form factor for HGX and DGX systems.

The first PCIe GPU with HBM3e — 141 GB of memory at 4.8 TB/s. Hopper architecture with NVLink bridge, purpose-built to deploy large language models and generative AI in mainstream enterprise servers.

Hopper architecture with HBM3e — 141 GB memory, 4.8 TB/s bandwidth, up to 2x faster LLM inference vs the H100. The memory-optimized AI GPU.
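As a rough guide to what these memory capacities mean for LLM deployment, the sketch below estimates the largest model whose weights alone fit in a given amount of GPU memory. This is an illustrative back-of-the-envelope calculation, not vendor sizing guidance: it ignores KV cache, activations, and framework overhead, which reduce usable capacity in practice.

```python
def max_params_billions(memory_gb: float, bytes_per_param: float) -> float:
    """Upper bound on model parameters (billions) that fit in GPU memory,
    counting weights only (no KV cache, activations, or runtime overhead)."""
    return memory_gb * 1e9 / bytes_per_param / 1e9

# One 94 GB H100 NVL card at FP8 (1 byte per parameter):
print(max_params_billions(94, 1))    # ~94B parameters, weights only

# One 141 GB HBM3e card at FP16 (2 bytes per parameter):
print(max_params_billions(141, 2))   # ~70.5B parameters, weights only
```

This is why 141 GB of HBM3e matters for inference: a 70B-class model in FP16 can sit on a single card, leaving the NVLink bridge free for multi-GPU scaling rather than mandatory weight sharding.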