NVIDIA H100: Order-of-Magnitude Performance Leap
NVIDIA announced the NVIDIA H100 Tensor Core GPU, based on the new NVIDIA Hopper GPU architecture, during today’s GTC keynote address. H100 carries over the major design focus of A100, strong scaling for AI and HPC workloads, with substantial improvements in architectural efficiency.
For today’s mainstream AI and HPC models, H100 with InfiniBand interconnect delivers up to 30x the performance of A100. The new NVLink Switch System interconnect targets some of the largest and most challenging computing workloads, which require model parallelism across multiple GPU-accelerated nodes to fit. These workloads receive yet another generational performance leap, in some cases tripling the performance of H100 with InfiniBand.
The new fourth-generation Tensor Cores are up to 6x faster chip-to-chip than A100’s, a combined result of the per-SM speedup, H100’s additional SM count, and its higher clocks.
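As a rough cross-check of that figure, the contributing factors can be multiplied out. The SM counts, clocks, and per-SM ratios below are approximate public spec figures used purely as assumptions, not official numbers from this announcement:

```python
# Back-of-envelope decomposition of the "up to 6x" chip-to-chip claim.
# All figures are approximate public specs, used here as assumptions.
a100_sms, h100_sms = 108, 132        # SM counts (SXM parts)
a100_clock, h100_clock = 1.41, 1.83  # boost clocks in GHz (approximate)
per_sm_speedup = 2.0                 # new Tensor Core MMA rate at matched precision
fp8_vs_fp16 = 2.0                    # FP8 doubles throughput over FP16

same_precision = per_sm_speedup * (h100_sms / a100_sms) * (h100_clock / a100_clock)
with_fp8 = same_precision * fp8_vs_fp16
print(f"~{same_precision:.1f}x at matched precision, ~{with_fp8:.1f}x with FP8")
```

Under these assumptions the matched-precision product comes out near 3x, and the FP8 format roughly doubles that, landing in the neighborhood of the quoted 6x.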
NVIDIA A100: Unprecedented Acceleration at Every Scale
The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and HPC to tackle the world’s toughest computing challenges.
Advanced Clustering Technologies offers systems that integrate this latest addition to the NVIDIA product line, which, as the engine of the NVIDIA data center platform, can efficiently scale to thousands of GPUs. Using the new Multi-Instance GPU (MIG) technology, a single A100 can be partitioned into as many as seven smaller GPU instances to accelerate workloads of all sizes.
A100’s third-generation Tensor Core technology accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market. A100 accelerates all major deep learning frameworks and over 650 HPC applications, and containerized software from NGC helps developers get up and running quickly.
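The seven-way MIG partitioning mentioned above can be sketched as simple slice bookkeeping. The profile names and slice counts below come from public MIG documentation for A100 80GB and should be treated as illustrative assumptions rather than an exhaustive list:

```python
# Hedged sketch: MIG profile bookkeeping for an A100 80GB.
# Profile names and slice counts follow public MIG documentation
# (illustrative assumptions, not an authoritative table).
PROFILES = {  # name: (compute slices, memory slices)
    "1g.10gb": (1, 1),
    "2g.20gb": (2, 2),
    "3g.40gb": (3, 4),
    "4g.40gb": (4, 4),
    "7g.80gb": (7, 8),
}

def fits_on_a100(requested):
    """Return True if the requested MIG instances fit within
    A100's 7 compute slices and 8 memory slices."""
    compute = sum(PROFILES[p][0] for p in requested)
    memory = sum(PROFILES[p][1] for p in requested)
    return compute <= 7 and memory <= 8

# Seven 1g.10gb instances use all 7 compute slices:
print(fits_on_a100(["1g.10gb"] * 7))   # True
# An eighth instance exceeds the compute-slice budget:
print(fits_on_a100(["1g.10gb"] * 8))   # False
```

This is why "up to seven" is the partitioning limit: the GPU exposes seven compute slices, and every instance consumes at least one.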

Educational Discounts on NVIDIA GPU Accelerators
Advanced Clustering Technologies is offering educational discounts on NVIDIA A100 GPU accelerators.
Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money.
Advanced Clustering’s GPU clusters consist of our innovative ACTblade compute blade products and NVIDIA GPUs. Our modular design allows for mixing and matching of GPU and CPU configurations while at the same time preserving precious rack and datacenter space.
Contact us today to learn more about the educational discounts and to determine if your institution qualifies.
NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. © 2021 NVIDIA Corporation. All rights reserved.
Additional online resources:
GPU Computing Systems for AI and HPC
Configuration 1
- CPU: 2x Intel Xeon (Sapphire Rapids), up to 60 cores each
- Memory: 16x DDR5 4800MHz DIMM sockets (max: 2 TB)
- Storage: 2x 3.5″ and 4x 2.5″ SATA/SSD/NVMe drive bays (max: 1 TB)
- Accelerators: up to 4x NVIDIA Tesla or AMD accelerators
- Connectivity: onboard 2x 10Gb NICs
- Density: 2U rackmount chassis with redundant power

Configuration 2
- CPU: 2x Intel Xeon (Sapphire Rapids), up to 60 cores each
- Memory: 32x DDR5 4800MHz DIMM sockets (max: 4 TB)
- Storage: 8x 3.5″ SATA/NVMe drive bays (max: 50 TB)
- Accelerators: up to 8x NVIDIA Tesla accelerators
- Connectivity: onboard 2x 10Gb NICs; optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE
- Density: 4U rackmount chassis with redundant power

Configuration 3
- CPU: 2x AMD EPYC (Genoa), up to 96 cores each
- Memory: 24x DDR5 4800MHz DIMM sockets (max: ~2.93 TB)
- Storage: 8x 3.5″ SATA/NVMe drive bays (max: 3 TB)
- Accelerators: up to 8x NVIDIA Tesla accelerators
- Connectivity: onboard 2x 10Gb NICs; optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE
- Density: 4U rackmount chassis with redundant power

Configuration 4
- CPU: 1x AMD EPYC (Genoa), up to 96 cores
- Memory: 12x DDR5 4800MHz DIMM sockets (max: 1.5 TB)
- Storage: 6x 3.5″ SATA/SSD/NVMe drive bays (max: 92 TB)
- Accelerators: up to 4x NVIDIA Tesla accelerators
- Connectivity: onboard 2x 1Gb NICs; optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE
- Density: 2U rackmount chassis with redundant power

Configuration 5
- CPU: 2x Intel Xeon (Sapphire Rapids), up to 60 cores each
- Memory: 16x DDR5 4800MHz DIMM sockets (max: 2 TB)
- Storage: 2x 2.5″ and 2x M.2 NVMe drive bays (max: 30 TB)
- Accelerators: up to 4x NVIDIA Tesla accelerators
- Connectivity: onboard 1x 10Gb NIC; optional: InfiniBand, OmniPath, 50GbE/100GbE/200GbE
- Density: Compute Block rackmount chassis with redundant power

Configuration 6
- CPU: 2x Intel Xeon (Ice Lake), up to 40 cores each
- Memory: 16x DDR4 3200MHz DIMM sockets (max: 4 TB)
- Storage: 8x 2.5″ SATA/SSD/NVMe drive bays (max: 64 TB)
- Accelerators: up to 8x NVIDIA Tesla, NVIDIA Quadro, or NVIDIA GTX accelerators
- Connectivity: onboard 2x 10Gb NICs; optional: 10GbE, 40GbE, InfiniBand, 100GbE
- Density: 2U rackmount chassis with redundant power

Configuration 7
- CPU: 2x Intel Xeon (Ice Lake), up to 40 cores each
- Memory: 32x DDR4 3200MHz DIMM sockets (max: 4 TB)
- Storage: 8x 2.5″ and 2x M.2 SSD/NVMe drive bays (max: 2 TB)
- Accelerators: up to 2x NVIDIA Tesla accelerators
- Connectivity: onboard 1x 10Gb NIC; optional: 10GbE, 40GbE, InfiniBand, 100GbE
- Density: 2U rackmount chassis with redundant power

Configuration 8
- CPU: 2x Intel Xeon (Ice Lake), up to 40 cores each
- Memory: 16x DDR4 3200MHz DIMM sockets (max: 2 TB)
- Storage: 2x 2.5″ and 2x M.2 NVMe drive bays (max: 16 TB)
- Accelerators: up to 4x NVIDIA Tesla accelerators
- Connectivity: onboard 1x 10Gb NIC; optional: 10GbE, 40GbE, InfiniBand, 100GbE
- Density: Compute Block rackmount chassis with redundant power

Configuration 9
- CPU: 1x AMD EPYC (Milan/Milan-X), up to 64 cores
- Memory: 8x DDR4 3200MHz DIMM sockets (max: 1 TB)
- Storage: 4x 3.5″ SATA/SSD drive bays (max: 38.72 TB)
- Accelerators: up to 4x NVIDIA Tesla accelerators
- Connectivity: onboard 2x 1Gb NICs; optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE
- Density: 2U rackmount chassis with redundant power

Configuration 10
- CPU: 2x AMD EPYC (Milan/Milan-X), up to 64 cores each
- Memory: 32x DDR4 2933MHz DIMM sockets (max: 4 TB)
- Storage: 10x 2.5″ SATA/SSD drive bays (max: 50 TB)
- Accelerators: up to 8x NVIDIA Tesla accelerators
- Connectivity: onboard 2x 10Gb NICs; optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE
- Density: 4U rackmount chassis with redundant power
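A quick way to sanity-check the memory maximums in the configurations above is sockets × per-DIMM capacity. The sketch below assumes 128 GB DIMMs throughout; that is an assumption for illustration, not a statement of which DIMM sizes each platform actually supports:

```python
# Sanity-check the listed memory maximums, assuming 128 GB DIMMs
# (an assumption; actual supported DIMM sizes vary by platform).
dimm_gb = 128
for sockets in (8, 12, 16, 32):
    total_tb = sockets * dimm_gb / 1024
    print(f"{sockets} sockets x {dimm_gb} GB = {total_tb} TB")
```

Under that assumption, 8, 12, 16, and 32 sockets give 1 TB, 1.5 TB, 2 TB, and 4 TB respectively, matching the maximums listed for those socket counts.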
NVIDIA A100 Features and Benefits
Increased Performance
The new NVIDIA Ampere architecture enables the A100 to deliver the highest absolute performance for HPC and Artificial Intelligence (AI) workloads.
Stronger Memory Performance
Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.
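That "over 2 TB/s" figure can be roughly reproduced from the memory interface as bus width × per-pin data rate. The bus width and data rate below are approximate public specs for the A100 80GB's HBM2e, used here as assumptions:

```python
# Rough check of the ">2 TB/s" bandwidth figure for A100 80GB (HBM2e).
# Bus width and per-pin data rate are approximate public specs (assumptions).
bus_width_bits = 5120          # five active HBM2e stacks x 1024 bits each
data_rate_gbps = 3.19          # per-pin data rate, Gbit/s (approximate)
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"~{bandwidth_gbs:.0f} GB/s")   # ~2042 GB/s, i.e. just over 2 TB/s
```

The product lands just above 2,000 GB/s, consistent with the claimed 2 TB/s class of bandwidth.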
Scalable Applications
NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC.
Simpler Programming
Businesses can access an end-to-end, cloud-native suite of AI and data analytics software that’s optimized, certified, and supported by NVIDIA to run on VMware vSphere with NVIDIA-Certified Systems.
Note about GPU warranties: Consumer-grade GPUs are covered by the manufacturer’s warranty only; Advanced Clustering Technologies does not provide its own warranty on consumer-grade GPUs.