We recommend placing orders as soon as possible to minimize wait times and price increases caused by global supply chain issues.

NVIDIA A100:
Unprecedented Acceleration at Every Scale

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics and HPC to tackle the world’s toughest computing challenges. 

Advanced Clustering Technologies offers systems that integrate this latest addition to the NVIDIA product line. As the engine of the NVIDIA data center platform, the A100 can efficiently scale up to thousands of GPUs, and with Multi-Instance GPU (MIG) technology it can be partitioned into as many as seven smaller GPUs to accelerate workloads of all sizes.
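
As a rough sketch of how MIG partitioning is typically driven, the Python snippet below wraps the standard nvidia-smi MIG commands. The GPU index, the 1g.5gb profile name, and the instance count are illustrative assumptions; available profiles depend on the driver version, and the commands require administrative privileges.

```python
# Sketch: enable MIG on GPU 0 and carve it into seven small instances.
# Assumes an A100 with a recent NVIDIA driver and root privileges; the
# "1g.5gb" profile name is an example and varies by driver version.
import subprocess

def smi(*args):
    """Run an nvidia-smi command and return its text output."""
    result = subprocess.run(["nvidia-smi", *args],
                            check=True, capture_output=True, text=True)
    return result.stdout

# Enable MIG mode on GPU 0 (a GPU reset may be needed for it to take effect).
print(smi("-i", "0", "-mig", "1"))

# List the GPU instance profiles this driver offers on this card.
print(smi("mig", "-lgip"))

# Create seven 1g.5gb GPU instances plus their compute instances (-C).
print(smi("mig", "-cgi", ",".join(["1g.5gb"] * 7), "-C"))

# Confirm the partitions now appear as separate MIG devices.
print(smi("-L"))
```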

A100’s third-generation Tensor Core technology accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market. The A100 accelerates all major deep learning frameworks and more than 650 HPC applications, and containerized software from NGC helps developers get up and running quickly.
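
As an illustration of how a framework picks up those Tensor Core precision modes, the PyTorch sketch below enables TF32 and runs one automatic mixed-precision training step. PyTorch is shown purely as an example, and the model and tensor sizes are placeholders rather than a benchmark.

```python
# Sketch: exercise the A100's TF32 and FP16 Tensor Core paths in PyTorch.
# The tiny model and tensor sizes are placeholders, not a benchmark.
import torch

# TF32 lets FP32 matmuls and convolutions run on Tensor Cores on Ampere.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(1024, 1024).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 1024, device=device)
target = torch.randn(64, 1024, device=device)

# Automatic mixed precision: FP16 where it is safe, FP32 where it is not.
with torch.cuda.amp.autocast(enabled=(device == "cuda")):
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
print(f"loss: {loss.item():.4f}")
```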

Download the NVIDIA A100 data sheet.

Yes! I want to hear more about GPUs.

GPU Computing with NVIDIA Tesla: Educational Discounts Available

Advanced Clustering Technologies, an NVIDIA Preferred Solution Provider, is offering educational discounts on NVIDIA A100 GPU accelerators.

Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money.

Advanced Clustering’s GPU clusters consist of our innovative ACTblade compute blade products and NVIDIA GPUs. Our modular design allows for mixing and matching of GPU and CPU configurations while preserving precious rack and data center space.

Contact us today to learn more about the educational discounts and to determine if your institution qualifies.

NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. © 2021 NVIDIA Corporation. All rights reserved.

GPU Computing Systems

ACTserv x2380c

Our ACTserv x2380c is a perfect balance of 2x Xeon SP CPUs, 8x GPUs, and storage.

  • CPU

    2x up to 40 core Intel Xeon (Ice Lake)

  • MEMORY

    16x DDR4 3200MHz DIMM sockets (Max: 4 TB)

  • STORAGE

    8x 2.5″ SATA, SSD, NVMe drive bays (Max: 64 TB)

  • ACCELERATORS

    Max 8x NVIDIA Tesla, NVIDIA Quadro, NVIDIA GTX accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv x2310c

2U, dual processor with 2x GPUs

  • CPU

    2x up to 40 core Intel Xeon (Ice Lake)

  • MEMORY

    32x DDR4 3200MHz DIMM sockets (Max: 4 TB)

  • STORAGE

    8x 2.5″ & 2x M.2 SSD, NVMe drive bays (Max: 2 TB)

  • ACCELERATORS

    Max 2x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 1x 10Gb NIC & Optional: 10GbE, 40GbE, InfiniBand, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTblade x380c

2U node with Accelerators/GPUs

  • CPU

    2x up to 40 core Intel Xeon (Ice Lake)

  • MEMORY

    16x DDR4 3200MHz DIMM sockets (Max: 2 TB)

  • STORAGE

    2x 2.5″ & 2x M.2 NVMe drive bays (Max: 16 TB)

  • ACCELERATORS

    Max 4x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 1x 10Gb NIC & Optional: 10GbE, 40GbE, InfiniBand, 100GbE

  • DENSITY

    Compute Block rackmount chassis with redundant power

ACTserv e2280c

Our ACTserv e2280c is a single-processor AMD EPYC (Rome/Milan) 2U server that supports up to 4x GPUs.

  • CPU

    1x up to 64 core AMD EPYC (Rome/Milan)

  • MEMORY

    8x DDR4 3200MHz DIMM sockets (Max: 1 TB)

  • STORAGE

    4x 3.5″ SATA, SSD drive bays (Max: 38.72 TB)

  • ACCELERATORS

    Max 4x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 1Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv e4280c

Our ACTserv e4280c is a GPU powerhouse with 2x AMD EPYC (Rome/Milan) CPUs and up to 8x GPUs.

  • CPU

    2x up to 64 core AMD EPYC (Rome/Milan)

  • MEMORY

    32x DDR4 2933MHz DIMM sockets (Max: 4 TB)

  • STORAGE

    10x 2.5″ SATA, SSD drive bays (Max: 50 TB)

  • ACCELERATORS

    Max 8x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    4U rackmount chassis with redundant power

ACTserv x2280c

Our ACTserv x2280c is a perfect balance of 2x Xeon SP CPUs, 8x GPUs, and storage.

  • CPU

    2x up to 26 core Intel Xeon SP (Cascade Lake)

  • MEMORY

    24x DDR4 2933MHz DIMM sockets (Max: 6 TB)

  • STORAGE

    8x 2.5″ SATA, SSD drive bays (Max: 96 TB)

  • ACCELERATORS

    Max 8x NVIDIA Tesla, NVIDIA Quadro, NVIDIA GTX accelerators

  • CONNECTIVITY

    Onboard 2x 1Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv x1270c

This server is ideal for hyper-converged, data analytics, cloud, and high-performance computing applications.

  • CPU

    1x up to 28 core Intel Xeon SP (Cascade Lake)

  • MEMORY

    12x DDR4 2666MHz DIMM sockets (Max: 768 GB)

  • STORAGE

    2x 2.5″ SSD drive bays (Max: 15.36 TB)

  • ACCELERATORS

    Max 4x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    1U chassis

ACTserv x2210

Our ACTserv x2210 supports high-core-count, high-frequency CPUs and offers plenty of fast storage. It can be used as a compute, storage, or head node, with the option to add two GPUs.

  • CPU

    2x up to 28 core Intel Xeon SP (Cascade Lake)

  • MEMORY

    24x DDR4 2933MHz DIMM sockets (Max: 1.5 TB)

  • STORAGE

    8x 2.5″ SATA, SSD, NVMe drive bays (Max: 64 TB)

  • ACCELERATORS

    Max 2x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

Tesla V100 Features and Benefits

Increased Performance

The NVIDIA Volta™ architecture enables the Tesla V100 to deliver the highest absolute performance for HPC and hyperscale workloads.

Stronger Memory Performance

The Tesla V100 integrates compute and data on the same package by adding CoWoS® (Chip-on-Wafer-on-Substrate) with HBM2 technology to deliver 3X memory performance over the NVIDIA Maxwell™ architecture.

Scalable Applications

Performance is often throttled by the interconnect. The revolutionary NVIDIA NVLink™ high-speed bidirectional interconnect is designed to scale applications across multiple GPUs by delivering 5X higher performance compared to today's best-in-class technology.
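
As a minimal sketch of scaling one workload across the GPUs in a single node, the PyTorch snippet below replicates a small model over every visible GPU; NCCL and peer-to-peer copies use NVLink automatically when it is present, and the model and batch sizes are placeholders.

```python
# Sketch: split each batch across all visible GPUs in one node.
# Peer-to-peer transfers between the replicas ride on NVLink when the
# hardware provides it; the model and batch sizes are placeholders.
import torch

n_gpus = torch.cuda.device_count()
device = "cuda" if n_gpus > 0 else "cpu"

model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 10),
)

if n_gpus > 1:
    # DataParallel scatters each batch across GPUs and gathers the outputs.
    model = torch.nn.DataParallel(model)
model = model.to(device)

batch = torch.randn(256, 4096, device=device)
out = model(batch)
print(f"GPUs used: {max(n_gpus, 1)}, output shape: {tuple(out.shape)}")
```

For production training, torch.nn.parallel.DistributedDataParallel is generally preferred, but the idea is the same: let the interconnect carry the inter-GPU traffic.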

Simpler Programming

The Page Migration Engine frees developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU’s physical memory size to a virtually limitless amount of memory.
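
As an illustration of this from Python, the CuPy sketch below routes allocations through CUDA managed (unified) memory so the driver can page data between host and device on demand. CuPy and its malloc_managed allocator are used here only as one possible vehicle; the array sizes are placeholders, and oversubscription behavior depends on the GPU and driver.

```python
# Sketch: allocate CUDA managed (unified) memory so arrays can exceed the
# GPU's physical memory and be migrated page-by-page as kernels touch them.
# Array sizes are placeholders; behavior varies by GPU and driver.
import cupy as cp

# Route all CuPy allocations through cudaMallocManaged.
cp.cuda.set_allocator(cp.cuda.MemoryPool(cp.cuda.malloc_managed).malloc)

# These arrays live in managed memory; the Page Migration Engine moves
# pages between host and device as they are accessed.
a = cp.ones((4096, 4096), dtype=cp.float32)
b = cp.ones((4096, 4096), dtype=cp.float32)
c = a @ b  # runs on the GPU even if pages have to be migrated in first

print(float(c[0, 0]))  # expected: 4096.0
```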

Note about GPU warranties: manufacturer’s warranty only; Advanced Clustering Technologies does not provide its own warranty for consumer-grade GPUs.

Request a Consultation from our team of HPC Experts

Would you like to speak to one of our HPC experts? We are here to help you. Submit your details, and we'll be in touch shortly.
