AI-Optimized Servers from Advanced Clustering Technologies
Advancing the Artificial Intelligence (AI) Lifecycle

Our AI solutions incorporate everything you need to drive innovation through AI, including:

  • Storage that supports very large datasets and delivers the high IOPS needed to keep data flowing to the GPUs.
  • Networking based on high-speed Ethernet or InfiniBand to provide high-bandwidth, low-latency fabrics.


Our AI-optimized systems are designed to meet the demands of the two most important parts of the deep learning lifecycle: AI training and AI inferencing.

Our AI Optimized Servers for Both Inference and Training

Our ACTserv x2440c is a 2U dual-socket “Sapphire Rapids” Xeon system with 6x hot-swap drive bays and GPU options for both training and inference.

Our ACTserv e2471c is a perfect balance for AI and HPC applications with 1x AMD EPYC Genoa CPU, storage, and 4x GPUs for inference and training.

Our ACTserv x4411c is a GPU powerhouse for AI and HPC with 2x Intel Xeon Sapphire Rapids series CPUs and up to 8x GPUs for inference and training.

Our ACTserv e4411c is a GPU powerhouse for AI and HPC with 2x AMD EPYC Genoa 9004 series CPUs and up to 8x GPUs for inference and training.

Our AI Optimized Workstation

Our powerful workstation gives you the ability to complete your AI workloads from the convenience of your desktop. This workstation holds up to two NVIDIA GPUs for inference and training. It is quiet and air cooled.

Are You New to AI? Let Us Help.

If your organization wants to harness the power of AI data analysis to take your work to the next level, a single server or workstation offers the flexibility and durability to get you started with AI inference and training today. We can help.

Let the engineering team at Advanced Clustering advise you on options and the benefits of building your own AI-optimized workstation, server or small cluster. Are you ready to draw greater value from your data? Start configuring your new AI system today with the servers detailed below.

AI TRAINING is the most data- and processing-intensive part of the AI lifecycle. Enormous amounts of data must be fed into the models in order to train them to recognize patterns. The vast data flow being pushed through training demands an equally powerful capacity for processing and computing power. Our AI optimized servers deliver incredible compute power for the demands of AI training.

AI INFERENCING is the point at which all data collected during AI training is run through a model to make predictions. Once the model is trained, it needs a relatively small amount of processing power to process incoming data in real time. While models are trained at the beginning of the process and need to have a large amount of concentrated power, inferencing happens close to the data: in a factory, within a moving automobile, in a radiology department, etc.
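The split described above can be sketched in a few lines of code. The snippet below is an illustration only, using a toy 1-D linear model fit by gradient descent in pure Python; real workloads run frameworks such as PyTorch or TensorFlow on GPU-accelerated systems, but the shape is the same: training makes many expensive passes over the full dataset, while inference is one cheap forward pass per incoming sample.

```python
# Toy illustration of the training/inference split (pure Python, no GPU).

def train(data, epochs=500, lr=0.05):
    """Training: many passes over the full dataset to fit w and b."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:          # every sample, every epoch
            err = (w * x + b) - y
            w -= lr * err * x      # gradient descent step
            b -= lr * err
    return w, b

def infer(model, x):
    """Inference: one cheap forward pass per incoming sample."""
    w, b = model
    return w * x + b

# Fit y = 2x + 1 from a handful of samples, then predict in "real time".
data = [(x, 2 * x + 1) for x in range(-3, 4)]
model = train(data)
print(round(infer(model, 10.0)))   # close to 2*10 + 1 = 21
```

Note how all the compute cost sits in `train`; `infer` is a single multiply-add, which is why trained models can run close to the data on modest hardware.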

GPU Accelerated Servers - Accelerating the Future of AI

NVIDIA H100:
Order-of-magnitude Performance Leap

NVIDIA announced its new NVIDIA H100 Tensor Core GPU, based on the new NVIDIA Hopper GPU architecture, during its GTC keynote address. H100 carries over the major design focus of A100 to improve strong scaling for Artificial Intelligence (AI) and HPC workloads, with substantial improvements in architectural efficiency.

For today’s mainstream AI and HPC models, H100 with InfiniBand interconnect delivers up to 30x the performance of A100. The new NVLink Switch System interconnect targets some of the largest and most challenging computing workloads that require model parallelism across multiple GPU-accelerated nodes to fit. These workloads receive yet another generational performance leap, in some cases tripling performance yet again over H100 with InfiniBand.

The new Tensor Cores are up to 6x faster chip-to-chip compared to A100, accounting for the per-SM speedup, the additional SM count, and the higher clocks of H100.

Download the NVIDIA H100 data sheet.

NVIDIA A100:
Unprecedented Acceleration at Every Scale

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics and HPC to tackle the world’s toughest computing challenges. 

Advanced Clustering Technologies offers systems that integrate this latest addition to the NVIDIA product line, which, as the engine of the NVIDIA data center platform, can efficiently scale up to thousands of GPUs. Using new Multi-Instance GPU (MIG) technology, A100 can be partitioned into as many as seven smaller GPU instances to accelerate workloads of all sizes.
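As a rough sketch of what MIG partitioning looks like in practice, the commands below use NVIDIA's `nvidia-smi` tool to split one A100 into seven instances. This is a hardware configuration sketch only: it assumes root access, GPU index 0, and an A100 40GB (profile ID `19` is the 1g.5gb profile on that model; other models use different profile IDs).

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset).
nvidia-smi -i 0 -mig 1

# Create seven 1g.5gb GPU instances, each with a compute instance (-C).
nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# List GPUs and their MIG devices to confirm the partitioning.
nvidia-smi -L
```

Each resulting MIG device appears to CUDA applications as its own small GPU, which is how a single A100 can serve several independent inference workloads at once.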

A100’s third-generation Tensor Core technology accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market. A100 accelerates all major deep learning frameworks and over 650 HPC applications, and containerized software from NGC helps developers get up and running easily.

Download the NVIDIA A100 data sheet.

Yes! I want to hear more about an AI-ready cluster.


NVIDIA H100

Performance leap for AI and HPC

NVIDIA A100

Unprecedented acceleration

NVIDIA RTX™

The world’s first ray tracing GPU

GPU Computing with NVIDIA Tesla GPUs; Educational Discounts Available

As an NVIDIA Preferred Solution Provider, Advanced Clustering Technologies offers educational discounts on NVIDIA A100 GPU accelerators.

Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money.

Advanced Clustering’s GPU clusters consist of our innovative ACTblade compute blade products and NVIDIA GPUs. Our modular design allows for mixing and matching of GPU and CPU configurations while at the same time preserving precious rack and datacenter space.

Contact us today to learn more about the educational discounts and to determine if your institution qualifies.

NVIDIA and the NVIDIA logo are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. © 2021 NVIDIA Corporation. All rights reserved.

GPU Computing Systems for AI and HPC

ACTserv x2440c

Our ACTserv x2440c is a flexible 2U dual-socket “Sapphire Rapids” Xeon system with support for 4x dual-slot GPUs for AI training or 8x single-slot GPUs for AI inference.

  • CPU

    2x up to 60 core Intel Xeon (Sapphire Rapids)

  • MEMORY

    16x DDR5 4800MHz DIMM sockets (Max: 2 TB)

  • STORAGE

    2x 3.5″ & 4x 2.5″ SATA/SSD/NVMe drive bays (Max: 61 TB)

  • ACCELERATORS

    Max 4x NVIDIA Tesla, AMD accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, InfiniBand, OmniPath, 100GbE, 50GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv x4411c

Our ACTserv x4411c is a GPU powerhouse for AI and HPC with 2x Intel Xeon Sapphire Rapids series CPUs and up to 8x GPUs

  • CPU

    2x up to 60 core Intel Xeon (Sapphire Rapids)

  • MEMORY

    32x DDR5 4800MHz DIMM sockets (Max: 4 TB)

  • STORAGE

    8x 3.5″ SATA/NVMe drive bays (Max: 50 TB)

  • ACCELERATORS

    Max 8x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    4U rackmount chassis with redundant power

ACTserv e4411c

Our ACTserv e4411c is a GPU powerhouse for AI and HPC with 2x AMD EPYC Genoa 9004 series CPUs and up to 8x GPUs

  • CPU

    2x up to 128 core AMD EPYC Genoa/Bergamo

  • MEMORY

    24x DDR5 4800MHz DIMM sockets (Max: 3 TB)

  • STORAGE

    8x 3.5″ SATA/NVMe drive bays (Max: 176 TB)

  • ACCELERATORS

    Max 8x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, InfiniBand, OmniPath, 100GbE, 50GbE

  • DENSITY

    4U rackmount chassis with redundant power

ACTserv e2471c

Our ACTserv e2471c is a perfect balance for AI and HPC applications with 1x AMD EPYC Genoa/Bergamo CPU, 4x GPUs, and storage.

  • CPU

    1x up to 128 core AMD EPYC Genoa/Bergamo

  • MEMORY

    12x DDR5 4800MHz DIMM sockets (Max: 1.5 TB)

  • STORAGE

    6x 3.5″ SATA/SSD/NVMe drive bays (Max: 92 TB)

  • ACCELERATORS

    Max 4x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 1Gb NICs & Optional: 10GbE, InfiniBand, OmniPath, 100GbE, 50GbE

  • DENSITY

    2U rackmount chassis with redundant power
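The "Max" memory figures in the spec lists above follow directly from the DIMM socket counts. A quick sanity check, assuming 128 GB DDR5 modules (the module size is our assumption for illustration, not a figure stated by the vendor):

```python
# Hypothetical check of the "Max" memory figures, assuming 128 GB DIMMs.
DIMM_GB = 128  # assumed largest supported DDR5 module size

def max_memory_tb(sockets):
    """Maximum memory in TB for a given number of DIMM sockets."""
    return sockets * DIMM_GB / 1024  # GB -> TB

for name, sockets in [("x2440c", 16), ("x4411c", 32),
                      ("e4411c", 24), ("e2471c", 12)]:
    print(f"{name}: {max_memory_tb(sockets):g} TB")
```

Under that assumption the results (2, 4, 3, and 1.5 TB) line up with the maximums quoted for each system.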

NVIDIA A100 Features and Benefits

Increased Performance

The new NVIDIA Ampere architecture enables the A100 to deliver the highest absolute performance for HPC and Artificial Intelligence (AI) workloads.

Stronger Memory Performance

Available in 40GB and 80GB memory versions, A100 80GB debuts the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.

Scalable Applications

NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale to power the world’s highest-performing elastic data centers for AI, data analytics, and HPC.

Simpler Programming

Businesses can access an end-to-end, cloud-native suite of AI and data analytics software that’s optimized, certified, and supported by NVIDIA to run on VMware vSphere with NVIDIA-Certified Systems.

Note about GPU warranties: Manufacturer’s warranty only; Advanced Clustering Technologies does not warranty consumer-grade GPUs.

Request a Consultation from our team of HPC and AI Experts

Would you like to speak to one of our HPC or AI experts? We are here to help you. Submit your details, and we'll be in touch shortly.
