NVIDIA Tesla V100 Powered by Volta and Designed to Accelerate Science and AI Computing

Advanced Clustering Technologies offers NVIDIA® Tesla® GPU-accelerated servers that deliver significantly higher throughput while saving money.

The NVIDIA® Tesla® V100 accelerator is powered by NVIDIA Volta architecture, which is the computational engine for scientific computing and artificial intelligence.

NVIDIA Tesla V100 dramatically boosts the throughput of your data center with fewer nodes, completing more jobs and improving data center efficiency.

A single server node with V100 GPUs can replace up to 50 CPU nodes. For example, for HOOMD-blue, a single node with four V100s does the work of 43 dual-socket CPU nodes, while for MILC a single V100 node can replace 14 CPU nodes.

With lower networking, power, and rack space overheads, accelerated nodes provide higher application throughput at substantially reduced costs.
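
To make the consolidation claim concrete, here is a minimal back-of-the-envelope sketch in host-side CUDA/C++. Only the 43:4 node-replacement ratio comes from the HOOMD-blue example above; the per-node cost and power figures are hypothetical placeholders for illustration, not vendor pricing.

    // Back-of-the-envelope consolidation math using the HOOMD-blue figure quoted
    // above (one 4x V100 node replacing 43 dual-socket CPU nodes). The per-node
    // cost and power numbers below are hypothetical placeholders, not vendor data.
    #include <cstdio>

    int main() {
        const double cpu_nodes_replaced = 43.0;    // from the HOOMD-blue example above
        const double cpu_node_cost      = 7000.0;  // hypothetical $ per dual-socket CPU node
        const double gpu_node_cost      = 50000.0; // hypothetical $ per 4x V100 node
        const double cpu_node_watts     = 400.0;   // hypothetical W per CPU node
        const double gpu_node_watts     = 1600.0;  // hypothetical W per GPU node

        double capex_ratio = gpu_node_cost  / (cpu_nodes_replaced * cpu_node_cost);
        double power_ratio = gpu_node_watts / (cpu_nodes_replaced * cpu_node_watts);

        printf("Capital cost vs. replaced CPU nodes: %.0f%%\n", capex_ratio * 100.0);
        printf("Power draw vs. replaced CPU nodes:   %.0f%%\n", power_ratio * 100.0);
        return 0;
    }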

Lower Data Center Cost by up to 50%

Advanced Clustering Technologies is now offering the NVIDIA® TITAN RTX™, which delivers the ultimate PC computing experience for the most demanding users in the world including AI researchers, university labs, deep learning developers and data scientists.
Click here to learn more about Titan RTX.

GPU Computing with NVIDIA Tesla: Educational Discounts Available

NVIDIA Preferred Solution Provider

Advanced Clustering Technologies is offering educational discounts on NVIDIA® Tesla® V100 GPU accelerators, which are the most advanced ever built for the data center.

Higher performance with fewer, lightning-fast nodes enables data centers to dramatically increase throughput while also saving money.

Qualified educational institutions are entitled to special pricing on Tesla V100 PCIe cards purchased in qualified servers from Advanced Clustering, an NVIDIA preferred solution provider.

Advanced Clustering’s GPU clusters consist of our innovative Pinnacle Flex compute blade products and the Tesla V100 GPUs. Our modular design allows for mixing and matching of GPU and CPU configurations while at the same time preserving precious rack and datacenter space.

Contact us today to learn more about the educational discounts and to determine if your institution qualifies.

NVIDIA, the NVIDIA logo, and NVIDIA Tesla are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. © 2016 NVIDIA Corporation. All rights reserved.

GPU Computing Systems

ACTserv e2180c

Our ACTserv e2180c is an ideal EPYC head or storage node. The addition of expansion slots for GPUs also makes it a great compute node for accelerators.

  • CPU

    1x up to 32 core AMD Processors (EPYC)

  • MEMORY

    8x DDR4 2666MHz DIMM sockets (Max: 512 GB)

  • STORAGE

8x 3.5" SATA/SSD drive bays (Max: 38.72 TB)

  • ACCELERATORS

    Max 8x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 1Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv e2150

Our ACTserv e2150 supports high-core-count, high-frequency CPUs and offers a large amount of fast NVMe storage. It can be used as a fast storage node or head node.

  • CPU

    2x up to 32 core AMD Processors (EPYC)

  • MEMORY

    32x DDR4 2666MHz DIMM sockets (Max: 1.5 TB)

  • STORAGE

    24x 2.5" NVMe drive bays (Max: 192 TB)

  • CONNECTIVITY

    Onboard 2x 1Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv e2120c

Our ACTserv e2120c is an ideal EPYC head or storage node. The addition of 2x expansion slots for GPUs also makes it a great compute node for accelerators.

  • CPU

    2x up to 32 core AMD Processors (EPYC)

  • MEMORY

    32x DDR4 2666MHz DIMM sockets (Max: 4 TB)

  • STORAGE

8x 3.5" SATA/SSD drive bays (Max: 96 TB)

  • ACCELERATORS

    Max 2x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv x2280c

Our ACTserv x2280c is a perfect balance of 2x Xeon SP CPUs, 8x GPUs, and storage.

  • CPU

    2x up to 26 core Intel Xeon SP (Cascade Lake)

  • MEMORY

24x DDR4 2933MHz DIMM sockets (Max: 6 TB)

  • STORAGE

8x 3.5" SATA/SSD drive bays (Max: 96 TB)

  • ACCELERATORS

    Max 8x NVIDIA Tesla, NVIDIA Quadro, NVIDIA GTX accelerators

  • CONNECTIVITY

    Onboard 2x 1Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

ACTserv x1270c

This server is ideal for use in hyper-converged, data analytics, cloud, and high-performance computing applications.

  • CPU

    1x up to 28 core Intel Xeon SP (Cascade Lake)

  • MEMORY

    12x DDR4 2666MHz DIMM sockets (Max: 768 GB)

  • STORAGE

    2x 2.5" SSD drive bays (Max: 15.36 TB)

  • ACCELERATORS

    Max 4x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    1U chassis

ACTserv x2210

Our ACTserv x2210 supports high-core-count, high-frequency CPUs and offers a large amount of fast storage. It can be used as a compute node, storage node, or head node, with the option to add two GPUs.

  • CPU

    2x up to 28 core Intel Xeon SP (Cascade Lake)

  • MEMORY

    24x DDR4 2933MHz DIMM sockets (Max: 1.5 TB)

  • STORAGE

8x 2.5" SATA/SSD/NVMe drive bays (Max: 64 TB)

  • ACCELERATORS

    Max 2x NVIDIA Tesla accelerators

  • CONNECTIVITY

    Onboard 2x 10Gb NICs & Optional: 10GbE, 40GbE, InfiniBand, OmniPath, 100GbE

  • DENSITY

    2U rackmount chassis with redundant power

Tesla P100 Features and Benefits

Increased Performance

The NVIDIA Pascal™ architecture enables the Tesla P100 to deliver the highest absolute performance for HPC and hyperscale workloads.

Stronger Memory Performance

The Tesla P100 integrates compute and data on the same package by adding CoWoS® (Chip-on-Wafer-on-Substrate) with HBM2 technology to deliver 3X memory performance over the NVIDIA Maxwell™ architecture.
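
As a rough illustration, the minimal sketch below times a device-to-device copy, which on a P100 exercises the HBM2 memory system. The 1 GiB buffer size and iteration count are arbitrary choices, and error checking is omitted for brevity.

    // Minimal sketch of a device-to-device copy bandwidth check; on an HBM2 GPU
    // the result reflects on-package memory bandwidth. Sizes are illustrative.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 1ull << 30;   // 1 GiB test buffer
        const int    iters = 20;
        float *src, *dst;
        cudaMalloc(&src, bytes);
        cudaMalloc(&dst, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        for (int i = 0; i < iters; ++i)
            cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        // Each copy reads and writes the buffer once, hence the factor of 2.
        double gbps = 2.0 * bytes * iters / (ms * 1e-3) / 1e9;
        printf("Device-to-device bandwidth: %.1f GB/s\n", gbps);

        cudaFree(src);
        cudaFree(dst);
        return 0;
    }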

Scalable Applications

Performance is often throttled by the interconnect. The revolutionary NVIDIA NVLink™ high-speed bidirectional interconnect is designed to scale applications across multiple GPUs by delivering 5X higher performance compared to today's best-in-class technology.
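
The sketch below shows the standard CUDA peer-to-peer calls that let one GPU address another's memory directly; when the two GPUs are connected by NVLink, that traffic moves over the NVLink interconnect rather than PCIe (actual routing depends on system topology). Device indices and buffer size are illustrative.

    // Minimal sketch of enabling GPU peer-to-peer access between devices 0 and 1.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        if (count < 2) { printf("Need at least two GPUs.\n"); return 0; }

        int can01 = 0, can10 = 0;
        cudaDeviceCanAccessPeer(&can01, 0, 1);
        cudaDeviceCanAccessPeer(&can10, 1, 0);
        if (!can01 || !can10) { printf("Peer access not supported.\n"); return 0; }

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);   // allow GPU 0 to access GPU 1's memory
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);   // and vice versa

        const size_t bytes = 256ull << 20;  // 256 MiB
        float *buf0, *buf1;
        cudaSetDevice(0); cudaMalloc(&buf0, bytes);
        cudaSetDevice(1); cudaMalloc(&buf1, bytes);

        // Direct GPU-to-GPU copy; no staging through host memory.
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
        cudaDeviceSynchronize();
        printf("Peer-to-peer copy complete.\n");

        cudaFree(buf1);
        cudaSetDevice(0); cudaFree(buf0);
        return 0;
    }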

Simpler Programming

The Page Migration Engine frees developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU's physical memory size to a virtually limitless amount of memory.
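
A minimal sketch of how this looks in CUDA code, assuming a Pascal-class or later GPU: cudaMallocManaged creates a single allocation that can exceed the GPU's physical memory, and pages migrate on demand as the kernel and the CPU touch them. The 64 GiB size is illustrative only and requires at least that much host memory.

    // Minimal Unified Memory sketch: the allocation can be larger than the GPU's
    // HBM2, with pages migrated on demand by the Page Migration Engine.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, size_t n, float factor) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        // Larger than a 16 GB or 32 GB GPU's memory; pages migrate as touched.
        const size_t n = (64ull << 30) / sizeof(float);
        float *data = nullptr;
        if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
            printf("Managed allocation failed (not enough host memory?).\n");
            return 1;
        }
        for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // first touched on the CPU

        const int threads = 256;
        const size_t blocks = (n + threads - 1) / threads;
        scale<<<(unsigned)blocks, threads>>>(data, n, 2.0f);
        cudaDeviceSynchronize();

        printf("data[0] = %.1f\n", data[0]);             // pages migrate back on CPU access
        cudaFree(data);
        return 0;
    }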

Note about GPU warranties: Manufacturer's warranty only; Advanced Clustering Technologies does not warrant consumer-grade GPUs.

Request a Consultation from our team of HPC Experts

Would you like to speak to one of our HPC experts? We are here to help you. Submit your details, and we'll be in touch shortly.
