Ready to configure your cluster?
Use the buttons below to custom configure your perfect system.
We build customized, turn-key HPC clusters
Our engineers are always evaluating the newest technology to make our HPC clusters even better. By using our strong vendor relationships and technical expertise, we are able to adapt to the latest hardware and software offerings, often much more quickly than other cluster providers.
When a new processor or architecture is announced, rest assured that we have already tested it and are prepared to ship systems that include this new technology on or close to the launch date.
Extensive testing procedures for plug-and-play readiness
Before we ship a cluster, it spends a minimum of 24 hours running our own powerful stress-testing software, Breakin. This tool thoroughly checks processors, memory, hard drives and temperatures, and flags any ECC or MCE errors that occur. If we find any issues during this extensive analysis, we fix them in-house to deliver a stable and fully operational product to you.
A simplified purchasing process to ensure needs are met
We make the purchasing process easy by working with you to determine the best configuration for your needs. Our highly trained and experienced technicians are readily available to answer your questions and ensure that the system you purchase is ideal for your applications.
After ordering, you’ll be able to keep track of the build process online or through a direct connection to your dedicated cluster engineer. Delivery is easy, too, since we ship all clusters pre-assembled into rack cabinets with all nodes, cables and accessories installed and labeled.
Download our HPC Pricing Guide
In order to give you a better feel for the cost of HPC, our team at Advanced Clustering Technologies has compiled a pricing sheet to provide you with a side-by-side comparison of cluster costs with and without InfiniBand interconnects. Our pricing sheet is based on budgets of $150,000, $250,000 and $500,000.
We design systems built to match your specific needs.
Fully configured and tested prior to delivery
Experts at HPC with more than 15 years' experience.
We use best-available parts and thoroughly test prior to delivery
We're your source for technical support and grant writing assistance.
Knowledgeable technicians available by phone, email or chat
High performance computing clusters increase performance dramatically by dividing the workload across many nodes. They also deliver significant cost savings over SMP- and MPP-based computers by leveraging commodity hardware made for consumer and general business use.
A GPU cluster features nodes that are equipped with GPUs for fast calculations.
A visualization cluster is an HPC cluster with the addition of powerful graphics cards, normally designed to work in sync with each other to tackle high-resolution and real-time simulations. Our visualization clusters add high-powered 3D-accelerated graphics cards to the compute nodes.
A Few HPC Clusters We Have Built
Things to Consider When Buying an HPC Cluster
When purchasing an HPC cluster, the first question that might come to mind is “How powerful are the machines going to be?” While the compute nodes play an important role in the way your machine functions, there are other pieces of a cluster that play an integral part in a top-notch system.
Let’s take a look at all of the pieces that can be integrated into a cluster, the physical requirements you need to consider when purchasing, and more.
Gigabit has never been more affordable. Every Pinnacle Server comes standard with at least one gigabit adapter onboard. Today most switches are wire speed to every port, and many support jumbo frames. Switch configurations come in every size from 4 ports to hundreds. We have found that switches up to 48 ports are extremely affordable, and most have uplinks that allow them to expand for additional nodes. To help conserve rack space, switches with 48 ports or fewer come in a convenient 1U form factor.
Depending on your job, gigabit may be the perfect solution. With a standard gigabit switch you can expect 1Gb/sec bi-directionally and 30µs latency. If you require higher bandwidth or lower latency, a high speed interconnect might be a better solution for your cluster. Ask us about testing your code on one of our HPC clusters. We have most interconnects on hand for you to test. This will ensure you are getting the best solution for your particular code.
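The latency-versus-bandwidth trade-off above can be estimated with the simple model time = latency + size / bandwidth. The sketch below uses the gigabit figures quoted above; the low-latency interconnect numbers (1 µs, 100 Gb/s) are illustrative assumptions, not measurements of any specific product.

```python
def transfer_time_us(message_bytes, latency_us, bandwidth_gbps):
    """Estimated one-way transfer time in microseconds:
    fixed latency plus serialization time for the payload."""
    bytes_per_us = bandwidth_gbps * 1e9 / 8 / 1e6  # bytes per microsecond
    return latency_us + message_bytes / bytes_per_us

# A small 1 KB message on gigabit (30 us latency): latency dominates
gige_small = transfer_time_us(1024, latency_us=30.0, bandwidth_gbps=1.0)

# Same message on a hypothetical low-latency 100 Gb/s interconnect
fast_small = transfer_time_us(1024, latency_us=1.0, bandwidth_gbps=100.0)
```

For small messages, which dominate many MPI workloads, cutting latency matters far more than adding bandwidth, which is why latency-sensitive codes benefit most from a high speed interconnect.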
With the growing deployment of multiple, multi-core processors in server and storage systems, overall platform efficiency and CPU and memory utilization depend increasingly on interconnect bandwidth and latency. For optimal performance, platforms with several multi-core processors can require interconnect bandwidth equal to or greater than 100 Gb/s.
There are three environmental considerations to think about when purchasing a cluster: how much power it will require, how much air conditioning it will need to stay cool, and where it will be housed.
Power and electricity
When purchasing a cluster, it is easy to get caught up in how fast it will be and how many machines you will get, but without adequate electrical facilities, the fastest, most powerful cluster is just a bunch of expensive metal boxes. Power should be a forethought, not an afterthought. Most end-users do not have an unlimited amount of power, so it’s important to know what you have and/or how much you can get before configuring a new system.
Knowing basic power information can save you time and money. There is nothing more frustrating than receiving a cluster and not being able to plug it in because the power hasn’t been installed or the plugs were the incorrect style or amperage. Also, if you can have larger-amperage receptacles or even 3-phase power installed, fewer power distribution units can be used. This can save hundreds, and in some instances thousands, of dollars.
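A quick back-of-the-envelope check can tell you how many nodes fit on a given circuit. The sketch below applies the common North American practice of loading a circuit to only 80% of its rated amperage for continuous loads; the 500 W per-node figure is an illustrative assumption, not a spec for any particular server.

```python
def nodes_per_circuit(circuit_volts, circuit_amps, node_watts, derate=0.8):
    """How many nodes fit on one circuit, applying the common 80%
    continuous-load derating used in North American practice."""
    usable_watts = circuit_volts * circuit_amps * derate
    return int(usable_watts // node_watts)

# Illustrative assumption: nodes drawing 500 W each under load
standard_outlet = nodes_per_circuit(120, 20, 500)  # 120 V / 20 A outlet
big_receptacle  = nodes_per_circuit(208, 30, 500)  # 208 V / 30 A receptacle
```

This is exactly why larger-amperage receptacles pay off: one 208 V / 30 A circuit carries roughly three times the nodes of a standard wall outlet, so fewer circuits and power distribution units are needed.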
Cooling / HVAC
Heat is the enemy of every computer! To ensure optimal performance of your cluster, adequate cooling must be provided. Advanced Clustering recommends that all systems be kept between 68°F and 72°F, and never above 78°F.
Heat output is measured in British thermal units per hour (BTU/h).
1 watt is approximately 3.4 BTU/h
1000 BTU/h is approximately 293 W
Air conditioning capacity is measured in tons.
12,000 BTU/h is equal to 1 ton in most North American air conditioning applications
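The conversions above are enough to size the cooling for a whole cluster. The sketch below walks through them for a hypothetical 16-node cluster drawing 400 W per node; both figures are illustrative assumptions.

```python
def watts_to_btu_per_hour(watts):
    """1 W is approximately 3.412 BTU/h."""
    return watts * 3.412

def btu_per_hour_to_tons(btu_h):
    """12,000 BTU/h equals 1 ton of air conditioning capacity."""
    return btu_h / 12000.0

# Illustrative assumption: a 16-node cluster drawing 400 W per node
cluster_watts = 16 * 400                              # 6,400 W total
heat_btu_h    = watts_to_btu_per_hour(cluster_watts)  # heat load in BTU/h
cooling_tons  = btu_per_hour_to_tons(heat_btu_h)      # A/C capacity needed
```

Even this modest example works out to nearly two tons of air conditioning, which is why a personal office is rarely an adequate home for a cluster.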
Space to house your cluster
Most personal office space is not a suitable place for a cluster. Generally, the power and air conditioning are not adequate, and unless built to be low-noise, the cluster will be too loud to sit near for an extended amount of time. Imagine 32 hair dryers running at once, and you will have a fairly accurate idea of the noise and heat produced! If possible, housing your high performance computing cluster in a computer lab or computing center is ideal. We understand that this is not always possible, and we will work with you to determine any room alterations that may need to be made.
Managing a cluster can seem like a daunting task, but it can be quite easy with Advanced Clustering’s management software packages, free with any HPC cluster purchase, and some of the hardware devices described below.
IPMI (Intelligent Platform Management Interface) is an open-standard management system designed for remote monitoring and control of servers. IPMI is available as an option on nearly all of Advanced Clustering’s Pinnacle Servers.
IPMI works by embedding a small service processor, or Baseboard Management Controller (BMC), in the system. The BMC is powered on and operational as long as the system is plugged into main electrical power. It operates even when the system is turned off, when the operating system has crashed, and during most hardware failures.
The BMC can be controlled either in-band via the operating system running on the server or out-of-band via a TCP/IP network connection. In a cluster environment, the out-of-band management functionality is especially helpful because it allows your system administrator to control all nodes in the cluster from a central point. The admin can check fan, temperature, and power supply voltage sensor data; power a system on or off; or even connect to the console of the machine.
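The out-of-band tasks above are commonly driven with the open-source ipmitool utility. The sketch below only builds the command lines rather than executing them; it assumes ipmitool is installed, and the BMC address and credentials shown are hypothetical placeholders.

```python
import subprocess

def bmc_command(bmc_host, user, password, *ipmi_args):
    """Build an ipmitool command line for out-of-band (lanplus) access
    to a node's BMC over the network."""
    return ["ipmitool", "-I", "lanplus",
            "-H", bmc_host, "-U", user, "-P", password] + list(ipmi_args)

# Examples of the management tasks described above
# (hypothetical BMC address and credentials):
power_status = bmc_command("10.0.0.101", "admin", "secret",
                           "chassis", "power", "status")
sensors      = bmc_command("10.0.0.101", "admin", "secret",
                           "sensor", "list")
power_on     = bmc_command("10.0.0.101", "admin", "secret",
                           "chassis", "power", "on")

# To actually run one of these (not executed in this sketch):
# subprocess.run(power_status, check=True)
```

Because the BMC answers on its own network interface, these commands work even when the node itself is powered off or hung.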
Serial Consoles are another hardware device that can be used for out-of-band management. With these devices you will be able to connect to the console of each node from anywhere you allow access.
This kind of access can be a real time-saver when your cluster is located in the data center down the hall or across the world. Most of Advanced Clustering’s compute nodes allow for serial BIOS redirection, so you can monitor or change any board-level setting without being in front of the machine. When purchased as part of your cluster, Advanced Clustering will set up and enable serial redirection of the BIOS, boot loader, and operating system, giving you complete remote control over your entire system.
A KVM (Keyboard, Video, Mouse) switch is a hardware device that allows a user to control multiple systems from a single keyboard, video monitor and mouse.
Each system is connected to the KVM device via a dedicated cable. Control can be switched between systems by pressing buttons on the KVM device or via hotkeys on the keyboard (often combinations of CTRL or SCROLL LOCK). KVM options range from as few as two computers to hundreds, making them suitable for almost any size of cluster.
While KVMs are useful management devices, they do have some limitations. Without additional KVM-over-IP capabilities, access is limited to within a few feet of the cluster, and most units allow only one console session at a time. Some enterprise models do offer multiple console access, but they can be quite expensive.
Network controlled PDU
To give an administrator complete management of their cluster, we recommend a remote power control device. These stand-alone devices combine a power controller and a power distribution unit. Through extensive testing we’ve found APC’s line of MasterSwitch devices to provide the best feature set; they are available in multiple configurations to meet the needs of any data center.
Since most clusters larger than a few nodes require more than one power control device, Advanced Clustering’s ACT Utils package is included with all cluster purchases to make using these devices easier. Instead of having to remember which outlet on which device a particular node is plugged into, you can use a simple command-line tool to power on, power off, or hard-reboot a node knowing only its hostname.
They say a chain is only as strong as its weakest link; a cluster is only as strong as its software. With nearly limitless options, choosing the right software can make the difference between an unmanageable mess that performs poorly and a streamlined, reliable, high-performance tool. Our technical staff is constantly testing and evaluating new software and its interaction with hardware so you do not end up being the test bed. With hundreds of deployed clusters in wide-ranging fields, we know what works in any given situation.
Here are just a few software options that can come as part of your HPC cluster system from Advanced Clustering.
Unsure of what software is right for you? Contact one of our knowledgeable software experts at 866-802-8222.
All HPC clusters built by Advanced Clustering include our own innovative software: Breakin, Cloner, ACT Utils, and ACT dir.
We offer just about any distribution of Linux:
Compilers and development tools
All systems come standard with the GNU compiler suite (gcc, g++, gfortran)
Optional commercial compilers are available
Advanced Clustering’s convenient ACT dir software package, which is free with your cluster purchase, includes MPICH, OpenMPI, MVAPICH and MVAPICH2 built against all compilers installed on your system.