
Advanced Clustering Technologies is a Platinum BeeGFS Partner in the United States, offering the BeeGFS parallel file system in our turnkey HPC solutions. BeeGFS is ideal for I/O-intensive workloads because it spreads user data across multiple storage servers.

Why BeeGFS?
As your cluster grows, the file system can become a limiting factor, and the expense of expanding it can rival the cost of the compute cluster itself. That is why we offer BeeGFS as a cost-effective solution.

With BeeGFS, you can scale the file system's performance and capacity to the level you need simply by adding servers and disks to your system.

Experience the power of clustered storage:

  • distribute data across any number of storage servers
  • get high streaming throughput and high IOPS
  • gain parallel data access from all compute nodes
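
To make the striping idea above concrete, here is a sketch of the kinds of commands an administrator might run on a mounted BeeGFS client to inspect storage targets and adjust how files are spread across them. The mount point and stripe values are illustrative examples, and exact option names can vary between BeeGFS versions:

```shell
# List the storage targets known to the management service
# (each target is a disk or volume on one of the storage servers).
beegfs-ctl --listtargets --nodetype=storage --state

# Show free space and inode counts aggregated across all targets.
beegfs-df

# Stripe new files in this directory across 4 storage targets in
# 1 MiB chunks, so large streaming reads and writes are served by
# several servers in parallel. (Path and values are examples.)
beegfs-ctl --setpattern --numtargets=4 --chunksize=1m /mnt/beegfs/projects

# Verify the stripe pattern that applies to a file or directory.
beegfs-ctl --getentryinfo /mnt/beegfs/projects
```

Because these commands operate on a live BeeGFS installation, they are shown here only as a guide to what "distributing data across storage servers" looks like day to day.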

Why Use BeeGFS?

BeeGFS is designed for ALL performance-oriented environments:

  • HPC
  • AI and Deep Learning
  • Media & Entertainment
  • Life Sciences
  • Oil & Gas
  • and many more

BeeGFS features a state-of-the-art architecture that lets users meet any I/O profile requirement without performance restrictions. It also provides the scalability and flexibility required to run the most demanding HPC, AI, and business-critical applications.

Key Benefits of BeeGFS

  • Well-balanced performance for small and large files
  • Scalability that increases file system performance and capacity
  • Easy deployment and integration with existing infrastructure
  • High-availability design enabling continuous operations

Video:
Accelerating HPC Scale-Out Environments with BeeGFS

Hear about the features and benefits of the BeeGFS parallel file system. The Advanced Clustering team outlines our approach to implementing BeeGFS for our customers and highlights some recent installations, including real-world performance and feedback from customers. We are joined by members of the ThinkParQ team, who built and maintain BeeGFS. They provide an overview of the technical and commercial benefits of BeeGFS and describe how it accelerates extreme HPC scale-out environments.

Case Study:
Advanced Clustering's Storage Blocks with BeeGFS 
Support Large-Scale Biopharmaceutical Research at AbbVie

Scientists working in AbbVie’s Genomics Research Center are examining genetic markers in search of more personalized approaches to medicines and treatments. These efforts require significant high performance computing support for tasks such as analyzing hundreds of thousands of complex datasets. The company’s researchers now have access to more than 1.1PB of high performance storage with the deployment of Advanced Clustering’s Storage Blocks with the BeeGFS parallel file system.
Download the case study. 

Proud to be a Platinum BeeGFS Partner in the U.S. Market

In May 2021 ThinkParQ promoted us to the level of Platinum BeeGFS Partner in the United States market. 

As ThinkParQ CEO Frank Herold said at the time: “We get excited when a new partner joins us, and then when you see that partner scale within your landscape so quickly and move up to Platinum Partner level, it is a real delight for the entire team. We really appreciate the active interaction with the technical-, sales- and management team of Advanced Clustering Technologies, providing detailed feedback about the product and customer requirements. It is fantastic to see Advanced Clustering Technologies be awarded Platinum Partner status, and I look forward to a continued successful relationship with them.”

Request a Consultation from our team of HPC and AI Experts

Would you like to speak to one of our HPC or AI experts? We are here to help you. Submit your details, and we'll be in touch shortly.
