
BENCHMARKING

When reviewing HPC, whether through a procurement exercise or an exploration of Cloud technology, there are several crucial elements that need to be tackled. At Red Oak Consulting, our expert knowledge dives deep into supplier architectures, benchmarking, costs, and modelling. Our focused area of expertise, HPC, means we have a long-standing history of analysing workflows and benchmarking, enabling us to bring that expertise to our customers.

Benchmarking is an important phase of any procurement or Cloud migration. Why is it so important? Because workflow requirements differ, each with its own focal points and objectives, and the associated running costs can be optimized by conducting rigorous, in-depth benchmarks.

If benchmarking is not conducted correctly, there is a strong possibility that the wrong hardware will be used to run bespoke software, incurring unnecessary long-term costs. It is the job of a dedicated technical specialist to understand a workflow's requirements, so that a Cloud infrastructure can be equipped with the correct SKU or, in the on-premises case, the correct hardware can be chosen during procurement.

The benefit of benchmarking is an in-depth analysis of a workflow's behaviour and lifecycle from start to finish. It also tells us whether true scaling can be achieved, that is, whether more nodes and cores can be employed without an early knee appearing in the scaling curve. A workflow has various instruction calls and methods to execute during runtime; by understanding its lifecycle, a sound argument can be made for the hardware it is meant to run on. Some workflows tend to be memory-bound, others compute-bound, and some are a mixture that lies somewhere in between.
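
To make that distinction concrete, the sketch below shows a minimal, roofline-style check of whether a workload is memory-bound or compute-bound. This is an illustration only, not our methodology; the peak figures and kernel numbers are hypothetical placeholders rather than measurements from any real system.

# A minimal, illustrative sketch of a roofline-style classification.
# All figures below are hypothetical placeholders for a benchmarked
# workload and node, not real measurements.

PEAK_GFLOPS = 3000.0   # node peak compute, GFLOP/s (hypothetical)
PEAK_BW_GBS = 200.0    # node peak memory bandwidth, GB/s (hypothetical)

def classify(flops: float, bytes_moved: float) -> str:
    """Classify a workload by arithmetic intensity (FLOPs per byte moved)."""
    intensity = flops / bytes_moved
    machine_balance = PEAK_GFLOPS / PEAK_BW_GBS  # FLOPs/byte at the roofline ridge
    if intensity < machine_balance:
        return f"memory-bound (intensity {intensity:.2f} < balance {machine_balance:.2f})"
    return f"compute-bound (intensity {intensity:.2f} >= balance {machine_balance:.2f})"

# Example: a kernel performing 2e12 FLOPs while moving 4e11 bytes of data.
print(classify(2e12, 4e11))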

When benchmarking, it is often wise to look at how long each phase of execution runs before switching to the next, whether there are any cache starvations, and whether memory utilization is high. It is also important to understand how a workflow's MPI calls behave and whether inter-core latency is an issue to address. This better informs the benchmarking specialist when shaping the procurement or choosing the Cloud infrastructure, building on credible results that maximize throughput, performance, and value for money.
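
As a flavour of the kind of measurement involved, here is a minimal sketch (assuming the mpi4py library is available) that times a single MPI collective on each rank. A real benchmarking exercise would rely on dedicated MPI profiling tools; this only illustrates the idea of spotting straggler ranks and latency outliers.

# A minimal, hypothetical sketch: time one MPI collective per rank and
# report the spread. Run with e.g. "mpiexec -n 4 python this_script.py".
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

data = np.ones(1_000_000, dtype=np.float64)  # payload for the collective
comm.Barrier()                               # align ranks before timing
t0 = MPI.Wtime()
total = np.empty_like(data)
comm.Allreduce(data, total, op=MPI.SUM)      # the collective under test
elapsed = MPI.Wtime() - t0

# Gather per-rank timings on rank 0 to spot stragglers or latency outliers.
timings = comm.gather(elapsed, root=0)
if rank == 0:
    print(f"Allreduce: min {min(timings):.6f}s, max {max(timings):.6f}s")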

It helps to visualize an optimized, stable HPC environment as a philharmonic piece that blends pleasingly to the listener's ear. In the case of benchmarks, good results show close-to-perfect linear scaling as more nodes and cores are employed, whilst bad results show the exact opposite: an early knee in the curve.

Figure 1: An example of a good benchmark result for an unnamed workflow.

Figure 2: An example of a bad benchmark for another unnamed workflow.
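
To make the contrast between Figures 1 and 2 concrete, the sketch below computes parallel efficiency from a set of invented runtimes and flags the point where the curve knees. The timings and the 0.8 efficiency threshold are arbitrary choices for illustration only.

# A minimal sketch of quantifying "good" vs "bad" scaling from measured
# runtimes. All timings below are invented for illustration.

def efficiency(nodes, runtimes):
    """Parallel efficiency relative to the smallest node count."""
    base_n, base_t = nodes[0], runtimes[0]
    return [(base_t / t) / (n / base_n) for n, t in zip(nodes, runtimes)]

nodes    = [1, 2, 4, 8, 16]
good_run = [100.0, 51.0, 26.0, 13.5, 7.2]   # near-linear scaling
bad_run  = [100.0, 55.0, 35.0, 30.0, 29.0]  # early knee in the curve

for label, runtimes in (("good", good_run), ("bad", bad_run)):
    effs = efficiency(nodes, runtimes)
    knee = next((n for n, e in zip(nodes, effs) if e < 0.8), None)
    print(label, ["%.2f" % e for e in effs],
          f"knee at {knee} nodes" if knee else "no knee up to 16 nodes")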

When considering the Cloud, running costs should also be factored into any benchmarking. For example, can a workflow reach optimal scaling on a different, cheaper SKU similar to the more expensive one? Can costs be optimized with reserved instances, or are the proposed jobs low priority and therefore suitable for Spot instances? These are some of the many questions we ask when conducting benchmark activities.
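
The sketch below illustrates the cost-per-job arithmetic behind those questions. The SKU names, hourly prices, runtimes, and Spot discount are all invented for the example and do not reflect actual Azure pricing.

# A minimal cost-per-job comparison across SKUs. All names and figures
# are hypothetical placeholders, not real Azure prices.

skus = {
    # sku: (hourly price in GBP, measured job runtime in hours)
    "expensive_sku": (3.50, 2.0),
    "cheaper_sku":   (2.20, 2.9),
}

for sku, (price, hours) in skus.items():
    print(f"{sku}: {price * hours:.2f} GBP per job")

# Spot pricing is often heavily discounted for interruptible, low-priority
# jobs; a 70% discount is assumed here purely for illustration.
price, hours = skus["cheaper_sku"]
print(f"cheaper_sku on Spot (assumed 70% discount): {price * 0.3 * hours:.2f} GBP per job")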

Our technical consultants continue to work with customers to ensure best practices and the right HPC solutions are implemented to meet business requirements. While gaining a full understanding of customer workflows, our team channels that knowledge into the technology to ensure the best cost for the best performance.

Figure 3: An example of benchmarking to address cost optimizations for two Azure-based SKUs.