
Join Open iT at NAFEMS DACH 2026. Discover how transparent usage reporting improves simulation governance—featured in our speaking session.
May 5–7, 2026
Welcome Kongresshotel
Bamberg, Germany
NAFEMS DACH Conference 2026
Conference for Calculation & Simulation in Engineering
SPEAKING SESSION
Optimizing HPC Resource Utilization: Lessons from Regulated CFD in Elite Motorsport
Learn how transparent usage data enables better control, efficiency, and decision-making in constrained simulation environments
- Improve simulation governance with clear usage visibility
- Reduce waste under fixed compute and budget limits
- Align simulation effort with engineering outcomes
Presenter: Heinrich Bartels — Strategic Advisor – Cost Engineering & Program Optimization, Open iT
May 5, 2026 | Tuesday
5:00 PM–5:25 PM (CEST)
Raum B
Welcome Kongresshotel
Elite open-wheel motorsport engineering operates in an environment where performance is determined by extremely small margins, and where aerodynamic efficiency is the dominant factor influencing race outcomes. Aerodynamics directly affect braking stability and cornering performance, with aerodynamic grip exceeding mechanical grip by a factor of four to five. In such a tightly coupled system, even small inefficiencies can result in lost positions, while relatively minor aerodynamic refinements can deliver lap time improvements on the order of 0.1 to 0.3 seconds. These conditions elevate Computational Fluid Dynamics (CFD) from a supporting tool to a central driver of competitive performance, making the effectiveness of each simulation critically important.
CFD as a Constrained Resource
What differentiates this environment is not simply the reliance on CFD, but the way in which its usage is constrained. In Formula One, CFD activity is governed by explicit regulatory limits that define how much computation each team is allowed to perform. These limits are dynamically adjusted based on competitive ranking, with lower-ranked teams receiving up to 115% of a baseline allocation, while the Constructors' Champion may be restricted to as little as 70%. This structure introduces a deliberate competitive balancing mechanism in which success reduces future computational capacity, forcing top-performing teams to operate with greater selectivity and efficiency.
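As a rough illustration, the sketch below maps championship position to an allocation factor. The 70% and 115% endpoints come from the text above; the evenly spaced steps between positions, the ten-team field, and the baseline figure are assumptions for illustration only.

```python
# Illustrative sketch of a sliding-scale allocation. The 70% and 115%
# endpoints come from the article; the linear step between championship
# positions and the baseline figure are assumptions for illustration.

BASELINE_UNITS = 6_000_000  # hypothetical baseline CFD allocation per period

def allocation_factor(position: int, teams: int = 10) -> float:
    """Map constructors' standing (1 = champion) to an allocation factor."""
    if not 1 <= position <= teams:
        raise ValueError("position out of range")
    step = (1.15 - 0.70) / (teams - 1)  # evenly spaced between the endpoints
    return 0.70 + (position - 1) * step

for pos in (1, 5, 10):
    print(f"P{pos}: {allocation_factor(pos):.0%} "
          f"-> {BASELINE_UNITS * allocation_factor(pos):,.0f} units")
```

Under these assumptions, the champion works with 70% of the baseline while the tenth-placed team receives 115%, so winning directly tightens next season's computational budget.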
Within this framework, CFD becomes a finite and measurable resource. Its consumption is quantified using a standardized metric:

CFD usage = NCU × NSS × CCF

where NCU represents the number of CPUs, NSS the runtime in seconds, and CCF the CPU clock speed. Each simulation consumes a defined portion of a fixed allocation that cannot be exceeded, and any unused capacity is forfeited. This creates a strict boundary within which all aerodynamic development must occur, shifting the focus away from the total number of simulations performed and toward maximizing the number of useful simulations that contribute to performance improvement.
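To make the budget arithmetic concrete, here is a minimal sketch that charges each run NCU × NSS × CCF units against a fixed, use-it-or-lose-it allocation. All numeric values (budget size, core counts, runtimes, clock speeds) are hypothetical.

```python
# Minimal budget accounting under the metric above: each run consumes
# NCU * NSS * CCF allocation units from a fixed budget. All numbers
# here are hypothetical.

def run_cost(ncu: int, nss: float, ccf_ghz: float) -> float:
    """Allocation units consumed by one solver run (NCU x NSS x CCF)."""
    return ncu * nss * ccf_ghz

budget = 2.16e10              # hypothetical allocation for one period
for ncu, nss, ccf in [(1152, 14_400, 2.6), (2304, 7_200, 2.6)]:
    cost = run_cost(ncu, nss, ccf)
    budget -= cost
    print(f"run consumed {cost:.3e} units; {budget:.3e} remaining")
```

Because the budget is fixed, every unit wasted on a poorly configured or redundant run is a unit unavailable for a simulation that could have improved the car.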
From CFD Constraint to HPC Optimization
These constraints fundamentally change the role of high-performance computing (HPC). In most engineering environments, HPC systems are treated as scalable infrastructure, where increasing computational demand can be met by adding more hardware. In Formula One, however, CFD throughput is limited by regulation rather than hardware availability. As a result, expanding compute capacity does not increase the number of allowable simulations. Instead, the emphasis shifts toward extracting the maximum possible value from a fixed computational budget.
This shift places HPC optimization at the center of engineering performance. The objective is not to increase total compute usage, but to improve the efficiency with which existing resources are used so that more meaningful simulations can be completed within the same allocation. Every inefficiency in the HPC system directly reduces the number of CFD runs that can be performed, making system performance and resource utilization critical factors in competitive success.
Understanding Resource Constraints and Imbalance
HPC systems are complex environments in which performance is governed by multiple interacting resources, including compute capacity, memory size, storage I/O, network bandwidth, inter-process communication, scheduler configuration, and software license availability. In practice, these resources are rarely balanced. It is common for one resource to become saturated while others remain underutilized, creating bottlenecks that limit overall system throughput.
For example, simulations that appear to be compute-intensive may actually be constrained by memory bandwidth or latency, preventing CPUs from operating at full efficiency. Distributed workloads may encounter interconnect or network contention, reducing scalability across nodes. Storage systems can limit performance through inefficient data access patterns, while power consumption and thermal limits may introduce throttling that reduces effective processing speed. Additionally, parallel workloads are inherently limited by non-parallelizable components, meaning that increasing the number of CPUs does not always produce proportional gains in performance.
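The last point is Amdahl's law: speedup from adding cores is capped by the workload's serial fraction. A short sketch makes the diminishing returns visible; the 5% serial share is an assumed figure.

```python
# Amdahl's-law sketch: ideal speedup is limited by the non-parallelizable
# share of the workload. The 5% serial fraction is illustrative.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup for a workload with the given serial share."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (64, 128, 256, 512):
    s = amdahl_speedup(0.05, cores)
    print(f"{cores:4d} cores -> {s:5.1f}x speedup ({s / cores:.0%} efficiency)")
```

Even with only 5% serial work, going from 64 to 512 cores raises speedup from roughly 15x to 19x while per-core efficiency collapses, which is exactly why adding CPUs to a constrained allocation can cost more than it returns.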
These imbalances highlight a key challenge: overall HPC efficiency is determined not by the total amount of available resources, but by how well those resources are aligned and utilized. When one constraint dominates, the rest of the system cannot be fully leveraged, resulting in reduced throughput and increased computational cost per simulation.
Using Measurements to Identify Inefficiency
Addressing these challenges requires detailed visibility into system behavior. Modern HPC environments generate extensive measurements that provide insight into both system performance and simulation usage. Hardware performance counters reveal low-level inefficiencies such as cache misses and memory stalls, while node-level measurements capture power consumption, temperature, and potential thermal throttling effects. System-wide profiling tools further expose patterns of contention and imbalance across compute, memory, network, and storage subsystems.
When combined with usage data such as job scheduling patterns, license concurrency, and time-based demand, these measurements provide a comprehensive view of how resources are being consumed. They enable the identification of idle compute capacity, peak-time bottlenecks, and misaligned workloads that may not be visible through conventional monitoring approaches. In many cases, inefficiencies are not obvious because they are distributed across multiple components of the system, making data-driven analysis essential for uncovering lost performance.
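As a minimal illustration of this kind of collection, the sketch below samples a few node-level signals with the psutil library. It is a stand-in for production telemetry; real deployments would add hardware performance counters, scheduler logs, and license-server data as described above.

```python
# Node-level utilization snapshot, sketched with psutil as a stand-in
# for production HPC telemetry. Each sample can later be correlated
# with job scheduling and license-usage records.

import time
import psutil

def snapshot() -> dict:
    """Sample a few utilization signals for later analysis."""
    io = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "ts": time.time(),
        "cpu_pct": psutil.cpu_percent(interval=1),  # 1 s sampling window
        "mem_pct": psutil.virtual_memory().percent,
        "disk_read_bytes": io.read_bytes,
        "net_sent_bytes": net.bytes_sent,
    }

history = [snapshot() for _ in range(3)]  # collect a short time series
print(history[-1])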
Optimization Actions and System-Level Improvements
Once inefficiencies are identified, targeted optimization strategies can be applied to improve overall system performance. These actions may include refining CPU allocation to increase core efficiency, improving cooling systems to eliminate thermal throttling, and resolving I/O bottlenecks through optimized data layout and access patterns. Adjustments to job schedulers can better align workloads with available resources, while improved license management can ensure that software constraints do not unnecessarily limit throughput.
In some situations, analysis may indicate that additional HPC capacity is required. However, this decision must be based on a clear understanding of which resource is limiting performance. Adding compute nodes to a system constrained by memory bandwidth, storage I/O, or network performance will not improve throughput and may increase operational costs without delivering meaningful benefits. Conversely, when resources are well-balanced and fully utilized, expanding capacity can provide measurable improvements in performance and productivity.
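A hedged sketch of that decision rule follows: identify the saturated resource before expanding capacity. The utilization figures and thresholds below are assumptions, not measured values.

```python
# Sketch of the capacity decision above: find the dominant constraint
# before buying hardware. Metric names, figures, and thresholds are
# illustrative assumptions.

utilization = {            # peak-window utilization, 0.0-1.0 (hypothetical)
    "cpu": 0.58,
    "memory_bw": 0.97,
    "storage_io": 0.71,
    "network": 0.44,
}

bottleneck = max(utilization, key=utilization.get)
if utilization[bottleneck] > 0.90 and min(utilization.values()) < 0.70:
    print(f"Imbalanced: address '{bottleneck}' first; more nodes won't help.")
elif min(utilization.values()) > 0.85:
    print("Balanced and saturated: capacity expansion is justified.")
else:
    print("Headroom remains: tune scheduling and workload placement.")
```

In this hypothetical snapshot, memory bandwidth is the binding constraint while CPU and network sit largely idle, so adding compute nodes would raise cost without raising throughput.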
Commercial HPC: Performance and Cost Optimization
While Formula One provides a uniquely constrained example, the underlying principles apply directly to commercial HPC environments. In these settings, the primary constraint is not regulation, but cost. HPC systems represent significant capital and operational investments, including hardware acquisition, energy consumption, maintenance, and software licensing. As a result, the optimization objective extends beyond performance alone.
Commercial organizations must balance two goals: completing required simulations as quickly as possible and reducing the cost of HPC resources needed to achieve those results. For a fixed number of simulations, improving system efficiency can reduce compute time, lower energy usage, and decrease licensing costs. In many cases, better resource balancing can enable the same workload to be completed with fewer computational resources, directly reducing overall expense.
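A back-of-the-envelope sketch of this trade-off, with hypothetical figures throughout; the runtime-metered license rate is an assumption.

```python
# Back-of-the-envelope sketch of the cost argument above. All figures
# are hypothetical; real inputs would come from metered usage and
# license data.

SIMS_PER_MONTH = 400
NODE_HOURS_PER_SIM = 50.0
COST_PER_NODE_HOUR = 1.20      # energy plus amortized hardware, in EUR
LICENSE_COST_PER_HOUR = 0.70   # assumes runtime-metered licensing

def monthly_cost(efficiency_gain: float = 0.0) -> float:
    """Monthly spend if node-hours per simulation shrink by the gain."""
    hours = SIMS_PER_MONTH * NODE_HOURS_PER_SIM * (1.0 - efficiency_gain)
    return hours * (COST_PER_NODE_HOUR + LICENSE_COST_PER_HOUR)

base, improved = monthly_cost(), monthly_cost(0.20)  # 20% fewer node-hours
print(f"{base:,.0f} EUR -> {improved:,.0f} EUR "
      f"({base - improved:,.0f} EUR saved per month)")
```

Under these assumed figures, a 20% reduction in node-hours per simulation saves 7,600 EUR per month on the same workload, the commercial mirror image of squeezing more useful runs out of a fixed regulatory allocation.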
This introduces an important parallel with Formula One. In a regulated environment, inefficiency reduces the number of simulations that can be performed within a fixed budget. In a commercial environment, inefficiency increases the cost required to perform a given number of simulations. Although the constraints differ, the underlying challenge is the same: ensuring that HPC resources are used as effectively as possible.
A Data-Driven Approach to Resource Optimization
The common solution across both domains is a disciplined, data-driven approach to resource management. HPC systems must be continuously measured and analyzed to understand how resources are being used and where limitations arise. Bottlenecks must be identified and addressed, workloads must be aligned with system capabilities, and decisions regarding capacity expansion must be based on clear evidence rather than assumption.
By establishing feedback loops between system measurements and operational decisions, organizations can incrementally improve both performance and efficiency over time. This approach enables more effective use of existing infrastructure, reduces unnecessary cost, and ensures that any additional investment in HPC resources delivers meaningful value.
Conclusion
The experience of elite motorsport engineering demonstrates that when computational resources are treated as finite and measurable, optimization becomes both necessary and achievable. By focusing on improving resource utilization rather than simply increasing capacity, organizations can increase simulation throughput, reduce cost per solution, and make more effective use of their HPC investments. The same principles that enable competitive advantage in a constrained racing environment can be applied broadly to deliver more efficient and cost-effective simulation across a wide range of industries.
Attending NAFEMS DACH 2026? Make your time count. Reach out ahead of the event or message us on LinkedIn to discuss improving simulation governance and efficiency.
Connect with Our Team
Deputy Director – Head of Business Development DACH, Innovation Norway (Open iT Partner)
