When designing systems that need to scale, remember that better resources can yield better performance. However, it is essential to quantify how much better. We also need to choose between reducing the per-request processing time (being able to handle more work, if required) and improving the quality of the results through more specialized (and resource-demanding) solutions.

Let’s talk about Gustafson’s law.

Hello, Gustafson’s law

I think the best way to understand Gustafson’s law is by introducing a simple scenario.

Let’s assume we are implementing a face recognition solution. The business constraint is that we need to recognize people in “real time.” To do that, we need to strike the right balance between the number of frames processed per second and the level of per-frame computation we can support.

What would be the impact of improving the image processing cluster? What would be the impact of enhancing the face recognition algorithm? The easiest way to answer these questions is with Gustafson’s law formula.

The Formula (from Wikipedia)

Gustafson’s law formula is:

Slatency = (1 − p) + s × p

Where:

  • Slatency is the theoretical speedup in latency of the execution of the whole task;
  • s is the speedup in latency of the part of the task that benefits from the improved resources;
  • p is the percentage of the whole task’s execution workload that corresponds to the part benefiting from the improved resources, measured before the improvement.

Let’s consider, for example, that in the previous scenario we could improve the performance of the face recognition algorithm by 30% (s = 1.3). Also, let’s assume that recognizing faces corresponds to 15% (p = 0.15) of the execution workload. Then Slatency = (1 − 0.15) + 1.3 × 0.15 = 1.045, so the potential overall improvement would be 4.5%.
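The calculation above is simple enough to sketch in a few lines of Python (the function name and parameters are mine, not from any library):

```python
def gustafson_speedup(p: float, s: float) -> float:
    """Theoretical overall speedup: Slatency = (1 - p) + s * p.

    p -- fraction of the workload that benefits from the improvement
    s -- speedup factor of that part (a 30% improvement means s = 1.3)
    """
    return (1 - p) + s * p

# Face recognition scenario: 15% of the workload sped up by 30%.
speedup = gustafson_speedup(p=0.15, s=1.3)
print(f"overall improvement: {(speedup - 1) * 100:.1f}%")  # 4.5%
```

Playing with p makes the lesson obvious: speeding up a part that represents only a small fraction of the workload yields a small overall gain, no matter how impressive the local improvement is.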

An important side-effect

Gustafson’s law takes into account that, as computational power increases, the extra resources can be used to process larger problems in the same time frame. In our scenario, we could accept higher-resolution pictures or capture more frames, improving the precision of the face recognition solution.

In simple words, the idea is to size problems to fully exploit the computing power that becomes available as resources improve. If faster equipment is available, it can be used to solve larger problems within the same time.
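This scaled-workload view can be sketched the same way (again, an illustrative helper of my own): if the improvable part of the task gets n times the resources, the workload you can complete in the original time grows to (1 − p) + n × p of the original.

```python
def scaled_workload(p: float, n: float) -> float:
    """Workload completed in constant time, relative to the original,
    when the improvable fraction p of the task gets n times the resources."""
    return (1 - p) + n * p

# If face recognition is 15% of the work and we give it 4x the compute,
# we can handle roughly 1.45x the original workload in the same time,
# e.g. higher-resolution frames or more frames per second.
print(f"{scaled_workload(p=0.15, n=4):.2f}x")  # 1.45x
```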

Sizing the cloud

Gustafson’s law is wonderfully applicable to the cloud computing environment, where it is possible to provision computing resources on demand to process larger data sets with a latency that meets the business needs. That means that as an organization’s problem size grows, more compute power can be provisioned to perform processing and analysis in roughly constant time.
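For capacity planning, the relationship can be inverted: given a target workload growth, solve (1 − p) + n × p = w for the resource multiplier n on the scalable part. A minimal sketch, with an assumed helper name of my own:

```python
def resources_for_workload(p: float, w: float) -> float:
    """Resource multiplier n on the improvable part of the task needed so
    that a workload w times the original still finishes in the same time:
    solves (1 - p) + n * p = w for n."""
    return (w - (1 - p)) / p

# To process twice the data in the same time when only 15% of the task
# scales with added resources, that part needs about 7.7x the compute.
print(f"{resources_for_workload(p=0.15, w=2.0):.1f}x")
```

Note how quickly the required multiplier grows when p is small: the fixed 85% of the task caps how far extra provisioning can take you, which ties directly into the cloud cost questions below.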

Takeaways

I would like to finish this post by proposing some intriguing questions that you should try to answer with Gustafson’s law in mind:

  • What resources could impact the performance of your application? Also, how relevant is each of them (what percentage of the whole task’s execution workload corresponds to the part that would benefit from the improvement)?
  • What would be the best strategies to maximize the benefits of the additional computation power? How could you extract more value?
  • What would be the relationship between the cloud cost structure and the potential improvements to your application?

More posts in "Architecting for Scale" series:
