Description
Over the last 25 years or so, I’ve often been tasked with helping optimize the performance of various solutions. I keep careful records of all the work I do, and I recently decided it would be a good idea to collect all my thoughts/experiences around performance.
I’m actually not only collecting notes on performance; rather, I’m trying to categorize all my notes. Unfortunately, most of my notes are in old notebooks that fill several shelves at my house. I’d like to get some of the notes organized and digitized so that I can make use of them in my consulting/teaching. I expect this will take many years…
One idea I had was to organize the material around architectural viewpoints based on views that I’ve developed over the years (primarily using the IEEE 1471-2000 framework). For some of the viewpoints I literally have tens of thousands of notes, so I thought I’d tackle some of the smaller viewpoints first. So… for my next several blog entries, I’ll extract notes on performance.
To make these notes useful to others, it is probably best if I first discuss some of the topics that I take for granted. These are issues that my notes will not discuss because they are principles I consider given (or immutable, if you’d like).
What influences performance?
In my experience, there are only a few things that are worth focusing on:
- Algorithms and data structures. E.g.,
  - Simple algorithmic improvements:
    - Bubble-sort versus quick-sort (see the first sketch after this list)
    - Graph-search algorithms
  - SQL query optimization:
    - Indexes (see the second sketch after this list)
    - Proper use of the database’s expressive power (SQL)
- Locality of reference
  - Caching (see the third sketch after this list)
  - Memory usage
  - Taking advantage of immutable knowledge
- Processing power
  - Scaling by adding hardware or reassigning resources
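To make the algorithmic point concrete, here is a minimal Python sketch (my illustration, not from my notes) that pits a textbook bubble sort against Python’s built-in Timsort on the same data; the data size is arbitrary.

```python
import random
import time

def bubble_sort(items):
    """O(n^2) comparison sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

data = [random.random() for _ in range(5_000)]

start = time.perf_counter()
bubble_sort(data)
print(f"bubble sort:   {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
sorted(data)  # built-in Timsort, O(n log n)
print(f"built-in sort: {time.perf_counter() - start:.3f}s")
```

Even at a modest 5,000 elements the quadratic algorithm loses by orders of magnitude; no amount of micro-tuning the inner loop would close that gap.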
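For the database items, the next sketch uses Python’s standard sqlite3 module and a hypothetical orders table (the table and column names are made up for illustration) to show how an index turns a full-table scan into an indexed search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

# Without an index, this query must scan the whole table.
query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
print(conn.execute(query).fetchall())  # plan reports: SCAN orders

# An index turns the linear scan into a logarithmic lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute(query).fetchall())  # plan reports: SEARCH ... USING INDEX
```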
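And for caching immutable knowledge, a sketch using functools.lru_cache; the function, its names, and the sleep standing in for real work are all hypothetical.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def route_distance(src: str, dst: str) -> float:
    """Hypothetical expensive pure computation (e.g., a shortest-path query).

    Because the underlying network is effectively immutable, the answer for
    a given (src, dst) pair never changes and is safe to cache.
    """
    time.sleep(0.1)  # stand-in for the real work
    return float(abs(hash((src, dst))) % 1000)

start = time.perf_counter()
route_distance("A", "B")  # computed: ~0.1 s
print(f"first call:  {time.perf_counter() - start:.4f}s")

start = time.perf_counter()
route_distance("A", "B")  # served from the cache: microseconds
print(f"second call: {time.perf_counter() - start:.6f}s")
```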
Process of performance engineering
Make sure you understand what the desired performance characteristics of the system you are trying to optimize are. Sometimes total throughput matters most, sometimes responsiveness matters most, and sometimes various scenarios have very different performance requirements. In other words, make sure you understand the problem before you start optimizing.
One of the most common errors I’ve seen is premature and blind performance optimization. In other words, don’t optimize before you know where you have a problem. Some performance optimization techniques come at a high cost from other architectural perspectives (such as reuse, maintainability, continuity, …).
By premature optimization I mean performance optimizations that are suggested before we even know we have a problem: blanket statements made by someone in the organization that are then blindly applied across the organization. E.g., “getters introduce function-call overhead and hence should never be used; let’s make the data members directly accessible to the client.” Such a decision (apart from being wrong in many environments) has a high cost and is often not very effective.
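As a rough illustration of why such blanket rules rarely pay off, here is a hedged Python micro-benchmark sketch (the class and numbers are mine, purely illustrative): the per-access cost of a getter is measured in nanoseconds, which is noise next to the multi-second problems discussed below.

```python
import timeit

class Point:
    def __init__(self):
        self._x = 1.0

    @property
    def x(self):  # the "getter", with its function-call overhead
        return self._x

p = Point()
direct = timeit.timeit("p._x", globals=globals(), number=1_000_000)
getter = timeit.timeit("p.x", globals=globals(), number=1_000_000)
print(f"direct: {direct:.3f}s, getter: {getter:.3f}s per million accesses")
# The difference amounts to tens of nanoseconds per access: irrelevant
# next to a multi-second database query.
```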
By blind performance optimization I mean: DON’T optimize without profiling the application. More often than not, the ones optimizing are optimizing the wrong thing (e.g., who cares if you can save 1 millisecond by eliminating a method call when the real problem is that you are spending several seconds in some database query…).
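As one concrete way to find where the time actually goes, here is a minimal profiling sketch using Python’s standard cProfile and pstats modules; the slow_query function is a hypothetical stand-in for a database round trip.

```python
import cProfile
import pstats

def slow_query():
    # Hypothetical stand-in for an expensive database round trip.
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

def handler():
    # The request handler we suspect is slow.
    return [slow_query() for _ in range(3)]

profiler = cProfile.Profile()
profiler.runcall(handler)
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)  # top 5 by cumulative time
```

The report immediately attributes nearly all of the time to slow_query, which is exactly the information blind optimization lacks.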
Let me illustrate the last point with an example (an old note from my notebook). Many years ago I was working on a large-scale distributed real-time system. The system had an extensive framework on which we built a large set of specialized applications for various clients, and it had serious performance problems. When an operator pressed a navigation button to move to another view, it took seconds to reach the next screen.
The framework team was called in to an all-hands effort to improve the performance. The performance work went on for many months, and the engineers optimized based on a set of principles set forth by one of the most experienced engineers in the company. Even so, progress was slow. After about 9 months, they had managed to improve performance by roughly 20%. Some of the performance optimizations degraded the architectural structure (you can often steal CPU cycles by collapsing architectural layers).
My team was eventually also called in (we were working on applications, but we had a good reputation and the framework team was getting desperate). Debugging the application was rather cumbersome: at that time we were using an in-circuit emulator (ICE) that very few engineers in the organization had mastered. On my team, one of the engineers had worked on the design of the ICE and knew it inside and out. Using the ICE and a profiling tool, he and I sat down and started to narrow down the areas where the time was being spent. This was probably the first time the problem had been attacked this way. To make a long story short, after a few hours we had made the system 40 times faster. A day later, the system ran 250 times faster. After a week, the system ran many thousands of times faster.
How did we do that? It turned out the problem was in a few lines of code. It was an algorithmic problem (an unnecessary multiplication in a loop in the display driver); a sketch of the pattern follows below. There was nothing we did that the other engineers would not have been able to see, nothing particularly clever, no magic. It’s more like finding your car in a large parking garage… but we ‘cheated’ by turning the lights on.
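The sketch below is a hypothetical Python rendition of that pattern (the original was low-level driver code, and the names here are mine): the row-offset multiplication is loop-invariant, so it can be hoisted out of the pixel loop.

```python
WIDTH, HEIGHT = 1024, 768

def fill_row_slow(framebuffer, y, value):
    # The row offset y * WIDTH is recomputed on every single iteration.
    for x in range(WIDTH):
        framebuffer[y * WIDTH + x] = value

def fill_row_fast(framebuffer, y, value):
    # Hoist the loop-invariant multiplication out of the loop.
    base = y * WIDTH
    for x in range(WIDTH):
        framebuffer[base + x] = value

fb = bytearray(WIDTH * HEIGHT)
for y in range(HEIGHT):
    fill_row_fast(fb, y, 255)
```

On hardware of that era, where the multiply was expensive and the loop ran for every pixel of every screen update, removing it from the inner loop is the kind of change that produces the orders-of-magnitude gains described above.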
Summary
This is the introductory article in a series focusing on performance optimization. The idea is to collect some of my notes from my work since leaving school. This first article establishes some basic principles that I will assume known in the articles to follow. Some of these principles are:
- Don’t optimize without understanding the performance goals
- Don’t optimize without a profiler (or some way of measuring performance)
- Performance comes foremost from algorithms, locality of reference, and computing power