"Premature optimization is the root of all evil" — Donald Knuth said that, right? He did, but he is usually quoted out of context. His point was that for roughly 97% of the code, premature optimization is not worth it. You can access the full paper the quote is taken from here. He said it in the context of using ugly constructs (he was referring to GOTO statements), and he pointed out that statistically the hot part of a program is concentrated in about 3% of the code. His full statement was: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%." So he does not say to stop optimizing (don't forget that "premature" is a loaded word, carrying a negative connotation by itself) but the reverse: optimize the code that is hot.
Based on this, I have found many misconceptions regarding optimization, and here is my view on them (from the most important to the weakest):
1 - "You should not optimize the loading time of your game/application; it happens just once, and after that the application runs fast/properly." There is some truth to this statement: if you watch a movie, you should not care whether the movie player starts in 0.1 seconds or 2 seconds. But what if the movie player starts in 30 seconds? Would you use that player to watch a 2-minute clip? Many developers have quite powerful machines (SSDs, 4 cores, at least 8 GB of RAM), but their application will reach users who do not have these cool components, and for them the application will run visibly slower.
2 - "Redesigning our architecture will improve performance by 4x, while optimizing the current design gives us only a 2x speedup, so there is no need for small optimizations." Very often that 2x speedup would mostly carry over into the new design anyway, and architecture redesigns are very expensive (in developer time) to do. Let's say the company switches from some SQL server to NoSQL while the application logic stays the same. Assume that jumping from MySQL to MongoDB makes queries 4x faster, and that queries take 2/3 of the time of the entire system. The final speedup will be exactly 2x. Now suppose instead the company optimizes only the part which is not related to SQL (the remaining 1/3 of the time) and makes it 2x faster: the entire system becomes 20% faster. That is less, but customers get a tangible benefit next week or next month, not after the months a migration takes, when maybe they have already quit as potential clients. Even more, when the migration from MySQL to Mongo lands on top of that optimization, the system will work 3x faster. And real life has pathological cases: for some customer Mongo may actually run 20% slower than MySQL after migration, but because the non-SQL optimization is already done, that customer's system still runs about as fast as the original one (roughly 3% faster, in fact) instead of 13% slower. There is a lot of back-of-the-envelope math here, but it simply shows that it never hurts to have small optimizations done.
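The arithmetic in the point above is just Amdahl's law applied a few times. A minimal sketch, using the fractions and speedup factors assumed in the text (they are illustration numbers, not measurements):

```java
// Back-of-the-envelope Amdahl's law calculator for the MySQL/MongoDB scenario.
public class Amdahl {
    // fractionAffected: share of total runtime the change applies to.
    // factor: how many times faster that part becomes (< 1 means slower).
    static double systemSpeedup(double fractionAffected, double factor) {
        double newTime = (1.0 - fractionAffected) + fractionAffected / factor;
        return 1.0 / newTime;
    }

    public static void main(String[] args) {
        // Queries take 2/3 of the time; MongoDB makes them 4x faster -> 2.00x.
        System.out.printf("DB migration alone: %.2fx%n", systemSpeedup(2.0 / 3.0, 4.0));
        // Optimizing only the non-SQL third by 2x -> 1.20x (20% faster).
        System.out.printf("Code optimization:  %.2fx%n", systemSpeedup(1.0 / 3.0, 2.0));
        // Both together: 2/3 of the time at 4x and 1/3 at 2x -> 3.00x.
        double both = 1.0 / ((2.0 / 3.0) / 4.0 + (1.0 / 3.0) / 2.0);
        System.out.printf("Both combined:      %.2fx%n", both);
        // Pathological case: Mongo 20% slower (time * 1.2) plus the 2x code win
        // -> still about 1.03x, i.e. slightly faster than the original system.
        double pathological = 1.0 / ((2.0 / 3.0) * 1.2 + (1.0 / 3.0) / 2.0);
        System.out.printf("Slow Mongo + opt:   %.2fx%n", pathological);
    }
}
```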
3 - "When I develop I don't need an optimized workflow, my machine is really fast": this is kind of true, but sometimes it is not. Many big applications take a long time to start, which again is kind of normal, for example they need to get updates from a server, and as a developer (because you have an SSD and 8 GB of RAM) you can afford to wait 10 seconds for the real data, at least while developing. But if you have to reproduce a bug, imagine that every second counts. It counts because it annoys and interrupts development. Especially if your build system takes minutes, you will notice that you wander off to a blog to read news and rants (like this blog) and lose track of which bug you were really working on. This is not necessarily your organization's fault; this is how the human mind works. This is essentially the first point ("you should not optimize loading time") but directed at developers. As a developer, make a one-step build if possible and keep your edit-test loop from doing crazily much stuff.
4 - "You don't need to optimize your code because the compiler knows better." For micro-optimizations like using a minimal number of variables, a compiler will indeed always do a better job than you, but as a general rule this is blatantly false. The compiler can optimize your code, but most of the time is not spent in the code it compiled: especially in managed languages (C# and the other .Net languages, Java, JavaScript) a lot of the running code lives inside libraries. Most compilers cannot optimize string concatenation, which is why Java rewrites + on strings to use StringBuilder: compilers simply don't work well with strings. Whenever your code reads from files, the compiler knows nothing about your file format, about duplicated data, or about the fact that you could read less data and rebuild the information. No compiler can know that you load the same image twice and should instead load it once and cache it, and so on. Even worse, if we assume your environment is perfectly optimized, it means that only your code remains the slow part.
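The "load it once and cache it" idea above is exactly the kind of thing no compiler can do for you, but a few lines of your own code can. A minimal sketch; the `ImageCache` class and its `loadFromDisk` helper are hypothetical names, and real code would decode an actual image format instead of fabricating bytes:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of caching a resource the compiler cannot know is loaded twice.
class ImageCache {
    private final Map<String, byte[]> cache = new HashMap<>();

    byte[] load(String path) {
        // computeIfAbsent hits the disk only on the first request for a path;
        // every later request for the same path is a cheap map lookup.
        return cache.computeIfAbsent(path, this::loadFromDisk);
    }

    private byte[] loadFromDisk(String path) {
        // Stand-in for real file reading and decoding.
        return path.getBytes();
    }
}
```

The design choice here is the classic memoization trade-off: you pay a little memory to avoid repeated I/O, and the caller does not even need to know the cache exists.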
5 - "I don't need to speed up my web service, I will just put it on Azure (replace this word with your cloud solution of choice)." Not sure about you, but a faster web service means simpler administration, because you need to spawn fewer instances, and smaller costs, even if improving the code has a bigger upfront cost.
6 - "You don't need to optimize allocation, the GC does it fast(er)." Did you measure this? A GC definitely has a quicker allocator than, let's say, the C++ one, but every time you do a "new" for a heap object, the object has to be zeroed, the allocation pointer moves, and a CPU cache line is made "dirty". If you have code that reads a file line by line with your own "read line" method (especially if you want to improve load-time performance, see point 1), you may expose a reactive interface, and instead of allocating a new buffer for every line, it looks to me like a fair design to just recycle the buffer. The speedups on the .Net side are fairly significant, and I would expect the same on Java. Allocating these small objects more seldom also makes the GC run less frequently.
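The buffer-recycling idea above can be sketched like this: one StringBuilder is reused for every line instead of allocating a fresh String per line. The `LineSink` callback is a hypothetical name for the "reactive" consumer mentioned in the point; this is a sketch of the technique, not a production reader:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

// Reads lines into a single recycled buffer instead of allocating per line.
class RecyclingLineReader {
    interface LineSink { void onLine(CharSequence line); }

    static void readLines(Reader source, LineSink sink) throws IOException {
        BufferedReader reader = new BufferedReader(source);
        StringBuilder line = new StringBuilder(256); // the recycled buffer
        int ch;
        while ((ch = reader.read()) != -1) {
            if (ch == '\n') {
                sink.onLine(line);   // consumer must not keep a reference
                line.setLength(0);   // recycle: reset length, keep capacity
            } else if (ch != '\r') {
                line.append((char) ch);
            }
        }
        if (line.length() > 0) sink.onLine(line); // last line without newline
    }
}
```

The usual trade-off of recycled buffers applies: the `CharSequence` passed to the sink is only valid during the callback, so a consumer that needs to keep the data must copy it explicitly.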
A somewhat wider problem is architecture redesign itself. Most companies I work with today use an Agile methodology, which is by nature incremental. This makes big architecture refactors almost impossible in large systems, and even when they happen, they are done by the most senior team members, the ones who know "the core" well. This means an architecture refactor can take not months but sometimes years, because you cannot risk breaking the ongoing iterations, so the code is prepared in many small steps to accomplish the redesign.
In conclusion, this post is not a plea to use GOTOs, which both Knuth and I would disagree with, but the idea is this: every time you can isolate (in a profiler) some slow code, optimize it now, not tomorrow. The later you do it, the more you will suffer for it in testing and the worse the application experience will be (and users will very often feel it too!).