Is Clean Code Really That Bad For Performance?


Hello everybody, I'm Nick, and in this video I will address some of the comments that I get all the time about the refactoring that I apply to my code. Creating classes, instantiating objects and extracting methods are common practices when you're refactoring code, but I've had many questions about the performance implications of such operations. In this video I will address the two major ones.

Don't forget to comment, like and subscribe :)


#code #performance #cleancode
Comments

Hey everybody! Looking at the initial example (with the nested loops) again, I can see how it might not look "cleaner" to everyone, especially since I made a mistake while extracting and over-extracted the second method, which in hindsight wasn't needed. That was only done so we'd have at least two methods extracted for demo purposes. The point of extracting methods into smaller logical pieces still stands in spite of that, but the example I picked was not a great one. Sorry for that.

nickchapsas

Nice video! Just keep in mind that spaghetti code is bad, but too many abstractions can be a pain to maintain as well. Sometimes a nested loop can be a lot easier to understand than code so abstract that you need a debugger just to understand what's going on. I think simpler comes before cleaner (usually clean code is simpler, so I'm not saying you should stop writing clean code).

rafaelgil

You know what's really slow? Troubleshooting a customer-impacting bug, in spaghetti code, in prod.

VasiliSyrakis

Clean code to me has always been about maintainability and readability, especially when I know other developers may have to support my projects or may look at my code for inspiration on solving a problem they're having... good to know it's not affecting performance all that much.

ajcroteau

Thanks Nick for the next epic coding episode, amazing as usual. I would like to add something from my experience: when we optimised the code manually, we managed to save milliseconds, but thanks to the team's openness to refactoring we were able to improve the algorithms and save minutes. Rename and Extract Method are the first two refactorings that are essential for encouraging developers to touch the code "around" their task, the code they supposedly "should not touch".

bodek

Always strive to write clean (i.e. easy to maintain) code. If it turns out to be slow when testing, use a profiler to find the bottleneck(s) and optimise those. Unless you've been doing this for decades, your intuition about where the bottleneck is will probably differ from what the profiler finds. In that case the profiler will be right.
This is the case for compiled languages like C, C++, ... but I assume the same holds for code going through a just-in-time compiler.
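
For what it's worth, in .NET that kind of measurement might look something like the BenchmarkDotNet sketch below (the class, method names and numbers are invented here for illustration), comparing an inlined loop against the same work going through a small extracted method:

using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ExtractionBenchmarks
{
    private readonly int[] _numbers = Enumerable.Range(0, 1_000).ToArray();

    [Benchmark(Baseline = true)]
    public int Inlined()
    {
        var sum = 0;
        for (var i = 0; i < _numbers.Length; i++)
        {
            sum += _numbers[i] * 2;
        }
        return sum;
    }

    [Benchmark]
    public int Extracted()
    {
        var sum = 0;
        for (var i = 0; i < _numbers.Length; i++)
        {
            sum += Double(_numbers[i]); // a tiny method the JIT is free to inline
        }
        return sum;
    }

    private static int Double(int value) => value * 2;
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ExtractionBenchmarks>();
}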

gvanvoor

Hey Nick, great videos! Sometimes I watch your videos just for motivation. You really seem to love coding. That's very encouraging! All the best! Rob

robertkomarek

A good demonstration, and I fully agree with the points made in the video. In real-world scenarios, don't think about whether or not you should extract the method (spoiler: you should!). Instead, if you have to optimize something, think about the runtime complexity of your code (O-Notation) and about smart caching strategies when working with any type of I/O (disk/network/...). Use the right data structure for the right job and have some understanding of their key characteristics. Avoid doing useless work, in particular doing it multiple times. Apply data filters as early as possible. And always always always MEASURE with a profiler, don't guess.
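
As a rough sketch of a couple of those points (the names and numbers here are invented, not from the video): pick the data structure that matches the access pattern, and apply the cheap filters first.

using System;
using System.Collections.Generic;
using System.Linq;

public static class LookupExample
{
    public static void Main()
    {
        var ids = Enumerable.Range(0, 10_000).ToList();

        // List.Contains is O(n) per lookup; inside a loop over all ids it becomes O(n^2).
        var slowHits = ids.Count(id => ids.Contains(id * 2));

        // A HashSet answers the same membership question in O(1) on average.
        var idSet = new HashSet<int>(ids);
        var fastHits = ids.Count(id => idSet.Contains(id * 2));

        // Filtering early keeps the more expensive stages working on less data.
        var processed = ids.Where(id => id % 2 == 0)   // cheap filter first
                           .Select(id => id * id)      // heavier work only on the survivors
                           .ToList();

        Console.WriteLine($"{slowHits} {fastHits} {processed.Count}");
    }
}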

AlanDarkworld

@Nick I'm glad you made this video, as you do come across as being somewhat performance-obsessed, and while that definitely is needed sometimes, I think in the vast majority of situations it is more important to write clean code. As you have very clearly demonstrated here, the two are not at odds. Inexperienced developers commonly fall into the trap of premature optimization at the expense of clean code, but what a lot of them fail to grasp is this: when it comes to optimization, the compiler is usually better than you! Having spent the last few months working on some decidedly "unclean" code, let me say that it is not a nice experience to have to clean up someone else's mess. I have had to deal with "arrow code" that goes on for hundreds of lines, and violations of all of the SOLID principles. It is rather depressing to have to deal with that.

timlong

I'm from Brazil and I follow all the videos, very good!

iuryfranklin

Videos showing the implementation of principles while refactoring are definitely worth watching.

chefbennyj

Writing what's considered "clean code" almost never introduces performance problems. The problem here is the word "almost". If you are handling a few thousand messages per second on a somewhat higher abstraction level, you are most probably fine; do whatever makes your code easier to understand and maintain. That said, if each of your messages contains a data structure storing a few thousand 3D points or measurement data of some kind that has to be processed in a non-trivial way, you are suddenly talking about handling a couple million "things" per second. That's where you probably should start to think a bit about the layout of your data in memory and what processing it actually means. Do you really need to access things via dynamic method calls, reflection or instance methods? Maybe you want to avoid inheritance, some composition patterns, or even start to unroll some loops? Do I really have to run a regex over 50 million strings, or can I maybe rely on bytes 17 and 18 having the values I look for? Given the stuff I work on, these cases are rare, but they exist. So it's worth knowing that there is at least some overhead involved with some of these practices and patterns.
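
A rough sketch of that last idea (the record format, offset and marker bytes are invented here): when the format is fixed, checking two bytes at known positions can replace running a regex over every message.

using System;
using System.Text.RegularExpressions;

public static class HotPathFilter
{
    // Matches records whose characters at positions 17 and 18 are 'A' and 'B'.
    private static readonly Regex Marker = new Regex(@"^.{17}AB", RegexOptions.Compiled);

    // General-purpose but comparatively expensive: regex over the decoded string.
    public static bool MatchesWithRegex(string record) => Marker.IsMatch(record);

    // Hot-path version: inspect the two fixed-position bytes without decoding or allocating.
    public static bool MatchesWithBytes(ReadOnlySpan<byte> record) =>
        record.Length > 18 && record[17] == (byte)'A' && record[18] == (byte)'B';
}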

lupf

In my experience (optimizing hot paths in video games), a few method extractions usually cost very little. What really kills your performance is cache misses that result from certain patterns of clean code: (a) the compiler cannot predict which area of memory will be needed next, (b) criss-crossing pointers (read: references), especially when the calling function doesn't know what area of memory the called function will need, and (c) unnecessary heap allocations; keep anything that is small and only needed temporarily on the stack at all times.
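
A minimal sketch of point (c), assuming C# as in the video (the buffer size and the work done are arbitrary): keep a small, short-lived scratch buffer on the stack with stackalloc instead of allocating a new array on the heap every call.

using System;

public static class StackBufferExample
{
    // Squares the input values into a scratch buffer and sums them.
    public static int SumOfSquares(ReadOnlySpan<int> values)
    {
        // Small, short-lived scratch space lives on the stack; only unusually
        // large inputs fall back to a heap allocation.
        Span<int> squares = values.Length <= 128
            ? stackalloc int[128]
            : new int[values.Length];

        for (var i = 0; i < values.Length; i++)
        {
            squares[i] = values[i] * values[i];
        }

        var sum = 0;
        for (var i = 0; i < values.Length; i++)
        {
            sum += squares[i];
        }
        return sum;
    }
}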

RiversJ

So we can call it: microseconds wise, weeks-of-maintenance stupid.

adriankujawski

Your content is awesome. So well synthesized, structured and communicated. Thanks!

trocomerlo

If you're using the Unity game engine, you didn't always get this kind of performance from refactoring: it used to be significantly faster not to have method calls inside loops with lots of iterations. Not sure if that's still the case, since this was back around 2016. Before just accepting performance numbers from someone else, make sure you test with your own framework!
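
For example, a quick-and-dirty check (not a proper benchmark; the script name, iteration count and work are made up here) that could be dropped into a Unity scene to see what method calls in a hot loop cost on your own target runtime:

using System.Diagnostics;
using UnityEngine;

public class ExtractionTimingTest : MonoBehaviour
{
    private void Start()
    {
        const int iterations = 10_000_000;

        var sw = Stopwatch.StartNew();
        long inlineSum = 0;
        for (var i = 0; i < iterations; i++)
        {
            inlineSum += i * 2;
        }
        sw.Stop();
        UnityEngine.Debug.Log($"Inline: {sw.ElapsedMilliseconds} ms ({inlineSum})");

        sw.Restart();
        long extractedSum = 0;
        for (var i = 0; i < iterations; i++)
        {
            extractedSum += Double(i); // same work, but through a method call
        }
        sw.Stop();
        UnityEngine.Debug.Log($"Extracted: {sw.ElapsedMilliseconds} ms ({extractedSum})");
    }

    private static long Double(int value) => value * 2;
}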

NathanFranckGameDev

I work with IBM mainframes and was taught principles specific to the discipline of data processing back in the mid-80s (something that UK universities do not seem to teach, judging by the many tales I have heard of graduates having to be taught the right way of doing things). These are rules which were created in the days when the system you were working on might have kilobytes of memory to work with. One of the more basic rules is that you don't try to second-guess the operating system/compiler; another is that you keep as little data in play as possible. Writing a load of code to buffer I/O, for instance, is something that you would not do unless absolutely necessary for the function of the program. Aside from the fact that it adds complexity to the code, even on a PC-based system a read from a filesystem is going to return a minimum number of bytes (in the form of a block or sector), which forms an automatic buffer that the operating system will maintain for you. Coding your own buffering is therefore an unnecessary overhead that duplicates what the OS is already doing.

Working in COBOL, I have come across the argument many times over the years that using the GOTO statement and coding in a more monolithic fashion rather than using a structured procedural approach improves program efficiency. The reality is that the compiler's optimisation blows this argument out of the water. I am one of those people who are perverse enough to have used the IBM compiler's "LIST" option to output the generated assembly language code and be able to understand what it has written. I have seen instances of the compiler moving blocks of code around in order to generate more efficient object code. For example, if there is a procedure called from one place in the program, the compiler can inline the procedure at the calling point in order to remove the call. Attempting to interfere with what the compiler is doing by de-structuring the code in a manner which appears logical to the programmer risks altering the optimiser's decision making process in a negative way with the added expense that the code is less maintainable/understandable.

What you have proven here is that these decades old rules still work. The correct way of doing things is to write for structure, readability and maintainability, particularly in a commercial environment where the code will potentially be operated and maintained for many years by people other than yourself. As long as you are writing as clearly and efficiently in the language you are using as you can, it is best just to trust the compiler and operating system to do their jobs as efficiently as they can.
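
The .NET equivalent of that inlining observation (a sketch, not something shown in the video) is that the JIT routinely inlines small extracted methods on its own, and a hint exists for a hot path you have actually measured, so there is rarely a reason to de-structure the source by hand:

using System.Runtime.CompilerServices;

public static class PriceCalculator
{
    public static decimal TotalWithTax(decimal net)
    {
        // Written for readability; the call below typically disappears after JIT inlining.
        return net + TaxFor(net);
    }

    // The attribute is only an opt-in hint for a measured hot path; small methods
    // like this are usually inlined without it.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    private static decimal TaxFor(decimal net) => net * 0.2m; // hypothetical flat rate
}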

johnbarleycorn_

It's very subjective whether something is slower just because methods are used. In C/C++ there can be a performance hit, but ultimately it comes down to whether it matters or not. For average business apps, a single call that uses a few methods is not going to cause issues. But if you use that algorithm in a method looping over 1M records, then yeah, you may face issues. Even then it still depends. I love your demo of the process of benchmarking, so you can really determine for sure whether you're affecting performance. It's a really clean way to check. Thanks for the video.

stephenyork

Wow, this popped up right after Casey's "clean code, terrible performance".

arphenti

I've always wondered whether refactoring makes my apps slower, and you answered it. You've got my like and subscribe. This is really helpful, thanks!

kblyr