why does inheritance suck?

You've probably heard this a few times when talking to your fellow programmer friends. "Gee Billy C++ polymorphism sure is slow, I hope Sally doesn't know that I use it!" But why is it so bad? In this video, we'll do a deep dive on what C++ Polymorphism is, what "virtual" does under the hood, and ultimately why it is SUCH a performance hit compared to languages like C and Rust.

Comments

I have a number of issues regarding this video:

First, the example given is an exceptionally poor choice. The number of unique cases is far too few for polymorphism to be the proper tool. With only "OP_ADD" and "OP_MUL" as outcomes, the code will compile into conditional jump statements. If there were more cases, the switch statement would likely be compiled into a jump table, which would have been a far better comparison to polymorphism. As it stands, the video was effectively comparing if-statements to function pointers for a minuscule number of cases.
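
A rough sketch of the contrast being described (the extra enum values beyond the video's two are hypothetical):

    enum Op { OP_ADD, OP_MUL, OP_SUB, OP_DIV, OP_MOD, OP_XOR };

    int apply(Op op, int a, int b) {
        switch (op) {                          // two cases: usually compare-and-branch;
            case OP_ADD: return a + b;         // many cases: usually a jump table, i.e. an
            case OP_MUL: return a * b;         // indexed indirect jump, much closer in spirit
            case OP_SUB: return a - b;         // to a vtable dispatch
            case OP_DIV: return b ? a / b : 0;
            case OP_MOD: return b ? a % b : 0;
            case OP_XOR: return a ^ b;
        }
        return 0;
    }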

Second, this explanation of how vtables are structured is based on the assumption that the vtable isn't embedded into the object. While I haven't encountered any compilers that embed it, the actual implementation of an object's vtable isn't standardized. The video can wrongly give the impression that it is.
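
A purely conceptual sketch of the common (but, again, not standardized) layout the video assumes, with made-up names:

    // NOT what the standard mandates; this mimics one typical implementation,
    // where each polymorphic object carries a single hidden pointer to a
    // per-class table shared by all instances of that class.
    struct FakeVTable {
        void (*execute)(void *self);   // one slot per virtual function
        void (*destroy)(void *self);
    };

    struct FakeObject {
        const FakeVTable *vptr;        // hidden pointer the compiler emits
        int left, right;               // the class's own data members follow
    };

    // A virtual call "obj->execute()" then roughly lowers to:
    //   obj->vptr->execute(obj);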

Third, in regards to why virtual methods are slow: while it is true that memory operations are slower than operations on registers, the biggest slowdown actually comes from the fact that you're doing a dynamic jump. Modern CPUs try to fill the execution pipeline with as many instructions as possible through out-of-order execution, looking ahead for independent chains of instructions. This is far easier to do when the control flow is static, and it works especially well when the branch predictor chooses the correct execution path. When a dynamic jump occurs, the processor basically has to stall until the target address has been loaded from memory.

Fourth, I must disapprove of the blanket "this is bad" approach the video takes. Polymorphism is just like any other tool in programming: there are situations where it is a good fit and situations where it isn't. When choosing which mechanism to use, one needs to weigh the benefits against the costs. The video shows polymorphism being used in an innermost loop, which is quite likely the worst-case scenario for it; there it simply isn't worth the overhead. If you wanted to stick with the calculator theme, heavier operations like square root, trigonometric functions, or logarithms would have made for a fairer comparison.

Lastly, the code presented at the start has some problems. Why are you calling atoi in the loop condition?! The call is pointlessly executed on every pass. Additionally, with optimizations enabled, the compiler might very well optimize away the entire body of the loop: if it inlines the operation code, it can see that only one case is ever true and that the operand assignments are pointless. That leaves an addition to a temporary variable that is only written and never read, and seeing that there are no side effects, the compiler may simply empty the loop body. Unless you looked at the assembly, you couldn't tell how aggressively the code was optimized and whether the tests were actually fair.
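
A minimal sketch of the two benchmarking fixes being asked for here (the loop body is a placeholder, not the video's code):

    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv) {
        // Parse once, outside the loop condition, instead of calling atoi on every pass.
        const long iterations = (argc > 1) ? std::atol(argv[1]) : 1000000L;

        long long sum = 0;
        for (long i = 0; i < iterations; ++i) {
            sum += i * 2;              // the operation under test would go here
        }

        std::printf("%lld\n", sum);    // using the result keeps the optimizer from deleting the loop
        return 0;
    }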

draconicepic

"It is important to note that by default, functions with the same signature in a derived class are virtual in the parent by default." What? No? This isn't Java.

Spirrwell

A non-virtual call is exceptionally fast, as it usually consists of a single instruction. On the other hand, a virtual call introduces an extra level of indirection, leading to the purported 20% increase in execution time. However, this increase is merely "pure overhead" that becomes apparent when comparing calls of two parameterless functions. As soon as parameters, especially those requiring copying, come into play, the difference between the two overheads diminishes. For instance, passing a string by value to both a virtual and a non-virtual function would make it challenging to discern this gap accurately.
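
A minimal sketch of that scenario (the type and function names are made up):

    #include <cstddef>
    #include <string>

    struct Logger {
        std::size_t sink = 0;
        virtual void log(std::string msg) { sink += msg.size(); }   // virtual: dispatched via the vtable
        void log_direct(std::string msg)  { sink += msg.size(); }   // non-virtual: direct call
        virtual ~Logger() = default;
    };

    void use(Logger &l, const std::string &s) {
        l.log(s);         // copies s, then one indirect call
        l.log_direct(s);  // copies s, then one direct call; the copies dwarf the dispatch difference
    }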

It's essential to note that the increased expense is primarily confined to the call instructions themselves, not the functions they invoke. The overhead incurred by function calls constitutes a minor proportion of the overall execution time of the function. As the size of the function grows, the percentage representing the call overhead becomes even smaller.

Suppose a virtual function call is 25% more costly than a regular function call. That additional expense pertains only to the call itself, not to the execution of the function, and it's essential to emphasize this point: usually, the expense of the call is much smaller than the overall cost of executing the function. Still, be cautious, because even though it may not always be significant, if you use polymorphism excessively, extending it to even the simplest functions, the extra overhead can accumulate rapidly.

In C++, and in programming in general, whenever there's a price, there's a gain, and whenever there's a gain, there's a price. It's that simple.

hugo-garcia

I get your point: virtual functions have a cost, and that's true.

But the video is not 100% honest. The C++ code can be extended just by creating a new type of operator, without touching the code that executes operations. The C code cannot: you must change the central switch and the enum. The virtual function has a cost, but it also offers functionality. Is the functionality worth the cost? Maybe yes, maybe no. Each case is different and must be evaluated.
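
A sketch of what that extensibility looks like (the class and function names here are mine, not the video's):

    #include <memory>
    #include <vector>

    struct Operation {
        virtual int execute(int a, int b) const = 0;
        virtual ~Operation() = default;
    };

    struct Add : Operation { int execute(int a, int b) const override { return a + b; } };
    struct Mul : Operation { int execute(int a, int b) const override { return a * b; } };

    // Added later, possibly in a different library, with no edits to anything above:
    struct Mod : Operation { int execute(int a, int b) const override { return b ? a % b : 0; } };

    // The "central" code never changes, unlike a C switch over an enum,
    // which must grow a new case for every new operation.
    int run_all(const std::vector<std::unique_ptr<Operation>> &ops, int a, int b) {
        int acc = 0;
        for (const auto &op : ops) acc += op->execute(a, b);
        return acc;
    }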

Also, C++ != virtual functions and inheritance. Just because the feature exists doesn't mean you must use it. They are tools in the toolbox, nothing more.

You also did a useless operation that forces the C++ code to go through the vtable, with the line "Operation *op = &add;". If you just called "add.execute();" directly, the compiler would know at compile time which function to call and would not go through the vtable. I understand you did that to keep the example to one page, but it could lead someone to think an example this simple would always use an overkill feature like the vtable. It makes C++ look dumber than it is.
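
A small self-contained illustration of that point (the signatures are made up, not the video's):

    struct Operation {
        virtual int execute(int a, int b) const { return 0; }
        virtual ~Operation() = default;
    };
    struct Add : Operation { int execute(int a, int b) const override { return a + b; } };

    int example() {
        Add add;
        int x = add.execute(2, 3);   // static type known: normally a direct, often inlined call

        Operation *op = &add;        // the indirection through a base pointer
        int y = op->execute(2, 3);   // dynamic dispatch, unless the optimizer proves op's type
        return x + y;
    }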

pheww

I've been coding a game engine from scratch for a few years now, and in most real scenarios I encounter, where functions perform actual work, the virtual call overhead is just negligible. I tend to avoid using virtuals in a hot path (functions that will run many times per frame), but I'm totally fine using them elsewhere.

This is how I dispatch game system update and render calls in my framegraph, for instance. My game systems must have state, and the engine can't know about the specifics of client-side game system implementations. All it knows is that some will modify a scene (update), and some will push render commands in a queue when traversing a const scene (render). So polymorphism is a good tool here: it makes the API clear enough, it makes development less nightmarish, the abstraction it introduces can be reasoned about, and the call overhead is jack shit when compared to execution times.
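
Roughly the shape of that dispatch (all names here are mine, not the actual engine's):

    #include <memory>
    #include <vector>

    struct Scene {};
    struct RenderQueue { /* render commands get pushed here */ };

    struct GameSystem {
        virtual void update(Scene &scene, float dt) = 0;                  // may mutate the scene
        virtual void render(const Scene &scene, RenderQueue &queue) = 0;  // only pushes commands
        virtual ~GameSystem() = default;
    };

    struct FrameGraph {
        std::vector<std::unique_ptr<GameSystem>> systems;   // client-supplied, engine-agnostic

        void frame(Scene &scene, RenderQueue &queue, float dt) {
            for (auto &s : systems) s->update(scene, dt);    // one virtual call per system per frame
            for (auto &s : systems) s->render(scene, queue);
        }
    };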

Guys, don't let people decide for you what "sucks" and what is "nice" or "clean". This is ideology, not engineering. Toy examples like this are *not* real code, measure your actual performances and decide for yourself.

tzimmermann

Mistake at 03:46: if a derived class has the same function as the parent BUT the parent function isn't marked virtual, it doesn't become virtual implicitly. The two functions coexist, and which one is called depends on the static type of the variable making the call: is it a base-class variable or a derived-class variable? Specifically, if the pointer is of type base*, you assign it the address of a derived-class instance, and you call that function, it will actually call the base class's implementation. This is a subtle C++ pitfall that can lead to bugs.
What you probably meant to say: if a base class marks a function as virtual, then it is implicitly virtual in any derived class. The derived class doesn't have to mark it as virtual explicitly, but for clarity it should.
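
A minimal illustration of the pitfall:

    #include <cstdio>

    struct Base {
        void greet() { std::puts("Base::greet"); }       // NOT virtual
    };
    struct Derived : Base {
        void greet() { std::puts("Derived::greet"); }    // hides Base::greet, does not override it
    };

    int main() {
        Derived d;
        Base *p = &d;
        p->greet();   // prints "Base::greet": resolved from the static type, no vtable involved
        d.greet();    // prints "Derived::greet"
        return 0;
    }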

sledgex

Remember to mark your polymorphic classes as final if they are the final derivation! That lets the compiler optimize virtual function calls into regular function calls in situations where no further-derived class could override them.
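
A quick sketch of the effect (hypothetical types):

    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };

    struct Circle final : Shape {                // final: nothing can derive from Circle
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159265358979 * r * r; }
    };

    double measure(const Circle &c) {
        return c.area();   // eligible for devirtualization: the dynamic type can only be Circle
    }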

Minty_Meeo

Cool video, but I couldn't stop staring at the CPU with the swick sunglasses

abaan

It's always worth noting that when Stroustrup says that C++ obeys the zero-overhead principle, that in NO way means that the abstraction itself is free. It's just that you couldn't hand-code it better yourself with less performance overhead to use that feature (ideally). If you use inheritance with virtual member function overrides, you *will* pay a cost, because if you care about performance 1. you should always measure it, and 2. you should be aware of exactly how it is implemented. Otherwise, don't solve the problem that way.

There are some cases where inheritance is quite applicable, but needless to say, it is not exactly cheap, and its depth should be minimized if it's used at all. And to quote Gang of Four, "Favor object composition over class inheritance".

metal

You can write OO code in C. C just doesn't do the heavy lifting for you. You can write your own dispatch tables with function pointers.
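
A hand-rolled dispatch table might look like this (a sketch in plain C that also compiles as C++; the names are mine, not GTK's or Xt's):

    #include <stdio.h>

    typedef struct Shape Shape;
    typedef struct ShapeOps {
        double (*area)(const Shape *self);     /* one slot per "virtual" function */
    } ShapeOps;

    struct Shape {
        const ShapeOps *ops;                   /* explicit dispatch-table pointer */
        double w, h;
    };

    static double rect_area(const Shape *s) { return s->w * s->h; }
    static const ShapeOps rect_ops = { rect_area };

    int main(void) {
        Shape r = { &rect_ops, 3.0, 4.0 };
        printf("%f\n", r.ops->area(&r));       /* the manual equivalent of a virtual call */
        return 0;
    }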

There are plenty of OO libraries for C. GTK+ and Xt are two examples.

The code doesn't look as nice, but OO is about how you organize and reason about your code, not about whatever syntactic sugar your language gives you.

jeffspaulding

80% to 90% of code execution time is spent in 10% to 20% of the code. If your code is so heavily performance-tuned that the remaining degradation is due to vtable lookups (which I highly doubt), then perhaps you can say "polymorphism sucks". For 99.999% of all programs out there, that's not the case, and polymorphism is a very useful feature that makes code simpler.

francoisgagnon

In the famous words of Chef "There's a time and place for everything, children. It's called college"

Mitch-xord

This video shows only a comparison in which the C++ virtual approach is at a disadvantage, doesn't count the disadvantages of the C switch approach, and uses that to arrive at the simplistic conclusion "virtual bad".

In the example shown in this video, the number of operations and the operations themselves are very small, so the comparison here is "dereference a virtual table and call a function" vs "directly call a slightly bigger function". But move to a more realistic context, with more derived classes and bigger methods, and the C approach now has a bigger switch that has to call other functions. That means two function calls in the C approach instead of one in the C++ approach, with all the overhead that entails (or you put the logic of every case into one function and end up with a massive switch). And THE CACHE MISS CAN HAPPEN IN THE C APPROACH TOO: the switch has to be loaded from memory as well, just like a vtable.
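
A sketch of that more realistic C-style shape (the operation names are made up):

    #include <math.h>

    enum Op { OP_ADD, OP_MUL, OP_SUB, OP_DIV, OP_POW, OP_LOG };

    static double op_pow(double a, double b) { return pow(a, b); }          /* bigger bodies live in their own functions */
    static double op_log(double a, double b) { return log(a) / log(b); }    /* log of a in base b */

    double apply(enum Op op, double a, double b) {
        switch (op) {                            /* the switch itself must be fetched from memory too */
            case OP_ADD: return a + b;
            case OP_MUL: return a * b;
            case OP_SUB: return a - b;
            case OP_DIV: return a / b;
            case OP_POW: return op_pow(a, b);    /* dispatch plus a second call */
            case OP_LOG: return op_log(a, b);
        }
        return 0.0;
    }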

The C switch has the advantage of being easier to optimize and can be faster, but you have to know and implement the call for every type of operation in the central dispatching code. This means it can't be used in situations like extending the functionality of an already compiled library.

And lastly, which compiler and flags were used?

TheSimoriccITA

I teach college C/C++, and I think the most important part of introducing C++ is explaining the problems the language was written to solve. They were problems of large-scale software development, not of writing faster algorithms: projects were becoming really hard to manage, with library name conflicts, simultaneous changes to common code stepping on each other's toes, human problems like that.

Polymorphism lets developers build on top of each other's code in a way plain imperative code can't support. Someone could subclass your calculator, add/remove/change operators, and pass the new calculator into existing calculator-using code with no change to your calculator class; the only change is instantiating a different class in the code that depends on your calculator.
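
A sketch of what that looks like (all names here are hypothetical):

    #include <cmath>
    #include <string>

    struct Calculator {
        virtual double apply(const std::string &op, double a, double b) {
            if (op == "+") return a + b;
            if (op == "*") return a * b;
            return 0.0;
        }
        virtual ~Calculator() = default;
    };

    // Written later, by someone else, without touching Calculator:
    struct ScientificCalculator : Calculator {
        double apply(const std::string &op, double a, double b) override {
            if (op == "^") return std::pow(a, b);      // new operator
            return Calculator::apply(op, a, b);        // everything else unchanged
        }
    };

    // Existing calculator-using code keeps working; only the instantiation changes:
    double evaluate(Calculator &c) { return c.apply("^", 2.0, 10.0); }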

jonbezeau

You explained very nicely why polymorphic code is slower, but it sucks only if you need to squeeze every last bit of performance out of the hardware for a specific task, which is rarely the case. The most common scenario is that something (I mean, most of the code) is waiting for something slower, meaning it has more than enough time not to create any measurable lag. But yes, when you're optimizing a tight loop and every cycle matters, then good old C is the way to go. Even then it would be pretty tricky to provide the right example. In my practice, if I have a tight loop and I want to save every cycle, I just avoid calling anything and inline as much as possible. That can happen inside a C++ overridden method, but it doesn't matter as long as I don't call other methods from that tight loop. Sometimes I mix C with C++, where C++ handles the general app logic and C does the time-critical things. The common C++ usage is for tasks too tedious to code in C and usually not time-critical, like UI and networking: the code waits "ages" for something to happen, then you invoke a C function that does the heavy lifting, pushing gigabytes of data, because the user just moved the mouse or clicked a button.

Adam_Lyskawa

I'd say that polymorphism doesn't necessarily suck; poor use of polymorphism sucks. If your virtual function takes only a couple of cycles to complete, then it might not be a good idea to use polymorphism, but when it takes a couple of milliseconds, the slowdown caused by the extra lookup is negligible.

MyManJohnny

Incredibly misleading. You're not comparing the same things. Full-fledged polymorphism with virtual calls cannot be compared to a switch over an enum. If you were comparing the C++ to a function pointer call in C (with the destination of the pointer compiled in a separate TU and the program built without link-time optimisation), that could at least be considered "comparable".

givememorebliss

This video is misleading. Even if the vtable overhead affects the runtime of a time-sensitive program, this video misses the point of polymorphism to begin with. In most cases where dynamic dispatch is preferred over static dispatch (e.g., game scripting, networking, client programming, GUI apps, etc.), the overhead created by vtable accesses is negligible. You would lose more processing time in a program that statically generates similar functions for hundreds of classes and the various functions that take those objects as parameters than if you were to use polymorphism for said objects. There are different applications for both static and dynamic dispatch, and it is misleading to hold them to the same expectation.

makichiis

In a large project, polymorphism can save a huge amount of time and really help organize the code. Things like OOP and design patterns are basically meant for large projects, not for something small like a calculator.

nyssc

This is a nice video but I think it's also a bit misleading...

The C code isn't *really* polymorphic: every 'method' added gets its own switch statement, every 'child' added becomes a new case in every switch in every function, and the monolith grows.

And for the C++, of course it'd be silly to use that kind of indirection on the hot path (std::visit might be preferable there).
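
For what it's worth, the std::visit alternative might look like this (the types are made up):

    #include <variant>

    struct Add { int operator()(int a, int b) const { return a + b; } };
    struct Mul { int operator()(int a, int b) const { return a * b; } };

    using Operation = std::variant<Add, Mul>;   // closed set of alternatives, value semantics, no vtable

    int apply(const Operation &op, int a, int b) {
        return std::visit([&](const auto &o) { return o(a, b); }, op);
    }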

Either way, the compiler will optimize away where possible for the target platform, including switch statements.

sharksandbananas