Virtual functions: still the bad way?

Virtual functions are fine... the overhead they incur is negligible in most cases when you consider the benefits. That being said, like anything in C++, they can get out of hand - virtual functions are a tool, and should be used when they're needed and they fit your design.
Virtual functions are the bedrock of modern, object-oriented C++, and are unlikely to be a significant performance bottleneck on today's relatively fast processors.
FWIW, the class is the bedrock of C++ OOP, and virtual is an add-on for the rare cases where you need runtime polymorphism.

Fast processors are one thing, but fast cache/RAM is another. Since C++ development started in the 80's, processors have sped up by over 10,000x, but RAM has only sped up by 10x, which means that relative to CPU speed, RAM is actually 1000x slower than it used to be!
If virtual is causing performance problems, it's unlikely to be because it's using too many CPU cycles -- it'll be because it's causing the CPU to sit idle for many cycles while it waits for RAM.

In my earlier example, both of these calls would end up calling the FooImplementation function, via the same mechanism that virtual uses:
X* object = ...;
object->Foo();                  // the virtual call, as normally written
object->vtable->pfnFoo(object); // the hand-written equivalent of what the compiler emits
We can see that the virtual call has 3 opportunities to generate a cache-miss:
object->vtable // read 4 bytes at address [object] into 'temp#1' (possible dCache miss)
->pfnFoo // read 4 bytes at address [temp#1] into 'temp#2' (possible dCache miss)
();// jump to instructions at address [temp#2] (possible iCache miss)
Whereas a regular function call has 1:
object->FooImplementation(); // jump to instructions at address [X::FooImplementation] (possible iCache miss)
This isn't getting any better over time -- as above, it's actually getting worse and worse (in relative terms -- cycles per cache-miss) as CPUs get faster!
So actually, it was fine to not worry about it decades ago, but it is a problem today.
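
To make that concrete, here is a minimal hand-written sketch of the mechanism (XVTable, g_X_VTable and pfnFoo follow the naming of the earlier example and are hypothetical; real compiler-generated layouts differ):

struct X;

// One function pointer per virtual function in the class.
struct XVTable {
    void (*pfnFoo)(X*);
};

struct X {
    const XVTable* vtable; // the hidden pointer the compiler adds to each object
};

void FooImplementation(X* self)
{
    // ... the actual work ...
}

// One shared table per class, pointing at that class's overrides.
const XVTable g_X_VTable = { &FooImplementation };

void CallFoo(X* object)
{
    // Load the vtable pointer, load the function pointer, indirect jump:
    // the three cache-miss opportunities listed above.
    object->vtable->pfnFoo(object);
}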

If you're using C++, you probably care about performance (otherwise you should be using a less error-prone language, like C# or Lua or Python), and the biggest performance problem for modern C++ code is managing your memory access patterns to avoid cache misses. C++ gives you a lot of tools to achieve this goal; however, it gives you no control at all over the memory location of the vtable objects (the equivalent of g_X_VTable in my previous post, which can be manually organized in that hand-written virtual version), which makes virtual calls even less optimal.
For example, consider a pluggable image processing algorithm that needs to perform a certain operation per-pixel. We can provide a virtual function mutatePixel(Pixel p), which operates a pixel at a time, or we can provide a virtual function mutateImage(Pixel[][] p), which operates on the entire image. In the first case, we incur a virtual function call per-pixel (that's N^2 virtual function calls for an NxN image), and in the second we incur only a single virtual function call per image.
You can't just avoid virtual function calls entirely, but you can design them intelligently, with performance in mind.
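
To sketch those two interfaces in C++ (the names and the flat pixel layout here are illustrative, not from the original example):

#include <cstdint>

struct Pixel { std::uint8_t r, g, b, a; };

// Fine-grained: one virtual call per pixel (N^2 calls for an NxN image).
struct PerPixelFilter {
    virtual ~PerPixelFilter() {}
    virtual void mutatePixel(Pixel& p) = 0;
};

// Coarse-grained: one virtual call per image; the loop lives inside the override.
struct PerImageFilter {
    virtual ~PerImageFilter() {}
    virtual void mutateImage(Pixel* pixels, int width, int height) = 0;
};

struct GrayscaleFilter : PerImageFilter {
    void mutateImage(Pixel* pixels, int width, int height)
    {
        // The virtual-dispatch cost was paid once, outside this loop.
        for (int i = 0, n = width * height; i < n; ++i) {
            std::uint8_t l = static_cast<std::uint8_t>(
                (pixels[i].r + pixels[i].g + pixels[i].b) / 3);
            pixels[i].r = pixels[i].g = pixels[i].b = l;
        }
    }
};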
That's an excellent point about how to amortize overheads.
A general case solution (to almost any problem) should operate on a range of objects at once (instead of using virtual to operate on only a single object).

[quote name='gdunbar' timestamp='1352923413' post='5000997']Virtual functions are the bedrock of modern, object-oriented C++, and are unlikely to be a significant performance bottleneck on today's relatively fast processors.
FWIW, the class is the bedrock of C++ OOP, and virtual is an add-on for the rare cases where you need runtime polymorphism.
[/quote]
IMO, class is the bedrock of most Object Based languages.
Object Oriented languages imply the availability of virtual, as it allows for inheritance in addition to extension and aggregation...

...Perhaps it's more reasonable for us to define what bedrock means in programming. :D

I just discovered that my real time script processing module uses virtual for the nodes in its intermediate representation.
- Might be a good idea to change that... (Thanks Swiftcoder for the example on dealing with many virtual function calls)

[quote]today's relatively fast processors[/quote]
I don't know about you, but I find that processors are never quite fast enough! ;)
[quote]Object Oriented languages imply the availability of virtual, as it allows for inheritance in addition to extension and aggregation...[/quote]
Yeah, I just meant to imply that inheritance and polymorphism aren't the main/most-important/most-common parts of OOP, and are actually quite rare compared to, e.g. the use of composition or encapsulation.
OK, OK, maybe I went too far with "bedrock". "Indispensable", maybe? C++ certainly wouldn't be much of an object-oriented language without virtual functions!

In any case, my true point stands: Use virtual functions. Don't worry about it. Then, if at some point in the future you have performance issues or otherwise want to optimize, measure first. If you find that you are calling a virtual function in a tight loop or something, you should be able to target a fix there; it is _extremely_ unlikely that any major architectural change will be needed. swiftcoder's method of moving the virtual call outside the loop (instead of keeping it inside) is certainly an excellent option.

Good luck,
Geoff

Use virtual functions if you need polymorphism. If you don't need it, don't default to using virtual functions. Using virtual functions is a design decision about where you want your abstractions to be. They should be used wisely or they will complicate the whole code base. It is much easier to introduce virtual functions later on than it is to get rid of them.

My answer to the OP is: the console is not going to change rendering APIs at runtime, so it shouldn't be runtime-polymorphic.
In most cases, IMHO, virtual methods are better replaced with a plug-in system providing dependency inversion rather than a concrete dependency, since it's more amenable to automated unit testing and mocking. Dependency inversion allows the assembly of the class's functions at runtime without implying any specific hierarchy.

As an example, a pseudo logging class using inheritance:

class SystemLog : FileWriter
{
    // Hard-wired at design time to FileWriter's implementation.
    public void WriteLog(string comment)
    {
        base.Write(comment);
    }
}

Using dependency injection

class SystemLog
{
    // The destination is supplied from outside via an interface.
    public IWriteDestination WriterStream { get; set; }

    public void WriteLog(string comment)
    {
        this.WriterStream.Write(comment);
    }
}

The first example requires a class called FileWriter whose functionality is expressly built into the class hierarchy, but the second does not need this dependency; it's entirely ignorant of how the functionality on which it depends is provided. The first example has the class "knowing" at design time which class it will use to implement its features, but the second has the class ignorant of which other class will provide this function (allowing a test framework to provide a replacement implementation without affecting the calling class).
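
In C++ (this thread's language) the injected version might look roughly like this; a sketch only, with MockDestination added purely to show the testing benefit:

#include <string>
#include <vector>

// The calling contract; SystemLog knows nothing about the implementation.
struct IWriteDestination {
    virtual ~IWriteDestination() {}
    virtual void Write(const std::string& comment) = 0;
};

class SystemLog {
public:
    explicit SystemLog(IWriteDestination* dest) : m_dest(dest) {}
    void WriteLog(const std::string& comment) { m_dest->Write(comment); }
private:
    IWriteDestination* m_dest; // injected, not inherited
};

// A test can inject a fake destination and inspect what was written.
struct MockDestination : IWriteDestination {
    std::vector<std::string> lines;
    void Write(const std::string& comment) { lines.push_back(comment); }
};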

Although polymorphism is a solid OO concept, anything but the simplest inheritance hierarchy requires a lot of design-level decisions which could reasonably be left until runtime.

So, in short: virtual is generally better replaced by a property exposing an interface:

public IProviderClass MyProvider { get; set; }

Phillip

...

You introduce an indirection, which adds an extra method call in terms of performance.
Furthermore, not using polymorphism where it is appropriate will increase code size dramatically.

For testing, this should IMO not be a runtime decision, but a decision made at build time.
In C++ you can easily swap in mock classes in a type hierarchy with #ifdefs, or by including different directories for unit/integration tests.
Not sure about C# though.
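
For illustration, that build-time swap might look something like this (the UNIT_TESTS macro and header paths are made up):

// The translation unit picks its FileWriter at compile time; no virtual call.
#ifdef UNIT_TESTS
    #include "mocks/FileWriter.h" // a stub exposing the same public interface
#else
    #include "io/FileWriter.h"    // the real, file-backed implementation
#endif

void LogStartup()
{
    FileWriter writer;            // concrete type, statically dispatched
    writer.Write("starting up");
}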

[quote name='PhillipHamlyn' timestamp='1353103516' post='5001649']
...

You introduce an indirection, which adds an extra method call in terms of performance.
Furthermore, not using polymorphism where it is appropriate will increase code size dramatically.

For testing, this should IMO not be a runtime decision, but a decision made at build time.
In C++ you can easily swap in mock classes in a type hierarchy with #ifdefs, or by including different directories for unit/integration tests.
Not sure about C# though.
[/quote]

But inheritance uses a virtual method lookup anyway, so no net loss?
Don't agree with the code size argument - I don't see a difference in code size either way. Same logic, same compiler.
Agreed you can use mocking, but inheritance concepts indicate that the inheritor "is interested in" or "knows about" the implementation details of the inherited class and extends that implementation. Mocking replaces the implementation with a different one, and therefore invalidates the assumptions on which the inheritance contract was originally made. With a dependency inversion approach, no assumptions are made about the implementation, because these are interfaced away, leaving only the calling contract.
