

Is virtual still the bad way?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

22 replies to this topic

#1 Alundra   Members   -  Reputation: 770


Posted 14 November 2012 - 08:09 AM

Hi,
I've heard that using virtual functions in a renderer is a bad way to get good performance in console development.
The idea is to have an IRenderer interface with one derived class per API.
My question is quite simple: is virtual still bad on both consoles?

Thanks


#2 Hodgman   Moderators   -  Reputation: 27686


Posted 14 November 2012 - 08:18 AM

Virtual is a known cost: looking up a function pointer in a table, then calling that function indirectly.
C++ is a "pay for what you use" language, so you should only use that keyword when you want to pay this cost.

e.g. This code:
class X {
  virtual void Foo() {
    printf("hello");
  }
};
Is equivalent to:
class X;
struct X_VTable { void (*pfnFoo)(X*); };
class X {
public:
  X();
  const X_VTable* vtable;
  void Foo() { vtable->pfnFoo(this); }
  void FooImplementation() {
    printf("hello");
  }
};
void s_X_Foo( X* self ) { self->FooImplementation(); }
static const X_VTable g_X_VTable = { &s_X_Foo };
X::X() : vtable(&g_X_VTable) {}
N.B. yes, the low-level implementation of virtual tends to be more costly on PPC CPUs than it is on x86 CPUs.

The idea is to have an IRenderer interface with one derived class per API.

You don't need to use virtual for that, unless you want to allow the user to change between two different types of renderers at runtime.

#3 Burnt_Fyr   Members   -  Reputation: 1201


Posted 14 November 2012 - 09:15 AM

Even if the user selects a different renderer at compile/link time, wouldn't an interface class with virtual methods still be required, to avoid changing client code to use one renderer or another?

Edited by Burnt_Fyr, 14 November 2012 - 09:17 AM.


#4 rip-off   Moderators   -  Reputation: 7660


Posted 14 November 2012 - 09:22 AM

Why do you think that requires virtual lookup? Virtual lookup is for runtime decisions.

Which API does this use:
class CircleRenderer {
public:
     CircleRenderer();
     ~CircleRenderer();

     void begin();
     void circle(float x, float y, float r);
     void end();
private:
     // Omitted...
};
What client changes are required if the implementation was switched from/to OpenGL/Direct3D/Software?

Edited by rip-off, 14 November 2012 - 09:22 AM.
Wrong tags


#5 tivolo   Members   -  Reputation: 883


Posted 14 November 2012 - 09:43 AM

Even if the user selects a different renderer at compile/link time, wouldn't an interface class with virtual methods still be required, to avoid changing client code to use one renderer or another?


No, that's what #ifdef, different .h files, and different .cpp files are for. Virtual functions and abstract base classes/interfaces should be used when you need polymorphism at runtime. If you have different implementations of the same thing that need to be chosen at compile time, don't use virtual functions.

One place where I've often seen this is in building base classes for textures, vertex buffers, index buffers, and the like.
As an example, consider having an abstract base class ITexture and two distinct derived classes TexturePS3 and TextureXBox360. Do you need to change these implementations at runtime? No. Could you even *change* the implementations at runtime? No, because the PS3 code likely won't compile on the 360, and vice versa. Therefore, there's really no reason to use an interface with virtual functions.

#6 Rattenhirn   Crossbones+   -  Reputation: 1661


Posted 14 November 2012 - 09:53 AM

My question is quite simple: is virtual still bad on both consoles?


The answer is really quite simple, and independent of the platform:
if you don't need to make the decision at runtime, you don't need virtual (or other runtime dispatch mechanisms) and their performance cost, and vice versa.

#7 Waterlimon   Crossbones+   -  Reputation: 2362


Posted 14 November 2012 - 11:45 AM

If you want multiple switchable renderers at runtime without suffering so "much" (?) performance loss, I guess you could move the "virtual interface" to a higher level, which would reduce the number of virtual calls but increase the size of the executable.

Not sure how to do that properly; perhaps pass the renderer as a template parameter instead of as a pointer with virtual methods (and give the template class the virtual methods instead).

Feels messy though.

Waterlimon (imagine this is handwritten please)


#8 iMalc   Crossbones+   -  Reputation: 2259


Posted 14 November 2012 - 12:24 PM

I think the main point is that if your design requires you to achieve the same effect as a virtual method call, then any attempt to do this through other means is usually going to be worse, performance-wise, than just using a virtual method call.

Edited by iMalc, 14 November 2012 - 12:25 PM.

"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

#9 gdunbar   Crossbones+   -  Reputation: 850


Posted 14 November 2012 - 02:03 PM

Virtual functions are the bedrock of modern, object-oriented C++, and are unlikely to be a significant performance bottleneck on today's relatively fast processors. I recommend you proceed with using them. If at some point in the future, as a result of performance profiling, you do identify a virtual function or two that are slowing your code down (perhaps used very often in a tight loop), then address the problem locally at that time.

Good luck!
Geoff

#10 swiftcoder   Senior Moderators   -  Reputation: 9612


Posted 14 November 2012 - 02:32 PM

There is a much more subtle issue to do with the performance of virtual functions calls, though, and that is frequency.

For example, consider a pluggable image processing algorithm that needs to perform a certain operation per-pixel. We can provide a virtual function mutatePixel(Pixel p), which operates a pixel at a time, or we can provide a virtual function mutateImage(Pixel[][] p), which operates on the entire image. In the first case, we incur a virtual function call per-pixel (that's N^2 virtual function calls for an NxN image), and in the second we incur only a single virtual function call per image.

The overhead of a single virtual function call is measurable, but small, and can likely be ignored in 99.9% of circumstances. But, if you start making those calls in a tight loop, it can rapidly become a bottleneck. You can't just avoid virtual function calls entirely, but you can design them intelligently, with performance in mind.

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#11 Orangeatang   Members   -  Reputation: 1406


Posted 14 November 2012 - 03:24 PM

Virtual functions are fine... the overhead they incur is negligible in most cases when you consider the benefits. That being said, like anything in C++, they can get out of hand; virtual functions are a tool, and should be used when they're needed and when they fit your design.

#12 Hodgman   Moderators   -  Reputation: 27686


Posted 14 November 2012 - 11:05 PM

Virtual functions are the bedrock of modern, object-oriented C++, and are unlikely to be a significant performance bottleneck on today's relatively fast processors.

FWIW, the class is the bedrock of C++ OOP, and virtual is an add-on for the rare cases where you need runtime polymorphism.

Fast processors are one thing, but fast cache/RAM is another. Since C++ development started in the '80s, processors have sped up by over 10,000x, but RAM has only sped up by 10x, which means that relative to CPU speed, RAM is actually 1,000x slower than it used to be!
If virtual is causing performance problems, it's unlikely to be because it's using too many CPU cycles -- it'll be because it's causing the CPU to sit idle for many cycles while it waits for RAM.

In my earlier example, both of these lines of code would end up calling the FooImplementation function, via the same mechanism that virtual uses:
X* object = ...;
object->Foo();
object->vtable->pfnFoo(object);
We can see that the virtual call has 3 opportunities to generate a cache-miss:
object->vtable // read 4 bytes at address [object] into 'temp#1' (possible dCache miss)
	  ->pfnFoo // read 4 bytes at address [temp#1] into 'temp#2' (possible dCache miss)
	    ();// jump to instructions at address [temp#2] (possible iCache miss)
Whereas a regular function call has 1:
object->FooImplementation();// jump to instructions at address [X::FooImplementation] (possible iCache miss)
This isn't getting any better over time -- as above, it's actually getting worse and worse (in relative terms -- cycles per cache-miss) as CPUs get faster!
So actually, it was fine to not worry about it decades ago, but it is a problem today.

If you're using C++, you probably care about performance (otherwise you'd be using a less error-prone language, like C# or Lua or Python), and the biggest performance problem for modern C++ code is managing your memory access patterns to avoid cache misses. C++ gives you a lot of tools to achieve this goal; however, it gives you no control at all over the memory location of the vtable objects (the equivalent of g_X_VTable in my previous post, which could be manually placed in the hand-written version), which makes virtual calls even less optimal.

For example, consider a pluggable image processing algorithm that needs to perform a certain operation per-pixel. We can provide a virtual function mutatePixel(Pixel p), which operates a pixel at a time, or we can provide a virtual function mutateImage(Pixel[][] p), which operates on the entire image. In the first case, we incur a virtual function call per-pixel (that's N^2 virtual function calls for an NxN image), and in the second we incur only a single virtual function call per image.
You can't just avoid virtual function calls entirely, but you can design them intelligently, with performance in mind.

That's an excellent point about how to amortize overheads.
A general case solution (to almost any problem) should operate on a range of objects at once (instead of using virtual to operate on only a single object).

Edited by Hodgman, 14 November 2012 - 11:21 PM.


#13 SuperVGA   Members   -  Reputation: 1118


Posted 15 November 2012 - 01:06 AM

Virtual functions are the bedrock of modern, object-oriented C++, and are unlikely to be a significant performance bottleneck on today's relatively fast processors.

FWIW, the class is the bedrock of C++ OOP, and virtual is an add-on for the rare cases where you need runtime polymorphism.

IMO, class is the bedrock of most Object Based languages.
Object Oriented languages imply the availability of virtual, as it allows for inheritance in addition to extension and aggregation...

...Perhaps it's more reasonable for us to define what bedrock means in programming. :D

I just discovered that my real-time script processing module uses virtual for the nodes in its intermediate representation.
- Might be a good idea to change that... (Thanks, swiftcoder, for the example on dealing with many virtual function calls)

Edited by SuperVGA, 15 November 2012 - 01:11 AM.


#14 Rattenhirn   Crossbones+   -  Reputation: 1661


Posted 15 November 2012 - 01:50 AM

today's relatively fast processors


I don't know about you, but I find that processors are never quite fast enough! ;)

#15 Hodgman   Moderators   -  Reputation: 27686


Posted 15 November 2012 - 07:43 AM

Object Oriented languages imply the availability of virtual, as it allows for inheritance in addition to extension and aggregation...

Yeah, I just meant to imply that inheritance and polymorphism aren't the main/most-important/most-common parts of OOP, and are actually quite rare compared to, e.g. the use of composition or encapsulation.

#16 gdunbar   Crossbones+   -  Reputation: 850


Posted 16 November 2012 - 12:48 PM

OK, OK, maybe I went too far with "bedrock". "Indispensable", maybe? C++ certainly wouldn't be much of an object-oriented language without virtual functions!

In any case, my true point stands: Use virtual functions. Don't worry about it. Then, if at some point in the future you have performance issues or otherwise want to optimize performance, you should measure performance. In the event that you find that you are calling a virtual function in a tight loop or something, you should be able to target a fix there; it is _extremely_ unlikely any type of major architectural change will be needed. swiftcoder's method of moving the function call to surround the loop (instead of the other way) is certainly an excellent option.

Good luck,
Geoff

Edited by gdunbar, 16 November 2012 - 12:48 PM.


#17 Codarki   Members   -  Reputation: 462


Posted 16 November 2012 - 01:12 PM

OK, OK, maybe I went too far with "bedrock". "Indispensable", maybe? C++ certainly wouldn't be much of an object-oriented language without virtual functions!

In any case, my true point stands: Use virtual functions. Don't worry about it. Then, if at some point in the future you have performance issues or otherwise want to optimize performance, you should measure performance. In the event that you find that you are calling a virtual function in a tight loop or something, you should be able to target a fix there; it is _extremely_ unlikely any type of major architectural change will be needed. swiftcoder's method of moving the function call to surround the loop (instead of the other way) is certainly an excellent option.

Good luck,
Geoff

Use virtual functions if you need polymorphism. If you don't need it, don't default to using them. Using virtual functions is a design decision about where you want your abstractions to be. They should be used wisely or they will complicate the whole code base. It is much easier to introduce virtual functions later on than it is to get rid of them.

My answer to the OP: a console is not going to change rendering API at runtime, so the renderer shouldn't be runtime-polymorphic.

Edited by Codarki, 16 November 2012 - 01:13 PM.


#18 PhillipHamlyn   Members   -  Reputation: 454


Posted 16 November 2012 - 04:05 PM

In most cases, IMHO, virtual methods are better replaced with a plug-in system providing dependency inversion rather than a concrete dependency, since it's more amenable to automated unit testing and mocking. Dependency inversion allows the assembly of the class's functions at runtime without implying any specific hierarchy.

As an example, a pseudo logging class using inheritance:

class SystemLog : FileWriter
{
    public void WriteLog(string comment) { base.Write(comment); }
}

Using dependency injection:

class SystemLog
{
    public IWriteDestination WriterStream { get; set; }

    public void WriteLog(string comment)
    {
        this.WriterStream.Write(comment);
    }
}

The first example requires a class called FileWriter whose functionality is expressly built into the class hierarchy, but the second does not need this dependency; it is entirely ignorant of how the functionality on which it depends is provided. The first example has the class "knowing" at design time which class it will use to implement its features, but the second has the class being ignorant of which other class will provide this function (allowing a test framework to supply a replacement implementation without affecting the calling class).

Although polymorphism is a solid OO concept, anything but the simplest inheritance hierarchy requires a lot of design-level decisions which could reasonably be left until runtime.

So, in short: virtual is generally better replaced by a

public IProviderClass MyProvider { get; set; }

Phillip

#19 myro   Members   -  Reputation: 118


Posted 17 November 2012 - 04:45 PM

...

You introduce an indirection which adds an extra method call, performance-wise.
Furthermore, not using polymorphism where it is appropriate will increase code size dramatically.

For testing, this should IMO be a build-time decision, not a runtime one.
In C++ you can easily swap in mock classes in a type hierarchy with #ifdefs or by including different directories for unit/integration tests.
Not sure about C# though.

#20 PhillipHamlyn   Members   -  Reputation: 454


Posted 18 November 2012 - 10:29 AM


...

You introduce an indirection which adds an extra method call, performance-wise.
Furthermore, not using polymorphism where it is appropriate will increase code size dramatically.

For testing, this should IMO be a build-time decision, not a runtime one.
In C++ you can easily swap in mock classes in a type hierarchy with #ifdefs or by including different directories for unit/integration tests.
Not sure about C# though.


But inheritance uses a virtual method lookup anyway, so no net loss?
Don't agree with the code size argument; I don't see a difference in code size either way. Same logic, same compiler.
Agreed you can use mocking, but inheritance concepts indicate that the inheritor "is interested in" or "knows about" the implementation details of the inherited class and extends that implementation. Mocking replaces the implementation with a different one and therefore invalidates the assumptions on which the inheritance contract was originally made. With a dependency inversion approach, no assumptions are made about the implementation, because these are interfaced away, leaving only the calling contract.



