[.net] Reflection in .NET making consoles easy!

This topic is 4842 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

I had the idea that, by using reflection, you could fashion a Q3-style console rather quickly and easily. You could just make a class with appropriately-named properties and link it up to your engine or whatever. Get its type and enumerate the properties, then use reflection again to invoke a property's setter with the new value!
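A minimal sketch of that idea (the `ConsoleVars` class and its property names are made up for illustration):

```csharp
using System;
using System.Reflection;

// Hypothetical settings class; the property names double as console variable names.
public class ConsoleVars
{
    public float Gravity { get; set; } = 9.8f;
    public int MaxFps { get; set; } = 60;
}

public static class MiniConsole
{
    // Parse "name value" and set the matching property via reflection.
    public static bool Execute(object target, string input)
    {
        string[] parts = input.Split(' ');
        if (parts.Length != 2) return false;

        PropertyInfo prop = target.GetType().GetProperty(
            parts[0],
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
        if (prop == null || !prop.CanWrite) return false;

        // Convert the string argument to the property's declared type,
        // then invoke the setter through reflection.
        object value = Convert.ChangeType(parts[1], prop.PropertyType);
        prop.SetValue(target, value, null);
        return true;
    }
}
```

With this, typing `gravity 3.5` into the console would end up calling the `Gravity` setter, with no per-variable glue code.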

Just realize that if not used properly, reflection can be SLOW. In a high performance application like a game, reflection may prove to be too slow. However, I think that one way or another you should be able to get around that. I personally love reflection when used properly :). It certainly makes developing a plug-in system a heck of a lot easier :).

And I also think that when it comes to implementing a console, it is easier to understand and more scalable to use simple delegates. The fact is that most of the time there will not be just one class holding the data that needs to be changed. At the extreme, each subsystem might need to "register" its own commands to be used in the console. And in "registering" architectures in .NET, delegates are a godsend.
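That registering architecture can be sketched like this (the `CommandHandler` signature and class names are just one possible shape, not taken from the articles below):

```csharp
using System;
using System.Collections.Generic;

// Each subsystem registers its own commands; the manager just dispatches.
public delegate void CommandHandler(string[] args);

public class CommandManager
{
    private readonly Dictionary<string, CommandHandler> commands =
        new Dictionary<string, CommandHandler>(StringComparer.OrdinalIgnoreCase);

    // Called by each subsystem at startup to expose its commands.
    public void Register(string name, CommandHandler handler)
    {
        commands[name] = handler;
    }

    // Look up the command by name and invoke it with the parsed arguments.
    public bool Execute(string name, params string[] args)
    {
        CommandHandler handler;
        if (!commands.TryGetValue(name, out handler)) return false;
        handler(args);
        return true;
    }
}
```

A renderer could then call something like `manager.Register("r_fullscreen", args => ...)` without the console ever knowing the renderer exists.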

I *really* hate when people say this, but if you follow the link to my articles in my signature, there is an article devoted to developing a Quake-like console (the architecture was even largely based on the Q2 console, with obvious changes made to bring it into the .NET world).

EDIT: to make it easier, here are the articles.
The Command Manager
The Console

I hope this helps!

I think DrGUI is not suggesting using reflection in your game itself, just for parsing the console input (and perhaps startup scripts, etc.).

This amount of usage is not going to prove a performance problem.

Mark

Markr -

It depends on how much the console and commands are used. For example, to better test a game, it is really easy to make *all* game commands (including movement) go through a command processor. Then you can have a test "script" which is basically just a list of commands to parse and execute at a given time. When commands are implemented in this fashion to support automated testing, I believe the cost of reflection might be too high and would have a negative impact on the game.

Regardless, I just thought it would be best to be aware that there *might* be a performance impact, because if you're not aware, then your coding of the system might actually make the impact worse. However, if you are aware of the potential impact, then you can architect your system properly so the impact is avoided.

That's all I was saying :D.

Quote:
Original post by bL0wF1sH
I believe the cost of reflection might be too high and would have a negative impact on the game.

Then slap a cache between the console parsing code and the reflection access. :D

A simple hashtable which caches the command name and then creates the delegate to point to the method on-demand would be trivial.

Since a single command can map to more than 1 function, the cached object would have to be a list of known functions with that name and their parameters. Then you just need to add some voodoo code to figure out the closest match.
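A sketch of that cache, minus the overload-matching voodoo (the `Action<string[]>` command signature and the `Cvars` class are assumptions for the example):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical command target with one method per console command.
public class Cvars
{
    public int Calls;
    public void Echo(string[] args) { Calls += args.Length; }
}

public class ReflectionCache
{
    private readonly object target;
    private readonly Dictionary<string, Action<string[]>> cache =
        new Dictionary<string, Action<string[]>>(StringComparer.OrdinalIgnoreCase);

    public ReflectionCache(object target) { this.target = target; }

    // Pay the reflection lookup once per command name; afterwards the
    // cached delegate is returned directly from the hashtable.
    public Action<string[]> Resolve(string name)
    {
        Action<string[]> handler;
        if (cache.TryGetValue(name, out handler)) return handler;

        MethodInfo method = target.GetType().GetMethod(
            name,
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase,
            null, new[] { typeof(string[]) }, null);
        if (method == null) return null;

        handler = (Action<string[]>)Delegate.CreateDelegate(
            typeof(Action<string[]>), target, method);
        cache[name] = handler;
        return handler;
    }
}
```

Note this handles only one method per name; the list-of-candidates-plus-closest-match part described above is left out.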

Quote:
Original post by ggs
Quote:
Original post by bL0wF1sH
I believe the cost of reflection might be too high and would have a negative impact on the game.

Then slap a cache between the console parsing code and the reflection access. :D

A simple hashtable which caches the command name and then creates the delegate to point to the method on-demand would be trivial.

Since a single command can map to more than 1 function, the cached object would have to be a list of known functions with that name and their parameters. Then you just need to add some voodoo code to figure out the closest match.


But if you already have the delegates, then reflection isn't needed. Remember, it was said that reflection would be used to dynamically "set" the properties, not a delegate.

With that said, the actual "setting" of the properties wouldn't be that big of a performance impact. Once again, I was simply stating that you need to be aware of the possible performance impact so that you can mitigate any issues in your architecture to begin with.

However, I still feel that using reflection isn't a very "scalable" solution, because there are numerous times when it won't be as simple as setting the value of one property. And most of the time, the properties that need to be set won't all be contained within one object. If properties span multiple objects, you now have to search across multiple objects just to find the property in the first place.

Like you said though, if the lookup is "cached" somehow, the performance footprint isn't that large. However, you need to know there is a possibility of a performance impact before you can draw the conclusion that you need to design some sort of "cache".

I've never said that it isn't possible. It's very possible. And any performance issues can most likely be mitigated before they become problems. However, depending on the console's use, it may not be the optimum solution. That's all I'm saying :).

I wouldn't expect it to be used much, mainly while debugging, and I'm sure that reflection can't be sooooo slow as to take more than 100ms, which I'm sure users could stand.

I was thinking - do you think that using delegates to virtual functions would be faster than calling a virtual function, because the actual procedure would already have been decided.

Quote:
Original post by DrGUI
I wouldn't expect it to be used much, mainly while debugging, and I'm sure that reflection can't be sooooo slow as to take more than 100ms, which I'm sure users could stand.

I'm fairly sure the custom text processing to extract the stuff to look up with reflection will be a bigger overhead than walking an object tree relatively rarely.

The biggest issue is that it might cause a hit in your framerate if you don't manage it properly, which is easy enough to do. Spin it off to a worker thread in the thread pool, and then have a delegate call back once the reflection and lookup work is finished.

Which you should do anyway. Doing blocking I/O in the GUI thread is a very bad idea from a usability standpoint.

Quote:
I was thinking - do you think that using delegates to virtual functions would be faster than calling a virtual function, because the actual procedure would already have been decided.

While a virtual function/interface is much faster than a delegate, you are then limited to one such virtual function/interface per class. It's really a matter of flexibility.

Also, delegate speed will improve when .NET 2.0 comes out.

Invoking a function via the .Invoke() member on a MethodInfo object is slow, much slower than wrapping it in a delegate and then reusing the delegate a few times.
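The two call styles look like this (a small sketch; the `Target` class and its `Doubled` method are invented for the comparison):

```csharp
using System;
using System.Reflection;

public class Target
{
    public int Doubled(int x) { return x * 2; }
}

public static class InvokeDemo
{
    // Slow path: MethodInfo.Invoke does a late-bound dispatch and boxes
    // the arguments into an object[] on every single call.
    public static object CallSlow(Target t, int x)
    {
        MethodInfo method = typeof(Target).GetMethod("Doubled");
        return method.Invoke(t, new object[] { x });
    }

    // Fast path: pay the reflection cost once to build a delegate,
    // then every later call goes through the delegate directly.
    public static Func<int, int> Bind(Target t)
    {
        MethodInfo method = typeof(Target).GetMethod("Doubled");
        return (Func<int, int>)Delegate.CreateDelegate(
            typeof(Func<int, int>), t, method);
    }
}
```

The delegate returned by `Bind` is the thing worth caching; `CallSlow` repeats both the lookup and the boxing each time.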

Quote:
Original post by ggs
Also, delegate speed will improve when .NET 2.0 comes out.

Really? Can you provide some reference that confirms this? And does this apply to the current Beta 1 release, too? I'm asking because I'm doing some tool development at the moment and use the 2.0 Beta 1 (mainly because of generics support [smile]).

--
Pat.

I'm working on using reflection in a scientific application. The idea is largely to allow a user/developer to add a model, or a new part of a model, without having to include a lot of the machinery needed to make all of the possible variables available to the user. He just has to mark which parameters need to be alterable by the user via dialogs, or to make them available to statistical fitting routines. Speed is definitely a potential issue, but I think I have a solution: the user will choose which of the available parameters he wishes to allow to vary in a calculation, and automatic code generation will be used to hook those parameters up to the statistical fitting routine directly. I hear that doing this is somewhat of an anathema?
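The marking part of that scheme is commonly done with a custom attribute plus a reflection scan, something like this sketch (the `[Fittable]` attribute and `DecayModel` class are hypothetical names, not from the post):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical marker attribute: flags a property as user-tunable/fittable.
[AttributeUsage(AttributeTargets.Property)]
public class FittableAttribute : Attribute { }

public class DecayModel
{
    [Fittable] public double Rate { get; set; }
    [Fittable] public double Amplitude { get; set; }
    public double Cached { get; set; } // internal state, not exposed to the fitter
}

public static class ModelScanner
{
    // Collect every property marked [Fittable] so a dialog or fitting
    // routine can offer exactly those parameters to the user.
    public static List<PropertyInfo> FittableProperties(object model)
    {
        var result = new List<PropertyInfo>();
        foreach (PropertyInfo prop in model.GetType().GetProperties())
        {
            if (prop.IsDefined(typeof(FittableAttribute), false))
                result.Add(prop);
        }
        return result;
    }
}
```

The scan itself runs once per model, so its cost is negligible; the generated code would then read and write the chosen properties without any per-iteration reflection.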

