Jason Z

Hieroglyph Rendering System Design


For those of you who are interested, I have written a descriptive document detailing how my rendering system works. The system has been used in several different non-game applications as well as a non-commercial game. I find it to be a useful design and wanted to share it with the community (if they want it...). I have posted it in my journal here. If you are interested, please check it out and let me know what you think. If you're not interested, please pretend you didn't see this post!

I thought you were going to describe a way to render actual hieroglyphs, maybe using higher order primitives or something.

But no, that's not it at all. [depressed]

Interesting read [smile]. I especially like the idea of the RenderView. I have a similar system where ViewPorts are registered with the renderer and from there things such as Camera/Projection etc. are inferred, but I like your concept of encapsulating it in a concrete inheritance-based scheme. I think I will be making some modifications in light of your article [smile].

So far my entire pipeline is cleanly abstracted and XML-configured, so in keeping with that I will probably make the RenderView abstraction data-driven as well, rather than using concrete C++ classes. I'll let you know how progress goes if I manage to get around to it soon.

Cheers,
-Danu

Hi Jason Z,

First of all, thanks for sharing your design. I must admit I like the idea of RenderViews (we've already talked about them in the past).

From my understanding of the system, I think there is one little "problem" with how you configure the RenderViews and how you communicate parameters between the views and the objects. From what I can tell, in your system the owner of a render view must know the implementation of that particular view in order to set it up and get results back. E.g. in your example the water object must know the CRenderViewReflectionMap view in order to set the plane and get the reflection map back.
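The coupling described above might be sketched like this. All member and method names here are hypothetical illustrations, not taken from the actual engine:

```cpp
#include <cassert>

// Hypothetical sketch of the coupling being described: the water object
// must know the concrete view type to configure it and read results back.
struct Plane   { float a, b, c, d; };
struct Texture { int id; };

struct CRenderViewReflectionMap {
    void SetReflectionPlane(const Plane& p) { m_plane = p; }
    Texture GetReflectionMap() const { return m_result; }
    Plane   m_plane{};
    Texture m_result{42};   // stand-in for the rendered reflection texture
};

struct CWaterObject {
    // The owner holds the concrete view type, not an abstract interface.
    CRenderViewReflectionMap m_view;

    void Setup() {
        // The water object must know this is a reflection view to feed it a plane.
        m_view.SetReflectionPlane(Plane{0.0f, 1.0f, 0.0f, 0.0f});
    }
    Texture ReflectionTexture() const { return m_view.GetReflectionMap(); }
};
```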

As silvermace pointed out, a more data-driven design would be even more flexible. What I have in mind (and tried to implement) is a way to specify the required render views at the material level, e.g. through material scripts. E.g. you can create and attach a CDynamicReflectionCubemap render view to any object in the scene, without needing to know the internal workings of that particular view.

The way I've implemented it was by distinguishing the views into those requiring a Camera object in order to work (e.g. ShadowmapView and your PerspectiveView), and those requiring an object to work (e.g. ReflectionCubemap, PlanarReflectionMap, etc.). In both cases initialization is done using a map<string, string>, in order to be able to configure the views through scripts. In the first case (CameraRenderView) the view gets all the info it requires from a camera object specified by the owner (who knows it's a camera RV). In the second case (ObjectRenderView) the view gets all the info from the object's material.

All renderable objects in the scene hold a list of render views, which get updated before the object is drawn into the main RV, if the object is visible in the current frame. I think this part is missing from your example in the pdf; that is, where the other required RVs get updated.
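The map<string, string> initialization described above could look something like this. The interface and the parameter key are hypothetical; only the general idea (script-driven configuration without the owner knowing the concrete type) follows the post:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical sketch: every render view is configured from a
// map<string, string>, so views can be created and initialized from
// material scripts without the owner knowing their concrete type.
using ParamMap = std::map<std::string, std::string>;

struct IRenderView {
    virtual ~IRenderView() = default;
    virtual void Initialize(const ParamMap& params) = 0;
};

struct CReflectionCubemapView : IRenderView {
    int m_faceSize = 0;

    void Initialize(const ParamMap& params) override {
        // Pull parameters by name; unknown keys are simply ignored.
        auto it = params.find("face_size");
        if (it != params.end())
            m_faceSize = std::stoi(it->second);
    }
};
```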

This way you can have a water plane object without needing a special class for it in order to initialize the required view (though in this particular case you may need one, because you may want to deform it in certain ways and have it interact with other objects). Either way (with or without a special object for the water plane), by not hard-coding the RV type you can switch between views based on machine specs, quality settings, etc.

One last addition to your design (though probably trivial): in my case every RV holds a list of passes (in order to be able to render things in multiple passes), and the objects specify a CRenderEffect for each pass (instead of one, as in your description). This way you can switch between a forward renderer which renders as much as it can in one pass, a forward renderer which uses multiple passes, or a deferred renderer (by specifying the CRenderEffect required for the G-buffer construction pass, the illumination pass, etc.).
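A minimal sketch of that pass-list idea, with made-up struct layouts (only CRenderEffect is a name from the discussion):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of "every RV holds a list of passes, and the objects
// supply one CRenderEffect per pass". Swapping the pass list switches the
// renderer style (single-pass forward, multi-pass forward, deferred).
struct CRenderEffect { std::string technique; };

struct RenderPass { std::string name; };

struct RenderView {
    std::vector<RenderPass> m_passes;
};

struct Renderable {
    // One effect per pass of the view that draws this object.
    std::vector<CRenderEffect> m_effects;
};

// A deferred setup would register e.g. a G-buffer pass and a lighting
// pass, with matching effects attached to each object.
inline RenderView MakeDeferredView() {
    return RenderView{ { RenderPass{"gbuffer"}, RenderPass{"lighting"} } };
}
```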

I hope I haven't forgotten anything. I know that you may not be interested in the above, because you already have the system in place and your own way of handling things, but because I found your design flexible I thought I should share my thoughts on it. What I described above may be incorrect or impractical, because I haven't had the chance to implement many different RVs. At least I hope the ideas make some sense. Oh, and I almost forgot: please forgive my English :)

Thanks again for sharing.

HellRaiZer

Cool, thanks for the feedback so far. I'll comment on a few of the points:
Quote:
From my understanding of the system, I think there is one little "problem" with how you configure the RenderViews and how you communicate parameters between the views and the objects. From what I can tell, in your system the owner of a render view must know the implementation of that particular view in order to set it up and get results back. E.g. in your example the water object must know the CRenderViewReflectionMap view in order to set the plane and get the reflection map back.
This is actually how I implemented it, but your suggestions are valid. It is possible to make the render view somehow attach to a particular object, but since there are special cases (like the reflection render view needing a plane of reflection, which isn't available in a standard object) I decided to just make the render views a concrete part of the objects. I had already been using special-purpose objects for each game entity anyway, so it wasn't an issue in my case.
Quote:
As silvermace pointed out, a more data-driven design would be even more flexible. What I have in mind (and tried to implement) is a way to specify the required render views at the material level, e.g. through material scripts. E.g. you can create and attach a CDynamicReflectionCubemap render view to any object in the scene, without needing to know the internal workings of that particular view.
This sounds especially interesting, and is where I intend to take the system eventually. Right now it's easy enough to manually configure things, but it would be nice to abstract the entire system into a material-type setup.
Quote:
The way I've implemented it was by distinguishing the views into those requiring a Camera object in order to work (e.g. ShadowmapView and your PerspectiveView), and those requiring an object to work (e.g. ReflectionCubemap, PlanarReflectionMap, etc.). In both cases initialization is done using a map<string, string>, in order to be able to configure the views through scripts. In the first case (CameraRenderView) the view gets all the info it requires from a camera object specified by the owner (who knows it's a camera RV). In the second case (ObjectRenderView) the view gets all the info from the object's material. All renderable objects in the scene hold a list of render views, which get updated before the object is drawn into the main RV, if the object is visible in the current frame. I think this part is missing from your example in the pdf; that is, where the other required RVs get updated.
I did forget to add this to the pdf - the way the whole system works is as follows:

1. Perform a recursive update of the scene graph
1a. Each object in the scene graph has the option of adding render views to the renderer (i.e. reflective objects will require a CRenderViewReflectionMap, lights will need CRenderViewShadowMaps, etc...)
2. After the update phase, the renderer sorts the views by type to ensure that the proper order is maintained. This includes prerequisites as well as post-requisites - for example, I have several post-processing render views that need to be performed after the main CRenderViewPerspective.
3. Each render view is rendered, and after all are complete the final results are presented to the front buffer.
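The steps above could be sketched roughly like this. The enum values and type names are hypothetical; the sort key just encodes "prerequisites first, post-processing last":

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical sketch of the three steps: objects queue views during the
// scene-graph update (step 1), the renderer sorts them so prerequisite
// views (shadow maps, reflections) run before the main perspective view
// and post-processing runs after it (step 2), then each view is rendered
// in order (step 3).
enum class ViewType {
    ShadowMap = 0, ReflectionMap = 1, Perspective = 2, PostProcess = 3
};

struct RenderViewRec { ViewType type; };

struct Renderer {
    std::vector<RenderViewRec> m_views;

    void QueueView(RenderViewRec v) { m_views.push_back(v); }   // step 1a

    void SortViews() {                                          // step 2
        // stable_sort keeps insertion order among views of the same type.
        std::stable_sort(m_views.begin(), m_views.end(),
            [](const RenderViewRec& a, const RenderViewRec& b) {
                return static_cast<int>(a.type) < static_cast<int>(b.type);
            });
    }
};
```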
Quote:
This way you can have a water plane object without needing a special class for it in order to initialize the required view (though in this particular case you may need one, because you may want to deform it in certain ways and have it interact with other objects). Either way (with or without a special object for the water plane), by not hard-coding the RV type you can switch between views based on machine specs, quality settings, etc.
I handle machine specs within the render views themselves - a given render view supports several hardware levels and selects different shaders/passes based on that info.
Quote:
One last addition to your design (though probably trivial): in my case every RV holds a list of passes (in order to be able to render things in multiple passes), and the objects specify a CRenderEffect for each pass (instead of one, as in your description). This way you can switch between a forward renderer which renders as much as it can in one pass, a forward renderer which uses multiple passes, or a deferred renderer (by specifying the CRenderEffect required for the G-buffer construction pass, the illumination pass, etc.).
This is actually handled in the *.fx files that I use within my CRenderEffects. I didn't mention it since it is not really part of my system, but it is capable of handling multiple rendering passes per render view. Even so, I think deferred shading would be encompassed by several render views - one to generate the G-buffers, and another that is tied to the first one's render targets to generate the final scene. This would separate the generation phase and the shading phase, which is good since the generation phase is unlikely to change, while the shading phase would change quite a bit.
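The two-view deferred split described above could be sketched as follows. All types here are made up for illustration; the point is only that the shading view consumes the render targets produced by the G-buffer view:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of deferred shading as two chained render views:
// the G-buffer view renders into targets that the shading view reads.
struct RenderTarget { int id; };

struct GBufferView {
    // Stand-ins for e.g. normals, albedo, and depth targets.
    std::vector<RenderTarget> m_targets{
        RenderTarget{0}, RenderTarget{1}, RenderTarget{2}
    };
};

struct ShadingView {
    // Tied to the first view's render targets, as described above.
    const GBufferView* m_input = nullptr;

    explicit ShadingView(const GBufferView& g) : m_input(&g) {}

    std::size_t InputCount() const { return m_input->m_targets.size(); }
};
```

Keeping the two views separate means the G-buffer generation stays stable while the shading view can change freely.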
Quote:
So far my entire pipeline is cleanly abstracted and XML-configured, so in keeping with that I will probably make the RenderView abstraction data-driven as well, rather than using concrete C++ classes. I'll let you know how progress goes if I manage to get around to it soon.
Yes - I need to change my system over to be more data-driven. I just don't have that much time to do it right now...

Thanks again for the comments. More are welcome if you think you can add something to the discussion!
