How do YOU render?



I'm trying to find an optimal method of rendering a given number of 'renderables' by minimizing state changes. Ordering the state changes is not hard; I'm just having trouble putting together a design that is easy to use. Here are the two designs I have come up with (neither of which makes me 100% happy):

Design 1: I define a 'Renderable' as a class that holds pointers to vertex buffers, index buffers, a shader (Cg in this case), and a technique. My 'RenderList' is a linked list of these Renderables. I keep the list ordered by sorting each Renderable as I add it, sorted by (Render Target, Transparency, Shader, Technique). For each Renderable in the list that gets sent to the Renderer, the Renderable is polled for new information: new parameters are set in the shader and the vertex data is drawn.
Pros: A linked list is easy to push to, insert into, and pop from.
Cons: Each Renderable must know about the state of the world in order to set the shader parameters properly - i.e. transformation matrices, textures, etc.

Design 2: As above, a Renderable class is defined, but it only contains pointers to a vertex and index buffer, nothing else. A RenderTree is now used as the collection of these Renderables. It groups the Renderables by (Render Target, Transparency, Shader, Technique, Parameter) changes. It basically contains a stack of states that get pushed and popped (i.e. you only set the world, view, and projection matrices once). This mimics the way OpenGL does its transformation stack, and is similar to a scene graph, I believe.
Pros: This is a bit more efficient than the above in that it does not poll Renderables for their state, and each Renderable does not need to know the world state in order to set the shader parameters (like the world/view/projection matrices).
Cons: Relies heavily on the user of my code to set up her own render list. Does not automate anything (maybe that's a pro?).

Anyway, I have both designs on the table. I would like to hear what the GameDev community uses as a rendering mechanism. Perhaps those who have experience with some of the better-known engines out there (OGRE, Irrlicht, SlimDX, etc.) can throw in their two cents with regard to how they render objects. Thanks for the help.
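For what it's worth, the (Render Target, Transparency, Shader, Technique) ordering in Design 1 can be collapsed into a single integer sort key. Here is a minimal C++ sketch of that idea, assuming small integer IDs for render targets, shaders, and techniques; all names here are illustrative, not from any particular engine:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical sketch of Design 1's ordering: pack the state that is most
// expensive to change into the highest bits of a single key, then sort.
struct Renderable {
    uint8_t renderTarget;  // most expensive change -> highest bits
    bool    transparent;   // opaque objects draw before transparent ones
    uint8_t shader;
    uint8_t technique;

    uint32_t sortKey() const {
        return (uint32_t(renderTarget) << 24) |
               (uint32_t(transparent)  << 16) |
               (uint32_t(shader)       <<  8) |
                uint32_t(technique);
    }
};

// Sort the whole list so that equal states end up adjacent.
void sortRenderList(std::vector<Renderable>& list) {
    std::sort(list.begin(), list.end(),
              [](const Renderable& a, const Renderable& b) {
                  return a.sortKey() < b.sortKey();
              });
}
```

Because the most expensive state change sits in the highest bits, a plain ascending sort yields exactly the (Render Target, Transparency, Shader, Technique) order, and the renderer only changes state when the relevant bits differ between adjacent items.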

Here is the design for my (unfinished) graphics engine.

Goals:
* minimize and coalesce API-dependent code into a few key areas
* generality (works for different projection types, 2D/3D, shaders)
* built-in support for commonly used features (shadows, dynamic environment maps, reflections)

Classes:
* GraphicsObject - sort of like a scene graph node but only for transformations. Each object has a pointer to a GraphicsShape and a list of child graphic objects, plus a transformation (position, orientation, scale). Transformations are accumulated as one recurses down the tree. It also contains a bounding volume for fast hierarchical culling.
* GraphicsShape - generic shape superclass which contains information common to all types of shapes (bounding volume, material, object-space transformation). Uses the visitor pattern with the ShapeDrawer class to avoid RTTI when drawing.
* Material - contains a shader, list of texture units (shader name/index/texture), and a list of named shader attributes for the material (of any type: float, vector, matrix).
* ShapeDrawer - interface for shape drawing. Has a draw*(shape) method for each type of shape (e.g. sphere, cylinder, static mesh, articulated mesh, heightfield). Most of the API-dependent code lives here.
* VisibilityCuller - abstract type which contains all GraphicsObjects being drawn and supports queries to determine which objects are within the view of a given camera. Probably uses an octree internally to speed up queries.
* Renderer - class which does the organizational work when rendering. For each camera/viewport (including shadow maps) it uses the VisibilityCuller to determine which objects to draw. It then sorts the list by transparency, then material (they can be shared between shapes), then shader. It then iterates through the list drawing all opaque objects (with a ShapeDrawer) making minimal state changes. Finally, it sorts all transparent objects back-to-front and draws them in order. The Renderer also handles pushing/popping transformation matrices.

This is probably quite a bit more complicated than your designs but I think it is the best I've come up with yet.
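To make the Renderer's sorting step concrete, here is a minimal sketch of the opaque/transparent ordering such a pass might do, under the assumption that materials and shaders are identified by small integer handles; the names and types are hypothetical, not from the engine described above:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical draw-list entry; a real one would carry buffers and
// transformation state as well.
struct DrawItem {
    int   materialId;
    int   shaderId;
    bool  transparent;
    float viewDepth;   // distance from the camera
};

// Order a culled list for drawing: opaque items first, grouped by shader
// then material to minimize state changes; transparent items last,
// sorted back-to-front for correct blending.
std::vector<DrawItem> orderForDrawing(std::vector<DrawItem> items) {
    auto opaqueEnd = std::partition(items.begin(), items.end(),
        [](const DrawItem& d) { return !d.transparent; });

    std::sort(items.begin(), opaqueEnd,
        [](const DrawItem& a, const DrawItem& b) {
            return a.shaderId != b.shaderId ? a.shaderId < b.shaderId
                                            : a.materialId < b.materialId;
        });

    std::sort(opaqueEnd, items.end(),
        [](const DrawItem& a, const DrawItem& b) {
            return a.viewDepth > b.viewDepth;  // farthest first
        });

    return items;  // a real Renderer would now hand each item to a ShapeDrawer
}
```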

I have a separate Spatial Partitioning system in my code, which I will keep separate from the rendering system. I also used to have a Material class that worked like yours, but I ended up dumping it for something else a while ago (now I'm starting to think I should have kept it). Your Material class seems very generic - it holds a shader and its state. This seems like a decent design, and I might end up resurrecting my Material class to match your description.
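A generic Material of the kind being discussed - a shader handle plus a bag of named, typed state - might look roughly like this in C++; the types and names here are placeholders, not anyone's actual implementation:

```cpp
#include <map>
#include <string>
#include <utility>
#include <variant>
#include <vector>

// Placeholder math types; a real engine would use its own.
struct Vector3 { float x, y, z; };
struct Matrix4 { float m[16]; };

// A shader attribute can be any of a few known types.
using ShaderParam = std::variant<float, Vector3, Matrix4>;

struct TextureUnit {
    std::string samplerName;  // name of the sampler in the shader
    int         unitIndex;    // which texture unit it binds to
    int         textureId;    // handle to the texture object
};

struct Material {
    int shaderId = 0;                           // handle to the shader program
    std::vector<TextureUnit> textures;
    std::map<std::string, ShaderParam> params;  // named attributes of any type

    void setParam(const std::string& name, ShaderParam value) {
        params[name] = std::move(value);
    }
};
```

The appeal of this shape is that the renderer can diff two Materials generically when sorting, without knowing what any individual parameter means.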

Quote:

built-in support for commonly used features (shadows, dynamic environment maps, reflections)


How exactly do you achieve this? Do you expect shaders to follow a certain convention when drawing shadows/reflections/environment maps?

Quote:
Original post by RealMarkP
Quote:

built-in support for commonly used features (shadows, dynamic environment maps, reflections)


How exactly do you achieve this? Do you expect shaders to follow a certain convention when drawing shadows/reflections/environment maps?


Each material object also has two names (of the uniform variables in GLSL) for the shadow and environment maps, plus boolean flags indicating whether or not the shader needs them. The renderer takes care of rendering shadow maps for each light source, so the shader only has to perform depth comparisons. The assumption is that whatever creates the material object will know whether or not it accepts shadows.

Of course this is just the "plan." None of this is actually fully implemented yet.
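As a rough illustration of that plan (uniform names plus boolean flags on the material, with the renderer deciding what to bind), here is a minimal C++ sketch; the GL calls are stubbed out and all names are hypothetical:

```cpp
#include <string>

// Hypothetical material fields for the shadow/environment-map hookup.
struct Material {
    std::string shadowMapUniform;  // e.g. "u_shadowMap" in the GLSL shader
    std::string envMapUniform;     // e.g. "u_envMap"
    bool receivesShadows = false;  // does this shader sample the shadow map?
    bool usesEnvMap      = false;
};

// Called by the renderer after it has rendered the per-light shadow map.
// The out-parameters stand in for glBindTexture/glUniform1i calls; -1
// means "nothing bound" here.
void bindPerLightMaps(const Material& mat, int shadowMapTex, int envMapTex,
                      int& boundShadowTex, int& boundEnvTex) {
    boundShadowTex = mat.receivesShadows ? shadowMapTex : -1;
    boundEnvTex    = mat.usesEnvMap      ? envMapTex    : -1;
}
```

The point of the flags is that the renderer never guesses: a material that does not declare `receivesShadows` simply never has the shadow map bound, so its shader needs no shadow-related uniforms at all.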
