
Member Since 30 Aug 2012
Offline Last Active Jul 19 2014 08:24 AM

Topics I've Started

Vertexbuffer - huge lag.

07 July 2014 - 10:03 AM

I'm using XNA but I assume the problem is analogous in DX9?


So I'm having a huge problem rendering a model that's created dynamically at runtime. It renders correctly, but it causes a huge framerate drop.


Here's the issue illustrated through a comparison:


1) Model with 10,000 vertices created in 3ds max and rendered with shader X in XNA ---> 200fps (after everything else has happened in the game)


2) Similar 10,000 vertex model constructed at runtime with a dynamic vertex buffer and rendered with shader X ---> 100fps

This is a completely unacceptable drop, and I assume I'm doing something wrong - something like Minecraft would be impossible to run if this drop were unavoidable. I can post my code if necessary, but I'm just using the same approach as the 3D particles sample. I've profiled the project with NProf, and all of the time is being spent in GraphicsDevice.Present().
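For context, here's roughly what I'm doing each frame (a trimmed-down sketch, not my exact code - maxVertices, vertices, and vertexCount are placeholders). My understanding is that the usual cause of Present() stalls with dynamic geometry is either recreating the buffer every frame or calling SetData without discard semantics, so the GPU has to finish the previous frame's draw before the upload can proceed:

```csharp
// Create the buffer ONCE at load time, not per frame.
DynamicVertexBuffer vertexBuffer = new DynamicVertexBuffer(
    GraphicsDevice,
    typeof(VertexPositionNormalTexture),
    maxVertices,
    BufferUsage.WriteOnly);

// When the geometry changes, upload with Discard so the driver can
// hand back a fresh region instead of stalling on the in-flight one.
vertexBuffer.SetData(vertices, 0, vertexCount, SetDataOptions.Discard);

// Then draw as usual.
GraphicsDevice.SetVertexBuffer(vertexBuffer);
GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexCount / 3);
```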



Multisampling in XNA?

12 April 2014 - 06:21 AM

Hey guys, simple question: how do you change the multisample count/quality in XNA 4?

I've seen plenty of articles talking about it back in XNA 3.1, but none for 4.


And what different options are there? I've seen a project that chose between 2x and 4x. I know some modern games (and Unity, I believe) do 8x now.

Thanks muchly!
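For reference, the closest thing I've found for XNA 4 is hooking the GraphicsDeviceManager's PreparingDeviceSettings event - a minimal sketch, assuming "graphics" is the GraphicsDeviceManager created in the Game constructor (I haven't confirmed how quality levels are exposed beyond the sample count):

```csharp
// In the Game constructor, before the device is created.
graphics.PreferMultiSampling = true;
graphics.PreparingDeviceSettings += (sender, e) =>
{
    // Request 4x MSAA; the runtime clamps this to what the
    // hardware actually supports.
    e.GraphicsDeviceInformation.PresentationParameters.MultiSampleCount = 4;
};
```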

Very general question about structuring of game classes.

06 January 2014 - 07:39 AM

Ok so I have a physics-heavy simulation that's chiefly run in my main level class.

I then have various characters that inherit from a general 'unit' class (it's similar to a top-down RTS). I'm struggling to see which approach is best, now and for future efficiency, when it comes to having these units interact with the physics of the world.


1) Pass all of the relevant data into each unit's Update function and allow them to manipulate it from there.


2) Have the unit's Update function return an integer (or something similar) that corresponds to a particular action, which is then carried out in the main game class.


So option 1 is helpful in that it allows the entirety of the unit's capabilities to be contained within the class file. Option 2 is helpful in that you don't need to pass a huge selection of variables to Update each time, and all of the physics can be handled from within the main class.

Thoughts and suggestions are welcome. Is there a clear benefit to one over the other here that I'm missing, or is it just a matter of preference?
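To illustrate what I mean, the two options look roughly like this (Unit, PhysicsWorld, and UnitAction are placeholder names, not my actual classes). Note that in option 1, wrapping everything the unit needs in a single PhysicsWorld object would sidestep the "huge selection of variables" problem:

```csharp
// Option 1: pass the physics state into the unit and let it act directly.
public abstract class Unit
{
    // One wrapper object instead of a long parameter list.
    public abstract void Update(GameTime gameTime, PhysicsWorld world);
}

// Option 2: the unit only decides; the main level class carries it out.
public enum UnitAction { None, Move, Attack }

public abstract class DecidingUnit
{
    public abstract UnitAction Update(GameTime gameTime);
}
```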


Thanks in advance.

No normals generated with fbx file?

25 October 2013 - 09:18 AM

(I hope this is in the right section)

Ok so I'm using 3ds Max. Up until now I've been using the Panda exporter for .X files. I've just switched to the built-in FBX exporter, as it allows for two sets of UV coords. However, when I use the same shader code on the same models exported as FBX, anything using normals fails to work. There are no errors or anything; the normals just don't seem to have been generated.


So if, for example, I end the pixel shader with: return float4(input.Normal.xyz, 1);
the entire model is black. Thoughts? Ideas?
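In case it's relevant, the workaround I'm considering is a custom content processor that forces normal generation at import time - a sketch, assuming the normals really are missing rather than mangled (the processor name is mine; MeshHelper.CalculateNormals is the built-in pipeline call):

```csharp
using Microsoft.Xna.Framework.Content.Pipeline;
using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
using Microsoft.Xna.Framework.Content.Pipeline.Processors;

[ContentProcessor(DisplayName = "Model - Generate Normals")]
public class NormalGeneratingProcessor : ModelProcessor
{
    public override ModelContent Process(NodeContent input,
                                         ContentProcessorContext context)
    {
        GenerateNormals(input);
        return base.Process(input, context);
    }

    private static void GenerateNormals(NodeContent node)
    {
        MeshContent mesh = node as MeshContent;
        if (mesh != null)
        {
            // 'true' overwrites any existing (possibly broken) normals
            // with freshly computed, face-averaged ones.
            MeshHelper.CalculateNormals(mesh, true);
        }
        foreach (NodeContent child in node.Children)
            GenerateNormals(child);
    }
}
```

But I'd rather understand why the exporter isn't producing them in the first place.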



Efficient lightmapping?

08 October 2013 - 02:02 AM

I'm playing around with lightmaps for my fairly small demo scene. I've looked around at some of the online literature, but I can't discern whether there's an agreed-upon method for storing and implementing the lightmaps.


I realise that whether you can spare the texture memory or the processing time will depend on the game in question, but I'm just wondering if there's one approach that's considered superior. The way I see it, the main possibilities are:



1) Bake the lightmaps into the textures for each model. This means only one texture needs to be sent to the shader, but a unique texture needs to be held for each model (a lot of my models reuse textures otherwise).


2) Bake a separate lightmap for each model. A unique lightmap still needs to be kept for each model, but you can probably get away with lower resolutions (given that the main textures are often shared between models and are more detailed). Two textures need to be sent to the shader, but one can be shared between many models.


3) Given that lightmaps can use a second set of UV coords (and I'm working with a fairly small scene), put all of the scene's models onto one huge lightmap. This'll make it a lot easier to assemble the scene, and most textures will be shared quite a number of times. The main problem will be the huge amount of memory needed for high-quality lighting.


Have I laid out the options correctly? And which approach would you recommend?
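For what it's worth, the shader side of options 2 and 3 is the same either way - a second sampler driven by the second UV set. A minimal HLSL sketch of what I have in mind (sampler and struct names are placeholders):

```hlsl
sampler DiffuseSampler : register(s0);   // shared between models
sampler LightmapSampler : register(s1);  // unique per model or per scene

struct PixelInput
{
    float2 TexCoord0 : TEXCOORD0; // shared diffuse UVs
    float2 TexCoord1 : TEXCOORD1; // unique lightmap UVs
};

float4 LightmappedPS(PixelInput input) : COLOR0
{
    float4 diffuse = tex2D(DiffuseSampler, input.TexCoord0);
    float4 light = tex2D(LightmapSampler, input.TexCoord1);
    // Modulate the shared diffuse colour by the baked lighting.
    return float4(diffuse.rgb * light.rgb, diffuse.a);
}
```

So the question is really just about how the lightmap textures are organised and stored.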


Thanks a lot.