

Shael

Member Since 30 Nov 2004
Offline Last Active Jan 15 2014 09:21 PM

#4971732 GPU Terrain Physics

Posted by Shael on 21 August 2012 - 12:44 AM

Thanks for the food :D

So my initial thoughts were somewhat correct in terms of using lower-resolution meshes. Do you know of any demos that show this sort of thing? Most of the demos I've found around terrain rendering on the GPU don't do any physics simulation.

I'd be interested in hearing what others have to say on this topic.


#4971716 GPU Terrain Physics

Posted by Shael on 20 August 2012 - 11:00 PM

How can terrain physics be handled when a lot of modern terrain rendering is done via the GPU?

Normally, with a brute-force approach and small heightmaps, you could create the vertices on the CPU and load them into a physics engine as a collision mesh. With GPU approaches, however, there isn't always a 1:1 mapping between the vertex data on the CPU and what's displayed on screen (e.g. hardware tessellation or vertex morphing). One idea is to generate a low-to-medium resolution version on the CPU to form the collision mesh, but I'm not sure how practical this is, as it may cause visual artifacts with physics objects sinking into the terrain, or perhaps even floating above it.
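A minimal sketch of the CPU-side decimation idea above. All names here are illustrative, not from any particular engine; the resulting grid could feed a heightfield collision shape (most physics engines accept one directly). Max-sampling each decimated cell biases the collision surface upward, so objects may float slightly but won't sink through peaks:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Downsample a square heightmap into a coarser grid for a physics
// heightfield shape. `factor` is the decimation step; each output
// cell takes the max height of the source cells it covers.
std::vector<float> BuildCollisionHeights(const std::vector<float>& src,
                                         std::size_t srcSize,
                                         std::size_t factor)
{
    std::size_t dstSize = srcSize / factor;
    std::vector<float> dst(dstSize * dstSize);
    for (std::size_t z = 0; z < dstSize; ++z)
        for (std::size_t x = 0; x < dstSize; ++x)
        {
            float h = src[(z * factor) * srcSize + (x * factor)];
            // Take the max over the whole decimated cell.
            for (std::size_t dz = 0; dz < factor; ++dz)
                for (std::size_t dx = 0; dx < factor; ++dx)
                {
                    std::size_t sx = x * factor + dx;
                    std::size_t sz = z * factor + dz;
                    if (sx < srcSize && sz < srcSize)
                        h = std::max(h, src[sz * srcSize + sx]);
                }
            dst[z * dstSize + x] = h;
        }
    return dst;
}
```

Whether max, min, or average sampling is right depends on whether floating or sinking is the less objectionable artifact for your game.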

What is the modern approach to this that games like BF3 are using?


#4900792 Emulating CBuffers

Posted by Shael on 08 January 2012 - 06:20 PM

Thanks for taking the time to discuss this topic :)

I hadn't thought of using Lua to describe the FX section, but I'll look into it. At the moment I'm heading down the path of having my own HLSL-like syntax that I'd parse. This FX section would basically define all the constant buffers and techniques/contexts for the shader - very similar to Horde3D, except with cbuffer support, I suppose.

The main problem I'm having is with the part where you talk about a mask so the engine knows which buffers to set before drawing. With the design above, the cbuffers defined in the FX section are forced to have unique names, so that a user can look up a cbuffer for a particular shader type. Also, how does the mask fit in with your state-group idea where you have cbuffer bind commands? Are those commands necessary if you already have a mask to know what has to be set?

This is an example of the structure/usage I was going for:

Effect:

[FX]
cbuffer cbPerObjectVS : register( b0 )
{
    matrix g_mWorldViewProjection;
    matrix g_mWorld;
};

cbuffer cbPerObjectPS : register( b0 )
{
    float4 g_vObjectColor;
};


context SIMPLE
{
    VertexShader = compile VS_SIMPLE;
    PixelShader = compile FS_SIMPLE;
}


[VS_SIMPLE]

cbuffer cbPerObjectVS : register( b0 )
{
    matrix g_mWorldViewProjection : packoffset( c0 );
    matrix g_mWorld : packoffset( c4 );
};

...


[PS_SIMPLE]
cbuffer cbPerObjectPS : register( b0 )
{
    float4 g_vObjectColor : packoffset( c0 );
};

...

Usage:

perObjectVSIndex = model->Effect()->FindCBuffer("cbPerObjectVS");
perObjectVSData = model->Effect()->CloneBuffer(perObjectVSIndex);

This would mean I'd need a lookup table to know which shader type (GS/VS/PS/etc.) defines the CBuffer the user is asking for. In the case above it would map to the vertex shader in the effect. The CBuffer declarations in the FX section mainly exist to build that lookup table; the actual CBuffer layout is created per shader type via reflection.
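The lookup table described above could be sketched like this. The types and names are my own invention, not an existing API; the table is built once while parsing the FX section and maps each unique cbuffer name to the stage and register slot that declared it:

```cpp
#include <string>
#include <unordered_map>

enum class ShaderStage { Vertex, Pixel, Geometry };

struct CBufferBinding
{
    ShaderStage stage; // which stage declares the buffer
    int         slot;  // the register( bN ) slot from the FX section
};

// Built once when the FX section is parsed. Because cbuffer names
// must be unique across the effect, a flat map is enough.
class CBufferTable
{
public:
    void Add(const std::string& name, ShaderStage stage, int slot)
    {
        table_[name] = { stage, slot };
    }

    // Returns true and fills `out` if the effect declares the buffer.
    bool Find(const std::string& name, CBufferBinding& out) const
    {
        auto it = table_.find(name);
        if (it == table_.end())
            return false;
        out = it->second;
        return true;
    }

private:
    std::unordered_map<std::string, CBufferBinding> table_;
};
```

A `FindCBuffer` call like the usage snippet above would consult this table first, then fetch the reflected layout from the owning shader stage.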

I'm not really sure this is the best way to go about it so I'm open to suggestions as I'm still in design/thinking stages :)


#4887423 Organising engine

Posted by Shael on 24 November 2011 - 04:29 PM

Your design is probably just wrong. Each "layer" of the engine should only depend on itself and lower layers; a good example is the engine architecture diagram from Jason Gregory's book (Game Engine Architecture).

As for the scene graph and renderer problem: your low-level renderer shouldn't know anything about a scene graph. It should simply take batches of render packets, and it's the scene graph's job to generate those batches and submit them to the low-level renderer.

Depending on your requirements, you might need to split the renderer into a high-level renderer and a low-level renderer. The scene graph would then collect renderables and pass them to the high-level renderer, which would perform some other operations before passing the render packets on to the low-level renderer to be drawn.
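To make the layering concrete, here's a minimal sketch of the render-packet idea; the struct fields and class names are illustrative, not from any specific engine. The point is the dependency direction: the low-level renderer knows only about packets, never about scene-graph nodes:

```cpp
#include <cstdint>
#include <vector>

// Minimal "render packet": everything the low-level renderer needs
// to issue one draw, with no knowledge of how the scene is organised.
struct RenderPacket
{
    uint32_t shaderId;
    uint32_t vertexBufferId;
    uint32_t indexCount;
};

// The low-level renderer only consumes batches; traversal, culling
// and sorting all happen in the layers above it.
class LowLevelRenderer
{
public:
    void Submit(const std::vector<RenderPacket>& batch)
    {
        for (const RenderPacket& p : batch)
            Draw(p);
    }

    int DrawCalls() const { return drawCalls_; }

private:
    void Draw(const RenderPacket& p)
    {
        // A real renderer would bind p.shaderId / p.vertexBufferId
        // here and issue the GPU draw call.
        (void)p;
        ++drawCalls_;
    }

    int drawCalls_ = 0; // stands in for the actual API calls
};
```

The scene graph (or high-level renderer) fills the `std::vector<RenderPacket>` during traversal and hands it over; nothing below that layer ever walks the graph.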


#4847987 Ambient occlusion simulation for AAA projects

Posted by Shael on 11 August 2011 - 05:03 PM

I can't believe this thread is still going. I can barely make sense of half of what this guy is babbling on about.


#4838198 Sorting Objects

Posted by Shael on 20 July 2011 - 06:05 PM

Usually you would treat opaque and transparent objects differently, so it makes sense to keep them in separate lists. As for sorting, you could use a sort key/hash, which lets you sort by a number of properties at once (distance, shader/material, etc.).
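A small sketch of such a sort key, with field widths chosen purely for illustration. Packing the criteria into one 64-bit integer, most significant field first, means a single integer sort orders the whole render queue by layer, then shader, then material, then depth:

```cpp
#include <cstdint>

// Pack draw-ordering criteria into one 64-bit key. Field widths are
// arbitrary here: 8-bit layer (highest priority), 16-bit shader,
// 16-bit material, 24-bit quantised depth (lowest priority).
uint64_t MakeSortKey(uint8_t layer, uint16_t shader,
                     uint16_t material, uint32_t depth)
{
    return (uint64_t(layer)    << 56) |
           (uint64_t(shader)   << 40) |
           (uint64_t(material) << 24) |
           (uint64_t(depth) & 0xFFFFFF); // keep 24 bits of depth
}
```

For the transparent list you'd typically flip or invert the depth field so objects sort back-to-front instead of front-to-back.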


#4837681 looking for a library to load video with an alpha channel

Posted by Shael on 19 July 2011 - 04:52 PM

You sure about FFmpeg/Libav? I'm looking at the supported pixel formats and it supports RGBA.

