
Member Since 01 Jun 2009
Offline Last Active Jul 11 2014 05:04 PM

Topics I've Started

Particles with DOF

15 June 2012 - 02:37 PM

Good evening,

I'm just wondering how everyone else tackles the issue of integrating depth of field with particle systems. The usual method of using the depth buffer as input to the DOF effect won't work with transparency, since only the nearest depth layer is stored (obviously), resulting in distant transparent geometry being blurred incorrectly when it sits behind closer transparent geometry.

It doesn't need to be perfect, it just needs to look "ok".

Unfortunately, in this scenario the particles are scattered throughout the scene, and the focal range for the DOF is fairly small, so the transition between in-focus and out-of-focus must be smooth. (So I can't simply say: "These are out-of-focus, and these are in-focus".)

Any suggestions? My google-fu isn't great today.
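One common workaround (not from the post, just a sketch) is to evaluate the blur amount per particle from the particle's own view-space depth, instead of reading the opaque depth buffer, and then fade or pre-blur each particle accordingly. A minimal CPU-side sketch of such a blur ramp; `focalDepth`, `focalRange` and `transition` are illustrative parameter names:

```cpp
#include <algorithm>
#include <cmath>

// Circle-of-confusion style blur factor: 0 inside the focal range,
// ramping linearly to 1 over a transition band. In practice this would
// be evaluated per particle in the vertex shader.
float blurFactor(float depth, float focalDepth, float focalRange, float transition)
{
    // Distance outside the in-focus band (negative while inside it).
    float d = std::fabs(depth - focalDepth) - focalRange;
    return std::clamp(d / transition, 0.0f, 1.0f);
}
```

Because the ramp is continuous, particles near the edge of the focal range blur gradually rather than snapping between "in-focus" and "out-of-focus" sets.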


Constant velocity across Cubic Bezier Curve

03 September 2011 - 03:29 PM


I'm currently interpolating the position p(t) along the bezier curve through points p0,p1,p2,p3.

However, I noticed that the velocity isn't constant along the curve.

I assumed that if the control points were equidistant from one another, the velocity would be constant. This turns out to be false in this case:

[Warning: crude diagram below]
|  .p0          .p3
|	'p2
|  .p1

I'd be happy with a solution that ensures:
- at t=0.25 the position reaches p0+(p0->p1)/2
- at t=0.50 the position reaches p1+(p1->p2)/2
- at t=0.75 the position reaches p2+(p2->p3)/2

Any suggestions on how to do this?

I've thought about creating many more control points along the curve, and using the lengths of those segments to approximate a constant velocity, but this doesn't solve the issue, it only hides it a little.

Any assistance would be greatly appreciated, thanks.
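The standard answer here is arc-length reparameterization: sample the curve at many parameter values, build a cumulative arc-length table, then invert that table so a fraction of total length maps back to a parameter t. A sketch under those assumptions (`Vec2`, `arcLengthTable`, `arcLengthToT` are hypothetical names):

```cpp
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

// Cubic Bezier position at parameter t (Bernstein form).
Vec2 bezier(const Vec2& p0, const Vec2& p1, const Vec2& p2, const Vec2& p3, double t)
{
    double u = 1.0 - t;
    double b0 = u * u * u, b1 = 3 * u * u * t, b2 = 3 * u * t * t, b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}

// Cumulative chord lengths at n+1 evenly spaced t values.
std::vector<double> arcLengthTable(const Vec2& p0, const Vec2& p1,
                                   const Vec2& p2, const Vec2& p3, int n)
{
    std::vector<double> lengths(n + 1, 0.0);
    Vec2 prev = bezier(p0, p1, p2, p3, 0.0);
    for (int i = 1; i <= n; ++i) {
        Vec2 cur = bezier(p0, p1, p2, p3, double(i) / n);
        lengths[i] = lengths[i - 1] + std::hypot(cur.x - prev.x, cur.y - prev.y);
        prev = cur;
    }
    return lengths;
}

// Map a fraction s in [0,1] of the total arc length back to a parameter t
// by binary-searching the table and interpolating between samples.
double arcLengthToT(const std::vector<double>& lengths, double s)
{
    double target = s * lengths.back();
    int lo = 0, hi = int(lengths.size()) - 1;
    while (lo + 1 < hi) {
        int mid = (lo + hi) / 2;
        if (lengths[mid] < target) lo = mid; else hi = mid;
    }
    double segLen = lengths[hi] - lengths[lo];
    double frac = (segLen > 0.0) ? (target - lengths[lo]) / segLen : 0.0;
    return (lo + frac) / (double(lengths.size()) - 1.0);
}
```

Stepping s uniformly and evaluating `bezier(..., arcLengthToT(table, s))` then gives (approximately) constant speed along the curve, regardless of how the control points are spaced.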

Efficiently rendering ordered GUI objects

16 August 2011 - 07:06 AM

Hi everyone,

I'm writing my own GUI lib using D3D11 ( but that's irrelevant ). I understand there are existing libraries out there for this, but I'm doing it for fun. :)

Anyway, each renderable quad has position, size, texture UVs, and colour (RGBA) properties.
Each quad is stored within a quadbuffer that can hold 256 quads, and each source image holds a list of these quad buffers.
The typical render path is:

  • Set shaders/data etc
  • For each source image:
      • Set the source image
      • Render each of its quad buffers
This seems to be very efficient, or at least efficient enough. Unfortunately, when designing the system I completely forgot about depth sorting.
Currently depth checks are disabled for the GUI, and quads are literally rendered on top of each other in the order they arrive. As quads can have transparency,
ideally I'd like to render them from back to front to accumulate colour, but how would I go about sorting the quads while still rendering them efficiently?

I've considered giving each quadbuffer a depth layer, and then sorting the quadbuffers for rendering. This would still let me batch the quads instead of rendering each quad individually.
But then I'd need more quadbuffers, and so more quads, than is strictly necessary.

I guess this is the common battle between depth sorting and batching. Any thoughts?


Edit: I should point out that the rendering of the GUI elements is separated from the implementation of the functionality, e.g. a button doesn't have a render method, but instead holds renderable quads/sprites which will be rendered by the GUI system.
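One way to sketch the layer idea without allocating extra quadbuffers is to keep a flat quad list and sort it by (layer, texture): back-to-front order is preserved across layers, while runs of the same texture within a layer still batch into one draw call. The `Quad` record and function names below are illustrative, not from the post:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-quad sort keys: layer is a back-to-front depth key,
// textureId identifies the source image the quad samples from.
struct Quad { int layer; int textureId; };

// Sort back-to-front by layer, then by texture within each layer, so
// consecutive quads sharing a texture can go into the same batch.
// stable_sort preserves submission order for quads with equal keys.
std::vector<Quad> sortForBatching(std::vector<Quad> quads)
{
    std::stable_sort(quads.begin(), quads.end(),
                     [](const Quad& a, const Quad& b) {
                         if (a.layer != b.layer) return a.layer < b.layer;
                         return a.textureId < b.textureId;
                     });
    return quads;
}

// One batch (texture bind + draw) per run of identical textureIds.
int countBatches(const std::vector<Quad>& quads)
{
    int batches = 0;
    for (size_t i = 0; i < quads.size(); ++i)
        if (i == 0 || quads[i].textureId != quads[i - 1].textureId)
            ++batches;
    return batches;
}
```

The trade-off is re-sorting (or re-bucketing) whenever quads change layers, but since GUI contents change far less often than they are drawn, sorting once on change and replaying the sorted list each frame is usually cheap.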

[D3D11] Width/height of resource

13 March 2011 - 08:26 PM

Hello everyone,

After using D3DX11CreateShaderResourceViewFromFile to create a shader resource view, is there any way to find out the dimensions of the created resource?

The documentation on MSDN doesn't seem to shed much light on this, and I can't seem to find any solutions in other places.

I'm guessing at some level, there's a Texture2D object hiding away, is there any way to access that interface?

I will be very grateful for any responses to this.
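There is indeed a `Texture2D` hiding underneath: every view exposes its resource via `ID3D11View::GetResource`, and that resource can be queried for the `ID3D11Texture2D` interface, whose description carries the dimensions. A sketch (error handling kept minimal; this won't compile outside a Windows/D3D11 build):

```cpp
#include <d3d11.h>

// Given an SRV created by D3DX11CreateShaderResourceViewFromFile,
// recover the width/height of the underlying texture.
// Returns false if the resource is not a 2D texture.
bool GetTextureSize(ID3D11ShaderResourceView* srv, UINT& width, UINT& height)
{
    ID3D11Resource* resource = nullptr;
    srv->GetResource(&resource);            // every view knows its resource

    ID3D11Texture2D* texture = nullptr;
    HRESULT hr = resource->QueryInterface(__uuidof(ID3D11Texture2D),
                                          reinterpret_cast<void**>(&texture));
    resource->Release();
    if (FAILED(hr))
        return false;

    D3D11_TEXTURE2D_DESC desc;
    texture->GetDesc(&desc);
    texture->Release();

    width  = desc.Width;
    height = desc.Height;
    return true;
}
```

Note that both `GetResource` and `QueryInterface` add a reference, hence the matching `Release` calls.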


Rendering negative colour values

20 July 2010 - 10:26 AM

Hello there,
I'm not usually one to ask for help; I usually work out problems myself. But if anyone would like to help, I'd be very grateful. :)

After outputting the normals to a D3DFMT_A8R8G8B8 surface, I noticed that they weren't right; something ever so slight was off. After a long time trying to figure out what, I realised the format was unsigned, and the normals need to be signed.

So I switched to the D3DFMT_Q8W8V8U8 format, as this allows signed values, only to find that I couldn't create a texture with this format while specifying D3DUSAGE_RENDERTARGET as the usage.

So, how can I store the sign for each x,y,z of the normal vector in a non-signed RGB value?

I've had an idea, but I'm not sure if it'll work, I'll try it after this post...

Idea: instead of outputting the values from -1.0f to 1.0f, output them from 0.0f to 1.0f (with 0.5f representing 0.0f). Then, when accessing the surface later, I transform the values back to -1.0f to 1.0f. That should be OK, right? I think I'll need to renormalize after the transform.
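That remap (n * 0.5 + 0.5 on write, e * 2 - 1 on read, renormalize to absorb the 8-bit quantization error) is the standard trick for storing signed normals in an unsigned format. A small sketch of the encode/decode pair; the `Vec3` struct and function names are illustrative:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Remap a unit normal from [-1,1] per channel into [0,1] so it fits
// an unsigned format such as D3DFMT_A8R8G8B8 (0.5 encodes 0.0).
Vec3 encodeNormal(const Vec3& n)
{
    return { n.x * 0.5f + 0.5f, n.y * 0.5f + 0.5f, n.z * 0.5f + 0.5f };
}

// Reverse the remap; renormalize to undo quantization error from
// storing each channel in 8 bits.
Vec3 decodeNormal(const Vec3& e)
{
    Vec3 n = { e.x * 2.0f - 1.0f, e.y * 2.0f - 1.0f, e.z * 2.0f - 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

In shader terms the same remap is just `color.rgb = normal * 0.5 + 0.5;` on output and `normal = normalize(color.rgb * 2.0 - 1.0);` on read-back.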