gsamour

Member Since 03 Jul 2009
Offline Last Active Nov 09 2012 04:47 PM

Topics I've Started

Pixel Shader Output doesn't match Final Framebuffer value in PIX

07 June 2011 - 05:24 PM


In PIX, my pixel shader writes out all four channels, but in the final framebuffer the RGB channels somehow end up with the alpha value, and the alpha channel ends up with 1.0... Any ideas? At this point in my render, alpha blending is enabled because I'm doing deferred shading; the current pass is the light pass and it does additive blending.
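
For reference, the blend states for the light pass are set up roughly like this (a simplified sketch with an assumed `device` pointer, not my exact code):

```cpp
#include <d3d9.h>

// Simplified sketch of the additive blend states for the deferred light pass.
void SetLightPassBlendStates(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);   // additive blending
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);

    // With blending enabled, the "Final Framebuffer" value PIX shows is the
    // blended result (src + dst here), not the raw pixel shader output, and
    // the alpha channel is blended too unless separate alpha blending is set:
    // device->SetRenderState(D3DRS_SEPARATEALPHABLENDENABLE, TRUE);
    // device->SetRenderState(D3DRS_SRCBLENDALPHA,  D3DBLEND_ZERO);
    // device->SetRenderState(D3DRS_DESTBLENDALPHA, D3DBLEND_ONE);
}
```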

Any help is appreciated :)

Particle visual artifacts

06 January 2011 - 09:44 AM

Hi,

I've been working on a soft particles demo for a few days now. I got the soft-particle part working, but I'm having trouble with general particle rendering: I'm seeing visual artifacts caused by alpha blending.

I'm sorting particles back to front, but sometimes they intersect. If Z-writing is enabled, you can clearly see the intersection lines. If Z-writing is disabled, the particles look OK in a screenshot... but in motion there's a lot of "popping" whenever one particle moves closer to the camera than another and the draw order flips.
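
The sorting and render states look roughly like this (a minimal sketch, assuming a simple Particle struct with a world-space position; not my exact code):

```cpp
#include <algorithm>
#include <vector>
#include <d3dx9.h>

struct Particle { D3DXVECTOR3 pos; /* color, size, ... */ };

// Draw the farthest particles first by comparing view-space depth.
void SortBackToFront(std::vector<Particle>& particles, const D3DXMATRIX& view)
{
    std::sort(particles.begin(), particles.end(),
        [&](const Particle& a, const Particle& b)
        {
            float za = a.pos.x * view._13 + a.pos.y * view._23 + a.pos.z * view._33 + view._43;
            float zb = b.pos.x * view._13 + b.pos.y * view._23 + b.pos.z * view._33 + view._43;
            return za > zb;
        });
}

// Typical particle states: depth *test* on so particles are hidden by solid
// geometry, depth *write* off so they don't clip each other, standard alpha blend.
void SetParticleRenderStates(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_ZENABLE, TRUE);
    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
}
```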


Screenshots were posted for each combination of settings:

- Alpha Blending OFF and Z-Writing ON
- Alpha Blending OFF and Z-Writing OFF
- Alpha Blending ON and Z-Writing ON
- Alpha Blending ON and Z-Writing OFF

A few questions...

1. If Z-Writing is OFF, then I don't need to sort back to front, right?
2. If particles were view-plane aligned instead of viewpoint aligned, I shouldn't have any intersection problems, right?
3. I've heard additive blending can be used to avoid these artifacts, but AFAIK it's meant for effects like fire, not smoke... and how should particle art be created to work with additive blending?
4. Is there a way to get rid of intersection artifacts or popping artifacts completely?

Any help is appreciated!


EDIT: added question 4

[Edited by - gsamour on January 6, 2011 4:18:56 PM]

Soft Particles with DirectX 9

28 December 2010 - 03:35 PM

Hi, I'm trying to do Soft Particles with DirectX9.

I'm using the NVIDIA paper as guidance:

http://developer.download.nvidia.com/SDK/10/direct3d/Source/SoftParticles/doc/SoftParticles_hi.pdf

As far as I know, the general idea is:

1. Render the scene without particles into a depth map.
2. Render the scene with particles. For each particle's pixel, check the depth of the scene to see if the particle should be opaque or semi-transparent.
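
A minimal sketch of that two-pass structure in D3D9 (one-channel float render target, with the device handling and draw calls simplified; the names are illustrative, not my actual code):

```cpp
#include <d3d9.h>

IDirect3DTexture9* g_depthTex = NULL;   // scene depth, sampled by the particle shader

void CreateDepthTarget(IDirect3DDevice9* device, UINT width, UINT height)
{
    // One-channel 32-bit float render target for the depth map.
    device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                          D3DFMT_R32F, D3DPOOL_DEFAULT, &g_depthTex, NULL);
}

void RenderFrame(IDirect3DDevice9* device)
{
    // Pass 1: render opaque scene depth into the R32F target.
    IDirect3DSurface9* depthSurf  = NULL;
    IDirect3DSurface9* backBuffer = NULL;
    device->GetRenderTarget(0, &backBuffer);
    g_depthTex->GetSurfaceLevel(0, &depthSurf);
    device->SetRenderTarget(0, depthSurf);
    device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFFFFFFFF, 1.0f, 0);
    // DrawSceneDepth();   // pixel shader writes the chosen depth value

    // Pass 2: render the scene and particles normally; the particle shader
    // samples g_depthTex and fades where its depth approaches the scene depth.
    device->SetRenderTarget(0, backBuffer);
    // DrawScene();
    // DrawParticles();

    depthSurf->Release();
    backBuffer->Release();
}
```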

I have a few questions:

1. Should I write "z" or "z/w"? The NVIDIA paper says "z", but if I write just "z", it seems like all of the pixels show up with the same value. Should the depth map make sense if I save it as a jpg? If I write "z/w", I see something that makes more sense (subtle changes in color due to different depths).

2. I'm using D3DFMT_R32F for my render target. I've seen other posts saying that I can get away with just writing the depth into the x component of the color in the pixel shader. But if I do this, my depth map is completely light blue. If I write depth into all xyzw, then I see something that makes sense.

3. If I go with writing "z/w" and get the difference with the particle's "z/w", I see "soft" particles.

So I'm calculating "saturate((Zscene - Zparticle) * scale)". I'm not sure what the scale value represents, but I've found that 20.0f works for me.

What is the normal way of using the calculated value afterwards? I'm multiplying the pixel's alpha by that value, but maybe it's meant to multiply the entire color... (see the sketch after these questions).

4. My test application's resolution is 640x480. I'm making my depth map also 640x480. Is it standard practice to make it the same size as the screen resolution? This is what makes most sense to me, but I've also seen depth maps being used for shadow mapping and they aren't the same resolution as the screen.

5. I'm using the same projection to render depth as to render the scene. Deciding on orthographic vs. perspective projections makes sense for shadow mapping, as it's based on different light types, but does it make sense to consider a different projection for a depth map intended for soft particles?
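
Here is the fade term from question 3 written out as a small C++ helper (a sketch only; in practice the same expression lives in the particle pixel shader, and the names are illustrative):

```cpp
#include <algorithm>

// saturate((Zscene - Zparticle) * scale)
float SoftParticleFade(float sceneDepth, float particleDepth, float scale)
{
    float fade = (sceneDepth - particleDepth) * scale;
    return std::min(std::max(fade, 0.0f), 1.0f);
}

// Typical use: multiply the particle's alpha by the fade value before blending,
// so the particle becomes fully transparent where it touches solid geometry.
// With linear view-space depths, 1/scale is roughly the world-space distance
// over which the fade happens; with z/w the relationship is nonlinear, which
// may be why the working value (20.0f) has to be found empirically.
```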


I'll try to post my code and screenshots to ask for a more formal critique.

Thanks in advance!

Asynchronous Asset Loading (data streaming)

15 November 2010 - 06:33 AM

Say I have two threads in my game, the main thread and an asset loader thread. For now, my system consists of the following:

1. a manager class that allows you to Get() assets by name
2. an actor class that references assets like textures and meshes by name
3. when an actor needs to render a mesh, it calls Get() on the manager. The manager checks a <string, asset pointer> map to see if it has the asset. If the lookup returns non-null, the actor uses that mesh; otherwise it uses a default pre-loaded mesh and the manager kicks off an asset-loading task on the loader thread.
4. same as #3 for textures when a mesh needs them.
5. When an asset loading task is done, the manager adds the asset to its <string,asset pointer> map. This means that the map has a mutex or critical section that is locked whenever the user calls Get() and also whenever the loader thread is done.

I think locking a mutex every time Get() is called is not a good idea. I'm trying to think of alternate ways to solve this. Can anybody help?

One thing I can think of is to reference assets by pointer instead of by name. If an actor has a non-null pointer, it uses it. And the loader thread, when done, sets the pointer. This still requires a mutex though, for the actor's asset pointer.

One other thing is to keep some sort of state on the actor that indicates whether or not it has the asset. If it's loading, then it'll keep calling Get() on the manager. When it gets a valid pointer, it'll cache it, and stop calling Get(). The pointer would have to be a shared_ptr or weak_ptr because multiple actors could use the same assets.

Can someone please give me pointers on how to implement a good asynchronous loader? Also, would the implementation change if I had another thread that processes the loaded data (decompress/create object(s) from data)?
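
To make the caching idea concrete, here is a minimal sketch (C++11; all class and member names are illustrative, not my actual code). The map still sits behind a mutex, but each actor stops calling Get() once it has cached a valid pointer, so the lock is only taken while the asset is still loading:

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>

struct Asset { /* mesh / texture data */ };

class AssetManager
{
public:
    // Returns the asset if it is loaded, otherwise nullptr (and, not shown,
    // queues a load task for the loader thread).
    std::shared_ptr<Asset> Get(const std::string& name)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        auto it = m_assets.find(name);
        if (it != m_assets.end())
            return it->second;
        // QueueLoad(name);  // kick off the loader-thread task here
        return nullptr;
    }

    // Called by the loader thread when a load task finishes.
    void OnLoaded(const std::string& name, std::shared_ptr<Asset> asset)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_assets[name] = std::move(asset);
    }

private:
    std::mutex m_mutex;
    std::map<std::string, std::shared_ptr<Asset>> m_assets;
};

// Actor-side caching: poll the manager (and pay for the lock) only until the
// real asset shows up, then keep the shared_ptr and stop calling Get().
struct Actor
{
    std::string meshName;
    std::shared_ptr<Asset> mesh;   // cached once loaded

    const Asset* GetMesh(AssetManager& manager, const Asset* defaultMesh)
    {
        if (!mesh)
            mesh = manager.Get(meshName);
        return mesh ? mesh.get() : defaultMesh;
    }
};
```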

Thanks in advance.

3D Studio MAX SDK UVW Unwrap Modifier Help

16 April 2010 - 12:33 PM

Hey everyone, I've been trying for a while to get the number of keyframes and the key times of a UV animation in 3D Studio MAX using the SDK. The animation is made by adding an "Unwrap UVW" modifier to the object, setting two keyframes, and offsetting the UVs.

I've managed to get as far as obtaining a pointer to an IUnwrapMod object, but after that I'm not sure what to do. If I call NumKeys() on the IUnwrapMod object, it returns -1. I can sample the vertices at certain times and get the UVs at those times, but if there are keyframes, I'd like to use those as my sample times. Any insight on UV animation in 3ds Max is appreciated!
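
A rough sketch of one possible way to collect key times, relying only on the generic Animatable interface (the recursion and the assumption about where the Unwrap UVW keys live are guesses on my part, not verified against the SDK docs):

```cpp
#include <max.h>
#include <vector>

// Recursively walk an Animatable's sub-anims and collect key times from any
// sub-controller that reports keys.  The (unverified) assumption is that the
// Unwrap UVW keys live on a sub-controller rather than on the modifier itself,
// which could explain NumKeys() returning -1 on the IUnwrapMod pointer.
void CollectKeyTimes(Animatable* anim, std::vector<TimeValue>& times)
{
    if (!anim)
        return;

    int numKeys = anim->NumKeys();
    if (numKeys != NOT_KEYFRAMEABLE)
    {
        for (int k = 0; k < numKeys; ++k)
            times.push_back(anim->GetKeyTime(k));
    }

    for (int i = 0; i < anim->NumSubs(); ++i)
        CollectKeyTimes(anim->SubAnim(i), times);
}
```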
