

This topic is now archived and is closed to further replies.

Is blending faster with a vertex shader?




Hello coders. I do mesh interpolation of positions, normals and UVs in my own code ( x = x1*weight + x2*(1-weight) ). Would it be faster to do that in the vertex shader and assemble it there? Thank you all.
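For reference, the CPU-side blend described above amounts to something like this (a minimal sketch; the struct and function names are illustrative):

```cpp
#include <cstddef>

struct Vec3 { float x, y, z; };

// Blend two keyframe meshes: out = a*weight + b*(1 - weight)
void BlendVertices(const Vec3* a, const Vec3* b, Vec3* out,
                   std::size_t count, float weight)
{
    for (std::size_t i = 0; i < count; ++i) {
        out[i].x = a[i].x * weight + b[i].x * (1.0f - weight);
        out[i].y = a[i].y * weight + b[i].y * (1.0f - weight);
        out[i].z = a[i].z * weight + b[i].z * (1.0f - weight);
    }
}
```

The same loop would run over the normals and UVs; the per-frame cost is what moving the math into a vertex shader is meant to remove.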


1) With hardware vertex processing, it's fast because the GPU performs specialised SIMD processing; it takes the work off the CPU and also means you can usually avoid locking the vertex buffer and uploading new data to the chip (i.e. use a static buffer and perform any modifications in the shader).

2) With software vertex processing, when you create the shader the D3D runtime generates native x86 SIMD code (SSE, SSE2, 3DNow!) for that shader to perform the operations in parallel. That should be faster than plain C/C++/x86 ASM code; the only thing faster would be your own well-pipelined, hand-written, specialised SIMD.


Simon O'Connor
Creative Asylum Ltd

I suspect that if you have hardware support for vertex shaders, it would be faster than doing it in software. I'm not positive, but logically you would be taking those operations off the CPU and giving them to the GPU instead, which should give a performance increase. Even if you don't have support for hardware vertex shaders, it still might be faster because of the optimized assembly code that is used to run vertex shaders in software. I haven't tested this myself, so I'm not entirely sure, but I assume vertex shaders would be faster in this case too.

I use a vertex shader for a similar thing in my programs and using hardware vertex shaders is WAY faster than software shaders.

Could someone please explain how you would do this?

How would you give the vertex shader the 2nd vertex and weight?

Would you have to upload them to the GPU shader constants for every vertex of every frame?

Say you had a mesh (just a bunch of triangles, not the D3D class), would you draw it with one DrawPrimitives() call? If so, how would you give the vertex shader the next vertex and weight for the interpolation?

Or would you call DrawPrimitives() for every triangle and set the shader constants just before each call? (This way seems fundamentally flawed to me.)

(By the way, in case you haven't noticed, I know very little about vertex shader theory, so the more details you give, the better it will help me.)

[edited by - Smurfwow on February 8, 2003 2:44:53 AM]

How would you give the vertex shader the 2nd vertex and weight?

Generally, you pass the 2 positions in the same vertex, say:
struct T_Vertex
{
    T_Vec3 pos1, pos2;
    T_Vec2 tex;
};

And then you create the appropriate vertex declaration that matches your vertex structure (a vertex declaration tells the shader what to expect in the input registers).

A DX8 decl would look like:
DWORD dwDecl[] = {
    D3DVSD_STREAM( 0 ),
    D3DVSD_REG( 0, D3DVSDT_FLOAT3 ), // first pos
    D3DVSD_REG( 1, D3DVSDT_FLOAT3 ), // second pos
    D3DVSD_REG( 2, D3DVSDT_FLOAT2 ), // tex coords
    D3DVSD_END()
};

In the shader:
v0: first position
v1: second position
v2: texture coordinates

For the weight: if it's per-mesh or something (for animation interpolation, for example), you can pass it as a constant. Otherwise (i.e. if it's per-vertex), you'd put it in the vertex structure.
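As a rough sketch of the per-vertex option (the struct layout and names here are illustrative, not a fixed D3D format), the weight simply becomes another float in the vertex, and each vertex blends with its own factor:

```cpp
struct TweenVertex
{
    float pos1[3];  // position in keyframe A
    float pos2[3];  // position in keyframe B
    float tex[2];   // texture coordinates
    float weight;   // per-vertex blend factor
};

// CPU reference for what the shader would compute per vertex:
// result = pos1 + weight * (pos2 - pos1)
void LerpPosition(const TweenVertex& v, float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = v.pos1[i] + v.weight * (v.pos2[i] - v.pos1[i]);
}
```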

Hope this helps.

[edited by - Coder on February 9, 2003 5:21:48 AM]

So this is the "tweening" feature of DirectX?

It seems that this would be a good thing to implement on landscape LOD in order to avoid the popping between different levels of detail. And you don’t really need to know anything about writing vertex shaders, which I don’t.
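For the LOD idea, the CPU's per-frame work can reduce to computing one morph weight per detail band; a hypothetical sketch (the band boundaries are made-up parameters):

```cpp
#include <algorithm>

// Hypothetical geomorph weight for terrain LOD: 0.0 at the near edge
// of a detail band, 1.0 at the far edge, clamped in between, so
// vertices slide smoothly toward the coarser mesh instead of popping.
float GeomorphWeight(float distance, float bandNear, float bandFar)
{
    float t = (distance - bandNear) / (bandFar - bandNear);
    return std::max(0.0f, std::min(1.0f, t));
}
```

The result would be uploaded as the shader's blend constant, with pos1/pos2 holding the fine and coarse vertex positions.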

OK... I've implemented vertex blending with an effect using HLSL.

The animations seem to be slightly more jerky than when I was doing them manually.

If anyone's interested, this is the effect:


float t;

float4x4 World : WORLD;

struct VS_OUTPUT
{
    float4 Pos : POSITION0;
    float2 Tex : TEXCOORD0;
};

VS_OUTPUT main( float3 Pos1 : POSITION0, float3 Pos2 : POSITION1,
                float3 Norm : NORMAL0, float2 Tex : TEXCOORD0 )
{
    VS_OUTPUT Out;

    // promote to float4 with w = 1 (these are positions) before transforming
    float4 newPos1 = mul( World, float4( Pos1, 1.0f ) );
    float4 newPos2 = mul( World, float4( Pos2, 1.0f ) );

    Out.Pos = lerp( newPos1, newPos2, t );
    Out.Tex = Tex;

    return Out;
}

technique T0
{
    pass P0
    {
        VertexShader = compile vs_1_1 main();
        PixelShader  = NULL;
    }
}

In the game I just set the t value once a frame, then put effect.Begin(0) and effect.End() around my drawing code.

Does anyone know what could possibly cause "jerky" animations? Given that they were smoother when I was doing them in C#, I'd expect them to be at least as smooth when done on the GPU...
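One possible culprit (an assumption, since the post doesn't show how t is computed): if t is stepped by a fixed amount each frame instead of being derived from elapsed time, any variation in frame rate shows up as stutter. A time-based sketch (names are illustrative):

```cpp
// Derive the blend factor from elapsed time rather than incrementing
// it by a fixed amount per frame, so variable frame rates stay smooth.
float BlendFactor(float elapsedMs, float cycleMs)
{
    float phase = elapsedMs / cycleMs;
    return phase - static_cast<int>(phase); // wrap into [0, 1)
}
```

You would call this once per frame with the total elapsed milliseconds and pass the result to the effect as t.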

