rohith291991

Is my understanding of Direct3D flawed??? (Warning: long post)


Hi all. I'm new to this forum, and new to both Win32 and DirectX programming. I consider myself an intermediate-level C coder with no experience in C++, Win32, or DirectX whatsoever. It's been 8 days since I started learning, and my progress so far has been this:

Day 1: I had no clue how to start programming in Windows or what methods were available. I didn't know what to use to write Windows apps, or what terms like MFC and Win32 meant, so I did some research.

Day 2: Got some clarity on the different ways to write Windows apps. Decided that for real clarity and understanding the Win32 API would be best for me, so I started gathering resources for Win32 programming and created a simple message box.

Day 3: Found some good tutorials and finally made a basic window. Went through the code twice more and got used to handles, window procedures, and OOP in general, with some hiccups around Unicode (the L and _T macros) that I soon got over.

Day 4: After becoming confident with creating a basic Win32 app, I downloaded and installed the DirectX SDK (November 2008), searched for LOTS of resources on basic DirectX tutorials, and created a basic windowed app that uses Direct3D to clear the screen to a specified color (using the brilliant set of beginner tutorials at www.directxtutorials.com).

Day 5: Started proper Direct3D programming (still following the same tutorials). Created a simple triangle and got used to all the new functions, the COM model, FVFs, etc. Then created a cube and rendered it, and learned about matrix transforms, primitives, and primitive types.

Day 6: Learned more about FVFs, learned about z-buffers and back buffers, created lights and set them up, and learned about the different kinds of lights in D3D. Ended up creating a simple scene with a lit, rotating cube with only diffuse coloring (no texture), made the cube's rotation user-controllable using GetAsyncKeyState(), then learned about texturing and created the same scene with a texture.

Day 7: Learned about color keys, alpha blending, sprites, basic DirectInput, and loading meshes from files and rendering them. Created a simple transparent sprite with two textured, lit cubes behind it.

Day 8: Got some resources on stencil buffers but started losing my grip. I couldn't keep up with the pace I was learning at before, so I went slow, but still did not find good resources on stencil buffers (or at least I couldn't understand how to implement them). This is probably because I exhausted all the free tutorials at www.directxtutorials.com and the rest were paid. I tried to understand the differences between the fixed-function and programmable pipelines and got some grasp of vertex and pixel shaders, but I have yet to learn the syntax and exactly where they go in the C++ program.

And from here on I'm stuck. I don't know how to use shaders in my programs, what exactly a shader does, or how it can be used; which parts of the fixed-function pipeline programs I wrote earlier can be replaced by shaders; how to use effect files (.fx); how to do multitexturing (without shaders); or how to blend the pixels of two or more buffers or perform operations on them (and whether such things are even possible). Basically, after 7 days of proper structured thought and learning, my mind is extremely muddled and I don't know what Direct3D can and cannot do.

Here is my overall understanding of Direct3D so far:
1) You can use Direct3D to completely manage everything that gets displayed on your app's screen.
2) DirectX prevents tearing by back-buffer swapping.
3) D3D uses FVFs to describe vertex information and stores vertices in vertex buffers.
4) Vertex information can be of different types, such as diffuse color, position, normal info, texture coordinates, etc.
5) D3D renders in only one way: by successively rasterizing the triangles that make up a mesh.
6) It renders to a back buffer which is then displayed on screen.
7) You can set textures, lights, etc.

Now, some things I can't understand are surfaces. Is it possible to render directly to a surface??? My understanding right now is that all complex images drawn on screen are produced by multiple renders of the same scene with different information (basic lighting and texturing first; if multitexturing, the scene is rendered again with the other texture to a buffer and the two rendered scenes are merged together).

Also, what is the maximum level of effects possible using the fixed-function pipeline??? Can bump maps, cube maps, displacement maps, etc. be implemented using fixed function, or only through shaders, or both???

How do I use shaders in my program, and what can be achieved with them? Can you render two different components of the same scene (say the diffuse and ambient terms separately) to multiple render targets and blend the contents of the two buffers using the fixed-function pipeline?? If so, can this be output as the final frame to the front buffer?? Can such things be achieved with shaders???

How do I pass vertex information to shaders?? If there is a vertex format mismatch between the C++ program and a vertex shader (say the FVF in my program is D3DFVF_XYZ|D3DFVF_NORMAL and the shader expects D3DFVF_XYZ|D3DFVF_NORMAL|D3DFVF_TEX2), will there be any problems??? How do I know what kind of vertex format an .fx file expects??? (How does the DirectX Viewer load ANY .fx file??? Does it check the expected vertex format of the .fx file??) What is the difference between SetFVF() and SetVertexDeclaration(), and how do I use SetVertexDeclaration()???

Please do clear my confusion... I don't know where to go next or what to do. If Direct3D has any other features apart from the ones I've discovered so far, please do tell me...

You're asking A LOT of questions; I'll try to clear up a few of them.

You can think of the fixed-function pipeline as a set of pre-programmed shaders. The same way you can write a shader that lights your object, there is already functionality in the fixed-function pipeline that will do that for you. You can only implement the most basic lighting with the fixed pipeline, and you can't change anything but a few parameters like light color, position, and such. The fixed pipeline also handles a few things other than lighting (transforms, texturing, fog, and so on).
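For example, setting up a fixed-function directional light looks roughly like this (just a sketch, not code from this thread; the device pointer name is made up):

// Minimal fixed-function lighting sketch (Direct3D 9, #include <d3d9.h>).
// 'device' is assumed to be a valid IDirect3DDevice9*.
D3DLIGHT9 light;
ZeroMemory(&light, sizeof(light));
light.Type        = D3DLIGHT_DIRECTIONAL;   // simple directional light
light.Diffuse.r   = 1.0f;                    // light color (white)
light.Diffuse.g   = 1.0f;
light.Diffuse.b   = 1.0f;
light.Diffuse.a   = 1.0f;
light.Direction.x = 0.0f;                    // pointing straight down
light.Direction.y = -1.0f;
light.Direction.z = 0.0f;

device->SetLight(0, &light);                      // bind to light slot 0
device->LightEnable(0, TRUE);                     // turn the light on
device->SetRenderState(D3DRS_LIGHTING, TRUE);     // enable FFP lighting
device->SetRenderState(D3DRS_AMBIENT, 0x00202020); // small ambient term

That is about all the control you get: the light's type, color, direction/position, and a handful of render states.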

You can do anything the fixed-function pipeline does with shaders. And generally, if you are using shaders you won't use any fixed-function pipeline functions (although you can).

Think of shaders as the "next step" after fixed function. In the FFP (fixed-function pipeline) you can only change a few parameters; with shaders you write the code yourself.

Multitexturing can be done with the FFP, but I've never done it that way; it's much simpler to use a shader. It is not done by rendering the scene twice: you simply multiply the two texture colors on a per-pixel basis and draw the result. (It's basically about 5 lines of code in the high-level shader language.)
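(For completeness, the fixed-function route goes through texture stage states. A rough sketch, with made-up device and texture names, assuming the vertices carry one set of texture coordinates reused for both stages:)

// Fixed-function multitexturing sketch (Direct3D 9): modulate two textures.
// 'device', 'baseTex' and 'detailTex' are assumed to already exist.
device->SetTexture(0, baseTex);
device->SetTexture(1, detailTex);

// Stage 0: take the base texture color as-is.
device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

// Stage 1: multiply the detail texture with the result of stage 0.
device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
device->SetTextureStageState(1, D3DTSS_TEXCOORDINDEX, 0);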

How does DirectX know which parameters to use for shaders? It doesn't: you have to pass them manually, so you can't treat an .fx file as completely separate from the C++ code that drives it. (There are ways to parse the .fx file and set some parameters automatically, but it's best to do it yourself.) I just write a new class for each .fx I use and set its parameters inside.
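Passing the parameters manually looks roughly like this (a sketch; the names are examples and must match whatever your .fx actually declares):

// Sketch: feeding parameters to an already-created ID3DXEffect*.
// Assumes the .fx contains "float4x4 matViewProjection;", a technique
// called "Effect1", and a "texture DiffuseTex;" read by a sampler;
// 'viewProjMatrix' (D3DXMATRIX) and 'diffuseTexture' are placeholders.
effect->SetTechnique("Effect1");
effect->SetMatrix("matViewProjection", &viewProjMatrix);
effect->SetTexture("DiffuseTex", diffuseTexture);

Drawing with the effect then goes through Begin()/BeginPass()/EndPass()/End() on that same interface.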

When using shaders you don't use FVFs; you use vertex declarations. They're basically equivalent and you can translate one into the other.
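For example, the same position + normal + one-texcoord layout expressed both ways (a sketch; the device pointer name is made up):

// The classic FVF way:
DWORD fvf = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1;
device->SetFVF(fvf);

// The equivalent vertex declaration (what you use once shaders come into play).
// Offsets are in bytes: position (12) + normal (12) + texcoord (8).
D3DVERTEXELEMENT9 elements[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};
IDirect3DVertexDeclaration9* decl = NULL;
device->CreateVertexDeclaration(elements, &decl);
device->SetVertexDeclaration(decl);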

Thanks for the help, BearishSun... :) Any more replies would be welcome too.

Just now I tried to compile the code given at this link (after placing the basic.fx file in the appropriate place), but the CreateEffectFromFile function fails!!! (I placed the macro 'L' before every quoted string to compensate for the fact that I'm building in Unicode mode.)

http://www.xbdev.net/shaderx/fx/tutorials/dx_and_fx/index.php
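(For anyone debugging the same failure: a minimal sketch of the standard D3DXCreateEffectFromFile call with the compile-error buffer checked; the variable names are placeholders, and the error buffer usually tells you whether it is a path problem or an HLSL compile problem.)

// Sketch: loading an effect file in a Unicode build and printing compile errors.
// 'device' is assumed to be a valid IDirect3DDevice9* (#include <d3dx9.h>).
ID3DXEffect* effect = NULL;
ID3DXBuffer* errors = NULL;
HRESULT hr = D3DXCreateEffectFromFile(device, L"basic.fx",
                                      NULL, NULL, 0, NULL,
                                      &effect, &errors);
if (FAILED(hr) && errors != NULL)
{
    // The buffer holds a plain ANSI string describing what went wrong.
    OutputDebugStringA((const char*)errors->GetBufferPointer());
    errors->Release();
}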

Here's a very basic shader:


float4x4 matViewProjection : ViewProjection;

struct VS_OUTPUT
{
    float4 Pos  : POSITION;
    float2 Tex0 : TEXCOORD0;
};

VS_OUTPUT vs_main(float4 inPos : POSITION, float2 Tex0 : TEXCOORD0)
{
    VS_OUTPUT Output;
    Output.Pos  = mul(inPos, matViewProjection);
    Output.Tex0 = Tex0;
    return Output;
}

sampler Texture0;

float4 ps_main(float2 Tex0 : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(Texture0, Tex0);
    return color;
}

technique Effect1
{
    pass MyPass
    {
        VertexShader = compile vs_2_0 vs_main();
        PixelShader  = compile ps_2_0 ps_main();
    }
}


What this does is just render the object and texture it.

In order for it to work, your vertex declaration (similar to an FVF) must include at least the vertex position and texture coordinates, as seen on this line:

VS_OUTPUT vs_main(float4 inPos : POSITION, float2 Tex0 : TEXCOORD0)


This is the vertex shader entry function; it accepts those two parameters and returns the same two as well (as defined in the VS_OUTPUT struct).

The only thing this function does is transform the object-space vertex position into clip space using the view-projection matrix (matViewProjection).
It also passes the texture coordinates through unchanged for use in the pixel shader.

Once the vertex shader has run for the triangle's vertices, the pixel shader entry function is invoked for each covered pixel, and its input values are interpolated from the vertex shader outputs (the texture coordinates and the transformed position).

All your work in the pixel shader is done on one pixel at a time, using those interpolated values.

In the pixel shader you just sample the texture (Texture0) at the provided texture coordinates and set the color of the pixel.


In order for the shader to work properly you must set "matViewProjection" from your C++ code through the effect interface (ID3DXEffect::SetMatrix, or the generic SetValue), and also set the texture using Device->SetTexture(0, Texture).


Thanks again, but I'm still a bit confused... Say I have a very complex scene with different objects and different effects to apply to them (one has a metallic luster, one is supposed to look like glass, another has animated vertices such as rippling water, etc.). Then to render this scene I have to first compile all the required .fx files and store them in ID3DXEffect interfaces...

Then when I want to render, say, the metallic object, I set the corresponding textures, select the effect, set a technique, begin a pass, render the object, end the pass, end the effect, and then somewhere else I have to do all these steps again to render the rest of the objects one by one... and they are constantly layered over the previously rendered images in the back buffer (after z-buffer testing, of course). Is this the way Direct3D works?? Is there a way to render all objects at once?? Is this how commercial games render entire frames???


That's pretty much how it works. If you want the best performance, you sort your objects by how they are rendered, set the state for that group of objects, render them, set the state for the next group, and so on until you're out of objects. That's done every frame for your list of renderable objects. When you think about it, with all the other game processing going on, it's amazing that such a seemingly clunky rendering method still gives you good frame rates.
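In code, a frame ends up looking roughly like this (just a sketch; 'device', 'groups', and the object/effect fields are placeholder names, and the objects are assumed to be pre-sorted by effect):

// Per-frame rendering sketch (Direct3D 9 / D3DX effects).
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFF000000, 1.0f, 0);
device->BeginScene();

for (size_t g = 0; g < groups.size(); ++g)            // one group per .fx file
{
    ID3DXEffect* effect = groups[g].effect;
    UINT numPasses = 0;
    effect->Begin(&numPasses, 0);
    for (UINT p = 0; p < numPasses; ++p)
    {
        effect->BeginPass(p);
        for (size_t i = 0; i < groups[g].objects.size(); ++i)
        {
            Object& obj = groups[g].objects[i];
            // Per-object parameters: transform, textures, material values...
            effect->SetMatrix("matWorldViewProj", &obj.worldViewProj);
            effect->CommitChanges();                   // push changes made inside a pass
            obj.mesh->DrawSubset(0);                   // or DrawIndexedPrimitive on your buffers
        }
        effect->EndPass();
    }
    effect->End();
}

device->EndScene();
device->Present(NULL, NULL, NULL, NULL);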

Thanks a lot!!! :) Another question... Is it possible to extract the image stored in any buffer (back buffer, z-buffer, offscreen surface) and output it to the front buffer??? For example, if I wanted to view the entire z-buffer on screen, could I do it?? If so, how??

You can't access (read) the z-buffer directly in DirectX 9 or below. It can be done in DX10.

To get access to depth values in DX9, you have to render them explicitly to a floating-point texture. This can be done with a dedicated z-pass, or with multiple render targets as in a deferred pipeline.
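The host-side setup for such a z-pass might look roughly like this (a sketch, not from the posts above; it assumes a device pointer named 'device', known 'width'/'height', and a shader that writes depth out as its color):

// Sketch: render scene depth into a floating-point texture (Direct3D 9).
IDirect3DTexture9* depthTex  = NULL;
IDirect3DSurface9* depthSurf = NULL;
IDirect3DSurface9* backBuf   = NULL;

device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                      D3DFMT_R32F, D3DPOOL_DEFAULT, &depthTex, NULL);
depthTex->GetSurfaceLevel(0, &depthSurf);

device->GetRenderTarget(0, &backBuf);        // remember the normal back buffer
device->SetRenderTarget(0, depthSurf);       // z-pass: draw into the R32F texture
device->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, 0xFFFFFFFF, 1.0f, 0);
// ... render the scene here with a shader that outputs depth as its color ...

device->SetRenderTarget(0, backBuf);         // back to the normal back buffer
// ... draw a full-screen quad sampling 'depthTex' to visualize the depth ...

backBuf->Release();
depthSurf->Release();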
