OpenGL DX9 SDK Examples - Difference between shader and effect?

chipmeisterc    268
Hi guys! I have always been an OpenGL guy, but recently, after moving house and being left without an internet connection at home, I noticed the wealth of example apps and tutorials included with the SDK and figured I'd try my hand at some DirectX programming.

This has been pretty successful: I have written my own app which creates a DX9 device, takes keyboard and mouse input through DirectInput, and can load, texture, light and render models in my own format exported from Max, plus display on-screen text, so it's all coming along nicely.

Yesterday I decided to try to integrate some shaders into my app, so I had a look through the Basic HLSL sample (the 'tiny' model expanding and collapsing on a sine-wave vertex shader) and integrated it into my project after modularising it to make it reusable.

However, one thing I have noticed in the documentation is that there seems to be a difference between a 'shader' and an 'effect'. I am currently loading a .fx file, but I have noticed there are also .hlsl files. There is also a 'HLSL without Effects' sample (the wave vertex displacement on a flat plane) which uses a .hlsl file. This is slightly different from my previous experience with GLSL, where you just have a fragment and a vertex shader.

Can anyone shed any light on this? Thank you :)

chipmeisterc    268
Anyone?

After looking on the internet a little more, it would seem that a 'shader' is just a single vertex or fragment (pixel) program on its own.

Whereas an effect (.fx) uses the Microsoft effect framework, with which you can specify multiple techniques for differing PC configurations, as well as set up multiple render passes; basically it manages the different pipeline/shader states?

Am I on the right lines here? To make the question concrete, here is roughly my mental model of it, as a minimal .fx sketch (the function and variable names are just illustrative, not taken from the SDK sample):
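// gWorldViewProj would be set from the app, e.g. via ID3DXEffect::SetMatrix().
float4x4 gWorldViewProj;

struct VS_OUT
{
    float4 Pos   : POSITION;
    float4 Color : COLOR0;
};

VS_OUT RenderSceneVS(float4 pos : POSITION)
{
    VS_OUT o;
    o.Pos   = mul(pos, gWorldViewProj);   // transform to clip space
    o.Color = float4(1, 1, 1, 1);
    return o;
}

float4 RenderScenePS(VS_OUT i) : COLOR0
{
    return i.Color;
}

// A bare .hlsl file would stop at the functions above. The technique/pass
// block below is what the effect framework adds: it names a shader
// combination and bundles it together with pipeline state.
technique RenderScene
{
    pass P0
    {
        AlphaBlendEnable = FALSE;
        VertexShader     = compile vs_2_0 RenderSceneVS();
        PixelShader      = compile ps_2_0 RenderScenePS();
    }
}

So if I understand it, with a plain .hlsl file you compile the entry points yourself (D3DXCompileShaderFromFile and friends) and set all states by hand, whereas a .fx file is loaded as a whole through ID3DXEffect, which manages the shaders and states for you?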

Tom KQT    1704
Yep, you're right.
A common mistake is to think that the effect framework is just about vertex and pixel shaders plus some settings. But an "effect" is meant as a setup of the whole rendering process; shaders are just one part of it (and an optional part!).

You can make an effect which uses only the fixed pipeline, so no shaders at all. You can use it to set all the render states, texture samplers, transformation matrices, lights (you can completely specify the lights in the effect file, not just turn them on/off), materials (again, completely specified, not just referencing materials defined elsewhere), etc.
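For example, a fixed-function-only technique could look something like this (just a sketch; the state names come from the D3DX effect-state list, the values are made up):

texture gDiffuseTex;   // set from the app via ID3DXEffect::SetTexture()

technique FixedPipelineOnly
{
    pass P0
    {
        // No shaders at all - the fixed pipeline is used.
        VertexShader = NULL;
        PixelShader  = NULL;

        // Lights and material defined completely inside the effect.
        Lighting          = TRUE;
        LightEnable[0]    = TRUE;
        LightType[0]      = DIRECTIONAL;
        LightDirection[0] = float3(0.0, -1.0, 0.0);
        LightDiffuse[0]   = float4(1.0, 1.0, 1.0, 1.0);
        MaterialDiffuse   = float4(1.0, 0.8, 0.8, 1.0);

        // Texture stage setup, also just effect state.
        Texture[0]   = <gDiffuseTex>;
        ColorOp[0]   = MODULATE;
        ColorArg1[0] = TEXTURE;
        ColorArg2[0] = DIFFUSE;
    }
}

The app then only selects the technique and draws; every pipeline setting lives in the .fx file.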


