Emanuele Russo

OpenGL CG VERTEX SHADER ARRAY


Hello, I'm a newbie with shaders. I need an array of varying inputs per vertex, in particular 16 coefficients for Spherical Harmonics. The code would look like this:

////////////////////////////////////////////////////////////////
struct app2vertex
{
    float4 f4Position : POSITION;
    float3 f4Color    : COLOR;
    float  vfTransfer[16] : BLENDWEIGHT;
};

struct vertex2fragment
{
    float4 f4ProjPos : POSITION;
    float4 f4Color   : COLOR;
};

vertex2fragment dotproduct( app2vertex IN,
                            uniform float vfLight[16],
                            uniform float4x4 mxModelViewProj )
{
    vertex2fragment OUT;

    OUT.f4ProjPos = mul(mxModelViewProj, IN.f4Position);
    OUT.f4Color = float4(0.0f, 0.0f, 0.0f, 1.0f);

    float shad = 0;
    for (int i = 0; i < 16; i++)
    {
        shad = IN.vfTransfer[i] * vfLight[i];
        OUT.f4Color.r += shad;
        OUT.f4Color.g += shad;
        OUT.f4Color.b += shad;
    }

    return OUT;
}
/////////////////////////////////////////////////////

In particular, the input structure looks like this:

struct app2vertex
{
    float4 f4Position : POSITION;
    float3 f4Color    : COLOR;
    float  vfTransfer[16] : ??whatHere??;
};

Anyway, the cgc compiler seems to compile this, but my questions are:
Does it make sense?
If yes, how can I pass the vector to the shader from OpenGL?
If no, how could I get 16 coefficients per vertex to use in a dot product with the uniform parameter?

Thanks a lot. If the question is not clear, please don't ignore the post; just ask me to be clearer.
Cheers

Quote:
Original post by Emanuele Russo
Anyway, the cgc compiler seems to compile this, but my questions are:
Does it make sense?
If yes, how can I pass the vector to the shader from OpenGL?

If no, how could I get 16 coefficients per vertex to use in a dot product with the uniform parameter?


The problem comes down to how OpenGL handles "semantics" such as BLENDWEIGHT. If you are running OpenGL 2.0, no problem. You can use vertex attribute support. If you are running OpenGL 1.5 or earlier, it is not clear whether you can use OpenGL directly. I had posted something similar to this (to comp.graphics.api.opengl) about setting the source for tangent vectors when you do not have OpenGL 2.0, you do not have the GL_EXT_coordinate_frame extension available, and you do not use the Cg Runtime. No answer to that post...

With Cg Runtime, you can use cgGLSetParameterPointer to set the data source. The CGparameter input to this function is a handle you obtain by querying with functions such as cgGetEffectParameterBySemantic or cgGetNamedProgramParameter.
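For illustration, the call pattern looks roughly like this. This is only a sketch: vertexProgram, SH_VERTEX, the transfer member, and the parameter name are placeholders, and since cgGLSetParameterPointer feeds at most 4 components per vertex, a float[16] input would still have to be split into four float4 inputs (as in the TEXCOORD suggestion below).

// Sketch: bind one per-vertex float4 input through the Cg runtime.
CGparameter transferParam = cgGetNamedParameter(vertexProgram, "IN.vfTransfer0");

cgGLSetParameterPointer(transferParam,
                        4,                    // components per vertex (at most 4)
                        GL_FLOAT,             // component type
                        sizeof(SH_VERTEX),    // stride between consecutive vertices
                        &vertices[0].transfer[0]);
cgGLEnableClientState(transferParam);

// ... glDrawElements(...) ...

cgGLDisableClientState(transferParam);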

If you avoid the BLENDWEIGHT semantic, you can use TEXCOORD channels so that OpenGL 1.5 or earlier can do what you want. The following would work:

struct app2vertex
{
    float4 f4Position   : POSITION;
    float3 f4Color      : COLOR;
    float4 vfTransfer0  : TEXCOORD0; // your vfTransfer[0..3]
    float4 vfTransfer4  : TEXCOORD1; // your vfTransfer[4..7]
    float4 vfTransfer8  : TEXCOORD2; // your vfTransfer[8..11]
    float4 vfTransfer12 : TEXCOORD3; // your vfTransfer[12..15]
};
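On the application side you would then feed these through the standard texture coordinate arrays, something like the following sketch (SH_VERTEX and the transfer member are illustrative names for an interleaved vertex layout that stores the 16 coefficients contiguously):

// Sketch: route vfTransfer0..vfTransfer12 through texture units 0..3.
for (int unit = 0; unit < 4; ++unit)
{
    glClientActiveTexture(GL_TEXTURE0 + unit);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(4,                               // 4 coefficients per channel
                      GL_FLOAT,
                      sizeof(SH_VERTEX),               // stride between vertices
                      &vertices[0].transfer[4 * unit]);
}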

Hi, thank you for your hints.

Actually I program with OpenGL 2.0, but I tried with the array

float vfTransfer[16] : BLENDWEIGHT;

loading it with

glWeightPointerARB(16, bla bla)

but it gives an ENUM error for the 16.

Then I tried your suggestion with the 4 texcoords, and it seems to work.
I'll post again to confirm whether it really works. Now I have another stupid question...

I have to draw 2 geometries, a sphere and a box, and I do it this way:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
for (std::size_t i = 0; i < 2; ++i)
{
    SH_OBJECT * currentObject = shScene.objects[i];

    glVertexPointer(3, GL_FLOAT, sizeof(SH_VERTEX), &currentObject->vertices[0].position);
    glColorPointer(4, GL_FLOAT, sizeof(SH_VERTEX), &currentObject->vertices[0].diffuseMaterial);

    // HERE I WILL PUT TEXCOORD

    glDrawElements(GL_TRIANGLES, currentObject->indices.size(), GL_UNSIGNED_INT,
                   &currentObject->indices[0]);
    checkForCgError("disabling vertex profile");
}

glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);

But it renders only the second one! Maybe it doesn't know the depth... What should I enable, or do, to see both?

thanks
Emanuele

Another annoying problem...

Now I am trying to pass doubles in place of floats, and it does not seem to work.

I declared
uniform double vFligth[16];

and tried to load it with
cgSetParameterArray1d(blabla);

CG ERROR: The parameter is not of a numeric type.

Isn't it strange? It is to me; I hope it is not to you!
Is it a type support problem?

Thanks
Emanuele

Quote:
Original post by Emanuele Russo
glWeightPointerARB(16, bla bla)

but it gives an ENUM error for the 16.


This extension function requires that the first input (the number of weights per vertex) be 1, unless somehow your drivers support more than 1. You can query to find out how large this number can be.
GLint maxVertexUnits;
glGetIntegerv(GL_MAX_VERTEX_UNITS_ARB, &maxVertexUnits);

You mention you are using OpenGL 2.0. Skip the extension glWeightPointerARB and instead use generic vertex attributes via glVertexAttribPointer.
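Roughly like this, as a sketch only: the attribute locations and the SH_VERTEX layout are assumptions, and with Cg the attributes would have to be matched to the corresponding shader inputs (for example through ATTR semantics or the runtime).

// Sketch: feed the 16 coefficients as four generic float4 attributes.
for (int k = 0; k < 4; ++k)
{
    GLuint location = 1 + k;                 // assumed attribute locations
    glEnableVertexAttribArray(location);
    glVertexAttribPointer(location,
                          4,                 // four floats per attribute
                          GL_FLOAT,
                          GL_FALSE,          // no normalization
                          sizeof(SH_VERTEX), // stride between vertices
                          &vertices[0].transfer[4 * k]);
}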

Quote:

I've to draw 2 geometries, a sphere and a box and I do it this way:


There is a lot of OpenGL state you have not mentioned, so it is difficult to say why you do not see what is expected. It is just as difficult to suggest what you should do.

Quote:

Now I am trying to pass doubles in place of floats, and it does not seem to work.

I declared
uniform double vFligth[16];

and tried to load it with
cgSetParameterArray1d(blabla);

CG ERROR: The parameter is not of a numeric type.

Isn't it strange? It is to me; I hope it is not to you!
Is it a type support problem?


This is not strange at all. The graphics hardware does not support double precision input arrays.
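The usual workaround is to declare the array as float in the shader and convert on the CPU before uploading, for example (a sketch only; the variable names are illustrative, and the use of cgGLSetParameterArray1f assumes the Cg GL runtime):

// Shader side:      uniform float vfLight[16];
// Application side: convert the double-precision SH coefficients to float.
float lightCoeffs[16];
for (int i = 0; i < 16; ++i)
    lightCoeffs[i] = static_cast<float>(lightCoeffsDouble[i]);

cgGLSetParameterArray1f(lightParam, 0, 16, lightCoeffs);  // offset 0, 16 elements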

OK then, everything is solved.
Passing the 4 float4 texcoords, it worked perfectly... and now I see both geometries; it was just a stupid error.

I didn't know about the double-precision limitation... forgive my ignorance.

Now all I have to do is insert this stuff into Wild Magic...

I didn't notice that you were Dave Eberly... if you post on this forum, then I can stop bothering you by email!

Well, thank you again!
Regards, Emanuele

Quote:
Original post by Emanuele Russo
OK then, everything is solved.
Passing the 4 float4 texcoords, it worked perfectly... and now I see both geometries; it was just a stupid error.

I didn't know about the double-precision limitation... forgive my ignorance.

Now all I have to do is insert this stuff into Wild Magic...

I didn't notice that you were Dave Eberly... if you post on this forum, then I can stop bothering you by email!



I already responded by email that my parser happens to handle arrays of uniforms, and I posted an example at my 3DGED3 book correction page. So all that you are currently doing should work in Wild Magic.

Regarding posting here about Wild Magic, please communicate instead by email. Folks here are interested in SDL, Ogre, and other packages. It is better not to "borrow" this forum for obtaining my technical support :)

