OpenGL Tessellation

martyj2009

I'm having a hard time getting OpenGL tessellation to work. I've been struggling with it for about a month now. I've looked at a lot of example code, and what I'm doing doesn't seem much different from what they do. I was wondering if anyone could be a second pair of eyes and point out why it isn't working.

My scene is only rendering a black screen.

My shaders are as follows:

 

Vertex Shader

#version 400
#extension GL_ARB_tessellation_shader: enable
#extension GL_ARB_separate_shader_objects: enable

layout(location = 0) in vec3 vertexPosition;

void main()
{
    gl_Position = vec4(vertexPosition, 1.0);
}

Tessellation Control Shader

#version 400
#extension GL_ARB_separate_shader_objects: enable

layout(vertices = 3) out;

void main()
{
    float inLevel = 2;
    float outLevel = 2;
    
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    
    if (gl_InvocationID == 0)
    {
        gl_TessLevelOuter[0] = outLevel;
        gl_TessLevelOuter[1] = outLevel;
        gl_TessLevelOuter[2] = outLevel;
        gl_TessLevelOuter[3] = outLevel;

        gl_TessLevelInner[0] = inLevel;
        gl_TessLevelInner[1] = inLevel;
    }
}

Tessellation Eval Shader

#version 400
#extension GL_ARB_tessellation_shader: enable
#extension GL_ARB_separate_shader_objects: enable

layout(triangles, equal_spacing, ccw) in;

uniform mat4 uMVPMatrix;

void main()
{
    vec4 p0 = gl_TessCoord.x * gl_in[0].gl_Position;
    vec4 p1 = gl_TessCoord.y * gl_in[1].gl_Position;
    vec4 p2 = gl_TessCoord.z * gl_in[2].gl_Position;

    vec4 newCoord = normalize(p0 + p1 + p2);
    gl_Position = newCoord;
}

Geometry Shader

#version 400
#extension GL_ARB_separate_shader_objects: enable

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform mat4 uMVPMatrix;

uniform bool UseWaveGeometry;
uniform float GameTime;

layout(location = 3) out vec4 vColor;

float rand(vec2 n)
{
    return 0.5 + 0.5 * fract(sin(dot(n.xy, vec2(12.9898, 78.233)))* 43758.5453);
}

void setUpValues(int index)
{
    vec4 vertPosit = gl_in[index].gl_Position;
    
    if(UseWaveGeometry)
    {
        vertPosit.y = vertPosit.y + (GameTime + ((cos(vertPosit.x)) * GameTime) - ((sin(vertPosit.x))*GameTime)*2.2);
    }
    
    float cval = int(vertPosit.x) % 2 + int(vertPosit.y)%2;
    vColor = vec4(cval, cval, cval, 1.0);

    gl_Position = uMVPMatrix * vertPosit;
}

void main()
{
    for(int i = 0; i < 3; i++)
    {
        setUpValues(i);
        EmitVertex();
    }
    EndPrimitive();
}

Fragment Shader

#version 400
#extension GL_EXT_gpu_shader4: enable
#extension GL_ARB_separate_shader_objects: enable

uniform float ColorAlpha;

layout(location = 3) in vec4 vColor;

layout(location = 0) out vec4 FragColor;

void main()
{
    vec4 color = vec4(vColor.r, vColor.g, vColor.b, ColorAlpha);
    
    FragColor = color;
}

Thank you for your time,
Marty

Vilem Otte
First of all, I assume that rendering a single triangle with only a VS and FS works (i.e. you don't have issues calling glDrawArrays/glDrawElements).
 
Some points that probably cause issues:
1.) Names with the out qualifier in one shader should correspond to names with the in qualifier in the shader for the next stage, at least unless you use layout qualifiers and the separate shader objects extension, e.g.:
// Standard way of passing "varyings" from one shader to another. The GLSL compiler does pattern
// matching between their names and connects them.
[Vertex Shader]
out vec3 vsVertexPosition;
out vec3 vsVertexNormal;

[Geometry Shader]
in vec3 vsVertexPosition[];
in vec3 vsVertexNormal[];

// Another possible way, using ARB_separate_shader_objects (core since OpenGL 4.1) -
// this one explicitly says which "varyings" link together. Note that with this
// approach the corresponding variables can have different names; otherwise they can't.
[Vertex Shader]
layout(location = 0) out vec3 position;
layout(location = 1) out vec3 normal;

[Geometry Shader]
layout(location = 0) in vec3 pos;
layout(location = 1) in vec3 n;

// But this is NOT possible; the GLSL compiler can't tell which variables should be
// connected together.
[Vertex Shader]
out vec3 pos;
out vec3 normal;

[Geometry Shader]
in vec3 mPosition;
in vec3 mNormal;
2.) The version directive should be the same in all shaders! On AMD a mismatch can result in an error and compilation will fail, so the shaders won't run.

3.) Don't mix gl_in/gl_out and in/out. This is bad practice and results in (very) messy shaders.

4.) Tessellation shaders should have
#extension GL_ARB_tessellation_shader : enable
These are just a few points from quickly going through your code. I can post the shader code for a very basic tessellation example that works, if you can't get yours going. It really is a very basic example that needs just two parameters: the view and projection matrices.

martyj2009

Thank you for the reply. Sorry for the late response; I read your message a few days ago.

The reason I didn't use the layout location qualifiers is that I dynamically get each location in my C code by the attribute's name, and I keep the variables named the same across stages. It would make for easier-to-understand shader code if I used them as well, though; that way you can see at a glance which output matches up with which input.
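For reference, the name-based lookup described above is typically done after linking with glGetAttribLocation / glGetUniformLocation. A minimal sketch, assuming a linked program object named `program` (the variable names here are illustrative, not from my actual code base):

```cpp
// Query locations by name after glLinkProgram. Both calls return -1 if the
// name is not an active attribute/uniform (e.g. it was optimized away).
GLint posLoc = glGetAttribLocation(program, "vertexPosition");
GLint mvpLoc = glGetUniformLocation(program, "uMVPMatrix");

// Note: in/out varyings *between* stages cannot be queried this way; the
// linker matches them by identical names (or by explicit layout locations).
```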

 

I will implement this and post my results.

 

Thank you very much for the feedback. It is greatly appreciated.

 

Edit

--------------------------------------------

 

I modified my previous post with the updated shaders. I still get the same issues with tessellation.

martyj2009

Anyone have any idea? Anything at all I could try? Any feedback would be greatly appreciated.

 

 

Edit:

---------------------------

I simplified the code. I still get a black screen when the tessellation shaders are enabled. The scene renders fine with them commented out.

marcClintDion

I ran all the shaders through AMD's GPU ShaderAnalyzer, and shaders 2, 3, and 5 all reported errors. Most notably, shader 2 is using variables that were not declared.

 

Also, there is usually a big problem with shader tech that is very new: a lot of it is not tested against multiple GPUs, and it can take a while for the GPU manufacturers to play catch-up with one another. You will often run into compatibility issues if you have a different brand of GPU than what the code was tested on.

 

Here's another site you can cross-reference against.

 

http://prideout.net/blog/?p=48

 

ShaderAnalyzer only has a problem with the tess eval shader in that example.

martyj2009

I have been using that reference as a tessellation example. 

 

The biggest problem is that my code base is so huge. I have texture arrays for height maps, FBOs for shadows, reflections, and a bunch of other stuff, which complicates the code slightly.

 

I will check out the AMD GPU ShaderAnalyzer. Thanks for the info; I will post an update when I test it.

martyj2009

I finally figured out my issue.

 

It wasn't the shaders at all. My problem with tessellation was that I was drawing my objects using GL_TRIANGLES. Tessellation only works on GL_PATCHES.
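For anyone hitting the same wall: the fix is on the application side, not in the shaders. A minimal sketch of the draw-call change (`meshVAO` and `indexCount` are placeholders for whatever your renderer uses):

```cpp
// Tell OpenGL how many vertices make up one patch. The TCS declares
// layout(vertices = 3) out, so each patch here is one triangle's 3 vertices.
glPatchParameteri(GL_PATCH_VERTICES, 3);

// Draw with GL_PATCHES instead of GL_TRIANGLES. With GL_TRIANGLES the
// tessellation stages are never fed patch primitives, so a program that
// contains a TCS/TES draws nothing (hence the black screen).
glBindVertexArray(meshVAO);
glDrawElements(GL_PATCHES, indexCount, GL_UNSIGNED_INT, 0);
```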
