bartman3000

Member
  • Content count

    10
Community Reputation

149 Neutral

1 Follower

About bartman3000

  • Rank
    Member

Personal Information

  • Interests
    Programming


  1. I did not profile the code first; that would have been a smart idea. Having said that, this situation has become a curiosity experiment more than anything. I'm familiar with the feedback loop caused by reading from and writing to the same attachment, but I figured that if I read from one and wrote to another it would be okay... guess not. I suppose the real lesson here is: don't do anything that isn't explicitly specified in the spec.
  2. Yup, that's exactly what I was trying to optimize: reducing the switches between bound FBOs. I wasn't sure how much overhead was associated with the bind call, so I figured I'd throw together a quick test to benchmark it and see if there's any difference. But of course my "quick test" ended up taking way longer than expected, and now I'm curious whether this is even possible. I tried looking at the spec and couldn't find anything on the subject.
  3. Consider the following situation:
     - We have an FBO with two identical color attachments.
     - Bind shader program 1 and render an object to FBO attachment 0.
     - Bind the texture on attachment 0 for sampling.
     - Bind shader program 2 and draw a full-screen quad. In the fragment shader we sample from the texture on attachment 0 and write its value to the texture on attachment 1.
     Can framebuffer objects be used in this way? The reason I'm considering this is to reduce the number of FBOs I create; I'm experimenting to see if I can perform all of my rendering passes with a single FBO equipped with multiple attachments. In my current implementation this setup does not seem to work as expected, so I'm trying to determine whether there's a problem with my implementation or whether this is even possible. Any insight would be appreciated!
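     For concreteness, here's roughly the setup I'm describing as a minimal sketch (GL 4.5 DSA-style calls; the texture size/format, the program handles, and the draw helpers are placeholders rather than code from my actual implementation):

     // width, height, prog1, prog2, drawObject() and drawFullScreenQuad() are assumed to exist.
     GLuint fbo, tex[2];
     glCreateFramebuffers(1, &fbo);
     glCreateTextures(GL_TEXTURE_2D, 2, tex);
     for (int i = 0; i < 2; ++i)
     {
         glTextureStorage2D(tex[i], 1, GL_RGBA8, width, height); // two identical color attachments
         glNamedFramebufferTexture(fbo, GL_COLOR_ATTACHMENT0 + i, tex[i], 0);
     }

     // Pass 1: render the object into attachment 0.
     glBindFramebuffer(GL_FRAMEBUFFER, fbo);
     glNamedFramebufferDrawBuffer(fbo, GL_COLOR_ATTACHMENT0);
     glUseProgram(prog1);
     drawObject();

     // Pass 2: sample attachment 0 while writing to attachment 1.
     // Note that the texture on attachment 0 is still attached to the currently
     // bound draw framebuffer here, which is where the feedback-loop question comes in.
     glNamedFramebufferDrawBuffer(fbo, GL_COLOR_ATTACHMENT1);
     glBindTextureUnit(0, tex[0]);
     glUseProgram(prog2);
     drawFullScreenQuad();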
  4. bartman3000

    OpenGL 4.5 - create buffer

    Here's an example of using an index buffer. I'd recommend using the website docs.gl, which lets you filter functions by GL version.

    GLuint vao, buffer, indexBuffer;

    Vertex object[] = {
        Vertex(glm::vec3( 0.5f,  0.5f, 0.0f), glm::vec4(1.0f, 0.0f, 0.0f, 1.0f)),
        Vertex(glm::vec3( 0.5f, -0.5f, 0.0f), glm::vec4(0.0f, 1.0f, 0.0f, 1.0f)),
        Vertex(glm::vec3(-0.5f, -0.5f, 0.0f), glm::vec4(0.0f, 0.0f, 1.0f, 1.0f)),
        Vertex(glm::vec3(-0.5f,  0.5f, 0.0f), glm::vec4(0.0f, 0.0f, 1.0f, 1.0f))
    };
    unsigned int indices[] = { 0, 1, 2, 3, 0, 2 };

    // Init
    glCreateVertexArrays(1, &vao);
    glCreateBuffers(1, &buffer);
    glNamedBufferStorage(buffer, sizeof(object), object, GL_MAP_READ_BIT); // GL_STATIC_DRAW isn't a valid param

    // Position
    glEnableVertexArrayAttrib(vao, 0);
    glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribBinding(vao, 0, 0);

    // Color
    glEnableVertexArrayAttrib(vao, 1);
    glVertexArrayAttribFormat(vao, 1, 4, GL_FLOAT, GL_FALSE, sizeof(glm::vec3)); // Relative offset is the size in bytes up to the first "color" attribute
    glVertexArrayAttribBinding(vao, 1, 0);

    glVertexArrayVertexBuffer(vao, 0, buffer, 0, sizeof(Vertex)); // The stride is the number of bytes between each "Vertex"

    // Create index buffer
    glCreateBuffers(1, &indexBuffer);
    glNamedBufferStorage(indexBuffer, sizeof(indices), indices, GL_MAP_READ_BIT);
    glVertexArrayElementBuffer(vao, indexBuffer);

    // Draw
    glBindVertexArray(vao);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0); // Second parameter is the number of indices
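    For reference, the example above assumes a Vertex type along these lines; the exact struct isn't shown in the thread, so treat this as a sketch of the assumed layout (it matches the relative offsets of 0 and sizeof(glm::vec3) used above):

    #include <glm/glm.hpp>

    // Tightly packed position followed by color.
    struct Vertex
    {
        Vertex(const glm::vec3& p, const glm::vec4& c) : position(p), color(c) {}
        glm::vec3 position;
        glm::vec4 color;
    };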
  5. bartman3000

    OpenGL 4.5 - create buffer

    Try this out. Note: this assumes you have a vertex shader set up to read from attribute locations 0 and 1 for vertex positions and color.

    EDIT: If you want to strictly use GL 4.5 functions only, you can remove the call to "glBindVertexArray" and use "glEnableVertexArrayAttrib" instead of "glEnableVertexAttribArray".

    GLuint vao, buffer;
    Vertex object[] = {
        Vertex(glm::vec3( 0.5f,  0.5f, 0.0f), glm::vec4(0.0f, 1.0f, 0.0f, 1.0f)),
        Vertex(glm::vec3( 0.5f, -0.5f, 0.0f), glm::vec4(1.0f, 0.0f, 0.0f, 1.0f)),
        Vertex(glm::vec3(-0.5f, -0.5f, 0.0f), glm::vec4(1.0f, 0.0f, 0.0f, 1.0f))
    };

    // Init
    glCreateVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glCreateBuffers(1, &buffer);
    glNamedBufferStorage(buffer, sizeof(object), object, GL_MAP_READ_BIT); // GL_STATIC_DRAW isn't a valid param

    // Position
    glEnableVertexAttribArray(0);
    glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);
    glVertexArrayAttribBinding(vao, 0, 0);

    // Color
    glEnableVertexAttribArray(1);
    glVertexArrayAttribBinding(vao, 1, 0);
    glVertexArrayAttribFormat(vao, 1, 4, GL_FLOAT, GL_FALSE, sizeof(glm::vec3)); // Relative offset is the size in bytes up to the first "color" attribute

    glVertexArrayVertexBuffer(vao, 0, buffer, 0, sizeof(Vertex)); // The stride is the number of bytes between each "Vertex"
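    If it helps, a vertex shader matching those two attribute locations could look like the following; this is purely illustrative (written inline as a C++ string literal) and not the shader from the thread:

    // Minimal vertex shader reading position from location 0 and color from location 1.
    const char* vertexShaderSrc = R"GLSL(
        #version 450
        layout(location = 0) in vec3 inPosition;
        layout(location = 1) in vec4 inColor;
        out vec4 vColor;
        void main()
        {
            vColor = inColor;
            gl_Position = vec4(inPosition, 1.0);
        }
    )GLSL";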
  6. bartman3000

    Implementing a Cube Map Lookup Function

    Unfortunately I don't have access to an AMD card. That was a great article, thanks for the link! On an unrelated note (I should probably make a new thread for this), any idea why the derivative map idea hasn't caught on? It seems like a gift from the gods based on that article.
  7. bartman3000

    Implementing a Cube Map Lookup Function

    Oh sorry, didn't notice your edit. I see, it "normalizes" the UV itself. mind == blown. Thanks so much!
  8. bartman3000

    Implementing a Cube Map Lookup Function

    Damn, that is clever. It seems like it just checks which direction the ray points in the most to figure out the face index. But in order for the UV calculation to work, the ray has to be normalized, right? Here is my attempt to answer that question:

    // Super simple cube map fragment shader written in GLSL
    #version 400
    uniform samplerCube cube;
    layout(location = 0) out vec4 FragColor;
    void main()
    {
        FragColor = texture(cube, vec3(0, 1, 0));
    }

    This shader generates the following assembly:

    !!NVfp5.0
    OPTION NV_gpu_program_fp64;
    OPTION NV_bindless_texture;
    PARAM c[1] = { program.local[0] };
    LONG TEMP D0;
    OUTPUT result_color0 = result.color;
    PK64.U D0.x, c[0];
    TEX.F result_color0, {0, 1, 0, 0}.xyxw, handle(D0.x), CUBE;
    END

    I was expecting the black-magic math you linked to show up in the assembly, but it seems to be contained within the TEX instruction. Do you know how I can find out more about how the texture lookup instruction is implemented? I skimmed the documentation for NV_gpu_program5 but couldn't find anything. Thanks!
  9. Suppose I did not want to use the cube mapping functionality that is built into a graphics API (i.e. samplerCube in GLSL) and wanted to implement my own cube mapping. How would I go about doing this? Note that I do not actually plan on doing this for any practical reason; I just want to understand what's happening behind the scenes.
     Here is my guess: the lookup coordinate used with a cube map sampler is a direction vector from the center of the cube, pointing in the direction you want to sample from. Using this vector, I would perform a ray-plane intersection test against each of the 6 faces of the cube to figure out which face the ray intersects, and then use the point of intersection on that face as the 2D texture coordinate.
     Is this how graphics APIs perform cube mapping, or do they have a more clever way of doing it? Interestingly, the OpenGL wiki says the lookup vector does not need to be normalized. Does this mean the vector gets normalized by GLSL internally, or do they use a completely different method from what I described? Thanks
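     (For reference: the major-axis scheme discussed in the replies above - pick the component with the largest magnitude to choose the face, then divide the other two components by it - can be sketched in C++ with glm as below. Because everything is divided by the major axis, the input length cancels out, which is why the lookup vector doesn't need to be normalized. The exact sign conventions follow my reading of the GL spec's cube-map selection table, so treat them as an assumption.)

     #include <glm/glm.hpp>

     // Map a (not necessarily normalized) direction to a cube face index and a 2D UV.
     // Face order follows GL_TEXTURE_CUBE_MAP_POSITIVE_X + face: +X, -X, +Y, -Y, +Z, -Z.
     void cubeMapLookup(const glm::vec3& r, int& face, glm::vec2& uv)
     {
         glm::vec3 a = glm::abs(r);
         float ma, sc, tc; // major-axis magnitude and the two face-plane coordinates

         if (a.x >= a.y && a.x >= a.z)       // +X or -X face
         {
             ma = a.x;
             face = (r.x > 0.0f) ? 0 : 1;
             sc = (r.x > 0.0f) ? -r.z : r.z;
             tc = -r.y;
         }
         else if (a.y >= a.z)                // +Y or -Y face
         {
             ma = a.y;
             face = (r.y > 0.0f) ? 2 : 3;
             sc = r.x;
             tc = (r.y > 0.0f) ? r.z : -r.z;
         }
         else                                // +Z or -Z face
         {
             ma = a.z;
             face = (r.z > 0.0f) ? 4 : 5;
             sc = (r.z > 0.0f) ? r.x : -r.x;
             tc = -r.y;
         }

         uv = glm::vec2(0.5f * (sc / ma + 1.0f), 0.5f * (tc / ma + 1.0f));
     }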
  10. bartman3000

    Lighting question in GLSL

    The modelview matrix is actually two transformations concatenated into one: the model transformation, which places your object in the world, and the view transformation, which puts the world-space object relative to the camera. The view matrix is often created using a "look at" function, like gluLookAt for example. If you want to do your lighting in eye space, you'll need to use that view matrix to transform the light's position into eye space. You could send the view matrix to GLSL as a uniform and do the transformation in the vertex shader, but if you think about it, the view matrix and light position are going to be the same for all of the vertices, aren't they? So what you could do instead is transform the light position into eye space just once on the CPU side and then send the transformed position as a uniform. Either way will work, but the latter saves you a per-vertex matrix multiplication.
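    For example, something along these lines using glm (the camera parameters and the uniform location are placeholders):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp> // glm::lookAt
    #include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

    // Build the view matrix; cameraPos and cameraTarget are assumed to exist.
    glm::mat4 view = glm::lookAt(cameraPos, cameraTarget, glm::vec3(0.0f, 1.0f, 0.0f));

    // Transform the light's world-space position into eye space once, CPU side.
    glm::vec3 lightPosWorld(-100.0f, 0.0f, 0.0f);
    glm::vec3 lightPosEye = glm::vec3(view * glm::vec4(lightPosWorld, 1.0f));

    // Upload the already-transformed position; the vertex shader no longer needs
    // to multiply the light position by a matrix for every vertex.
    glUniform3fv(lightPosEyeLocation, 1, glm::value_ptr(lightPosEye));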
  11. bartman3000

    Lighting question in GLSL

    Awesome! Just note that this calculation will actually place your light at (-100,0,0) in the local space of whatever object you are currently rendering. If you want the light to be located at (-100,0,0) in world space, so that it has the same position for all objects in your scene, then you want: LightPosEyeSpace = view matrix * LightPosWorldSpace
  12. bartman3000

    Lighting question in GLSL

    Try: lightPos = (gl_ModelViewMatrix * vec4(-100.0, -0.0, 0.0, 1.0)).xyz;