// vertex shader
attribute vec4 position;
...
First, in my C++ code, I use:
//C++
GLint position_id = glGetAttribLocation(shader_program,"position");
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glEnableVertexAttribArray(position_id);
glVertexAttribPointer(position_id, 4, GL_FLOAT, GL_FALSE, 0, 0); // 4 components per vertex (vec4), tightly packed
Easy.
However, passing data back into a buffer from within a shader is not clear to me.
In this tutorial for rendering a scene to a texture, they create a color variable:
//fragment shader
layout(location = 0) out vec3 color;
That is rendered to a framebuffer with an attached texture. The following OpenGL call:
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, renderedTexture, 0);
attaches the texture 'renderedTexture' to an FBO, and presumably GL_COLOR_ATTACHMENT0 corresponds to the 'layout(location = 0)' qualifier in the shader.
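For context, the FBO setup in that tutorial looks roughly like this (reproduced from memory, so the texture dimensions and details may be off; it notably also calls glDrawBuffers, which I assume is somehow part of the mapping):

```cpp
// Sketch of the render-to-texture FBO setup. Requires a current
// OpenGL context; names follow the tutorial as best I recall.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Create an empty texture to render into.
GLuint renderedTexture;
glGenTextures(1, &renderedTexture);
glBindTexture(GL_TEXTURE_2D, renderedTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0,
             GL_RGB, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach the texture as color attachment 0.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                     renderedTexture, 0);

// Set the list of draw buffers for this FBO.
GLenum drawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, drawBuffers);
```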
I guess that makes sense. But later in this shadow mapping tutorial, they render only the depth attribute to the FBO using GL_DEPTH_ATTACHMENT.
//C++
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthTexture, 0);
// fragment shader
layout(location = 0) out float fragmentdepth;
Now I don't see how the ATTACHMENT parameter and the layout location value correspond. How does OpenGL know to render the 'fragmentdepth' value into depthTexture? And what if you had multiple framebuffer objects and wanted to output color to one and depth to another? How would you set the layout location values then?
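To make the question concrete, here is a sketch of the kind of setup I'm imagining, with two shader outputs. All the names here (colorTexture, depthValueTexture) are hypothetical, and I'm only guessing that glDrawBuffers is what ties locations to attachments:

```cpp
// Hypothetical fragment shader with two outputs:
//   layout(location = 0) out vec3 color;
//   layout(location = 1) out float depthValue;

// C++ side: attach two textures to the currently bound FBO.
// colorTexture and depthValueTexture are assumed to already exist.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                     colorTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                     depthValueTexture, 0);

// Is it this call that maps location 0 -> GL_COLOR_ATTACHMENT0
// and location 1 -> GL_COLOR_ATTACHMENT1?
GLenum drawBuffers[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, drawBuffers);
```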