calioranged

Member

  • Content Count: 17
  • Community Reputation: 4 Neutral
  • Rank: Member
  • Interests: Design, Education, Programming

  1. calioranged

    Mipmapping

    Thanks Steve_Segreto and Batzer.
  2. calioranged

    Index Buffer

    0xnullptr: Ah yes, I see now. This was along the lines of what I was thinking, but I am grateful for your confirmation! Also, thanks to all the other contributors.
  3. calioranged

    Index Buffer

    Thanks Zakwayda.
  4. calioranged

    Index Buffer

    Yes, I understand how array indexing works. My question relates to how the index buffer communicates with the vertex buffer. How does the index buffer know that the unsigned short integers it has been provided with represent the floats inside the vertex buffer? This may be a poor example, but hopefully it will illustrate my confusion:

        std::vector<std::pair<float, float>> Indices;
        Indices.reserve(4);
        Indices.emplace_back(std::make_pair(1.0F, 0.0F)); // Index 0 = 1.0F,0.0F
        Indices.emplace_back(std::make_pair(1.0F, 1.0F)); // Index 1 = 1.0F,1.0F
        Indices.emplace_back(std::make_pair(0.0F, 1.0F)); // Index 2 = 0.0F,1.0F
        Indices.emplace_back(std::make_pair(0.0F, 0.0F)); // Index 3 = 0.0F,0.0F

    With the above it is easy to see which vertex corresponds to which index (obviously this doesn't deal with the vertex or index buffers, but it explicitly defines which vertex corresponds to which index). Whereas in the code I originally posted, there is no moment at which OpenGL is told that a given index corresponds to a given vertex. Hopefully this is easier to understand.
  5. calioranged

    Index Buffer

    Sorry, it is kind of difficult to articulate. ^ But why? Why would vertices[0] automatically equal (1,0), and vertices[1] equal (1,1)? It's easy to see that this is indeed what happens, but I am unsure at what point OpenGL becomes aware that a particular index refers to the coordinates of a particular vertex.
  6. calioranged

    Index Buffer

    Yes, sorry about that, I will change it now. EDIT: It doesn't appear that I can change it now, but yes, I meant element 2, not element 3.
  7. calioranged

    OpenGL Index Buffer

    How does OpenGL know what data the elements '0,1,2, 2,3,0' refer to in the index buffer? There doesn't appear to be any explicit communication between the vertex buffer and the index buffer, where OpenGL is told that element 0 of the index buffer refers to (1.0F,0.0F) in the vertex buffer, element 1 refers to (1.0F,1.0F), element 3 (0.0F,1.0F) etc... How, and at what point, does OpenGL 'understand' which elements of the index buffer refer to which vertex of the vertex buffer?

        float vertices[] =
        {
            1.0F,0.0F,
            1.0F,1.0F,
            0.0F,1.0F,
            0.0F,0.0F
        };

        unsigned short int indices[] =
        {
            0,1,2,
            2,3,0
        };

        unsigned int VB, IB;
        glGenBuffers(1, &VB);
        glBindBuffer(GL_ARRAY_BUFFER, VB);
        glNamedBufferData(VB, sizeof(vertices), vertices, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);

        glGenBuffers(1, &IB);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, IB);
        glNamedBufferData(IB, sizeof(indices), indices, GL_STATIC_DRAW);

        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
  8. calioranged

    Normalising Ranges to -1 to 1

    glm::mat4 proj = glm::ortho(0.0F, 960.0F, 0.0F, 540.0F, -1.0F, 1.0F);
        glm::vec4 vp(100.0F, 100.0F, 0.0F, 1.0F);
        glm::vec4 result = proj * vp;

    After multiplying the above matrix by the vector of x,y,z,w coordinates, I can now see how the normalisation works:

        X: ((2/(960-0))*100) + (0*100) + (0*0) + ((-(960+0)/(960-0))*1) = -0.7916666667
        Y: (0*100) + ((2/(540-0))*100) + (0*0) + ((-(540+0)/(540-0))*1) = -0.6296296296
        Z: (0*100) + (0*100) + ((-2/(1-(-1)))*0) + ((-(1+(-1))/(1-(-1)))*1) = 0.0
        W: (0*100) + (0*100) + (0*0) + (1*1) = 1.0

    But how are the matrix elements decided in the first place? For example, why is 2/(right-left) in the top left corner, and why is -(right+left)/(right-left) in the top right corner? Is there a computation beforehand that determines where these matrix elements are placed?
  9. calioranged

    Normalising Ranges to -1 to 1

    Thanks a lot.
  10. calioranged

    Normalising Ranges to -1 to 1

    This stuff is so hard to get a grasp of as a beginner. I haven't yet managed to understand any of the tutorials I have been following, including that one. What do the two array operators, [i][i], represent? And why is there no [3][3]?
  11. calioranged

    Normalising Ranges to -1 to 1

        glm::mat4 proj = glm::ortho(0.0F, 960.0F, 0.0F, 540.0F, -1.0F, 1.0F);
        glm::vec4 vp(100.0F, 100.0F, 0.0F, 1.0F);
        glm::vec4 result = proj * vp;

    How exactly are these calculations performed? What is the actual equation?
  12. calioranged

    OpenGL Shader Queries:

    And if there is more than one out variable in the fragment shader? EDIT: Actually I think the page you have linked addresses that follow up question. Thanks.
  13. calioranged

    OpenGL Shader Queries:

    *** Beginner question - some terminology may not be correct ***

    I will relay my understanding of OpenGL shaders in this post in the hope that members can confirm the areas where I am correct and redress the areas where my understanding falls short. Firstly, here is my shader:

        #shader vertex
        #version 330 core

        layout(location = 0) in vec4 position;
        layout(location = 1) in vec2 tex_coord;

        out vec2 v_tex_coord;

        void main()
        {
            gl_Position = position;
            v_tex_coord = tex_coord;
        };

        #shader fragment
        #version 330 core

        layout(location = 0) out vec4 colour;

        in vec2 v_tex_coord;
        uniform sampler2D u_Texture;

        void main()
        {
            vec4 tex_colour = texture(u_Texture, v_tex_coord);
            colour = tex_colour;
        };

    First Query: I have been following a tutorial on shaders and have replicated the code used in the tutorial. My program renders a square, onto which a texture (image) is placed. I have bound the vertex position coordinates to the first attribute index of the vertex array, and the texture coordinates to the second attribute index of the vertex array *1*. The program works without any issues, but there are some aspects which have me confused, and I therefore seek clarification from forum members. The vertex shader features the below lines:

        layout(location = 0) in vec4 position;
        layout(location = 1) in vec2 tex_coord;

    From my understanding, these lines are essentially saying: "take the layout of attribute 0 and place it in the 'position' variable" and "take the layout of attribute 1 and place it in the 'tex_coord' variable". This makes sense to me, since the vertex shader determines the position of each vertex on the screen. However, in the fragment shader, I am more dubious:

        layout(location = 0) out vec4 colour;

    If the lines in the previous snippet are saying "take the layout of attribute x and place it in the 'y' variable", then what exactly is the above line saying? Specifically, why is the layout from the position attribute (attribute 0) being used and not the texture attribute (attribute 1)?

    Second Query: I understand that the input variable in the fragment shader (v_tex_coord) is supplied by the output variable from the vertex shader (which originally gathered its data from the layout of attribute 1). But how is the final colour for each pixel actually set? In the main() function of the fragment shader, the texture() function is used; from what I have gathered, this function samples the colour of each pixel in the texture. Once a texture has been bound to a specific slot, we can set the uniform outside of the shader with glUniform() by retrieving the location of the uniform variable (the location of "u_Texture" in this case) and then linking it to the previously bound slot. Presumably this is the step which links together the texture image and the uniform - meaning that 'u_Texture' now has access to the texture image via the slot to which it was bound (please correct me if I am wrong here). The next assumption is that the texture() function will sample the corresponding colour value through its parameters ('u_Texture', which now contains the texture image - or at least has access to the slot to which the texture image is bound - and 'v_tex_coord', which contains the coordinates to which the texture should be rendered).

    Here is the part that confuses me most: the outcome of the texture() function is returned to the vec4 variable 'tex_colour', which then reassigns its value to the output vec4 variable 'colour'. What was the point of this last step? 'tex_colour' already contained the result of the texture() function, so why does it then need to be reassigned to 'colour'? 'colour' is not predefined by OpenGL; I can change the name to 'the_colour', 'a_colour' or 'the_ice_cream_man' and the texture image is still rendered perfectly fine. Is it the case that the variable defined as the output in the fragment shader will be used to render the colour of each pixel regardless of its name? I suppose the reason I ask is that the 'gl_Position' variable in the vertex shader, which sets the position, appears to be a variable predefined by OpenGL, whereas in the fragment shader this doesn't appear to be the case with the variable which sets the colour... some clarification of this would be greatly appreciated.

    *1* - terminology may be wrong here, but essentially I have used glVertexAttribPointer() to set the below vertex position coordinates to attribute 0 and the below texture coordinates to attribute 1. The full program consists of 16 files and I didn't want to include source code for each file, because most of it is irrelevant to the questions I am asking.

        float positions[] =
        {
            // vertices    // texture
            -0.5F,-0.5F,   0.0F,0.0F, // x and y coordinates
             0.5F,-0.5F,   1.0F,0.0F,
             0.5F, 0.5F,   1.0F,1.0F,
            -0.5F, 0.5F,   0.0F,1.0F
        };
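On the fragment-shader question above: to the best of my understanding, the suspicion is correct. The name of the fragment output is arbitrary; what matters is that it is an out variable, and layout(location = 0) on a fragment output refers to colour attachment 0 of the framebuffer, not to vertex attribute 0 - attribute locations and fragment output locations are separate namespaces. A sketch with the output deliberately renamed, to show the name itself carries no meaning:

```glsl
#version 330 core

// 'location = 0' on a fragment *out* means draw buffer / colour
// attachment 0, not vertex attribute 0.
layout(location = 0) out vec4 the_ice_cream_man;

in vec2 v_tex_coord;
uniform sampler2D u_Texture;

void main()
{
    // Whatever is written to the out variable at location 0 becomes the
    // fragment's colour, regardless of what the variable is called.
    the_ice_cream_man = texture(u_Texture, v_tex_coord);
}
```

This is also why gl_Position feels different: vertex position feeds fixed-function rasterisation, so it is a built-in, while fragment colour is just a user-declared output routed to a colour attachment.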
  14. calioranged

    OpenGL Mipmapping

    *** Beginner question ***

    To my current understanding of mipmapping: if you have a 512x512 texture downsized to 256x256, then only 1 pixel can be rendered on the downsized version for every 4 pixels on the full-sized texture. If the nearest-neighbour method is used, then the colour of each pixel on the downsized version will be determined by whichever pixel has its centre closest to the relevant texture coordinate, as demonstrated below. Whereas if the linear method is used, then the colour of each pixel on the downsized version will be determined by a weighted average of the four full-size pixels. But if mipmapping is not used, then how is the colour of each pixel determined?
  15. calioranged

    glShaderSource Parameters

    Thanks. Oh, alright, so we don't need to explicitly state '\0' at the end of an std::string because this is implicitly guaranteed by C++? Yes, 'vertexShader' is an argument of a function call to a function that compiles the shader, and it becomes the 'src' parameter inside this function. Thanks a lot for all your help!