Ignifex

Members
  • Content count: 169
  • Joined
  • Last visited

Community Reputation: 520 Good

About Ignifex
  • Rank: Member

Personal Information
  • Location: Leiden, The Netherlands
  1. OpenGL

    If it is white, that indicates something is wrong with your fragment shader or its output assignment. With that statement, vec4(TexCoord, 0.0, 1.0), the blue channel should be 0, resulting in a black/red/green/yellow tint, regardless of your texture coordinates. Like the inputs of your vertex shader, the output of your fragment shader also needs to be mapped. In your case, you are writing directly to the backbuffer, which has one color output, index 0. As far as I know, if you have only one output variable in your fragment shader, it should automatically be bound to output 0, but this might be worth checking. You can check here for more info on output assignments. Once you have checked your output assignment, it probably helps to check whether your shader program has any errors. See here and here.
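    For reference, a minimal sketch of those status checks in C++; the fragmentShader and program handles are assumptions, and a valid GL context and loader are assumed:

    [CODE]#include <cstdio>

    void checkShaderStatus(GLuint fragmentShader, GLuint program)
    {
        GLint status = GL_FALSE;
        char log[1024];

        // Did the fragment shader compile?
        glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &status);
        if (status != GL_TRUE) {
            glGetShaderInfoLog(fragmentShader, sizeof(log), NULL, log);
            printf("Compile error: %s\n", log);
        }

        // Did the program link?
        glGetProgramiv(program, GL_LINK_STATUS, &status);
        if (status != GL_TRUE) {
            glGetProgramInfoLog(program, sizeof(log), NULL, log);
            printf("Link error: %s\n", log);
        }
    }[/CODE]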
  2. OpenGL

    Hmm, I can't really find any other problems just by looking over your code. Could you try using VertexColors = vec4(TexCoord, 0.0, 1.0); in your fragment shader, to rule out texture coordinates as a problem? This should give you a fancy red/green gradient. Do you have a debugger you could use to check whether the image data returned by SOIL_load_image is correct? If it has any variation in pixel values, it should be fine. You could also try OpenGL's built-in error reporting. See here. If you want to test a part of your code, just call glGetError before it to clear previous errors, and call it again afterwards to get the error you are looking for.
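    A minimal sketch of that pattern; the glTexImage2D call is just a stand-in for whatever code you want to test:

    [CODE]// Drain any errors left over from earlier calls.
    while (glGetError() != GL_NO_ERROR) {}

    // The code under test.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, image);

    // Any error reported now belongs to the call above.
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
        printf("GL error: 0x%x\n", err);[/CODE]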
  3. OpenGL

    Nice to see you got it working already! I found one thing that might be your problem. You are calling glUniform1i before glUseProgram. However, glUniform1i sets the uniform on the currently active program, i.e. the one set by the last call to glUseProgram. Since your program stays active after the first frame, this would still let it work from the next draw call onward. Could you post your shaders as well? If you want to test your texture coordinates, you could try rendering them out directly in your fragment shader.
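    For the ordering, a minimal sketch; the program handle and the sampler name "myTexture" are assumptions:

    [CODE]glUseProgram(program);  // make the program active first
    GLint loc = glGetUniformLocation(program, "myTexture");
    glUniform1i(loc, 0);    // now this targets the active program[/CODE]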
  4. OpenGL

    Hello,

    The problem you are experiencing has to do with the layout of your vertex data in C++ relative to the layout of your OpenGL buffer. You have already come a long way though, and what you are doing is mostly correct.

    First off, on the intended layout of your OpenGL buffers: it is common practice to create only one buffer for all your vertex properties whenever possible. In this case, that means the vertex position, texture coordinate and color are combined in interleaved form into one buffer, so you only need one Vertex Buffer Object. Think of it as (v1 x, v1 y, v1 z, v1 u, v1 v, v1 r, v1 g, v1 b, v2 x, v2 y, ...). This also happens to be an exact match for the vertices in your C++ Vertex array. When you call:

    glBufferData(GL_ARRAY_BUFFER, numOfVertices * sizeof(vertices[0]), vertices, GL_STATIC_DRAW);

    your OpenGL buffer is filled with exactly that data. Using that buffer, it is up to you to specify where in it your vertex positions are located. As you already wrote, you can use glVertexAttribPointer for that. You wrote:

    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (GLvoid*)0);

    The first parameter is the index of a generic vertex attribute. It is basically an index to the property in your vertex shader that will receive this bit of data. You can use glGetAttribLocation to get that index if you don't know what it is. If you have only one property, 0 is (probably?) a pretty safe bet. The second parameter you provided is correct: 3 means this property has 3 floats, an xyz triplet, turned into a vec3 in GLSL. The third and fourth parameters are also correct. These denote the type and normalization of the data.

    The fifth parameter is where things get interesting. This is the stride, the byte offset between consecutive generic vertex attributes; in other words, in your case, the size in bytes of your Vertex structure. This is the offset between your separate vertices. If you specify 0 here, it defaults to the size of the given property, which is the size of 3 floats in this case. The sixth and final parameter is a 'pointer', a byte offset to the component you are referring to for this property. For each vertex, this should point to the first float (or just the vec2/vec3 itself) of the property you are declaring. C++ has a useful macro here: offsetof. So you could write something like:

    (void *)offsetof(Vertex, m_Pos)

    I'm not entirely sure if having m_Pos private will cause problems for that call.

    Now looking back at what you had written so far, what you actually see happening is that your entire vertex array is interpreted as triplets of xyz. Noting that your displayed coordinates are in the [-1, 1] range for xy with an ignored z, what you see being drawn starts to make sense. A sketch of the full interleaved setup follows below.

    Just two more small things that I would like to point out as advice. You are using quads, which makes sense for sprites, but please note that quads are deprecated and have even been removed in modern OpenGL versions. Instead, try to form your quads from two triangles. Also, since you are programming in C++, try using containers for your data whenever you can. If you pass a pointer to an array along with its length, you could probably have used a std::vector instead. Those clean up after themselves, without you having to delete them, and you can query their size easily.
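    Putting it together, a minimal sketch of that interleaved setup; the exact Vertex members and attribute indices are assumptions:

    [CODE]#include <cstddef>  // offsetof

    struct Vertex {
        float pos[3];   // x, y, z
        float uv[2];    // u, v
        float color[3]; // r, g, b
    };

    // One interleaved buffer holding all vertex properties.
    glBufferData(GL_ARRAY_BUFFER, numOfVertices * sizeof(Vertex), vertices, GL_STATIC_DRAW);

    // Stride is the full Vertex size; the offsets locate each property inside it.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, pos));
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, uv));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*)offsetof(Vertex, color));[/CODE]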
  5. OpenGL

    It seems you are looking for glBindFragDataLocation, which tells GLSL which output variable writes to which color target. By default, if only a single output is declared in the fragment shader, that variable is chosen as the output to the (first) bound color buffer.

    Depth is a separate case, since only a single depth buffer can be bound at a time. No separate variable should be declared, and you are not required to write the depth manually. If you wish to do so anyway, you can use gl_FragDepth.
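    A minimal sketch of the binding; the output name "outColor" is an assumption, and note that the call must happen before linking:

    [CODE]// Fragment shader declares: out vec4 outColor;
    glBindFragDataLocation(program, 0, "outColor"); // route it to color attachment 0
    glLinkProgram(program);[/CODE]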
  6. Just to point out a potential problem:   [source lang="C++"]vec3 CalcPointLight(int _index, vec3 _normal) {
    vec3 LightDirection = worldPos0 - pointLights[_index].position;
    ...
}[/source]   It looks like your light direction is reversed with respect to your normal and eye vector. This could cause a lot to become black in the diffuse term and might mess up your specular computation as well (negative x in pow(x, y)), but it sounds like you solved that already.   What combination of lights have you used for testing? What happens when you use only a directional light?
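     One way to flip it, if the rest of the shader expects a vector pointing from the surface toward the light (names taken from the snippet above):

     [source lang="C++"]vec3 LightDirection = pointLights[_index].position - worldPos0;[/source]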
  7. I first noticed that your triangle is not interpolating between red, green and blue, but between cyan, magenta and yellow. So somewhere your colors are being "reversed". Your problem is indeed in the code fragment you have shown below. The interpolation between corner points does not depend linearly on the distance, but on its inverse. If the distance to your red point (d0) is very small, your "red weight" should be high. You can have a look at barycentric coordinates, which seem to most closely match what you are doing.
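     A sketch of inverse-distance weighting, assuming d0, d1, d2 are the distances from the pixel to the red, green and blue corners:

     [CODE]float w0 = 1.0f / d0, w1 = 1.0f / d1, w2 = 1.0f / d2;
     float sum = w0 + w1 + w2;
     // With pure red/green/blue corners, the normalized weights are the channels.
     float r = w0 / sum;  // close to 1 when d0 is small
     float g = w1 / sum;
     float b = w2 / sum;[/CODE]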
  8. The second form, the "open" cylinder, is the common solution used in almost every model exporter available. The reason is exactly as you have mentioned. Texture coordinates are not the only attribute that can differ per vertex on the model. Multiple normals are also commonly available, forcing even more splitting into multiple "single attribute" vertices.
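     As a small illustration of such a split (hypothetical layout): the two vertices coincide in position and normal but differ in texture coordinate at the seam:

     [CODE]struct Vertex {
         float pos[3];
         float normal[3];
         float uv[2];
     };

     // The same point on the cylinder seam, stored twice: once with u = 0, once with u = 1.
     Vertex seamA = { {1, 0, 0}, {1, 0, 0}, {0.0f, 0.5f} };
     Vertex seamB = { {1, 0, 0}, {1, 0, 0}, {1.0f, 0.5f} };[/CODE]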
  9. To me at least, it is still not very clear what you are trying to achieve. As for your direct question, you could have a look at the depth of the fragment you are drawing in your pixel shader, using e.g. gl_FragCoord.z. You can compare this value to 0, which will usually be the depth value at the near plane, possibly shade the fragment in a different way, and then modify the depth used for the comparison by writing gl_FragDepth.
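     A fragment shader sketch of that idea; the threshold, the replacement color and the color varying are placeholders:

     [CODE]varying vec4 color;  // regular shaded color from the vertex shader (assumption)

     void main()
     {
         if (gl_FragCoord.z < 0.001) {                 // effectively at the near plane
             gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // shade differently
             gl_FragDepth = 0.5;                       // override the depth used for the test
         } else {
             gl_FragColor = color;
             gl_FragDepth = gl_FragCoord.z;            // once gl_FragDepth is written anywhere,
         }                                             // write it on every path
     }[/CODE]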
  10. That is an interesting comparison, Geometrian. I always thought transform feedback would have minimal impact on performance compared to compute shaders and OpenCL, but from your example it seems to make a big difference. Are there any settings you could tweak to optimize the performance? Perhaps this is going a bit off topic, but I am curious to know. Does anyone know how compute shaders compare to OpenCL and transform feedback?
  11. OpenGL

    For OpenGL / GLSL, you need to set up your depth texture to do a comparison upon lookup, using for instance: [CODE]glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);[/CODE] Then, in your shader, you should give your shadowmap uniform the type [font=courier new,courier,monospace]sampler2DShadow[/font]. This will allow you to call the regular [font=courier new,courier,monospace]texture[/font] function, or one of its variants, with a three-dimensional texture coordinate. The third coordinate is the depth value used for comparison.
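    On the shader side, a minimal sketch; the shadowMap uniform and shadowCoord varying are assumptions:

    [CODE]uniform sampler2DShadow shadowMap;

    // shadowCoord.xy: texture coordinates, shadowCoord.z: reference depth.
    float lit = texture(shadowMap, shadowCoord.xyz);  // 1.0 where lit, 0.0 where shadowed[/CODE]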
  12. You can also do all your computations in view space of course, which works perfectly fine. It is often even a little simpler because of the combined model and view matrices. A brief overview of the spaces you are using: Object Space -> World Space -> View Space -> Clip Space. You convert between these like so: Object Space -[ Modelview Matrix ]-> View Space -[ Perspective Matrix ]-> Clip Space. At the moment your lighting computation seems to have all parts properly in view space. One thing to look out for though is gl_LightSource[0].position. If you set this using something like glLightfv(GL_LIGHT0, GL_POSITION, mylightposition), the mylightposition you provide is multiplied by the modelview matrix before being sent to the GPU. That gives you two options. First, you can set it after setting up your camera's modelview matrix, in which case you don't have to transform it in your shader, but you will need to update it whenever your modelview matrix changes. Second, you can use an identity modelview matrix when setting up your light position, which means you will be transforming it into view space in your shader, but you will only need to update it when the light source moves. Note that multiplying in your shader for each shaded pixel is rather expensive, so I would go for the first option, sketched below. If any of this does not make sense to you, please let us know.
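    A minimal sketch of that first option; the gluLookAt parameters and the light position are placeholders:

    [CODE]// Set up the camera first, so the modelview matrix holds the view transform.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);

    // GL multiplies the position by the current modelview matrix, so the
    // shader receives gl_LightSource[0].position already in view space.
    GLfloat mylightposition[4] = { 10.0f, 5.0f, 0.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, mylightposition);[/CODE]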
  13. It's important to have some understanding of the lighting equation you are using. The dot product you are computing is Lambert's cosine law, stating that the amount of light that hits a surface is proportional to the cosine of the angle at which it is hit. This dot product is computed between the light vector and the surface normal, both normalized so that the result is the cosine of the angle between them. This light vector is not the light position "vector", which is hardly a vector at all, just a position. It is the vector pointing from the shaded point to the light source. The shaded point is easiest to obtain by interpolating your vertex positions. Since your normal and vertices are likely in world space, you will need your light source in world space as well, to get correct and view-independent results.
  14. Ah, that too yes. ;) That would result in something like: [CODE]vec3 lightvector = normalize(vec3(gl_LightSource[0].position) - position);
      intensity = dot(lightvector, n);[/CODE] And you should consider applying some attenuation if you are using a point light source.
  15. I would say the problem is with this line: [CODE]intensity = dot(vec3(gl_LightSource[0].position), n);[/CODE] Here you are taking the dot product between the normal and the light source position vector, instead of the vector to the light source. It should have the form: [CODE]intensity = dot(vec3(gl_LightSource[0].position) - position, n);[/CODE] where position is the position of the fragment (pixel) you are shading. You should pass this position as a second varying vec3. From far away, the difference will be negligible, since the light source position vector is so much bigger, but the result is still slightly incorrect.
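      Putting the pieces from this short thread together, a fragment shader sketch; the varying names and the use of gl_LightSource[0].diffuse are assumptions, and attenuation is omitted:

      [CODE]varying vec3 n;        // interpolated surface normal
      varying vec3 position; // interpolated fragment position, same space as the light

      void main()
      {
          vec3 normal = normalize(n);
          vec3 lightvector = normalize(vec3(gl_LightSource[0].position) - position);
          float intensity = max(dot(lightvector, normal), 0.0); // Lambert's cosine law
          gl_FragColor = vec4(intensity * gl_LightSource[0].diffuse.rgb, 1.0);
      }[/CODE]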