Vanderry

Members
  • Content count: 219
  • Joined
  • Last visited

Community Reputation: 252 Neutral

About Vanderry
  • Rank: GDNet+
  1. OpenGL

    I'm starting to think that rendering 3D hemispheres/cones with the depth test enabled might be a better solution.
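
    A minimal sketch of that depth-test idea, assuming an FBO with color and depth attachments is already set up (fogFbo, fogMapSize, DrawVisionCone and the observers container are hypothetical names, not from the post). Each observer is drawn as a cone or hemisphere whose depth encodes vision strength, so the depth test keeps the strongest value per texel and effectively emulates a max() composite:

      // Stronger vision = smaller depth, so GL_LESS keeps the maximum per texel
      glBindFramebuffer(GL_FRAMEBUFFER, fogFbo);
      glViewport(0, 0, fogMapSize, fogMapSize);
      glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Black = unexplored
      glClearDepthf(1.0f);                  // glClearDepth on desktop GL
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      glEnable(GL_DEPTH_TEST);
      glDepthFunc(GL_LESS);
      glDisable(GL_BLEND); // The winning fragment writes its color directly
      for (size_t i = 0; i < observers.size(); ++i)
        DrawVisionCone(observers[i]); // Fragment shader writes the vision value as color
      glBindFramebuffer(GL_FRAMEBUFFER, 0);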
  2. Hello!

    I'm attempting to create a classic RTS fog-of-war effect by rendering a number of circle gradients into a texture. It would appear black where there is no vision, white where there is an observer, and an in-between value for areas that have been explored and then left. The individual color channels could be used to store different stage values.

    My first thought was to sample the destination texture in the shader, to make sure that max(newColor, oldColor) is stored, but I have since learned that reading from the texture currently being rendered to is not supported in GLSL. OpenGL ES 2.0 doesn't seem to support glBlendEquation with GL_MAX either. Might there be some other way that wouldn't require a second texture?

    - David
  3. OpenGL

    For future reference, and as the unfairly downvoted gentleman in this post http://stackoverflow.com/questions/17707638/getting-black-color-from-depth-buffer-in-open-gl-es-2-0 mentions, a depth texture with linear mag filtering will not always work in OpenGL ES. With GL_NEAREST it looks great.
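
    For the record, the relevant setup might look like this (a sketch assuming the OES_depth_texture extension is available in OpenGL ES 2.0; the size variable is a placeholder):

      // Depth texture with GL_NEAREST filtering - GL_LINEAR is the part that misbehaves
      GLuint depthTexture = 0;
      glGenTextures(1, &depthTexture);
      glBindTexture(GL_TEXTURE_2D, depthTexture);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, size, size, 0,
                   GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, NULL);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                             GL_TEXTURE_2D, depthTexture, 0);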
  4. OpenGL

    Right you are. That takes care of all the packing business.

    I could see how this would be appropriate when using the depth value calculated by OpenGL, to linearize it, if my understanding is correct. I have been assuming that a manual transformation wouldn't suffer the same "skewing"; after all, if I performed the same calculation in C++, the results should be linear and w would remain 1.0. The 60.0, or view depth, was a desperate attempt to contain the result within the 0.0-1.0 range; I would use a uniform in practice. With the many opportunities for error here, please don't hesitate to correct me.

    My math support classes assume row-major storage. It would have been a good catch if this weren't the case.

    Isn't the purpose of varying variables to let OpenGL interpolate per-fragment values across faces? It certainly seems to work, but I may just be doing a whole lot of overcalculation if my assumption is wrong.

    Thank you very much for taking the time to read this mess :)
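
    For reference, the linearization discussed here is usually needed for perspective depth; with the orthographic projection used in the shadow-mapping post below, depth is already linear. A standard sketch (not from this thread):

      // Recover linear view-space depth from a perspective gl_FragCoord.z in [0,1];
      // near and far are the projection planes
      float LinearizeDepth(float z, float near, float far)
      {
        float ndc = z * 2.0 - 1.0; // Back to normalized device coordinates
        return (2.0 * near * far) / (far + near - ndc * (far - near));
      }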
  5. OpenGL

    I guess I am getting better results from using gl_FragCoord.z in the shadow fragment shader, in which case the problem becomes how to depth test the fragments. I did at one point pass the target texture into the shader to write the minimum of the old and new z, but that didn't work so well in OpenGL ES...
  6. Hey guys!

    I'm having some serious trouble implementing shadow mapping in my OpenGL/OpenGL ES application. There are many sources and discussions out there, but they often cover different versions of the API or omit some details. The way I'm doing it is by first constructing an orthographic projection matrix based on a light source. The "frustum" is adjusted to contain a terrain mesh.

      Vector3 offset = {1.0f, -1.0f, 1.0f}; // Light direction
      offset.Normalize();
      offset *= 30.0f;
      Vector3 center = {15.0f, 0.0f, 15.0f};
      Vector3 eye = center - offset;
      Vector3 up = {0.0f, 1.0f, 0.0f};
      Matrix4 viewMatrix = LookAt(eye, center, up);
      Matrix4 projectionMatrix = Ortho(-30.0f, 30.0f, -30.0f, 30.0f, 0.0f, 60.0f); // Depth range is 60.0
      Matrix4 shadowMatrix = viewMatrix * projectionMatrix;

    Then I draw all the models that populate the world to the shadow map and project it onto the terrain. It looks great for things that are positioned above ground. The problem emerges when they are not, in which case they are projected just the same. Naturally they should appear in the shadow map, so I am assuming that it is the depth comparison that fails. The shadow map is rendered directly to a texture (no render buffer involved) with the settings GL_RGBA and GL_UNSIGNED_BYTE, attached through GL_COLOR_ATTACHMENT0. For the color value I have tried a linearized and normalized gl_FragCoord.z, but now I am more inclined to use the same manually projected vertex z that is used in the other render step. The shaders look roughly like this:

      // Shadow vertex shader
      attribute vec4 a_Position;
      uniform mat4 u_ShadowMatrix;
      uniform mat4 u_WorldMatrix;
      varying vec4 v_ShadowCoordinate;
      void main()
      {
        v_ShadowCoordinate = u_ShadowMatrix * u_WorldMatrix * a_Position;
        gl_Position = v_ShadowCoordinate;
      }

      // Shadow fragment shader
      varying vec4 v_ShadowCoordinate;
      void main()
      {
        gl_FragColor = vec4(v_ShadowCoordinate.z / 60.0); // Without packing
      }

      // Render vertex shader
      uniform mat4 u_WorldMatrix;
      uniform mat4 u_ViewProjectionMatrix;
      uniform mat4 u_ShadowMatrix;
      attribute vec4 a_Position;
      attribute vec2 a_TextureCoordinate;
      varying vec2 v_TextureCoordinate;
      varying vec4 v_ShadowCoordinate;
      void main()
      {
        // I have a suspicion this might mess up the z, but it's vital for the projection
        // (note: no f suffixes on the literals - GLSL ES 1.00 doesn't accept them)
        mat4 shadowBiasMatrix = mat4(
          0.5, 0.0, 0.0, 0.0,
          0.0, 0.5, 0.0, 0.0,
          0.0, 0.0, 0.5, 0.0,
          0.5, 0.5, 0.5, 1.0);
        v_TextureCoordinate = a_TextureCoordinate;
        v_ShadowCoordinate = shadowBiasMatrix * u_ShadowMatrix * u_WorldMatrix * a_Position;
        gl_Position = u_ViewProjectionMatrix * u_WorldMatrix * a_Position;
      }

      // Render fragment shader
      uniform sampler2D u_Texture;
      uniform sampler2D u_ShadowMap;
      varying vec2 v_TextureCoordinate;
      varying vec4 v_ShadowCoordinate;
      void main()
      {
        vec4 textureColor = texture2D(u_Texture, v_TextureCoordinate);
        vec4 shadowMapColor = texture2D(u_ShadowMap, v_ShadowCoordinate.xy);
        if ((shadowMapColor.x * 60.0) < v_ShadowCoordinate.z) // Same projection as in the shadow shader, right?
          shadowMapColor = vec4(vec3(0.5), 1.0);
        else
          shadowMapColor = vec4(1.0); // No shadow
        gl_FragColor = textureColor * shadowMapColor;
      }

    I have tried different ways of computing the shadow map, with more elaborate packing methods. Ultimately I am assuming that OpenGL clamps the values between 0.0 and 1.0, or rather 0 and 255 in the case of my texture format. There has to be a more obvious problem with my implementation. I can mention that changes to the render state such as dithering, depth testing and face culling don't make a noticeable difference. Can anyone think of a possible flaw? Any tips are greatly appreciated.

    - David
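
    Since the post mentions packing: a commonly used scheme for storing a [0,1) depth value in a GL_RGBA / GL_UNSIGNED_BYTE target looks like this (not the poster's code; a sketch, where u_ViewDepth is a hypothetical uniform standing in for the hardcoded 60.0):

      // Spread a [0,1) depth across the four 8-bit channels
      vec4 PackDepth(float depth)
      {
        const vec4 bitShift = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
        const vec4 bitMask = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
        vec4 color = fract(depth * bitShift);
        color -= color.xxyz * bitMask; // Strip bits already stored in a higher channel
        return color;
      }

      // Reassemble the depth value in the render fragment shader
      float UnpackDepth(vec4 color)
      {
        const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
        return dot(color, bitShift);
      }

    The shadow fragment shader would then write gl_FragColor = PackDepth(v_ShadowCoordinate.z / u_ViewDepth), and the render fragment shader would compare against UnpackDepth(texture2D(u_ShadowMap, v_ShadowCoordinate.xy)) instead of reading a single channel.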
  7. Hello Randy! I simplified one behemoth of a structure for the purpose of this post. In reality it is part of a quite elaborate (and mostly finished) project that I received some critique for when I passed a code sample to a potential employer. I suppose a good start might be to unify the objects contained in vectors and apply the changes step by step, not necessarily going "all the way".
  8. Hey you cool code crunchers!

    I'm trying to wrap my head around data-oriented design. There are many good explanations and examples out there, but I would love to verify that my understanding is correct before making changes to my old code.

    So far I have been trying to keep my data simple by avoiding inheritance and using only PODs, std::vectors and nested structs. As a very simplified example:

      struct State
      {
        struct Unit
        {
          I32 maxHealth;
          I32 health;
          Float rotationAngle;
          Vector3 position;
        };

        struct Player
        {
          std::vector<Unit> units;
        };

        std::vector<Player> players;
      };

    My first intuition would be to redefine the data like this:

      struct State
      {
        struct Unit
        {
          I32 maxHealth;
          I32 health;
          I32 positionVector3Index;
          Float rotationAngle;
        };

        struct Player
        {
          std::vector<I32> unitIndices;
        };

        std::vector<Player> players;
        IndexedVector<Unit> units;
        IndexedVector<Vector3> vector3s;
      };

    IndexedVector is just an std::vector that preserves element indices by flagging vacant slots (unless there's a vacant sequence at the end, in which case deallocation occurs). I'm sure a refined variation exists somewhere.

    Does this look right, or should I separate even PODs of different types but equal memory sizes, such as 32-bit integers and floats? It seems to me that when an index would occupy the same or more memory than the referenced data type, separation wouldn't be a good idea. Thank you very much for any tips!

    Ps. I managed to close my browser while writing this, so the post had to be restored from memory. Sorry if I lost some point along the way.

    - David
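
    Not from the post: a minimal sketch of an IndexedVector along the lines described above, assuming stable indices are the only requirement (the I32 alias matches the post and is assumed to be a 32-bit integer). A refined variation would track vacant slots in a free list instead of scanning:

      #include <vector>

      typedef int I32; // Matching the post's alias (assumed)

      template <typename T>
      class IndexedVector
      {
      public:
        // Reuses a vacant slot if one exists, otherwise appends
        I32 Add(const T& element)
        {
          for (I32 i = 0; i < static_cast<I32>(mElements.size()); ++i)
          {
            if (mVacant[i])
            {
              mElements[i] = element;
              mVacant[i] = false;
              return i;
            }
          }
          mElements.push_back(element);
          mVacant.push_back(false);
          return static_cast<I32>(mElements.size()) - 1;
        }

        // Flags the slot as vacant, then trims any vacant run at the end
        void Remove(I32 index)
        {
          mVacant[index] = true;
          while (!mVacant.empty() && mVacant.back())
          {
            mElements.pop_back();
            mVacant.pop_back();
          }
        }

        T& operator[](I32 index) { return mElements[index]; }
        const T& operator[](I32 index) const { return mElements[index]; }

      private:
        std::vector<T> mElements;
        std::vector<bool> mVacant; // true = slot is free for reuse
      };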
  9. Thank you both! I had not considered either of the alternatives. I am not overly familiar with Boost, but it seems to have a trick for most problems imaginable. The stringstream solution is super elegant, so I think I will explore that first. Again, much obliged :)
  10. Hey guys!

    I frequently encounter situations where I want to map variables in C++ to data in scripts or XML files. It feels most intuitive to associate them with string ids, and so far I have done the parsing either by using templates such as:

      template <class T>
      struct NamedVariable
      {
        const char* name;
        T* variable;
      };

    and supplying functions with one name but different parameter lists (letting the compiler figure out which one to use), or by adding a datatype enumerator to the structure and manually specifying the type:

      struct NamedVariable
      {
        enum Type { Type_Integer, Type_Float, Type_StlString, etc... };
        const char* name;
        Type type;
        void* variable;
      };

    Is either of these methods preferable or risky beyond my understanding (I have the most distaste for the void* casting), or might there be another way? I seem to remember one could paste snippets of C-like code on these forums before; hope you'll excuse the crude format. Thank you for your time!

    - David
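
    As a sketch of the stringstream route mentioned in the reply above (not from the thread; ParseNamedValue is a hypothetical name), a single function template covers every streamable type without void* casts:

      #include <sstream>
      #include <string>

      template <class T>
      bool ParseNamedValue(const std::string& text, T& variable)
      {
        std::istringstream stream(text);
        stream >> variable;
        return !stream.fail();
      }

      // Usage, e.g. with values pulled from an XML attribute:
      //   int health;  ParseNamedValue("42", health);  // health == 42
      //   float angle; ParseNamedValue("3.5", angle);  // angle == 3.5f

    One caveat: operator>> reads a std::string only up to the first whitespace, so whole-string values would need a std::getline overload.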
  11. Fantastic! I'm currently using SRT-transformed mesh hierarchies in a game, and this should help me extract the information needed to form matching collision volumes. You have my gratitude :)
  12. Thanks haegarr! That is a beautifully comprehensive explanation. So I gather that for an inverse transformation to have a directly inverse effect, it had best be applied right next to the original transformation, on either side, like R^-1 * R * S * T or S * T * R * R^-1?

    I am just a bit unclear about this part: is the transformation application order important for this decomposition method to work?
  13. Hey guys!

    Quite simply: if I maintain a transformation matrix of translations, scalings and rotations, and also another one with just the rotations - could I extract the correct resulting scale, and possibly translation, by multiplying the combined matrix by the inverse rotation matrix? I have a feeling the math might not be that simple, but I'm just not sure. Thanks in advance!
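
    A sketch of the idea (not from the thread), using the Matrix4 type from the earlier posts with hypothetical Inverse and Transpose helpers, and the row-vector convention from the shadow-mapping post. As the reply above points out, the inverse must land directly next to its original, so this only isolates S * T when the rotation sits at the outer end of the product:

      Matrix4 combined = scale * translation * rotation;          // M = S * T * R
      Matrix4 scaleAndTranslation = combined * Inverse(rotation); // S * T * (R * R^-1) = S * T

      // For a pure rotation the inverse is simply the transpose, which is cheaper:
      // Matrix4 scaleAndTranslation = combined * Transpose(rotation);

      // In S * T the scale factors sit on the main diagonal and the
      // translation occupies the last row (row-vector convention)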
  14. Thank you guys for the clarification. It seemed strangely difficult to find a clear answer elsewhere, but this makes me confident enough to continue my research. Saludos!
  15. Hey you magnificent people!

    I'm probably being a big dummy-dumb about this, but when using multiple VBOs for positional coords, normals, UVs etc., how would I use an index buffer correctly? Assuming only one IB can be used per draw call, is there any way to offset the buffer steps for each attribute so that a separate set of indices can be used for each one? If IBs can only hold one index per vertex, then it seems to me they wouldn't be very useful for more complex vertex definitions.

    - Dave
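
    For what it's worth, the usual answer is that OpenGL supports exactly one index list per draw call: index i selects element i from every enabled attribute array, so vertices that differ in any attribute must be duplicated until all the arrays line up 1:1. A minimal sketch (buffer names, attribute locations and indexCount are placeholders):

      // One VBO per attribute, all sharing the same per-vertex ordering
      glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
      glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0); // Location 0: position
      glBindBuffer(GL_ARRAY_BUFFER, normalVbo);
      glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0); // Location 1: normal
      glBindBuffer(GL_ARRAY_BUFFER, uvVbo);
      glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, 0); // Location 2: uv
      glEnableVertexAttribArray(0);
      glEnableVertexAttribArray(1);
      glEnableVertexAttribArray(2);

      // A single index buffer; each index picks a complete vertex across all VBOs
      glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexIbo);
      glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);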