
Farfadet

Members
  • Content count: 81

Community Reputation

175 Neutral

About Farfadet

  • Rank
    Member
  1. Right, there are arguments against their marketing approach. But what about the technology? Polygons are great for rendering anything between large flat surfaces and extremely detailed, irregular shapes (vegetation, rocks, clouds...). For the latter, you eventually end up rendering polygons smaller than a pixel. In that case, voxels/point clouds used in conjunction with a sparse voxel octree data structure (I group these under the term "voxels" in what follows) are better, at least in theory, for a few reasons: 1) the sparse voxel octree is very efficient to traverse for rendering, which explains how they could obtain that frame rate without GPU acceleration; 2) the sparse voxel octree is in itself a compressed data structure; 3) you get rid of the real burden of polygons: UV maps, textures and displacement mapping, which are especially heavy for highly irregular surfaces, clouds, vegetation... Instead, you just store colour information with the voxel. The real limitation is that animation is virtually impossible with that technology. So the question is not "which is better?" but rather "which is better for which application?"
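The compression argument (point 2 above) can be sketched in a few lines of Python. This is an illustrative toy, not any particular engine's implementation; real SVOs pack child pointers and occupancy masks into bitfields, but the principle is the same: empty space costs nothing because absent children are simply never allocated.

```python
# Toy sparse voxel octree: only occupied paths through the tree exist.

class SVONode:
    __slots__ = ("children", "colour")

    def __init__(self):
        self.children = [None] * 8  # one slot per octant
        self.colour = None          # leaf payload: just a colour (point 3)

def insert(root, x, y, z, colour, depth):
    """Insert a voxel at integer coordinates inside a 2^depth cube."""
    node = root
    for level in range(depth - 1, -1, -1):
        octant = (((x >> level) & 1) << 2
                  | ((y >> level) & 1) << 1
                  | ((z >> level) & 1))
        if node.children[octant] is None:
            node.children[octant] = SVONode()
        node = node.children[octant]
    node.colour = colour

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children if c is not None)

root = SVONode()
insert(root, 0, 0, 0, (255, 0, 0), 4)     # one red voxel in a 16^3 cube
insert(root, 15, 15, 15, (0, 255, 0), 4)  # one green voxel, opposite corner
# 4096 possible voxel positions, but only the two occupied paths are allocated:
print(count_nodes(root))
```

Two voxels in a 16x16x16 volume allocate only nine nodes in total; the 4094 empty cells consume no memory at all, which is the sense in which the structure is self-compressing.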
  2. OpenGL

    Maybe this can help: http://www.equalizergraphics.com/documentation/parallelOpenGLFAQ.html In short, it says you can't drive a single rendering context from multiple threads in OpenGL. I believe this is what you're trying to do.
  3. Well, I followed your idea and noticed that the offset is related to the window settings (such as the border margin), and even to the window size: the further the window dimensions are from the texture's (1024x1024), the bigger the offset. This means the copy into the texture attached to the FBO behaves as if the window framebuffer's dimensions were used, not the FBO's; however, it is the FBO/texture image that gets copied. I tried about every possible combination of glViewport, but nothing changes (which is logical, since glViewport affects the rendering pipeline, and here I use glCopyTexImage). The strange thing is that after copying, I use the same FBO to render to the texture, with the proper glViewport, and there it works perfectly well. This might simply be a bug in the ATI driver. Next I'll try copying the texture by rendering it into the destination texture. Thanks a lot
  4. I'm copying a texture to another one using FBOs and glCopyTexImage2D / glCopyTexSubImage2D. Both textures are 1024x1024 pixels. It works, except that the image is offset by 27 pixels vertically in the destination texture. If I specify an offset of y = -27 in glCopyTexImage2D, the images match, but of course I lose a strip of the source. This is true for both glCopyTexImage2D and glCopyTexSubImage2D, with or without specifying the margin in the width/height. Has anybody experienced something like this? I have an ATI Mobility Radeon X1400. Thanks.
  5. OpenGL

    First, sorry it took me so long to answer. Well, I'm using OpenGL and GLSL, and I render to texture. Putting aside the render-to-texture technology (I guess there's not much difference between HLSL and GLSL here), I still have a problem with the standard blending formula Cs.αs + Cd.(1 - αs). Consider the case:
    Cs = (1, 0, 0), αs = 0.5
    Cd = (0, 0, 1), αd = 0
    i.e. the source is partly transparent, and the destination is fully transparent at that pixel. The standard formula gives C = (0.5, 0, 0.5), and α = αs.αs + αd.(1 - αs) = 0.5*0.5 + 0*(1 - 0.5) = 0.25. The resulting alpha is already strange. I can go around that by using glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE), which adds up the alphas; that makes sense for a brush adding paint to a layer. But there is another problem: the resulting colour I get is a dark purple, because of the blue contribution. There should be NO blue contribution, since the destination layer is fully transparent at that pixel. My conclusion: standard blending is only valid over a fully opaque back layer. It assumes an alpha of 1 in the destination layer; proof of that is that the destination alpha doesn't appear in the formula at all. Dividing by alpha will still not give me a good result: the blue will always be there. My merging formula is alright for merging two layers without changing the visible result, but I'm still not sure it's OK for painting with a brush that has an alpha component.
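The numbers in the post can be checked directly. The sketch below (illustrative Python, not OpenGL calls; the function names are mine) contrasts the fixed-function blend with the Porter-Duff "over" operator for straight (non-premultiplied) alpha, which is the operator that handles a transparent destination correctly:

```python
# Compositing a half-transparent red brush over a fully transparent
# destination texel that happens to store blue.

def blend_standard(cs, a_s, cd, a_d):
    """GL-style fixed blend: C = Cs*as + Cd*(1-as), same factors on alpha."""
    c = tuple(s * a_s + d * (1 - a_s) for s, d in zip(cs, cd))
    a = a_s * a_s + a_d * (1 - a_s)
    return c, a

def blend_over(cs, a_s, cd, a_d):
    """Porter-Duff 'over' for straight alpha: weights Cd by its own alpha."""
    a = a_s + a_d * (1 - a_s)
    if a == 0:
        return (0.0, 0.0, 0.0), 0.0
    c = tuple((s * a_s + d * a_d * (1 - a_s)) / a for s, d in zip(cs, cd))
    return c, a

src = (1.0, 0.0, 0.0)   # red brush, alpha 0.5
dst = (0.0, 0.0, 1.0)   # blue stored in the layer, alpha 0 (invisible)

print(blend_standard(src, 0.5, dst, 0.0))  # dark purple: hidden blue leaks in
print(blend_over(src, 0.5, dst, 0.0))      # pure red: blue correctly ignored
```

The "over" operator weights the destination colour by the destination alpha, so a fully transparent texel contributes nothing regardless of the colour stored there, which is exactly the behaviour the post is asking for.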
  6. I've been messing around for some time with rendering a mesh with multiple see-through textures, until I dived into the theory. This is what I came up with:
    1) rendering multiple layers
    We can define the visible colour Vi of layer i as the colour that would be seen if layers 0 to i were visible and the remaining layers were invisible. Vi is obtained by computing, from bottom (layer 0) to top (layer i), the successive values of Vi with the formula:
    Vi = Ci.αi + Vi-1.(1 - αi)
    with Ci the colour stored in layer i and αi the alpha value stored in layer i. This is the classical blending formula, but the destination colour is not the colour stored in the underlying layer; it is that layer's visible colour. The lowest layer is the background; its alpha is by definition 1, so V0 = C0. This is easy to implement in a pixel shader: we just compute the successive values of Vi from layer 0 upwards, possibly skipping hidden layers. This is well-known stuff.
    2) merging layers
    Sometimes we want to merge two layers together, or better said, replace two textures with their blended colours. This is the case, for example, when we paint with a textured brush on one layer. At first sight, all we have to do is use the classical blending formula (αdest, 1 - αdest). Doing this we get a "halo" around the brush (where the alphas are somewhere between 0 and 1). The reason is that we ignore the source alpha, and we shouldn't. Let us see how to correctly merge two successive layers. What we want is to replace the two layers with one and get the same visual result as when they are layered. With two layers, say layers 3 and 4, the visible colour as defined above is given by:
    V3 = C3.α3 + V2.(1 - α3)
    V4 = C4.α4 + V3.(1 - α4)
    and we want to replace them with layer 3' (the merge of layers 3 and 4):
    V'3 = C'3.α'3 + V2.(1 - α'3)
    so that V'3 = V4. Therefore we can write the identity:
    C'3.α'3 + V2.(1 - α'3) = C4.α4 + V3.(1 - α4)
                           = C4.α4 + (C3.α3 + V2.(1 - α3)).(1 - α4)
                           = C4.α4 + C3.α3.(1 - α4) + V2.(1 - α3).(1 - α4)
    Since this must hold for any V2, it can only be met if two conditions are satisfied:
    C'3.α'3 = C4.α4 + C3.α3.(1 - α4)
    1 - α'3 = (1 - α3).(1 - α4)
    1 - α is the inverse of the opacity, i.e. the transparency, so the merged layer's transparency is the product of the two layers' transparencies, or:
    α'3 = 1 - (1 - α3).(1 - α4)
    And the merged colour:
    C'3 = (C4.α4 + C3.α3.(1 - α4)) / α'3
    So far so good. I'd welcome comments on this from more experienced people. My problem is implementing this formula in GLSL. I cannot use the existing OpenGL blending modes, so I need to do it in the shader. The problem is that I need to write the result of the merge into texture 3, corresponding to layer 3, but I also need a lookup into texture 3. Can I do simultaneous reads and writes on the same texture? I guess not. Another solution would be to write into a temporary texture and replace texture 3 with it when the blending is done. Considering my brush application, I need this to be quick (of course). Is there another option? What's the best way to do this? Any help/advice welcome.
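The derivation above can be verified numerically: compositing layers 3 and 4 one after the other must give the same visible colour as compositing the single merged layer. A quick check (illustrative Python; the layer values are arbitrary test data, not from the post):

```python
# Verify that the merged layer reproduces the two-layer result exactly.

def composite(c, a, v_below):
    """One step of the layering formula: Vi = Ci*ai + V(i-1)*(1-ai)."""
    return tuple(ci * a + vi * (1 - a) for ci, vi in zip(c, v_below))

def merge(c3, a3, c4, a4):
    """a' = 1-(1-a3)(1-a4);  C' = (C4*a4 + C3*a3*(1-a4)) / a'."""
    a = 1 - (1 - a3) * (1 - a4)
    if a == 0:
        return (0.0, 0.0, 0.0), 0.0  # both layers fully transparent
    c = tuple((x4 * a4 + x3 * a3 * (1 - a4)) / a for x3, x4 in zip(c3, c4))
    return c, a

v2 = (0.2, 0.7, 0.1)                 # visible colour below layer 3
c3, a3 = (1.0, 0.0, 0.0), 0.6        # layer 3: red, 60% opaque
c4, a4 = (0.0, 0.0, 1.0), 0.3        # layer 4: blue, 30% opaque

layered = composite(c4, a4, composite(c3, a3, v2))   # V4, two-layer path
cm, am = merge(c3, a3, c4, a4)
merged = composite(cm, am, v2)                       # V'3, merged path

print(all(abs(x - y) < 1e-12 for x, y in zip(layered, merged)))
```

The two paths agree to floating-point precision for any choice of V2, which is exactly the invariance condition the derivation imposes.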
  7. A vertex shader transforms vertices from model space into clip coordinates (with the modelview-projection matrix), but it also needs to transform direction vectors if they are meant to be fixed in world space. This is not obvious, but you need another matrix to transform vectors: the normal matrix (gl_NormalMatrix). This is explained in the orange book, ch. 6.2. When I want my light source fixed in world space (i.e. turning with the scene), I add the line:
    LightPosition = vec3(gl_NormalMatrix * LightPosition);
    and just remove it when I want the light fixed in screen coordinates (always coming from the same apparent direction). I also apply this transformation to the normal, tangent and binormal vectors:
    vec3 t, n, b;
    n = normalize(gl_NormalMatrix * vec3(gl_Normal));
    t = normalize(gl_NormalMatrix * Tangent);
    b = cross(n, t);
  8. Hi, I've implemented Catmull-Clark subdivision in my app. The only thing left is to compute the tangent space vectors at the subdivided mesh vertices. Temporarily, the vectors (tangent and normal) are interpolated from the vectors at the base mesh vertices. As expected, this gives noticeable artifacts where the surface curvature is high. Internet searches return a lot on Catmull-Clark and subdivision, but strangely very little on this topic. What I'm looking for is a method that does two things:
    1) provide tangent vectors at the subdivided vertices close enough to the tangent plane of the (exact) subdivision surface (the normal is computed as the cross product of the two tangent vectors);
    2) make it so that these tangent vectors (or vectors derived from them) correspond to the tangent space needed for correct bump mapping.
    I could of course compute the tangents and normals of the subdivided mesh the same way I compute them for the base mesh, but the CPU cost is prohibitive. Some details regarding the app:
    - polygonal mesh (triangles and quads)
    - shaders supporting lighting, colour textures and bump maps
    - texture coordinates are simply linearly interpolated during subdivision
    - sharp and semi-sharp creases (for the former, normal discontinuity across the crease)
    - for the base mesh, tangent space vectors at vertices are computed as follows: compute normals for each polygon; compute the tangent for each polygon as explained in http://www.terathon.com/code/tangent.html; average those values at the vertices, taking normal and texture discontinuities (creases, mesh edges and seams) into account; orthogonalize and normalize the vectors (only the tangent and normal are stored per vertex; the binormal is computed in the vertex shader).
    Any link / suggestion / hint would be welcome.
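For reference, the per-polygon tangent computation the post links to (the terathon.com method) boils down to solving a small 2x2 system per triangle: each edge must equal du*T + dv*B in texture space. A sketch of the per-triangle step, in illustrative Python with names of my own choosing:

```python
# Per-triangle tangent in the spirit of the linked terathon.com article:
# find T such that each position edge decomposes as du*T + dv*B.

def triangle_tangent(p0, p1, p2, uv0, uv1, uv2):
    e1 = [b - a for a, b in zip(p0, p1)]          # position edge 1
    e2 = [b - a for a, b in zip(p0, p2)]          # position edge 2
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]   # texture-space edge 1
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]   # texture-space edge 2
    det = du1 * dv2 - du2 * dv1
    if det == 0:
        return (1.0, 0.0, 0.0)  # degenerate UVs: fall back to an arbitrary axis
    r = 1.0 / det
    # T = (dv2*e1 - dv1*e2) / det, i.e. the 2x2 system solved by hand
    return tuple(r * (dv2 * a - dv1 * b) for a, b in zip(e1, e2))

# A triangle whose UVs are axis-aligned: the tangent should point along +X.
t = triangle_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                     (0, 0), (1, 0), (0, 1))
print(t)
```

These per-face tangents are then averaged at the vertices and orthogonalized against the normal, as the post already describes for the base mesh; the open question of doing this cheaply on the subdivided mesh remains.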
  9. Why would the x velocity be non-zero? The force is vertical and so is the reaction (assuming the ground is flat and horizontal). The offset between the reaction vector and the c.g. will yield a rotation, but no x displacement, I think.
  10. Thanks. As explained, this is very practical. I'll go for Cramer and see if it's fast enough. Alvaro, thanks for the link; the performance gain is impressive, but as I understand it, this is Intel-only, so no portability.
  11. Referring to another post: http://www.gamedev.net/community/forums/post.asp?method=reply&topic_id=573015 I'm trying to find an efficient way to interpolate a function whose value is known at the 4 vertices of a quadrilateral; in other words, the same as barycentric interpolation, but for a quad. I found another way than those discussed in that post: I take the non-linear function f(x,y) = a.x.y + b.x + c.y + d. I know the function's value at 4 points (x1,y1), (x2,y2), (x3,y3), (x4,y4); this gives 4 equations in 4 unknowns (a, b, c, d). I need to do this for quite a few polygons, preferably in real time, and twice per polygon (I map 2D quads onto 2D quads), so speed matters. I assume the matrix has no particular form in the general case.
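The setup described above can be made concrete in a few lines: build one row [x*y, x, y, 1] per corner and solve the resulting 4x4 system for (a, b, c, d). A sketch in plain Python (the corner values are arbitrary test data; a production version would use the fast direct solve discussed in this thread):

```python
# Fit f(x, y) = a*x*y + b*x + c*y + d to known values at four quad corners,
# via straightforward Gaussian elimination with partial pivoting.

def solve4(m, rhs):
    """Solve a 4x4 linear system; m is a list of 4 rows of 4 floats."""
    a = [row[:] + [r] for row, r in zip(m, rhs)]  # augmented matrix
    n = 4
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

corners = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
values = [1.0, 3.0, 7.0, 2.0]                  # known f at each corner
m = [[x * y, x, y, 1.0] for x, y in corners]   # one equation per corner
a, b, c, d = solve4(m, values)

f = lambda x, y: a * x * y + b * x + c * y + d
print([round(f(x, y), 9) for x, y in corners])  # reproduces the corner values
```

The fitted surface is linear along each edge of the quad (the x*y term vanishes to a linear function when either coordinate is held fixed), which is what makes it behave like barycentric interpolation extended to quads.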
  12. What about: the fewest multiplication/division operations?
  13. Hi, what would be the fastest algorithm to solve a linear system of 4 equations in 4 unknowns? I have the feeling it must be somewhere between brute-force methods (substitution) and the more sophisticated iterative methods.
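For a fixed 4x4 size, Cramer's rule (the option settled on later in this thread) is a reasonable direct answer: no pivoting, no branching, a fixed operation count. A sketch in illustrative Python; an optimized version would expand the determinants into shared 2x2 subdeterminants instead of recomputing minors:

```python
# Cramer's rule for a 4x4 system: x_i = det(M with column i replaced) / det(M).

def det2(a, b, c, d):
    return a * d - b * c

def det3(m):
    return (m[0][0] * det2(m[1][1], m[1][2], m[2][1], m[2][2])
            - m[0][1] * det2(m[1][0], m[1][2], m[2][0], m[2][2])
            + m[0][2] * det2(m[1][0], m[1][1], m[2][0], m[2][1]))

def det4(m):
    total = 0.0
    for col in range(4):  # cofactor expansion along the first row
        minor = [[m[r][c] for c in range(4) if c != col] for r in range(1, 4)]
        total += ((-1) ** col) * m[0][col] * det3(minor)
    return total

def cramer4(m, rhs):
    d = det4(m)
    xs = []
    for col in range(4):
        mc = [row[:] for row in m]
        for r in range(4):
            mc[r][col] = rhs[r]  # substitute rhs into column `col`
        xs.append(det4(mc) / d)
    return xs

m = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 4.0, 1.0],
     [0.0, 0.0, 1.0, 5.0]]
x = cramer4(m, [3.0, 5.0, 6.0, 6.0])  # rhs = row sums, so the solution is all ones
print([round(v, 9) for v in x])
```

Note that Cramer's rule does no pivoting, so it is numerically fragile when det(M) is small; for the well-conditioned matrices of the quad-interpolation use case above, that is usually acceptable.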
  14. I'm devastated to hear this, but thanks anyway
  15. How do you manage this in an application, then? Wait until it crashes?