Marc J

Members
  • Content count: 20
  • Joined
  • Last visited

Community Reputation: 311 Neutral

About Marc J
  • Rank: Member
  1. Okay, I was able to figure out that this happens when both RAM and video RAM are full and a context switch happens. If everything runs in one thread it is slow, but OpenGL reacts fine. I think this is more or less a driver "bug" (in my opinion there should be some kind of error or exception); either the memory management in the threading context or the context switch itself is involved as well. I don't have a solution to this, but maybe it helps a bit if someone runs into the same problem.
  2. Thanks for the reply; I am aware that this design is not optimal. I maintain the OpenGL part of the application, and in general the context switches work fine (at least 20 people use the software on a daily basis). The OpenGL parts are protected with locks, so your example should not be possible. But I understand what you are saying and I think you are right. I would love to understand why it only happens with the one (relatively big) dataset; that might be a hint that it is something on my side. And what I really don't understand is that wglMakeCurrent returns TRUE and directly afterwards the OpenGL call is ignored, even though this scope is completely protected with locks. But yes, I am aware that this is a very special case and it is hard to say anything without analyzing the code in depth. I just thought maybe someone has an idea what I could try to find the reason for this behavior. Thanks again!
  3. Hello everyone, I have a strange problem. I have an application (a 3D viewer) which uses OpenGL in two threads but has only one OpenGL context. It switches the context between the threads via wglMakeCurrent(HDC, HGLRC) and wglMakeCurrent(nullptr, nullptr). In general this works. But with one special dataset, in one special situation, wglMakeCurrent(HDC, HGLRC) returns TRUE, yet after that every OpenGL call is ignored (even the ones made directly after wglMakeCurrent).
     - GetLastError() returns 0 before and after the call.
     - An OpenGL debug context does not give a hint (which is expected, since the context seems to be out of order).
     - wglGetCurrentContext returns the expected values.
     - glGetError returns 0.
     For debug purposes I do
         int i = 0;
         glGetIntegerv(GL_MAX_DRAW_BUFFERS, &i);
         ASSERT(i != 0);
     and after the context is out of order this assert fails. I also logged the context switches and they seem to be correct; the context is bound to zero or one thread (the one which will do the next calls to OpenGL) at a time. I have no clue what I could try next or what could be the reason for this behavior. I use a Quadro K2000 with the 354.13 driver. Every hint is appreciated. Thanks Marc
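     A minimal sketch of the single-context hand-off pattern described above, assuming a WGL setup and a mutex guarding all ownership changes; the names RenderLock, g_hDC, g_hGLRC and RunGLWork are placeholders, not taken from the original code. The sanity query uses GL_MAX_TEXTURE_SIZE because it is available with the stock Windows GL headers (the post uses GL_MAX_DRAW_BUFFERS, which needs glext.h or a loader):

         #include <windows.h>
         #include <GL/gl.h>
         #include <cassert>
         #include <mutex>

         std::mutex RenderLock;    // hypothetical lock guarding every context ownership change
         HDC   g_hDC   = nullptr;  // created elsewhere
         HGLRC g_hGLRC = nullptr;  // the single shared context

         // Bind the context in the calling thread, run the GL work, then unbind it
         // so the other thread can take over; verify that the bind really took effect.
         void RunGLWork(void (*work)())
         {
             std::lock_guard<std::mutex> guard(RenderLock);

             BOOL ok = wglMakeCurrent(g_hDC, g_hGLRC);
             assert(ok == TRUE);
             assert(wglGetCurrentContext() == g_hGLRC);   // the context is now bound to this thread

             // Cheap "is the context alive?" check, in the spirit of the post's ASSERT.
             GLint maxTexSize = 0;
             glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
             assert(maxTexSize != 0);

             work();                                      // the actual OpenGL calls
             glFinish();                                  // flush before handing the context over

             wglMakeCurrent(nullptr, nullptr);            // unbind so another thread may bind it
         }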
  4. That the points become smaller when they are far away is intended. Depending on the distance I use a smaller point size, so that is okay, and I call glEnable(GL_PROGRAM_POINT_SIZE) when I am using the shader to set the point size. But I agree with you that, in my understanding of the spec, the behaviour should be identical, which means the points should either vanish in both cases or in neither, not vanish in one case and stay visible in the other. I am using an NVIDIA Quadro K2000; I have to look up the driver version tomorrow since I am not at my desktop at the moment. I will let you know, and thanks for making clear that it is probably a driver bug.
  5. Hello everybody, at the moment I am trying to set the point size depending on a zoom level in my OpenGL 3.3 viewer. I found that, in my environment, there is a difference between using glPointSize in the C++ program and using gl_PointSize in the vertex shader. When I zoom out, the point size value gets small in both cases (I compute the value myself and use the same one in both cases). In the shader case the points vanish at some point, I guess due to the small point size (or maybe fading is involved too). If I set the point size in the main program using glPointSize, the points never vanish completely; I guess they always keep a width of one pixel. I wonder what the reason for this difference is. I tried using multisampling as well, but in the case of not using the shader I never got the points to vanish completely. In my understanding of the OpenGL 3.3 spec there is no statement that would explain this behaviour. Does anyone have a hint about what both cases do internally, or can explain to me what the difference really is? Thanks Marc
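     For reference, a minimal sketch of the two paths being compared, assuming a 3.3 core context and a loaded function pointer set; computedSize, mvp and pointSize are placeholders and the shader is not the original one. The fixed-function path takes its size from glPointSize, while the shader path only honours gl_PointSize once GL_PROGRAM_POINT_SIZE is enabled:

         // Path A: point size set from the C++ side (shader does not write gl_PointSize).
         glDisable(GL_PROGRAM_POINT_SIZE);
         glPointSize(computedSize);          // computedSize = zoom-dependent value (placeholder)

         // Path B: point size written per vertex in the shader.
         glEnable(GL_PROGRAM_POINT_SIZE);    // without this, gl_PointSize writes are ignored

         const char* pointVs = R"(
             #version 330 core
             layout(location = 0) in vec3 position;
             uniform mat4 mvp;
             uniform float pointSize;        // same zoom-dependent value as computedSize
             void main()
             {
                 gl_Position  = mvp * vec4(position, 1.0);
                 gl_PointSize = pointSize;   // the value that may shrink below one pixel
             }
         )";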
  6. Hello, you may want to have a look at this: http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/point-lights/ The names are a little different there because it is OpenGL 3.3, where you don't have gl_NormalMatrix and so on by default. The shader there also uses ambient and specular light, but the basics for the diffuse part are what you want to do. I am sorry, right now I don't have the time to look deeper into your problem. If nobody tells you what exactly the problem is, I will look into this after work.
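     The diffuse part mentioned above comes down to a single N·L term. A minimal sketch of such a fragment shader for a point light, loosely following the structure of the linked tutorial; all names (fragNormal, fragPosition, lightPosition, diffuseColor) are placeholders, not the poster's code:

         const char* diffuseFs = R"(
             #version 330 core
             in vec3 fragNormal;          // interpolated normal (world or eye space)
             in vec3 fragPosition;        // fragment position in the same space
             uniform vec3 lightPosition;  // point light position in that space
             uniform vec3 diffuseColor;   // material diffuse colour
             out vec4 outColor;
             void main()
             {
                 vec3 n = normalize(fragNormal);
                 vec3 l = normalize(lightPosition - fragPosition);  // direction towards the light
                 float nDotL = max(dot(n, l), 0.0);                 // Lambertian term
                 outColor = vec4(diffuseColor * nDotL, 1.0);
             }
         )";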
  7. Hello, do you have backface culling activated? That would look like:
         glEnable(GL_CULL_FACE);
         glCullFace(GL_BACK);
     GL_BACK is the initial value, so it doesn't have to appear in your code, but it would be good to know whether you use glEnable(GL_CULL_FACE). I am asking this because even the second order of vertices should not give you the right result if backface culling is enabled and glFrontFace(GL_CW) is not used. glFrontFace sets which winding order counts as front-facing in OpenGL, and its default is GL_CCW, so by default the order of vertices should be counter-clockwise. In your second example they are clockwise, which would mean that the quad is oriented away from the screen (viewer), and if backface culling is enabled you should not see anything. (Backface culling means that only the front face is rendered and visible.) I think the right order should be, as an example:
         glBegin(GL_QUADS);
         glVertex2f(-0.5f, -0.5f); // 0
         glVertex2f(-0.2f, -0.5f); // 1
         glVertex2f(-0.2f, -0.2f); // 2
         glVertex2f(-0.5f, -0.2f); // 3
         glEnd();
     Every other counter-clockwise order is fine too. Even if the quad is divided into two triangles (which is the only way in modern OpenGL, where GL_QUADS is deprecated and removed), both of them, (0,1,2) and (2,3,0), have the right winding; see the sketch below. I have no clue why the first result looks the way it does, and I haven't found a specification of how exactly GL_QUADS works. What I would do is enable backface culling and then try it, because then you are learning it the right way and you don't have to struggle again when you need backface culling later. And like DiegoSLTS suggests, I would use triangles, because this is the way to go in OpenGL 3.0 and higher. I know that was perhaps a lot of information and I did not explain everything in depth, please feel free to ask.
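     A small sketch of the same quad split into the two counter-clockwise triangles (0,1,2) and (2,3,0) mentioned above, still in immediate mode for brevity (in a 3.3 core context the same six vertices would go into a vertex buffer instead):

         // The quad as two triangles, both wound counter-clockwise, so it stays
         // visible with glEnable(GL_CULL_FACE) and the default glFrontFace(GL_CCW).
         glBegin(GL_TRIANGLES);
             // triangle (0, 1, 2)
             glVertex2f(-0.5f, -0.5f); // 0
             glVertex2f(-0.2f, -0.5f); // 1
             glVertex2f(-0.2f, -0.2f); // 2
             // triangle (2, 3, 0)
             glVertex2f(-0.2f, -0.2f); // 2
             glVertex2f(-0.5f, -0.2f); // 3
             glVertex2f(-0.5f, -0.5f); // 0
         glEnd();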
  8. I just took a glimpse into the code. The first thing that caught my attention was
         int l_Major = glfwGetWindowAttrib(l_Window, GLFW_CONTEXT_VERSION_MAJOR);
         int l_Minor = glfwGetWindowAttrib(l_Window, GLFW_CONTEXT_VERSION_MINOR);
         int l_Profile = glfwGetWindowAttrib(l_Window, GLFW_OPENGL_PROFILE);
         printf("OpenGL: %d.%d ", l_Major, l_Minor);
     followed by:
         glBegin(GL_QUADS);
         ......
         glEnd();
     According to the screenshot you are initializing GLFW with OpenGL 4.4 (or GLFW does that for you; look into glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR / GLFW_CONTEXT_VERSION_MINOR, x) to set it manually), but glBegin(GL_QUADS) is deprecated and was removed from the core profile, so it no longer exists in 3.3 core. I am not completely sure why it displays anything at all. I would either request an OpenGL version where glBegin is not deprecated or removed, or alter the program so that it is written in an OpenGL 3.3 (or newer) way; see the sketch below. Another thing is
         if (l_Profile == GLFW_OPENGL_COMPAT_PROFILE)
             printf("GLFW_OPENGL_COMPAT_PROFILE\n");
         else
             printf("GLFW_OPENGL_CORE_PROFILE\n");
     I think there might be one ";" too many in there, or the else case is not evaluated at all; otherwise you should see one of those prints in your console, and it would show whether or not the profile is GLFW_OPENGL_COMPAT_PROFILE. I am not sure that this is the main problem, but it is worth giving it a try. Hope this helps a little bit, otherwise please ask.
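     A small sketch of requesting a context in which those immediate-mode calls are still valid, assuming GLFW 3.x; the version numbers and window title are placeholder choices, not taken from the original program:

         // Either request an old context where glBegin/glEnd still exist ...
         glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
         glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);

         // ... or request a modern version together with the compatibility profile,
         // which keeps the removed functions available (driver support permitting):
         // glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
         // glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
         // glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_COMPAT_PROFILE);

         GLFWwindow* l_Window = glfwCreateWindow(640, 480, "Quad test", nullptr, nullptr);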
  9. Hello, thanks again for your advice. Oh, I forgot the light colour in the formula, of course it has to be in there. According to Andreas Kirsch and his annotations paper http://blog.blackhc.net/wp-content/uploads/2010/07/lpv-annotations.pdf, the 1/6 comes from the approximated solid angle of one surfel of the texture viewed with a 90 degree field of view. This gives us for the solid angle approximately 4*Pi/6 * 1/(rsmWidth*rsmHeight); the 4*Pi then cancels, because the equation up to that point is: outgoing flux of an RSM surfel = diffuse material colour * rho/(4*Pi) * total flux * cos(theta), with rho being the solid angle described before. I saw other formulas for the RSM flux computation as well, but this one was well documented and makes sense to me; if there are other recommendations I would love to hear them.
     Ok, that sounds like the way I do it at the moment: evaluate the spherical harmonics basis functions in the direction of the negative normal and take the dot product with the coefficients stored in the corresponding volume cell (see the sketch below). The reason I was thinking about changing this is that the Crytek paper points out that half the cell size should be used to convert intensity to incident radiance: "However, since we store intensity we need to convert it into incident radiance and due to spatial discretization we assume that the distance between the cell’s center (where the intensity is assumed to be), and the surface to be lit is half the grid size s." (Cascaded Light Propagation Volumes for Real-Time Indirect Illumination, Kaplanyan and Dachsbacher). With grid size they mean cell size. And in a way I would like to use the cell size somewhere in the process, because it feels wrong not to use it, but as said, I think I already do it the way you recommend.
     You are welcome, and I really appreciate your hints. The reason I am thinking so much about the theory is that I want to understand it completely; whether I then implement it is another question. I already have some factors I can tune during runtime, and you are right that they can make the scene visually better. Do you have any thoughts about the G-buffer occlusion injection? At the moment I do it with the squared distance to the camera and one factor I can set myself during runtime. Thanks a lot
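     A rough sketch of that evaluation step: evaluate the first two SH bands in the direction of the negative surface normal, dot them with the stored coefficients, and convert the resulting intensity to irradiance with r = half the cell size. All types and names are placeholders, and the SH sign convention must match the one used at injection time:

         #include <cmath>

         struct Vec3 { float x, y, z; };
         struct SH4  { float c[4]; };                    // 2-band SH coefficients, one colour channel

         // First two SH bands evaluated in direction d (d must be normalized).
         SH4 shEvaluate(const Vec3& d)
         {
             const float c0 = 0.282095f;                 // sqrt(1/(4*pi))
             const float c1 = 0.488603f;                 // sqrt(3/(4*pi))
             return SH4{ { c0, -c1 * d.y, c1 * d.z, -c1 * d.x } };
         }

         // Intensity towards -N from the stored coefficients, then E = I / r^2
         // with r = half the cell size, as suggested by the Crytek paper for rendering.
         float indirectIrradiance(const SH4& cell, const Vec3& normal, float cellSize)
         {
             Vec3 dir{ -normal.x, -normal.y, -normal.z };
             SH4 basis = shEvaluate(dir);

             float intensity = 0.0f;
             for (int i = 0; i < 4; ++i)
                 intensity += cell.c[i] * basis.c[i];
             intensity = std::fmax(intensity, 0.0f);     // clamp negative SH reconstruction

             float r = 0.5f * cellSize;
             return intensity / (r * r);
         }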
  10. Ok, I work with Light Propagation Volumes at the moment, and there are some things in the theory I don't know how to implement correctly. This is from http://blog.blackhc.net/wp-content/uploads/2010/07/lpv-annotations.pdf
      Let's start with the reflective shadow map and the flux which should be stored in it. I have a "directional" light source with an arbitrary light colour, a total incoming flux, a diffuse material colour, the width and height of the reflective shadow map texture, and an angle theta between the light direction and the normal. My flux for one surfel of the map should then be: fluxOut = diffuse material colour * 1/6 * 1/(rsmWidth*rsmHeight) * total flux * cos(theta). The injection of light then is: for every surfel of the reflective shadow map, take the flux, divide it by Pi, and add up the values in the appropriate cell of the light volume. Then you have an intensity function in each cell.
      Ok, now we have the occlusion injection, or in my case two of them, one from the reflective shadow map and one from the G-buffer. In the paper above I believe only the injection from the reflective shadow map is described; the formula there is surfelArea = 4.0*distance*distance/(rsmWidth*rsmHeight), where distance is the distance from the light source to the object. That the area grows with the square of the distance is clear, but where does the 4 come from? And for the G-buffer injection I am also wondering what the right way to do it is; the difference is that one is an orthographic projection and the other (the G-buffer) a perspective one.
      Propagation is clear to me: I compute which amount of flux comes through each face of the 6 neighbour cells, reproject this amount, and divide it by Pi to get an intensity. This is just the short version, but as I said, this part is clear to me.
      Now I have the volume with the propagated light and I want to use it for rendering the indirect light into the scene. The question here is: I have intensity (I) functions in my volumes, and what I need for rendering is the radiance (L). The relation is L = I/A, so I need either the area of the fragment (or whatever I am rendering), or I can use the intensity to compute the irradiance E: E = I/r². The LPV paper from Crytek says that they use half the cell size as r, so with that I can come up with the irradiance, but what do I do then? I mean L = E/w (with w the solid angle, but what is the solid angle in that case)?
      If someone has some explanations I would appreciate it, or if someone has questions regarding the described theory, feel free to ask. Thanks
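      A compact sketch of the flux and injection step described above, written as CPU-side helpers for illustration (in a real implementation this runs in a shader); every name here is a placeholder:

          struct Vec3 { float x, y, z; };

          // Outgoing flux of one RSM surfel, following the formula from the post:
          // fluxOut = diffuseColor * 1/6 * 1/(rsmWidth*rsmHeight) * totalFlux * cos(theta)
          Vec3 surfelFlux(const Vec3& diffuseColor, float totalFlux, float cosTheta,
                          int rsmWidth, int rsmHeight)
          {
              float c = cosTheta > 0.0f ? cosTheta : 0.0f;
              float scale = (1.0f / 6.0f) / float(rsmWidth * rsmHeight) * totalFlux * c;
              return Vec3{ diffuseColor.x * scale, diffuseColor.y * scale, diffuseColor.z * scale };
          }

          // Injection: the surfel flux divided by pi is the intensity contribution that is
          // accumulated into the SH coefficients of the volume cell the surfel falls into.
          float intensityContribution(float flux)
          {
              const float pi = 3.14159265358979f;
              return flux / pi;
          }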
  11. Sorry, I am not sure that I understand what you are saying. In the first pictures I posted I had no overlapping cascades (the big one is left empty where the smaller one fits in, and the propagation also accounts for this), and I had propagation between cascades in it. The new picture uses overlapping cascades (way easier to implement), where each cascade propagates independently. Then for rendering I took (3*Cascade1Value + 2*Cascade2Value + Cascade3Value)/6 for a position in the finest cascade. I know that summing something up there is physically completely wrong, I just tried it to see the visual result. At the moment I am just trying out how well this cascade approach works, but I really appreciate your ideas.
  12. Yes, you are right more or less: I look up which is the finest cascade at the current position and use that one. I tried your suggestion and scaled the radiance from the biggest cascade by 3, the middle one by 2 and the smallest by 1. This results in the following image: It is in a way not as bad as before, but not really great. I could play with the scale factors and so on. However, you also seem to agree that the base problem is there and that it might be difficult or impossible to overcome it without "cheating" a little bit. Other ideas, anyone?
  13. Yes, I use the same number of iterations (propagation steps) for each cascade. And yes, what you are saying about the obvious fact that light "goes further" with larger cells is exactly what I would expect. Most of the theses about LPVs, and most implementations, do not consider cascades (maybe for this reason?). But the original ones from Crytek just gave me the feeling that I am missing something important and that it should just look fine, or at least not as bad as it did with my implementation. Besides that, the Crytek paper mentions that they convert the intensity of the spherical harmonics to incoming radiance by using half of the cell size, but only for rendering the result, not in the propagation, so this cannot be the solution to the problem. When I find something new about this I will post it here. I will try it with the Sponza scene in the next days; at the moment I think that with centering the cascades around the player, or maybe selecting the cascade to use with caution (as an example, first evaluating whether the complete object is inside that cascade, etc.), it might not be terrible. Thank you for your thoughts and the acknowledgement that I am not completely crazy.
  14. Hello, at the moment I am working on implementing cascades for my Light Propagation Volumes implementation, and I keep having "problems" with the different cell sizes and the results produced by them. First a picture of the problem (I only draw indirect light here, the bottom plane is white and the light comes from straight above): As you can see, the 3 different cascades are clearly visible. And now a picture where I draw spheres for each cell, shaded with the spherical harmonic of that cell (for the cascade with the smallest cells and the one with the biggest cells): Here you can see where the problem is. In both displayed cascades the light comes from straight above, and therefore the cells near the bottom are the brightest; that's fine. (The blue line is the bounding box of the cascade with the smallest cell size.) In both cascades the first 1 to 3 or 4 levels of cells (from bottom to top) receive a considerable amount of light. The problem is that the cell size is different, and therefore, with greater cell size, the light seems to travel much further than with small cells.
      To me the cause is clear, but I really have no idea how to properly solve it. In the papers about cascaded light propagation volumes they don't seem to have this problem, or, because the centers of all cascades are usually centered around the camera (sometimes with some offset), it might just not be as obvious; but I don't really believe that, they must do something differently. Another way would be to use the cell size in the propagation of light and try a quadratic or some other kind of falloff, but even then you would see a difference just because of the different positions of the brightest cells. My feeling is that there is no real solution, just some fixes which make this a little smoother, like interpolation between cascade borders (see the sketch below). So far I have tried handling each cascade independently, and in another approach I even propagate light between cascades, which is a little bit better, but the described problem is clearly visible in both cases.
      If someone has any thoughts or ideas I would really appreciate it. Thank you Marc
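      One possible shape of the "interpolation between cascade borders" mentioned above, as a rough CPU-side sketch: pick the finest cascade that contains the point and fade towards the next coarser one near its border. The cascade layout, the blend width and all names are assumptions, not taken from the poster's code:

          #include <algorithm>
          #include <cmath>

          struct Vec3 { float x, y, z; };

          struct Cascade
          {
              Vec3  center;      // cascade center (e.g. following the camera)
              float halfExtent;  // half the side length of the cascade's cube
              // ... SH volume data would live here ...
          };

          // 0 well inside the cascade, rising to 1 at its border; used as the blend
          // weight towards the next coarser cascade so the seam is not a hard line.
          float borderBlend(const Cascade& c, const Vec3& p, float blendWidth)
          {
              float dx = std::fabs(p.x - c.center.x);
              float dy = std::fabs(p.y - c.center.y);
              float dz = std::fabs(p.z - c.center.z);
              float d  = std::max({ dx, dy, dz });                  // Chebyshev distance to the center
              float t  = (d - (c.halfExtent - blendWidth)) / blendWidth;
              return std::clamp(t, 0.0f, 1.0f);
          }

          // Indirect light for a point: radiance from the finest cascade containing it,
          // cross-faded with the next coarser cascade near the border.
          // radianceFromCascade() stands in for the SH evaluation of one cascade.
          float indirectLight(const Cascade* cascades, int count, const Vec3& p,
                              float (*radianceFromCascade)(const Cascade&, const Vec3&))
          {
              for (int i = 0; i < count; ++i)                       // cascades ordered fine -> coarse
              {
                  float w = borderBlend(cascades[i], p, 0.1f * cascades[i].halfExtent);
                  if (w < 1.0f)
                  {
                      float fine   = radianceFromCascade(cascades[i], p);
                      float coarse = (i + 1 < count) ? radianceFromCascade(cascades[i + 1], p) : fine;
                      return fine * (1.0f - w) + coarse * w;
                  }
              }
              return 0.0f;                                          // outside all cascades
          }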