About dingojohn

  1. [quote name='mrbastard' timestamp='1346259877' post='4974489'] Happy to help. I highly recommend Alexandrescu's book too. I'm beginning to wonder if there's enough interest in EoP on gdnet for some kind of informal study group... [/quote] So far I've been printing the newest set of notes from Stepanov's webpage, reading through them, trying things out in C++11 (where some features he asks for have been implemented properly) and waiting for EoP to arrive in the mailbox. A study group would indeed be a welcome addition! How would one go about setting that up? I must say that so far I'm very impressed with Alex Stepanov's writings, and to me the book seems like it could well be the perfect step after just finishing Effective STL, which is another must-have. I actually found out that I've already read a lot of Alexandrescu's book at work, and while it is good, Effective STL and Stepanov's notes so far have made a bigger impression on my way of dealing with structures and algorithms in C++. Cheers!
  2. Hi all Thank you very much for the time you put into this, and sorry for my late answer. Elements of Programming indeed looks like it might be the book I'm looking for - I'm fairly sure it is. Modern C++ Design, however, also looks like a very decent book - I might just take that as a bonus and have a more thorough look at it. Again, thanks for your time; it's highly appreciated.
  3. Hi all I hope you can help me out with some detective work I cannot figure out myself. I remember reading about a book here on gamedev.net a few years ago and now I cannot find it on google. Back then it was hailed as a programming book that was advanced, but one that really opened the eyes of many, and the posters here on gamedev.net saw it as a must-read for everyone on their team. The topic was general programming from a mathematical perspective: the author(s) defined functions and general objects, everything was performed generically, and it was implemented in what I recall as being C++. The first chapter was available online. I might have mixed something up, but I think that the author(s) had an Eastern European sounding name. It was not about implementing algebraic structures or doing math in programming, but about using mathematical theory as an approach to better your generic designs, if I recall correctly. I know it's a long shot, but I really hope someone might recall a book like that being mentioned here (or elsewhere). Thanks in advance! (I hope this is the right place to put the post..)
  4. Thanks for the info. There's flying some rate++ towards you! I'll try this now and hopefully everything will work out brilliantly. Thanks a bunch!
  5. Yeah, that was what I was thinking. However, saying it "solved" my depth problems is not exactly right, as I still have no clue how I am going to utilize the texture as the depth buffer for the second pass, since I am not using MRT for that, due to the fact that I render to the window. Depth testing, which I have to use for the stencil generation, works against the current depth buffer unless I am mistaken. In this case, as I do not use MRT, that would be the window's own depth buffer, and not my custom texture that I just rendered to in order to use the info to reconstruct position. I might be missing something quite simple, but I'm still not getting anywhere.
  6. Thanks for the help - either I am not getting you or you are not getting me, so I'll try again. In my first geometry pass I do the following: I render to an MRT with a GL_TEXTURE_RECTANGLE as GL_DEPTH_ATTACHMENT and a few buffers as GL_COLOR_ATTACHMENTn. I just let OpenGL store its own depth in the texture, as I do not write to gl_FragDepth. That is, OpenGL stores its own hyperbolic depth just as usual, but in a texture I have attached. In the second pass I do the following: bind my color buffer textures for normals etc., AND bind my depth buffer texture so I can fetch data from it, as I need to reconstruct position. However, before reconstructing the position I'd like to do a pass (or two) where I only write to the stencil buffer so I can use that to discard pixels, so that I do not run my expensive light shader more than needed. For this I need to utilize the depth that is written to the texture to do depth testing against back faces of the light volume (and perhaps also front faces), but I'm not sure how I can do that, as for the second pass I draw directly to the window's default framebuffer. Did that make it clearer? Thanks so far.
  7. So far I am rendering to a depth buffer attached as a GL_TEXTURE_RECTANGLE_ARB, and then, for the lighting pass, retrieving the information as non-linear depth, converting it, and everything is fine. However, I would like to know how I can reuse the depth texture for standard depth testing (GL_DEPTH_TEST) afterwards, once I have unbound the framebuffer object in order to draw to the window framebuffer. Any help is appreciated.
  8. Thanks. If I write depth info to a texture attached as a depth attachment in my geometry pass, how will I then be able to reuse that particular texture as depth buffer for z-fail testing afterwards? I know how to sample it, but would like the glEnable(GL_DEPTH_TEST) to work. :) Thanks so far.
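     A common answer to the question in posts 5-8 (a sketch, not necessarily the poster's exact setup, and it requires a live GL 3.0+ context or GL_EXT_framebuffer_blit) is to copy the FBO's depth into the default framebuffer with glBlitFramebuffer, after which ordinary glEnable(GL_DEPTH_TEST) works in the window-framebuffer pass. The names `gbufferFBO`, `width` and `height` below are placeholders:

```cpp
// Sketch: copy the geometry pass's depth from the G-buffer FBO into the
// default framebuffer, so the stencil/light-volume passes can depth-test
// against it without MRT. Requires an active OpenGL context.
glBindFramebuffer(GL_READ_FRAMEBUFFER, gbufferFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);           // 0 = default framebuffer
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);  // depth blits must use GL_NEAREST
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// From here: glEnable(GL_DEPTH_TEST) + glDepthMask(GL_FALSE) lets the
// stencil-only pass test light-volume faces against the copied depth.
```

     Note that both framebuffers need compatible depth formats and matching sample counts for the blit to succeed.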
  9. Thanks - hyperbolic depth might have been the keyword I was looking for when describing the situation. I have read those articles, but thanks for the link anyway. If I deciphered your message correctly, you recommend that I just use the hyperbolic depth buffer and then reconstruct the position from that, given that it provides me with enough precision. Then, for the second part: yes, I am using OpenGL. For reading back from the depth buffer, can you read the depth buffer directly, or do I need to attach a texture to a depth attachment, i.e.:

     glBindTexture(GL_TEXTURE_2D, depth);
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depth, 0);

     Or is it possible to read from the "default" depth buffer where I do not add a depth attachment? (glReadPixels is slow, so I guess another way ;) ) What if I use a renderbuffer as a depth attachment, i.e.:

     glGenRenderbuffers(1, &fbo.depth);
     glBindRenderbuffer(GL_RENDERBUFFER, fbo.depth);
     glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, fbo.width, fbo.height);
     glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, fbo.depth);

     I'm not quite sure what the driver does internally. Does it always reserve RAM for the "standard" depth buffer, and does adding my own as either a texture or renderbuffer have any cons?
  10. Hi, I'm currently working on a deferred renderer and now trying to optimize the lighting calculation. The way I have done it so far is to use a color attachment to store the depth value linearly, which I then use to reconstruct position. Then I figured that I need to do depth testing for a more efficient lighting calculation - or rather, to avoid doing the calculations at all. However, early-z only works with the default depth buffer as far as I understand, and thus I have to duplicate the depth information, stored in two different ways. How do you normally solve this? Use the default depth buffer and let OpenGL write to that itself (possibly flipping far and near for more precision), or do you simply save the info in a dedicated color attachment? If you save it in a depth attachment, do you do it as a texture or as a renderbuffer? Can you do depth testing on an MRT with a texture attached as the depth attachment, and is there a performance penalty for this? (The way I currently store depth is the negated view-space z value divided by zfar, and I figure I'd need to fetch it quite differently if I got it as z/w after the perspective transformation from the standard depth buffer.) Thanks in advance. Tell me if I need to elaborate on anything.
  11. Is a graphics engine necessary?

    Have you thought about just modding a current FPS game? Many of the big titles offer tons of customizability. It would probably be the easiest way to get your feet wet.
  12. Walking on terrain in an FPS?

    I'd suppose the easiest and most straightforward way would be to link the camera to a player which has a bounding box that is checked for collisions each frame. In a scene graph that would correspond to the camera being a child of the player, which has a bounding box (or something equivalent that does the same trick). Keeping the camera above the terrain is then "just" a question of doing collision detection and response between the bounds and the terrain.
  13. Q3 BSP Rendering and Vertex Buffers

    I now have much better performance with my VBOs; better performance than without. I'm storing one large index buffer and one large vertex buffer, but the index buffer is modified from the raw data of the .bsp file. I altered it so that it now indexes the vertices from 0 instead of from the start vertex given at a surface, meaning that I do not have to rebind anything or call glXXXPointer more than once. The offset into the index buffer is provided at the glDrawElements call, a call that I still make per surface. I can now look into the proposed ideas of altering my index buffer whenever I move to a new cluster, knowing that the VBOs are performant enough. Thanks to anyone who helped me arrive at this, rating++.
  14. Q3 BSP Rendering and Vertex Buffers

    Currently, when I have one large vertex buffer and one large index buffer, I have to set up my glVertexPointer, glNormalPointer etc. for every different object, done with an offset. I have thought about making it one large draw call that renders everything, but that would require me to modify the index buffer every time I update the frame. Do you suggest I do that and keep the index buffer as read/write, modifying it and its size whenever the origin and frustum change? glDrawRangeElements performs just the same as glDrawElements here; I used it for testing purposes.
  15. Q3 BSP Rendering and Vertex Buffers

    Quote:Original post by Krypt0n quake levels don't have that many polys, the PVS is mainly for reducing overdraw, so you could fill your VBO just when you move (changing visibility sets) and in all other frames just push the same VBO data. that's probably the fastest way on today's hardware. like "Ohforf sake" mentioned, cpu<->gpu syncs cost you performance, some brute-force vertex pushing is way faster. So you'd have me fill the VBO every time I update the position or move the view (because of frustum culling and a changed frustum)? What I did earlier was just to mark whether or not something was potentially visible, and I only updated those markings if the camera had moved, which indeed gave me a performance increase. However, that was still using glDrawArrays without VBOs. The VBOs are slowing my program down, with or without culling.