harveypekar

Member
  • Content count

    136

Community Reputation

219 Neutral

About harveypekar

  • Rank
    Member
  1. Dumping every frame to an image file

    Two ways:
    - Use a video encoder on the GPU.
    - If your game is deterministic (which is a good idea anyway), force the framerate and replay the input. That way you could render to a much larger framebuffer and capture at 120 Hz.
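The second option boils down to a fixed-timestep loop that replays recorded input, so every run produces identical frames regardless of wall-clock speed. A minimal sketch, assuming hypothetical `applyInput`/`update`/`renderFrame`/`dumpFramebufferToPNG` hooks in your own engine:

```cpp
#include <cstdio>
#include <vector>

struct InputEvent { int frame; int key; };

// Steps the simulation at a forced 120 Hz, feeding back recorded input.
// Returns the final simulated time so a caller can verify determinism.
double captureRun(const std::vector<InputEvent>& recorded, int totalFrames) {
    const double dt = 1.0 / 120.0;          // forced 120 Hz step, not wall clock
    double simTime = 0.0;
    size_t next = 0;
    for (int frame = 0; frame < totalFrames; ++frame) {
        // Replay every input event recorded for this frame number.
        while (next < recorded.size() && recorded[next].frame == frame) {
            // e.g. applyInput(recorded[next].key);  (engine-specific)
            ++next;
        }
        simTime += dt;
        // update(dt); renderFrame(); dumpFramebufferToPNG(frame);  (engine-specific)
    }
    return simTime;
}
```

Because nothing depends on real time, two runs over the same recording are bit-identical, which is exactly what makes offline high-resolution capture possible.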
  2. G Buffer

    Have you considered doing pre-pass lighting? It's basically the same as what you conclude, but for all objects in the scene. It uses fewer render targets (usually just one, if you have hardware support for reading the depth buffer) than classic deferred rendering, and also enables more variety in the materials (which is roughly the problem you're having). Drawing the emissive mesh a second time is probably a nice approach (if there aren't too many of them), as it gives you the most flexibility. Imagine extruding the triangles and using some fancy-pants shader: you could probably get a volumetric glow rather than a flat emissive.
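On the "fewer render targets" point: one reason pre-pass lighting can fit in a single small G-buffer target is that depth comes from the hardware depth buffer and you only store the normal's x/y, reconstructing z in the lighting pass. A minimal sketch of that packing, assuming view-space normals face the camera (z >= 0), which is the usual approximation and breaks on extreme grazing angles:

```cpp
#include <algorithm>
#include <cmath>

struct Packed { float x, y; };

// Store only x and y of a unit view-space normal, remapped [-1,1] -> [0,1].
Packed packNormal(float nx, float ny, float nz) {
    (void)nz;                        // z is dropped and reconstructed later
    return { nx * 0.5f + 0.5f,
             ny * 0.5f + 0.5f };
}

// Rebuild z from the unit-length constraint: z = sqrt(1 - x^2 - y^2).
void unpackNormal(Packed p, float& nx, float& ny, float& nz) {
    nx = p.x * 2.0f - 1.0f;
    ny = p.y * 2.0f - 1.0f;
    nz = std::sqrt(std::max(0.0f, 1.0f - nx * nx - ny * ny));
}
```

In a real implementation the same math lives in the G-buffer and lighting shaders; this C++ version just shows the round trip.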
  3. Dirty hack: just scale the checker piece on one axis until it shrinks to a line and grows back mirrored: Circle -> Oval -> Line -> Oval -> Mirrored circle. It's a dirty hack, but you might get away with it.
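A minimal sketch of the hack: drive the scale on one axis with a cosine, so it passes through zero (the edge-on "line") and comes back negative, which mirrors the piece. The function name and parameterization are illustrative:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// t runs from 0 (face up) to 1 (flipped over):
// scale goes 1 -> 0 (edge-on line) -> -1 (mirrored face).
double flipScaleX(double t) {
    return std::cos(t * kPi);
}
```

Applied as the X (or Z) component of the piece's scale matrix each frame, this fakes a 3D flip without any actual rotation.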
  4. Inspecting Visual C++ exe size

    Thanks guys, you remind me of why this forum is awesome. @jewzorek You're absolutely right; in the end this will run on a mobile device, so space is critical. I just wanted to inspect it so I can see where everything is going. Exe size does sometimes matter on desktop, too: smaller code can run faster than code with fewer arithmetic instructions, because the instruction cache isn't thrashed. That's loosely related to overall exe size (it only matters in hot loops, though).
  5. Hey everyone, I want to reduce my exe size. My project pulls in a large lib (LLVM). Before I start amputating at random, I would like to have an idea of what takes up the most space. Is there any tool/script that can get some meaningful numbers for me? I know I can generate a .MAP file, but I can't seem to relate it to footprint. EDIT: already got it: [url="http://aras-p.info/projSizer.html"]http://aras-p.info/projSizer.html[/url] [url="http://code.google.com/p/mapfile/"]http://code.google.com/p/mapfile/[/url]
  6. Visualize purposes of graphics pipeline phases

    Step through an existing application with PIX/gDEBugger/PerfHUD... They should show you some nice intermediate data.
  7. Sub Frustum Rendering

    Honestly, I have no idea what you're trying to do. Are you frustum culling? Occlusion culling? Ray tracing?
  8. Order of post processing pipeline

    - HDR isn't a pass. Tonemapping is the part of HDR that's implemented as a post-process.
    - You're more or less "correct" already, as you should do most effects in HDR. You'll pay for it, though.
    - To fold passes together, just look at it case by case (we can't say exactly how you'll implement things). Vignetting/gamma correction can probably be stuffed into another pass. But since all your convolutions depend on each other, you'll have to store their results somewhere.
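For concreteness, here is one common ordering consistent with the advice above: HDR-dependent work first, then tonemapping, with the cheap LDR tweaks (vignette + gamma) folded into a single final pass. The pass names are illustrative, not prescriptive:

```cpp
#include <string>
#include <vector>

// One plausible post-process chain; each convolution writes to its own
// target because later passes read its result.
std::vector<std::string> postProcessOrder() {
    return {
        "bloom_bright_pass",   // needs HDR input to find hot pixels
        "bloom_blur",          // convolution: results must be stored
        "bloom_composite",     // adds blurred bloom back onto the scene
        "tonemap",             // HDR -> LDR; the actual "HDR" step
        "vignette_and_gamma",  // two per-pixel LDR ops folded into one pass
    };
}
```

The key constraint is only the relative order: anything that should glow or bloom has to happen before tonemapping compresses the range.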
  9. Questions about multiple viewports

    Thanks, I got it working with multiple contexts now. I ended up writing my own custom OpenGL widget for Qt, because Qt doesn't really allow easy sharing with external contexts.
  10. Hey everyone, sorry about asking a question that should be clear from the documentation, but right now I'm tired of banging my head against the wall. Anyway, I'm implementing a GUI app that uses GL for multiple widgets, on Windows but aiming for cross-platform later. I use Qt, but decided to prototype further using my own widget, as Qt borks as soon as I try to implement global resources (shaders et al.), and right now I'm just trying to understand what's going on. From digging through the anecdotal evidence online, I realize I have two options: either use only one context (created on the main window handle) and call glViewport on rendering, or use multiple contexts and use wglShareLists to share shaders, textures, and VBOs. I'm very inexperienced in GL, so I'm unable to judge the pros and cons of either way. Can anyone tell me what the standard way of doing this would be?
  11. OpenGL glClear locks system

    Run it through gDEBugger (free). Does it give you any warnings/errors?
  12. Passes in a real world game

    [quote name='IMYT' timestamp='1302349029' post='4796290'] Hi everyone: I have no experience in game developing so I want to ask you people a very simple question: How many passes do people actually use in the rendering of a real world game like starcraft2 and Crysis (If all the special effects supported are used)? [/quote] Define "passes". Full-screen post-processing passes? Forward rendering passes? DrawIndexedPrimitive calls per frame? If you really want to deconstruct existing games, try running them through GPU profiling tools like gDEBugger, PIX or PerfHUD.
  13. [quote name='mancubit' timestamp='1301950569' post='4794367'] quite simple but impressive shader [url="http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html"]Volumetric Light Scattering as a Post-Process[/url] [/quote] How about procedural stuff? Wood shader, heightfield terrain with procedural vertex displacement.
  14. Debugging a release build. Joy!
  15. Vertex and Texture arrays

    Please, please, please use vectors. You will get fewer bugs, I promise. It has bounds checking and all that. You can predeclare a vector's size at construction if you're worried about data copying. It does most of its error checking in debug builds, and you can turn it off in release mode. Read up on the reference [url="http://cplusplus.com/reference/stl/vector/vector/"]here[/url] Also, do it with the struct now. It's more readable.
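A minimal sketch of the struct-plus-vector suggestion, with reserve() addressing the copying worry; the Vertex layout is just an example:

```cpp
#include <vector>

// One struct per vertex instead of parallel position/texcoord arrays.
struct Vertex {
    float x, y, z;    // position
    float u, v;       // texture coordinates
};

std::vector<Vertex> makeQuad() {
    std::vector<Vertex> verts;
    verts.reserve(4);                 // one allocation up front, no copies
    verts.push_back({0, 0, 0, 0, 0});
    verts.push_back({1, 0, 0, 1, 0});
    verts.push_back({1, 1, 0, 1, 1});
    verts.push_back({0, 1, 0, 0, 1});
    return verts;
}
```

verts.at(i) range-checks in every build; operator[] is also checked by most standard-library implementations in debug builds, and costs nothing extra in release.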