
powly k

Member
  • Content Count

    65
  • Joined

  • Last visited

Everything posted by powly k

  1. Upload them only once, just before rendering - though the driver will probably already do this for you behind the scenes.   The more important point is how you measure your performance. If you get >1000fps with those various effects, your scene is probably too small to test on: if your rendering isn't the bottleneck, your memory bandwidth is, so you could probably throw a much more complex scene at the program and it'd run at the same speed. Another point is that measuring in fps can be misleading - a drop from 1200fps to 170fps is not that massive; the render time went from about 0.8ms to about 5.9ms. You should measure which parts take how much time - OpenGL has timer query objects for this.
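Measuring in milliseconds instead of fps makes the comparison honest. A tiny sketch of the conversion (plain arithmetic, not tied to any particular API; the function name is just for illustration):

```cpp
#include <cassert>
#include <cmath>

// Frame time in milliseconds for a given frames-per-second figure.
double frame_time_ms(double fps) { return 1000.0 / fps; }
```

1200 fps is about 0.83 ms per frame and 170 fps about 5.9 ms, so the "huge" fps drop is only around 5 ms of extra work per frame.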
  2. For 1: the default context is a compatibility profile; it should let you mix any and all OpenGL extensions and versions your graphics card supports. This often makes the code less clean; I'd recommend sticking to either the core 3.3 or 4.3 profile, depending on what hardware you have. That way you'll have much less driver-dependent behaviour to debug.
  3. powly k

    Path tracing - Direct lighting

    This is all about the probability of each path. As with all Monte Carlo integration, you have to weight each sample by the inverse of the probability of choosing it. The probability of hitting a point or directional light by random chance is exactly 0; they're a single point on a continuous hemisphere. On the other hand, such lights act as Dirac deltas: their contribution is non-zero only at a single point and infinitely large there. When integrating over the hemisphere, this means you can pull them out of the integral and sum them with the approximation of the integral, i.e. the indirect lighting.

    In a mathematical sense all surfaces and lights are equal (which makes sense - secondary or tertiary light sources are just as much light sources as primary ones), but when actually tracing the scene it pays to focus on the important stuff; this also leads to importance sampling of materials and multiple importance sampling of materials and lights together.

    A lot of these approaches produce exactly the same result - you can trace two bounces per incoming ray, for example; just keep track of your probabilities and the result will stay unbiased. The vast majority of the ideas and research on the subject don't focus on getting the result right, but on reducing variance (which is visible as noise). With bidirectional path tracing using multiple importance sampling, and materials that are importance sampled accurately, you can save orders of magnitude of rendering time and get the same level of noise in your results.
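A minimal sketch of the idea, using a 1D stand-in for the hemisphere integral (the function names and the integrand are illustrative, not from any renderer): the delta light is added analytically outside the estimator, and each random sample is weighted by 1/pdf.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Monte Carlo estimate of an integral: average f(x)/pdf(x) over samples.
// f is a stand-in for the indirect-lighting integrand.
double estimate_indirect(int n, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double x = dist(rng);   // sample drawn with pdf(x) = 1 on [0,1]
        double f = x * x;       // integrand; true integral is 1/3
        sum += f / 1.0;         // weight each sample by 1/pdf
    }
    return sum / n;
}

// The direct term from a delta (point/directional) light needs no
// integration at all - it is pulled out and simply added on top.
double total_radiance(double direct_delta, int n, unsigned seed) {
    return direct_delta + estimate_indirect(n, seed);
}
```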
  4. powly k

    Display Loop with Win32

    glFlush() tells the driver to start executing everything submitted so far; it's glFinish() that actually blocks until the rendering is complete. And you're correct: as far as I know the wait occurs in SwapBuffers() and there's nothing you can do about it - why exactly would you want to, though? The best you can do is probably to measure the frame time, and if your frames are consistently shorter than your desired frame rate allows, do some extra work in the slack. It would also go against the nature of VSYNC to be able to control it - the display refreshes at a fixed rate, and programs cannot alter it.
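A sketch of the "measure the frame, then use the slack" idea with std::chrono; do_frame_work stands in for the actual rendering and is an assumption, not Win32- or OpenGL-specific code:

```cpp
#include <cassert>
#include <chrono>
#include <thread>

// Time one "frame" in milliseconds; if it finished well before the next
// vsync, there is slack for extra work.
double measure_frame_ms(void (*do_frame_work)()) {
    auto t0 = std::chrono::steady_clock::now();
    do_frame_work();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

// Stand-in for rendering: pretend the frame takes ~5 ms.
void fake_work() { std::this_thread::sleep_for(std::chrono::milliseconds(5)); }
```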
  5. powly k

    Some ideas for a tic tac toe

    There are languages that index arrays defined for N elements as [1...N] (Matlab comes to mind) or even [0...N] (Blitz Basic).

    There is a good reason for the typical choice of [0...N-1], but it goes somewhat deep. Remember that the end result of the code is meant to run on actual hardware. In this case you want to access memory, but memory itself doesn't know the concept of an "array"; it only knows what data it holds and at what address. The important thing is that an array is really just an address in memory - more precisely, the address of its first element. To access an element there is the usual array[j] syntax, which means "give me the j-th element of 'array'", and that is the same as reading the memory at address array + j. Since array is an address in global memory and j is the index inside the array, summing them gives the location of element j in memory.

    Now it (hopefully) makes sense that the first element of "array" should live at address "array", not at "array + 1", so the first element is array[0] instead of array[1], since the index translates directly to a memory address. It ends at N-1 instead of N because we still want N elements - counting from 0 upwards until we have N elements, we reach element N-1.

    This is, of course, only a convention, but it implies other nice things, like for(int i = 0; i < N; i++) instead of for(int i = 1; i < N+1; i++); the length of the loop can be seen instantly from the end condition instead of having to remember to subtract one. It's a simple thing, but I believe the alternative would cause even more headache, especially for beginners or less enthusiastic programmers.
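This address arithmetic is directly observable in C++; a small sketch (the helper name is just for illustration):

```cpp
#include <cassert>

// array[j] is defined to mean *(array + j): the j-th element counted
// from the start, so the first element lives at offset 0.
bool zero_based_indexing_holds() {
    int data[5] = {10, 11, 12, 13, 14};
    int* base = data;                  // "array" is the address of element 0
    for (int j = 0; j < 5; ++j) {      // N elements: indices 0 .. N-1
        if (&data[j] != base + j) return false;
        if (data[j] != *(base + j)) return false;
    }
    return true;
}
```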
  6. powly k

    Programming a "TV"

    Bump mapping is a lighting effect and completely unrelated. You just want to offset where you read your texture from in the GLSL shader - where you have something like texture(textureUniform, uv); try uv *= strength; texture(textureUniform, vec2(uv.x*sqrt(1.0 - uv.y*uv.y*0.5), uv.y*sqrt(1.0 - uv.x*uv.x*0.5)) / strength); or some other similar square-to-disc mapping instead.
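The mapping itself is easy to sanity-check on the CPU; here is the same formula in C++ (Vec2 and the function name are illustrative, not from the post):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

// The square-to-disc mapping from the snippet above: each coordinate is
// scaled by sqrt(1 - other^2 / 2), so [-1,1]^2 maps into the unit disc.
Vec2 square_to_disc(Vec2 uv) {
    return { uv.x * std::sqrt(1.0 - uv.y * uv.y * 0.5),
             uv.y * std::sqrt(1.0 - uv.x * uv.x * 0.5) };
}
```

The corner (1, 1) lands on (sqrt(0.5), sqrt(0.5)), exactly on the unit circle, while points on the axes are left untouched - which is what gives the curved "CRT" look.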
  7. What? Last time I checked, the Red Book was about what people did in the '90s.   http://www.opengl-tutorial.org/ should be a nice introduction to OpenGL if you already generally know what's going on but want to learn the API.
  8. powly k

    Is SSAO view dependent?

    The last few paragraphs of this post explain the reconstruction from the Z buffer quite nicely, but go ahead and show a bit of your code if you're still having trouble. Your second picture looks just like typical SSAO, so your problem is indeed most likely in the reconstruction.   The real problem here is that corners don't look like that - SSAO is a screen-space trick and doesn't really resemble real-world lighting too much. If you tone it down enough that it's not black noisy bars but a slight darkening in some places, it can be a nice artistic touch.
  9. Just do more draw calls then - that really is the simplest and very probably the most efficient solution. Keep all the fully opaque stuff in one batch, though. Otherwise you could reorder your objects (read their coordinates and texture atlas IDs from an attribute buffer or a lookup texture or something). There are tricks to do actual order-independent transparency too, but they need relatively new hardware and/or are very computationally demanding.
  10. I would definitely try adding 0.5*PixelSize to your UV coordinates - sampling at texel edges tends to be problematic. I'd also play around with the size of hpix, at least halving and doubling are usually good candidates to fix sampling artifacts.
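For reference, the usual half-texel rule sketched in C++ (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// UV coordinate of the CENTER of texel i in a texture `size` texels wide.
// Sampling at (i + 0.5) / size avoids landing on the edge between two
// texels, where bilinear filtering blends neighbours 50/50.
double texel_center_uv(int i, int size) {
    return (i + 0.5) / size;
}
```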
  11. Things faster than A* include, for example, summing two numbers, calculating a dot product or doing nothing at all. Most problems have many good solutions, and the best one must be picked by you depending on the specifics. And even better than a good solution is avoiding the problem altogether, which is also sometimes possible.
  12. powly k

    OpenGL won't render anything

    Winding is probably not it - if you don't specifically enable backface culling, it's not on. What I'd try is first negative z values for the vertices and then ditching the matrix multiplication to see if the matrix is really okay.
  13. powly k

    Poly Count for a Standalone game?

    What you do in your shaders tends to matter a lot more than how many triangles you rasterize. The only actual way to know these is, like you said, to test it yourself.
  14. powly k

    Motion Blur with FBO

    That very much seems to be the issue, if averaging 4 images that should be different doesn't produce a blurred result. Can you actually somehow verify that the 4 samplers have different textures attached?
  15. powly k

    Motion Blur with FBO

    Okay - since you can't actually read from an FBO, only use an FBO to render to a texture and then read that texture, here's the question: do you set up the active textures, bindings and uniforms correctly? It should go like this:   set the active texture unit to N; bind your texture[N] to GL_TEXTURE_2D; glUniform1i(location of sampler N, N)   with N looping over your four samplers. What this sounds like is you forgetting to call glUniform1i - all sampler uniforms point at texture unit 0 unless you specify otherwise.   And please, do "uniform sampler2D Blur1, Blur2, Blur3, Blur4;" instead of what you do now. And to be extra sure, always set the alpha to something.
  16. powly k

    OpenGL core profile question

    The OpenGL wiki itself has a few tutorials on the subject, though they don't use GLFW.   I'd guess that the way you use vertex arrays, or something related, is the actual problem - if you got the context without any errors, it's probably working correctly. (The OpenGL vertex array object design makes little sense anyway.) I'd double-check that you're actually getting an error of some sort from GLFW or GL related to context creation, not from the rendering code itself, before jumping to conclusions. Checking for errors is usually a good idea anyway.   The way I like to use OpenGL extensions is with a small offline script that generates a header with all the function pointers and a function to load all the extensions, based on a core specification I give as a parameter. It took a couple of nights to code, but it works like a charm, and whenever the ARB releases a new header I can just ask for the new core (when the drivers arrive, of course) without being tied to anyone else's code.
  17. powly k

    Help me understand .cpp vs .h in C++

    This is a hard topic unless you know a bit about what goes on behind the scenes when compiling. Basically, the compiler looks at the .cpp files in your project (or folder, or the ones you tell it to compile - this depends), turns each one of them into an object file (.obj with Visual Studio; .o with other compilers, and .a archives of them also exist IIRC), and it's done. After this the linker enters the scene and starts looking at which pieces of code it needs to turn the program into an actual executable - this is why you tell the linker, not the compiler, where execution starts, for example. Statically linked libraries come in here too; that's the list of .lib files you feed to your linker. The most common linker errors occur when you either forget to link against something or have multiple definitions - the same variable or function exists in several object files and the linker can't figure out which one to use.

    Now, you might notice I didn't mention headers at all. That's because a header is never compiled by itself; it's just included into (usually several) other files. So if your header has a global variable definition (for example "int x = 5;"), it'll end up in all of those .cpp files and the linker won't like it. You want your headers to hold only declarations, not definitions: int foo(int x); is okay, int foo(int x){return x*2+1;} isn't. Unless it's code you only use in one file - though then you don't have to keep it in a header at all.
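A compact illustration of the declaration/definition split, squeezed into one file for the sake of the example (in a real project the declaration would live in the header and the definition in a .cpp):

```cpp
#include <cassert>

// Header material: a DECLARATION only tells the compiler the signature.
// It can safely appear in many translation units via #include.
int foo(int x);

// .cpp material: the DEFINITION with the body. It must end up in exactly
// one object file, or the linker reports a duplicate symbol.
int foo(int x) { return x * 2 + 1; }
```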
  18. Indeed, that seems to be just the unavoidable banding you always get with such a slowly changing gradient on current monitors. Another thing you can do is apply some very slight noise; it won't be visible as noise but will smooth out the gradient.
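A little sketch of why dithering works: adding half a quantization step of noise before rounding makes the average output equal the true value, trading visible bands for imperceptible noise (the helper names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Plain rounding maps every nearby value to the same 8-bit step - that's
// the band. With +-0.5 steps of noise added first, individual pixels
// still snap to a step, but their AVERAGE equals the true value.
double dithered_quantize(double value, std::mt19937& rng) {
    std::uniform_real_distribution<double> noise(-0.5, 0.5);
    return std::round(value + noise(rng));   // value measured in 0..255 steps
}

double mean_output(double value, int n, unsigned seed) {
    std::mt19937 rng(seed);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += dithered_quantize(value, rng);
    return sum / n;
}
```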
  19. The shaders are not correct - texture2DGrad no longer exists as of GLSL 1.30; use textureGrad instead, which automatically chooses the dimensionality based on the sampler you pass as an argument.
  20. powly k

    How do I manage loaded geometry?

    A texture can easily be read in a shader and written to from the CPU, so you can upload your positions and orientations as a texture - though you should probably do it with uniform buffers or something similar. The draw calls I was referring to are glDrawArraysInstanced and glDrawElementsInstanced.
  21. powly k

    How do I manage loaded geometry?

    In OpenGL, there are specific draw calls for instancing (drawing multiple copies of the same thing) so you don't have to care about keeping them all in memory - just keep their locations in a texture and render based on that and instanceID. Just write a class that handles a single model and load all your geometry as instances of that class in the beginning of your program.
  22. powly k

    C++11 Lesson One: Hello World!

    The parameters of the main() function come back to haunt you in the later code examples, after the text has told you to leave them out - a minor detail, but you might want to fix it anyway :)
  23. powly k

    Raytracing via compute shader

    Not sure how you derived that sphere intersection formula, but it looks quite different from what I've used - are your implementations exactly the same on the CPU and GPU?
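For comparison, here is one common form of the ray-sphere intersection as a CPU reference you could run against the compute shader version with identical inputs (Vec3 and the function name are assumptions, not from the post):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Solve |o + t*d - center|^2 = r^2 for t with d normalized; this reduces
// to t^2 + 2*b*t + c2 = 0. Returns the nearest non-negative hit distance,
// or -1 if the ray misses the sphere entirely.
double ray_sphere(Vec3 o, Vec3 d, Vec3 center, double r) {
    Vec3 oc = { o.x - center.x, o.y - center.y, o.z - center.z };
    double b = dot(oc, d);                    // half the linear coefficient
    double c2 = dot(oc, oc) - r * r;
    double disc = b * b - c2;
    if (disc < 0.0) return -1.0;              // no real roots: miss
    double t = -b - std::sqrt(disc);          // nearer root first
    if (t < 0.0) t = -b + std::sqrt(disc);    // origin inside the sphere
    return t >= 0.0 ? t : -1.0;
}
```

A ray from the origin down +z hits a unit sphere centred at (0, 0, 5) at t = 4; running the same cases through both implementations is a quick way to spot where they diverge.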
  24. C++ is definitely not impossible to learn as a 12-year-old - that's actually when I started trying to work with it, after some time with CoolBasic. It'll probably take some time to get used to and to actually produce something, even a very simple game (compared to your examples).

    OpenGL is not a "type of C++"; it's a library for GPU-based rendering (and, as of some of the 4.x versions, general computing). It's not even restricted to C++ - you can use it from C or Java or Python or very many languages. The actual choice you probably mean is your API/engine.

    This is where people differ a bit. Some people are in it to make a product relatively quickly, in which case they usually choose an engine (like UDK, CryEngine or Unity) where lots of the tinkering and details are already done for you - loading an animated, textured mesh becomes a couple of calls instead of a few nights of coding that stuff yourself. Some people, however, want to write their own graphics system from the ground up, and in that case they choose pretty much between OpenGL and DirectX.

    I won't go too far into that debate; they're both good APIs with some pretty minor differences - DX is maybe a bit more streamlined and straightforward to develop for, while OpenGL is a cross-platform standard, though not as modern in its design, and it tends to follow up on some of the things DX does nowadays. The main point is, both of them require you to write lots of code just to display a triangle. Or text. Or anything. But if you do this, you can do anything the GPU can - you won't be restricted at all by the design choices of a third-party engine. You pretty much want to go with an engine anyway, since you're tremendously more likely to actually finish something that way - and they can do pretty amazing things.

    I don't know much about 3D modeling programs, but it would seem that no matter which of the popular ones you pick, if you just keep at it you can produce very cool meshes. Whether Blender slows you down or is an awesome program, I don't know - I've seen lots of arguments about it, but there seems to be no clear consensus.

    If you want to learn C++, I'd say go with C++. Learning Python or something first is not a bad idea - C++ isn't exactly the most rapid prototyping language - but C++ can still do everything, and then you won't have to do a full rewrite after testing an idea out. I can't emphasize enough, though, that you should first learn "normal" C++ rather well: what pointers are, how they work, and how you use them together with dynamic memory; how and why you use classes and which entities deserve one. It's important to know your language before digging into the jungle of libraries available.
  25. powly k

    First Game Help Ideas

    Ah, that's the stuff you had in mind. A couple of things come to mind, though a lot of coding is experimentation and depends heavily on the game and the person writing it.

    Hexagonal movement isn't always easy - having a clear idea of your coordinate system should help a lot, since many things require you to fiddle with positions. You might want a class to handle a position, so you can ask where things are, move them without problems (maybe turn them to look at a certain point) and convert between world and screen positions for rendering and mouse interaction. Your unit class, for example, could store the position it's at and the position it wants to reach once it has enough movement points (or however you limit movement per turn).

    For a battle you might actually want a class instead of a function, so you can handle everything that goes on in there separately from the overworld view, and possibly interrupt battles to continue different ones. And archive them somehow, if you want statistics. But it's already justified by a nicer program flow and fewer global variables to care about - globals aren't as evil as many people say, but a few member variables in a class are usually easier to handle than the same globals, and you see clearly where they should be used; it's even enforced by the language if you make them private.

    The other option is just a battle function that takes some details as parameters - which map file to load, maybe a briefing text, the goal of the battle, things like that. You might even want a map file format rich enough to hold all the scenario details.