About Koehler

  1. There seems to be an assumption on the internet that you have to stick with one language as you learn the various concepts of programming. I'm of the opinion that the idea of languages being interchangeable tools is one of the most important.

     I'd like to propose a different approach to learning:

     Start in a memory-managed language. I'd recommend C# or Java, if only for the syntax (braces make it easy to see how code is laid out). Learn how this language works, and learn the basics here: types, variables, functions, arrays, strings, loops, arithmetic, conditionals, and finally recursion. Good practice would be learning how to sort an array of numbers, and how to find out whether two words are anagrams of each other. The language you settle on doesn't really matter, so find one you like. I would avoid most object-oriented concepts at this point (if you don't know what a class is, that's perfectly fine).

     Next, go to C (not C++!), and re-learn the same things there. This will mostly be familiar, but you will have the added challenge of basic memory management. Learn how memory is allocated and freed, how pointers work, and how they relate to arrays and strings. Now might also be a good time to learn about binary numbers, hexadecimal numbers, and bitwise operations (not language-specific, but C supports them and they're handy to know). This step is probably the one that will be most argued over, and I understand why. I just feel that C, applied to the basics of procedural programming and without any clutter from trying to design OO programs, is a good environment in which to begin understanding memory management.

     Third, go back to your original language and begin learning object-oriented concepts. Learn how to make a class, and learn the differences between static and non-static, public vs. private, and pass-by-value and pass-by-reference. Learn how classes can inherit from one another, and what virtual inheritance, interfaces, and abstract classes are. The advantage is that you can focus on these ideas without worrying about memory again.

     Now, start learning how to apply those classes. Look up some common data structures (linked lists, stacks, trees) and some algorithms (tree or graph traversals, or self-balancing trees, would be a good place to start), and implement them. Bonus points if you test and see why and when different algorithms are faster or slower. By the end of this you should be able to make a text-based maze game if you put your mind to it.

     Finally, start learning how to implement this stuff in C++. This will still involve loads of foot-shooting, but you will understand the basics and have the freedom to focus on the quirks of the language rather than the basics of computer science. Even now, I wouldn't touch templates until you've re-implemented everything from the previous step in C++.

     Around this point (which could very well be many months of work) you'll have a solid foundation which you can apply to almost any language you want. Now's the time to play with Lisp, or learn you some Erlang for great good, and open your mind to a completely different set of concepts.
  2. Glad to see you caught my mistake. I was calling "indices" weights, also. Clearly I didn't test that code :/

     Those results look good! I am surprised that the ancients have so many bones. If I had to guess, WC3 probably did software skinning, so it didn't matter.

     As an option, you could look through the model and split the mesh based on the bone indices accessed (half for indices < 110 or something, half for >= 110) and do two draw calls for the big guys. This would work best if pieces don't rely on the root bones too much.

     Alternatively, you could split the model and duplicate the most-shared bones into each of the two smaller models' bone arrays, changing the indices in your vertex data appropriately. That still might let you cut the number down enough to fit into your uniform space.
  3. Koehler

    Questions about OpenGL

    1) Are things better? That depends on where you look. On the one hand, we just got 4.4 drivers from NVidia, and some of the ARB extensions in those include features that won't hit DirectX until 11.2. On the other hand, many parts of the OpenGL API move VERY slowly, meaning old concepts stay in use longer than they would in DirectX. (For example, it is only now kinda-sorta-almost getting away from the concept of working in terms of bind points for textures and buffers.)

    2) I've been using NVidia's Parallel Nsight to debug on Windows, and it's worked pretty well for me so far (Visual Studio + NVidia cards only, unfortunately). I have used gDEBugger in the past; the original version died out, was picked up by AMD, and now exists as CodeXL. I can't comment on its newer incarnations, but historically it served well enough. It appears to run without VS, but probably requires an AMD card and Catalyst driver. Most of the open-source projects on this front seem to have died out, unfortunately.

    3) I personally am a huge fan of Sublime Text for coding. It's more of an editor than an IDE, and it's not free ($50, and worth every penny imo), but it has an unlimited-length trial period, so you could learn it while saving up and never be stuck without an editor. For Windows in particular, the Express editions of Visual Studio have served me well over the years and are completely free.

    4) I originally learned on DX9, and have since mostly used OpenGL due to jobs. If anything, I'd say they are converging more and more, and the underlying concepts are often the same even though the APIs are laid out differently. You'd only stand to benefit from making the transition, even if you went right back to DirectX afterwards.

    5) I can't give any real recommendations for books, but I have thumbed through the OpenGL SuperBible, and it seemed reasonably thorough. The newest (6th) edition claims to cover up to 4.3, and will be out in a few days. Hopefully someone else here can speak to how useful it is for learning.
  4. Koehler

    Rendering a texture

    I think the main case where you'd need more than one context is multithreading. An OpenGL context cannot be current on multiple threads at once, so if you want to call OpenGL functions from different threads, you'd need to either create more than one context and have them share data (with wglShareLists), or have each thread synchronize and make the single context current / not current as needed.
  5. Koehler

    OpenGL 4.4 spec is published

      Learn something new every day! Good to know this.
  6. Koehler

    OpenGL 4.4 spec is published

    I'm excited about ARB_sparse_texture, though I'm a little confused as to why it doesn't support any of the three-component texture formats.
  7. You're on the right track! A uniform array of bones, and vertex attributes that index into that array, is the common way to handle this.

     For your specific problem, I have a solution that should work, but it will limit you to 4 bones per vertex (I can't imagine this is a problem for WC3 models, but please let me know if it is). You could try representing your bone weights as a vec4 instead of an array in the attribute. From there, you could add a second vec4 attribute representing how many bones affect a vertex (such as [1.0, 1.0, 0.0, 0.0] for two bones).

     Finally, if you take the dot product of this vector with itself, you conveniently get the number of bones out! (If we call the vector above v, then dot(v,v) = 1.0*1.0 + 1.0*1.0 + 0.0*0.0 + 0.0*0.0 = 2.0.)

     This would change your attribs to:

     attribute vec4 a_position;
     attribute vec4 bone_weights;
     attribute vec4 bone_mask;

     You would also remove the for loop above, and just say:

     vec4 p = vec4(0,0,0,1);
     p += u_matrix_list[int(bone_weights.x)] * a_position * bone_mask.x;
     p += u_matrix_list[int(bone_weights.y)] * a_position * bone_mask.y;
     p += u_matrix_list[int(bone_weights.z)] * a_position * bone_mask.z;
     p += u_matrix_list[int(bone_weights.w)] * a_position * bone_mask.w;
     gl_Position = p / dot(bone_mask, bone_mask);

     Hope this helps!
  8. My mistake; it seems I misinterpreted your goal. I assumed you were going for physical accuracy, and based most of what I said on microfacet models. The statement about Oren-Nayar having more in common with Phong's specular term was limited to "the result changes as both light direction and view direction change."

     Anyhow, glad to hear you have a solution you're happy with in Minnaert (which sounds a lot like what you propose with inverse N dot V).
  9. I'm guessing your problem is here:

     glDrawArrays( GL_TRIANGLES, 12, indices.size());

     This call is for non-indexed geometry, where all vertices are stored in order in an array.

     You have indexed geometry from your OBJ file (so you have a minimal array of vertices, and then the array "indices" indexes into that vertex array). This means you want to use:

     glDrawElements( GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, &indices[0]);

     Note that I wrote &indices[0] because I'm not sure whether you are using a vector or an array. If an array, you could just pass "indices". If a vector, you could pass "indices.data()".
  10. Short answer: no such BRDF exists.

     The effect you're looking for has more in common with the Phong model's specular term than its diffuse one. The information you're storing is for the diffuse term (a sum of N dot L). The usefulness of that information hinges on the assumption that diffuse reflectance is equal in all directions (a Lambertian surface). Rough-surface models do away with that assumption.

     This isn't to say you can't try to fake the effect: treat the stored value as a maximum and modulate it based on view direction. But even then, the modulation requires some light direction to compare with the view direction, or else you're back at a Lambertian model. That means you either assume all light comes from one place (might work outdoors, will not work elsewhere), or you store a table of prominent light directions for regions of your scene. You don't seem to want to do this, however, so I'm thinking you're out of luck.

     In summary: to model a change in reflectance as viewing angle changes, an incident light direction is required to compare against the viewing direction.
  11. Since texture2D was deprecated some time ago, it's entirely possible that the latest drivers compile it to a no-op that always returns some constant value. This means the sampler uniforms could indeed be optimized out, since they no longer affect the output of the program.
  12. Just set a stencil bit when you draw your skybox, and stencil test for that bit being unset when applying bloom. If you've got the same depth/stencil buffer attached, you'll apply bloom to everything else, while leaving your skybox untouched.
  13. You've probably tried some of this already, but here are some possibilities. If this is happening when you actually run, then you need to place glew32.dll where your OS can find it (the same directory as your executable would work, which is <Solution dir>\Debug or <Solution dir>\Release). If you're trying to link statically with GLEW, you'll need to link to glew32s.lib (for release) and glew32sd.lib (for debug). Finally, check whether the first error is something like "Couldn't find library glew32.lib". If that's the case, you need to tell the linker about the folder your glew32.lib is in (Project Properties -> Linker -> General, modify "Additional Library Directories").
  14. Koehler

    Theory of FBOs

    Each time you draw to an FBO, you're writing to memory. So if you draw a full-resolution pass to an FBO and then draw a screen-space quad to get it on-screen, you've done double the number of writes compared to a normal render. Using large FBOs is one of the easiest ways for your performance to become fillrate-bound.

    I think you're on the right track with the idea of multiple FBOs. I'd try drawing your scene to a low-res FBO first, storing only the data you need for your luminance calculation, and blurring that. Assuming your bloom FBO is drawn with a viewport of w/2 by h/2, you're drawing (w*h)/4 pixels, a quarter of what you were drawing before per pass. Therefore drawing and blurring, followed by drawing the scene at full resolution, could be done in under double the number of writes (assuming no overdraw).

    For sampling, you could try doing it as you are, or just build it into the pass that draws your scene at full resolution (though if you're using different shaders for your skybox, etc., that may be messier).

    Regarding the moiré patterns: shrinking your texture should eliminate these as well. You'll probably need to try sampling the blurred texture in different ways to keep your bloom data from looking blocky, though.
  15. The original Guild Wars uses a technique very much like this for rendering, so your idea can definitely be used to good effect. You might see if someone from ArenaNet ever documented or presented it at some point, to get some insight into the challenges it presents. What I can say is that the effect is certainly noticeable in Guild Wars; however, increasing the distance at which you perform the technique could remedy that - this is a six-year-old game, after all. An advantage of your idea is that, if you're willing to sacrifice fidelity, you can dramatically lower the resolution of these impostor textures to improve the speed at which they're generated.
