
Ben Bowen

Community Reputation: 115 Neutral
  1. Ben Bowen

    Euclideon Geoverse - Latest Video Calms More Critics

    What? I completely get what he's claiming when I take him literally. Literally, he's claiming O(infinitesimal frusta) with lazy evaluation.
  2. Ben Bowen

    Lens Flares: We're in business [demo included]

    I get the same artifact on my 6950.
  3. Ben Bowen


    There seem to be three kinds of communication: 1. crying, 2. artifice, 3. Doug Engelbarting. A few make sure they aren't any of them. Some of us are sociopaths. Most of us seem to be ballers. Did I get that backwards?
  4. Ben Bowen

    Stop Bludgeoning Normal Mapping

    Thanks for the optimism.
  5. Ben Bowen

    Stop Bludgeoning Normal Mapping

    I'm just sick of things.
  6. Ben Bowen

    Stop Bludgeoning Normal Mapping

    Fuck all of you   lol
  7. Ben Bowen

    Stop Bludgeoning Normal Mapping

      "The problem I'm blaming them for is bludgeoning it without sufficient..."

    This is ridiculous. I already explained my point in the original post:

      "The point of the topic is not: NORMAL MAPPING IS A PROBLEM. The topic is: STOP BLUDGEONING NORMAL MAPPING."

      "Diffuse maps are the same thing, but regardless of dynamic lighting conditions. It's a texture map. Instead of having a point cloud, voxel volume, or trillions of solid-colored triangles, we use texture mapping."

    What's your point? There's an incredible amount of abstraction behind the idea of a diffuse map. Transmission and reflection are each distinct from the idea of diffusion, though both phenomena feed into surface diffusion's broad notion in many complex ways. Diffuse maps approximate a lot, but we subtract anything that is approximated in real time, e.g. strong specularity is usually first for the boot. You can approximate any lighting effect (just as megatexturing boasts), but only as long as it's not dynamic.
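The trade-off described in that post, folding static lighting into the diffuse map while dropping whatever must stay dynamic, can be sketched in a few lines. This is a minimal illustration under my own assumptions (the function names, the single-directional-light model, and per-channel clamping are not from the original post), not anyone's actual baking pipeline:

```python
def lambert(n, l):
    # Clamped Lambert cosine term between unit normal n and unit light direction l.
    return max(0.0, sum(a * b for a, b in zip(n, l)))

def bake_texel(albedo, normal, lights):
    # Fold the static part of the lighting into the stored texel.
    # lights: (unit direction toward the light, scalar intensity) pairs.
    # Anything dynamic -- e.g. strong specularity -- is deliberately left out,
    # since a static texture cannot capture it.
    radiance = sum(intensity * lambert(normal, direction)
                   for direction, intensity in lights)
    return tuple(min(1.0, channel * radiance) for channel in albedo)

# A texel facing a single head-on, unit-intensity light keeps its full albedo:
lit = bake_texel((0.8, 0.6, 0.4), (0.0, 0.0, 1.0), [((0.0, 0.0, 1.0), 1.0)])
# A texel facing away from the only light bakes to black:
dark = bake_texel((1.0, 1.0, 1.0), (0.0, 0.0, 1.0), [((0.0, 0.0, -1.0), 1.0)])
```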
  8. Ben Bowen

    Stop Bludgeoning Normal Mapping

    Baked => physically-based accuracy can be simulated statically even without the hardware to "update it" every frame. This is primarily about diffuse.
  9. Ben Bowen

    Stop Bludgeoning Normal Mapping

    And yes, I perfectly well understand why the normal mapping looks bad in these screenshots (and many, many more), but here's the lesson: their mistake was putting it into the game without bringing it up to par with "baked" approximations, which are at least bearable to the eye. Despite being a 2004 game, Halo 2 is the first place I noticed this problem. Yes, they had technical limitations, but so do many modern games. Halo 1 had normal mapping (and even more technical limitations), but it was always subtle and looked great everywhere they used it. I can't find any screenshots of Halo 1 that demonstrate its application of normal mapping, but I might take some of my own. Don't worry, I'll try to pick the ugliest.
  10. Ben Bowen

    Stop Bludgeoning Normal Mapping

      I don't even know where to start.  
  11. I'd just like to note that I find many "modern" games unappealing due to the excessive amounts of normal mapping they use without sufficient artistic control or physically-based accuracy. If you can "bake" physically-based effects into a diffuse map and make them look better than real-time approximations for the most part, then do it! There's a quality threshold between when an effect needs to be baked -- despite having many dynamic properties -- and when technology permits achieving the same static quality while also properly capturing the dynamics. I often find myself appreciating the 3D graphics and artwork of older 1998-2007 games more than many modern games for reasons like this.

    Edit, other bludgeoned "modern" effects:
      • SSAO.
      • Low-res "megatexturing".
      • Terrible yellowish color-graded fog which oddly seems to have transmission disproportionate to absorption. For instance: "Given the amount of oversaturation the fog has caused, wouldn't the camera be blind after only about 5 meters of depth? ... Yet it can see far beyond that."
      • Over-exaggerated depth of field with a ridiculously horrid blur kernel.
      • And yet we still see INSANE amounts of bloom, though slightly (yes, I said slightly) more accurate than it was several years ago. Your increased familiarity with color theory justifies little.
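The fog complaint above, transmission disproportionate to absorption, can be checked against the Beer-Lambert law, which ties the two together: the fraction of light surviving a path of length d through a homogeneous medium is exp(-σ·d) for extinction coefficient σ. A rough sketch (the σ values are made-up illustrations, not measurements from any game):

```python
import math

def transmittance(sigma, depth):
    # Beer-Lambert law: fraction of light surviving `depth` metres of a
    # homogeneous medium with extinction coefficient `sigma` (per metre).
    return math.exp(-sigma * depth)

# Fog thick enough to badly oversaturate nearby objects (say sigma = 0.9/m)
# leaves about 1% of the light after 5 m -- the camera really would be
# nearly blind, so also seeing far beyond that is physically inconsistent.
near_blind = transmittance(0.9, 5.0)

# Fog light enough to see hundreds of metres (say sigma = 0.01/m) barely
# attenuates anything at 5 m, so it cannot oversaturate the near field either.
clear = transmittance(0.01, 5.0)
```

A renderer can pick one regime or the other, but a fog that both saturates the foreground and stays transparent at range is claiming two incompatible σ values at once.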
  12. Ben Bowen

    AMD's Mantle API

    Language wars, eh? Why don't we all just migrate to array programming languages so we don't need to worry about these tough decisions on things like whether C# is a real language or how awesome AMD is and we can fundamentally focus on extremely particular vectorization targets becoming a pleasantly defunct notion? ... And then shave our eyebrows ... and pretend to make tongue in cheek statements when they're actually serious ideas but we're too afraid to point them out. Eyebrows look nice, though. Edit: woah. That's a long run-on sentence I have there. Oh well. Jokes on you!
  13. Ben Bowen

    Recursive Screen-Space Reflections

    Diverging a bit more off topic, in response to Vilem Otte: Great stuff. I was considering prototyping nearly the exact same thing early this year, but I've been pulled away with other things to do and I still haven't gotten around to it. My reply here will mostly be in regard to your remark on BVHs and optimization (essentially, if you want to be general, the "visibility function"). I also predicted that visibility determination would be the most critical point in implementing this technique.

    Since I've worked on level-modelling tools, I'm familiar with processing geometry. I realized it would be more effective to avoid any secondary scene-graphing techniques and consider the content to be a scene graph itself. Back-face culling can presumptively omit non-visible faces by comparing the surface normal with the Z-vector in view space, just as it is possible to select concave patches and separate meshes into convex hulls by comparing surface normals. Surface normals are produced by the cross product, which uses model-space vectors. You can uniformly construct a complete surface basis with only a few simple vector operations on a surface primitive's spatial relations. That is why most back-face culling algorithms don't even need to know surface normals; they can just look at vertex winding in view space.

    This conceptual perspective reveals massive opportunities. Why be limited to the evaluation of single primitives (i.e. back-face culling => whole mesh => entire scene context)? Is there a virtual relation that can be used to model geometric holonyms? Must this system only evaluate geometric relations individually, or is there a relational symmetry which promises reduction to linear time? Following these questions, you might develop a highly effective method for light transport. Anybody following what I said here?

    Edit: Oh.
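The winding-order observation in that post can be made concrete: after projection to 2D, the sign of a triangle's signed area tells you which way its vertices wind, so no surface normal is needed at all. A minimal sketch; the counter-clockwise-is-front convention is an assumption (engines and APIs differ on it):

```python
def is_back_facing(p0, p1, p2):
    # The 2D cross product of the triangle's two edge vectors gives twice its
    # signed area. With counter-clockwise front faces, a non-positive area
    # means the projected triangle winds clockwise, i.e. it faces away.
    signed_area = ((p1[0] - p0[0]) * (p2[1] - p0[1])
                   - (p2[0] - p0[0]) * (p1[1] - p0[1]))
    return signed_area <= 0.0

front = is_back_facing((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))  # CCW winding
back = is_back_facing((0.0, 0.0), (0.0, 1.0), (1.0, 0.0))   # CW winding
```

Swapping any two vertices flips the winding and therefore the verdict, which is exactly why the test is a property of vertex order rather than of any stored normal.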