
Member Since 14 Feb 2007

#5263693 A question about cross country trademark implications

Posted by Hodgman on Yesterday, 08:00 AM

That entirely depends on the patent. I could have a claim on a patent for the "method of obtaining answers via internet forum posts", and my lawyers could be sending you threats right now :lol:


Holding a patent on an entire category of product -- like a car, or a legal simulation -- is very rare. Usually patents cover smaller details, such as the way that you represent legislation within your system, etc... I just brought it up as copyright and trademarks are not an issue for your example, only a theoretical mega-patent (which might not even hold up in court) could affect that situation.

#5263684 A question about cross country trademark implications

Posted by Hodgman on Yesterday, 06:05 AM

Neither of those are trademarks. Trademarks are marks (images, names) that are used to identify a product that is being traded. The title, "God of War", is a trademark.

Trademark infringement can also include misleadingly similar names, e.g. "God of Waugh"...


The creative elements of a product are covered by copyright. If you copy anything exactly, that's copyright infringement. However, the rules of a game are exempt from copyright -- you're free to steal game mechanics.


With the second example, that's just a competing product in the same category. It's as if someone's created a car already, and now you're building your own car.

However, that's where patents can come in -- the first person to invent the car can patent their new invention, say, "A method and apparatus for transporting a plurality of persons via automated mobility"... or in your case "a simulation of a legal system within a computer", etc...

Bloody well almost everything is at risk of patent trolling these days; it just depends if people have "claimed" their inventions or not.

#5263654 Stencil Shadows

Posted by Hodgman on 25 November 2015 - 06:28 PM

Other articles mention "Carmacks reverse" as the solution, but warn that it is patented. How did they work around this patent issue in the open source release of Doom 3?

Carmack's reverse is the solution :)

You could check out the Doom 3 source code - I hear they modified it to remove the patented algorithm before releasing the source code.

@ID_AA_Carmack: Lawyers are still skittish about the patent issue around "Carmack's reverse", so I am going to write some new code for the doom3 release.
This demonstrates the idiocy of the patent -- the workaround added four lines of code and changed two.


I've personally used it in a game before. I read the actual patent and compared its claims against my implementation of the "Carmack's reverse" algorithm, and in my professional opinion, my method did not overlap with their claims. My lead agreed with me, so we shipped it :) So, IMHO, it's possible to implement the algorithm without using the patented method, but maybe I'm wrong, and maybe we opened our employer up to a lawsuit... Yay for patents.


However, stencil shadows are rarely used these days. Almost everyone uses shadow mapping methods instead - unless you specifically need pixel-perfect hard-edged shadows.

#5263381 PBR precision issue using GBuffer RGBA16F

Posted by Hodgman on 24 November 2015 - 03:14 AM

A single byte per component is perfectly sufficient for normals; you just need to not approach it as a 1-byte floating-point conversion. Use the raw byte and simply rescale from [0,255] to [-1.0,1.0], or use a fixed-point format if you are sure it performs well.
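The remapping being described can be sketched like this (a minimal sketch; the helper names are mine, and the exact rounding used by GPU UNORM/SNORM hardware formats may differ slightly):

```cpp
#include <cstdint>

// Remap a raw unsigned byte [0,255] to a signed component in [-1,1], and back.
float byteToSnorm(uint8_t b) {
    return (b / 255.0f) * 2.0f - 1.0f;
}

uint8_t snormToByte(float f) {
    float t = (f + 1.0f) * 0.5f;         // [-1,1] -> [0,1]
    return (uint8_t)(t * 255.0f + 0.5f); // round to nearest
}
```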

Check out the gif I posted earlier -- all 8bit-component encodings have really obvious banding for mirror-like reflections, except for the Crytek BFN encoding.

I have set up my G-buffer as multiple render targets, such as 4x8-bit for colour, 4x8-bit for normal and specular sample, and 1x32F for position.

You can use a 32F depth-stencil target directly, instead of doubling up by also storing depth in the gbuffer.

#5263378 Communicating with Modelers

Posted by Hodgman on 24 November 2015 - 01:39 AM

We write up extremely detailed work briefs, outlining all of the requirements and constraints for that specific bit of art (and linking to the project-wide technical and style guideline documents). Usually they contain concepts, but a collage made of stock photos is often also used.

#5263354 Render target not behaving as expected

Posted by Hodgman on 23 November 2015 - 07:39 PM

Do you set an appropriate viewport when you change render-targets? D3D9 used to do this automagically, but IIRC D3D11 doesn't.


When doing your blurring, etc, how are you generating the vertex positions for the corners of your quad? Are you using NDC coordinates directly, or are you computing them via a projection matrix?


Have you run your program through RenderDoc to see at which stage the bug occurs?
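On the vertex-position question: a common approach for post-processing passes is to emit NDC coordinates directly, e.g. the single oversized "fullscreen triangle" trick. A sketch of the position generation (in a real shader this would live in the vertex shader, driven by SV_VertexID / gl_VertexID):

```cpp
// Generate the three vertices of an oversized triangle directly in NDC.
// Vertex ids 0,1,2 map to (-1,-1), (3,-1), (-1,3); the triangle covers the
// whole [-1,1] NDC square, and the GPU clips away the excess.
void fullscreenTriangle(int vertexId, float& x, float& y) {
    x = (vertexId == 1) ? 3.0f : -1.0f;
    y = (vertexId == 2) ? 3.0f : -1.0f;
}
```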

#5263061 Phong versus Screen-Space Ambient Occlusion (with source code)

Posted by Hodgman on 21 November 2015 - 06:39 PM

¿Por qué no los dos? (Why not both?)

Multiply them together :)

#5263057 PBR precision issue using GBuffer RGBA16F

Posted by Hodgman on 21 November 2015 - 06:00 PM

Why not?
edit2- because I remember reading that in a 'true' HDR pipeline even the textures are in an HDR format.

When dealing with light energy, you need HDR formats because energy is unbounded. Emissive maps, light maps, light accumulation buffers, environment maps, etc. all store light energy values from zero to infinity, measured at the red/green/blue wavelengths.
Albedo maps and specular maps store reflection coefficients that are bounded from 0.0 to 1.0. There's no need to use "HDR formats" for those - sRGB 8bit is ideal.

edit - also why would spherical encoding have an uneven distribution, I'd think by definition they would be evenly distributed...?

Picture the Earth with lat/lon lines on it. At any latitude, there's an east/west line circling the globe. At the equator, those circles are large, but as you approach the poles, they get smaller. At the pole itself the circle becomes a point, meaning your longitude coordinate is useless there!
If those are your two coordinates, then the amount of distance covered by the longitude coordinate depends on the latitude coordinate. You can see this circular banding at the poles in the image.
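The degeneracy can be seen directly in a naive lat/lon encoder; a sketch (constants and names are mine, input assumed to be a unit vector):

```cpp
#include <cmath>
#include <cstdint>

// Spherical (lat/lon) encoding of a unit normal into two bytes.
// Near the poles (|z| ~ 1) every value of lonB decodes to nearly the same
// direction, so those 256 codes are wasted there -- that wasted precision
// has to come from somewhere, hence the circular banding near the poles.
void sphericalEncode(float x, float y, float z, uint8_t& latB, uint8_t& lonB) {
    float lat = std::acos(z);     // [0, pi]
    float lon = std::atan2(y, x); // [-pi, pi]
    latB = (uint8_t)(lat / 3.14159265f * 255.0f + 0.5f);
    lonB = (uint8_t)((lon / 3.14159265f * 0.5f + 0.5f) * 255.0f + 0.5f);
}
```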

#5262951 Getting to grips with data oriented component based design

Posted by Hodgman on 20 November 2015 - 06:57 PM

DoD is about looking closely at your actual data and use cases - thinking at an abstraction level where all your data is the same and is just called "data" defeats that. What's the data-flow per frame? 'Systems get updated' says nothing about flow.

Come up with some concrete use-cases - e.g. pseudocode for how you'd perform frustum-culling, collision-detection, a die-when-touched mechanic, animating the visible objects, and drawing them.

Do that first, and then build your architecture to support the patterns that emerge (and do the DoD thinking on those patterns). Don't build a generic framework first with only nebulous potential use cases, and then force reality to conform with that framework later.
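As a sketch of what one such concrete use case might look like once the patterns emerge — a culling system that owns flat arrays of exactly the data it touches, rather than walking a graph of heterogeneous components (all names here are illustrative, not from any particular engine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec4 { float x, y, z, w; }; // plane: (a,b,c,d); sphere: (center, radius)

// DoD-style frustum culling: a tight loop over contiguous bounding spheres,
// writing a contiguous visibility array that later systems (animation,
// drawing) can consume without touching any other object data.
struct CullingSystem {
    std::vector<Vec4> spheres;    // one bounding sphere per object
    std::vector<uint8_t> visible; // output: 1 if not fully outside any plane

    void cull(const Vec4* planes, int planeCount) {
        visible.assign(spheres.size(), 1);
        for (size_t i = 0; i < spheres.size(); ++i) {
            for (int p = 0; p < planeCount; ++p) {
                float d = planes[p].x * spheres[i].x
                        + planes[p].y * spheres[i].y
                        + planes[p].z * spheres[i].z
                        + planes[p].w;                 // signed distance to plane
                if (d < -spheres[i].w) { visible[i] = 0; break; }
            }
        }
    }
};
```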

#5262827 Software skinning for a precise AABB

Posted by Hodgman on 20 November 2015 - 12:48 AM

Interpolate AABB for each animation, then merge (enlarge) the interpolated AABBs of the animations running concurrently.

That doesn't produce valid AABBs [when blending animations].

What I suggested is precompute the interpolated AABB of Animation A and, separately, of Animation B at several keyframes.
When animation A and B are both active at the same time, extend them.

I still don't follow.
If neither A nor B alone require the AABB to be extended, but blending A+B moves the character's fist 30cm outside the front of the AABB, how does your system know to extend the AABB to compensate?

But you're blending A + B. I'm talking about blending A + A' (A' is at a different time frame).
In the case you're specifying, there will be a third sample added specifically for the AABB between A and A' to deal with the increased depth.
For blending A + B, extending is the best/safest solution.

How do you extend the AABB? The increased depth is not present in A/A'/B/B'. The increased depth only appears after you blend A + B together.
It is possible for A + B to have bounds that have no relation whatsoever to the bounds of A and B individually. Look at the example again, or Johnny's example.
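The "extend/merge" operation itself is trivial; the disagreement is about what it can and can't bound. A sketch (struct layout is illustrative):

```cpp
#include <algorithm>

struct AABB { float min[3], max[3]; };

// Merge ("extend") two axis-aligned bounding boxes. Note the caveat from
// this thread: merging the precomputed AABBs of animation A and animation B
// does NOT bound a *blend* of A and B -- blended joint rotations can swing a
// limb outside both source volumes, so the blend needs its own bounds (or
// conservative padding).
AABB merge(const AABB& a, const AABB& b) {
    AABB r;
    for (int i = 0; i < 3; ++i) {
        r.min[i] = std::min(a.min[i], b.min[i]);
        r.max[i] = std::max(a.max[i], b.max[i]);
    }
    return r;
}
```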

#5262820 PBR precision issue using GBuffer RGBA16F

Posted by Hodgman on 19 November 2015 - 10:17 PM

Why? I would think spherical using the same number of bits versus a competing method would be superior.

8-bit-channel images from the knarkowicz link, spherical and octahedral:
Spherical bunches up precision at certain parts of the sphere, e.g. look closely at the top of both of them -- the first has circular banding, which is very obvious in some cases. Octahedral more evenly distributes its precision across the entire sphere, so you get the same quality in all directions (instead of some directions being great and some being shite).
At 16 bits per channel, you can use almost any encoding you like and get good results, so just pick a cheap one :)
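For reference, the octahedral mapping itself is just a fold of the lower hemisphere onto the unit diamond. A sketch based on the scheme described at the knarkowicz link (the 8/16-bit quantization step is omitted; function names are mine):

```cpp
#include <cmath>

static float signNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

// Encode a unit normal into two floats in [-1,1] via the octahedral mapping.
void octEncode(float x, float y, float z, float& u, float& v) {
    float s = std::fabs(x) + std::fabs(y) + std::fabs(z);
    u = x / s;
    v = y / s;
    if (z < 0.0f) { // fold the lower hemisphere across the diagonals
        float ou = (1.0f - std::fabs(v)) * signNotZero(u);
        float ov = (1.0f - std::fabs(u)) * signNotZero(v);
        u = ou; v = ov;
    }
}

// Decode back to a unit normal.
void octDecode(float u, float v, float& x, float& y, float& z) {
    z = 1.0f - std::fabs(u) - std::fabs(v);
    x = u;
    y = v;
    if (z < 0.0f) { // unfold the lower hemisphere
        float ox = (1.0f - std::fabs(v)) * signNotZero(u);
        float oy = (1.0f - std::fabs(u)) * signNotZero(v);
        x = ox; y = oy;
    }
    float len = std::sqrt(x * x + y * y + z * z);
    x /= len; y /= len; z /= len;
}
```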

But is it faster than a (IIRC) cube map lookup for crytek's BFN's?

BFN has a super-cheap decode function, but a moderate encode function (which yep, involves a texture fetch -- I'm currently using this with a 1024*1024 2D lookup texture). The relative costs will depend on the GPU model, and what else your gbuffer shaders are doing at the time...

What about HDR?

You don't generally put HDR colours into a gbuffer.
11_11_10 / 10_10_10 fixed point in linear colour space is about equivalent to 8_8_8 in sRGB space, so IMHO it's not usable for HDR, unless you've got a very small range and are ok with banding artifacts. I've shipped one game where we used 10_10_10 fixed point with a pow2 gamma space to minimize banding, but it wasn't good :(
11_11_10 / 10_10_10 floating point is just barely enough to get HDR working, maybe (if your API/GPU supports it).  
16_16_16 fixed or floating point is good. Floating point wastes a bit on storing the sign, plus you have to deal with NaNs and infinities :( but it gives you a range from 0 to ~60k with logarithmic precision distribution, which is actually good for HDR data (as your tone-mapper probably has logarithmic weighting). Fixed point means you get that extra bit and don't have to worry about inf/NaN, and you get to choose your own maximum scale value (which is a pro and a con).

#5262809 PBR precision issue using GBuffer RGBA16F

Posted by Hodgman on 19 November 2015 - 07:41 PM

Out of curiosity how come no one suggests using spherical coordinates for G-buffer normals?

If you click MJP's link, and then the links on that page, you end up here: http://aras-p.info/texts/CompactNormalStorage.html#method03spherical :D
To go full circle, there's a link on that page pointing back to MJP's blog :lol:
The short answer is that the transformation between Cartesian and spherical coordinates is slow (relative to other transforms), and the quality isn't as good as other approaches either. It's definitely a feasible approach though, and I wouldn't be surprised if there are games that have shipped using that technique.


Here's a comparison between spherical and octahedral: https://knarkowicz.wordpress.com/2014/04/16/octahedron-normal-vector-encoding/

Note that if you want good quality normals, you shouldn't use 8-bit channels like in those examples though :)

As mentioned at the end of that link, this is a good paper to read: http://jcgt.org/published/0003/02/01/

#5262793 Optimizing OpenGL FPS

Posted by Hodgman on 19 November 2015 - 04:31 PM

This could also be Windows D3D vs Linux GL performance.

Different engine renderer, different API, different drivers, different calling patterns, different shader language, different OS.

I would expect that any D3D->GL port would be initially slower, before all the GL workarounds are added in.

#5262732 Visual Studio includes a 3d modeller!

Posted by Hodgman on 19 November 2015 - 05:32 AM

I used to be able to open DAE files and debug / perform minor tweaks to the XML. Now it renders the mesh instead. Not helpful :P

#5262730 Question about type, and displaying the bits of a char

Posted by Hodgman on 19 November 2015 - 05:17 AM

isn't the code in the original post technically venturing into undefined behavior land (although supported/working in most compilers)?
It writes to 1 data member of the union (char c) and then accesses a different data member (bits b).

Yes, technically you're only allowed to read from the same union member which was previously written. However, (almost) every C/C++ compiler actually recommends using unions in this exact way when you intend to perform a bitwise reinterpretation of a value. So in the real world, it's actually recommended / good style. Compilers will recognize the pattern and correctly deal with the aliasing issues.