Member Since 14 Feb 2007
Online Last Active Today, 07:15 AM

#5316063 What is the future of Real Time Rendering?

Posted by on Yesterday, 05:50 AM


#5315990 Questions about Physical based shading(PBS)?

Posted by on 20 October 2016 - 02:58 PM

If a simple reflection map is physically based then I don't know what is not physically based.

A simple reflection map is not physically based. A complex IBL system can be.
e.g. instead of just taking an environment map and texturing it on there to get a gloss effect (which will intuitively look good, but is not going to match the real world), if you ray-traced that environment as a surrounding sphere and integrated a thousand different rays using a physically-based BRDF and importance sampling, then your results would come close to matching the real world. The key is using a Physically-based BRDF to represent the surface, as 'shading' is all about surfaces.

Modern games do perform the above, but with some clever approximations. The result of importance-sampling that sphere can be computed ahead of time and stored in a big look-up table... The problem is that the table is many-dimensional and would be way too big, so the common approximation basically stores it as a 4D cube-map array plus a 2D error-correction table.
The final implementation looks like simple reflection mapping, but the inner workings are based on physically plausible BRDFs, ray-tracing and importance sampling (or unbiased Monte Carlo sampling if you like). Most importantly, in many cases the results get quite close to the ground truth -- similar to the hideously expensive true ray-tracing renderers, but at a fraction of the cost.
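To make the importance-sampling idea concrete, here's a minimal single-channel sketch. Everything is a stand-in: radiance() plays the role of an environment-map lookup, and the "BRDF" is just a Lambertian cosine term, so this is a toy irradiance estimate rather than a full PBR prefilter.

```cpp
#include <cmath>
#include <random>

const double kPi = 3.14159265358979323846;

// Toy "environment map": brighter toward the zenith.
double radiance(double cosTheta) { return 0.5 + 0.5 * cosTheta; }

// Estimate the hemisphere integral of radiance * cos(theta) with
// cosine-weighted importance sampling: pdf(w) = cos(theta) / pi, so
// each sample contributes radiance * cos / pdf = radiance * pi.
double estimateIrradiance(int numSamples, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i) {
        // cosTheta = sqrt(u) draws directions proportional to cos(theta).
        double cosTheta = std::sqrt(uniform(rng));
        sum += radiance(cosTheta) * kPi;
    }
    return sum / numSamples;
}
```

Because the pdf matches the cosine term, far fewer samples are needed than with uniform sampling; real IBL systems apply the same trick with the BRDF's lobe instead of a plain cosine.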

#5315944 D3DQUERYTYPE_TIMESTAMPFREQ Always GetData equal to 0.1GHz

Posted by on 20 October 2016 - 07:04 AM

That's not necessarily the GPU clock frequency... it's just the frequency of some abstracted clock that D3D9 has decided to use.

10 nanosecond resolution is good enough to implement a coarse-grained profiler with, which is all the D3D9 timer events are good for anyway.
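Whatever clock it actually is, converting raw timestamp ticks to wall time is the same arithmetic; a small sketch (function name hypothetical), which at the reported 0.1 GHz works out to 10 ns per tick:

```cpp
#include <cstdint>

// Convert a raw timestamp delta into milliseconds, given the tick
// frequency reported by the API (e.g. D3DQUERYTYPE_TIMESTAMPFREQ).
double ticksToMilliseconds(std::uint64_t startTicks,
                           std::uint64_t endTicks,
                           std::uint64_t ticksPerSecond) {
    return double(endTicks - startTicks) * 1000.0 / double(ticksPerSecond);
}
```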

#5315905 Shader class design question

Posted by on 20 October 2016 - 01:19 AM

- it's not useful/worth it to handle a VS and PS as completely separate objects
- in practice you should 'author' an 'Effect' (shader), which consists of passes, a vertex shader, a pixel shader, maybe a geometry shader and more
-- the effect can have many/multiple permutations (like the ubershader principle)
- a VS should always be linked to some PS and the other way around;
-- chances are slim that you'll be mixing VS/PS combinations at runtime, independent of the assets/content you load
- this also means there should be a balance between codebase/design flexibility and assumptions about the pipeline/content generation.
From this I conclude it might actually be better to create one class for an 'effect', for now having just a VS and PS.

Yeah, I'd definitely go for having something like a "Program" class, which contains a full set of VS/PS/etc... If you ever want to port to GL, it actually prefers that you work this way anyway!
Sure, if you go with "effects", you can start with an effect having one program, and later expand it to select one program from a collection of permutations/passes.

Note that it's valid to have a program that contains a VS but no PS -- this is common for depth-only rendering.
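A minimal sketch of that grouping; the names Program and ShaderHandle are hypothetical, and plain integers stand in for real API shader objects:

```cpp
#include <cstdint>
#include <optional>

using ShaderHandle = std::uint32_t; // stand-in for an API shader object

// One object owns the whole VS/PS/GS set. A missing pixel shader is
// legal -- that's the depth-only case mentioned above.
struct Program {
    ShaderHandle vertexShader = 0;              // always required
    std::optional<ShaderHandle> pixelShader;    // absent for depth-only
    std::optional<ShaderHandle> geometryShader; // usually absent

    bool isDepthOnly() const { return !pixelShader.has_value(); }
};
```

An "Effect" class can then simply own one Program at first, and grow into a collection of permutations later without changing its public interface.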

#5315883 Shader class design question

Posted by on 19 October 2016 - 04:54 PM

When authoring shaders, IMHO you really want to be authoring something like a Microsoft FX / CgFX "shader", which is a whole collection of different programs that will be used by one object in different circumstances.

In my engine, I author "shaders", which are a collection of passes (where 'pass' is shadow-map, gbuffer attributes, forward-translucent, etc...), which contain one or more permutations, which are a tuple of entry points for PS/VS/etc.
The tool actually bundles many of these "shaders" into a shader pack file (which lets me do things like de-duplicate a bytecode program if two "shaders" happen to have produced the same bytecode at some point), so the main class used by a graphics programmer when loading shaders is called ShaderPack. This class lets you iterate the individual "shaders" and use their IDs to render objects. The back end of the renderer can then use those IDs to peek inside a "shader", select the appropriate pass, select the appropriate permutation of that pass, and then get the right PS/VS pointers to use.

So as a user of the library (graphics programmer) you never get given any classes that act as an interface for PS's/VS's, or even pairs of the two.
Moreover, because the engine loads entire shader packs, all the logic for deciding which PS/VS to use where is moved to the shader compiler tool. Basically, this file format has a big list of VS's/PS's/etc, a big list of shaders with indices of passes, a big list of passes with indices of permutations, and a big list of permutations with indices of VS's/PS's/etc.
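That flat-array-plus-index layout could be sketched roughly like this (all type and field names are hypothetical, not the actual file format):

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct ShaderPack {
    // De-duplicated compiled programs: identical bytecode stored once.
    std::vector<std::vector<std::uint8_t>> bytecode;

    // A permutation is a tuple of indices into the bytecode list.
    struct Permutation { std::uint32_t vsIndex, psIndex; };
    std::vector<Permutation> permutations;

    // A pass (shadow-map, gbuffer, ...) indexes its permutations.
    struct Pass { std::vector<std::uint32_t> permutationIndices; };
    std::vector<Pass> passes;

    // A "shader" (effect) indexes its passes.
    struct Shader { std::string name; std::vector<std::uint32_t> passIndices; };
    std::vector<Shader> shaders;
};
```

Because everything is an index into a shared table, two "shaders" that compile to the same vertex program naturally point at the same bytecode entry.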

#5315814 How does compute shader code size affect performance

Posted by on 19 October 2016 - 06:43 AM

It's a bit hard to tell from your general description... Can you post some code of the theoretical-but-not-practical optimization for us to deconstruct and theory-craft about?

#5315752 Do you ever have to worry about your stories being stolen?

Posted by on 18 October 2016 - 07:49 PM

Yes, you're paranoid. No one cares about your ideas. Everyone has their own ideas that they want to make.


Is there any reason to be afraid of someone making a ripoff? For example, look at Paladins and Overwatch. 

Some of the "ripped off" characters in Paladins were available in a public beta before the "original" character was ever shown to the public in Overwatch... which would make Overwatch a rip-off of Paladins?... or alternatively both are rip-offs of Team Fortress + generic fantasy archetypes...


Also note that even if one of these games was a direct clone of the other, copyright has no issue with that. Generic ideas, themes, settings, archetypes -- e.g. "a dwarf who is good at engineering" -- are not protected IP.

#5315677 What programming language should someone pair with LUA?

Posted by on 18 October 2016 - 07:22 AM

Almost every (console game quality) game engine uses C++ primarily, so that pretty much limits your answers to C++ & Lua :lol:


Besides Cry, Stingray makes extensive use of Lua (more than most engines), and the previous few proprietary engines that I've worked with have been C++(engine) plus C++/Lua(game).

In my own engine, and some proprietary ones that I've worked with, Lua is also used within the toolchain (where it's C# & Lua), and as a DSL for a high-level shader ("FX-like") language alongside HLSL.

#5315619 Sunlight theory

Posted by on 17 October 2016 - 07:41 PM

The sun is a sphere light, but any sphere light can be safely approximated as a point light once you're a certain distance away (I think 5x the sphere's radius is the rule of thumb?), and any point light can be safely approximated as a directional light if you're really far away from it, as at that point all the rays will be pretty much parallel.
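A quick numeric sanity check of those rules of thumb, using the half-angle a sphere subtends from a given distance, asin(r/d). The solar figures below use the real radius (~696,340 km) and Earth distance (~149.6 million km); the function name is hypothetical:

```cpp
#include <cmath>

// Half-angle (in degrees) subtended by a sphere of the given radius
// seen from the given distance.
double subtendedHalfAngleDeg(double radius, double distance) {
    const double kPi = 3.14159265358979323846;
    return std::asin(radius / distance) * 180.0 / kPi;
}
```

At 5 radii the sphere subtends a half-angle of about 11.5 degrees; the sun, at roughly 215 solar radii away, subtends only about 0.27 degrees, which is why treating its rays as parallel (a directional light) works so well.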

#5315496 Game engine and script language

Posted by on 17 October 2016 - 12:04 AM

A data-driven render pipeline requires a file to describe how to draw a frame
 That's an exact example from Stingray :)

The advantage is the same as with gameplay code. You can iterate these kinds of engine features faster (e.g. code reloading), and high level modifications to the engine structure can be made very easily without having to wade through C++ code.

On every single game I've worked on, the team has always made significant changes to the engine, because every game has different requirements from a game engine (there is no "one size fits all" game engine). Stingray is attempting to deal with this by writing engine modules in C++, and then configuring them / "plugging them together" in Lua, to make an engine that can be used for many different kinds of games without the need to rewrite any C++ code.
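As a hedged illustration of the idea (not Stingray's actual format), the list of passes that makes up a frame can live entirely in data, while generic engine code just walks it; here the "file" is a hard-coded array for brevity:

```cpp
#include <string>
#include <vector>

struct PassDesc {
    std::string name;   // what this pass does
    std::string target; // render target to bind
};

// In a real engine this would be parsed from a config/script file;
// reordering or renaming passes then needs no C++ recompile.
std::vector<PassDesc> loadFrameDescription() {
    return { {"shadow",   "shadow_map"},
             {"gbuffer",  "gbuffer_rt"},
             {"lighting", "hdr_rt"} };
}

// Generic interpreter: walk the data and execute each pass.
std::vector<std::string> executeFrame(const std::vector<PassDesc>& frame) {
    std::vector<std::string> executed;
    for (const auto& pass : frame)
        executed.push_back(pass.name); // real draw calls would go here
    return executed;
}
```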

#5315469 How simultaneously both read from and write to a texture works through UAV

Posted by on 16 October 2016 - 05:07 PM

If you've got potential race conditions (e.g. scattered writes, with multiple threads writing to the same locations), then it's up to you to specify appropriate memory barriers within your shader to provide appropriate synchronization between threads.

#5315467 Questions about Physical based shading(PBS)?

Posted by on 16 October 2016 - 03:50 PM

I would not call the first method physically based since it does not even trace the path of photons.

Physically based doesn't mean physically accurate, just that it is based on real physics (unlike, e.g. Phong shading, which was based on intuition, or the standard implementation of Blinn-Phong which is based on a solid theoretical foundation but blatantly violates the conservation of energy).
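A toy numeric check of that energy-conservation point: the hemisphere integral of a cos^n specular lobe has the closed form 2*pi/(n+1), so an unnormalized lobe's total reflected energy changes as you change the exponent. Energy-conserving variants multiply by a matching normalization factor (here (n+1)/(2*pi) for the bare lobe; published Blinn-Phong normalizations differ slightly because they include the extra cosine term):

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Numerically integrate cos(theta)^n over the hemisphere
// (midpoint rule in theta; azimuth contributes the 2*pi factor).
double lobeIntegral(double n, int steps = 100000) {
    double sum = 0.0;
    double dTheta = (kPi / 2.0) / steps;
    for (int i = 0; i < steps; ++i) {
        double theta = (i + 0.5) * dTheta;
        sum += std::pow(std::cos(theta), n) * std::sin(theta) * dTheta;
    }
    return 2.0 * kPi * sum;
}
```

The integral for n = 100 is about ten times smaller than for n = 10, i.e. a sharper unnormalized highlight silently loses energy instead of concentrating it.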

However, yes, comparing any PBR technique to the real world lets us measure how correct it is. Or, in games we often compare our PBR renderers to existing film-quality PBR renderers :lol:

PBS is the approximate way to PBR, am I right?

No. Rendering covers the whole system (lighting, geometry, shadows, shading, post processing, etc) whereas shading is just the interaction between lights and materials.
PBS is a small part of PBR.

In (1), it uses an environment map to do the light calculation. And in (2) it uses lights that we define (like point lights, directional lights) to do the light calculation.

My question is:

(1) What is the difference between the two methods?

(2) In my understanding, (1) does the indirect lighting and (2) does the direct lighting, don't they?

(3) If so, do I need both of them in my light calculation? And how do I combine them in the final result?

I haven't read your two articles yet, but just to be clear, PBS/PBR are not nouns / they are not a specific thing. PBS/PBR are adjectives that can be used to describe endless things.
Two different games might both say "We do PBR!", but their algorithms/shaders/code are probably completely different from each other, because what they mean is "our renderer uses some algorithms that are based on approximations of physics".

As for combining direct vs indirect lighting - all lighting is additive. Indirect lighting is just another type of light source such as spot, point, directional, etc...
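In code, that additivity is literally a sum; a single-channel toy sketch (all names hypothetical) in which the indirect term is just one more contribution added to the direct lights:

```cpp
#include <vector>

// One direct light's contribution at a surface point (toy model).
double directLight(double brdf, double irradiance) {
    return brdf * irradiance;
}

// Final shading: indirect lighting is simply another additive term
// alongside the per-light direct contributions.
double shade(const std::vector<double>& directContributions,
             double indirectContribution) {
    double total = indirectContribution;
    for (double c : directContributions)
        total += c;
    return total;
}
```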

In the real world there is a lot of indirect lighting going on. When you reproduce a real-world material using PBS but only light it with direct lights (point, spot, etc), it tends to look very wrong, as that lighting situation is unrealistic... So once you have PBS, figuring out a way to do more and more PBR (such as indirect lighting) becomes important.

There is no one standard implementation of PBS, but the most common techniques at the moment are covered in the "Real Shading in Unreal Engine 4" course notes by Brian Karis.

#5315410 UML diagrams for video games

Posted by on 16 October 2016 - 07:39 AM

I'm a student and I want to develop a project (a video game), and the "UML" is mandatory. When using a game engine, how do you "UML" these abstracted parts (physics, tweening, audio players, etc)?

Can you be more specific in the part that you're having trouble with / what you've tried so far? What's different about a game engine to the other problems that you've used UML for so far?


If two parts are not associated with each other (physics/audio) then you end up with separate UML diagrams for each part, which is a good thing! This means that each diagram is simpler, and when working on physics, you only have to look at the design for physics.

At a higher level in the engine you might end up with diagrams that tie more systems together -- e.g. Race-Car is a Car, Race-Car has a PhysicsRigidBody, Race-Car has an Audio-SoundSource.
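Those relationships map directly onto code, which is what the UML boxes and arrows would describe; a sketch using the example's class names, with all bodies elided:

```cpp
// Subsystem types that live on their own, simpler diagrams.
struct PhysicsRigidBody {};
struct AudioSoundSource {};

struct Car {};

// The higher-level diagram ties the systems together:
struct RaceCar : Car {            // UML generalization (is-a)
    PhysicsRigidBody body;        // UML composition (has-a)
    AudioSoundSource engineSound; // UML composition (has-a)
};
```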


Agreeing and disagreeing with what other people have said above -- UML is used as a tool to communicate, but it is used quite rarely. Often a kind of "pseudo UML" (like pseudocode) is used rather than a strict interpretation of UML...

For an example, here's a very, very early design from our rendering library:




#5315371 Shader Permutations

Posted by on 15 October 2016 - 05:42 PM

D3D9 - individual uniforms, set on device.
GL2 - individual uniforms, set on program (such silly).
D3D11 - uniform buffers set on device.
GL3/GL4 - can mix GL2 and D3D11 approaches.
Vulkan/D3D12 - can mix D3D9 and D3D11 approaches.

When using uniform buffers, you can use the location keyword (the register keyword in D3D) to associate a shader variable with a particular integer binding location. This lets all the permutations use the same buffer-binding logic. What you do need is some kind of bitmask that specifies whether a particular UBO binding slot/location exists for a permutation, so that you don't try to perform an invalid operation or simply waste time binding unused data (unused uniform declarations are optimized out).
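A sketch of that bitmask idea, with hypothetical names: each permutation records which UBO slots survived compilation, and the bind loop consults the mask before touching a slot:

```cpp
#include <cstdint>

constexpr int kMaxUboSlots = 16;

// One entry per compiled permutation: bit i set means UBO slot i
// still exists (wasn't optimized out) in this permutation.
struct PermutationInfo {
    std::uint32_t usedUboSlots = 0;
};

inline bool slotIsUsed(const PermutationInfo& p, int slot) {
    return (p.usedUboSlots >> slot) & 1u;
}

// How many bind calls this permutation actually needs.
inline int countBinds(const PermutationInfo& p) {
    int n = 0;
    for (int i = 0; i < kMaxUboSlots; ++i)
        if (slotIsUsed(p, i)) ++n;
    return n;
}
```

At draw time the renderer loops over the slots it has data for and skips any whose bit is clear, so every permutation shares one binding code path.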

Alternatively, yes, without your shaders explicitly declaring UBO locations, you'd need a big dictionary where you could look up how to bind a buffer based on the shader variable name and the current permutation...

#5315276 An alternative to "namespaces"

Posted by on 14 October 2016 - 09:31 PM

You may want to change the names to a specific job, for example instead of "Node" in your scene solution, you may call it "SceneNode" and then you'd know... it's a scene node.

Yeah, this. More general nouns are valuable lexical real-estate. Names such as Node/Handle/Buffer/etc could be used by endless different systems, so it's almost selfish to use them. This is especially important when working in larger teams.

An alternative to using more descriptive names is extensive use of namespaces, so within each module you're free to use common/bland names.

I'm not a fan of namespaces either, so a middle ground is inner classes, such as std::vector<Foo>::iterator, or:
struct QuadTreeTerrainRenderer {
  struct Node {...};
};