
#5316700 Pcars game to xbox1 and ps4

Posted on Yesterday, 06:52 PM

Maybe I said it wrong. Let's say a big game was made and was already running on Xbox 360 and PS3 -- how much do you think it would cost to make it play on Xbox One and PS4?

I've done 360/PS3 -> One/PS4 ports as a contract job before. At a complete guess, there's around six man-months of effort required in porting a large game engine to a new platform, so 12 man-months for two platforms.
A highly skilled contractor for that kind of work might ask for $10k per man-month, so $120k total.
Or if you can find someone who can do it in half the time (a 3 month port per platform) and only charge $20/hr, the cost would be about $20k total...
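
Spelled out (assuming roughly 160 working hours per man-month -- my assumption, not a standard figure):

$$6 \times 2 \times \$10\text{k} = \$120\text{k} \qquad\text{vs.}\qquad 3 \times 2 \times 160\,\text{h} \times \$20/\text{h} \approx \$19\text{k}$$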

(and that's just the technology -- if your art "isn't HD enough" and has to be updated, you might have to fork out $1M or more...)

#5316699 When will DX11 become obsolete?

Posted on Yesterday, 06:48 PM

As you say, DX9 is still in active use, so DX11 seems pretty safe.


DX10 is "dead" because there's no reason to use it -- DX11 includes a "D3D10 feature level", which allows it to act just like DX10.
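
For reference, a minimal sketch (illustrative only, error handling omitted) of asking the D3D11 API for a device restricted to 10.0-class hardware:

```cpp
#include <d3d11.h>

bool CreateDx10LevelDevice(ID3D11Device** outDevice, ID3D11DeviceContext** outContext)
{
    // Accept only feature level 10.0, i.e. DX10-class hardware.
    const D3D_FEATURE_LEVEL requested = D3D_FEATURE_LEVEL_10_0;
    D3D_FEATURE_LEVEL obtained;
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        &requested, 1,        // list of acceptable feature levels (just one here)
        D3D11_SDK_VERSION,
        outDevice, &obtained, outContext);
    return SUCCEEDED(hr);
}
```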

#5316546 Weird behavior on Timestamp query heap for GTX 1080

Posted on 24 October 2016 - 08:59 PM

Does setting SetStablePowerState(TRUE) affect your results at all?

The GPU profiler I wrote is a very standard one, using a timestamp query heap and a map/unmap once every frame when copying the data out.

Do you use ResolveQueryData to copy the timestamps into a buffer and then map that buffer?

How do you synchronize this so that the GPU isn't accessing the buffer while the CPU has it mapped?
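
To illustrate what I'm asking about, here's a hedged sketch of that synchronization pattern (all names and scaffolding here are hypothetical, not your code):

```cpp
#include <d3d12.h>
#include <cstring>

// Record the resolve into the command list, then fence the queue after submission.
void ResolveTimestamps(ID3D12GraphicsCommandList* cmdList, ID3D12QueryHeap* queryHeap,
                       ID3D12Resource* readbackBuffer, UINT numQueries,
                       ID3D12CommandQueue* queue, ID3D12Fence* fence, UINT64& fenceValue)
{
    // Copy the timestamps from the query heap into a READBACK-heap buffer.
    cmdList->ResolveQueryData(queryHeap, D3D12_QUERY_TYPE_TIMESTAMP,
                              0, numQueries, readbackBuffer, 0);
    // ... close + execute the command list, then signal the fence:
    queue->Signal(fence, ++fenceValue);
}

// Only map once the GPU has finished writing (e.g. a frame or two later).
void TryCopyResults(ID3D12Resource* readbackBuffer, UINT numQueries,
                    ID3D12Fence* fence, UINT64 fenceValue, UINT64* out)
{
    if (fence->GetCompletedValue() < fenceValue)
        return;  // GPU still busy; try again next frame
    const D3D12_RANGE readRange = { 0, numQueries * sizeof(UINT64) };
    void* data = nullptr;
    if (SUCCEEDED(readbackBuffer->Map(0, &readRange, &data)))
    {
        std::memcpy(out, data, numQueries * sizeof(UINT64));
        const D3D12_RANGE emptyRange = { 0, 0 };  // CPU wrote nothing
        readbackBuffer->Unmap(0, &emptyRange);
    }
}
```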

#5316387 [c++] ENet

Posted on 23 October 2016 - 07:04 PM

Both RakNet and ENet have pretty quiet commit logs these days because they're mature/stable products that don't need to be tinkered with further.

There's this branch though which contains community updates to RakNet:


#5316310 Blit depth buffer not working

Posted on 23 October 2016 - 06:13 AM

Does anyone know if there's an alternative way to combine forward shading with deferred shading?

The normal thing to do would be to just use the same depth buffer for both passes. Why do you have to use two and blit the data between them?
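
e.g. in GL terms, a hedged sketch (assumes a GL 3+ context and a loader; all names here are hypothetical):

```cpp
#include <glad/glad.h>  // or your GL loader of choice

// Attach the SAME depth texture to both framebuffers, so the deferred pass and
// the forward pass test/write against identical depth data -- no blit required.
void ShareDepth(GLuint gbufferFbo, GLuint forwardFbo, GLuint sharedDepthTex)
{
    glBindFramebuffer(GL_FRAMEBUFFER, gbufferFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, sharedDepthTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, forwardFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, sharedDepthTex, 0);
}
```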

#5316272 Estimating the performance of my application a priori?

Posted on 22 October 2016 - 03:46 PM

Add up how many megabytes of data you need to access per simulation timestep. Multiply by the framerate. Compare this to your RAM throughput. If your estimate of your simulation's data throughput is larger than your RAM's actual throughput, then yes, your sim is too complex (or the timestep is too low). Tweak it and try again.
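
As a worked example of that estimate (every number below is invented for illustration):

```cpp
// Back-of-the-envelope feasibility check; all figures are made-up examples.
const double bytesPerStep   = 100e6;   // say the sim touches 100 MB per timestep
const double stepsPerSecond = 60.0;    // one timestep per frame at 60 Hz
const double requiredBW     = bytesPerStep * stepsPerSecond;  // = 6 GB/s
const double availableBW    = 25e9;    // e.g. ~25 GB/s of practical DDR4 bandwidth
const bool   feasible       = requiredBW < availableBW;       // true: 6 < 25 GB/s
```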

#5316268 Questions about Physical based shading(PBS)?

Posted on 22 October 2016 - 03:23 PM

Shading generally means just the interaction between lights and materials, which is a subset of rendering.
There's the "rendering equation", which describes the truth about how to correctly render anything. You can implement that equation using many different algorithms, including ray-tracing or rasterization, so no one algorithm gets to have a monopoly on it.

Blinn-Phong is often used within PBR/PBS renderers. Instead of simply using Blinn-Phong as a specular BRDF as-is, you add a normalization factor to deal with energy conservation and use it as an NDF within a more robust BRDF, such as Cook-Torrance. The resulting BRDF (which internally uses Blinn-Phong to describe the distribution of normals) is both soundly rooted in physical theory and able to reproduce many real-world materials (as measured by a gonioreflectometer) with very small amounts of error.
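
For concreteness, the usual normalization for the Blinn-Phong NDF (a standard result, though I'm writing it from memory, so verify before relying on it) is:

$$D(\mathbf{h}) = \frac{n+2}{2\pi}\,(\mathbf{n}\cdot\mathbf{h})^{n}$$

which is chosen so that $\int_{\Omega} D(\mathbf{h})\,(\mathbf{n}\cdot\mathbf{h})\,\mathrm{d}\omega = 1$ over the hemisphere -- that's the energy-conservation property mentioned above.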

If you put that shader in a game engine, the marketing team would want to put the "PBR tickbox" on the blurb, when a more accurate description would be that you're in the process of working towards PBR by first adopting PBS -- it would be fair to say that just your shading "is PB" while the rest of the renderer is seat-of-your-pants.

The difference between current PBR real-time renderers and typical/previous real-time renderers is that previously we didn't care about checking our work against the rendering equation to see how wrong it was.
Sure, they're still not doing "just ray-trace all the things!", but they're using the same shading that a PBR ray-tracer would use, and while their light transport is incredibly simplified in order to run in real time, it is based on the same (correct) principles of how light should behave, and this is validated by comparing rendered results against "known good" ray-tracing renderers. In simple scenes where the approximations aren't too approximate, the real-time engines can actually produce very nearly the same result as the ray-traced engines, simply because both are physically based.

#5316208 What is the future of Real Time Rendering?

Posted on 22 October 2016 - 06:41 AM

imho all (or at least most) screen space techniques should go.

For example, for AO you would want something like this, but it's too performance/memory hungry, so we fake it in screen space.

In the meantime, the state of the art involves refining screen-space approaches to use algorithms that are consistent with reality, and validating their outputs against those more robust approaches (voxels, ray-tracing, etc.). e.g. see the "Ground-Truth AO" SSAO technique: http://iryoku.com/downloads/Practical-Realtime-Strategies-for-Accurate-Indirect-Occlusion.pdf

#5315990 Questions about Physical based shading(PBS)?

Posted on 20 October 2016 - 02:58 PM

If a simple reflection map is physically based, then I don't know what is not physically based.

A simple reflection map is not physically based. A complex IBL system can be.
e.g. instead of just taking an environment map and texturing it onto the surface to get a gloss effect (which will intuitively look good, but is not going to match the real world), if you ray-traced that environment as a surrounding sphere and integrated a thousand different rays using a physically-based BRDF and importance sampling, then your results would come close to matching the real world. The key is using a physically-based BRDF to represent the surface, as 'shading' is all about surfaces.
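
To make that concrete, here's a minimal, hedged sketch of the idea (toy BRDF, fake environment, plain uniform sampling rather than importance sampling, and every name is hypothetical):

```cpp
#include <cmath>
#include <cstdlib>

const double PI = 3.14159265358979323846;

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Stand-ins for a real environment map and a real physically-based BRDF:
double envRadiance(Vec3 dir)        { return dir.z > 0.0 ? 1.0 : 0.1; }
double brdf(Vec3 l, Vec3 v, Vec3 n) { return 1.0 / PI; }  // toy Lambert term

// Monte Carlo estimate of the reflection integral over the hemisphere,
// assuming the surface normal n is +Z for brevity.
double shade(Vec3 n, Vec3 v, int numSamples)
{
    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i)
    {
        double u1 = rand() / (double)RAND_MAX;
        double u2 = rand() / (double)RAND_MAX;
        double z = u1, r = std::sqrt(1.0 - z*z), phi = 2.0 * PI * u2;
        Vec3 l = { r*std::cos(phi), r*std::sin(phi), z };  // uniform hemisphere dir
        double pdf = 1.0 / (2.0 * PI);                     // uniform hemisphere pdf
        sum += envRadiance(l) * brdf(l, v, n) * dot(n, l) / pdf;
    }
    return sum / numSamples;
}
```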

Modern games really do do the above, but with some clever approximations. The result of importance-sampling that sphere can be computed ahead of time and stored in a big lookup table... The problem is that it's many-dimensional and would be way too big, so the common approximation basically stores it as a 4D cube array plus a 2D error-correction table.
The final implementation looks like simple reflection mapping, but the inner workings are based on physically plausible BRDFs, ray-tracing and importance sampling (or unbiased Monte Carlo sampling if you like), and most importantly, in many cases the results get quite close to the ground truth -- similar to the hideously expensive true ray-tracing renderers, but at a fraction of the cost.

#5315944 D3DQUERYTYPE_TIMESTAMPFREQ Always GetData equal to 0.1GHz

Posted on 20 October 2016 - 07:04 AM

That's not necessarily the GPU clock frequency... it's just the frequency of some abstracted clock that D3D9 has decided to use.

10 nanosecond resolution is good enough to implement a coarse-grained profiler with, which is all the D3D9 timer events are good for anyway.

#5315905 Shader class design question

Posted on 20 October 2016 - 01:19 AM

- it's not useful/worth it to handle a VS and PS as completely different objects
- in practice you should 'author' an 'Effect' (shader), which consists of passes, a vertex shader, a pixel shader, maybe a geometry shader and more
-- the effect can have many/multiple permutations (like the ubershader principle)
- a VS should always be linked to some PS and the other way around;
-- chances are low that you'll be mixing VS/PS combinations at runtime, independently of the assets/content you load.
- this also means that there should be a balance between codebase/design flexibility and assumptions about the pipeline/content generation.
From this I conclude it might actually be better to create one class for an 'effect', for now having just a VS and PS.

Yeah, I'd definitely go for having something like a "Program" class, which contains a full set of VS/PS/etc... If you ever want to port to GL, it actually prefers that you work this way anyway!
Sure, if you go with "effects", you can start with each effect having one program, and expand it later so it can select one from a collection of permutations/passes.

Note that it's valid to have a program that contains a VS but no PS -- this is common for depth-only rendering.
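
Something along these lines (a hedged, D3D11-flavoured sketch, not a prescription; all names are made up):

```cpp
#include <d3d11.h>

// A full set of pipeline stages, treated as one unit.
struct Program
{
    ID3D11VertexShader*   vs = nullptr;  // always present
    ID3D11PixelShader*    ps = nullptr;  // may legitimately be null (depth-only pass)
    ID3D11GeometryShader* gs = nullptr;  // optional

    void Bind(ID3D11DeviceContext* ctx) const
    {
        ctx->VSSetShader(vs, nullptr, 0);
        ctx->PSSetShader(ps, nullptr, 0);  // binding a null PS is valid
        ctx->GSSetShader(gs, nullptr, 0);
    }
};
```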

#5315883 Shader class design question

Posted on 19 October 2016 - 04:54 PM

When authoring shaders, IMHO you really want to be authoring something like a Microsoft FX / CgFX style shader, which is a whole collection of different programs that will be used by one object in different circumstances.

In my engine, I author "shaders", which are a collection of passes (where 'pass' is shadow-map, gbuffer attributes, forward-translucent, etc...), which contain one or more permutations, which are a tuple of entry points for PS/VS/etc.
The tool actually bundles many of these "shaders" into a shader-pack file (which lets me do things like de-duplicate a bytecode program if two "shaders" happen to have produced the same bytecode at some point), so the main class used by a graphics programmer when loading shaders is called ShaderPack. This class then lets you iterate the individual "shaders" and use their IDs to render objects. The back end of the renderer can then use those IDs to peek inside a "shader", select the appropriate pass, select the appropriate permutation of that pass, and then get the right PS/VS pointers to use.

So as a user of the library (graphics programmer) you never get given any classes to act as an interface for PS's/VS's or even pairs of the two.
Moreover, because the engine loads entire shader packs, all the logic for deciding which PS/VS to use where is moved into the shader compiler tool. Basically, this file format has a big list of VS's/PS's/etc., a big list of shaders with indices of passes, a big list of passes with indices of permutations, and a big list of permutations with indices of VS's/PS's/etc.
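
In data terms, that layout is roughly this (a hedged sketch; the real format is binary and all names here are invented):

```cpp
#include <cstdint>
#include <vector>

// Flat arrays plus indices -- this is what makes bytecode de-duplication easy.
struct Permutation { uint32_t vsIndex, psIndex; };         // indices into bytecode list
struct Pass        { std::vector<uint32_t> permutations; };
struct Shader      { std::vector<uint32_t> passes; };

struct ShaderPack
{
    std::vector<std::vector<uint8_t>> bytecode;  // de-duplicated VS/PS/etc. blobs
    std::vector<Permutation> permutations;
    std::vector<Pass>        passes;
    std::vector<Shader>      shaders;            // iterate these; index == shader ID
};
```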

#5315814 How does compute shader code size affect performance

Posted on 19 October 2016 - 06:43 AM

It's a bit hard to tell from your general description... Can you post some code of the theoretical-but-not-practical optimization for us to deconstruct and theory-craft about?

#5315752 Do you ever have to worry about your stories being stolen?

Posted on 18 October 2016 - 07:49 PM

Yes, you're paranoid. No one cares about your ideas. Everyone has their own ideas that they want to make.


Is there any reason to be afraid of someone making a ripoff? For example, look at Paladins and Overwatch. 

Some of the "ripped off" characters in Paladins were available in a public beta before the "original" character was ever shown to the public in Overwatch... which would make Overwatch a rip-off of Paladins?... or alternatively, both are rip-offs of Team Fortress + generic fantasy archetypes...


Also note that even if one of these games was a direct clone of the other, copyright has no issue with that. Generic ideas, themes, settings, archetypes -- e.g. "a dwarf who is good at engineering" -- are not protected IP.