
Member Since 14 Feb 2007

#5311145 3D Manipulators/Gizmos in DirectX

Posted by on 16 September 2016 - 06:33 PM

These are sometimes just called widgets and gizmos too :lol:

I'm using this one https://github.com/CedricGuillemet/ImGuizmo

#5311134 Data-Driven Renderer - how to modify the pipeline on-the-fly?

Posted by on 16 September 2016 - 04:38 PM

* Make it code driven :lol:
One non-facetious way of doing that would be to use something like Lua to store the graph - it's good at storing JSON-like data tables, but is also a fully fledged language.
* Add a small amount of branching logic to your DSL - e.g. this branch of the graph is only active when some condition is true.
* Make a whole boatload of different data files and load the right permutation depending on the situation.

As for the pooling, we describe every target as if it's unique to its pass - e.g. Bloom declares that it requires a half-res buffer, then DOF also declares that it requires a half-res buffer. If you then walk through the graph, allocating a target as a pass needs it and returning it to a pool when the pass is done (using only size/format, not name), then you've now got a minimal pool of targets that can be reused across passes (e.g. DOF and bloom will reuse the same half-res buffer).
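A minimal sketch of that pooling scheme (names are illustrative, not from any real engine) -- targets are keyed only by size/format, so different passes transparently share memory:

```cpp
#include <cassert>
#include <map>
#include <tuple>
#include <vector>

using TargetKey = std::tuple<int, int, int>; // width, height, format id

struct TargetPool {
    std::map<TargetKey, std::vector<int>> free; // released target handles, per key
    int nextHandle = 0;
    int totalAllocated = 0;

    // Acquire a target matching the key; only create a new one if none is free.
    int acquire(int w, int h, int format) {
        auto& list = free[{w, h, format}];
        if (!list.empty()) { int t = list.back(); list.pop_back(); return t; }
        ++totalAllocated;
        return nextHandle++;
    }
    // Return a target to the pool so later passes can reuse it.
    void release(int w, int h, int format, int target) {
        free[{w, h, format}].push_back(target);
    }
};
```

e.g. if bloom acquires a half-res buffer and releases it when done, a later DOF pass asking for the same size/format gets the same buffer back instead of a new allocation.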

#5310999 Would you try a language if all you had to do was install a Visual Studio ext...

Posted by on 15 September 2016 - 05:06 PM

It seems like a good strategy to me, except for the Linux/Mac crowd obviously, where you'll have to duplicate your efforts on an easy IDE setup based on some OSS.

But... Personally my barrier to trying a new language is interest. Most new language websites do not convince me to even look at their hello world example. And many that do, don't have enough interesting code examples on their site to keep me interested enough to download it. Lastly, there's platform support - if it's not on all my platforms, then I'd only be looking into it for curiosity's sake, which is low.

A really good pitch as to why this language will make my life easier, followed by some convincing code examples, followed by, "oh btw just click this and VS will be good to go" would certainly do it though.

e.g. The last 'new' language that I actually picked up was ISPC, as the pitch was targeted at me - "it's shader-like, C-like, automatic SIMD usage for the CPU", it showed me some code samples that proved the pitch, and I was able to easily download it and integrate it into Visual Studio :D plus as a backup for compiler portability, they support transpiling to C++.

#5310997 Writing a Dx12/Vulkan engine

Posted by on 15 September 2016 - 04:57 PM

IMHO, if you design for Dx12/Vulkan, then that design will work well on Dx11/GL too... But, if you design for Dx11/GL, then that design will not be good for Dx12/Vulkan.

My design is roughly described here: http://tiny.cc/gpuinterface

These kinds of "stateless" designs map to the modern APIs very well, because they allow you to prepare a lot of work ahead of draw time, such as the pipeline state.
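As a rough sketch of the idea (all names hypothetical), a "stateless" draw item bundles every piece of state with the draw, so items can be built on any thread at any time and then sorted to minimise pipeline-state changes before submission:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// A self-contained "draw item" captures everything a draw needs,
// so nothing depends on previously-set device state.
struct DrawItem {
    uint32_t pipelineState; // full PSO id -- shaders, blend, raster state, etc.
    uint32_t resourceSet;   // textures/buffers bound for this draw
    uint32_t vertexCount;
};

// Because items carry their own state, reordering them is always safe;
// here we sort to group draws that share a pipeline state object.
inline void sortByPipeline(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.pipelineState < b.pipelineState;
              });
}
```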

#5310872 Frame Time Swap Throttled

Posted by on 14 September 2016 - 11:47 PM

Does this happen to be on an Xbone with the Remote Control window active? If so, try closing it.

#5310707 Alpha Blend for monocolor (or min/max depth buffer)

Posted by on 14 September 2016 - 02:33 AM

However, I didn't use the other three channels at all, so I was wondering can we enable alpha blend for format like DXGI_FORMAT_R16G16_FLOAT? which essentially have only one color channel and one alpha channel?
Or there is a much better way (efficient way) to get pixel's min/max depth?

As well as the MRT suggestion, you could output r=z and g=1-z, and then just use min blending (and remember later that G contains 1-z, not z :wink:).
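Here's a minimal CPU-side simulation of that trick (illustrative code, not shader source): clear the target to (1, 1), have each fragment write (z, 1-z) with MIN blending on both channels, then recover the min and max depth afterwards:

```cpp
#include <algorithm>
#include <cassert>

// Simulates an R16G16 target with MIN blending on both channels.
struct Pixel { float r = 1.0f; float g = 1.0f; }; // cleared to (1, 1)

inline void blendFragment(Pixel& p, float z) {
    p.r = std::min(p.r, z);        // R accumulates min(z) over all fragments
    p.g = std::min(p.g, 1.0f - z); // G accumulates min(1-z) == 1 - max(z)
}

inline float minDepth(const Pixel& p) { return p.r; }
inline float maxDepth(const Pixel& p) { return 1.0f - p.g; } // undo the 1-z encoding
```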

#5310683 One Index Buffer Multiple Line Strip

Posted by on 13 September 2016 - 11:31 PM

also BTW I am using DirectX 12.

In D3D11, primitive restart was permanently enabled, but in D3D12 it must be configured.
In your D3D12_GRAPHICS_PIPELINE_STATE_DESC struct, there is an IBStripCutValue member -- you need to set this to D3D12_INDEX_BUFFER_STRIP_CUT_VALUE_0xFFFF.
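For example, the relevant part of the PSO setup might look like this (a fragment only -- the rest of the desc is assumed to be filled in elsewhere):

```cpp
#include <d3d12.h>

D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
// ... root signature, shaders, blend/raster/depth state, etc. ...
// Enable primitive restart for 16-bit index buffers:
desc.IBStripCutValue = D3D12_INDEX_BUFFER_STRIP_CUT_VALUE_0xFFFF;
desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_LINE; // line strips
```

Then just write 0xFFFF into the index buffer wherever one strip should end and the next begin.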

#5310520 Affordable Copy Protection

Posted by on 12 September 2016 - 06:56 PM

So as time progresses it becomes a horrible buggy mess that displays adverts to buy the game across the screen.

Ads could be fine, but DO NOT MAKE IT BUGGY! I can't find a reference right now, but I know of at least one example where the devs thought it was a good idea to introduce bugs in the pirated version. This resulted in horrible reviews and complaints on their forums.

That might have been Batman Arkham Asylum -- IIRC, at least one reviewer wrote their review using a pirate copy of the game, which contained deliberate bugs.

The FADE system mentioned by tragic was not subtle though. It used to print the text "Original copies do not FADE" into the chat area right before it triggered any bugs -- letting you know the game was misbehaving because they'd flagged you as a pirate.

#5310431 Faster Sin and Cos

Posted by on 12 September 2016 - 05:14 AM

On the GPU, I've used these before: https://seblagarde.wordpress.com/2014/12/01/inverse-trigonometric-functions-gpu-optimization-for-amd-gcn-architecture/

You might find his discussion of different methods of interest.

#5310414 DX11 black screen when windowed.

Posted by on 12 September 2016 - 02:21 AM

I've heard of other people having black screen issues when they forget to use GetClientRect to find out the actual size of their window.

e.g. if you make a 1280x720 window, and Windows draws a border around the edge, you might only have a 1270x710 client area -- so that's the back buffer / swap chain resolution that you should use.

#5310339 How to use a big constant buffer in DirectX 11.1?

Posted by on 11 September 2016 - 08:23 AM

Just curious, in which scenarios or stages in the rendering pipeline would it be preferable to use a large CB over several smaller ones?

This technique (*SSetConstantBuffers1) doesn't have much benefit outside of simplifying resource management for the driver (a potential reduction of CPU overhead). You can create one buffer resource instead of many (fewer things for the driver to track), and you can update/map that big buffer in one go instead of having to do many smaller update/map operations. This is especially useful if your engine is structured in such a way that you're able to update the constants for many objects at a single point in the frame (after updating the game logic for all of them, but before drawing any of them).


In the general case, you should use as few buffers per draw as possible, as every resource binding incurs CPU overhead... However, if you take that advice to the extreme, every draw call would use a single cbuffer containing every constant/uniform variable that it requires, which creates a different problem: now the CPU is doing a lot of cbuffer updates -- one very large update per draw... So you also want to use as many buffers per draw as possible in order to reduce the number of cbuffer updates required. Yes, that's two opposite rules of thumb :lol:

The sweet spot in the middle of those two bits of advice generally means separating constants by update frequency -- so a draw might have one cbuffer for things that change once per frame (camera matrices, etc.), one for things that change once per material (colours, scales, etc.), and one for things that change once per mesh (world matrix, etc.)...
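As a sketch of that split (struct names and fields are made up for illustration), the C++ mirrors of those three cbuffers might look like this -- keeping each one's size a multiple of 16 bytes to match HLSL's 16-byte register packing:

```cpp
#include <cassert>
#include <cstddef>

struct alignas(16) PerFrameConstants {    // updated once per frame
    float viewProj[16];
    float cameraPos[3];
    float time;
};
struct alignas(16) PerMaterialConstants { // updated once per material
    float baseColour[4];
    float roughness;
    float metalness;
    float padding[2]; // pad out to a 16-byte boundary
};
struct alignas(16) PerObjectConstants {   // updated once per mesh
    float world[16];
};
```

Each struct maps to its own cbuffer slot, so e.g. PerFrameConstants is uploaded once and shared by every draw in the frame.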

#5310304 Affordable Copy Protection

Posted by on 10 September 2016 - 09:56 PM

Standard one these days is Steam :D
But that only allows content based demos, not time based ones.

#5310294 Does NDA really work?

Posted by on 10 September 2016 - 06:51 PM

The only NDAs I've had to sign (or requested others sign) have been with business partners, which means there's already a certain level of trust and legitimacy in the relationship.

I wouldn't send my trade secrets along with an NDA to any old yahoo on the internet... Of course that would end in a leak and an unenforceable legal situation.

Usually the professional thing to do is to assume that everything between you and your partners is confidential, but an NDA lets each party know explicitly what it is that you really do want to keep secret (and that a leak of it would destroy the relationship at the least).

#5310247 Vector and matrix multiplication order in DirectX and OpenGL

Posted by on 10 September 2016 - 08:16 AM

So, in C++ if you have a 1D array of 16 elements, the order in memory should be "column order indexing" even though in the shader you are doing "math row major" multiplication.

Yes, whether you're doing "math row major" multiplication or not is irrelevant.
The only time you should use row-major ordering in C++ is if you've also used the row_major keyword to tell your shaders to interpret the memory using that convention.

and if you are doing "math column major" multiplication your C++ memory layout for the array should be in "row major indexing".

No. There's no connection between whether you should use a particular "comp sci majorness" and a "math majorness". Comp-sci-row-major and math-column-major will work together just fine.

You just need to make sure that:
* If you use comp-sci column-major memory layout in the C++ side, then your shaders should work out of the box (just avoid the row_major keyword!).
* If you use comp-sci row-major memory layout in the C++ side, then use the row_major keyword in your shaders so that they interpret your memory correctly.
And separately:
* That your math makes sense, from a purely mathematical perspective  :)
* i.e. The choice of row-vectors / column-vectors, basis vectors in rows / basis vectors in columns, pre-multiply / post-multiply all depend on which mathematical conventions you want to use. These are all well defined and work as long as you're consistent.
* The math conventions that you choose have no impact whatsoever on which comp-sci conventions you can use.
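To illustrate that independence, here's a small C++ sketch (all names hypothetical) that deliberately mixes row-vector maths with comp-sci column-major storage -- and it works fine:

```cpp
#include <cassert>

// Math convention: row-vectors, P = vM, translation in the bottom row (row 3).
// Storage convention: comp-sci column-major, element (row, col) at m[col*4 + row].
inline void mulRowVector(const float v[4], const float m[16] /*column-major*/,
                         float out[4]) {
    for (int c = 0; c < 4; ++c) {
        out[c] = 0.0f;
        for (int r = 0; r < 4; ++r)
            out[c] += v[r] * m[c * 4 + r]; // fetch (r, c) from column-major storage
    }
}

// Build a row-vector-convention translation matrix, stored column-major.
inline void makeTranslation(float m[16], float tx, float ty, float tz) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0] = m[5] = m[10] = m[15] = 1.0f; // identity diagonal
    // element (row=3, col) lives at m[col*4 + 3] in column-major storage:
    m[0 * 4 + 3] = tx;
    m[1 * 4 + 3] = ty;
    m[2 * 4 + 3] = tz;
}
```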

#5310236 Vector and matrix multiplication order in DirectX and OpenGL

Posted by on 10 September 2016 - 06:54 AM

Alright, to summarize,
Row major order:-
Vector is always on the left side of the multiplication with a matrix.
P = vM
Translation vector is always on the 12, 13 and 14th element.
Column major order:-
Vector is always on the right side of the multiplication with a matrix.
P = Mv
Translation vector is always on the 3, 7 and 11th element.

First up, I personally try to avoid using the terms row-major and column-major with matrices, because they mean different things to mathematicians and computer science people...
Maths people think that you're talking about whether you've got a matrix that's made up of basis vectors that are row-vectors / column-vectors, and comp-sci people think you're talking about the 2D->1D array indexing scheme. These are two very different things :(

Above, you're talking about the mathematical concept of "row matrices" vs "column matrices", and the comp-sci concept of array indexing is irrelevant.

So the only difference between HLSL and GLSL is how they layout this data in memory.
HLSL reads the matrix row by row. GLSL reads the matrix column by column.
So this is how HLSL layout the data in memory

0  1  2  3 
4  5  6  7 
8  9  10 11
12 13 14 15
And this is how GLSL layout the data in memory.
0 4 8  12 
1 5 9  13 
2 6 10 14 
3 7 11 15
Alright. This makes sense. Now I have to understand how I should layout the matrix in C++ and send it to HLSL and GLSL correctly.
Man..... Why couldn't OpenGL just read things row by row...... WHY !! :P

Not quite :D
By default, both HLSL and GLSL do actually use column-major array indexing! So if you're sending matrices from the C++ side, the individual floats should be stored in column order.
i.e. the storage order in RAM is:

0 4 8  12 
1 5 9  13 
2 6 10 14 
3 7 11 15

Despite column-major being the default element storage order, HLSL for some reason chose to make their constructors interpret the arguments in row-major storage order... I always thought this mismatch was odd, but your example shows that it makes more visual sense, as the code appears in rows on the screen after all!

In HLSL, you can explicitly choose the (comp-sci) array indexing scheme / element storage order of a matrix with:
row_major float4x4 foo;
column_major float4x4 bar;

And GLSL similarly with:
layout(row_major) mat4 foo;
layout(column_major) mat4 bar;


And just to reiterate: your choice of row_major or column_major array storage order has absolutely no impact on the multiplication order that you should use / no impact on whether your data is mathematically row-major or column-major (as in, whether your basis vectors are laid out across or down) :wacko:


You see a lot of posts on the internet where people are doing maths one way in D3D and maths another way in GL (opposite multiplication orders), or posts where people say "D3D is row-major, GL is column-major" -- nope, old crud.

Both D3D and GL use column-major storage ordering by default (but can be overridden), and neither forces a particular mathematical convention on you (column-vectors and row-vectors are equally supported).
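To make the default storage order concrete, here's a sketch (names are illustrative) of packing a conventional mathMatrix[row][col] C++ 2D array into the column-major 1D layout that both HLSL and GLSL expect by default -- element (row, col) goes to data[col*4 + row]:

```cpp
#include <cassert>

// Pack a [row][col]-indexed matrix into column-major order for upload.
inline void packColumnMajor(const float mathMatrix[4][4], float out[16]) {
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            out[col * 4 + row] = mathMatrix[row][col];
}
```

With this layout, a column-vector-convention matrix (translation in the last column, i.e. rows 0-2 of column 3) lands at out[12], out[13], out[14] -- matching the "0 4 8 12..." diagram above.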