
Member Since 19 Mar 2014

#5179780 1D and 3D textures useless?

Posted by spazzarama on 12 September 2014 - 12:17 AM

I want to create a (100,200,300) 3D texture, how will this look in a 2D texture (?,?)


To represent the same number of texels as that 3D texture you would need to create one very large 2D texture, e.g. 30,000 x 200 (the 300 depth slices of 100 x 200 laid side by side). Of course it won't look the same in memory, and I would expect mipmapping to behave very differently and probably give undesirable results. Otherwise, 300 separate 100x200 2D textures?
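A minimal sketch of the index arithmetic involved in flattening a volume into a 2D atlas - the slice-per-column layout and the function name are illustrative assumptions, not part of any D3D API:

```cpp
#include <cassert>

// Hypothetical layout: a 100x200x300 volume flattened into a 30,000 x 200
// atlas by laying the 300 depth slices side by side along the x axis.
const int W = 100, H = 200, D = 300;

// Map a 3D texel coordinate to its position in the flattened 2D atlas.
void VolumeToAtlas(int x, int y, int z, int& ax, int& ay)
{
    ax = z * W + x; // each depth slice occupies a 100-texel-wide band
    ay = y;
}
```

The texel count is identical (100 x 200 x 300 == 30,000 x 200), but neighbouring texels along z are no longer adjacent in memory, which is why filtering and mipmapping behave differently.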


Promit's link on texture tiling is probably what you are after here.

#5179754 1D and 3D textures useless?

Posted by spazzarama on 11 September 2014 - 09:58 PM

It's important to note that textures are not only used to store image data. You can store tables of lookup values, pseudo-random numbers, height maps, and all sorts of other information; for some of these, a 1D or 3D texture makes sense.


A 1D texture might be used in cel-shading to map the diffuse reflection component to a colour, or for a range of other lookup tables.
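As a rough CPU-side illustration of the cel-shading ramp idea (the table values and names here are made up; on the GPU this would be a small 1D texture sampled with point filtering using the diffuse term as the texture coordinate):

```cpp
#include <cassert>
#include <cstddef>

// A tiny 1D lookup table that bands a [0,1] diffuse term into four
// discrete shades, as a cel-shading ramp texture would on the GPU.
const float kShadeRamp[4] = { 0.2f, 0.45f, 0.7f, 1.0f };

// CPU equivalent of sampling the 1D ramp with point filtering.
float SampleShadeRamp(float diffuse)
{
    if (diffuse < 0.0f) diffuse = 0.0f;
    if (diffuse > 1.0f) diffuse = 1.0f;
    std::size_t index = static_cast<std::size_t>(diffuse * 3.999f); // [0,1] -> {0,1,2,3}
    return kShadeRamp[index];
}
```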


A 3D texture might be used for volumetric effects (e.g. smoke and the like). There is a pretty good overview of 3D textures here.

#5177825 Compile shaders in build time with common functions

Posted by spazzarama on 03 September 2014 - 04:42 AM

Hodgman's answer is definitely what you are after.


Another option that, although not what you are after, is interesting nonetheless: DirectX 11.2 supports HLSL shader linking, adding support for precompiled HLSL functions that can be packaged into libraries and linked into shaders at runtime. This would allow you to build up your shader libraries to support more variations without the cost of runtime HLSL compile times.

#5177809 Domain vs Geometry Shader

Posted by spazzarama on 03 September 2014 - 02:51 AM

You should definitely try to calculate your normals etc. in the domain shader. This is the third and final stage of the optional tessellation stages and is specifically used to calculate the final vertex position and data of each subdivided point. Because you are using the tessellation pipeline, the domain shader is going to be called no matter what, whereas the geometry shader stage is still optional and will incur additional cost (even for an empty shader).


Depending on the domain (tri or quad) you may need to use barycentric, bilinear or bicubic interpolation to determine the correct values.


You will want to implement backface culling and/or dynamic LoD within the hull shader.


Below is an example domain shader taken from Chapter 5: Applying Hardware Tessellation of my book Direct3D Rendering Cookbook. It uses bilinear interpolation and a combination of patch and constant data for the inputs:

// This domain shader applies control point weighting with bilinear interpolation using the SV_DomainLocation
[domain("quad")]
PixelShaderInput DS_Quads( HS_QuadPatchConstant constantData, const OutputPatch<DS_ControlPointInput, 4> patch, float2 uv : SV_DomainLocation )
{
    PixelShaderInput result = (PixelShaderInput)0;

    // Interpolate using bilerp
    float4 c[4];
    float3 p[4];
    for (uint i = 0; i < 4; i++) {
        p[i] = patch[i].Position;
        c[i] = patch[i].Diffuse;
    }
    float3 position = Bilerp(p, uv);
    float2 UV = Bilerp(constantData.TextureUV, uv);
    float4 diffuse = Bilerp(c, uv);
    float3 normal = Bilerp(constantData.NormalW, uv);

    // Prepare pixel shader input:
    // Transform world position to view-projection
    result.PositionV = mul( float4(position, 1), ViewProjection );
    result.Diffuse = diffuse;
    result.UV = UV;
    result.NormalW = normal;
    result.PositionW = position;
    return result;
}
And bilinear interpolation on float2, float3 and float4 properties for the simple quad domain:

// QUAD bilinear interpolation
float2 Bilerp(float2 v[4], float2 uv)
{
    // bilerp the float2 values
    float2 side1 = lerp( v[0], v[1], uv.x );
    float2 side2 = lerp( v[3], v[2], uv.x );
    return lerp( side1, side2, uv.y );
}

float3 Bilerp(float3 v[4], float2 uv)
{
    // bilerp the float3 values
    float3 side1 = lerp( v[0], v[1], uv.x );
    float3 side2 = lerp( v[3], v[2], uv.x );
    return lerp( side1, side2, uv.y );
}

float4 Bilerp(float4 v[4], float2 uv)
{
    // bilerp the float4 values
    float4 side1 = lerp( v[0], v[1], uv.x );
    float4 side2 = lerp( v[3], v[2], uv.x );
    return lerp( side1, side2, uv.y );
}


For tri domains you would use barycentric interpolation - something like the following:

// TRIANGLE interpolation (using barycentric coordinates)
//     barycentric.xyz == uvw
//   C ______________ B
//     \.    w    . /
//      \  .    .  /
//       \    P   /
//        \u  . v/
//         \  . /
//          \ ./
float2 BarycentricInterpolate(float2 v0, float2 v1, float2 v2, float3 barycentric)
{
    return barycentric.z * v0 + barycentric.x * v1 + barycentric.y * v2;
}

float3 BarycentricInterpolate(float3 v0, float3 v1, float3 v2, float3 barycentric)
{
    return barycentric.z * v0 + barycentric.x * v1 + barycentric.y * v2;
}

float4 BarycentricInterpolate(float4 v0, float4 v1, float4 v2, float3 barycentric)
{
    return barycentric.z * v0 + barycentric.x * v1 + barycentric.y * v2;
}

Good luck.

#5161251 How to install latest SharpDX from GitRepository.

Posted by spazzarama on 18 June 2014 - 01:30 AM


I have SharpDX 2.6.3 synced in GitHub. But I'm not sure how to actually install it, other than how I've done it in the past: just copying the bin folder and the documentation folder over to my main drive. But should I also copy packages and external or not?

Thanks for any help.




If you are not trying to build it, you only need the bin directory.

#5158816 (c#) list of lists of lists or outside helpers?

Posted by spazzarama on 06 June 2014 - 05:58 PM

Keep in mind that your objects within those lists are only references to the object, so aside from the small overhead of the List<T> or Dictionary objects you are not necessarily wasting much in the way of resources.


I would also think about the life-cycle of some of these objects and how they interact, and ensure that you have an efficient system for maintaining these relationships. E.g. if the bottle is lifted from the table, do you also have a reference on the bottle object back to the table, or do you have to search through all your objects looking for this elusive bottle reference? Considering some of these events might help you determine the approach that works best for you. I think that maintaining a two-way parent/child relationship is a common approach.


I wouldn't use Object as the type, however; I would create a base MyBaseObject class that implements support for these relationships and then inherit from it.


As to the choice between a List and a Dictionary, it really depends on how you are iterating/finding items within those collections. If you find yourself scanning Lists looking for an object that has property X of value Y, then you might want to consider using a Dictionary instead. If, however, you find yourself iterating over the whole list to perform an action on each item, then a List will suffice.
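In C++ terms the same trade-off can be sketched as std::vector (List-style) versus std::unordered_map (Dictionary-style); the Item type and names below are made up for illustration:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

struct Item { std::string name; int weight; };

// List-style: a linear pass is fine when you touch every item anyway.
int TotalWeight(const std::vector<Item>& items)
{
    int total = 0;
    for (const Item& item : items) total += item.weight;
    return total;
}

// Dictionary-style: a keyed lookup is better when you search by a property.
const Item* FindByName(const std::unordered_map<std::string, Item>& items,
                       const std::string& name)
{
    auto it = items.find(name);
    return it == items.end() ? nullptr : &it->second;
}
```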

#5158321 What's a quick way to get a database going for a .net text-based game?

Posted by spazzarama on 05 June 2014 - 01:51 AM

I completely agree with nfactorial,


Write a set of simple services that interact with the database as needed on behalf of your client app.


For some quick service development, perhaps take a look at ASP.NET WebAPI for the server side... Of course, you then have to secure those services; you may need a set of services that allow each player to register a username and password, and require this authentication before allowing any interaction with the services that talk to the database.

#5158290 VS2013 Graphics Diagnostics problem

Posted by spazzarama on 05 June 2014 - 12:17 AM

Windows 8.1 with VS2013 (no update).


I can confirm it is working for me for the scenario you mentioned, both when using the CSSetConstantBuffers call as you have posted and with something more complex with multiple CS dispatch calls.


Doesn't work if I am using multiple render targets (stalls on capturing the frame).

#5157507 Beginners problem

Posted by spazzarama on 02 June 2014 - 04:42 AM

Sounds like something is zero :)


Try outputting the light.ambient, input.normal and possibly light.dir or light.diffuse:

float4 PS( VS_OUTPUT input ) : SV_Target
{
    return light.ambient;
    //return input.normal;
    //return light.dir;
    //return light.diffuse;
}

My guess is that at least light.ambient is zero. You're correct: even if the normal vector or light direction is zero (dot product with a zero vector = 0), you are adding this to finalColor, so += 0 should still leave you with diffuse * light.ambient.


What edition of Visual Studio are you using? If VS 2012 or VS 2013, there is a built-in graphics debugger that allows you to step through and evaluate values within your shader. When compiling the shader you must supply the D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION flags and the shader file name - see this topic for an example. Alternatively, there are a number of other debuggers available; see this GameDev topic about free HLSL debugging tools.

#5157259 Visual 2013 problem with debugging shader

Posted by spazzarama on 31 May 2014 - 11:45 PM

VS can't work out which file your shader belongs to.


In your ShaderEngine.cpp


Change line 13 from:

hr = D3DCompile(buffer.data(), buffer.size(), NULL, NULL, NULL, mainFucName, "vs_5_0", D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION, NULL, &VS_Buffer, &errorVS);

to:

hr = D3DCompile(buffer.data(), buffer.size(), name, NULL, NULL, mainFucName, "vs_5_0", D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION, NULL, &VS_Buffer, &errorVS);

Then do the same for your pixel shader compile call.


That worked for me with your sample (passing name into the pSourceName parameter). Not very intuitive :)

#5156665 Staying motivated while learning? [Advice needed]

Posted by spazzarama on 29 May 2014 - 03:35 AM

It helps if you truly love doing it!!


If you don't have that luxury, then 3Ddreamer's advice is doubly important!

#5156660 Best tutorial/article to begin with C#

Posted by spazzarama on 29 May 2014 - 03:14 AM

You might want to run through these http://msdn.microsoft.com/en-us/library/aa288436(v=vs.71).aspx first.


Then, if you're looking to do game-related stuff, you might find the Unity scripting tutorials useful; there is one showing the differences between C# and JavaScript.

#5156240 SQL Server Table naming conventions

Posted by spazzarama on 27 May 2014 - 03:35 AM

dbo.tbl_ prefix


The use of prefixes such as tbl, sproc, vw (view), func (function), clr and so on are really just hints as to the type of database object you are dealing with.


Sometimes when writing queries it is handy to know that the table you are querying really is a table and not a view. This is mostly helpful in really big projects (i.e. "I know I have to call a stored procedure, but what was the name again? Oh, I'll type .sproc_ first to get the list of 100 in code completion and hide the hundreds of tables").


Personally, I don't care what naming conventions are used as long as they are consistently applied. My old boss, on the other hand, would often say "Mr Codd would be turning in his grave!" when he saw something he didn't like - including tbl_ prefixes and pluralisation :)


(edit: sometimes the technology you use impacts your naming convention, e.g. using the Entity Framework or other ORM tools might steer you towards singular or plural names depending on the implementation and what the generated code looks like)

#5156224 Tweening Questions

Posted by spazzarama on 27 May 2014 - 02:08 AM

If you are using Visual Studio 2012/2013, it natively supports *.fbx files, compiling them into *.cmo files which can then be read using the DirectXTK library.


The animations that are saved within the FBX are then available to you. Just need to apply the transforms correctly based on the current time etc...


Although it is in C#, my book linked below has a chapter dedicated to implementing animation with Direct3D (Chapter 4 - Animating Meshes with Vertex Skinning). Using the FBX -> CMO pipeline, it shows how to perform interpolation between frames using quaternions and spherical linear interpolation (Slerp).


The basic approach is:

// animation loaded from FBX/CMO file
foreach (keyFrame in animation) {
    if (keyFrame.Time <= time) {
        // use this keyFrame transform for the appropriate bone index
        transforms[keyFrame.BoneIndex] = keyFrame.Transform;
        // perform interpolation towards the next future keyFrame
    }
}

// Apply bone transforms (i.e. apply transform of parent upon each child bone)

// Convert from bind pose into bone space using inverse of bind pose
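The find-surrounding-keyframes-and-interpolate step above can be sketched on the CPU like this; a scalar lerp stands in for the quaternion Slerp, and the KeyFrame type is made up for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct KeyFrame { float Time; float Value; }; // Value stands in for a bone transform

// Find the keyframes surrounding 'time' (keys sorted by Time) and linearly
// interpolate between them; a real skinning path would Slerp quaternions here.
float SampleAnimation(const std::vector<KeyFrame>& keys, float time)
{
    if (time <= keys.front().Time) return keys.front().Value;
    if (time >= keys.back().Time)  return keys.back().Value;
    for (std::size_t i = 1; i < keys.size(); i++) {
        if (time <= keys[i].Time) {
            const KeyFrame& a = keys[i - 1];
            const KeyFrame& b = keys[i];
            float t = (time - a.Time) / (b.Time - a.Time);
            return a.Value + (b.Value - a.Value) * t;
        }
    }
    return keys.back().Value;
}
```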

Some screenshots showing incorrect and correct bone transforms:

[Screenshot: Incorrect Bone Transforms]
[Screenshot: Correct Bone Transforms]


#5154163 [SOLVED] D3DX11CreateShaderResourceViewFromFile - deprecated.

Posted by spazzarama on 16 May 2014 - 06:57 PM

* Is using "WIC" the main format for those listed file/image formats and the standard way of doing things?
* Am I actually using an official and main method to load images using Direct3D11?


Windows Imaging Component (WIC) is a standard way of loading images of various codecs on Windows (since Vista, I think). Loading textures (to be used as image data) using WIC is a common approach in Direct3D.


With DDS textures you can store data in the same format that is used within a D3D texture object. It is important to remember that a texture doesn't necessarily contain image data (you might use a texture to store a height map or a whole range of other things); the DDS format allows you to store this data in the various texture formats. You can also use the D3D block compression (BC) texture formats within a DDS.


The DirectXTK project is maintained by Microsoft, and Chuck Walbourn has been involved in DirectX since the early days on Windows 95, so these methods for loading your texture data are great to use. The actual format you choose really depends on your storage constraints and your asset workflow.