TiagoCosta

Member Since 26 Nov 2008
-----

#5169272 Anyone guess what exactly all these mean?

Posted by TiagoCosta on 26 July 2014 - 04:24 AM

To my understanding of vectors, the look-at vector should be the camera's position subtracting the position of the look-at target.


That's incorrect. The look-at vector should be (position of look-at target) - (camera's position).
 
The vector AB = B - A.
 
So:
cam.mvFacing = OgreVec3ToBourneVec3(mSceneMgr->getSceneNode("Worker00001Node")->getPosition()) - cam.mvPosition;

But it seems to look in the opposite direction. I have tried reversing the order of subtraction, to no avail.

 
What exactly happens when you reverse the order of subtraction?
 
Do Ogre and Dolly use the same coordinate system? Maybe you are mixing left-handed and right-handed coordinates...
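
If that is the issue, the usual fix is to flip the Z axis when converting between the two conventions. A minimal sketch, assuming one engine is right-handed and the other left-handed (I don't know Dolly's actual convention, so adjust accordingly):

#include <DirectXMath.h>
using namespace DirectX;

// Hypothetical helper: convert a right-handed position/direction to a
// left-handed one (or vice versa) by negating the Z component.
inline XMFLOAT3 FlipHandedness(const XMFLOAT3& v)
{
    return XMFLOAT3(v.x, v.y, -v.z);
}

You would have to apply the same flip consistently to positions, directions, and the up vector, otherwise the camera basis ends up mixed.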


#5168105 Anyone guess what exactly all these mean?

Posted by TiagoCosta on 21 July 2014 - 04:04 AM

Assuming that mvFacing is the look direction, then mvCross is probably the right (or left) vector, because it can be calculated as the cross product of mvUp and mvFacing.

 

I'm not sure about mvView. My guess is that it is the position the camera is looking at, so mvFacing = mvView - mvPosition.
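
If that guess is right, the relationships would look something like this (a sketch using DirectXMath; the mv* names come from your code, everything else is illustrative):

#include <DirectXMath.h>
using namespace DirectX;

// Hypothetical reconstruction, assuming mvView is the look-at target position.
void RebuildCameraVectors(XMVECTOR mvPosition, XMVECTOR mvView, XMVECTOR mvUp,
                          XMVECTOR& mvFacing, XMVECTOR& mvCross)
{
    mvFacing = XMVector3Normalize(XMVectorSubtract(mvView, mvPosition)); // look direction
    mvCross  = XMVector3Normalize(XMVector3Cross(mvUp, mvFacing));       // right (or left) vector
}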

 

Can you post the code where those vectors are initialized? 




#5154636 GPU bottlenecks and Sync Points

Posted by TiagoCosta on 19 May 2014 - 09:02 AM

Hi,

 

After reading a few presentations from past GDCs about DX performance, I'm a little confused:

 

1 - (From GDC 2012 slide 44) How is it possible to be vertex shading limited? Aren't ALU units shared between shader stages (in D3D11 hardware anyway)? So no hardware resources should be sitting idle waiting for the vertex shader to finish...

 

2 - Regarding CPU-GPU sync points: currently my engine uses the same buffer to draw almost every object, so it Map()/Unmap()s the same cbuffer with DISCARD hundreds or thousands of times per frame, every frame (see the sketch after question 3). Is this crazy? Most samples do it this way, but they're samples...

Anyway, I'll add an option in debug builds to detect sync points, as suggested in the presentation.

 

3 - "Buffer Rename operation (MAP_DISCARD) after deallocation" (slide 9 from 1st link above) - What are these rename operations?

 

Thanks.




#5154139 [SOLVED] D3DX11CreateShaderResourceViewFromFile - deprecated.

Posted by TiagoCosta on 16 May 2014 - 04:56 PM

DDS (DirectDraw Surface) is a texture format.
 
If you want to load a .jpg texture, use the WICTextureLoader (since it supports BMP, JPEG, PNG, TIFF, GIF, etc).
 
Anyway, I think that error is being caused by something else. Which Visual Studio version are you using? And did you use the appropriate project file to compile DirectXTex?

 

Did you call CoInitialize or CoInitializeEx? 

The library assumes that the client code will have already called CoInitialize or CoInitializeEx as needed by the application before calling any DirectXTex routines
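
For reference, this is roughly the minimal setup I'd expect if you go the DirectXTK WICTextureLoader route (the file name is just a placeholder):

#include <Windows.h>
#include <d3d11.h>
#include "WICTextureLoader.h" // from DirectXTK

// Rough sketch: initialize COM once at startup, then load a .jpg/.png as an SRV.
HRESULT LoadJpgTexture(ID3D11Device* device, ID3D11ShaderResourceView** srv)
{
    HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    if (FAILED(hr))
        return hr;

    return DirectX::CreateWICTextureFromFile(device, L"texture.jpg", nullptr, srv);
}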

 

 

EDIT:



I have tried compiling it into a library using the Visual Studio project files from "DirectXTex\DirectXTex" and linking via "additional library paths" (or similar name), and putting a dependency on the generated ".lib" in the linker. This failed with an unresolved external symbol (needless to say, the linker didn't like it).

 

I'm pretty sure you have to compile it to a library in order to use it.

 

Can you post a copy of the errors here?

I've only used DirectXTK (not DirectXTex) but I didn't have any problems linking.




#5152005 Managing instancing

Posted by TiagoCosta on 07 May 2014 - 04:02 AM

Hi,
 
Currently my engine only supports instancing in a few limited cases, and I'm trying to fix that by implementing a more generic system.
 
When do engines usually find objects that can be instanced? Dynamically every frame after culling? Or at a "higher-level" by keeping a list of objects that use the same model?
 
Currently my scene struct looks like this:
 

struct Scene
{
    uint             num_actors;
    InstanceData*    actors_instance_data;
    uint*            actors_node_handles; //handles of actors' scene graph nodes (used to get world matrices)
    Model**          actors_models;
    BoundingSphere** bounding_spheres;
};

I could do it every frame after culling, by sorting the actors by model: if two actors a and b use the same model (actors_models[a] == actors_models[b]), then they can be instanced by copying actors_instance_data[a] and actors_instance_data[b] to a constant buffer. Something like the sketch below.
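
A rough sketch of that per-frame approach (the Scene/Model types are from the struct above; the rest is illustrative):

#include <algorithm>
#include <cstdint>
#include <vector>

typedef uint32_t uint; // assuming 'uint' above is a 32-bit unsigned typedef

// After culling, sort the visible actor indices by model so actors that share a
// model end up adjacent, then each run becomes one instanced draw.
void BuildInstancedBatches(const Scene& scene, std::vector<uint>& visible)
{
    std::sort(visible.begin(), visible.end(), [&](uint a, uint b) {
        return scene.actors_models[a] < scene.actors_models[b];
    });

    for (size_t i = 0; i < visible.size(); )
    {
        size_t first = i;
        Model* model = scene.actors_models[visible[i]];
        while (i < visible.size() && scene.actors_models[visible[i]] == model)
            ++i;

        // Copy actors_instance_data[visible[first..i)] into the instance cbuffer,
        // then issue one DrawIndexedInstanced with (i - first) instances.
    }
}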

 

Does this seem reasonable or should I go with a "higher level" option?

 

Thanks.




#5148558 HLSL float4x4 vs float3x3

Posted by TiagoCosta on 21 April 2014 - 11:13 AM

No.

For example, in order to transform a tangent-space normal to object space (while applying normal mapping) I use a 3x3 matrix, since translation doesn't affect normals (vectors).
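
The same distinction exists on the C++ side with DirectXMath: XMVector3TransformNormal ignores the translation row, while XMVector3TransformCoord applies it. A quick sketch (the matrix values are arbitrary):

#include <DirectXMath.h>
using namespace DirectX;

void TransformExample()
{
    XMMATRIX world = XMMatrixRotationY(1.0f) * XMMatrixTranslation(10.0f, 0.0f, 0.0f);

    // Direction: rotation only, translation ignored.
    XMVECTOR n = XMVector3Normalize(
        XMVector3TransformNormal(XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f), world));

    // Position: rotation and translation applied.
    XMVECTOR p = XMVector3TransformCoord(XMVectorSet(0.0f, 0.0f, 1.0f, 1.0f), world);
}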




#5146494 Rendering a subset in DX11

Posted by TiagoCosta on 12 April 2014 - 05:12 AM

You just have to call DrawIndexed with the correct IndexCount (the number of indices in the index buffer for the subset you want to draw) and StartIndexLocation (the position of the first index of that subset in the index buffer). The other draw calls have similar arguments.

struct Subset
{
    uint index_count;
    uint start_index;
    Shader* shaders;
    Texture* textures;
    CBuffer* cbuffers;
};

void EFFECTMESH::Render(uint num_subsets, Subset* subsets){
	unsigned int stride;
	unsigned int offset;


	// Set vertex buffer stride and offset.
	stride = sizeof(NMVertex); 
	offset = 0;
    
	// Set the vertex buffer to active in the input assembler so it can be rendered.
	GE->devcon->IASetVertexBuffers(0, 1, &VB, &stride, &offset);

	// Set the index buffer to active in the input assembler so it can be rendered.
	GE->devcon->IASetIndexBuffer(IB, DXGI_FORMAT_R32_UINT, 0);

	// Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
	GE->devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

	for(uint i = 0; i < num_subsets; i++)
	{
		//Set correct shaders (subsets[i].shaders)
		//Bind correct textures (subsets[i].textures)
		//Bind correct cbuffers (subsets[i].cbuffers)

		//Draw current subset
		GE->devcon->DrawIndexed(subsets[i].index_count, subsets[i].start_index, 0);
	}
}



#5121893 Filmic Tone Mapping Questions

Posted by TiagoCosta on 07 January 2014 - 07:05 AM

c) If not c, does anyone have a better intuition for how these parameters work so that I can find the right combination?

 

In this presentation (slide 142) you can see the name of each parameter. Then watch this video for more info on how to set the parameters. 

 

You can also download MJP's tonemapping sample and play with the parameters in real-time.
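
For reference, the curve from that presentation is usually written like this; the constants below are the commonly quoted defaults, so treat them as a starting point rather than the "right" values:

// Filmic curve parameters (commonly quoted defaults; tweak to taste).
static const float A = 0.15f; // shoulder strength
static const float B = 0.50f; // linear strength
static const float C = 0.10f; // linear angle
static const float D = 0.20f; // toe strength
static const float E = 0.02f; // toe numerator
static const float F = 0.30f; // toe denominator
static const float W = 11.2f; // linear white point

float FilmicCurve(float x)
{
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float Tonemap(float hdr, float exposure)
{
    return FilmicCurve(exposure * hdr) / FilmicCurve(W); // normalize so W maps to 1.0
}

Changing the toe parameters mostly affects how quickly dark values fall to black, while the shoulder strength controls how highlights roll off.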




#5116206 Tangents for heightmap from limited information

Posted by TiagoCosta on 11 December 2013 - 08:56 AM

Since the tangent has its X component = 0, you can assume that the tangent is roughly (0, 0, 1), so you can calculate the tangent basis with two cross products:

float3 temp_tangent = float3(0.0, 0.0, 1.0);
float3 bitangent = normalize(cross(temp_tangent, normal));
float3 tangent = cross(normal, bitangent);
float3x3 TBN = float3x3(tangent, bitangent, normal);



#5113875 Programming Vertex Geometry and Pixel Shaders

Posted by TiagoCosta on 02 December 2013 - 04:57 PM

I think the second edition was never published. There were problems with the publisher, and the authors decided to publish the book online for free. You can ask Jason Z for more info.

 

Check out Practical Rendering and Computation with Direct3D 11 (a newer book from two of the authors of the online book and MJP)




#5105262 Why use a Graphics Library instead of an Engine? (Ex: OpenGL vs Unity)

Posted by TiagoCosta on 28 October 2013 - 08:54 PM

Unity and other game engines make it easier to create games, but there are two big trade-offs:

-You might want to implement a feature/effect that simply isn't possible within the architecture of the game engine you're using. For example, is it possible to implement Forward+ in Unity? What about a custom GI technique? Or some game-specific logic?

-Since game engines like Unity and UDK are designed to support multiple types of games, there are some optimizations that can't be made because they would reduce the engine's flexibility.

 

Even the most flexible game engine will most likely have some kind of limitation that makes a feature impossible to implement, or requires some weird hacks to work around.

 

Some of the most complicated games on the App Store are probably created using proprietary engines that run on OpenGL. Most graphics/engine programmers will write a wrapper to hide the low-level OpenGL details and improve their productivity, while still being able to implement whatever effect they want, because they don't have to deal with the restrictions other game engines have.

 

Basically:

If you feel like you can work within Unity's restrictions and you don't have any interest in writing your own engine, then use Unity.

If you want to create custom effects that Unity won't allow, and optimize the code for the needs of your project, then you should write your own engine.

 

Why weren't GTA V and other AAA games written in Unity, UDK, etc.? Those engines don't allow the flexibility and optimization required for projects like that.

For indie games though, those engines are probably powerful enough.




#5100619 tutorial deferred shading? [Implemented it, not working properly]

Posted by TiagoCosta on 11 October 2013 - 01:02 PM

There are a lot of tutorials about deferred rendering out there (most might be in XNA/DirectX but you should be able to port the code to OpenGL because it all works the same way):
 

There's a Hieroglyph 3 sample with a good implementation of deferred rendering (it's in D3D11...).

 

Deferred Rendering (XNA) <- This one looks good and well explained

Simple OpenGL Deferred Rendering <- This one stores full position instead of reconstructing it from depth...




#5099380 Spherical Harmonics help?

Posted by TiagoCosta on 07 October 2013 - 02:14 PM

My main issue at the moment is how do you get the coefficients in the first place? The only method I've seen was based on rendering a set of environment maps. But I don't see that being mentioned very often....?

 

The most common methods are probably projecting a cube map into spherical harmonics or evaluating the light direction and scaling; both methods are explained in this paper (starting at page 9), and there are functions to do it in D3DX Math.
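
If you'd rather not depend on D3DX, the order-2 (4-coefficient) case is simple enough to do by hand. A sketch using the standard real SH basis constants (exact scaling conventions vary between papers, so double-check against whichever one you follow):

#include <DirectXMath.h>
using namespace DirectX;

// Evaluate the first 4 real SH basis functions for a unit direction.
void SHEvalDirection4(const XMFLOAT3& dir, float out[4])
{
    out[0] = 0.282095f;          // Y_0^0
    out[1] = 0.488603f * dir.y;  // Y_1^-1
    out[2] = 0.488603f * dir.z;  // Y_1^0
    out[3] = 0.488603f * dir.x;  // Y_1^1
}

// Accumulate a directional light into 4 SH coefficients per colour channel.
void SHAddDirectionalLight(const XMFLOAT3& dir, const XMFLOAT3& rgb,
                           float shR[4], float shG[4], float shB[4])
{
    float basis[4];
    SHEvalDirection4(dir, basis);
    for (int i = 0; i < 4; ++i)
    {
        shR[i] += rgb.x * basis[i];
        shG[i] += rgb.y * basis[i];
        shB[i] += rgb.z * basis[i];
    }
}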




#5096910 AMD's Mantle API

Posted by TiagoCosta on 26 September 2013 - 03:37 AM

AMD just announced a new low-level graphics API called "Mantle" that allows more direct access to its GPUs.

 

What are your thoughts?

 

Direct3D in high-end PC gaming shouldn't die any time soon because NVidia GPUs won't support Mantle, but if the performance improvements on AMD's side are significant (e.g. 9x more draw calls), consumers will shift towards AMD products, so NVidia will be "forced" to release a Mantle alternative.

 

Only developers that really want to push the visuals forward are likely to use it, though; small indie teams/studios will probably stick with DirectX (/OpenGL) because it's fast enough and easier (?!) to code.




#5096795 Gamma Correction

Posted by TiagoCosta on 25 September 2013 - 03:59 PM

output a linear gradient from black to white.

 

The point of gamma correction isn't to output colors in linear space...

 

You should convert gamma-space colors to linear space, do lighting and apply post-processing, then convert the final color back to gamma space and write it to the backbuffer.

 

Something like this:

color = pow(abs(texture.Sample(sampler, texC)), 2.2);

//Apply lighting

output = pow( color, 1.0 / 2.2 );

Also read this awesome presentation about gamma correction and HDR lighting.





