

SIC Games

Member Since 22 Jun 2012

#5176078 What you think of my Depth Pass AA?

Posted by SIC Games on 25 August 2014 - 04:00 PM

I want to implement Nvidia's FXAA and TXAA in my game - however, I figured I'd try creating my own AA first. Perhaps someone has already come up with this type of AA and there's more work to do to enhance the blurring. I'd like your feedback - let me know what you think. It's a rough demonstration, not quite perfect yet.

 

The first picture uses just 8x MSAA at the highest setting possible. The second image is 8x MSAA plus my Depth Pass AA implementation.

 

I'm still working on how to make it better. I was fascinated by the description on Nvidia's website of how FXAA works - that's when I started thinking about trying my own. I have a simple blur at the moment, but I'm thinking about making it a two-pass box blur or possibly a Gaussian blur.
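For what it's worth, here's a rough sketch of the two-pass (separable) box blur idea on a plain CPU-side luminance buffer - just to illustrate the separable trick; in the engine this would be two pixel-shader passes instead, and none of this is the actual Depth Pass AA code:

#include <vector>

// Two-pass box blur: blur horizontally into a temp buffer, then blur that
// temp buffer vertically back into the image. Same result as one big 2D
// box filter, but far fewer samples per pixel.
void BoxBlurSeparable(std::vector<float>& img, int width, int height, int radius)
{
    std::vector<float> tmp(img.size());

    for (int y = 0; y < height; ++y)          //-- horizontal pass
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f; int count = 0;
            for (int k = -radius; k <= radius; ++k) {
                int sx = x + k;
                if (sx < 0 || sx >= width) continue;
                sum += img[y * width + sx]; ++count;
            }
            tmp[y * width + x] = sum / count;
        }

    for (int y = 0; y < height; ++y)          //-- vertical pass
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f; int count = 0;
            for (int k = -radius; k <= radius; ++k) {
                int sy = y + k;
                if (sy < 0 || sy >= height) continue;
                sum += tmp[sy * width + x]; ++count;
            }
            img[y * width + x] = sum / count;
        }
}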

Attached Thumbnails

  • DepthPassAA-Off.PNG
  • DepthPassAA-On.PNG



#5173363 What happened to Rastertek?

Posted by SIC Games on 13 August 2014 - 09:49 AM

I always need to expand my skills, so I may be converting all of the Rastertek tutorials to DirectX 11.1 and OpenGL. When they're fully finished I'll send the link to anyone who needs the tutorials. Sound cool?




#5169965 Either I'm feeling burnt out or losing sight

Posted by SIC Games on 29 July 2014 - 12:22 AM

Thanks for the awesome feedback! I've been pushing past those sluggish times where I just want to take a coding vacation. Weirdly enough, the volumetric texturing had me doing more research than I ever did before! I think I learned more in the last two days than I had in a long while. Back to the interconnecting of the engine - that's still underway, and I'm still working on the current issues. Today I cleaned up some code cruft that I didn't need and commented out the parts I can use later. I learned a hell of a lot more about memory allocation and how to handle it better. I kept getting a StackOverflowException until I figured out what I was doing wrong; I stopped, sat back, and thought about why it was throwing that exception.

 

A lot of the time it's just about keeping your eye on the goal and never looking back. Hell, if my game doesn't sell or work out, I can always build electronic equipment that does 3D mapping, like gdar - it sends radio waves through the ground to detect hidden rooms or caves or something like that.

 

I wasn't thinking of Doom 3 as an ideal goal - then again, I think focusing on my own engine would be better than focusing on other people's engines.




#5169523 What happened to Rastertek?

Posted by SIC Games on 27 July 2014 - 10:26 AM

He'd better get that site back up and running again - that was weird! I checked and, like you said, everything was wiped. I'm going to cry now - there was no explanation; it just vanished like a ninja.




#5169522 MSAA in Deferred shading

Posted by SIC Games on 27 July 2014 - 10:25 AM

When you create your render target and depth stencil view, you must make sure the sample counts match and are supported for that format. In my deferred shading I use 8x MSAA. As for your question about how to access the samples - I got my deferred shading from Rastertek, and sadly that site is down for now  :(

 

Maybe a snippet will help, or at least give the basic idea:

D3D11_TEXTURE2D_DESC texDesc = {};

//-- ... fill in the rest of the texture2D description for the render target you're creating.
texDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
texDesc.SampleDesc.Count = 8;
texDesc.SampleDesc.Quality = 0; //-- must be less than what CheckMultisampleQualityLevels reports for this format/count.
//-- ... then create the Texture2D and render target view as you normally would.

//-- Create a new Texture2D description for the depth stencil view - same SampleDesc as above, BUT:

D3D11_TEXTURE2D_DESC depthTexDesc = texDesc;
depthTexDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

//-- Create the depth stencil view from the newly created depth stencil 2D texture.

If you want to test whether a format supports multisampling, and at what quality levels, you can use ID3D11Device::CheckMultisampleQualityLevels after creating the DirectX device, as in the sketch below.
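A minimal sketch of that check, assuming device is an already-created ID3D11Device* and texDesc is the render target description from above:

UINT qualityLevels = 0;

//-- Ask how many quality levels the format supports at 8 samples.
HRESULT hr = device->CheckMultisampleQualityLevels(
    DXGI_FORMAT_R32G32B32A32_FLOAT, 8, &qualityLevels);

if (SUCCEEDED(hr) && qualityLevels > 0) {
    texDesc.SampleDesc.Count   = 8;
    texDesc.SampleDesc.Quality = qualityLevels - 1; //-- valid range is 0 .. qualityLevels - 1
} else {
    //-- 8x MSAA isn't supported for this format - fall back to no MSAA.
    texDesc.SampleDesc.Count   = 1;
    texDesc.SampleDesc.Quality = 0;
}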




#5169368 Either I'm feeling burnt out or losing sight

Posted by SIC Games on 26 July 2014 - 02:47 PM

After dissecting the HPL 1 engine and the Doom 3 engine and looking back at my own engine... as far as coding skills are concerned, I have a lot to learn! My entire framework is whacked. I am going back to reread Game Engine Architecture - it pointed out a lot about the design of a game engine.

 

In addition, an API or SDK is just a set of tools laid out for the developer to build a game. A house wouldn't be built without tools, right? The game is the house, and the tools are the game engine's API or SDK.

 

What I've noticed is that the Doom 3 engine uses a lot of macro definitions, and the HPL 1 engine uses a lot of macros too. I thought you were supposed to stay away from macro definitions as much as you could?

 

Why in the world is reading through someone else's code so confusing? HPL1 uses a lot of interfaces, a lot of polymorphism, and don't forget the macro definitions.

 

Remember that callback question I asked about? Yeah, I found out how that one works - I'd never looked at the HPL engine source code as closely as I did today.

 

A lot of the time I was searching within the project to see how things correlate to each other. The callback function is all the way over in another header file, which gets used by a lowLevelSystem header file, and then by the script header file.

 

It's like connect the dots - literally! Over here and over there and over there and over there!

 

Isn't an API or SDK supposed to make sense, or do they literally go for the whole "connect the dots" thing? Or do they do that on purpose to prevent others from dissecting their code and making a better engine than theirs?

 

Man: It's like, here's a bunch of nails, screws, bolts, washers, some wood and scrap metal - there you go! Have at it - now build a shed.

Builder: Where are the instructions?

Man: There's no need for instructions, just go with what feels best!

Builder: Huh?!

 

How many revisions does a game engine undergo anyway? This is perhaps my fifth or so. I'm just venting - don't mind me! Time for a smoke.

 




#5168561 Create Vertex Shader from compiled file

Posted by SIC Games on 22 July 2014 - 09:45 PM

I had to re-read your post carefully. What I believe you would have to do is read the HLSL binary like you would any binary file, then use the function D3DX10CompileFromMemory - there are possibly some interesting tutorials out there. I'll need to convert my own shaders into binary to protect my game so modders won't get funny with it. If you're looking into how to actually read a whole binary file, you have to get the full size of the file and allocate a buffer of that size.

#include <fstream>

char *data = nullptr;
unsigned long fileSz = 0;

std::ifstream input("myshaderbinary.hlsl", std::ios_base::binary);
if(input) { //-- make sure it's opened first.
        input.seekg(0, std::ios_base::end);
        fileSz = static_cast<unsigned long>(input.tellg()); //-- Get the file size.
        input.seekg(0, std::ios_base::beg); //-- Rewind to the beginning of the file.

        if(fileSz > 0) { //-- Just making sure there's more than zero bytes.
                data = new char[fileSz]; //-- Allocate enough storage to hold the data.
                input.read(data, fileSz); //-- std::istream::read takes (char*, streamsize).
        }
}

//-- Now we can continue on to the D3DX10CompileFromMemoryA function.

The D3DX10CompileFromMemoryA variant takes ANSI strings such as char or const char*; it won't take const wchar_t*, wchar_t*, or LPCWSTR (which is short for const wchar_t*). The D3DX10CompileFromMemoryW variant takes LPCWSTR, wchar_t*, and const wchar_t*.

 

I'm sure there are some examples here or on Google, but because I haven't read any shader files in binary yet, I believe this is the approach you'll have to take. If anyone would like to point out a more effective way, you'll most likely benefit from that - I'm not too sure yet. I just remembered this from how I loaded my video textures with the D3DX11LoadShaderResourceFromMemoryA bit.

Edit: the input.read call originally had an extra argument; std::istream::read takes two, so it's corrected above to input.read(data, fileSz);
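As an aside - and this is just a hedged sketch, not what the post above describes - if the file you loaded is already compiled bytecode (a .cso from the shader compiler, say) rather than HLSL source, you can skip recompiling and hand the bytes straight to the device, assuming device is an existing ID3D11Device*:

ID3D11VertexShader* vertexShader = nullptr;

//-- `data` and `fileSz` are the buffer and size read in the snippet above.
HRESULT hr = device->CreateVertexShader(data, fileSz, nullptr, &vertexShader);

if (FAILED(hr)) {
    //-- The bytes weren't valid vertex shader bytecode (or were for the wrong stage).
}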





#5165902 Specular Mapping in Deferred Rendering

Posted by SIC Games on 09 July 2014 - 04:28 PM

So, I realized my problem was a lack of understanding of how deferred shading differs from forward rendering. Storing tangent data in a deferred engine wouldn't make sense when I can already encode and decode the tangent and bitangent from the normal map - that's exactly what the spheremap does. The math is confusing because I'm not a mathematician and I hate math. I tried to break down, as best I could, why in the world it works the way it works. I believe it takes the colors from the normal map and translates them to object or world space.

 

In forward rendering, the lighting and the geometry data are calculated in the same draw call - that's what got me confused. What really confused me was that the whole topic of this thread was about transforming tangent space into world or view space before writing out to the render target. Last night I got frustrated thinking about how you transform tangent space into view space. Transpose(TBN) gave me a closer idea of what was going on. Then I stumbled upon a deferred shading demo that I downloaded which had spherical normal mapping (spheremap). I plugged it into my G-buffer shader, and now I can safely say it works. I looked at the encoding and decoding parts of the code to figure out how it works. I suck at math, but my guess is that it essentially transforms the normal map data into object space or view space.
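For reference, the spheremap (Lambert azimuthal equal-area) encode/decode that shows up in a lot of deferred shading demos looks roughly like this - written here as plain C++ rather than HLSL, and only as a sketch of the common form, not the exact code from that demo. It packs a unit view-space normal into two G-buffer channels and unpacks it again:

#include <cmath>

struct Float2 { float x, y; };
struct Float3 { float x, y, z; };

//-- Pack a unit normal into two [0,1] channels.
Float2 EncodeSpheremap(Float3 n)
{
    float p = std::sqrt(n.z * 8.0f + 8.0f);
    return { n.x / p + 0.5f, n.y / p + 0.5f };
}

//-- Unpack the two channels back into a unit normal.
Float3 DecodeSpheremap(Float2 enc)
{
    float fx = enc.x * 4.0f - 2.0f;
    float fy = enc.y * 4.0f - 2.0f;
    float f  = fx * fx + fy * fy;
    float g  = std::sqrt(1.0f - f / 4.0f);
    return { fx * g, fy * g, 1.0f - f / 2.0f };
}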

 

Tangent-space normal maps look different from object-space normal maps - which is what gives them the effect of depth when lit.

 

So back to your question, Phil - does the specular lighting change with my position relative to the light? The light is at (0,-1,-1), inverted to (0,1,1). When the light intensity from the dot product of the normal-map normal and the light direction is above 0.0, the specular reflection is full when I'm facing the object from (0,1,1), but it dims when I'm viewing from (-5,1,1).
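Just to spell out the gating I'm describing, here's a tiny plain C++ sketch (a Blinn-Phong-style specular term is assumed here, which may not match the exact shader): the specular only contributes while the diffuse N · L term is positive.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

//-- `normal`, `lightDir`, and `halfVector` are assumed to be normalized.
float Lighting(Vec3 normal, Vec3 lightDir, Vec3 halfVector, float shininess)
{
    float diffuse  = std::max(Dot(normal, lightDir), 0.0f);
    float specular = 0.0f;

    if (diffuse > 0.0f) //-- specular only shows up when the surface faces the light
        specular = std::pow(std::max(Dot(normal, halfVector), 0.0f), shininess);

    return diffuse + specular;
}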

 

It doesn't pop in or out like it used to - so I can safely say everything's in running condition. It wasn't just me who was confused about how normal mapping differs in deferred shading - a lot of people were unsure as well.

 

So that's all for now. Thanks, Julien, and everyone who contributed to this thread. Also, thanks to Phil for pushing me in the right direction.




#5155772 DirectSound or XAudio2

Posted by SIC Games on 24 May 2014 - 08:10 PM

FMOD was really easy to implement! I love it! Great suggestion, Cozzie!




#5149748 how draw multiple objects in DirectX 11

Posted by SIC Games on 26 April 2014 - 07:06 PM

Let me see if I understand what you're doing - you want to load multiple different meshes? So, for example, bigBird.obj and Kermit.obj? You're already able to load one, which is good. Now, the OBJ parser loads all the vertex information and the index data and places them inside your struct, right? For instance:

struct MeshVertexData {
    XMFLOAT3 position;
    XMFLOAT2 textureCoords;
    XMFLOAT3 Normals;
};

If that's the case, then what you could do is stuff all the important information required for rendering - index buffer, vertex buffer, pixel shader, vertex shader, input layout, plus the vertex data structure holding the information about the OBJ file - into one separate structure, along with anything else that's important, including a texture resource. For example:

struct MySpecialModels {
    MeshVertexData *meshData;
    ID3D11Buffer *VertexBuffer, *IndiceBuffer;
    ID3D11PixelShader *pixelShader;
    ID3D11VertexShader *vertexShader;
    ID3D11InputLayout *inputLayout;
    XMMATRIX WorldMatrix, TransformMatrix, ScaleMatrix, RotationMatrix;
    ID3D11ShaderResourceView *textureResource;
};

That's the important information about your mesh. When you parse your OBJ, you fill out your MySpecialModels structure - and once you've filled the entire structure, you put it inside a list, a vector, or whatever STL container you like.

 

My custom mesh structure goes inside a vector.

#include <vector>

std::vector<MySpecialModels> meshCollections;

So, after you're done filling up your structure, like I said before, you place it inside the vector by calling meshCollections.push_back(mySpecialStructureThatHasBeenFilled);

 

The "mysecialStructureThatBeenFilled" is just a temporary structure to fill up the mesh collection list.

 

When you're about to render, you can iterate through the mesh collection list - after you've cleared the render target and depth stencil view - and issue each mesh's draw calls, as in the sketch below. I'm not sure if this is overwhelming you or not, but you should get the basic gist of how to render multiple meshes.
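A rough sketch of that per-mesh loop - assuming context is your ID3D11DeviceContext*, and adding a hypothetical indexCount member to MySpecialModels that you'd fill in while parsing the OBJ:

UINT stride = sizeof(MeshVertexData);
UINT offset = 0;

for (MySpecialModels& mesh : meshCollections) {
    //-- Bind everything this mesh needs.
    context->IASetInputLayout(mesh.inputLayout);
    context->IASetVertexBuffers(0, 1, &mesh.VertexBuffer, &stride, &offset);
    context->IASetIndexBuffer(mesh.IndiceBuffer, DXGI_FORMAT_R32_UINT, 0);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    context->VSSetShader(mesh.vertexShader, nullptr, 0);
    context->PSSetShader(mesh.pixelShader, nullptr, 0);
    context->PSSetShaderResources(0, 1, &mesh.textureResource);

    //-- indexCount is the hypothetical member mentioned above.
    context->DrawIndexed(mesh.indexCount, 0, 0);
}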




#5148786 Why DX 11 Geometry Shader isn't rendering anything out?

Posted by SIC Games on 22 April 2014 - 12:14 PM

Okay, now I can scream and bang my head against the wall. Thank you again, Julien, you were right on the money! It was the issue with the input.worldpos + float3() section. I gave you a thumbs up and I wish I could give another, but thanks!




#4989129 string::size_type

Posted by SIC Games on 11 October 2012 - 09:27 AM

I have been wrong too; it is all about how you handle it.
http://www.gamedev.n...67#entry4965067

People don’t lose respect for you for being wrong. They lose respect for you for handling it poorly.


L. Spiro


True. I probably woke up on the wrong side of the bed today. You're right, bro - if a game critic gave my game a poor review, it would probably aggravate me. So, yeah, I agree I have to react more proactively and professionally about it. I also have to realize criticism comes in a helping manner and is less of an attack. I was being defensive, and that's not a good demonstration for others to see on this forum. Professionalism is the way to lead people in the right direction - which, again, I have to work on if I'm ever going to be taken seriously. So, yeah, I agree with the quote above.


#4989123 string::size_type

Posted by SIC Games on 11 October 2012 - 09:13 AM


sizeof returns the length in bytes of a char - whether it be an array or whatnot. Whatever - I was trying to help out; screw that idea.

You are, unfortunately, blatantly wrong on this count.

sizeof() returns the correct size for the given type, or the correct length of a statically allocated array. std::string uses dynamically allocated memory to store the string contents, so sizeof() will not see that memory.

#include <string>
#include <cassert>
int main() {
	std::string A = "Hello, World!";
	std::string B = "Hi";

	assert( sizeof(A) == sizeof(B) );
}


Yeah, you too, have a good day; keep on warning people, bro! 'Cause you're cool like that.


#4989122 string::size_type

Posted by SIC Games on 11 October 2012 - 09:12 AM


there is size_t; and plus sizeof() returns bytes of characters - whether it be a struct, a class, or whatever.

“string” is a structure, so sizeof() returns the size of that structure, not the size of the string it manages.
If it was instead:
const char szString[] = "Hello fellow citizens.";
Then sizeof() would return the number of characters, including the terminating NULL.


screw that idea.

I supported you on SIZE_T but I feel less sorry for you after you posted this.


L. Spiro


Whatever, dude. Seriously, all I hear is "I'M WRONG", "I AM WRONG" from all you people on here. So, whatever. Have a good day, dude.



