I know there were already a few topics on FXAA, but they didn't help me with my problem, hence the new thread.
The problem is, I don't see any difference between the FXAA and non-FXAA renders.
Then again, I am not passing a non-linear color space texture to the FXAA shader, and I'm not sure how to. If my understanding is correct, linear color space is what you get when you sample a texture (0 to 1 range, or 0 to 255 range) where the stored values map linearly to intensity.
I am not sure what sRGB is about. Currently my texture is in the RGBA8 DX format.
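For what it's worth, my understanding is that sRGB is a non-linear encoding of linear light. A sketch of the standard transfer functions, written as plain C++ just for illustration (the hardware does the same conversion for you when the sRGB sampler/write states are enabled):

```cpp
#include <cassert>
#include <cmath>

// Encode a linear-light value (0..1) to sRGB (0..1), per the sRGB spec:
// a linear segment near black, a 2.4-exponent curve elsewhere.
float LinearToSrgb(float linear)
{
    return linear <= 0.0031308f
        ? 12.92f * linear
        : 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
}

// Decode an sRGB-encoded value (0..1) back to linear light.
float SrgbToLinear(float srgb)
{
    return srgb <= 0.04045f
        ? srgb / 12.92f
        : std::pow((srgb + 0.055f) / 1.055f, 2.4f);
}
```

Note that mid-grey linear 0.5 encodes to roughly 0.735 in sRGB, which is why a linear render viewed as if it were sRGB looks too bright.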
According to the FXAA 3.11 release by Timothy Lottes: "Applying FXAA to a framebuffer with linear RGB color will look worse. This is very counter intuitive, but happens to be true in this case. The reason is because dithering artifacts will be more visible in a linear colorspace."
The FXAA paper mentions using the following in DX9 (which is what I am working on):

// sRGB->linear conversion when fetching from TEX
SetSamplerState(sampler, D3DSAMP_SRGBTEXTURE, 1); // on
SetSamplerState(sampler, D3DSAMP_SRGBTEXTURE, 0); // off

// linear->sRGB conversion when writing to ROP
SetRenderState(D3DRS_SRGBWRITEENABLE, 1); // on
SetRenderState(D3DRS_SRGBWRITEENABLE, 0); // off
This is what I am doing:
1. Render to texture with D3DRS_SRGBWRITEENABLE = 1, and turn it off after I am done. When I render this texture, it looks brighter than usual.
2. Render a screen quad with this texture using D3DSAMP_SRGBTEXTURE = 1, and turn it off after I am done. When this texture renders, it looks correct.
But the aliasing still remains. I figured I shouldn't be doing step 2, because that would convert the non-linear color back to linear while sampling; but skipping it just gives me the over-bright texture/scene from step 1.
I have attached my shaders here. Any help is greatly appreciated.
I have an abstraction library over DirectX and OpenGL. I want to load textures from a single file format and read them into memory for either rendering system directly, without any manipulation.
But I can't find an input pixel format shared by both DirectX and OpenGL.
OpenGL supports glTexImage2D with format GL_RGBA or GL_BGRA and type GL_UNSIGNED_INT_8_8_8_8.
DirectX supports CreateTexture with format D3DFMT_A8R8G8B8.
I could swap the bytes around when reading the texture for one of the systems, but that would incur a performance cost. I have asynchronous texture loading in mind and want loading to be fast.
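For reference, if the swap does turn out to be necessary for one of the two paths, it only needs to exchange the R and B channels of each 32-bit pixel (on little-endian machines D3DFMT_A8R8G8B8 stores bytes as B,G,R,A in memory, versus R,G,B,A for GL_RGBA with byte-sized components). A minimal in-place swizzle might look like this; SwapRedBlue is a hypothetical helper, not from any of the APIs involved:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <utility>

// Swap the R and B channels of 32-bit pixels in place, converting
// between BGRA byte order (D3DFMT_A8R8G8B8 in memory, little-endian)
// and RGBA byte order. The operation is its own inverse.
void SwapRedBlue(uint8_t* pixels, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        std::swap(pixels[i * 4 + 0], pixels[i * 4 + 2]);
    }
}
```

A pass like this over a whole texture is memory-bandwidth bound, so if it runs on the background loading thread it should not stall rendering, though avoiding it entirely is still preferable.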
The following works for me on Windows. I have one header file for declarations, "TemplateChk.h", and one header file for definitions, "TemplateChkDef.h".
And in your target application you can define explicit instantiations of the template classes for your concrete types in Concrete.cpp. So the header file for the template classes need not contain the implementation, thereby improving compilation time for the files that use them, if I am not mistaken.
Just want to know if what I am doing is correct, and whether it is supported on other platforms like Linux and Apple iOS?
-------------------------------------------------
Application.cpp
-------------------------------------------------
//Compilation time for this file is reduced since it does not include the template class with the function definitions
#include "TemplateChk.h"
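To make the scheme concrete, here is a minimal sketch of the split described above, collapsed into one listing with the file boundaries marked as comments. The `Adder` class is just a placeholder I made up; the file names follow the post:

```cpp
// ----- TemplateChk.h: declarations only, included by Application.cpp -----
template <typename T>
class Adder
{
public:
    T Add(T a, T b);
};

// ----- TemplateChkDef.h: definitions, included only by Concrete.cpp -----
template <typename T>
T Adder<T>::Add(T a, T b)
{
    return a + b;
}

// ----- Concrete.cpp: explicit instantiations for the types you need -----
template class Adder<int>;
template class Adder<float>;
```

Application.cpp then includes only "TemplateChk.h", and the linker resolves Adder<int> and Adder<float> from Concrete.cpp's object file. This is standard C++ explicit instantiation, so it is not Windows-specific; the trade-off is that only the types instantiated in Concrete.cpp can be used elsewhere.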