Back around 2006 I spent a good year or two reading books and articles on this site, gobbling up everything game-dev-related I could find. I started an engine in DX10 and got through the basics, but eventually gave up because I couldn't do the harder things.
Now, my C++ is 12 years stronger, my mind is trained better, and I am thinking of giving it another go.
A lot has changed. There is no more standalone SDK, there is evidently a DirectX Tool Kit, XNA died, all the sweet sites I used to go to are 404, and Google searches all point to Unity and Unreal.
I plainly don't like Unity or Unreal, but might learn them for reference.
So, what is the current path? Does everyone pretty much use the DirectX Tool Kit? Should I start there? I also read that DX12 is basically expert-level DX11, so I guess I am going with DX11.
Is there a current, up-to-date list of learning resources anywhere? I am about tired of 404s.
I've been trying to implement a basic Gaussian blur using the Gaussian formula, and here is what it looks like so far:
float gaussian(float x, float sigma)
{
    float pi = 3.14159265;
    float sigma_square = sigma * sigma;
    float a = 1.0 / sqrt(2.0 * pi * sigma_square);
    float b = exp(-(x * x) / (2.0 * sigma_square));
    return a * b;
}
My problem is that I don't quite know what sigma should be.
It seems that if I pick an arbitrary value for sigma, the weights in my kernel won't add up to 1.
So I ended up calling my gaussian function with sigma == 1, which gives me weights adding up to 1, but also a very subtle blur.
Here is what my kernel looks like with sigma == 1
I would have liked it to be more "rounded" at the top, or to have a better spread, instead of wasting kernel slots on values below 0.1.
Based on my experiments, the key to this is to provide a different sigma, but if I do, my kernel values no longer add up to 1, which results in a darker blur.
I've found this post
... which helped me a bit, but I am really confused by the part where he divides sigma by 3.
Can someone please explain how sigma works? How is it related to my kernel size, how can I balance my weights with different sigmas, etc.?
Is it possible to asynchronously create a Texture2D using DirectX11?
I have a native Unity plugin that downloads 8K textures from a server and displays them to the user for a VR application. This works well, but there's a large frame drop when calling CreateTexture2D. To remedy this, I've tried creating a separate thread that creates the texture, but the frame drop is still present.
Is there anything else that I could do to prevent that frame drop from occurring?
I'm trying to draw a circle using math:
struct coordenates
{
    float X, Y;
    coordenates(float x = 0, float y = 0)
    {
        X = x;
        Y = y;
    }
};

coordenates RotationPoints(coordenates ActualPosition, double angle)
{
    coordenates NewPosition;
    // standard 2D rotation about the origin
    NewPosition.X = ActualPosition.X * cos(angle) - ActualPosition.Y * sin(angle);
    NewPosition.Y = ActualPosition.X * sin(angle) + ActualPosition.Y * cos(angle);
    return NewPosition;
}
But now I know this has a problem, because I don't use the origin.
Even so, I'm having trouble rotating the point.
These coordinates are floating-point values between -1 and 1.
Can anyone give me more advice on how to create the circle?
I managed to convert the OpenGL code at http://john-chapman-graphics.blogspot.co.uk/2013/02/pseudo-lens-flare.html to HLSL, but unfortunately I don't know how to add it to my atmospheric scattering code (sky, first image). Can anyone help me?
I tried binding the sky texture as an SRV and implementing the lens flare code in the pixel shader, but I don't know how to separate the two passes (second image).