About BlackBrain

  1. Correct UVs after Resolution change

    Thanks for the answer. Actually, I am developing an AR (augmented reality) application. By the camera image I don't mean the result of a computer-generated rendering, but the image that the device's camera captures. We are developing it for children: they color a drawing, we scan their painting and use it as the texture of a 3D object. So we have different resolutions on different devices.
  2. Hi. I am developing an app that scans a portion of the camera image and uses it as a texture. The texture works fairly nicely at a 512x512 resolution, but in the app this is not always possible and I may get the texture at 512x128. When I use the scanned texture it is maladjusted and not placed where it has to be; I guess it's because of the resolution. How can I apply a coefficient to the UVs to correct this, knowing the best possible resolution (example: 512x512) and the captured resolution (example: 512x128)?
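One way to think about that coefficient (a minimal sketch, not tied to any particular engine; `corrected_uv` is a hypothetical helper): scale each UV axis by the ratio of the captured resolution to the best resolution, so the sampled region shrinks along the axis where the capture is smaller. Whether U or V needs scaling, and in which direction, depends on how the scanned texture is written into memory.

```python
def corrected_uv(u, v, best_res, captured_res):
    """Scale UVs so a texture captured at a smaller resolution maps
    onto the same area the full-resolution texture would cover.
    best_res / captured_res are (width, height) tuples."""
    su = captured_res[0] / best_res[0]   # 512/512 = 1.0 -> U unchanged
    sv = captured_res[1] / best_res[1]   # 128/512 = 0.25 -> V compressed
    return u * su, v * sv

# With the example resolutions from the post:
print(corrected_uv(0.5, 0.5, (512, 512), (512, 128)))  # (0.5, 0.125)
```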
  3. I once implemented a volumetric light effect. My method was this: render enough planes, parallel to the near plane so that they always face the camera. How many, you may ask? The more the better ;) When rendering each of these planes, determine the attenuation of the light on it in the pixel shader; you can also use a shadow map to get shadows. This way you get a fairly good result. Nowadays, though, I see modern games use more interesting approaches. For example, take a look at Killzone: Shadow Fall. They basically use the same idea but without multiple draw calls to render the planes: they ray march through the cone or sphere (depending on the light type) and evaluate the same things (attenuation, shadows, etc.).
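The ray-marching idea described above can be sketched in a few lines. This is a toy single-scattering loop: the function name, step count, and linear falloff are illustrative, and shadow-map sampling is omitted.

```python
def ray_march_scattering(ray_origin, ray_dir, light_pos, light_range,
                         steps=64, t_max=10.0):
    """March along the view ray and accumulate a distance-based
    attenuation at each sample -- the same quantity the per-plane
    pixel shader would evaluate, but without the extra draw calls."""
    dt = t_max / steps
    accum = 0.0
    for i in range(steps):
        # sample point at the middle of this step
        p = [ray_origin[k] + ray_dir[k] * (i + 0.5) * dt for k in range(3)]
        d = sum((p[k] - light_pos[k]) ** 2 for k in range(3)) ** 0.5
        atten = max(0.0, 1.0 - d / light_range)  # simple linear falloff
        accum += atten * dt
    return accum

# A ray passing near the light gathers scattering; one far away gathers none.
near = ray_march_scattering((0, 0, 0), (0, 0, 1), (0, 0, 5), 2.0)
far = ray_march_scattering((0, 0, 0), (0, 0, 1), (100, 0, 5), 2.0)
```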
  4. Blurring a specific object in the scene

    You can have another texture the same size as your backbuffer. Draw only the light to it, blur it, and render it on top of your final image additively. Check the temporary texture before and after the blur to see if it is working as expected.
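The suggested pipeline can be sketched on a tiny grayscale grid; `blur_and_add` is a hypothetical helper using a separable box blur in place of whatever blur the renderer actually uses.

```python
def blur_and_add(base, light, radius=1):
    """Blur the light-only buffer (separable box blur, one horizontal
    and one vertical pass), then composite it additively over the base
    image, clamping to 1.0. base/light are equal-sized 2D lists."""
    h, w = len(light), len(light[0])

    def blur_pass(img, dx, dy):
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                total, count = 0.0, 0
                for k in range(-radius, radius + 1):
                    ny, nx = y + k * dy, x + k * dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
                out[y][x] = total / count  # average over in-bounds taps
        return out

    blurred = blur_pass(blur_pass(light, 1, 0), 0, 1)
    return [[min(1.0, base[y][x] + blurred[y][x]) for x in range(w)]
            for y in range(h)]

# A single bright pixel in the light buffer bleeds onto its neighbors.
base = [[0.0] * 5 for _ in range(5)]
light = [[0.0] * 5 for _ in range(5)]
light[2][2] = 1.0
result = blur_and_add(base, light)
```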
  5. Suggestions for simulating ambient light

    Thanks. I am not interested in the past because it's the past; I am more interested in 'today's' way of doing things. Spherical harmonics have scary math behind them, and all I know about them is this: "It's a way to reconstruct low-frequency functions by storing a few coefficients." I ended up using AMD CubeMapGen to build an irradiance map for diffuse and a pre-filtered blurry radiance cubemap with a chain of mip maps for specular (choosing the mip slice based on the object's roughness). It works, but I am kind of curious to know more about how these maps are generated. Do you think it is necessary for a graphics programmer to know? Do you know any good resources on this? Thank you again.
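The "mip slice based on roughness" selection mentioned above is commonly just a linear mapping from roughness to mip level; a sketch (the exact curve is engine-specific, and some engines use a nonlinear remap):

```python
def specular_mip_from_roughness(roughness, mip_count):
    """Pick a mip of the pre-filtered radiance cubemap from roughness.
    roughness = 0 samples the sharpest mip (mirror-like reflection),
    roughness = 1 samples the blurriest mip."""
    return roughness * (mip_count - 1)

print(specular_mip_from_roughness(0.0, 9))  # 0.0
print(specular_mip_from_roughness(1.0, 9))  # 8.0
```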
  6. Suggestions for simulating ambient light

    I wish life were that easy, but it's not. We have diffuse and specular terms for ambient. In the past the ambient specular term was usually ignored, though nowadays most games use PBR, for which it is necessary to have both terms. Hemispheric ambient is just a harsh hack.
  7. Hello. What do you think is a decent approach to having both diffuse ambient and specular ambient light in our games? For specular I am using cubemaps that hold the surrounding environment; it's not really bad, but it uses a lot of memory. For diffuse lighting I use either a constant ambient color or hemispheric ambient light, which is not satisfying. And I couldn't find any good resource on how to bake diffuse information and then use it in real time for moving objects. Can you please suggest approaches that are better and more physically based? Thanks in advance.
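For reference, the hemispheric ambient mentioned above is just a blend of a ground color and a sky color by how much the surface normal points up; a minimal sketch:

```python
def hemispheric_ambient(normal_y, sky, ground):
    """Blend ground and sky colors by the unit normal's up component.
    normal_y is in [-1, 1]; sky/ground are RGB tuples."""
    t = normal_y * 0.5 + 0.5  # remap [-1, 1] -> [0, 1]
    return tuple(g + (s - g) * t for s, g in zip(sky, ground))

# An upward-facing surface gets pure sky color, a downward one pure ground.
up = hemispheric_ambient(1.0, (1.0, 1.0, 1.0), (0.0, 0.0, 0.0))
down = hemispheric_ambient(-1.0, (1.0, 1.0, 1.0), (0.0, 0.0, 0.0))
```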
  8. You may want to change the rasterizer state in order not to do MSAA. It's better to sort your work so that you make minimal changes to the rasterizer state.
  9. I do the culling against the whole camera frustum on the CPU so that I can reject lights that will have no effect on the final result. The lights that are inside the whole camera frustum are then sent to the GPU, where the tile frustum culling is done (in the last shader I posted, in the ProcessLight function). Thanks for the great help; I think I need to devote quite a lot of time to reading and understanding these.
  10. Thanks for your reply, it was really helpful. After I posted this on the forum I worked on it and improved it. First of all, I now do everything in one shader: determining the minimum and maximum depth, light culling, etc. are all done in one shader, and thus with one dispatch call. I now use tile frusta to cull lights, and as you suggested I keep the LightShadingInfo in groupshared memory (not the indices). Performance is much better than before: in the Sponza scene I get 45-60 fps with 512 point lights, though I think it should be possible to get better performance with Tiled Shading. In case it helps, here is what I do on the CPU: when the Tiled Shading renderer is called, it draws geometry to the G-buffer on the main thread while a second thread frustum-culls the lights. After culling, a LightShadingInfo[] array is filled from the visible lights (also on the second thread). When the main thread finishes drawing, it waits for the second thread's job, then updates LightShadingBuffer (a CPU-write dynamic StructuredBuffer) and dispatches the final shader:

cbuffer Globals
{
    int GroupCountX;
    float FarPlane;
    float4 FarPlaneCorners[4];
    int Width;
    int Height;
    int LightCount;
    bool ShowTileLightCount;
};

#define ThreadSize 32

struct LightShadingInfo
{
    float3 ViewSpacePos;
    float3 Direction;   // light's direction in view space
    float CosTheta;     // used for spot lights only
    int LightType;      // 0 => Directional, 1 => PointLight, 2 => SpotLight
    float Range;
    float3 Color;
};

Texture2D NormalSmoothness;
Texture2D DepthBuffer;   // view-space unnormalized depth in [NearPlane, FarPlane] range
Texture2D Albedo;
Texture2D SpecularColor;
StructuredBuffer<LightShadingInfo> LightShadingBuffer;
RWTexture2D<float4> Output;  // HDR output accumulation buffer

// Fills Planes with the four side planes of the tile frustum.
void ConstructTileFrustumPlanes(uint4 TileRect, out float4 Planes[4])
{
    // Use bilinear interpolation to get the far-plane position of each tile corner.
    float XLerp = ((float)TileRect.x) / Width;
    float YLerp = ((float)TileRect.y) / Height;
    float3 Upper = lerp(FarPlaneCorners[0].xyz, FarPlaneCorners[1].xyz, XLerp);
    float3 Lower = lerp(FarPlaneCorners[2].xyz, FarPlaneCorners[3].xyz, XLerp);
    float3 p00 = lerp(Upper, Lower, YLerp);

    // point p10
    XLerp = ((float)(TileRect.x + TileRect.z)) / Width;
    YLerp = ((float)TileRect.y) / Height;
    Upper = lerp(FarPlaneCorners[0].xyz, FarPlaneCorners[1].xyz, XLerp);
    Lower = lerp(FarPlaneCorners[2].xyz, FarPlaneCorners[3].xyz, XLerp);
    float3 p10 = lerp(Upper, Lower, YLerp);

    // point p01
    XLerp = ((float)TileRect.x) / Width;
    YLerp = ((float)(TileRect.y + TileRect.w)) / Height;
    Upper = lerp(FarPlaneCorners[0].xyz, FarPlaneCorners[1].xyz, XLerp);
    Lower = lerp(FarPlaneCorners[2].xyz, FarPlaneCorners[3].xyz, XLerp);
    float3 p01 = lerp(Upper, Lower, YLerp);

    // point p11
    XLerp = ((float)(TileRect.x + TileRect.z)) / Width;
    YLerp = ((float)(TileRect.y + TileRect.w)) / Height;
    Upper = lerp(FarPlaneCorners[0].xyz, FarPlaneCorners[1].xyz, XLerp);
    Lower = lerp(FarPlaneCorners[2].xyz, FarPlaneCorners[3].xyz, XLerp);
    float3 p11 = lerp(Upper, Lower, YLerp);

    float3 n0 = normalize(cross(p01, p00));
    float3 n1 = normalize(cross(p00, p10));
    float3 n2 = normalize(cross(p10, p11));
    float3 n3 = normalize(cross(p11, p01));

    Planes[0] = float4(n0, dot(n0, p00));
    Planes[1] = float4(n1, dot(n1, p00));
    Planes[2] = float4(n2, dot(n2, p10));
    Planes[3] = float4(n3, dot(n3, p11));
}

bool ProcessLight(int LightIndex, float2 GroupMinMaxZ, float4 Planes[4])
{
    if (LightIndex >= LightCount)
        return false;
    LightShadingInfo info = LightShadingBuffer[LightIndex];
    bool ZReject = (info.ViewSpacePos.z + info.Range >= GroupMinMaxZ.x) &
                   (info.ViewSpacePos.z - info.Range <= GroupMinMaxZ.y);
    bool Condition = ZReject;
    for (int i = 0; i < 4; i++)
    {
        // sphere-vs-plane test against each side plane of the tile frustum
        Condition = Condition & ((dot(Planes[i].xyz, info.ViewSpacePos) - (Planes[i].w + info.Range)) <= 0);
    }
    return Condition;
}

float3 ShadeDirectionalLight(LightShadingInfo LightInfo, float3 Normal, float3 DiffuseColor, float3 ReflectiveColor)
{
    float NdotL = saturate(dot(Normal, LightInfo.Direction));
    return NdotL * DiffuseColor * LightInfo.Color;
}

float3 ShadePointLight(LightShadingInfo LightInfo, float3 ViewSpacePos, float3 Normal, float3 DiffuseColor, float3 ReflectiveColor)
{
    float3 LightVector = LightInfo.ViewSpacePos - ViewSpacePos;
    float atten = 1 - saturate(length(LightVector) / LightInfo.Range);
    atten *= atten;
    [branch]
    if (atten == 0)
        return float3(0, 0, 0);
    LightVector = normalize(LightVector);
    float NdotL = saturate(dot(Normal, LightVector));
    return NdotL * DiffuseColor * atten * LightInfo.Color;
}

float3 GetViewSpacePos(int2 Position, float Depth)
{
    /* Layout of FarPlaneCorners:
       [0] ---- [1]
       ------------
       [2] ---- [3] */
    // Use bilinear interpolation to get the correct far-plane position.
    float XLerp = ((float)Position.x) / Width;
    float YLerp = ((float)Position.y) / Height;
    float3 Upper = lerp(FarPlaneCorners[0].xyz, FarPlaneCorners[1].xyz, XLerp);
    float3 Lower = lerp(FarPlaneCorners[2].xyz, FarPlaneCorners[3].xyz, XLerp);
    float3 ToFarPlane = lerp(Upper, Lower, YLerp);
    return (Depth / FarPlane) * ToFarPlane;
}

groupshared LightShadingInfo ShadeInfoCache[256];
groupshared uint MinIntZ = 0xffffffff;
groupshared uint MaxIntZ = 0;
groupshared uint LastIndex = 0;
groupshared float4 Planes[4];

void AddToIndices(int LightIndex)
{
    int index;
    InterlockedAdd(LastIndex, 1, index);
    ShadeInfoCache[index] = LightShadingBuffer[LightIndex];
}

[numthreads(ThreadSize, ThreadSize, 1)]
void main(int3 dispatchThreadId : SV_DispatchThreadID, int3 groupId : SV_GroupID, int3 groupThreadId : SV_GroupThreadID)
{
    int LightIndex = groupThreadId.y * ThreadSize + groupThreadId.x;

    /* Read from the G-buffer. */
    float depth = DepthBuffer[dispatchThreadId.xy].r;
    float3 albedo = Albedo[dispatchThreadId.xy].rgb;
    float4 normalSmoothness = NormalSmoothness[dispatchThreadId.xy];
    float3 specColor = SpecularColor[dispatchThreadId.xy].rgb;
    float3 ViewSpacePos = GetViewSpacePos(dispatchThreadId.xy, depth);

    /* Determine MinMaxZ for each tile. */
    InterlockedMin(MinIntZ, asuint(depth));
    InterlockedMax(MaxIntZ, asuint(depth));
    // Wait for all threads to do their job.
    GroupMemoryBarrierWithGroupSync();
    float2 GroupMinMaxZ = float2(asfloat(MinIntZ), asfloat(MaxIntZ));

    /* Determine which lights affect this tile. */
    if (LightIndex == 0) // first thread of the group
    {
        ConstructTileFrustumPlanes(uint4(groupId.xy * ThreadSize, ThreadSize, ThreadSize), Planes);
    }
    // Wait for all threads to do their job.
    GroupMemoryBarrierWithGroupSync();

    while (LightIndex < LightCount)
    {
        [branch]
        if (ProcessLight(LightIndex, GroupMinMaxZ, Planes))
        {
            // This light affects this tile; add it to the cache.
            AddToIndices(LightIndex);
        }
        LightIndex += ThreadSize * ThreadSize;
    }
    // Wait for all threads to do their job.
    GroupMemoryBarrierWithGroupSync();
    int TileLightCount = LastIndex;

    // We have the initial variables we need; let's shade!
    int index = 0;
    float3 color = float3(0, 0, 0);
    [loop]
    while (index < TileLightCount)
    {
        LightShadingInfo LightInfo = ShadeInfoCache[index++];
        color += ShadePointLight(LightInfo, ViewSpacePos, normalSmoothness.xyz, albedo, specColor);
    }

    if (ShowTileLightCount)
    {
        Output[dispatchThreadId.xy] = float4(TileLightCount / 256.0f, TileLightCount / 256.0f, TileLightCount / 256.0f, 1.0f);
    }
    else
        Output[dispatchThreadId.xy] = float4(color, 1.0f);
}

technique11 Tech0
{
    pass P0
    {
        SetVertexShader(NULL);
        SetPixelShader(NULL);
        SetComputeShader(CompileShader(cs_5_0, main()));
    }
}

I also removed the branching in the final loop. What's your suggestion for supporting spot lights now? Should I just create separate buffers in this shader and have two final loops, or should I create another shader for spot lights and thus have two dispatches? I'd like to go with the first way, but since I am now storing LightShadingInfo rather than the indices, I am worried that it might exceed the groupshared memory limit each tile can have. Without dispatching the final shader (only drawing to the G-buffer and filling LightShadingInfo[]), the fps is about 200. My GPU is a GeForce GT 636M. Again thanks, and if you share your code it would be awesome and helpful.
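Regarding the groupshared-memory worry, a quick back-of-the-envelope check helps (a sketch assuming tight 4-byte scalar packing of LightShadingInfo; the actual HLSL groupshared layout may add padding):

```python
# D3D11 cs_5_0 allows at most 32 KiB of groupshared memory per thread group.
FLOAT_BYTES = 4

# LightShadingInfo fields: float3 ViewSpacePos, float3 Direction,
# float CosTheta, int LightType, float Range, float3 Color = 12 scalars.
STRUCT_BYTES = 12 * FLOAT_BYTES           # 48 bytes per light

cache_entries = 256
one_cache = cache_entries * STRUCT_BYTES  # the existing ShadeInfoCache
two_caches = 2 * one_cache                # separate point + spot caches

print(one_cache, two_caches, two_caches <= 32 * 1024)  # 12288 24576 True
```

So under these assumptions, even two 256-entry caches stay under the 32 KiB limit, which favors the single-dispatch, two-loop approach.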
  11. Can nobody help me? Please, I need guidance, mainly for optimizing this.
  12. Clipping planes in XNA 4

    You want a clipping plane, don't you? A clipping plane works the same no matter whether it is for reflection, refraction, or anything else. Just set it up correctly and you should get correct results.
  13. Clipping planes in XNA 4

    This is how I clipped pixels when I was drawing to the reflection texture:

    clip(dot(float4(WorldPos, 1.0f), ClipPlane));

    I simply passed the world-space position (WorldPos here) to the pixel shader. My clip plane is also built in this form:

    ClipPlane = new Vector4(Plane.Normal, -Plane.Position.Length);

    Hope it helps, ...
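What that clip() call evaluates is just the signed distance of the world-space point to the plane (nx, ny, nz, d); pixels where it goes negative get discarded. A sketch of the same arithmetic:

```python
def signed_distance(world_pos, clip_plane):
    """Equivalent of dot(float4(pos, 1), ClipPlane): the signed
    distance of a point to the plane (nx, ny, nz, d). In the shader,
    clip() discards the pixel when this is negative."""
    x, y, z = world_pos
    nx, ny, nz, d = clip_plane
    return nx * x + ny * y + nz * z + d

# Plane y = 0 keeping everything above it: (0, 1, 0, 0)
print(signed_distance((0.0, 2.0, 0.0), (0.0, 1.0, 0.0, 0.0)))   # 2.0 -> kept
print(signed_distance((0.0, -1.0, 0.0), (0.0, 1.0, 0.0, 0.0)))  # -1.0 -> clipped
```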
  14. For the second problem I did this: if the camera is inside the bounds of the light, the light occupies the whole screen, so the code changes to:

public void GetBoundingInfo(Camera cam, out LightBoundInfo BoundInfo)
{
    Vector3 Center = Owner.Position;
    BoundingSphere sphere = GetBoundingShape();
    Vector3 min = new Vector3(Center.X - Range, Center.Y - Range, Center.Z - Range);
    Vector3 max = new Vector3(Center.X + Range, Center.Y + Range, Center.Z + Range);
    BoundingBox BoxInWorld = new BoundingBox(min, max);
    BoundingBox BoxInViewSpace = Utility.MathUtility.TransformBox(BoxInWorld, cam.View);
    Vector2 MinMaxZ = new Vector2(BoxInViewSpace.Minimum.Z, BoxInViewSpace.Maximum.Z);
    if (sphere.Contains(ref cam.Owner.Position) == ContainmentType.Contains)
    {   // the light occupies the whole screen
        BoundInfo = new LightBoundInfo(0, cam.TargetBuffer.width, 0, cam.TargetBuffer.height, MinMaxZ);
    }
    else
    {
        BoundingBox BoxInClipSpace = Utility.MathUtility.TransformBox(BoxInWorld, cam.ViewProjection);
        Vector2 MinClipSpace = new Vector2(Math.Max(BoxInClipSpace.Minimum.X, -1.0f), Math.Max(BoxInClipSpace.Minimum.Y, -1.0f));
        Vector2 MaxClipSpace = new Vector2(Math.Min(BoxInClipSpace.Maximum.X, 1.0f), Math.Min(BoxInClipSpace.Maximum.Y, 1.0f));
        MinClipSpace.X = ((MinClipSpace.X / 2.0f) + 0.5f) * cam.TargetBuffer.width;
        MaxClipSpace.X = ((MaxClipSpace.X / 2.0f) + 0.5f) * cam.TargetBuffer.width;
        MinClipSpace.Y = (1.0f - ((MinClipSpace.Y / 2.0f) + 0.5f)) * cam.TargetBuffer.height;
        MaxClipSpace.Y = (1.0f - ((MaxClipSpace.Y / 2.0f) + 0.5f)) * cam.TargetBuffer.height;
        // the Y flip reverses the order of the bounds, so swap them back
        float temp = MinClipSpace.Y;
        MinClipSpace.Y = MaxClipSpace.Y;
        MaxClipSpace.Y = temp;
        int width = (int)Math.Ceiling(MaxClipSpace.X - MinClipSpace.X);
        int height = (int)Math.Ceiling(MaxClipSpace.Y - MinClipSpace.Y);
        BoundInfo = new LightBoundInfo((int)MinClipSpace.X, width, (int)MinClipSpace.Y, height, MinMaxZ);
    }
}

This seems to work, as it should.
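The NDC-to-screen mapping in the else branch (including the Y flip and the swap of the Y bounds) can be checked in isolation; a sketch with a hypothetical helper name:

```python
def ndc_rect_to_screen(min_ndc, max_ndc, width, height):
    """Same mapping the C# code performs: clamp the box to [-1, 1],
    map X with (x/2 + 0.5) * width, flip Y with (1 - (y/2 + 0.5)) * height,
    then reorder the Y bounds, since the flip reverses them."""
    min_x = (max(min_ndc[0], -1.0) / 2.0 + 0.5) * width
    max_x = (min(max_ndc[0],  1.0) / 2.0 + 0.5) * width
    y_a = (1.0 - (max(min_ndc[1], -1.0) / 2.0 + 0.5)) * height
    y_b = (1.0 - (min(max_ndc[1],  1.0) / 2.0 + 0.5)) * height
    min_y, max_y = min(y_a, y_b), max(y_a, y_b)
    return min_x, min_y, max_x, max_y

# A box spanning the middle half of NDC on an 800x600 target:
print(ndc_rect_to_screen((-0.5, -0.5), (0.5, 0.5), 800, 600))
# (200.0, 150.0, 600.0, 450.0)
```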