MichaelCat

Members
  • Content count: 5

Community Reputation: 119 Neutral

About MichaelCat

  • Rank: Newbie
  1. Hi, what if I have an arbitrary render target that is smaller than the screen (say it is 1x1 pixel) and I want to make sure in the VertexShaderFunction that all my pixels end up exactly in that one-pixel region? No matter what I do, they all seem to get culled at some point, though GraphicsDevice.Clear() works fine. Where is the top-left corner of the render target, vertex-shader-wise? I tried output.Position = (0,0,0,0), (0,0,0,1), (1,1,1,1) and (-0.5,0.5,0,1) - NOTHING works! A fullscreen quad is not an option because I actually need to process geometry in the shaders to get the results I need. (A sketch of what I mean on the vertex-shader side follows below.)
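     For reference, a minimal sketch of the vertex-shader side, under a couple of assumptions: vs_3_0, the VS_INPUT/VS_OUTPUT structs from the effect in the next post, and ForceIntoTargetVS as a purely illustrative name. The facts it leans on are that clip-space x/y always map to -1..+1 across whatever render target is bound (so after the divide by w the top-left corner of a 1x1 target is at clip-space (-1, +1)), that w = 0 puts a vertex at infinity so it gets clipped, and that collapsing every vertex onto one point produces zero-area triangles that the rasterizer drops, which looks exactly like culling.

```hlsl
// Sketch only: force the drawn geometry to cover the whole bound render
// target (e.g. a 1x1 one) by spreading it across the full clip-space square.
VS_OUTPUT ForceIntoTargetVS(VS_INPUT input)
{
    VS_OUTPUT output;

    // Keep only the sign of the object-space x/y so the primitive spans
    // -1..+1 instead of degenerating to a single point; w = 1 means the
    // perspective divide leaves x/y unchanged.
    output.Position = float4(sign(input.Position.xy), 0.0f, 1.0f);
    output.ScreenPosition = output.Position;
    return output;
}
```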
  2. What I am trying to achieve is the following: my pass will return a huge array with several unique numbers repeating over and over, which I need to retrieve and process on the CPU. I tried rendering into a 2000x1 texture and then sampling it on the CPU with RenderTarget2D.GetData<>() and a foreach loop. It was awfully slow. So I sidestepped the problem: the idea now is to render to a 1x1 texture multiple times. In between passes I will extend a parameter array in my shader to include the numbers already returned. Each pixel will then query the array and clip itself if it holds a number that has already appeared. (There is a sketch of that idea after the code below.)

     The problem now is that the pixel color never changes no matter what I render - it always returns some random numbers. When I added `Game.GraphicsDevice.Clear(Color.Transparent);` before the draw call it started returning zeroes, though the shader code should return 0.2f (or its 0-255 equivalent). Can it be because I render too small a piece of content into it (I am drawing a line on the screen and sampling the scene's color along the line) and it gets lerped away at some point?

     C#/XNA code:

```csharp
CollisionRT = new RenderTarget2D(Game.GraphicsDevice, 1, 1, false, SurfaceFormat.Color, DepthFormat.None);
...
Game.GraphicsDevice.BlendState = BlendState.AlphaBlend;
Game.GraphicsDevice.SetRenderTarget(CollisionRT);
Game.GraphicsDevice.Clear(Color.Transparent);
Game.GraphicsDevice.DepthStencilState = DepthStencilState.None;

foreach (ModelMesh mesh in PPCBulletInvis.Model.Meshes) // PPCBulletInvis is a line that covers 1/18 of the screen (approx).
{
    foreach (Effect effect in mesh.Effects)
    {
        //effect.Parameters["Texture"].SetValue(vfTex);
        effect.Parameters["halfPixel"].SetValue(halfPixel);
        effect.Parameters["sceneMap"].SetValue(sceneRT);
        effect.Parameters["World"].SetValue(testVWall.World);
        effect.Parameters["View"].SetValue(camera.View);
        effect.Parameters["Projection"].SetValue(camera.Projection);
    }
    mesh.Draw();
}

Game.GraphicsDevice.SetRenderTarget(null);

Rectangle sourceRectangle = new Rectangle(0, 0, 1, 1); // declared but not actually passed to GetData
Color[] retrievedColor = new Color[1];
CollisionRT.GetData<Color>(retrievedColor);
Console.WriteLine(retrievedColor[0].R); // Returns zeroes.
```

     Shader code:

```hlsl
float4x4 World;
float4x4 View;
float4x4 Projection;

texture sceneMap;
sampler sceneSampler = sampler_state
{
    Texture = (sceneMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    MipFilter = POINT;
};

float2 halfPixel;

struct VS_INPUT
{
    float4 Position : POSITION0;
};

struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float4 ScreenPosition : TEXCOORD2;
};

VS_OUTPUT VertexShaderFunction(VS_INPUT input)
{
    VS_OUTPUT output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    return output;
}

float4 PixelShaderFunction(VS_OUTPUT input) : COLOR0
{
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 screenTexCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
    screenTexCoord -= halfPixel;
    return float4(0.2f, 0.2f, 0.2f, 0.2f); //tex2D(sceneSampler, screenTexCoord);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_3_0 VertexShaderFunction();
        PixelShader = compile ps_3_0 PixelShaderFunction();
    }
}
```
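     To make the multi-pass idea above concrete, here is a minimal sketch of the pixel-shader side, under my assumptions (ps_3_0, the VS_OUTPUT struct from the effect above; AlreadyFound, AlreadyFoundCount and ReportValuePS are names I made up, not code I actually have): the values retrieved in earlier passes get uploaded into a fixed-size array, and a pixel clips itself when its value is already in that array.

```hlsl
// Sketch only: values already read back on the CPU in earlier passes.
#define MAX_FOUND 16
float AlreadyFound[MAX_FOUND];
int   AlreadyFoundCount;

float4 ReportValuePS(VS_OUTPUT input) : COLOR0
{
    // The unique number this pixel wants to report (a constant here for the sketch).
    float value = 0.2f;

    // Discard the pixel if its value was already returned by a previous pass.
    [unroll]
    for (int i = 0; i < MAX_FOUND; i++)
    {
        if (i < AlreadyFoundCount && abs(value - AlreadyFound[i]) < 0.5f / 255.0f)
            clip(-1);
    }

    // Survivors write their value; 0.2 in an 8-bit channel reads back as ~51.
    return float4(value, value, value, 1.0f);
}
```

     With only one pixel in the target, whichever surviving fragment is written last ends up in that pixel, so each pass can hand back one not-yet-seen value through GetData.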
  3. I ended up using the normal-packing technique from Crysis 3. Thankfully they explained it quite well in their presentation.
  4. OK, the code in the article is either wrong or improperly explained. First, it constantly ends up taking square roots of negative numbers, which leads to errors, and when I worked around that with abs() the lights it produces are completely wrong. BornToCode's advice (sqrt(1-(G.x*G.x)-(G.y*G.y))) also led to wrong results. The author seems to have confused Guerrilla's approach to normal packing with Crytek's (see the sketch below for the difference).
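     To spell out the difference I mean: the sqrt(1 - x² - y²) reconstruction belongs to the layout where G.xy stores the raw view-space normal x/y (the Guerrilla-style packing, as far as I understand it), not to spheremap-encoded values. A minimal sketch under that assumption (DecodeRawXY is just an illustrative name):

```hlsl
// Only valid when G.xy holds the raw view-space normal x/y and the normal
// is assumed to face the camera (z >= 0). Feeding spheremap-encoded values
// into this gives wrong lighting, and without saturate() the sqrt argument
// can go negative at the edges.
float3 DecodeRawXY(float2 G)
{
    float3 n;
    n.xy = G;
    n.z  = sqrt(saturate(1.0f - dot(G, G)));
    return n;
}
```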
  5. Hi! I am creating an engine on XNA 4.0 that uses a deferred shading technique. My G-buffer is four R8G8B8A8 textures for albedo, depth, normals and lights, where the normals were stored as R = N.x, G = N.y, B = N.z. I was dissatisfied with the quality of my specular highlights, especially after I saw how they look when normals are stored with 16-bit precision, so I started searching for other techniques. Right now I have settled on the CryEngine 3 technique of spheremap transforms, which stores only N.x and N.y and reconstructs N.z. Here is the link to the PPT file: http://www.crytek.com/sites/default/files/A_bit_more_deferred_-_CryEngine3.ppt

     I did it just like slide #13 suggests and everything looks awesome, though a little different from the original approach, but I am in doubt whether I did it correctly. The problem is, their formula for reconstructing the normal has this part: "N.z = length2(G.xy)", and I don't know what length2 means. I simply used "length(G.xy)" there and gave the formula a float2 vector (G.xy), just like in the slide. Is that correct? I also tried "length(G.x - G.y)" and it gives a totally different result. (I put a sketch of how I read the formula below.)

     Would be grateful for any advice/link. Thanks!
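     The way I read it, "length2" is the squared length, i.e. dot(G.xy, G.xy), which is not the same thing as length(G.xy) or length(G.x - G.y). A minimal sketch of a spheremap-style pair that is self-consistent under that reading (my own function names, and the exact signs may differ from the slides depending on the view-space convention, so treat it as an assumption rather than a transcription):

```hlsl
// Spheremap-style packing sketch (not a transcription of the CryEngine slides).
// Range remapping into [0,1] for an 8-bit target is omitted for clarity,
// and the degenerate N.xy == 0 case is not handled.
float2 EncodeSpheremap(float3 N)        // N is a unit-length normal
{
    // |result|^2 == N.z * 0.5 + 0.5, which is what lets z be rebuilt later.
    return normalize(N.xy) * sqrt(N.z * 0.5f + 0.5f);
}

float3 DecodeSpheremap(float2 G)
{
    float3 N;
    N.z  = dot(G, G) * 2.0f - 1.0f;     // "length2" read as squared length
    N.xy = normalize(G) * sqrt(saturate(1.0f - N.z * N.z));
    return N;
}
```

     Plugging the encoder's output straight into the decoder returns the original normal, which is the sanity check that this reading passes.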
  6. Proudly possesses The First In The Debsoc Original Dave Laurence's Autograph