danromeo

Member · Content Count: 249 · Joined · Last visited

Everything posted by danromeo

  1. Phil,   Very helpful, thank you very much.  Simply put: when I set a new render target, I lose the stencil buffers of previous render targets.  Is this correct?  So stenciling needs to be done in a multipass inside of a single SetRenderTarget, yes?  What a PITA....
  2. hi.  I'm trying to create a simple stencil mask on a RenderTarget2D from a bunch of primitives, and then later draw pixels from that render target to another render target in a shader, based on the stencil test pass/fail.  Code is below.  The result I'm getting is that the stencil test either always passes every pixel or always fails every pixel, regardless of which settings I try for the DepthStencilStates.

The idea is to create an overhead view of a forested world and then lay that view over the terrain when viewed from overhead, rather than redrawing the forests on every frame, BUT my question is about stencil buffers.

I set up the following resources:

    MyRenderTarget = new RenderTarget2D(graphicsDevice, mapSize, mapSize, true,
        SurfaceFormat.Color, DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);
    NewRenderTarget = new RenderTarget2D(graphicsDevice, mapSize, mapSize, true,
        SurfaceFormat.Color, DepthFormat.Depth24Stencil8, 0, RenderTargetUsage.DiscardContents);

    DepthStencilState writeStencil = new DepthStencilState()
    {
        StencilEnable = true,
        DepthBufferEnable = false,
        ReferenceStencil = 1,
        StencilFunction = CompareFunction.Always,
        StencilPass = StencilOperation.Replace,
    };

    DepthStencilState stencilMask = new DepthStencilState()
    {
        StencilEnable = true,
        DepthBufferEnable = false,
        ReferenceStencil = 0,
        StencilFunction = CompareFunction.NotEqual,
        StencilPass = StencilOperation.Keep,
    };

During initialization, to create my overhead render target with stencil, I set the DepthStencilState to writeStencil and draw the forests to the render target, which SHOULD give me a stencil buffer containing 0's where there are no trees and 1's where there are trees.
    graphicsDevice.SetRenderTarget(MyRenderTarget);
    graphicsDevice.Clear(ClearOptions.DepthBuffer | ClearOptions.Stencil | ClearOptions.Target,
        Microsoft.Xna.Framework.Color.Black, 1.0f, 0);
    graphicsDevice.DepthStencilState = writeStencil;
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        graphicsDevice.DrawUserIndexedPrimitives<Position4Texture>(
            PrimitiveType.TriangleList, Vertices, 0, 4, Indices, 0, 2);
    }
    graphicsDevice.DepthStencilState = DepthStencilState.Default;

And then at render time I render my terrain, and then in a second pass I set the DepthStencilState to stencilMask and render a quad over the terrain, pulling pixels from MyRenderTarget based on stencil test pass/fail:

    graphicsDevice.SetRenderTarget(NewRenderTarget);
    graphicsDevice.Clear(ClearOptions.DepthBuffer | ClearOptions.Stencil | ClearOptions.Target,
        Microsoft.Xna.Framework.Color.Black, 1.0f, 0);
    graphicsDevice.DepthStencilState = DepthStencilState.Default;

    // < DRAW TERRAIN TO NewRenderTarget >

    graphicsDevice.DepthStencilState = stencilMask;
    effect.Parameters["Texture"].SetValue(MyRenderTarget);
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        graphicsDevice.DrawUserIndexedPrimitives<Position4Texture>(
            PrimitiveType.TriangleList, Vertices, 0, 4, Indices, 0, 2);
    }
    graphicsDevice.DepthStencilState = DepthStencilState.Default;

And in the simple pixel shader I am returning:

    return tex2D(Texture, input.TexCoord);

I've tried various settings in the DepthStencilStates, and the end result is that the stencil test either always passes all pixels, giving me overhead forests with black terrain, or always fails, giving me terrain with no forests.  I've never used stencil buffers before but would like to make extensive use of them.  Can somebody tell me what I'm doing wrong?   THANKS
  3. Phil,   You answered with "So you can't set a new render target and use the stencil buffer from a previous one.", and then asked exactly what I was trying to do.  I was unable to come back to this for several days, so I started a new thread, including code explaining exactly what I'm trying to do.

Maybe I misunderstand your answer, but I'm not setting a new render target and using the stencil buffer from a previous one.  Are you saying that I need to set the render target, write to the stencil buffer, and then immediately send the render target to the pixel shader for the comparison operation?  This makes no sense.....you can't send an active render target to the pixel shader, it will error out.  Or do you mean that I can only perform a stencil test on the active render target?  In that case, if I lose my stencil buffer as soon as I set the render target, how can stencil buffering be in any way useful at all?  Would I have to create the stencil and perform the comparison test all inside one draw call?

SO if what I'm trying to do isn't possible, maybe I could trouble you to explain exactly how I can do it?  Really all I'm trying to do is mask certain pixels out of a render target with an early stencil test.....seems pretty simple.  This is driving me nuts, and I've found several other examples of people with pretty much the same problem who haven't found a resolution, or people who made it work in XNA 3 but couldn't make it work in XNA 4.  I found one guy who says you need to use PreserveContents on a render target in order to write to the stencil buffer, but still no dice, although my tests do indicate that the stencil buffer is never being written to.

I can't even find a decent explanation of how stencil buffering works.  I might be conceptually completely out of the ballpark.  For example, I *think* what I'm doing is comparing the stencil buffer of a render target to a simple reference value.  Is this correct?  Am I comparing the contents of the render target's stencil buffer to the stencil buffer of the back buffer, noting that I'm not drawing to the back buffer?  Is drawing to the back buffer the only circumstance where stencil buffering will work?  Or maybe you could help me with simple code showing how a simple stencil mask works, that actually does work?

Much Confusion.  I really appreciate your help.
  4. Hi.  Can somebody explain to me or point me to a resource on using a basic stencil buffer on a render target in XNA 4?  I'm finding a lot of XNA 3 resources, and I know that it changed in XNA 4, and a lot of articles on XNA 4 that are overkill and confusing.

I want a render target with a stencil buffer, which is drawn to during initialization.  Then at draw time I want to pass the render target to the pixel shader, selecting pixels from the passed render target to render to the current render target, but discarding pixels based on the original render target's stencil buffer.

I THINK, correct me if I'm wrong:
  • The graphics device has a stencil buffer which is zeroed out when you set the render target.
  • I have to enable the stencil buffer.
  • I can set the game state to compare the graphics device stencil buffer and the render target stencil buffer at draw time.

BUT, how do I set the values in the render target stencil buffer?  Also I'm using Multiple Render Targets, if this makes any difference.  I know this is probably simple stuff, but I'm not finding any resource that gives a simple explanation of What It Is and How To Use It.   Thanks
  5. How can I see the contents and allocations of my video card memory, either in real time or in PIX or any other (free) diagnostic program?  How can I poll the available video memory from a program before making allocations?  What is the best way to fine-tune allocations to video memory?  I just realized that allocating beyond the available memory seems to crash the computer with no warning.  How do I avoid this while still taking advantage of the full available memory?  I'm using XNA at the moment but am porting to SharpDX.   Thanks
  6. How do I determine the camera view angle to an object with the object's rotation factored in, to display a 2D "imposter" based on the actual view angle of the object?  I can get the camera angle to the object with:

    half3 center = mul(input.inPos, xWorld);
    half3 EyeVector = normalize(center - CameraPosition);
    float lookYaw = atan2(EyeVector.x, EyeVector.z);

But I'm not having any luck determining the view angle with object rotation factored in, either by adding or subtracting the EyeVector with the ObjectRotation vector, or by atan2'ing the ObjectRotation vector and adding or subtracting the result with lookYaw.  All vectors in question are normalized.   Hope this makes sense.  Thanks!
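For what it's worth, one approach that seems to work is to operate on yaw angles rather than vectors: take the object's own yaw angle (not its rotation vector), subtract it from the eye yaw, and wrap the result.  A minimal Python sketch of the math (the function names and the "(0, 0, 1) is zero yaw" convention are my assumptions, not from the original code):

```python
import math

def wrap_angle(a):
    # Wrap an angle in radians into [-pi, pi).
    return (a + math.pi) % (2.0 * math.pi) - math.pi

def relative_yaw(eye_vector, object_yaw):
    # Yaw of the eye vector around the Y axis, as in the HLSL above.
    look_yaw = math.atan2(eye_vector[0], eye_vector[2])
    # Subtract the object's own yaw ANGLE, then wrap, to get the
    # view angle expressed in the object's own frame.
    return wrap_angle(look_yaw - object_yaw)

# Camera on the +Z side of an object that is itself rotated 90 degrees:
rel = relative_yaw((0.0, 0.0, 1.0), math.pi / 2)
```

Adding or subtracting direction vectors doesn't produce an angle difference, which may be why the vector-based attempts failed; angle subtraction plus wrapping does.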
  7. Recalculating terrain normals

I'm by no means an expert.  I don't think you need to calculate normals in the shader unless your terrain is moving every frame, as opposed to the camera moving every frame.  Below is the code I use to get the terrain normals when I initialize new terrain, in C#, which seems to work.  It's not my code; you could probably find a better algorithm for generating the normals, but since it doesn't run every frame, speed is not a huge factor.

    private static Vector3 SimpleCrossFilter(int x, int y, ref float[] heightfield,
        float normalStrength, int width, int length)
    {
        // Create four positions around the specified position
        Point[] pos = new Point[]
        {
            new Point(x - 1, y), // left
            new Point(x + 1, y), // right
            new Point(x, y - 1), // higher
            new Point(x, y + 1), // lower
        };

        // Get the height values at the four positions we just created
        float[] heights = new float[4];
        for (byte i = 0; i < 4; i++)
        {
            // Check if we can access the array with the current coordinates
            if (pos[i].X >= 0 && pos[i].X < width && pos[i].Y >= 0 && pos[i].Y < length)
            {
                int j = pos[i].X + pos[i].Y * width;
                heights[i] = heightfield[j];
            }
            else
            {
                // If not, then set value to zero.
                heights[i] = 0;
            }
        }

        // Perform simple cross filter.
        float dx = heights[0] - heights[1];
        float dz = heights[2] - heights[3];
        float hy = 1.0f / normalStrength;

        // Create and normalize the final normal
        Vector3 normal = new Vector3(dx, hy, dz);
        normal.Normalize();
        return normal;
    }
  8. Hi.   I have a version 3.0 vertex shader for (what I call) Static Imposters that essentially decides which image in a texture atlas should be used and sends that info to the pixel shader.

I am confident that the shader is written correctly, but it always displays the wrong image!  If I run the shader through a debugger, the numbers are all correct, but the wrong image displays.  I have isolated this to a single variable, whose value is assigned from the program with Effect.Parameters["ImagesPerView"].SetValue(ImagesPerView);

Running the shader through the PIX debugger, the value of ImagesPerView is always set correctly to 12, but the program displays the image as if the value of ImagesPerView were set to 11.  If I hardcode the value of ImagesPerView in the vertex shader to 12, overriding the program assignment, the program displays the correct image!  WHY am I getting different results even though the value of ImagesPerView is always 12?

The program is assigning the value of 12 to ImagesPerView on every draw call.  I don't think this is due to any sort of implicit conversion, as I have checked the code thoroughly, and even did an (int)ImagesPerView conversion on every operation to make sure.  All of the other variable assignments in the shader appear to be correct and functioning correctly.  Stared at it until I was cross-eyed.....What The Heck am I doing wrong?   Any ideas welcome.   THANKS
  9. This topic is old because I've been involved with other things.  I've seen conflicting info re: the use of ints; it seems I saw an MS recommendation to use ints to avoid rounding errors.  Regardless, I get better results when replacing ints with floats.   Thanks!
  10. Hi.  Did you run scandisk with both checkboxes checked, or with both the /F and /R switches (chkdsk /f /r)?  The /R switch, or the bottom checkbox, repairs bad sectors by copying the data to another location and marking the sector so it won't be accessed anymore.  Check your logs after running; if it found any bad sectors, run it again and see if it finds any more.....or if it finds a lot of bad sectors.....replace the drive.  The /R switch may run for a long time depending on how many bad sectors it finds....maybe hours.  BACK UP YOUR HARD DRIVE BEFORE SCANNING WITH /R.....if your boot sector is bad, the /R switch can cook your operating system and cause much sorrow.

Also, scandisk isn't 100% reliable; for example, it won't find problems with the drive motor, which would still cause errors if it's failing.  If you can't get past your error message you should clone the drive and replace it.  IMO you should replace the drive now.....I'm not a DX expert, but I don't think hardware errors are a result of bad code, because code never directly accesses hardware, so you can't write any code that would access the hard drive in the wrong way, etc...all of this is negotiated by DX and the BIOS.  Also, if you suddenly have bad sectors then more will probably follow, and keep in mind that you're only seeing the bad sectors that your program is trying to access.....there may be many more.

If you're dealing with a drive failure, you'll never find the source of the problem until you repair or replace the drive.  If the drive crashes before you replace it, recovering will be a lot more difficult, or impossible, or very expensive, so make sure you're backed up.
  11. I thought for sure that unbird must be right with rounding errors and/or int behavior, but I think I've covered this and I'm still getting incorrect results.  Declaring all globals as floats in the shader.  Rounding values up to the nearest integer and truncating.  Still the same results, and it still seems to come down to this one single global.  Again, the number I'm sending to the pixel shader displays in PIX as 0.00, but the program behaves as if it is a different value.

More facts: if I hard-set the global value in the .fx file and never throw a value from the program at all, I still get the same incorrect results.  BUT if I declare a variable in the vertex shader and assign the same value, everything displays correctly.  This must be a significant fact....not sure what to do with it, though.  What's the difference between a global and a variable in an .fx file?  How are they treated differently?

I tried changing the declaration order in the .fx file; still same results.  I keep thinking it must be a syntax error somewhere in the .fx file, but I can't find any problems, and obviously it compiles.
  12. Indexing, definitely a good idea, but I've already checked this.  Also, if I throw a hard value from the program (Effect.Parameters["ImagesPerView"].SetValue(12)) it still seems to munge the value, but if I hard-set a variable value in the shader (ImagesPerView = 12) everything displays correctly.  What's the difference??   I'm not an expert, but I know shaders make some assumptions about variables.  For example, in different parts of the program I'm sending different values to ImagesPerView....but I still don't understand how the debugger can display all of the correct values and yet the program seems to be doing something entirely different.

Rebuilt the entire project....still same results.

Before I post code, I'm going to remove all of the instances of throwing different values to ImagesPerView and then strip the shader code down to its bare minimum.  This looks like some sort of munging to me.....for example, hard-coding the variable value in the shader results in a lot of code being skipped, so the problem could be in any line of the skipped code.  There is a small bit of branching in the shader that I'm suspicious of, although I've used branching in shaders lots of times without any problems.  It stands to reason that if all the vertex shader did was accept a value from the program and send it to the pixel shader, it would send the correct value!   If any of this inspires any ideas, please speak up!  THANKS for your suggestions.
  13. Hi.  I'm implementing a spherical billboard system for overhead views.  When the camera is directly overhead, I don't want the billboard to rotate on its Y axis, but instead want to assign an arbitrary Y rotation to the billboard, for example trees with varying Y axis rotations.  I'm forcing the billboard to face upward with these statements:

    sideVector = float3(1, 0, 0);  // right
    upVector = float3(0, 0, -1);   // forward

How do I rotate this by an amount x on the Y axis?   Thanks
  14.

    sideVector = cross(float3(0, 1, 0), -input.rotation);
    upVector = input.rotation;

does the trick: an up-facing billboard with rotation.
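A quick sanity check of the cross-product trick in plain Python (standing in for the HLSL; world up is assumed to be +Y, and `rotation` is the billboard's facing direction in the ground plane):

```python
def up_facing_billboard_basis(rotation):
    # Mirrors the HLSL above:
    #   sideVector = cross(float3(0,1,0), -input.rotation);
    #   upVector   = input.rotation;
    vx, vy, vz = -rotation[0], -rotation[1], -rotation[2]
    # cross((0,1,0), (vx,vy,vz)) = (vz, 0, -vx)
    side = (vz, 0.0, -vx)
    return side, rotation

# rotation = (0,0,-1) reproduces the hard-coded vectors from the
# previous post: sideVector = (1,0,0), upVector = (0,0,-1).
side, up = up_facing_billboard_basis((0.0, 0.0, -1.0))
```

So the hard-coded "face straight up" basis from the previous post falls out as the special case where the rotation is the forward vector, and any other yaw just rotates the pair together.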
  15. You can try rendering to a rendertarget and then presenting to the screen, instead of rendering directly to the screen.  
  16. I'm trying to implement Static Imposters and am having a hard time finding info or samples on how to create a pixel shader....can anybody point me to any resources?  I'm finding lots of theoretical discussions but no solid code, and I'm thinking my math isn't strong enough to write an efficient shader.

Specifically, I have spherical billboards that will draw images from texture atlases of different angles based on the camera angle to the object, always lerping between the two images that describe the nearest angle, with images for every 45 degrees, including top-down angles.  Everything works except the final mechanism to decide which images to use.  So I THINK I need a pixel shader that determines the camera angle to the object on the Y axis and picks the two images closest to the angle, lerping between them as the camera angle changes, and determines the camera angle to the object on the X axis to decide if top-down images should be used.  I.e., if I have 24 images in my atlas, the first 8 have a 0 degree rotation on the X axis with 45 degree intervals on the Y axis, the next 8 have a 45 degree rotation on the X axis with 45 degree intervals on the Y axis, and the final 8 have a 90 degree rotation on the X axis (straight down) at 45 degree intervals on the Y axis.  SO if the camera is above the object, add 8 to the image index to get a down-angle image, and add 16 to the index to get a straight-down image.  Hope this makes sense.

I'm trying to add trees to my world that will allow for a top-down view, using deferred rendering to create lighting, shadows, and reflections.  This will be my final step if it works!   THANKS
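The index math being described can be worked out off the GPU first.  A minimal Python sketch (the function name, the [0, 2*pi) yaw convention, and rounding pitch to the nearest 45-degree row are my assumptions based on the layout above):

```python
import math

IMAGES_PER_VIEW = 8                      # 45-degree yaw steps per row
YAW_STEP = 2.0 * math.pi / IMAGES_PER_VIEW

def imposter_indices(yaw, pitch):
    """Pick the two atlas images bracketing `yaw`, plus a lerp factor.

    yaw   : camera yaw around the object, radians, in [0, 2*pi)
    pitch : camera elevation, radians; 0 = level, pi/2 = straight down
    Layout assumed: rows of 8 images at 0, 45, and 90 degrees pitch.
    """
    t = yaw / YAW_STEP
    lo = int(t) % IMAGES_PER_VIEW        # nearest image below the yaw
    hi = (lo + 1) % IMAGES_PER_VIEW      # next image, wrapping at 360
    lerp = t - int(t)                    # blend factor between the two
    # Pick the pitch row: +0 (level), +8 (45 deg), +16 (top-down).
    row = max(0, min(2, int(round(pitch / (math.pi / 4.0)))))
    offset = row * IMAGES_PER_VIEW
    return lo + offset, hi + offset, lerp
```

The same arithmetic ports to the vertex shader directly (floor and frac in place of int() and the subtraction), with the two indices and the lerp factor passed down to the pixel shader.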
  17. Hi.   I'm seeking methodology to run a fairly data-heavy, GPU-intensive, highly graphical .NET game from a web page.  I am a .NET programmer but not a web guy at all, so my level of expertise in this area is effectively zero at the moment.

My goal is that people can log into a web site, download the necessary data, run the app at an acceptable frame rate, and interact with other online players.  Note that this app challenges the GPU even when run locally, so I have to keep latency to a bare minimum.  One method that occurs to me is to install and run the app from the local machine, while downloading data and interacting with other users on the web site.

As in real life, I am approaching this naively and bright-eyed.  ANY comments, suggestions, guidance, resources, and yes, even barefaced laughing at me, would be useful at this juncture.  From a Square One, Hello World perspective, HOW DO I DO THIS?????   Thanks!   gamedev.net is awesome.
  18. How do I convert a 0-360 float value representing a Y-axis rotation in degrees to a Vector2 value representing the direction on the XY plane IN A PIXEL SHADER?   Thanks 
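A Python sketch of that conversion (the convention here — 0 degrees pointing along the second axis, angles increasing toward the first, matching an atan2(x, z)-style yaw — is an assumption; swap or negate sin/cos if your zero direction differs).  In HLSL this would be roughly float2(sin(r), cos(r)) with r = radians(deg):

```python
import math

def yaw_degrees_to_dir(deg):
    # Convert a 0-360 yaw in degrees to a unit direction on the plane.
    rad = math.radians(deg)        # degrees -> radians: deg * pi / 180
    return (math.sin(rad), math.cos(rad))
```

The result is always unit length, so no normalize is needed afterward.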
  19. Still SO close to finishing optimized deferred lighting.  Using the "standard" deferred algorithm of multiple render targets, etc.  With shadows.  Pretty cool.  So, in brief: I calculate shadow depth, then render the lightmap to a render target, copy the RT to a Texture2D, send the Texture2D to the pixel shader for the next lightmap, combine the existing light with the new light, and repeat for each light.

I'm trying to clip my draw quads to the lights for optimization.  This means that I can't buffer my light maps to a Texture2D and send it to the light map pixel shader, because the light map will only draw within the bounds of the clipped draw quad, thereby clipping out the existing light.

Seems like I need to draw the first clipped light to a render target, copy it to a full-screen Texture2D buffer, DON'T send the buffer to the pixel shader, draw clipped light 2 to the render target without the existing light, then send both the new render target and the existing Texture2D buffer to a pixel shader which will combine the two in a full-screen draw, copy the whole shootin' match to a full-screen buffer, and repeat for each light.

By my very rough calculations, it seems like both algorithms will take about the same amount of time; maybe a small optimization per light, but nothing that will add up to any significant performance.

Any opinions on this?  Suggestions for a better algorithm for clipping light draw quads in a deferred renderer for optimization?   THANKS
  20. Unfortunately the shadowing algorithm requires rebinding the render target.
  21. >  it sounds much more like an XNA-specific workaround.   This absolutely IS an XNA-specific workaround.  I am using XNA!

I may be misconceived here....but I am avoiding using PreserveContents because I have an eye towards porting to <as yet unnamed> platforms, particularly Windows Phone 8, which I am barely familiar with (or even Windows Phone 9...i.e., future mobile platforms will undoubtedly have the graphics power eventually)...DEFINITELY moving to SharpDX & MonoGame asap, and yes, I have read that PreserveContents is not Xbox-friendly....so I'm trying to keep my options open for unknown platforms.  Too much for one person to know, and by the time I'm ready to port, everything may have changed.  BUT without PreserveContents, RTs in XNA are volatile, and the copying mechanism has been used by people who are a lot better at this than I am.  This is the first situation where I haven't been able to get around the PreserveContents issue with a creative algorithm....as I said, still learning (yes, I know, learning a dead platform)....which is why I started this post.  The default behavior of DX9 re: preserving contents is not a concern, because one of the few things that I do know for sure is that I'll be moving away from DX9 soon, and I am married to C#; being only one person, development is SO much faster than in C++.

SO....is PreserveContents going to be an issue, say, with MonoGame for WP8?  Hope that's not a dumb question; again, I've barely explored the platform.

Appreciate your comments, definitely food for thought.
  22. This is a learning process for me, so forgive any misconceptions.  I believe the basic algorithm for deferred lighting is: render light pass to RT -> copy RT to existing light buffer (texture) -> send existing light buffer to pixel shader -> render next light, combining with the existing light, to RT; repeat.

Copy-to-texture is necessary because in XNA, when you reset the render target to RT, you wipe out the contents of that RT (unless you're preserving contents, which I don't want to do).  Resetting the render target to RT is necessary because the shadow algorithm requires setting the render target to a depth buffer RT, so the RT constantly switches back and forth.  I'm not making any of this up....culled from the likes of Riemer and Catalin.

I'm not using light geometry for the same reason....the subsequent light pass needs to include the existing light, and drawing only the current light geometry will clip the existing light.  Likewise, I don't believe that additive blending is possible, because the RT contents are wiped out when setting the render target.  So you copy the RT to a texture and send the texture to the pixel shader; again, all of this pulled mostly from Riemer's samples.

I actually have a pretty quick-running engine, with shadows, and am ready to wrap up deferred lighting and move on, but I want to optimize as much as possible, mostly so I can call this phase Done DONE and won't have to revisit it later.  After culling and reducing the resolution of the RTs/existing light buffers, the only thing left seemed to be clipping the light draw quads to the size of the actual light on screen, therefore only drawing into the screen area affected by the light, while still combining with the existing light.  Without losing shadows, limiting the number of light sources, or using a ridiculous amount of memory.....so reusing the RT/buffer texture rather than using one RT per light and then combining.

I'm probably misusing the term "clipping".  I mean resizing my draw textures to clip everything outside of the screen area affected by the light.  As Gavin points out, I'm trying to reduce the total draw size while avoiding full-screen draws, yet retaining shadows and combining with existing light, where light geometry and additive blending don't seem to be an option.  Also, as Gavin points out, I suspect that I'm doing too much work.....so my basic question is, "Is there an efficient way to do this, or is it not worth the effort?".

Again, I am definitely not an expert.  For example:   >  you could use BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource   I have no idea what this means.  Will find out.   Thanks for your replies.
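Abstracting away the XNA specifics, the accumulation loop being discussed has this shape (a structural sketch only — `render_light` and `combine` are stand-ins for the render-target draw and the full-screen combine pass, not real API calls):

```python
def accumulate_lights(lights, render_light, combine, initial):
    """Ping-pong light accumulation, as described above.

    Each light is rendered into a fresh target, then combined with the
    running total held in a separate buffer -- mirroring the XNA
    copy-to-Texture2D step, since with DiscardContents a render target
    loses its contents when it is rebound.
    """
    total = initial
    for light in lights:
        current = render_light(light)    # draw this light's map
        total = combine(total, current)  # full-screen combine pass
    return total

# With numbers standing in for light maps and additive combining:
result = accumulate_lights([1, 2, 3], lambda l: l * 10, lambda a, b: a + b, 0)
```

The cost question in the thread is visible here too: `combine` runs once per light at full-screen size, which is why shrinking only `render_light` to the clipped quad saves little overall.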
  23. I think I got this.  Confusion re: coordinates.  The light frustum corners are in "light world" space???, i.e., centered around the light's world position.  Transform the light frustum corners by the inverse of the light's world matrix to get them into model space, and then Viewport.Project.....bingo.  ATEFred, if you're still following this, does that seem correct?  It certainly appears correct on the screen.

No inverse projection, no divide by w....starting to make sense of this now.....none of that stuff happens until it hits the pixel shader....until then it's all world coordinates.  Simple....but you get caught up in the details.  I took the long way around on this one, but I'm definitely starting to understand transformations.....essential stuff.   THANKS AGAIN
  24. Hello again, GameDev all.  SO close to finishing my deferred renderer.  Working on optimizations now.

I have BoundingFrustums that I would like to convert from Camera Space (I think) to Screen Space.  NOTE: If my knowledge here seems a little fuzzy, that's because IT IS.

Before you tell me about Viewport.Project......   I *think* that a BoundingFrustum, created from the view and projection matrices, is already in Camera Space (????), not in Object Space, and Viewport.Project wants to take objects from Object Space, so I can't use Viewport.Project.  ???   I tried transforming the corners of the frustum by the projection matrix and wound up with numbers that can't possibly be right.  Does this bring into play the always-confusing homogeneous W thing????, which means I need Vector4's, and my frustum corners are Vector3's.

Long story short, I'm trying to clip my draw quads....no problem from object space, but since I have frustums in camera space (I think), converting directly to screen space might be faster (or not).

Wondering.....
1. If I am anywhere close to right about ANY of this???
2. If not, would somebody please straighten me out?  (can always count on you guys)
3. HOW do I convert a BoundingFrustum to screen coords?
4. If I need to multiply Camera Space by the projection matrix, how can I convert my Vector3 corners into homogeneous Vector4's?  (sounds pretty slick but I barely understand what this means)

THANKS, as always
  25. Thanks for your reply.  I am familiar with the transformations....just can't seem to get them to work!   I have a spotlight, and the light volume is defined by a bounding frustum.  So it's not my camera's bounding frustum.  I need to get the screen coordinates of the light's bounding frustum, or the area of the screen that the light is affecting, as seen by the camera.

I tried transforming the bounding frustum corners by the camera's projection matrix and then dividing by w.  Incorrect results.  I tried transforming by the inverse camera view matrix, dividing by w, then transforming the result by the inverse world matrix, then dividing by w, to get the object space coordinates, and then using Viewport.Project to get the screen coordinates, but still incorrect results.   What am I doing wrong??
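For reference, the standard chain for taking a world-space point (e.g. a corner from BoundingFrustum.GetCorners) to pixels is one view*projection transform, one divide by w, then the viewport mapping — no inverse matrices.  As far as I can tell, XNA's Viewport.Project performs this same chain, so passing Matrix.Identity as the world matrix should project points already in world space.  A minimal Python sketch with a toy projection matrix (the row-major, column-vector convention here is an assumption):

```python
def project_to_screen(p_world, view, proj, vp_w, vp_h):
    # clip = proj * view * world  (4x4 row-major matrices acting on
    # column vectors), then perspective divide, then viewport mapping.
    def mul(m, v):
        return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]
    clip = mul(proj, mul(view, list(p_world) + [1.0]))
    w = clip[3]                      # the homogeneous w -- divide by it
    ndc_x, ndc_y = clip[0] / w, clip[1] / w
    # NDC [-1,1] -> pixels; Y is flipped because screen Y grows downward.
    return ((ndc_x * 0.5 + 0.5) * vp_w, (1.0 - (ndc_y * 0.5 + 0.5)) * vp_h)

# Identity view, minimal perspective (w = -z), 800x600 viewport:
identity = [[float(r == c) for c in range(4)] for r in range(4)]
persp = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, -1.0, 0.0]]
screen = project_to_screen([0.0, 0.0, -5.0], identity, persp, 800, 600)
```

For a light frustum, project all eight corners this way and take the min/max of the resulting pixel coordinates to get the screen rectangle the light covers; corners with w <= 0 are behind the camera and need clipping first.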