
zz2

Member

  • Content Count: 61
  • Community Reputation: 282 Neutral


  1. With those settings I get jagged edges. I'm kind of lost on what else to try.
  2. XNA 4 Texture2D and DXT5

     Why not use the built-in DXT compression? If you go through the Content Pipeline you can: with the image selected, choose Content Processor -> Texture Format -> DxtCompressed in the Properties window. According to Shawn, the Content Pipeline automatically chooses between DXT1 and DXT5, depending on whether your texture has an alpha channel.
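     At runtime the loading side doesn't change; as a minimal sketch (the asset name "myImage" is just a placeholder):

         // Loading code is the same once the pipeline emits a DXT surface;
         // "myImage" is a placeholder asset name for this sketch.
         Texture2D tex = Content.Load<Texture2D>("myImage");

         // The pipeline's choice can be verified at runtime: Format will be
         // SurfaceFormat.Dxt1 (no alpha) or SurfaceFormat.Dxt5 (alpha).
         System.Diagnostics.Debug.WriteLine(tex.Format);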
  3. I have added screenshots to the first post. While creating them I noticed that the artifacts are visible in other areas too (not just when creating the alpha-override effect). The first error is visible as a second outline in the alpha-override effect. The second error, which I just discovered, is that the background text is outlined when gamma correction is set to on, even though there should be no outline. The background text is rendered as vector shapes (not with SpriteFont). When gamma correction is set to off, the image renders correctly.
  4. The border appears around the stencil-masked shape, not around the full-screen quad. This happens only when "Antialiasing - Gamma Correction" is set to on; if it's turned off, it renders correctly. I will post screenshots with that setting turned on and off. I tried creating the device with PreferMultiSampling false and multisampled render targets; the problem still persists. My guess is that the artifacts are caused because I save stencil data into the red channel, which is being gamma corrected around the stencil shape's edges (the stencil mask is created by rendering a 2D vector shape into a multisampled render target). Which driver version are you using? The version I have installed, v331.93, doesn't say that it only affects OpenGL programs; maybe they changed the description? Researching the subject I found a forum thread where the OP claims that the description in the Nvidia Control Panel is false, because the Antialiasing - Gamma Correction option also affects Direct3D games like Half-Life 2 and Call of Duty.
  5. Can Nvidia's override be controlled/disabled?
  6. I don't know. I am using the XNA framework and I am not sure exactly which underlying D3D surface it uses. Here is the full list of supported surfaces: SurfaceFormat Enumeration (I'm using the Color surface). I tried setting SRGBWriteEnable to false in the shader, as described here, but I get a compiler error saying it is obsolete. However, the plan is to migrate the whole project to MonoGame once it is feature complete (MonoGame acts as a drop-in replacement for XNA), so even if correcting this might not be possible in pure XNA, any hack for MonoGame would be OK too.

     Edit: Two channels are needed. One contains the alpha-override data and the other a stencil mask that specifies where the alpha override gets applied. The stencil mask is written to the red channel and the alpha override to the alpha channel. I'm trying to limit the graphics features I use to XNA's Reach profile, so that porting to mobile platforms is easier later on (or at least so there can be an alternative render path for the Reach profile). The Reach profile's texture formats are: Color, Bgr565, Bgra5551, Bgra4444, NormalizedByte2, NormalizedByte4, Dxt1, Dxt3, Dxt5.
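     For reference, this is roughly how the mask target is created on my side (a sketch; width, height and the multisample count are placeholders, and SurfaceFormat.Color is a plain linear RGBA format in XNA 4, so any gamma conversion at the edges would come from the driver rather than the format):

         // Sketch of the multisampled mask render target (XNA 4 constructor).
         RenderTarget2D maskTarget = new RenderTarget2D(
             GraphicsDevice,
             width, height,                     // placeholders
             false,                             // no mipmaps
             SurfaceFormat.Color,               // linear 8-bit RGBA
             DepthFormat.Depth24Stencil8,
             4,                                 // preferred multisample count (placeholder)
             RenderTargetUsage.DiscardContents);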
  7. 32-bit ARGB pixel format with alpha, using 8 bits per channel.
  8. Hi, I have a multisampled render target where I use the red channel for some sort of stencil data, but the problem is that I get incorrect results around borders if "Antialiasing - Gamma Correction" is set to on in the Nvidia Control Panel. Is there a way to turn that off while I am writing to that specific render target (if the user has it enabled in the graphics card's control panel)?

     Edit: screenshots with Antialiasing - Gamma Correction set to:

     On: http://i.cubeupload.com/stXIKw.png
     Off: http://i.cubeupload.com/QaCO9K.png

     While creating these two screenshots I noticed that the artifacts caused by gamma correction are also visible in other areas, like the background text: the text is outlined when gamma correction is on (but should not be). The text is not rendered with a bitmap font, but as vector shapes.
  9. Hi! I'm trying to figure out where it is specified whether the curve is convex or concave in this sample shader from the article Rendering Vector Art on the GPU. Does it need to be rendered in two batches, one for convex triangles and one for concave triangles?

         float4 QuadraticPS(float2 p : TEXCOORD0, float4 color : COLOR0) : COLOR
         {
             // Gradients
             float2 px = ddx(p);
             float2 py = ddy(p);

             // Chain rule
             float fx = (2 * p.x) * px.x - px.y;
             float fy = (2 * p.x) * py.x - py.y;

             // Signed distance
             float sd = (p.x * p.x - p.y) / sqrt(fx * fx + fy * fy);

             // Linear alpha
             float alpha = 0.5 - sd;

             if (alpha > 1)        // Inside
                 color.a = 1;
             else if (alpha < 0)   // Outside
                 clip(-1);
             else                  // Near boundary
                 color.a = alpha;

             return color;
         }
  10. In case anyone else is interested, here are some good articles I found while researching this subject.

      Two buffers is the minimum: one front buffer and one back buffer. The front buffer is the on-screen buffer, to which we cannot write; back buffers are off-screen surfaces to which we draw. When creating a swap chain we only specify the back buffer count (a front buffer always exists in one form or another; it may not be the same size as the back buffers if your game is in windowed mode, and you do not have direct control over the front buffer in D3D9). A toy model of this is sketched below.
      http://msdn.microsoft.com/en-us/library/windows/desktop/bb174607%28v=vs.85%29.aspx
      http://msdn.microsoft.com/en-us/library/windows/desktop/bb153350%28v=vs.85%29.aspx

      An MSDN article on OpenGL (may be old but still relevant) with a clear explanation of the relationship between the terms framebuffer, front buffer, and back buffer:
      http://msdn.microsoft.com/en-us/library/windows/desktop/dd318339%28v=vs.85%29.aspx

      A more in-depth article that describes the differences between windowed and full-screen mode (DX10):
      http://msdn.microsoft.com/en-us/library/windows/hardware/ff557525%28v=vs.85%29.aspx

      Another interesting thing: it seems the driver can create additional buffers on its own.
      http://xboxforums.create.msdn.com/forums/p/58428/358113.aspx#358113
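      As a toy model (plain C#, no real graphics API; the buffer letters are made up), a flip-style swap chain is just a ring of surfaces where only the roles rotate on present while the memory stays put:

          using System;

          class SwapChainToy
          {
              static void Main()
              {
                  // One front buffer + two back buffers. On each flip only the
                  // roles rotate; the memory blocks A, B, C never move.
                  string[] surfaces = { "A", "B", "C" };
                  int front = 0; // index of the surface currently on screen

                  for (int frame = 1; frame <= 4; frame++)
                  {
                      int drawTarget = (front + 1) % surfaces.Length;
                      Console.WriteLine(
                          $"frame {frame}: on screen = {surfaces[front]}, " +
                          $"drawing into = {surfaces[drawTarget]}");
                      front = drawTarget; // flip: freshly drawn buffer becomes front
                  }
              }
          }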
  11. I think you forgot about the front buffer in the circular flip example. The front buffer becomes back buffer 2, back buffer 1 becomes the front buffer, and back buffer 2 becomes back buffer 1. That is for the case of the circular flip technique, where only the pointers (or the names/purposes of the buffers) change, but the actual content stays in the same spot in memory.

      I think I understand now. Triple buffering is meant to be used with V-Sync, and the reason to use it is to prevent the frame-rate drop you get with double buffering when V-Sync is enabled. Triple buffering can increase lag by up to one frame (one frame of the monitor's refresh rate, right? ... so 120 Hz monitors can be an improvement when triple buffering is used?). Anand's technique would only be an improvement (reduced lag) over the circular flip technique in old games where the GPU renders faster than the monitor's refresh rate. With a slow GPU and low FPS there would not be any notable difference between the techniques (for both, there will be lag of up to one frame of the monitor's refresh rate). A much better technique to reduce lag is to separate the input, logic & physics loop from the graphics loop and move them to separate threads, so the game's input stays responsive even when the frame rate drops below the monitor's refresh rate (useful even when not using triple buffering); a sketch of that kind of loop follows below. The title of this thread is wrong hehe
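      Something like this is what I mean (a minimal single-threaded sketch of the same idea: a fixed-step input/logic/physics update decoupled from rendering; the threaded version adds synchronization around the shared state, and the 120 Hz step rate is just a placeholder):

          using System.Diagnostics;

          class GameLoop
          {
              const double StepSeconds = 1.0 / 120.0; // fixed logic rate (placeholder)

              static void Main()
              {
                  var clock = Stopwatch.StartNew();
                  double previous = clock.Elapsed.TotalSeconds;
                  double accumulator = 0;

                  while (true)
                  {
                      double now = clock.Elapsed.TotalSeconds;
                      accumulator += now - previous;
                      previous = now;

                      // Run as many fixed logic steps as real time demands;
                      // input stays responsive even if rendering is slow.
                      while (accumulator >= StepSeconds)
                      {
                          UpdateInputLogicPhysics(StepSeconds);
                          accumulator -= StepSeconds;
                      }

                      Render(); // free-running; may be v-synced by the swap chain
                  }
              }

              static void UpdateInputLogicPhysics(double dt) { /* game state */ }
              static void Render() { /* draw latest state */ }
          }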
  12. Thank you for such an in-depth response. How long is the additional latency with triple buffering? Is it always one frame, or is a one-frame lag the worst-case scenario? What happens in this case:

          Front buffer:  frame 1
          Back buffer 1: frame 2
          Back buffer 2: GPU processing frame 3

          --monitor refresh-- (circular flip)

          Front buffer:  frame 2
          Back buffer 1: GPU processing frame 3
          Back buffer 2: empty?

      Is the second state correct?

      Also, wouldn't a detached input, logic & physics loop that runs faster than the graphics part give a similar result to frame skipping?

          Input, logic & physics: 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
          Graphics:               1, 2,    4, 5,    7, 8, ...

      So effectively graphics is skipping frames 3 & 6. How can this form of frame skipping be better than Anand's frame skipping? I know I am asking a lot of questions, but please bear with me. I understand it can reduce input lag, but wouldn't skipping input/logic/physics frames produce the same stuttering effect as Anand's frame skipping? Also, now we are talking about synchronizing three loops: 1) input, logic & physics, 2) graphics, 3) the monitor's refresh rate.

      One mistake you made here, I think: I believe Anandtech suggests dropping the contents of the 1st back buffer (because it holds the oldest frame) and starting to calculate/write the contents of the 4th frame into it, so the monitor ends up showing frames 1, 3, 4. And it is the GPU that must be ready for the 4th frame for that to happen, not just the CPU, since we are focusing on synchronizing graphics with the monitor's refresh rate. By dropping one frame we have given the GPU more time to process the 4th frame, right?
  13. The difference comes from the synchronization timings, I think. The first buffer in the sequence is the front buffer, which means the buffer sequence cannot be flipped in circular fashion while the monitor has not finished presenting the front buffer. So rendering has to be halted in the case where both back buffers are completed before the monitor refresh is completed. With bouncing, only the two back buffers are flipped, without affecting the front buffer (the same cannot be done when flipping all three buffers at once in circular fashion). This is how I think it works; I may be wrong. A toy comparison of the two policies is sketched below. Many users report increased mouse lag when using triple buffering in some games; I guess this circular flipping is the reason?
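      To make the difference concrete, here is a toy simulation (plain C#, no real graphics API; the tick numbers are made up) of which frame each policy shows at every refresh when the GPU renders faster than the monitor refreshes:

          using System;
          using System.Collections.Generic;

          class FlipToy
          {
              static void Main()
              {
                  // Abstract ticks: the GPU finishes a frame every 2 ticks,
                  // the monitor refreshes every 3 ticks (GPU outpaces display).
                  const int gpuPeriod = 2, vsyncPeriod = 3, ticks = 18;

                  foreach (bool bounce in new[] { false, true })
                  {
                      Console.WriteLine(bounce
                          ? "-- bouncing back buffers (newest wins) --"
                          : "-- circular flip (FIFO order) --");
                      var ready = new List<int>(); // finished frames in back buffers
                      int nextFrame = 1;

                      for (int t = 1; t <= ticks; t++)
                      {
                          // The GPU completes a frame, unless (FIFO) both back
                          // buffers are already full, in which case it stalls.
                          if (t % gpuPeriod == 0 && (bounce || ready.Count < 2))
                          {
                              if (bounce && ready.Count == 2)
                                  ready.RemoveAt(0); // overwrite the older buffer
                              ready.Add(nextFrame++);
                          }

                          if (t % vsyncPeriod == 0 && ready.Count > 0)
                          {
                              // FIFO must show the oldest frame; bouncing the newest.
                              int shown = bounce ? ready[ready.Count - 1] : ready[0];
                              Console.WriteLine($"refresh t={t,2}: frame {shown}");
                              if (bounce) ready.Clear(); else ready.RemoveAt(0);
                          }
                      }
                  }
              }
          }

      Running it, the FIFO policy shows frames 1, 2, 3, 4, 5, 6 (every frame, in order, but increasingly stale), while the bouncing policy shows 1, 3, 4, 6, 7, 9 (some frames are dropped, but what is displayed is always the newest one available).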
  14. I am sorry about the outdated link; I didn't check the category. It was actually in the top two results on MSDN when I searched for triple buffering, and at the end of the document it said "Build date: 10/14/2013", so I thought the article was current. I am still learning about the subject, and there is a lot of contradicting information and scarce official material that goes into detailed comparisons between the techniques. I am asking this from two perspectives: as a user and as an indie developer.

      As a user, I am trying to understand why so many games have the problem that, with v-sync enabled, the frame rate is halved when FPS drops below the monitor's refresh rate. What is the reason they don't use triple buffering? And why does technology such as Adaptive V-Sync even exist if there is triple buffering (is it purely a marketing gimmick)?

      As a developer, I would first like to learn about this (the pros and cons) and why it is so challenging to get right, before trying to implement it. I thought a minimum of two buffers was needed, one front buffer and one back buffer; how would only one buffer work? Considering I don't know anyone at Microsoft or at any of those game developers, is it wrong to ask for your insight on this topic on this forum? Yes, I made some wrong assumptions and I am sorry about that; I don't know what else to say.

      Can anyone explain in a little more detail how triple buffering is correctly set up in DirectX (with v-sync and without additional lag), preferably in DirectX 9 if possible? Is DXGI_PRESENT_RESTART the answer then, but only for DX11?
  15. So why don't DirectX games use triple buffering when v-sync is enabled? If I understand correctly the circular pattern of switching buffers in the chain described on MSDN, the circular flipping process introduces additional latency, which is not the case with the method of bouncing between two back buffers (while the front buffer is sent to the monitor) that Anandtech described.

      EDIT: v-sync should be enabled with triple buffering to prevent tearing. Here is what I think the combinations are:
      - Double buffering & no v-sync: screen tearing, as the front buffer and back buffer can be switched before the monitor has finished displaying the front buffer.
      - Double buffering & v-sync: no tearing, but a drop in FPS, as the GPU has to wait for the monitor to finish displaying.
      - Triple buffering & v-sync (DirectX's circular flipping): no tearing, but introduces lag. The GPU can complete draws to both back buffers (and then wait), but the buffers are sent to the monitor in fixed order, so an old frame is displayed even when a completed newer frame is already waiting.
      - Triple buffering & v-sync (with bouncing back buffers, as Anandtech describes): no tearing and no additional lag. If the GPU completes drawing to both back buffers, it starts drawing a new frame instantly and overwrites the older back buffer, so there is always only the latest frame waiting to be sent to the monitor (flipped to the front buffer).

      From what I am reading on MSDN, DirectX does not implement the last option, or is it only DX11 that does (and why has it taken them so long to implement it)?