Seabolt

Member Since 26 May 2010

#5206283 Correct UVs after Resolution change

Posted by Seabolt on 23 January 2015 - 05:41 PM

Well, your UVs should be independent of the actual resolution of the texture; they should be values clamped between 0 and 1, where 0 is one side of the image and 1 is the other.
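
Roughly, something like this is what I mean (a made-up helper, not tied to any particular API): as long as the pixel rectangle is divided by the texture dimensions, the same sub-image keeps the same UVs regardless of what resolution the texture is authored at.

	// Map a pixel-space rectangle to normalized UVs. Because the result is
	// divided by the texture size, the UVs stay valid if the texture is
	// re-authored at a new resolution with the same relative layout.
	struct UVRect { float u0, v0, u1, v1; };

	UVRect PixelRectToUV(int x, int y, int width, int height,
	                     int texWidth, int texHeight)
	{
	    UVRect r;
	    r.u0 = static_cast<float>(x)          / texWidth;
	    r.v0 = static_cast<float>(y)          / texHeight;
	    r.u1 = static_cast<float>(x + width)  / texWidth;
	    r.v1 = static_cast<float>(y + height) / texHeight;
	    return r;
	}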

 

So the question is: how are you generating your texture? If you have access to render targets, you could render a quad of the correct size into the correct spot in the source image, blit every pixel into a target texture, and your UVs shouldn't have to change.

 

This is me shot-gunning ideas, though; I'd need to know more about your problem.




#5169188 [CSM] Cascaded Shadow Maps split selection

Posted by Seabolt on 25 July 2014 - 03:35 PM

It's been a little while since I've done CSM, but I used the texture atlas approach, and to avoid the branch I would pass in the tile size as a uniform and divide the tex coord by the tile width and height to determine which tile to sample from. Then I would have a uniform for the depth range between two cascades over which I wanted to lerp, sample from both maps, and lerp based on the distance to the split.
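
Something along these lines is the rough shape of the math, written here as plain C++ rather than shader code; it assumes a single row of equally sized tiles, and all the names are made up for illustration.

	#include <algorithm>

	// Offset a cascade-local shadow UV (0..1) into that cascade's tile of a
	// horizontal atlas. tileWidth would be the uniform passed in by the app.
	void AtlasUV(float u, float v, int cascade, int numCascades,
	             float& atlasU, float& atlasV)
	{
	    float tileWidth = 1.0f / numCascades;
	    atlasU = u * tileWidth + cascade * tileWidth;
	    atlasV = v;   // single row of tiles
	}

	// Blend factor between cascade i and i+1 near the split distance.
	// blendRange is the depth range over which the two maps are lerped.
	float CascadeBlend(float viewDepth, float splitDepth, float blendRange)
	{
	    float t = (viewDepth - (splitDepth - blendRange)) / blendRange;
	    return std::min(std::max(t, 0.0f), 1.0f);   // 0 = cascade i, 1 = cascade i+1
	}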

 

I'm a little hazy on the details, so I apologize for the vague wording, but I hope you get the gist of what I'm saying.




#5169185 MSAA in Deferred shading

Posted by Seabolt on 25 July 2014 - 03:09 PM

You piqued my interest and I found this article.

 

It sounds like in D3D 10.1 you have the ability to access the AA'ed samples, so you can reconstruct your values based on knowing whether or not you're on an edge within your geometry. From a cursory look it seems to work when reconstructing depth samples, but I don't know how well it would work with the other parameters of a deferred shading pipeline, like normals, since you would have an averaged sample; still, they seem to have gotten it to work. I'd love to know how that works if someone smarter than me understands it.




#5117742 Android NDK touch coordinates

Posted by Seabolt on 17 December 2013 - 10:32 PM

Okay, after a fair bit of searching: the issue is that the screen was using density-independent pixels (dp), which are a relative coordinate. If you add this line to your manifest:

	<supports-screens android:anyDensity="true" />

Then it will no longer apply the DPI conversion and your touch coordinates will come in as screen pixels.
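
If changing the manifest isn't an option, the standard dp-to-pixel relationship (px = dp * densityDpi / 160) can be applied by hand; a minimal sketch, assuming you can query the density from your platform layer (the NDK exposes it through AConfiguration_getDensity, for example):

	// Manual dp -> pixel conversion. 160 dpi is the Android baseline density,
	// so a density of 320 means each dp covers two physical pixels.
	float DpToPixels(float dp, int densityDpi)
	{
	    return dp * (static_cast<float>(densityDpi) / 160.0f);
	}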




#5100249 Shaders Failed to link on new computer

Posted by Seabolt on 10 October 2013 - 12:17 PM

I think I solved it: for some reason glShaderSource wasn't accepting the '0' length for the shader code. According to the spec, that should mean the string is treated as null-terminated, with the length of the string used as the code length. But if I just use a simple strlen on the code and pass that size, it works... That's a bit worrying.
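
For reference, here are both call patterns side by side (assuming the usual GL loader header is already included). Per the spec, a NULL length array is what means "null-terminated"; a length entry of 0 actually means a zero-length string, which would explain the failure.

	#include <cstring>

	void UploadShaderSource(GLuint shader, const char* source)
	{
	    // Variant 1: pass NULL for the lengths so GL treats the string as
	    // null-terminated.
	    glShaderSource(shader, 1, &source, NULL);

	    // Variant 2 (the workaround described above): pass the length explicitly.
	    GLint length = static_cast<GLint>(std::strlen(source));
	    glShaderSource(shader, 1, &source, &length);
	}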

Now I get to figure out why nothing is drawing anymore, and why there are no errors because of it! Yippee!




#5099610 Shadow Blurring

Posted by Seabolt on 08 October 2013 - 10:19 AM

Also, if you want to use hardware filtering, you can take a look at Variance Shadow Maps, or Exponential Shadow Maps. They have their own quirks though.




#5098796 Problems after changing from Debug to Release

Posted by Seabolt on 04 October 2013 - 12:23 PM

Without knowing the details, 80% of the time release bugs happen due to uninitialized memory. Also, since it's in release, all the debug safety nets are disabled. Find your DirectX control panel, make sure you're getting all of its output, and see if you're failing anywhere or hitting any warnings that DX may be saving you from.
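
As a trivial made-up example of the uninitialized-memory kind of bug:

	// Classic release-only bug: the member is never initialized. Debug runtimes
	// typically fill fresh memory with a fixed pattern, so the behavior is at
	// least consistent while you're debugging; in release the member holds
	// whatever garbage happened to be there.
	struct Sprite
	{
	    float rotation;                       // never set below
	    Sprite() { /* forgot: rotation = 0.0f; */ }
	};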




#5098089 Improving Graphics Scene

Posted by Seabolt on 01 October 2013 - 10:12 AM

Realism is predicated on a lot of different things.

Try buying and reading this book; it will expose you to many different techniques to fake or increase the realism in your games. It's a bit old, but still extremely relevant. Once you're done with that, this book will help take it to the next level, showing better ways to keep your rendering physically plausible.

Now you may be thinking: that's a lot of work. It really is. If you're just looking for some marginal increases, take a look at SSAO post-processing, and look up the various BRDFs you could add to your lighting.
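
Just to show the shape of what adding a BRDF means, here's a minimal sketch of one of the simplest specular terms (Blinn-Phong), written as plain C++ with tiny stand-in vector types rather than shader code:

	#include <algorithm>
	#include <cmath>

	struct Vec3 { float x, y, z; };
	static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
	static Vec3  normalize(Vec3 v)
	{
	    float len = std::sqrt(dot(v, v));
	    return { v.x / len, v.y / len, v.z / len };
	}

	// Specular intensity for a surface normal, light direction, and view
	// direction (all normalized, pointing away from the surface).
	float BlinnPhongSpecular(Vec3 n, Vec3 l, Vec3 v, float shininess)
	{
	    Vec3  h     = normalize({ l.x + v.x, l.y + v.y, l.z + v.z });
	    float nDotH = std::max(dot(n, h), 0.0f);
	    return std::pow(nDotH, shininess);
	}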




#5097253 [XNA] Sprite in 3D World - problem with camera movement

Posted by Seabolt on 27 September 2013 - 10:03 AM

Hey! It's no problem; I was making a couple of assumptions about your points that I should have asked about.
Also, you're right, applying the scale for the corners would be screen aligned, so that's my mistake.

I'm not seeing anything too far off in your shader. You're not applying a world transform (unless it's already in your view matrix), so the object is in model space, but that shouldn't be an issue. I would check your view matrix and make sure there's nothing wrong there.

Sorry that I'm not more help.




#5097054 Creating a Shader class

Posted by Seabolt on 26 September 2013 - 12:48 PM

The way I do it in my engine is that I add metadata for my shaders, describing the size of the constant, its register, its name, whatever information I may need. Then in my game I have a SetUniform call that takes a name or an enum that corresponds to known constants, a void*, and the size of the uniform. Then I map the name/id to a global shader cache and apply it from there. It takes a bit of setup to work, but that's all handled by tools now. And I can create custom uniforms without having to cram them into some sort of pre-defined register index.
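
A very rough sketch of that setup, with all the names made up purely for illustration:

	#include <cstring>
	#include <string>
	#include <unordered_map>
	#include <vector>

	// Tool-generated metadata for one shader constant.
	struct UniformInfo
	{
	    int    registerIndex;   // where the constant lives in the shader
	    size_t size;            // size in bytes
	};

	class ShaderCache
	{
	public:
	    void SetUniform(const std::string& name, const void* data, size_t size)
	    {
	        auto it = m_uniforms.find(name);
	        if (it == m_uniforms.end() || size > it->second.size)
	            return;   // unknown constant or bad size

	        // Stage the bytes; a real renderer would upload them to the GPU at
	        // the matching register before drawing.
	        std::vector<unsigned char>& staging = m_values[name];
	        staging.resize(size);
	        std::memcpy(staging.data(), data, size);
	    }

	private:
	    std::unordered_map<std::string, UniformInfo>                m_uniforms;
	    std::unordered_map<std::string, std::vector<unsigned char>> m_values;
	};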




#5097049 Transparency and the Z buffer

Posted by Seabolt on 26 September 2013 - 12:30 PM

Overdraw can be very expensive, but not because it's just another draw call.

Think of it this way: a 1080p frame is 1920 × 1080, which is over two million pixels, so the most efficient you can be is exactly one pixel shader invocation per pixel. Overdraw happens when you have to shade the same pixel more than once, and alpha blending requires this.

So the short answer is that it depends on how expensive your pixel shader is.

Alpha blending can be expensive, texture fetches can be expensive; a whole mess of things can make overdraw expensive.
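
To put rough, purely illustrative numbers on it:

	#include <cstdio>

	// Back-of-the-envelope overdraw cost: 1920 x 1080 = 2,073,600 pixels, so
	// shading every pixel ~3 times means roughly 6.2 million pixel shader
	// invocations instead of ~2 million in the ideal one-shade-per-pixel case.
	int main()
	{
	    const long long pixels          = 1920LL * 1080LL;   // 2,073,600
	    const double    averageOverdraw = 3.0;                // assumed factor
	    std::printf("%.0f pixel shader invocations\n", pixels * averageOverdraw);
	    return 0;
	}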




#5097040 Contents of the board not moving down after a scoring move

Posted by Seabolt on 26 September 2013 - 11:52 AM

Hey, don't take this the wrong way, but figuring out this bug yourself would be a big help for you. It sounds like something's position is not being updated when you clear a row. I would step through the code after you clear a row and make sure that the code is doing what it should. If it is, make sure that when you are drawing the tiles, their position/texture and whatnot are the same as your board's.




#5097032 Transparency and the Z buffer

Posted by Seabolt on 26 September 2013 - 11:08 AM

Alpha blending is dependent on the color already in the color buffer. So if you draw the lamp stand first, with z buffer write/test on, and then draw the lamp shade next with z buffer write/test on, the lamp shade will render fine, and the lamp shade's pixels will replace the lamp stand's pixels in the z buffer. The lamp stand's pixels won't have been rejected, because they had already passed the z test.

Now if you reverse that logic, then the lamp stand would not draw anywhere the lamp shade and lamp stand overlap, because the lamp shade pixels are in the z buffer, and the lamp stand pixels are behind them, thus failing the z test.

Z testing happens before the pixel shader is called. If the pixel shader is called, then the pixel will (generally) be output to the color buffer. The GPU will not reject those pixels just because you've drawn something nearer afterwards; it will just draw the pixel again (aka overdraw).
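
The thread doesn't say which API is in use, but in Direct3D 9 terms the usual ordering looks something like this, following the common convention of turning z writes off for the blended pass (the helper functions are hypothetical stand-ins for your actual mesh submission):

	#include <d3d9.h>

	// Hypothetical helpers standing in for real mesh submission code.
	void DrawOpaqueMeshes(IDirect3DDevice9* device);
	void DrawTransparentMeshesBackToFront(IDirect3DDevice9* device);

	// Opaque first with z test/write on, then blended geometry back to front
	// with z test still on but z writes off, so transparent surfaces are
	// occluded by opaque ones without blocking each other.
	void DrawScene(IDirect3DDevice9* device)
	{
	    device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
	    device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
	    device->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
	    DrawOpaqueMeshes(device);

	    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
	    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
	    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
	    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
	    DrawTransparentMeshesBackToFront(device);
	}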




#5097026 Transparency and the Z buffer

Posted by Seabolt on 26 September 2013 - 10:18 AM

So the z-buffer doesn't care whether you have transparent geometry or not: if a pixel is rasterized, its depth will be written to the z buffer (assuming the z buffer is enabled and such). Now you can avoid drawing those pixels entirely using alpha testing. Alpha testing allows you to set a minimum alpha threshold, and any pixels with alpha below that threshold will not be rasterized. This will cause a cutout edge where the alpha drops off, but it will allow you to use depth testing.
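
In Direct3D 9 terms (just one concrete example; other APIs do the same thing with a clip/discard in the pixel shader), fixed-function alpha testing is only a few render states:

	#include <d3d9.h>

	// Any pixel whose alpha fails the comparison against the reference value is
	// rejected before it is written to the color or depth buffers, so depth
	// testing still behaves normally for the pixels that survive.
	void EnableAlphaTest(IDirect3DDevice9* device)
	{
	    device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
	    device->SetRenderState(D3DRS_ALPHAREF, 0x80);                   // threshold
	    device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);   // keep >= threshold
	}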




#5096736 DirectX 9 alternative way to draw image

Posted by Seabolt on 25 September 2013 - 12:20 PM

Hey, I'm not seeing any of the images you mentioned. Generally speaking though, if you increase the size of the rect, the image will just stretch across the new surface, since the tex coords haven't changed.
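
A tiny illustration (hypothetical vertex layout): the rect can change size freely while the UVs stay pinned at 0..1, so the image stretches to fill whatever rectangle you give it. If you wanted it to tile instead, you'd scale the UVs, not the rect.

	struct QuadVertex { float x, y; float u, v; };

	// Build a screen-space quad of any size; the UVs always span the full image.
	void BuildQuad(float left, float top, float width, float height,
	               QuadVertex out[4])
	{
	    out[0] = { left,         top,          0.0f, 0.0f };
	    out[1] = { left + width, top,          1.0f, 0.0f };
	    out[2] = { left,         top + height, 0.0f, 1.0f };
	    out[3] = { left + width, top + height, 1.0f, 1.0f };
	}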





