

Seabolt

Member Since 26 May 2010

#5218656 Tips for reading mathematical formulae?

Posted by Seabolt on 23 March 2015 - 06:17 PM

Holy hell... that's really helpful. I feel dumb for not thinking to just search for this.




#5218649 Tips for reading mathematical formulae?

Posted by Seabolt on 23 March 2015 - 05:54 PM

Hey guys, I'm a graphics programmer by trade who has been able to stumble his way through the math necessary to get the job done. But I want to take my understanding to the next level.

Currently I'm reading Introduction to Linear Algebra, 4th Edition by Gilbert Strang. So far it has been really awesome to learn these other, less concrete uses of vectors and matrices! But a huge stumbling block for me is understanding a lot of the notation used, and searching for terms based on "that R-looking thing with the extra line" isn't very helpful at times.
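For what it's worth, the "R with the extra line" is almost certainly blackboard-bold R, the standard symbol for the set of real numbers; in LaTeX notation (with the amssymb package), for example:

    \mathbb{R}                     % the real numbers (the "R with the extra line")
    \mathbb{R}^n                   % the space of n-dimensional real vectors
    v \in \mathbb{R}^3             % "v is a vector with three real components"
    A \in \mathbb{R}^{m \times n}  % "A is an m-by-n real matrix"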

I've not really taken Calculus or Linear Algebra at a high level; I just did some quick "classes" (about a month long) in college. Do you guys have any tips for understanding the notation better?




#5216136 Forward+ vs Deferred rendering

Posted by Seabolt on 12 March 2015 - 03:03 PM

Sometimes rendering the scene twice can be too expensive.




#5216127 Forward+ vs Deferred rendering

Posted by Seabolt on 12 March 2015 - 02:17 PM

Hey, it's been a little while since I've looked at an implementation, but IIRC it goes like this:

1. Generate a G-buffer containing only normals and depth.

2. Generate an irradiance buffer for all the lights.

3. Forward render your scene, using the irradiance buffer as an input to your lighting.

Deferred shading, by contrast, has you write out all your material/albedo/depth/etc. parameters and then render the lights to shade them afterwards. I hope that was clear.
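
Something like this, as a rough sketch of that pass order (every type and function name here is a hypothetical stand-in for whatever your engine/API actually provides):

    // Rough sketch only; all types and functions are placeholders.
    struct Scene {};
    struct LightList {};
    struct RenderTarget {};

    RenderTarget RenderDepthNormalPass(const Scene&) { return {}; }                       // normals + depth only
    RenderTarget RenderLightingPass(const LightList&, const RenderTarget&) { return {}; } // accumulate irradiance
    void         RenderForwardPass(const Scene&, const RenderTarget&) {}                  // forward shading pass

    void RenderFrame(const Scene& scene, const LightList& lights)
    {
        // 1. Thin G-buffer: just normals and depth.
        RenderTarget normalsDepth = RenderDepthNormalPass(scene);

        // 2. Accumulate per-light irradiance using that G-buffer.
        RenderTarget irradiance = RenderLightingPass(lights, normalsDepth);

        // 3. Forward render the scene, reading the irradiance buffer in the
        //    material shaders.
        RenderForwardPass(scene, irradiance);
    }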




#5206283 Correct UVs after Resolution change

Posted by Seabolt on 23 January 2015 - 05:41 PM

Well, your UVs should be independent of the actual resolution of the texture; they should be values clamped between 0 and 1, where 0 is one edge of the image and 1 is the opposite edge.
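
For example, a minimal sketch (hypothetical names) of mapping pixel coordinates to resolution-independent UVs:

    // UVs stay in [0, 1] regardless of the texture's pixel dimensions,
    // so the same coordinates keep working after a resolution change.
    struct UV { float u, v; };

    UV PixelToUV(int px, int py, int texWidth, int texHeight)
    {
        UV uv;
        uv.u = static_cast<float>(px) / static_cast<float>(texWidth);
        uv.v = static_cast<float>(py) / static_cast<float>(texHeight);
        return uv; // 0 is one edge of the image, 1 is the opposite edge
    }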

 

So the question is: how are you generating your texture? If you have access to render targets, you could just render a quad that is the correct size of the image you want into the correct spot in the source image, blit every pixel into a target texture, and your UVs shouldn't have to change.

 

This is me shot-gunning ideas, though; I need to know more about your problem.




#5203777 Looking for a real-time, physically-based rendering library

Posted by Seabolt on 12 January 2015 - 03:54 PM

You could license something like UE; they have some out-of-the-box systems in place (IIRC). But honestly, PBR is still relatively new for widespread adoption, and it's something that requires a lot of tuning and engineering to get working right.




#5169188 [CSM] Cascaded Shadow Maps split selection

Posted by Seabolt on 25 July 2014 - 03:35 PM

It's been a little while since I've done CSM, but I used the texture-atlas approach, and to avoid the branch I would pass the tile size in as a uniform and divide the tex coord by the tile width and height to determine which tile to sample from. Then I would have a uniform defining the depth range between two cascades to lerp across: sample from both maps and lerp based on the distance to the split.
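
Roughly along these lines, as a CPU-side sketch of the idea (hypothetical names; it assumes a single-row atlas with one tile per cascade):

    #include <algorithm>

    struct UV { float u, v; };

    // Map a [0, 1] UV inside a cascade to that cascade's tile in the atlas.
    UV CascadeToAtlasUV(UV cascadeUV, int cascadeIndex, int numCascades)
    {
        const float tileWidth = 1.0f / static_cast<float>(numCascades);
        UV atlasUV;
        atlasUV.u = (static_cast<float>(cascadeIndex) + cascadeUV.u) * tileWidth;
        atlasUV.v = cascadeUV.v;
        return atlasUV;
    }

    // Blend factor for lerping between two cascades: 0 well before the split,
    // 1 at the split, ramping over blendRange units of view depth.
    float CascadeBlend(float viewDepth, float splitDepth, float blendRange)
    {
        float t = (viewDepth - (splitDepth - blendRange)) / blendRange;
        return std::min(std::max(t, 0.0f), 1.0f);
    }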

 

I'm a little hazy on the details, so I apologize for the vague wording, but I hope you get the gist of what I'm saying.




#5169185 MSAA in Deferred shading

Posted by Seabolt on 25 July 2014 - 03:09 PM

You piqued my interest and I found this article

 

It sounds like with 10.1 you have the ability to access the individual AA samples, so you can reconstruct your values based on knowing whether or not you're on an edge within your geometry. From a cursory look it seems to work when reconstructing depth samples, but I don't know how well it would work with the other parameters of a deferred-shading pipeline, like normals, since you would have an averaged sample; they seem to have gotten it to work, though. I'd love to know how that works if someone smarter than me understands it.




#5117742 Android NDK touch coordinates

Posted by Seabolt on 17 December 2013 - 10:32 PM

Okay, after a fair bit of searching: the issue is that the screen was using density-independent pixels (dp), which are a relative coordinate. If you add this line to your manifest:

	<supports-screens android:anyDensity="true" />

Then it will no longer apply the DPI conversion, and your touch coordinates will come in as screen pixels.




#5100249 Shaders Failed to link on new computer

Posted by Seabolt on 10 October 2013 - 12:17 PM

I think I solved it: for some reason glShaderSource wasn't accepting a '0' length for the shader code. I'd assumed that meant the string would be treated as null-terminated and that GL would use the full string length, but per the spec that only applies when the length pointer is NULL (or a length is negative); a length of 0 is just an empty string. If I run a simple strlen on the code and pass that size instead, it works. That one was a bit worrying until I reread the spec.
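
For reference, a minimal sketch of both calling conventions (assuming the shader came from glCreateShader, the source is a null-terminated GLSL string, and you include whatever loader header your project already uses):

    #include <cstring>
    #include <GL/gl.h>   // or your loader's header (GLEW, glad, etc.)

    // The two usual ways to hand GLSL source to the driver.
    void UploadShaderSource(GLuint shader, const char* source)
    {
        // Option 1: NULL length array -> GL treats each string as null-terminated.
        glShaderSource(shader, 1, &source, NULL);

        // Option 2: pass the length explicitly (what ended up working here).
        GLint length = static_cast<GLint>(std::strlen(source));
        glShaderSource(shader, 1, &source, &length);
    }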

Now I get to figure out why nothing is drawing anymore, and why there are no errors to explain it! Yippee!




#5099610 Shadow Blurring

Posted by Seabolt on 08 October 2013 - 10:19 AM

Also, if you want to use hardware filtering, you can take a look at Variance Shadow Maps, or Exponential Shadow Maps. They have their own quirks though.




#5098796 Problems after changing from Debug to Release

Posted by Seabolt on 04 October 2013 - 12:23 PM

Without knowing the details: 80% of the time, release-only bugs happen due to uninitialized memory. Also, since it's a release build, all the debug safety nets are disabled. Find your DirectX control panel, make sure you're getting all of its output, and see whether you're failing anywhere or getting any warnings that DX may be saving you from.
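
As a purely illustrative (hypothetical) example of the uninitialized-memory kind of bug:

    #include <cstdio>

    // The classic "works in debug, breaks in release" mistake:
    // a member that is never initialized.
    struct Player
    {
        float health;   // never set...
        Player() {}     // ...should have been Player() : health(100.0f) {}
    };

    int main()
    {
        Player p;
        // Debug builds (e.g. MSVC) fill fresh memory with a recognizable pattern,
        // so the mistake often behaves consistently there; in release the value is
        // whatever happened to be left in memory.
        std::printf("health = %f\n", p.health);
        return 0;
    }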




#5098089 Improving Graphics Scene

Posted by Seabolt on 01 October 2013 - 10:12 AM

Realism is predicated on a lot of different things.

Try buying and reading this book; it will expose you to many different techniques to fake or increase the realism in your games. It's a bit old, but still extremely relevant. Once you're done with that, this book will help take things to the next level, showing better ways to keep your rendering physically plausible.

Now you may be thinking: that's a lot of work. It really is. If you're just looking for some marginal improvements, take a look at SSAO post-processing, and look up the various BRDFs you could add to your lighting.




#5097253 [XNA] Sprite in 3D World - problem with camera movement

Posted by Seabolt on 27 September 2013 - 10:03 AM

Hey! It's no problem; I was making a couple of assumptions about your points that I should have asked about.
Also, you're right: applying the scale for the corners would be screen-aligned, so that's my mistake.

I'm not seeing anything too far off in your shader. You're not applying a world transform (unless it's already in your view matrix), so the object is in model space, but that shouldn't be an issue. I would check your view matrix and make sure there's nothing wrong there.

Sorry that I'm not more help.




#5097054 Creating a Shader class

Posted by Seabolt on 26 September 2013 - 12:48 PM

The way I do it in my engine is that I add metadata to my shaders describing the size of each constant, its register, its name, and whatever other information I may need. Then in my game I have a SetUniform call that takes a name or an enum corresponding to a known constant, a void*, and the size of the uniform. I map the name/ID through a global shader cache and apply it from there. It takes a bit of setup to get working, but that's all handled by tools now, and I can create custom uniforms without having to cram them into some pre-defined register index.
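
A minimal sketch of what that CPU-side interface might look like (all names here are hypothetical; the metadata would come from your offline shader tools, and the actual upload to the graphics API is left out):

    #include <cstring>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct UniformInfo
    {
        int    registerIndex; // constant register / binding slot
        size_t size;          // size of the constant, in bytes
    };

    class ShaderCache
    {
    public:
        // Register a uniform described by the offline metadata.
        void AddUniform(const std::string& name, const UniformInfo& info)
        {
            m_uniforms[name] = info;
        }

        // Stage raw data for a named uniform; the backend would copy it into the
        // right register/constant buffer when the shader is bound.
        bool SetUniform(const std::string& name, const void* data, size_t size)
        {
            auto it = m_uniforms.find(name);
            if (it == m_uniforms.end() || size > it->second.size)
                return false; // unknown uniform or size mismatch

            std::vector<unsigned char>& staged = m_staged[it->second.registerIndex];
            staged.resize(size);
            std::memcpy(staged.data(), data, size);
            return true;
        }

    private:
        std::unordered_map<std::string, UniformInfo> m_uniforms;
        std::unordered_map<int, std::vector<unsigned char>> m_staged;
    };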





