
# Seabolt

Member Since 26 May 2010
Last Active 18 Feb 2016, 06:56 PM

### #5218656 Tips for reading mathematical formulae?

Posted by on 23 March 2015 - 06:17 PM

Holy hell... that's really helpful. I feel dumb for not thinking to just search for this.

### #5218649 Tips for reading mathematical formulae?

Posted by on 23 March 2015 - 05:54 PM

Hey guys, I'm a graphics programmer by trade who has been able to stumble his way through the math necessary to get the job done. But I want to take my understanding to the next level.

Currently I'm reading Introduction to Linear Algebra, 4th Edition by Gilbert Strang. So far it has been really awesome to learn these other, less concrete uses of vectors and matrices! But a huge stumbling block for me is understanding a lot of the notation used, and searching for terms based on "that R-looking thing with the extra line" isn't very helpful at times.

I've not really taken Calculus or Linear Algebra at a high level; I just did some quick "classes" (about a month long) in college. Do you guys have any tips for understanding the notation better?

### #5216136 Forward+ vs Deferred rendering

Posted by on 12 March 2015 - 03:03 PM

Sometimes rendering the scene twice can be too expensive.

### #5216127 Forward+ vs Deferred rendering

Posted by on 12 March 2015 - 02:17 PM

Hey, it's been a little while since I've looked at an implementation, but IIRC it goes like this:

Generate a G-buffer containing only normals and depth.

Generate an irradiance buffer from that for all the lights.

Render the scene geometry again, sampling the irradiance buffer to apply the lighting.

Deferred shading, by contrast, has you generate all your material/albedo/depth/etc. parameters up front and then render the lights to shade them. I hope that was clear.

### #5206283 Correct UVs after Resolution change

Posted by on 23 January 2015 - 05:41 PM

Well, your UVs should be independent of the actual resolution of the texture. They should be values in the range 0 to 1, where 0 is one side of the image and 1 is the other side.

So the question is: how are you generating your texture? If you have access to render targets, you could just render a quad that is the correct size of the image you want into the source image at the correct spot, blit every pixel into a target texture, and your UVs shouldn't have to change.

This is me shot-gunning ideas, though; I'd need to know more about your problem.

### #5203777 Looking for a real-time, physically-based rendering library

Posted by on 12 January 2015 - 03:54 PM

You could license something like UE; they have some out-of-the-box systems in place (IIRC). But honestly, PBR is still relatively new as far as widespread adoption goes, and it's something that requires a lot of tuning and engineering to get working right.

Posted by on 25 July 2014 - 03:35 PM

It's been a little while since I've done CSM, but I used the texture-atlas approach. To avoid the branch, I would pass the tile size in as a uniform and divide the tex coord by the tile width and height to determine which tile to sample from. I also had a uniform giving the depth range between two cascades over which I wanted to blend: sample from both maps and lerp based on the distance to the split.

I'm a little hazy on the details, so I apologize with the vague wording, but I hope you get the gist of what I'm saying.

Posted by on 25 July 2014 - 03:09 PM

It sounds like with 10.1 you have the ability to access the AA'ed samples, so you can reconstruct your values based on knowing whether or not you're on an edge within your geometry. From a cursory look it seems to work when reconstructing depth samples, but I don't know how well it would work with the other parameters of a deferred shading pipeline, like normals, since you would have an averaged sample; but they seem to have gotten it to work. I'd love to know how it works if someone smarter than me understands it.

### #5117742 Android NDK touch coordinates

Posted by on 17 December 2013 - 10:32 PM

Okay, after a fair bit of searching: the issue is that the screen was using density-independent pixels (dp), which are a relative coordinate. If you add this line to your manifest:

```xml
<supports-screens android:anyDensity="true" />
```

then it will no longer apply the DPI conversion, and your touch coordinates will come in as screen pixels.

Posted by on 10 October 2013 - 12:17 PM

I think I solved it: for some reason glShaderSource wasn't accepting a length of 0 for the shader code. I had read the spec as saying that meant the string would be treated as null-terminated, and that it would use the string's own length as the code length. But if I just do a simple strlen on the code and pass that size, it works... That's a bit worrying.

Now I get to figure out why nothing is drawing anymore, and why there are no errors about it! Yippee!

Posted by on 08 October 2013 - 10:19 AM

Also, if you want to use hardware filtering, you can take a look at Variance Shadow Maps, or Exponential Shadow Maps. They have their own quirks though.

### #5098796 Problems after changing from Debug to Release

Posted by on 04 October 2013 - 12:23 PM

Without knowing the details: 80% of the time, release bugs come down to uninitialized memory. Also, since it's a release build, all the debug safety nets are disabled. Find your DirectX control panel, make sure you're getting all of its output, and see if you're failing anywhere or if there's a warning DX may be saving you from.

### #5098089 Improving Graphics Scene

Posted by on 01 October 2013 - 10:12 AM

Realism is predicated on a lot of different things.

Try buying and reading this book, it will expose you to many different techniques to fake or increase the realism in your games. It's a bit old, but still extremely relevant. Once you're done with that, this book will help take it to the next level, showing better ways to keep rendering physically plausible.

Now you may be thinking: that's a lot of work. It really is. If you're just looking for some marginal gains, take a look at SSAO post-processing, and look up the various BRDFs you could add to your lighting.

### #5097253 [XNA] Sprite in 3D World - problem with camera movement

Posted by on 27 September 2013 - 10:03 AM

Also, you're right: applying the scale to the corners would be screen-aligned, so that's my mistake.

I'm not seeing anything too far off in your shader. You're not applying a world transform (unless it's already baked into your view matrix), so the object is in model space, but that shouldn't be an issue. I would check your view matrix and make sure there's nothing wrong there.

Sorry that I'm not more help.