D3D10 thoughts sputter

Cypher19


So, it turns out I basically won't be able to get my hands on Vista as a D3D10 dev platform, which means I'm probably going to just take a break from that stuff.

There are two graphics problems I've been chewing on and trying to come up with solutions for, though: area lighting and memory virtualization (or rather, emulating it).

For a linear area light, I've got the basic idea down, but my implementation has been having problems with the diffuse light integration. The idea is that since N and L are normalized, the N dot L term is just the cosine of the angle between them, so the diffuse contribution becomes an integral of cos(theta) between the two endpoint directions L1 and L2. The integral of cos(theta) from theta1 to theta2 is sin(theta2) - sin(theta1), so all that's needed is a conversion from cosines to sines and a subtraction, which is, from a math POV, fairly trivial (from a shader POV? Nooot as much). The conversion just rearranges the identity cos^2 + sin^2 = 1 into sin = sqrt(1 - cos^2).

I've got that working, but the catch is that the two sine values sometimes have to be added rather than subtracted: sqrt always returns a positive value, so the sign of each sine is lost. The integration itself is fine; I've punched the two formulae into my TI-83 to check the final lighting result, and as long as the add/sub is chosen correctly it works fantastically. The problem is that the add/sub ISN'T being chosen correctly right now, so that's what I'm looking into. (For those curious, the method I'm using is to reflect one of the light vectors about the normal to get R1, then compare L1 dot L2 against R1 dot L2: if L2 is closer to L1 than to R1, the endpoints are on the same side of the normal and the sines get subtracted; otherwise they straddle the normal and the sines get added.)
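To make that concrete, here's a minimal HLSL sketch of the scheme above. The function name is made up, and L1 and L2 are assumed to be normalized vectors from the shaded point to the two endpoints of the light; the same-side test is the reflection trick just described (which, as noted, isn't behaving correctly in my actual implementation yet):

float LinearLightDiffuse(float3 N, float3 L1, float3 L2)
{
    float cos1 = dot(N, L1);
    float cos2 = dot(N, L2);

    // sin = sqrt(1 - cos^2); sqrt always returns a positive value,
    // which is exactly where the add/sub ambiguity comes from
    float sin1 = sqrt(saturate(1.0 - cos1 * cos1));
    float sin2 = sqrt(saturate(1.0 - cos2 * cos2));

    // Reflect L1 about N. If L2 is closer to L1 than to R1, both
    // endpoints lie on the same side of the normal.
    float3 R1 = 2.0 * cos1 * N - L1;
    bool sameSide = dot(L1, L2) >= dot(R1, L2);

    // Integral of cos from theta1 to theta2 = sin(theta2) - sin(theta1);
    // same side: subtract the sines, straddling the normal: add them
    return sameSide ? abs(sin2 - sin1) : (sin1 + sin2);
}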

Memory virtualization is the other thing I want to talk about, primarily with respect to John Carmack's memory virtualization antics. I want to finalize my thoughts on this, so I'll probably make an edit later (a few hours from now, during my lunch break). Basically, the idea I want to experiment with is to take, say, a terrain system and figure out which parts of a really, really big texture (i.e. larger than 4k x 4k) are needed to draw a given frame, using something like a memory management algorithm JC proposed several years ago in a .plan update, combined with some other memory address manipulation tricks.
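One way to get that per-frame "which parts are needed" information is a feedback pass: render the scene once with a shader that, instead of sampling the big texture, writes out which page (tile) of it and which mip level each pixel would touch, then read that back on the CPU and stream those pages in. Here's a rough HLSL sketch of what such a pass could look like; the texture and page sizes and the packing scheme are made up for illustration, and this is just one possible approach, not necessarily what the .plan described:

static const float VIRT_SIZE = 16384.0;  // hypothetical 16k x 16k virtual texture
static const float PAGE_SIZE = 128.0;    // hypothetical 128x128-texel pages

float4 FeedbackPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Texel-space position and its screen-space derivatives, used to
    // pick a mip level the same way the texture hardware would
    float2 texels = uv * VIRT_SIZE;
    float2 dx = ddx(texels);
    float2 dy = ddy(texels);
    float mip = floor(max(0.0, 0.5 * log2(max(dot(dx, dx), dot(dy, dy)))));

    // Which page the pixel falls in at that mip level
    float2 page = floor(texels / (PAGE_SIZE * exp2(mip)));

    // Pack page x/y and mip into the render target; the CPU reads this
    // back and schedules those pages for upload
    return float4(page / 255.0, mip / 255.0, 1.0);
}

Reading back a full-resolution feedback buffer every frame would hurt, so in practice this would probably be rendered to a small render target and read back with some latency.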
1 Comment

Bit of a shame about the lack of Vista [sad]

I was reading some of that Carmack super-texture whatsit stuff on B3D yesterday... sounds interesting, but I don't think I'd choose to use it myself. Not yet, anyway.

The thought of its connection with Vista/D3D10's virtualization of the GPU as a shared resource crossed my mind, though. Performance would probably suck in a big way, but I wonder if such OS-level technology could help those sorts of algorithms...

Jack
