Since the docs were released Tuesday, I've been looking them up and down, asking around for clarification on certain things, and so on. Plain and simple, I think D3D10 is absolutely fantastic, and I ask for only one more bit of functionality, which I've mentioned before: source pixel access in the pixel shader so that we can basically do custom blending, as opposed to using the still-fixed-function OMSetBlendState. Aside from that, I think it's great. It's extremely flexible, slim and powerful. I know that MS will be releasing subversions every now and then after the final release, but I think that even without those, devs would still be finding new things to do with D3D10 five years from now.
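For reference, here's roughly what the fixed-function path I'm talking about looks like on the API side - a minimal sketch of standard alpha blending based on my reading of the headers, so treat the exact values as illustrative:

```cpp
// Sketch: standard alpha blending via the fixed-function output merger.
// This is the path that pixel-shader custom blending would replace.
D3D10_BLEND_DESC desc = {};
desc.BlendEnable[0]           = TRUE;
desc.SrcBlend                 = D3D10_BLEND_SRC_ALPHA;
desc.DestBlend                = D3D10_BLEND_INV_SRC_ALPHA;
desc.BlendOp                  = D3D10_BLEND_OP_ADD;
desc.SrcBlendAlpha            = D3D10_BLEND_ONE;
desc.DestBlendAlpha           = D3D10_BLEND_ZERO;
desc.BlendOpAlpha             = D3D10_BLEND_OP_ADD;
desc.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

ID3D10BlendState* pBlendState = NULL;
pDevice->CreateBlendState(&desc, &pBlendState);

FLOAT blendFactor[4] = { 0, 0, 0, 0 };
pDevice->OMSetBlendState(pBlendState, blendFactor, 0xFFFFFFFF);
```

The point being: the blend function is still picked from a fixed menu of D3D10_BLEND_* enums, and the pixel shader never actually sees what's already in the render target.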
Anyways, after studying the docs and samples for the last couple of days I've got a pretty good handle on D3D10, but there are a couple of unadvertised features that really surprised me; namely, multiple viewports and scissors. Right now I'm getting mixed readings on those, as Redbeard (who is a tester for the Direct3D 10 team) says that only one viewport or scissor can be bound to a single render target, but nothing in the docs suggests otherwise. That's something I'll want to play with to figure out what's what.
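For what it's worth, the way I read the docs, you bind an array of viewports to the rasterizer stage and then pick one per primitive from the GS via SV_ViewportArrayIndex - which is exactly why the "per render target" question is murky. A sketch of the binding side:

```cpp
// Sketch: bind two viewports; the GS then routes each primitive to one of them.
D3D10_VIEWPORT vp[2];
vp[0].TopLeftX = 0;    vp[0].TopLeftY = 0;
vp[0].Width    = 640;  vp[0].Height   = 480;
vp[0].MinDepth = 0.0f; vp[0].MaxDepth = 1.0f;
vp[1] = vp[0];
vp[1].TopLeftX = 640;  // second viewport covering the right half
pDevice->RSSetViewports(2, vp);
```

On the shader side you'd add a `uint vpIndex : SV_ViewportArrayIndex;` field to the GS output struct; as far as I can tell the same index also selects the scissor rect from the RSSetScissorRects array, though that's part of what needs testing.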
Also, one other thing I'd like to see expanded in the docs (even though a good chunk of it is used in the sample shaders) are the SetRasterizerState/SetBlendState, etc. properties in FX/HLSL. I imagine one could guess each property's possible values, but it'd still be nice to have them in the docs.
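To illustrate what I mean, here's roughly what those state settings look like inside an FX10 technique - pieced together from the SDK samples, so take the exact value names as illustrative, and assume VS()/PS() are defined elsewhere in the effect:

```hlsl
// Sketch: declaring and applying render state inside an FX10 effect.
BlendState AlphaBlend
{
    BlendEnable[0] = TRUE;
    SrcBlend       = SRC_ALPHA;
    DestBlend      = INV_SRC_ALPHA;
};

RasterizerState NoCulling
{
    CullMode = None;
};

technique10 Render
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_4_0, VS()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_4_0, PS()));
        SetBlendState(AlphaBlend, float4(0, 0, 0, 0), 0xFFFFFFFF);
        SetRasterizerState(NoCulling);
    }
}
```

It's exactly those right-hand-side value names (SRC_ALPHA, None, etc.) that I'd like a proper table for.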
Since I'm going to be working on a not-as-high-end-as-my-current-computer laptop for the next 4 months while on my co-op work term, one thing I want to do in my spare time is a lot of D3D10 work using the REF rasterizer. I've got a couple of tech demos that I want to try out, and was wondering if anyone has any input on them (suggestions, changes, stuff like that).
-A demo doing really souped-up shadow mapping using D3D10 features such as a single-pass cubemap render and depth buffer lookup in a pixel shader. Also, if I have time, some dynamic scaling of the shadow map.
-A demo showcasing Bezier surfaces, hopefully with a virtually unlimited level of detail when needed. I know some of you will mention that hardware vendors only want us spitting out a max of 20 tri- er.. primitives in the GS, but that'll be part of the challenge behind the demo.
-A demo showing a game where all of the logic is calculated in the shaders (i.e. only inputs and time increments are sent in), possibly like the Geometry Wars clone I wanted to do. The guys on #graphicsdev kind of poo-poo'd the initial idea (I just said "a game" and didn't really specify much about it), but I think I might still give it a try, since I think it'll be a fun experiment. Plus, it could be a good demo showing off the variety of buffer accesses and the unlimited shader length in D3D10. If not that, maybe I'll give GPU physics a whirl. At the very least, something that has typically been reserved for the CPU only.
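For the shadow-mapping demo, the single-pass cubemap part would hinge on the GS writing SV_RenderTargetArrayIndex to route each triangle to one of the six faces of a cube texture bound as a 6-slice render target array, along the lines of the SDK's CubeMapGS sample. A sketch - g_mViewProjCM is an assumed array of six per-face view-projection matrices:

```hlsl
struct VSOut { float4 WorldPos : POSITION; };
struct GSOut
{
    float4 Pos     : SV_Position;
    uint   RTIndex : SV_RenderTargetArrayIndex; // selects the cube face
};

[maxvertexcount(18)]  // 6 faces * 3 verts
void CubeMapGS(triangle VSOut input[3], inout TriangleStream<GSOut> stream)
{
    for (int face = 0; face < 6; ++face)
    {
        GSOut output;
        output.RTIndex = face;
        for (int v = 0; v < 3; ++v)
        {
            output.Pos = mul(input[v].WorldPos, g_mViewProjCM[face]);
            stream.Append(output);
        }
        stream.RestartStrip();
    }
}
```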
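For the Bezier demo, one way to stay inside that ~20-primitive comfort zone would be to emit a fixed number of segments per patch and lean on multiple passes or instances for more detail. A curve-only sketch to show the idea (g_WorldViewProj is an assumed constant; a full surface patch would amplify far more):

```hlsl
struct VSOut { float3 Pos : POSITION; };
struct GSOut { float4 Pos : SV_Position; };

[maxvertexcount(20)]
void BezierGS(lineadj VSOut cp[4],          // 4 control points via adjacency
              inout LineStream<GSOut> stream)
{
    const int SEGS = 19;                    // 20 vertices = 19 segments
    for (int i = 0; i <= SEGS; ++i)
    {
        float t  = i / (float)SEGS;
        float it = 1.0f - t;
        // Cubic Bernstein basis
        float3 p = it*it*it     * cp[0].Pos
                 + 3.0f*it*it*t * cp[1].Pos
                 + 3.0f*it*t*t  * cp[2].Pos
                 + t*t*t        * cp[3].Pos;
        GSOut o;
        o.Pos = mul(float4(p, 1.0f), g_WorldViewProj);
        stream.Append(o);
    }
}
```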
You aren't the first (presumably not the last either) person to want this. However, I wonder if it's actually feasible with current IHV implementations. Even with D3D10, things like the core memory controllers and so on are probably going to be similar to D3D9 parts (reading/writing/blending is essentially the domain of the memory controller). Just wondering if, with the massively parallel and block architectures, it's actually possible to know the source pixel at the time of processing. I'm sure they could, but whether it'd be even remotely efficient is another question [oh]
You must have more spare time than me [headshake] Been reading bits-n-pieces for ages now and I still keep finding bits I'm sure weren't there yesterday [lol]
I'd essentially call that an optimization of current technology. Nothing wrong with that mind [smile]
Most of what I've heard about the GS, at least for early IHV implementations, seems to indicate it'll be more of a "discovery" feature - allowing the pipeline to be more autonomous and expressive, rather than a programmable tessellator. I wonder if you could dynamically alter the LOD based on distance though... seams would be a bugger to solve, but with 1-ring adjacency it must be possible.
I have a different suggestion [grin]
It's one I was hoping to look into, but knowing my luck I won't have the time [sad]. With the system-generated PrimitiveID value being accessible in the GS, I wonder if it's possible to offload a lot of the material system to the GPU. At the very least it could greatly simplify the need to bucket-sort by shader params and textures...
Use the value in the GS/PS along with integer instructions to look up into an attached buffer - the buffer is just a lookup/reference indicating what textures/values/properties should then be used for final rasterization.
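Something like this, maybe - a sketch assuming a per-primitive material-index buffer and a texture array with one slice per material (g_MaterialIds, g_Diffuse and g_Sampler are hypothetical names):

```hlsl
Buffer<uint>   g_MaterialIds; // one material index per primitive
Texture2DArray g_Diffuse;     // one texture slice per material
SamplerState   g_Sampler;

float4 PS(float4 pos    : SV_Position,
          float2 uv     : TEXCOORD0,
          uint   primID : SV_PrimitiveID) : SV_Target
{
    // Integer lookup: primitive ID -> material ID -> texture slice
    uint matId = g_MaterialIds.Load(primID);
    return g_Diffuse.Sample(g_Sampler, float3(uv, matId));
}
```

No bucket sorting needed if the whole scene's materials are addressable like that - though cache behaviour on a lookup that incoherent is anyone's guess.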
Jack