I don't understand how the information in the lower three textboxes of the ContentControl stays in sync with the current selection in the ListBox. The ContentControl is bound to an ObservableCollection, which doesn't have any "CurrentItem" functionality of its own. So what is "IsSynchronizedWithCurrentItem" doing to make this all work?
Posted by maya18222
on 15 September 2012 - 11:16 AM
I was hoping I could select the depth or colour target object as the current object, and then see each draw call being rendered onto the buffer as I click through the draw-call commands. However, if I do this, I don't see anything, just the clear colour for the colour target and the checkerboard pattern for the depth target.
But my advice is just to use an existing format like FBX, as it can be exported from a huge range of tools and is easy to extract data from. Once efficiency becomes a concern, you can write a simple converter that turns your FBX data into something more efficient for your particular engine, or even go as far as writing your own export script for your favourite 3D program.
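To make the "convert to something more efficient" step concrete, here is a minimal sketch of what such a converter's output stage might look like. The format, struct layout, and function names are all made up for illustration; the idea is just that once you've pulled positions/normals/UVs out of the FBX (e.g. via the FBX SDK), you dump them as a flat binary blob the engine can read straight into GPU-ready arrays:

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <vector>

// A hypothetical minimal vertex layout for the engine-specific format.
struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

// Write already-extracted mesh data into a flat blob:
// [vertexCount][indexCount][vertices][indices].
bool WriteMeshBlob(const char* path,
                   const std::vector<Vertex>& verts,
                   const std::vector<std::uint32_t>& indices)
{
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    std::uint32_t vc = static_cast<std::uint32_t>(verts.size());
    std::uint32_t ic = static_cast<std::uint32_t>(indices.size());
    out.write(reinterpret_cast<const char*>(&vc), sizeof vc);
    out.write(reinterpret_cast<const char*>(&ic), sizeof ic);
    out.write(reinterpret_cast<const char*>(verts.data()), vc * sizeof(Vertex));
    out.write(reinterpret_cast<const char*>(indices.data()), ic * sizeof(std::uint32_t));
    return static_cast<bool>(out);
}

// Matching loader: the runtime reads the blob back with no parsing at all.
bool ReadMeshBlob(const char* path,
                  std::vector<Vertex>& verts,
                  std::vector<std::uint32_t>& indices)
{
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;
    std::uint32_t vc = 0, ic = 0;
    in.read(reinterpret_cast<char*>(&vc), sizeof vc);
    in.read(reinterpret_cast<char*>(&ic), sizeof ic);
    verts.resize(vc);
    indices.resize(ic);
    in.read(reinterpret_cast<char*>(verts.data()), vc * sizeof(Vertex));
    in.read(reinterpret_cast<char*>(indices.data()), ic * sizeof(std::uint32_t));
    return static_cast<bool>(in);
}
```

The win over parsing FBX at runtime is that loading becomes two reads into pre-sized vectors, which is exactly the kind of thing you'd do once your content pipeline settles down.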
Posted by maya18222
on 30 September 2011 - 04:17 PM
First off, I'm not trying to reinvent the wheel; I'm just doing this as an exercise.
Below is my ThreadPool class, and first of all I was looking for some feedback on it. Secondly, you'll notice that queued tasks can only be dispatched to available threads in two places: when they are queued, and then in the wait call. How would I go about implementing it so that tasks get dispatched as soon as a thread is free? I'm thinking I'll need to have the ThreadPool running on a thread of its own.
How is fur/hair generally rendered with a ray tracer? Looking at Mental Ray, it just creates a number of base hairs that the real hairs interpolate between. But how would you do the ray-intersection tests?
I thought about splitting each strand into a number of segments, expanding each segment into a cylinder, and then doing a ray-cylinder test. This is obviously going to be very slow when you have millions of strands with multiple segments each. So would you then use an acceleration structure such as a kd-tree, where each hair is stored as, say, an AABB?
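The AABB idea above hinges on a cheap rejection test: you only pay for the exact ray-cylinder intersection if the ray actually enters the hair's (or kd-tree node's) bounding box. A sketch of the standard slab-based ray-vs-AABB test, with simple ad-hoc vector types for self-containment:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

// Slab test: clip the ray's t-interval against each axis-aligned slab
// pair; the ray hits the box iff the final interval is non-empty.
// Returns true if origin + t*dir (t >= 0) intersects the box.
bool RayHitsAABB(const Vec3& origin, const Vec3& dir, const AABB& box)
{
    float tmin = 0.0f, tmax = 1e30f;
    const float o[3]  = { origin.x,  origin.y,  origin.z  };
    const float d[3]  = { dir.x,     dir.y,     dir.z     };
    const float lo[3] = { box.min.x, box.min.y, box.min.z };
    const float hi[3] = { box.max.x, box.max.y, box.max.z };

    for (int axis = 0; axis < 3; ++axis) {
        if (std::fabs(d[axis]) < 1e-8f) {
            // Ray is parallel to this slab: miss unless the origin is inside it.
            if (o[axis] < lo[axis] || o[axis] > hi[axis]) return false;
        } else {
            float inv = 1.0f / d[axis];
            float t0 = (lo[axis] - o[axis]) * inv;
            float t1 = (hi[axis] - o[axis]) * inv;
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmin > tmax) return false;  // interval became empty
        }
    }
    return true;
}
```

In a kd-tree or BVH over millions of segments, this test (and its node-level equivalent) is what turns the brute-force "test every cylinder" cost into roughly logarithmic traversal per ray.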
I'm talking about particle effects that are created with editors offering all sorts of options, many, many configurations. I'm wondering if they just perform the update on the CPU, where all the options can be processed, and send the result down to a shader that draws it, or if they have some sort of configurable shader that does the whole lot.
Take the Unreal editor, for example: in its effect editor, you can stack up different processes for the emitter to perform.
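The CPU-side version of that "stack of processes" idea is straightforward to sketch: each editor option becomes a small module (a function over a particle), the emitter runs the whole stack every update, and only the resulting positions/colours get uploaded to a simple draw shader. The module names and data layout below are invented for illustration, not Unreal's actual API:

```cpp
#include <functional>
#include <vector>

struct Particle {
    float x = 0, y = 0, z = 0;
    float vx = 0, vy = 0, vz = 0;
    float age = 0;
};

// One stacked "process" = one function applied to each particle per step.
using Module = std::function<void(Particle&, float /*dt*/)>;

class Emitter {
public:
    void AddModule(Module m) { modules.push_back(std::move(m)); }
    void Spawn()             { particles.emplace_back(); }

    // Run every configured module over every live particle.
    void Update(float dt) {
        for (Particle& p : particles) {
            for (const Module& m : modules) m(p, dt);
            p.age += dt;
        }
    }

    std::vector<Particle> particles;   // what you'd upload for drawing

private:
    std::vector<Module> modules;
};

// Two example modules an editor might stack:
inline Module Gravity(float g) {
    return [g](Particle& p, float dt) { p.vy -= g * dt; };
}
inline Module Integrate() {
    return [](Particle& p, float dt) {
        p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt;
    };
}
```

The appeal of doing it this way is that arbitrary editor configurations just change the module list; the draw shader never needs to know about them, which is presumably why fully CPU-updated systems were so common before GPU-configurable pipelines.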