Hello,
I'm somewhat of a beginner with DirectX. For a school assignment I previously implemented a simple CPU-only ray tracer that used hierarchical modelling to set up the scene. The primitives available were:
- A sphere
- A cube
- A mesh of vertices and faces, the faces given as indices into the vertex list
Before starting the trace, I would build a list of inverted matrices (to transform a ray into object space) for each primitive in the scene, along with their associated materials. I'd like to do something similar with DirectCompute (or just DirectX, but it must be a ray tracer). However, I'm completely lost as to how to architect this. For example, I'm not sure how I would send an arbitrary number of matrices, materials, textures, etc. to a constant buffer. On top of that, a mesh has an arbitrary number of vertices and faces. Should I use other parts of the DirectX graphics pipeline for that? Could someone point me in the right direction?
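For reference, the inverse-matrix idea above can be sketched on the CPU side. This is a hypothetical minimal version (the types and names are made up, and the "inverse transform" is translation-only for brevity; a real tracer would store a full inverse 4x4 per primitive): transform the ray into object space, then intersect against a canonical unit primitive, so one intersection routine serves every instance.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Toy inverse transform: translation only, to keep the sketch short.
struct InverseTransform {
    Vec3 t; // inverse translation (i.e. -objectPosition)
    Vec3 applyPoint(Vec3 p) const { return { p.x + t.x, p.y + t.y, p.z + t.z }; }
    Vec3 applyDir(Vec3 d)  const { return d; } // translation leaves directions alone
};

// Ray vs. canonical unit sphere at the origin (assumes d is normalized).
bool hitUnitSphere(Vec3 o, Vec3 d) {
    float b = o.x*d.x + o.y*d.y + o.z*d.z;
    float c = o.x*o.x + o.y*o.y + o.z*o.z - 1.0f;
    float disc = b*b - c;
    return disc >= 0.0f && (-b + std::sqrt(disc)) >= 0.0f;
}

// Every sphere instance reuses the same unit-sphere test.
bool hitSphereInstance(const InverseTransform& inv, Vec3 o, Vec3 d) {
    return hitUnitSphere(inv.applyPoint(o), inv.applyDir(d));
}
```

The same structure maps naturally onto a compute shader later: the per-primitive inverse matrices become elements of a buffer indexed per thread.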
Help starting Ray Tracing in Direct Compute
"This ray tracer used hierarchical modelling to set up the scene."
I'm assuming you mean setting up some sort of quad tree or k-d tree to split your scene into separate chunks?
When I did my ray tracer, I didn't use Direct3D or OpenGL for my rendering; I just used SetPixel from the Win32 API (yes, I know SetPixel is slow, but that was one of the requirements of the project). I'll suggest what I would do by combining my two separate experiences, Direct3D and implementing a ray tracer (if someone has already created a ray tracer with Direct3D/OpenGL, please feel free to correct me).
You won't be utilizing the graphics/shader pipeline in Direct3D, since it doesn't make sense in a ray-tracer context. The only stage you'd probably use is the vertex shader, and only if you're animating the mesh. Since you gave no indication that you are, the only part of Direct3D you'd be messing around with is setting each pixel's final color in the backbuffer. Don't worry about sending your vertices, indices, textures/materials, or matrices to a vertex/index/constant buffer, since, as I said, you won't be utilizing the shader pipeline.
First, create your backbuffer with Direct3D (it doesn't matter which version you use). This is where all of your final pixels will go. Next, do all of the math-intensive parts of your ray tracer (object intersection, matrix transforms, and what-not) with DirectCompute; after you determine the current pixel's final color, go back to the backbuffer and modify the respective pixel. Repeat for all of your pixels, and when you're done tracing the scene, simply swap the backbuffer to the "front" (through Direct3D), and that should be it.
Since you're fairly new to Direct3D and I'm assuming with DirectCompute as well, I'd take baby steps. First, write a small Direct3D program that just creates a backbuffer, sets the entire thing to a certain color, and then present it to the screen. Then figure out how to grab that backbuffer, and modify a pixel's color. Once you have that done, learn how to perform mathematical computations with DirectCompute.
Thanks for the reply. I'm just going to go with structured buffers; that seems like the right way to do it. I don't need to implement anything like k-d trees. And yeah, I figured I didn't need to touch the rest of the DirectX graphics pipeline. It only took me a day to make a Mandelbrot application via DirectCompute, but there were lots of samples. I just need to think of a smart way to store the mesh in a buffer. Sleep deprivation doesn't help me any either lol.
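One common way to store "an arbitrary number of vertices and faces" for a structured buffer is to flatten everything into a few big tightly packed arrays: one shared vertex array, one shared index array, and a small per-mesh record of offsets and counts. A sketch (the struct names are made up for illustration):

```cpp
#include <cstdint>
#include <vector>

struct Vertex { float px, py, pz; float nx, ny, nz; };

struct MeshRange {           // one entry per mesh in the scene
    uint32_t firstIndex;     // offset into the shared index array
    uint32_t indexCount;     // 3 * triangle count
    uint32_t baseVertex;     // added to each index when fetching a vertex
};

struct SceneBuffers {
    std::vector<Vertex>    vertices; // would upload as StructuredBuffer<Vertex>
    std::vector<uint32_t>  indices;  // would upload as StructuredBuffer<uint>
    std::vector<MeshRange> meshes;   // would upload as StructuredBuffer<MeshRange>

    // Append one mesh, recording where its data landed in the shared arrays.
    MeshRange append(const std::vector<Vertex>& v, const std::vector<uint32_t>& i) {
        MeshRange r{ (uint32_t)indices.size(), (uint32_t)i.size(), (uint32_t)vertices.size() };
        vertices.insert(vertices.end(), v.begin(), v.end());
        indices.insert(indices.end(), i.begin(), i.end());
        meshes.push_back(r);
        return r;
    }
};
```

In the shader, triangle k of a mesh m would then fetch vertices[m.baseVertex + indices[m.firstIndex + 3*k + j]] for j = 0..2, so every mesh lives in the same three buffers regardless of size.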
Yeah, if you have lots of scene data that you need to access with random access patterns, then you'll want to put it in a StructuredBuffer and not a constant buffer. If you need the CPU to constantly update the contents of any of the buffers, just create them with D3D11_USAGE_DYNAMIC and map them with D3D11_MAP_WRITE_DISCARD to get the best performance.
Heh, I don't even need the CPU to do much, just pass resources between multiple compute shaders. But that's something I'm still iffy on. For example, if I write to a RWTexture2D in one shader and then want another shader to use the same texture, but this time just as a Texture2D, how do I go about doing that?
Create a shader resource view for that ID3D11Texture2D, and bind it when running the shader that will access it as a Texture2D. Just make sure the unordered access view from the first pass is unbound first, since the same resource can't be bound for both reading and writing at the same time.