Build a render farm using DirectX and radiosity?

Started by
5 comments, last by Vilem Otte 9 years, 10 months ago

Hello!

I would like to build a render farm to render light map using radiosity.

This one looks good: http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm
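For reference, the core of that article is an iterative gather pass over patches. Here is a rough single-channel CPU sketch of that loop, assuming the form factors have already been computed (the article gets them by rendering hemicubes); all names are just placeholders, not a real API:

```cpp
// Minimal sketch of one gathering-radiosity pass (single color channel).
// F[i][j] is the precomputed form factor from patch i to patch j.
#include <vector>

struct Patch {
    float emission;     // self-emitted light
    float reflectance;  // fraction of incident light bounced back out
    float radiosity;    // light leaving the patch, updated each pass
};

void radiosityPass(std::vector<Patch>& patches,
                   const std::vector<std::vector<float>>& F)
{
    std::vector<float> incident(patches.size(), 0.0f);

    // Gather: light arriving at patch i is the radiosity of every other
    // patch j, weighted by the form factor between them.
    for (size_t i = 0; i < patches.size(); ++i)
        for (size_t j = 0; j < patches.size(); ++j)
            incident[i] += F[i][j] * patches[j].radiosity;

    // Update: what a patch sends out next pass is its own emission plus
    // the reflected portion of what it just received.
    for (size_t i = 0; i < patches.size(); ++i)
        patches[i].radiosity = patches[i].emission
                             + patches[i].reflectance * incident[i];
}
```

Running a few of these passes propagates light bounce by bounce; the rasterizer only comes into play for computing the form factors / visibility.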

I would like to utilize the fixed-pipeline rasterizer; it's fast and easy to get going. But does that mean every render machine needs a monitor?

I don't know if one can run a DirectX application without a monitor.

One of MJP's presentations mentions that they use OptiX with CUDA to render light maps. It should be able to run on a Linux machine without a monitor, but it's difficult to find CUDA guys here.

Any suggestion is welcome. Thank you!


Direct3D always runs in the context of a particular window. However, you don't need a monitor in order to create a window, and the window doesn't need to be visible anywhere.
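For example, a minimal D3D9-style sketch (assuming the fixed-function path mentioned above; error checking omitted). The window is created but never shown:

```cpp
// Create an invisible window and use it as the focus window for a D3D9 device.
#include <windows.h>
#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

int main()
{
    // Register a bare window class and create a window, but never call
    // ShowWindow - it stays invisible and nothing is ever presented to it.
    WNDCLASSEXA wc = { sizeof(wc) };
    wc.lpfnWndProc   = DefWindowProc;
    wc.hInstance     = GetModuleHandle(nullptr);
    wc.lpszClassName = "BakeWindow";
    RegisterClassExA(&wc);

    HWND hwnd = CreateWindowExA(0, "BakeWindow", "bake", WS_OVERLAPPEDWINDOW,
                                0, 0, 16, 16, nullptr, nullptr, wc.hInstance, nullptr);

    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferWidth  = 16;   // dummy back buffer; the real baking work
    pp.BackBufferHeight = 16;   // goes to offscreen render targets
    pp.hDeviceWindow    = hwnd;

    IDirect3DDevice9* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);

    // ... render light maps to offscreen surfaces here ...

    if (device) device->Release();
    d3d->Release();
    DestroyWindow(hwnd);
    return 0;
}
```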

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Uhm... just to clarify - MJP most likely used ray-traced radiosity (that's why OptiX). I believe there are implementations of fast rasterizers for CUDA too (and also for OpenCL, if you prefer it), in case you wouldn't like to use D3D in the end.

The thing is, if you go with CUDA/OpenCL instead of D3D (or an alternative), you will most likely end up with a better system that lets you do more. With OpenCL you have the further advantage that you can run it on various systems. The exception is if you dive into compute shaders, but even then I'd recommend going with CUDA/OpenCL.

but it's difficult to find CUDA guys here.

I have to disagree with this ...

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

I will take a look at OpenCL/CUDA.

By the way, I use AMD cards at work and at home, but Nvidia's promotion is really good; they even invited people from Pixar to show off their OptiX-based tool, and it's really impressive.

On Windows you don't physically need a monitor hooked up, but in order to use D3D the video card needs to be set up as a display device with a compatible WDDM display driver. The same goes for CUDA, and I would assume OpenGL/OpenCL (I don't have experience with those so I can't tell you for sure). Nvidia has a special non-display driver mode called "Tesla Compute Cluster" that you can use for CUDA, which allows you to bypass WDDM. However, it only works with their (extremely) expensive Tesla line of video cards. There's a brief overview of the advantages here. On Linux the driver system is different, so it doesn't have the same issues.

Like Vilem Otte said, we use OptiX for ray tracing when computing our light maps. Our baking farm consists of a few PCs running Linux, each with several GTX 780s running in a non-display configuration. Ray tracing is really natural to use for baking GI, since it makes it easy to sample your scene and it's trivial to parallelize.

If you don't want to deal with the mess of GPUs and drivers, you can consider ray tracing on the CPU instead. Intel's Embree is very easy to use, and really fast. I don't think you could match the raw throughput of a monster GPU running OptiX, but with a few beefy CPU cores you should get respectable performance.
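As a rough illustration, here is a minimal sketch of shooting a single ray with Embree, assuming the Embree 3 C API (older releases had a slightly different API); a real baker would batch many rays per light map texel:

```cpp
// Trace one ray against one hard-coded triangle with Embree 3.
#include <embree3/rtcore.h>
#include <cstdio>
#include <limits>

int main()
{
    RTCDevice device = rtcNewDevice(nullptr);
    RTCScene  scene  = rtcNewScene(device);

    // One triangle; Embree allocates the vertex/index buffers for us.
    RTCGeometry geom = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
    float* v = (float*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_VERTEX, 0,
                                               RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
    unsigned* idx = (unsigned*)rtcSetNewGeometryBuffer(geom, RTC_BUFFER_TYPE_INDEX, 0,
                                                       RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
    v[0]=0; v[1]=0; v[2]=5;  v[3]=1; v[4]=0; v[5]=5;  v[6]=0; v[7]=1; v[8]=5;
    idx[0]=0; idx[1]=1; idx[2]=2;
    rtcCommitGeometry(geom);
    rtcAttachGeometry(scene, geom);
    rtcReleaseGeometry(geom);
    rtcCommitScene(scene);

    // Shoot a ray from near the origin down +Z.
    RTCRayHit rh = {};
    rh.ray.org_x = 0.2f; rh.ray.org_y = 0.2f; rh.ray.org_z = 0.0f;
    rh.ray.dir_x = 0.0f; rh.ray.dir_y = 0.0f; rh.ray.dir_z = 1.0f;
    rh.ray.tnear  = 0.0f;
    rh.ray.tfar   = std::numeric_limits<float>::infinity();
    rh.ray.mask   = 0xFFFFFFFFu;
    rh.hit.geomID = RTC_INVALID_GEOMETRY_ID;

    RTCIntersectContext ctx;
    rtcInitIntersectContext(&ctx);
    rtcIntersect1(scene, &ctx, &rh);

    if (rh.hit.geomID != RTC_INVALID_GEOMETRY_ID)
        std::printf("hit at t = %f\n", rh.ray.tfar);

    rtcReleaseScene(scene);
    rtcReleaseDevice(device);
    return 0;
}
```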

D3D11.1 can be used in session-0 processes on Windows 8, but from googling I've read that some people seem to have had issues with it.

OpenCL can definitely run without a monitor but I'm fairly sure it needs the drivers installed for AMD since it uses the internal AMD IL compiler.

Nonetheless, I wouldn't use D3D for long-running tasks. You need to set a flag to allow it not to time out, and performance seemed to be considerably worse for large datasets in my case. I can't tell whether this is a driver problem on my side, but it seems likely that D3D11 compute has been optimized for the latency-sensitive context of small datasets used for real-time manipulation.
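If memory serves, the flag in question is D3D11_CREATE_DEVICE_DISABLE_GPU_TIMEOUT, passed at device creation. A sketch, assuming the D3D11.1 runtime and a WDDM 1.2+ driver:

```cpp
// Create a D3D11 device that opts out of the ~2 second GPU timeout (TDR),
// so long-running compute dispatches are not killed by the OS.
// If the flag is rejected, fall back to splitting the work into shorter dispatches.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

bool createComputeDevice(ID3D11Device** device, ID3D11DeviceContext** context)
{
    HRESULT hr = D3D11CreateDevice(
        nullptr,                                  // default adapter
        D3D_DRIVER_TYPE_HARDWARE,
        nullptr,
        D3D11_CREATE_DEVICE_DISABLE_GPU_TIMEOUT,  // the "don't time out" flag
        nullptr, 0,                               // default feature levels
        D3D11_SDK_VERSION,
        device, nullptr, context);
    return SUCCEEDED(hr);
}
```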

OP:

Never think about using the FFP again. Not only will you be shooting yourself in (at least) one foot, but you'll also have to go through the ROPs instead of just streaming to memory, and this can be a problem on modern architectures.

Previously "Krohm"

On Windows you don't physically need a monitor hooked up, but in order to use D3D the video card needs to be set up as a display device with a compatible WDDM display driver. The same goes for CUDA, and I would assume OpenGL/OpenCL (I don't have experience with those so I can't tell you for sure).

Actually with OpenCL, the situation is a little bit more "complicated". You can use different device types to execute your OpenCL kernels; as of now, there are three supported (see the enumeration sketch after this list):

CL_DEVICE_TYPE_CPU - OpenCL will use the host processor for computation. This option works fine in a terminal without starting X (on Linux), although you would need a CPU-based render farm for this (of course, with enough CPUs with lots of cores you can beat GPUs), but it will most likely be more expensive.

CL_DEVICE_TYPE_GPU - OpenCL will use a GPU device to perform the computations. Each vendor implements its own OpenCL driver differently:

  • AMD's implementation allows running OpenCL applications without a running X server (I think you need root privileges to run the app like that, but the last time I tried this was about a year ago and things might have changed since then; back then the feature was called "console mode" in clinfo ... since Catalyst 13.4 as far as I remember)
  • Nvidia: I haven't dug into this one. I know it is possible for CUDA, so I bet you can do the same with their OpenCL.

CL_DEVICE_TYPE_ACCELERATOR - Special OpenCL accelerators (IBM Cell blades or Xeon Phi). I would like to get my hands on a Xeon Phi and try it out, but it's very expensive and also hard to get (I mean, compare it to any current GPU in terms of what you get for your money).
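A small sketch of picking devices by type with the plain OpenCL C API; it simply lists whatever each installed platform exposes for the requested type:

```cpp
// List OpenCL devices of a given type (CPU, GPU, accelerator)
// across all installed platforms.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static void listDevices(cl_device_type type)
{
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id platform : platforms)
    {
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platform, type, 0, nullptr, &numDevices) != CL_SUCCESS)
            continue;   // no device of the requested type on this platform

        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(platform, type, numDevices, devices.data(), nullptr);

        for (cl_device_id device : devices)
        {
            char name[256] = {};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("  %s\n", name);
        }
    }
}

int main()
{
    std::printf("CPU devices:\n");         listDevices(CL_DEVICE_TYPE_CPU);
    std::printf("GPU devices:\n");         listDevices(CL_DEVICE_TYPE_GPU);
    std::printf("Accelerator devices:\n"); listDevices(CL_DEVICE_TYPE_ACCELERATOR);
    return 0;
}
```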

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

This topic is closed to new replies.
