General ray tracing timing question

I've recently built a (very small) render farm running Blender on Ubuntu, because my brother was tired of waiting four days for test renders of his current animation project. I can tell you (although it should be clear anyway) that the factors involved are numerous:

Hardware
- count of cores
- core clock frequency
- CPU cache size (very important)
- number of RAM channels in use
- RAM clock frequency and delay
- amount of available RAM

Software
- operating system (yes, that sometimes plays a role; it definitely does with Blender)
- background processes
- renderer version (e.g. compare Blender 2.49b and 2.50; wow, what a difference)

Scene
- resolution
- oversampling
- ray depth
- material properties
- number of faces
- reflection model
- number of illuminants
- shadows
- compositing
- post processing
- ...
Quote:Original post by cignox1
IMHO, trying to guess those figures is close to useless. That really depends upon what features are implemented and how.


I'm precisely interested in the features and the how. It's educational to me, therefore not useless. Timings from (say) a GPU-based approach have value in this thread even though I'm doing a traditional CPU approach.

Quote:Original post by cignox1
By the way, are there reasons why are you using 8 threads on a 2 core machine?


Nope. I just kept bumping up the # of threads until I saw something close to a 2x speed-up. The figure of 1024 spawned rays is arbitrary as well. My optimized SIMD line ray trace has a free register component that I need to take advantage of. There's obviously some room for improvement, which I'll get to eventually.
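(For reference, a common pattern is to size the worker count to std::thread::hardware_concurrency() and hand each worker rows or tiles; the sketch below is a generic illustration with a placeholder renderPixel(), not the renderer discussed in this thread.)

```cpp
// Generic row-parallel render loop sized to the hardware thread count.
// renderPixel() is a placeholder for whatever shading the real renderer does.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

static std::uint32_t renderPixel(int x, int y) {
    // Placeholder: trace the ray(s) for pixel (x, y) and return a packed color.
    return static_cast<std::uint32_t>((x ^ y) & 0xFF);
}

int main() {
    const int width = 800, height = 600;
    std::vector<std::uint32_t> image(static_cast<std::size_t>(width) * height);

    const unsigned workerCount = std::max(1u, std::thread::hardware_concurrency());
    std::atomic<int> nextRow{0};   // each worker grabs the next unrendered row

    std::vector<std::thread> workers;
    for (unsigned i = 0; i < workerCount; ++i) {
        workers.emplace_back([&] {
            for (int y = nextRow++; y < height; y = nextRow++)
                for (int x = 0; x < width; ++x)
                    image[static_cast<std::size_t>(y) * width + x] = renderPixel(x, y);
        });
    }
    for (std::thread& w : workers) w.join();
}
```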

But first, how about those timings? :)

btw, thanks all for feedback.

edit: a bit more clarity

[Edited by - Unfadable on January 5, 2010 9:54:50 AM]
Asking for timings might make sense if you provided a sample scene in some widely accepted file format, along with the target size and a reference computer spec. Other than that, there are FAR too many variables that can influence rendering times to a great extent. This goes from the specific scene (and view in it) to the very specific implementation details of various renderers.
I could make up some totally arbitrary numbers here, and they would make just as much sense as some I actually got by rendering some of my scenes on my computer with my preferred raytracer, since all of these are just as arbitrary.

The differences range from near-realtime stuff up to hours of rendering, depending on what a raytracer looks like inside, and different raytracers are probably optimized for very different targets. A speed-oriented raytracer will most likely accept various inaccuracies in the output in order to be fast, while a high-quality-oriented approach might need 100x the time but output properly physically correct images down to the subpixel. With a difference of a factor of a hundred (which IS realistic), of what use could these numbers be?

Provide one or more scene files, and we can talk. You may also want to separate per effect, e.g. one reflection-heavy scene, one loaded with geometry (with and without some AA), and so on.
It makes no sense to tell you "I needed X seconds for a sphere", since I can just as easily make up a complex material that turns rendering the sphere into a matter of several minutes. Be specific!
The ray tracer I wrote for my graphics class took like two minutes to render an 800x600 image with one light source (sampled in 9 places, so 9 shadow feelers spawned), a maximum of 4 bounces for reflection, and 3 primitives in the scene (a sphere, a cone and a box). This was all single threaded and not very cleverly programmed and running in debug mode in the compiler with all optimizations turned off. Not exactly comparable to your situation, but it's a situation none-the-less.

Anyway, my uneducated guess is that with only ~4000 triangles, you should be able to get your ray tracer to render your scene in less than 3 hours. Are you using any sort of space partitioning, kd-trees, etc.? 1024 secondary rays per bounce is quite a bit, but not enough to warrant 3 hours of render time. Your ray tracer is currently taking about 22 milliseconds per pixel, which seems like an awfully long time to process 1024 rays. But, then again, maybe I don't know anything!
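(For anyone checking that figure, here is the back-of-envelope arithmetic; the 800x600 resolution and ~3-hour total are assumptions pieced together from the numbers quoted in this thread, not stated facts.)

```cpp
// Back-of-envelope check of the ~22 ms/pixel figure.
// The 800x600 target and ~3 hour total are assumptions; plug in the real values.
#include <cstdio>

int main() {
    const double width = 800.0, height = 600.0;   // assumed resolution
    const double renderSeconds = 3.0 * 3600.0;    // assumed ~3 hour render
    const double msPerPixel = renderSeconds * 1000.0 / (width * height);
    std::printf("%.1f ms per pixel\n", msPerPixel); // prints 22.5
}
```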
Quote:Original post by Samith
...Not exactly comparable to your situation, but it's a situation none-the-less.


Thank you! That's exactly the kind of thing I was looking for. I wish there was something higher than "extremely helpful" that I could rate you with for answering my question.

Anyways, I am not using any spatial partitioning scheme, and given the way the camera is oriented, pretty much every triangle is considered (no early outs). I'll get to the optimizations eventually.
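(For context, brute force with no early outs means a closest-hit query like the sketch below; the types and the Moller-Trumbore helper are generic placeholders, not the poster's code. Swapping this linear scan for a BVH or kd-tree is where most of the win comes from.)

```cpp
// Minimal sketch of a brute-force closest-hit query (placeholder code, not the
// poster's renderer). Every triangle is tested for every ray, so the cost is
// O(rays x triangles); a BVH or kd-tree replaces exactly this linear scan.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray      { Vec3 origin, dir; };
struct Triangle { Vec3 v0, v1, v2; };

// Moller-Trumbore ray/triangle test; on a hit, writes the distance into t.
static bool intersect(const Ray& r, const Triangle& tri, float& t) {
    const Vec3 e1 = sub(tri.v1, tri.v0), e2 = sub(tri.v2, tri.v0);
    const Vec3 p  = cross(r.dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;  // ray parallel to triangle plane
    const float inv = 1.0f / det;
    const Vec3 s = sub(r.origin, tri.v0);
    const float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    const Vec3 q = cross(s, e1);
    const float v = dot(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 1e-4f;                          // ignore hits behind the origin
}

// Linear scan over every triangle -- the "no early outs" case described above.
static bool closestHit(const Ray& r, const std::vector<Triangle>& tris,
                       float& bestT, int& bestIndex) {
    bestT = 1e30f;
    bestIndex = -1;
    for (int i = 0; i < static_cast<int>(tris.size()); ++i) {
        float t;
        if (intersect(r, tris[i], t) && t < bestT) { bestT = t; bestIndex = i; }
    }
    return bestIndex >= 0;
}

int main() {
    const std::vector<Triangle> tris = {{{0, 0, 0}, {1, 0, 0}, {0, 1, 0}}};
    const Ray r{{0.25f, 0.25f, -1.0f}, {0.0f, 0.0f, 1.0f}};
    float t; int index;
    if (closestHit(r, tris, t, index))
        std::printf("hit triangle %d at t = %.2f\n", index, t);  // t = 1.00
}
```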

OK, I still consider those numbers close to useless, but here is an image and the numbers from my raytracer.
Anyway, after a few more optimizations I made that render two times faster than what is reported in the link. And still, it is a highly unoptimized raytracer. In addition, rendering times depend heavily upon the camera position and viewing direction, due to the high incidence of reflective/transparent surfaces and the fact that the background renders far faster than the model.

In short, rendering times are influenced by the HW the renderer is running on, by the renderer itself, the specific scene, the quality settings and the camera pos/dir. We are not talking about a 30% difference: I can say 5 minutes or 5 hours, and anything in between, and anything less and anything more. A bit more AA and your rendering takes twice as long. Or a ray max depth of 16 instead of 8, and in some scenes it may require far more time. I can name a random number of minutes and be sure that there is some combination of software/scene/settings which matches that randomly chosen rendering time...
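(To illustrate how quickly those settings multiply, here is a rough ray-count estimator; the branching factor and all figures are illustrative assumptions, not measurements from any renderer in this thread.)

```cpp
// Rough ray-count estimator, purely to show how settings multiply.
// Resolution, AA samples, and the branching factor are illustrative assumptions.
#include <cmath>
#include <cstdio>

int main() {
    const double width = 800.0, height = 600.0;
    const double aaSamples = 4.0;   // samples per pixel
    const double branching = 2.0;   // avg. secondary rays spawned per hit
    const int depths[] = {4, 8, 16};
    for (int maxDepth : depths) {
        // Geometric series 1 + b + b^2 + ... + b^maxDepth: rays per camera sample.
        const double raysPerSample =
            (std::pow(branching, maxDepth + 1) - 1.0) / (branching - 1.0);
        const double totalRays = width * height * aaSamples * raysPerSample;
        std::printf("max depth %2d: ~%.2e rays\n", maxDepth, totalRays);
    }
}
```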

Quote:
The ray tracer I wrote for my graphics class took like two minutes to render an 800x600 image with one light source (sampled in 9 places, so 9 shadow feelers spawned), a maximum of 4 bounces for reflection, and 3 primitives in the scene (a sphere, a cone and a box). This was all single threaded and not very cleverly programmed and running in debug mode in the compiler with all optimizations turned off. Not exactly comparable to your situation, but it's a situation none-the-less.


When executed in debug mode, my raytracer can be 10-20 times slower than when run in release with optimizations on :-)
My most recent one renders 512x512 pixels in 14 or so hours.
Quote:Original post by cignox1
Quote:
The ray tracer I wrote for my graphics class took like two minutes to render an 800x600 image with one light source (sampled in 9 places, so 9 shadow feelers spawned), a maximum of 4 bounces for reflection, and 3 primitives in the scene (a sphere, a cone and a box). This was all single threaded and not very cleverly programmed and running in debug mode in the compiler with all optimizations turned off. Not exactly comparable to your situation, but it's a situation none-the-less.


When executed in debug mode, my raytracer can be 10-20 times slower than when run in release with optimizations on :-)


The framework the TA had us use used a GUI library that wouldn't let you compile in release mode for some reason. Some students found a way around it, though, and apparently their raytracers were significantly faster. I wonder what it is about raytracers that makes compiler optimizations so much more effective than normal?
Quote:Original post by Samith
The framework the TA had us use used a GUI library that wouldn't let you compile in release mode for some reason. Some students found a way around it, though, and apparently their raytracers were significantly faster. I wonder what it is about raytracers that makes compiler optimizations so much more effective than normal?


Nothing in particular, I think. 10 to 20 times the performance is what I'm used to when activating every optimisation in reach, on non-raytracing code too.
Quote:Original post by Samith
Quote:
When executed in debug mode, my raytracer can be 10-20 times slower than when run in release with optimizations on :-)

I wonder what it is about raytracers that makes compiler optimizations so much more effective than normal?


It's not only raytracing: to load the scene geometry I use the Assimp library, which performs a lot of precomputation on the data (creating tangent vectors, normalization and so on). When I use the debug DLL I get much slower performance.
It's just that those applications do not spend much time waiting for input or for data, and thus every additional instruction they must perform heavily influences the time required.
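(A generic way to see this for yourself: build a tight, compute-bound loop like the sketch below once without optimizations and once with them, e.g. -O0 vs -O2, or Debug vs Release in MSVC. It is not code from anyone in this thread, just an illustration of why per-instruction overhead shows up directly in the total.)

```cpp
// Tiny compute-bound loop: there is no I/O to hide behind, so any per-iteration
// overhead the compiler does not remove shows up directly in the total time.
// Build once with -O0 and once with -O2 (or Debug vs Release) and compare.
#include <chrono>
#include <cstdio>
#include <vector>

static float dot(const std::vector<float>& a, const std::vector<float>& b) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += a[i] * b[i];   // inlining/vectorization only happen in release
    return sum;
}

int main() {
    const std::vector<float> a(1 << 20, 1.0f), b(1 << 20, 2.0f);
    const auto start = std::chrono::steady_clock::now();
    float total = 0.0f;
    for (int i = 0; i < 100; ++i) total += dot(a, b);
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                        std::chrono::steady_clock::now() - start).count();
    std::printf("total = %.0f, elapsed = %lld ms\n", total, static_cast<long long>(ms));
}
```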

Quote:
My most recent one renders 512x512 pixels in 14 or so hours.


I remember something similar done with the D preprocessor (or the like), and an RT made with JavaScript. Honestly, I thought that there were not so many of those crazy guys :-)

