Archived

This topic is now archived and is closed to further replies.

Graphics on PC cluster

manuelb    152
Does anyone know anything about rendering graphics on a PC cluster? What kind of software does this? Thanks, and please do not answer "go to Google and do a search".

--------------------------
Rory Gallagher and Chico Science music!
---------------------------

Yann L    1802
What kind of graphics?

All major 3D packages (Maya, 3DSMax, Softimage) support network rendering on a cluster.

manuelb    152
Yes, 3D graphics. Thanks, I didn't know those packages could do this.



Vich    142
The software that does this is a reasonably simple client-server application. All clustered PCs in the network do calculations for a main computer, which gives them assignments. When an assignment is done, the client PCs send their results back to the main server, which gathers all the sub-calculations and combines them into one big result frame.
It's not only done for computing graphics (where each client gets a small part of the image to calculate). I think it's more commonly used for calculations like quantum physics, and at universities.

If you want to make something like this yourself:
- write a scripting language for the server, so you can give it a task list, which it can divide over the clients
- write a client that accepts a task (or several, which are put in a queue) and, on completion, sends the result back.

A related technique for real-time clustering (combining CPUs and memory across a network so they can be used as one on a single PC) is load balancing.

"My basic needs in life are food, love and a C++ compiler"
[Project AlterNova] [Novanet]

manuelb    152
Yes, I'm working on PC cluster projects at university. I use Linux, C (gcc) and PVM, and sometimes raw sockets. We work on large math systems. But I think "simple client-server application" is not the best description. It's not just "each client gets a small part of the image to calculate", because each part of the image takes a different amount of work; to get good results you have to balance the load evenly. But I agree, building parallel systems for image rendering is easier than building other parallel systems.


Yann L    1802
Actually, most 3D packages will not distribute the rendering of a single image over the network; there are far too many dependencies between the different areas of an image. Think of shadows, reflections, global illumination (radiosity, photon maps, etc). It is pretty easy to distribute such a rendering process on an SMP multi-CPU system, as long as unified memory mapping is available, but it's very hard over a network (although not impossible, e.g. Pixar's RenderMan does it).

Most 3D packages will instead distribute multiple frames of an animation over a cluster, i.e. each cluster node renders a complete frame and, as soon as it finishes, requests a new one. A central server dispatches unrendered frames as needed and composites the final animation from the finished renderings it receives from the individual cluster nodes. This approach almost eliminates synchronisation problems, but is only effective for animations.

manuelb    152
Yann L, things like shadows, reflections, global illumination etc. don't create any dependence. Each node processes its part of the image based on information about the ENTIRE scene. If a node calculates only part of the image, that doesn't mean the node lacks any of the information (polygons, materials, lights...) about the scene. With lights, shadows etc., each pixel depends on information about almost the whole scene, but one pixel does not depend on another pixel (except for antialiasing, but that involves only adjacent pixels and is easy to implement after the image is complete).
If there is no required order in which to calculate the pixels, there is no dependence.


Yann L    1802
quote:
Original post by manuelb
Yann L, things like shadows, reflections, global illumination etc. don't create any dependence. Each node processes its part of the image based on information about the ENTIRE scene. If a node calculates only part of the image, that doesn't mean the node lacks any of the information (polygons, materials, lights...) about the scene. With lights, shadows etc., each pixel depends on information about almost the whole scene, but one pixel does not depend on another pixel (except for antialiasing, but that involves only adjacent pixels and is easy to implement after the image is complete).
If there is no required order in which to calculate the pixels, there is no dependence.


This is not what I meant. What I was trying to say is that each rendered pixel can (under certain circumstances) depend on non-local scene data. That means high-speed random access to the entire scene data must be available to all cluster nodes, at the same time, if you want to render a single image on a parallel system. And it's more than just the geometry: illumination maps, shadow maps, photon maps and transfer volumes, radiosity form factors, refracted rays, caustic projections, etc.

On large scenes, this data can easily add up to several gigabytes. And most important of all, this data is not constant over the course of the rendering, but can change during the process (e.g. through a recursive photon mapper with multi-refractive properties). It is almost impossible to give all cluster nodes parallel access to that data at acceptable bandwidth, even on gigabit networks. In the end, the parallel rendering will almost always be slower than a single multi-CPU node (per frame) setup, simply because of data access and synchronisation stalls.

Edit: pure raytracing can operate in such an environment, since the scene data is generally static for the frame, i.e. the rendering process does not modify it. But 1) for good performance, each cluster node should hold the entire scene in local RAM, which implies high memory costs, and 2) pure raytracing without acceleration structures is going to be slow as hell, especially for global illumination solutions.


manuelb    152
Yes, you are right!
These problems are good challenges.
But I think it's not impossible.

Thanks


Yann L    1802
quote:
Original post by manuelb
These problems are good challenges.
But I think it's not impossible.


Agreed, I also think it is possible. The key is to develop new algorithms that are highly specialized for multi-node setups. Typical standard algorithms fail here, but it might be possible to create localized approaches, each operating only on a certain subset of the data pool. Later on, the server would run a recombination pass over the separately processed parts.

Research in this direction is ongoing. A couple of good models for localized radiosity have already been developed. AFAIK, Pixar uses a similar multi-node system for their in-house version of RenderMan.

NaliXL    120
Okay, so it might be possible, but why on earth would you want that? I mean: if it's so easy to distribute jobs on a per-frame basis, why bother doing it this way?

Dom77    122
Actually, this is a much-requested feature.
If you need to tweak the materials or lights in a scene, you do a lot of test renders, so waiting 20 seconds instead of 3 minutes saves you a lot (!) of time during the day.

And some render engines already support this feature, like finalRender and V-Ray. 3ds max's scanline renderer can also fake it through a script, which is no longer freely available but is included in the design extension for subscribers. Well... I forgot, there's a new script out there doing it now, too.
Of course, all this is non-realtime.

OpenRT is a project trying to achieve real-time raytracing through... er, grid computing? Network computing.
I've seen a live demo with 5 dual-CPU machines. Nice!

So there are many reasons on earth for wanting this! ;P
