

Buzz1982

Anyone worked on parallel polygon rendering?


Recommended Posts

Hi, is there someone out there who has worked on parallel polygon rendering on a cluster? I am thinking of creating a software framework that lets programmers just describe the scene or world, and then the entire scene is rendered in real time (at least 10 to 15 frames/sec) by dividing the primitives across the nodes of the cluster. I am a computer science student and I am thinking of working on this for my final year project. I am familiar with graphics programming but haven't worked with parallel programming. Although I am studying papers on parallel rendering and on the sorting classification (sort-first / sort-middle / sort-last), I would still like some guidance from someone who has worked on this before. Thanks, bye.
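P.S. To make the idea concrete, here is the simplest split I can think of — just a sketch, with made-up names (Triangle, partition) and all of the communication left out:

#include <cstddef>
#include <vector>

struct Vec3     { float x, y, z; };
struct Triangle { Vec3 v[3]; };

// Deal the scene's triangles out round-robin, one bucket per node
// ("sort-last" style: every node renders its share of the geometry
// and the partial images are merged afterwards).
std::vector<std::vector<Triangle> >
partition(const std::vector<Triangle>& scene, std::size_t numNodes)
{
    std::vector<std::vector<Triangle> > buckets(numNodes);
    for (std::size_t i = 0; i < scene.size(); ++i)
        buckets[i % numNodes].push_back(scene[i]);
    return buckets;
}

A real framework would also have to ship each bucket to its node and merge the resulting images, which is where the hard problems live.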

Hmm, interesting, but I don't know how you can avoid passing back things like the depth and color buffers, which is going to be a lot of memory copying.

When you push data through the NIC it gets copied several more times along the way. I am not sure it'll be worth it.
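Back-of-the-envelope, assuming 1024x768 with 32-bit color plus 32-bit depth: that is 1024 × 768 × 8 bytes ≈ 6 MB per node per frame, so at 15 frames/sec each node has to ship roughly 90 MB/s back for compositing. Plain 100 Mbit Ethernet peaks around 12.5 MB/s, so you would need gigabit, compression, or a scheme that avoids full-frame readback entirely.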

Yeah, I'm not sure it's worth it in real time, but there must be some benefit to using GPUs in parallel for movie-style offline rendering. I think there are papers out there covering this kind of rendering for real-time scenes too, but you'll have to google for them.

ngill,
I was thinking of that too (the latency/bandwidth problem in passing back the depth and color buffers), and that was one of the reasons for posting here. The papers I have read on the sorting classifications don't discuss this communication overhead at all. Yet there are parallel renderers out there that do exactly this, although they don't explain how they got around the problem. That's why I asked on this forum: if someone has done this before, please explain what can be done, or point me to a link or paper that covers it.

Thanks

Parallel ray tracing would be easier, and kewler =) But of course, because of its triviality, it's probably not such a good idea for study/research/a paper/whatever...
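To show just how trivial: assuming some traceRay() that shades one pixel, each node fills only its own band of scanlines and nothing has to cross the network mid-frame —

struct Color { float r, g, b; };

// Placeholder shading so the sketch compiles; a real tracer
// would intersect the scene here.
Color traceRay(int x, int y)
{
    Color c = { x * 0.001f, y * 0.001f, 0.0f };
    return c;
}

// Node [nodeId] of [numNodes] traces only its own horizontal band.
void renderBand(Color* frame, int width, int height,
                int nodeId, int numNodes)
{
    int bandH = height / numNodes;
    int y0 = nodeId * bandH;
    int y1 = (nodeId == numNodes - 1) ? height : y0 + bandH;
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x)
            frame[y * width + x] = traceRay(x, y);
}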

- Mikko

If all the models were distributed up front, so that only orientation data had to be sent per frame, that would really reduce the data-path overhead. Your renderer would then be more like a networked multiplayer game, except that each node draws only a specific set of objects. Each node could send back its RGBA and depth buffer data, and you could reconstruct the entire frame without sorting polygons.

Something like:

Visible set = [1-5, 8-10]
NodeA : 1,2,3 (plus orientation data)
NodeB : 4,5,8 (plus orientation data)
NodeC : 9,10 (plus orientation data)

An advanced version of the server could monitor the performance per node and do load balancing. Or merely queue up all the reply data and do final processing when everything has come back in.

E.g.
TimeStamp(x) - Frame1-a
TimeStamp(x) - Frame1-c
TimeStamp(x) - Frame2-a
TimeStamp(x) - Frame1-b
TimeStamp(x) - Frame2-c
TimeStamp(x) - Frame2-b

then recombine.
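The recombine itself could be a straight per-pixel depth test over the buffers each node sends back — a rough sketch (Pixel and merge are names I just made up, not from any real library):

#include <cstddef>
#include <vector>

struct Pixel { unsigned int rgba; float depth; };

// Fold one node's partial frame into the accumulated frame:
// whichever fragment is closer to the camera wins the pixel.
void merge(std::vector<Pixel>& accum, const std::vector<Pixel>& partial)
{
    for (std::size_t i = 0; i < accum.size(); ++i)
        if (partial[i].depth < accum[i].depth)
            accum[i] = partial[i];
}

Run merge once per node reply (in any order, since the depth test commutes), and the accumulated frame is the final image.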



lonesock

Piranha are people too.

A control node handles input and sends the viewer's updated position/orientation to all rendering nodes. The control node also gives each rendering node a sheared view frustum. Each node then traverses its bounding hierarchy with its frustum and draws its portion of the screen. Only the color buffer is sent back to the control node for compositing.

Since the frusta do not overlap (each covers a unique portion of the screen, and their union is the whole screen to be rendered), there is no complex compositing, just straight pasting of each image at an offset.

The hard part is load balancing, so that each render node gets an equal amount of work. Find the rendering bottlenecks, keep metrics for the current frame from each render node, then use those metrics to guide the creation of the view frusta.
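Splitting the full frustum into vertical strips is just off-axis frustum math — a sketch (the struct and names are mine; the numbers plug straight into glFrustum):

struct Frustum { double left, right, bottom, top, zNear, zFar; };

// Carve strip [nodeId] of [numNodes] out of the full view frustum.
// Each strip is an asymmetric frustum covering one slice of the
// near plane, so the rendered images butt together with no overlap.
Frustum stripFrustum(const Frustum& full, int nodeId, int numNodes)
{
    Frustum f = full;
    double stripW = (full.right - full.left) / numNodes;
    f.left  = full.left + nodeId * stripW;
    f.right = f.left + stripW;
    return f;  // glFrustum(f.left, f.right, f.bottom, f.top, f.zNear, f.zFar)
}

The compositor then pastes node i's color buffer at x offset i * (screenWidth / numNodes). For load balancing you would vary the strip widths instead of keeping them equal.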

karg

karg, you win.

(I'm sure if I looked hard enough I could find some pathological case where mine is better [8^)

lonesock

Piranha are people too.

Hi,

What do you think? Is this worth a final year project for a bachelor's in computer engineering? Any suggestions for modifying the project, or adding something that would make it better, are welcome. I also need suggestions on what application/demo/animation/walkthrough/etc. to run on it for testing.

Thanks,
Bye

The MPK library under SGI IRIX can already do this, so there's really nothing new here. It even does automatic load balancing and compositing of frame buffers without any slowdown (assuming the hardware supports it). For example, you can render one class of objects on a first pipe (= video card) and another class of objects (in the same window) on a second pipe, then merge everything before displaying the result.

Y.
