quote: You are an employee at a company that designs microprocessors. You are handed a "multimedia" algorithm and told that it is running much too slowly on the current hardware. Your mission, should you choose to accept it, is to develop hardware that will increase the performance of this algorithm and then present your design and analysis in a persuasive fashion.

"What is a 'multimedia' algorithm?", you ask. Examples:
- audio or video compression
- audio or video encoding
- audio or video filtering
- voice recognition
- speech synthesis

Basically this project involves adding hardware to an existing x86 or MIPS processor to speed up whatever algorithm is chosen. Now, none of the 5 listed topics really interest me, but I was thinking that any software rendering algorithm would also qualify as a "multimedia" algorithm. (I don't think anything having to do with transformation or rasterization of polygons would be acceptable, since that stuff is all done in hardware already.)

My ideas so far:
- ray-tracing
- voxels
- any rendering that outputs polygons to give to 3D hardware (I suppose voxels are an example of this)

Note that ray-tracing and voxels are pretty broad topics... what I actually need is a specific algorithm, i.e., something you could implement in a page or two of C code. So, any suggestions?

p.s. I don't know a damn thing about how computer processors work. If any of you know why my university is having CS majors make feeble attempts at hardware design, please let me know.
Software rendering algorithms
From one of my CS classes here at UT in Austin:
Polygon rasterisation is the kind of thing which already has specialised processors - from reading the question I'd say a rasteriser isn't such a good application...
however... the transform and lighting, and physics, parts of a software engine are the kinds of things which would be optimised with special extensions to the CPU:
- That's one of the big reasons for 3DNow!, SSE and SSE2 being added to x86 CPUs - I'd say take a good look at the instructions they added. [Homework hint: processing big batches of similar data in parallel.]
- Also take a look at the SH3 or SH4 (download the manuals from Hitachi); they added what are essentially matrix operations. [Homework hint: identify small primitive jobs and data types which occur regularly - those are the candidates for extra hardware.]
- Finally, see if you can find any docs for the GTE chip in the PlayStation lying around on the internet (they are out there). The GTE has instructions for complete operations, such as one which transforms a vertex and applies a perspective transform to it.
More recent console hardware has similar co-processor hardware. [Homework hint: if the hardware is being made for a very specific market, you can make the features very domain specific (i.e. 3D graphics rather than just "maths ops").]
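To make the GTE point concrete: the fused operation described above (rotate, translate, then project a vertex) looks roughly like the following in C. This is an illustrative sketch of what such an instruction computes, not the actual GTE behaviour (which uses fixed-point arithmetic); all names here are made up:

```c
#include <stdio.h>

/* What a GTE-style "transform and project" operation computes in one go:
   eye = M*v + t, then screen = focal * (eye.x, eye.y) / eye.z.
   Fixed-function hardware fuses all of these multiply-adds and the divide. */
typedef struct { float m[3][3]; float t[3]; } xform;   /* rotation + translation */

void transform_project(const xform *x, const float v[3],
                       float focal, float out[2])
{
    float e[3];                          /* eye-space vertex */
    for (int i = 0; i < 3; ++i)
        e[i] = x->m[i][0]*v[0] + x->m[i][1]*v[1]
             + x->m[i][2]*v[2] + x->t[i];
    out[0] = focal * e[0] / e[2];        /* perspective divide */
    out[1] = focal * e[1] / e[2];
}
```

Counting the work - nine multiplies, nine adds, and a divide per vertex, repeated for every vertex every frame - shows why it is such a good candidate for a dedicated instruction.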
--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com
S1CA, right, as I've said, polygon rasterization is probably not acceptable. You mention transformation and lighting as candidates, but these, too, are now done almost exclusively in hardware already.
Physics is something I initially considered... certainly any physics algorithm could benefit from the type of matrix and vector operations you describe. Unfortunately, I don't think that physics would qualify as "multimedia". I mean, sure, games/simulations are "multimedia" in some sense, but I get the impression that our professor wants an algorithm that operates directly on audio/video data.
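For reference, the batched vector math mentioned above is exactly what SSE targets: one instruction operates on four packed floats at once. A minimal sketch using the standard SSE intrinsics (x86-specific; the physics framing - pos += vel * dt over an array - is just an illustrative example):

```c
#include <xmmintrin.h>   /* SSE intrinsics (x86 only) */

/* Integrate positions: pos[i] += vel[i] * dt, four elements per step. */
void integrate_sse(float *pos, const float *vel, float dt, int n)
{
    __m128 vdt = _mm_set1_ps(dt);                /* broadcast dt to all 4 lanes */
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        p = _mm_add_ps(p, _mm_mul_ps(v, vdt));   /* 4 multiply-adds at once */
        _mm_storeu_ps(pos + i, p);
    }
    for (; i < n; ++i)                           /* scalar tail for leftovers */
        pos[i] += vel[i] * dt;
}
```

The same loop written scalar does one float per iteration; the packed version is the "big batches of similar data in parallel" idea from the homework hint, applied through instructions rather than new hardware.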
Anon, the marching cubes algorithm is an excellent idea! It has been suggested to me by others, as well, and I am looking into it.
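(For anyone else reading: the heart of marching cubes is tiny. Each grid cell's 8 corner samples are classified against the isovalue, packing the results into an 8-bit case index that selects one of 256 precomputed triangulations. A sketch of just that classification step - the lookup tables themselves are omitted:)

```c
/* Classify a cell's 8 corner samples against the isovalue. The returned
   index (0..255) selects a triangulation from a 256-entry lookup table. */
unsigned cube_index(const float corner[8], float isovalue)
{
    unsigned idx = 0;
    for (int i = 0; i < 8; ++i)
        if (corner[i] < isovalue)
            idx |= 1u << i;              /* bit i set: corner i is inside */
    return idx;
}
```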
What about something involving radiosity or high-quality still image/animation rendering?
Design a chip that does radiosity in realtime, so the results can be fed straight to the attached 3D hardware! In addition to being a nice project for your CS class, you'll also get hundreds of millions of dollars in license royalties from SGI, nVidia, ATI and every company that would like to implement it! Cool