
Archived

This topic is now archived and is closed to further replies.

_walrus

Speed of floating point pixel format


Hey, does anybody know what the speed hit for floating-point buffers is (generally speaking)? Rendering slows right down to a crawl on my Radeon 9600 when I use a pBuffer with a floating-point pixel type (64 bits) at a resolution of 512x512. How do they perform on higher-end cards?
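For reference, requesting a float pixel format for a pBuffer looks roughly like the sketch below. It assumes the WGL_ATI_pixel_format_float and WGL_ARB_pbuffer extensions; the constant values are copied from wglext.h, so double-check them against your own headers, and the actual wglChoosePixelFormatARB call is omitted since it needs a live context:

```c
/* Sketch of requesting a 64-bit floating-point pixel format for a
 * pBuffer via WGL_ATI_pixel_format_float + WGL_ARB_pbuffer.
 * Constant values taken from wglext.h; verify against your headers. */

/* From WGL_ARB_pixel_format / WGL_ARB_pbuffer */
#define WGL_DRAW_TO_PBUFFER_ARB 0x202D
#define WGL_SUPPORT_OPENGL_ARB  0x2010
#define WGL_PIXEL_TYPE_ARB      0x2013
#define WGL_RED_BITS_ARB        0x2015
#define WGL_GREEN_BITS_ARB      0x2017
#define WGL_BLUE_BITS_ARB       0x2019
#define WGL_ALPHA_BITS_ARB      0x201B
/* From WGL_ATI_pixel_format_float */
#define WGL_TYPE_RGBA_FLOAT_ATI 0x21A0

/* Attribute list requesting 16 bits per channel (64 bpp float),
 * to be passed to wglChoosePixelFormatARB. */
static const int float_pbuffer_attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, 1,
    WGL_SUPPORT_OPENGL_ARB,  1,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_FLOAT_ATI,
    WGL_RED_BITS_ARB,   16,
    WGL_GREEN_BITS_ARB, 16,
    WGL_BLUE_BITS_ARB,  16,
    WGL_ALPHA_BITS_ARB, 16,
    0                        /* terminator */
};

/* Total color bits requested: 64 means 4x the bandwidth of RGBA8,
 * which alone accounts for a sizeable fill-rate hit. */
int requested_color_bits(void)
{
    int bits = 0, i;
    for (i = 0; float_pbuffer_attribs[i] != 0; i += 2) {
        int attr = float_pbuffer_attribs[i];
        if (attr == WGL_RED_BITS_ARB || attr == WGL_GREEN_BITS_ARB ||
            attr == WGL_BLUE_BITS_ARB || attr == WGL_ALPHA_BITS_ARB)
            bits += float_pbuffer_attribs[i + 1];
    }
    return bits;
}
```

Even with a correct format, note that the raw pixel data is four times the size of an RGBA8 buffer, so some slowdown is expected on any card of this generation.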

lol, that's what I like to think, but it's not quite. The guts of it aren't the R300 core used in the 9500/9700; it's actually a newer chip (the RV350) that was designed to be more cost-effective to produce and sell, but unfortunately it's slower than the 9500/9700/9800, so it's not the top-of-the-line card.

Being a few steps down the ladder isn't going to have such a big impact on performance.

You shouldn't use floating point buffers. You gain nothing from them, and, obviously, they aren't working very well for you.

quote:
Original post by Deyja
Being a few steps down the ladder isn't going to have such a big impact on performance.



Actually, most benchmarks have the 9600 performing at roughly two-thirds the speed of the 9700/9800, and slightly below the 9500.

quote:

You shouldn't use floating point buffers.



Why are they an OpenGL extension, then? And why would hardware vendors implement them if we "shouldn't be using them"?

Not yet, but in the near future (with uberBuffers): render-to-vertex-buffer, displacement mapping, hardware generation of bicubic surfaces... (the list goes on) will be realized in hardware USING FLOATING POINT BUFFERS!! Also, correct me if I'm wrong (I haven't used this extension yet), but isn't the DepthTexture a floating-point buffer?


quote:

You gain nothing from them



You don't even know why I'm using them, so how can you presume that I "gain nothing from them"? Hmm... let's see, how about encoding object positions into the color buffer? (I'm writing a deferred, per-pixel shader that does image-space light computation; I don't think 8-bit precision will work.)



quote:
and, obviously, they aren't working very well for you.



I don't normally flame people, but your post is rather unconstructive unless you can justify why one shouldn't use floating-point buffers and why you gain nothing from them.




[edited by - _walrus on September 22, 2003 10:09:14 PM]

Your problem is not 'How can I make this way work faster?' but 'How can I do this a faster way?'.

Would each pixel be one coordinate, then? Why not use more lower-precision pixels for each object?

quote:
Your problem is not 'How can I make this way work faster?' but 'How can I do this a faster way?'.


No, that is not my problem.

This is not necessarily an exercise in getting the best performance out of this algorithm (I'm aware of some of the major drawbacks of deferred shading, namely that it is fill-rate limited and GPU-memory intensive). I am also aware of the major benefits of the technique (potentially one rendering pass over the geometry for n lights). Speed would be nice, but a working implementation for educational purposes is what I'm striving for. Also, I find image processing to be a very interesting subject, and this is an image-processing-based algorithm. My initial question was about the performance of a mid-range consumer GPU using a specific hardware feature accessed through two ATI-specific OpenGL extensions. Please don't presume to know what my problem is without fully understanding the question.


quote:
Would each pixel be one coordinate, then?


Yes, each pixel would store a coordinate as well as a normal. We need to store the world position of the fragments of the scene objects that are visible in image space, so we can compute the lighting of those fragments. Dean Calver's article on photo-realistic deferred shading refers to this as a geometry buffer (G-buffer).
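A quick back-of-the-envelope on what such a G-buffer costs in video memory, assuming two 64-bit (four 16-bit channels) 512x512 targets, one for position and one for normal; these numbers are simple arithmetic, not measurements:

```c
/* Memory footprint of a G-buffer: width x height pixels, a number of
 * render targets, and bytes per pixel per target (8 for RGBA16F). */
unsigned gbuffer_bytes(unsigned width, unsigned height,
                       unsigned targets, unsigned bytes_per_pixel)
{
    return width * height * targets * bytes_per_pixel;
}
```

Two 512x512 RGBA16F targets come to 4 MB before you even count a depth buffer, which is part of why deferred shading is bandwidth- and memory-hungry on this generation of hardware.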


quote:
Why not use more, lower precision, pixels for each object?



I am using the lowest-precision floating-point buffer available (16 bits per channel, i.e., 64 bits per pixel; sorry to say, but read the manual). Do you mean standard 8-bit textures? The answer to that is you can't, the reason being that there is not enough precision.

Judging from your last two posts, I don't think you know what you're talking about. Don't claim that floating-point buffers "have no use" and that "you don't gain anything from them." You're spreading false information that might throw off other visitors to this forum. If you don't know what you're talking about, then don't post crap, or at least state that what you wrote is "only to the best of your knowledge."


http://www.beyond3d.com/articles/deflight/
http://www.delphi3d.net/articles/viewarticle.php?article=deferred.htm
http://oss.sgi.com/projects/ogl-sample/registry/ATI/pixel_format_float.txt
http://oss.sgi.com/projects/ogl-sample/registry/ATI/texture_float.txt
http://oss.sgi.com/projects/ogl-sample/registry/ATI/draw_buffers.txt


[edited by - _walrus on September 24, 2003 2:01:31 PM]

quote:
Original post by _walrus
actually a newer chip (the RV350) that was designed to be more cost-effective to produce and sell, but unfortunately it's slower than the 9500/9700/9800, so it's not the top-of-the-line card.


The 9600 uses the same chip as the 9800; it is a top-end chip that is simply clocked lower, with four pixel pipelines locked out, and usually paired with lower-grade components (e.g. slower RAM).

As for your problem, I would expect a speed hit when using them; however, I wouldn't expect it to be so drastic! It may simply be a misconfiguration in your pBuffer's pixel format, but without actually having used floating-point buffers, I can't say for sure.

-----------------------
"When I have a problem on an Nvidia, I assume that it is my fault. With anyone else's drivers, I assume it is their fault" - John Carmack

Hey Maximus, thanks for the reply.


OffTopic:

quote:
The 9600 uses the same chip as the 9800; it is a top-end chip that is simply clocked lower, with four pixel pipelines locked out, and usually paired with lower-grade components (e.g. slower RAM).


You're right about the four pixel pipelines, but the RAM and core clocks are actually higher than the 9800 Pro's (from what I've read). From what I've gathered, it's a totally new design compared to the R300: the 9600 uses a 0.13u process as opposed to the 9500/9700/9800's larger 0.15u core. The 9600's core actually runs at 400 MHz (with lots of room for overclocking, if you're into that) while the 9800 runs at 380 MHz.

EndOffTopic


quote:
As for your problem, I would expect a speed hit when using them, however I wouldnt expect it to be so drastic! It may simply be a simple misconfiguration in your pBuffers pixel format, but without actually having used floating point buffers, I cant say for sure.


Yeah, I agree; it could be a configuration problem, and that was my first assumption, but I double-checked and it looked right (I'll tweak the parameters some more when I get a chance). Do any other 9600 users get slow floating-point buffer performance?




[edited by - _walrus on September 24, 2003 1:52:57 PM]

_walrus is right: the 9600 is NOT the same chip as the 9500, 9700, or 9800, period. The 9500 Pro was a 9700 that was stripped down and slower (it had 2 pipelines TURNED OFF, but not physically removed). The 9600 was then created to be (theoretically) better than the 9500 (for people who hadn't re-enabled the 9500 to act like a 9700), but the 9600 is a much smaller core and therefore cheaper (it also has half the shader ability of the 9700).

Now, first of all, any program should run great on a 9500, 9600, 9700, or 9800, or else you are probably targeting TOO HIGH. It should also run well on anything GeForce FX 5600 or higher. Otherwise, you are going to alienate all the early adopters of these floating-point-capable cards.

As for why to use floating point: the obvious reason, Deyja, is image quality, specifically dynamic range and the absence of banding effects. When you perform operations on numbers and constantly truncate the result to the final precision at each step, you accumulate enormous round-off error (proportional to the number of steps in the operation). When you instead use an intermediate form with greater precision and only convert in the last stage (in this case, in the D/A converter for the signal), you get a much higher-quality result. I guarantee that Quake 3 or any game from its era could be rewritten to use floating-point pixels (and of course intermediate math and artwork) and the result would be a much better-looking game (with larger memory usage and lower performance, of course).

For those who think this is crazy: I still remember when games went from 8-bit palette color to 16-bit color and the quality went DOWN, a lot, because the 16-bit color spectrum, with blends and effects applied, simply does not have enough range to express the lights and darks in an image well. You get a game where most of the detail sits in only one density range. Today that is getting less and less acceptable. We expect to see detail in the shadows and the highlights, not to have everything washed out or underexposed whenever it isn't right in the middle of the spectrum.

