Good idea?

3 comments, last by d000hg 21 years, 9 months ago
You know the thing in D3D where you tile many textures into one big texture to avoid lots of costly SetTexture() calls? Well, why can't D3D support texture-coordinate wrapping within a sub-region of that texture? It isn't difficult to implement really if you thought about it at design time. John 3:16
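For anyone unfamiliar with the technique, here is a minimal sketch of the atlas idea, assuming a D3D8-style fixed-function setup. The AtlasRegion type and RemapUV helper are illustrative, not part of D3D:

// Pack several textures into one atlas, remap each quad's UVs into its
// sub-rectangle, and render the whole batch with a single SetTexture() call.
struct AtlasRegion { float u0, v0, u1, v1; };   // sub-rectangle inside the atlas

// Remap a [0,1] UV pair into one region of the atlas.
inline void RemapUV(const AtlasRegion& r, float u, float v,
                    float& outU, float& outV)
{
    outU = r.u0 + u * (r.u1 - r.u0);
    outV = r.v0 + v * (r.v1 - r.v0);
}

// Usage (device/vertex setup omitted):
//   device->SetTexture(0, atlasTexture);             // once for the whole batch
//   RemapUV(regions[i], quadU, quadV, vtx.u, vtx.v);  // per vertex, at build time
//   device->DrawPrimitive(...);                       // one draw, many "textures"

The catch, of course, is exactly what this thread is about: UVs outside [0,1] can't wrap inside the sub-rectangle.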
Yes, you're absolutely right, Direct3D should have support for it. As a matter of fact, you should be able to easily set texture regions so you could tell D3D where the texture you want is. The unfortunate side is I don't think hardware supports it.

The reason I would like it is because with lightmaps you end up setting a texture for pretty much every polygon that needs a lightmap. You can't easily put the lightmaps in one texture because when D3D does bilinear filtering, it might take samples from parts of your other lightmaps. I had an idea where you separate your lightmaps so that the samples don't overlap; it would probably work, but I never got a chance to test it. You're right though, it would be a good idea for future hardware.
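For reference, a rough sketch of that padding idea (all names are illustrative): inset each lightmap's UVs by half a texel and keep a border of duplicated edge texels, so a bilinear fetch can never blend in a neighbouring lightmap.

struct LightmapSlot { int x, y, w, h; };        // placement in the atlas, in texels

// Compute UVs for a lightmap slot, inset by half a texel on every side so
// bilinear filtering stays inside the slot.
void SlotToUV(const LightmapSlot& s, int atlasSize,
              float& u0, float& v0, float& u1, float& v1)
{
    const float half = 0.5f / atlasSize;
    u0 = (float)s.x           / atlasSize + half;
    v0 = (float)s.y           / atlasSize + half;
    u1 = (float)(s.x + s.w)   / atlasSize - half;
    v1 = (float)(s.y + s.h)   / atlasSize - half;
}
// When packing, also duplicate each lightmap's edge texels into a one-texel
// border (or leave a one-texel gap) between slots.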

---
My Site
Come join us on IRC in #directxdev @ irc.afternet.org
It's hardware which is the limiting factor. When MS plan new revisions of the API, as well as listening to developers' suggestions, they talk to each of the hardware manufacturers in turn and find out what they'd like added to the API and what will be coming in their hardware. If something goes into the API that not all hardware supports, those cards can only provide it through some amount of emulation.

However, many apps aren't vertex processing limited (if you're getting less than half the published rate, you're limited in some other way), so the cost of extra vertices shouldn't hurt.

You should also be able to do it (and much more sophisticated things) if you have pixel-shader-level hardware which supports dependent texture reads. The caveat of course is that you burn a texture stage (it doesn't cost anything in performance, it just limits the novel blends you could do, though you could probably tile in one direction by using the alpha channel of the lookup texture for the coordinate offsets).
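For what it's worth, the per-pixel arithmetic such a setup would evaluate looks roughly like the following, written as plain C++ for clarity; on ps.1.x-class hardware the frac and the region offset/scale would come from shader instructions and a lookup texture, and all the names here are illustrative.

#include <cmath>

struct Region { float offsetU, offsetV, scaleU, scaleV; };  // atlas sub-rectangle

// Wrap a tiling UV (possibly far outside [0,1]) inside one sub-rectangle
// of the atlas, instead of wrapping over the whole texture.
void WrapIntoRegion(const Region& r, float u, float v,
                    float& outU, float& outV)
{
    float fu = u - std::floor(u);            // frac(u): repeats in [0,1)
    float fv = v - std::floor(v);
    outU = r.offsetU + fu * r.scaleU;
    outV = r.offsetV + fv * r.scaleV;
}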

--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


My app is definitely poly-pushing limited. There are 2 million polys in my map. If I want to use LOD I need to be able to wrap the texture, which means lots of calls to SetTexture.
If you could wrap subsections of a texture I could do it with one call.

Just how expensive are calls to Lock, SetTexture and SetRenderState anyway? Is there a maximum number of times you can call them per frame and still get decent performance?

And what are dependent texture reads exactly?


John 3:16
"Poly pushing" is the whole operation on the card. What I'm talking about is where, within the process of "pushing polygons", the bottleneck is.

On a GeForce2 with the absolute minimum of bottlenecks for texture fetch and fill rate, a real-world app should get around 20 million polys/second, i.e. that's its peak vertex processing and poly assembly ability. You can prove this to yourself by running BenMark5 on a properly configured "normal" PC.

It's much more likely that you're API limited (the way you call D3D), CPU limited (the amount of processing you do yourself, or that your code forces the API/driver to do), upload limited, etc...


How "expensive" Lock, SetTexture etc. are is totally relative to your app. I suggest you search these forums for previous discussions on the topic (I've been involved in at least 4 discussions here in the past). Also download the nVidia performance presentations.

As for what dependent texture reads are, umm read...the...documentation!!


--
Simon O'Connor
Creative Asylum Ltd
www.creative-asylum.com


This topic is closed to new replies.
