Storing complex functions as textures

Hi! I was wondering what the procedure is for using the vertex shader to compute complex functions, writing the results into a texture, and then having the fragment program look up the values. Is it:

1) Activate render-to-texture
2) Bind vertex shader
3) Bind fragment shader
4) Issue a draw call
5) Deactivate render-to-texture and other state
6) Bind the fragment program for the actual colouring and load the texture rasterised above
7) Draw

I am sure there is a lot missing here... hopefully someone can point it out. Thanks!

Edwinz
As far as I know, even with VS model 3.x, you cannot write to a texture.

So I would say: output your value in a texture coordinate, and in your pixel shader write the interpolated texture coordinate value out as a color into the texture.

So:
- Set render target
- Render to texture using a more or less tessellated quad. If you draw a simple quad into the texture, you'll end up with an interpolation (bilinear, linear, whatever you set) between the four corner values of your function, which is not what you want. What you want is one vertex corresponding to one pixel of your texture. That is not too difficult to do with pre-transformed vertices when you know the texture size, provided it stays fairly small.
- Set main render target
- Render as usual, sampling your texture normally. You will need to compute the correct texture coordinates to look up: the range of values your function's input covers maps onto the [0,1] texture coordinate range.

Sorry, this sounds a bit Direct3D-like, but I don't know the GL equivalents. I hope it gives you some more info; otherwise, just be a bit more specific.
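In GL terms the whole thing might look roughly like the sketch below (untested, just the idea). Here funcTex, TEX_W/TEX_H, windowWidth/windowHeight and drawTessellatedQuad are placeholders you'd supply yourself; instead of a true render target, it renders to the back buffer and copies the result into the texture with glCopyTexSubImage2D, which was the usual GL fallback before proper render targets.

    // funcTex is assumed to be a TEX_W x TEX_H texture created
    // earlier with glTexImage2D.
    glViewport(0, 0, TEX_W, TEX_H);              // match viewport to texture size
    glClear(GL_COLOR_BUFFER_BIT);

    // ... bind the vertex/fragment programs that evaluate the function ...
    drawTessellatedQuad(TEX_W, TEX_H);           // hypothetical helper: one vertex per texel

    glBindTexture(GL_TEXTURE_2D, funcTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,        // copy the back buffer into the texture
                        0, 0, 0, 0, TEX_W, TEX_H);

    glViewport(0, 0, windowWidth, windowHeight); // restore the main viewport
    // ... bind funcTex plus the colouring fragment program and draw as usual ...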

On the other hand, I think people usually render the function to the texture using the CPU and avoid refreshing this texture too often.

Vincent
Hi vprat!
Thanks for replying!

quote: Original post by vprat
What you want is one vertex corresponding to one pixel of your texture. That is not too difficult to do with pre-transformed vertices when you know the texture size, provided it stays fairly small.


Hmm... so the render-texture size should reflect how much information we want to store? Suppose the screen is 800x600; would that mean we need 800x600 vertices?

Thanks!
Edwinz

It depends on how much accuracy you want in the function. Say you want a 1D texture holding values of the function f(x) = x^2.

First you need to work out the bounds for x. Let's say you need to know f(x) for x in [0, 256].

Then you need to know how much accuracy is needed in the f(x) value: there are several texture formats, from simple 8 bits per component to 16 and now even 32 bits for floating-point textures. This will also depend on the hardware requirements you want to set.

From that, you decide the precision of the discretisation. Here, 256 vertices let you store a value for each integer x; 1024 let you store a sample every 0.25 units.

Then you decide the texture size. If it is smaller than the number of vertices, you lose information (that many vertices are wasted). If it is equal, the texture holds exactly the information you computed (the best case). If it is larger, values get interpolated before being stored in the texture, which is not what you want either: redundant information and an oversized texture.
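As a concrete (untested) sketch of the CPU route mentioned earlier, here is how you might fill a 256-sample 1D texture with f(x) = x^2 over [0, 256]; funcTex is an assumed, already-generated GL texture object, and the values are normalised by f(256) = 65536 so they fit an 8-bit [0,1] format:

    const int N = 256;
    float samples[N];
    for (int i = 0; i < N; ++i) {
        float x = 256.0f * i / (N - 1);   // texel index -> x in [0, 256]
        samples[i] = (x * x) / 65536.0f;  // f(x) / f_max -> [0, 1]
    }
    glBindTexture(GL_TEXTURE_1D, funcTex);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE, N, 0,
                 GL_LUMINANCE, GL_FLOAT, samples);
    // lookup in the shader: texcoord = x / 256.0, value = sampled * 65536.0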

I hope this answers your doubts.
Hi vprat!
Thanks a lot!

I'll try it now... maybe I'll be back with more questions later.
Thanks!
Edwinz
