gaurav khanduja

Loading Images - Urgent


Hi, I am trying to load (and render) 5000 images (TIFF format). Can anybody suggest the best way to load all of them as textures? (I am using 2D textures because I want to provide interactivity.) Please keep in mind that interactivity is very important for me. I will be loading nearly 500 images per viewport, with 10 viewports at a time. Thanks, GK

Hi,


We are carrying out a simulation of atoms and charge density. The result of each simulation is a 500 x 500 x 500 grid of points. From these I am generating 500 images (for each simulation), assigning colors based on the difference in charge at each point (between the new set and the original set).

I have to render the images generated from the simulation data and provide some operations on them. 500 images constitute a set. We have 10 sets showing the data in different states, which I plan to show in different viewports for comparison. So what do you suggest? How is this possible? I have been trying it, but after 500 images or so the system can't load any more due to texture memory constraints.

Thanks

I suggest keeping the data as a set of points and using a tessellated grid to draw it. That way you are loading a small amount of data, and it scales much better. If you assign an alpha to each point (perhaps a higher alpha for a higher delta), you could also display the set in 3D.
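A minimal sketch of that packing step, assuming the simulation output is available as a flat array of per-point deltas (the function name, the threshold, and the red-ramp coloring are my own inventions, not anything from the thread): keep only the points whose delta exceeds a threshold and pack them into an interleaved (x, y, z, r, g, b, a) array.

```cpp
#include <vector>
#include <cstddef>

// One interleaved vertex: position plus RGBA color, all floats.
struct PointVertex { float x, y, z, r, g, b, a; };

// Pack only the grid points whose |delta| exceeds 'threshold' into an
// interleaved array. 'delta' holds dim*dim*dim values in x-fastest order.
// Color is a simple red ramp; alpha grows with the delta so strong
// changes stand out (a hypothetical mapping -- adapt to your scheme).
std::vector<PointVertex> packPoints(const std::vector<float>& delta,
                                    std::size_t dim, float threshold)
{
    std::vector<PointVertex> out;
    for (std::size_t z = 0; z < dim; ++z)
        for (std::size_t y = 0; y < dim; ++y)
            for (std::size_t x = 0; x < dim; ++x) {
                float d = delta[(z * dim + y) * dim + x];
                float m = d < 0 ? -d : d;
                if (m <= threshold) continue;      // skip quiet points
                PointVertex v;
                v.x = float(x); v.y = float(y); v.z = float(z);
                v.r = 1.0f; v.g = 0.0f; v.b = 0.0f;
                v.a = m > 1.0f ? 1.0f : m;         // higher delta, higher alpha
                out.push_back(v);
            }
    return out;
}
```

The packed array could then be handed to glVertexPointer/glColorPointer with a stride of sizeof(PointVertex) and drawn with glDrawArrays(GL_POINTS, ...); points below the threshold never reach the card at all.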

I am currently working on a visualization tool that might help. It's an interactive tool designed to display large data sets in 3D in real time. So far the company I work with has been dealing with geological data, but the viewer is very flexible. If you're interested, send me an email at thereisnocowlevel@hotmail.com.

Cheers,
- llvllatrix

Quote:
Original post by gaurav khanduja: We are carrying out a simulation of atoms and charge density. The result of each simulation is a 500 x 500 x 500 grid of points. From these I am generating 500 images (for each simulation), assigning colors based on the difference in charge at each point (between the new set and the original set).


Here is your problem: 500 RGB images at a resolution of 512x512 (they must be padded to 512x512, since textures need power-of-two dimensions, and 500x500 doesn't give much different values anyway) give us 512*512*3*500 = 393,216,000 bytes, or 375 megabytes. So there is just no way even one data set will fit in texture memory, and you wanted ten of them; that's over 3.6 gigabytes, which means you would have to render directly from the hard drive.

Even the method that llvllatrix suggests takes up a lot of space, so you have to find a way to limit the displayed information somehow, perhaps by prerendering some of the data sets (creating one image from the 500, from a single viewpoint).
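The arithmetic above can be sketched as a quick budget helper (a back-of-the-envelope function of my own, not part of any API):

```cpp
#include <cstddef>

// Bytes needed to hold 'count' uncompressed images of w x h pixels with
// 'bpp' bytes per pixel (3 for RGB, 4 for RGBA), ignoring mipmaps.
std::size_t imageSetBytes(std::size_t w, std::size_t h,
                          std::size_t bpp, std::size_t count)
{
    return w * h * bpp * count;
}
```

imageSetBytes(512, 512, 3, 500) gives 393,216,000 bytes, i.e. 375 MB per set; ten such sets land at roughly 3.66 GB.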

Quote:

Here is your problem: 500 RGB images at a resolution of 512x512 (they must be padded to 512x512, since textures need power-of-two dimensions, and 500x500 doesn't give much different values anyway) give us 512*512*3*500 = 393,216,000 bytes, or 375 megabytes. So there is just no way even one data set will fit in texture memory, and you wanted ten of them; that's over 3.6 gigabytes, which means you would have to render directly from the hard drive.


Very true, you're at a loss for memory. Using my method, storing an entire dataset would take 500 * 500 * 500 * 7 (x,y,z,r,g,b,a) * 4 (sizeof(float)) = 3,500,000,000 bytes, or 350 megs, if the data point locations were not predictable. If they were, it would take 500 * 500 * 500 * 4 (r,g,b,a) * 4 (sizeof(float)) = 2,000,000,000 bytes, or 200 megs, and you would still max out your memory for 10 data sets. I'm not too sure how much a vertex array might help, having never used one.

I think you have two options: either reduce the sampling on your datasets, or have multiple computers displaying the different datasets using your visualization app. I think this shouldn't be too difficult, provided the visualization app works. So far the largest dataset we have loaded was on the order of 75 megs, ran in real time, and had 6 float components. While a visualization of your dataset may not run in real time (expect a lot of lag), it should at least run.

Cheers,
- llvllatrix

A little note on your numbers there:

3,500,000,000 bytes != 350 MB
3,500,000,000 bytes ≈ 3.26 GB

2,000,000,000 bytes != 200 MB
2,000,000,000 bytes ≈ 1.86 GB

Also, with the right texture compression and a reduction in data depth, you could reduce those 300 MB datasets to something more manageable like 50 MB, or even 25 or less if the colors are not that important. This depends a little on the data, but you might be able to use some kind of paletted texture or grayscale.
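For scale: S3TC/DXT compression works on 4x4 pixel blocks, with 8 bytes per block for DXT1 and 16 for DXT3/DXT5, so the compressed size is easy to estimate (a rough sketch of mine; real files add headers and mipmap levels on top):

```cpp
#include <cstddef>

// Compressed size of one w x h image under S3TC: 4x4 pixel blocks,
// 8 bytes per block for DXT1, 16 for DXT3/DXT5. Dimensions are
// rounded up to whole blocks, as the formats require.
std::size_t dxtBytes(std::size_t w, std::size_t h, std::size_t bytesPerBlock)
{
    std::size_t bw = (w + 3) / 4;
    std::size_t bh = (h + 3) / 4;
    return bw * bh * bytesPerBlock;
}
```

A 512x512 RGB image drops from 786,432 bytes uncompressed to 131,072 under DXT1, a 6:1 saving, which is roughly where the 50 MB figure above comes from.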

I have tried reducing the image resolution by a factor of 4, but it still seems impossible to fit all the data in memory. What do you guys suggest? And lc_overlord, it would be great if you could tell me what texture compression scheme you are talking about. I am a little new to this field, so don't be offended by my questions.

One thing more: I have reduced the resolution of the images; what I need now is a way to increase the resolution of a particular area of an image.

Thanks
Gaurav

You can use unsigned bytes for colors instead of full floats. But I think lowering the resolution would be very wise.

You said you're planning on displaying 10 sets at once. To get maximum resolution, each grid point should fall on a pixel, so you'd need an area of more than 500x500 pixels on screen for each data set. At 1600x1200 you might just get 6 sets on the screen at once. If you lower the data from 500x500x500 to 250x250x250 (making 250 images of 256x256), you not only need 1/8 as much memory (bringing a single dataset down to 64 MB), but you'll still be able to view maximum detail for all 10 datasets on screen at once. If you then use DDS files to get compression on the images, you might be able to get this done.
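The halving step could be sketched like this, assuming the volume is held as a flat array of single-byte values (one channel; run it once per channel for RGB). Each output voxel averages a 2x2x2 block, so one pass turns 500^3 into 250^3 and cuts memory to 1/8:

```cpp
#include <vector>
#include <cstddef>

// Halve a dim^3 volume (dim even) along every axis by averaging each
// 2x2x2 block of voxels; values are stored x-fastest as unsigned bytes.
std::vector<unsigned char> halveVolume(const std::vector<unsigned char>& v,
                                       std::size_t dim)
{
    std::size_t h = dim / 2;
    std::vector<unsigned char> out(h * h * h);
    for (std::size_t z = 0; z < h; ++z)
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < h; ++x) {
                unsigned sum = 0;
                for (std::size_t dz = 0; dz < 2; ++dz)
                    for (std::size_t dy = 0; dy < 2; ++dy)
                        for (std::size_t dx = 0; dx < 2; ++dx)
                            sum += v[((2*z+dz) * dim + (2*y+dy)) * dim + (2*x+dx)];
                out[(z * h + y) * h + x] = (unsigned char)(sum / 8); // block average
            }
    return out;
}
```

Averaging (rather than just dropping every other sample) keeps the downsampled charge data smooth instead of aliased.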

When it comes to compression I am a little new to the area myself, but depending on your data, DXT1, DXT3 or DXT5 compression could reduce the image size to about 25-50% without much loss of data.

Quote:

I have tried reducing the image resolution by a factor of 4, but it still seems impossible to fit all the data in memory.


If I understand you correctly, you are sampling a 3D space to produce a visualization? Would it be possible to define the space parametrically instead of sampling it? If you are dealing with magnetics, you might be able to define the fields using isosurfaces.

Cheers,
- llvllatrix

llvllatrix, I couldn't quite follow you. If you could describe it further, I would appreciate it. And I wrote you an email regarding what you are doing; probably you could tell me something about that too.

Regarding lowering the resolution, that is fine, but what should I do to enhance the resolution of a particular region? (For example, I clip the images, and now I want to enhance the image resolution at the clipped surface.)

And rick, could you tell me about the DDS file format?

I not only need to render the images; I should also be able to do some simple things like rotation, clipping, etc. in an interactive manner.

I would like to confirm with you guys: should I render the images as 2D textures (though even with 2D textures and lowered resolution, it doesn't seem I will be able to load all 10 sets), or something else? Please give me some ideas if not textures.
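If the full ten sets simply cannot stay resident, one possible compromise (my own suggestion, not something confirmed in the thread) is to keep only the most recently viewed slice textures in memory and reload the rest from disk on demand, which keeps interaction within a region fast even though the whole collection never fits. A minimal LRU bookkeeping sketch; the actual texture load/delete calls are left as comments because they depend on your setup:

```cpp
#include <list>
#include <map>
#include <cstddef>

// Keeps at most 'capacity' slice ids resident. Touching a slice moves it
// to the front of the list; the least recently used slice is evicted when
// the cache is full. In a real viewer, glTexImage2D would go where a
// slice is loaded and glDeleteTextures where one is evicted.
class SliceCache {
public:
    explicit SliceCache(std::size_t capacity) : cap(capacity) {}

    // Returns true if the slice was already resident (a cache hit).
    bool touch(int sliceId) {
        std::map<int, std::list<int>::iterator>::iterator it = index.find(sliceId);
        if (it != index.end()) {             // hit: move to front
            lru.erase(it->second);
            lru.push_front(sliceId);
            it->second = lru.begin();
            return true;
        }
        if (lru.size() == cap) {             // full: evict the oldest slice
            index.erase(lru.back());         // ...glDeleteTextures here
            lru.pop_back();
        }
        lru.push_front(sliceId);             // ...load image + glTexImage2D here
        index[sliceId] = lru.begin();
        return false;
    }

    std::size_t resident() const { return lru.size(); }

private:
    std::size_t cap;
    std::list<int> lru;
    std::map<int, std::list<int>::iterator> index;
};
```

With, say, a few hundred slices of budget, scrolling back and forth inside one set hits the cache almost every time, and only jumps to a cold set pay the disk-load cost.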

Thanks

What I think is that you should use only one image of 1x1 pixel and modulate its color while rendering it in each cell of the grid, based on the charge density or whatever. Also, to optimize, redraw only the portion which has changed.

The DDS file is basically what lc_overlord was talking about with the DXT1, DXT3 and DXT5 compression. Under OpenGL it's mostly known as S3TC (S3 Texture Compression). I've never worked with these myself, but as mentioned, they can give large gains.

