Medical data and SDL2 texture loading

Tomorrow I plan to go to the hospital to fetch tomography images and load them into my raytracer. However, I wonder: does SDL2 use, let's say, OpenGL to render images and then, after changing them, store the result back into surface->pixels, or is it its own implementation that doesn't depend on any GPU-based rendering and just reads pixels on the CPU? And whatever data it receives, it won't change it?

I ask because I don't know whether it automatically changes a texture's pixel data to be power-of-two sized, etc. Ideally I could just allocate an array and store the pixels myself, but there are so many image formats that I'm forced to use SDL for the loading.
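Something like this minimal sketch is what I have in mind - assuming SDL_image's IMG_Load, which as far as I understand decodes entirely on the CPU into a plain SDL_Surface (the file name is just a placeholder):

// Sketch: load one slice with SDL_image and copy its pixels into a plain array.
#include <SDL.h>
#include <SDL_image.h>
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    SDL_Init(SDL_INIT_VIDEO);
    IMG_Init(IMG_INIT_PNG);

    // IMG_Load decodes on the CPU into an SDL_Surface; no GPU is involved
    // and the dimensions stay exactly as stored in the file.
    SDL_Surface* slice = IMG_Load("slice_0001.png");   // hypothetical file name
    if (!slice) { std::printf("load failed: %s\n", IMG_GetError()); return 1; }

    // Convert to a known pixel format so the array layout is predictable.
    SDL_Surface* rgba = SDL_ConvertSurfaceFormat(slice, SDL_PIXELFORMAT_RGBA32, 0);
    SDL_FreeSurface(slice);

    // Copy the pixel rows into my own array, row pitch included.
    std::vector<unsigned char> pixels(rgba->h * rgba->pitch);
    std::memcpy(pixels.data(), rgba->pixels, pixels.size());
    std::printf("%d x %d, pitch %d\n", rgba->w, rgba->h, rgba->pitch);

    SDL_FreeSurface(rgba);
    IMG_Quit();
    SDL_Quit();
    return 0;
}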


There are capability flags you can query to find out what the card supports. The NPOT (non-power-of-two) texture extension has been in all cards since about 2005, and support has been in core OpenGL since 2.0 (2004).

That doesn't answer what happens if you try to load very large images, or images in formats SDL doesn't support.

Anything over 16384 pixels across is unlikely to be supported. While such images make for pretty pictures that can be zoomed, they aren't a natural fit for graphics hardware.
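If you want to know at runtime, you can simply ask the driver - a minimal sketch, assuming an OpenGL context is already current and a loader like GLEW or glad has been initialized:

// Sketch: query the largest texture dimensions the driver reports.
GLint maxTex2D = 0, maxTex3D = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex2D);      // per-side limit for 2D textures
glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &maxTex3D);   // per-side limit for 3D textures
printf("max 2D texture: %d, max 3D texture: %d\n", maxTex2D, maxTex3D);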

I assume you are not going to dynamically change the scanned data - in that case you might want to consider pre-processing your image data into some (sparse) octree; a rough sketch of such a build step follows below. That works well with raytracing. You'd then upload the octree as a non-image blob or as an array of 3D textures (the blob is easier to start with) and traverse it in your shaders. That has some nice benefits - you get a kind of cone tracing for free, and traversal can be (depending on the data) a lot faster.
You could even run marching cubes on the octree to get some nice normals for shading.
And you can easily save the preprocessed octree to a file so your program doesn't have to parse all the images on every startup.
I found that I actually *had* to use data structures like that, because large scans don't fit onto reasonable GPUs, and with a nice tree you can quantize the data a little and gain some nice "compression" ratios along the way.
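Just to give an idea, here is a very rough sketch of such a preprocessing step - the node layout and all names are made up for illustration, real implementations pack their nodes much tighter:

// Sketch: build a sparse octree over a dense scalar volume, skipping empty regions.
#include <cstdint>
#include <vector>

struct Volume {                       // dense scan data, e.g. one byte per voxel
    int size;                         // assumed to be a power-of-two cube of size^3 voxels
    std::vector<uint8_t> voxels;
    uint8_t at(int x, int y, int z) const { return voxels[(z * size + y) * size + x]; }
};

struct Node {
    int32_t child[8];                 // index into the node array, -1 = no child / empty
    uint8_t value;                    // quantized average of the region (useful for LOD / cone tracing)
};

// Returns the index of the created node, or -1 if the region is empty.
static int build(const Volume& v, std::vector<Node>& nodes,
                 int x, int y, int z, int size, uint8_t threshold)
{
    uint64_t sum = 0;
    bool empty = true;
    for (int k = 0; k < size; ++k)
        for (int j = 0; j < size; ++j)
            for (int i = 0; i < size; ++i) {
                uint8_t s = v.at(x + i, y + j, z + k);
                sum += s;
                if (s > threshold) empty = false;
            }
    if (empty) return -1;             // sparse: empty regions are simply not stored

    Node n;
    n.value = static_cast<uint8_t>(sum / (uint64_t(size) * size * size));
    for (int c = 0; c < 8; ++c) n.child[c] = -1;
    int index = static_cast<int>(nodes.size());
    nodes.push_back(n);

    if (size > 1) {
        int h = size / 2;
        for (int c = 0; c < 8; ++c) {
            int ci = build(v, nodes, x + (c & 1) * h, y + ((c >> 1) & 1) * h,
                           z + ((c >> 2) & 1) * h, h, threshold);
            nodes[index].child[c] = ci;   // re-index: the vector may have reallocated
        }
    }
    return index;
}

Call build(volume, nodes, 0, 0, 0, volume.size, threshold) once and you have the whole tree as a flat std::vector<Node> that you can serialize to disk or upload.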

I guess making an octree and splitting the data into a set of smaller textures will be good, but what if it exceeds the actual hardware limits? I need to find out how to render the whole set at once - I think rendering each cube from furthest to nearest and storing the current pixel data along with depth information will do. However, after reading about this voxel octree thing, I think they are sending something different to the vertex shader than vertices? And then the fragment shader rasterizes that? Or am I wrong?

Yes, you send a prepared sparse octree, basically an array of the tree's nodes (see the sketch below for what that upload could look like). You can use the vertex shader for some transformations and then use the fragment shader to raytrace it, or you use OpenCL/CUDA for the actual tracing, but fragment shaders will do.
A sparse tree uses (in most cases) a lot less data than the raw volume, and on top of that you get a kind of built-in multisampling for far-away nodes.
I advise reading this excellent paper from NVIDIA on the subject: https://www.nvidia.com/docs/IO/88889/laine2010i3d_paper.pdf
Also, if you do not need transparency (which I assume you don't with medical scans), you can raytrace front to back and stop at the first hit.
And if you only need to see the surface and don't have to dynamically slice the scan, you can store only the surface in the octree, drastically reducing its size.
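To make "an array of the tree's nodes" concrete, here is a minimal sketch of uploading such a flattened array as a shader storage buffer (requires OpenGL 4.3; the Node struct is the hypothetical one from the earlier sketch):

// Sketch: push the flattened octree nodes to the GPU so a fragment shader can traverse them.
GLuint ssbo = 0;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER,
             nodes.size() * sizeof(Node), nodes.data(), GL_STATIC_DRAW);
// Binding point 0 must match a "layout(std430, binding = 0) buffer ..." block in the shader.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

In practice you'd make every node field 32 bits wide so the layout matches std430, and in the fragment shader you'd walk the node array front to back, returning at the first hit.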

For a simple introduction, check out this CPU-only example: https://github.com/tunabrain/sparse-voxel-octrees

 

Should you find that SVOs aren't for you, consider using multiple compressed 3D textures. You prepare each "cube" by quantizing the non-surface areas of the scan; this lets the compression achieve better ratios without reducing the visual niceness too much.
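The upload for that route is straightforward - a small sketch of creating one such cube as a single-channel 3D texture (brick size and brickData are placeholders):

// Sketch: upload one cube/brick of the scan as a single-channel 3D texture.
const int brick = 256;                        // placeholder brick edge length
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // tightly packed single-byte voxels
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, brick, brick, brick, 0,
             GL_RED, GL_UNSIGNED_BYTE, brickData.data());   // brickData: brick^3 bytes
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// For actual compression you could pass a compressed internal format such as
// GL_COMPRESSED_RED_RGTC1 instead of GL_R8 and let the driver compress on upload.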

 
