Martin Perry

Marching cubes and edge detection with 3D edge detector

5 posts in this topic

I have a question. If I run classic Marching Cubes, I need to iterate over all voxels and calculate the isosurface in them. That means I need to decompress my data to get a raw stream (or use a compression scheme that allows random access; most of the schemes that also give high ratios are too slow to decode in real time).

So my thought: if I compress the data with a DCT (plus Huffman, which is fast to decompress), I can detect edges directly in the DCT domain (I found this article about it: http://www.sciencedirect.com/science/article/pii/S187770581106557X). Basically, I compress each slice as a single 2D image. During extraction, I do edge detection in the DCT domain, which gives me a point cloud of edge points, and I run a triangulation on that. The edge points lie on a grid, and if the point at [x,y] is an edge, one of its neighbours will most likely be detected as an edge as well, so I connect those points in a similar way to what Marching Cubes does (see the sketch below).
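For concreteness, here is a minimal sketch of the "connect neighbouring edge points in a grid" step in 2D. The names, the boolean edge mask, and the segment output are made up for illustration; they are not from the post or the paper.

// Link every edge point to edge points directly to its right and below,
// producing line segments that a later triangulation pass could stitch together.
#include <cstdio>
#include <vector>

struct Segment { int x0, y0, x1, y1; };

std::vector<Segment> connectEdgePoints(const std::vector<std::vector<bool>>& edge)
{
    std::vector<Segment> segments;
    const int h = (int)edge.size();
    const int w = h ? (int)edge[0].size() : 0;

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            if (!edge[y][x]) continue;
            if (x + 1 < w && edge[y][x + 1]) segments.push_back({x, y, x + 1, y}); // right neighbour
            if (y + 1 < h && edge[y + 1][x]) segments.push_back({x, y, x, y + 1}); // bottom neighbour
        }
    return segments;
}

int main()
{
    // Tiny hand-made edge mask: a short diagonal-ish contour.
    std::vector<std::vector<bool>> edge = {
        {false, true,  true,  false},
        {false, false, true,  true },
        {false, false, false, true },
    };
    for (const Segment& s : connectEdgePoints(edge))
        std::printf("(%d,%d) - (%d,%d)\n", s.x0, s.y0, s.x1, s.y1);
}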

 

It's just an idea, I haven't tested it. I would like to discuss whether it is possible and also hear some remarks on my approach (or whether it is totally stupid :-))

 

Thank you


You don't need to have the entire voxel space in memory to run an MC algorithm over it. In my own implementation I used 2D slices, so if the volume was WxHxD, the total memory requirement at any one point in time was only 3xWxH. At each point I needed the previous point and the next point along all 3 axes. Three 2D slices was the minimum (well, you could get by with a little less) I needed to store without recalculating (in your case, decompressing) a point more than once.
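For illustration, a minimal sketch of this sliding-window idea (my own reading of the post above, not Ryan_001's actual code). loadSlice and processCell are hypothetical stand-ins for the decompression step and the per-cell marching-cubes work.

#include <functional>
#include <vector>

using Slice = std::vector<float>; // W*H densities, row-major

void sweepVolume(int W, int H, int D,
                 const std::function<Slice(int z)>& loadSlice,
                 const std::function<void(int x, int y, int z,
                                          const Slice& s0, const Slice& s1)>& processCell)
{
    if (D < 2) return;
    Slice prev = loadSlice(0);
    for (int z = 0; z + 1 < D; ++z)
    {
        Slice next = loadSlice(z + 1);  // only two slices are needed for the cell corners;
                                        // keep a third if you also want central differences for normals
        for (int y = 0; y + 1 < H; ++y)
            for (int x = 0; x + 1 < W; ++x)
                processCell(x, y, z, prev, next); // cell spans slices z and z+1
        prev = std::move(next);
    }
}

int main()
{
    const int W = 4, H = 4, D = 4;
    auto loadSlice = [&](int z) {       // dummy density field: distance-squared from the centre
        Slice s((size_t)W * H);
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                s[(size_t)y * W + x] = float((x - 2) * (x - 2) + (y - 2) * (y - 2) + (z - 2) * (z - 2));
        return s;
    };
    int cells = 0;
    sweepVolume(W, H, D, loadSlice,
                [&](int, int, int, const Slice&, const Slice&) { ++cells; });
    return cells == 0; // just touches every cell once
}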

Edited by Ryan_001

Yes... but I still need to transfer the data from HDD to RAM (or VRAM). If it is uncompressed, that is the bottleneck. And if it is compressed, I need to decompress the DCT data back to raw data. So extraction directly in the DCT domain would be faster.


If we're talking about the normal DCT/IDCT used in image processing (aka JPEG), then it's not a memory issue. In JPEG (or MPEG) the data is decompressed from either a Huffman or arithmetic entropy-coded stream into its quantized coefficients. The coefficients take up as much space as the final output. So unless you plan to do the entropy decode on the video card (not entirely impossible in theory, though not easy I'd imagine), you're not saving much by working directly on the DCT coefficients as opposed to the post-transform data.

 

In the paper they are clearly performing the edge detection on the entropy-decoded coefficients, not on the compressed/entropy-encoded data stream. The IDCT is so fast compared to all the other operations that occur that it's kind of a non-issue, IMO.

 

You mention a few things, though, that might imply you don't fully understand the question. Not trying to be antagonistic (tone is very hard to convey in writing). The discrete cosine transform (like that used in JPEG or MPEG) in and of itself does not perform compression. I'll describe it briefly, but you may want to google JPEG; pictures can really make it easier to understand.

 

The original source image data (you called it RAW, I think) is divided into 8x8 blocks of pixels (merely for ease of computation; the DCT can use blocks of any size, but 8x8 is the de facto standard). Each pixel in the 8x8 block is a single 8-bit byte (in the case of color images each channel is compressed separately). Each block is transformed by the forward discrete cosine transform, abbreviated DCT. The output of the forward DCT isn't any smaller; in fact it can be larger. When I played around with the DCT (years ago) you could see that the result would sometimes exceed the 8 bits of the source image, so an in-place transform would cause issues if this wasn't taken into account. Some papers I read used pre-scaling, some used other methods (I was lazy and just used floats as the intermediate); each method had its pros and cons. Bottom line is, though, you're not saving memory by doing the forward DCT.

The coefficients after the forward DCT are then quantized, which reduces their magnitude and is the 'lossy' step of compression. There are all sorts of standard quantization matrices, but in general they tend to favor the top-left coefficients and throw out the bottom-right ones. After this the entropy encoding occurs. This is the step that actually compresses the data; up until now you've actually made things larger and more difficult to work with.
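To make that per-block pipeline concrete, here is a small sketch of the forward 8x8 DCT followed by quantization. The quantization rule below is a made-up placeholder, not a standard JPEG table, and the entropy-coding step is omitted entirely.

#include <cmath>
#include <cstdio>

constexpr int N = 8;

void forwardDCT8x8(const float in[N][N], float out[N][N])
{
    const float pi = 3.14159265358979f;
    for (int u = 0; u < N; ++u)
        for (int v = 0; v < N; ++v)
        {
            float sum = 0.0f;
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    sum += in[x][y]
                         * std::cos((2 * x + 1) * u * pi / (2 * N))
                         * std::cos((2 * y + 1) * v * pi / (2 * N));
            const float cu = (u == 0) ? 1.0f / std::sqrt(2.0f) : 1.0f;
            const float cv = (v == 0) ? 1.0f / std::sqrt(2.0f) : 1.0f;
            out[u][v] = 0.25f * cu * cv * sum; // coefficients can exceed the 8-bit input range
        }
}

void quantize(const float coeff[N][N], int out[N][N])
{
    for (int u = 0; u < N; ++u)
        for (int v = 0; v < N; ++v)
        {
            // Placeholder quantizer: coarser toward the bottom-right (high frequencies).
            // This division-and-round is the lossy step that throws information away.
            const int q = 8 + 4 * (u + v);
            out[u][v] = (int)std::lround(coeff[u][v] / q);
        }
}

int main()
{
    float block[N][N], coeff[N][N];
    int q[N][N];
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            block[x][y] = float(x * 8 + y) - 128.0f; // level-shifted test ramp
    forwardDCT8x8(block, coeff);
    quantize(coeff, q);
    std::printf("DC coefficient: %f, quantized: %d\n", coeff[0][0], q[0][0]);
}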


Thank you.

 

I know that the DCT does not reduce the size of the image and that the coefficients are larger (if not quantized). But my main concern is whether RLE / Huffman is fast to decompress on the GPU. Also, if I don't use 8x8 blocks and instead do the DCT over the full image resolution, I get better compression results, but the IDCT is then slower. That's why I thought of computing in the DCT domain.


You can go slightly larger with the DCT. I've played around with full-image DCTs. The problem is that coefficient quantization will cause obvious 'ringing' artifacts. At large block sizes you can start to see the cosine waves as they propagate through the image; you trade blockiness for ringing. IMHO it wasn't a good trade-off. Wavelets work much better if a high-quality transform is what you're looking for, with all sorts to choose from with different properties. For image compression they are pretty much superior in every way to the DCT. Also, wavelets are generally easier to analyse when it comes to things like edge detection, etc. Though for 'ease of use' I'd just do the full decompress + transform and have the MC algorithm work with the final pixel data.
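As an illustration of why wavelet coefficients are convenient for edge analysis, here is a sketch of a single-level 2D Haar transform (even dimensions assumed; the subband names are just this example's convention, not from the post). The detail subbands are directional differences, so a large magnitude there is essentially an edge response.

#include <cstdio>
#include <vector>

struct HaarLevel {
    std::vector<float> LL, LH, HL, HH; // each (w/2) x (h/2), row-major
    int w2, h2;
};

HaarLevel haar2D(const std::vector<float>& img, int w, int h)
{
    HaarLevel out;
    out.w2 = w / 2;
    out.h2 = h / 2;
    out.LL.resize((size_t)out.w2 * out.h2);
    out.LH.resize(out.LL.size());
    out.HL.resize(out.LL.size());
    out.HH.resize(out.LL.size());

    for (int y = 0; y < out.h2; ++y)
        for (int x = 0; x < out.w2; ++x)
        {
            const float a = img[(size_t)(2 * y)     * w + 2 * x];
            const float b = img[(size_t)(2 * y)     * w + 2 * x + 1];
            const float c = img[(size_t)(2 * y + 1) * w + 2 * x];
            const float d = img[(size_t)(2 * y + 1) * w + 2 * x + 1];
            const size_t i = (size_t)y * out.w2 + x;
            out.LL[i] = 0.5f * (a + b + c + d); // average (coarse approximation)
            out.LH[i] = 0.5f * (a - b + c - d); // horizontal differences -> vertical edges
            out.HL[i] = 0.5f * (a + b - c - d); // vertical differences -> horizontal edges
            out.HH[i] = 0.5f * (a - b - c + d); // diagonal detail
        }
    return out;
}

int main()
{
    // A vertical step edge inside the left 2x2 blocks.
    std::vector<float> img = {
        0, 10, 10, 10,
        0, 10, 10, 10,
        0, 10, 10, 10,
        0, 10, 10, 10 };
    HaarLevel lv = haar2D(img, 4, 4);
    std::printf("LH at the edge: %f, LH in the flat region: %f\n", lv.LH[0], lv.LH[1]);
}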

 

That said... do you think you can get a good Huffman and/or arithmetic decoder + RLE expansion running quickly on a GPU? The very nature of it would seem to suggest it would be difficult, but to be honest I've never tried it or read anything on it. If you attempt it, post it up here somewhere; I'd be curious how it goes.

