Drawing only on-screen tiles

2 comments, last by Demus79 17 years, 8 months ago
Hi all,

Suppose I have a large isometric map which covers an area much larger than the screen. I have offset_x and offset_y variables to enable scrolling. How can I tell which tiles are not on-screen, and thereby stop wasting time drawing them? I can grasp how this would be done for a regular 2D grid, but not for isometric.

Here's a bit more detail on how I'm doing things. I'm drawing my isometric map the 'diamond' way, not the 'staggered' way. I'm calculating the on-screen coordinates of a tile by taking its coordinates in my regular 2D array and multiplying by the matrix:

 0.5  0.5
-0.5  0.5

and then scaling the result by the size of my tiles. Hopefully this makes sense. It works, anyway. So if anybody can let me know how to solve this little problem, I'll be grateful!
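For concreteness, that transform can be sketched like this. This is a minimal sketch assuming standard 2:1 tiles of 64x32 pixels and that positive offsets scroll right/down; the tile size, function names, and the per-tile visibility check are all illustrative, not the poster's actual code:

```cpp
// Assumed tile dimensions (a standard 2:1 diamond); substitute your own.
const int TILE_W = 64;
const int TILE_H = 32;

// The diamond projection described above: multiply the tile coordinate by
// the matrix [ 0.5  0.5 ; -0.5  0.5 ], scale by the tile size, then apply
// the scroll offset.
void tileToScreen(int tileX, int tileY, int offsetX, int offsetY,
                  int &screenX, int &screenY)
{
    screenX = (tileX + tileY) * (TILE_W / 2) - offsetX;
    screenY = (tileY - tileX) * (TILE_H / 2) - offsetY;
}

// A tile is worth drawing if its screen-space bounding box overlaps the
// viewport at all.
bool tileOnScreen(int screenX, int screenY, int screenW, int screenH)
{
    return screenX + TILE_W > 0 && screenX < screenW &&
           screenY + TILE_H > 0 && screenY < screenH;
}
```

With the transform written out like this, a per-tile visibility test falls straight out of the transformed coordinates.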
I don't really see what your problem is. If you know the coordinates of a tile, and you know what transformation you're using, then why can't you just look at the transformed coordinates vs the offsets and see if the tile is visible?
Quote:Original post by SiCrane
I don't really see what your problem is. If you know the coordinates of a tile, and you know what transformation you're using, then why can't you just look at the transformed coordinates vs the offsets and see if the tile is visible?


Do you mean to transform and test the coordinates of every tile in the map?

Depending on the size of the OP's tilemap, a faster way might be to reverse-transform the coordinates of the four corners of the screen (or at least two diagonally opposite corners) into tile coordinates, and only consider the tiles that lie in the resulting quadrilateral.
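A sketch of that reverse transform, assuming a 64x32 diamond projection of the form sx = (tx + ty) * TILE_W/2 and sy = (ty - tx) * TILE_H/2 (the tile size and helper names are assumptions for illustration):

```cpp
#include <algorithm>
#include <cmath>

const int TILE_W = 64;   // illustrative 2:1 tile size
const int TILE_H = 32;

// Reverse the diamond transform: map a screen point (plus the scroll
// offsets) back to fractional tile coordinates.
void screenToTile(double screenX, double screenY, int offsetX, int offsetY,
                  double &tileX, double &tileY)
{
    double sx = screenX + offsetX;
    double sy = screenY + offsetY;
    tileX = sx / TILE_W - sy / TILE_H;
    tileY = sx / TILE_W + sy / TILE_H;
}

// Tile-space bounding box of the viewport: transform the four corners
// and take the min/max on each axis, padded by one tile for safety.
void viewportTileBounds(int offsetX, int offsetY, int screenW, int screenH,
                        int &txMin, int &tyMin, int &txMax, int &tyMax)
{
    double xs[4], ys[4];
    double cx[4] = {0, (double)screenW, 0, (double)screenW};
    double cy[4] = {0, 0, (double)screenH, (double)screenH};
    for (int i = 0; i < 4; ++i)
        screenToTile(cx[i], cy[i], offsetX, offsetY, xs[i], ys[i]);
    txMin = (int)std::floor(*std::min_element(xs, xs + 4)) - 1;
    txMax = (int)std::ceil (*std::max_element(xs, xs + 4)) + 1;
    tyMin = (int)std::floor(*std::min_element(ys, ys + 4)) - 1;
    tyMax = (int)std::ceil (*std::max_element(ys, ys + 4)) + 1;
}
```

Note that the bounding box is coarser than the exact quadrilateral - it still contains off-screen tiles in the corners - but it already cuts the work down from the whole map to roughly the visible area.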

In fact, if you imagine your tilemap as viewed in tile-space (i.e. so it appears as a regular grid), the screen would then appear as a rotated rectangle. Determining which tiles to draw is then basically the same problem as determining which pixels to fill when rasterizing a polygon onto the screen. It's actually somewhat simpler, because the gradient of the edges is known in advance (based on the ratio of your tiles - for my standard 32x64 tiles, it was 0.5 IIRC).

Sorry, it's hard to explain without a diagram. Draw a grid, then draw a rectangle rotated 45 degrees (for standard 2:1 ratio tiles) on top of it. You know the corners of the rectangle, so start with a strip one tile wide at the top-most vertex and walk horizontal strips downwards, widening each strip by one tile per row. Once you hit the second-from-the-top vertex, the strip length stays constant, but each strip is offset by one unit per row. Once you hit the third-from-the-top vertex, each strip shrinks again, until you reach the bottom-most vertex and you're back to a strip one tile wide.
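The strip walk above can be sketched as a loop. In tile space the viewport edges become constant bounds on u = tx + ty and v = ty - tx, so each row's strip is just the intersection of two tx intervals. This is a sketch under the assumption of a 64x32 diamond projection sx = (tx + ty) * TILE_W/2 - offsetX, sy = (ty - tx) * TILE_H/2 - offsetY; all names and sizes are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

const int TILE_W = 64;   // illustrative 2:1 tile size
const int TILE_H = 32;

// List the (tileX, tileY) pairs that can intersect a screenW x screenH
// viewport scrolled to (offsetX, offsetY). A one-tile margin is padded
// on every side so partially visible edge tiles are not dropped.
std::vector<std::pair<int, int>> visibleTiles(int offsetX, int offsetY,
                                              int screenW, int screenH,
                                              int mapW, int mapH)
{
    // Viewport bounds in the diagonal coordinates u = tx+ty, v = ty-tx.
    double uMin = 2.0 *  offsetX            / TILE_W;
    double uMax = 2.0 * (offsetX + screenW) / TILE_W;
    double vMin = 2.0 *  offsetY            / TILE_H;
    double vMax = 2.0 * (offsetY + screenH) / TILE_H;

    // ty = (u + v) / 2, so the row range follows from the u/v bounds.
    int tyLo = std::max(0,        (int)std::floor((uMin + vMin) / 2) - 1);
    int tyHi = std::min(mapH - 1, (int)std::ceil ((uMax + vMax) / 2) + 1);

    std::vector<std::pair<int, int>> tiles;
    for (int ty = tyLo; ty <= tyHi; ++ty) {
        // u bound: uMin - ty <= tx <= uMax - ty
        // v bound: ty - vMax <= tx <= ty - vMin
        int lo = std::max(0,        (int)std::floor(std::max(uMin - ty, ty - vMax)) - 1);
        int hi = std::min(mapW - 1, (int)std::ceil (std::min(uMax - ty, ty - vMin)) + 1);
        for (int tx = lo; tx <= hi; ++tx)
            tiles.push_back(std::make_pair(tx, ty));
    }
    return tiles;
}
```

The cost depends only on how many tiles are on screen, not on the size of the map.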

The advantage is that no matter what size your tilemap is, you only consider the tiles that will be onscreen.

You have to be careful, though, to make sure you get every tile. When I was doing mine I wrote a little interactive C# app that let me manually drag the screen-rectangle around the tilemap and highlight which tiles were being considered for display, which I found handy.



If each tile is only made of 2 triangles, don't waste time checking the visibility of every individual tile. If you know your camera frustum, you can always do a bounds check against it.

Anyway, I suggest creating bigger patches, perhaps 8x8 tiles, and checking the visibility of the whole patch (within the patch you can arrange the tile drawing in whatever order is most efficient for the 3D hardware).
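That patch test can be sketched in 2D terms as a bounding-rectangle check - a real 3D setup would test the patch's bounding volume against the camera frustum instead. This assumes an 8x8 patch and a 64x32 diamond projection sx = (tx + ty) * TILE_W/2, sy = (ty - tx) * TILE_H/2; all sizes and names are illustrative:

```cpp
#include <algorithm>

const int TILE_W = 64;   // illustrative 2:1 tile size
const int TILE_H = 32;
const int PATCH  = 8;    // tiles per patch side

struct Rect { int x0, y0, x1, y1; };

// Screen-space axis-aligned bounds of the patch whose top-left tile is
// (px * PATCH, py * PATCH). The extreme corners of the diamond come from
// the min/max of tx + ty (horizontal) and ty - tx (vertical).
Rect patchBounds(int px, int py)
{
    int tx0 = px * PATCH, ty0 = py * PATCH;
    int tx1 = tx0 + PATCH - 1, ty1 = ty0 + PATCH - 1;
    Rect r;
    r.x0 = (tx0 + ty0) * (TILE_W / 2);            // leftmost tile corner
    r.x1 = (tx1 + ty1) * (TILE_W / 2) + TILE_W;   // rightmost tile corner
    r.y0 = (ty0 - tx1) * (TILE_H / 2);            // topmost tile corner
    r.y1 = (ty1 - tx0) * (TILE_H / 2) + TILE_H;   // bottommost tile corner
    return r;
}

// Draw the whole patch only if its bounds overlap the scrolled viewport.
bool patchVisible(const Rect &r, int offsetX, int offsetY,
                  int screenW, int screenH)
{
    return r.x1 > offsetX && r.x0 < offsetX + screenW &&
           r.y1 > offsetY && r.y0 < offsetY + screenH;
}
```

The bounds can be precomputed per patch, so the per-frame cost is one rectangle test per patch instead of one test per tile.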

Cheers !

