I'm using HGE to render a one-dimensional array of tiles. Each tile in this array stores vertex, color and texture information, which is pre-calculated (only once, before the first render cycle) and does NOT change. Instead, I have implemented a virtual camera system, storing an X/Y pixel offset + view width/height information, allowing a tilemap to be rendered at any given offset without actually changing the vertices of any tiles. This means that tilemaps are always located at 0,0 world-space coordinates, but the user is allowed to scroll through the map using the camera system.
I had this system working rather nicely for a while, until it was time to apply a few optimizations. One of these was calculating which tiles are within range of the camera view and rendering only those. This optimization nearly doubled my rendering speed, but the downside is that I am still looping through each and every tile in the array just to perform the visibility test. For smaller maps, this wasn't a big deal. But for larger maps, the performance hit is almost unbearable... and this is only for a single layer of tiles!
So, given the following variables:
- One-dimensional Tile Array (stores vertex info)
- Camera X Offset (pixel, world-space)
- Camera Y Offset (pixel, world-space)
- Camera View Width (pixel, screen-space)
- Camera View Height (pixel, screen-space)
- Tilemap Size (logical # of columns & rows)
- Tile Dimensions (logical width & height, in pixels)
How could I create a loop which ONLY iterates through tiles within range of the camera's current view?
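Since the map is axis-aligned and the tiles are a uniform size, the visible range can be computed directly from the camera rectangle with integer division, with no per-tile test at all. Here is a minimal sketch; the original post doesn't give variable names, so `camX`/`camY` (world-space pixel offset), `viewW`/`viewH` (view size in pixels), `mapCols`/`mapRows` (logical map size), and `tileW`/`tileH` (tile size in pixels) are placeholders for whatever the real code uses:

```cpp
#include <algorithm>

struct TileRange {
    int firstCol, firstRow, lastCol, lastRow;
};

// Convert the camera rectangle from pixels to tile indices, clamped to
// the map bounds. The last visible column/row is the tile containing
// the view's right/bottom edge (hence the "- 1").
TileRange VisibleTiles(int camX, int camY, int viewW, int viewH,
                       int mapCols, int mapRows, int tileW, int tileH)
{
    TileRange r;
    r.firstCol = std::max(0, camX / tileW);
    r.firstRow = std::max(0, camY / tileH);
    r.lastCol  = std::min(mapCols - 1, (camX + viewW - 1) / tileW);
    r.lastRow  = std::min(mapRows - 1, (camY + viewH - 1) / tileH);
    return r;
}

// The render loop then touches only the visible tiles. With a
// one-dimensional array, the index of the tile at (col, row) is
// row * mapCols + col, so the loop looks roughly like:
//
//   TileRange r = VisibleTiles(camX, camY, viewW, viewH,
//                              mapCols, mapRows, tileW, tileH);
//   for (int row = r.firstRow; row <= r.lastRow; ++row)
//       for (int col = r.firstCol; col <= r.lastCol; ++col)
//           RenderTile(tiles[row * mapCols + col]);
```

With this approach the cost per frame depends only on how many tiles fit in the view, not on the total map size, so large maps should render at the same speed as small ones.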