OK, I'm trying to come up with a good way of shrinking integer-sized regions to fit into a certain amount of space.
Say my regions have the following pixel sizes [100][100][200], and I want to fit them in a space, say 330 pixels in size.
The idea being they will each scale down nicely to look something like [83][83][164] (adds up to 330).
It must use up all the space available to it, but it doesn't matter if things are a small amount off. I'd strongly prefer to avoid converting to floating point and back.
Anyone have any ideas?
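To show why the obvious approach isn't enough: scaling each region independently truncates under integer division, so the results come up short of filling the space. A quick sketch, using the same pseudocode names as my code below:
for( /* each element, e */ )
{
    // straight proportional scale - integer division truncates
    e.final = (e.size * available) / normal;
}
// [100][100][200] into 330 gives [82][82][165] = 329, one pixel short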
For anyone interested, my current (broken) algorithm looks something like this:
// The values in the comments are for [100][100][200]
// The greatest common divisor of the region sizes
int gcd = FindGCD();             // 100
int normal = TotalSize();        // sum of the region sizes: 400
int available = AvailableSize(); // space to fill: 330

// "chunk" is the size of the big chunks to allocate to each element
int chunk = available / (normal / gcd); // 82

// "spare" is the size remaining after the allocation of chunks
int spare = available % (normal / gcd); // 2

for( /* each element, e */ )
{
    e.final = chunk * (e.size / gcd);

    // hand out "spare" space first-come-first-served
    e.final += (e.size / gcd);
    spare -= (e.size / gcd);
    if(spare < 0) // if we use up more spare than we have, give it back
    {
        e.final += spare;
        spare = 0;
    }
}
Anyway, that all works nicely and gets the results [83][83][164], but once I tried it on some program-generated data it fell apart, because the GCD was way too small (like, 1).
Say, for instance, the sizes become [100][100][201]; then the result is something like:
gcd = 1;
normal = 401;
available = 330;
chunk = available/(normal/gcd) = 0;
spare = available%(normal/gcd) = 330;
And since the spare is handed out first-come-first-served, the result is something like [100][100][130], which fills up the space but is about 70 pixels off in total.
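One GCD-free idea (just a sketch, untested) would be to keep a pair of running totals, so the truncation error can never build up past a single pixel between elements; the one catch is that seen * available has to fit in an int:
int seen = 0; // input size consumed so far
int used = 0; // output pixels handed out so far
for( /* each element, e */ )
{
    seen += e.size;
    // truncate the cumulative target once, instead of per element
    int target = (seen * available) / normal;
    e.final = target - used;
    used = target;
}
// [100][100][200] into 330 gives [82][83][165] = 330
// [100][100][201] into 330 gives [82][82][166] = 330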
So yeah - any ideas?