neilkenny

Algorithm for solving 3d puzzle

Recommended Posts

I am currently trying to write a program to make a 3D version of a puzzle I own. The idea of the puzzle is to fit blocks of different shapes and sizes into a cube without having any parts sticking out at the end. There are thirteen different pieces, each made up of a number of unit cubes, and each piece can be rotated into up to twenty-four orientations. The size of the box is 4 units * 4 units * 4 units. Here are pictures of the pieces in the puzzle.

Each piece can be rotated about the X, Y or Z axis by 90 degrees at a time. Seven of the pieces have 24 different orientations, five of the pieces have 12 (because they are the same shape when turned upside down), and piece12 (the last red piece shown on the web page above) has only 3. I have also figured out how many different positions each piece can be placed in within the box: piece0: 432, piece1: 216, piece2: 432, piece3: 384, piece4: 192, piece5: 432, piece6: 216, piece7: 324, piece8: 216, piece9: 432, piece10: 432, piece11: 576, piece12: 48.

I hope to write different algorithms to try to solve the puzzle and compare how each of them performs. I am fairly new to this type of problem, however, and would like a few suggestions about which types of algorithms would be most useful. I have read up on genetic algorithms but find it hard to see how I could apply one to this kind of problem. This is my final-year project and I have been encouraged to seek help from experts and people who are knowledgeable in this area. Any suggestions would be greatly appreciated.
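The orientation counts above can be generated mechanically. Here is a minimal sketch (an illustration, not taken from any poster's code): a piece is represented as a set of unit-cube coordinates, and its distinct orientations are found by closing it under 90-degree rotations about X and Y, which together generate all 24 cube rotations.

```python
# Sketch: enumerate a piece's distinct orientations. A piece is a set of
# (x, y, z) unit-cube coordinates; two orientations are considered equal
# if they differ only by translation, so each is normalized first.

def rotate_x(cells):
    # 90-degree rotation about the X axis: (x, y, z) -> (x, -z, y)
    return {(x, -z, y) for (x, y, z) in cells}

def rotate_y(cells):
    # 90-degree rotation about the Y axis: (x, y, z) -> (z, y, -x)
    return {(z, y, -x) for (x, y, z) in cells}

def normalize(cells):
    # Translate so every coordinate is non-negative and starts at 0.
    mx = min(x for x, y, z in cells)
    my = min(y for x, y, z in cells)
    mz = min(z for x, y, z in cells)
    return frozenset((x - mx, y - my, z - mz) for (x, y, z) in cells)

def orientations(cells):
    # Rotations about X and Y generate the full 24-element rotation
    # group of the cube, so closing the piece under these two moves
    # reaches every distinct orientation; pieces with symmetries simply
    # yield fewer than 24 (e.g. 12 or 3, matching the counts above).
    seen = {normalize(cells)}
    frontier = [normalize(cells)]
    while frontier:
        cur = frontier.pop()
        for rot in (rotate_x, rotate_y):
            nxt = normalize(rot(cur))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

For example, a straight piece yields only 3 distinct orientations and a planar L-shape yields 12, consistent with the 24/12/3 pattern described above (the test pieces here are hypothetical stand-ins, not the actual 13).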

I guess the no-homework policy would apply to this thread. Anyway, just start with a simple depth-first search. If you write it right it should be fast enough and you won't need fancy algorithms.

Guest Anonymous Poster
Depth-first search is the simplest way. You just try placing each block somewhere in the puzzle. If a position is invalid (because it would overlap another block), you try the next position for that block. If there are no valid positions for a block, you backtrack and go back to trying other positions for the previous block.
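That backtracking scheme can be sketched in a few lines. This is a minimal illustration, assuming `placements[i]` holds precomputed occupancy bitmasks, one per valid orientation/position of piece i in the 4x4x4 box, with bit 16*z + 4*y + x set for each occupied cell:

```python
# Sketch of the depth-first/backtracking search described above.
# `placements[i]` is an assumed precomputed list of 64-bit masks, one
# per valid (orientation, position) of piece i inside the 4x4x4 box.

def solve(placements, piece_index=0, occupied=0, chosen=()):
    if piece_index == len(placements):
        return chosen                     # every piece placed: solved
    for mask in placements[piece_index]:
        if occupied & mask:               # overlaps an earlier piece,
            continue                      # so try the next position
        result = solve(placements, piece_index + 1,
                       occupied | mask, chosen + (mask,))
        if result is not None:
            return result
    return None                           # no position works: backtrack
```

Since the pieces together contain exactly 64 unit cubes, placing every piece without overlap automatically fills the box exactly.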

There is a paper by Donald Knuth that describes what he calls the dancing links algorithm, which can be downloaded from his web site. It is more complicated than a simple depth first search. It is directly applicable to this problem (the paper demonstrates the solutions it found for other block puzzle problems). It's probably one of the most sophisticated ways to solve this problem.

This is an interesting problem.

The key here seems to be that, due to the many possible orientations and positions of the blocks, a depth-first search might take a very long time. Yeah, looking at the possible position counts, it's pretty nasty.

I guess I shouldn't give you the whole genetic algorithm solution, but here are a few tips to help you on your way.

1. You probably don't need to use a binary-coded GA. Since you have a fixed number of blocks, your chromosome (candidate solution) can simply be the list of blocks, each in a chosen orientation and position.

2. Your fitness function can simply count, for a given set of block placements, how many unit cubes overlap and how many stick out of the 4x4x4 boundary. The goal, of course, is to reduce it to 0.

3. Mutation should be pretty straightforward, as you can simply choose a random orientation for a block.

4. Crossover shouldn't be tough once you get the above stuff to fall into place.

5. Remember, these are only ideas, you may need to hack them a bit as you learn more about the problem.
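A minimal sketch of tips 1 to 4 (all names here are assumptions, not the poster's code: `placements[i]` is taken to be a precomputed list of 64-bit occupancy masks covering each valid in-box orientation/position of piece i, so the out-of-bounds term of the fitness is avoided by construction and only overlaps are counted):

```python
import random

def fitness(chromosome, placements):
    # Tip 2 (simplified): count doubly-occupied unit cubes.
    # 0 means a valid packing.
    seen, overlaps = 0, 0
    for piece, gene in enumerate(chromosome):
        mask = placements[piece][gene]
        overlaps += bin(seen & mask).count("1")
        seen |= mask
    return overlaps

def mutate(chromosome, placements, rate=0.1):
    # Tip 3: re-roll a gene (a piece's orientation/position index)
    # with probability `rate`.
    return [random.randrange(len(placements[p]))
            if random.random() < rate else g
            for p, g in enumerate(chromosome)]

def crossover(a, b):
    # Tip 4: one-point crossover over the list of pieces.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]
```

A chromosome is just one index per piece; adding selection around these operators gives a complete GA loop.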

As for other approaches, there is one that comes to mind. You can read up on Tabu Search. Using the same fitness as above, you can probably get tabu search to work. You might even try simulated annealing too.

Just another tip, don't be afraid to hybridize. Good hybrid algorithms may sometimes work better than the original.

Guest Anonymous Poster
You should really try both. The depth first search algorithm, if implemented correctly, is guaranteed to find a solution if given enough time.

The genetic algorithm is a probabilistic algorithm, so it should also eventually find a solution if given enough time, but the parameters for tuning it may lead it to waste a lot of time repeatedly exploring states that aren't near the solution.

The dancing links algorithm is almost certainly better than the other options presented. More general constraint satisfaction problem solving algorithms can also do well.

A suggestion:
I think depth-first search might be quite feasible; you just need to make it clever about optimizations.

You may have opportunities to reject early on many of the search branches.
For example, if the placement of a new piece 'cuts off' a single empty cube (so it is surrounded by walls or filled cells), that configuration must be bad, since there will never be a way to fill that empty space. This idea can be extended: instead of looking only for an 'island' of a single empty cube, also watch for clusters of empty cubes that are smaller (in bounding dimensions) than the smallest available piece.
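The 'cut off' test can be implemented as a flood fill over the empty cells. A sketch, assuming the box is stored as a 64-bit occupancy mask with bit 16*z + 4*y + x per cell, and using a simple cube-count threshold `min_piece_size` (the stricter bounding-dimension test described above would extend this):

```python
# Sketch of the early-out test: flood-fill the empty cells and reject
# the position if any empty region holds fewer cells than the smallest
# remaining piece could occupy.

def has_dead_pocket(occupied, min_piece_size):
    seen = occupied                      # treat filled cells as visited
    for start in range(64):
        if seen >> start & 1:
            continue
        # Flood-fill one empty region starting from `start`.
        stack, size = [start], 0
        seen |= 1 << start
        while stack:
            cell = stack.pop()
            size += 1
            x, y, z = cell % 4, cell // 4 % 4, cell // 16
            for nx, ny, nz in ((x-1, y, z), (x+1, y, z), (x, y-1, z),
                               (x, y+1, z), (x, y, z-1), (x, y, z+1)):
                if 0 <= nx < 4 and 0 <= ny < 4 and 0 <= nz < 4:
                    n = 16*nz + 4*ny + nx
                    if not seen >> n & 1:
                        seen |= 1 << n
                        stack.append(n)
        if size < min_piece_size:
            return True                  # pocket too small to ever fill
    return False
```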

I'd also, when searching, only try positions for new pieces where they touch previously placed pieces. That should lower the number of branches by quite a lot! It doesn't exclude any valid configuration from the search; it's just a nice way to avoid repeating yourself. Plus, checking configurations where the new piece touches the previous ones also increases the chance of finding the 'cut off' empty cubes mentioned above.

Doing both of those at once makes the search more similar to what a human (me) would do when solving the puzzle: new pieces contact old ones and 'islands' of empty space are avoided, so you basically try to fill in space as compactly as possible.
If you want, you could even add a 'gravity' effect when placing new pieces (the first piece must touch the bottom of the cube, later pieces must rest on previous ones). That should help cull the branches even more, and help eliminate symmetries in the search (reduced to 1/6 the size, I'd bet).
Hey, if that approach works for a human, why not for the program as well?

Actually, what you guys are referring to is not exactly depth-first search; it's more like the backtracking search used for constraint satisfaction problems. It is not hard to implement, as its original purpose was to cut down on futile branching. However, you still run into massive thrashing problems.

For example, after placing the 4th piece successfully, you start placing the 5th, but you iterate through all possible positions and orientations for it and find that nothing works because of constraint violations down the line. So you backtrack and replace the 4th piece. In reality, though, the problem may be that the 2nd piece was wrong in the first place; to realise that, the search must grind through all possible 3rd, 4th and 5th piece combinations, or more, before coming back to change the 2nd piece. So, in the current problem, if piece0 goes in first and piece1 second, and piece1 is in the wrong position, you won't know until you have tried combinations of the remaining pieces. Even assuming the search only goes 3 pieces deeper before backtracking to change the second piece, you're still examining at least 192 x 48 x 216 possibilities, and that's a minimum.

Probably the better way, if you really want to use backtracking-style searches, is a weak-commitment search, which reportedly performs 2 to 10 times better than the best backtracking. But given the size of the search space, that's still going to take pretty long.

Sorry, I've rambled on a bit, but I just wanted to point out the real downside of backtracking-style searches, since we are looking at a search space of about 10^38, and pruning can only go so far.

This is an NP-complete problem, so even in the best case it will not be very efficient. It is an example of the "3D bin packing problem" for arbitrary solids.

Guest Anonymous Poster
Since you know the geometry of the pieces you have a few heuristics that can help you in cutting down the search quickly.

I would use backtracking with two improvements:
- Place the cross-shaped piece first, since it can only be put in essentially two places, and this will take care of all the symmetries in the solutions. Actually, I would consider these two options as separate puzzles and solve them separately.
- Divide the remaining 12 pieces in two groups of size k and 12-k. With the first group, generate all possible arrangements of the k pieces and store them in a fast data structure (a hash table is a good one to start with, which in many C++ compilers' STL implementations is provided as `hash_set'). All you need to store is a 64-bit pattern of what blocks are occupied, and remember that you can't use the five cubes used by the cross-shaped piece. Now start with an empty cube again and use backtracking to find all arrangements of the 12-k pieces. At the end of the search, check your data structure to see if the remaining spaces can be filled exactly with the other k pieces. Set k to 5 or 6 if your memory can take it.
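A sketch of this meet-in-the-middle scheme, under simplifying assumptions (the cross-shaped piece is ignored, and `placements[i]` is assumed to be a precomputed list of 64-bit occupancy masks for piece i; all names are illustrative, not the poster's code):

```python
# Sketch: store every non-overlapping arrangement of the first k pieces
# as a 64-bit occupancy mask, then let the backtracking search over the
# remaining pieces probe that set.

def build_table(placements, k):
    # All non-overlapping arrangements of pieces 0..k-1, as masks.
    table = {0}
    for i in range(k):
        table = {seen | m
                 for seen in table
                 for m in placements[i]
                 if not seen & m}
    return table

def search(placements, k, table, occupied=0, i=None):
    # Backtrack over pieces k..n-1; once they are all placed, the
    # remaining empty cells must match one stored arrangement exactly.
    i = len(placements) - 1 if i is None else i
    if i < k:
        # The full 4x4x4 mask is 2**64 - 1, so the complement of the
        # occupied cells is the space the first k pieces must fill.
        return (occupied ^ (2**64 - 1)) in table
    return any(not occupied & m and
               search(placements, k, table, occupied | m, i - 1)
               for m in placements[i])
```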

The idea of checking for isolated sections that can't possibly be filled with pieces (e.g., isolated single cubes) is probably too expensive to check. The idea of only placing pieces touching already-placed pieces is not going to reduce the branching factor much for this problem (the space to be filled is too small) and using a pre-established order for the pieces is probably faster.

Using the dancing-links idea that Knuth describes in his article is just a very smart implementation trick on backtracking, and it might be too much of a pain to implement, but you can try. It probably is faster.

Oh, and as for the name `depth-first search', that is exactly what this search is, if you consider the directed acyclic graph whose nodes are arrangements of a subset of the pieces and whose links represent the placement of an additional piece.

Guest Anonymous Poster
Quote:
Original post by alvaro
I would use backtracking with two improvements:
- Place the cross-shaped piece first, since it can only be put in essentially two places, and this will take care of all the symmetries in the solutions. Actually, I would consider these two options as separate puzzles and solve them separately.
- Divide the remaining 12 pieces in two groups of size k and 12-k. With the first group, generate all possible arrangements of the k pieces and store them in a fast data structure (a hash table is a good one to start with, which in many C++ compilers' STL implementations is provided as `hash_set'). All you need to store is a 64-bit pattern of what blocks are occupied, and remember that you can't use the five cubes used by the cross-shaped piece. Now start with an empty cube again and use backtracking to find all arrangements of the 12-k pieces. At the end of the search, check your data structure to see if the remaining spaces can be filled exactly with the other k pieces. Set k to 5 or 6 if your memory can take it.

The idea of checking for isolated sections that can't possibly be filled with pieces (e.g., isolated single cubes) is probably too expensive to check. The idea of only placing pieces touching already-placed pieces is not going to reduce the branching factor much for this problem (the space to be filled is too small) and using a pre-established order for the pieces is probably faster.

Using the dancing-links idea that Knuth describes in his article is just a very smart implementation trick on backtracking, and it might be too much of a pain to implement, but you can try. It probably is faster.

Oh, and for the name `depth-first search', this is exactly what this search is, if you consider the acyclic directed graph whose nodes are arrangements of a subset of the pieces and whose links represent the placement of an additional piece.


I wouldn't really call this artificial intelligence but more like a really nice optimization trick, if you can get it to work.

Quote:
I wouldn't really call this artificial intelligence but more like a really nice optimization trick, if you can get it to work.

I wouldn't call it AI either. I am just trying to solve a problem here, and I don't particularly care if you can call it AI or not.

Do you consider computer chess to be AI? A very similar trick is used there; databases are generated with all the exact scores for positions with only a few pieces left, and then they are queried during the search, whenever one of the branches happens to fall into a known endgame.

Quote:
Original post by alvaro
I would use backtracking with two improvements:
- Place the cross-shaped piece first, since it can only be put in essentially two places, and this will take care of all the symmetries in the solutions. Actually, I would consider these two options as separate puzzles and solve them separately.
- Divide the remaining 12 pieces in two groups of size k and 12-k. With the first group, generate all possible arrangements of the k pieces and store them in a fast data structure (a hash table is a good one to start with, which in many C++ compilers' STL implementations is provided as `hash_set'). All you need to store is a 64-bit pattern of what blocks are occupied, and remember that you can't use the five cubes used by the cross-shaped piece. Now start with an empty cube again and use backtracking to find all arrangements of the 12-k pieces. At the end of the search, check your data structure to see if the remaining spaces can be filled exactly with the other k pieces. Set k to 5 or 6 if your memory can take it.

The idea of checking for isolated sections that can't possibly be filled with pieces (e.g., isolated single cubes) is probably too expensive to check. The idea of only placing pieces touching already-placed pieces is not going to reduce the branching factor much for this problem (the space to be filled is too small) and using a pre-established order for the pieces is probably faster.

Using the dancing-links idea that Knuth describes in his article is just a very smart implementation trick on backtracking, and it might be too much of a pain to implement, but you can try. It probably is faster.

Oh, and for the name `depth-first search', this is exactly what this search is, if you consider the acyclic directed graph whose nodes are arrangements of a subset of the pieces and whose links represent the placement of an additional piece.


It's a nice idea, but then consider the space complexity involved; I'm not even sure you can set k to 3 or 4. In the current problem, say we group the first 3 pieces together and generate all possible combinations: that's 432 x 216 x 432 entries of 64 bits (8 bytes) each, so the required space would be 322,486,272 bytes, or about 307.5 MB. Throw in another piece and you'll be well into the multi-GB region. Even if you store only the feasible arrangements, you'll still need to check over 40 million candidates, and even if, optimistically, only 1% are feasible, storing all combinations of 4 pieces would still require close to a GB of memory. The only practical implementation I can think of is to dump everything to a file, use the Windows file-mapping feature to map the whole file into virtual memory, and let the OS worry about the paging.

Well, I wouldn't exactly say it's not AI. In a sense it is, because the field of AI involves all sorts of search algorithms, including depth-first search. The key is the heuristic that's used to prune the possibilities.

Quote:
Original post by WeirdoFu
Its a nice idea, but then consider the space complexity involved. I'm not even sure if you can set your k to 3 or 4. Given the current problem, say we group the first 3 pieces together and generate all the possible combinations, which is 432x216x432 and then 64-bit entries, that's 8 bytes per entry. So, the required space would be 322,486,272 bytes, or about 307.5MB. Through in another piece and you'll be clear into the multi-GB region. Even if you store only the feasible ones, you'll still need to check 40 million entries and optimistically if only 1% are feasible, storing all combinations of 4 pieces would still require close to a GB of memory. The only possible implementation for this I can think of is dump everything to file then use the windows filemapping feature to map the whole file into virtual memory and then let the OS worry about the address paging.


I hadn't made any computations on how big the table would have to be. There are some tricks by which you can save some space. For instance, you could divide the data into 65536 data structures, indexing by the first two bytes of the mask; those structures then only need to store 6 bytes per entry. In any case, there are more combinations than I had originally thought: 3 pieces is workable, and 4 might be possible with a lot of RAM and effort (1 GB of RAM is fairly standard these days, and 4 GB is not unheard of).
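The bucket-splitting trick can be sketched as follows (illustrative Python; the actual 6-bytes-per-entry saving would come from packing the low 48 bits into raw byte arrays in a language like C++, which the Python sets here only hint at):

```python
# Sketch: split each 64-bit occupancy mask into a 16-bit bucket index
# (the "first two bytes") and a 48-bit remainder, so each bucket only
# needs to store the remaining 6 bytes per entry.

def add(buckets, mask):
    hi, lo = mask >> 48, mask & (1 << 48) - 1
    buckets.setdefault(hi, set()).add(lo)

def contains(buckets, mask):
    hi, lo = mask >> 48, mask & (1 << 48) - 1
    return lo in buckets.get(hi, set())
```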

Quote:
Original post by alvaro
Quote:
I wouldn't really call this artificial intelligence but more like a really nice optimization trick, if you can get it to work.

I wouldn't call it AI either. I am just trying to solve a problem here, and I don't particularly care if you can call it AI or not.

Do you consider computer chess to be AI? A very similar trick is used there; databases are generated with all the exact scores for positions with only a few pieces left, and then they are queried during the search, whenever one of the branches happens to fall into a known endgame.


I think the key word here is search. If the solution uses a search, it's AI. On the other hand, it is often said that once we solve an AI problem, it's not AI anymore. ...Maybe AI is in the eye of the beholder.

This is a problem that falls clearly within the realm of constraint satisfaction problems, which are a popular area of AI research. I would suggest that MILP (Mixed Integer Linear Programming) would be a good tool to tackle your problem. You'll find plenty of free software and literature for MILP on the web.

Good luck with it. 8)

Timkin

Guest Anonymous Poster
Quote:
Original post by alvaro
Quote:
I wouldn't really call this artificial intelligence but more like a really nice optimization trick, if you can get it to work.

I wouldn't call it AI either. I am just trying to solve a problem here, and I don't particularly care if you can call it AI or not.


Gee, then why are you posting in an *artificial intelligence* forum? Why not post it in the
"I like to solve puzzle problems with obtuse and broken algorithms" forum?

Guest Anonymous Poster
Quote:
Original post by Anonymous Poster
Gee, then why are you posting in an *artificial intelligence* forum? Why not post it in the
"I like to solve puzzle problems with obtuse and broken algorithms" forum?


Aren't they the same thing? Let's use a genetic algorithm to decide.

Guest Anonymous Poster
Give simulated annealing a try. I think you will find it easier to implement than a GA. There should be plenty of papers available to point you in the right direction. I remember a lecture given by Rob Rutenbar @ CMU where they had a video of one of their place and route systems solving the exact problem you describe.

Quote:
Original post by Anonymous Poster
Give simulated annealing a try. I think you will find it easier to implement than a GA. There should be plenty of papers available to point you in the right direction. I remember a lecture given by Rob Rutenbar @ CMU where they had a video of one of their place and route systems solving the exact problem you describe.


Yup, simulated annealing will be much easier to implement, considering it's nothing more than a probabilistic hill climber. And of course, like every other probabilistic hill climber, you run into the nasty cooling-schedule issue. What most research on simulated annealing doesn't show half the time is all the various cooling schedules that were tested before hitting on one that does better. Of course, it's the same with GAs. So, in the end, you win some and you lose some. It all boils down to heuristics, parameter settings, and the various choices you make during implementation. The crazy world of optimization.

Heck, even plain hill climbers have various issues involved, like whether to do probabilistic restarts, min-conflicts, or something else.
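For reference, a minimal simulated-annealing loop with a geometric cooling schedule, one choice among the many schedules discussed above. The `fitness` and `neighbour` callbacks are assumptions: for this puzzle, fitness would count constraint violations and neighbour would re-roll one piece's placement.

```python
import math
import random

def anneal(state, fitness, neighbour, t0=10.0, cooling=0.999, steps=100_000):
    # Probabilistic hill climber: always accept improvements, accept
    # worse moves with probability exp(-delta / t), and cool t
    # geometrically so the search gradually turns greedy.
    best = state
    t = t0
    for _ in range(steps):
        candidate = neighbour(state)
        delta = fitness(candidate) - fitness(state)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state = candidate
        if fitness(state) < fitness(best):
            best = state
        if fitness(best) == 0:
            break          # a violation-free solution was found
        t *= cooling
    return best
```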

Guest Anonymous Poster
I'm not sure simulated annealing (or any other hill climber, including gradient descent and genetic algorithms) will be useful for this type of problem, as the search space isn't in any way smooth. In effect, there are no hills to climb.

To be specific, it is not simple to define a loss function that smoothly indicates how close a solution is to being perfect/acceptable. The obvious loss function here only indicates to what degree the constraints aren't satisfied.

In some problems the two values I described would be one and the same, but for other problems (especially this one) the number of constraint violations is not an indication of how close the solution is to being the correct one.

To be really specific, a solution with 1 piece out of place and 1 constraint violation is indistinguishable from a solution with 10 pieces out of place and 1 constraint violation. The loss function in both cases reports 1 constraint violation.

- Joseph Koss

Guest Anonymous Poster
Joseph,

This is still an area of active research and investment: place and route in VLSI, truck/cargo-container packing, etcetera. Regardless of the approach taken, whether it is an exhaustive search of the design space, a probabilistic search of it, or an integer/linear programming approach, a cost function needs to be defined and constraints need to be handled (implicitly or explicitly).

For what it's worth, GAs, SA, Monte Carlo, simple gradient methods with restarts, etc. are all provably identical. If the code is well written it shouldn't be too hard to change the optimizer.


Phil

Guest Anonymous Poster
VLSI, container packing, scheduling, etc. are all problems for which there are clearly functions to minimize (circuit length, space, time, etc.). In these problems, a solution that is not optimal is still usually good enough. For a puzzle, being close to the solution is not good enough, since we actually want the solution: the goal is not to minimize a cost function, it is to find a valid solution.
