The minesweeper problem is figuring out where all the mines are. If there are 20 mines in a 10x10 field, there are initially C(100,20) = 535983370403809682970 possible arrangements of mines, and the objective is to eliminate possibilities as quickly as you can. Every time you click on a field, you either blow up or you learn more about the board. As you gain information, the number of possibilities shrinks.
What do you mean by "information"?
(Clicking on the field with the highest expected amount of information is probably about the best strategy you can follow, but computing it is a bit tricky.)
The amount of hidden information left on the board is the base-2 logarithm of the number of arrangements that are compatible with what is currently known, and the result is measured in bits (about 68.86075 bits in the example). When you click on a field you reduce the number of possibilities, and you can compute the amount of hidden information left. The difference between the amount of hidden information before and after the click is the number of bits of information that the click gave you. For instance, if clicking on a field cuts the number of arrangements in half, it gave you 1 bit of information; if it cuts it down by a factor of 8, it gave you 3 bits.
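The numbers above are easy to check with a few lines of Python (a sketch; `math.comb` needs Python 3.8+):

```python
import math

# 20 mines in a 10x10 field: number of possible arrangements, C(100, 20)
arrangements = math.comb(100, 20)
print(arrangements)

# Hidden information on the fresh board, in bits: about 68.86
print(math.log2(arrangements))

# A click that cuts the number of arrangements in half
# is worth log2(before) - log2(after) = 1 bit
print(math.log2(arrangements) - math.log2(arrangements // 2))
```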
Now consider your safe fields. You don't know in advance how much information clicking on each one of them is going to give you, but you can compute the expected value (the "entropy" of the click's outcome) for each move and pick the move that maximizes it.
Estimating the entropy of each move is a bit tricky, but I think it's doable. You would have to generate lots of random arrangements that satisfy the known conditions (as I did in the program I posted, or perhaps something smarter) and then look at what you would learn if you clicked on a particular field: what number would pop up there and, if it's 0, what other parts of the board would be opened and what numbers they would show. After you accumulate a whole bunch of these samples, you will have the random arrangements classified by what you would learn, and from the size of each class you can estimate the entropy. I think the formula is something like this:
entropy = -sum over classes (class_size / number_of_samples) * log2(class_size / number_of_samples)
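A minimal sketch of that estimate in Python (the outcome labels in the example are made up for illustration; in the real thing each outcome would encode the revealed number plus any cascade of opened fields):

```python
import math
from collections import Counter

def estimated_entropy(outcomes):
    """Estimate the expected information gain (in bits) of clicking a field,
    given one observed outcome per sampled arrangement."""
    n = len(outcomes)
    # entropy = -sum over classes p * log2(p), with p = class_size / n
    return -sum((c / n) * math.log2(c / n) for c in Counter(outcomes).values())

# If half the sampled arrangements would make a "1" pop up and the other
# half a "2", the click is worth exactly one bit:
print(estimated_entropy(["1"] * 500 + ["2"] * 500))  # 1.0
```

To choose a move you would run this for every safe field, using the same batch of sampled arrangements, and click the field with the largest estimate.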