Understanding probabilities of getting certain items?

4 comments, last by Zipster 6 years, 7 months ago

I am currently designing and coding up a general card game. I have developed enough server and database code to have a complete register/login system working, and I also have everything set up in the database for modular building of cards. This is all great and working, and now I've come to the point where I have to create "packs" of cards.

I don't understand how to implement the "percentage of getting a card" type of system. I was originally thinking that I can tag each card in the database with a percentage of being acquired, so that I can fine tune each and every single card if the need ever arises. So, let's say that card1 has a 1% chance of being pulled and card2 has a 100% chance of being pulled.

With this setup, if I use the normal random methods:

    Random _random = new Random(); // class-level declaration

    // create a pack of 5 cards here
    double percentageToPull = _random.NextDouble() * 100;

Now, let's assume percentageToPull comes out at 55; we should never pull a card below that 55%. But if percentageToPull comes out at .9, we now have a chance at pulling that 1% card. However, in my mind there is also that 100% card sitting there. So I can't assume the lower-percentage pulls are guaranteed (.9 != 1), and if I look at cards >= .9, it's now another RNG on top of RNG.

However, on the other side is how most games are set up now. They set up "rarity levels" of their cards. Cards 1-3 are "common", card 4 is "uncommon", etc... So when creating a pack we can guarantee pulling any card within any rarity level.

Both systems create a random aspect when building packs of cards to be pulled, but each system works differently. I like the idea of having control over each individual card's chance of being pulled, but if it creates this much difficulty, I'll go with the more standardized rarity-level system.

Any thoughts/opinions?


Suppose you have a list of cards and choose from it randomly. Now, for common cards, duplicate them. For rarer cards, duplicate them less. This would exactly mirror how it works with real cards.

So suppose you have cards A, B, C, D, E, each with increasing rarity. So you might have 50 "A" cards, 30 "B" cards, 10 "C" cards, 5 "D" cards, and one "E" card. So the way you would define a card's rarity is as commonality, or redundancy. And to predict the chance of getting a particular rarity, you check its commonality against the total size of the expanded list.
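A minimal sketch of this duplicated-pool idea, in Java (the card names and copy counts below are just the made-up examples from this post):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class DuplicatedPool {
        // Build a pool where more common cards appear more times.
        static List<String> buildPool() {
            List<String> pool = new ArrayList<>();
            addCopies(pool, "A", 50); // most common
            addCopies(pool, "B", 30);
            addCopies(pool, "C", 10);
            addCopies(pool, "D", 5);
            addCopies(pool, "E", 1);  // rarest
            return pool;
        }

        static void addCopies(List<String> pool, String card, int count) {
            for (int i = 0; i < count; i++) pool.add(card);
        }

        public static void main(String[] args) {
            List<String> pool = buildPool();
            Random rng = new Random();
            // Uniform pick from the expanded list: P(card) = copies / pool size.
            // Here P(E) = 1/96 and P(A) = 50/96.
            String pulled = pool.get(rng.nextInt(pool.size()));
            System.out.println("Pulled: " + pulled);
        }
    }

This is wasteful for large pools (you'd normally store a count per card instead of duplicating entries), but it makes the probabilities obvious.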

If you want to ensure that the number of unique common or rare cards has no effect on probability, you could instead make those into categories. You choose a rarity category in this way, and then you choose the exact card randomly from a list. Simple.

So I really was overthinking it and trying to solve a problem that doesn't exist? I was thinking that since it's coded, we could make it more defined, with each card having its own separate chance of being pulled. But now that it's explained, that wouldn't work, as the number of cards in the entire pool keeps changing.

It is simple to create the rarity-category system and tag cards with their category. Then I'd roll the random number, see which category I need, and pull a random card from that category. Thanks for the input.
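The two-stage roll described here could be sketched like this in Java (the rarity weights and card names are hypothetical placeholders, not anything from a real game):

    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    public class RarityDraw {
        // Hypothetical category weights: common is most likely, rare least.
        static final Map<String, Integer> WEIGHTS =
                Map.of("common", 70, "uncommon", 25, "rare", 5);

        // Cards tagged with their rarity category.
        static final Map<String, List<String>> CARDS_BY_RARITY = Map.of(
                "common", List.of("Goblin", "Soldier", "Wolf"),
                "uncommon", List.of("Knight", "Mage"),
                "rare", List.of("Dragon"));

        // Stage 1: pick a rarity category proportionally to its weight.
        static String pickRarity(Random rng) {
            int total = WEIGHTS.values().stream().mapToInt(Integer::intValue).sum();
            int roll = rng.nextInt(total); // 0 .. total-1
            for (Map.Entry<String, Integer> e : WEIGHTS.entrySet()) {
                roll -= e.getValue();
                if (roll < 0) return e.getKey();
            }
            throw new IllegalStateException("weights must sum to total");
        }

        // Stage 2: pick a card uniformly within that category.
        static String pickCard(Random rng) {
            List<String> candidates = CARDS_BY_RARITY.get(pickRarity(rng));
            return candidates.get(rng.nextInt(candidates.size()));
        }

        public static void main(String[] args) {
            Random rng = new Random();
            for (int i = 0; i < 5; i++) { // a 5-card pack
                System.out.println(pickCard(rng));
            }
        }
    }

Note that with this scheme the number of cards inside a category doesn't change the category's overall pull rate, which is exactly the property being discussed.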

When you say "100%", you have to define what that means. 100% of what?

Obviously if you're choosing 1 card out of 2, it's simply not possible to have "100% chance of card 1, 5% chance of card 2". Probability doesn't work like that.

If you were implementing a booster pack system, e.g. of 7 cards, then I suppose you could say there is a 100% chance of getting a certain card... unless, for example, you had 8 cards each "100%", which means that again it's impossible.

A more workable concept is to think in terms of relative frequency. You might decide that a common card is twice as likely to be selected as an uncommon card is, and an uncommon card is twice as likely to be selected as a rare card. But even here, there are 2 interpretations:

  1. For every card selected, it's twice as likely that it is common than it is uncommon.
  2. For any given common card, you're twice as likely to select that than you are to select any given uncommon card.

It's easy to see how these 2 definitions can work differently. Examples (ignoring rare cards for now):

Pick one at random from: "Common1", "Common2", "Common3", "Common4", "Uncommon1", "Uncommon2" - this fits definition 1. You're twice as likely to pick a Common card. But you're just as likely to pick Common1 as you are to pick Uncommon1. Common3 is as frequent as Uncommon2. So although this fits one definition, it probably isn't intuitively what you expect.

Example 2: pick one at random from "Common1", "Common1", "Common1", "Common1", "Uncommon1", "Uncommon1", "Uncommon2", "Uncommon2". Here, you're twice as likely to select Common1 as you are Uncommon1. You're also twice as likely to select Common1 as you are Uncommon2. And yet half of your selections will be Uncommon, just as many as your Common selections. Again, this is unintuitive. Because there are more types of Uncommon card, the individual infrequency is balanced out by the overall frequency.

So, what you will probably want is a system that tries to balance both of these concepts. You will probably want to attach a weighting to each card individually, but you also need to manage the size of the relative categories and ensure they are of similar proportion, so that both the rarity interpretations hold true. Then, selecting is a case of a standard weighted random sample. (e.g. https://medium.com/@peterkellyonline/weighted-random-selection-3ff222917eb6 or, use the cumulative frequency and a binary search.)
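The cumulative-frequency-plus-binary-search variant mentioned here could look like this in Java (the card names and weights are made-up illustration values):

    import java.util.Random;

    public class WeightedPick {
        static final String[] CARDS = {"Common1", "Common2", "Uncommon1", "Rare1"};
        static final int[] WEIGHTS = {40, 40, 15, 5}; // hypothetical weights

        // Cumulative sums of the weights, e.g. {40, 80, 95, 100}.
        static int[] cumulative(int[] weights) {
            int[] cum = new int[weights.length];
            int running = 0;
            for (int i = 0; i < weights.length; i++) {
                running += weights[i];
                cum[i] = running;
            }
            return cum;
        }

        // Binary search: smallest index whose cumulative weight exceeds the roll.
        static int indexFor(int roll, int[] cum) {
            int lo = 0, hi = cum.length - 1;
            while (lo < hi) {
                int mid = (lo + hi) / 2;
                if (cum[mid] > roll) hi = mid; else lo = mid + 1;
            }
            return lo;
        }

        static String pick(Random rng) {
            int[] cum = cumulative(WEIGHTS);
            // Roll in [0, totalWeight); each card's share of that range
            // equals its weight.
            return CARDS[indexFor(rng.nextInt(cum[cum.length - 1]), cum)];
        }

        public static void main(String[] args) {
            System.out.println(pick(new Random()));
        }
    }

The binary search only matters for large card pools; for a few hundred cards a linear scan over the weights is just as practical.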

I've worked on a few card games in the past and had to address a similar problem. We ultimately ended up with a solution based on a conditional probability model.

We gave each card a weight and used the aforementioned weighted random selection to find a card in the pool. The weight was based on rarity or other traits, and represented how much "opportunity" a card had relative to other cards to be selected for the deck. Once a card was found, we used a percentage chance to determine if the card should actually be added to the deck, and repeated this process until the deck was full.

This approach also allowed us to limit the number of occurrences of any given card in the deck by temporarily setting its weight to 0 when its limit was reached, a very important feature for most deck building games.
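A sketch of this conditional model in Java. The weights, acceptance chances, and copy limits below are invented for illustration; the post doesn't give the actual values used:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    public class DeckBuilder {
        // Hypothetical per-card data: selection weight, acceptance chance, copy limit.
        record CardDef(String name, int weight, double acceptChance, int limit) {}

        static final List<CardDef> DEFS = List.of(
                new CardDef("Common1", 50, 0.9, 4),
                new CardDef("Uncommon1", 20, 0.6, 2),
                new CardDef("Rare1", 5, 0.3, 1));

        // Assumes deckSize <= sum of all copy limits, or this never terminates.
        static List<String> buildDeck(int deckSize, Random rng) {
            Map<String, Integer> counts = new HashMap<>();
            List<String> deck = new ArrayList<>();
            while (deck.size() < deckSize) {
                CardDef candidate = weightedPick(counts, rng);
                // Second stage: per-card acceptance roll.
                if (rng.nextDouble() < candidate.acceptChance()) {
                    deck.add(candidate.name());
                    counts.merge(candidate.name(), 1, Integer::sum);
                }
            }
            return deck;
        }

        // Weighted pick; a card at its copy limit contributes weight 0.
        static CardDef weightedPick(Map<String, Integer> counts, Random rng) {
            int total = 0;
            for (CardDef d : DEFS) {
                if (counts.getOrDefault(d.name(), 0) < d.limit()) total += d.weight();
            }
            int roll = rng.nextInt(total);
            for (CardDef d : DEFS) {
                if (counts.getOrDefault(d.name(), 0) >= d.limit()) continue;
                roll -= d.weight();
                if (roll < 0) return d;
            }
            throw new IllegalStateException("unreachable");
        }

        public static void main(String[] args) {
            System.out.println(buildDeck(5, new Random()));
        }
    }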

