

Member Since 22 Mar 2010
Offline Last Active Jun 19 2015 03:44 AM

Topics I've Started

Spherical world for a 2D game

20 April 2015 - 01:02 PM

The game is 2D and viewed from directly overhead. The player can walk around and scroll the map arbitrarily (practically, in the cardinal and ordinal directions, via the arrow keys or WASD).


It's easy to do this if the world is logically a torus. But if it's a sphere, it seems more complex.
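
The torus case is just modular arithmetic on the player's coordinates; as a minimal sketch (`wrap_torus` and the world-dimension parameters are hypothetical names, not from the original post):

```python
def wrap_torus(x, y, world_w, world_h):
    """Toroidal wrap: walking off one edge re-enters on the opposite
    edge. Python's % always returns a non-negative result for a
    positive modulus, so negative coordinates wrap correctly too."""
    return x % world_w, y % world_h
```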


I've found resources aimed at representing a spherical world in a 3D game, but they don't quite apply, since this is 2D and viewed from directly overhead.


Is there any way to do this without having to represent the world as a real sphere and then slide some sort of square window around on it? And even that approach seems to run into issues along the edges, and really throughout the whole square slice, which gets more stretched the further you are from the center of the screen.

Random subset of list including specific element(s) in Python

02 January 2015 - 06:56 AM

I would like to generate a random subset that contains a specific element.
For example, given a list of the letters of the alphabet, generate a 5-element sample that includes the 3rd element.
alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', ...]
wantedSubset = random.sample(alphabet, 5)
This does everything I want except for guaranteeing that 'c' will be an element.
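
One way to guarantee this (a sketch, not the only approach): remove the required element, sample the remaining four slots from the rest of the list, add it back, and shuffle. The result is still uniform over all 5-element subsets that contain 'c':

```python
import random

alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
wanted = 'c'

# Sample 4 elements from the list with 'c' removed, then add 'c' back.
# Every 5-subset containing 'c' is equally likely, and the final
# shuffle randomizes 'c''s position within the result as well.
rest = [x for x in alphabet if x != wanted]
wantedSubset = random.sample(rest, 4) + [wanted]
random.shuffle(wantedSubset)
```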

Chess simulation requiring seemingly intractable Algebra

08 December 2013 - 06:09 AM

I'm trying to write a chess simulation where entities' ratings emerge from pairings whose results are based on comparisons of various statistics.


Here is an example that covers the issue I'm running into:


player_1 has a strength attribute strength_1

player_2 has a strength attribute strength_2


I want the expected score [(# of wins + 1/2 * # of draws) / (number of games)] to equal strength_1 / (strength_1 + strength_2)


So far so good. The issue is when I include draws.


Somewhat arbitrarily (not exactly arbitrary, but tl;dr), I want the distribution of wins : draws : losses to stick to the ratio wins : GM(wins, losses) : losses, where GM is the geometric mean.


It's not as simple as strength_1 : GM(strength_1, strength_2) : strength_2, because that actually skews things in favor of the weaker player.


So it has to be reverse engineered algebraically.


We need to find x, y, and z such that (x + (1/2)*y) / (x + y + z) = strength_1 / (strength_1 + strength_2). We can treat the right-hand side here as a constant, since we know strength_1 and strength_2.


And we also have that y = sqrt(x*z)


And finally we have the same equation as above but inverted for the other player: (z + (1/2)*y)/(x+y+z) = strength_2/(strength_1 + strength_2)


Problems are


A) equations are incredibly difficult to solve

B) they might not even count as two equations in two variables, because I'm not sure whether the two main formulas are linearly independent (they share a denominator, and the right-hand constants add up to 1). That might not preclude their independence, but I'm not very experienced in this type of math, so I don't know.
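
On (B): the two score equations are in fact dependent. Both left-hand sides sum to (x + y + z)/(x + y + z) = 1 and the right-hand constants sum to 1, so the second equation follows from the first. That leaves two independent constraints for three unknowns, but since the scores only depend on ratios, we are free to normalize x + y + z = 1. Writing p = strength_1/(strength_1 + strength_2), substituting x = p - y/2 and z = 1 - p - y/2 into y² = x·z gives the quadratic (3/4)y² + (1/2)y - p(1 - p) = 0, whose positive root is taken below. A sketch (`win_draw_loss` is a hypothetical name):

```python
import math

def win_draw_loss(s1, s2):
    """Solve for win/draw/loss probabilities (x, y, z), normalized so
    x + y + z = 1, subject to:
        x + y/2 = p          where p = s1 / (s1 + s2)
        y = sqrt(x * z)      (draws = geometric mean of wins, losses)
    Substitution reduces the system to (3/4)y^2 + (1/2)y - p(1-p) = 0;
    we take the positive root of that quadratic."""
    p = s1 / (s1 + s2)
    y = (-1 + math.sqrt(1 + 12 * p * (1 - p))) / 3
    x = p - y / 2
    z = 1 - p - y / 2
    return x, y, z
```

For equal strengths this gives (1/3, 1/3, 1/3), and as one strength dominates the draw share shrinks to zero, which matches the intended behavior.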

Data Structures & Algorithms for Blackjack Simulator

10 October 2013 - 01:57 AM

I'm trying to create a Blackjack simulator so I can test out (and maybe genetically evolve) various play charts, betting strategies, and card-counting charts.


Before I dive into it, I wanted to outline the structure in detail, since the game has caveats (like splitting) that make it less trivial to code than a game like poker.


My basic design so far is this:


Data Structures:

-deck: composed of a vector or list of cards

-card: composed of a value (like 10 for 10s, Jacks, Queens, etc.), a name, and a suit

-hand: composed of a bet, a list of cards, and two child hands (the potential left and right splits)

-player: composed of a balance and a hand



-shuffling: I'm going to shuffle by repeatedly drawing random cards out of the old deck and appending them, in order, to a new deck
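
That draw-into-a-new-deck scheme does produce a uniform shuffle, but each removal from the middle of a sequence is O(n), so the whole shuffle is O(n²); for comparison, Python's `random.shuffle` (Fisher–Yates) does the same job in O(n). A sketch of the described approach:

```python
import random

def shuffle_by_drawing(deck):
    """The shuffle described above: repeatedly remove a uniformly
    random card from the old deck and append it to a new one.
    Uniform, but each pop from the middle of a list is O(n),
    making the whole shuffle O(n^2)."""
    old = list(deck)
    new = []
    while old:
        i = random.randrange(len(old))
        new.append(old.pop(i))
    return new
```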



1. Should I use a vector or a list for the cards? I'm thinking:

vector: O(1) lookup, O(n) removal (about n/4 element shifts in the average case)

list: O(n) lookup (about n/4 traversals in the average case), O(1) removal once positioned


2. How do I deal with Aces? Is the best way just to add conditional code wherever cards are handled, checking whether a card is an Ace, to catch soft 17s and the like?
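
One common way to avoid scattering Ace conditionals everywhere (a sketch, assuming cards carry numeric values with Aces stored as 1): value the whole hand in one place, counting every Ace as 1 and then upgrading a single Ace to 11 if that doesn't bust. Soft hands fall out of that check for free.

```python
def hand_value(cards):
    """Best blackjack value of a hand. `cards` is a list of numeric
    card values (2-10; face cards as 10; Aces as 1). Count every Ace
    as 1 first, then upgrade one Ace to 11 if that doesn't bust.
    Returns (total, is_soft); a dealer rule like "hit soft 17"
    becomes: total == 17 and is_soft."""
    total = sum(cards)
    if 1 in cards and total + 10 <= 21:
        return total + 10, True   # soft hand: one Ace counted as 11
    return total, False
```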

Suggestion algorithm

31 May 2013 - 03:11 AM

I'm currently in the process of designing a web service that will allow users to give binary information about whether they enjoyed a game/movie/show/anime/music/book/etc. Then the service will turn around and offer suggestions of other media that the user would have a good probability of enjoying.


Netflix/Amazon/IMDB/a lot of other services provide this feature.


The algorithm I'm looking at using for suggesting new media based on what a user "liked" is the somewhat well-known cosine-similarity algorithm. It basically assumes a reasonably strong correlation between the cosine of the angle between two vectors and their "similarity."


Basically imagine a grid where one axis is all users and the other axis is all media. Each intersection is a 1 if that media was liked by that user and a 0 if not.


Now, if a user likes media item 'X', the vector for X is compared against the vector for every other media item. For each pair, the dot product is taken, then divided by the product of the vectors' magnitudes. This yields the cosine of the angle between the two vectors.


These are then ranked and the top candidates are suggested to the user in order.
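
The steps above can be sketched as follows (a minimal illustration with dense Python lists; item names and the `suggestions` helper are hypothetical, and a real service would use sparse vectors):

```python
import math

def cosine(u, v):
    """Cosine of the angle between two equal-length 0/1 like-vectors:
    dot product divided by the product of the magnitudes."""
    dot = sum(a * b for a, b in zip(u, v))
    mu = math.sqrt(sum(a * a for a in u))
    mv = math.sqrt(sum(b * b for b in v))
    if mu == 0 or mv == 0:
        return 0.0          # an item nobody liked matches nothing
    return dot / (mu * mv)

def suggestions(liked_item, item_vectors):
    """Rank every other item by cosine similarity to `liked_item`.
    `item_vectors` maps item name -> per-user 0/1 like-vector."""
    target = item_vectors[liked_item]
    scores = [(name, cosine(target, vec))
              for name, vec in item_vectors.items()
              if name != liked_item]
    return sorted(scores, key=lambda s: s[1], reverse=True)
```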


My question/problem:


This will become prohibitively expensive quickly.


First off, the cosine(angle) needs to be computed for every pair of vectors every so often.


Then, for every user, the suggestions based on their "likes" need to be sorted and presented every so often. Basically, we need to look through the database of cosines, grab every score whose pair includes an item the user has liked, then sort that list and present the top few to the user.


For millions of users and thousands of items, this could grow way out of control.


Are there cheaper ways of doing this? Or am I going about it algorithmically wrong? Or is it just something that has to be dealt with directly via caching/segmenting/etc.?