


Community Reputation

118 Neutral

About myro

  1. Alright, thanks for all your suggestions. Especially groovyone: those kinds of sounds are pretty much what I am looking for. I'll start experimenting. Offtopic: obviously, as a student I don't have the luxury of buying expensive software/hardware.
  2. Hi, I am a computer science student and would like to create some simple sound effects (like a sound when the player picks up an item, etc.) for a game I wrote. So the simple question is: where do I start? Are there any tutorials for someone with zero experience in audio content creation? Are there tools that get me decent results quickly? Is starting with the standard tutorials for LMMS a good idea? In short: I have no idea where to start. Thanks, myro
  3. Let me clear things up a bit: the course should be more of an "Introduction to Game Development" course, but without asset creation, as time restrictions won't allow it. In other words, game engine components will be explained, but source code is not required (as long as information about the engine is available). My university is small and has no courses about game development whatsoever. My professor wants to change that; he has some experience with game development, but that was several years back. With my thesis I should find a suitable engine for his course, and I only have two months to complete it. The course setup itself is not completely planned yet. It contains game design and game programming (basically, students should get a first impression of how to create games from a computer science perspective). Costs should be below $10,000 for 20 seats (and yes, the cheaper, the better). The benchmark is somewhat up to my choosing. As for the initial requirements, I checked the engines mentioned in the Game Engine Survey 2011 (GDMag) to get a starting point (plus I added Panda3D as a fully open-source addition).
The second scoring list above should be read as a preferred feature list with weights, not as hard requirements. Preliminary findings about licensing:
• Unreal: possible (UDK) (UnrealScript editor: commercial/open-source editors exist (quality?)) / waiting on an e-mail about Unreal Engine 3
• Trinigy Vision Engine: non-disclosure agreement (excluded)
• Unity: $75/seat per year (with Android + iOS: $225)
• CryEngine: possible (free for academic use) / NDA for commercial use
• Gamebryo: unclear (waiting on an e-mail)
• ShiVa: $670/seat
• Torque: open source (MIT License) / script editor (~$40/seat; open-source editor (quality?))
• C4: $2,500/site (licenses for students in the course are included) / check whether the demo version is enough for the evaluation (otherwise $750 for a standard copy, which is probably too much)
• Panda3D: open source (modified BSD License)
  4. Thanks for the input, but the thesis topic cannot be changed. The professor wants to start a game programming class and wants to use an engine; basically, he wants me to choose a suitable one. The main point here is that other universities/schools must have had to choose an engine for their courses as well, i.e. they had to make the same decision. But no paper I read about creating a game programming class states how they selected the engine, only that they selected it (it sounds more like the instructor knew the engine and used it for that reason). My current (unfinished) list of requirements is as follows. KO criteria (engines will be discarded immediately if they meet one of the following, to get down to a manageable number of engines):
• No Windows 7 platform support
• No free student license for homework
• Project is not active
• No recent AAA game title
• Requires additional commercial software (except for 3D Studio Max/Visual Studio)
• Non-disclosure agreement
• No assets
• Has to support at least the following game engine features:
– 3D graphics engine
– Physics engine
– Audio system
– Content management/pipeline
– Scene manager
– Animation system
– Networking
– Collision detection
The remaining engines would then be scored according to a weighted scoring system (even less finished):
General features:
• Low licensing costs: 100%
• Documentation/tutorials: 100%
• Accompanying art assets: 100%
• High distribution among game studios: 80%
• Active/helpful community: 60%
• Used in an existing game programming course: 20%
Technical features:
• Reusable programming language & concepts: 100%
• Game type flexibility: 100%
• Usability (i.e. editors/IDE/debugger): 80%
• Supported platforms (source and target): 60%
• Appearance (i.e. graphics): 40%
• AI engine: 20%
Then prototype games will be created with the two best-fitting engines, and one engine shall remain for the course.
My problem is that every requirement is very general, since the game types (even genres) are unknown. But still, other universities should have had the same problem.
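The weighted scoring above could be computed as a normalized weighted sum, e.g. like this minimal sketch (the criterion names, the assumed 0-10 score scale, and the function itself are illustrations of mine, not part of the thesis plan):

```cpp
#include <map>
#include <string>

// Hypothetical weighted-score aggregation: each criterion has a weight
// (0.0-1.0, matching the percentages above) and each engine gets a
// per-criterion score (here 0-10, an assumed scale). The result is
// normalized by the sum of the weights actually applied.
double weighted_score(const std::map<std::string, double>& weights,
                      const std::map<std::string, double>& scores) {
    double total = 0.0, weight_sum = 0.0;
    for (const auto& [criterion, weight] : weights) {
        auto it = scores.find(criterion);
        if (it == scores.end()) continue;  // unscored criteria are skipped
        total += weight * it->second;
        weight_sum += weight;
    }
    return weight_sum > 0.0 ? total / weight_sum : 0.0;  // normalized 0-10
}
```

Engines could then simply be ranked by this number, with the two highest going into the prototype phase.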
  5. Hi, I am currently writing my bachelor thesis, which is a comparison of game engines and their applicability to a game programming course. My problem is that I have found essentially no literature on how engines were selected for other courses. Thus my question: does anyone know of any papers covering the process of selecting an engine for a course, or for selecting an engine at all? The requirements modelling for the engine seems to be quite problematic, since I do not know what games the students will program within the course projects.
  6. [quote name='PhillipHamlyn' timestamp='1353103516' post='5001649'] ... [/quote] You introduce an indirection, which adds an extra method call in terms of performance. Furthermore, not using polymorphism where it is appropriate will increase code size dramatically. For testing, this should in my opinion not be a runtime decision but a build-time decision. In C++ you can easily swap in mock classes in a type hierarchy with #ifdefs or by including different directories for unit/integration tests. I'm not sure about C#, though.
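The build-time swap mentioned above might look like this minimal sketch; the class names and the UNIT_TEST flag are made up for illustration:

```cpp
// Build-time mock swapping with the preprocessor: compiling with
// -DUNIT_TEST selects the mock class, while the production build keeps
// the real one. Client code is identical in both builds, so no virtual
// call or extra indirection is needed.
#ifdef UNIT_TEST
class Renderer {           // mock stand-in, same interface as the real class
public:
    int draw_calls = 0;
    void draw() { ++draw_calls; }   // record the call instead of rendering
};
#else
class Renderer {           // "real" implementation (stubbed out here)
public:
    int draw_calls = 0;
    void draw() { ++draw_calls; /* ... actual rendering would go here ... */ }
};
#endif

// Code under test: compiles unchanged against either class.
int render_frame(Renderer& r) {
    r.draw();
    return r.draw_calls;
}
```

The alternative mentioned in the post, separate include directories per build configuration, achieves the same effect without #ifdef noise in the class bodies.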
  7. Quote: Original post by hplus0603: Why are you assuming that one match of (2100:2050) and one match of (2300:1900) is a worse choice than one match of (1900:2050) and one match of (2100:2300)? In the first case, one of the matches is fair and one is unfair. In the second case, both matches are unfair. Your model also assumes that, even though four players joined in 5 seconds, there will be no other players joining for the next 35 seconds. While that's possible, you can't predict the future when making the 2100:2050 match. If a player with a skill around 1900 or 2300 shows up between 0:05 and 0:40, then the respective player will get a better match. I think you need to first decide what your actual desired outcome is, and then implement it. If you want an outcome that depends on global state, then you need to take that global state into account. If that global-state function says it's better to wait longer to get a smaller overall match difference, then you have to force players to wait.

Because of that problem, I thought of suggestion 3, which somewhat predicts the queuing of players in the future based on their previous queuing behaviour. That system assumes that a player does not play one game and then log off, but plays several games in a row. Thus, the system would know whether there are currently matches in a rating range or none.
  8. Quote: Original post by Kylotan: ...but it's not actually an easy example. There is an unspoken assumption in it which we were not aware of before now. The assumption is that it's better to relax your rating difference limit by 100 points straight away than to wait 40 seconds to see if someone better comes along. This is not a flaw of the system, just a choice you've made. Imagine exactly the same system, except with 10x as many players, all with similar ratings. After 5 seconds you will have 40 people in the queue rather than 4, and in that case you can probably match almost all of them without expanding the difference limit, since there will be about 10 teams with ratings of 1900-2000, 10 with 2000-2100, and so on. So you can see that the suggested system doesn't have an inherent flaw. It just comes down to how you're estimating the typical time a player will have to wait, and how important it is to trade off that waiting time against getting a good match.

The queue only ever contains the players who have not been matched up because their RDL cannot find an opponent, plus the one player newly entering the queue. Match-ups are computed in two cases: (1) a new player enters the queue, and (2) the "increase RDL" timer fires. Thus, the queue will never contain 10 players in a 2000-2100 interval (at a given starting RDL of 100); there will be a maximum of 2 players within one RDL at any given time.
  9. Quote: Original post by hplus0603: Quote: "this idea would also require an arbitrary time frame, to create a list of players and then make a decision based on that list of players." Not necessarily. You only need to re-check the queue when the criteria for matching change. There are two reasons for the matching criteria to change: (1) someone joins or leaves the queue; (2) the match properties of someone in the queue change. The queue is re-checked every time one of these events happens. This means you may end up getting a match right away when you enter the queue, if there's a suitable match waiting. Now, it's common to balance waiting time against quality of match. If there are few players available in the queue, it's likely that you'll have to wait longer for a match, and that the match will be worse. Thus, time is one factor in criterion 2: at certain times, a player's tolerance for a "bad match" changes. I would probably make the rules something like:
- When you enter the queue, you will match someone who is within +/- 5% of your skill grade.
- After waiting 10 seconds, you will match someone who is within +/- 10% of your skill grade.
- After waiting 20 seconds: 15%, and so on.
Note that you could keep incrementing above 100%, because someone with rating 800 needs a 150% boost to match someone with a rating of 2000. Also, both players need to have waited long enough to include each other in their match range. Someone at rank 800 might have waited for 300 seconds when someone at rank 2000 joins. The 2000 player is a match for the 800 player at that point, but the 800 player is not yet a match for the 2000 player. Because waiting time affects how matches are made, you need to re-check the queue at regular intervals. You *could* keep a separate timer for each and every person waiting in the queue, and re-check each person when their tolerance for matches changes, but that's a lot less efficient from an implementation point of view as the number of players grows.

The reason I do not like the idea of incrementing by time is shown in the following example (let's assume we start with a rating difference limit of 100 and increase it by 50 every 10 seconds; I will call the rating difference limit RDL):
0:00 Player0 (rating 2300, RDL 100) queues.
0:01 Player1 (rating 1900, RDL 100) queues.
0:03 Player2 (rating 2100, RDL 100) queues.
0:05 Player3 (rating 2050, RDL 100) queues. => Player2 and Player3 match up.
Much later, Player0 and Player1 match up. The best choice would have been to match Player0 with Player2 and Player1 with Player3 at 0:05. This is only a simple example, but it shows that the decision does not get better as time passes; i.e. the whole waiting time of Player0 is not used to find better opponents. In other words, you do not take the actual distribution of currently playing/soon-to-enqueue players into account. With an arbitrary time frame of 30 seconds, the decision would have been better in this case. With a "game history" adjusting overall match-up ratings based on previous matches, you would get better results, in my opinion. But I think I have to write some sort of simulation to know which matching strategy works better under which conditions. [Edited by - myro on November 1, 2010 12:00:06 PM]
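hplus0603's widening-tolerance rule can be sketched directly. A minimal sketch, assuming the numbers from the post (start at +/-5% of one's own rating, add 5 percentage points every full 10 seconds of waiting) and mutual acceptance; the function names are mine:

```cpp
#include <cmath>

// Tolerance as a fraction of a player's own rating: 5% on entry,
// widening by 5 percentage points per 10 seconds waited.
double tolerance(double seconds_waited) {
    return 0.05 + 0.05 * std::floor(seconds_waited / 10.0);
}

// Does this player currently accept that opponent?
bool accepts(double my_rating, double their_rating, double seconds_waited) {
    return std::abs(their_rating - my_rating) <= tolerance(seconds_waited) * my_rating;
}

// A match requires BOTH players to accept each other.
bool can_match(double rating_a, double wait_a, double rating_b, double wait_b) {
    return accepts(rating_a, rating_b, wait_a) && accepts(rating_b, rating_a, wait_b);
}
```

The asymmetry from the post falls out naturally: a long-waiting 800 player accepts a freshly joined 2000 player, but the 2000 player (tolerance still at 5%) does not accept the 800 player, so no match is made yet.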
  10. Quote: Original post by Zimans: These problems have been sort of solved in professional games; no one just seems to write about them. World of Warcraft has had its arena matchmaking system modified multiple times. It operates very similarly to what you are after (matching teams based on ratings). They may not publish it, though, because they don't want players to know the inner workings so they can't game the system. Another possibility to consider that hasn't been brought up is the idea of applying a handicap to the better player (or a buff to the less skilled player) in an attempt to even the match through altered game mechanics. This may not apply to your game mechanics, and you would want to be upfront about it in the matchmaking. --Z

Altering the game mechanics is an interesting idea, but it would make judging players really hard. The ranking systems I found on the net:
• Elo system: easy to understand, easy to implement. "Unfairer" than the three below; unfair matches are much more common, since the rating changes very slowly.
• Mark E. Glickman's Glicko v1: easy to implement, easy to understand. "Fairer" than Elo due to quicker judgement of a player's skill, and therefore better at achieving a 50% win/loss ratio.
• Mark E. Glickman's Glicko v2: there are more variables to figure out here, but it would be "fairer" still.
• Microsoft's TrueSkill: probably the "fairest" system at the moment, but rather complex; understanding its mathematics would take quite some time.
Adding weights to those formulas to judge a player with a buff/handicap would need a serious amount of math knowledge, and probably rather a math student or at least someone smarter than me. The first three systems calculate an estimated probability of one team winning against another. Basically, I could start there, apply buffs/handicaps to get to a 50% win probability, and then judge the teams as roughly even. But this would require a lot of formula altering, and balancing those buffs would be another very complex problem. I did some Google research on the WoW matchmaking system. In my opinion it uses either Glicko v1 or v2, judging by the matchmaking rating ranges, and then applies a custom "personal rating" gain/loss on top. Or they researched a completely custom system, but given the complexity of the problem, I somehow doubt it. I could not really find anything on the rating differences at which teams match up, since only the custom "personal rating" is visible on the Internet.
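For reference, the win-probability estimate the first three systems share is the Elo expected-score curve (Glicko v1 additionally widens it by the players' rating deviations); the function name here is mine:

```cpp
#include <cmath>

// Elo expected score: the estimated probability that a player rated
// `ra` beats a player rated `rb`. A 400-point advantage corresponds to
// roughly 10:1 odds (about a 91% expected score).
double expected_score(double ra, double rb) {
    return 1.0 / (1.0 + std::pow(10.0, (rb - ra) / 400.0));
}
```

A buff/handicap scheme could, as suggested, be tuned so that the effective expected score of the buffed side moves toward 0.5, but as noted above, quantifying a gameplay buff in rating points is the hard part.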
  11. Quote: Original post by BTownTKD: Why not come up with a "weight" system, instead of relying solely on player rating? For any given player looking for a match, every other potential player would have a "weight" determining the likelihood of the two players being paired up. Player ratings would affect the weight the most, so players with a large rating difference will have (virtually) no chance of playing each other, with the likelihood increasing as skill levels become closer. However, other things would affect the weight to a lesser degree: "wait time" (how long a player has been waiting for a match) and "recentness" of a pairing (how recently have these two players been paired together? This effect would slowly dissipate after a few matches with other players). This would ideally create a kind of bell curve, with players of identical skill levels being paired up most often and the likelihood decreasing quickly as skill levels differ.

The thing is that I don't have a list of players to base my decision on without an arbitrary waiting time for the whole system. Let's assume I have a weight system:
Player0 at 0 rating
Player1 at 1000 rating
=> Match (only two teams in the list, which means their probability to match is 1). The same will happen for every match-up: no decision is made, teams always immediately match whatever opponent is there. The only way to make a decision here is by making the whole system wait for an arbitrary time frame, so that there is a list of queuing players from which to make a proper choice. This is somewhat my suggestion 2, since there I build a probability distribution, in this case weighted only on rating difference.

Quote: Original post by hplus0603: My observation: in general, the way you do these matches is very similar to a broad-phase collision test: you want to find overlaps among ranges of values.
You could view the allowable match set as a volume in a multi-dimensional parameter space, and you want to find all intersections within that space. Thus, I would imagine that various broad-phase algorithms would work well, assuming you can adapt them to having different types along different axes. I would probably start with sweep-and-prune. Put all the teams on a single axis (the score). Sort the axis. Then walk along that axis, collecting teams into a "potential match pool" as you enter their lower range, and removing them from the pool as you leave their upper range. I would probably start by trying to find the best possible match for a team as it's being removed from the match pool, by examining each other team in the pool at the time. I would increase the size of the "match shape" with age in the queue. You could run the algorithm once a second, say, and remove any matches made during each run from the list of potential candidates.

If I understood this idea correctly (sorry, I am not that familiar with collision detection), it would also require an arbitrary time frame to build a list of players and then make a decision based on that list. Thus, I would need to wait for several players to join (the time frame can also be reduced by calling the system once a certain number of players are in the list). The overall idea of using a "collision system" to find matches, given a player list, is pretty nice though. Looking at this problem more abstractly: I can only make a proper decision if I have a list of players to choose from (i.e. an arbitrary time frame) or a "game history" to base my decision on. (The case of simply increasing the rating difference with a timer is ignored here, since a decision based on that is simply not a good decision and leaves expert and "noob" players with very long waiting times.) The idea of an arbitrary time frame (e.g. 2 min) would increase the waiting times quite a lot in several cases, but would somewhat average them over all ratings:
Player0 at 1500 rating queues
Player1 at 1500 rating queues
They now have an average wait of 1 min, which is really unnecessary. The "game history" idea (my suggestion 3) would base its decision on games that previously took place and shape the rating differences accordingly. It's based on the observation that players usually do not queue only once but several times in a row, and thus will match at a similar rating for quite a while.

Quote: Original post by Hazerider: Instead of looking at max rating differences, you should look at percentiles and match players who are within X% of each other. In effect this is really your solution 3, where the rating difference is determined automatically based on the distribution of ratings. In case what I am saying is unclear: for example, a player who is in the top 10% of ratings should only be matched with a player who is between the top 5% and 15% of ratings. You could force a redraw (or a few) if you match up players who played against each other in the last X matches.

Do you have other ideas on how to "transfer" the percentile idea into a "game history"? I tried looking at it from the percentile idea itself, but I ran into several dead ends that way. If I look at it from "suggestion 3", I run into a very complex implementation. PS: Thanks for all the ideas presented. I think it's somewhat strange that there are no articles about this topic, since this should be a very common problem, solved in several professional games. PPS: The more I think about this problem, the more complex it seems to get...
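hplus0603's sweep-and-prune suggestion can be sketched in one dimension. This greedy adjacent-pairing version is a simplification of mine (the full suggestion keeps a pool and picks the best match as a team leaves it); the Team struct and its fields are invented for illustration:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Team {
    int id;
    int rating;
    int rdl;  // current rating difference limit, grows with queue age
};

// Sort queued teams along the rating axis, then walk the axis and pair
// neighbours whose limits mutually allow the match. Returns (idA, idB)
// pairs; unmatched teams implicitly stay in the queue.
std::vector<std::pair<int, int>> sweep_match(std::vector<Team> queue) {
    std::sort(queue.begin(), queue.end(),
              [](const Team& a, const Team& b) { return a.rating < b.rating; });
    std::vector<std::pair<int, int>> matches;
    for (std::size_t i = 0; i + 1 < queue.size(); ) {
        int diff = queue[i + 1].rating - queue[i].rating;  // sorted: non-negative
        if (diff <= queue[i].rdl && diff <= queue[i + 1].rdl) {
            matches.emplace_back(queue[i].id, queue[i + 1].id);
            i += 2;  // both teams leave the queue
        } else {
            i += 1;  // this team keeps waiting
        }
    }
    return matches;
}
```

On the four-team example from earlier in the thread (ratings 2300/1900/2100/2050, RDL 100) this yields only the 2100-2050 match; once the limits have grown to 250, it finds exactly the two "ideal" pairings (1900-2050 and 2100-2300).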
  12. We were discussing the problem as well, and yes, we realized that it's a rather complex problem... First, note that increasing the rating difference above 500 wouldn't make sense, since the Glicko v1 system would then no longer reward fairly. For example, the higher-rated team would get 0 points for a win, and the chance of winning for the lower team would drop below 3%. (This is the "QQ, I am getting farmed by some super-pro no-lifer hardcore gamers" case.) We had three suggestions:

1. Increasing the rating difference limit according to the time queued. This is basically what you two suggested as well, capped at 500 with no further increase after that. I, for myself, did not like the idea that much, because during the increase time there could have been closer match-ups. For example:
Team0 at 2300 queues
Team1 at 2000 queues
After 1 min, both teams' rating difference limit is increased by 100.
Team2 at 2050 queues
Team3 at 1900 queues
=> Team1 and Team2 get matched up. After several more increases, Team0 and Team3 get matched up. Basically, the system does not prefer better-matching teams.

2. We do not start matching players immediately when they enqueue, but only after an arbitrary discrete time frame, for example 2 minutes; the matching system is only called every 2 minutes. The search for an opponent would then work like this: we select the first team, then build a list of possible opponents sorted by rating difference (lowest to highest). We select one team from that list randomly, but the team with the lowest rating difference has the highest probability and the team with the biggest rating difference the lowest, with the teams in between linearly less probable. => This system would prefer closer match-ups where possible, but it requires an unnecessary waiting time so that there is a certain number of opponent teams in the list on which to base the decision. The waiting time is required because otherwise teams would always match up immediately and in practice always use the 500 limit.

3. We implement a "controlling" system for the rating difference. The system contains, for each bracket, the highest possible rating difference: 0-500, 500-1000, ... This controlling system runs on a fixed timer, 1 min for example, and counts the matches taking place in each bracket. If, say, 6 games happen in a bracket, it divides that bracket in two. Let's say 500-1000 had 6 games; the brackets would then look like this: 0-500, 500-750, 750-1000, 1000-1500. That way we get "fairer" match-ups when matches occur in a certain bracket. (The surrounding brackets can only choose teams from the nearest bracket as well, i.e. 499 vs 990 cannot happen.) After a certain number of time frames (e.g. 2) with fewer than 3 matches, the bracket is doubled again. Let's say there is 1 match in the 500-750 bracket and 1 in the 750-1000 bracket; the rating range there is then doubled again, and the brackets once more look like this: 0-500, 500-1000, 1000-1500.

I would be interested in your opinions on these 3 suggestions. At the moment the last suggestion is our latest idea, so we haven't found trouble with it yet. PS: Skill should matter, maybe not as much as in chess, but it should matter.
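A rough sketch of how the bracket controller in suggestion 3 could look. The split threshold (6 matches) and merge trigger (fewer than 3) come from the post; the data layout is my assumption, and the merge step simply pairs adjacent quiet brackets rather than exactly undoing a previous split, which is a simplification:

```cpp
#include <cstddef>
#include <vector>

struct Bracket {
    int lo, hi;       // rating range [lo, hi)
    int match_count;  // matches seen in the last time frame
};

// Run once per fixed time frame: busy brackets are split in half so
// match-ups there become tighter; quiet neighbouring brackets are
// merged back into one wider bracket. All counters reset afterwards.
void rebalance(std::vector<Bracket>& brackets) {
    std::vector<Bracket> next;
    for (std::size_t i = 0; i < brackets.size(); ++i) {
        Bracket b = brackets[i];
        if (b.match_count >= 6) {
            int mid = (b.lo + b.hi) / 2;        // busy: split in half
            next.push_back({b.lo, mid, 0});
            next.push_back({mid, b.hi, 0});
        } else if (i + 1 < brackets.size() && b.match_count < 3 &&
                   brackets[i + 1].match_count < 3) {
            next.push_back({b.lo, brackets[i + 1].hi, 0});  // quiet: merge
            ++i;  // the neighbour was consumed by the merge
        } else {
            b.match_count = 0;
            next.push_back(b);
        }
    }
    brackets = next;
}
```

Starting from 0-500 and 500-1000 with 6 matches in the upper bracket, one rebalance yields 0-500, 500-750, 750-1000; a further quiet frame merges brackets back together.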
  13. A couple of friends and I are working on a small online game in which one player fights another and a winner is decided. I have a queuing problem, but to understand it I first have to explain what is happening. We want players to match equal opponents, and we want rankings. To create fair match-ups and rankings we use the Glicko v1 system, which is described here: http://math.bu.edu/people/mg/glicko/glicko.doc/glicko.html (and yes, we are aware of the limits of this system in terms of judging skill correctly). A small summary of how it works:
• New players have a rating of 1500.
• If a player wins a match, the rating rises; if a player loses a match, the rating drops.
• After several matches, players have a normally distributed rating in a range from 0 to 3000.
Match-ups happen depending on each player's rating. The first implementation looks like this in pseudocode (it's missing some special cases, but the overall logic should be clear). We have a list called QueuedTeams, in which every team that wants to join a fight is enlisted. The queuing system is called whenever a team is enlisted in QueuedTeams:

ratingDifferenceLimit = 150;
for(Team t1 in QueuedTeams) {
    for(Team t2 in QueuedTeams) {
        if(t1 != t2 && abs(t1->rating - t2->rating) <= ratingDifferenceLimit) {
            MatchUp(t1, t2);
            QueuedTeams->RemoveTeams(t1, t2);
        }
    }
}

We ran some tests, and this works very well as long as you have enough teams throughout the ratings. If you only have a small number of teams, they won't get match-ups at high ratings, or they have to wait really long. The simplest way to get more match-ups would be to increase ratingDifferenceLimit, but this makes matches unfairer even when there would be enough players for a close match-up. Basically, I want a dynamic system which matches players at the closest possible rating. In total, it's a problem of three variables:
• Minimizing the rating difference.
• Minimizing the queuing time.
• Maximizing the number of match-ups.
Here comes the first question: are there articles on this problem somewhere on the net? What do I have to look for? Are there books which describe these problems? (Since I did not find anything on the problem using Google, I looked through different queuing problems, for example queuing processes, but none of the systems I could find need a match-up.) PS: Please move this thread if this is the inappropriate forum. [Edited by - myro on October 27, 2010 2:42:12 PM]
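A runnable version of the pseudocode above might look like the sketch below (the Team struct and id field are stand-ins). One fix relative to the original pseudocode: the inner loop must skip the team itself, since a team's rating difference to itself is 0 and would always pass the limit check:

```cpp
#include <cstddef>
#include <cstdlib>
#include <utility>
#include <vector>

struct Team {
    int id;
    int rating;
};

const int kRatingDifferenceLimit = 150;

// Pairs up queued teams whose rating difference is within the limit.
// Matched teams are removed from the queue; matches are returned as
// (idA, idB) pairs. After each match the scan restarts, because erasing
// elements shifts the remaining indices.
std::vector<std::pair<int, int>> match_queue(std::vector<Team>& queued) {
    std::vector<std::pair<int, int>> matches;
    bool matched = true;
    while (matched) {
        matched = false;
        for (std::size_t i = 0; i < queued.size() && !matched; ++i) {
            for (std::size_t j = i + 1; j < queued.size() && !matched; ++j) {
                if (std::abs(queued[i].rating - queued[j].rating) <= kRatingDifferenceLimit) {
                    matches.emplace_back(queued[i].id, queued[j].id);
                    queued.erase(queued.begin() + j);  // erase higher index first
                    queued.erase(queued.begin() + i);
                    matched = true;  // restart the scan over the shrunken queue
                }
            }
        }
    }
    return matches;
}
```

With teams rated 1500, 1620, and 2100 and a limit of 150, only the first two match; the 2100 team keeps waiting, which is exactly the starvation problem described in the post.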
  14. Hi, I want to start making some small 2D games using XNA and, depending on how it goes, move on to some basic 3D concepts. I thought it would be best to buy a book, but there is a serious number of books out there, and truthfully I have no idea which to choose. As for myself: I know C#, basic university math, and I am a decent programmer. The only games I have programmed so far were a Pong clone in C++ with DirectX about 5 years ago, a Snake with GDI, and a Space Invaders clone with WPF. Thus, I am definitely lacking knowledge of and experience with making games. Now the question arises: which book should I buy? Thanks for any suggestions. Myro. PS: If I have missed an already existing topic about this, I am sorry; I couldn't find any via search.