WARNING: This might sound like a sore loser, but it's not! I am simply offering some constructive criticism.
After spending two years doing this competition I have noticed some very odd things. This post is an attempt to curtail those things and help create a more "professional" style of game jam. First off, I want to say that I am very pleased we had one of these for this community. Having looked at a few others that have been going on, this one is run by people who love and care for this community, which is why I know what I am about to say might sting a bit.
The judging for this competition is seriously off.
To whatever judge wants to comment I would greatly appreciate any insight you might have into the workings of the following:
- What scale did you use to judge each category (by scale I mean metric)?
- Did you compare each game to AAA standard, to each other, or to your own idea of what a game jam game should be?
- What is your model for how points are assigned?
Some of my observations are as follows:
- There are no clear criteria for each category or for what earns each score. When you want to judge the quality of something, you need to define the criteria clearly so that the people making it can aim for them. Without this you are just shooting in the dark and hoping to God you hit the target. The problem is that this leaves a lot of room for opinion in these categories rather than actual merit. The only category that should be judged on opinion is the 5 points at the end.
- There is no way to judge the difficulty of the game attempted for this event. For example, we went with 3D instead of 2D because we thought it would present more of a challenge and create a better end product. We could have gone with a 2D side scroller like all the other entries and had something far more polished. There should be a difficulty category that weights how hard the attempted game was to complete. It could be worth 10 pts or so, and it would reward teams for making riskier choices that could produce a better end game, instead of making what everybody else is doing (not to bash anybody who went with side scrollers).
- I honestly feel that games are being judged inconsistently based on opinion. For each category, take the game that best exemplifies it and give it a score of 20, then judge the rest of the games relative to that best-in-category entry. (This could also double as a metric for a "Best in Category" award.) For example, if a game with epic graphics scored a 20, you can't give the same score to a game with 2D art thrown together in 3 minutes. If you do, it discourages people from putting extra work into those categories; they will just meet the bare minimum needed to get the score. Furthermore, I saw art that was well below the best in the competition (not just talking about mine, btw) receive high scores. That was almost a slap in the face. So are the judges scoring the art on personal feelings, or on some kind of metric? (SEE POINT 1)
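To make the best-in-category idea concrete, here is a minimal sketch (hypothetical function and entry names, not anything the judges actually use) of anchoring the top entry at the full 20 points and scaling everyone else relative to it:

```python
def relative_scores(raw_scores, max_points=20):
    """Map raw judge scores to a 0..max_points scale anchored at the best entry.

    raw_scores: dict mapping entry name -> raw score (any consistent unit).
    Returns a dict mapping entry name -> points, with the top entry at max_points.
    """
    best = max(raw_scores.values())  # the "Best in Category" benchmark
    return {name: round(score / best * max_points, 1)
            for name, score in raw_scores.items()}

# Hypothetical art-category raw scores for three entries:
art = {"EpicGraphicsGame": 95, "SolidSideScroller": 70, "ThreeMinuteArt": 20}
print(relative_scores(art))
# -> {'EpicGraphicsGame': 20.0, 'SolidSideScroller': 14.7, 'ThreeMinuteArt': 4.2}
```

Under this scheme the hastily assembled entry can no longer tie the benchmark entry, which is exactly the incentive the point above is asking for.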
I did not expect to get first place this year, especially after we had some issues with the AI and the final build (6 hours wasted). However, 14th place left a bad taste in my mouth. I hope the judges can take these comments with an open mind and realize that some of us want clarity. Maybe I am the only one tooting that horn, and if so just ignore me, but I don't think I am. We had fun for sure, but next year I would really like to see more clearly defined criteria for the categories.
Constructive criticism doesn't sting. From my perspective this competition is still in its infancy, and the level of administration and handling has improved greatly over last year's (I am ignoring WoA I). I expect that by next year's competition the issues identified in WoA III will be addressed, not necessarily to everyone's liking, but that is always the case when making changes.