

Member Since 10 Dec 2008
Offline Last Active Today, 07:39 AM

Posts I've Made

In Topic: 'Week of Awesome 2016' Game Jam at GDNET?

Today, 06:05 AM

Orymus3 / Slicer4Ever - contact me if this is something you want to move forward with. We can make this an official GameDev.net virtual event and use our past experience with contests / judging.


p.s. anyone still remember the 4 Elements Contest?

Got'cha, still giving this a bit more time to see where folks stand regarding the competition.

So far, looks to me like there is enough interest to hold it, but that some changes would be welcomed.

Trying to get a feel of what this summer will be like for me too.


4 Elements Contest? Can't say this rings a bell :(



Hey there folks, so I'm still uncertain if I want to run this year's. While last year's ran mostly fine, a few folks took issue with the scores they received, and the number of people talking about being disappointed in how they did really left a sour feeling about how things turned out. Even so, if I don't run this year's, I figured I should outline the changes I had planned for it.


First off, the prize pool. I get the feeling that too many folks were putting a lot more stock into winning than they should have. I have no issue with having prize money, but I don't think I want it to be as big as it was last year, and I certainly have no plans to contribute as much to it as I did last year. I will also likely remove the prizes for feedback and funnel that into the main pool, as it didn't seem many people were interested in that prize.


Secondly, the theme was going to be branched out to have more options. A big thing throwing people off was that the theme was too restrictive on ideas, and I do agree with this. I was planning to offer 4 themes this year, of which at least 2 must be implemented (basically 5 points for each theme you work into your game). This would likely make each theme simpler, being 1 or 2 words, and likely split between two gameplay themes and two graphical themes.


Thirdly would be how judging is handled. I was toying with the idea that having additional judges is still good, but only taking the top 3 scores in each category. Meaning if we have a total of 4 judges on each game, that game will not be pulled down because one judge didn't see it as being as good as the others did.
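To illustrate the idea (this is just a hypothetical sketch, not anything from the actual contest rules), keeping the top 3 of 4 judge scores per category could look like this:

```python
# Hypothetical sketch: average only the highest `keep` scores from the
# judges for a category, so a single low outlier can't drag an entry down.
def category_score(judge_scores, keep=3):
    """Average the highest `keep` scores from a list of judge scores."""
    top = sorted(judge_scores, reverse=True)[:keep]
    return sum(top) / len(top)

# Four judges, one low outlier: the 3 gets dropped.
# category_score([8, 9, 7, 3]) -> (9 + 8 + 7) / 3 = 8.0
```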


Finally would be removing the sponsors concept. I don't think it worked very well. Basically, if you want to contribute to the pot, you should feel free to do so, and you will be recognized for it, but not in any way that's tied to each game.

My feeling from 2015 was that the increased prize pool made it a lot more competitive and a bit less 'friendly' than 2014, but that was just my impression. I tend to agree that a smaller prize pool would probably be desirable, and it seems like this is in line with the poll results we currently have as far as funding is concerned.


Your theme idea isn't bad, but I believe it should have some form of main theme. That being said, I took part in a game jam years ago where you got bonus points for including certain elements. Some of them were related to community inside jokes, but some were outright challenges, and it was up to the judge to determine whether you had succeeded at making them 'work' (how easy is it to talk about underwear in a high fantasy game without falling straight into trashy comic territory!)


Judging should work. Question still remains to see how many we end up with...


I was not opposed to sponsoring per se, but it seems you've had a bad experience with that, so I'll defer to your judgment here.



I seem to remember there being some discussion even among the judges as to what the actual scores should represent.

I think having something written up-front as to what the different scores mean for each category might be beneficial, both for judges and contestants. I can't actually remember if this was done last time or not...

At least it might help negate some disappointment concerning expectations and results.


I also think that any disappointment was, for the most part, short-lived -- personally I would definitely have liked to end up higher placed (and thought I would), but given a day or two to let things settle I didn't have any strong issues with the results :)


For me, I might possibly want to participate and make something, but I don't think I have the resources to offer anything more than that this time around.


I was leaning that direction too. I feel that if each category was rated 1-5 or something similar (and THEN weighted for final scoring) it would create consistency, and we could describe 1-5 with items such as '3 - Passable: the element worked, but it did not carry the game to a compelling experience', etc.
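A minimal sketch of what that weighted scoring might look like (the category names and weights here are made up for illustration, not proposed contest values):

```python
# Hypothetical weights per category; each judge rates categories 1-5,
# and the final score is the weighted sum of those ratings.
WEIGHTS = {"theme": 0.3, "gameplay": 0.4, "graphics": 0.15, "audio": 0.15}

def final_score(ratings):
    """ratings: dict mapping category name -> 1-5 rating from a judge."""
    return sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)

# final_score({"theme": 3, "gameplay": 4, "graphics": 5, "audio": 2}) -> 3.55
```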




I've actually been thinking about the competition of late, and do intend to take part if it takes place this year. ^_^



It will take place even if I have to return to reality and shovel this competition into everyone's infile :)


@Orymus I will send you an email once I am much less sleep-deprived, which given my current schedule will be in two days' time.



Will await your PM :)


At this point, I'm unclear whether I'll be organizing this, as I'm still unsure where my availability will land, but I'd really like to get to a point where whoever ends up organizing this has an easy task. Slicer has had to do a LOT of work the past two years to make this happen, and I'm sure I can help streamline the process a bit, if not more.

In Topic: Stealth/Action SideScroller of a Ninja Chameleon with Shadow Abilities

Yesterday, 11:14 AM

FYI: broken image links in the OP.

In Topic: Salary Research

Yesterday, 11:13 AM

http://www.glassdoor.com is another site; it has a Salaries section. I don't know how accurate the numbers are, but I've heard it's a useful site.



I would imagine you've most likely come across the yearly survey, which is good in general but doesn't give an accurate assessment on a per-company basis.

If there are Glassdoor entries for a business you are interested in, they are likely a useful source.

Of course, it relies on people actually being honest which, despite the anonymous nature of the exercise, may not be the case.

If there are several entries, it is easier to average them and determine expectations, whereas 1-2 entries might be hard to work with.

Still a valuable tool for me.

In Topic: 'Week of Awesome 2016' Game Jam at GDNET?

25 May 2016 - 06:45 PM


Your spreadsheet would indicate there were only 6 full entries? I imagine this is a sample of the full entry list?

Do you have the full count?

In Topic: 'Week of Awesome 2016' Game Jam at GDNET?

25 May 2016 - 10:29 AM


My own post-mortem from 2014 is that judging was tedious. Would love to get a review of the 2015 process and a bit more insight as it would probably help determine whether it was a viable alternative. Care to chime in?

It was still tedious. Breaking it down so that each judge handled fewer entries was a life-saver, but at the same time the entries were, on average, a lot more complex, and a number of them took considerable time to play through.

Having clear examples of the expected quality for each judging dimension could help out - I felt like I spent a lot of time re-reviewing earlier games after later games caused my scoring criteria to shift (i.e. I'd score a game highly for graphics early on, then have to revise it down as later entries exceeded expectations).



I feel you, I vividly remember doing this myself in 2014.

The problem I see with setting a "bar" is that we just don't know whether it is a valid one. It could be that the bar is too demanding, and every entry would be in the 1-5 range, which wouldn't make much sense, or it could be that it is too low, having all games competing for that .5 point in a given field. The byproduct would be that, if, for example "sound" was correctly gauged, but not art, then audio would become a more valuable rating as it would be spaced out (3-7 on avg) whereas everyone would roughly get the same score (1-3 or 7-10) for visuals, largely limiting the validity of this process.


Breaking the games across all judges was something that was done out of necessity, but it also makes things much less "fair". I remember going through something similar in high school, where we had 2 math teachers with drastically different exams. Teacher A's exam was relatively easier, whereas Teacher B's was more demanding. To an outsider, it looked like class B (associated with Teacher B) had poorer results on average, which would have indicated that they were not as good, but as it turned out, this was incorrect.

Measures were then taken to adjust scores so that the averages of both classes would be roughly the same, but this correction was done on a leap of faith. Ultimately, no one could determine whether one class was actually learning better than the other. Our assessment back then was that, since Teacher B was more demanding, the average for his class would likely have been higher had both classes taken the same exam, but there's just no way to prove that.
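The kind of correction described above, shifting one group's scores so both groups end up with the same average, can be sketched in a few lines (purely illustrative; this is not how the contest actually adjusted anything):

```python
# Hypothetical mean-matching correction: shift every score in group B
# so that group B's average equals group A's average. Note this assumes
# both groups are equally skilled, which is exactly the leap of faith
# described above.
def match_means(scores_a, scores_b):
    """Return scores_b shifted so its mean equals the mean of scores_a."""
    mean_a = sum(scores_a) / len(scores_a)
    mean_b = sum(scores_b) / len(scores_b)
    offset = mean_a - mean_b
    return [s + offset for s in scores_b]

# match_means([6, 8], [4, 6]) -> [6.0, 8.0]
```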


Long story short, if judges don't overlap sufficiently, the ranks would be less about who did well than about which judge a team ultimately got.

If my understanding is correct, last year's competition mitigated this by having judges overlap (3 judges would score each game, and they did not work as a cohort, but were instead randomly assigned games to test)?

Assuming this to be the case, it would imply we need a certain number of qualified judges to go through the entries and ensure proper distribution. And this number should scale based on how many entries end up being delivered.
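As a back-of-the-envelope estimate of that scaling (the per-judge capacity here is an assumption, not a figure from any past contest):

```python
import math

# Hypothetical estimate: if every game must be scored by `per_game`
# judges and each judge can reasonably handle at most `per_judge`
# games, how many judges does a given entry count require?
def judges_needed(entries, per_game=3, per_judge=10):
    return math.ceil(entries * per_game / per_judge)

# e.g. 25 entries, 3 judges per game, 10 games per judge:
# judges_needed(25) -> ceil(75 / 10) = 8
```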