'Week of Awesome 2016' Game Jam at GDNET?


Hi folks!

After speaking with Slicer4Ever, it appears he won't be running a Week of Awesome event this year, and that got me thinking- WAIT!

Ok, here's a bit more context:

Once upon a time (2013), user Cornstalks created a virtual game jam called 'The Week of Awesome' that was open to all members of Gamedev.net. The event was a moderate success, and though Cornstalks did NOT follow up, Slicer4Ever ran the event for the two following years (2014 and 2015).

The event's popularity increased, and so did the prize pool. And well, the jammers had a lot of fun!

Given that the event is left without an organizer, I come to you with a few questions to help determine whether it would be viable to run a 2016 edition.

1 - Would you take part in the event?

2 - Would you organize the event?

3 - Would you finance the event?

4 - Would you judge the event?

Let's see whether this stirs up a few passions!


Mark me down as a 'maybe', with a maybe +1 partner (same as the first two contests).

For those curious, when we competed in previous years, our results were:

First contest: We scored 10th place. :P

Second contest: We scored 2nd place. :o

Third contest (last year): I agreed to be a judge, and my artist punched me for not competing. It hurt. :(

"Would you consider judging the games coming from said event?"

Would you consider getting repeatedly pummeled by an angry artist? I'll pass, thanks.

I'm too pressed for time this summer to help out with the judging this time around, sadly.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

I would be willing to organize/co-organize if no one else wants to, provided there's enough demand. I'd also donate some money to the prize pool. If the judging system is streamlined, I'm also willing to help judge, provided the time investment required is reasonable.

Last but not least, I'd love to participate but would probably only invest two days or so.

Third contest (last year): I agreed to be a judge, and my artist punched me for not competing. It hurt. :(

"Would you consider judging the games coming from said event?"

Would you consider getting repeatedly pummeled by an angry artist? I'll pass, thanks.

I feel you. I spent the 2nd year both judging and participating, which meant I had to go through all of the entries immediately after rushing my own build. That's a no-no zone, plus it felt a bit iffy toward the other participants, even though I did my best to be as objective as possible. Though I finished tied for 4th place, I had originally stated I would not be eligible for the prize pool given the nature of the situation, and the entry I tied with decided to donate their prize as well (solidarity?).

My own post-mortem from 2014 is that judging was tedious. Would love to get a review of the 2015 process and a bit more insight as it would probably help determine whether it was a viable alternative. Care to chime in?

I'm too pressed for time this summer to help out with the judging this time around, sadly.

Perfectly understandable. You've given much to the cause already.

I would be willing to organize/co-organize if no one else wants to, provided there's enough demand. I'd also donate some money to the prize pool. If the judging system is streamlined, I'm also willing to help judge, provided the time investment required is reasonable.

Last but not least, I'd love to participate but would probably only invest two days or so.

Good to hear.

It would be up to the organizer to figure out a judging system, but I'd love nothing more than to help identify a way to make it humane for the judges while still keeping a "fast scoring" system on track. I'm thinking it could also be conveyed through the theme or guidelines. For example, if all of the games are meant to deliver a "1 minute gameplay" experience, that would ensure judging is pretty swift. Some of the adventure games from 2014 and 2015 had a LOT more gameplay than this (my unfortunate entry didn't, but that was because I only had 2.5 days to contribute).

Does "participate" in the first question mean "participate in any way including just judging?" Or does "participate" just mean "make a game for?"

I'll be able to judge again this year if this gets off the ground.

I can also slide a few pesos/gifts into the pot again.

Does "participate" in the first question mean "participate in any way including just judging?" Or does "participate" just mean "make a game for?"

You're correct that the question is flawed, but it does indeed refer to "making a game for" the event.

I have created a separate question for judging.

Thanks for pointing it out! (fixing...)

My own post-mortem from 2014 is that judging was tedious. Would love to get a review of the 2015 process and a bit more insight as it would probably help determine whether it was a viable alternative. Care to chime in?

It was still tedious. Breaking it down so that each judge handled fewer entries was a life-saver, but at the same time, the entries were on average a lot more complex, and a number of them took considerable time to play through.

Having clear examples of the expected quality for each judging dimension could help out - I felt like I spent a lot of time re-reviewing earlier games after later games caused my scoring criteria to move (e.g. I'd score a game highly for graphics early on, then have to revise it down as later entries exceeded expectations).

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

It was still tedious. Breaking it down so that each judge handled fewer entries was a life-saver, but at the same time, the entries were on average a lot more complex, and a number of them took considerable time to play through.

Having clear examples of the expected quality for each judging dimension could help out - I felt like I spent a lot of time re-reviewing earlier games after later games caused my scoring criteria to move (e.g. I'd score a game highly for graphics early on, then have to revise it down as later entries exceeded expectations).

I feel you, I vividly remember doing this myself in 2014.

The problem I see with setting a "bar" is that we just don't know whether it is a valid one. It could be that the bar is too demanding, so every entry lands in the 1-5 range, which wouldn't make much sense, or it could be too low, with all games competing for that last .5 point in a given field. The byproduct would be that if, for example, "sound" was correctly gauged but art wasn't, then audio would become a more valuable rating because its scores would be spread out (3-7 on average), whereas everyone would get roughly the same score (1-3 or 7-10) for visuals, largely limiting the validity of the process.

Splitting the games across judges was done out of necessity, but it also makes the process much less "fair". I remember going through something similar in high school, where we had two math teachers with drastically different exams. Teacher A had a relatively easier exam, whereas Teacher B was more demanding. To an outsider, it looked like class B (the one taught by Teacher B) had poorer results on average, which would have indicated that they were not as good, but as it turned out, this was incorrect.

Measures were then taken to adjust the scores so that the averages of both classes would be roughly the same, but this correction was a leap of faith. Ultimately, no one could determine whether one class was actually learning better than the other. Our assessment back then was that Teacher B was more demanding, and that as a result the average for his class would likely have been higher had both classes taken the same exam, but there's just no way to prove that.

Long story short, if the judges don't overlap sufficiently, the rankings end up being less about who did well and more about which judge each entry happened to get.
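For what it's worth, here is a minimal sketch (in Python, with made-up numbers and names, not anything the contest actually used) of the kind of correction described above: rescaling each judge's raw scores around a common mean and spread so that a harsh judge doesn't sink whichever entries landed in their pile.

```python
from statistics import mean, stdev

def normalize_scores(scores_by_judge, target_mean=5.0, target_spread=2.0):
    """Rescale each judge's raw 1-10 scores to a common mean and spread.

    scores_by_judge: dict of judge name -> {entry name: raw score}.
    Returns a dict of entry name -> list of adjusted scores.
    """
    adjusted = {}
    for judge, scores in scores_by_judge.items():
        raw = list(scores.values())
        mu = mean(raw)
        sigma = stdev(raw) if len(raw) > 1 else 0.0
        if sigma == 0.0:
            sigma = 1.0  # judge gave identical scores; avoid dividing by zero
        for entry, score in scores.items():
            adjusted.setdefault(entry, []).append(
                target_mean + (score - mu) / sigma * target_spread)
    return adjusted

# Hypothetical data: Judge A is noticeably harsher than Judge B.
raw = {
    "JudgeA": {"Game1": 4, "Game2": 6, "Game3": 3},
    "JudgeB": {"Game4": 7, "Game5": 9, "Game6": 6},
}
for entry, scores in sorted(normalize_scores(raw).items()):
    print(entry, round(mean(scores), 2))
```

Of course, this carries the exact same leap of faith as the high-school example: it assumes each judge's pile of entries was of roughly equal quality to begin with.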

If my understanding is correct, last year's competition mitigated this by having the judges overlap (3 judges would score each game, and they did not work as a fixed cohort but were instead randomly assigned games to test)?

Assuming this is the case, it would imply we need a certain number of qualified judges to go through the entries and ensure a proper distribution, and this number should scale with how many entries end up being delivered.
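To make the scaling part concrete, here's a rough sketch of what random assignment with overlap could look like: each entry gets a fixed number of judges drawn from a shared pool, and the pool grows with the number of entries so nobody exceeds a workload cap. The specific numbers (3 judges per game, 10 games per judge) are placeholders of mine, not figures from the previous events.

```python
import math
import random

JUDGES_PER_ENTRY = 3      # every game is scored by 3 different judges
MAX_GAMES_PER_JUDGE = 10  # cap on any single judge's workload

def judges_needed(num_entries):
    """Smallest judge pool such that no judge exceeds the workload cap."""
    total_reviews = num_entries * JUDGES_PER_ENTRY
    return max(JUDGES_PER_ENTRY, math.ceil(total_reviews / MAX_GAMES_PER_JUDGE))

def assign(entries, judges):
    """Give each entry JUDGES_PER_ENTRY distinct judges, always drawing from
    the least-loaded judges so the workload stays evenly spread."""
    load = {j: 0 for j in judges}
    assignments = {}
    for entry in entries:
        # Sort by current load; the random key breaks ties randomly.
        pool = sorted(judges, key=lambda j: (load[j], random.random()))
        panel = pool[:JUDGES_PER_ENTRY]
        for j in panel:
            load[j] += 1
        assignments[entry] = panel
    return assignments

entries = ["Game%d" % i for i in range(1, 21)]  # pretend 20 entries were delivered
judges = ["Judge%d" % i for i in range(1, judges_needed(len(entries)) + 1)]
for entry, panel in assign(entries, judges).items():
    print(entry, panel)
```

With those placeholder numbers, 20 entries would need a pool of 6 judges and 40 entries would need 12, which is the kind of scaling an organizer would have to plan for before sign-ups close.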

