About this blog
But tonight, on this small planet, on Earth, we're going to rock civilization.
Entries in this blog
Well, I could tell you, but maybe it'd be easier just to show you.
(Let me know if you get any errors out of it. I'm aware of two issues at the moment: one, that ads don't load in IE; and two, that sometimes a page displays a generic 'something went wrong' message which goes away when you refresh. I'm fairly sure the second is something to do with an idle timeout somewhere because it only happens after nobody's touched the pages for a bit).
More to come.
EDIT: Here's another one.
I've been quiet for a while now, for a number of reasons; the end of my degree is one, and shifts in my personal life are another. I'm a very different person to how I was a year ago. But those aren't the reasons you're most interested in, are they? [smile]

Work on V5 has kicked up a notch. We've now contracted the services of a professional designer, with whom I'm meeting once a week - and these meetings are epic. He and I spent two hours just going over the top menu bar. At our last meeting, he showed me his first 'renders' of the site (actually mock-ups of how it will look in the browser). I'll be sharing bits of these renders with you today and in the coming days.

On the back end, not much has changed, but as the user experience solidifies it becomes clearer and clearer how the back end will support it. Search, for example, is not just going to have to be very flexible, but will actually be proactive - the results of common searches will be continually maintained and updated as content changes, instead of only updating in response to user requests. Content control will be increasingly decentralised. There will be an overall shift away from types of content - discussion threads, articles, and so on - and towards what the content actually is. The site will feel like more of an integrated whole, instead of a collection of different sections that are loosely tied together. And the site's content will be even less bound by the www.gamedev.net domain... Anyway, that's enough blather for now. Let's get on with some pictures!

ShareThis, Tagging, Rating, and Notifications

Today I'm going to talk about the UI for tagging and rating content (plus a couple of other things). This is UI that will be present for every piece of content on the site - be it articles, forum threads, even user profiles - so it's important that we get it done both early and correctly.
This is what we're thinking it might look like: it'll sit in the top right corner of a page, below the banner ad at the top. There are four different parts here, so let's go through them in turn.

ShareThis

You've probably seen a button like this before - we're going to be integrating the ShareThis button into our pages. The button, if you've not seen it before, provides quick links to share whatever you're looking at with a large number of social networks and sharing sites - Facebook, Twitter, Digg, Delicious, Reddit, StumbleUpon, and more. There are something like 48 supported services in total, and they add more without us even having to do anything. So, this will make it a lot easier for you to share interesting threads, articles, journal entries, etc. with people in your venue of choice.

Rating
We're going to integrate a five-star rating into every content item. In the diagram above it says 'User rating' - that's because my picture there was cropped from a User Profile render - but it'll be more useful for things like articles or forum threads (we may drop it from the user profiles). By default, the number of stars shown will be an aggregated figure drawn from across the whole community. If you click on the 'Rate them' button, it changes mode: When popped-out like this, it'll display the rating you've assigned to the content item (if any), and will let you click on a star to assign a new rating.
Tagging

This is an interesting one, as tagging's going to be such an important part of the new site's usage patterns; we've spent a lot of time talking about it, asking questions, considering hypotheticals and so on. Let's look at the UI that pops up when you hit the 'Add Tag' button.

The top part of the panel displays tags that are presently applied. Tags in blue are tags that you yourself have applied, while tags in grey are tags that other people have applied - with different shades of grey indicating how many people have applied each tag. If you agree with a tag that other people have applied, you can just click it, and it'll turn blue to indicate that you're applying it too. If you change your mind about a tag, you can hit the cross on its right end, and it'll either fade back to grey (if other users are still applying it) or disappear (if nobody else was applying it). The lower part of the panel is for adding other tags. You can type a tag name into the text box there (which will suggest tags based on what you're typing), or click any of the 'recent tags' at the bottom - those are tags that you've recently applied to other content items.

Notifications

The last part of the UI today is the Notifications panel. This is so simple that it doesn't even have a popout: you simply click on the tag-like things to toggle the relevant notifications - click 'EMAIL' to turn on email notifications, 'IM' to get instant messenger notifications, 'CHAT' to get IRC notifications, and so on. The exact set of options that will be available as notifications is still to be worked out, but you get the idea.

Conclusion

So, that's it for today. What do you think? Next time I'll start talking about the top navigation bar.
So, one of the things that we at Gamedev Towers want to bring to the site in the future is a tagging system. I've spent the day so far working on a basic prototype.
Tags are very easy to implement, but difficult to design. Here's the basic idea of tags:
Allow users to attach a bunch of tags to things.
The basic notion is that tags can be used to establish a 'semantic network' of content, making information easier to find. Instead of taking a user's search phrase and matching it against all the text in your database, you take each chunk of text at author-time and pull the keywords out then, making searches faster later. Furthermore, rather than trying to pull the keywords out automatically, you encourage the author to provide the keywords themselves.
Building on that idea is the notion of incidental search - things like "related content." You do a search for the tags that the current item is annotated with, ignore the current item itself, and offer the results as a "see also" section. For this to work well, you need to do more than just basic string matching on your tags - synonyms and spelling mistakes would cripple such a simple implementation.
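As a rough sketch of the incidental-search idea (the function name and data shape here are mine, purely for illustration), related content can be ranked by counting shared tags with the current item:

```python
def related_content(current_id, items, limit=3):
    """items maps item id -> set of tags; returns ids of other items,
    ordered by how many tags they share with the current item."""
    current_tags = items[current_id]
    scored = []
    for item_id, tags in items.items():
        if item_id == current_id:
            continue  # ignore the current item itself
        overlap = len(current_tags & tags)
        if overlap:
            scored.append((overlap, item_id))
    # most shared tags first; break ties by id for stable output
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [item_id for _, item_id in scored[:limit]]
```

This is exactly the kind of implementation that synonyms and typos would cripple - 'gfx' and 'graphics' share no overlap here - which is why the string match needs to be smarter than a set intersection.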
Who gets to tag content, and at what granularity should content be tagged? Youtube allows the author to set the tags, and only per-video. Del.icio.us allows each user to provide their own set of tags for a bookmark, but they're only per-bookmark. Most blogs, on the other hand, only allow the author to tag, but tag each individual post. Which approach is right for GDNet? Do we tag posts, threads, entire forums? Do we rely on the authors to tag their content correctly, or do we encourage the community to do it en masse? How do we structure the system so that it can't be broken by incorrect tagging?
The model employed by del.icio.us is the one that I think seems the most promising, at least in part. Del.icio.us, if you don't know it, is a social bookmarking site - you store your bookmarks in the cloud, annotated with descriptions and tags, and other people can browse or search through them. Now, if a site is good, there's a reasonable chance that lots of people will all bookmark it independently - and they'll use similar tags. Once 10 people have bookmarked the same resource, you'll have a pretty good idea of what the correct tags for it are. Once 100 people have done it, you're solid; you'll have covered most synonyms, spelling mistakes, etc. Languages are a thornier issue but I'm not super concerned about addressing that quite yet.
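A minimal sketch of that convergence effect (names and the threshold value are my own assumptions, not anything del.icio.us actually does): once enough independent users have tagged the same resource, keeping only tags applied by some fraction of them filters out one-off typos automatically.

```python
from collections import Counter

def consensus_tags(bookmark_tag_sets, min_fraction=0.2):
    """Given each user's tag set for the same resource, keep only tags
    applied by at least min_fraction of users - one-off misspellings
    get smoothed out as the number of taggers grows."""
    counts = Counter(tag for tags in bookmark_tag_sets for tag in set(tags))
    threshold = max(1, min_fraction * len(bookmark_tag_sets))
    return {tag for tag, n in counts.items() if n >= threshold}
```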
So, we could use that model. We actually already have a bookmarking system, so that would be the logical thing to expand. Let people quickly add threads - or even individual posts - to their bookmarks to form a "personal search store" of useful content. That would be a good starting point for guiding searches, even for those people who don't bookmark anything. We could even add support for bookmarking external links. And if we were to implement something like del.icio.us, why would people use it instead of just using del.icio.us? Integration. Del.icio.us doesn't do things like tracking when pages update; while for us, providing last-post information with each bookmarked forum thread is trivial. We have insider knowledge on most of the content.
So that would be a start. Would it be enough? I'm not sure, but I think probably not. Under that system, some content would acquire tags that could aid later searches - that works out quite well, in fact, because the content that people tag will be the content most likely to be useful. Still, it leaves a lot of content untagged, and doesn't help change the way people find content in the first place.
One small extension to the system might improve things significantly: when a user posts a new content item, consider it "auto-bookmarked." While posting, have the user set up the tags that it should use. By folding this into the bookmarking system - not explicitly, of course, but internally - all new content items are guaranteed to receive tags. Question is, if this were enforced - posters had to supply tags - would they actually use it? It's an approach that leads to people using tags like "asdfasdf" just to satisfy the software. That's not helpful. There are two things that may help, though.
The first is automatic tag suggestion. It's a nontrivial task, but it may be possible to take a content item - I'm thinking primarily of text here - and identify key words automatically. To take a page out of Google's book, extra weight would be given to things like the title or to hypertext links. Clicking a few tags in a "suggested tags" list is easier than typing junk into a text field, so while people might apply the wrong tags, it would help stop the system getting polluted with junk tags. Automatic tag suggestion is also the only realistic way of generating tags for all our archived content...
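A very crude sketch of what tag suggestion might look like - the stopword list, weights, and function shape are all hypothetical placeholders, not a real extraction algorithm:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on"}

def suggest_tags(title, body, link_texts=(), top_n=5):
    """Rough keyword extraction: count body words, then give extra
    weight to words appearing in the title and in link text."""
    def words(text):
        return [w for w in re.findall(r"[a-z]+", text.lower())
                if w not in STOPWORDS and len(w) > 2]

    counts = Counter(words(body))
    for w in words(title):
        counts[w] += 5      # title words weighted heavily
    for text in link_texts:
        for w in words(text):
            counts[w] += 3  # anchor text weighted too
    return [w for w, _ in counts.most_common(top_n)]
```

A real system would want stemming, a proper stopword list, and probably corpus-wide frequency data (so that 'game' isn't suggested for everything on a game development site), but the weighting idea is the same.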
The second is to take advantage of the path the user took to creating the content item. Take the saved search or forum that gets used to post a new thread: if that forum has some tags associated with it, then the new thread could automatically have those tags applied. That would ensure that anything posted in Graphics Programming and Theory would at least get a "graphics" tag, for example. This leads neatly to the next aspect of the system...
Currently there are a number of predefined forums on GDNet - "For Beginners," "Graphics Programming and Theory," and so on. These are categories for topics that have been defined by the GDNet Overlords over a long period of time, and are fairly resistant to change - new forums are only created in response to a surge of discussion on one subject that distorts the focus of an existing forum and drowns out discussion about other topics.
But who's to say that we're right? Many of the forums have poorly defined boundaries - where do you draw the line between General Programming and Game Programming, after all? Or Math and Physics and Graphics Programming and Theory? We don't permit cross-posting, so if you've got something in the grey area, you just have to pick one and go with it, likely costing you the expertise of people in the other one. Ideally your topic should be marked (*cough* TAGGED *cough*) for both forums.
Thing is, if we've got all our content tagged, rigid categories aren't necessary. Instead we have the concept of saved searches - a set of search parameters, the results of which are used to generate a set of topics. We flip things upside down and allow topics to self-select into "forums" instead of having to explicitly associate them. Want a forum dedicated entirely to shadow-mapping? Just set up a saved search for that. And of course, anything that the search can do, this can do too - for example, you could edit your search to exclude topics started by a particular poster that you don't like. If you start connecting it to user profile data, too - like, say, a user's stated "proficiency level" in given topics - then you can quickly construct a beginners-only (or experts-only) view.
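To make the saved-search idea concrete, here's a sketch of what such a filter might look like (the thread dict shape and parameter names are mine, purely illustrative):

```python
def saved_search(threads, require=(), exclude_tags=(), exclude_authors=()):
    """A 'forum' as a saved search: threads self-select by their tags
    instead of being explicitly filed into a fixed category."""
    results = []
    for t in threads:
        if not set(require) <= t["tags"]:
            continue  # missing a required tag
        if set(exclude_tags) & t["tags"]:
            continue  # carries an excluded tag
        if t["author"] in exclude_authors:
            continue  # posted by someone you've filtered out
        results.append(t)
    return results
```

A "shadow mapping forum" is then just `saved_search(threads, require=["shadow-mapping"])`, and personal tweaks like excluding a disliked poster are one extra parameter rather than a new site feature.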
There's obviously still a lot of value in having predefined categories. And that's one of the great things - we can still keep those, even with a search-based system; a saved search for the "offtopic" tag, titled "GDNet Lounge", and you've got your Lounge. It's self-supporting, too, as I noted above - if you go to the create-thread interface via that Lounge saved-search, then your topic will receive the "offtopic" keyword automatically, so what you've posted in the Lounge will appear to stay there.
There are other details I'm thinking about. For example, should all tags be considered equal? This post is mostly about tagging, somewhat about GDNet, a bit about forum structure... yet just tagging it "tagging, gdnet, forum structure" wouldn't capture that information. It would have to be a simple UI, like a slider bar for each tag, but perhaps users could choose to specify weights for their tags if they so desire. You no longer have to decide whether or not it's worth using a particular tag, you can just use it but at a low weighting.
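Aggregating those per-user weights could be as simple as averaging them - a sketch, with invented names and a 0-to-1 weight scale assumed:

```python
def aggregate_weighted_tags(taggings):
    """Each element of taggings is one user's list of (tag, weight)
    pairs, weight in [0, 1]. The community weight for a tag is the
    mean of the weights given by the users who applied it."""
    totals, counts = {}, {}
    for user_tags in taggings:
        for tag, weight in user_tags:
            totals[tag] = totals.get(tag, 0.0) + weight
            counts[tag] = counts.get(tag, 0) + 1
    return {tag: totals[tag] / counts[tag] for tag in totals}
```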
I realise this is a long post. If you made it this far, well done! Care to round off your journey by leaving me some feedback?
One of the questions from a previous entry was what's happening to user ratings in V5. I don't have funky screenshots to show you this time, but I'll talk about what the plan is.
The present system
The present user rating system, visible under every post as a number, was created to solve a set of problems:
How do users distinguish the people that should be listened to from the people that shouldn't?
How do we identify users who are contributing to the site and community?
How do we identify users who are detracting from the site and community?
These problems were all solvable, but solving them required a lot of time and effort. We wanted to shift away from solutions that relied on users and moderators spending lots of time watching site activity, so the solution we chose was to recruit the entire userbase, by giving everybody a means to indicate who should and shouldn't be listened to. That, in turn, needed some kind of balancing to determine which people were good judges - which is why higher-rated users have a larger effect on the ratings of others than lower-rated users.
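To illustrate that balancing idea (this is a hypothetical formula of my own, not the actual GDNet rating maths), a vote's effect can simply be scaled by the rater's own standing:

```python
def apply_vote(target_rating, rater_rating, direction,
               base_step=1.0, scale=1000.0):
    """Move the target's rating by a step proportional to the rater's
    own rating, so well-regarded users count for more.
    direction is +1 (rate up) or -1 (rate down)."""
    weight = 1.0 + rater_rating / scale
    return target_rating + direction * base_step * weight
```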
It's true that in general, the rating system has worked. The top-rated users are, pretty much uniformly, good contributors to the site. The lowest-rated users are generally incoherent, in(s)ane, and unwanted - though I think that exceptions exist. And users do pay some attention to the ratings of those they read, though only around 1% of registered user accounts actually filter out posts with ratings below a given threshold.
We do definitely see some undesirable behaviour. For example:
People getting upset about their rating dropping a few points and posting threads about it. This wouldn't happen if people were less sensitive, of course, but we have to face the fact that they are this sensitive. It doesn't help that there's not much one can tell those people except "be nicer."
Bandwagoning - people voting somebody down partly because they've got a low rating, and That's What This Thread Is All About Anyway. Group dynamics can be bizarre at times.
People who are great technical contributors, ending up with low(er) ratings because they got a bit ranty in the Lounge, and therefore start to be ignored in technical discussions.
Similarly, people who are really funny in Lounge threads get high ratings, and then when posting in technical threads perhaps get given more authority and credit than they're due.
People who get low ratings can have trouble recovering that rating, partly because people aren't inclined to vote low-rated users up, and if the filters are in play then their posts won't even get seen. This usually leads to the low-rated poster either creating a new account (which is a policy violation) or just leaving the site altogether. Sometimes they'll stay and just not care about their rating, but whether or not they care doesn't change the fact that we then have a user who is making positive contributions but has a low rating.
At the heart of the current rating system's design rests a few fundamental assumptions. Firstly, it assumes that if a user is good in any one way recognised by the community, then they're good in all ways - or at least are smart enough to disclaim themselves in areas where they're not good. Secondly, it assumes that users will fully consider a user and the contributions they've made to the site as a whole before rationally rating them. Thirdly, it assumes that users have good ideas about how to respond to changes in their rating - that they don't just keep doing exactly what they've been doing (albeit with an added air of bafflement and indignation) expecting a different result.
It also contributes to a bad philosophical assumption on the part of the user, and that is: that something is right because a particular person said so. Smart users won't read the ratings in this way; but some users will, when given two answers to their question, pick the answer from the higher-rated user because the user is higher-rated rather than because the answer is better.
None of these assumptions are good. They're true enough of the time that we can point to some corroborating accounts and say, "look, the system works!" but that doesn't tell us whether the system works as well as it could do.
I'm the highest-rated user on this site, so it's not something I consider lightly [grin] but in V5 I'm planning to replace the present rating system with an approach that is less susceptible - albeit not totally immune - to the above problems.
The V5 Rating Strategy
The first problem I set out to solve was this: How do we make the rating better convey the ways or areas in which a person is good?
The solution to this one seemed fairly obvious. A mechanism by which users can express their support of a person in arbitrary, user-defined categories? Sounds like a job for tagging to me! By letting users tag users as another kind of site content, we go from having a single rating axis, to as many axes as you want - be they subject-area tags like 'Python' or 'object oriented,' or style tags like 'funny' or 'friendly.' Reconciling the different ways users tag content is already something the tagging engine has to do.
Immediately this also defeats the assumption that 'good in one area == good in all areas.' It becomes very easy to identify when a user is participating in something that matches their tags - i.e. when they're talking about what they're good at.
How do we defeat the second assumption - that users will think long and hard before selecting tags for a user? In reality, people don't do that - they read one post, have a strong reaction to it, and then rate accordingly; they don't go "well, this post is obnoxious, but maybe the guy's just having a bad day. I'll check out his other stuff to be sure." If we embrace the strong-reaction-to-a-single-post idea instead of denying it, what we get is: Let people express that reaction with a single click, and then aggregate those reactions to get a feeling for where the user is most well received.
The way this'll be implemented will be via a 'thanks' button on every content item that a user can contribute to. It lets you express that strong reaction quickly. Then, over time, the posts that a user is 'thanked' for will start to contribute their tags to the user - if the user receives lots of 'thanks' in threads that are tagged 'Python performance pygame' for example, then they'll start to acquire those tags themselves. This also gives users more feedback on what they're doing right.
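A sketch of that accrual mechanism - the data shape and function are illustrative only - where each 'thanks' contributes the tags of the thread it happened in:

```python
from collections import Counter

def user_tags_from_thanks(thanked_posts, top_n=3):
    """thanked_posts is a list of (thread_tags, thanks_count) pairs for
    one user's posts; the user's profile tags are whatever they're
    thanked under most often."""
    counts = Counter()
    for thread_tags, thanks_count in thanked_posts:
        for tag in thread_tags:
            counts[tag] += thanks_count
    return [tag for tag, _ in counts.most_common(top_n)]
```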
Will there be a 'No thanks!' button? I'm not sure, but I think probably not. If you don't like a contribution, just don't thank the author. If it's really necessary, you can still tag the author explicitly, or even report the post to a moderator.
How do we deal with the fact that a user's expertise will change over time? Maybe they were a game programming guru 10 years ago, but they've not kept up and their advice is out of date now. This is a fairly simple one, actually: have tags 'decay' over time. Tags that are still frequently applied to a user will 'refresh' and will decay more slowly than tags that aren't. This also solves the 'idiot' problem - how to handle people tagging each other as 'idiot' - because if the user stops being an idiot, the tag will fade away; and it mitigates the lack of a 'no thanks' button, because posting without receiving thanks will cause your tags to fade away.
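Decay like this is naturally modelled as exponential falloff - a sketch with an assumed half-life, where re-applying a tag simply resets the clock:

```python
def decayed_weight(weight, days_since_applied, half_life_days=90.0):
    """Exponential decay: a tag loses half its weight every half-life.
    A tag that gets re-applied resets days_since_applied to zero,
    which is what 'refreshing' means here."""
    return weight * 0.5 ** (days_since_applied / half_life_days)
```

So an 'idiot' tag that nobody re-applies fades to irrelevance within a few half-lives, while a 'graphics' tag that keeps being refreshed stays near full strength.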
How do we get people to actually use this stuff? That's one of the bigger problems with tagging in general. Step one is to make things as easy to use as possible - single-click to 'thanks' a post, two clicks to get to adding more complex tags. Step two is to get users to at least tag their own stuff; users will be encouraged to 'self-assess' by tagging themselves, to tag their own threads and entries, and so on. Step three is to incentivize. Now, there's a limited amount we can do here - we're not about to start paying people to tag content. What you saw in my last post, though, was the 'badges' system in userboxes; what we can quite easily do is grant a badge to people who tagged 100 content items in the past month, or something like that.
Using the output
Lastly, how do we help users find the best possible content, instead of wasting their time with incoherent in(s)anity - without encouraging them to trust an answer just because it's from a highly rated user? This is a balancing act to be sure, because most of the time the best content is produced by the high-rated users.
The first trick here is to make the way that ratings are displayed be subtle; no more four-digit numbers on each post. Instead, we're considering things like changing the background colour of the post, or the thickness of the post border, to indicate when a user is strongly aligned (tagged the same way as) a thread. Making the display subtle in this way will still make the post stand out a little in the thread, without providing such a clear and definitive thing that people can get overexcited about.
What we will probably display clearly on a post is the number of times it's been thanked (perhaps only within the past X weeks). This makes the number that people latch onto be about individual posts, rather than about users, and that's a lot safer - posts are easier to talk about without people taking things personally.
The second trick is to use the information on a broader level to bias search results. When you're searching for content on a particular topic, the search can elevate threads that have good alignment, or that have lots of 'thanked' posts in. This is still sort of acting on this idea that that content will be right 'because a smart person said it,' but by elevating it to the per-thread level instead of the per-post level, lower-rated users will still have a good opportunity to point out when the higher-rated user isn't making sense.
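As a sketch of the per-thread boost (the scoring formula and field names are invented for illustration), tag match provides the base score and thanked posts nudge it upward:

```python
def rank_threads(threads, query_tags):
    """Score each thread by how many query tags it matches, then boost
    by the number of 'thanked' posts it contains - elevating good
    threads rather than individual posters."""
    def score(t):
        match = len(set(query_tags) & t["tags"])
        return match + 0.1 * t["thanked_posts"]
    return sorted(threads, key=score, reverse=True)
```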
You'll notice I've not talked about 5-star ratings at all so far. We're still deciding exactly how they'll be integrated. The advantage that 5-star ratings offer is that they are coarse; tagging a thread with particular tokens might capture what the thread is about, but maybe you just want to convey some overall impression that the thread is awesome (or terrible), without figuring out exactly which tags would express that; they might be more applicable to, say, gallery entries. They've got their fair share of problems, of course, as comments on my previous post about the rating UI pointed out. We'll have to do some more thinking about them.
The new system doesn't quite solve the problems that the original rating system set out to solve. Instead, it focuses on the deeper problems of how to get the best content into your hands as quickly as possible and how to describe users; they're harder problems, naturally, but I think more worthwhile.
So, what do you think? I expect that quite a lot of people might have strong feelings about this topic [smile]
For the past three days or so, I've taken some time away from working on V5 to see if there aren't some things I can do for the current site, V4. As you're no doubt aware, we're in a bit of a tight spot on cashflow right now - much like everyone else in the industry - so I figured I'd see if there wasn't anything I could do to bring down our hosting costs. Messing with our hardware and datacenter setup is beyond my remit; I'm only the software guy here, but that software has been churning out an average of 15 terabytes of data every month, and bandwidth ain't free. Not to mention that it makes the site load more slowly for you.
So, what exactly have I done about it? 97 commits to Subversion in the past three days, that's what [grin]
I spent about 4 hours optimizing and refactoring the site's CSS. Historically the site's had one large (28kb) CSS file per theme, with lots of duplication between themes; this is now one shared (16kb) and one theme-specific (11kb) file. A whopping 1kb saving, hurrah! Might not seem like much, but now that all the common stuff is in one file, it makes it easier to optimize, and also means that the optimizations will be picked up by people on every theme.
I totally rewrote the markup (and CSS) for the header banner you see up top there. It used to be this big 3-row table, with 0-height cells, lots of sliced-up background imagery, etc. It's now 4 divs. Much, much cleaner.
I put all the little icons from the nav bar into a sprite map, and got them all to be displayed by CSS. So, now, instead of making 15 separate requests to the server, you only make 1, and now there are no image tags in the header of every page.
I stripped a bunch of tags out of the markup and replaced them with margins (specified in the cached CSS files, naturally).
I updated our Google Analytics code. This wasn't strictly necessary, but I wanted to do it, and in the process I discovered that none of the forum pages have actually been including it properly up until now. The visitor graph in Analytics since I fixed it has a spike that looks like we've just been featured on CNN or something [grin]
I tidied up the breadcrumb, search box, and footer code. Again, mostly just getting rid of tables and replacing them with CSS.
I killed some of the 'xmlns' attributes that get left in our output due to the way we're using XSLT. There's still a bunch of them around, but I covered forum topics, which are the most popular offender. At some point I'll go back in and do all the other cases.
I redid the markup for the headers in 'printable version' articles. The gain from this won't be too huge, but it's often where Google searches end up, so it won't be nothing either. Also because I HATE TABLES AND WILL MAKE LOVE TO CSS IF IT IS EVER INCARNATE AS A TANGIBLE ENTITY.
I started switching the site over to using Google Ad Manager, instead of our in-house copy of BanMan. This is quite a big deal; the switch has been far from painless for me, and it's still ongoing, but the benefits are numerous. Firstly, instead of the ad images consuming our bandwidth, they'll consume Google's. Secondly, instead of the ad system consuming our CPU cycles, it'll consume Google's. Thirdly, instead of the ad data store consuming our disk space, it'll consume Google's. I'm pretty much fine with this, and for whatever reason, Google are too.
I made us a new version of the OpenGL|ES logo. It's shinier!
That's pretty much everything for now. It's a little difficult to get a picture of how much total change it's made, but the HTML for the site front page has dropped from 95kb to 85kb. I guess I'll find out if it's actually made a serious dent when I hear the bandwidth figures in a few days.
What's the downside to all this? I've been acting with basically no regard to old versions of IE. Chrome is my primary development browser now, with Firefox a close second; I check that things work in IE8, particularly when using unusual CSS pseudoclasses like :hover and :first-child, but anything prior to IE8 - and especially anything prior to IE6 - can go die in a fire, basically. I know, I know, you can't do anything about it, your machine is locked down by corporate, I understand... and I don't care. These days, I think I'd be comfortable accusing any sysadmin who hasn't upgraded all their machines to at least IE7 of criminal negligence.
I guess the site will probably still work in old versions of IE. I'm not actively trying to shoot them down. Yet. By and large, things should degrade gracefully.
To end, here are some excerpts from my SVN logs that you may enjoy.
2010-07-15 00:29:18 dropped prototype and clientscripts.js from the page header. (over 120kb for a new visitor!)
2010-07-15 00:32:50 also dropped menu.js, as the menus have been CSS powered for some time now
2010-07-15 03:24:27 killed the empty child! \m/
2010-07-15 04:33:49 tidied up breadcrumb + search boxes
2010-07-15 04:34:38 oops
2010-07-15 04:35:45 added a floatclearer
2010-07-15 04:37:03 try again
2010-07-16 02:21:38 updated 'printable' articles to use GAM
2010-07-16 02:23:11 forgot the
OK, post icons - on threads - are safe for now. I've left them off individual posts, though they might go back on; I can see them sometimes being useful to communicate the overall tone of a post (I often used to use the roll-eyes smiley when being sarcastic). We'll see. Certainly where icons are kept I think we will roll out more of them.
Design work continues... most recently I've written in the stuff about subscriptions. Paypal will still be supported, and eventually I want to look into the possibility of supporting transactions through other means, maybe such as Google Checkout.
There are still a few things on my to-do list - some smaller than others. The one that I guess is amongst the most contentious is the rating system.
It's been established that the site will be getting the ability to tag a user with one or more keywords. If you think that a particular person is "all about" graphics, or neural networks, or whatever, you can tag them accordingly. Then, when you're searching for information on a particular topic, the site will be able to point you at people who are heavily involved in that topic - the idea is that they will be the 'experts in the field.' The search will also be able to do things like identifying threads or articles that have involvement from those experts (handy if you're looking for answers), versus things that do not (handy if you're looking for questions).
The tagging system won't just be limited to technical topics. If somebody's just a really nice guy, you can tag them with 'nice guy' (or just 'nice'). If they're good at explaining things, you can tag them with 'good teacher' or 'good at explaining.' If they're impatient and ungrateful, you can tag them with 'impatient' and 'ungrateful.'
The question is, is that enough to be useful?
Remember that the site has no concept of what a tag means. It has no inherent distinction between 'idiot' and 'guru' - they're both just words. As such it's difficult for the site to 'take action' against people who are being rude and abusive; it doesn't know which tags indicate that.
Furthermore, I'm not sure how many people will be comfortable tagging somebody as an idiot. It is, perhaps, a bit too negative, a bit too damning, and it lacks eloquence - 'idiot' isn't very descriptive. The system won't prevent it for those people who /are/ comfortable with it, naturally, but I fear that it may simply not be used by people who just want to express a vague feeling of displeasure with a person.
Add to this another oft-cited issue with the existing rating system - that people too often don't know or understand what they've been rated up or down for. We've said in the past that ratings should be awarded based on a holistic consideration of a person's contribution - that you shouldn't rate somebody without looking at their profile and seeing their other posts. Maybe they're just having a bad day. After observing the system for several years, I don't think people do this - so maybe it's worth abandoning the approach.
What I'm considering is a variation on a system I've seen at some other forums - specifically I'm thinking of TCE, though I'm sure it's elsewhere as well. Simply put, on every post, there'd be a "thanks!" and "no thanks!" button. You press the former if you want to thank a user for their contribution; you press the latter if you feel the opposite. The total number of 'thanks' and 'no thanks' are weighed up and used to calculate a karma rating for the post. A user's total karma rating is then calculated as a function of the karma ratings of all their posts.
To be clear, unlike these other systems, it would still be anonymous. You would not see who has thanked/blamed somebody for a given post, only the number of people who have done so. I'm thinking as well that it would be displayed on a colour scale rather than a numeric value, so that people don't go apeshit over tiny changes in value.
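For the sake of argument, here's roughly what I have in mind, in code form. The weighting functions here are placeholders (simple difference and mean) - the real functions are exactly what's still up for discussion - and the colour band thresholds are likewise invented:

```python
def post_karma(thanks, no_thanks):
    # Placeholder weighting: straight difference.
    return thanks - no_thanks

def user_karma(posts):
    # A user's karma as a function of their posts' karma; here, the mean.
    # `posts` is a list of (thanks, no_thanks) pairs.
    if not posts:
        return 0.0
    return sum(post_karma(t, n) for t, n in posts) / len(posts)

def karma_colour(karma, lo=-5, hi=5):
    # Map karma onto a coarse colour band rather than a raw number,
    # so tiny fluctuations in value aren't visible.
    if karma <= lo:
        return "red"
    if karma >= hi:
        return "green"
    return "amber"
```

The point of `karma_colour` is the quantisation: two users with karma 2.1 and 2.3 look identical, which is the whole idea.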
What do you think?
So, the main thing I'm working on at the moment is the design document for the next version of the site. It's not a small document - it outlines everything I plan to bring to the site, in terms of functionality, across the entire V5 line - and will probably guide development for at least a year. So it's fairly important that I get it right.
The announcement I posted - collecting user stories - was the first step in this. The entire first chapter is dedicated to information about our audience, from user stories to group statistics.
One of the things I'm particularly interested in is which other sites you use on a regular basis - partly to add more background info to that first chapter, and partly to look for opportunities to integrate this site with others.
So, which sites do you use on a regular basis?
Are there any times you've been using GDNet and have thought "Hmm, it'd be good if I could do such-and-such here"? For example, somebody on IRC suggested integrating Twitter status feeds into the user profiles, which I think is a nice idea.
One of the things I've been working on - for quite a while now - is a rewrite of the Journal software. The main reason for this is that the way journals are currently structured - one huge thread per journal - is pretty poor from a performance point of view. Viewing a journal entails scanning the entire posts table for posts with a particular reply 'depth' and thread ID, which isn't great - even with an index on the reply depth (which is used by nothing else on the site) the posts table is still orders of magnitude larger than the topics table.
What we should have is an implementation whereby every journal entry is a new topic, and then comments are replies to that topic. Switching to this will not only be a performance win, but it will be a functionality win as well - some things are simply impossible to support under the current journal implementation (for example, closing comments on a single entry), while other things are needlessly duplicated (for example, RSS feeds - if a journal is just a collection of thread-first-posts, we can reuse the RSS we've got for providing feeds of a forum).
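Here's a toy model of why the new layout wins on retrieval. The field names are invented for illustration - this isn't the real table schema - but the shape of the two queries is the point:

```python
# Old layout: one huge thread per journal; entries are depth-0 posts.
posts = [
    {"thread_id": 7, "depth": 0, "body": "entry 1"},
    {"thread_id": 7, "depth": 1, "body": "comment on entry 1"},
    {"thread_id": 7, "depth": 0, "body": "entry 2"},
]

def entries_old(journal_thread_id):
    # Requires scanning the (huge) posts table, filtering on depth -
    # an index used by nothing else on the site.
    return [p for p in posts
            if p["thread_id"] == journal_thread_id and p["depth"] == 0]

# New layout: each entry is its own topic; comments are ordinary replies.
topics = [
    {"topic_id": 101, "journal_id": 7, "title": "entry 1"},
    {"topic_id": 102, "journal_id": 7, "title": "entry 2"},
]

def entries_new(journal_id):
    # Scans the much smaller topics table instead.
    return [t for t in topics if t["journal_id"] == journal_id]
```

Same results, but the new version touches a table that's orders of magnitude smaller - and entry comments become ordinary thread replies, so all the existing thread machinery (closing, RSS, and so on) comes for free.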
There are also other changes I want to make with regards to the way journals are actually retrieved and rendered - to the extent that I'm basically rewriting them from scratch, using tasty things like XML queries and XSLT to make it clean and fast.
One of the things I can change is the way the right-side bar is rendered. I'm not massively happy with the way the bar looks on journals currently; the calendar isn't particularly useful, the RSS button is obsolete, and the monthly links are just run together into a single unstructured list. You can see my proposed replacement here.
The biggest thing I'm not sure about is whether people will be upset by me dropping the Calendar. I think it's fairly useless, but I know some people enjoy making little patterns on it and so on. Implementing it will take a fair bit of work - there's a whole extra DB table that appears to be dedicated to it - so if I can get away without it then I'd quite like to, but what do you think?
Happy 10th birthday, GDNet! I got you a present. It's not much. I'd hoped, planned, for so much more, but you know how these things go.
Yes, folks, the V5 codebase is finally at a point where I can start putting bits of it up for public dissection, consumption, digestion, and *ahem* feedback!
There's not much to show you today, but I'm planning on pushing out new stuff very quickly at this point; much of the infrastructure is now in place, reasonably solid, so I can really focus on things that you can see.
Things to note before we start:
Firstly, I've been developing it primarily in Firefox; it also mostly works in Chrome. It's broken in IE - I think the problem is the content-type - and I've not tested it in Opera. Eventually, the site will be supported in FF3, IE7 or later, Chrome, Safari, and Opera. I'm aiming to downgrade gracefully to older browsers, but it's not a top priority and it probably won't be pretty.
Secondly, I've been doing all of the graphic design work myself, and I'm no artist. I'm focusing mostly on the functionality of the UI; consider the way that it looks to be 'programmer art' for now. Somebody with actual aesthetic sensibilities will look at it later, I promise [grin]
Thirdly, speed-wise, what you can see today is an unoptimized debug build, sharing a server with the current site (and the current site does not like to share). I've not had a chance to properly stress-test it, which is partly what taking it public is for. So, performance will improve drastically as the bugs are ironed out and I can start turning off the debugging flags.
Today, we'll start with the basics.
Login / Account home page
You can use your regular GDNet username/password for login. It's all connected up to the current site DB through an adaptor layer that maps V4 database records to the new schema formats to as great an extent as possible.
Submission of username/password info is now done over SSL, for greatly improved security. (Maybe you don't care that much about your GDNet account being secure right now, but this is an absolute requirement for some of the services we want to offer in the future).
Once you're logged in, you should see a little bug icon next to the welcome message in the bottom right corner. Click it, and you'll get a box that lets you submit bugs and feedback, right from the browser; reports go automatically straight into my bugtracker. This icon should appear on every page of the site for logged-in users. Go ahead and use it liberally over the coming weeks. (Please don't abuse it; all you do is make more work for me).
A forum topic
I've tried to minimize the amount of extra cruft displayed on each post, so you can focus on the content. Extra user info can be revealed by hitting the chevrons at the right end of the post header.
Avatars don't work yet. They're going to be hard to sync between the current site and the new site...
You can see a few people have badges next to their name. More info about their badges is displayed in the expanded info. At the moment there are only two kinds of badge - Moderator and GDNet+ - but it's easy to think of other badges we might create and apply.
So, yeah. Not much to look at for now, but gimme feedback. I should have some more stuff for you in the next couple of days.
Well that was fun. Sorry, phantom, I'm keeping my hat. Also, my groin hurts now.
There are videos, but they're generally very small and it's difficult to see that it's actually me... there's a certificate too; maybe I'll find a way to scan it in or take a photo.
I also have a new sympathy for ragdolls. I recommend that anyone designing or developing physics in games go and do a bungee jump as you will gain a new appreciation for what the poor entities have to go through.
[Edit: Also, if anyone can tell me what the fuck this is, I'd be most grateful. I think it's extremely rude, but I'm not sure.]
I spent much of the day being hugged by pretty girls.
I've also discovered the history of the terms "Escape sequence" and "Escape characters." It's very simple - an 'escape sequence' was ESCAPE (0x1B) followed by any other character, and indicated some particular action that was not what that other character would usually produce. An escape character is thus any character used in an escape sequence.
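A quick concrete example, using the ANSI terminal sequences that survive to this day (the specific `[1m`/`[0m` codes are standard ANSI bold-on/bold-off, for those who haven't met them):

```python
ESC = "\x1b"  # the ESCAPE character, 0x1B

# An escape sequence: ESC followed by further characters, triggering an
# action instead of printing those characters literally.
bold_on = ESC + "[1m"
bold_off = ESC + "[0m"

print(bold_on + "this prints in bold on an ANSI terminal" + bold_off)
```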
Now, a question for you: Is there any solvable problem which has a finite number of distinct solutions?
I mean, consider the problem "What is the value of 1+1 ?". The 'normal' solution is just "1+1 = 2", but you could also have "1+1 = 2*1 = 2" or "1+1 = 1.0 + 1.0 = 2.0". Or even just "1+1 = 1+1+0+0+0+...+0 = 2," where the ellipses omit any number of "+0" terms.
Each of these solutions gives the same end result, but they differ in their characteristics. The number of stages, and the number of terms at each stage, differ; the nature of the terms themselves differs. Simple things like the length in characters of the solution differ. Thus, the solutions are distinct from one another. Note that by "solution," I'm not just talking about the end result - I'm talking about the process used to derive it.
I submit that any problem that has at least one solution must have an infinite number of distinct solutions. Can you prove me wrong?
Some thoughts on machine vision.
Say we're making a game with strong stealth elements... Splinter Cell, Thief or something. We want the player to be able to sneak around, hide, and so on; we want the AIs looking for him to respond to him being hidden as realistically as possible.
Traditional approaches use a line-of-sight test. A line is traced from the AI agent's eyes to some point on the player. If the line intersects the environment, the AI agent cannot 'see' the player. The problem with this technique is that only tracing to a single point cannot possibly produce an accurate result (unless the player is a glowing ball of light or something). We need some technique that takes the whole of the player's geometry into account from the AI's point of view.
Enter differential rendering. Here's how it works:
The camera is set to the AI agent's eyeposition.
The game world is rendered (sans player) into a texture.
The game world is rendered again (with player) into another texture.
The two textures are bound to texture stages.
The depth buffer is cleared to a value of 0.5.
An occlusion query is issued.
A quad is rendered. A pixel shader is in place which (a) samples both textures, (b) subtracts one from the other, (c) takes the dot product of the difference with itself, (d) tests the result against some threshold value, and (e) writes the test result to the oDepth register.
The occlusion query is ended.
The results of the occlusion query are retrieved.
The resulting value is the total number of pixels for which the player character caused a significant difference to what the AI can see. What's nice about this?
It takes all world and player geometry into account.
It handles transparent stuff - as well as stuff with specialised shaders - seamlessly (you just render things normally for steps 2 and 3).
It allows you to set the 'keen-eyed-ness' of your AI agents by varying the size of the buffer. The smaller the buffer, the less sensitive the AI will be to small changes, and vice versa.
It allows camouflage. If I'm wearing a black ninja-suit and I stand in a black area, I make a smaller difference than if I were standing against a white backdrop. And it's handled without any testing/calculation of light levels and what have you.
The big downside is that it requires pixel shader 2 (to write oDepth). There's also the fact that occlusion queries are asynchronous, which can make managing them a bit of a problem... using the results from the previous frame should be fine, though, because chances are you'll want your AI to pause for a split second before 'reacting' anyway.
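To show the maths of the test itself, here's a CPU emulation of what the pixel shader and occlusion query compute between them. The image sizes and threshold value are invented for the example - on the GPU this would be the shader steps above plus the query's pixel count:

```python
import numpy as np

def visible_pixel_count(without_player, with_player, threshold=0.1):
    """Count pixels where the player significantly changes the view.

    Both inputs are HxWx3 float arrays, standing in for the two
    rendered textures."""
    diff = with_player - without_player            # (b) subtract
    mag = np.sum(diff * diff, axis=-1)             # (c) dot with itself
    return int(np.count_nonzero(mag > threshold))  # (d) threshold, count

# Black ninja-suit against a black backdrop: tiny difference.
black_scene = np.zeros((4, 4, 3))
with_ninja = black_scene.copy()
with_ninja[1, 1] = [0.05, 0.05, 0.05]  # barely differs from background

# The same suit against a white backdrop: big difference.
white_scene = np.ones((4, 4, 3))
with_ninja_on_white = white_scene.copy()
with_ninja_on_white[1, 1] = [0.05, 0.05, 0.05]
```

The camouflage property falls straight out: the dark pixel against the dark background never crosses the threshold, while the same pixel against white does.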
Suggestions / comments? Otherwise I'll see about knocking up a demo of this...
Was trawling through some old ICQ logs... amidst the porn spam and Russian mafia recruitment messages (the ones that got me to adopt my policy of "don't respond to 'hi'"), I found this gem:
Session Start (ICQ - 139762446:268602683): Mon May 10 13:13:53 2004
[13:13] 268602683: Hi
[13:14] Superpig: hi
[13:14] 268602683: I want to be your friend.
[13:14] Superpig: I see.
[13:14] Superpig: I want to go eat lunch.
[13:14] Superpig: bye.
Session Close (268602683): Mon May 10 13:14:42 2004
[16:42] 268602683: Wait you. [Offline Message (5/10/2004 [13:15])]
I think I've discovered a foolproof plan for drinking double vodkas: you follow them up with apple sours to take the taste away. Works pretty damn well.
Crunch continues. Looks like I won't be home much before 10pm any day this week.
I've had a devilish idea about 4E6... but I'm not sure what people's responses would be, or that it's even workable. Simply enough, it'd be the requirement that at least two people are credited on an entry (and by 'credited,' I mean that they have to have done a significant quantity of work - they can't just sign their name to it).
I'm pleased to announce that the GDNet+ member webspace is now fully back online - and, for the first time in about 3 years, accessible by FTP once more!
Just FTP into members.gamedev.net, using your regular username and password, to access your personal space. Old GDNet+ members should find all their files ready and waiting for them. We've also increased the space quota to 100MB per user, and we'll look at increasing this further as things get settled in.
Anything you upload into your webspace is accessible over HTTP, too...
Old GDNet+ members can browse to the same addresses that they've always used. We'll probably be retiring these addresses at some point, but we'll make sure we let you know before we do.
New GDNet+ members, log in, and browse your way to https://www.gamedev.net/subscribe - take a look at the bottom of the page for your address info.
4E5 seems to be going pretty well. People are into it here, and I've also been publicising it a bit around the web (such as here, here, and here). Four new sponsors have signed up, too: Delgine have donated three copies of their DeleD environment modelling suite, IndiePath have donated a couple of copies of web runtime environment igLoader and a slot with their press team to put out a press release, Slitherine Software have donated six (signed!) boxed copies of their game Gates of Troy, and EZPCShop.com have donated GBP100 in cold hard cash.
And to think... I've still not announced the grand prizes [wink]
Other than 4E5, there's not much going on in my life at the moment - my end-of-year exams start on Tuesday, so in theory I'm revising for them, but in practice... eh, not so much. I'm going to do poorly on the data structures and algorithms half of the first paper - mainly just because I don't know the details of the 30+ algorithms the course has covered - but I only need to answer one question from that half of the paper (provided I answer the whole of the functional programming half, which isn't a problem for me). I'm also going to completely flunk the maths paper - calculus and linear algebra - but if I do well in the others, it will hopefully not matter.
Oh, I'm going street-luging on Sunday though, with the OSF. It looks like I might be taking over their website too.
GDNet development will also be starting to crank up in the near future too. I'm getting development software sorted out (SVN, etc) and will soon be recruiting people for the dev team. One thing people may be surprised to hear is that I'm not looking for ASP experience - I'm not even looking for ASP.NET experience (though it helps). The project that I want to put everyone I recruit onto is building a specification for the existing site - describing how it behaves in a formal way - so that we have a framework in which to make changes or do rewrites. God knows parts of this creaking behemoth need rewrites.
In the surprisingly large amounts of free time I have left over, I'm trying to learn how character animation in Blender works. I'm not really trying to learn to animate - I don't have the artistic skills, really - but I want to learn how the process works so that I can understand how to build a content pipeline from it. I'm contemplating an article - "Building a character animation pipeline with Blender and Direct3D." Before I can write that, I need to understand how to build a character animation pipeline with Blender and Direct3D. At least I've got the Direct3D part sorted [wink]
So, I think I've settled on the project that I want to undertake next year. It's a system for the rapid development of board games.
The first part of the project will be to design a domain-specific language for describing board games. I want this language to be a subset of English; when a board game is described it should read like the rules leaflet you find inside a game box. Internally, it's a descriptive language that is used to identify game states and the legal transitions between them. It's also fundamentally object-oriented; playing pieces are the objects, and each supports properties and methods. A six-sided dice would have a 'roll' method and a 'value' property; this maps directly into rules like "Roll the dice and move the piece forward by a number of squares equal to the value of the dice." Common game objects - like dice, counters, and boards - would be available in libraries.
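Here's a rough sketch of how a rules-text object like the dice might map to code underneath. The class names and the rule wiring are just my guesses at one possible implementation, not a committed design:

```python
import random

class Dice:
    """A six-sided dice as a game object with a method and a property."""
    def __init__(self, sides=6):
        self.sides = sides
        self.value = None  # the 'value' property from the rules text

    def roll(self):        # the 'roll' method from the rules text
        self.value = random.randint(1, self.sides)
        return self.value

class Piece:
    def __init__(self):
        self.square = 0

# "Roll the dice and move the piece forward by a number of squares
#  equal to the value of the dice."
def take_turn(dice, piece):
    piece.square += dice.roll()
```

The interesting part of the language design is the last bit: that sentence of English should compile down to something like `take_turn`, with 'roll', 'value', and 'move forward' resolving against the objects' methods and properties.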
The second part of the project will be to develop a compiler/interpreter/simulator for these games. It takes the game description as input and builds a working model of the game. The model can then be used to explore available game states, either automatically (i.e. generating a graph of all possible game states and the transitions between them) or interactively (i.e. 'playing' the game).
The third part of the project will be a GUI for the interactive mode - i.e. a proper client for playing the game. Wiring game objects up to graphics might be tricky, but it's probably doable. I'd also like to add network support to this mode so that you can play the game against others over the network.
The final part of the project - something I probably won't get onto - will be to start getting into things like static analysis. Using the machine-literate game description the computer can start doing things like identifying dominant or recessive strategies, finding points of articulation, finding loopholes or conflicting/ambiguous rules, and looking at balance issues.
And finally, there's the grand prize, which is to write an AI that can use these static analysis results to play any board game described to it in the system.
For testing I'd want to be able to play the following games using the system:
Snakes and Ladders
What do you guys think?
quite drunk. registered today and then went to the EMEA dinner - awesime food. met up with jollyjeffers and S1CA amongst others. Drank there. then went to Gamewerks and drank there - and played arcade games for free! including original space invaders! - then went to the Westin to drnik there - not free but cheap - and there was a reeaaally drunk guy who passed out and a guy with an inflatable sex sheep and so on.
back kn my eoom now about to sleeep. top floor! only a few dors away from the presidential suite :D
(alex - that is my girofriehd to the uninitiated, she is reading this at some point - I love her very very very much because she is awesome. my nane id r8chard fine and i spprove thi s merssage.).
Technically minded folks: You can skip the next two paragraphs on the shenanigans of my personal life if you want. Except the bit about the burger, because you need to know that.
Left at about 1:30 yesterday afternoon to go into Bicester (where I grabbed a burger from the van on Sheep Street - best fucking burgers in the country, I swear, swimming in grease and you can feel your heart screaming "ARE YOU TRYIN TA KILL ME?" as it goes down but they're so good with the melted cheese and the onions all fried in, mmmmmmm) and then on to Milton Keynes where I met up with a friend to go to the cinema. We saw Charlie and the Chocolate Factory - my second time, but her first, and there's not much else on anyway. Then crashed back at her place for the night, then headed back here a few hours ago.
We even found time to do a little clothes shopping in MK. We celebrate my birthday next Saturday, and afterwards she and I plan to head down to London where she will take me to Slimelight for my very first time. They have a relatively strict dress code though - "If it's not black, fuck off" - so I'm pulling together a goth disguise. Over the next week I need to try and find a loose, light, black t-shirt, and some black shoes (preferably boots). Should be easy enough to pop into some shops round Oxford, it'll just be a question of finding the right places.
Technically minded folks: OK, stop skipping now.
I'm still pulling together my little screensavery thing, and currently implementing HDR using this article as a guide. However, I think I've found a way to calculate the image key on the GPU, avoiding any readbacks and thus any stalls while the CPU waits for the GPU to finish rendering.
The image key formula in the article is:
image_key = exp( (1/number_of_pixels) * sum_of( log( luminance( each_pixel ) ) ) )
That summation is the hard part. Once we've got the summed value it's pretty easy - a few instructions to divide it by the number of pixels and exp() the result. Here's what I'm going to try:
Create a 1x1 R16FG16FB16FA16F render target. Ideally this would be R32F but you'll understand why it can't be in a moment...
Given an MxN HDR image, set up vertex buffers (I might use indexing) to render M*N triangles that cover the whole viewport, with their texture coords set to map to each individual texel of the source image. i.e. a single texel will be used for the entire primitive.
Enable alpha-blending with both SRC and DEST blending set to ONE. (This is why I have to use R16FG16FB16FA16F - AFAICT it's the only floating-point format to support alpha blending. R32F would be nicer because it uses half as much memory and gives a lot more precision).
Install a pixel shader to sample the source image using the interpolated (i.e. constant) texcoords, and write out the log luminance.
Each primitive will calculate the log luminance of one texel in the source image and add it to the value stored in the 1x1 render target. When it's finished, the value in the render target should be the required summation. I can then simply set that as a texture, sample it when doing the actual tone mapping (probably in the vertex shader, so that I'm doing 4 reads instead of 1280x1024 reads), perform the divide and exp() on it, and voila, I have an image key ready to be used for the per-pixel HDR->LDR tone mapping. Should all execute asynchronously, meaning no blocking. I'll let you know how it turns out, anyway.
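For reference, here's a CPU version of the image key - the standard log-average luminance - which is handy as a sanity check against whatever the GPU summation produces. The luminance weights are the usual Rec. 601 ones, which is an assumption on my part; the epsilon guards against log(0) on black pixels:

```python
import numpy as np

def image_key(hdr, eps=1e-4):
    """image_key = exp( mean( log( luminance(pixel) + eps ) ) )

    `hdr` is an HxWx3 float array of linear RGB values."""
    lum = (0.299 * hdr[..., 0] +
           0.587 * hdr[..., 1] +
           0.114 * hdr[..., 2])
    return float(np.exp(np.mean(np.log(lum + eps))))
```

On a uniform mid-grey image the key comes out at (almost exactly) that grey level, which is a quick way to check the GPU path is summing correctly.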