
Community Reputation

372 Neutral

About TheGilb

  1. TheGilb

    Left4Dead networking

    I understand that on Xbox Live you could technically run a client/server setup - that's kid's stuff - but what I'm saying is that assuming client/server on Xbox Live seems wrong given the bandwidth constraints. Let's take another Source game with a better-known network architecture, such as Counter-Strike: Source. With a tick rate of 33Hz, you need approximately 40kbps per player. That means you need 280kbps of upstream bandwidth to host an 8-player game, which would probably exclude about 30% of all players in the world from being able to host a game using that architecture on XBL. Left 4 Dead has even more data to synchronise, so even fewer players could host. However, if you use P2P then the load can be shared, thus I speculate that the solution is P2P, but each peer has authority over a subset of entities rather than the simulation as a whole.

    The reason I think the architecture varies from XBL to PC is that on PC, if a player lags then none of the other players are affected. On XBL, if a player lags then some of the zombies do not get updated. This leads me to the conclusion that XBL has a P2P architecture, and the subset of zombies that lag with the player are the ones that the player has authority over. On PC no zombies lag with the player because the server has authority over the entire simulation, and as already pointed out, Valve prefer this model because it is safe from hackers. Even though client/server is safer from a hacking point of view, XBL networking is already protected by the XBL platform, which makes P2P a safer option, even if not an ideal solution.
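    The back-of-envelope hosting math above can be sketched as follows (the figures come from the post; the function name is my own, and this is only an estimate, not Valve's actual budget):

    ```cpp
    // Rough upstream-bandwidth requirement for a client/server host:
    // the host sends a stream to every player except itself.
    // The post's figure: ~40 kbps per connected player at a 33Hz tick rate.
    int hostUpstreamKbps(int totalPlayers, int kbpsPerPlayer)
    {
        return (totalPlayers - 1) * kbpsPerPlayer;
    }
    ```

    For an 8-player game at 40kbps per player this gives 7 x 40 = 280kbps of upstream, which is the number quoted above.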
  2. TheGilb

    is "return((void*)1)" safe?

    #define SUCCESS 1
    #define FAIL   -1

    int so_simple_even_a_child_could_write_it( int iValue, void** pOut )
    {
        if( iValue > 0 )
        {
            *pOut = new int[iValue];
            return (*pOut != NULL) ? SUCCESS : FAIL;
        }
        return FAIL;
    }

    Sorry :-)
  3. TheGilb

    Left4Dead networking

    Quote:Original post by hplus0603 Quote:One thing to bear in mind is that the Xbox 360 version is P2P, because there is no central server I don't understand that, either. Client/Server is a topology; it doesn't need a central server in the sense of a rack of hardware in some data center. There is still a "host" for matchmaking purposes, and that "host" could still bounce incoming traffic to the other participants. The fact that they are all Xbox machines doesn't impact whether the topology is P2P or C/S. Also, on the Xbox Live! network, there are central servers, in the form of the Live! servers. Those servers keep your gamertag, do leaderboards, do matchmaking, and facilitate UDP punch-through. They are not involved in the actual gameplay mechanics, though -- that's typically done by the hosting Xbox.

    Hi, on Xbox Live the servers cannot bounce incoming traffic to the other participants in the manner that you suggest. The Live servers facilitate matchmaking only; once you're in-game it's P2P UDP all the way, unless you know of any Xbox 360 dedicated servers? Just as a note: it's not impossible that Valve have a battery of Xbox 360 dedicated servers to support all the Xbox 360 gamers in the world, but a model like that just isn't sustainable in terms of cost, and there's no monthly fee for playing L4D online to pay for those servers. Microsoft certainly wouldn't pay for that - they provide the servers for matchmaking only - and PC L4D dedicated servers aren't compatible with Xbox 360 consoles because the 360 has custom packet headers with encryption.

    Quote:Original post by hplus0603 It's more likely that the 8 player limit is because an Xbox cannot be expected to have more than about 64 kbps upstream bandwidth (that's the lowest common denominator, and what you have to budget for to pass tech cert AFAIK).

    Yes, you're absolutely correct. What I meant by the 8-player P2P limitation was that a 64kbps bandwidth limit would mean a maximum of 8 players.
    Even then, the size of a packet sent to a single peer should average 32 bytes, and at least half of that will be used by headers (IP header, UDP header, data header). Considering a single position/orientation uncompressed is around 28 bytes, this is why I'm talking about transmitting node indexes and interpolating the data, because then a single update for a zombie is 4 bytes (an uncompressed 32-bit int, which might compress further), and the orientation can be derived. With delta compression this would compress down to a single bit once the client acknowledges the data and it doesn't change.

    The real question I am exploring here is not the in-game network topology, because that's not really the topic of the discussion at hand. The real secret to L4D networking is in the compression, which enables the synchronisation of all those zombies within the given bandwidth constraints. I was speculating that a client/server model would be simpler because the server is authoritative (and thus secure from hackers), and Valve know methods (as in other Source games) for compressing huge amounts of data and hiding latency. Extrapolating that idea, I was also speculating that in a P2P model you could do a similar thing, except every client is also a server with authoritative control over a subset of the game entities, perhaps with a simple algorithm for allowing clients to choose which entities they control (and which entities are controlled by their peers).
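    The per-zombie update costs discussed above can be made concrete with a small sketch. This is purely illustrative of the sizes quoted in the post (28-byte raw transform, 4-byte node index, 1 bit for an acknowledged unchanged entity); the function is my own invention, not L4D's wire format:

    ```cpp
    // Hypothetical per-entity update cost, in bits, under the three
    // strategies discussed: raw position/orientation (~28 bytes),
    // a 32-bit node index (4 bytes), or a single "no change" flag bit
    // once the peer has acknowledged the last state (delta compression).
    int updateSizeBits(bool changedSinceAck, bool useNodeIndex)
    {
        if (!changedSinceAck)
            return 1;                      // delta-compressed: one flag bit
        return (useNodeIndex ? 4 : 28) * 8; // node index vs raw transform
    }
    ```

    The gap between 224 bits and 1 bit per zombie per tick is exactly why the compression scheme, not the topology, is the interesting part.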
  4. TheGilb

    Left4Dead networking

    One thing to bear in mind is that the Xbox 360 version is P2P, because there is no central server, which was probably one of the limiting factors in making it an 8-player game. However, to contradict this, it does seem like the PC version is clearly a client/server system - but that is not to say that the in-game entity management isn't done P2P as is the case for the Xbox 360. Either way, the question is not so much about the topological properties of L4D networking; let's just say that it might be either or both.

    Although I don't know all of the specifics, we can assume that the rough outline of the network model is the same as is used for other Source games. This is a tried and true formula for Valve (and many other companies developing FPS games). Given that, we will assume a client/server model for the purpose of discussion. I think it might also be safe to assume that for a single zombie entity, some player must assume 'control' of that entity - that is, send updates to his peers in the session with the data required to simulate that entity running on a peer machine. The entities posing the biggest challenge are the common zombies (due to their numbers), with the special zombies maybe only requiring simple synchronisation techniques that deviate from the primary formula.

    I think it is likely that the game world in L4D can be traversed by the AI using a node network, as is common in FPS games. At the most simplistic layer, a zombie that spawns at point A has a target at point B, and may take a path that passes through any number of nodes on the way. Through observation it seems as though the route a zombie will take is precalculated, and this could be done using a fully deterministic algorithm, thus only requiring a start node and an end node to calculate a route using splines, which have several favourable properties useful to network simulation.
    This would be a good way to exploit existing AI data to save on bandwidth, though zombie behaviour is not quite so simple. Zombies seem to have the ability to make simple decisions about their target using simple sensory inputs. For example, zombies may change their target temporarily to chase a pipe bomb, or they may prioritise a player based on distance or flashlight on/off status. Changes to the target have to be reflected by sending updates to game participants, and the delta compression algorithm favours less frequent updates. Zombies also have the ability to perform actions whilst idle: they play various animations and wander around bumping into each other and occasionally fighting one another. These actions could be deterministic or synchronised, but it's probably a balance of both techniques to get an acceptable game experience. In summary, I don't think a single zombie entity requires much data to synchronise.
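    The "fully deterministic route from two endpoints" idea above can be sketched like this. A trivial xorshift PRNG seeded from the endpoints stands in for the real route planner (which the post speculates uses splines over a node graph); the point is only that two peers who know (startNode, endNode) derive identical waypoints with nothing else transmitted. All names and constants here are my own:

    ```cpp
    #include <cstdint>
    #include <vector>

    // Derive a route deterministically from (startNode, endNode) alone.
    // Any two machines running this with the same inputs get the same
    // waypoint list, so only the endpoints need to go over the wire.
    std::vector<uint32_t> deterministicRoute(uint32_t startNode,
                                             uint32_t endNode,
                                             int intermediateHops)
    {
        uint32_t state = (startNode * 2654435761u) ^ endNode; // seed from endpoints
        std::vector<uint32_t> route{startNode};
        for (int i = 0; i < intermediateHops; ++i) {
            state ^= state << 13;   // xorshift32 step: cheap, deterministic
            state ^= state >> 17;
            state ^= state << 5;
            route.push_back(state % 1024);  // pick an intermediate node id
        }
        route.push_back(endNode);
        return route;
    }
    ```

    A real planner would of course pick nodes by pathfinding rather than at random, but the bandwidth property is the same: the route costs two node indexes, not a waypoint stream.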
  5. In my experience of working across next-gen (current-gen?) console platforms, you can't rely on your socket API being the same across platforms. To a certain extent this is also true of PC platforms, particularly when you consider matchmaking functionality, which may be completely different across different platforms. If you want my advice: work with the native socket API of your chosen platform (which on PC platforms does tend to be a derivative of Berkeley sockets), and don't be afraid to separate your code so it will compile cleanly on each target platform, giving it a common interface so some code can be shared - that's just cross-platform coding sense.

    Also, I just wanted to add that the htons and htonl functions are C-style functions which convert packet data from host byte order to network byte order, which is big endian. On big endian platforms such as the old PowerPC Macs (pre-Intel), nothing needs to be done. On little endian platforms such as Intel x86 (Windows, most Linux variants), the bytes need to be reversed so that data transmitted over the network can be properly processed on different platforms.
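    The swap that htons performs on a little endian host can be written out explicitly. This is a portable sketch of the 16-bit case, not the platform's actual implementation (which is usually an intrinsic or a no-op macro on big endian machines):

    ```cpp
    #include <cstdint>

    // Swap the two bytes of a 16-bit value. htons() applies exactly this
    // swap on a little endian host so the most significant byte goes on
    // the wire first (network byte order = big endian); on a big endian
    // host htons() is effectively a no-op.
    uint16_t byteswap16(uint16_t v)
    {
        return static_cast<uint16_t>((v >> 8) | (v << 8));
    }
    ```

    So 0x1234 becomes 0x3412 in memory on x86, which puts 0x12 first on the wire; applying the swap twice round-trips back to the original value.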
  6. TheGilb

    TCP or UDP? Why not both!

    Quote:Original post by Xeon06 Why are there debates about which one to use? Why not use both?

    Truth be told, I think the reason there is so much debate is that there are a lot of people on the Internet with an opinion, but far fewer people with actual real-world experience. It's merely a ratio: for every real developer there are 1000 more who are more interested in being 'right' than getting it right. There is no technical reason why you can't use a TCP and a UDP socket at the same time (i.e. no socket library which supports TCP is ever likely to come with an API restriction like that), but as rightly pointed out already, TCP isn't exactly as friendly as UDP when it comes to NAT punchthrough, which is potentially problematic depending on your network topology. There are other interactions between TCP and UDP too - for example, what happens to your TCP connection when your UDP traffic is using most of your bandwidth. Though I'm sure there are plenty of people with an opinion who will explain these much better than me ;-)
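    The "no technical restriction" point is easy to demonstrate: one process can hold a TCP and a UDP socket side by side (a common split is TCP for lobby/chat and UDP for game state). A minimal POSIX sketch, assuming Berkeley sockets (Winsock is near-identical after WSAStartup()); the function name is my own:

    ```cpp
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    // Open a TCP (stream) and a UDP (datagram) socket in the same
    // process, then close them. Returns true if both were created,
    // demonstrating that nothing in the API forbids using both at once.
    bool canOpenBothSockets()
    {
        int tcpFd = socket(AF_INET, SOCK_STREAM, 0); // reliable, ordered stream
        int udpFd = socket(AF_INET, SOCK_DGRAM, 0);  // unreliable datagrams
        bool ok = (tcpFd >= 0 && udpFd >= 0);
        if (tcpFd >= 0) close(tcpFd);
        if (udpFd >= 0) close(udpFd);
        return ok;
    }
    ```

    In a real game you would bind each socket and poll both file descriptors together (e.g. with select or poll); the design question is what traffic belongs on which, not whether both can coexist.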
  7. TheGilb

    Handling UDP resends

    I'm working on a project at the moment where the bandwidth constraint means each packet is allowed only 13 bytes of payload. That's pretty tight, even when you pull out the heavy weapons ;-) (i.e. hand-tuned data compression, delta compression, etc.). Because of the tight bandwidth constraint, we have developed a method - amongst others - of reliable packet delivery where you specify the bandwidth to be used for resends. If you're working to a 64K bandwidth budget and you have 100 bytes per second free for resends, then it will use no more than 100 bytes per second for the resend. The math/logic couldn't be simpler: given the size of the packet and the number of bytes to be used per second, you can calculate how frequently to resend the packet. Be sparing ;-)
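    The pacing rule described above reduces to a single division. A minimal sketch (the function name and example numbers are mine, not the project's actual code):

    ```cpp
    // Budgeted resend pacing: if the packet is N bytes and you may spend
    // B bytes/second on resends, resend at most once every N/B seconds.
    // e.g. a 20-byte packet against a 100 B/s budget -> one resend per 0.2s.
    double resendIntervalSeconds(int packetSizeBytes, int resendBudgetBytesPerSec)
    {
        return static_cast<double>(packetSizeBytes) / resendBudgetBytesPerSec;
    }
    ```

    The scheduler then just refuses to queue a resend of that packet until the interval has elapsed, which caps resend traffic at the configured budget regardless of loss rate.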
  8. TheGilb


    Congrats :-) We got one too: New International Track and Field.
  9. TheGilb

    Locking a dynamic vertex buffer...

    Quote:Original post by Tape_Worm I know how to fix it up to get it down to a minimum number of locks, but it's going to end up being a rewrite of a sizable (and quite stable) chunk of code. I want to know if it's really worth it.

    Optimizing can be a fun learning exercise, but it seems to me that the reason you're finding it difficult to answer this question yourself is that there is no definitive target set. Many of the latest GPU optimization techniques help to minimise per-frame state changes; John Carmack has even gone so far as to claim that, theoretically speaking, the idTech5 engine could render the entire world and everything in it in just one pass with no state changes. The best advice I can offer is to write the game first, get all the assets in, and then optimize afterwards. You can optimize forever if you want, and spend your whole life making the code run as fast as it can - there really is no limit, just diminishing returns. Or you can say "right, I want this to run at 60 fps, where's my first bottleneck?" - that's a realistic target. As it stands, your code works and it's stable, so I say put it to work for you! Hope that helps, and good luck! :-)
  10. TheGilb

    Levels in C++ games

    You may find the following tool quite useful: Mappy. Mappy can generate quite sophisticated tile-based levels, save them to a file, and provides C++ source for reading back and rendering the level. Even if you don't use the tool and the source, you may find it useful to study the file format and see if you can get any useful ideas from it. Hope that helps, and good luck!
  11. theOcelot is correct: you need to dereference your iterator to gain access to the object it refers to.
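    A minimal illustration of the point (the container and values are made up for the example):

    ```cpp
    #include <vector>

    // The iterator is only a position; dereferencing it (*it) yields the
    // element itself. Calling members on 'it' without dereferencing is
    // the classic mistake this thread is about.
    int firstElement(const std::vector<int>& v)
    {
        std::vector<int>::const_iterator it = v.begin();
        return *it;  // dereference to reach the stored value
    }
    ```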
  12. TheGilb

    Word to the wise

    I actually laughed out loud when I read this post and took a look at the website! I found a particularly amusing article here where they basically shred the site! Also I found this amusing quote: “Thank You so Much, I can't believe this actually worked. It took me only 5 Days to find a game tester job and I am making around $23 per hour, it’s so exciting. Thanks Again.” We get paid pounds (£) in the UK .. Plus, minimum wage is only £5.52 ;-) You know, if you say the word 'gullible' fast enough, it sounds like 'orange'?
  13. TheGilb

    OpenGL3.0.. I mean 2.2

    Phantom, just ignore that guy's post, he's just trolling. You're the guy I see answering everyone's GL-related questions on here, you're entitled to your opinion, and I see your point: the GL3 spec is disappointing. I've been waiting for this to come out for about a year. I suppose we should be grateful that anything has been produced at all - I was half expecting the 'announcement' at SIGGRAPH to be that they're still working on it ;-) Still, the new spec is a step forward for the API, albeit a slow, small, tentative step while DX barges on through to DX11, seeing how DX10 isn't exactly the 'heavy hitter' Microsoft were perhaps hoping for ;-) The fact is most casual gamers still don't own the hardware the GL3 spec is aimed at, so don't break a sweat, there's still time for GL :-) At least the API isn't leaving its old 'fans' behind, which is probably a good thing for the API as a whole.
  14. TheGilb

    [Starting C++]Need opinions

    I learned C++ through "Thinking in C++" by Bruce Eckel. He's a nice enough guy that he also gives the book away for free. In my opinion this is the best free resource for learning C++, well worth checking out either way. Hope that helps, and good luck!
  15. TheGilb

    More dev

    About the ratings system: as things stand in the current design, the ratings system feels 'bolted on', with very little actual use within the site. I think the rating system is a great idea, but it could be put to more effective use.

    Another issue - possibly linked - is that the same topics in the forums come up over and over again, and in the 'active topics' section of the site, topics that have been answered several times over keep getting comments; ultimately I think this leads to beginners getting confused. Posters only need one answer.

    Also, the tagging system doesn't mesh with user profiles too well. Areas of expertise can be roughly divided up into the major game programming disciplines - like they are in the forums already: graphics, networking, maths, etc. If you give the user an inch to abuse the system, he will. Don't make me 'tag' you over this.

    The way I see things, you could maybe borrow a few ideas from other Q&A sites. Your rating is linked to your profile in the form of points which can be spent to ask a question, and you can earn points by answering questions. If a user accepts an answer, the points are awarded from the poster's account, a grade is awarded, and the topic closed. Users can easily search past topics, with the best answers highlighted by their rating. Now you're building up a knowledge base. Stats are visible in realtime on the main page, grouped by category, in a 'Hall of Fame'. Now you're making a game where users are actively encouraged to participate and share whatever knowledge they have, even if their rating is super massive. Suggest giving ratings an upper limit of a 32-bit unsigned integer. Users could also be awarded a rank like Beginner, Expert, Guru, Genius, Jedi - whatever - and that rank could be linked to the points in their profile. Rating icons could be displayed in posts, and help guide posters towards a good answer.

    Because many gamedev participants like to spam the forums with rubbish, some forums could be flagged for 'free chat' or whatever, where the points system is completely bypassed. That keeps spam out of the main categories where people are looking for answers, which means less work for the moderators. Now you have a reason for ratings: you are discouraging duplicate topics that have already been answered, discouraging forum spam, and encouraging community participation.