About Drew_Benton

  1. Hey Benton,

    This is Halim. I was reading your post "A guide to getting started with boost::asio".

    But I couldn't locate the option to download the zip file.

    Can you please tell me how I can get the zip file?

    I would appreciate that.

    Thanks a lot.


  2. Hey. I am just curious. Are you the same Drew Benton who created the tools for Silkroad? And btw, you can't receive any new PMs :)
  3. Looking to kill IOTD as it currently exists.. Need testers

    Talk about a trip down memory lane! I still have (and cherish) my trading card: [spoiler] [/spoiler] Anyways, Michael, I started clicking through the showdown and I don't particularly like the format. The best explanation I have for why I feel that way comes in the form of a question: "Do the screenshots necessarily need to compete with each other, or should a screenshot only compete with itself to the viewer?" I think a "vote/no-vote" system would be more appropriate for the screenshots. Either a screenshot invokes the viewer to "vote" for it, thus giving it a point, or they "no-vote" it and go to the next screenshot. With that sort of system, the screenshots that are appealing for whatever reason will bubble to the top with votes, while those that aren't simply "don't have votes", which is not negative, just not positive. For something that is supposed to be fun and entertaining, I think a model like that would better suit the screenshots while not taking anything away from the screenshot itself.
  4. My First try in HTML5 game

    I found Level 6 to be pretty tricky compared to all the other levels around that set. In Levels 12, 13, and 14, you don't extend the blocks at the top off-screen, so you can just aim straight up and shoot over easily, bypassing the challenges. The pineapples and their vision mechanic don't make the game any more challenging; they just waste the time it takes to fire your one shot. That mechanic needs to be reworked. In Level 23, you can't zoom out far enough, and the text covers a top block, which is a little annoying. I played through all 24 levels, but the game just went to the title screen after completion, so there are no end game credits or "you won" message. That makes it feel like an incomplete game. Overall, it's a cute little game, but too repetitive, with not enough mechanics to make it challenging. There's pretty much only one way to solve most of the puzzles, and it's based more on consistent timing than anything. I think it needs more varied music as well. You always start out on the left side of the screen and your objects are on the right; just an observation. Nonetheless, congratulations on your first HTML5 game! It's a really good start, but there's a lot more work to be done on it. Good luck!
  5. Software patents

    [quote name='Mussi' timestamp='1343348274' post='4963466'] [quote]I have heard you and I am granting the open source community immunity from this patent.[/quote] Seems like this won't be an issue. [/quote] It still is an issue, and it will always continue to be one for any OSS project threatened with patent violations. If you accept him saying that, then you are operating under the assumption that the project actually does violate the patent (or admitting it did violate the patent). Just because he thinks the project violates his patent doesn't mean it actually does; that is for the legal system to decide. Looking past that important point, will that statement on a blog be upheld in a court of law as a legally binding agreement? I think not, but that's not up to me to decide. There need to be two sides to an agreement. What does it mean to be a part of the open source community? Which one (there are public and private ones)? What about commercial usage? There are too many questions to simply take that at face value and think it's "ok" now. Maybe he changes his mind, maybe he sells the patent to someone else who has a different opinion, or maybe he means it and won't ever take action. Who knows; it doesn't matter. What does matter is that before you use the library, or any library for that matter, you have to be aware of patent issues and be able to handle them accordingly. In this specific case, it's been shown that you might run into patent issues, so you will have to plan for the worst and seek legal counsel if you really want to be sure. FWIW: I think Doug Rogers got the short end of the stick here. Looking at the released initial e-mail, I think it was pretty civil, done in a respectful manner, and in a way that was not meant to have things blown out of proportion. But what about the title, you might say? Well, it's an appropriate title to ensure the e-mail gets read. If I saw an e-mail with that title in my inbox, I'd surely click on it.
But as with anything on the internet, things tend to get blown out of proportion way too easily. It's not like he was threatening a lawsuit in the shown e-mail or had actually filed a lawsuit; he seemed to want to try and talk things out, and now look where that got him. It really sends a bad message because people might feel less inclined to try and open a dialogue on the matter rather than just rushing straight into a lawsuit. Compare the e-mail Rich got vs. the one Notch got. Which would you rather...? Anyways, I'm just commenting on the specific situation at hand rather than patents in general. There's no need to beat a dead horse; the patent system needs to be drastically overhauled when it comes to software.
  6. Based on your other thread, I'd strongly urge you to slow down and actually work out your current problem rather than rushing to potentially solve the wrong one. The alternate asynchronous pattern SocketAsyncEventArgs provides is a very advanced model that caters to very specific needs. When they (Microsoft) say the model is for "specialized high-performance socket applications", they mean a very specific thing, not the general thing you are thinking about. The model is meant to help developers control almost all aspects of resource costs when servicing thousands, tens of thousands, or even hundreds of thousands of concurrent connections, or an extremely high network I/O throughput. If you do not properly implement all aspects of this model, your solution essentially degrades into the same model provided by the Begin/EndXXX API, just more complicated and bug filled. There's more to using the model than just the networking aspect. The rest of your code is vital in determining whether or not the networking can even scale to its max potential on the hardware you provide. The most basic and specific example is "global shared state". The more "global shared state" you have, the less efficient async models become due to locking. While there are certain ways around this, it is a very advanced topic. What's the point of talking about all this? The point of using this model is that [b]you[/b] control all aspects of it. You would be hard pressed to find a generic socket server library that utilizes SocketAsyncEventArgs because of how specialized the solution is. Whatever you might find, you certainly wouldn't want to use without understanding the core concepts first, because typically speaking, you will be finding unsupported code with tons of bugs in it that will negatively affect your application down the road. If you need a library that does all of this for you, then use the Begin/EndXXX API!
The bigger issue at hand, though, is that if you are having issues with the Begin/EndXXX API, the XXXAsync API functions will not magically solve your problems, especially if you do not understand what the problems are in the first place. That is why you need to try to figure out the real problem first. If there's one thing I've learned over many years of working with networked applications, it's that there's nothing worse than solving the wrong problem with an even more complicated solution because you did not take the time to understand the inherent flaws in your code. You should really post code in your other thread if you want to get more help; as it stands, all people can do is guess. Programming and guessing do not go together, especially if you hope to get a problem solved.
  7. [quote name='Zadd' timestamp='1342424677' post='4959501'] So tell me, if I am speaking to anyone who has ever made an engine of their own. How did YOU do it? How did YOU get started? I would like to see some unique and helpful answers from you guys.[/quote] In the past, I've made a few simple games and I've made a few simple engines. Nothing commercial, just indie stuff. I hope this doesn't sound too cynical, but it's the truth. I started out on GameDev back in '04-'06 with aspirations to become a game developer. Unfortunately, I got caught up in the whole "making engines rather than games" deal. Ultimately, it led to the demise of my game development career and I moved on at the time. I'm not a person with regrets, but if I had known then what I know now, I'd not have wasted my time. The fact of the matter is, a game engine isn't an end, it's the means. But you have to ask yourself: the means to do what? If you want to make a game engine to understand how game engines work, there are far superior ways, such as studying and using existing successful game engines, whether they are commercial or not. I'm a strong believer in trial and error, but in terms of a time investment, getting experience and actual portfolio end product work on commonly used engines looks and feels a lot better than unpolished tech demos on an engine that you might think is great, but everyone else just shrugs at. Don't believe me? Just take a look at some job offerings for "engine programmer/developer". I'm not going to link specific postings, because it might feel a bit like advertising, but hopefully you'll get the idea. Having your own experience is not bad, but the way you do things certainly won't always be the way the "industry" does things. If you want to compete in the "industry", you have to play their game. Even if you don't want to get into the industry, part of becoming a good programmer is finding the right tools for the job.
The sooner you get over the hump of trying to do everything yourself, the sooner you can actually make your dreams come true and get stuff done. If you want to make a game engine to make games, then you should really just make games. Here is the obligatory [url="http://scientificninja.com/blog/write-games-not-engines"]Make games, not engines[/url] article. The entire read is good, but the third from last paragraph is what I want to draw the most attention to: [quote]Most hobby developers who "finish" an "engine" that was designed and built in isolation (with the goal of having an engine, not a game, upon completion) can't ever actually use it, and neither can anybody else. Since the project didn't have any goals, any boundaries, or any tangible applications, it essentially attempted to solve every aspect of the chosen problem space and consequently failed miserably. ....[/quote] Looking back now, as I know a lot more than I did in the past, this is exactly what happened to me, and to most other people who went down this route. In a sense, this quote highlights the main problem most people have with "learning" anything. Trying to learn something as an "end" rather than as a "means" most typically leads down a hard and unsuccessful path compared to people who use it the other way around. Sure, there are exceptions, but that's why they are called exceptions. How should you view game engines? As a manufacturing factory whose sole purpose is to speed up the production of games. You wouldn't build a factory without knowing what product you are producing, right? Unfortunately, most people do when it comes to game engines and games. So my advice to you is simple. Forget about the concept of "making a game engine", completely. As saejox mentioned, learn graphics rendering, physics, sound, input, scripting, multi-threaded programming, databases, tool development, etc...
typical software development stuff applicable to game development. Once you learn those things, make games using them. When you have enough games made, you will see commonly recurring patterns of functionality and tools. Take all of that stuff and get it interconnected into a new project. You now have a game engine, without having "made a game engine". From there, it's all about evolving the project as you continue to make more games from it. If you have made games already, great! You are ahead of most people who want to start their own game engine. However, you still need to keep making games in order to understand the type of engine you need to help speed up development of similarly typed games. Making simple board games doesn't mean you are ready to make a generic game engine for an FPS, RTS, or anything like that. If you have interest in developing a broad range of games, then focusing on an engine is not a good idea, as the game development concepts can vary between genres (e.g., an action-based MMORPG vs. a turn-based RTS).
  8. [quote name='King Mir' timestamp='1342415710' post='4959452'] I would say one step, as much as possible. Exceptions make for cleaner code. [/quote] [quote name='rnlf' timestamp='1342417519' post='4959463'] I prefer the same. Do as much as possible in the constructor and throw an exception if anything goes wrong. If you use return values to indicate failure, you can be sure to forget to check it every once in a while and you will get unpredictable behaviour sooner or later. If you forget to catch an exception, you get at least a well defined shutdown of your application. [/quote] What makes you think that you can't have your "initialize" function throw an exception? The concept of return codes is just one of the two ways to implement two-step initialization. If you prefer one-step initialization, great! But why? If you believe exceptions make for cleaner code, and throwing an exception from an initialize function is perfectly valid and typical of C++ exception-styled programming, then you haven't made an argument for either yet. Poorly constructed exception handling is no better than poorly constructed non-exception handling either, so saying you have a "well defined shutdown of your application" is not always true. What SiCrane said about "This isn't a one-size-fits-all situation." is pretty much the main point that should be addressed. Looking past exceptions, which is only one of the main aspects of this issue, the costs of creation, copying, and destruction have to be kept in mind as well. How expensive is it to create, copy, or destroy your objects? Are your objects going to need pools for managing memory? It's going to depend on a case-by-case basis, thus there is no one-size-fits-all solution. Furthermore, why choose one or the other when both might be more appropriate?
Resource Acquisition Is Initialization, [url="http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization"]RAII[/url], is one such idiom that is inherently one-step initialization, but there are times when adding support for two-step initialization can make for more flexible and simplified code. A specific example would be Win32 critical sections (whether you use them or not isn't as important as understanding the justification). Rather than littering your code with Enter/LeaveCriticalSection calls, you can use RAII to greatly simplify the process. However, there might be times when you need the concept of RAII, but the implementation doesn't make logical sense. E.g., you acquire exclusive access to a CS but don't need to maintain that access at all times (to prevent deadlocks), so allowing for two-step initialization with support for manual cleanup code helps keep things simpler/cheaper than constructing/destructing new objects over and over. It's typical in game development, and most other development, to make use of 3rd party libraries at one point or another. Maybe it's a mysql++ connector wrapper, or a physics library, or an input or audio library; the list goes on. While a lot of people have a DIY mentality, it's important to understand that you can't simply reinvent the wheel for everything and will eventually have to resort to another person's code. A lot of these libraries might be set up to use exceptions, while some might not. In either case, you have to design your code around how the other components are set up. This means, specifically to the OP, you should not be trying to "commit" to one style or another, especially in a language like C++. It's like saying you want to have dinner tonight, but don't want to eat a meal that would require the use of a fork. To each their own, of course, but it's certainly not a typical concern from that perspective, to say the least.
  9. Am I "using" people?

    I am not a lawyer and am not trying to provide the following as legal advice, but here's something to think about. Ultimately, I don't think it's an issue that boils down to morals or ethics, but rather simply to legal matters. If you try to hide what you will be doing with user contributions, you are more likely to be viewed negatively in the eyes of the community, so I would suggest being more upfront with your plans rather than hiding them away in a legal document. I digress though. [quote name='GameCreator' timestamp='1341944213' post='4957700']Pretend I get level submissions from people and I use them in the game I release for sale. If this possibility is specified in the agreement for the beta (which I'd guess many people don't read), is it in any way wrong?[/quote] You will also need specific terms that apply to the process of "content submission" in addition to the normal terms applied to the game/tools themselves. In regards to the latter, your terms should say what people can or can't do with the content they create with your tools. E.g., if you provide people with a level editor, clearly specify if they can only use it for non-commercial purposes, if they are allowed to re-distribute the content, and so on. There's a lot to consider there and it's handled a number of different ways in different scenarios. In regards to the former, you open up another can of worms when it comes to accepting user content, especially if you plan on publishing it or redistributing it in any form.
For example, I'll use an excerpt from the GameDev.net [url="http://www.gamedev.net/page/info/legal/tos.htm"]Terms of Service[/url]: [quote] c) Customer warrants to Provider that Customer has all necessary rights to store, reproduce, license access to, and otherwise use the data contained in each of the Customer posted content for which Customer utilizes Provider's Software and Services. d) Customer acknowledges that Provider's software stores customer data, personalization settings, and other Customer posted content. Customer hereby grants to Provider a fully paid up, non-exclusive license to store and maintain such data for the limited purpose of providing a public forum. [/quote] 'c' is vital in terms of ensuring users have the necessary rights to provide the material and 'd' is vital in establishing what that material can be used for once GameDev.net has it. If you look up the ToS for any game publishing platform or application store, you will see similar terms. Some sites reserve the right to feature or use your stuff for purposes of promoting their site, and so on. It should be noted, though, that you are still ultimately responsible for the content, even if someone breaks your ToS to provide you with it. In other words, since you are accepting user created content, even though it is done with your tools, there are still "rights" issues that have to be considered. If your levels allow people to supply their own textures or models, then you would need to ensure those textures and models are free of rights violations. Perhaps specific level designs would infringe on trademarks or one thing or another. There are a lot of considerations. In either case, you are setting yourself up for a lot of potential legal problems if you simply use user contributed content directly in your game.
You would have to first verify and ensure you have all the necessary rights to use the content (which, in itself, might be too much work to be worth it) to avoid issues down the line when someone sees their stuff in your game. People obtaining and using content that contains rights violations is a totally different issue, out of your control (from a non-technical standpoint, e.g., not having DRM mechanics built in). If I were you, and you were worried about these things, I simply wouldn't bundle any user contributed content with your game. Instead, you create a website that allows people to share and download maps, taking into consideration DMCA provisions and the steps necessary for addressing copyright complaints so you are fulfilling your legal obligations. Here is one such page (random, no affiliation) that will give you an idea about that: [url="http://borgheselegal.com/news/44-internet-law/85-reducing-company-website-liability-steps-to-verify-dmca-safe-harbor-compliance"]Reducing Company Website Liability - Steps to Verify DMCA Safe Harbor Compliance[/url]. That way, if anyone has any copyright claims, they need to follow the process and give you the appropriate time to respond vs. just sending out a C&D or filing a lawsuit for the violations. Here is another page (random, no affiliation) that covers this as well: [url="http://www.patent-trademark-law.com/copyrights/plagiarism-take-down-stolen-content/cease-desist-dmca-takedown/"]How to send Cease & Desist and DMCA Takedown letters to sites infringing your copyright[/url]. Of course, a lot of these things depend on how your actual level editing pipeline works. If you are talking about a 2D game with a fixed number of sprites to use, where the map format uses only numbers to represent the tiles and users cannot add any custom images or sounds, then you will hardly have to worry about any of these things.
In that case, it's simply a matter of establishing the terms of what you can do with the content once a user submits it to you. On a side note, and I'm sure you are familiar with the game, StarCraft 2 took quite an interesting path when it comes to content creation by keeping everything server-side. Even with that model, though, they still have to maintain a clear [url="http://us.battle.net/support/en/article/starcraft-ii-copyright-infringement-policy"]copyright infringement policy[/url] consistent with what was previously mentioned. And as always, you should consult a lawyer!
  10. Can't type "}" in Visual C++ 2010

    Here's something you can try, not sure if it'll work:
    1. Download and install AutoHotKey: [url="http://www.autohotkey.com/"]http://www.autohotkey.com/[/url]
    2. Create a new AHK script and paste in the following contents:
    [code]
    #NoEnv ; Recommended for performance and compatibility with future AutoHotkey releases.
    #Warn ; Recommended for catching common errors.
    SendMode Input ; Recommended for new scripts due to its superior speed and reliability.
    SetWorkingDir %A_ScriptDir% ; Ensures a consistent starting directory.
    ::}::{}}
    [/code]
    3. Save and then execute the script (you will see a green H icon in your task tray if it's running). To stop the script, right click on this icon and choose the appropriate context item.
    4. Try using the } key in Visual Studio followed by another key to trigger the macro. For example, press } and then 'enter' or 'spacebar'.
    The idea is to make use of [url="http://www.autohotkey.com/docs/Hotstrings.htm"]Hotstrings[/url] to replace the keystroke itself with the text. With how AHK works, it should process the } before Visual Studio does (assuming Visual Studio is in fact eating the keystroke) and you should get the text to show up after the follow-up keystroke triggers the macro. Do you have any addons installed in Visual Studio? Perhaps a bug in one of those is causing the problem. You should also check your system for any system-wide hooks or DLLs being loaded in AppInit_DLLs. Perhaps there is something that isn't compatible with Visual Studio, but is working fine with everything else.
  11. Diablo 3 representing the future of Anti-piracy?

    I think most people misunderstand what Diablo 3 and Battle.net 2.0 are. Chindril seems to have an important part of it down, but didn't really elaborate on the topic at hand, so I will. I'm not going to try to change anyone's opinion on the matter, but hopefully more people can see things how they really are rather than how they perceive them to be. To start out, I'll list some points of interest that my explanation is based upon. I'm not going to fully elaborate each point, but just give enough for readers to hopefully see where I am going with it. 1. DRM simply does not work for the masses. Short term or long term, it's been shown time and time again. In the end, the paying customers suffer the most, while the illegitimate users benefit the most. Some companies are finally getting the picture, while some are not. 2. Writing the functionality for a single player version, a multiplayer version, and a LAN version of a game, in addition to the multiplayer server, is impractical when all you have to do is write a multiplayer client and a multiplayer server (more on this later). 3. In this digital age, selling digital products has become increasingly difficult due to piracy. The business model of selling digital goods is going down the drain. The foreseeable viable alternative is to sell access to a service instead. 4. We, as a society, are experiencing a digitally driven social revolution. Facebook, Twitter, reddit, and many other services are now a part of everyday life in ways that social media was never imagined to be a decade ago. Now, how are all these points relevant to the discussion at hand? Consider if you were Blizzard: [i]"If we write a single player game, it will get pirated and we will lose money.
If we add DRM, we will be vilified and cause even more people to pirate the game."[/i] [i]"If we write a SP/SP-LAN/MP version of the game, we are doing a lot more work and have a lot more security concerns, as we did with D2, which will also cause us to lose a lot of money."[/i] [i]"If we don't legitimize official item selling via the RMAH, someone else will capitalize on the market, like in Diablo 2, and that money will not be going to us."[/i] [i]"We need a way to add social elements to the game to stay relevant and allow people to enjoy the game with friends easily."[/i] The only solution is a client-server model for the game. Let's eliminate single player altogether so people have nothing to crack/pirate. Cut down on all the extra security and maintenance required for SP/SP-LAN/MP versions of the game so we only have to deal with a non-authoritative client and an authoritative server. Let's add a GAH/RMAH so people can safely do what they always wanted to while allowing us to capitalize on our own IP. Sure, people will go outside our system, but our system will be the most safe and convenient for users. Finally, let's add some more social features to the game via Battle.net 2.0. Friends should be able to easily join each other's games at the click of a button. Matchmaking should be seamless and require no effort for people to join new games and complete quests with strangers. This is what I truly think Blizzard's logic was for the design of Diablo 3. To maximize profits, minimize costs, and be in as much control as possible over their game. Isn't that the ultimate goal of any business? I can't say I blame them for trying to financially survive in these times. With all that being said, the designs of Diablo 3 and Battle.net 2.0 are not so much about anti-piracy as much as being the one stone that kills many birds, so to speak.
It's not a silver bullet, obviously, but it addresses the main concerns with making a new blockbuster game, and I think it is the future of major games. [b]However[/b], where Blizzard has failed is the actual [u]implementation[/u] of everything I just said. I was a part of the beta since late last year. Each major test had the same connectivity issues experienced on login. Quite literally, they had the same problem for at least 6 months and were unable to fix it by the time of launch. The launch day problems did not surprise me, but given the amount of time Blizzard had to address the problem, obviously they failed pretty hard. Likewise with random disconnects, another issue that had been going on since beta. Next, the state of their GAH is just a mess. I find it totally unacceptable that, after all this time and testing, they have to take half of it down and rework it while the other half is unstable the majority of the time. Given that the RMAH was going to work in the same way, I see now why it has been delayed for a while, which is totally disappointing, all things considered. I don't expect anything to be flawless, but I do expect a certain level of quality in a game like this, and that level has not been met. The "social" aspects of the game are just pitiful. Their matchmaking is akin to blind or speed dating. You get no choice in who you might get grouped with, group sizes were low for a while, and trying to play the game with randoms who had really bad gear is no fun at all. I can't tell you how many times I joined people who had 6k hp in Hell with hardly any resists, and had to carry them through everything. Have you ever been in a group with all of the same class? It can be quite annoying. When I play my Witch Doctor, I prefer to be in a group with a Monk or Barb at least. If I play my Wizard, I'd like to be with a Barb or Witch Doctor. So on and so forth. Their biggest mistake, though, is failing to establish a real identity for the game.
The name "Diablo" is what carried the game. So much changed in the beta from the time I started trying it until May, when it shut down before launch. PvP was shelved, and who knows when that will be added. The AH, as mentioned before, is a mess. The state of Inferno and the itemization in game show clear design flaws that make you ask, "What were they thinking?" Honestly, it feels like they ran out of time and had to put out something for the money, and will be spending the foreseeable future "finishing the game". I myself hated the beta with a passion. I thought I'd never play the game based on what I saw. However, the idea of the RMAH intrigued me and I'm a sucker for the "Diablo" title, so I decided to give it a go. I figured I could play a little and, when the RMAH came out, see if I wanted to stick around or not. With the way things have worked out now, though, I've stopped playing and will be awaiting the 1.0.3 patch. I've gotten 3 different classes to 60 (which is pretty easy with how the game is set up), but with the way items work and how Inferno is designed (don't get me started on all the flaws and exploits), the game is not only unplayable, but simply unenjoyable now. I don't regret the $60 purchase, but after all this time and knowing what types of resources Blizzard has, I find Diablo 3 to be quite disappointing, which is a shame because there are a lot of really nice things about the game that get overlooked and not talked about.
  12. I think you need to start over with the code you are working with and restructure it. When you work with TCP, it's far easier to think in layers when working with the data. The lowest layer is the raw byte stream you send and receive. At this layer, all you are worried about are bytes and making sure you send and receive them. What the bytes represent is totally irrelevant; all you care about is making sure they are processed correctly by the system. The next layer up is your protocol layer. This layer gives meaning (but not context) to specific bytes and determines how data is processed by the system. For example, using "§" as a delimiter defines your protocol. You know messages run from the beginning of the stream until a "§" is received. Finally, the message layer is on top. This layer gives context to the data passed using the protocol. In your case, you only have one type of implicit message, text, but you could expand your protocol to support other types of messages as well. For example, add more delimiters that would result in different processing of the data. I.e., let's say you use "[" and "]" to mark a section of text that should be capitalized; that'd be part of the protocol, while the ability to "bold" text is part of the message itself. When receiving data, the process is [Raw Bytes] -> [Protocol] -> [Message(s)]. When sending data, the process is reversed: [Message(s)] -> [Protocol] -> [Raw Bytes]. This means your send/recv logic should be generic, protocol agnostic, and completely reusable for any program, really. Since you are working with TCP, and TCP is a stream protocol, you have to make use of buffering. At this point in your learning and programs, you do not have to worry about the extra overhead from data copies or allocations or anything like that. You just want good solid code that works and that you can understand. You will need to buffer all data you receive at the lowest layer and then allow the next layer to process it separately.
Once the protocol layer is done processing it, it reconstructs the messages and buffers those for the system to process. When you go to send data, the reverse happens: you buffer a higher-level message first, then let the protocol layer break the messages down into byte buffers, then dispatch the buffers to the raw processing layer. Putting all this together, here's a simple single-threaded, one-client example that shows the distinct separation of the layers. Only the important stuff is commented. [spoiler]
[source]
#include <winsock2.h>
#include <mswsock.h>
#include <windows.h>
#include <stdio.h>
#include <string>
#include <vector>
#include <list>
#include <iterator>
#include <algorithm>

#pragma comment( lib, "ws2_32.lib" )

int main( int argc, char * argv[] )
{
    WSADATA wsadata = {0};
    int error = 0;
    error = WSAStartup( MAKEWORD( 2, 2 ), &wsadata );
    if( error != 0 )
    {
        printf( "WSAStartup failed with error (%d).\n", error );
        return -1;
    }
    if( LOBYTE( wsadata.wVersion ) != 2 || HIBYTE( wsadata.wVersion ) != 2 )
    {
        printf( "WSAStartup does not support version 2.2.\n" );
        error = WSACleanup();
        if( error == SOCKET_ERROR )
        {
            printf( "WSACleanup failed with error (%d).\n", WSAGetLastError() );
        }
        return -1;
    }

    SOCKET listener = socket( AF_INET, SOCK_STREAM, IPPROTO_TCP );
    if( listener == INVALID_SOCKET )
    {
        printf( "socket failed with error (%d).\n", WSAGetLastError() );
        return -1;
    }

    sockaddr_in localAddress = { 0 };
    localAddress.sin_family = AF_INET;
    localAddress.sin_addr.s_addr = htonl( INADDR_ANY ); // Listen on any interface.
    localAddress.sin_port = htons( 7777 );

    error = bind( listener, reinterpret_cast< sockaddr * >( &localAddress ), sizeof( localAddress ) );
    if( error == SOCKET_ERROR )
    {
        printf( "bind failed with error (%d).\n", WSAGetLastError() );
        closesocket( listener );
        error = WSACleanup();
        if( error == SOCKET_ERROR )
        {
            printf( "WSACleanup failed with error (%d).\n", WSAGetLastError() );
        }
        return -1;
    }

    error = listen( listener, 1 );
    if( error == SOCKET_ERROR )
    {
        printf( "listen failed with error (%d).\n", WSAGetLastError() );
        closesocket( listener );
        error = WSACleanup();
        if( error == SOCKET_ERROR )
        {
            printf( "WSACleanup failed with error (%d).\n", WSAGetLastError() );
        }
        return -1;
    }

    sockaddr_in remoteAddress = { 0 };
    int remoteAddressSize = sizeof( remoteAddress );
    SOCKET client = accept( listener, reinterpret_cast< sockaddr * >( &remoteAddress ), &remoteAddressSize );
    if( client != INVALID_SOCKET )
    {
        printf( "Accepting a connection from %s:%i.\n", inet_ntoa( remoteAddress.sin_addr ), ntohs( remoteAddress.sin_port ) );

        u_long mode = 1;
        error = ioctlsocket( client, FIONBIO, &mode );
        if( error == SOCKET_ERROR )
        {
            printf( "ioctlsocket failed with error (%d).\n", WSAGetLastError() );
        }
        else
        {
            std::list< std::string > incomingMessages;
            std::list< std::string > outgoingMessages;
            std::vector< char > sendWorkspace;
            char recvBuffer[8192];
            std::vector< char > recvWorkspace;
            bool checkRecvWorkspace = false;

            // Client welcome message.
            outgoingMessages.push_back( "Welcome!\r\n" );

            while( true )
            {
                //--------------// Protocol processing logic (send) //-----------------------//
                if( !outgoingMessages.empty() )
                {
                    std::list< std::string >::iterator itr0 = outgoingMessages.begin();
                    while( itr0 != outgoingMessages.end() )
                    {
                        std::string & message = *itr0;
                        message += '§';
                        std::copy( message.begin(), message.end(), std::back_inserter( sendWorkspace ) );
                        ++itr0;
                    }
                    outgoingMessages.clear();
                }

                //--------------// Raw data processing logic (send) //-----------------------//
                if( !sendWorkspace.empty() )
                {
                    int count = send( client, &sendWorkspace[0], static_cast< int >( sendWorkspace.size() ), 0 );
                    if( count == SOCKET_ERROR )
                    {
                        error = WSAGetLastError();
                        if( error != WSAEWOULDBLOCK )
                        {
                            printf( "send failed with error (%d).\n", error );
                            break;
                        }
                    }
                    else
                    {
                        sendWorkspace.erase( sendWorkspace.begin(), sendWorkspace.begin() + count );
                    }
                }

                //--------------// Raw data processing logic //------------------------------//
                int count = recv( client, recvBuffer, sizeof( recvBuffer ), 0 );
                if( count == 0 )
                {
                    printf( "Disconnected.\n" );
                    break;
                }
                else if( count == SOCKET_ERROR )
                {
                    error = WSAGetLastError();
                    if( error != WSAEWOULDBLOCK )
                    {
                        printf( "recv failed with error (%d).\n", error );
                        break;
                    }
                }
                else
                {
                    std::copy( recvBuffer, recvBuffer + count, std::back_inserter( recvWorkspace ) );
                    checkRecvWorkspace = true;
                }

                //--------------// Protocol processing logic //------------------------------//
                // Since it is possible we get some data that is not complete, we only need to check
                // the workspace once it "changes". Otherwise, since we are in non-blocking mode, we
                // would be performing the same redundant checks each loop on data we already know
                // is incomplete.
                if( checkRecvWorkspace )
                {
                    checkRecvWorkspace = false;
                    // Loop while we have raw data to process. This is so we can extract as many
                    // messages at once rather than just one at a time per loop.
                    while( !recvWorkspace.empty() )
                    {
                        std::vector< char > message;
                        for( size_t idx = 0; idx < recvWorkspace.size(); ++idx )
                        {
                            if( recvWorkspace[idx] == '§' ) // alt + 167 in console
                            {
                                // Extract the message.
                                std::copy( recvWorkspace.begin(), recvWorkspace.begin() + idx, std::back_inserter( message ) );
                                // Remove the message and delimiter from the workspace.
                                recvWorkspace.erase( recvWorkspace.begin(), recvWorkspace.begin() + idx + 1 );
                                // Do not continue checking.
                                break;
                            }
                        }
                        // We only need to continue if we actually have a message to process.
                        if( message.empty() )
                        {
                            break;
                        }
                        // Make a null terminated string.
                        message.push_back( '\0' );
                        // TODO: Verify message data, invalid characters, etc...
                        // Save the message for processing by the system.
                        incomingMessages.push_back( std::string( &message[0] ) );
                    }
                }

                //--------------// Message processing logic //-------------------------------//
                // Check to see if we have any messages to process. I check empty
                // to keep scope space clean of extra variables.
                if( !incomingMessages.empty() )
                {
                    bool doExit = false;
                    std::list< std::string >::iterator itr0 = incomingMessages.begin();
                    while( itr0 != incomingMessages.end() )
                    {
                        std::string & message = *itr0;
                        // Simple command handling example.
                        if( message == "exit" )
                        {
                            doExit = true;
                            break;
                        }
                        else if( message == "hello" )
                        {
                            // Note how we simply save the higher level message to the list
                            // and let the protocol processing logic take care of the rest.
                            outgoingMessages.push_back( "world!\r\n" );
                        }
                        else
                        {
                            printf( "Error: Unprocessed message: %s", message.c_str() );
                        }
                        ++itr0;
                    }
                    incomingMessages.clear();
                    if( doExit )
                    {
                        printf( "Client is exiting...\n" );
                        break;
                    }
                }

                // Prevent 100% CPU usage in this example.
                Sleep( 1 );
            }
        }

        error = shutdown( client, SD_BOTH );
        if( error == SOCKET_ERROR )
        {
            printf( "shutdown failed with error (%d).\n", WSAGetLastError() );
        }
        error = closesocket( client );
        if( error == SOCKET_ERROR )
        {
            printf( "closesocket failed with error (%d).\n", WSAGetLastError() );
        }
    }
    else
    {
        printf( "accept failed with error (%d).\n", WSAGetLastError() );
    }

    error = closesocket( listener );
    if( error == SOCKET_ERROR )
    {
        printf( "closesocket failed with error (%d).\n", WSAGetLastError() );
    }
    error = WSACleanup();
    if( error == SOCKET_ERROR )
    {
        printf( "WSACleanup failed with error (%d).\n", WSAGetLastError() );
    }
    return 0;
}
[/source]
[/spoiler] In this trivial example, everything is "inline", but when you use this approach, you can wrap everything up into helper functions and classes/structures to keep things organized and support more than one client. Each "context" object will have a socket, a workspace buffer, and message queues. This way, no matter what underlying send/recv mechanisms you use, the message layer remains the same, as does the protocol layer. If you want to change up the protocol some, the other layers aren't affected, and so on. You won't ever "send" or "recv" data directly, only indirectly through buffering. 
This way, you can properly handle the semantics of the TCP stream as well as gain some flexibility in your system. It takes some getting used to working with TCP and this approach, but in the long run, it helps make your system a lot more manageable compared to the direction you are going right now. Good luck!
  13. Dealing with idle state on the server

    [quote name='fholm' timestamp='1319047054' post='4874395'] The server is running, everything is fine, it's handling different rooms, clients, actors over the network. But when everyone disconnects, or there are very few players on the server, I don't want it to just spin through receive->simulate->send loop over and over again because this basically steals one core (the server is single-threaded) at 100% usage and with the server just spinning round round round. So I solved this by a few clever checks with hunts down any remaining work, and when we're sure there is absolutely NO work to do, it will stall for 1ms by doing sleep(1). This feels very very dirty (I've done a decent chunk of multi-threaded applications, and sleep() is usually the effect of a somewhat bad design). I've also tried with using several different types of the thread/os-level events available in Win32, timers, etc. but nothing seems to be working as well as I want.[/quote] If you are using IOCP on Windows, then something is wrong with your current implementation, because IOCP naturally solves this issue by design. In typical IOCP use, you create a bunch of network worker threads that all block on [font="Consolas, Courier, monospace"][url="http://msdn.microsoft.com/en-us/library/windows/desktop/aa364986(v=vs.85).aspx"]GetQueuedCompletionStatus[/url][/font], waiting for work. When there is no work, there is no worker thread execution, because they are all blocking. That means nothing will spin-wait or waste CPU cycles in an infinite loop bounded by a Sleep/SleepEx call. The [i]scalability[/i] aspect of IOCP comes from the fact that when you need more processing power, you simply create more threads that sit on GQCS, and assuming you have the hardware resources, everything just works without change to anything else (of course, you have to have designed code that scales, but that is another issue). 
If you have all your worker threads blocking on GQCS, the most obvious question is: how do you exit worker threads? This is also really easy by design, since you just call [font="Consolas, Courier, monospace"][url="http://msdn.microsoft.com/en-us/library/windows/desktop/aa365458(v=vs.85).aspx"]PostQueuedCompletionStatus[/url][/font] with a user-defined message that you process in the worker thread, signaling it to exit and not loop back to the next call to GQCS. Likewise, for any other custom event handling, PQCS is used to give the next available worker thread work to do. To implement a generic worker thread event handling system, you could also make use of the [font="Consolas, Courier, monospace"][url="http://msdn.microsoft.com/en-us/library/windows/desktop/ms684954(v=vs.85).aspx"]QueueUserAPC[/url][/font] function. In that design, you simply set up a pool of worker threads that all block indefinitely in an alertable SleepEx call. When you have work to process, you call QUAPC with the handle of one of the worker threads so the work is processed in the context of that thread. For more information on that topic, check out [url="http://weblogs.asp.net/kennykerr/archive/2007/12/11/parallel-programming-with-c-a-new-series.aspx"]Parallel Programming with C++[/url], an older but still relevant blog series that has very useful information about some Win32 stuff. I wrote some stuff with [url="http://www.gamedev.net/topic/533159-article-using-udp-with-iocp/"]IOCP and UDP[/url] a while ago that sort of pertains to these issues. I have learned quite a lot since, so some stuff I got right, but a lot of stuff I got wrong in my understanding (too much to go over). Nowadays, unless you absolutely have to stick to writing everything yourself, there are libraries out there that take care of all these things for you and make life a lot easier. 
Such libraries include [url="http://www.cs.wustl.edu/~schmidt/ACE.html"]ACE[/url] and [url="http://www.boost.org/"]Boost[/url]. Boost::Asio is the main library you would be looking to get into, but there are a lot of other Boost libraries you would make use of. I also wrote some [url="http://www.gamedev.net/blog/950/entry-2249317-a-guide-to-getting-started-with-boostasio/"]boost::asio stuff[/url] not too long ago. Once again, some stuff I got right, some stuff I got wrong, so take it with a grain of salt. There are a lot of issues at hand that would have to be explained first before simply updating those works. Anyways, getting back to the issues at hand, taking care of the IOCP aspect will fix anything inefficient about the network-related work. What it will not take care of is your main simulation loop that performs any system upkeep. You can either stick with the Sleep pattern to throttle when there is no work, or you can redesign your system to be more scalable and asynchronous and make use of worker threads to handle execution of any pending work. For example, using boost, you would have at minimum two pools of worker threads. The first pool is for network-related work only. The callbacks that execute should involve very little shared state, so you should not be hitting any unavoidable bottlenecks from having to synchronize access to global shared resources. This means that most business logic processing does not take place in these threads; work is forwarded outside that system. The second worker thread pool is for everything else that needs to execute. The reason for separating them is to achieve a more flexible system, rather than having to worry about longer-running tasks starving out other critical tasks. Such a design is taken by .Net in their ThreadPool design along with async operations. In doing so, all threads are always blocking, waiting for work. When there is no work, there is no execution, so you do not waste any resources. 
When there is work, threads will process it as efficiently as your code is implemented, and that is that. The only times you might need to use Sleep in such a design are when the overhead of starting a new async operation is higher than simply waiting and trying again a limited number of times. For example, say you need to open a file. If your first attempt fails, you might want to loop a few times and keep trying rather than simply let the operation fail. Otherwise, you would be setting up a timer to execute a limited number of times and then have to continue execution once the file was opened, which really complicates things. The downside to using boost is the resource/performance overhead you are trading for simplicity and convenience. In most cases, it is not a big deal, because you end up writing code that is easier to maintain and understand, without having to worry about all the little annoying issues you otherwise would have to. However, some people have very specific needs or are unable to use such libraries due to licensing issues, but if that does not apply to you, then you should definitely check it out. That is not to say boost is without its own quirks, but there is a large community of support available.
  14. [quote]I'm having a hard time understanding how to make it connect more than one person at once and continue functioning after people randomly disconnect(and not just sit in a permanent error state). [/quote] In typical network programming, you have a listening socket that calls accept over and over to accept new incoming connections (or just once to accept one connection, as most trivial examples do). Depending on the networking approach you take, whether it's blocking or non-blocking, event based or asynchronous, you will obtain a handle to the connection to work with via a socket. The lines (in the main function): [code]
boost::shared_ptr< MyConnection > connection( new MyConnection( hive ) );
acceptor->Accept( connection );
[/code] represent one such instance of performing that logic. The key difference in this approach is that, rather than working with a raw socket handle, you work with a connection object that maintains the socket handle for you as well as providing you the context object to work from. So, rather than accepting a new connection and then allocating a context for it later on, a context is allocated for it initially and tied to the socket object. This is useful when you need to associate meaningful user data with each socket that you post for accepting sooner rather than later. Some designs are better implemented this way, while others are not. Either way, you can choose between two different methods for "refilling" the pending accept queue:

1. Post a lot of accepts up front to handle bursts of incoming connections (like a high-use web server might) and refill when the count hits a specific low threshold.
2. Post one accept and then post another after that accept has completed (thus limiting your connection acceptance rate, but requiring fewer resources at any given time compared to keeping a pool of them).

You can actually mix the two, posting a lot of accepts up front and refilling them as they are processed, depending on your application needs. 
Not all servers or networking projects are meant to accept new connections indefinitely though, so depending on your needs, you do have to tailor examples accordingly. Once a connection is accepted, MyAcceptor::OnAccept is called for you to verify whether the connection is allowed (think in terms of an application-specific software firewall). If it is allowed, the MyConnection::OnAccept function is then invoked. Alas, a new connection is never posted again for accepting, so you cannot accept more connections! To remedy this behavior, you simply create a new MyConnection object and pass it to the acceptor's Accept. The following code is the new MyAcceptor::OnAccept function that accepts more than one connection: [source]
bool OnAccept( boost::shared_ptr< Connection > connection, const std::string & host, uint16_t port )
{
    boost::shared_ptr< MyConnection > new_connection( new MyConnection( GetHive() ) );
    this->Accept( new_connection );

    global_stream_lock.lock();
    std::cout << "[" << __FUNCTION__ << "] " << host << ":" << port << std::endl;
    global_stream_lock.unlock();

    return true;
}
[/source] Each time a connection is accepted, a new connection is posted so you can handle more than one connection. When you use this approach, you have to ensure you post the new connection for acceptance first; if you accidentally skip that logic, then no more connections will be accepted. If you instead wanted to post the accept from another thread based on an event, the same logic from the main function would be used: [code]
boost::shared_ptr< MyConnection > connection( new MyConnection( hive ) );
acceptor->Accept( connection );
[/code] Putting implementation specifics aside, the key thing to understand here is that for every connection you wish to accept, you have to post another accept for the next connection. The server is not in an error state; it's simply not accepting any more connections! 
This was by design for that simple example (as is pretty standard, see [url="http://msdn.microsoft.com/en-us/library/windows/desktop/ms737526(v=vs.85).aspx"]accept[/url] on MSDN for reference). In terms of other boost::asio examples that show this, the [url="http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/example/chat/chat_server.cpp"]chat server[/url] example does a good job. Bolded below are the lines of interest (hm, seems bold tags don't work correctly, look at the b and /b in square brackets): [source]
class chat_server
{
public:
    chat_server(boost::asio::io_service& io_service, const tcp::endpoint& endpoint)
        : io_service_(io_service),
          acceptor_(io_service, endpoint)
    {
        [b]start_accept();[/b]
    }

    [b]void start_accept()
    {
        chat_session_ptr new_session(new chat_session(io_service_, room_));
        acceptor_.async_accept(new_session->socket(),
            boost::bind([b]&chat_server::handle_accept[/b], this, new_session,
                boost::asio::placeholders::error));
    }[/b]

    void [b]handle_accept[/b](chat_session_ptr session, const boost::system::error_code& error)
    {
        if (!error)
        {
            session->start();
        }
        [b]start_accept();[/b] // NOTE: Can be an issue if start() throws!
    }

private:
    boost::asio::io_service& io_service_;
    tcp::acceptor acceptor_;
    chat_room room_;
};
[/source] From that code, an accept is first posted upon construction; then, after each subsequent incoming connection is accepted, a new accept is posted. The danger of posting the accept at the end of the handler is what I was mentioning before. [quote]They're all kind of tied together into a big mess of classes and methods that are interdependent. [/quote] It's only a few hundred lines of code! If you think that's a big mess, then heaven forbid you actually look through the boost::asio library code! ;) Most of the mess you see is related to the look and feel of boost using the C++ language and not the logic at hand. The interdependency of the code is by design and pretty much unavoidable since it wraps up the boost::asio library. 
For more information on this, check out the following page: [url="http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/overview/core/async.html"]The Proactor Design Pattern: Concurrency Without Threads[/url]. The code is organized as follows:

Hive - Wraps up the boost::asio io_service object and the work object into one discrete object you can control the 'master' system from. Any objects that are constructed using the Hive object can be serviced through that Hive object.

Acceptor - Wraps up a boost::asio acceptor object to allow you to accept incoming connections. Note: an acceptor is a general purpose name; do not think of it as a "server", as the concept of a server is far more specific in functionality, while an Acceptor simply accepts/maintains new connections. You can, however, turn an Acceptor into a server using the wrapper, since that's the way the code is designed: to make things easier to work with.

Connection - Wraps up a boost::asio socket object that represents a connection to a remote host or an incoming connection.

To implement your own custom logic upon events and context-specific data, you derive your own classes as the examples do and then get to work. While it might seem trivial, if you look through all my examples before the wrapper and imagine doing that for every project, hopefully you can see how tiresome and messy it'd be. The "purpose" of the wrapper is to show a practical OOP example of what you might want to do with the core functionality the boost::asio library gives you. Most of the time, you will duplicate network code project after project, and after a while, you will want to write a wrapper to avoid it. The network.cpp/.h was a simplification of my wrapper. While the namespace expansion is quite annoying, I can't say I'd want to go back and change anything about it conceptually. You won't be able to implement the generality or flexibility the wrapper provides in significantly less code. 
You could merge the OnAccept/OnConnect and pass an enum to save some space, but not much. The implementation specifics of the atomic_cas32 are really iffy and would be the only thing I'd consider looking into updating; simply using a synchronized lock would add a lot of extra overhead, though it might be needed on some platforms. The intent of the design is also very specific to a recurring problem I had come across with network code. That is, the separation of "client" and "server" objects made for far messier code, and actual communication between different objects greatly increased code complexity. With this proactor design that boost::asio uses, certain tasks are made a lot easier, since communication between "client" and "server" objects is seamless. The biggest example is a proxy that requires accepting incoming connections and then connecting to new remote destinations. [quote]My understanding is that I want multiple sockets so that I can use them to determine who gets sent what from the server and I can tell which clients are sending me what. Is that accurate? [/quote] Once you add in the code to accept more connections, your network events will execute in the context of the connection that receives them (each context has its own strand, so in the context of the connection, everything is thread safe internally). One of the biggest networking challenges comes about when you need to multiplex these events into a "simulation" of some sort. For example, if you were writing a web server, each connection does not need to know of the others, so the code as-is would just have the http processing added to it. There is very little global state to worry about, so you don't have much to do. If you were instead writing a game server, then you would have to come up with a way to pass the objects created from packets to your "simulator" in a fashion that lets you associate each object with its connection and then easily send objects back. 
This is a lot more complex and is not easily shown in simple examples. One such "easy" way would be to give each connection a GUID, keep a mapping of GUIDs to Connection objects, then lock a global event queue and post events to that queue for a main thread to process. That in itself is another discussion though, and outside the scope of the guide. If you are going to use the network wrapper for any testing and such, line 122 of network.cpp should read "connection->StartError( error );" and not "StartError( error );". It's a minor error that should rarely trigger, but if it does, it might mess something up that shouldn't be. The wrapper code is simply the lowest-level code your networking logic uses for getting the raw data and managing connections. For any real project, you would have to add additional layers on top to handle your specific network protocol, message serialization/deserialization, and everything on up in terms of program logic. Lastly, the most important thing to understand about that code are the sacrifices you make if you use it. Performance hits are a key issue, as the overhead of the code will be at some factor that has to be measured and taken into consideration. For example, boost::bind does carry quite an overhead, but the trade-off is the unique functionality it provides and the simplification of a lot of things that otherwise would not be possible as easily. Relying on strand for synchronization makes life a lot easier as well, at the expense of the overhead. boost::asio also has specific [url="http://stackoverflow.com/questions/2893200/does-boostasio-makes-excessive-small-heap-allocations-or-am-i-wrong"]allocation strategies[/url] ([url="http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/example/allocation/server.cpp"]official example[/url]) you must be aware of if you plan on using it in production code. 
The use of vectors and lists also carries quite an overhead, but all these issues are only issues if you measure performance and determine it is not suitable for your code. When trying to write generic, simple wrapper code, certain sacrifices do have to be made. Once you know exactly what your project requires networking-wise, you can write your own code, take what you need, and get rid of the rest. If you are new to networking, there are a lot of concepts and specific implementation strategies to wrap your head around. Boost::asio and C++ are by no means "simple", so you should discuss what you are looking for and why you think boost::asio can help you on your project so you can be sure you are on the right path. I'm not trying to say anything against boost::asio, but everyone's situation is different, so while you are here on the forums, it's a good idea to make use of all the resources here. Hopefully that clears up some of your questions!
  15. Proper sendto() Error Handling

    [quote name='YogurtEmperor' timestamp='1305280155' post='4810142'] I have just never done networking coding before and want to understand the API and all the little details that go into the big implementation first. Very important for a solid foundation, and a solid foundation is very important for my task (a next-gen commercial game engine).[/quote] If this is the case, you will definitely want to look at existing UDP libraries used by games to get an idea of what they do and what features they provide. Namely, [url="http://www.jenkinssoftware.com/"]RakNet[/url] (complete game library), [url="http://enet.bespin.org/"]enet[/url] (low level wrapper), [url="http://code.google.com/p/lidgren-network/"]lidgren[/url], [url="http://www.pxinteractive.com/index.shtml"]NetDog[/url] (caters to MMOs), [url="http://www.smartfoxserver.com/"]SmartFoxServer[/url] (a whole platform), [url="http://www.exitgames.com/Photon"]Photon[/url] (another platform), [url="http://www.boost.org/doc/libs/1_46_1/doc/html/boost_asio.html"]boost::asio[/url] (cross platform wrapper library), [url="http://pocoproject.org/docs/00100-GuidedTour.html"]POCO[/url] (similar to the boost libraries), and finally [url="http://www1.cse.wustl.edu/~schmidt/ACE.html"]ACE[/url] (toolkit). Focus on the big picture first. The "little details" come about from the specific network implementation you use; i.e., regular Winsock, POSIX, and BSD sockets come with their own little quirks and platform-specific limitations. So if your overall goal is a next-gen commercial game engine, then the platforms you will support play a large part in what you might want to build upon. For example, if you have no plans for console support, then there is no reason not to start looking over boost or poco, which already contain a lot of code dealing with the little things. The research and development has been done, so you don't need to discover it all yourself so much as look at existing solutions to see what is done. 
That is not to say you can't code your own implementation; you can if you so choose, but trying to relearn decades of R&D from scratch is not worth it. If you do plan on supporting consoles, then things like boost or poco are no good to you, so you already know there's no real need for them, because you don't want different networking cores for different platforms (imo). You can still consult them for PC-specific stuff, but you'd probably want to go for something a lot more lightweight, depending on all the languages you will have to support for your engine. [quote]This article suggests not using a second thread to handle networking. [url="http://trac.bookofhook.com/bookofhook/trac.cgi/wiki/IntroductionToMultiplayerGameProgramming"]http://trac.bookofho...GameProgramming[/url] He says it is tricky to get working while avoiding race conditions, deadlocks, etc. These things are non-issues at my level. Should I still avoid threading? To me I think it would simplify a few things and make some things smoother, but again my knowledge of networking is naive, and I worry about getting bitten in the ass by some unforeseen hiccup.[/quote] That unforeseen hiccup is generally "race conditions, deadlocks, etc.". "Stuff" happens, even when you are familiar with the concepts. All it comes down to is "shared state", really. This is more of an issue with TCP than it is with UDP. Since TCP is a stream protocol, if you have multiple recvs posted on a socket, you must maintain the order in which the data is returned before processing it. So if thread 1 recvs data into a buffer object with a sequence id of N and thread 2 recvs data into a buffer object with sequence id N + 1, then you must be sure to process the object with id N before N + 1. This means you have to code a custom system to handle this if you need that type of functionality. 
Unless you are working with massive data transfers using hardware that is meant for it, it's usually just easier to keep one recv posted on a socket at a time and only process data from one thread. That one thread might be a worker thread in a pool of many, so it may not be the same each time, but only one concurrent recv is posted at a time. Let's say you are using TCP and doing it the traditional way. Each connection has a context that has a receive buffer. You only have one concurrent recv operation in progress at once, so no other thread will access the buffer. Once you receive data into the buffer, you parse out your packets. The trouble now is what do you do with the packets? Typically, you lock a context-specific queue and add them so they can be processed elsewhere. In the thread that will process them, the queue is locked, copied into a new queue, cleared, then unlocked. This is to help reduce lock contention between the network thread and the thread processing packets. The thread that is processing the packets does not run as fast as the network thread, so "bursts" of packets do not require a lot of lock contention. I.e., checking for packets hundreds or thousands of times per millisecond does not make much sense (in the context that we are talking about; there are other systems where it might). Other designs dispatch the packet processing logic from the network thread. This design only works for systems with very little shared state (as to not tie up the network thread) and very lightweight logic processing. For example, a simple web server with no shared state between the connections might take this approach. Since the context object will not be accessed from any other part of the system, it works out 'ok'. If you were trying to write an MMO, then this design is not practical, because you tie up a network worker thread (which, if your thread pool does not grow, can be a real problem). 
It's far easier to deadlock threads when there is a lot of shared-state locking, so by keeping locks to a minimum (i.e., lock the packet queue, then dispatch) you reduce the problems you might have.

With UDP, you generally only need one main thread to handle the server socket. This is because you are not working with "connections" like you are in TCP. As a result, when you get a packet in the network thread, you can simply pass it to another thread to take care of processing it. In other words, the network thread is only responsible for pulling packets from the wire and passing them along.

For this to work out, you have to make sure the method you are using to pass the packets along does not involve a lot of overhead. For example, let's say you recvfrom one packet. You lock the queue, add the packet, and continue on. If the thread that is processing the packets acquires the lock but does not release it fast enough by the time the next packet arrives, then you stall the network thread a little each time and really don't gain anything from this setup. That is why, going back to the TCP example, when you lock the queue, copy it, and release, you minimize the contention and can maintain a well-functioning system in that regard. Depending on the platform and language, you might have to optimize the "copy" step to make sure you don't get hit with unneeded global locks for allocation or deep copying. Even so, that's an optimization that should only be pursued after profiling and finding solid evidence that it's causing issues! Likewise, the lock mechanisms you use do matter. If you are on a platform where the most viable locks incur significant overhead, then the benefit of using multiple threads is greatly reduced.

Once you have managed to get your packets out of the network layer, you are back to dealing with your typical issues of message validation, multi-threaded programming, and so on.
Working out how you "send" data through the network can be more troublesome with some designs than others. For example, you never want to allow threads to arbitrarily send data instantly. If you do, you just make life significantly harder than it has to be when things go wrong. Instead, you want to save outgoing messages to a queue and process them from a specific context.

The implications of doing this, though, can be quite complex. For example, with TCP, you have to wait until all the previously buffered data has been passed through send before starting the next send. As a result, you must make sure you chain your send-triggering logic so you don't end up with data still sitting in a send buffer, waiting to be passed through send, but with no event pending to trigger it. This is more related to the multi-threaded programming issues than to networking itself.

With UDP, you just have to make sure that up until the packet data is dispatched with sendto, you maintain protocol synchronization, so you don't have a case where you send a packet with sequence number N but a security flag based on sequence number N + 1 as a result of not handling your locks the right way (i.e., locking individual pieces of logic instead of the entire unit). If you don't have any extra protocol-specific security stuff, then it's not as big of an issue, but any decent system should.

Of course, if you are only using one thread and very simple networking APIs, like select, you don't have all the headaches that come with multi-threaded programming, so keep that in mind. I'd say start simple and small and see how far it carries you. I've found it's nice to have solutions for both single- and multi-threaded setups, so don't feel like you have to dive straight into making an uber multi-threaded solution whose power and functionality might never be utilized due to other constraints. In the end, a lot of what you do is going to depend on your target platforms and the language you use.
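The send-chaining idea for TCP can be sketched like this (class and callback names are my own; the actual async send would be whatever your socket API provides, e.g. an overlapped WSASend or boost::asio async_write). The key invariant is that at most one send is in flight, and every completion either starts the next queued send or marks the chain idle, so nothing ever sits in the queue with no pending event to flush it:

```cpp
#include <deque>
#include <functional>
#include <mutex>
#include <string>

// Outgoing message queue with chained sends: QueueSend may be called
// from any thread, but only one send operation is ever in flight.
class SendChain {
public:
    using StartSendFn = std::function<void(const std::string&)>;
    explicit SendChain(StartSendFn start) : start_(std::move(start)) {}

    void QueueSend(std::string msg) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(std::move(msg));
        if (!sending_) {          // chain was idle: kick off a send now
            sending_ = true;
            StartNextLocked();
        }
    }

    // Called from the completion handler of the previous send.
    void OnSendComplete() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) {
            sending_ = false;     // chain goes idle until the next QueueSend
        } else {
            StartNextLocked();    // keep the chain alive
        }
    }

private:
    void StartNextLocked() {
        std::string msg = std::move(queue_.front());
        queue_.pop_front();
        start_(msg);              // e.g., post an async send on the socket
    }

    StartSendFn start_;
    std::mutex mutex_;
    std::deque<std::string> queue_;
    bool sending_ = false;
};
```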
Lastly, since I couldn't fit it in anywhere else: you have to keep in mind the data type differences and byte order of different platforms and architectures. You must code your higher-level network logic to take this into account. Otherwise, you will run into tons of (not so) fun issues that are really hard to figure out after the fact. So keep in mind: size_t size differences, wchar_t size differences, float/double precision differences, code page string processing differences, and many more. A lot of the things you have to worry about in network programming are not directly related to network API issues!
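One common way to handle this (a sketch of the convention, not the only one): use fixed-width types like std::uint32_t in anything that crosses the wire, never size_t or long, and convert with htonl/ntohl at the boundary so every platform agrees on byte order.

```cpp
#include <arpa/inet.h>   // htonl/ntohl (POSIX; winsock2.h on Windows)
#include <cstdint>
#include <cstring>

// Serialize a 32-bit sequence number into network (big-endian) byte
// order. memcpy is used instead of pointer casts to avoid aliasing
// and alignment problems.
void PackSequence(std::uint32_t host_seq, unsigned char out[4]) {
    std::uint32_t wire = htonl(host_seq);      // host -> network order
    std::memcpy(out, &wire, sizeof(wire));
}

// Deserialize back to host byte order, whatever the host's endianness.
std::uint32_t UnpackSequence(const unsigned char in[4]) {
    std::uint32_t wire;
    std::memcpy(&wire, in, sizeof(wire));
    return ntohl(wire);                        // network -> host order
}
```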