Drew_Benton

Member Since 29 Jul 2004

#4959567 What are the basics to making a Game Engine?

Posted by Drew_Benton on 16 July 2012 - 06:36 AM

So tell me, if I am speaking to anyone who has ever made an engine of their own. How did YOU do it? How did YOU get started?
I would like to see some unique and helpful answers from you guys.


In the past, I've made a few simple games and I've made a few simple engines. Nothing commercial, just indie stuff. I hope this doesn't sound too cynical, but it's the truth. I started out on GameDev back in 04-06 with aspirations to become a game developer. Unfortunately, I got caught up in the whole "making engines rather than games" deal. Ultimately, it led to the demise of my game development career and I moved on at the time. I'm not a person with regrets, but if I knew then what I know now, I'd not have wasted my time.

The fact of the matter is, a game engine isn't an end, it's a means. But you have to ask yourself: the means to do what?

If you want to make a game engine to understand how game engines work, there are far superior ways, such as studying and using existing successful game engines, whether they are commercial or not. I'm a strong believer in trial and error, but in terms of time investment, getting experience and actual portfolio work on commonly used engines looks and feels a lot better than unpolished tech demos on an engine that you might think is great, but everyone else just shrugs at.

Don't believe me? Just take a look at some job offerings for "engine programmer/developer". I'm not going to link specific postings, because it might feel a bit like advertising, but hopefully you'll get the idea. Having your own experience is not bad, but the way you do things certainly won't always be the way the "industry" does things. If you want to compete in the "industry", you have to play their game. Even if you don't want to get into the industry, part of becoming a good programmer is finding the right tools for the job. The sooner you get over the hump of trying to do everything yourself, the sooner you can actually make your dreams come true and get stuff done.

If you want to make a game engine to make games, then you should really just make games. Here is the obligatory Make games, not engines article. The entire read is good, but the third-from-last paragraph is what I want to draw the most attention to:

Most hobby developers who “finish” an “engine” that was designed and built in isolation (with the goal of having an engine, not a game, upon completion) can’t ever actually use it, and neither can anybody else. Since the project didn’t have any goals, any boundaries, or any tangible applications, it essentially attempted to solve every aspect of the chosen problem space and consequently failed miserably. ....



Looking back now, knowing a lot more than I did in the past, this is exactly what happened to me, and to most other people who went down this route. In a sense, this quote highlights the main problem most people have with "learning" anything. Trying to learn something as an "end" rather than as a "means" typically leads down a harder, less successful path than the one taken by people who approach it the other way around. Sure, there are exceptions, but that's why they are called exceptions.

How should you view game engines? As a manufacturing factory whose sole purpose is to speed up the production of games. You wouldn't build a factory without knowing what product you are producing, right? Unfortunately, most people do when it comes to game engines and games.

So my advice to you would be simple. Forget about the concept of "making a game engine", completely. As saejox mentioned, learn graphics rendering, physics, sound, input, scripting, multi-threaded programming, databases, tool development, etc... typical software development stuff applicable to game development. Once you learn those things, make games using them. When you have enough games made, you will see commonly recurring patterns of functionality and tools. Take all of that stuff and get it interconnected into a new project. You now have a game engine, without having made a game engine. From there, it's all about evolving the project as you continue to make more games from it.

If you have made games already, great! You are ahead of most people who want to start their own game engine. However, you still need to keep making games in order to understand the type of engine you need to help speed up development of similarly typed games. Making simple board games doesn't mean you are ready to make a generic game engine for an FPS, RTS, or anything like that. If you have an interest in developing a broad range of games, then focusing on an engine is not a good idea, as the game development concepts can vary between genres (e.g., an action-based MMORPG vs. a turn-based RTS).


#4959548 One-Step vs Two-Step Initialization (C++)

Posted by Drew_Benton on 16 July 2012 - 05:32 AM

I would say one step, as much as possible. Exceptions make for cleaner code.


I prefer the same. Do as much as possible in the constructor and throw an exception if anything goes wrong. If you use return values to indicate failure, you can be sure to forget to check it every once in a while and you will get unpredictable behaviour sooner or later. If you forget to catch an exception, you get at least a well defined shutdown of your application.


What makes you think that you can't have your "initialize" function throw an exception? The concept of return codes is just one of the two ways to implement two-step initialization. If you prefer one-step initialization, great! But why? If you believe exceptions make for cleaner code, and throwing an exception from an initialize function is perfectly valid and typical of C++ exception-style programming, then you haven't made an argument for either yet. Poorly constructed exception handling is no better than poorly constructed non-exception handling either, so saying you have a "well defined shutdown of your application" is not always true.

What SiCrane said about "This isn't a one-size-fits-all situation." is pretty much the main focus point that should be addressed. Looking past exceptions, which is only one of the main aspects of this issue, the cost of creation, copy, and destruction has to be kept in mind as well. How expensive is it to create, copy, or destroy your objects? Are your objects going to need pools for managing memory? It's going to depend on a case by case basis, thus there is no one-size-fits-all solution.

Furthermore, why choose one or the other when both might be more appropriate? Resource Acquisition Is Initialization (RAII) is one such idiom that is inherently one-step initialization, but there are times when adding support for two-step initialization can make for more flexible and simplified code. A specific example would be Win32 critical sections (whether you use them or not isn't important as much as understanding the justification). Rather than littering your code with Enter/LeaveCriticalSection calls, you can use RAII to greatly simplify the process. However, there might be times when you need the concept of RAII, but the implementation doesn't make logical sense. E.g., acquiring exclusive access to a CS, but not needing to maintain that access at all times (to prevent deadlocks), so allowing for two-step initialization with manual cleanup code support helps keep things simpler/cheaper than constructing/destructing new objects over and over.
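As a rough illustration of that last point, here is a minimal sketch (not code from any particular library; the names are assumptions) of an RAII guard for a Win32 CRITICAL_SECTION that is one-step by default but also supports explicit release/reacquire:

#include <windows.h>

class ScopedCriticalSection
{
public:
    explicit ScopedCriticalSection( CRITICAL_SECTION & cs )
        : cs_( cs ), owned_( true )
    {
        EnterCriticalSection( &cs_ ); // one-step: acquire on construction
    }

    ~ScopedCriticalSection()
    {
        if( owned_ )
            LeaveCriticalSection( &cs_ ); // always released on scope exit
    }

    void Release() // two-step: give up the lock early
    {
        if( owned_ ) { LeaveCriticalSection( &cs_ ); owned_ = false; }
    }

    void Reacquire() // ...and take it again only when needed
    {
        if( !owned_ ) { EnterCriticalSection( &cs_ ); owned_ = true; }
    }

private:
    ScopedCriticalSection( const ScopedCriticalSection & ); // non-copyable
    ScopedCriticalSection & operator=( const ScopedCriticalSection & );

    CRITICAL_SECTION & cs_;
    bool owned_;
};

With something like this, the common case stays one-step (construct, use, destruct), while the odd cases can call Release()/Reacquire() without paying for new objects over and over.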

It's typical in game development, and most other development, to make use of 3rd party libraries at one point or another. Maybe it's a mysql++ connector wrapper, or a physics library, or an input or audio library; the list goes on. While a lot of people have a DIY mentality, it's important to understand that you can't simply reinvent the wheel for everything and will eventually have to resort to another person's code. A lot of these libraries might be set up to use exceptions, while some might not. In either case, you have to design your code around how the other components are set up.

This means, specifically to the OP, you should not be trying to "commit" to one style or another, especially in a language like C++. It's like saying you want to have dinner tonight, but don't want to eat a meal that would require the use of a fork. To each their own, of course, but it's certainly not a typical concern from that perspective to say the least.


#4957890 Am I "using" people?

Posted by Drew_Benton on 10 July 2012 - 10:57 PM

I am not a lawyer and not trying to provide the following as legal advice, but here's something to think about. Ultimately, I don't think it's an issue that boils down to morals or ethics, but rather one that is simply legal. If you try to hide what you will be doing with user contributions, you are more likely to be viewed negatively in the eyes of the community, so I would suggest being more upfront with your plans rather than hiding them away in a legal document. I digress though.

Pretend I get level submissions from people and I use them in the game I release for sale. If this possibility is specified in the agreement for the beta (which I'd guess many people don't read), is it in any way wrong?


You will also need specific terms that apply to the process of "content submission" in addition to the normal terms applied to the game/tools themselves.

In regards to the latter, your terms should say what people can or can't do with the content they create with your tools. E.g., if you provide people with a level editor, clearly specify if they can only use it for non-commercial purposes, if they are allowed to re-distribute the content, and so on. There's a lot to consider there and it's handled a number of different ways in different scenarios.

In regards to the former, you open up another can of worms when it comes to accepting user content, especially if you plan on publishing it or redistributing it in any form. For example, and I'll use an excerpt from the GameDev.net Terms of Service:

c) Customer warrants to Provider that Customer has all necessary rights to store, reproduce, license access to, and otherwise use the data contained in each of the Customer posted content for which Customer utilizes Provider's Software and Services.

d) Customer acknowledges that Provider's software stores customer data, personalization settings, and other Customer posted content. Customer hereby grants to Provider a fully paid up, non-exclusive license to store and maintain such data for the limited purpose of providing a public forum.


'c' is vital in terms of ensuring users have the necessary rights to provide the material and 'd' is vital in establishing what that material can be used for once GameDev.net has it. If you look up the ToS for any game publishing platforms or application stores, you will see similar terms. Some sites reserve the right to feature or use your stuff for purposes of promoting their site and so on. It should be noted though, you are still ultimately responsible for the content, even if someone breaks your ToS to provide you with it.

In other words, since you are accepting user created content, even though it is done with your tools, there are still "rights" issues that have to be considered. If your levels allow people to supply their own textures or models, then you would need to ensure those textures and models are not being used in violation of someone else's rights. Specific level designs might also infringe on trademarks, and so on. There are a lot of considerations.

In either case, you are setting yourself up for a lot of potential legal problems if you simply use user contributed content directly in your game. You would have to verify and ensure you have all the necessary rights to use the content first (which, in itself, might be too much work to be worth it) to avoid issues down the line when someone sees their stuff in your game. People obtaining and using content that contains rights violations is a totally different issue, out of your control (from a non-technical standpoint, e.g., not having DRM mechanics built in).

If I were you, and worried about these things, I simply wouldn't bundle any user contributed content with the game. Instead, create a website that allows people to share and download maps, taking into consideration DMCA provisions and the steps necessary for addressing copyright complaints so you fulfill your legal obligations. Here is one such page (random, no affiliations) that will give you an idea about that: Reducing Company Website Liability - Steps to Verify DMCA Safe Harbor Compliance.

That way, if anyone has any copyright claims, they need to follow the process and give you the appropriate time to respond vs. just sending out a C&D or filing a lawsuit for the violations. Here is another page (random, no affiliations) that covers this as well: How to send Cease & Desist and DMCA Takedown letters to sites infringing your copyright.

Of course, a lot of these things depend on how your actual level editing pipeline works. If you are talking about a 2D game with a fixed number of sprites to use, and it's a matter of a map format that uses only numbers to represent the tiles and users cannot add any custom images or sounds, then you will hardly have to worry about any of these things. In that case, it's simply a matter of establishing the terms of what you can do with the content once a user submits it to you.

On a side note, and I'm sure you are familiar with the game, StarCraft 2 took quite an interesting path when it comes to content creation by keeping everything server-side. Even with that model though, they still have to maintain a clear copyright infringement policy consistent with what was previously mentioned.

And as always, you should consult a lawyer!


#4878872 [WinSock TCP] A problem sending/receiving data

Posted by Drew_Benton on 31 October 2011 - 06:46 AM

I think you need to start over with the code you are working with and restructure it. When you work with TCP, it's far easier to think in layers when working with the data.

The lowest layer is the raw byte stream you send and receive. At this layer all you are worried about are bytes and making sure you send and receive them. What the bytes represent is totally irrelevant; all you care about is making sure they are processed correctly by the system.

The next layer up is your protocol layer. This layer gives meaning (but not context) to specific bytes and determines how data is processed by the system. For example, using "§" as a delimiter defines your protocol. You know messages begin from the beginning of the stream until a "§" is received.

Finally, the message layer is on top. This layer gives context to the data passed using the protocol. In your case, you only have one type of implicit message, text, but you could expand your protocol to support other types of messages as well. For example, add more delimiters that would result in different processing of the data. I.e., let's say you use "[" and "]" to mark a section of text that should be capitalized; that'd be part of the protocol, while the ability to "bold" text is part of the message itself.

When receiving data, the process is [Raw Bytes] -> [Protocol] -> [Message(s)]. When sending data, the process is reversed: [Message(s)] -> [Protocol] -> [Raw Bytes]. This means your send/recv logic should be generic, protocol agnostic, and completely reusable for any program really.

Since you are working with TCP, and TCP is a stream protocol, you have to make use of buffering. At this point in your learning and programs, you do not have to be worried about the extra overhead from data copies or allocations or anything like that. You just want good solid code that works and you can understand. You will need to buffer all data you receive at the lowest layer and then allow the next layer to process it separately. Once the protocol layer is done processing it, it reconstructs the messages and buffers those for the system to process. When you go to send data, the reverse happens. You buffer a higher level message first, then let the protocol layer break down the messages into byte buffers, then dispatch the buffers to the raw processing layers.

Putting all this together, here's a simple single threaded, one client example that shows the distinctive separation of the layers. Only the important stuff is commented.
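As a stand-in for the original example (which isn't reproduced here), the following minimal sketch shows the receive side of that layering: single threaded, one client, everything inline. The '\xA7' delimiter (Latin-1 for "§") and the rest of the names are assumptions.

#include <winsock2.h>
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Assumes 'sock' is an already-connected TCP socket.
int RunClient( SOCKET sock )
{
    std::vector< char > workspace;          // protocol layer buffer
    std::vector< std::string > messages;    // message layer queue
    char raw[ 1024 ];

    for( ;; )
    {
        // Raw byte layer: pull bytes off the stream; their meaning is irrelevant here.
        int received = recv( sock, raw, sizeof( raw ), 0 );
        if( received <= 0 )
            return received;                // connection closed or error

        workspace.insert( workspace.end(), raw, raw + received );

        // Protocol layer: split the buffered bytes on the delimiter.
        for( ;; )
        {
            std::vector< char >::iterator itr =
                std::find( workspace.begin(), workspace.end(), '\xA7' );
            if( itr == workspace.end() )
                break;                      // incomplete message, wait for more bytes
            messages.push_back( std::string( workspace.begin(), itr ) );
            workspace.erase( workspace.begin(), itr + 1 );
        }

        // Message layer: give context to each complete message.
        while( !messages.empty() )
        {
            printf( "Message: %s\n", messages.front().c_str() );
            messages.erase( messages.begin() );
        }
    }
}

Sending would go the other way: queue a message, let the protocol layer append the delimiter, then hand the raw bytes off to send.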


In this trivial example, everything is "inline", but when you use this approach, you can wrap everything up into helper functions and classes/structures to keep things organized and support more than one client. Each "context" object will have a socket, a workspace buffer, and message queues. This way, no matter what underlying send/recv mechanisms you use, the message layer remains the same as well as the protocol layer. If you want to change up the protocol some, the other layers aren't affected and so on.

You won't ever "send" or "recv" data directly, only indirectly through buffering. This way, you can properly handle the semantics of the TCP stream as well as gain some flexibility in your system. It takes some getting used to working with TCP and this approach, but in the long run, it helps make your system a lot more manageable compared to the direction you are going right now. Good luck!


#4867486 to those who have read the boost::asio getting started guide

Posted by Drew_Benton on 30 September 2011 - 12:34 AM

I'm having a hard time understanding how to make it connect more than one person at once and continue functioning after people randomly disconnect(and not just sit in a permanent error state).


In typical network programming, you have a listening socket that calls accept over and over to accept new incoming sockets (or just once to accept 1 connection, as most trivial examples do). Depending on the networking approach you take, whether it's blocking or non-blocking, event based or asynchronous, you will obtain a handle to the connection to work with via a socket.

The lines (in the main function):
 boost::shared_ptr< MyConnection > connection( new MyConnection( hive ) );
acceptor->Accept( connection );
represent one such instance of performing that logic.

The key difference in this approach is, rather than working with a raw socket handle, you work with a connection object that maintains the socket handle for you as well as providing you the context object to work from. So, rather than accepting a new connection and then allocating a context for it later on, a context is allocated for it initially and tied to the socket object. This is useful when you need to associate meaningful user data with each socket that you post for accepting sooner, rather than later. Some designs are better implemented this way, while others are not.

Either way, you can choose two different methods for "refilling" the pending accept queue: 1. post a lot of accepts up front to handle bursts of incoming connections (like a high use web server might) and refill when it hits a specific low threshold or 2. post one accept and then post another accept after that accept has completed (thus limiting your connection acceptance rate, but requiring fewer resources at any given time compared to keeping a pool of them.) You can actually mix the two, posting a lot of accepts up front and refilling them as they are processed, depending on your application needs as well. Not all servers or networking projects are meant to indefinitely accept new connections though, so you do have to tailor the examples to your needs.

Once a connection is accepted, MyAcceptor::OnAccept is called for you to verify if the connection is allowed (think in terms of a software application specific firewall). If it is allowed, the MyConnection::OnAccept function is then invoked. Alas, a new connection is never posted again for accepting, so you cannot accept more connections! To remedy this behavior, you simply create a new MyConnection object and pass it to Accept of the acceptor. The following code is the new MyAcceptor::OnAccept function that accepts more than one connection:


bool OnAccept( boost::shared_ptr< Connection > connection, const std::string & host, uint16_t port )
{
	boost::shared_ptr< MyConnection > new_connection( new MyConnection( GetHive() ) );
	this->Accept( new_connection );

	global_stream_lock.lock();
	std::cout << "[" << __FUNCTION__ << "] " << host << ":" << port << std::endl;
	global_stream_lock.unlock();

	return true;
}


Each time a connection is accepted, a new connection is posted so you can handle more than one connection. When you use this approach, you have to ensure you post the new connection for acceptance first, because if you accidentally skip that logic, no more connections will be accepted.

If instead you wanted to signal another thread based on an event, the logic in the main function would be used instead:

boost::shared_ptr< MyConnection > connection( new MyConnection( hive ) );
acceptor->Accept( connection );

Putting implementation specifics aside, the key thing to understand here is that for every connection you wish to accept, you have to post another accept for the next connection. The server is not in an error state; it's simply not accepting any more connections! This was by design for that simple example (as is pretty standard, see accept on MSDN for reference). In terms of other boost::asio examples that show this, the chat server example does a good job. The lines of interest are the start_accept() call in the constructor, the start_accept() function itself, and the start_accept() call at the end of handle_accept():
class chat_server
{
public:
  chat_server(boost::asio::io_service& io_service,
      const tcp::endpoint& endpoint)
    : io_service_(io_service),
      acceptor_(io_service, endpoint)
  {
    start_accept();
  }

  void start_accept()
  {
    chat_session_ptr new_session(new chat_session(io_service_, room_));
    acceptor_.async_accept(new_session->socket(),
        boost::bind(&chat_server::handle_accept, this, new_session,
          boost::asio::placeholders::error));
  }

  void handle_accept(chat_session_ptr session,
      const boost::system::error_code& error)
  {
    if (!error)
    {
      session->start();
    }

    start_accept(); // NOTE: Can be an issue if start() throws!
  }

private:
  boost::asio::io_service& io_service_;
  tcp::acceptor acceptor_;
  chat_room room_;
};


From that code, an accept is first posted upon construction, then after each subsequent incoming connection is accepted, a new accept is posted. The danger of posting the accept at the end of the handler is what I was mentioning before.

They're all kind of tied together into a big mess of classes and methods that are interdependent.


It's only a few hundred lines of code! If you think that's a big mess, then heaven forbid you actually look through the boost::asio library code! ;) Most of the mess you see is related to the look and feel of boost using the C++ language and not the logic at hand. The interdependency of the code is by design and pretty much unavoidable since it wraps up the boost::asio library. For more information on this, check out the following page: The Proactor Design Pattern: Concurrency Without Threads.

The code is organized as follows:

Hive - Wraps up the boost::asio io_service object and the work object into one discrete object you can control the 'master' system from. Any objects that are constructed using the Hive object can be serviced through that Hive object.

Acceptor - Wraps up a boost::asio acceptor object to allow you to accept incoming connections. Note: An acceptor is a general purpose name; do not think of it as a "server", as the concept of a server is far more specific in functionality while an Acceptor simply accepts/maintains new connections. You can, however, turn an Acceptor into a server using the wrapper, since that's the way the code is designed, to make things easier to work with.

Connection - Wraps up a boost::asio socket object that represents a connection to a remote host or an incoming connection.

To implement your own custom logic upon events and context specific data, you derive your own classes as the examples do and then get to work. While it might seem trivial, if you look through all my examples before the wrapper and imagine doing that for every project, hopefully you can see how tiresome and messy it'd be.

The "purpose" of the wrapper is to show a practical OOP example of what you might want to do with the core functionality the boost::asio library gives you. Most of the time, you will duplicate network code project after project and after a while, you will want to write a wrapper to avoid it. The network.cpp/.h was a simplification of my wrapper. While the namespace expansion is quite annoying, I can't say I'd want to go back and change anything about it conceptually. You won't be able to implement the generality or flexibility the wrapper provides in significantly less code. You could merge the OnAccept/OnConnect and pass an enum to save some space, but not much. The implementation specifics of the atomic_cas32 is really iffy and would be the only thing I'd consider looking into updating, but simply using a synchronized lock would be a lot of extra overhead, but might be needed on some platforms.

The intent of the design is also very specific to a recurring problem I had come across with network code. That is, the separation of "client" and "server" objects made for far messier code and actual communication between different objects greatly increased code complexity. With this proactor design that boost::asio uses, certain tasks are made a lot easier since communication between "client" and "server" objects is seamless. The biggest example is a proxy that requires accepting incoming connections and then connecting to new remote destinations.

My understanding is that I want multiple sockets so that I can use them to determine who gets sent what from the server and I can tell which clients are sending me what. Is that accurate?


Once you add in the code to accept more connections, your network events will then execute in the context of the connection that receives them (each context has its own strand so in the context of the connection, everything is thread safe internally). One of the biggest networking challenges comes about when you need to multiplex these events into a "simulation" of some sorts. For example, if you were writing a web server, each connection does not need to know of the others, so the code as-is would just have the http processing added to work with. There is very little global state to worry about, so you don't have much to do.

If you were instead writing a game server, then you would have to come up with a way to pass the objects created from packets to your "simulator" in a fashion that lets you associate each object with the connection it came from and then easily send objects back. This is a lot more complex and is not easily shown in simple examples. One such "easy" way would be to give each connection a GUID, keep a mapping of GUIDs to Connection objects, then lock a global event queue and post events to that queue for a main thread to process. That in itself is another discussion though and outside the scope of the guide.
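As a very rough sketch of that idea (the type and function names here are assumptions, not part of the wrapper):

#include <boost/shared_ptr.hpp>
#include <boost/thread/mutex.hpp>
#include <stdint.h>
#include <deque>
#include <map>
#include <vector>

typedef boost::shared_ptr< class Connection > ConnectionPtr;

struct GameEvent
{
    uint64_t connection_guid;          // which connection produced the event
    std::vector< uint8_t > payload;    // deserialized later by the simulator
};

class EventHub
{
public:
    void Register( uint64_t guid, const ConnectionPtr & connection )
    {
        boost::mutex::scoped_lock lock( mutex_ );
        connections_[ guid ] = connection;
    }

    // Called from network threads: queue an event for the main thread.
    void Post( const GameEvent & evt )
    {
        boost::mutex::scoped_lock lock( mutex_ );
        events_.push_back( evt );
    }

    // Called from the main/simulation thread: drain all pending events at once.
    void Drain( std::deque< GameEvent > & out )
    {
        boost::mutex::scoped_lock lock( mutex_ );
        out.swap( events_ );
        events_.clear();
    }

private:
    boost::mutex mutex_;
    std::map< uint64_t, ConnectionPtr > connections_;
    std::deque< GameEvent > events_;
};

The simulator drains the queue once per tick and uses the GUID map to look up the right connection when it wants to send something back.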

If you are going to use the network wrapper for any testing and stuff, line 122 of network.cpp should read: "connection->StartError( error );" and not "StartError( error );". It's a minor error that should rarely trigger, but if it does, it might mess something up that shouldn't be. The wrapper code is simply the lowest level code your networking logic uses for getting the raw data and connection management. For any real project, you would have to add additional layers on top to handle your specific network protocol, message serialization/deserialization, and everything on up in terms of program logic.

Lastly, the most important thing to understand about that code are the sacrifices you make if you use it. Performance is a key issue, as the overhead of the code has to be measured and taken into consideration. For example, boost::bind does carry quite an overhead, but the trade-off is the unique functionality it provides and the simplification of a lot of things that otherwise would not be possible as easily. Relying on strand for synchronization makes life a lot easier as well, at the expense of the overhead. boost::asio also has specific allocation strategies (official example) you must be aware of if you plan on using it in production code. The use of vectors and lists also carries quite an overhead, but all these issues are only issues if you measure performance and determine it is not suitable for your code. When trying to write generic, simple wrapper code, certain sacrifices do have to be made. Once you know exactly what needs your project requires networking wise, you can write your own code, take what you need, and get rid of the rest.

If you are new to networking, there's a lot of concepts and specific implementation strategies to wrap your head around. Boost::asio and C++ are by no means "simple", so you should discuss what you are looking for and why you think boost::asio can help you on your project so you can be sure you are on the right path. I'm not trying to say anything against boost::asio, but everyone's situation is different so if you are here on the forums, it'll be a good idea to make use of all the resources here. Hopefully that clears up some of your questions!


#4808908 Which engine should we use for our browser-based MMORPG?

Posted by Drew_Benton on 10 May 2011 - 04:54 AM

For the specific approach you and your community are taking for this project, there are a lot of prerequisites you need to complete first before being able to choose the best tool for the job.

First, you need to establish some specifics of your game (which should already be completely designed and documented, all of it; if it is not, then you are not ready for this stage).

Q. Describe how your main game world works.
- Is your main game world uniquely instanced? I.e., do players join a lobby first and get their own game world when they start the game?
-- If your main game world is not uniquely instanced, will you support channels? I.e., will players be distributed across a limited number of instances of the main world?
--- If your main game world will not support channels, how many concurrent players will you aim to support? I.e., having to support 1000 people in the same zone is significantly more challenging than 100, which is still significantly more challenging than 10. A Massively MORPG has to be able to scale depending on your requirements.

Q. Describe the simple game play mechanics and pace.
- Is the game play fast or slow paced? I.e., a turn based MMORPG will require significantly less data transfer and lower update rates than a real time action based one.
- How is player movement setup? I.e. Click to move vs. WASD (style, not specific keys) are two very different implementations that will greatly influence your implementation.
- How detailed are interactions between entities? I.e. if you need a lot of physics to determine collisions, your development path will be a lot different than if you just use simple ray/body collisions or overlapping object contours.

Q. Describe the persistence of your world.
- Is game state saved and loaded from the last point or does it reset? I.e. if a player leaves town and logs out, where will they be when they login again? (This is not a trick question!)
-- Continuing on that same point, the same goes for enemy entities. Do they need to be exactly where they are left or does it not matter?
- Are dropped items a part of the world or simply destroyed? This is very important to consider when you work with games that have higher latency.

There are many more, but those are 3 I can think of off the top of my head that are by far the most important to start out with.

Second, you need to establish some specifics of the interactions required for external access to your game's data.

Q. Will you have a "cash shop"?
- If you do, this has to be taken into consideration so the process of modifying your game's data externally is possible.
-- If you don't want to automate it, it's important to come up with a system to make it as safe and efficient as possible, as it is more human error prone and easily abused.

Q. Will you expose game data?
- Will you setup a site to support searching for players, tracking various stats, and so on?
- There's a lot more, but the main point revolves around either needing this or not.

Q. Will you have GMs?
- I use "GMs" as a simplification to you having people who manage your world in real time in game.
- What functionality do they require? I.e. How much power you give them over your game world might influence certain development decisions.

As with the previous section, there are many more, but those are the 3 minimal aspects that should be considered.

Finally, what is your project's budget and time frame? Based on this, you will come up with cost estimates for using each potential solution, from having to buy add-on libraries for an engine, to integrating a 3rd party payment platform into your game, as well as any costs related to getting your assets (whether they are free or not) into a suitable format for the game and optimized for distribution. This is just the upfront stuff; you will also have to consider the costs of maintenance once the project is done and deployed.

Once you have very specific, detailed answers for these things at a minimum, you can then begin to look at the different engines and technologies you mentioned to find the one you think is best for your project. Getting programmer input as Satharis mentioned is pretty important too nowadays. You want to be on the same page as your programmers so you get exactly what you want from them and make their lives easier.

I don't work with web based MMORPGs, so I don't have any specific advice for you. Good luck!


#4807165 Theory - Handling TCP data that is split on recv calls

Posted by Drew_Benton on 05 May 2011 - 05:41 PM

So, nType will identify to the app what sort of data to expect and what to do with it.

Does this sound feasable?


Yes, that's how it's usually done in practice for binary protocols. My advice is to keep it simple, and don't try to be clever with it.

I've seen some variations, such as writing a size as a byte, then data, and then a continued size byte for more data and so on (the last size byte is 0 to signal no more data), but I've never seen the point of those implementations. In the end, they use 2 bytes the majority of the time and are not saving anything.

I prefer having the size there to give more flexibility to the protocol rather than determining the size based on the ID alone. The reason being, fixed size packets usually mean fixed size strings, and those can really take a toll on a system. I've seen games use 512 byte fixed size strings for every chat packet, which is pretty painful to think about...
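For reference, a minimal sketch of that kind of framing (the 2-byte field widths are just an assumption here, and byte order handling is omitted):

#include <stdint.h>
#include <cstring>
#include <vector>

#pragma pack(push, 1)
struct PacketHeader
{
    uint16_t size;   // number of payload bytes that follow the header
    uint16_t type;   // tells the application how to interpret the payload
};
#pragma pack(pop)

// Frame a payload for sending; the receiver reads the header first, then
// waits until 'size' more bytes have been buffered before dispatching.
std::vector< uint8_t > BuildPacket( uint16_t type, const std::vector< uint8_t > & payload )
{
    PacketHeader header;
    header.size = static_cast< uint16_t >( payload.size() );
    header.type = type;

    std::vector< uint8_t > packet( sizeof( header ) + payload.size() );
    std::memcpy( &packet[ 0 ], &header, sizeof( header ) );
    if( !payload.empty() )
        std::memcpy( &packet[ sizeof( header ) ], &payload[ 0 ], payload.size() );
    return packet;
}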

I am also going to look at IOCP as suggested earlier (or atleast a multi-thread approach), as I foresee problems with waiting around for data too long - like application litter, if the data gets caught up somewhere.


Any problems you would have with IOCP in regards to that would also happen in any other network implementation. This issue belongs to the application protocol processing layer, not the underlying network layer. If you say you only care that the client is still sending data, then a client could send your ping packet over and over while not actually doing anything. Of course, you wouldn't know that if you only handle it at the network layer.

If instead you handle it at the application protocol processing level, you can time between different logical packets. Only then can you know that client A has not sent a login packet in 1 minute, so you should disconnect them. Or that a client has not finished sending the packets required to complete an in-game transaction within 10 seconds, so you should start a rollback of actions and disconnect them. So on and so forth; you have to be aware of such things if you want to avoid timing exploits in your system. It's surprising how many systems are actually vulnerable to such things. I've come across a lot of flaws in games that were directly related to this issue.


#4807157 I need some direction for client server

Posted by Drew_Benton on 05 May 2011 - 05:18 PM

I have heard bad things about the each client on a thread idea. Something about not a good idea to create threads at runtime, and a limit on the ammt of threads. So Im not really sure I want to follow that line of logic... or have I been mislead?


I kind of mentioned that in my post, but I'll restate it. Normally, in most languages (where 'threads' are operating system threads), it's not a good idea if you are aiming to have a lot of concurrent connections and there is a lot of shared state. The cost of always creating more threads and then the system resources required to maintain so many threads (context switches, lock contention) eventually overwhelms the system. Actually getting to that point is just dependent on how much processing is going on in each thread and how many synchronization objects there are. A high end machine can certainly handle a lot before getting to that point, but it is not really a viable solution in the long run.

The thing to understand with C# (really .Net) is that without manually setting the ThreadPool size, you can actually degrade your async system into a "thread per client" design if you do not take care. For example, let's say you do all your packet processing in the callback (not recommended). The callback is invoked from the ThreadPool, so you tie up a worker thread. As you get more concurrent requests from more connections, more threads are needed in the thread pool, so the system creates more threads for you based on its logic for growing the ThreadPool. This is why it is not recommended to use the ThreadPool for such long blocking tasks; it is designed to accommodate more frequent short lived tasks (that run in the background with a set priority). See the previously linked MSDN article for example.

Ideally, you want to minimize the time spent in the callback so this does not happen. Unfortunately, actually doing so can result in a more complicated design, depending on what trade-offs you are willing to accept. As a result, and this is only because the way C# works, implementing a thread per client (say like for the TCPClient network IO), turns out to be the worst case you have to already account for when using the async methods with shared state.

The only difference is (and I'm so over simplifying this), with the async functions, the actual network IO takes place in a completion thread (and operates more efficiently), whereas your own thread per client design takes place in a user thread (which would be akin to a worker thread). Because of this, if you start out with something simple and understand how it works and where the flaws are when it comes to scaling, you can immediately apply that knowledge to the next best system, some form of async, and not fall victim to the problems you otherwise would have. I've seen a lot of people have such problems because they've yet to learn the implications that what you do in the callback function has on the rest of the system.

So my point at least, and others might disagree, is rather than starting with the most complex, efficient method, which has a lot of problems to solve that you will not yet know of, start with something simple and well known where most problems are obvious (I don't like the word obvious, but computer science common sense, such as the basics of threading and synchronization). That way, you can directly address the known problems so the unknown problems can be encountered and handled appropriately more easily. Once you take care of that and move to a more advanced system, you already know a lot more and are better equipped to handle the problems that come with that system as well, so anything you might not have encountered before is only part of a smaller subset of problems than you would otherwise have had.

The concept of "a thread per client" is bad is only because of the nature of operating system threads and synchronization overhead. If threads were free and synchronization was over-headless, then there would be nothing wrong with the model, theoretically. So as you go to languages that handle these aspects differently, you have to keep that in mind.

As I said before, my advice comes just from the way C# works. I'd never suggest this for something like C/C++, because in practice, you would just be wasting your time. With C#, you are not. When using any programming language, you have to understand traditional programming problems in the context of the language. In C#, a lot more rides on the way the language works in regards to internals, such as memory management, GC, threading, and so on, than in a language like C/C++. In C/C++, it's more operating system / hardware constraint driven. In C#, you still have to worry about such things, but the perspective is different. You have to worry about how .Net handles them and code accordingly.

When you use a productivity language like C# and the .Net framework, I feel you should make use of its productivity features for fast prototyping if you are just getting started (and are not developing software in the traditional sense). Worry about performance and optimizations afterwards, once you get a good idea of what you are going to do and how you can utilize different classes in the .Net framework to accomplish your task. All of this is said in my opinion. :)


#4806807 I need some direction for client server

Posted by Drew_Benton on 05 May 2011 - 03:01 AM

I'm loving C# and .Net 4.0 more and more nowadays for the reasons hplus mentioned. There are quite a lot of different approaches you can take using the .Net libraries. You can look into those methods he already mentioned as they are your basic efficient methods most people end up using.

For your little project, and for the sake of getting something up and running to get your head wrapped around, I'd like to direct you to two cool little APIs.

TcpClient / TCPListener - These are wrapper APIs that sacrifice a lot of the customization and flexibility you have with Socket, but as you can see in the example code, there's not much code to deal with (compare that to something like this for example.). This is most useful if you want to only service a few connections or so, and you'd use the thread per client model. While that method does not scale once you need to support more players, it really sounds like you just need to get something started to get the ball rolling. In a language like C#, adhere to the Managed Thread Pool rules and make your own threads to handle each TCPClient that connects rather than post to the worker thread queue.

TCPChannel - This API is basically an RPC wrapper. Rather than worry about the specific network protocol, you simply send and receive objects in a layer above Microsoft's TCP stream protocol. The MSDN reference does not really show how easy it is to use it, so take a look at this example to get started. NOTE: There are flaws and shortcomings in that example, but it should give you the gist of how to get started so you can go back to MSDN and implement it better. Using this approach would be highly experimental and only practical really if your game client was also written in C# or used a C# DLL to handle the networking. This article goes over the basics of it, but that article is for older versions, so you would have to consult MSDN to implement it the correct way for 4.0. This method might not be practical for many uses, but it's only something for you to consider to get something up and running faster.

Neither of these classes will really solve any of your real problems though. They just allow for easier and faster prototyping of networked systems. You will eventually have to learn how to do it the most traditional way, since most people usually hit a feature or performance wall with the aforementioned APIs. If you can make them work for what you need, then great! But if not, they are still good to understand because they are great for making tools to test your networked code with. Ultimately with .Net, you want to be able to say: I [ can / cannot ] use [ class A / class B / .. ] for [ Task a / Task b / ... ] because [ Reason 1 / Reason 2 / ... ]. So it only helps improve your knowledge of what tools you have available if you give them a try. You never know when you might be able to make use of it!

The biggest challenge you will run into will be multi-thread programming in C#. Since there are worker thread pools and a lot more threads than in your traditional C/C++ program, you have to be aware of which data might be used from different threads concurrently and which data is dependent on other data that might need locking. For example, if you make a global List of players and lock the list for any Add/Remove/Enumerate operations, that protects the list, but not the objects inside. As a result, if you were to lock each Player before using it, which you typically would, you can introduce a deadlock or race condition with other logic that makes use of the Player object at the same time, which all too often can happen since there are so many threads in the system.

If you are rusty on threaded programming or would not consider yourself well versed in all the aspects of thread programming in C#, it would be wise to simply make a simple Select based server, not use the thread pool for anything, and just handle all your logic in the main thread. This would be your "continuous old loop" as you so described. For a project like yours, the tradeoffs are well justified since you will be able to make things work first and then worry about converting to a more efficient, scalable solution.

Truth be told, the actual method you use won't matter for a while. You will still have the same "networked game programming" problems to solve with all of them. Just because your network code can scale to tens of thousands of connections doesn't mean your game design and implementation can. ;)

That is why I'd suggest just getting something up and running to get familiar with the problems you will face so you can work on those rather than worry about how efficient or scalable your core is. You will be rewriting code no matter what. This advice is only being given as-is because you are using C#. The time spent and experience gained doing things the 'less than optimal way' are significantly more beneficial here than if you were to do the same in something like C++. I.e., a DIY thread per client is never really recommended generally, but the implementation logic required is the exact same as using the BeginXX functions. The only difference is the system manages the threads and you do not. If you understand how to make a thread per client work yourself, i.e., the multi-threading and locking parts, you should be able to easily adapt it to the async system. All of that is assuming you have shared state in the system that requires multi-threaded management. If you don't, then the statement won't hold true.

For everything else relating to server design of the topic you mentioned, literally just read the entire forum. Take a few days or weeks if you have the patience and read through all the threads with keywords relating to your game genre. Specifically, read through all of hplus's posts; there's an invaluable trove of knowledge just there for the taking! A lot of content seems not to be indexed from the old site, so check those forums too. The more time you are willing to put upfront for research now, the better off you will be in the future in regards to designing your system in a way that has a very high likelihood of being successful. Good luck!


#4797177 WinSock, identifying a packet

Posted by Drew_Benton on 11 April 2011 - 10:35 AM

All right. Please look over this code and tell me what you think:


So now, someone sends a 1400 byte packet. You have a 1400 byte buffer that you set the 1401st byte to 0, corrupting whatever memory came after the buffer.

What about when a string is not the last member in your structure? The code does you no good there either.

Since you are using C++, consider using one of the patterns mentioned in this thread. Either set up packet reader/writer classes and manually build/parse fields, or make use of the visitor pattern so you do not have to mess with this low level casting stuff that you will probably eventually have to change.

For the packet handler, consider using a map of UINT to function pointers/boost::functions/functors/etc... if you do not want one large switch statement. In doing so, you can keep the logic of the packet handlers separate and more easily maintainable than having to work with all of them at once.
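A small sketch of that dispatch map idea (all names here are assumptions):

#include <boost/function.hpp>
#include <stdint.h>
#include <map>
#include <vector>

typedef boost::function< void ( const std::vector< uint8_t > & ) > PacketHandler;

class PacketDispatcher
{
public:
    void Register( uint32_t opcode, const PacketHandler & handler )
    {
        handlers_[ opcode ] = handler;
    }

    bool Dispatch( uint32_t opcode, const std::vector< uint8_t > & payload ) const
    {
        std::map< uint32_t, PacketHandler >::const_iterator itr = handlers_.find( opcode );
        if( itr == handlers_.end() )
            return false;    // unknown packet; the caller decides what to do
        itr->second( payload );
        return true;
    }

private:
    std::map< uint32_t, PacketHandler > handlers_;
};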

Also, since you are using UDP, how are you going to take into account duplicated, lost, or packets corrupted on an application level? If you don't, I can't imagine players being happy when their chat messages never go through or they don't get info packets telling them the position of the epic loot they just dropped (just examples, but you should get the point.)

Consider making use of an existing UDP library that handles this stuff for you if you are trying to add networking to a project. Otherwise, you will need to look into adding your own protocol on top of the UDP layer, before your application logic is added, to handle packet ordering and simple reliability as needed. It'd be really hard to imagine you do not need any of those things in your application; it's like playing Russian roulette with network data!


#4793029 Game Login throught a forum

Posted by Drew_Benton on 01 April 2011 - 07:13 AM

In general would this be a safe system?


As long as you implement it properly, yes, it should be pretty safe.

Square Enix uses such a system to handle their logins for Final Fantasy 14. There are many other games that do similar as well, but that's just one example I remember offhand.

However, additional security measures are always needed to help protect users' accounts against "unauthorized access" arising from their own faults and not from your system. The idea nowadays is, even if someone has their account name and password compromised, the account itself should not be so easily compromised, since additional validation checks would be required to unlock it. Blizzard, for example, uses access time pattern heuristics and checks computer specs and IPs. Other games require a PIN to access specific characters once you log in.

So there's a lot you can do but having a secure login process is only the beginning of such a system.


#4791081 Advice on C++ Garbage Collector

Posted by Drew_Benton on 27 March 2011 - 03:00 PM

Based on the API usage in your example, it looks more like you have a custom memory manager rather than a garbage collection system.

I say that because the whole point of GC is automatic memory management and allowing users to manually manage memory like you do would break most designs. The API is not that transparent at all either. Compare the usage of yours to something like the Boehm garbage collector. I bring that up because if you already have a decent sized project, trying to integrate in your GC would pretty much require interface rewrites all across the board.

It looks like you have support for multiple GC objects. How would that actually work? What if one object was stored by an object in another? From what I've seen when trying to interop two languages with different GC systems, all sorts of headaches can occur if manual steps are not taken to avoid one gc'ing an object while it's still in use.

So I guess it's a start, but the largest looming question is why are you writing a GC for C++? Outside of it being for fun or learning purposes, have you come across an instance where writing custom memory management functions simply wasn't enough? What about trying a smart pointer library (boost is worth checking out as well)?

It's just hard to give rah-rah support to such a task given that the whole point of using C/C++ nowadays is that you have the power to manage memory exactly how you want. If the memory management aspect isn't that important to you, other languages should be considered for the task as there might be better options.

I'm not trying to rain on your parade, but I'm just sayin' :)


#4790644 Packet data types

Posted by Drew_Benton on 26 March 2011 - 06:10 AM

This is a bit over my head but I do not have a problem reading about it and trying to learn this. However what's the benefits of me doing it this way?


Just to add some of my own commentary in addition to the points hplus already mentioned:

To put it simply, rather than having to write 4 pieces of logic: stream writer, stream reader, object writer, object reader, you will only have to write 3: stream writer, stream reader, object visitor. It might not seem like much at first, but as your project gets larger, the difference between the two methods' amount of code is huge. In addition, when you use the object writer and object reader approach, you have two pieces of logic you have to maintain and keep track of for maintenance, whereas the object visitor is only one piece of logic.

By going the object visitor route, you further decrease development time when you have base types that can be reused. For example, let's say you have 5 different messages that all contain the same sequence of data, like entity id, X, Y, Z. In the object reader/writer approach, you will simply have logic to read/write each of those fields individually in all of your functions. In the object visitor approach, if you combined those 4 fields into a base type, you would only have to write the visit function logic once for that type, then reap the benefits of being able to reuse it for any message that uses it. So rather than having 8 lines of actual reading/writing code total, you only have 1. That is because the visitor pattern handles both reading and writing!

Here is a real world example. Consider the following structure that contains data about a security protocol:
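The original structure isn't reproduced here; the hypothetical stand-in below (all field names made up) is used for the sketches that follow:

#include <stdint.h>
#include <string>

struct SecuritySetup
{
    uint8_t     flags;
    uint32_t    seed;
    uint64_t    exchange_key;
    std::string password;    // variable length field
};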


Using the object writer/reader approach, we would have the following two functions:
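As a stand-in sketch (not the original code), here are the two functions for the hypothetical structure above:

void WriteSecuritySetup( StreamWriter & writer, const SecuritySetup & obj )
{
    writer.WriteUInt8( obj.flags );
    writer.WriteUInt32( obj.seed );
    writer.WriteUInt64( obj.exchange_key );
    writer.WriteString( obj.password );
}

void ReadSecuritySetup( StreamReader & reader, SecuritySetup & obj )
{
    obj.flags        = reader.ReadUInt8();
    obj.seed         = reader.ReadUInt32();
    obj.exchange_key = reader.ReadUInt64();
    obj.password     = reader.ReadString();
}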


Where the WriteXXX / ReadXXX functions are coded as part of the StreamWriter / StreamReader class.

Now, for the object visitor pattern, we only have one function:
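A stand-in sketch of that single function, templated on the stream type so the same code serves both directions:

template < typename Stream >
void visit( Stream & stream, SecuritySetup & obj )
{
    stream.visit( obj.flags );
    stream.visit( obj.seed );
    stream.visit( obj.exchange_key );
    stream.visit( obj.password );
}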


We still have both StreamReader and StreamWriter classes, but rather than them defining Read/Write named functions, they all use the same "visit" function with different logic depending on the object.

So we are taking advantage of the way C++ works to drastically cut down on the work and code needed to implement object serialization. The more types you have, the more visit functions you do have to write, but you only have to write them once, so you can easily reuse them in the future. As mentioned before, as your project grows, the object visitor pattern pays for itself.

The object reader/writer code shown is typically how you see people do it. I myself used that style for years because I was unaware of the visitor pattern. Now that I understand it better, I can see how beneficial it is and how there really is no reason to use the object reader/writer method because everything you can accomplish there, you can accomplish with the visitor pattern; you just might need to add a state object to know some extra information.

Here are some simple examples of more complete visitor stream classes, along the lines of what hplus showed in his earlier post:
SerializeStream
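(A stand-in sketch, not the code from the original post; the 16-bit string length prefix and raw memory copies are assumptions.)

#include <stdint.h>
#include <string>
#include <vector>

class SerializeStream
{
public:
    // Primitive types: copy the raw bytes into the buffer.
    template < typename T >
    void visit( const T & value )
    {
        const uint8_t * ptr = reinterpret_cast< const uint8_t * >( &value );
        buffer_.insert( buffer_.end(), ptr, ptr + sizeof( T ) );
    }

    // Strings: variable length, prefixed with a 16-bit count.
    void visit( const std::string & value )
    {
        uint16_t length = static_cast< uint16_t >( value.size() );
        visit( length );
        buffer_.insert( buffer_.end(), value.begin(), value.end() );
    }

    const std::vector< uint8_t > & data() const { return buffer_; }

private:
    std::vector< uint8_t > buffer_;
};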


DeserializeStream
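(The counterpart sketch, under the same assumptions.)

#include <stdint.h>
#include <cstddef>
#include <cstring>
#include <stdexcept>
#include <string>
#include <vector>

class DeserializeStream
{
public:
    DeserializeStream( const uint8_t * data, size_t size )
        : data_( data ), size_( size ), offset_( 0 ) { }

    // Primitive types: copy the raw bytes back out of the buffer.
    template < typename T >
    void visit( T & value )
    {
        Require( sizeof( T ) );
        std::memcpy( &value, data_ + offset_, sizeof( T ) );
        offset_ += sizeof( T );
    }

    // Strings: a 16-bit count followed by that many bytes.
    void visit( std::string & value )
    {
        uint16_t length = 0;
        visit( length );
        Require( length );
        value.assign( reinterpret_cast< const char * >( data_ + offset_ ), length );
        offset_ += length;
    }

private:
    void Require( size_t bytes ) const
    {
        if( offset_ + bytes > size_ )
            throw std::runtime_error( "DeserializeStream: truncated buffer" );
    }

    const uint8_t * data_;
    size_t size_;
    size_t offset_;
};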


They are pretty basic classes. More functions for vectors, lists, maps, etc... could be added as needed. Also, the way you work with strings might vary. Some protocols use fixed size strings, some use a variable length variable size type similar to the one shown, and others just use a variable length fixed size type (as shown in hplus's post with a 1 byte length limitation). I think the float/double logic is correct, but I might be wrong. Another thing to be careful of is ensuring you are using portable types (I'm purposely not here, for the sake of a simple test). There are a few gotchas you have to be careful of if you are going cross-platform or across 32/64-bit architectures. The most annoying one is the difference between the wchar_t size on gcc on linux (4 bytes usually) and the size on windows (2 bytes usually). If you tried to send a string from one platform to the other without keeping this in mind, you can be in for some real headaches! (E.g., Windows Client <-> Linux Proxy Server <-> Windows Server.)

Anyway, hopefully that adds to the useful information in this thread. As a disclaimer, all code was written during the course of reading this thread, so it might have bugs; do not use it without understanding what it does first. Good luck!


#4790595 libcliser: C++ library for creating multithreaded TCP servers

Posted by Drew_Benton on 25 March 2011 - 11:48 PM

igagis is the author of ting ;)


#4790296 Any good network simulators

Posted by Drew_Benton on 25 March 2011 - 03:22 AM

I cant believe that something as simple as a simulator is so hard to do


It's not that hard actually.

All you have to do is write your own proxy that implements a man-in-the-middle attack to route traffic through. That gives you flexibility to:
- delay packets as needed
- drop connections
- modify/corrupt/inject data

The only "hooking" you need to do will be generic detours to make the application connect to your proxy rather than the remote host. That is assuming you cannot change it via a hosts edit or an application specific option.

If you are working with your own application, then you simply implement your protocol so you can process packets from the data stream, then relay them to the remote host. If you are working with 3rd party applications, then you will need to know their protocol first to be able to work with their packets.

If that is not possible, then you will only be able to simulate dropped connections and delay the entire network stream, without being able to process individual packets. As long as the application you are working with does not implement specific anti-man-in-the-middle logic, such as the things mentioned in the "defenses against the attack" section on Wikipedia, and does not employ anti-tampering mechanisms like GameGuard, XTrap, Themida packing, etc., this method works very nicely.

Another potential pitfall is if you are working with an application (such as a game) that performs a connection hand-off and you don't know the packet protocol; then you will have to do some messy hacks to be able to keep working with the traffic stream at each hand-off. It's certainly possible and works, but it is nowhere near as elegant as having full control over the protocol.

Once you have your basic proxy done, you can just add a GUI (or console, if you'd rather work that way) to let you do the things you need to simulate the network. It's all a matter of logic from there. For example, adding latency is as simple as buffering all packets into a queue rather than dispatching them immediately to the other side, then checking a timestamp to know when to send each one. Connection dropping can be as simple as forcefully closing the socket, but you will not be able to simulate certain types of connection loss without other tools. For example, to simulate a host that simply loses power, you pretty much need to cause that yourself on a different machine. That is why running a virtualized instance, such as VirtualBox, is great, as hplus mentioned. Likewise for application crashes or network failures.
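The latency part really does boil down to a queue plus a timestamp check. A minimal sketch, where Packet and the send callback are placeholders for whatever your proxy already uses:

#include <chrono>
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Packet { std::vector<uint8_t> bytes; };   // placeholder packet type

struct DelayedPacket
{
    Packet packet;
    Clock::time_point releaseTime;   // earliest time the packet may be forwarded
};

class LatencyQueue
{
public:
    explicit LatencyQueue(std::chrono::milliseconds delay) : delay_(delay) { }

    // Called when the proxy receives a packet, instead of forwarding it immediately.
    void Enqueue(Packet packet)
    {
        queue_.push(DelayedPacket{ std::move(packet), Clock::now() + delay_ });
    }

    // Called from the proxy's update loop; forwards anything whose time is up.
    template <typename SendFn>
    void Pump(SendFn sendToOtherSide)
    {
        while (!queue_.empty() && queue_.front().releaseTime <= Clock::now())
        {
            sendToOtherSide(queue_.front().packet);
            queue_.pop();
        }
    }

private:
    std::chrono::milliseconds delay_;
    std::queue<DelayedPacket> queue_;
};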

It might seem like a lot of work at first glance, but these are tools you should already have at your disposal when you are working on a network-related project. Existing tools are certainly useful as well, but you can really get the most out of custom tools that cater exactly to your needs. Basically, it's a large one-time, up-front investment that pays for itself over time as you develop more and more stuff you can reuse the tools on. I've had my own generic tools in the past that proved quite useful in this regard. Don't expect to find the perfect solution right off the bat. I've played with different designs for this stuff for many years and am still trying new approaches and methods to find the perfect solution for my needs. Getting something that "just works" is simple enough, though, and the most practical approach to take in your situation.

That should get you pointed in the right direction, I hope. Just code a simple traffic relay proxy in whatever language you are using first. Once that is done, you can add your protocol-specific logic to break the data stream into packets. Finally, you can add your custom logic to do the tasks you want. You have it pretty easy if all you are working with is your own protocols. You do have to be careful with some 3rd party TCP protocols, since they might not be implemented correctly. TCP has been around a very long time, but I still come across games (f2p mmos) that incorrectly treat TCP as a packet-based protocol, and they are very annoying to work with since everything breaks when you do not handle the stream the same way the client does (Nagle algorithm, send/recv buffer sizes, etc.). Hopefully your own protocol does not make those mistakes. :)

If you need any more ideas on architectures or anything else for such a program, feel free to ask. I've been working with this stuff for the past 4 years now, trying out different methods, languages, and libraries to try to come up with the 'ultimate' tool, so I've learned a lot along the way. Right now my focus has been C# (.NET 4.0) + IronPython + Construct, and I am extremely pleased with the results so far in my early progress with it. While I can't recommend such experimental ideas when you have real work to get done, if you are using C++ I'd strongly advise you to check out boost::asio as your networking core.

The design model it uses allows you to do a lot of useful things in very few lines of code. Once you get used to the concepts and the large namespace, and write your own wrapper (I talked about mine here), productivity is greatly increased, since you spend less time worrying about the things boost::asio does for you that you would otherwise have to handle yourself if you rolled your own networking with native Winsock. I spent many years trying to do it myself, and after wasting so much time and ending up with such limited code, I finally heeded the advice of people who knew a lot more than me and learned an existing library that took care of everything. I don't regret it one bit and make heavy use of it for my C++ stuff. For example, using my network wrapper, my own basic traffic relay proxy code would look like this:
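Something in this spirit, sketched directly against boost::asio rather than the wrapper; the listen port and remote endpoint are placeholders, and a real tool would use the asynchronous API instead of one blocking thread per direction:

#include <boost/asio.hpp>
#include <array>
#include <cstddef>
#include <thread>

namespace asio = boost::asio;
using asio::ip::tcp;

// Pumps bytes from one socket to the other until that direction closes.
void Relay(tcp::socket & from, tcp::socket & to)
{
    std::array<char, 4096> buffer;
    boost::system::error_code ec;
    for (;;)
    {
        std::size_t n = from.read_some(asio::buffer(buffer), ec);
        if (ec)
            break;  // connection closed or errored
        // A real proxy would rebuild packets from the stream here and apply
        // its delay / drop / modify logic before forwarding.
        asio::write(to, asio::buffer(buffer.data(), n), ec);
        if (ec)
            break;
    }
    boost::system::error_code ignored;
    to.shutdown(tcp::socket::shutdown_send, ignored);  // signal EOF onward
}

int main()
{
    asio::io_context io;

    // Accept the local client (port is a placeholder).
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));
    tcp::socket client(io);
    acceptor.accept(client);

    // Connect to the real server (host/port are placeholders).
    tcp::resolver resolver(io);
    tcp::socket server(io);
    asio::connect(server, resolver.resolve("remote.host.example", "12345"));

    // One blocking relay per direction.
    std::thread upstream([&] { Relay(client, server); });
    std::thread downstream([&] { Relay(server, client); });
    upstream.join();
    downstream.join();
}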

And it's ready to have the protocol processing object dropped in, then the custom logic added. So all my work is focused on everything but the underlying network code I'd otherwise have to write if I didn't already have my own wrapper written and wasn't using an existing network library. The advantages of using boost vs. rolling my own in this case are that I can easily add multithreading support, it's cross-platform, and it's a proven library that can be used for larger scale projects (although I can't recommend using my wrapper, since there are a few bugs and some specific tradeoffs with the style I used, mostly related to overhead and the lack of memory management).

As part of another recent enlightenment I had through reading these forums, I'm transitioning my tool development to higher level languages, because the solutions I wanted to express through C++ were simply taking too much time and effort and ended up so mediocre that I just got tired of it. I'll stop rambling, but keep in mind to find the best tools for the job rather than just forcing a solution through what you know best. You didn't mention what language(s) you are using or which network protocols, but TCP and C++ are what I've had the most experience with so far, so that's what I chose to talk about. ;)

Good luck!



