Thoughts on this direction of network model

Started by
5 comments, last by DividedByZero 12 years, 10 months ago
Hi Guys,

A couple of weeks ago I started making a network server and client class (in Winsock) to handle split data streams etc., trying to make the whole thing robust and easy to integrate into my applications.

So far, it is working well and I haven't had any problems / malfunctions or the likes. And from the host application side (once the class is included) there is minimal coding required to send or receive data. :cool:

So, I thought I would share my logic and see if you guys think there will be any major problems that I haven't accounted for.

So, this is the general overview of what happens;

Client side: (Three threads)

- Network class gets included into the project.
- The programmer creates data structures of any type or length (must be same on client & server though).
- Data structures are 'registered' with the class.
- Client connects to server.
- Instances of data get modified in main loop.
- Data type, size, and raw data are sent by the client to the server in a separate thread.
- Periodically, data type, size, and raw data are received back in a separate thread.

Server side: (Four threads)

- Network class gets included into the project.
- The programmer creates data structures of any type or length (must be same on client & server though).
- Data structures are 'registered' with the class.
- Accepts client(s) in a separate thread (#2)
- Data is received (blocking mode) in thread (#3)
- Data is sent in thread (#4)
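The "data type, size, and raw data" message described in both lists suggests a simple length-prefixed wire format. Here is a minimal sketch of what packing one such message might look like; `packMessage` and the exact header layout are my assumptions, not the actual class internals:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical framing helper: a frame is [uint32 type][uint32 size][raw bytes].
// The receiver can use the size field to know where one message ends and the
// next begins, even when TCP delivers them back-to-back or split apart.
std::vector<uint8_t> packMessage(uint32_t type, const void* data, uint32_t size)
{
    std::vector<uint8_t> buf(sizeof(type) + sizeof(size) + size);
    std::memcpy(buf.data(), &type, sizeof(type));
    std::memcpy(buf.data() + sizeof(type), &size, sizeof(size));
    std::memcpy(buf.data() + sizeof(type) + sizeof(size), data, size);
    return buf;
}
```

Note that raw-struct framing like this assumes both ends agree on struct layout, padding, and endianness, which is why the "must be same on client & server" rule matters.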

Here is an example of the possible code used in an entire client app:

#include <iostream>
#include <cstdlib>      // system()
#include <windows.h>    // Sleep()
#include "NetClient.h"
#include "UserData.h"

int main()
{
    // Set up initial structs
    PlayerPosition t0;
    t0.x = 255;
    t0.y = 254;
    t0.z = 253;

    PlayerStats t1;
    t1.nHealth = 100;

    // Set up networking instance
    CoreNetwork *network = new CoreNetwork();

    // Register data types (MUST BE SAME TYPES AND ORDER ON BOTH SERVER AND CLIENT)
    network->registerData(PLAYER_POSITION, &t0, sizeof(t0));
    network->registerData(PLAYER_STATUS, &t1, sizeof(t1));

    // Connect to server
    while (!network->connectStatus())
        network->connect("localhost", 4444, 10);    // Host, Port, Timeout (in secs)

    // Check for a successful connection
    if (network->error())
    {
        int nError = network->error();
        delete network;
        return nError;
    }

    while (network->connectStatus())    // loop while still connected, so cleanup below is reachable
    {
        t0.x = 0.01f;    // Simulate data changes
        t0.y = 0.02f;    // "
        t0.z = 0.03f;    // "

        // Send data stream #0
        if (!network->send(PLAYER_POSITION))
            std::cout << network->error();

        Sleep(200);    // Simulate a delay

        t0.x = 0.03f;    // Simulate data changes
        t0.y = 0.04f;    // "
        t0.z = 0.05f;    // "

        // Send data stream #1
        if (!network->send(PLAYER_POSITION))
            std::cout << network->error();

        Sleep(200);    // Simulate a delay
    }

    // End program and clean up
    system("PAUSE");
    delete network;
    return 0;
}


The UserData.h file looks like this and can contain any data types, as long as they are the same on both server and client ends.

enum
{
    PLAYER_POSITION = 0,
    PLAYER_STATUS   = 1
};

struct PlayerPosition
{
    float x;
    float y;
    float z;
};

struct PlayerStats
{
    int nHealth;
};


Essentially, you have data transmission & reception within a half dozen or so calls. All of the split-data problems etc. are handled behind the scenes in a different thread so as not to block normal program flow.
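The "split data problems" handled behind the scenes amount to reassembling messages that TCP delivers in arbitrary chunks. A sketch of that reassembly, assuming the [type][size][payload] framing described earlier (names here are illustrative, not the real class API):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

struct Message { uint32_t type; std::vector<uint8_t> payload; };

// Bytes from recv() are appended to an internal buffer, and complete
// frames are peeled off the front. A frame that arrived split across
// several recv() calls simply stays buffered until the rest shows up.
class StreamAssembler {
    std::vector<uint8_t> buf_;
public:
    void feed(const uint8_t* data, size_t len) {
        buf_.insert(buf_.end(), data, data + len);
    }
    // Returns true and fills 'out' only when a whole frame is available.
    bool next(Message& out) {
        const size_t hdr = 2 * sizeof(uint32_t);
        if (buf_.size() < hdr) return false;          // header still split
        uint32_t type, size;
        std::memcpy(&type, buf_.data(), sizeof(type));
        std::memcpy(&size, buf_.data() + sizeof(type), sizeof(size));
        if (buf_.size() < hdr + size) return false;   // payload still split
        out.type = type;
        out.payload.assign(buf_.begin() + hdr, buf_.begin() + hdr + size);
        buf_.erase(buf_.begin(), buf_.begin() + hdr + size);
        return true;
    }
};
```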


Let me know what you think. All (constructive) criticism welcome. ;)
You should look into WSAAccept, WSASendTo, and WSARecvFrom with overlapped I/O. The callbacks happen on kernel threads.

Essentially, you have data transmission & reception within a half dozen or so calls. All of the split-data problems etc. are handled behind the scenes in a different thread so as not to block normal program flow.

Assuming the client has at least 3 cores and the server at least 4. And that the OS isn't doing anything else, such as running a browser, anti-virus, Twitter, Facebook, indexing, ...

Networking is really not demanding on CPU. For this type of trickle bandwidth, the overhead of passing data back and forth between threads will be an order of magnitude larger than networking itself.

So, I thought I would share my logic and see if you guys think there will be any major problems that I haven't accounted for.


I like it that you're trying to make using the library simple. Simplicity in an API is always good, because it leaves less room for user error.
There is, of course, a limit to how simple anything involving distributed computing can be -- networks are unreliable and messy, and that's just the way it is :-(

I don't like how you use threads. What are they good for? I see nothing asynchronous in your client-side system that couldn't just as easily be implemented just using select() or non-blocking sockets, and using the buffering semantics of sockets and recv()/send(). Threads add locking overhead and more possible sources of bugs.

For the server, I think the choice of a few task-specific threads is even less of a match. If you want to use threads on the server, it's because you want to scale across available cores. Modern Xeon CPUs this year have 10 cores, and you can put 8 of them on a motherboard if you're looking for density. That's 80 threads needed to take full advantage of the hardware. At that point, the threading model has to be built into the actual server program structure, because you want to avoid deadlocks and avoid contention on common resources. When the threads are visible only inside the networking library, that neither scales across cores, nor allows users to actually take advantage of threads for scalability.

For a multi-threaded server, conceptually, you want there to be a queue of stuff to do, and each worker thread just does this:

while (true) {
    Work *w = queue.wait_for_work();
    w->work();
}

This is, by the way, what boost::asio does, as well as the kernel worker thread pool in Windows, and is why that pattern is so powerful.
enum Bool { True, False, FileNotFound };
Thanks for the comments hplus0603.

I was initially thinking that multi-threading would be the way to go, probably because of my lack of experience with it (not knowing the pitfalls).
I have already had obscure crashes when making a stack of clients connect at once. Luckily, I found out about EnterCriticalSection() & LeaveCriticalSection(), which fixed this problem.

The thinking behind the multi-threading was twofold: to take advantage of multiple cores for workload, and to stick with blocking sockets to keep CPU usage down while the blocking is occurring.

I have also thought of another potential problem with blocking sockets. I am currently using a vector of connected clients and iterate through it to send data. Is it possible that a slow or problematic client will block for too long and stall program flow for the other clients (and the same on the receive side of things)?

Is it possible that a slow or problematic client will block for too long and stall program flow for the other clients (and the same on the receive side of things)?


Yes, absolutely!

I recommend that you remove as many unknown factors as possible. If you aren't already very well versed in threading (and just finding out about EnterCriticalSection() seems like you aren't), I suggest starting with an approach based on select() and send() and recv(). That approach will get you to 63 connected clients on Windows, and 1000-odd connected clients on UNIX, with pretty decent performance, in a single thread.
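The select()-based approach suggested here boils down to asking the OS which descriptors are readable before calling read/recv, so one thread never blocks on a slow client. A minimal sketch, using a pipe in place of a socket so it runs anywhere (the function name is illustrative):

```cpp
#include <sys/select.h>
#include <unistd.h>
#include <string>

// Check readiness with select(), then read only if data is waiting.
// With real sockets you would FD_SET every client's fd and service
// whichever ones select() reports as readable - all in one thread.
std::string readWhenReady(int fd)
{
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(fd, &readable);
    timeval tv{1, 0};    // give up after 1 second instead of blocking forever
    int n = select(fd + 1, &readable, nullptr, nullptr, &tv);
    if (n <= 0 || !FD_ISSET(fd, &readable))
        return "";       // nothing ready: a slow peer can't stall us
    char buf[64] = {};
    ssize_t got = read(fd, buf, sizeof(buf) - 1);
    return got > 0 ? std::string(buf, got) : "";
}
```

The 63-client limit mentioned above comes from Windows' default FD_SETSIZE of 64; on UNIX it is typically 1024.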

If you actually want to provide a good threading API as well, then chances are that you need to get a lot more experience before you'll get to the point that you can compete with systems like boost::asio.
enum Bool { True, False, FileNotFound };
Thanks again.

I am probably guilty of aiming too high here, too early.

So, last night I started on the single-threaded approach (while still aiming at simplicity of use at the completed user level).

Once I get that completed to a level I am happy with, then I'll (possibly) attempt a threaded approach.
