Using Apache, a custom server, and PHP for a real-time game?
They are now called "comet" techniques. This could serve as a good jumping-off point: http://en.m.wikipedia.org/wiki/Comet_(programming)
You're trying to force a square peg through a round hole.
If you need a persistent connection with real-time data updates, you need a TCP connection, not a HTTP connection. If you HAVE to do it with servers and clients you don't have full control over, look into Websockets and see if they are supported in your environment. Without websockets, COMET-style requests are your best bet.
Typically, you'll want to double-buffer the requests.
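The double-buffering idea can be sketched in a few lines. This is a language-agnostic illustration (Python here, though the client in question is C#): `send_request` stands in for one blocking HTTP long-poll, and the client always keeps two of them outstanding so the server never has to wait for a fresh request before it can deliver data.

```python
import itertools
import queue
import threading

def double_buffered_poll(send_request, num_messages):
    """Keep two long-poll requests in flight at all times; re-issue
    a request as soon as one completes."""
    inflight = queue.Queue()
    results = []

    def issue():
        # One thread models one outstanding HTTP long-poll request.
        inflight.put(send_request())

    threading.Thread(target=issue).start()  # request #1
    threading.Thread(target=issue).start()  # request #2
    while len(results) < num_messages:
        results.append(inflight.get())          # a response comes back...
        threading.Thread(target=issue).start()  # ...immediately re-issue
    return results

# Demo: a fake "server" that hands out sequence numbers.
counter = itertools.count()
lock = threading.Lock()

def fake_server():
    with lock:
        return next(counter)

msgs = double_buffered_poll(fake_server, 5)
```

The point of the overlap is that the gap between "response received" and "next request issued" is hidden by the second, still-parked request.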
Yeah, I know this isn't the ideal scenario, but I don't have a dedicated public server that I have access to at the moment, so I'm trying to cut the sides off that square peg to make it fit.
This is a C# application, not a browser application, so I have full control over how the client connects.
I like the idea of double-buffering requests. If I understand what you're saying correctly, I attempt to maintain two active connections with the server, so when I get a response on one, the other connection is still there while I re-create the first.
Yup. It works best when the server knows that there are two requests incoming, and returns the data for the first request when the second request comes in. If you're using plain PHP, you may need to use shared state through something like memcache to make that work, and you'd need to be polling or something to actually sequence it right. It's a right mess when all you have is a "single process per request" model.
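The "shared state" coordination above can be sketched like this. It is not PHP; a plain Python dict-and-lock stands in for memcache, and each `handle_request` call models one isolated PHP worker that can only see other requests through the shared store. When the second request shows up, it releases the older one.

```python
import threading

# A lock + list stand in for memcache: the only state that separate
# "one process per request" PHP workers could share.
lock = threading.Lock()
pending = []  # (request_id, reply_event) for requests parked so far

def handle_request(req_id, reply_event):
    # Each call models one PHP worker handling one HTTP request;
    # it cannot see its sibling directly, only the shared store.
    with lock:
        pending.append((req_id, reply_event))
        if len(pending) == 2:
            # Second request arrived: answer the OLDER one now and
            # keep the newer one parked for the next round of data.
            _old_id, old_event = pending.pop(0)
            old_event.set()

first, second = threading.Event(), threading.Event()
handle_request(1, first)   # parked; nothing answers it yet
handle_request(2, second)  # arrival of #2 releases #1
```

In real PHP each worker would poll memcache for its turn, since there is no in-process event to wait on; that polling is exactly the "right mess" being described.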
Well, what I was thinking is something like this:
1. The client makes a request to the server; PHP takes the request and passes it to my custom server.
2. The custom server receives the request from PHP (which then waits for a response from the custom server before responding to the client).
3. My custom server checks who is requesting the data, and adds the request to that user's queue.
4. Once the server has data to submit, it selects the first available PHP request and submits the data (then the PHP request hands that back to the client).
5. If no request is available, the data is buffered while waiting for a request to come in.
Do you see anything wrong with this approach?
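Steps 3-5 above amount to a per-user mailbox on the custom server. A minimal sketch (Python, names like `GameRelay` are hypothetical; `respond` stands in for the callback that completes a parked PHP request):

```python
from collections import defaultdict, deque

class GameRelay:
    """Per-user mailbox: park incoming PHP requests (step 3), answer
    the first parked request when data arrives (step 4), or buffer
    the data when no request is waiting (step 5)."""

    def __init__(self):
        self.waiting = defaultdict(deque)   # user -> parked PHP requests
        self.buffered = defaultdict(deque)  # user -> undelivered data

    def on_request(self, user, respond):
        if self.buffered[user]:
            # Data was buffered earlier: answer immediately.
            respond(self.buffered[user].popleft())
        else:
            self.waiting[user].append(respond)

    def on_data(self, user, data):
        if self.waiting[user]:
            self.waiting[user].popleft()(data)   # step 4
        else:
            self.buffered[user].append(data)     # step 5

relay = GameRelay()
out = []
relay.on_data("alice", "tick-1")       # nothing parked -> buffered
relay.on_request("alice", out.append)  # answered from the buffer
relay.on_request("alice", out.append)  # parked, waiting for data
relay.on_data("alice", "tick-2")       # answers the parked request
```

Note this sketch ignores the hard parts the objections below are about: timeouts on parked requests, clients that disconnect mid-wait, and unbounded buffers for users who stop polling.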
Everything.
Ok, let's be blunt: I would not allow this approach in my office. It is crap. So many things can go wrong it is scary.
Communicate over TCP. Find another hosting option.
Sorry for not helping with this response.
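For contrast, the "communicate over TCP" suggestion boils down to a single persistent socket that either side can write to at any time, with no request/response dance. A tiny sketch (Python; `socketpair` stands in for a real client and server on separate machines):

```python
import socket

# One persistent, bidirectional connection. Unlike HTTP, the server
# can push data the instant it has some, without a parked request.
server_sock, client_sock = socket.socketpair()

client_sock.sendall(b"move:12,7\n")   # client pushes an input event
event = server_sock.recv(64)          # server sees it immediately

server_sock.sendall(b"state:ok\n")    # server pushes back unprompted
reply = client_sock.recv(64)
```

Since the client here is a C# application rather than a browser, nothing prevents it from opening a raw TCP connection; the only obstacle is the hosting environment.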
Well, perhaps if you'd actually point out why or how "so many things can go wrong", or provide some examples, I'd have a better idea of where this could fail, instead of you just walking in and saying "Nope, find another way" without giving any reason it won't work. I think I've made it clear that I'd choose another path if I could, so instead of reiterating a known fact, you might actually be helpful and tell me why this path is so prone to failure.