About LunaRebirth

Recent posts:
Okay, thanks frob. Data usage is important to me, so I will be building a system to deal with it. One last question: is it faster to send a smaller number of bytes than a larger one? It sounds like it should be, but I don't know enough about what networking does behind the scenes. For example, will send()ing a 6-byte char array be faster than send()ing a 200-byte one? I'm thinking about sending a 4-byte message, delayed by one frame, that tells the server how many bytes the next message will be. That way, instead of sending 1024 bytes every frame, I can send exactly the bytes I need, at the cost of 4 extra bytes per send() telling the server how much to expect next. I can see this being good in some cases and bad in others.
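The 4-byte header idea above is usually called length-prefixed framing, and the header does not need to be delayed by a frame: it can travel in the same send() as the payload. A minimal sketch in Python (the post talks about C's send()/recv(), but the framing logic is identical; `send_msg`/`recv_msg` are illustrative names, not library functions):

```python
import socket
import struct

def send_msg(sock, payload):
    # Prefix the payload with its length as a 4-byte big-endian integer,
    # so the receiver knows exactly how many bytes to expect next.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    # recv() may return fewer bytes than requested; loop until we have n.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock):
    # Read the 4-byte header first, then exactly that many payload bytes.
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo with a local socket pair: header and payload go out together.
a, b = socket.socketpair()
send_msg(a, b"player moved to (3, 7)")
print(recv_msg(b))  # b'player moved to (3, 7)'
```

Because TCP is a byte stream, the `recv_exact` loop matters: a single recv() is not guaranteed to return a whole message, even a small one.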
Yes, you've answered my question, but to extend on it: say we have a blocking server and client. The client must send data to the server, and the server relays that data to all clients. If a client has nothing to send, the server sits waiting for data before it will send anything, so the client won't receive anything until it sends something first. To prevent that, I send() and recv() every frame no matter what, so I can receive all data without waiting. I do the same sort of thing with a non-blocking socket by not sending data until I've received data. So yes, synchronous.

I have tried making my server and client wait for a recv() before sending, and then, when there is nothing left to send, sending a single empty array to tell the other side it is ready for more data, but that is easier said than done, and I fear time isn't on my side. Basically, I'm worried that synchronously sending 1024 bytes every loop may use a lot of unnecessary bandwidth for a server hosted on a virtual machine, though that worry is backed by zero evidence.
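One way to avoid the "server waits for the client, client waits for the server" coupling described above, without exchanging dummy traffic every frame, is a non-blocking recv(): each frame the game loop tries to read and simply moves on if nothing has arrived. A hedged sketch, assuming Python sockets (`poll_messages` is an illustrative name):

```python
import socket

a, b = socket.socketpair()
b.setblocking(False)  # recv() now returns immediately instead of waiting

def poll_messages(sock):
    # Called once per frame: returns pending bytes, or None if nothing
    # has arrived yet. The game loop never stalls waiting on the peer.
    try:
        data = sock.recv(1024)
        return data if data else None
    except BlockingIOError:
        return None  # no data this frame; keep simulating

print(poll_messages(b))  # None: nothing was sent, but we didn't block
a.sendall(b"state update")
print(poll_messages(b))  # the update is picked up on a later poll
```

With this pattern, sending and receiving are fully decoupled: a client that has nothing to say still receives every update, and nobody has to send an empty array just to unblock the other side.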
I'm wondering how common it is for an application using a socket server to send bytes continuously, regardless of whether there is actually any data to send. For example, I have a socket server that sends a char array to all connected clients even when the array holds nothing the client needs. The reason is that I need the server to send data to a client at the same rate the client sends data to the server. If I checked that the buffer had information before sending, data from one client could fail to reach another client until the receiving client sent something itself (i.e., a player would have to move in order to see another player move). To prevent that, I stream all data, even an empty buffer, to all clients.

Is this normal, or should I spend time on a new algorithm that only sends data when necessary? Obviously that would be more beneficial, but with deadlines I have to prioritize. How common is it for games to send data continuously, including empty data, instead of sending only when there is a message?
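Streaming fixed-size buffers every frame is not the usual approach; the common pattern is to check readiness for reading and writing independently, so receiving never depends on having sent first and nothing is sent when the outgoing queue is empty. A minimal sketch using select() with a zero timeout, assuming Python sockets (`pump` and the `outgoing` queue are illustrative, not from any library):

```python
import select
import socket

server_side, client_side = socket.socketpair()

def pump(sock, outgoing):
    # One call per frame. Readability and writability are polled
    # independently, so a silent client still receives updates.
    readable, writable, _ = select.select([sock], [sock], [], 0)  # 0 = no blocking
    received = []
    if readable:
        received.append(sock.recv(1024))
    if writable and outgoing:
        sock.sendall(outgoing.pop(0))  # send only if we actually have data
    return received

# The server pushes an update; the client receives it on its next pump()
# without ever having sent anything itself.
server_side.sendall(b"another player moved")
print(pump(client_side, []))
```

This keeps the "player must move to see others move" problem away while sending zero bytes on idle frames; the trade-off is a per-frame select() call instead of a constant 1024-byte stream.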
Sorry for the confusing title; this is hard to word within the character limit. I have a game engine that includes various games inside it. One of the games inside the program is named after a different, existing game. Will this bring copyright issues? I will be selling the program itself, but will not be making money from the "mini"-game directly. Thanks for shedding any light on this topic for me.