UDP problems in server/client app


Recommended Posts

Hi! We're making a server application in C++ with SDL_net, and a client in VB.NET. Right now the client can connect and send values to the server like we want it to. It successfully sends the message "p1" to the server first, which marks the starting point of the application (not really relevant to my question, though).

The problem occurs when the actual application starts. The purpose of our application is to send the mouse y-coordinate from the client to the server. This works perfectly if I send a datagram with fewer than 4 characters. If the datagram contains the value "p1512", for example, the receiving server only shows garbage data and cannot use the value. However, if I send a datagram that contains fewer than 4 characters, for example "p15", it works perfectly.

This is the main problem we have stumbled on in our network project. We cannot get any further because of it and have no idea what's wrong. We've double-checked the code and tried everything. Help is much appreciated.

Check your code again - seems to be a problem with your code.

Show the code you use for sending and receiving (sendto(), recvfrom() calls).

If you send 5 bytes then you should receive 5 bytes. You should post some code so we can offer a bit more insight. Specifically the code that does the send and the code that does the receive would be helpful :-)

Sounds like you're using sizeof() instead of strlen() to figure out how big your string is.

Well then, here's some code for the server (C++/SDL_net)

// allocating the packet is done before the main loop
if (!(p = SDLNet_AllocPacket(10))) {
    fprintf(stderr, "SDLNet_AllocPacket: %s\n", SDLNet_GetError());
    exit(EXIT_FAILURE);
}

// this is called inside the main loop; it simply shows all the messages that are received
memset(p->data, 0, sizeof(p->data));
if (SDLNet_UDP_Recv(sd, p)) {
    std::cout << (char *)p->data << std::endl;
}

The garbage data always exceeds the allocated size of 10, FYI. I don't have access to the VB.NET code right now, but after sniffing the datagrams that were sent, we're fairly sure it's the receiving side that's doing it wrong.

I can post the code for my other client, just going to check if the same thing occurs using that.

edit:
It worked when I made a C++ client to send the datagrams. Is there some known conflict between VB.NET and C++ when it comes to networking? I think my friend only uses the UDP Send function when sending the messages.

[Edited by - password on May 28, 2008 7:45:39 AM]

As hplus0603 said above, you're using sizeof to measure the length of your data, which is wrong. sizeof is a compile-time measure to tell you the size of a given type, and can't know how much data has been placed there. You need to query the packet's 'len' property to see how much to read. Docs here.

Quote:
 Is there some known conflict between VB.NET and C++ when it comes to networking?

No. It's quite likely it's a bug in your sending code.

Quote:
 Original post by Kylotan:
As hplus0603 said above, you're using sizeof to measure the length of your data, which is wrong. sizeof is a compile-time measure to tell you the size of a given type, and can't know how much data has been placed there. You need to query the packet's 'len' property to see how much to read. Docs here.

I only use sizeof in the memset and memset is something that wasn't originally there either. I don't use sizeof to measure the length of my data at all.

You're still not posting the sending code.
Also, if you sniff the data using Wireshark, what data is in the packet on the wire?


Isn't p->data a pointer (see below)? You treat it that way in the memset.

The size of a pointer is usually 32 bits (4 bytes).

You should be sending an array of chars (zero-terminated) inside the packet data.

You don't show how you are putting data in on the sender end.

You are doing a memset of 4 zeros, with no change after the 4th byte
(if the sender isn't zero-terminating the data, it can overrun after the 4th byte).

typedef struct {
    int channel;       /* The src/dst channel of the packet */
    Uint8 *data;       /* The packet data */
    int len;           /* The length of the packet data */
    int maxlen;        /* The size of the data buffer */
    int status;        /* packet status after sending */
    IPaddress address; /* The source/dest address of an incoming/outgoing packet */
} UDPpacket;

Quote:
Original post by password
Quote:
 Original post by Kylotan:
As hplus0603 said above, you're using sizeof to measure the length of your data, which is wrong. sizeof is a compile-time measure to tell you the size of a given type, and can't know how much data has been placed there. You need to query the packet's 'len' property to see how much to read. Docs here.

I only use sizeof in the memset and memset is something that wasn't originally there either. I don't use sizeof to measure the length of my data at all.

Sorry, my mistake. (sizeof is still the wrong operation to use there though.)

Quote:
 You should be sending an array of chars (zero-terminated) inside the packet data

That's wrong -- the data does not need to be zero terminated, unless you intend for it to be a zero terminated string (in which case the zero is part of the data). send() takes a length parameter, so you can send whatever data you want.

We managed to solve it. It works if we send an array of chars that is null-terminated with '\0'.

Quote:
 It works if we send an array of chars that is null-terminated with '\0'.

That just means that you don't understand what's going on in one of the sending or receiving code paths. You happened to find some work-around that makes it appear to work -- but are you sure that it will keep working? For all possible messages? Is this what it's designed to do?

Trusting in code that "appears" to work, without knowing that it works and why, is one of many ways that software becomes buggy, unstable, and hard to maintain. If you KNOW (through specification or documentation) that the recipient expects a zero-terminated string, then you can send that data -- but then the recipient is fragile and possibly vulnerable to remote attack, because someone might send a packet that does not contain a terminating zero.