

Confusion programming a server accepting websockets



#1 Vexal   Members   -  Reputation: 459


Posted 27 June 2014 - 02:33 PM

I have written a basic HTTP server in C++ that serves regular web pages, but I am now trying to extend it to support WebSockets.  All of the networking and parsing is done by my own code, with no libraries -- except that I copied and pasted some code for computing SHA-1 and converting to base64.

 

I am reading several guides that describe how to handshake an incoming WebSocket connection, such as these: http://www.altdev.co/2012/01/23/writing-your-own-websocket-server/ and http://enterprisewebbook.com/ch8_websockets.html

 

I am stuck on the part where the server needs to take the Sec-WebSocket-Key from the client's header and append the magic string to it, hash the result with SHA-1, encode the hash in base64, and send it back as part of the Sec-WebSocket-Accept header.
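To make the question concrete, here is that step as I currently understand it, as a sketch -- sha1() and base64_encode() are just placeholder names standing in for the code I copied:

#include <string>

// Placeholder declarations for the SHA-1 and base64 code I pasted in;
// the real routines have different names, this is just a sketch.
std::string sha1(const std::string& input);          // returns the raw 20-byte digest
std::string base64_encode(const std::string& bytes); // encodes raw bytes

std::string computeAcceptKey(const std::string& clientKey)
{
    // The magic string from the guides.
    static const std::string magic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

    // This is the part I'm unsure about: do I strip the trailing "=="
    // from clientKey first, and does magic need converting before the append?
    std::string digest = sha1(clientKey + magic);
    return base64_encode(digest);
}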

 

The specific part giving me trouble is the '==' at the end of the client's key, and the '=' at the end of the string the server sends back.  I can't find any information stating what to do with these -- do I drop the '==' from the key before appending the magic string to it, or do I keep it?  Also, I can't figure out where the '=' at the end of the final server answer comes from, or why the server's answer has only a single '=' while the client's key has two.

 

Lastly, I am confused about why the magic string the server uses is written in base16 (hex digits), while the key sent by the client's browser is already base64 -- do I need to convert the magic string to base64 before appending it, or does it not matter?

 

And when I convert to base64, am I treating each ASCII symbol as a digit (so 'A' in the string would be the value 10), or am I taking the raw binary representation of the entire string and re-expressing those same bytes with base64 digits?
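For reference, the base64 code I copied does roughly the following -- this is my simplified reconstruction of it, which may not match the original exactly:

#include <cstdint>
#include <string>

// Minimal base64 encoder sketch: it walks the raw bytes of the input
// three at a time and emits one character per 6-bit group, so 'A' in
// the input contributes its byte value 0x41, not the digit 10.
std::string base64_encode(const std::string& bytes)
{
    static const char table[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    for (; i + 3 <= bytes.size(); i += 3) {
        uint32_t n = (uint8_t)bytes[i] << 16 | (uint8_t)bytes[i+1] << 8 | (uint8_t)bytes[i+2];
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += table[n & 63];
    }
    // Leftover bytes (1 or 2) are padded with '=' so the output length is a
    // multiple of 4: 16 input bytes leave 1 over, giving "=="; 20 bytes leave
    // 2 over, giving a single "=".
    size_t rem = bytes.size() - i;
    if (rem == 1) {
        uint32_t n = (uint8_t)bytes[i] << 16;
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += "==";
    } else if (rem == 2) {
        uint32_t n = (uint8_t)bytes[i] << 16 | (uint8_t)bytes[i+1] << 8;
        out += table[(n >> 18) & 63];
        out += table[(n >> 12) & 63];
        out += table[(n >> 6) & 63];
        out += '=';
    }
    return out;
}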

 

 




#2 SeanMiddleditch   Crossbones+   -  Reputation: 9902


Posted 27 June 2014 - 06:25 PM

http://stackoverflow.com/questions/6916805/why-base64-encoding-string-have-sign-in-the-last

The trailing = is padding in the base64 encoding. The spec does not tell you to decode the client's base64-encoded key, so just take it as-is (equals signs and all), concatenate it with the special GUID value, and SHA-1 hash the concatenation. Then base64 encode the raw 20-byte digest. In general, base64 output ends in 0, 1, or 2 trailing equals signs depending on the exact byte count of the input; for a 20-byte SHA-1 digest that always works out to exactly one.
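As a quick sanity check, RFC 6455 section 1.3 walks through exactly one example; if your handshake function (computeAcceptKey below is just a stand-in name for whatever yours is called) reproduces it, your '=' handling is right:

#include <cassert>
#include <string>

// Stand-in for your handshake function: SHA-1 of key + GUID,
// then base64 of the raw 20-byte digest.
std::string computeAcceptKey(const std::string& clientKey);

int main()
{
    // Client key from RFC 6455, section 1.3: 16 random bytes,
    // base64-encoded -> 24 characters ending in "==".
    const std::string key = "dGhlIHNhbXBsZSBub25jZQ==";

    // Expected accept value: 20-byte SHA-1 digest, base64-encoded
    // -> 28 characters ending in a single '='.
    assert(computeAcceptKey(key) == "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=");
    return 0;
}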

#3 Vexal   Members   -  Reputation: 459


Posted 27 June 2014 - 06:52 PM

Thanks.  I ended up figuring it out through trial and error.





