Good HTTP library?... few dependencies?

Started by JPulham
14 comments, last by JPulham 17 years, 4 months ago
I want a simple library to access a PHP 'server' like so: http://....../server.php?action=blah blah blah

I also want as few dependencies as possible. What's the best one? I'm using a socket library in the game... could I use that to make the HTTP request? Is it worth it, or should I use another lib?

Any help appreciated...
JPulham
pushpork
This is probably better suited to the networking forum.

Sander Marechal [Lone Wolves][Hearts for GNOME][E-mail][Forum FAQ]

I have just tried:
http://www.mindcontrol.org/~hplus/http-get.html
http://www.scumways.com/happyhttp/happyhttp.html

Both worked. I ended up using HappyHTTP, not sure why! Maybe it was a little easier to use.
True... it did belong there... I never really knew it existed. I just thought 'web development' and went into autopilot :S.

Anyway... I've decided to write my own! Maybe just for kicks, but also for the education :P

Basically, can anyone point me to some good links covering the details of HTTP?

I have looked up and googled 'HTTP GET...'. How do I signify the 'new line'? The reason for asking is that in the source to HTTP-GET a new line is '\r\n'. Where can I find these details?

Thanks for your patience (I hope)
JPulham
pushpork
Connect to server.
Write the HTTP GET request, preferably 1.0 to keep things simple.
Add a blank line.
Wait for data, read it all.
Socket should close.

A new line in HTTP is indeed "\r\n", characters 13 and 10.

Read this: RFC 2616
Pay especially close attention to sections 5.1 and 6, with reference to section 2.2 to decode what the codes like CRLF mean.

Personally I'd just use libcurl. Implementing an HTTP GET directly with sockets may be simple, but with libcurl you're absolutely sure it works, and no time is wasted on debugging.
The library is released under an MIT-style license (not as restrictive as the GPL/LGPL), so you can just copy the code directly into your project.
Also, with libcurl you get quite advanced stuff like proxy support for free, and HTTPS if you can live with an extra dependency.

-- Rasmus Neckelmann
I understand the basics of HTTP and can write a VERY simple 'GET'. Now that I know HTTP uses '\r\n' I'll be able to put it together... it's just handling the response that's left, really. I only need simple communication with my 'server', so it should do... It's not like I'm writing a web browser... yet :P :?
pushpork
Actually, it's not quite as simple as Kylotan says.

Most web servers these days don't support HTTP/1.0 requests without the "Host" header (in HTTP/1.1 it's mandatory anyway), and many don't support HTTP/0.9 requests at all.

Although you don't need to deal with much on the response-parsing side, you may wish to get the contents of various headers, which requires parsing; that's extra effort and easy to get wrong. I'd go with using an existing library.

Mark
I've implemented quite a few applications that used screen scraping or some form of HTTP download. Honestly, every time I use one of those connection libraries I'm usually disappointed. libcurl is a possible exception, but its API (IMHO) leaves much to be desired. If all you are doing is a simple, quick connection, it might be interesting to write your own. I found that by doing so I had complete control over the process (including notifications, filters, proper buffer allocation, etc.).

It is also worth noting that this is not an extremely simple task. Unless you want to block your application, you're going to need either multithreading with synchronization or asynchronous socket I/O. Neither is trivial, considering you'll have quite a bit of state information to manage.

All in all, I was impressed by the performance I got from a custom implementation, as well as the zero dependencies and the flexibility it afforded.

Everything you could want to know about HTTP 1.1 is at http://www.w3.org/Protocols/rfc2616/rfc2616.html and markr is quite right, many servers will refuse to speak to an HTTP 1.0 client nowadays, so implementing the (marginally more complex) HTTP 1.1 is probably best.

It took me about a day to write my last implementation (in C#), which included support for multiple simultaneous tracked downloads (as well as a queue and a single manager class for notifications). I used async I/O to do the dirty work; I've always found that issues creep up with multithreaded implementations when you're trying to do multiple connections.

Things get significantly easier if you control the server you're connecting to. If you don't, be aware that you may have to add support for cookies, referrers, etc.

Hope this helped, sorry for the 10 page article :-)

-Dave
The server is on my friend's web site (I'm the lead 3D programmer there) and he talks to the owner of the web host quite a lot, so I can get a lot of info on the server (in theory).

Also, this is a one-off connection. I can hard-code the request into the game if I want (although I'll probably put it in the XML config file). Basically I only need to load an XML document that was downloaded (PHP-generated).

This is why I think this is a good idea... I think :?
pushpork

This topic is closed to new replies.
