Risks Of Using Your Computer As A Web Host?

Started by
32 comments, last by Hodgman 8 years, 2 months ago
On the ISP I use, the only limitation is that during peak times (weekends and 5 pm to 11 pm) the ISP automatically identifies the top 1% of users by bandwidth used and throttles certain types of non-HTTP traffic (mostly peer-to-peer and BitTorrent) to around 10% of the usual speed. Given that normal is 150 Mbps, I'm rarely ever in the top 1% for the area, and I never use the throttled protocols, so I never get throttled...

It also helps that most of my usage falls outside the peak hours, between 9 am and 5 pm when I work from home... :)

There are risks to any hosting. The question should be, "Am I okay with the risks of putting my personal machine out on the internet as a web host?". The risks are commonly:

  • Scalability: This model is hard to scale as load grows, because growth quickly demands more hardware. That is what makes services like DigitalOcean and Amazon great: click a button, scale.
  • Maintainability: Are you willing to put in the time and effort to keep software (Apache, etc.) up to date, so the continual stream of exploits can't be used against you? Some hosting providers do this; many do not.
  • Data loss: In the worst case, are you willing to accept losing everything on that machine, and anything on the associated network?
  • Extra costs: Are you willing to put up the cash for a static IP and the beefy connection that hosting requires? Hosting providers amortize these costs over several thousand clients. If you are going to do this for real, you probably want both, unless you like pissed-off clients.

Note that VMs don't actually solve most of these issues, except arguably data loss. Typically, once an attacker is in the VM, they can route out onto your local network to "find" things as an attack vector, unless you have some network skillz. I don't recommend doing what you want to do.


Great points!

But what of the ability to control your own security? I might be wrong (and I'm out to learn if I am, that's the whole point...), but I'd guess you could plug lapses and security loopholes yourself if you are in control, where your ISP might fail. So control of your own security might be an advantage; the security of very big-name ISPs has failed regularly.

Setting up your own bespoke firewall with a special algorithm could be an advantage

Don't trust the "understood" models of securing a web hosting package. If you have anything on your PC that you wouldn't want a random stranger traipsing through, then don't open that PC to the internet, and ideally don't put it on a network next to a PC that's opened up.


In other words, the only really good way to hide your sensitive data (cookies, password files, credit card info, kinky porn, whatever) is to make sure that nothing can talk to that machine without first being authorized by you. Fortunately, most NAT routers have firewalls that do a great job of this, until you go and poke holes in them. Put your web host in a separate network or a DMZ at a bare minimum - assuming the (all good) arguments above about bandwidth and availability haven't yet dissuaded you.
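To make the "authorized by you" idea a bit more concrete, here is a minimal Python sketch (purely illustrative, with placeholder addresses of my own invention) of an IP allowlist in front of a tiny HTTP server. A router firewall or DMZ is the real defense; this only shows the principle of rejecting unauthorized peers before any request is handled:

```python
# Minimal sketch, not a production config: a tiny HTTP server that
# refuses connections from any peer not on an explicit allowlist.
# The addresses below are placeholders for your own trusted hosts.
from http.server import HTTPServer, SimpleHTTPRequestHandler

ALLOWED_IPS = {"127.0.0.1", "192.168.1.42"}  # hypothetical trusted hosts

class AllowlistHTTPServer(HTTPServer):
    def verify_request(self, request, client_address):
        # Called before any request handling; unknown peers are dropped.
        return client_address[0] in ALLOWED_IPS

if __name__ == "__main__":
    # Serve the current directory on port 8080; verify_request is the gate.
    AllowlistHTTPServer(("", 8080), SimpleHTTPRequestHandler).serve_forever()
```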

I'm not stating the following as an expert on this topic, just thinking it through as it occurs to me, so I stand to be corrected.

Would such authorization happen in real time or be manual? If it's real-time, then it's an algorithm and can be hacked. If it's manual, that's backward, as non-real-time systems are not practical for most applications.

Also, is it an American thing to be charged a lot for crappy Internet with data caps, low upstream, extra charges for going over, and contracts that say you can't even open a listening port on your public IP?

Sounds backwards as hell to me; things like that are generally dead here in the UK, except on mobile contracts, which are and always will be living in the dark ages.

A bit off topic, and I promise this is not America-bashing as I love the US a lot, but with mobile contracts in the States things are really bad. I mean, when you get charged for receiving calls, as you are in the States, that's really, really backward.

can't help being grumpy...

Just need to let some steam out, so my head doesn't explode...

Also, is it an American thing to be charged a lot for crappy Internet with data caps, low upstream, extra charges for going over, and contracts that say you can't even open a listening port on your public IP?


You forgot low downstream as well. :D

Now that is a German thing, not an American one.

It's 5 months since I ordered 50 Mbit/s downstream (heck, I wish you could even get 50 Mbit up...) for my home, which is the fastest that 1&1 offers where I live (although they do offer 100 Mbit/s in other locations, and Telekom says 100 Mbit/s would be no problem at my place either, but they say a lot, and little of it is true).

Well, that's 5 months since the order, and 2 months since I last heard anything, which was "real soon now".

As it stands, I pay for 16 Mbit/s downstream and get about 13.5 Mbit/s down (and about 1.05 up), plus a lot of excuses (long cables and whatnot, which is demonstrably a lie; they just didn't book enough capacity on the fiber carrier) as well as references to the fine print, which says "up to". I told my neighbour, who lives in the next house down the road, and he replied: "Why! You're lucky, I wish I had 13.5; I'm getting 7.5."

Financially, it doesn't make much of a difference, since the rate is the same for 16, 50, and 100 either way (only if you order 6 or 2 does it get about 5€ cheaper).

So basically, they're unable to provide the 16 Mbit/s they promised, but they intend to deliver 50 Mbit/s. Which is just a joke, because in every other half-civilized country you get 200 Mbit/s these days unless you live where the water flows over the edge of the world.

Thumbs up for a high-tech country.

But what of the ability to control your own security? I might be wrong (and I'm out to learn if I am, that's the whole point...), but I'd guess you could plug lapses and security loopholes yourself if you are in control, where your ISP might fail. So control of your own security might be an advantage; the security of very big-name ISPs has failed regularly.

Setting up your own bespoke firewall with a special algorithm could be an advantage

So, if you set up your own cloud in, say, Amazon Web Services (AWS), for example:

  • You control the DNS with Route 53 and have to manage it yourself for any change of machine IPs, etc.
  • You control the machines that the DNS routes to
  • You control the security groups (basically a simplified firewall): what ports are exposed, which machines can talk to which other machines, etc. Yes, you control your own security here.
  • You control which machines can access which data in external stores (S3 buckets, Glacier, RDS, etc.)
  • You control how it scales, either manually or using a service like Elastic Beanstalk
  • You will quickly learn about automating deployments, as doing it by hand doesn't scale in this kind of environment
  • You control when/where security updates are applied, though you must give customers some heads-up, or else you get to learn about redundancy/fail-overs. That means learning about load balancers, not to mention an automation framework such as Chef, Puppet, SaltStack, etc.
  • You learn how much it really costs to self-host... shiz ain't cheap as it scales

The more I think this through: if your goal is to learn, why aren't you doing this?

Note: There are many good alternatives to Amazon such as DigitalOcean. Please check them out; I just used AWS here as it is what I presently use for my stack. If you want to give it a shot, ping me and I can help you with the initial setup.
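To make the security-group bullet concrete, here is a hedged sketch using boto3, the official AWS SDK for Python, to create a group that exposes only HTTP and HTTPS. The group name and VPC ID are placeholders, and it assumes your AWS credentials are already configured locally:

```python
# Sketch only: create a security group that allows HTTP/HTTPS in and
# nothing else. The group name and VPC id below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="web-host-example",            # hypothetical name
    Description="HTTP/HTTPS in, nothing else",
    VpcId="vpc-0123456789abcdef0",           # placeholder VPC id
)

# Open only ports 80 and 443 to the world. SSH stays closed here;
# in practice you would add your own IP as a /32 rule for port 22.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
```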

But why does it have to be cloud?

The question was "can I run this from home?", which automatically rules out the possibility of "I want to do something like eBay, only better and twice as large". For anything that you could possibly run from home, any such word as "scalable" is nonsensical.

For anything on a scale that would remotely allow for a question like "can I run this from home?", a 4.95 vServer will do just fine. Those have a gigabyte or two of RAM guaranteed, and some 20-50 GB of disk. Yeah, virtual servers suck because there is extra scheduler jitter. So what? You're serving web pages, not running an FPS game. Nobody notices a difference. The plan usually includes fully transparent backups, too. So... nothing to do on that front, either.

For anything a cheap vServer cannot do, unless you aim to run at least something the size of the sales site of a multinational company, a single unremarkable dedicated server for anywhere from 29 to 49 currency per month will do. You'll get a not-the-fastest Skylake, or possibly a not-the-most-recent Haswell, with anywhere from 4 to 8 TB of disk and 16 or 32 GB of RAM, plus either a 100 Mbit or 1 Gbit uplink. Dude. Really, for a web server?

If you think you are so important that the world cannot afford two, possibly three reboots per year, during which your server will not be reachable for 5 minutes, or if your company cannot afford the 4 hours it takes to get a replacement up and running should the server suddenly catch fire... well, then run a second server on a failover IP service.

You don't need to "scale" unless your company's name is Amazon, or Twitter, or YouTube. Really, just how many million visitors per second do you expect? Most people could probably host their website on a Raspberry Pi and nobody would ever notice (in fact, Hetzner now offers ODROIDs as dedicated mini-servers!).
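For a sense of how little machinery that actually takes: a site of that size can be served with nothing but the Python standard library. A sketch (the ./site directory and the port are made up):

```python
# Deliberately tiny static-file server -- roughly what "hosting your
# website on a Raspberry Pi" amounts to for a low-traffic site.
from functools import partial
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

# Serve the ./site directory (hypothetical path) on port 8080.
handler = partial(SimpleHTTPRequestHandler, directory="site")
ThreadingHTTPServer(("", 8080), handler).serve_forever()
```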

You don't need to control the DNS; the DNS run by the provider is mighty fine. You need no super-special dedicated firewall, no complicated security groups or such, and you don't need 35 servers in round-robin mode with 12 application servers and 10 database servers in a virtual LAN that you can dynamically reconfigure six times per day in an instant (although you can get all of that at every normal provider, if you absolutely believe you need it).

Yes, there are scenarios, like when you are under a DDoS and you wish you had this-and-that, such-and-so... but realistically... First, if you are under DDoS, then 9 times out of 10 you had it coming. Second, if you have a somewhat reasonable provider, they have an automated mitigation mechanism anyway. Third, if the provider doesn't have such a thing far up the hierarchy, there is nothing you could do anyway. None of that applies to anything that falls under "can I run this from home", though.

There are also scenarios where the word "scale" makes sense. In the case of serving a not-very-very-very-serious-sized website, though, it's just a marketing buzzword used by companies trying to sell you their expensive cloud services.

First, if you are under DDoS, then 9 times out of 10 you had it coming. Second, if you have a somewhat reasonable provider, they have an automated mitigation mechanism anyway

Hahaha. No, really?

I've been hit by DDoS several times on my dedicated servers. A few times it was a big enough attack to take down the entire datacentre, and we're talking places the size of Telehouse here.

Why was I attacked? Simply because a teenage kid was bored and had a botnet. I know this because these kids like to brag and let you know what they did. The idea that you need to annoy Anonymous or rile up the Russian mob to get hit by a DDoS is not true in the modern world.

What happened with the hosting providers, and what was their response? It was to null-route my IPs upstream for a week, locking me completely out of my box. This is the usual response, and most hosts say they'll do this in their terms of service. If you want more, you need to go get proper DDoS protection tunnels/proxies, which cost money.

This is moot anyway if you host at home: if someone throws multiple gigabits at you, they'll kill the whole neighborhood or town, and you'll most likely be banned by your ISP. At the very least, you'll be left with a big bill...

DDoS is no laughing matter :(

I have to agree on the DDoS attacks above... often there is no reason. A few startups I have been with have been hit without reason. GitHub gets DDoSed all the time, and they don't go out of their way to get their site blacklisted.

Well, I'm not doubting that you can be the victim of a DDoS otherwise, nor saying it's a laughing matter. But more often than not, the attacks that deserve the capital D at the beginning are not from bored kids, but from either angry kids (or angry young adults; have a word with your customer relations guy) or from organized crime (either paid by a competitor for direct damage, or blackmailing the target). I remember a mail from Conrad some 2-3 years ago, a week or two before Christmas, which basically said: "We received a demand for ransom, and we're not going to pay. Apologies in case service is disrupted." Shit happens; not much one can do in such a case. But it's not like it usually comes as a total surprise (though, of course, that can happen, too).

But the thing is, either it's a stupid kid with 2 computers and it does not matter, or it is... something big, with a few thousand machines. And then, no matter what clever plan you had, no matter what awesome firewall you have, nothing will help. It doesn't matter whether your website runs on one dedicated server or has magic cloud-scale powers. Either way, it will go down (only the cloud will charge you 100 times more for the extra bandwidth and CPU time before bailing out, whereas on a single dedicated server the maximum possible "damage" is limited by the one Ethernet cable's capacity). The only way of effectively dealing with it is blocking the traffic somewhere near IX level. That's not something you can do, but it is what providers may be able to do (and the big ones do).

OP, you never did state what you need a web host for. If it is for creating your version of eBay-but-better, the reasons for not using your PC have already been thoroughly discussed. Or do you want a host that you can play around with web programming on? If it's the latter, just install WAMP/XAMPP on your machine and serve everything from your closed localhost (or a VM with Linux for full control). All the other computers on your LAN will be able to access the site, but there are no worries about outside trouble (other than the usual).
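And if the web programming you want to practice happens to be in Python rather than PHP (an assumption on my part; WAMP/XAMPP is the PHP route), the same closed-localhost setup is a few lines with a microframework like Flask, which binds to 127.0.0.1 by default:

```python
# Hypothetical localhost sandbox using Flask (pip install flask).
# app.run() binds to 127.0.0.1 by default, so nothing outside this
# machine can reach it -- the same "closed localhost" idea as XAMPP.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from my localhost sandbox"

if __name__ == "__main__":
    app.run(port=5000)  # visit http://127.0.0.1:5000/
```

(To let other machines on your LAN in, the way XAMPP's defaults do, you would pass host="0.0.0.0" to app.run instead.)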

This topic is closed to new replies.
