Worldwide ransomware cyber attack


Though it didn't make its way to the place where I work, it's still made things interesting over here. They upped security on our network, but in doing so kind of crippled some of us. As we speak, I'm unable to work. I can only sit at my desk. It's... definitely something.


The most secure system and the one least at risk is the one that can't do anything at all? :)
[quote name='Promint']What complete and utter nonsense. Take your conspiracy mongering garbage somewhere else. This was a system bug like anything else, including Heartbleed.[/quote]
[quote name='Promint']Go back to Slashdot or whatever random hole you crawled out of to waste our time.[/quote]
What the heck. Do you need to be rude to talk with people?
@Hodgman
I can't speak to whether Microsoft has any relationship with the NSA or any other institution. But the reason I said it's bad for a foreign public institution to use Windows, even if it had no security vulnerabilities at all, is that since Windows 7 it has been reported that, even with the telemetry options disabled, the system still sends data to Microsoft's servers. It happened on 7 and 8, and it still happens on 10. Some people might not care about that, but there are people who care about their privacy and their civic information (which is usually passed to public institutions), especially if it could fall into the hands of a private company.
@mhagain
I'm well aware of that. That's why I mentioned that it was designed for a single user and had the multi-user feature added later. As for the registry, I meant that, without going through one of the Reg*() functions, entries can be modified via a simple ".reg" file.

[quote]@mhagain
I'm well aware of that. That's why I mentioned that it was designed for a single user and had the multi-user feature added later. As for the registry, I meant that, without going through one of the Reg*() functions, entries can be modified via a simple ".reg" file.[/quote]


DOS-based Windows was designed for a single user only. DOS-based Windows never had proper multi-user support added, and the product line was cancelled in 2001.

NT was a clean-sheet-of-paper design that was multi-user from the outset and just happened to share components of the shell. NT is otherwise unrelated to DOS-based Windows. Come on, you use Linux; you should know that the shell is not the OS.

.reg files have absolutely nothing to do with the format of the registry database itself; they're just plain-text import scripts that regedit applies through the same API.
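To make the distinction concrete, here's a minimal sketch of the two paths: writing a value through the registry API versus importing an equivalent plain-text .reg script with regedit. The key and value names are made up for the illustration, and Python's standard-library winreg module is used as a thin wrapper over the Win32 Reg*() functions:

[code]
import winreg  # Windows-only; standard-library wrapper around the Win32 Reg*() API

# Importing this plain-text .reg script with regedit.exe produces the same
# end result as the API calls below; the script format has nothing to do
# with the binary on-disk hive format:
#
#   Windows Registry Editor Version 5.00
#   [HKEY_CURRENT_USER\Software\ExampleApp]
#   "Enabled"=dword:00000001
#
# "ExampleApp" and "Enabled" are made-up names for this sketch.
with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER,
                        r"Software\ExampleApp",
                        0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 1)
[/code]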


The recent attack isn't so scary if you remember the time when your unpatched Windows XP computer was rendered inoperable literally a second after you plugged in the network cable.

Presently, you must have an unpatched system and either be immensely stupid, or have someone else with an unpatched system who is immensely stupid on the same LAN, to be affected. And then, if you don't have SMB running, you're out of the equation anyway.
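For what it's worth, whether the SMBv1 server is switched off can be checked via the documented "SMB1" registry value under the LanmanServer parameters key (0 = disabled). A minimal, read-only sketch using Python's standard winreg module:

[code]
import winreg

# Read the documented SMBv1 server switch (0 = disabled). This sketch
# only checks; actually disabling SMBv1 should go through the supported
# OS facilities.
KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "SMB1")
        print("SMBv1 server:", "disabled" if value == 0 else "ENABLED")
except FileNotFoundError:
    # No explicit value set: the OS default applies (enabled on anything
    # old enough to be hit by this attack).
    print("SMB1 value not set; OS default applies")
[/code]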

What I find more scary than the actual attack is the mindset that allows for it, which I've already mentioned many times before. Everything has to be smart and networked, and connected to everything, regardless of risks, and even if there is no benefit whatsoever.

I mean, be serious... cyber attacks? Who gives a fuck about cyber attacks. So you think you can bring down my website. Awesome.

But reality has it that, for a reason no sane person can understand, mission-critical equipment in institutions like British hospitals or Deutsche Bahn is not only physically connected to the internet (what the fuck? what the fuck?) but there are even computers on those subnets that can receive unsolicited emails. Who needs to read external emails on a computer that is connected to the central scheduling network? And then of course, none of the mission-critical computers are up-to-date, nor is there a re-image DVD available to fix the problem in a timely manner.

[quote]...nor is there a re-image DVD available to fix the problem in a timely manner.[/quote]


The problem with that is that a lot of these systems use very heavyweight "enterprise" software: SAP, Oracle Applications, Great Plains, that kind of stuff - software that requires a team of 40-odd consultants performing six months of customer-specific customization and configuration in order to even make it usable.

Getting the OS back is easy.

Getting data back is easy.

Getting applications back is moderately difficult.

Getting heavily customized stuff back to exactly the same state it was in before a DR (disaster recovery) scenario is virtually impossible.


[quote]Though it didn't make its way to the place where I work, it's still made things interesting over here. They upped security on our network, but in doing so kind of crippled some of us. As we speak, I'm unable to work. I can only sit at my desk. It's... definitely something.[/quote]

[quote]The most secure system and the one least at risk is the one that can't do anything at all? :)[/quote]

Hahahahaha, that seems to be what they're thinking!

[quote]But reality has it that, for a reason no sane person can understand, mission-critical equipment in institutions like British hospitals or Deutsche Bahn is not only physically connected to the internet (what the fuck? what the fuck?) but there are even computers on those subnets that can receive unsolicited emails. Who needs to read external emails on a computer that is connected to the central scheduling network?[/quote]

Is there a cost-effective alternative to networking all these machines that doesn't involve them being on the internet? I imagine they could be on a VPN, but that is not a full solution.

Additionally, it's pretty important for some of these systems to have both internal and external connectivity. An example is a doctor sending an email to their assistant, who then may need to contact a patient. They could have two systems with an air gap, but that's pretty inefficient and adds other expenses.

[quote]Is there a cost-effective alternative to networking all these machines that doesn't involve them being on the internet? I imagine they could be on a VPN, but that is not a full solution.
Additionally, it's pretty important for some of these systems to have both internal and external connectivity. An example is a doctor sending an email to their assistant, who then may need to contact a patient. They could have two systems with an air gap, but that's pretty inefficient and adds other expenses.[/quote]


This is actually a pretty standard setup.

For the example of internet access, PCs are connected only to the LAN; you also have a proxy server on the LAN, and you ramp up security on that proxy server.

For the example of email, you have an email gateway that your internal email servers forward email to, and which then forwards on to the internet. You ramp up security on the gateway.

This has been standard thinking in network design for a long, long time: rather than connecting PCs directly to the internet, you have a single point of ingress/egress which all connections go through, and you put heavier security on that single point.
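To make that concrete, here's a toy sketch of a single point of egress: internal hosts have no route to the internet, only to the gateway, which is the one machine you harden and monitor. The addresses and ports are made up, and a real deployment would use a proper proxy or firewall product rather than a bare relay like this:

[code]
import socket
import threading

LISTEN = ("10.0.0.10", 8080)       # internal-facing side of the gateway
UPSTREAM = ("198.51.100.1", 8080)  # next hop toward the internet

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the sending side closes."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

server = socket.create_server(LISTEN)
while True:
    client, _addr = server.accept()
    # A real gateway authenticates, filters, and logs here; forcing all
    # traffic through one choke point is what makes that possible.
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
[/code]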

This breaks down in a couple of scenarios.

One scenario is BYOD environments, where an already-compromised device is brought onto the network. One way of dealing with this is to just not allow BYOD. Another is to configure and enforce NAC (network access control)/remediation networks. Both of these are vulnerable to the second scenario...

...which is "power user"/management types who like to believe that policies such as these shouldn't apply to them. It's either too awkward, or it slows down their workflow, or they're too important, or they're self-proclaimed experts on security, or whatever.

This is why I generally take a quite dim view of "power user" types: I do this kind of stuff for a living, and I'm quite familiar with the kind of mess that all too frequently needs to be cleaned up after them.


[quote]...very heavyweight "enterprise" software: SAP, Oracle Applications, Great Plains, that kind of stuff - software that requires a team of 40-odd consultants performing six months of customer-specific customization and configuration in order to even make it usable.[/quote]

[quote]Getting heavily customized stuff back to exactly the same state it was in before a DR (disaster recovery) scenario is virtually impossible.[/quote]

I don't have any insight into how things work at Deutsche Bahn (where the attack literally meant "lights out"; I was travelling by train that day, and it was real fun finding the train in the first place, and then we stopped in the middle of nowhere in Franconia for half an hour because... stuff... with trains being routed by hand and nobody knowing who owned the track or who had precedence), but I happen to know how a company that does those kinds of 40-consultant twiddling jobs uses that same enterprise stuff.

They have a lot of custom stuff, but deleting the custom SAP/Salesforce/MMS/whatever stuff is simply not possible, short of compromising the server.

You have a web interface through which you can access the data you have access to, which is usually a surprisingly small subset of a subset of your particular work group's stuff. Not rarely, people in the same group cannot even access another project's data from the same group because of (imaginary or real) direct-competitor issues. You just don't have the access rights to modify anything of significance ("significance" meaning "will disrupt business for days"; you can of course still fill in wrong data for your own project). Operating system and applications... everything is imaged onto the computer by IT. A bare-metal copy, which takes like... 20 minutes maximum (they'll make that 3-4 hours to look more important, though).

It didn't look like anything "really big" such as a server was compromised at DB, anyway. For example, there was no information whatsoever about where a train could be found, nor did trains or carriages have their proper numbers until a minute before departure, when the conductor typed them in by hand. However, service personnel at the station could access all the data via a web interface on their phones, and if you asked them they would tell you where to find your train... in my case, the information was even correct, so I was lucky :P

Really, I don't get what caused such a massive outage. I don't know why, say, a computer running the displays at a station or a computer managing train numbers needs to be connected to, and accessible from, the internet. Especially if you are the one who owns the part of the "internet" (at the physical layer) that is of importance to you.

But I don't get why it took so long, either. Especially since the presumed "re-image DVD" to restore the broken machines is most likely not a DVD at all, but a built-in bootloader that pulls a rescue image from a SAN. Which you can, presumably, trigger via KVM without having to visit a hundred locations in person. It works that way with every no-shit hoster where you can rent a 50€/month server for a hobby; you would think a country-wide enterprise setup isn't much worse than your favorite hobby server hoster. So... given a restore from a rescue image, if doing 1 computer takes, say, 30 minutes, then doing 100 computers takes like... 31 minutes?
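That arithmetic works because the restores run in parallel: each machine pulls its own image from the SAN, so the wall-clock time is roughly one restore plus dispatch overhead. A sketch of the dispatch side, where trigger_rescue_boot() is a hypothetical stand-in for whatever the remote-management (KVM/IPMI) interface actually exposes:

[code]
from concurrent.futures import ThreadPoolExecutor

def trigger_rescue_boot(host: str) -> None:
    # Hypothetical stand-in: tell the machine's remote-management
    # interface to reboot into the rescue image pulled from the SAN.
    print(f"rebooting {host} into the rescue image...")

# Dispatching to 100 machines takes seconds; the ~30-minute image pull
# then runs on all of them concurrently.
hosts = [f"station-display-{n:03}" for n in range(100)]
with ThreadPoolExecutor(max_workers=100) as pool:
    list(pool.map(trigger_rescue_boot, hosts))
[/code]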

[quote]Additionally, it's pretty important for some of these systems to have both internal and external connectivity. An example is a doctor sending an email to their assistant, who then may need to contact a patient. They could have two systems with an air gap, but that's pretty inefficient and adds other expenses.[/quote]

It's been a couple of years since I worked in a hospital, but we did indeed have two completely separate systems with an air gap (university hospital) or no internet access at all (public hospital). The reason being that, short of researching a scientific paper, there is really no good (or even legitimate) reason for you to access the internet, and there is risk involved with nothing to gain.

You wouldn't have a scenario like the one above, since most doctors hardly know how to write emails, and assistants... phew, I'm trying hard not to get insulting. Let's say that, with very, very few exceptions, assistants (the ones you get in a hospital, anyway) aren't as splendid as you might think. One would believe that writing three lines in one's native language with fewer than two spelling errors is an ability possessed by most human beings. One would believe that, given two simple, unrelated tasks, the average human being is able to fulfill both, one after the other. One would believe that after typing "laparoscopy" for the 500th time, you know how to spell it. One would believe that the third request to "please use the auto-spellchecker" wouldn't result in the reply: "Oh, you meant every time?" They're all wrong assumptions.

But anyway, you really don't want to send anything patient-related via email under our privacy laws (your mileage may vary in the UK). You might have a centralized database with patient data (I wouldn't know about the UK, but I know, e.g., Sweden already had that 15+ years ago, with zero security; seemed to be no problem?), but we certainly don't have any such thing. So... no need for zee internets, really.

Billing is the only thing that is in some way centralized (insofar as you need to contact the insurers), but it is something that could easily, and with no loss, be done by a single machine in the basement with a VPN connection to another one at the insurer, or even by calling the other computer over a landline every day at midnight. No, I'm not joking. I mean, it takes one and a half years before the insurance company pays, so why bother about a couple of hours? This doesn't need to be "realtime"; that's complete bollocks. There is zero difference between transmitting live, within the second, and transmitting tomorrow at midnight. Not everything needs to be live and online.
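A sketch of that nightly-batch idea, with made-up paths and endpoint, and rsync over the VPN standing in for whatever transfer protocol the insurer actually accepts:

[code]
import subprocess
from datetime import date

# The day's billing records are collected into one batch file, then
# pushed in a single transfer, e.g. from cron at midnight. Nothing here
# needs to be "live".
batch = f"/var/billing/outbox/{date.today().isoformat()}.csv"
remote = "billing@10.8.0.2:/incoming/"  # the insurer's peer on the VPN

subprocess.run(["rsync", "--archive", batch, remote], check=True)
[/code]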

On one occasion, I've seen internet access and access to billing data on the same computer, which, too, struck me as fucking crazy. Billing means patient names, procedures, and disorders. ICD/OPS-encoded, sure, but the codes are publicly available on the internet. While the odds are minimal that someone hacks one such particular computer and uses it to pull patient data this way, the mere fact that it is possible at least in principle (when it need not be) is scary.
