Theoretical or practical safety of binary (www) plugins?


I never thought about it before, so I would like to ask about this case.

Let's assume that instead of loading JavaScript code into a browser, you download binary compiled C/asm modules (in the form of DLLs or something like that). Such a DLL would be restricted to modifying only the client area of the web browser, but could use the full abilities of the processor and GPU.

Can it be done safely (on Windows especially), or are there some basic problems with making it safe that disqualify this technology?

 


I'm not sure what you're asking, but you can't run a binary and you can't access the whole file system with JavaScript.

If you meant replacing the content of a .js file with the content of an executable file, the browser won't recognize the file as valid JavaScript code and will throw it away immediately. And even if somehow the file is not discarded, it won't be executed by starting a new process in the OS; it will be interpreted as JavaScript code and will run contained by JavaScript's security rules.

If you meant downloading a file from a page like you do every day, JavaScript is not concerned with that, and the browser won't put any restriction on it. If you downloaded a file, you did it manually; if you run that file from the browser's download list, you're doing it manually. The best thing the browser can do is warn you about the possible problems of running an executable file you just downloaded.

If you configure the browser to download and open everything automatically, and you download and run malicious software, you're still kind of doing it manually.

When the user does something, it is his/her fault, and then the security problem is much bigger than the browser aspects of security.


Microsoft did it with ActiveX, which was... bad, very bad.

Not only is it a huge security risk, but on top of that you'd have to provide different executables, at least for x86 and ARM.


Quote:
I'm not sure what you're asking, but you can't run a binary and you can't access the whole file system with JavaScript. [...]

 

I'm asking about a different thing: a solution where you would use native binary plugins (some kind of DLLs), though restricted, instead of interpreted (or at least non-native) things like JS or Java applets.

I'm curious whether it could work safely, and if not, for what reasons.


Quote:
Microsoft did it with ActiveX, which was... bad, very bad. [...]

Yeah, I heard about that (though I was never interested in it, as it was already obsolete). But I'm asking here whether such reasons for being unsafe are fundamental, or whether it could be done safely.

The other question is whether it is worth doing at all, given, for example, the reason you mentioned, since that is a major inconvenience.


Apart from the ancient ActiveX, there have been some more modern attempts to bring something like that, but I don't think they were widely accepted: Chrome NaCl (Native Client). http://en.wikipedia.org/wiki/Google_Native_Client

I think it does exactly what you are asking and has decent security.

The major problems with this approach are: different architectures (at least ARM and x86 are a must) and the fact that people write terrible code. For JavaScript it took decades until it became something usable and not so buggy (arguably), so for a native technology to be implemented correctly by more than one vendor it will take comparable time.

Of course, out of necessity some techniques emerged, such as asm.js, in which JavaScript is used in a low-level way, such that JITs can actually compile it to native code with good performance.
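To give a feel for the asm.js idea mentioned above: the "use asm" directive plus integer coercions like `|0` tell the JIT the types in advance, so it can compile the function to native code up front. A minimal sketch (illustrative only; a fully validating asm.js module has more requirements than shown here):

```javascript
// Minimal asm.js-style module: the "use asm" directive and |0 coercions
// signal to the JIT that these are int32 operations it can compile
// ahead of time instead of interpreting dynamically-typed JavaScript.
function AsmMath(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;            // coerce argument to int32
    b = b | 0;
    return (a + b) | 0;   // int32 addition, no boxing, no floating point
  }
  return { add: add };
}

var math = AsmMath();     // still runs as plain JavaScript if asm.js is unsupported
console.log(math.add(2, 3)); // 5
```

Because asm.js is a strict subset of JavaScript, the same module runs unchanged (just slower) in engines that don't recognize "use asm", which is how it avoids the portability problems of shipping native binaries.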


I'm pretty sure that there exists a C-to-JavaScript converter.

Now, I'm also pretty sure browsers can convert JavaScript to machine code using JIT technology.

So essentially what you describe is already possible. You write C, and at least some of it will be executed as direct machine code.

Obviously this is probably inefficient, because it goes through JavaScript as an intermediate step, and JIT compilation is not as optimized as offline optimized compilation, but you get the idea. I'd expect the machine code to be generated or checked by a trusted party, though (which is the case with JIT, as it's done by the browser).


@up

I was not asking about JS (though that is interesting too). I'm more interested in whether such a native code module can be done safely, I mean safely restricted. For example, when you run assembly code you have the addressing abilities of assembly language: you can read from any address and write to any address, so you can possibly break the whole system. Though you could probably disable these reading and writing abilities in such a plugin by setting the properties of the virtual pages of RAM, or something; I don't know.

The other question is whether it is worth doing. The advantage would be that surfers could safely run and test some code, like desktop games, with no need to run it unsafely in the system. A bit useful, but overall I'm not sure this would be worth it.

Edited by fir


Quote:
Apart from the ancient ActiveX, there have been some more modern attempts to bring something like that [...]: Chrome NaCl (Native Client). [...]

Interesting; yes, this is something like what I'm talking about.

I'm curious whether games made in JS or as Java applets are much slower than that. I heard that JS is about 5 times slower than native code; Java should probably be faster. I still don't know these things, though they are probably well known.

Depending on how you define 'safe', I would argue that neither can be done safely.

As was pointed out, plugins are bad from a security perspective; creative attackers can find and exploit something given enough time and incentive. Plugins are usually treated with more visible security warnings since they carry a very high risk of a successful attack. The assumption is that if the user is willing to run the plugin, they trust it.

JavaScript theoretically runs in a sandbox, but in practice every browser has had some severe security exploits when creative attackers found a way outside the sandbox. Even when a script is only allowed to play within the sandbox, a malicious script can redirect the browser to do something that is within bounds, such as pulling a request from another web page or redirecting content from one DOM object to another, which results in an undesirable effect. Malicious scripts are common in XSS attacks, which run inside the established sandbox. That also boils down to trust: either you trust a web page to run in your browser, or you don't.

As for the security bulletins and the big browser bugs: a large number are from when attackers find ways to bypass user notification. If you get a big red warning that says "This site is probably a fraud" but then enter your credit card number, that is your fault. But when the attacker finds a way to bypass the sandbox, to install a plugin or run a script without your permission, that is normally a bug in the browser. It becomes a bit of a gray area when web sites pull in resources from twenty other sources, which in turn pull from more sources, spreading out like a cancer, where loading one web site triggers web requests to a bunch of unknown sources, similar to what many advertisement scripts do today. You requested the page, and unless you blocked it directly, it is assumed that you wanted the content...


Quote:
I'm curious whether games made in JS or as Java applets are much slower than that. I heard that JS is about 5 times slower than native code [...]

What you heard is random hearsay about various other projects, not your projects.

JavaScript is a very different thing from Java applets. The name is only similar due to marketers.

Any tales of "5 times slower" are going to depend on details that are not provided. There are programmers who just keep guessing and hoping, and there are programmers who use science, as in computer science.

Take measurements. Form a hypothesis based on the measurements. Make a test. Take new measurements based on the test. Then make decisions based on the test results. You don't say "I heard someone once took measurements on some other projects, so I'm going to blindly assume that is my problem as well."


Since we are in computer science, it works like this:

I measured the program to be slow due to X. I suspect that the problem is Y. I can implement change Z which will correct the problem because of specific reasons that I can list. After implementing the change, I can measure again and get results X' which I can compare against X. If X' is better than X I will submit the change and accept that Y was correct. If not, I will assume either Y or Z was wrong and try again.


Yes, scripted code sometimes runs slower than compiled code. But it is very likely that isn't the problem. Unless it is. Your computer can do billions of instructions every second. Does your code require billions of instructions? I've seen computer vision algorithms written in scripted languages that ran in real time on 300 MHz computers. Chances are good that whatever bottleneck you are facing, it isn't the fact that you are using a scripted language.
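The measure-first approach above can be as simple as timing the suspected hot function before blaming the language. A sketch in Node.js (`sumOfSquares` is a hypothetical stand-in for whatever your real bottleneck candidate is):

```javascript
// Measure first, guess later: time a candidate hot loop and get a number
// before assuming "script vs native" is the problem.
// sumOfSquares is a hypothetical stand-in for real work.
function sumOfSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i * i;
  return total;
}

const t0 = process.hrtime.bigint();   // Node.js high-resolution clock
const result = sumOfSquares(100000);
const t1 = process.hrtime.bigint();

console.log(result);                  // 333328333350000
console.log(`took ${(t1 - t0) / 1000n} us`);
```

With a concrete measurement in hand you can form the hypothesis, make the change, and measure again, exactly as described above.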


Quote:
Depending on how you define 'safe', I would argue that neither can be done safely. [...]
Windows programs are strangely unsafe anyway. For example, is it really often necessary to grant an application the ability to write data to folders other than its own, or to change global settings, etc.?


Quote:
I'm curious whether games made in JS or as Java applets are much slower than that.
This is why The Code Deity invented Benchmarks and gave the Word to the People so that they might spread the Benchmarks to all that they might know Her grace and majesty.

Here's one of Box2D that literally took 10 seconds to find.


 

Quote:
This is why The Code Deity invented Benchmarks [...] Here's one of Box2D that literally took 10 seconds to find.

 

 

Very interesting; close to what I heard, but a bit more in-depth.

JavaScript is slower by a couple of times (say about 7x, that order of magnitude), which is fast, quite fast. Java is slower by 2x here (though in my own tests Java is sometimes faster; some number crunching I once tested in Java was as fast as in C); anyway, it is also fast.

ActionScript is faster than JavaScript, and asm.js, which is a bit unknown to me, is about like Java.

So IMO it is worth investing in JavaScript, as it is easy to use and interesting.


In answer to the thread title:

Anything that is transported from the internet onto a machine and executed there natively, just in place, is a security risk. Only in a closed environment, and for a defined set of commands, may it have a lower risk, because it is under the control of the management of that closed environment.


Quote:
Anything that is transported from the internet onto a machine and executed there natively, just in place, is a security risk. [...]

Sure, but the question is whether binary code could be sandboxed as safely as bytecode or script code.



That is another, very different question.

Bytecode interpreters use a virtual machine to run the code. If you want native code to run in a VM, you lose the speed benefits that may exist compared to a Java VM or PHP or Perl.

All the currently existing bytecode languages, with their respective VMs, have a long time of development behind them. If a solution does not give any benefit over the already available languages, no one will work on realizing such a technique.


Quote:
Bytecode interpreters use a virtual machine to run the code. If you want native code to run in a VM, you lose the speed benefits [...]

 

As you see, Google Native Client is doing this. They say:

NaCl uses software fault isolation for sandboxing on x86-64 and ARM.[17] The x86-32 implementation of Native Client is notable for its novel sandboxing method, which makes use of the x86 architecture's rarely-used segmentation facility.[18] Native Client sets up x86 segments to restrict the memory range that the sandboxed code can access. It uses a code verifier to prevent use of unsafe instructions such as those that perform system calls. To prevent the code from jumping to an unsafe instruction hidden in the middle of a safe instruction, Native Client requires that all indirect jumps be jumps to the start of 32-byte-aligned blocks, and instructions are not allowed to straddle these blocks.[18] Because of these constraints, C/C++ code must be recompiled to run under Native Client, which provides customized versions of the GNU toolchain, specifically GCC and binutils, as well as LLVM.

(I do not understand what's going on with these 32-byte-aligned blocks, but anyway it seems I can answer my own question: it can be done safely, though with some slowdown. A benchmark mentioned in another thread said it was, for example, a 30% slowdown.) Anyway, I'm not sure it is all that usable. Today's internet already seems full of so many technologies: Java, Flash, JavaScript, and some other things (I know a little about all of these).
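The 32-byte-aligned-block rule is easier to see as arithmetic: before any computed (indirect) jump, the sandbox masks off the low 5 bits of the target address, so control can only ever land at the start of a 32-byte "bundle" that the verifier has already inspected. A conceptual sketch of that masking, written as ordinary JavaScript (not real NaCl code):

```javascript
// Conceptual sketch of NaCl-style jump-target masking (not real NaCl code).
// Code is verified in 32-byte "bundles"; every indirect jump is preceded by
// an AND that clears the low 5 bits of the target, so execution can only
// land on a bundle boundary the verifier has already checked.
const BUNDLE_SIZE = 32;

function maskJumpTarget(addr) {
  return addr & ~(BUNDLE_SIZE - 1); // clear low 5 bits -> 32-byte aligned
}

console.log(maskJumpTarget(100)); // 96: forced back to a bundle boundary
console.log(maskJumpTarget(96));  // 96: already aligned, unchanged
```

This is why a jump can never reach an unsafe instruction "hidden in the middle" of a safe one: anything not at a bundle boundary is simply unreachable via indirect jumps.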


Not using a VM but trying to isolate the plugin is a security risk. Using the segment registers makes assumptions about the processors that run the software. On Intel processors this may work, even though it is something no one should rely on. Being forced to use a different toolchain and a 32-bit environment gives you lots of trouble while creating the software.

If you develop the software and do your tests on a simple PC, and afterwards you must cross-compile the software to put it on the target, you have added the first source of faults to your development. Reducing the default data width from a 64-bit environment to a 32-bit environment adds another source of faults.


Sure, but I know all that; no need to write it to me, as I already know it.

As far as I'm concerned, this thread can be closed now.


If you know all that... why did you start the thread?

Ahhhh, I know. This is an intelligence test... you are an alien checking the situation on Earth. But I tricked you.


Quote:
If you know all that... why did you start the thread? [...]

 

I was asking about something different from your answers, for example the exact technical reasons why it cannot be sandboxed safely. But it turned out that it can, so it doesn't matter.


From the starter post:

... only to modify client area of web browser but could use full processor and gpu abilities ...

This cannot be done safely.

Even Google Native Client has speed implications and cannot access all OS resources. You talked about "full processor and gpu abilities", and there is the difference. If you allow a loss of speed, you can safely execute nearly any style of code, even native code. In any other case you start to be "unsafe".

The more "unsafe" the execution environment is, the more speed can be achieved. Safety mechanisms always slow things down.


Quote:
If you allow a loss of speed, you can safely execute nearly any style of code, even native code. [...] Safety mechanisms always slow things down.

 

Probably, but the x86 memory protection mechanisms (you know, setting attributes on virtual memory pages to make them not readable, not writable, not executable) are probably quite cheap, so there is probably some way to build this binary sandboxing protection without too much cost. (It doesn't matter; I'm a bit tired as usual and have moved back to other, more gamedev topics now.)


If you make a memory page "not executable", it cannot be executed. That is the very opposite of "native execution". :D

It isn't that easy. If it were, it would have been done already, because the whole internet world depends on the ability to safely execute some code in browsers. And since that has not been achieved, since you cannot surf without being attacked by trojans and viruses all the time, you can see that it is not this easy.
