Digital signing of Ruby scripts

A primary scenario for my RubyCLR bridge is to enable folks to build rich client applications on top of the .NET libraries. One potential blocking issue is dealing with users tampering with .rb scripts on the client. I was wondering if folks have spent some time thinking about how to package up Ruby applications and digitally sign them.

The Monad shell team (the next-generation Windows shell, which uses an object piping metaphor as opposed to the more traditional text piping metaphor of *nix shells) already has a code signing policy in place for Monad scripts, as well as administrator-configurable policies for script execution.

Any and all thoughts around this would be greatly appreciated.

Thanks,
-John

Are you trying to address security concerns or copy protection /
digital rights?

In terms of copy protection, I see the issue as irrelevant - even
without the ruby bridge, anyone can do whatever they want with the .NET
assemblies (especially since they’re so easy to disassemble).

In terms of security, how is this different from the security of a
compiled program? The two standard methods used to allow users to run
them securely are either a) trust of the author, often combined with
code signing or b) running in a sandbox. Both should work equally well
for Ruby, even with full source access.

It’s not copy protection that I’m worried about. Nor is it someone being able to look at the source code. What I’m worried about is someone tampering with the source code. So what I’m interested in is code signing of Ruby scripts combined with a policy enforcement mechanism (e.g. only an admin can install the Ruby interpreter, which is signed, and only an admin can define the execution policy of the Ruby interpreter, which can range from “run all scripts” to “run only scripts whose public keys are defined by the admin”).
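
To make that concrete, here is a rough sketch of the kind of check I have in mind - the policy names, file locations, and key handling are all invented for illustration, not a real design:

  require 'openssl'

  # Hypothetical policy levels, loosely modeled on Monad's execution policy.
  # The file locations are placeholders; the point is that only an admin
  # can write to them.
  POLICY_FILE     = '/etc/ruby_exec_policy'    # e.g. "run_all" or "run_signed_only"
  TRUSTED_KEY_DIR = '/etc/ruby_trusted_keys'   # admin-installed public keys (*.pem)

  def execution_policy
    File.exist?(POLICY_FILE) ? File.read(POLICY_FILE).strip : 'run_all'
  end

  def trusted_keys
    Dir[File.join(TRUSTED_KEY_DIR, '*.pem')].map do |path|
      OpenSSL::PKey::RSA.new(File.read(path))
    end
  end

  # script_text is the script source; signature is a detached RSA signature
  # produced by whoever published the script.
  def allowed_to_run?(script_text, signature)
    case execution_policy
    when 'run_all'
      true
    when 'run_signed_only'
      trusted_keys.any? do |key|
        key.verify(OpenSSL::Digest::SHA1.new, signature, script_text)
      end
    else
      false
    end
  end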

Now, maybe rich client applications built using Ruby will be more like web pages - the real business logic lives on the server with only lightweight validation logic on the client. However, it would be a shame to limit Ruby apps to just that.

-John

John L. wrote:

Now, maybe rich client applications built using Ruby will be more like web pages - the real business logic lives on the server with only lightweight validation logic on the client. However, it would be a shame to limit Ruby apps to just that.

-John
http://www.iunknown.com

John,

One solution may be to compile a small app that takes an MD5, SHA, or
some other checksum of the ruby code and only executes it if it is in an
internal hash of allowed files. You could have user-based hashes of
allowed files based on who is logged in. Of course you will have to
rebuild this app every time you change the ruby code but that could be
automated. But a user could run the ruby code directly unless you build
in some dependency to the compiled app. If they can see the source code,
they can copy it, tamper with it, and run it.
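
Roughly along these lines, say - the paths and digests below are placeholders, and the table would be regenerated (and the launcher rebuilt) whenever the ruby code changes:

  require 'digest/md5'

  # Placeholder allowlist: script path => expected MD5 of its contents.
  # A per-user table keyed on the logged-in user would work the same way.
  ALLOWED = {
    'app/main.rb'   => 'replace-with-real-md5',
    'app/helper.rb' => 'replace-with-real-md5',
  }

  def run_if_allowed(path)
    digest = Digest::MD5.hexdigest(File.read(path))
    if ALLOWED[path] == digest
      load path
    else
      abort "#{path} is not on the allowed list or has been modified"
    end
  end

  run_if_allowed('app/main.rb')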

Dan

I was thinking about something a bit more onerous - allowing administrators to define machine-wide policies for execution of Ruby scripts. It’s either that or the guidance must be “write your app in such a way that you assume that all clients are compromised”. But then again, that’s the HTML model, so folks are pretty used to that.

-John

John,

On Mar 29, 2006, at 5:21 PM, John L. wrote:

The Monad shell team (the next-generation Windows shell, which uses an object piping metaphor as opposed to the more traditional text piping metaphor of *nix shells) already has a code signing policy in place for Monad scripts, as well as administrator-configurable policies for script execution.

Any and all thoughts around this would be greatly appreciated.

Rubygems now has the ability to let you sign your gems…

I’m not sure how it is implemented, but it might be what you’re
looking for.
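
If memory serves, the rough workflow is to build a certificate once, point the gemspec at the key and cert, and have users install with a strict trust policy - something like the following (untested, names purely illustrative):

  # Build a signing key and certificate once (this creates
  # gem-private_key.pem and gem-public_cert.pem):
  #
  #   gem cert --build [email protected]
  #
  # Reference them from the gemspec so the built gem is signed:
  Gem::Specification.new do |s|
    s.name        = 'my_rubyclr_app'          # illustrative gem name
    s.version     = '1.0.0'
    s.summary     = 'Example of a signed gem'
    s.files       = ['lib/my_rubyclr_app.rb']
    s.signing_key = 'gem-private_key.pem'
    s.cert_chain  = ['gem-public_cert.pem']
  end
  #
  # Users who want the signature enforced install with a trust policy:
  #
  #   gem install my_rubyclr_app --trust-policy HighSecurity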


Eric H. - [email protected] - http://blog.segment7.net
This implementation is HODEL-HASH-9600 compliant

http://trackmap.robotcoop.com

I still don’t understand. Who are you trying to protect - a user from
running a malicious (or tampered with) ruby script? If so, as I said,
this is no different than protecting a user from running a trojan
compiled file - people either trust the author (and hopefully use code
signing), or run the code in a sandbox.

In terms of ensuring that only admins can install the ruby
executable/interpreter - this is currently impossible, and likely will
remain so. Even if you mark your exe/interpreter to require admin
privs to install, what’s to stop anyone else from creating their own
exe/interpreter without that restriction? It’s essentially the old
copy protection / DRM issue, which all experts agree can always be
defeated (at least short of a hardware implementation).

It’s actually the other way around - can the author of the program trust the user of the program? Think about a corporate environment where you’re worried about employees hacking your system. In today’s SOX-compliance-driven world it’s not an unreasonable thing to worry about.

DRM can be used for “good” or “evil”. In a corporate setting, the user doesn’t own the computer - it’s the company’s property. So in that case, the company should be able to define what can and cannot execute on the machine. So while today this isn’t a reasonable expectation, in the future having the ability to lock down a machine so that it only executes code that was signed by an approved list of certificate holders seems like a really good way to avoid problems like trusted insiders hacking your system.

-John

I have no ethical problem with DRM. I’m simply coming from a mathematical / technical perspective.

Some of the greatest minds have tried working on it, and the conclusion is unanimous: you can make it more annoying or cumbersome for someone to duplicate or modify the software, but you can’t make it impossible.

Again, you can put whatever limitations you want into your interpreter - but I can always modify the binary (google IDA Pro) or create my own.

On 3/30/06, John L. [email protected] wrote:

the company should be able to define what can and cannot execute on the machine. So while today this isn’t a reasonable expectation, in the future having the ability to lock down a machine so that it only executes code that was signed by an approved list of certificate holders seems like a really good way to avoid problems like trusted insiders hacking your system.

Given that you’re bridging to .NET, could you use that to your advantage? Do some sort of a hash, or simple signature, of the ruby code, which gets passed through the bridge, and then let .NET handle that part. Hmm…more random musing here…

What if you took the ruby code and compiled it into a .NET exe as an embedded resource (far from secure, but it would allow you to use strong name keys or something similar on the entire assembly), then used a generic Main function that either embeds a ruby interpreter or ‘forks’ one out to call the code? I realize this isn’t much different than exerb or the like, but being in a .NET assembly could allow you the strong naming and such.

-John


The idea I am bouncing around is to build a version of the ruby interpreter that has an embedded public key. Then all ruby code would carry, in a comment header/footer, a signature that was generated with the private key.

Keeping people from loading their own version of ruby remains a problem, but this would remove the issue of inserting code into the valid ruby interpreter.
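
Something like this on the loading side, say - the key file name, marker, and layout are just for illustration, and in the real thing the public key would be compiled into the interpreter rather than read from disk:

  require 'openssl'
  require 'base64'

  # In a real build this key would be baked into the interpreter binary.
  EMBEDDED_PUBLIC_KEY = OpenSSL::PKey::RSA.new(File.read('trusted_pub.pem'))

  SIGNATURE_MARKER = '# SIGNATURE: '

  # Expects the script to end with a comment of the form
  #   # SIGNATURE: <base64 RSA signature over everything before the marker>
  def verify_and_load(path)
    source = File.read(path)
    body, marker, sig_line = source.rpartition(SIGNATURE_MARKER)
    signature = Base64.decode64(sig_line)
    unless marker == SIGNATURE_MARKER &&
           EMBEDDED_PUBLIC_KEY.verify(OpenSSL::Digest::SHA1.new, signature, body)
      abort "#{path}: signature missing or invalid - refusing to run"
    end
    eval(body, TOPLEVEL_BINDING, path)
  end

The signing side would just be the mirror image: sign the file with the private key and append the comment before shipping.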

Is this what you have in mind? This is not a bad idea – I can see where it could be necessary/worthwhile in some client code situations. Not trying to hide the code, but to make tampering with the code increasingly difficult. Of course, if system security on the machine is compromised (to the extent that someone can change a file they should not have permission to change), then this is likely wasted effort.

Validating that a particular client-installed version of ruby is the correct one is (IMHO) an impossible task, as you would have to encode a “secret” into the interpreter (which could then be decompiled and made unsecret) or have some hardware-level support which does not currently exist on PC platforms.

pth