On the total nondisclosure of the 8/9/06 security vulnerability

On 8/10/06, Francis C. [email protected] wrote:

[…] then come fully clean (including exploit details) once the patch is […]
[…] vulnerabilities doesn’t keep enough people from buying their products […]
But with open source software, we expect well-audited patches to new […]



I completely agree with you Francis; this is very immature on the part of
the team. And it especially hurts people like me who are trying to
convince the powers that be in my company how good RoR is. Big
corporations are hooked on Java/C#; not that those don’t have security
problems or are better languages, but they give a sufficient description
of their vulnerabilities and there is one place to call for support. So
for RoR to be adopted in big corporations this kind of behaviour is going
to leave a bad impression.

your thoughts???

-daya

On 8/10/06, Daniel H. [email protected] wrote:

Nick wrote:

Daniel H. wrote:

Is that the same security flaw, or another one entirely?

Looks like a new one inadvertently introduced as part of 1.1.5. This […]

what? does this mean there is one more security hole as a result of
applying 1.1.5 ??

dseverin wrote:

Hm, there seems to already be a proof-of-concept exploit
(found on the ror2ru group:
http://groups.google.com/group/ror2ru/browse_thread/thread/e654a6ddedc29e7e/7b90204e50bd7974
)

We see that nondisclosure is ineffective, since crackers don’t care
about keeping the secret.

And for Rails 1.1.5 (you say it is fixed?), http://127.0.0.1:3000/cgi
brings down routing totally.

Are these the only places affected by the “security patch”, or should we
expect more unknown, so-called “fixed” bugs???

A few minutes ago, on the Rails weblog:

“the 1.1.5 update from yesterday only partly closed the hole (getting
rid of the worst data loss trigger). After learning more about the
extent of the problem, we’ve now put together a 1.1.6 release that
completely closes all elements of the hole”

This would have been fixed in MINUTES instead of days if the
vulnerability had been fully disclosed.

Security through obscurity is no security at all.

And there was already a ticket “#5408 Unhandled urls can cause loading
of arbitrary ruby files” on the Rails TRAC from 06/16 about the
mentioned issues…

I noticed that TRAC was down most of yesterday. Intentionally? So that
people couldn’t go read the old tickets?

Paul


– Paul L., Senior Software Engineer –
— Networked Knowledge Systems —
---- P.O. Box 20772 Tampa, FL. 33622-0772 ----
----- (813)594-0064 Voice (813)594-0045 FAX -----
------ [email protected] ------



Apply the latest patch for your version of Rails.

On Aug 9, 2006, at 9:49 PM, Sam Degres wrote:

You make some good points and have valid concerns. However, the fact
remains that Rails is a very new framework and prone to security
issues.

Every bit of software on a network is prone to security issues. I’d
hazard a guess that there are still zero-day security bugs in every
fully-patched TCP stack running on a machine on the Internet.

The place where it matters that Rails is “very new” is that the team
doesn’t appear to have a routine down for dealing with this sort of
thing. That said, Rails has moved beyond the audience of “a small
number of Rubyists that at least someone on the core team knows
either in person or online”. What works for word-of-mouth amongst
associates does not work for mass distribution.

Some personal observations*:

  1. The manner of the announcement probably had as much to do with
    people’s reactions as the content itself. The industry is used to
    “this release fixes critical security bugs, and is recommended for
    all users immediately”. The industry is not used to all-caps and words
    like “MANDATORY”. While experience tells me that the team is
    hunkered down working very hard at this (and probably wrote the
    announcement as an afterthought), the announcement gives the
    impression they’re all sitting in a room screaming “OH NOES!” and
    having seizures. The problem was exacerbated by the mystery behind
    the release: people are used to being told “update if you rely on
    the following features” or “this is a security update for everyone”.
    They aren’t used to “it’s a secret!” from people who are normally
    quite open. It isn’t clear that the secrecy bought anything anyway:
    half the people in the channel last night seemed to be diffing the
    new release against the last, and it sounds like the script kiddies
    were already trading exploit kits last night. Telling us what areas
    are affected wasn’t going to make the already-available exploit code
    be any more already available.

The solution is two-fold: first, the core team needs to tighten its
messages on things like this (should be trivial: it’s not like we
don’t all spend too much time writing on blogs and lists anyway).
And second, we as the community need to realize that security issues
are a fact of life, and patches (and rapid adoption of them) are a
necessity.

  2. For all that the release was described as a “drop-in” replacement,
    various people on the channel last night were pointing out breakage.
    This is both something to be avoided as much as possible and
    something that is somewhat unavoidable. With a platform as young yet
    complex as Rails (see below for why I claim that) things will break.
    There are ways to reduce the risk and impact of this problem: in
    code and in release process. I’ll talk to each in turn:

2.a. Complexity comes in with code. Rails is growing increasingly
complex: to some extent internally, but to a large extent based on
the plugins, generators, engines and customizations that people build
on top (and all through) the core distribution. Some people have
made the argument that Rails should add features and functionality:
e.g. implement a login system rather than having n subtly
incompatible ones out there to choose from. While doing that will
probably cut down on the problem in the short term, I suspect it will
make things worse in the long term (while also cutting down on
flexibility). Nevertheless, Rails isn’t just the core distribution,
but a platform, and for users the complexity comes in at the platform
level. As the platform and community grow, the number of things
that a “minor” Rails release can break will grow faster than the core
code, and faster than it will be possible to test with the current
resources.

I think the challenge here is to find ways to reduce the dependencies
and gotchas between the package and the platform. Mechanisms like
deprecation marking are good. “Don’t rely on this feature to behave
like this [forever / ever]” used to be transmitted by word of mouth
– it should probably be more “officially” stated. If someone has
ideas on how to programmatically manage plugin conflicts, this would
be a good time to speak up.
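
To make “deprecation marking” concrete, here is a minimal sketch of the
kind of officially stated warning meant above; the Deprecation module and
the old_sum/new_sum names are hypothetical illustrations, not an existing
Rails API:

  # Hypothetical helper: mark an old entry point so callers get notice in
  # their logs instead of by word of mouth.
  module Deprecation
    def self.warn(feature, message)
      Kernel.warn "DEPRECATION WARNING: #{feature} -- #{message} " \
                  "(called from #{caller[1] || caller[0]})"
    end
  end

  # An old method kept for compatibility, but flagged as going away.
  def old_sum(a, b)
    Deprecation.warn "old_sum", "use new_sum instead; this will be removed in the next major release"
    a + b
  end

  old_sum(1, 2)   # still works, but prints the warning to stderr

Running that once prints the warning with the caller’s file and line,
which is the kind of explicit notice that word of mouth doesn’t give you.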

2.b. Right now the release process appears to have a trunk (edge) and
a single branch (release). A side effect of this is that any
security or bug fix release is going to pick up other changes that
happened along the way. For a team running a production app that
hasn’t yet updated from, say, 1.1.2 to 1.1.4, the jump to 1.1.5
becomes even riskier. The only real way to fix this is to make the
branching a little more complicated so as to allow very specific
patches. For example, the Rails core team might elect to support
security fixes on a specific number of major and bug fix releases
back (or a specific length of time back), and then issue security-
only patches to that code (in addition to bug fixes).

As a more specific example (for illustrative purposes only): 3 bug
fix releases and 1 major release. As of two days ago that would be
1.0, 1.1.2, 1.1.3 and 1.1.4. When this security issue came up the
team would release 1.0.0.1, 1.1.2.1, 1.1.3.1, and 1.1.4.1. The
final .1 would indicate a security patch-level only, and the patch
for that would not include any code changes not needed for the
security patch. 1.1.5.0, 1.1.6.0, etc. would be bug fixes, picking
up the security patchlevel in the 1.1.x trunk, and when 1.1.5 comes
out, 1.1.2 drops off the support radar. When 2.0 comes out the team
could drop support for anything before 1.1.final (1 major release) or
1.1.3 (3 bug fix releases).

To visualize this (you use a fixed-width font for mail, don’t you):

------------------------------------------------- top level trunk (features)
  \
   ---------------------------------------------- major release trunk (bug fixes)
      \
       initial release - * - bug fix release - * - *

* = security release

Note that it’s only the things on the last line that actually get
released. With a 3-level structure like this, “edge” becomes
confusing: does it mean the top level trunk? the release trunk? Or
maybe there are two edges to freeze. (Or, more likely, this problem
suggests against using the example layout described here.)

Obviously, this can get pretty heinous: multiple commits are a pain
(and error prone if you don’t test and track stuff), and maintaining
multiple branches is its own pile of ennui. I’ve seen pathological
cases: vendors who maintain up to a branch per customer, where
security patches can take 6 months to get out the door while the
vendor gets around to patching each release. Rails’ customer needs
are not at that point, and if they ever get there the platform will
have reached the point of uselessness.

I don’t think the core team has the resources to do a full blown
legacy-support system here, but something is better than nothing.
The challenge is finding the “something” in a way that works with
“Getting Real”.

Again, the main point is to enable security-only releases while not
slowing down the rest.
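
On the consuming side, a scheme like that would let an application stay
on its feature release and pick up only the security patch-level. A
minimal sketch, assuming the gem-based loading that config/environment.rb
already does via RAILS_GEM_VERSION (the four-component version string is
purely illustrative of the hypothetical scheme above):

  # config/environment.rb (excerpt)
  # Pin the app to a specific Rails release. Under the branching scheme
  # sketched above, bumping only the final component (1.1.2 -> 1.1.2.1)
  # would pull in the security-only patch without any other 1.1.x changes.
  RAILS_GEM_VERSION = '1.1.2.1'

  # Bootstrap the Rails environment, frameworks, and default configuration
  require File.join(File.dirname(__FILE__), 'boot')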

  3. On a related note, I think network-based gem installs aren’t
    really useful in a situation like this. They’re finicky enough on
    their own - but when 100,000 people hit the server at the same time
    to download a critical release the system tends to fall over hard.
    Can we get a download page with a single downloadable file (gem?
    tgz? also a zip for windows?) containing all the core dependencies?
  * I do have some experience managing development and release for
    infrastructure software that was used by thousands of other people,
    had occasional security issues, and had complex compatibility
    requirements. That and $.50 will not buy you a cup of coffee at
    Starbucks.

On Aug 10, 2006, at 10:44 AM, David M. wrote:

dseverin wrote:

E.g. for Rails 1.1.4 a url like http://127.0.0.1:3000/breakpoint_client
can easily take your server down into an ever-waiting state (my app
went there when I tried).

This url takes down my server, running 1.1.5 and mongrel

Sadly on my production xserve running Apache mod_proxy to lighttpd,
the url:

/breakpoint_client

will hang that process indefinitely. We’re running with max-procs at
3. I haven’t tested (nor do I really want to on my production server)
to see if I can get all three processes to hang.

Repeatedly hammering this same server with combinations of the paths:

/cgi
/active_support/dependencies

- and -
/builder/blankslate

brings it to a deadlock. I’m assuming this is the same essential
issue as above. I wouldn’t know for sure though, because the server
stops writing to the log at some point during this process. ugh.

…and I thought I was in the clear after doing the upgrade to 1.1.5
yesterday :(

  - Stephen

Your fix

base.match(/\A#{Regexp.escape(extended_root)}\/*(?:#{file_kinds(:lib)*'|'})/) || base =~ %r{rails-[\d.]+/builtin}

works for us here for both Mongrel and WEBrick.
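
For anyone who wants to sanity-check the first half of that pattern
outside a running app, here is a rough standalone sketch of what it
allows and rejects; extended_root and the directory list below are
assumed stand-ins for what Rails’ internals would actually supply:

  # Stand-ins for values normally provided by Rails internals (assumed).
  extended_root = '/u/apps/myapp'
  lib_kinds     = %w(app components config lib vendor)   # hypothetical list

  pattern = /\A#{Regexp.escape(extended_root)}\/*(?:#{lib_kinds * '|'})/

  # Paths under the app root with a known kind should match; arbitrary
  # library paths (the attack vector) should not.
  puts(('/u/apps/myapp/lib/foo' =~ pattern) ? 'allowed' : 'rejected')   # allowed
  puts(('/usr/lib/ruby/1.8/cgi' =~ pattern) ? 'allowed' : 'rejected')   # rejected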

Thanks for the solution. If the dev site were up I’d say submit it as a
patch. :)

I’m no http server configuration expert but they seem pretty
configurable…
Is there not a way to band-aid the problem by hard-coding the server NOT
to
access these URLs from the front-end?

At least it would stop the script kiddies.

Microsoft and Sun went through growing pains with their technologies as
well. ‘Jakarta’ Struts handled early vulnerabilities in much the same
fashion. To expect 37signals and the Rails core team to respond in the
same way as a major vendor with thousands of employees, several hundred
of whom are dedicated solely to security, is ludicrous.

Bugs in software happen. Period.

The delay allowed them to coordinate the effort with shared hosting
companies and get the word out that there was a problem that required a
mandatory fix. Due to the particularly severe nature of the problem and
its ability to cause data loss, I think the Rails team did the best they
could by waiting 24 hours to disclose the problem.

The Rails team has shown great restraint in not getting into arguments
and flame wars, and has stated very maturely that it will communicate
future vulnerabilities in a more open fashion. So until you get your next
Microsoft monthly critical patch, relax, people.

-Steve
http://www.stevelongdo.com

On 8/10/06, linux user [email protected] wrote:

Cisco, and others that often face this situation don’t face a lot of […]
[…] as heck for the nontrivial amount of time it takes for a vendor to […]
[…] unit and regression testing. That means there is a significant-sized […]
[…] given a description of the vulnerability and postponed full technical
details for 24 hours to give people time to patch.
