Attachment_fu S3 uploads killing mongrel

I was wondering if anyone here has seen a similar error to this…

From mongrel.log

/usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:85:in `transaction': Transaction aborted (ActiveRecord::Transactions::TransactionError)
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `call'
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `each'
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:136:in `run'
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/command.rb:211:in `run'
    from /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:243
    from /usr/bin/mongrel_rails:16:in `load'
    from /usr/bin/mongrel_rails:16

This happens on attachment_fu uploads to S3 only in production
(attachment_fu uploads to S3 in development mode work fine). All my
other logs (production.log, Apache logs, etc.) are clean, and I haven't
been able to track down the source of the problem. Everything is
properly validated before the upload is called, and the records are
created in the database, but the image is never uploaded to S3 and I'm
getting the lockup shown above, which requires me to restart my
mongrels. I've been stuck on this for a good two weeks and haven't been
able to find any working solutions anywhere. I would greatly
appreciate any advice.

Code follows…

foo_controller.rb

def create
  begin
    @foo = Foo.new(params[:foo])
    respond_to do |format|
      if @foo.save
        format.html { redirect_to foo_url(@foo) }
      else
        format.html { render :action => "new" }
      end
    end
  rescue
    # Note: this bare rescue swallows the real exception; logging it
    # (e.g. logger.error($!.inspect)) would help pinpoint the S3 failure.
    render :action => "new"
  end
end

foo.rb

after_create :save_logo

I think we have a similar problem, but I'm on fcgi/lighttpd.

this is what i get:

EOFError (end of file reached):
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in `sysread'
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill'
/usr/local/lib/ruby/1.8/timeout.rb:56:in `timeout'
/usr/local/lib/ruby/1.8/timeout.rb:76:in `timeout'
/usr/local/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill'
/usr/local/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
/usr/local/lib/ruby/1.8/net/protocol.rb:126:in `readline'
/usr/local/lib/ruby/1.8/net/http.rb:2017:in `read_status_line'
/usr/local/lib/ruby/1.8/net/http.rb:2006:in `read_new'

We are using apache/mongrel on Amazon EC2, and the errors (EPIPE,
EOFError, etc.) also occur intermittently. We basically can't use
attachment_fu until this is solved.

BTW - we are currently generating four different-sized thumbnails
alongside the original, so five images are being stored for each upload.
Could it be timing out due to the number of uploads? Does anyone else
have a similar setup?

I have seen similar errors using aws:s3 without attachment_fu (though I
may not be on the latest aws:s3 version). I gave up trying to solve or
prevent it; it's so intermittent that I've begun to think the S3
connection may just flake out sometimes, so I added retry logic instead.
Since my uploads occur asynchronously, that works for me.
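
The retry approach described here can be sketched roughly like this (a minimal sketch; the `with_retries` helper, attempt counts, and backoff delays are illustrative names and numbers, not code from this thread):

```ruby
# Minimal retry wrapper for a flaky network call. Retries the block on
# the two errors reported in this thread (EOFError, Errno::EPIPE) with
# a short backoff, then re-raises once the attempt budget is spent.
def with_retries(max_attempts = 3)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue EOFError, Errno::EPIPE
    raise if attempts >= max_attempts
    sleep(0.1 * attempts)  # brief backoff before retrying
    retry
  end
end
```

In an asynchronous upload job you might then wrap the S3 call, e.g. `with_retries { S3Object.store(key, data, bucket) }`, possibly re-establishing the connection before each retry.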

-Andrew K.

On 6/14/07, Mark J. [email protected] wrote:





On 6/14/07, Andrew K. [email protected] wrote:

I have seen similar errors using aws:s3 without attachment_fu (though I may
not be on the latest aws:s3 version). I gave up trying to solve or prevent
it; it's so intermittent that I've begun to think the S3 connection may
just flake out sometimes, so I added retry logic instead.
Since my uploads occur asynchronously, that works for me.

I’ve mentioned it to Marcel. It’s definitely some intermittent bug,
probably at a lower level than AWS (his unit tests pass just fine).

However, doing a lot of s3 stuff with attachment_fu probably isn’t the
best thing either. It sure is convenient, but you’re tying up
precious rails processes uploading data to Amazon.
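
One way to stop tying up Rails processes is to defer the transfer to a background worker. A minimal sketch, assuming the upload can be enqueued (the queue, worker thread, and job lambda here are illustrative; in this era a job daemon such as BackgrounDRb, or a cron-polled queue table, would be the more usual choice):

```ruby
require 'thread'

# A single worker thread drains a queue of upload jobs, so the request
# cycle only pays for enqueueing, not for the S3 round trip.
UPLOAD_QUEUE = Queue.new

worker = Thread.new do
  loop do
    job = UPLOAD_QUEUE.pop
    break if job == :shutdown
    job.call  # e.g. push the file (and its thumbnails) to S3 here
  end
end

# In the controller you would enqueue instead of uploading inline:
uploaded = []
UPLOAD_QUEUE << lambda { uploaded << :logo }
UPLOAD_QUEUE << :shutdown
worker.join
```

The trade-off is that a save can no longer report an upload failure to the user synchronously, so the job needs its own error handling (or the retry logic mentioned above).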


Rick O.
http://lighthouseapp.com
http://weblog.techno-weenie.net
http://mephistoblog.com

FYI - I've been doing some further testing, and after adding
:persistent => false to Base.establish_connection! (in s3_backend.rb) we
have not seen any errors today (tested with images up to 5MB). I'm not
sure if this change is making the difference, or if S3 is just less flaky
today. Has anyone else received EPIPE or EOFError errors with
:persistent => false?
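
For reference, the change described amounts to something like this in the backend's connection setup (a sketch; the exact surrounding code in s3_backend.rb varies by plugin version, and the credential values are placeholders):

```ruby
# Disable persistent HTTP connections to S3, so each request opens a
# fresh socket instead of reusing one that Amazon may have dropped.
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'YOUR_ACCESS_KEY',   # placeholder credentials
  :secret_access_key => 'YOUR_SECRET_KEY',
  :persistent        => false
)
```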

I just asked Marcel about it, he said folks still reported errors
after trying that.


Rick O.
http://lighthouseapp.com
http://weblog.techno-weenie.net
http://mephistoblog.com

Good point, Rick. Since we are running on Amazon EC2, the file is
uploaded to the server once, and the transfer of the image and generated
thumbnails between EC2 and S3 is free and hopefully fast (= shorter
blocking).

Attachment_fu really is convenient; it saves us quite a bit of
development time! Hopefully we can narrow this error down.


Hi, I also get the same error after being logged into my S3 account via
s3sh for some time.

A few weeks ago, I was messing around in my account (creating and
deleting buckets, uploading files, etc.) via the shell, and after about
10 minutes I was getting the EPIPE error.

I also found this blog, http://flexrails.blogspot.com/, which says the
patch is:

Go to Mysql.rb => def write(data)

Add the following to the end of the method:

  rescue
    errno = Error::CR_SERVER_LOST
    raise Error::new(errno, Error::err(errno))
  end

The author reported that he doesn't get the error on his Windows
machine; the version of mysql.rb that ships with Rails is slightly
different on Windows compared to the Linux version.

On 6/15/07, mixplate [email protected] wrote:

go to Mysql.rb => def write(data)
linux version.

I'm quite sure the AWS/S3 gem doesn't use MySQL. From the stack
traces I've seen, it seems to originate in the Ruby standard net/http
library.


Rick O.
http://lighthouseapp.com
http://weblog.techno-weenie.net
http://mephistoblog.com

mixplate wrote:

I think we have a similar problem, but I'm on fcgi/lighttpd.

this is what i get:

EOFError (end of file reached):
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in `sysread'
/usr/local/lib/ruby/1.8/net/protocol.rb:133:in `rbuf_fill'
/usr/local/lib/ruby/1.8/timeout.rb:56:in `timeout'
/usr/local/lib/ruby/1.8/timeout.rb:76:in `timeout'
/usr/local/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill'
/usr/local/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
/usr/local/lib/ruby/1.8/net/protocol.rb:126:in `readline'
/usr/local/lib/ruby/1.8/net/http.rb:2017:in `read_status_line'
/usr/local/lib/ruby/1.8/net/http.rb:2006:in `read_new'

I got the same error. A small patch in AWS::S3 fixed it.

Hi Jamie,
I get the exact same error in production with attachment_fu and S3.

It seems to occur after it's been idle for about 4 hours.
g

/usr/local/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:85:in `transaction': Transaction aborted (ActiveRecord::Transactions::TransactionError)
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `call'
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `each'
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:293:in `join'
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:136:in `run'
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/command.rb:211:in `run'
    from /usr/local/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:243
    from /usr/local/bin/mongrel_rails:16:in `load'
    from /usr/local/bin/mongrel_rails:16

On Jun 6, 12:42 am, “[email protected][email protected]