Writing ActiveRecord find result to S3 as a compressed file

On Heroku, using a delayed_job, I am trying to periodically write the
result of an ActiveRecord find, converted to JSON, to an Amazon S3
bucket as a compressed file.

I have done this successfully by writing the JSON results to a temporary
file using File.open and Zlib::GzipWriter.write, and then using
AWS::S3::S3Object.store to copy the resulting file to S3. The code
fragment is below.

The problem is that the operation aborts, I think because it exceeds
Heroku's dyno memory or file size constraints, when the find returns a
large number of rows.

Any suggestions on how to use streams, or some other approach, so that
large find results can be converted to JSON and then stored on S3 as a
compressed file?
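
For concreteness, here is a rough sketch of the kind of streaming I am
wondering about: walk the rows in batches with find_each and write each
row's JSON through Zlib::GzipWriter as it is read, so the full result
set never sits in memory at once. MyModel, the batch size, and the
per-row to_json call are just placeholders for my actual find.

require 'zlib'

tmp_file = "./tmp/#{file_name}"

# stream the rows straight into the gzipped temp file
Zlib::GzipWriter.open(tmp_file) do |gz|
  gz.write '['
  first = true
  MyModel.find_each(:batch_size => 1000) do |record|
    gz.write ',' unless first
    first = false
    gz.write record.to_json   # one row at a time keeps memory bounded
  end
  gz.write ']'
end

# the upload step could then stay as in the fragment below,
# reading from the temp file rather than from a string in memory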

Thanks in advance for any advice.

  • Steve W.

--------- code fragment ------
tmp_file = "./tmp/#{file_name}"

# first compress the content in a file
File.open(tmp_file, 'w') do |f|
  gz = Zlib::GzipWriter.new(f)
  gz.write content
  gz.close
end

AWS::S3::Base.establish_connection!(
  :access_key_id     => S3_CONFIG['access_key_id'],
  :secret_access_key => S3_CONFIG['secret_access_key'],
  :use_ssl           => true
)
AWS::S3::S3Object.store file_name + ".gz", open(tmp_file), bucket_name
stored = AWS::S3::Service.response.success?
AWS::S3::Base.disconnect!

File.delete tmp_file
