Timeout issue and custom solution

We’re using nginx with HAProxy and mongrel, plus monit, and we’re
integrating with SAP using Piers H.'s sapnwrfc gem. It contains a
C extension which itself calls an SAP-provided C library for RFC
calls. In general it works great.

But some SAP RFCs have very long runtimes, e.g. orders with many
items, searches with insufficiently restrictive criteria, and also
sporadic hiccups in SAP which can hit any RFC call. These can block a
mongrel for more than 2 minutes, so we need a timeout solution.

The two available solutions we found (SystemTimer and Terminator) just
wouldn’t work, no matter what we tried. We therefore wrote an
old-fashioned custom solution: each request writes a file at the start
and deletes it again at the end of processing. A monit process which
runs periodically (in our case every 10 s) checks whether any of these
files is older than a specific timeout period. If it finds one, it
kills the mongrel process (no soft kill) and starts a new one. When
the mongrel gets killed, nginx returns a 504 error. We assume this
will happen mostly (only?) in timeout cases, so we modified it to
redirect to a page in our app with an error message about the timeout.
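For illustration, the marker-file discipline can be condensed into one plain Ruby method (a sketch under assumed names, not our production code; in the app it is split across before/after filters, as shown in the solution details below). Wrapping the request in an ensure block also cleans up when the action raises:

```ruby
# Sketch of the request-marker idea: touch a per-mongrel file while a
# request is in flight, remove it when the request finishes. The method
# name, directory layout, and port argument are illustrative.
def with_timeout_marker(dir, port)
  path = File.join(dir, "mongrel.#{port}.req")
  File.open(path, "w") { }   # create/refresh the marker file
  yield                      # process the request
ensure
  # Runs even when the request raises, so a crashed action does not
  # leave a stale marker behind for the watchdog to misread.
  File.delete(path) if File.exist?(path)
end
```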

This solution has worked perfectly so far. The only odd phenomenon we
have seen is one case where the user never got the error page/redirect;
the browser just hung (Firefox and IE).

Comments? Ideas why SystemTimer and Terminator would not work?
Improvements to the current solution?

Adrian Z.
www.b2b2dot0.com

The solution details:

  1. Setting a global constant with the mongrel port during
     initialization:

     begin
       ObjectSpace.each_object(Mongrel::HttpServer) do |i|
         Const::App.port = i.port
       end
     rescue
       Const::App.port = '3000' # when testing etc.
     end

  2. Creating/deleting the file in before/after filters:

     class ApplicationController < ActionController::Base
       before_filter :write_timeout_file
       after_filter :delete_timeout_file

       def write_timeout_file
         File.new("tmp/pids/mongrel.#{Const::App.port}.req", "w").close
       end

       def delete_timeout_file
         File.delete("tmp/pids/mongrel.#{Const::App.port}.req")
       end
     end

  3. Monit configuration:

     check file mongrel.3000.req
       with path /var/www/apps/b2b2dot0/current/tmp/pids/mongrel.3000.req
       if timestamp > 90 seconds
         then exec "/export/admin-scripts/kill_mongrel.sh 3000"
       mode passive
       group mongrel_timeout
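For readers without monit at hand, the check it performs amounts to the following Ruby sketch (the function name, directory argument, and glob are placeholders mirroring our setup; monit itself does this for us):

```ruby
# Sketch of the watchdog logic the monit rule encodes: collect the ports
# of mongrels whose request-marker file is older than the timeout.
def stale_mongrel_ports(pid_dir, timeout_seconds)
  Dir.glob(File.join(pid_dir, "mongrel.*.req")).select { |f|
    Time.now - File.mtime(f) > timeout_seconds
  }.map { |f| File.basename(f)[/mongrel\.(\d+)\.req/, 1] }
end

# The watchdog would then hard-kill and restart each stale mongrel,
# e.g. system("/export/admin-scripts/kill_mongrel.sh #{port}").
```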

On Tue, Sep 29, 2009 at 12:21 AM, Adrian Z. [email protected] wrote:

timeout solution.

The 2 available solutions we found (SystemTimer and Terminator) just
wouldn’t work, no matter what we tried.

I too faced the same problem. I could use SystemTimer to time out
Rails requests doing most tasks, such as running a system command via
%x[] or sleeping for some time, but ActiveRecord queries to PostgreSQL
weren’t timed out. That’s when I monkey-patched the ActiveRecord
PostgreSQL connection adapter to set a session "statement_timeout" for
Postgres. Now Postgres times out any query running longer than my
timeout value in milliseconds.
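For reference, a sketch of the session setting skar describes (the helper name, initializer placement, and the 5000 ms value are assumptions, not his exact monkey patch):

```ruby
# Sketch: Postgres cancels any statement that runs longer than the
# session-level statement_timeout. This helper only builds the SQL.
def statement_timeout_sql(milliseconds)
  "SET statement_timeout = #{Integer(milliseconds)}"
end

# Assumed usage, e.g. from an initializer or the adapter monkey patch:
#   ActiveRecord::Base.connection.execute(statement_timeout_sql(5000))
# A query exceeding the limit then fails with
# "canceling statement due to statement timeout" instead of blocking.
```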

I guess you could do the same with whatever DB or backend service
you’re using. Of course, the pure solution would be to find out why
SystemTimer wouldn’t work in this case and fix SystemTimer or Rails or
both, depending on the problem.

cheers,
skar.
