Balancer redirecting to wrong port

Something strange is happening: we’re running a balancer in front of
two machines:

upstream secure_cluster {
    ip_hash;
    server 208.78.100.210:80;
    server 67.207.149.67:80;
}

We’re rewriting requests on port 80 to 443:

server {
    listen 208.78.98.50:80;
    server_name secure.mysite.com;
    rewrite ^ https://secure.mysite.com$uri permanent;
}

The balancer is under SSL but the upstream machines are not. If I go
to https://secure.mysite.com/subscribe (with no trailing slash) I get
redirected to http://app1.mysite.com/subscribe/ (no longer under
SSL)…

If I go to https://secure.mysite.com/subscribe/ (including trailing
slash) then it stays on that URL as expected.

nginx 0.6.31

Any ideas?
Jeff

Probably server_name_in_redirect off; would fix that (my real quick
guess).

The backend is issuing a redirect from “/subscribe” to “/subscribe/”,
which I believe is what you’re not seeing.
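server_name_in_redirect only affects redirects nginx generates itself; for a redirect coming back from the upstream, proxy_redirect on the balancer can rewrite the Location header instead. A minimal sketch of the idea (the SSL server block isn’t shown in the original config, so its contents here are assumptions):

```nginx
server {
    listen 208.78.98.50:443;
    server_name secure.mysite.com;
    # ssl certificate directives omitted

    location / {
        proxy_pass http://secure_cluster;
        # Rewrite backend redirects (e.g. /subscribe -> /subscribe/)
        # so the client stays on https://secure.mysite.com instead of
        # being sent to http://app1.mysite.com on port 80.
        proxy_redirect http://app1.mysite.com/ https://secure.mysite.com/;
    }
}
```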

I’ve recently been thinking about hosting some or all of our static
files (especially images) on Amazon’s S3. The recent multi-hour outage
has many asking how to create redundancy or self-healing static serving.
On the nginx side my question is a two-parter:

  1. Let’s say you created a CNAME so that media.example.com would point
    to your S3 bucket. What would the location rewrite be so that a request
    for any static file would be redirected to media.example.com?

  2. Is it possible to wrap this in an IF wrapper? My thinking is this:
    People write a PHP (Python, whatever) script that checks for a 1-byte
    file in S3 and have it run in cron, say, every 5-10 minutes. If it
    can’t grab the file (S3’s down), it writes a file locally. If nginx
    detects that file, it serves the static files locally. If, 5 minutes
    later, S3 is back and the script deletes the file, nginx goes back to
    serving from media.example.com

I know this isn’t proper nginx syntax, but something like this:

if (-f /usr/local/nginx/htdocs/s3down) {
    # serve static files locally
} else {
    # serve static files from media.example.com
}

Thanks for any ideas.

Hi Ian,

Google for “heartbeat script” and/or “IP failover”; these are exactly
the topics you are looking for.

Remember that if your server is down, and nginx is on that server, then
nginx won’t be able to evaluate the “if” statement and decide whether
to serve files locally or from a remote server.

Thomas wrote:

Remember that if your server is down, and nginx is on that server, then
nginx won’t be able to evaluate the “if” statement and decide whether
to serve files locally or from a remote server.

I’m sorry, you missed the point of my post. I was talking about sites
that host their static content on Amazon’s S3 storage cloud not their
EC2 server cloud. S3 recently had a five hour outage and it affected
sites that store their images on S3, like Twitter and thousands of others.

So my question still applies since nginx is running on our servers and
we just need to check if Amazon’s servers are running.

If Amazon’s S3 is down, nginx serves static locally, otherwise, nginx
rewrites all static requests to media.example.com which is an alias for
your S3 bucket.

Ian M. Evans wrote:

I’ve recently been thinking about hosting some or all of our static
files (especially images) on Amazon’s S3. The recent multi-hour outage
has many asking how to create redundancy or self-healing static serving.
On the nginx side my question is a two-parter:

  1. Let’s say you created a CNAME so that media.example.com would point
    to your S3 bucket. What would the location rewrite be so that a request
    for any static file would be redirected to media.example.com?

You can use a regular expression (a location instead of an if may be
cleaner):

if ($request_filename !~
    /(javascripts|css|images|robots\.txt|.*\.html|.*\.xml)) {
    # rewrite here
}
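As a location-based sketch (the directory names are borrowed from the regex above, and media.example.com is taken from the question):

```nginx
# Redirect any request for a static asset to the S3-backed hostname.
location ~ ^/(javascripts|css|images)/ {
    rewrite ^(.*)$ http://media.example.com$1 permanent;
}
```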

if (-f /usr/local/nginx/htdocs/s3down) {
    # serve static files locally
} else {
    # serve static files from media.example.com
}

There is no else statement. You may want to use an include to a file
that you will change with your script depending on the state of the
service.
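For instance (the file and directory names here are assumptions), the cron script could overwrite a small included file and signal nginx:

```nginx
location ~ ^/(javascripts|css|images)/ {
    root /usr/local/nginx/htdocs;
    # s3state.conf either contains
    #   rewrite ^(.*)$ http://media.example.com$1 permanent;
    # (S3 up) or is empty (serve locally). The cron script swaps
    # its contents and sends HUP to the nginx master process.
    include /usr/local/nginx/conf/s3state.conf;
}
```

Note that nginx only rereads included files on reload, so the script has to signal nginx after changing the file.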

However this rewrite system does not seem clean. I think it would be
better to simply use dns and subdomains for your static files, and
change the subdomain used when s3 is down.

Ian M. Evans wrote:

Sorry…up in the wee hours with a head cold. :frowning:
If you use URL rewriting, your requests will have to go through nginx
to be redirected, which is inefficient.

Just use an assets server like files.example.com backed by S3. Set up
an additional nginx vhost, and if S3 goes down you’ll just have to
modify the DNS.

Not necessarily what you’re asking, but services like dnsmadeeasy.com
can do automatic DNS failover. Set up static.example.com to point to
your s3 bucket. dnsmadeeasy (or similar) monitors your s3 bucket,
and switches DNS to your backup if it goes down. Your app only ever
has to point to static.example.com

On Tue, Jul 29, 2008 at 4:42 PM, Jean-Philippe

your s3 bucket. dnsmadeeasy (or similar) monitors your s3 bucket,
and switches DNS to your backup if it goes down. Your app only ever

Someone else mentioned dnsmadeeasy and I contacted their tech people.
They said:

“The manner in which S3 is implemented makes it difficult to monitor
using our toolset.”

There was one DNS company (http://dynect.com/) that mentioned having an
API for allowing remote changes. So I guess the cron method from the
non-static server could work. If it can’t reach your S3 bucket, it could
run a script to change the DNS to your backup choice.

As Dave Winer pointed out, there’s some cash to be made for the
companies who step up to find a way to help spooked S3 users…or
potential users like me.

A good solution may be to use something like panther express…

http://www.pantherexpress.net/s3/

It works like this:

  • you save to s3 and configure cnames for it
  • you configure cnames for panther, which point to s3
  • your website points to panther’s cnames, not s3’s

panther handles your requests by pulling stuff from s3 when needed ,
and migrating stuff around their global CDN network as needed

pros:

  • a real CDN
  • you only pay for xfer (not storage)
  • the xfer, last I checked, was quite a bit cheaper than s3
  • for most files, you’ll pay for s3->panther 1x, and then
    panther->consumer 10x

cons:

  • for some files, you may have a 1:1 ratio of s3->panther to
    panther->consumer

It’s not an open source solution - it’s a product, but it’s really
solid and gives you the power of a real CDN on top of Amazon.

Jean-Philippe wrote:

However this rewrite system does not seem clean. I think it would be
better to simply use dns and subdomains for your static files, and
change the subdomain used when s3 is down.

I’m not sure I’m totally following this last paragraph.

Do you mean have two cnames, say, s3.example.com and local.example.com,
and change the rewrite depending on the check file’s existence?

Sorry…up in the wee hours with a head cold. :frowning:

Freedns.afraid.org has a free service and an API.

On Jul 29, 2008, at 5:28 PM, “Ian M. Evans” [email protected]
