AJP Support

Hello,

I currently use apache 2.2 with mod_proxy_ajp to load-balance a Java
application running on 3 tomcat servers.

I’ve had good results with nginx for a php and RoR site, and would be
interested in replacing apache with nginx in this configuration.

Does nginx support ajp13 connections? Has anyone on the list done
anything similar?

S.

Stephen Nelson-Smith wrote:

Hello,

I currently use apache 2.2 with mod_proxy_ajp to load-balance a Java
application running on 3 tomcat servers.

I’ve had good results with nginx for a php and RoR site, and would be
interested in replacing apache with nginx in this configuration.

Does nginx support ajp13 connections?

No.
It only supports HTTP, FastCGI and SCGI (via an external module).
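
You can point nginx at Tomcat’s HTTP connector instead of AJP. A minimal
sketch, assuming the connector is listening on its default port 8080 (the
address and port here are placeholders):

    location / {
        proxy_pass   http://127.0.0.1:8080;
    }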

Has anyone on the list done
anything similar?

S.

Manlio P.

On Mon, Jun 23, 2008 at 10:33:41AM +0100, Stephen Nelson-Smith wrote:

I currently use apache 2.2 with mod_proxy_ajp to load-balance a Java
application running on 3 tomcat servers.

I’ve had good results with nginx for a php and RoR site, and would be
interested in replacing apache with nginx in this configuration.

Does nginx support ajp13 connections? Has anyone on the list done
anything similar?

No, nginx does not support AJP, and it’s difficult to add because
nginx’s upstream module supports only this scheme:

  send request/body to upstream
  cycle:
      read response from upstream/send it to client

AJP’s scheme:

  send request/body first part (up to 8K) to upstream
  --------
  cycle 1:
      wait for GET_BODY_CHUNK from upstream
      send body part (up to 8K) to upstream
  --------
  cycle 2:
      read response from upstream/send it to client

Thus, cycle 1 is not supported by nginx’s upstream module.

Igor S. wrote:

anything similar?

No, nginx does not support AJP, and it’s difficult to add…

OK. I guess I could just have nginx balance the http traffic and use
tomcat’s http server rather than passing through ajp13 traffic.

I’d like to offload all SSL decryption to the loadbalancers too - so
traffic would come in on 443, hit the load balancer, get decrypted, and
farmed out as http traffic to the various tomcat nodes. How would I go
about that?

S.

Hi,

worker_processes 2; # number of CPUs

I’m replacing my hardware and need to spec what to use for a pair of
dedicated load-balancer machines. Would it make sense to go with
something with as many cores as possible? I could then make the
worker_processes say 8?

I propose to use heartbeat and drbd to make an HA cluster from two
machines.

    location / {
        proxy_pass   http://tomcat;
    }

Could you point me to the documentation that creates pools of machines
across which to balance? Also, what algorithms are available for
balancing?

Thanks,

S.

On Tue, Jun 24, 2008 at 07:55:21AM +0100, Stephen Nelson-Smith wrote:

I’d like to offload all SSL decryption to the loadbalancers too - so
traffic would come in on 443, hit the load balancer, get decrypted, and
farmed out as http traffic to the various tomcat nodes. How would I go
about that?

worker_processes 2; # number of CPUs

http {

server {
    listen               443;
    keepalive_timeout    70;

    ssl                  on;
    ssl_protocols        SSLv3 TLSv1;
    ssl_ciphers          AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5;

    ssl_certificate      /path/to/cert.pem;
    ssl_certificate_key  /path/to/cert.key;

    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  10m;

    location / {
        proxy_pass   http://tomcat;
    }
}
}
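
The proxy_pass above refers to an upstream group named tomcat, which is not
shown in the snippet. A minimal sketch of that group, assuming three Tomcat
nodes with their HTTP connectors on port 8080 (the addresses are
placeholders), placed in the same http block:

    upstream tomcat {
        server  10.0.0.1:8080;
        server  10.0.0.2:8080;
        server  10.0.0.3:8080;
    }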

On 6/23/08, Stephen Nelson-Smith [email protected] wrote:

OK. I guess I could just have nginx balance the http traffic and have use
tomcat’s http server rather than passing through ajp13 traffic.

I didn’t get too far with trying to optimize Tomcat, but it could not
handle much load on our machine, which had RAIDed SAS 10k drives and 12
GB of RAM - although the Java app itself may have had some issues; we
had to set up a cronjob to keep restarting it every so often.

If there’s another way to run Java I might look into that, because I
was not impressed at all with Tomcat. It’s sad that someone like Igor
hasn’t taken it upon themselves to make a crazy-optimized Java server
(not that I know of one, but I try to refuse to run any Java stuff to
begin with).

Hi,

SSL operations are CPU intensive, so it makes sense to use as many cores
as possible.

Generally, would you say it would be better to have fewer cores with a
faster clock speed? Or is SSL threaded in such a way that one would
get better performance from lots of cores, even if they were slower?

In terms of handing off http connections, does nginx benefit most from
plenty of RAM or again, is the guideline to go for a proliferation of
cores?

S.

On Tue, Jun 24, 2008 at 08:11:53AM +0100, Stephen Nelson-Smith wrote:

worker_processes 2; # number of CPUs

I’m replacing my hardware and need to spec what to use for a pair of
dedicated load-balancer machines. Would it make sense to go with
something with as many cores as possible?

SSL operations are CPU intensive, so it makes sense to use as many cores
as possible.

I could then make the worker_processes say 8?

Yes.
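
A sketch of what that might look like, assuming an 8-core box; the
worker_cpu_affinity directive is optional, only available on some
platforms (Linux/FreeBSD), and takes one CPU bitmask per worker:

worker_processes     8;
worker_cpu_affinity  00000001 00000010 00000100 00001000
                     00010000 00100000 01000000 10000000;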

I propose to use heartbeat and drbd to make an HA cluster from two machines.

   location / {
       proxy_pass   http://tomcat;
   }

Could you point me to the documentation that creates pools of machines
across which to balance? Also, what algorithms are available for balancing?

If you want to balance the machines running nginx, then there is nothing
to do on the nginx side.

If you mean balancing using nginx, then:
http://wiki.codemongers.com/NginxHttpUpstreamModule
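
As a rough sketch of what that module provides (the server names below are
placeholders): the default algorithm is round-robin, optionally weighted
per server, with ip_hash available as an alternative:

    upstream tomcat {
        server  tomcat1.example.com:8080  weight=2;
        server  tomcat2.example.com:8080;
        server  tomcat3.example.com:8080  max_fails=3  fail_timeout=30s;
    }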

Have you tried Jetty?

On Tue, Jun 24, 2008 at 11:45:24AM +0100, Stephen Nelson-Smith wrote:

Hi,

SSL operations are CPU intensive, so it makes sense to use as many cores
as possible.

Generally would you say it would be better to have fewer cores with a
faster clock speed?

It’s better to use more cores AND faster clock speed.

Or is SSL threaded in such a way that one would
get better performance from lots of cores, even if they were slower?

nginx uses several worker processes that may run on different
CPUs/cores.

In terms of handing off http connections, does nginx benefit most from
plenty of RAM or again, is the guideline to go for a proliferation of
cores?

If you use nginx as a proxy only, then you do not need plenty of RAM.
If you use it for static files, then RAM may be used for the OS VM cache.

Igor S. wrote:

In terms of handing off http connections, does nginx benefit most from
plenty of RAM or again, is the guideline to go for a proliferation of
cores?

If you use nginx as a proxy only, then you do not need plenty of RAM.
If you use it for static files, then RAM may be used for the OS VM cache.

Right. In this case I will not be serving any content, only terminating
SSL connections and passing off http requests to a pool of tomcat
servers.

So I could go with e.g. 2G RAM and spend the rest of the money on faster
quad-core chips.

Incidentally, can.worms.open(at the moment, what’s to choose between the
Intel quad cores and the AMDs?)

Our procurement is always through Dell, so it’s a fairly simple set of
choices.

S.

On Tue, Jun 24, 2008 at 12:17:00PM +0100, Stephen Nelson-Smith wrote:

Right. In this case I will not be serving any content, only terminating
SSL connections and passing off http requests to a pool of tomcat servers.

So I could go with e.g. 2G RAM and spend the rest of the money on faster
quad-core chips.

Yes, 2G is more than enough.

Incidentally, can.worms.open(at the moment, what’s to choose between the
Intel quad cores and the AMDs?)

Our procurement is always through Dell, so it’s a fairly simple set of
choices.

I do not know which modern CPUs are better now.

Stephen Nelson-Smith wrote:

anything similar?

No, nginx does not support AJP, and it’s difficult to add…

OK. I guess I could just have nginx balance the http traffic and use
tomcat’s http server rather than passing through ajp13 traffic.

Just out of curiosity: do you think this (nginx to tomcat via http) would be
faster than using apache 2.2 with mod_proxy_ajp?
I’m not sure, but there’s probably a reason for using AJP instead of
http there ;-)
If nginx with http would be better/faster, that would be very
interesting indeed.

Paul

Stephen Nelson-Smith wrote:

Incidentally, can.worms.open(at the moment, what’s to choose between
the Intel quad cores and the AMDs?)

I’d probably go for Pentium for a server, AMD for home. From what I
understand, you’ve got a few more options for extra compilation params
when it comes to Pentiums, and you can compile with pgcc for that extra
performance boost (mainly seen when compiling larger software like
MySQL).

Phillip B Oldham
The Activity People
[email protected]



Hi,

Incidentally, can.worms.open(at the moment, what’s to choose between the
Intel quad cores and the AMDs?)

I’d probably go for Pentium for a server, AMD for home. From what I
understand, you’ve got a few more options for extra compilation params when
it comes to Pentiums, and you can compile with pgcc for that extra
performance boost (mainly seen when compiling larger software like MySQL).

For me it boils down to being able to afford either 2 x quad-core AMD
or 1 x quad-core Intel. The Intel chips are faster, and probably
better for a machine with less RAM, but the machines are single
socket, so I could only get 4 cores in total.

Opinions?

S.

Hi,

OK. I guess I could just have nginx balance the http traffic and use
tomcat’s http server rather than passing through ajp13 traffic.

Just out of curiosity: do you think this (nginx to tomcat via http) would be
faster than using apache 2.2 with mod_proxy_ajp?

I don’t know. I do know from my experiences with nginx that
throughput on some of my other sites has been much greater, with fewer
resources being needed.

I also know that my current setup isn’t performing as well as I’d like.

I’ve heard very good reports about how quickly Tomcat can serve http
responses. There’s no static content - it’s all dynamically
generated. So it seems to me that nginx is the fastest and most
resource effective way of balancing http traffic over my pool of
tomcat servers.

I’m making some fairly big assumptions around how sessions are
handled, but I’ve not seen problems with rails, so I’d expect the same
experience with Java.
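
If the Java sessions do turn out to need affinity, one option (a sketch,
assuming the upstream group is called tomcat and the server names are
placeholders) is ip_hash, which pins each client IP to one backend:

    upstream tomcat {
        ip_hash;
        server  tomcat1.example.com:8080;
        server  tomcat2.example.com:8080;
        server  tomcat3.example.com:8080;
    }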

I’m not sure, but there’s probably a reason for using AJP instead of
http there ;-)

Hehe, in my case, when I took over the site it used Apache and mod_jk.
I’ve upgraded the systems and we have carried on with the
apache/ajp/tomcat approach. I’m looking at alternatives now.

If nginx with http would be better/faster, that would be very
interesting indeed.

It would indeed. I will, of course, do some benchmarks and tests
first, and report back.

S.

On Tue, Jun 24, 2008 at 6:45 AM, Stephen Nelson-Smith
[email protected] wrote:

Hi,

SSL operations are CPU intensive, so it makes sense to use as many cores
as possible.

You can offload SSL operations onto a hardware accelerator card. There
are several vendors available:
http://www.openbsd.org/crypto.html#hardware

OpenBSD has native support for them. Don’t see why others wouldn’t.

On Wed, 2008-06-25 at 15:53 -0400, Dan M wrote:

OpenBSD has native support for them. Don’t see why others wouldn’t.

OpenSSL has support for several hardware crypto devices.
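
If the card has an OpenSSL engine, nginx can be told to use it via the
ssl_engine directive at the top level of nginx.conf; a sketch, assuming an
engine named cryptodev is compiled into your OpenSSL build (the engine name
depends on the build):

ssl_engine  cryptodev;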

Getting even further off-topic, I’ve often wished OpenSSL would support
GPUs for doing crypto:

http://majuric.org/software/cudamd5/

Lots of servers have completely under-utilized GPUs, and a fast video
card is still quite a bit cheaper than a dedicated crypto device.

Cliff

On Thu, Jun 26, 2008 at 11:40:42AM -0700, Cliff W. wrote:

Lots of servers have completely under-utilized GPUs, and a fast video
card is still quite a bit cheaper than a dedicated crypto device.

The most expensive SSL operation is the handshake, which uses public key
encryption.
In second place are symmetric ciphers such as DES, RC4, and AES.
The hashing algorithms (MD5/SHA) are in third place, and modern CPUs
evaluate MD5 fast enough.
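
Since the full handshake dominates, the practical lever on the nginx side
is session reuse; the shared session cache from the earlier config is what
avoids repeating that handshake for returning clients:

    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  10m;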