Worker CPU affinity for 8 cores/CPUs

Hey, as everyone knows, if you have 1, 2, or 4 CPUs/cores, you can set
worker_cpu_affinity.

For 2 cores/CPUs it's like this: worker_cpu_affinity 0101 1010;
For 4 cores/CPUs it's like this: worker_cpu_affinity 1000 0100 0010 0001;

But how about 8 cores/cpus?

Please let me know.

Sorry, the 4 cores/CPUs example was wrong on my side :)

But anyway, how about 8 cores?

On Fri, Sep 05, 2008 at 08:49:49AM +0200, Robert G. wrote:

Hey, as everyone knows, if you have 1, 2, or 4 CPUs/cores, you can set
worker_cpu_affinity.

For 2 cores/CPUs it's like this: worker_cpu_affinity 0101 1010;
For 4 cores/CPUs it's like this: worker_cpu_affinity 1000 0100 0010 0001;

But how about 8 cores/cpus?

worker_cpu_affinity 1st_worker_mask 2nd_worker_mask … ;

therefore

worker_cpu_affinity 00001111 11110000;

will run the first worker on CPUs 0-3 and the second worker on CPUs 4-7.
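For completeness (this is my extension of Igor's example, not from his post): the same mask format with one worker pinned to each of 8 cores might look like this sketch, assuming worker_processes is set to 8:

```nginx
worker_processes     8;
worker_cpu_affinity  10000000 01000000 00100000 00010000
                     00001000 00000100 00000010 00000001;
```

The lowest bit of each mask is CPU 0, so the first mask binds the first worker to CPU 7 and the last mask binds the eighth worker to CPU 0.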

However, please note that worker_cpu_affinity is currently broken and
may not work correctly. CPU affinity recently appeared in FreeBSD 7,
so I will probably fix it and see how it affects performance.

So worker_cpu_affinity doesn't work on Linux with nginx version 0.6.32?

P.S. What about the other post with the helpdesk stuff and the logs?! :)

On Fri, Sep 05, 2008 at 12:14:04AM -0700, mike wrote:

On Thu, Sep 4, 2008 at 11:58 PM, Igor S. [email protected] wrote:

However, please note that worker_cpu_affinity is currently broken and
may not work correctly. CPU affinity recently appeared in FreeBSD 7,
so I will probably fix it and see how it affects performance.

does it work in linux?

worker_cpu_affinity had been written for Linux only, and currently
it's broken on Linux.


On Fri, Sep 05, 2008 at 12:35:31AM -0700, mike wrote:

ouch. so the logfile showing the SCHED_AFFINITY binding or whatever at
startup isn’t actually happening?

when did it break? i swear i was able to see each worker properly
bound to one core when i examined it in the past.

The problem is that nginx says it passes a 32-byte CPU mask to the kernel,
while it actually passes only 4 valid bytes. How this mask is treated
depends on the kernel: if it has 2 or 4 CPUs it may work
correctly.

Besides, there are several Linux kernel and glibc sched_setaffinity()
interfaces, so you may have a case that works.

Hmm ok, one more stupid question: which version would you say is best
for a production system: 0.6.32 / 0.7.14 / 0.5.37?

On Fri, Sep 05, 2008 at 10:14:55AM +0200, Robert G. wrote:

Hmm ok, one more stupid question: which version would you say is best
for a production system: 0.6.32 / 0.7.14 / 0.5.37?

0.6.32.


Thx very much!

what is worker cpu affinity? can you please explain?
thanks!

For example, you have 2 workers, meaning two worker processes of nginx,
which deal with the incoming requests. The workers run on whatever
cores/CPUs of the computer/server the scheduler picks. With worker CPU
affinity you tell the workers each to work on its own core/CPU: the first
on core/CPU 1 and the second on core/CPU 2, or vice versa. Of course this
can be done with 4 cores/CPUs or 8 cores/CPUs and so on.

P.S. I'm not that good either with how the kernel and stuff like that
handles everything, but I think I am right on this.
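To make the explanation above concrete, the two-worker setup described here would look something like this in nginx.conf (a sketch, using the worker_cpu_affinity syntax from earlier in the thread):

```nginx
worker_processes     2;
# first worker on CPU 0, second worker on CPU 1
worker_cpu_affinity  01 10;
```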

Sorry, for some reason I have a lot of “grammar” mistakes :)

There’s a decent explanation of it here:
http://www.linuxjournal.com/article/6799

It is complex, but I think it can have worthwhile performance impacts,
since without affinity the cache gets invalidated as processes move
from core to core. Avoiding that has got to have an effect. I haven't
seen any real benchmarks though.
Chris :)

IMHO: Affinity is just a flag which tells the kernel on which CPUs the
code of a given process (or even thread) may run. In other words, you
could set the affinity of the first process to CPU0 and of the second to
CPU1, and they should then run on different CPUs. I don't know if the
Linux kernel supports that, but Windows should honour this setting. I
don't think that this kind of separation would make a big difference in
performance.
Greetings,
zoltarx

2008/9/5 Robert G. [email protected]

Sorry, for some reason I have a lot of “grammar” mistakes :)



Filip Golewski
e-mail: [email protected]