will run the first worker on CPUs 0-3 and the second worker on CPUs 4-7.
However, please note that worker_cpu_affinity is currently broken and
may not work correctly. CPU affinity recently appeared in FreeBSD 7,
so I will probably fix it to see how it may affect performance.
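
For reference, here is a rough sketch of what such a mask boils down to at the
system call level (an illustration only, not nginx's actual code): a bitmask
string like "00001111" is turned into a cpu_set_t and handed to
sched_setaffinity().

    /* Illustration only, not nginx source.  Turn a bitmask string such as
     * "00001111" (CPUs 0-3, rightmost character is CPU 0) into a cpu_set_t
     * and apply it to the calling process. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int apply_affinity_mask(const char *bits)
    {
        cpu_set_t  set;
        size_t     len = strlen(bits);

        CPU_ZERO(&set);

        for (size_t i = 0; i < len; i++) {
            if (bits[len - 1 - i] == '1') {
                CPU_SET(i, &set);
            }
        }

        /* pid 0 means "the calling process" */
        return sched_setaffinity(0, sizeof(cpu_set_t), &set);
    }

    int main(void)
    {
        if (apply_affinity_mask("00001111") == -1) {
            perror("sched_setaffinity");
            return 1;
        }

        printf("pid %ld bound to CPUs 0-3\n", (long) getpid());
        return 0;
    }
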
Does it work in Linux?
worker_cpu_affinity was written for Linux only, and it is currently
broken on Linux.
On Fri, Sep 05, 2008 at 12:35:31AM -0700, mike wrote:
Ouch. So the logfile showing the SCHED_AFFINITY binding or whatever at
startup isn't actually happening?
When did it break? I swear I was able to see each worker properly
bound to one core when I examined it in the past.
The problem is that nginx tells the kernel it is passing a 32-byte CPU mask,
while it actually passes only 4 valid bytes. It depends on the kernel
how it will treat this mask: if the machine has 2 or 4 CPUs it may work
correctly.
Besides, there are several Linux kernel and glibc sched_setaffinity()
interfaces, so you may have a case that works.
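
To make those two layers concrete, here is a minimal sketch (not nginx source;
assumptions are noted in the comments): the glibc sched_setaffinity() wrapper
takes a cpu_set_t plus an explicit size, while the raw system call takes a
byte length and a plain array of unsigned longs. If the length handed to the
kernel covers more bytes than were actually initialized, the kernel sees
whatever happens to sit in the extra bytes, which is essentially the
32-byte/4-byte problem described above.

    /* Minimal sketch of both interfaces; error handling kept short. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* glibc interface: cpu_set_t plus an explicit size argument */
        cpu_set_t set;

        CPU_ZERO(&set);                  /* every byte of the mask is initialized */
        CPU_SET(0, &set);
        CPU_SET(1, &set);

        if (sched_setaffinity(0, sizeof(set), &set) == -1)
            perror("glibc sched_setaffinity");

        /* raw system call: a byte length and an array of unsigned longs */
        unsigned long raw[4];

        memset(raw, 0, sizeof(raw));     /* initialize the full length we pass */
        raw[0] = 0x3;                    /* CPUs 0 and 1 */

        if (syscall(SYS_sched_setaffinity, 0, sizeof(raw), raw) == -1)
            perror("raw sched_setaffinity");

        printf("affinity set to CPUs 0 and 1 via both interfaces\n");
        return 0;
    }
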
For example, say you have 2 workers, meaning two worker processes of nginx,
which will deal with the incoming requests. The workers run randomly on the
cores/CPUs of the computer/server. With worker_cpu_affinity you tell the
workers where to run: one on core/CPU 1 and the second on core/CPU 2, or
vice versa. Of course, this can be done with 4 cores/CPUs or 8 cores/CPUs
and so on.
P.S. I'm not that good either with how the kernel and stuff like that
handles everything, but I think I am right on this.
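
A hedged sketch of that idea at the system call level, assuming
sched_setaffinity() is the mechanism underneath (as the discussion above
suggests), and not claiming to be nginx code: fork two "workers" and pin the
first to CPU 0 and the second to CPU 1.

    /* Sketch only: two child processes, each pinned to its own CPU. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void pin_to_cpu(int cpu)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);

        if (sched_setaffinity(0, sizeof(set), &set) == -1) {
            perror("sched_setaffinity");
            exit(1);
        }
    }

    int main(void)
    {
        for (int cpu = 0; cpu < 2; cpu++) {
            pid_t pid = fork();

            if (pid == 0) {              /* child: one "worker" per CPU */
                pin_to_cpu(cpu);
                printf("worker %ld pinned to CPU %d\n", (long) getpid(), cpu);
                _exit(0);
            }
        }

        while (wait(NULL) > 0)           /* reap both children */
            ;

        return 0;
    }
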
It is complex, but I think it can have worthwhile performance impacts, as
without affinity the cache gets invalidated as processes move from core to
core. Avoiding that has got to have an effect. I haven't seen any real
benchmarks, though.
Chris
IMHO: Affinity is just a flag which tells the kernel on which CPUs the code
of a given process (or even thread) may run. In other words, you could set
the affinity of the first process to CPU0 and of the second to CPU1, and they
should then be running on different CPUs. I don't know whether the Linux
kernel supports that, but Windows should honour this setting. I don't think
this kind of separation would make a big difference in performance.
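
For what it's worth, reading that flag back is straightforward. A small
hypothetical sketch that prints which CPUs the kernel will let a given pid
run on (pid 0 meaning the calling process):

    /* Sketch: print the CPUs in a process's affinity mask. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        pid_t     pid = (argc > 1) ? (pid_t) atoi(argv[1]) : 0;
        cpu_set_t set;

        CPU_ZERO(&set);

        if (sched_getaffinity(pid, sizeof(set), &set) == -1) {
            perror("sched_getaffinity");
            return 1;
        }

        printf("pid %ld may run on CPUs:", (long) pid);

        for (int cpu = 0; cpu < CPU_SETSIZE; cpu++) {
            if (CPU_ISSET(cpu, &set))
                printf(" %d", cpu);
        }

        printf("\n");
        return 0;
    }
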
Greetings,
zoltarx