Question about an I/O problem

How can I optimize I/O on nginx?

It's serving image and FLV files. I/O is very high (90%).

Is there any option to handle it?

Thanks.

M.Kursad DARA
Systems and Applications Engineer
Tel: +90 212 365 95 08
mailto:[email protected]
http://www.mynet.com/

On Tue, May 27, 2008 at 09:20:12AM +0300, M.Kursad DARA wrote:

How can I optimize I/O on nginx?

It's serving image and FLV files. I/O is very high (90%).

Is there any option to handle it?

What OS do you use? Where do you see 90% I/O?

CentOS 5.

I'm using iostat -x 1 and I always see 90%.
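For separating a momentary spike from sustained saturation, a small sketch (mine, not from the thread) that averages the %util column for sda over several iostat samples; the interval, sample count, and device name are assumptions for illustration:

```shell
# Average the %util (last) column for sda over six 5-second iostat samples.
# "iostat -dx sda 5 6" and the device name "sda" are assumptions.
iostat -dx sda 5 6 | awk '$1 == "sda" { sum += $NF; cnt++ }
    END { if (cnt) printf "avg %%util over %d samples: %.0f\n", cnt, sum/cnt }'
```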

How many spindles do you have? What kind of drives: SATA, PATA, SCSI,
FC, etc.?

How much memory do you have, and how much is free at the moment?

Are you running inside a VPS?

Dave

Can you post a few samples of iostat 5?

How many disks are in your array?

~2000 concurrent established users.

It's SCSI.

2 GB memory; 50 MB is free now.
It's not running on a VPS.

There are six disks in the array.

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.28   0.00     3.45     8.87    0.00  87.40

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
sda        0.27    1.09   54.98   3.91    495.61  132.89     10.67      0.61  10.35   4.78   28.17

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           3.40   0.00    20.20    71.30    0.00   5.10

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
sda        1.60    2.40  390.20  13.80  32180.80  189.00     80.12     16.79  40.52   2.48  100.02

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           3.00   0.00    16.50    62.90    0.00  17.60

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
sda        0.80    0.20  342.00  29.00  26078.40  372.80     71.30     15.27  42.47   2.70  100.06

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           3.59   0.00    20.06    67.47    0.00   8.88

Device:  rrqm/s  wrqm/s     r/s    w/s    rsec/s  wsec/s  avgrq-sz  avgqu-sz  await  svctm   %util
sda        1.00    0.00  444.51  13.97  31861.08  146.91     69.81     14.83  32.27   2.18   99.86
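As a rough sanity check (my arithmetic, not from the thread, assuming the 512-byte sectors iostat reports in), the peak rsec/s figure above works out to about 16 MB/s of reads at 100% utilization:

```shell
# Convert the peak rsec/s (32180.80, from the second sample above) into MB/s,
# assuming iostat's 512-byte sectors.
awk 'BEGIN { printf "%.1f MB/s\n", 32180.80 * 512 / 1048576 }'
# prints "15.7 MB/s"
```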

Certainly under a big I/O load. I have found in the past that the
deadline scheduler gives better results than the CFQ scheduler. On
recent Linux kernels you can switch the scheduler on a per-device
basis like this:

[root@rado ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
[root@rado ~]# echo "deadline" > /sys/block/sda/queue/scheduler
[root@rado ~]# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq
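The sysfs change above does not survive a reboot. A sketch of making it persistent on CentOS 5 (the rc.local approach and the grub.conf path are assumptions; adjust for your boot setup):

```shell
# Re-apply the scheduler at boot via rc.local (CentOS 5 has no sysfs
# persistence mechanism of its own). The device name sda is from above.
echo 'echo deadline > /sys/block/sda/queue/scheduler' >> /etc/rc.local

# Alternatively, set it for all block devices at boot by appending
# elevator=deadline to the kernel line in /boot/grub/grub.conf.
```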

I don't know how much more you can squeeze out of your setup.

Cheers

Dave

Thanks, I'll try it and monitor the results.