public inbox for linux-scsi@vger.kernel.org
 help / color / mirror / Atom feed
* Re: iowait problems on 2.6, not on 2.4
       [not found]   ` <200405281516.41901.antlarr@tedial.com>
@ 2004-05-28 22:45     ` Andrew Morton
  2004-05-30  0:46       ` Doug Ledford
  0 siblings, 1 reply; 4+ messages in thread
From: Andrew Morton @ 2004-05-28 22:45 UTC (permalink / raw)
  To: Antonio Larrosa Jiménez; +Cc: linux-kernel, linux-scsi

Antonio Larrosa Jiménez <antlarr@tedial.com> wrote:
>
> On Thursday 27 May 2004 05:52, you wrote:
> > Antonio Larrosa Jiménez <antlarr@tedial.com> wrote:
> > > My next test will be to do the "dd tests" on one of the internal hard
> > > disks and use it for the data instead of the external raid.
> >
> > That's a logical next step.  The reduced read bandwidth on the raid array
> > should be fixed up before we can go any further.  I don't recall any
> > reports of qlogic fc-scsi performance regressions though.
> 
> Ok, let's analyze that first.
> 
> The dd tests gave the following results:

Let me cc linux-scsi.

Guys: poke.  Does anyone know why this:

  The machine is a 4 cpu Pentium III (Cascades) system with four SCSI
  SEAGATE ST336704 hard disks connected to an Adaptec AIC-7899P U160/m, and
  an external RAID connected to a QLA2200/QLA2xxx FC-SCSI Host Bus Adapter. 
  The machine has 1Gb RAM.

got all slow at reads?


> ext3 on the internal scsi HD:
>   2.4.21:
>      writing : 1m14s
>      reading : 1m2s
>      reading+writing : 2m16s
>   2.6.4:
>      writing : 1m19s
>      reading : 59s
>      reading+writing : 2m24s
> 
> reiserfs on the internal scsi HD:
>   2.4.21:
>      writing : 1m15s
>      reading : 1m1s 
>      reading+writing : 2m22s
>   2.6.4:
>      writing : 1m19s
>      reading : 1m 
>      reading+writing : 2m25s
> 
> ext3 on the raid using qlogic fc-scsi:
>   2.4.21:
>      writing : 30s
>      reading : 51s
>      reading+writing : 1m29s
>   2.6.4:
>      writing : 28s
>      reading : 1m26s
>      reading+writing : 2m19s
> 
> reiserfs on the raid using qlogic fc-scsi:
>   2.4.21:
>      writing : 37s
>      reading : 52s
>      reading+writing : 1m37s
>   2.6.4:
>      writing : 25s
>      reading : 1m27s
>      reading+writing : 2m3s
> 
> All the tests were run 3 times and the average taken. In the cases where 
> there was too much variance, I repeated the tests a few more times.
> 
> All the tests used 2GB reads/writes. I tried to make 8GB reads/writes too, 
> but they showed up to a minute of variance (maybe the HD slowed itself down 
> due to temperature issues sometimes? I really don't know why this happened, 
> but in any case, I couldn't make reliable tests with files of that size).
> 
> So basically, there's no difference between 2.4.21 and 2.6.4 when using the 
> internal HD, but 2.6.4 is much slower when using the raid.
> What I found strange is that writing to that raid is a bit faster on 2.6.4 
> while reading is much slower, which I suppose is what makes the difference.
> 
> So yes, I suppose there's a regression on the qlogic fc-scsi module.
> 
> Btw, the tests I timed were:
> 
> count=2048
> write() { dd if=/dev/zero of=x bs=1M count=$count ; sync; }
> read() { dd if=x of=/dev/null bs=1M count=$count; }
> readwrite() { dd if=x of=y bs=1M count=$count ; sync; }
> 
> In the case of read, I did the sync just before and after the timing, but 
> didn't include the sync inside the timed test.
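[A runnable sketch of the benchmark as described above. The function bodies in the message are missing the terminating `;` before `}` (a syntax error in sh), and the names are changed from `write`/`read`/`readwrite` to avoid confusion with the shell builtins; `COUNT` is reduced here so the sketch finishes quickly, where the message used count=2048 (2GB). Wrap each call in `time` to reproduce the figures.]

```shell
#!/bin/sh
# Sketch of the dd benchmark from the message, made syntactically
# valid.  COUNT=4 (4MB) here; the message used COUNT=2048 (2GB).
COUNT=4
DIR=$(mktemp -d)

do_write()     { dd if=/dev/zero of="$DIR/x" bs=1M count=$COUNT 2>/dev/null; sync; }
do_read()      { dd if="$DIR/x" of=/dev/null bs=1M count=$COUNT 2>/dev/null; }
do_readwrite() { dd if="$DIR/x" of="$DIR/y" bs=1M count=$COUNT 2>/dev/null; sync; }

do_write
sync            # as in the message: sync before the read is timed,
do_read         # but the sync itself stays outside the timed region
do_readwrite
```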
> 
> As I said in my other mail, I can test any patch if needed.
> 
> Greetings and thanks for any help
> 
> --
> Antonio Larrosa
> Tecnologias Digitales Audiovisuales, S.L.
> http://www.tedial.com
> Parque Tecnologico de Andalucia . Málaga (Spain)
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/
-
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: iowait problems on 2.6, not on 2.4
  2004-05-28 22:45     ` iowait problems on 2.6, not on 2.4 Andrew Morton
@ 2004-05-30  0:46       ` Doug Ledford
  2004-05-30  4:52         ` Andrew Morton
  0 siblings, 1 reply; 4+ messages in thread
From: Doug Ledford @ 2004-05-30  0:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Antonio Larrosa Jiménez, linux-kernel,
	linux-scsi mailing list

On Fri, 2004-05-28 at 18:45, Andrew Morton wrote:
> Antonio Larrosa Jiménez <antlarr@tedial.com> wrote:
> >
> > On Thursday 27 May 2004 05:52, you wrote:
> > > Antonio Larrosa Jiménez <antlarr@tedial.com> wrote:
> > > > My next test will be to do the "dd tests" on one of the internal hard
> > > > disks and use it for the data instead of the external raid.
> > >
> > > That's a logical next step.  The reduced read bandwidth on the raid array
> > > should be fixed up before we can go any further.  I don't recall any
> > > reports of qlogic fc-scsi performance regressions though.
> > 
> > Ok, let's analyze that first.
> > 
> > The dd tests gave the following results:
> 
> Let me cc linux-scsi.
> 
> Guys: poke.  Does anyone know why this:
> 
>   The machine is a 4 cpu Pentium III (Cascades) system with four SCSI
>   SEAGATE ST336704 hard disks connected to an Adaptec AIC-7899P U160/m, and
>   an external RAID connected to a QLA2200/QLA2xxx FC-SCSI Host Bus Adapter. 
>   The machine has 1Gb RAM.
> 
> got all slow at reads?

My first guess would be read ahead values.  Try poking around with
those.  When using a hard disk vs. a raid array, it's easier to trigger
firmware read ahead since all reads go to a single physical device and
that in turn compensates for lack of reasonable OS read ahead.  On a
raid array, depending on the vendor, there may be next to no firmware
initiated read ahead and that can drastically reduce read performance on
sequential reads.


> > So yes, I suppose there's a regression on the qlogic fc-scsi module.

A regression, yes.  In the qlogic-fc driver?  I doubt it.  Performance
tuning is what I think this basically boils down to.

But, I could be wrong.  Give it a try and see what happens.  In the 2.4
kernels I would tell you to tweak /proc/sys/vm/{min,max}-readahead,
don't know if those two knobs still exist in 2.6 and if they have the
same effect.  Andrew?

-- 
  Doug Ledford <dledford@redhat.com>     919-754-3700 x44233
         Red Hat, Inc.
         1801 Varsity Dr.
         Raleigh, NC 27606



^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: iowait problems on 2.6, not on 2.4
  2004-05-30  0:46       ` Doug Ledford
@ 2004-05-30  4:52         ` Andrew Morton
  2004-05-31 11:24           ` Antonio Larrosa Jiménez
  0 siblings, 1 reply; 4+ messages in thread
From: Andrew Morton @ 2004-05-30  4:52 UTC (permalink / raw)
  To: Doug Ledford; +Cc: antlarr, linux-kernel, linux-scsi

Doug Ledford <dledford@redhat.com> wrote:
>
> But, I could be wrong.  Give it a try and see what happens.  In the 2.4
>  kernels I would tell you to tweak /proc/sys/vm/{min,max}-readahead,
>  don't know if those two knobs still exist in 2.6 and if they have the
>  same effect.  Andrew?

blockdev --setra N /dev/sda 			(N is in 512 byte units)

echo N > /sys/block/sda/queue/read_ahead_kb	(N is in kilobytes)

Also there was breakage (recently fixed) in which /dev/sdaN's readahead
setting was being inherited from the blockdev on which /dev resides.  But
reading regular files is OK.
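[The two knobs above use different units, which matters when comparing the numbers later in this thread. A quick sanity check of the conversion (the device names in Andrew's commands are examples; `--setra` takes 512-byte sectors, `read_ahead_kb` takes kilobytes, so kb = sectors / 2):]

```shell
# blockdev --setra N /dev/sda sets N in 512-byte sectors, while
# /sys/block/sda/queue/read_ahead_kb holds the same setting in KB.
sectors_to_kb() { echo $(( $1 * 512 / 1024 )); }

sectors_to_kb 256     # the 2.6.4 default reported below -> 128
sectors_to_kb 1024    # -> 512
```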


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: iowait problems on 2.6, not on 2.4
  2004-05-30  4:52         ` Andrew Morton
@ 2004-05-31 11:24           ` Antonio Larrosa Jiménez
  0 siblings, 0 replies; 4+ messages in thread
From: Antonio Larrosa Jiménez @ 2004-05-31 11:24 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Doug Ledford, linux-kernel, linux-scsi

On Sunday 30 May 2004 06:52, Andrew Morton wrote:
> Also there was breakage (recently fixed) in which /dev/sdaN's readahead
> setting was being inherited from the blockdev on which /dev resides.  But
> reading regular files is OK.
>

I don't think I have that fix applied here.

I couldn't find any read_ahead* nor *ra_* under /sys, but blockdev works.

Btw, by default it was using a read-ahead of 128KB (blockdev --getra returned 
256 sectors)

I re-run the dd read tests on reiserfs on the qla device (the one that took 
1m27s before). These are the results on 2.6.4 :

blockdev --setra 1024 /dev/sde : 28s
(but this is the dd test on a most probably unfragmented file, so it's not a 
value that can be used in real life)
blockdev --setra 512 /dev/sde : 51s (approx. what 2.4.21 took by default)
blockdev --setra 256 /dev/sde : 1m27s
blockdev --setra 128 /dev/sde : 2m9s
blockdev --setra 64 /dev/sde  : 2m55s
blockdev --setra 0 /dev/sde   : 13m33s
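[A sweep like the one above can be scripted. This sketch only prints the command sequence rather than executing it (pipe its output to sh as root to run it): the device node is taken from the message, but the mount point is an assumption, and since 2.6.4 predates /proc/sys/vm/drop_caches, a remount between runs is used to empty the page cache.]

```shell
#!/bin/sh
# DEV is from the message; MNT is an assumed mount point -- adjust both.
DEV=${DEV:-/dev/sde}
MNT=${MNT:-/mnt/raid}

sweep() {
    for ra in 1024 512 256 128 64 0; do
        echo "blockdev --setra $ra $DEV"
        # remount to drop cached pages (no drop_caches knob on 2.6.4)
        echo "umount $MNT && mount $DEV $MNT"
        echo "time dd if=$MNT/x of=/dev/null bs=1M count=2048"
    done
}
sweep
```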

On 2.4.21:

By default, /proc/sys/vm/min-readahead is 3 and /proc/sys/vm/max-readahead is 
127. I suppose those are Kb, since it didn't allow me to set a read ahead 
value over 255 when using blockdev.

blockdev --setra 1024 /dev/sde : n/a (BLKRASET: Invalid argument)
blockdev --setra 512 /dev/sde : n/a (BLKRASET: Invalid argument)
blockdev --setra 256 /dev/sde : n/a (BLKRASET: Invalid argument)
blockdev --setra 255 /dev/sde :  52s
blockdev --setra 0 /dev/sde   : 54s

As this was unexpected (it seems blockdev --setra is a no-op on 2.4.21?), I 
tried modifying /proc/sys/vm/min-readahead and max-readahead.

I set both to 512 (which should match blockdev --setra 1024 on 2.6.4) and got
25s reading

Setting both to 128kb (blockdev --setra 256), it took 49s.

Setting both to 0 (disabling read-ahead), it took 3m29s (compare that to 
13m33s on 2.6.4, so there's definitely some problem here).

Anything else I can do to help to find the problem?

--
Antonio Larrosa
Tecnologias Digitales Audiovisuales, S.L.
http://www.tedial.com
Parque Tecnologico de Andalucia . Málaga (Spain)

^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2004-05-31 11:31 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <200405261743.28111.antlarr@tedial.com>
     [not found] ` <20040526205225.7a0866aa.akpm@osdl.org>
     [not found]   ` <200405281516.41901.antlarr@tedial.com>
2004-05-28 22:45     ` iowait problems on 2.6, not on 2.4 Andrew Morton
2004-05-30  0:46       ` Doug Ledford
2004-05-30  4:52         ` Andrew Morton
2004-05-31 11:24           ` Antonio Larrosa Jiménez

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox