* Disk schedulers
@ 2008-02-14 16:21 Lukas Hejtmanek
From: Lukas Hejtmanek @ 2008-02-14 16:21 UTC (permalink / raw)
To: linux-kernel
Hello,
whom should I blame about disk schedulers?
I have the following setup:
1Gb network
2GB RAM
disk write speed about 20MB/s
If I'm scp'ing a file (about 500MB) from the network (which is faster than the
local disk), any process is totally unable to read anything from the local disk
until the scp finishes. It is not caused by low free memory: while scp'ing
I have 500MB of free memory (not cached or buffered).
I tried the cfq and anticipatory schedulers; neither makes a difference.
--
Lukáš Hejtmánek
^ permalink raw reply [flat|nested] 19+ messages in thread

* Re: Disk schedulers
From: Tejun Heo @ 2008-02-15 0:02 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: linux-kernel

Lukas Hejtmanek wrote:
> Hello,
>
> whom should I blame about disk schedulers?
>
> I have the following setup:
> 1Gb network
> 2GB RAM
> disk write speed about 20MB/s
>
> If I'm scping file (about 500MB) from the network (which is faster than the
> local disk), any process is totally unable to read anything from the local disk
> till the scp finishes. It is not caused by low free memory, while scping
> I have 500MB of free memory (not cached or buffered).
>
> I tried cfq and anticipatory scheduler, none is different.

Does deadline help?

--
tejun
* Re: Disk schedulers
From: Lukas Hejtmanek @ 2008-02-15 10:09 UTC (permalink / raw)
To: Tejun Heo; +Cc: linux-kernel

On Fri, Feb 15, 2008 at 09:02:31AM +0900, Tejun Heo wrote:
> > till the scp finishes. It is not caused by low free memory, while scping
> > I have 500MB of free memory (not cached or buffered).
> >
> > I tried cfq and anticipatory scheduler, none is different.
>
> Does deadline help?

Well, deadline is a little bit better. I'm trying to read from disk by
opening a maildir with 20000 mails in mutt. When I open that maildir, mutt
shows its progress. With the cfq or anticipatory scheduler, progress stays
at 0/20000 until the scp finishes. With deadline, progress had reached
150/20000 by the time the scp finished. So I would say it is better, but I
doubt it is OK.

--
Lukáš Hejtmánek
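[Editor's note: for readers following along, the scheduler comparison in
this thread can be done at runtime, per device, without rebooting. A
minimal sketch, assuming the disk in question is /dev/sda:]

```shell
# Show the available schedulers; the active one is in brackets,
# e.g. "noop anticipatory deadline [cfq]" on kernels of this era.
cat /sys/block/sda/queue/scheduler
# Switch to deadline (needs root; takes effect immediately).
echo deadline > /sys/block/sda/queue/scheduler
```

[The change lasts only until reboot; the elevator= boot parameter sets the
default persistently.]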
* Re: Disk schedulers
From: Jan Engelhardt @ 2008-02-15 14:42 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: linux-kernel

On Feb 14 2008 17:21, Lukas Hejtmanek wrote:
>Hello,
>
>whom should I blame about disk schedulers?

Also consider:
- DMA (e.g. only UDMA2 selected)
- an aging disk
* Re: Disk schedulers
From: Prakash Punnoor @ 2008-02-15 14:57 UTC (permalink / raw)
To: Jan Engelhardt; +Cc: Lukas Hejtmanek, linux-kernel

On Friday 15 February 2008, Jan Engelhardt wrote:
> On Feb 14 2008 17:21, Lukas Hejtmanek wrote:
> >Hello,
> >
> >whom should I blame about disk schedulers?
>
> Also consider
> - DMA (e.g. only UDMA2 selected)
> - aging disk

Nope. I also reported this problem _years_ ago, but not much has changed
since: large writes lead to read starvation.

--
Prakash Punnoor
* Re: Disk schedulers
From: Zan Lynx @ 2008-02-15 17:11 UTC (permalink / raw)
To: Prakash Punnoor; +Cc: Jan Engelhardt, Lukas Hejtmanek, linux-kernel

On Fri, 2008-02-15 at 15:57 +0100, Prakash Punnoor wrote:
> > Also consider
> > - DMA (e.g. only UDMA2 selected)
> > - aging disk
>
> Nope, I also reported this problem _years_ ago, but till now much hasn't
> changed. Large writes lead to read starvation.

Yes, I see this often myself. It's as if the disk IO queue (I set mine to
1024) fills up, and pdflush and friends can stuff write requests into it
much more quickly than any other program can issue read requests.

CFQ and ionice work very well up until iostat shows average IO queuing
above 1024 (where I set the queue depth).

--
Zan Lynx <zlynx@acm.org>
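[Editor's note: the "queue number" Zan refers to is the block layer's
per-device request limit. A sketch of where it lives, assuming /dev/sda;
changing it needs root:]

```shell
# The default was 128 on kernels of this era; larger values let more
# writes pile up ahead of any newly arriving read.
cat /sys/block/sda/queue/nr_requests
echo 1024 > /sys/block/sda/queue/nr_requests
```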
* Re: Disk schedulers
From: FD Cami @ 2008-02-15 21:32 UTC (permalink / raw)
To: Zan Lynx
Cc: Prakash Punnoor, Jan Engelhardt, Lukas Hejtmanek, linux-kernel, megaraidlinux

On Fri, 15 Feb 2008 10:11:26 -0700 Zan Lynx <zlynx@acm.org> wrote:
> Yes, I see this often myself. It's like the disk IO queue (I set mine
> to 1024) fills up, and pdflush and friends can stuff write requests into
> it much more quickly than any other programs can provide read requests.
>
> CFQ and ionice work very well up until iostat shows average IO queuing
> above 1024 (where I set the queue number).

I can confirm that as well. This is easily reproducible with, for example:
dd if=/dev/zero of=somefile bs=2048
After a short while, trying to read from the disks takes an awfully long
time, even if the dd process is ionice'd. What is worse is that other
drives attached to the same controller become unresponsive as well.

I use a Dell Perc 5/i (megaraid_sas) with:
* 2 SAS 15000 RPM drives, RAID1 => sda
* 4 SAS 15000 RPM drives, RAID5 => sdb
* 2 SATA 7200 RPM drives, RAID1 => sdc

Using dd or mkfs on sdb or sdc makes sda unresponsive as well. Is this
expected?

Cheers,
Francois
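[Editor's note: the dd above runs until the filesystem fills. A bounded
variant of the same reproduction; the file name and sizes are arbitrary
examples:]

```shell
# Write ~2GB -- roughly the original poster's RAM size -- then stop.
dd if=/dev/zero of=somefile bs=1M count=2048
# Meanwhile, in another terminal, time a read on a drive behind the
# same controller to observe the starvation:
time find /etc > /dev/null
```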
* Re: Disk schedulers 2008-02-15 17:11 ` Zan Lynx 2008-02-15 21:32 ` FD Cami @ 2008-02-16 16:13 ` Lukas Hejtmanek 2008-02-20 17:04 ` Zdenek Kabelac 2 siblings, 0 replies; 19+ messages in thread From: Lukas Hejtmanek @ 2008-02-16 16:13 UTC (permalink / raw) To: Zan Lynx; +Cc: Prakash Punnoor, Jan Engelhardt, linux-kernel On Fri, Feb 15, 2008 at 10:11:26AM -0700, Zan Lynx wrote: > Yes, I see this often myself. It's like the disk IO queue (I set mine > to 1024) fills up, and pdflush and friends can stuff write requests into > it much more quickly than any other programs can provide read requests. > > CFQ and ionice work very well up until iostat shows average IO queuing > above 1024 (where I set the queue number). I though that CFQ would maintain IO queues per process and pick up request in round robin from non-empty queues. Am I wrong? And if wrong, isn't it desired behavior for desktop? -- Lukáš Hejtmánek ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Disk schedulers
From: Zdenek Kabelac @ 2008-02-20 17:04 UTC (permalink / raw)
To: Zan Lynx; +Cc: Prakash Punnoor, Jan Engelhardt, Lukas Hejtmanek, linux-kernel

2008/2/15, Zan Lynx <zlynx@acm.org>:
> Yes, I see this often myself. It's like the disk IO queue (I set mine
> to 1024) fills up, and pdflush and friends can stuff write requests into
> it much more quickly than any other programs can provide read requests.
>
> CFQ and ionice work very well up until iostat shows average IO queuing
> above 1024 (where I set the queue number).

I should probably summarize my experience here as well. I'm using Qemu, and
inside of it I'm testing a kernel module which does a lot of disk copy
operations; its virtual disk is 8GB.

When my test is started, my system feels unresponsive a couple of times per
minute for nearly 10 minutes. Especially with a chat tool like pidgin, I'm
often left for 5 seconds without any visible refresh on screen (redraws,
typed keys, ...). Firefox shows similar symptoms. Obviously pidgin bears
some responsibility of its own here, because strace shows it trying to open
and read files it has already read a zillion times before :), but that's
another story.

I've tried many things: starting qemu with ionice -c0, using ionice -c2 for
pidgin, trying different io-schedulers, nicing qemu, and changing
swappiness to various values according to the tips & tricks I could find
around the web. Still I cannot get a properly running system with my qemu
test case; the system feels unresponsive in any application that needs to
touch my drive.

Does anyone have any ideas what I should try/test/check?

BTW, one interesting thing I've noticed is a very high kernel IPI count,
i.e.:
77,0% (3794,9) <kernel IPI> : Rescheduling interrupts
Sometimes this number approaches the 10000 barrier.

My machine is a 2.2GHz C2D, T61, 2GB, and the CPU is 50% idle while the
machine freezes. And yes, I can move the mouse all the time ;) and no, I'm
not out of RAM.

Zdenek
* Re: Disk schedulers
From: Lukas Hejtmanek @ 2008-02-15 15:59 UTC (permalink / raw)
To: Jan Engelhardt; +Cc: linux-kernel

On Fri, Feb 15, 2008 at 03:42:58PM +0100, Jan Engelhardt wrote:
> Also consider
> - DMA (e.g. only UDMA2 selected)
> - aging disk

That's not the case.

hdparm reports that udma5 is used, if hdparm is reliable with libata.

The disk is 3 months old, and the kernel does not report any errors. And it
has never behaved differently.

--
Lukáš Hejtmánek
* Re: Disk schedulers
From: Jeffrey E. Hundstad @ 2008-02-15 16:22 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: Jan Engelhardt, linux-kernel@vger.kernel.org

Lukas Hejtmanek,

I have to say that I've heard this subject before. The summary answer seems
to be that the kernel cannot guess the wishes of the user 100% of the time.
If you have a low-priority I/O task, use ionice(1) to set the priority of
that task so it doesn't nuke your high-priority task. I have no personal
stake in this answer, but I can report that for my high-I/O tasks it works
like a charm.

--
Jeffrey Hundstad

Lukas Hejtmanek wrote:
> On Fri, Feb 15, 2008 at 03:42:58PM +0100, Jan Engelhardt wrote:
>> Also consider
>> - DMA (e.g. only UDMA2 selected)
>> - aging disk
>
> it's not the case.
>
> hdparm reports udma5 is used, if it is reliable with libata.
>
> The disk is 3 months old, kernel does not report any errors. And it has never
> been different.
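[Editor's note: a minimal sketch of the ionice(1) advice above; the command,
file name, and PID are examples, and the idle class only has an effect
under the CFQ scheduler:]

```shell
# Run the bulk writer in the idle IO class, so it gets disk time
# only when no other process is asking for it.
ionice -c3 dd if=/dev/zero of=bigfile bs=1M count=2048
# Or demote an already-running process by PID:
ionice -c3 -p 12345
# Query a process's current IO class:
ionice -p 12345
```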
* Re: Disk schedulers
From: Roger Heflin @ 2008-02-15 17:36 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: Jan Engelhardt, linux-kernel

Lukas Hejtmanek wrote:
> On Fri, Feb 15, 2008 at 03:42:58PM +0100, Jan Engelhardt wrote:
>> Also consider
>> - DMA (e.g. only UDMA2 selected)
>> - aging disk
>
> it's not the case.
>
> hdparm reports udma5 is used, if it is reliable with libata.
>
> The disk is 3 months old, kernel does not report any errors. And it has never
> been different.

A new current IDE/SATA disk should do around 60MB/second. Check the min/max
bps rate listed on the disk company's site, divide by 8, and take maybe 80%
of that.

Also, you may consider using the -l option on the scp command to limit its
total bandwidth usage.

This behavior has been around at least 8 years (since 2.2): high levels of
writes significantly starve out reads, mainly because you can queue up
1000s of writes ahead of a read. When that read finishes, there are another
1000s of writes for the next read to get in line behind and wait for, and
this continues until the writes stop.

Roger
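[Editor's note: Roger's rule of thumb and the -l option, sketched with
made-up numbers; the 600 Mbit/s rate and the host/path names are
hypothetical:]

```shell
# Estimate sustainable throughput from a spec-sheet media rate:
# divide bits by 8 to get bytes, then derate to ~80%.
rate_mbit=600
est=$(( rate_mbit / 8 * 80 / 100 ))
echo "expect about ${est} MB/s sustained"
# scp's -l limit is in Kbit/s; ~10MB/s (half the 20MB/s disk) would be:
#   scp -l 80000 user@remote:/path/file /local/dir/
```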
* Re: Disk schedulers
From: Paulo Marques @ 2008-02-15 17:24 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: linux-kernel

Lukas Hejtmanek wrote:
> [...]
> If I'm scping file (about 500MB) from the network (which is faster than the
> local disk), any process is totally unable to read anything from the local disk
> till the scp finishes. It is not caused by low free memory, while scping
> I have 500MB of free memory (not cached or buffered).

If you want to take advantage of all that memory to buffer disk writes so
that reads can proceed better, you might want to tweak
/proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio to more
appropriate values (maybe also dirty_writeback_centisecs and
dirty_expire_centisecs).

You can read all about those tunables in Documentation/filesystems/proc.txt.

Just my 2 cents,

--
Paulo Marques - www.grupopie.com

"Very funny Scotty. Now beam up my clothes."
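[Editor's note: a sketch of inspecting and tweaking the tunables Paulo
names. The values below are illustrative, not recommendations; writing them
needs root, and they last only until reboot:]

```shell
# Current thresholds, as a percentage of memory that may be dirty:
cat /proc/sys/vm/dirty_ratio            # hard limit: writers start blocking
cat /proc/sys/vm/dirty_background_ratio # background writeback kicks in here
# Lower them so writeback starts sooner and less dirty data accumulates:
echo 10 > /proc/sys/vm/dirty_ratio
echo 5 > /proc/sys/vm/dirty_background_ratio
```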
* Re: Disk schedulers
From: Lukas Hejtmanek @ 2008-02-16 16:15 UTC (permalink / raw)
To: Paulo Marques; +Cc: linux-kernel

On Fri, Feb 15, 2008 at 05:24:52PM +0000, Paulo Marques wrote:
> If you want to take advantage of all that memory to buffer disk writes,
> so that the reads can proceed better, you might want to tweak your
> /proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio to more
> appropriate values. (maybe also dirty_writeback_centisecs and
> dirty_expire_centisecs)

I don't feel like having my whole memory eaten by a single file that will
never be read again, which makes caching it pretty useless. Instead, I
would like to see the scp slowed down so that other processes can also
access the disk. Why is this possible with the kernel process scheduler and
not with the IO scheduler?

--
Lukáš Hejtmánek
* Re: Disk schedulers
From: Pavel Machek @ 2008-02-16 17:20 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: linux-kernel

Hi!

> whom should I blame about disk schedulers?
>
> I have the following setup:
> 1Gb network
> 2GB RAM
> disk write speed about 20MB/s
>
> If I'm scping file (about 500MB) from the network (which is faster than the
> local disk), any process is totally unable to read anything from the local disk
> till the scp finishes. It is not caused by low free memory, while scping
> I have 500MB of free memory (not cached or buffered).
>
> I tried cfq and anticipatory scheduler, none is different.

Is cat /dev/zero > file enough to reproduce this?

ext3 filesystem?

Will cat /etc/passwd work while the machine is unresponsive?

Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
* Re: Disk schedulers
From: Lukas Hejtmanek @ 2008-02-20 18:48 UTC (permalink / raw)
To: Pavel Machek; +Cc: linux-kernel, zdenek.kabelac

On Sat, Feb 16, 2008 at 05:20:49PM +0000, Pavel Machek wrote:
> Is cat /dev/zero > file enough to reproduce this?

yes.

> ext3 filesystem?

yes.

> Will cat /etc/passwd work while machine is unresponsive?

yes. But find does not work:

time find /
/
/etc
/etc/manpath.config
/etc/update-manager
/etc/update-manager/release-upgrades
/etc/gshadow-
/etc/inputrc
/etc/openalrc
/etc/bonobo-activation
/etc/bonobo-activation/bonobo-activation-config.xml
/etc/gnome-vfs-2.0
/etc/gnome-vfs-2.0/modules
/etc/gnome-vfs-2.0/modules/obex-module.conf
/etc/gnome-vfs-2.0/modules/extra-modules.conf
/etc/gnome-vfs-2.0/modules/theme-method.conf
/etc/gnome-vfs-2.0/modules/font-method.conf
/etc/gnome-vfs-2.0/modules/default-modules.conf
^C

real    0m7.982s
user    0m0.003s
sys     0m0.000s

i.e., it took 8 seconds to list just 17 directory entries.

It looks like I have this problem:
http://www.linuxinsight.com/first_benchmarks_of_the_ext4_file_system.html#comment-619
(the last comment, titled "Sustained writes 2 or more times the amount of
memfree....")

--
Lukáš Hejtmánek
* Re: Disk schedulers
From: Giuliano Pochini @ 2008-02-21 23:50 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: Pavel Machek, linux-kernel, zdenek.kabelac

On Wed, 20 Feb 2008 19:48:42 +0100 Lukas Hejtmanek <xhejtman@ics.muni.cz> wrote:
> while find does not work:
> [...]
> i.e., it took 8 seconds to list just 17 dir entries.

It also happens when I'm writing to a slow external disk.

Documentation/block/biodoc.txt says:

"Per-queue granularity unplugging (still a Todo) may help reduce some of
the concerns with just a single tq_disk flush approach. Something like
blk_kick_queue() to unplug a specific queue (right away ?) or optionally,
all queues, is in the plan."

If I understand correctly, there is only one "plug" in common for all
devices. That may explain why, when one queue is full, access to other
devices is also blocked.

--
Giuliano.
* Re: Disk schedulers
From: Linda Walsh @ 2008-02-17 19:38 UTC (permalink / raw)
To: Lukas Hejtmanek; +Cc: linux-kernel

Lukas Hejtmanek wrote:
> whom should I blame about disk schedulers?
>
> I have the following setup:
> 1Gb network
> 2GB RAM
> disk write speed about 20MB/s
>
> If I'm scping file (about 500MB) from the network (which is faster than the
> local disk), any process is totally unable to read anything from the local disk
> till the scp finishes. It is not caused by low free memory, while scping
> I have 500MB of free memory (not cached or buffered).
>
> I tried cfq and anticipatory scheduler, none is different.
----
You didn't say anything about the number or speed of your processors, nor
about your hard disk's raw I/O ability. You also didn't mention the kernel
version, or whether you are using the new per-UID group CPU scheduler in
2.6.24 (which likes to default to 'on'; not a great choice for single-user,
desktop-type machines, if I understand its grouping policy).

Are you sure neither end of the copy is CPU-bound on ssh/scp
encrypt/decrypt calculations? It might not just be an inability to read
from disk, but low CPU availability. Scp can be a lot more CPU intensive
than you would expect... Just something to consider.

Linda
* Re: Disk schedulers
From: Bill Davidsen @ 2008-02-28 17:14 UTC (permalink / raw)
To: Linda Walsh; +Cc: Lukas Hejtmanek, linux-kernel

Linda Walsh wrote:
> Are you sure neither end of the copy is cpu-bound on ssh/scp
> encrypt/decrypt calculations? It might not just be inability
> to read from disk, but low cpu availability. Scp can be alot
> more CPU intensive than you would expect... Just something to
> consider...

Good point, and may I note that you *can* choose your encryption cipher. I
use 'blowfish' for normal operation, since I have some fairly slow machines
with a Gbit net between them.

--
Bill Davidsen <davidsen@tmr.com>
"We have more to fear from the bungling of the incompetent than from the
machinations of the wicked." - from Slashdot
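[Editor's note: a sketch of Bill's suggestion for OpenSSH of this vintage.
The host and file names are examples; blowfish-cbc is the SSHv2 spelling of
the cipher:]

```shell
# Copy with a cheaper cipher than the default:
scp -c blowfish-cbc user@host:bigfile .
# Compare raw cipher cost over the wire, independent of the disks:
time ssh -c blowfish-cbc host 'dd if=/dev/zero bs=1M count=512' > /dev/null
```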