From: "Rafał Bilski" <rafalbilski@interia.pl>
To: Dimitrios Apostolou <jimis@gmx.net>
Cc: linux-kernel@vger.kernel.org
Subject: Re: high system cpu load during intense disk i/o
Date: Sun, 05 Aug 2007 22:08:50 +0200 [thread overview]
Message-ID: <46B62E52.6060600@interia.pl> (raw)
In-Reply-To: <200708052142.14630.jimis@gmx.net>
> Hello and thanks for your reply.
Hi,
> The cron job that is running every 10 min on my system is mpop (a
> fetchmail-like program) and another running every 5 min is mrtg. Both
> normally finish within 1-2 seconds.
>
> The fact that these simple cron jobs don't finish ever is certainly because of
> the high system CPU load. If you see the two_discs_bad.txt which I attached
> on my original message, you'll see that *vmlinux*, and specifically the
> *scheduler*, take up most time.
>
> And the fact that this happens only when running two i/o processes but when
> running only one everything is absolutely snappy (not at all slow, see
> one_disc.txt), makes me sure that this is a kernel bug. I'd be happy to help
> but I need some guidance to pinpoint the problem.
OK, but first can you try to fix your cron job? Just make sure that if mpop
is already running it won't be started again, e.g. with "pgrep mpop" and a
test on "$?".
I don't remember exactly, but some time ago somebody had a problem with too
large disk buffers and sync(). Check the LKML archives. mpop does an fsync().
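A minimal wrapper along those lines could look like this (a sketch, not a tested fix; the echo stands in for the real mpop invocation, and "mpop" is the process name from this thread):

```shell
#!/bin/sh
# Hypothetical cron wrapper: only start mpop if no instance is running.
if pgrep -x mpop >/dev/null 2>&1; then
    # A previous run is still alive; skip this one instead of piling up.
    exit 0
fi
echo "starting mpop"    # in the real wrapper: exec mpop
```

Pointing the crontab entry at this wrapper instead of mpop itself keeps stuck runs from accumulating.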
You have a VIA chipset. Me too. It isn't very reliable. Don't you have
something like "error { d0 BUSY }" in dmesg? That would explain the high CPU
load: after such an error DMA is no longer used and the disk falls back to
PIO mode. On a two-disk system the load is about 4.0 in this case, and a
simple program takes hours to complete while heavy I/O is in progress. Btw.
SLUB seems to behave better in this situation (at least up to a load of 8.0).
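A quick way to check for that failure mode is to scan the kernel log (a sketch; the grep patterns are guesses based on the error string above, and real messages vary between kernel versions):

```shell
#!/bin/sh
# Sketch: look for IDE errors and DMA-related messages in the kernel log.
# If the driver dropped to PIO after an error, it is usually logged here.
dmesg | grep -i -e 'BUSY' -e 'DMA' || echo "no DMA-related messages found"
```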
> Thanks,
> Dimitris
Regards
Rafał
Thread overview: 26+ messages (2007-08-05 20:09 UTC)
2007-08-03 16:03 high system cpu load during intense disk i/o Dimitrios Apostolou
2007-08-05 16:03 ` Dimitrios Apostolou
2007-08-05 17:58 ` Rafał Bilski
2007-08-05 18:42 ` Dimitrios Apostolou
2007-08-05 20:08 ` Rafał Bilski [this message]
2007-08-06 16:14 ` Rafał Bilski
2007-08-06 19:18 ` Dimitrios Apostolou
2007-08-06 19:48 ` Alan Cox
2007-08-07 0:40 ` Dimitrios Apostolou
2007-08-07 0:37 ` Alan Cox
2007-08-07 13:15 ` Dimitrios Apostolou
2007-08-06 22:12 ` Rafał Bilski
2007-08-07 0:49 ` Dimitrios Apostolou
2007-08-07 9:03 ` Rafał Bilski
2007-08-07 9:43 ` Dimitrios Apostolou
2007-08-06 1:28 ` Andrew Morton
2007-08-06 14:20 ` Dimitrios Apostolou
2007-08-06 17:33 ` Andrew Morton
2007-08-06 19:27 ` Dimitrios Apostolou
2007-08-06 20:04 ` Dimitrios Apostolou
2007-08-06 16:09 ` Dimitrios Apostolou
2007-08-07 14:50 ` Dimitrios Apostolou
2007-08-08 19:08 ` Rafał Bilski
2007-08-09 8:17 ` Dimitrios Apostolou
2007-08-10 7:06 ` Rafał Bilski
2007-08-17 23:19 ` Dimitrios Apostolou