public inbox for linux-kernel@vger.kernel.org
From: Dimitrios Apostolou <jimis@gmx.net>
To: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: "Rafał Bilski" <rafalbilski@interia.pl>, linux-kernel@vger.kernel.org
Subject: Re: high system cpu load during intense disk i/o
Date: Tue, 07 Aug 2007 02:40:07 +0200
Message-ID: <46B7BF67.8010506@gmx.net>
In-Reply-To: <20070806204853.6a693c4b@the-village.bc.nu>

Hi Alan,

Alan Cox wrote:
>>> In your oprofile output I find "acpi_pm_read" particularly interesting. 
>>> Unlike other VIA chipsets I know, yours doesn't use VLink to 
>>> connect the northbridge to the southbridge. Instead, the PCI bus connects 
>>> the two. As you probably know, the maximal PCI throughput is 133 MiB/s 
>>> in theory; in practice, probably less.
> 
> acpi_pm_read is capable of disappearing into SMM traps, which will make
> it look very slow.

What is an SMM trap? I googled a bit but didn't find a clear explanation...

> 
>> about 15 MB/s for both disks. When reading I get about 30 MB/s, again 
>> from both disks. The other disk, the small one, is mostly idle, except 
>> for writing little bits and bytes now and then. Since the problem 
>> occurs when writing, 15 MB/s is, I think, just too little for the PCI bus.
> 
> It's about right for some of the older VIA chipsets, but if you are seeing
> speed loss then we need to know precisely which kernels the speed dropped
> at. Could be there is an I/O scheduling issue your system shows up, or
> some kind of PCI bus contention when both disks are active at once.

I am sure throughput kept diminishing little by little across many 2.6 
releases, and that it wasn't a major regression at one specific version. 
Unfortunately I cannot back up my words with measurements from older 
kernels right now, since the system is hard to boot with them (they 
predate the current udev and glibc). However, I promise I'll test in the 
future (probably using old live CDs) and come back with measurements.

> 
>> I have been ignoring these performance regressions because there were 
>> no stability problems until now. So could it be that I'm reaching the 
>> 20 MB/s driver limit and some requests take too long to be served?
> 
> Nope.

The reason I'm talking about a "software driver limit" is that I am 
sure of a few facts:
- The disks can reach much higher speeds (60 MB/s on other systems with udma5).
- The chipset on this specific motherboard can reach much higher 
throughput, as was measured with old kernels.
- No cable problems (the cables have been changed), and no strange dmesg output.

So what is left? Probably only the corresponding kernel module.


Thanks,
Dimitris


Thread overview: 26+ messages
2007-08-03 16:03 high system cpu load during intense disk i/o Dimitrios Apostolou
2007-08-05 16:03 ` Dimitrios Apostolou
2007-08-05 17:58   ` Rafał Bilski
2007-08-05 18:42     ` Dimitrios Apostolou
2007-08-05 20:08       ` Rafał Bilski
2007-08-06 16:14       ` Rafał Bilski
2007-08-06 19:18         ` Dimitrios Apostolou
2007-08-06 19:48           ` Alan Cox
2007-08-07  0:40             ` Dimitrios Apostolou [this message]
2007-08-07  0:37               ` Alan Cox
2007-08-07 13:15                 ` Dimitrios Apostolou
2007-08-06 22:12           ` Rafał Bilski
2007-08-07  0:49             ` Dimitrios Apostolou
2007-08-07  9:03               ` Rafał Bilski
2007-08-07  9:43                 ` Dimitrios Apostolou
2007-08-06  1:28   ` Andrew Morton
2007-08-06 14:20     ` Dimitrios Apostolou
2007-08-06 17:33       ` Andrew Morton
2007-08-06 19:27         ` Dimitrios Apostolou
2007-08-06 20:04         ` Dimitrios Apostolou
2007-08-06 16:09     ` Dimitrios Apostolou
2007-08-07 14:50 ` Dimitrios Apostolou
2007-08-08 19:08   ` Rafał Bilski
2007-08-09  8:17     ` Dimitrios Apostolou
2007-08-10  7:06       ` Rafał Bilski
2007-08-17 23:19         ` Dimitrios Apostolou
