From: Grant Grundler <iod00d@hp.com>
To: Andi Kleen <ak@suse.de>
Cc: ishii.hironobu@jp.fujitsu.com, linux-kernel@vger.kernel.org,
linux-ia64@vger.kernel.org
Subject: Re: [RFC/PATCH, 1/4] readX_check() performance evaluation
Date: Wed, 28 Jan 2004 11:09:23 -0800 [thread overview]
Message-ID: <20040128190923.GA6333@cup.hp.com> (raw)
In-Reply-To: <20040128184137.616b6425.ak@suse.de>
On Wed, Jan 28, 2004 at 06:41:37PM +0100, Andi Kleen wrote:
> > I could be wrong. Exception handling is ugly.
...
> One big problem is how to get rid of the spinlocks after the exception though
> (hardware access usually happens inside a spinlock)
>
> I presume you could return a magic value (all ones), but then you still
> have to make sure the driver doesn't break when that happens.
Yes - any proposal is going to require reviewing all PIO reads
and how each read's return value is consumed (or discarded).
> That would
> likely require testing for that value on every read access and make
> the code similarly ugly and difficult to write as with Linus'
> explicit checking model.
yeah. My hope was it would be less invasive.
But more changes are probably needed than I expected.
...
> In short this stuff
> probably only makes sense when you're a system vendor who sells
> support contracts for whole systems including hardware support.
> For the normal linux model where software is independent from hardware
> (and hardware is usually crappy) it just doesn't work very well.
While ia64/parisc platforms have HW support for this,
I totally agree it won't work well for most (x86) platforms.
I'd like to reduce the burden on the driver writers for common
drivers (e.g., MPT) used on "vanilla" x86.
And as I pointed out before, the Linux kernel needs a review of its
panic() calls to see if some of them could easily be eliminated. The
general robustness issues (e.g., pci_map_single() panicking on failure)
aren't prerequisites for IO error checking, but the latter seems less
useful without the former.
I'd like to defend the pci_map_single() interface: it was designed
to reduce complexity at the cost of robustness.
I think that was a fair trade-off at the time, and it sounds like
the time has come for a different trade-off.
thanks,
grant
> -Andi
Thread overview: 20+ messages
2004-01-28 1:54 [RFC/PATCH, 1/4] readX_check() performance evaluation Hironobu Ishii
2004-01-28 17:20 ` Grant Grundler
2004-01-28 17:41 ` Andi Kleen
2004-01-28 18:31 ` David Mosberger
2004-01-28 18:52 ` Andi Kleen
2004-01-28 19:24 ` David Mosberger
2004-01-28 19:39 ` Andi Kleen
2004-01-28 19:48 ` David Mosberger
2004-01-28 20:01 ` Andi Kleen
2004-01-28 23:35 ` David Mosberger
2004-02-16 10:19 ` Pavel Machek
2004-01-29 8:23 ` Matthias Fouquet-Lapar
2004-01-29 19:28 ` David Mosberger
2004-01-29 20:16 ` Matthias Fouquet-Lapar
2004-01-29 21:09 ` David Mosberger
2004-01-29 22:20 ` Matthias Fouquet-Lapar
2004-01-28 19:09 ` Grant Grundler [this message]
2004-01-28 19:17 ` Andi Kleen
2004-01-28 21:14 ` Grant Grundler
2004-01-28 21:39 ` Andi Kleen