* Re: [Fwd: [RFT] major libata update]
[not found] ` <1147789098.3505.19.camel@mulgrave.il.steeleye.com>
@ 2006-05-16 15:41 ` Jeff Garzik
2006-05-16 15:51 ` James Bottomley
2006-05-16 18:15 ` Luben Tuikov
0 siblings, 2 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 15:41 UTC (permalink / raw)
To: James Bottomley
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
(CC's restored)
James Bottomley wrote:
> 1) your host_eh_scheduled logic looks wrong. It seems to me that you
> can miss the wakeup if the host is busy? I also don't see a need to
I can't see a case _in libata operation_ where a set of circumstances
arises that causes missed wakeups, can you elaborate?
> move the prototype out of scsi_priv.h ... it should only be used by
> transport classes, anyway.
We're talking about all ->eh_strategy_handler() users, which is a valid
EH API for an LLDD to choose. Granted libata is really the only one
right now.
Long term, ->eh_strategy_handler and transport classes are block layer
not SCSI level anyway, so scsi_priv.h is clearly inappropriate.
> 2) This scsi_req_abort_cmd() is fundamentally the wrong logic.
> Everything else is communicated back as a result code from the command
> in done(). This should be no different ... A status return of
> DID_FAILED which scsi_decide_disposition() always translates to FAILED
> would seem to do exactly what you want without all the overhead.
Inigo sez[1]: I do not think "fundamentally wrong" means what you think
it means.
You miss the fact that the timer may have already fired, in which
completing a command gets you...... not a damned thing. scsi_done()
will simply return, if the timeout has fired. This has always been an
annoying problem to work around.
scsi_req_abort_cmd() may perhaps be misnamed, but it GUARANTEES a set of
operations which libata EH wants. Certainly, if we can guarantee this
set of conditions another way, we are open to any alternate path.
Jeff
[1] from one of my favorite movies. search
http://us.imdb.com/title/tt0093779/quotes for "INCONCEIVABLE"
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 15:41 ` [Fwd: [RFT] major libata update] Jeff Garzik
@ 2006-05-16 15:51 ` James Bottomley
2006-05-16 16:06 ` Jeff Garzik
` (3 more replies)
2006-05-16 18:15 ` Luben Tuikov
1 sibling, 4 replies; 45+ messages in thread
From: James Bottomley @ 2006-05-16 15:51 UTC (permalink / raw)
To: Jeff Garzik
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
On Tue, 2006-05-16 at 11:41 -0400, Jeff Garzik wrote:
> I can't see a case _in libata operation_ where a set of circumstances
> arises that causes missed wakeups, can you elaborate?
This is scsi_eh_wakeup():
void scsi_eh_wakeup(struct Scsi_Host *shost)
{
if (shost->host_busy == shost->host_failed) {
wake_up_process(shost->ehandler);
so if you try a wakeup with no failed commands and the host still busy,
nothing happens.
> > move the prototype out of scsi_priv.h ... it should only be used by
> > transport classes, anyway.
>
> We're talking about all ->eh_strategy_handler() users, which is a valid
> EH API for an LLDD to choose. Granted libata is really the only one
> right now.
We're busy revoking the LLDD driver, so in future it will be transport
classes only.
> Long term, ->eh_strategy_handler and transport classes are block layer
> not SCSI level anyway, so scsi_priv.h is clearly inappropriate.
That can be sorted out if someone actually gets around to moving error
handling to the block level. In the meantime, it's SCSI that we're
discussing.
> > 2) This scsi_req_abort_cmd() is fundamentally the wrong logic.
> > Everything else is communicated back as a result code from the command
> > in done(). This should be no different ... A status return of
> > DID_FAILED which scsi_decide_disposition() always translates to FAILED
> > would seem to do exactly what you want without all the overhead.
>
> Inigo sez[1]: I do not think "fundamentally wrong" means what you think
> it means.
>
> You miss the fact that the timer may have already fired, in which
> completing a command gets you...... not a damned thing. scsi_done()
> will simply return, if the timeout has fired. This has always been an
> annoying problem to work around.
No ... in that case the eh is already active, and your API does this:
void scsi_req_abort_cmd(struct scsi_cmnd *cmd)
{
if (!scsi_delete_timer(cmd))
return;
^^^^^^^
scsi_times_out(cmd);
}
Which likewise does nothing if the timer has already fired, so they both
have the same effect.
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 15:51 ` James Bottomley
@ 2006-05-16 16:06 ` Jeff Garzik
2006-05-16 16:30 ` James Bottomley
2006-05-16 16:08 ` Tejun Heo
` (2 subsequent siblings)
3 siblings, 1 reply; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 16:06 UTC (permalink / raw)
To: James Bottomley
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> No ... in that case the eh is already active, and your API does this:
>
> void scsi_req_abort_cmd(struct scsi_cmnd *cmd)
> {
> if (!scsi_delete_timer(cmd))
> return;
> ^^^^^^^
> scsi_times_out(cmd);
> }
>
> Which likewise does nothing if the timer has already fired, so they both
> have the same effect.
Sigh. They clearly do not have the same effect, because the above code
guarantees that a timeout is forced, regardless of whether the timer has
fired or not. That in turn guarantees that the timeout callback
(->eh_timed_out) is called, and the cmd is in a very specific state.
Completion-or-timeout has none of these attributes.
Any alternative is forced to deal with two very different command, and
EH, states... to achieve the same eventual result. Thus, the code
presented is the one of least complexity, AFAICS.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 15:51 ` James Bottomley
2006-05-16 16:06 ` Jeff Garzik
@ 2006-05-16 16:08 ` Tejun Heo
2006-05-16 16:13 ` Tejun Heo
2006-05-16 16:29 ` James Bottomley
2006-05-16 16:12 ` [Fwd: [RFT] major libata update] Jeff Garzik
2006-05-16 18:28 ` Luben Tuikov
3 siblings, 2 replies; 45+ messages in thread
From: Tejun Heo @ 2006-05-16 16:08 UTC (permalink / raw)
To: James Bottomley
Cc: Jeff Garzik, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Tue, 2006-05-16 at 11:41 -0400, Jeff Garzik wrote:
>> I can't see a case _in libata operation_ where a set of circumstances
>> arises that causes missed wakeups, can you elaborate?
>
> This is scsi_eh_wakeup():
>
> void scsi_eh_wakeup(struct Scsi_Host *shost)
> {
> if (shost->host_busy == shost->host_failed) {
> wake_up_process(shost->ehandler);
>
> so if you try a wakeup with no failed commands and the host still busy,
> nothing happens.
It's handled the same way shost->host_failed is handled.
scsi_device_unbusy() wakes it up when the condition is met.
void scsi_device_unbusy(struct scsi_device *sdev)
{
struct Scsi_Host *shost = sdev->host;
unsigned long flags;
spin_lock_irqsave(shost->host_lock, flags);
shost->host_busy--;
if (unlikely(scsi_host_in_recovery(shost) &&
(shost->host_failed || shost->host_eh_scheduled)))
scsi_eh_wakeup(shost);
spin_unlock(shost->host_lock);
spin_lock(sdev->request_queue->queue_lock);
sdev->device_busy--;
spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
}
[--snip--]
>>> 2) This scsi_req_abort_cmd() is fundamentally the wrong logic.
>>> Everything else is communicated back as a result code from the command
>>> in done(). This should be no different ... A status return of
>>> DID_FAILED which scsi_decide_disposition() always translates to FAILED
>>> would seem to do exactly what you want without all the overhead.
>> Inigo sez[1]: I do not think "fundamentally wrong" means what you think
>> it means.
Currently, there is no reliable way to trigger DID_FAILED
unconditionally. I thought about adding some host code or whatever to
force it but it felt too hackish and went with the
scsi_eh_schedule_scmd(). Then, Luben suggested scsi_req_abort_cmd(), so
that's what I've ended up with.
>> You miss the fact that the timer may have already fired, in which
>> completing a command gets you...... not a damned thing. scsi_done()
>> will simply return, if the timeout has fired. This has always been an
>> annoying problem to work around.
>
> No ... in that case the eh is already active, and your API does this:
>
> void scsi_req_abort_cmd(struct scsi_cmnd *cmd)
> {
> if (!scsi_delete_timer(cmd))
> return;
> ^^^^^^^
> scsi_times_out(cmd);
> }
>
> Which likewise does nothing if the timer has already fired, so they both
> have the same effect.
--
tejun
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 15:51 ` James Bottomley
2006-05-16 16:06 ` Jeff Garzik
2006-05-16 16:08 ` Tejun Heo
@ 2006-05-16 16:12 ` Jeff Garzik
2006-05-16 16:38 ` James Bottomley
2006-05-16 18:28 ` Luben Tuikov
3 siblings, 1 reply; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 16:12 UTC (permalink / raw)
To: James Bottomley
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Tue, 2006-05-16 at 11:41 -0400, Jeff Garzik wrote:
>> I can't see a case _in libata operation_ where a set of circumstances
>> arises that causes missed wakeups, can you elaborate?
>
> This is scsi_eh_wakeup():
>
> void scsi_eh_wakeup(struct Scsi_Host *shost)
> {
> if (shost->host_busy == shost->host_failed) {
> wake_up_process(shost->ehandler);
>
> so if you try a wakeup with no failed commands and the host still busy,
> nothing happens.
Clearly. And where in the code do you see that this condition will strike?
If we are talking about impossible runtime conditions, then the
objection is academic.
>>> move the prototype out of scsi_priv.h ... it should only be used by
>>> transport classes, anyway.
>> We're talking about all ->eh_strategy_handler() users, which is a valid
>> EH API for an LLDD to choose. Granted libata is really the only one
>> right now.
>
> We're busy revoking the LLDD driver, so in future it will be transport
> classes only.
>
>> Long term, ->eh_strategy_handler and transport classes are block layer
>> not SCSI level anyway, so scsi_priv.h is clearly inappropriate.
>
> That can be sorted out if someone actually gets around to moving error
> handling to the block level. In the meantime, it's SCSI that we're
> discussing.
Its an API-which-only-libata-uses that we're discussing. And because
its moving to the block layer, its also a
temporary-API-which-only-libata-uses.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:08 ` Tejun Heo
@ 2006-05-16 16:13 ` Tejun Heo
2006-05-16 16:29 ` James Bottomley
1 sibling, 0 replies; 45+ messages in thread
From: Tejun Heo @ 2006-05-16 16:13 UTC (permalink / raw)
To: James Bottomley
Cc: Jeff Garzik, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
Tejun Heo wrote:
[--snip--]
> Currently, there is no reliable way to trigger DID_FAILED
> unconditionally. I thought about adding some host code or whatever to
> force it but it felt too hackish and went with the
> scsi_eh_schedule_scmd(). Then, Luben suggested scsi_req_abort_cmd(), so
> that's what I've ended up with.
Oops, wasn't thinking straight. Time to go bad. The obstacle I
couldn't avoid was the forced scmd timeout in scsi_softirq_done().
--
tejun
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:08 ` Tejun Heo
2006-05-16 16:13 ` Tejun Heo
@ 2006-05-16 16:29 ` James Bottomley
2006-05-16 16:37 ` Jeff Garzik
2006-05-16 16:39 ` Tejun Heo
1 sibling, 2 replies; 45+ messages in thread
From: James Bottomley @ 2006-05-16 16:29 UTC (permalink / raw)
To: Tejun Heo
Cc: Jeff Garzik, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
On Wed, 2006-05-17 at 01:08 +0900, Tejun Heo wrote:
> It's handled the same way shost->host_failed is handled.
> scsi_device_unbusy() wakes it up when the condition is met.
That's the piece I hadn't spotted, yet ... thanks. That will do it.
could you move ata_schedule_scsi_eh to scsi_error (with the appropriate
API rename). That way the SCSI state model changes aren't in libata
(and no-one else who wants to use this has to open code the
ata_schedule_scsi_eh API).
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:06 ` Jeff Garzik
@ 2006-05-16 16:30 ` James Bottomley
2006-05-16 16:39 ` Jeff Garzik
2006-05-16 21:32 ` Luben Tuikov
0 siblings, 2 replies; 45+ messages in thread
From: James Bottomley @ 2006-05-16 16:30 UTC (permalink / raw)
To: Jeff Garzik
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
On Tue, 2006-05-16 at 12:06 -0400, Jeff Garzik wrote:
> Sigh. They clearly do not have the same effect, because the above code
> guarantees that a timeout is forced, regardless of whether the timer has
> fired or not. That in turn guarantees that the timeout callback
> (->eh_timed_out) is called, and the cmd is in a very specific state.
the API claims to be forcibly aborting a command, which is *not* a
timeout ... trying to pretend to the midlayer that it is is the wrong
processing model. You may choose to call this API because of a class
internal timeout, but you don't need the callback notification that it
is a timeout in this case, you already know it is.
> Completion-or-timeout has none of these attributes.
>
> Any alternative is forced to deal with two very different command, and
> EH, states... to achieve the same eventual result. Thus, the code
> presented is the one of least complexity, AFAICS.
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:29 ` James Bottomley
@ 2006-05-16 16:37 ` Jeff Garzik
2006-05-16 16:39 ` Tejun Heo
1 sibling, 0 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 16:37 UTC (permalink / raw)
To: James Bottomley
Cc: Tejun Heo, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Wed, 2006-05-17 at 01:08 +0900, Tejun Heo wrote:
>> It's handled the same way shost->host_failed is handled.
>> scsi_device_unbusy() wakes it up when the condition is met.
>
> That's the piece I hadn't spotted, yet ... thanks. That will do it.
>
> could you move ata_schedule_scsi_eh to scsi_error (with the appropriate
> API rename). That way the SCSI state model changes aren't in libata
> (and no-one else who wants to use this has to open code the
> ata_schedule_scsi_eh API).
ACK
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:12 ` [Fwd: [RFT] major libata update] Jeff Garzik
@ 2006-05-16 16:38 ` James Bottomley
2006-05-16 16:57 ` Jeff Garzik
0 siblings, 1 reply; 45+ messages in thread
From: James Bottomley @ 2006-05-16 16:38 UTC (permalink / raw)
To: Jeff Garzik
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
On Tue, 2006-05-16 at 12:12 -0400, Jeff Garzik wrote:
> Its an API-which-only-libata-uses that we're discussing. And because
> its moving to the block layer, its also a
> temporary-API-which-only-libata-uses.
OK ... this may be the root of the problem. I really would like libata
to migrate to being block only ... especially as PATA looks to be trying
to follow you into the SCSI subsystem. However, this has been the
statement for the past two years (at least), and really, few
enhancements have been made to block that you need to make good on this.
I think one of the things we'll try to find time to do at the storage
summit is to take a hard look at block to see exactly what has to be
added to make libata solely dependent upon it.
However, the bottom line is that if you want to modify the *SCSI* API
then you follow the same process as everyone else (i.e. demonstrate
justification and utility and worry about long lived maintainability of
the API).
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:30 ` James Bottomley
@ 2006-05-16 16:39 ` Jeff Garzik
2006-05-16 21:55 ` Luben Tuikov
2006-05-16 21:32 ` Luben Tuikov
1 sibling, 1 reply; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 16:39 UTC (permalink / raw)
To: James Bottomley
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Tue, 2006-05-16 at 12:06 -0400, Jeff Garzik wrote:
>> Sigh. They clearly do not have the same effect, because the above code
>> guarantees that a timeout is forced, regardless of whether the timer has
>> fired or not. That in turn guarantees that the timeout callback
>> (->eh_timed_out) is called, and the cmd is in a very specific state.
>
> the API claims to be forcibly aborting a command, which is *not* a
> timeout ... trying to pretend to the midlayer that it is is the wrong
> processing model. You may choose to call this API because of a class
> internal timeout, but you don't need the callback notification that it
> is a timeout in this case, you already know it is.
I can certainly agree the name may not be the best choice. Naming based
on implementation, it could be scsi_force_timeout_cmd() or somesuch.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:29 ` James Bottomley
2006-05-16 16:37 ` Jeff Garzik
@ 2006-05-16 16:39 ` Tejun Heo
2006-05-16 16:50 ` James Bottomley
1 sibling, 1 reply; 45+ messages in thread
From: Tejun Heo @ 2006-05-16 16:39 UTC (permalink / raw)
To: James Bottomley
Cc: Jeff Garzik, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Wed, 2006-05-17 at 01:08 +0900, Tejun Heo wrote:
>> It's handled the same way shost->host_failed is handled.
>> scsi_device_unbusy() wakes it up when the condition is met.
>
> That's the piece I hadn't spotted, yet ... thanks. That will do it.
>
> could you move ata_schedule_scsi_eh to scsi_error (with the appropriate
> API rename). That way the SCSI state model changes aren't in libata
> (and no-one else who wants to use this has to open code the
> ata_schedule_scsi_eh API).
>
I certainly can, and it was done that way first time around. Please
note the following discussion.
http://thread.gmane.org/gmane.linux.scsi/23853/focus=9760
Luben objected the interface made public because SCSI host is not
supposed to know about exception conditions which are not associated
with ITL nor ITLQ nexus. Thus, I made it a temporary measure only for
libata, which is planned to move out.
So, SCSI contains only the necessary bits required to implement the
feature and libata open-codes the rest. As it's not an exported
interface, no other SCSI driver is supposed to use it and the SCSI
modifications can be easily removed after libata moves out.
As long as libata can do EH not associated with scmd or device, I'm okay
either way and think it's your call. So, considering the above
discussion, do you want it to be a generic SCSI interface?
Thanks.
--
tejun
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:39 ` Tejun Heo
@ 2006-05-16 16:50 ` James Bottomley
2006-05-16 17:07 ` Tejun Heo
0 siblings, 1 reply; 45+ messages in thread
From: James Bottomley @ 2006-05-16 16:50 UTC (permalink / raw)
To: Tejun Heo
Cc: Jeff Garzik, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
On Wed, 2006-05-17 at 01:39 +0900, Tejun Heo wrote:
> I certainly can, and it was done that way first time around. Please
> note the following discussion.
>
> http://thread.gmane.org/gmane.linux.scsi/23853/focus=9760
>
> Luben objected the interface made public because SCSI host is not
> supposed to know about exception conditions which are not associated
> with ITL nor ITLQ nexus. Thus, I made it a temporary measure only for
> libata, which is planned to move out.
His objection is still valid. However, as the balance of evils, I
think, if you have to do this, it's better to contain it in a way where
it's obvious what's being done. Plus you don't want someone to modify
the host state model and suddenly find libata doesn't work anymore
because they failed to spot that it needed to change as well ...
> So, SCSI contains only the necessary bits required to implement the
> feature and libata open-codes the rest. As it's not an exported
> interface, no other SCSI driver is supposed to use it and the SCSI
> modifications can be easily removed after libata moves out.
>
> As long as libata can do EH not associated with scmd or device, I'm okay
> either way and think it's your call. So, considering the above
> discussion, do you want it to be a generic SCSI interface?
Yes, but in scsi_priv.h, please ... I can actually think of another use
for it in terms of getting the SG reset handler to work properly.
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:38 ` James Bottomley
@ 2006-05-16 16:57 ` Jeff Garzik
2006-05-17 7:37 ` Jens Axboe
0 siblings, 1 reply; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 16:57 UTC (permalink / raw)
To: James Bottomley
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds, Jens Axboe
James Bottomley wrote:
> On Tue, 2006-05-16 at 12:12 -0400, Jeff Garzik wrote:
>> Its an API-which-only-libata-uses that we're discussing. And because
>> its moving to the block layer, its also a
>> temporary-API-which-only-libata-uses.
>
> OK ... this may be the root of the problem. I really would like libata
> to migrate to being block only ... especially as PATA looks to be trying
> to follow you into the SCSI subsystem. However, this has been the
> statement for the past two years (at least), and really, few
> enhancements have been made to block that you need to make good on this.
> I think one of the things we'll try to find time to do at the storage
> summit is to take a hard look at block to see exactly what has to be
> added to make libata solely dependent upon it.
100% agreed...
The general list, off the top of my head:
* objects: storage message, storage device, storage host, and the
requisite interconnections
* queuecommand-style API
* EH thread(s)
* timers, for command timeouts
* SCSI-style MLqueue and state stuff, i.e. ability to return "device
busy", "host busy", "retry this command", ...
And once libata is happy at the block layer, move SCSI to using this
stuff too :)
FWIW, as ATAPI continues to align ever more closely with SCSI MMC, I
strongly feel that libata should continue to use the "SCSI MMC device
class driver" (i.e. sr), and other applicable SCSI device class drivers
(st, ...).
Like modern SAS controllers, which support plugging SATA drives, libata
must mix SCSI and non-SCSI. Thus, most of the above infrastructure
basically _must_ live outside the SCSI layer, if we are going to
properly support modern controllers in a modular fashion.
> However, the bottom line is that if you want to modify the *SCSI* API
> then you follow the same process as everyone else (i.e. demonstrate
> justification and utility and worry about long lived maintainability of
> the API).
Well...
* ->eh_strategy_handler() API was unusable before libata (evidence: all
the bugs we've fixed)
* there were zero users before libata (evidence: grep past 2.4.0)
* there is only one user today, libata (evidence: grep)
So you'll pardon my skepticism when newly elevated standards for this
API suddenly appear. libata is just trying to Do What Needs To Be Done,
And Nothing More.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:50 ` James Bottomley
@ 2006-05-16 17:07 ` Tejun Heo
2006-05-16 17:09 ` Jeff Garzik
2006-05-16 19:58 ` Christoph Hellwig
0 siblings, 2 replies; 45+ messages in thread
From: Tejun Heo @ 2006-05-16 17:07 UTC (permalink / raw)
To: James Bottomley
Cc: Jeff Garzik, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Wed, 2006-05-17 at 01:39 +0900, Tejun Heo wrote:
>> I certainly can, and it was done that way first time around. Please
>> note the following discussion.
>>
>> http://thread.gmane.org/gmane.linux.scsi/23853/focus=9760
>>
>> Luben objected the interface made public because SCSI host is not
>> supposed to know about exception conditions which are not associated
>> with ITL nor ITLQ nexus. Thus, I made it a temporary measure only for
>> libata, which is planned to move out.
>
> His objection is still valid. However, as the balance of evils, I
> think, if you have to do this, it's better to contain it in a way where
> it's obvious what's being done. Plus you don't want someone to modify
> the host state model and suddenly find libata doesn't work anymore
> because they failed to spot that it needed to change as well ...
I see.
>> So, SCSI contains only the necessary bits required to implement the
>> feature and libata open-codes the rest. As it's not an exported
>> interface, no other SCSI driver is supposed to use it and the SCSI
>> modifications can be easily removed after libata moves out.
>>
>> As long as libata can do EH not associated with scmd or device, I'm okay
>> either way and think it's your call. So, considering the above
>> discussion, do you want it to be a generic SCSI interface?
>
> Yes, but in scsi_priv.h, please ... I can actually think of another use
> for it in terms of getting the SG reset handler to work properly.
Okay, will do in scsi_priv.h
Jeff, it seems that we need to reset #upstream and rebuild it. Do you
have any other idea than resetting libata-tj#for-jeff and
libata-dev#upstream?
--
tejun
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 17:07 ` Tejun Heo
@ 2006-05-16 17:09 ` Jeff Garzik
2006-05-16 19:58 ` Christoph Hellwig
1 sibling, 0 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 17:09 UTC (permalink / raw)
To: Tejun Heo
Cc: James Bottomley, SCSI Mailing List, linux-ide@vger.kernel.org,
Andrew Morton, Linus Torvalds
Tejun Heo wrote:
> Okay, will do in scsi_priv.h
>
> Jeff, it seems that we need to reset #upstream and rebuild it. Do you
> have any other idea than resetting libata-tj#for-jeff and
> libata-dev#upstream?
Just apply an update patch that achieves the desired result.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 15:41 ` [Fwd: [RFT] major libata update] Jeff Garzik
2006-05-16 15:51 ` James Bottomley
@ 2006-05-16 18:15 ` Luben Tuikov
1 sibling, 0 replies; 45+ messages in thread
From: Luben Tuikov @ 2006-05-16 18:15 UTC (permalink / raw)
To: Jeff Garzik, James Bottomley
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
--- Jeff Garzik <jeff@garzik.org> wrote:
> (CC's restored)
>
> James Bottomley wrote:
> > 1) your host_eh_scheduled logic looks wrong. It seems to me that you
> > can miss the wakeup if the host is busy? I also don't see a need to
>
> I can't see a case _in libata operation_ where a set of circumstances
> arises that causes missed wakeups, can you elaborate?
>
>
> > move the prototype out of scsi_priv.h ... it should only be used by
> > transport classes, anyway.
>
> We're talking about all ->eh_strategy_handler() users, which is a valid
> EH API for an LLDD to choose. Granted libata is really the only one
> right now.
>
> Long term, ->eh_strategy_handler and transport classes are block layer
> not SCSI level anyway, so scsi_priv.h is clearly inappropriate.
The physical stack (HW) looks like:
Block ->
SCSI ->
Transport ->
Interconnect ->
Device.
Not sure how you're going to achieve a SW abstraction whereby
"->eh_strategy_handler and transport classes are block layer",
when, the HW abstraction is different.
Luben
>
>
> > 2) This scsi_req_abort_cmd() is fundamentally the wrong logic.
> > Everything else is communicated back as a result code from the command
> > in done(). This should be no different ... A status return of
> > DID_FAILED which scsi_decide_disposition() always translates to FAILED
> > would seem to do exactly what you want without all the overhead.
>
> Inigo sez[1]: I do not think "fundamentally wrong" means what you think
> it means.
>
> You miss the fact that the timer may have already fired, in which
> completing a command gets you...... not a damned thing. scsi_done()
> will simply return, if the timeout has fired. This has always been an
> annoying problem to work around.
>
> scsi_req_abort_cmd() may perhaps be misnamed, but it GUARANTEES a set of
> operations which libata EH wants. Certainly, if we can guarantee this
> set of conditions another way, we are open to any alternate path.
>
> Jeff
>
>
> [1] from one of my favorite movies. search
> http://us.imdb.com/title/tt0093779/quotes for "INCONCEIVABLE"
> -
> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 15:51 ` James Bottomley
` (2 preceding siblings ...)
2006-05-16 16:12 ` [Fwd: [RFT] major libata update] Jeff Garzik
@ 2006-05-16 18:28 ` Luben Tuikov
3 siblings, 0 replies; 45+ messages in thread
From: Luben Tuikov @ 2006-05-16 18:28 UTC (permalink / raw)
To: James Bottomley, Jeff Garzik
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
--- James Bottomley <James.Bottomley@SteelEye.com> wrote:
> On Tue, 2006-05-16 at 11:41 -0400, Jeff Garzik wrote:
> > You miss the fact that the timer may have already fired, in which
> > completing a command gets you...... not a damned thing. scsi_done()
> > will simply return, if the timeout has fired. This has always been an
> > annoying problem to work around.
>
> No ... in that case the eh is already active, and your API does this:
>
> void scsi_req_abort_cmd(struct scsi_cmnd *cmd)
> {
> if (!scsi_delete_timer(cmd))
> return;
> ^^^^^^^
> scsi_times_out(cmd);
> }
>
> Which likewise does nothing if the timer has already fired, so they both
> have the same effect.
I think I just explained this. Let me try again.
They don't both have the same effect. What you're suggesting. calling done()
with status so that, EH can turn around and recover the command, as in this thread
and in this thread: http://marc.theaimsgroup.com/?l=linux-scsi&m=113833937421677&w=2,
_terminates_ the command nexus with the driver!
Secondly, this function, solves races between the transport telling you "I want
to abort this command" and the command timing out. When the driver get transport
notification that this command should be aborted, it doesn't need to care
where and in what state with SCSI Core the command is. As long as it has nexus
it can call this function and then the command would be recovered with
ABORT TASK or another TMF as it pertinent to the transport.
Calling scsi_req_abort_cmd() _guaranees_ that this command would be recovered
either way. This gives LLDD and interconnects guarantee that ABORT TASK
would be called and then they can do recovert for the command/device.
Luben
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 17:07 ` Tejun Heo
2006-05-16 17:09 ` Jeff Garzik
@ 2006-05-16 19:58 ` Christoph Hellwig
2006-05-16 20:02 ` Jeff Garzik
2006-05-16 21:28 ` James Bottomley
1 sibling, 2 replies; 45+ messages in thread
From: Christoph Hellwig @ 2006-05-16 19:58 UTC (permalink / raw)
To: Tejun Heo
Cc: James Bottomley, Jeff Garzik, SCSI Mailing List,
linux-ide@vger.kernel.org, Andrew Morton, Linus Torvalds
On Wed, May 17, 2006 at 02:07:19AM +0900, Tejun Heo wrote:
> > Yes, but in scsi_priv.h, please ... I can actually think of another use
> > for it in terms of getting the SG reset handler to work properly.
>
> Okay, will do in scsi_priv.h
Things used by the transport classes traditionally weren't in scsi_priv.h,
that was for scsi_mod internals only. Should we changed that? I'd rather
have another header for semi-volatile APIs transport classes use.
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 19:58 ` Christoph Hellwig
@ 2006-05-16 20:02 ` Jeff Garzik
2006-05-16 21:28 ` James Bottomley
1 sibling, 0 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-16 20:02 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Tejun Heo, James Bottomley, SCSI Mailing List,
linux-ide@vger.kernel.org, Andrew Morton, Linus Torvalds
Christoph Hellwig wrote:
> On Wed, May 17, 2006 at 02:07:19AM +0900, Tejun Heo wrote:
>>> Yes, but in scsi_priv.h, please ... I can actually think of another use
>>> for it in terms of getting the SG reset handler to work properly.
>> Okay, will do in scsi_priv.h
>
> Things used by the transport classes traditionally weren't in scsi_priv.h,
> that was for scsi_mod internals only. Should we changed that? I'd rather
> have another header for semi-volatile APIs transport classes use.
Hence my objection... Transport classes also aren't SCSI-specific.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 19:58 ` Christoph Hellwig
2006-05-16 20:02 ` Jeff Garzik
@ 2006-05-16 21:28 ` James Bottomley
2006-05-18 3:27 ` Tejun Heo
1 sibling, 1 reply; 45+ messages in thread
From: James Bottomley @ 2006-05-16 21:28 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Tejun Heo, Jeff Garzik, SCSI Mailing List,
linux-ide@vger.kernel.org, Andrew Morton, Linus Torvalds
On Tue, 2006-05-16 at 20:58 +0100, Christoph Hellwig wrote:
> Things used by the transport classes traditionally weren't in scsi_priv.h,
> that was for scsi_mod internals only. Should we changed that? I'd rather
> have another header for semi-volatile APIs transport classes use.
well ... OK we don't necessarily have a header for SCSI APIs exported
exclusively to transport classes. how about
drivers/scsi/scsi_transport_api.h?
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:30 ` James Bottomley
2006-05-16 16:39 ` Jeff Garzik
@ 2006-05-16 21:32 ` Luben Tuikov
1 sibling, 0 replies; 45+ messages in thread
From: Luben Tuikov @ 2006-05-16 21:32 UTC (permalink / raw)
To: James Bottomley, Jeff Garzik
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
--- James Bottomley <James.Bottomley@SteelEye.com> wrote:
> On Tue, 2006-05-16 at 12:06 -0400, Jeff Garzik wrote:
> > Sigh. They clearly do not have the same effect, because the above code
> > guarantees that a timeout is forced, regardless of whether the timer has
> > fired or not. That in turn guarantees that the timeout callback
> > (->eh_timed_out) is called, and the cmd is in a very specific state.
>
> the API claims to be forcibly aborting a command, which is *not* a
> timeout ... trying to pretend to the midlayer that it is is the wrong
It doesn't matter.
> processing model. You may choose to call this API because of a class
No, it is not the "wrong processng model".
> internal timeout, but you don't need the callback notification that it
> is a timeout in this case, you already know it is.
Then if you already know that it is, you MAY decide to ignore the event.
It wouldn't matter to you now would it. Take a look at the GIT description:
Introduce scsi_req_abort_cmd(struct scsi_cmnd *).
This function requests that SCSI Core start recovery for the
command by deleting the timer and adding the command to the eh
queue. It can be called by either LLDDs or SCSI Core. LLDDs who
implement their own error recovery MAY ignore the timeout event if
they generated scsi_req_abort_cmd.
This function solves a myriad of issues, one of which is the atomic
marking of a command aborted by the transports when they've defined
BOTH the timeout callback and the error recovery callback to have complete
and consistent error recovery.
Also it doesn't matter that it is NOT a timeout as far as SCSI Core is
concerned. Why? Because proper status would be returned at Error Recover
time. I.e. by the TMFs being called as part of ER.
Luben
>
> > Completion-or-timeout has none of these attributes.
> >
> > Any alternative is forced to deal with two very different command, and
> > EH, states... to achieve the same eventual result. Thus, the code
> > presented is the one of least complexity, AFAICS.
>
> James
>
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-ide" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:39 ` Jeff Garzik
@ 2006-05-16 21:55 ` Luben Tuikov
0 siblings, 0 replies; 45+ messages in thread
From: Luben Tuikov @ 2006-05-16 21:55 UTC (permalink / raw)
To: Jeff Garzik, James Bottomley
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
--- Jeff Garzik <jeff@garzik.org> wrote:
> James Bottomley wrote:
> > On Tue, 2006-05-16 at 12:06 -0400, Jeff Garzik wrote:
> >> Sigh. They clearly do not have the same effect, because the above code
> >> guarantees that a timeout is forced, regardless of whether the timer has
> >> fired or not. That in turn guarantees that the timeout callback
> >> (->eh_timed_out) is called, and the cmd is in a very specific state.
> >
> > the API claims to be forcibly aborting a command, which is *not* a
> > timeout ... trying to pretend to the midlayer that it is is the wrong
> > processing model. You may choose to call this API because of a class
> > internal timeout, but you don't need the callback notification that it
> > is a timeout in this case, you already know it is.
>
> I can certainly agree the name may not be the best choice. Naming based
> on implementation, it could be scsi_force_timeout_cmd() or somesuch.
But it is not that either!
As far as the LLDD is concerned, it is calling scsi_req_abort_cmd() to
request SCSI Core to abort it. At the point of LLDD calling this function
the command is not marked anything. This is important!
Why?
Because LLDD/interconnects
1. Do not want to race with SCSI Core changing the state of the command.
2. You want to give LLDD a chance as the command may finish in due time since
the interconnect requested it be aborted. I.e. there could be a race in the
transport too, which doesn't depend on the transport but on the end device, SDS,
and the transport.
For these reasons the best bet is to emulate a time out so that SCSI Core calls back
into the transport to tell the transpor that, it, SCSI Core, is about to start ER
on the command.
Then LLDD/interconnects have another chance, plus it gives notification to LLDD/IC
that SCSI Core has changed its disposition of that command.
Luben
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 16:57 ` Jeff Garzik
@ 2006-05-17 7:37 ` Jens Axboe
2006-05-17 15:06 ` Jeff Garzik
0 siblings, 1 reply; 45+ messages in thread
From: Jens Axboe @ 2006-05-17 7:37 UTC (permalink / raw)
To: Jeff Garzik
Cc: James Bottomley, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
On Tue, May 16 2006, Jeff Garzik wrote:
> James Bottomley wrote:
> >On Tue, 2006-05-16 at 12:12 -0400, Jeff Garzik wrote:
> >>Its an API-which-only-libata-uses that we're discussing. And because
> >>its moving to the block layer, its also a
> >>temporary-API-which-only-libata-uses.
> >
> >OK ... this may be the root of the problem. I really would like libata
> >to migrate to being block only ... especially as PATA looks to be trying
> >to follow you into the SCSI subsystem. However, this has been the
> >statement for the past two years (at least), and really, few
> >enhancements have been made to block that you need to make good on this.
> >I think one of the things we'll try to find time to do at the storage
> >summit is to take a hard look at block to see exactly what has to be
> >added to make libata solely dependent upon it.
>
> 100% agreed...
Ditto! I'd be more than willing to implement some of these features (and
already started to, the per command timeout for instance), but I was
starting to write off libata moving to block as a silly pipe dream in
all honesty... But if momentum is picking up behind this move, then I'll
all for it.
> The general list, off the top of my head:
>
> * objects: storage message, storage device, storage host, and the
> requisite interconnections
Storage message -> request. The rq-cmd-type branch of the block repo has
most/some of that done. For an explicit storage device + host, I have no
plans to expland on what we have.
> * queuecommand-style API
That's a style issue, rather than a required item. You can roll that on
top of the current api by just doing a:
int queuecommand_helper(request_queue_t *q, struct request *rq)
{
/* issue request */
...
return OK/DEFER/REJECT/WHATEVER
}
blk_queuecommand_helper(request_queue_t *q, queue_command_fn *fn)
{
struct request *rq;
int ret;
do {
rq = elv_next_request(q);
if (!rq)
break;
ret = fn(q, rq);
if (ret == OK)
continue;
/* handle replugging/killing/whatever */
} while (1);
}
if you really wanted.
> * EH thread(s)
> * timers, for command timeouts
Agree, we can abstract out the grunt handling of timeouts and the
context required in the block layer.
> * SCSI-style MLqueue and state stuff, i.e. ability to return "device
> busy", "host busy", "retry this command", ...
Comes with the blk_queuecommand_helper() type setup from above.
> And once libata is happy at the block layer, move SCSI to using this
> stuff too :)
Definitely.
--
Jens Axboe
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 7:37 ` Jens Axboe
@ 2006-05-17 15:06 ` Jeff Garzik
2006-05-17 15:50 ` James Bottomley
` (2 more replies)
0 siblings, 3 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-17 15:06 UTC (permalink / raw)
To: Jens Axboe
Cc: James Bottomley, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
Jens Axboe wrote:
> On Tue, May 16 2006, Jeff Garzik wrote:
>> James Bottomley wrote:
>>> On Tue, 2006-05-16 at 12:12 -0400, Jeff Garzik wrote:
>>>> Its an API-which-only-libata-uses that we're discussing. And because
>>>> its moving to the block layer, its also a
>>>> temporary-API-which-only-libata-uses.
>>> OK ... this may be the root of the problem. I really would like libata
>>> to migrate to being block only ... especially as PATA looks to be trying
>>> to follow you into the SCSI subsystem. However, this has been the
>>> statement for the past two years (at least), and really, few
>>> enhancements have been made to block that you need to make good on this.
>>> I think one of the things we'll try to find time to do at the storage
>>> summit is to take a hard look at block to see exactly what has to be
>>> added to make libata solely dependent upon it.
>> 100% agreed...
>
> Ditto! I'd be more than willing to implement some of these features (and
> already started to, the per command timeout for instance), but I was
> starting to write off libata moving to block as a silly pipe dream in
> all honesty... But if momentum is picking up behind this move, then I'll
> all for it.
Just gotta be patient. Rome wasn't built in a day, and all that :)
Like I mentioned in another message, the ideal world is that libata uses
an ATA disk driver and a SCSI MMC driver -- just like a modern SAS
controller (which likely supports SATA too) will use both an ATA disk
driver and a SCSI disk driver.
Given this "ideal world", its IMO best that the "storage driver"
infrastructure lives in the block layer not SCSI layer.
>> The general list, off the top of my head:
>>
>> * objects: storage message, storage device, storage host, and the
>> requisite interconnections
>
> Storage message -> request. The rq-cmd-type branch of the block repo has
> most/some of that done. For an explicit storage device + host, I have no
> plans to expland on what we have.
Agreed that storage message == request.
storage device and storage host are key objects included in the
infrastructure libata uses SCSI for. They fall naturally out of the
infrastructure that provides "device busy", "host busy", EH and EH
synchronization across multiple devices, etc. Though these, SCSI also
provides infrastructure through which an LLDD may export a bus topology
to the user.
>> * queuecommand-style API
>
> That's a style issue, rather than a required item. You can roll that on
> top of the current api by just doing a:
>
> int queuecommand_helper(request_queue_t *q, struct request *rq)
> {
> /* issue request */
> ...
> return OK/DEFER/REJECT/WHATEVER
> }
>
> blk_queuecommand_helper(request_queue_t *q, queue_command_fn *fn)
> {
> struct request *rq;
> int ret;
>
> do {
> rq = elv_next_request(q);
> if (!rq)
> break;
>
> ret = fn(q, rq);
> if (ret == OK)
> continue;
>
> /* handle replugging/killing/whatever */
> } while (1);
> }
>
> if you really wanted.
That's not an optional piece. Given the needed timeout / device / host
infrastructure, you inevitably wind up with the following code pattern:
infrastructure code
send fully prepared request to hardware
infrastructure code
At this point I should note that all of what I've been describing is an
_optional addition_ to the block layer. Its all helpers and a few new,
optional structs. This SHOULD NOT involve changing the core block layer
at all. Well, maybe struct request would like the addition of a timer.
But that's it, and such a mod is easy to do.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 15:06 ` Jeff Garzik
@ 2006-05-17 15:50 ` James Bottomley
2006-05-17 15:58 ` James Smart
` (2 more replies)
2006-05-17 16:05 ` Douglas Gilbert
2006-05-17 17:37 ` Jens Axboe
2 siblings, 3 replies; 45+ messages in thread
From: James Bottomley @ 2006-05-17 15:50 UTC (permalink / raw)
To: Jeff Garzik
Cc: Jens Axboe, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
On Wed, 2006-05-17 at 11:06 -0400, Jeff Garzik wrote:
> storage device and storage host are key objects included in the
This is one of the questions. Currently block has no concept of "host".
All it knows about are queues (which may be per host or per device
depending on the implementation). Do we need to introduce the concept
of something like queue grouping (a sort of lightweight infrastructure
that could be used by the underlying transport to implement a host
concept without introducing hosts at the block layer)?
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 15:50 ` James Bottomley
@ 2006-05-17 15:58 ` James Smart
2006-05-17 16:17 ` Jeff Garzik
2006-05-17 17:47 ` Linus Torvalds
2 siblings, 0 replies; 45+ messages in thread
From: James Smart @ 2006-05-17 15:58 UTC (permalink / raw)
To: James Bottomley
Cc: Jeff Garzik, Jens Axboe, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton,
Linus Torvalds
James Bottomley wrote:
> On Wed, 2006-05-17 at 11:06 -0400, Jeff Garzik wrote:
>> storage device and storage host are key objects included in the
>
> This is one of the questions. Currently block has no concept of "host".
> All it knows about are queues (which may be per host or per device
> depending on the implementation). Do we need to introduce the concept
> of something like queue grouping (a sort of lightweight infrastructure
> that could be used by the underlying transport to implement a host
> concept without introducing hosts at the block layer)?
Boy, this sounds interesting. Could also be a more sane way to implement
can_queue depths for the host. Another thing comes to mind - queue depths
per target, which has always been missing from Linux. Although, any
grouping immediately brings to mind scheduling policies within the group.
-- james s
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 15:06 ` Jeff Garzik
2006-05-17 15:50 ` James Bottomley
@ 2006-05-17 16:05 ` Douglas Gilbert
2006-05-17 17:37 ` Jens Axboe
2 siblings, 0 replies; 45+ messages in thread
From: Douglas Gilbert @ 2006-05-17 16:05 UTC (permalink / raw)
To: Jeff Garzik
Cc: Jens Axboe, James Bottomley, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton,
Linus Torvalds
Jeff Garzik wrote:
> Jens Axboe wrote:
>
>> On Tue, May 16 2006, Jeff Garzik wrote:
>>
>>> James Bottomley wrote:
>>>
>>>> On Tue, 2006-05-16 at 12:12 -0400, Jeff Garzik wrote:
>>>>
>>>>> Its an API-which-only-libata-uses that we're discussing. And
>>>>> because its moving to the block layer, its also a
>>>>> temporary-API-which-only-libata-uses.
>>>>
>>>> OK ... this may be the root of the problem. I really would like libata
>>>> to migrate to being block only ... especially as PATA looks to be
>>>> trying
>>>> to follow you into the SCSI subsystem. However, this has been the
>>>> statement for the past two years (at least), and really, few
>>>> enhancements have been made to block that you need to make good on
>>>> this.
>>>> I think one of the things we'll try to find time to do at the storage
>>>> summit is to take a hard look at block to see exactly what has to be
>>>> added to make libata solely dependent upon it.
>>>
>>> 100% agreed...
>>
>>
>> Ditto! I'd be more than willing to implement some of these features (and
>> already started to, the per command timeout for instance), but I was
>> starting to write off libata moving to block as a silly pipe dream in
>> all honesty... But if momentum is picking up behind this move, then I'll
>> all for it.
>
>
> Just gotta be patient. Rome wasn't built in a day, and all that :)
>
> Like I mentioned in another message, the ideal world is that libata uses
> an ATA disk driver and a SCSI MMC driver -- just like a modern SAS
> controller (which likely supports SATA too) will use both an ATA disk
> driver and a SCSI disk driver.
>
> Given this "ideal world", its IMO best that the "storage driver"
> infrastructure lives in the block layer not SCSI layer.
LSI have chosen to put a SAT layer in the HBA firmware
of their MPT fusion SAS cards. This makes both directly
connected SATA disks (not strictly speaking SAS) and
expander connected SATA disks (using the STP protocol)
look like SCSI devices to the OS. Interestingly LSI have
chosen not to implement the ATA PASSTHROUGH scsi commands
at this time. That makes the job of smartmontools
difficult.
More generally if you start putting SATA disks in a
multi-initiator fabric (FC or SAS) then there are
issues that need to be addressed. One issue is
when two initiators try to access a SATA disk at
around the same time. SAS has optional "affiliations"
for protecting individual SATA commands but I'm
unaware of anything like SCSI's persistent reservations.
In the case of SAS affiliations, as far as I can
see, some additional help is needed via SAS's
management protocol (SMP). Command queueing (at
the disk) is another issue.
A SAT layer in the OS (e.g. like libata) may not
be able address these issues, but a SATL in (or
aware of) the transport might.
So the point I'm making is the appropriate command
set(s) for a hard disk not only depends on what the
device itself talks, but also the way in which
it is connected.
libata demonstrates that it is viable to use both
the SCSI commands (for data) and SATA commands
(for maintenance) on the same SATA device.
Doug Gilbert
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 15:50 ` James Bottomley
2006-05-17 15:58 ` James Smart
@ 2006-05-17 16:17 ` Jeff Garzik
2006-05-17 17:53 ` James Bottomley
2006-05-17 17:47 ` Linus Torvalds
2 siblings, 1 reply; 45+ messages in thread
From: Jeff Garzik @ 2006-05-17 16:17 UTC (permalink / raw)
To: James Bottomley, Jens Axboe
Cc: SCSI Mailing List, linux-ide@vger.kernel.org, Tejun Heo,
Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Wed, 2006-05-17 at 11:06 -0400, Jeff Garzik wrote:
>> storage device and storage host are key objects included in the
>
> This is one of the questions. Currently block has no concept of "host".
> All it knows about are queues (which may be per host or per device
> depending on the implementation). Do we need to introduce the concept
> of something like queue grouping (a sort of lightweight infrastructure
> that could be used by the underlying transport to implement a host
> concept without introducing hosts at the block layer)?
Yes, and not only that... you must describe the queue pipeline too.
i.e. N logical devices can be bottlenecked behind a bridge (expander,
port multiplier, tunnel) of queue depth Q, which may in turn be behind
another bottleneck. :)
But overall, libata and SAS controllers are forced to deal with the
reality of the situation: they all wind up either using, or recreating
from scratch, objects for host/device/bus/etc. in order to sanely allow
all the infrastructure to interoperate.
You'll all note that struct Scsi_Host and struct scsi_cmnd have very
little to do with SCSI. Its almost all infrastructure and driver
management. That's the _useful_ stuff that libata uses SCSI for.
Thus, moving libata to the block layer entails either
s/Scsi_Host/Storage_Host/g or a highly similar infrastructure, to
achieve the same gains.
It is _trivial_ to write a new SCSI driver [even if your hardware is not
SCSI], and there are good reasons for that. Please all, examine those
reasons...
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 15:06 ` Jeff Garzik
2006-05-17 15:50 ` James Bottomley
2006-05-17 16:05 ` Douglas Gilbert
@ 2006-05-17 17:37 ` Jens Axboe
2006-05-17 21:58 ` Jeff Garzik
2 siblings, 1 reply; 45+ messages in thread
From: Jens Axboe @ 2006-05-17 17:37 UTC (permalink / raw)
To: Jeff Garzik
Cc: James Bottomley, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
On Wed, May 17 2006, Jeff Garzik wrote:
> Jens Axboe wrote:
> >On Tue, May 16 2006, Jeff Garzik wrote:
> >>James Bottomley wrote:
> >>>On Tue, 2006-05-16 at 12:12 -0400, Jeff Garzik wrote:
> >>>>Its an API-which-only-libata-uses that we're discussing. And because
> >>>>its moving to the block layer, its also a
> >>>>temporary-API-which-only-libata-uses.
> >>>OK ... this may be the root of the problem. I really would like libata
> >>>to migrate to being block only ... especially as PATA looks to be trying
> >>>to follow you into the SCSI subsystem. However, this has been the
> >>>statement for the past two years (at least), and really, few
> >>>enhancements have been made to block that you need to make good on this.
> >>>I think one of the things we'll try to find time to do at the storage
> >>>summit is to take a hard look at block to see exactly what has to be
> >>>added to make libata solely dependent upon it.
> >>100% agreed...
> >
> >Ditto! I'd be more than willing to implement some of these features (and
> >already started to, the per command timeout for instance), but I was
> >starting to write off libata moving to block as a silly pipe dream in
> >all honesty... But if momentum is picking up behind this move, then I'll
> >all for it.
>
> Just gotta be patient. Rome wasn't built in a day, and all that :)
:-)
> Like I mentioned in another message, the ideal world is that libata uses
> an ATA disk driver and a SCSI MMC driver -- just like a modern SAS
> controller (which likely supports SATA too) will use both an ATA disk
> driver and a SCSI disk driver.
>
> Given this "ideal world", its IMO best that the "storage driver"
> infrastructure lives in the block layer not SCSI layer.
Right
> >>The general list, off the top of my head:
> >>
> >>* objects: storage message, storage device, storage host, and the
> >>requisite interconnections
> >
> >Storage message -> request. The rq-cmd-type branch of the block repo has
> >most/some of that done. For an explicit storage device + host, I have no
> >plans to expland on what we have.
>
> Agreed that storage message == request.
>
> storage device and storage host are key objects included in the
> infrastructure libata uses SCSI for. They fall naturally out of the
> infrastructure that provides "device busy", "host busy", EH and EH
> synchronization across multiple devices, etc. Though these, SCSI also
> provides infrastructure through which an LLDD may export a bus topology
> to the user.
James/others already touched on that, and I agree it's a useful
abstraction. It's something that we can use for other drivers right now,
such as cciss.
> >>* queuecommand-style API
> >
> >That's a style issue, rather than a required item. You can roll that on
> >top of the current api by just doing a:
> >
> >int queuecommand_helper(request_queue_t *q, struct request *rq)
> >{
> > /* issue request */
> > ...
> > return OK/DEFER/REJECT/WHATEVER
> >}
> >
> >blk_queuecommand_helper(request_queue_t *q, queue_command_fn *fn)
> >{
> > struct request *rq;
> > int ret;
> >
> > do {
> > rq = elv_next_request(q);
> > if (!rq)
> > break;
> >
> > ret = fn(q, rq);
> > if (ret == OK)
> > continue;
> >
> > /* handle replugging/killing/whatever */
> > } while (1);
> >}
> >
> >if you really wanted.
>
> That's not an optional piece. Given the needed timeout / device / host
I think we have a different opinion on what 'optional' is then - because
things can certainly work just fine the way they current do. And it's
faster, too.
> infrastructure, you inevitably wind up with the following code pattern:
>
> infrastructure code
> send fully prepared request to hardware
> infrastructure code
But yes, you can make the code nicer for _some_ things with a
->queueone() type setup.
> At this point I should note that all of what I've been describing is
> an _optional addition_ to the block layer. Its all helpers and a few
> new, optional structs. This SHOULD NOT involve changing the core
> block layer at all. Well, maybe struct request would like the
> addition of a timer. But that's it, and such a mod is easy to do.
The timer is a given, we can't escape that. And the ->queueone() is
basically hashed out above, no infrastructure changes needed.
queuecommand_helper would be driver supplied, blk_queuecommand_helper()
would be a block layer helper. With better names of course, I truly do
suck at naming functions :-)
--
Jens Axboe
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 15:50 ` James Bottomley
2006-05-17 15:58 ` James Smart
2006-05-17 16:17 ` Jeff Garzik
@ 2006-05-17 17:47 ` Linus Torvalds
2006-05-17 17:55 ` Jens Axboe
` (3 more replies)
2 siblings, 4 replies; 45+ messages in thread
From: Linus Torvalds @ 2006-05-17 17:47 UTC (permalink / raw)
To: James Bottomley
Cc: Jeff Garzik, Jens Axboe, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
On Wed, 17 May 2006, James Bottomley wrote:
>
> This is one of the questions. Currently block has no concept of "host".
That's good.
I don't understand why you'd ever _want_ a concept of "host". The whole
concept is broken and unnecessary. At no point should you even need to map
from request to host, but if you do, you don't need to introduce any
generic "host" notion, you can do it easily per-queue.
> All it knows about are queues (which may be per host or per device
> depending on the implementation). Do we need to introduce the concept
> of something like queue grouping (a sort of lightweight infrastructure
> that could be used by the underlying transport to implement a host
> concept without introducing hosts at the block layer)?
We already have it. Each queue has its lock pointer. If you want to have a
"host" notion, you do it by just setting the queue lock to point to the
host lock (since if they are dependent, you'd _better_ share the lock
between the queues anyway), and then you do
struct myhost_struct *queue_host(struct request_queue *queue)
{
return container_of(queue->queue_lock, myhost_struct, host_lock);
}
and there are no changes necessary to the queue layer.
You can do it other ways too: the "struct kobject" in the queue obviously
contains a pointer to the parent of the queue, and that might well be your
"host" object. Again, exact same deal, except now you'd use
container_of(queue->kobj.parent, myhost_struct, host_kobject);
instead. Entirely up to you.
The whole fixation with "host" in the SCSI layer is a bug, I think. What
does it matter, really? And when do you actually have a "request_queue"
entry without already knowing which controller it is connected to (ie why
do you even need that mapping)?
Linus
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 16:17 ` Jeff Garzik
@ 2006-05-17 17:53 ` James Bottomley
2006-05-17 22:08 ` Jeff Garzik
0 siblings, 1 reply; 45+ messages in thread
From: James Bottomley @ 2006-05-17 17:53 UTC (permalink / raw)
To: Jeff Garzik
Cc: Jens Axboe, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
On Wed, 2006-05-17 at 12:17 -0400, Jeff Garzik wrote:
> Yes, and not only that... you must describe the queue pipeline too.
> i.e. N logical devices can be bottlenecked behind a bridge (expander,
> port multiplier, tunnel) of queue depth Q, which may in turn be behind
> another bottleneck. :)
Well ... no, I'm not convinced of this. Block is currently a nice, fast
abstraction. It's designed to manage storage infrastructure and provide
helpers to implementation. The question is how much more common
infrastructure do we need to slim down all of our storage stacks. I.e.
block provides the building blocks to allow the storage implementation
to do what it wants, but it doesn't necessarily provide the full
implementation.
> But overall, libata and SAS controllers are forced to deal with the
> reality of the situation: they all wind up either using, or recreating
> from scratch, objects for host/device/bus/etc. in order to sanely allow
> all the infrastructure to interoperate.
but that doesn't go for all storage ... look at the way usb and firewire
implement host in SCSI at the moment.
> You'll all note that struct Scsi_Host and struct scsi_cmnd have very
> little to do with SCSI. Its almost all infrastructure and driver
> management. That's the _useful_ stuff that libata uses SCSI for.
Some is driver management, others are SCSI specific. We'll never get
away from the need for Scsi_Host and scsi_cmnd, but we can make sure
they contain only truly SCSI specific pieces. scsi_cmnd is the closest
since it pretty much has a one to one mapping with a block request.
> Thus, moving libata to the block layer entails either
> s/Scsi_Host/Storage_Host/g or a highly similar infrastructure, to
> achieve the same gains.
I'm not sure. Block is currently nicely lightweight. A large number of
implementations have no use for a host concept ... I don't think we
should be forcing it on them.
> It is _trivial_ to write a new SCSI driver [even if your hardware is not
> SCSI], and there are good reasons for that. Please all, examine those
> reasons...
I really think the initial question is what more could be moved up to
block as common infrastructure ... I don't think we need to take
concepts up wholesale as long as we get the infrastructure right.
James
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 17:47 ` Linus Torvalds
@ 2006-05-17 17:55 ` Jens Axboe
2006-05-17 22:04 ` Linus Torvalds
2006-05-17 21:41 ` Jeff Garzik
` (2 subsequent siblings)
3 siblings, 1 reply; 45+ messages in thread
From: Jens Axboe @ 2006-05-17 17:55 UTC (permalink / raw)
To: Linus Torvalds
Cc: James Bottomley, Jeff Garzik, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
On Wed, May 17 2006, Linus Torvalds wrote:
>
>
> On Wed, 17 May 2006, James Bottomley wrote:
> >
> > This is one of the questions. Currently block has no concept of "host".
>
> That's good.
>
> I don't understand why you'd ever _want_ a concept of "host". The whole
> concept is broken and unnecessary. At no point should you even need to map
> from request to host, but if you do, you don't need to introduce any
> generic "host" notion, you can do it easily per-queue.
Maybe "host" is the wrong word, but there is some usefulness to look at
dependencies of queues.
> > All it knows about are queues (which may be per host or per device
> > depending on the implementation). Do we need to introduce the concept
> > of something like queue grouping (a sort of lightweight infrastructure
> > that could be used by the underlying transport to implement a host
> > concept without introducing hosts at the block layer)?
>
> We already have it. Each queue has its lock pointer. If you want to have a
> "host" notion, you do it by just setting the queue lock to point to the
> host lock (since if they are dependent, you'd _better_ share the lock
> between the queues anyway), and then you do
>
> struct myhost_struct *queue_host(struct request_queue *queue)
> {
> return container_of(queue->queue_lock, myhost_struct, host_lock);
> }
>
> and there are no changes necessary to the queue layer.
That gets you the "parent" structure (or "host") for the queues, but it
doesn't help you with managing them. One issue could be a shared tag
queue - we already solved that by sharing the tag map between the
queues, but that still allows on queue to deplete more entries that it
should be allowed to. And so on.
> You can do it other ways too: the "struct kobject" in the queue obviously
> contains a pointer to the parent of the queue, and that might well be your
> "host" object. Again, exact same deal, except now you'd use
>
> container_of(queue->kobj.parent, myhost_struct, host_kobject);
>
> instead. Entirely up to you.
>
> The whole fixation with "host" in the SCSI layer is a bug, I think. What
> does it matter, really? And when do you actually have a "request_queue"
> entry without already knowing which controller it is connected to (ie why
> do you even need that mapping)?
Some devices need serialization between them, and the only way to
achieve that currently is by sharing a queue. But that fails eg at the
io scheduler level - and other places also assume a 1:1 mapping between
queues and devices. And this you cannot do generically right now, and
it's one thing I would like to handle.
--
Jens Axboe
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 17:47 ` Linus Torvalds
2006-05-17 17:55 ` Jens Axboe
@ 2006-05-17 21:41 ` Jeff Garzik
2006-05-17 21:52 ` Douglas Gilbert
2006-05-18 3:04 ` Luben Tuikov
3 siblings, 0 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-17 21:41 UTC (permalink / raw)
To: Linus Torvalds
Cc: James Bottomley, Jens Axboe, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
Linus Torvalds wrote:
>
> On Wed, 17 May 2006, James Bottomley wrote:
>> This is one of the questions. Currently block has no concept of "host".
>
> That's good.
>
> I don't understand why you'd ever _want_ a concept of "host". The whole
> concept is broken and unnecessary. At no point should you even need to map
> from request to host, but if you do, you don't need to introduce any
> generic "host" notion, you can do it easily per-queue.
"host" is ultimately the wrong word, sure.
It's more about (a) grouping queues into a topology that the user
expects, as exported by sysfs, and (b) grouping queues for the purposes
of useful and/or necessary hardware operations like "stop <these>
queues, so that we can bitbang the hardware".
That grouping of queues, along with the lib-ification of highly common
request management code[1], is part of the non-SCSI utility that libata
derives from drivers/scsi.
"group-wide operations" are highly common, and generic code inevitably
results from that. But IMO that's helper code, living on top of the
perfectly-fine existing code.
> The whole fixation with "host" in the SCSI layer is a bug, I think. What
> does it matter, really? And when do you actually have a "request_queue"
> entry without already knowing which controller it is connected to (ie why
> do you even need that mapping)?
True, the mapping is obtained from request_queue not request.
The entry point into a block driver is via q->request_fn(), which only
has a request_queue for an argument. So in practice, one usually
obtains the private controller-info and bus-info data via the
request_queue's ->queuedata.
Deep into sub-APIs, I've seen that sometimes you'll see only the request
passed as an argument, because it's easier to walk
request->queue->controller_info than to pass additional arguments to
every function.
Jeff
[1] "resource management": refers to drivers/scsi's handling of 'device
busy', 'group-of-queues busy' style transient errors, well integrated
with block layer's command queueing and well synchronized with the EH
thread.
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 17:47 ` Linus Torvalds
2006-05-17 17:55 ` Jens Axboe
2006-05-17 21:41 ` Jeff Garzik
@ 2006-05-17 21:52 ` Douglas Gilbert
2006-05-17 22:20 ` Linus Torvalds
2006-05-18 3:04 ` Luben Tuikov
3 siblings, 1 reply; 45+ messages in thread
From: Douglas Gilbert @ 2006-05-17 21:52 UTC (permalink / raw)
To: Linus Torvalds
Cc: James Bottomley, Jeff Garzik, Jens Axboe, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
Linus Torvalds wrote:
>
> On Wed, 17 May 2006, James Bottomley wrote:
>
>>This is one of the questions. Currently block has no concept of "host".
>
>
> That's good.
>
> I don't understand why you'd ever _want_ a concept of "host". The whole
> concept is broken and unnecessary. At no point should you even need to map
> from request to host, but if you do, you don't need to introduce any
> generic "host" notion, you can do it easily per-queue.
>
>
>>All it knows about are queues (which may be per host or per device
>>depending on the implementation). Do we need to introduce the concept
>>of something like queue grouping (a sort of lightweight infrastructure
>>that could be used by the underlying transport to implement a host
>>concept without introducing hosts at the block layer)?
>
>
> We already have it. Each queue has its lock pointer. If you want to have a
> "host" notion, you do it by just setting the queue lock to point to the
> host lock (since if they are dependent, you'd _better_ share the lock
> between the queues anyway), and then you do
>
> struct myhost_struct *queue_host(struct request_queue *queue)
> {
> return container_of(queue->queue_lock, myhost_struct, host_lock);
> }
>
> and there are no changes necessary to the queue layer.
>
> You can do it other ways too: the "struct kobject" in the queue obviously
> contains a pointer to the parent of the queue, and that might well be your
> "host" object. Again, exact same deal, except now you'd use
>
> container_of(queue->kobj.parent, myhost_struct, host_kobject);
>
> instead. Entirely up to you.
>
> The whole fixation with "host" in the SCSI layer is a bug, I think. What
> does it matter, really? And when do you actually have a "request_queue"
> entry without already knowing which controller it is connected to (ie why
> do you even need that mapping)?
I'll bite.
>From the perspective of a pass through, any pass
through, the host is all I want. I'm happy to
completely forget about devices as they are
loosely defined at the moment. I view a
host (port) and a network interface as (almost)
the same thing. Carrying on with the analogy,
I view a device as an IP address.
Currently linux has a storage device for each
unique instance of this tuple:
<host_port, target_port, lu_name>
So if a dual ported disk is fully connected to three
HBAs (hosts) on the same machine then the OS should
show six different devices for the same disk.
Just knowing the wwn of a device (i.e. lu_name)
is not sufficient for error processing, hot plug
processing, etc.
There is also a natural hierarchy of errors (seen
from the OS). Fatal errors associated with the host
(e.g. it is hot unplugged) make starting or ongoing
error recovery at a lower level (i.e. target or lu)
futile.
The linux block layer still shares the quaint
Unix idea of foreseeing errors so it can
disallow requests. This is in the hope that error
processing associated with responses (or lack of
them) will be simplified.
Doug Gilbert
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 17:37 ` Jens Axboe
@ 2006-05-17 21:58 ` Jeff Garzik
2006-05-18 7:21 ` Jens Axboe
0 siblings, 1 reply; 45+ messages in thread
From: Jeff Garzik @ 2006-05-17 21:58 UTC (permalink / raw)
To: Jens Axboe
Cc: James Bottomley, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
Jens Axboe wrote:
> I think we have a different opinion on what 'optional' is then - because
> things can certainly work just fine the way they current do. And it's
> faster, too.
>
>> infrastructure, you inevitably wind up with the following code pattern:
>>
>> infrastructure code
>> send fully prepared request to hardware
>> infrastructure code
>
> But yes, you can make the code nicer for _some_ things with a
> ->queueone() type setup.
That ->queueone() maps the closest to what most hardware appears to
want: "attempt to push request onto an async hardware queue".
It also enables additional entry points for returning 'device busy' or
(in SCSI lingo) 'host busy'. The ability to signal and handle random
transient conditions like that when hardware resources disappear is easy
to overlook, but its really powerful.
>> At this point I should note that all of what I've been describing is
>> an _optional addition_ to the block layer. Its all helpers and a few
>> new, optional structs. This SHOULD NOT involve changing the core
>> block layer at all. Well, maybe struct request would like the
>> addition of a timer. But that's it, and such a mod is easy to do.
>
> The timer is a given, we can't escape that. And the ->queueone() is
> basically hashed out above, no infrastructure changes needed.
> queuecommand_helper would be driver supplied, blk_queuecommand_helper()
> would be a block layer helper. With better names of course, I truly do
> suck at naming functions :-)
Likewise :)
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 17:55 ` Jens Axboe
@ 2006-05-17 22:04 ` Linus Torvalds
2006-05-17 22:12 ` Jeff Garzik
0 siblings, 1 reply; 45+ messages in thread
From: Linus Torvalds @ 2006-05-17 22:04 UTC (permalink / raw)
To: Jens Axboe
Cc: James Bottomley, Jeff Garzik, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
On Wed, 17 May 2006, Jens Axboe wrote:
>
> That gets you the "parent" structure (or "host") for the queues, but it
> doesn't help you with managing them. One issue could be a shared tag
> queue - we already solved that by sharing the tag map between the
> queues, but that still allows on queue to deplete more entries that it
> should be allowed to. And so on.
Right. But all of this is very much dependent on the actual hardware. It
makes no sense to have a "generic host" kind of structure like SCSI has,
except for historical reasons.
It makes sense to make _individual_ design issues be able to share some
resource(s) between queues, and that's exactly what we do. We have the
notion of having a shared lock (which makes sense because while the queues
might be otherwise independent, they may go through a shared hw
interface), and as you point out, we can also share the tags for exactly
the same reason.
But in _no_ case is is valid to think that a shared resource would somehow
mean anything more being shared. There simply is no "host" level sharing.
It's very much -not- an all-or-nothing thing. It's a "some hardware may
share some stuff, and it very much depends on the hw which parts it might
end up sharing between the queues.
So what I was objecting to was introducing the overlying "host" notion
that SCSI has. I don't think it makes any sense from any generic
standpoint, and I wanted to point out that _if_ you have a "all or
nothing" shared model like SCSI has, you can use any of the fields that we
_can_ share (eg the lock field) as a way to tie that one field together
with all the other fields.
But it would be wrong to see that as anything but a special case.
> Some devices need serialization between them, and the only way to
> achieve that currently is by sharing a queue.
Hmm? No. We share the queue for some things, but the most _common_ example
of a device that needs serialization between queues is IDE, and we don't
share queues there. We have independent queues, they just end up sharing
certain infrastructure (tags and locking).
But they _are_ independent, and you can have different elevators,
different merging rules, and even different request functions for the
different queues - even if they also have some things they share.
And yes, you can see an ATA host as a host in the SCSI sense, but I wanted
to point out that you don't _need_ to see it that way, and indeed, the
requests queues don't. The fact that a driver can then _use_ the request
queues as if there was a "host" is fine. But we should not _enforce_ such
a totally broken and limited model on the queues.
Linus
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 17:53 ` James Bottomley
@ 2006-05-17 22:08 ` Jeff Garzik
2006-05-17 22:15 ` Jeff Garzik
0 siblings, 1 reply; 45+ messages in thread
From: Jeff Garzik @ 2006-05-17 22:08 UTC (permalink / raw)
To: James Bottomley
Cc: Jens Axboe, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Wed, 2006-05-17 at 12:17 -0400, Jeff Garzik wrote:
>> Yes, and not only that... you must describe the queue pipeline too.
>> i.e. N logical devices can be bottlenecked behind a bridge (expander,
>> port multiplier, tunnel) of queue depth Q, which may in turn be behind
>> another bottleneck. :)
>
> Well ... no, I'm not convinced of this. Block is currently a nice, fast
> abstraction. It's designed to manage storage infrastructure and provide
> helpers to implementation. The question is how much more common
> infrastructure do we need to slim down all of our storage stacks. I.e.
> block provides the building blocks to allow the storage implementation
> to do what it wants, but it doesn't necessarily provide the full
> implementation.
My central thesis is that
* SCSI provides a generic _storage driver_ infrastructure, encapsulating
many common idioms generic to SCSI and non-SCSI hardware alike.
* Any such storage driver infrastructure, outside of SCSI, should impose
no burdens on existing block drivers.
Call such storage driver infrastructure "libstorage" if you will.
>> But overall, libata and SAS controllers are forced to deal with the
>> reality of the situation: they all wind up either using, or recreating
>> from scratch, objects for host/device/bus/etc. in order to sanely allow
>> all the infrastructure to interoperate.
>
> but that doesn't go for all storage ... look at the way usb and firewire
> implement host in SCSI at the moment.
Let's just stop using the word host, its too confusing for all involved
:) I'm well aware of this, look at how libata uses Scsi_Host too...
>> You'll all note that struct Scsi_Host and struct scsi_cmnd have very
>> little to do with SCSI. Its almost all infrastructure and driver
>> management. That's the _useful_ stuff that libata uses SCSI for.
>
> Some is driver management, others are SCSI specific. We'll never get
Agreed, though IMO I claim that "a lot" is driver management.
> away from the need for Scsi_Host and scsi_cmnd, but we can make sure
> they contain only truly SCSI specific pieces. scsi_cmnd is the closest
> since it pretty much has a one to one mapping with a block request.
Agreed.
>> Thus, moving libata to the block layer entails either
>> s/Scsi_Host/Storage_Host/g or a highly similar infrastructure, to
>> achieve the same gains.
>
> I'm not sure. Block is currently nicely lightweight. A large number of
> implementations have no use for a host concept ... I don't think we
> should be forcing it on them.
Like I said above, think "libstorage". I think block as-is, too.
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 22:04 ` Linus Torvalds
@ 2006-05-17 22:12 ` Jeff Garzik
0 siblings, 0 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-17 22:12 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jens Axboe, James Bottomley, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
Linus Torvalds wrote:
> On Wed, 17 May 2006, Jens Axboe wrote:
>> Some devices need serialization between them, and the only way to
>> achieve that currently is by sharing a queue.
>
> Hmm? No. We share the queue for some things, but the most _common_ example
> of a device that needs serialization between queues is IDE, and we don't
> share queues there. We have independent queues, they just end up sharing
> certain infrastructure (tags and locking).
Strongly agreed.
Note that libata uses independent queues, but serializes access to them.
This is the sort of "group of queues" infrastructure I find helpful.
> But they _are_ independent, and you can have different elevators,
> different merging rules, and even different request functions for the
> different queues - even if they also have some things they share.
Yep.
> And yes, you can see an ATA host as a host in the SCSI sense, but I wanted
Actually libata presents one Scsi_Host per SATA port on each controller,
FWIW :) So the Scsi_Host notion WRT queues _is_ limiting, which is the
reason why I refer to "group of queues".
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 22:08 ` Jeff Garzik
@ 2006-05-17 22:15 ` Jeff Garzik
0 siblings, 0 replies; 45+ messages in thread
From: Jeff Garzik @ 2006-05-17 22:15 UTC (permalink / raw)
To: James Bottomley
Cc: Jens Axboe, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
Jeff Garzik wrote:
> Like I said above, think "libstorage". I think block as-is, too.
Er, make that "I like block as-is, too"
Jeff
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 21:52 ` Douglas Gilbert
@ 2006-05-17 22:20 ` Linus Torvalds
0 siblings, 0 replies; 45+ messages in thread
From: Linus Torvalds @ 2006-05-17 22:20 UTC (permalink / raw)
To: Douglas Gilbert
Cc: James Bottomley, Jeff Garzik, Jens Axboe, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
On Wed, 17 May 2006, Douglas Gilbert wrote:
>
> From the perspective of a pass through, any pass
> through, the host is all I want.
Hell no it isn't.
Trying to make network interfaces and block device queues sound like they
are the same thing is bogus. They are NOT the same thing.
> Currently linux has a storage device for each
> unique instance of this tuple:
> <host_port, target_port, lu_name>
No it doesn't.
That's some SCSI internal brain-damage.
Linux has a set of queues, and a way to map from device nodes to queues.
That's all. There's no "host port". There's no "target port". There's a
queue. You can certainly combine one queue into multiple queues, or you
can take multiple queues and combine them into one. But don't at _any_
point think that it's anything but just a queue.
The queue has a lot of data associated with it (rules for how requests can
be combined, what the DMA alignment is, etc etc). But it has not, and
should not have, any idiotic association like a "host" or "target". It's a
queue, dammit, nothing more.
The thing that associates it with an actual device is a combination of
queue mappers and the queue lookup logic.
> So if a dual ported disk is fully connected to three
> HBAs (hosts) on the same machine then the OS should
> show six different devices for the same disk.
It should have six queues, any of which can access the same physical disk.
And you can decide how to route the requests between those queues with the
mapping layer or queue lookup. At no point has this got anything to do
with a "host".
The moment you confuse the issue with a "host", you lose sight of the fact
that you often don't have a host at all. A queue might be a totally
virtual thing, that just feeds to other queues (mirroring). It _has_ no
host. Thinking it has a host is setting yourself up for just doing totally
idiotic and wrong things.
And even if you can point to a physical chip and say "that is the host",
that's a totally irrelevant thing. That physical chip migth be an ethernet
chip, but why the hell would you care, if the _real_ issue is that you
created a queue that gets transported over TCP on port 666. Another queue
that gets transported on UDP with some random other protocol on port 1234
over the same ethernet chip doesn't make those "connected" in any way. The
"host chip" simply has nothing to do with the block layer.
Similarly, you might have a SCSI chip that implements several "hosts" and
it's all driven by DMA engines and a nice in-memory mailbox setup, but
then it shares some of the control registers across all of them, so you
might have to have a single driver lock. They are really totally
independent, but they share one point of control - they don't really in
any way share any "hostness", even if there's one physical connection at
one point and there migth be soem shared locking to access the setup
registers.
So the whole "host" notion is broken. It's a totally made-up abstraction,
that SCSI people seem to have a damn hard time to just _let_go_ of.
There are _real_ points of connection. But "host" doesn't define them.
Linus
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 17:47 ` Linus Torvalds
` (2 preceding siblings ...)
2006-05-17 21:52 ` Douglas Gilbert
@ 2006-05-18 3:04 ` Luben Tuikov
3 siblings, 0 replies; 45+ messages in thread
From: Luben Tuikov @ 2006-05-18 3:04 UTC (permalink / raw)
To: Linus Torvalds, James Bottomley
Cc: Jeff Garzik, Jens Axboe, SCSI Mailing List,
linux-ide@vger.kernel.org, Tejun Heo, Andrew Morton
--- Linus Torvalds <torvalds@osdl.org> wrote:
> On Wed, 17 May 2006, James Bottomley wrote:
> >
> > This is one of the questions. Currently block has no concept of "host".
>
> That's good.
>
> I don't understand why you'd ever _want_ a concept of "host". The whole
> concept is broken and unnecessary. At no point should you even need to map
> from request to host, but if you do, you don't need to introduce any
> generic "host" notion, you can do it easily per-queue.
You are correct. The concept of "host" should NOT be exported to
higher than PCI/SCSI* level.
* See below.
> The whole fixation with "host" in the SCSI layer is a bug, I think. What
> does it matter, really? And when do you actually have a "request_queue"
> entry without already knowing which controller it is connected to (ie why
> do you even need that mapping)?
Not quite, the concept of a "host" is needed as it is the "host" which
provides "SCSI ports" to the "SCSI domain".
Luben
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-16 21:28 ` James Bottomley
@ 2006-05-18 3:27 ` Tejun Heo
2006-05-19 12:07 ` [PATCH] SCSI: make scsi_implement_eh() generic API for SCSI transports Tejun Heo
0 siblings, 1 reply; 45+ messages in thread
From: Tejun Heo @ 2006-05-18 3:27 UTC (permalink / raw)
To: James Bottomley
Cc: Christoph Hellwig, Jeff Garzik, SCSI Mailing List,
linux-ide@vger.kernel.org, Andrew Morton, Linus Torvalds
James Bottomley wrote:
> On Tue, 2006-05-16 at 20:58 +0100, Christoph Hellwig wrote:
>> Things used by the transport classes traditionally weren't in scsi_priv.h,
>> that was for scsi_mod internals only. Should we changed that? I'd rather
>> have another header for semi-volatile APIs transport classes use.
>
> well ... OK we don't necessarily have a header for SCSI APIs exported
> exclusively to transport classes. how about
> drivers/scsi/scsi_transport_api.h?
>
Are we agreed on this? A new header file drivers/scsi/scsi_transport_api.h?
--
tejun
^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [Fwd: [RFT] major libata update]
2006-05-17 21:58 ` Jeff Garzik
@ 2006-05-18 7:21 ` Jens Axboe
0 siblings, 0 replies; 45+ messages in thread
From: Jens Axboe @ 2006-05-18 7:21 UTC (permalink / raw)
To: Jeff Garzik
Cc: James Bottomley, SCSI Mailing List, linux-ide@vger.kernel.org,
Tejun Heo, Andrew Morton, Linus Torvalds
On Wed, May 17 2006, Jeff Garzik wrote:
> Jens Axboe wrote:
> >I think we have a different opinion on what 'optional' is then - because
> >things can certainly work just fine the way they current do. And it's
> >faster, too.
> >
> >>infrastructure, you inevitably wind up with the following code pattern:
> >>
> >> infrastructure code
> >> send fully prepared request to hardware
> >> infrastructure code
> >
> >But yes, you can make the code nicer for _some_ things with a
> >->queueone() type setup.
>
> That ->queueone() maps the closest to what most hardware appears to
> want: "attempt to push request onto an async hardware queue".
>
> It also enables additional entry points for returning 'device busy' or
> (in SCSI lingo) 'host busy'. The ability to signal and handle random
> transient conditions like that when hardware resources disappear is easy
> to overlook, but its really powerful.
'queue a request' is certainly how most hardware works. The fact is that
->queueone() doesn't enable _anything_ that you cannot already do. In
fact it takes information _away_ from you, since you can no longer peek
at the queue and see if there's one more there for you to issue. The
only selling point for ->queueone() is that it makes it more logical to
structure the code layout for handling the issue.
Which is why I never added this helper. From the block layer
perspective, the driver can only really tell you three things:
- request issued, give me another one
- request not issued, transient error (meaning, call me with this
request again in the future).
- request not issued, permanent error. kill this request.
A ->queueone() setup deals with returning the right issue error code, a
->queuemany() setup deals with calling the right function for the
condition.
--
Jens Axboe
^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH] SCSI: make scsi_implement_eh() generic API for SCSI transports
2006-05-18 3:27 ` Tejun Heo
@ 2006-05-19 12:07 ` Tejun Heo
0 siblings, 0 replies; 45+ messages in thread
From: Tejun Heo @ 2006-05-19 12:07 UTC (permalink / raw)
To: James Bottomley
Cc: Christoph Hellwig, Jeff Garzik, SCSI Mailing List,
linux-ide@vger.kernel.org, Andrew Morton, Linus Torvalds
libata implemented a feature to schedule EH without an associated EH
by manipulating shost->host_eh_scheduled in ata_scsi_schedule_eh()
directly. Move this function to scsi_error.c and rename it to
scsi_schedule_eh(). It is now an exported API for SCSI transports and
exported via new header file drivers/scsi/scsi_transport_api.h
This patch also de-export scsi_eh_wakeup() which was exported
specifically for ata_scsi_schedule_eh().
Signed-off-by: Tejun Heo <htejun@gmail.com>
---
Okay, I went with drivers/scsi/scsi_transport_api.h. I intentionally
didn't add any banner at the top of the file. Feel free to fill with
appropriate description/blurb.
drivers/scsi/libata-eh.c | 3 ++-
drivers/scsi/libata-scsi.c | 24 ------------------------
drivers/scsi/scsi_error.c | 23 ++++++++++++++++++++++-
drivers/scsi/scsi_priv.h | 1 +
drivers/scsi/scsi_transport_api.h | 6 ++++++
include/scsi/scsi_eh.h | 1 -
6 files changed, 31 insertions(+), 27 deletions(-)
create mode 100644 drivers/scsi/scsi_transport_api.h
6daec98e81a98db0dafc4846a97432746bf94f9c
diff --git a/drivers/scsi/libata-eh.c b/drivers/scsi/libata-eh.c
index 750e734..71b45ad 100644
--- a/drivers/scsi/libata-eh.c
+++ b/drivers/scsi/libata-eh.c
@@ -39,6 +39,7 @@ #include <scsi/scsi_host.h>
#include <scsi/scsi_eh.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_cmnd.h>
+#include "scsi_transport_api.h"
#include <linux/libata.h>
@@ -432,7 +433,7 @@ void ata_port_schedule_eh(struct ata_por
WARN_ON(!ap->ops->error_handler);
ap->flags |= ATA_FLAG_EH_PENDING;
- ata_schedule_scsi_eh(ap->host);
+ scsi_schedule_eh(ap->host);
DPRINTK("port EH scheduled\n");
}
diff --git a/drivers/scsi/libata-scsi.c b/drivers/scsi/libata-scsi.c
index f036ae4..2007b4b 100644
--- a/drivers/scsi/libata-scsi.c
+++ b/drivers/scsi/libata-scsi.c
@@ -2745,27 +2745,3 @@ void ata_scsi_scan_host(struct ata_port
scsi_scan_target(&ap->host->shost_gendev, 0, i, 0, 0);
}
}
-
-/**
- * ata_schedule_scsi_eh - schedule EH for SCSI host
- * @shost: SCSI host to invoke error handling on.
- *
- * Schedule SCSI EH without scmd. This is a hack.
- *
- * LOCKING:
- * spin_lock_irqsave(host_set lock)
- **/
-void ata_schedule_scsi_eh(struct Scsi_Host *shost)
-{
- unsigned long flags;
-
- spin_lock_irqsave(shost->host_lock, flags);
-
- if (scsi_host_set_state(shost, SHOST_RECOVERY) == 0 ||
- scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY) == 0) {
- shost->host_eh_scheduled++;
- scsi_eh_wakeup(shost);
- }
-
- spin_unlock_irqrestore(shost->host_lock, flags);
-}
diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index 9ca71cb..346ab72 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -56,7 +56,28 @@ void scsi_eh_wakeup(struct Scsi_Host *sh
printk("Waking error handler thread\n"));
}
}
-EXPORT_SYMBOL_GPL(scsi_eh_wakeup);
+
+/**
+ * scsi_schedule_eh - schedule EH for SCSI host
+ * @shost: SCSI host to invoke error handling on.
+ *
+ * Schedule SCSI EH without scmd.
+ **/
+void scsi_schedule_eh(struct Scsi_Host *shost)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(shost->host_lock, flags);
+
+ if (scsi_host_set_state(shost, SHOST_RECOVERY) == 0 ||
+ scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY) == 0) {
+ shost->host_eh_scheduled++;
+ scsi_eh_wakeup(shost);
+ }
+
+ spin_unlock_irqrestore(shost->host_lock, flags);
+}
+EXPORT_SYMBOL_GPL(scsi_schedule_eh);
/**
* scsi_eh_scmd_add - add scsi cmd to error handling.
diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
index 0b39081..27c4827 100644
--- a/drivers/scsi/scsi_priv.h
+++ b/drivers/scsi/scsi_priv.h
@@ -63,6 +63,7 @@ extern int scsi_delete_timer(struct scsi
extern void scsi_times_out(struct scsi_cmnd *cmd);
extern int scsi_error_handler(void *host);
extern int scsi_decide_disposition(struct scsi_cmnd *cmd);
+extern void scsi_eh_wakeup(struct Scsi_Host *shost);
extern int scsi_eh_scmd_add(struct scsi_cmnd *, int);
/* scsi_lib.c */
diff --git a/drivers/scsi/scsi_transport_api.h b/drivers/scsi/scsi_transport_api.h
new file mode 100644
index 0000000..934f0e6
--- /dev/null
+++ b/drivers/scsi/scsi_transport_api.h
@@ -0,0 +1,6 @@
+#ifndef _SCSI_TRANSPORT_API_H
+#define _SCSI_TRANSPORT_API_H
+
+void scsi_schedule_eh(struct Scsi_Host *shost);
+
+#endif /* _SCSI_TRANSPORT_API_H */
diff --git a/include/scsi/scsi_eh.h b/include/scsi/scsi_eh.h
index 212c983..d160880 100644
--- a/include/scsi/scsi_eh.h
+++ b/include/scsi/scsi_eh.h
@@ -35,7 +35,6 @@ static inline int scsi_sense_valid(struc
}
-extern void scsi_eh_wakeup(struct Scsi_Host *shost);
extern void scsi_eh_finish_cmd(struct scsi_cmnd *scmd,
struct list_head *done_q);
extern void scsi_eh_flush_done_q(struct list_head *done_q);
--
1.3.2
^ permalink raw reply related [flat|nested] 45+ messages in thread
end of thread, other threads:[~2006-05-19 12:07 UTC | newest]
Thread overview: 45+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
[not found] <4468B596.9090508@garzik.org>
[not found] ` <1147789098.3505.19.camel@mulgrave.il.steeleye.com>
2006-05-16 15:41 ` [Fwd: [RFT] major libata update] Jeff Garzik
2006-05-16 15:51 ` James Bottomley
2006-05-16 16:06 ` Jeff Garzik
2006-05-16 16:30 ` James Bottomley
2006-05-16 16:39 ` Jeff Garzik
2006-05-16 21:55 ` Luben Tuikov
2006-05-16 21:32 ` Luben Tuikov
2006-05-16 16:08 ` Tejun Heo
2006-05-16 16:13 ` Tejun Heo
2006-05-16 16:29 ` James Bottomley
2006-05-16 16:37 ` Jeff Garzik
2006-05-16 16:39 ` Tejun Heo
2006-05-16 16:50 ` James Bottomley
2006-05-16 17:07 ` Tejun Heo
2006-05-16 17:09 ` Jeff Garzik
2006-05-16 19:58 ` Christoph Hellwig
2006-05-16 20:02 ` Jeff Garzik
2006-05-16 21:28 ` James Bottomley
2006-05-18 3:27 ` Tejun Heo
2006-05-19 12:07 ` [PATCH] SCSI: make scsi_implement_eh() generic API for SCSI transports Tejun Heo
2006-05-16 16:12 ` [Fwd: [RFT] major libata update] Jeff Garzik
2006-05-16 16:38 ` James Bottomley
2006-05-16 16:57 ` Jeff Garzik
2006-05-17 7:37 ` Jens Axboe
2006-05-17 15:06 ` Jeff Garzik
2006-05-17 15:50 ` James Bottomley
2006-05-17 15:58 ` James Smart
2006-05-17 16:17 ` Jeff Garzik
2006-05-17 17:53 ` James Bottomley
2006-05-17 22:08 ` Jeff Garzik
2006-05-17 22:15 ` Jeff Garzik
2006-05-17 17:47 ` Linus Torvalds
2006-05-17 17:55 ` Jens Axboe
2006-05-17 22:04 ` Linus Torvalds
2006-05-17 22:12 ` Jeff Garzik
2006-05-17 21:41 ` Jeff Garzik
2006-05-17 21:52 ` Douglas Gilbert
2006-05-17 22:20 ` Linus Torvalds
2006-05-18 3:04 ` Luben Tuikov
2006-05-17 16:05 ` Douglas Gilbert
2006-05-17 17:37 ` Jens Axboe
2006-05-17 21:58 ` Jeff Garzik
2006-05-18 7:21 ` Jens Axboe
2006-05-16 18:28 ` Luben Tuikov
2006-05-16 18:15 ` Luben Tuikov
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).