* block_abort_queue (blk_abort_request) racing with scsi_request_fn
@ 2010-05-12  5:23 Mike Anderson
  2010-11-10  7:09 ` [dm-devel] " Mike Christie

  0 siblings, 1 reply; 8+ messages in thread
From: Mike Anderson @ 2010-05-12  5:23 UTC (permalink / raw)
  To: Jens Axboe, James Bottomley; +Cc: dm-devel, linux-scsi

I was looking at a dump from a weekend run and I believe I am seeing a
case where blk_abort_request, through blk_abort_queue, picked up a request
for timeout that scsi_request_fn decided not to start. This test was under
error injection.

I assume the case in scsi_request_fn this is hitting is that a request has
been put on the timeout_list with blk_start_request, and then one of the
not_ready checks is hit and it is decided not to start the request. I
believe the drop

It appears that my usage of walking the timeout_list in blk_abort_queue
and using blk_mark_rq_complete in blk_abort_request will not work in this
case.

While it would be good to have a way to ensure a command is started, it is
unclear if, even at a low timeout of 1 second, a user other than
blk_abort_queue would hit this race.

The dropping / acquiring of host_lock and queue_lock in scsi_request_fn
and scsi_dispatch_cmd make it unclear to me if usage of
blk_mark_rq_complete will cover all cases.

I looked at checking serial_number in scsi_times_out along with a couple
of blk_mark_rq_complete additions, but it is unclear if this would be good
and / or work in all cases.

I looked at just accelerating the deadline by some default value, but it
is unclear if that would be acceptable.

I also looked at using just the mark interface I previously posted and not
calling blk_abort_request at all, but that would change current behavior
that has been in use for a while.

Looking for suggestions.

Thanks,

-andmike
--
Michael Anderson
andmike@linux.vnet.ibm.com

^ permalink raw reply	[flat|nested] 8+ messages in thread
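[Context: for readers without the tree at hand, the pair of functions under
discussion looked roughly like the following in the 2.6.3x block layer -- an
approximate sketch, not verbatim source. The point relevant to this thread
is that blk_abort_queue only walks started requests, i.e. those sitting on
q->timeout_list:]

void blk_abort_request(struct request *req)
{
	if (blk_mark_rq_complete(req))
		return;		/* completion already claimed the request */
	blk_delete_timer(req);
	blk_rq_timed_out(req);	/* invokes q->rq_timed_out_fn, i.e. scsi_times_out */
}

void blk_abort_queue(struct request_queue *q)
{
	unsigned long flags;
	struct request *rq, *tmp;
	LIST_HEAD(list);

	spin_lock_irqsave(q->queue_lock, flags);
	elv_abort_queue(q);
	/* splice to a local list so eh requeues cannot deadlock the walk */
	list_splice_init(&q->timeout_list, &list);
	list_for_each_entry_safe(rq, tmp, &list, timeout_list)
		blk_abort_request(rq);
	spin_unlock_irqrestore(q->queue_lock, flags);
}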
* Re: [dm-devel] block_abort_queue (blk_abort_request) racing with scsi_request_fn
  2010-05-12  5:23 block_abort_queue (blk_abort_request) racing with scsi_request_fn Mike Anderson
@ 2010-11-10  7:09 ` Mike Christie
  2010-11-10  7:30   ` Mike Christie

  0 siblings, 1 reply; 8+ messages in thread
From: Mike Christie @ 2010-11-10  7:09 UTC (permalink / raw)
  To: device-mapper development
  Cc: Mike Anderson, Jens Axboe, James Bottomley, linux-scsi

[-- Attachment #1: Type: text/plain, Size: 3589 bytes --]

On 05/12/2010 12:23 AM, Mike Anderson wrote:
> I was looking at a dump from a weekend run and I believe I am seeing a
> case where blk_abort_request, through blk_abort_queue, picked up a
> request for timeout that scsi_request_fn decided not to start. This test
> was under error injection.
>
> I assume the case in scsi_request_fn this is hitting is that a request
> has been put on the timeout_list with blk_start_request, and then one of
> the not_ready checks is hit and it is decided not to start the request.
> I believe the drop
>
> It appears that my usage of walking the timeout_list in blk_abort_queue
> and using blk_mark_rq_complete in blk_abort_request will not work in
> this case.
>
> While it would be good to have a way to ensure a command is started, it
> is unclear if, even at a low timeout of 1 second, a user other than
> blk_abort_queue would hit this race.
>
> The dropping / acquiring of host_lock and queue_lock in scsi_request_fn
> and scsi_dispatch_cmd make it unclear to me if usage of
> blk_mark_rq_complete will cover all cases.
>
> I looked at checking serial_number in scsi_times_out along with a couple
> of blk_mark_rq_complete additions, but it is unclear if this would be
> good and / or work in all cases.
>
> I looked at just accelerating the deadline by some default value, but it
> is unclear if that would be acceptable.
>
> I also looked at using just the mark interface I previously posted and
> not calling blk_abort_request at all, but that would change current
> behavior that has been in use for a while.
>

Did you ever solve this? I am hitting this with the dm-multipath
blk_abort_queue case (the email I sent you a couple weeks ago).

It seems we could fix this by just having blk_requeue_request do a check
for whether the request timed out, similar to what scsi used to do. A
hacky way might be to have 2 requeue functions:

blk_requeue_completed_request - This is the blk_requeue_request we have
today. It unconditionally requeues the request. It should only be used if
the command has been completed, either from blk_complete_request or from
block layer timeout handling
(blk_rq_timed_out_timer->blk_rq_timed_out->rq_timed_out_fn).

blk_requeue_running_request - This checks if the timer is running before
requeueing the request. If blk_rq_timed_out_timer/blk_rq_timed_out has
taken over the request and is going to handle it, then this function just
returns and does not requeue. For this we could just have it check if the
queue has a rq_timed_out_fn and if rq->timeout_list is empty or not.

I think this might be confusing to use, so I tried something slightly
different below.

I also tried a patch where we just add another req bit. We set it in
blk_rq_timed_out_timer and clear it in a new function that clears it and
then calls blk_requeue_request. The new function:

blk_requeue_timedout_request - used when a request is to be requeued if a
LLD q->rq_timed_out_fn returned BLK_EH_NOT_HANDLED and has resolved the
problem and wants the request to be requeued.
This function clears REQ_ATOM_TIMEDOUT and then calls blk_requeue_request.

blk_requeue_request would then check if REQ_ATOM_TIMEDOUT is set and, if
it is, just drop the request, assuming the rq_timed_out_fn is handling it.
This still requires the caller to know how the command is supposed to be
requeued. But I think it might be easier, since the driver here has
returned BLK_EH_NOT_HANDLED in the q timeout fn, so they know that they
are going to be handling the request in a special way.

I attached the last idea here. Made over Linus's tree. Only compile tested.

[-- Attachment #2: blk-requeue-timedout-request.patch --]
[-- Type: text/plain, Size: 5492 bytes --]

diff --git a/block/blk-core.c b/block/blk-core.c
index f0834e2..92279d4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -981,6 +981,9 @@ EXPORT_SYMBOL(blk_make_request);
  */
 void blk_requeue_request(struct request_queue *q, struct request *rq)
 {
+	if (test_bit(REQ_ATOM_TIMEDOUT, &rq->atomic_flags))
+		return;
+
 	blk_delete_timer(rq);
 	blk_clear_rq_complete(rq);
 	trace_block_rq_requeue(q, rq);
diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index ee9c216..e0a8d11 100644
--- a/block/blk-softirq.c
+++ b/block/blk-softirq.c
@@ -156,8 +156,10 @@ void blk_complete_request(struct request *req)
 {
 	if (unlikely(blk_should_fake_timeout(req->q)))
 		return;
-	if (!blk_mark_rq_complete(req))
+	if (!blk_mark_rq_complete(req)) {
+		blk_clear_rq_timedout(req);
 		__blk_complete_request(req);
+	}
 }
 EXPORT_SYMBOL(blk_complete_request);
diff --git a/block/blk-timeout.c b/block/blk-timeout.c
index 4f0c06c..afc6e5f 100644
--- a/block/blk-timeout.c
+++ b/block/blk-timeout.c
@@ -97,6 +97,7 @@ static void blk_rq_timed_out(struct request *req)
 		 * and we can move more of the generic scsi eh code to
 		 * the blk layer.
 		 */
+		blk_mark_rq_timedout(req);
 		break;
 	default:
 		printk(KERN_ERR "block: bad eh return: %d\n", ret);
@@ -104,6 +105,25 @@ static void blk_rq_timed_out(struct request *req)
 	}
 }
 
+/**
+ * blk_requeue_timedout_request - put a request that timedout back on queue
+ * @q:		request queue where request should be inserted
+ * @rq:		request to be inserted
+ *
+ * Description:
+ *    If a module has returned BLK_EH_NOT_HANDLED from its
+ *    rq_timed_out_fn and needs to requeue the request this
+ *    function should be used instead of blk_requeue_request.
+ *
+ *    queue_lock must be held.
+ */
+void blk_requeue_timedout_request(struct request_queue *q, struct request *req)
+{
+	blk_clear_rq_timedout(req);
+	blk_requeue_request(q, req);
+}
+EXPORT_SYMBOL_GPL(blk_requeue_timedout_request);
+
 void blk_rq_timed_out_timer(unsigned long data)
 {
 	struct request_queue *q = (struct request_queue *) data;
diff --git a/block/blk.h b/block/blk.h
index 2db8f32..ad93258 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -30,6 +30,7 @@ void __generic_unplug_device(struct request_queue *);
  */
 enum rq_atomic_flags {
 	REQ_ATOM_COMPLETE = 0,
+	REQ_ATOM_TIMEDOUT = 1,
 };
 
 /*
@@ -46,7 +47,15 @@ static inline void blk_clear_rq_complete(struct request *rq)
 	clear_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
 }
 
-/*
+static inline int blk_mark_rq_timedout(struct request *rq)
+{
+	return test_and_set_bit(REQ_ATOM_TIMEDOUT, &rq->atomic_flags);
+}
+
+static inline void blk_clear_rq_timedout(struct request *rq)
+{
+	clear_bit(REQ_ATOM_TIMEDOUT, &rq->atomic_flags);
+}/*
  * Internal elevator interface
  */
 #define ELV_ON_HASH(rq)	(!hlist_unhashed(&(rq)->hash))
diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index 1de30eb..06df25a 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1680,7 +1680,11 @@ void scsi_eh_flush_done_q(struct list_head *done_q)
 					  " retry cmd: %p\n",
 					  current->comm, scmd));
-			scsi_queue_insert(scmd, SCSI_MLQUEUE_EH_RETRY);
+			printk(KERN_ERR "scmd %p %p %p\n", scmd,
+			       scmd->eh_entry.next, scmd->eh_entry.prev);
+
+			scsi_queue_insert(scmd,
+					  SCSI_MLQUEUE_EH_TIMEDOUT_RETRY);
 		} else {
 			/*
 			 * If just we got sense for the device (called
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index eafeeda..cef49b2 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -158,7 +158,10 @@ static int __scsi_queue_insert(struct scsi_cmnd *cmd, int reason, int unbusy)
 	 * and plugs the queue appropriately.
 	 */
 	spin_lock_irqsave(q->queue_lock, flags);
-	blk_requeue_request(q, cmd->request);
+	if (reason == SCSI_MLQUEUE_EH_TIMEDOUT_RETRY)
+		blk_requeue_timedout_request(q, cmd->request);
+	else
+		blk_requeue_request(q, cmd->request);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
 	scsi_run_queue(q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5027a59..e56f28e 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -205,7 +205,10 @@ typedef int (dma_drain_needed_fn)(struct request *);
 typedef int (lld_busy_fn) (struct request_queue *q);
 
 enum blk_eh_timer_return {
-	BLK_EH_NOT_HANDLED,
+	BLK_EH_NOT_HANDLED,	/* If this is returned the module must
+				 * call blk_requeue_timedout_request to
+				 * requeue it
+				 */
 	BLK_EH_HANDLED,
 	BLK_EH_RESET_TIMER,
 };
@@ -653,6 +656,8 @@ extern struct request *blk_get_request(struct request_queue *, int, gfp_t);
 extern struct request *blk_make_request(struct request_queue *, struct bio *,
 					gfp_t);
 extern void blk_insert_request(struct request_queue *, struct request *, int, void *);
+extern void blk_requeue_timedout_request(struct request_queue *,
+					 struct request *);
 extern void blk_requeue_request(struct request_queue *, struct request *);
 extern void blk_add_request_payload(struct request *rq, struct page *page,
 		unsigned int len);
diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
index 216af85..5bde952 100644
--- a/include/scsi/scsi.h
+++ b/include/scsi/scsi.h
@@ -442,6 +442,7 @@ static inline int scsi_is_wlun(unsigned int lun)
 #define SCSI_MLQUEUE_DEVICE_BUSY 0x1056
 #define SCSI_MLQUEUE_EH_RETRY	 0x1057
 #define SCSI_MLQUEUE_TARGET_BUSY 0x1058
+#define SCSI_MLQUEUE_EH_TIMEDOUT_RETRY 0x1059
 
 /*
  * Use these to separate status msg and our bytes

^ permalink raw reply related	[flat|nested] 8+ messages in thread
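[Context: the "two requeue functions" alternative described in the mail
above was not part of the attached patch. A minimal sketch of what
blk_requeue_running_request could look like, assuming the
rq->timeout_list emptiness check the mail proposes (hypothetical code,
names taken from the mail):]

void blk_requeue_running_request(struct request_queue *q, struct request *rq)
{
	/*
	 * A started request sits on q->timeout_list until
	 * blk_rq_timed_out_timer takes it over (list_del_init).  If the
	 * queue does timeout handling and the request is no longer on the
	 * list, the timeout handler owns it: do not requeue the request
	 * behind the handler's back.
	 */
	if (q->rq_timed_out_fn && list_empty(&rq->timeout_list))
		return;

	blk_requeue_request(q, rq);
}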
* Re: [dm-devel] block_abort_queue (blk_abort_request) racing with scsi_request_fn
  2010-11-10  7:09 ` [dm-devel] " Mike Christie
@ 2010-11-10  7:30   ` Mike Christie
  2010-11-10 16:30     ` Mike Anderson

  0 siblings, 1 reply; 8+ messages in thread
From: Mike Christie @ 2010-11-10  7:30 UTC (permalink / raw)
  To: device-mapper development; +Cc: Mike Anderson, James Bottomley, linux-scsi

On 11/10/2010 01:09 AM, Mike Christie wrote:
> On 05/12/2010 12:23 AM, Mike Anderson wrote:
>> I was looking at a dump from a weekend run and I believe I am seeing a
>> case where blk_abort_request, through blk_abort_queue, picked up a
>> request for timeout that scsi_request_fn decided not to start. This
>> test was under error injection.
>> [...]
>
> Did you ever solve this? I am hitting this with the dm-multipath
> blk_abort_queue case (the email I sent you a couple weeks ago).
>
> [...]
>
> I also tried a patch where we just add another req bit. We set it in
> blk_rq_timed_out_timer and clear it in a new function that clears it and
> then calls blk_requeue_request. The new function:
>
> blk_requeue_timedout_request - used when a request is to be requeued if
> a LLD q->rq_timed_out_fn returned BLK_EH_NOT_HANDLED and has resolved
> the problem and wants the request to be requeued. This function clears
> REQ_ATOM_TIMEDOUT and then calls blk_requeue_request.
>
> blk_requeue_request would then check if REQ_ATOM_TIMEDOUT is set and,
> if it is, just drop the request, assuming the rq_timed_out_fn is
> handling it. This still requires the caller to know how the command is
> supposed to be requeued. But I think it might be easier, since the
> driver here has returned BLK_EH_NOT_HANDLED in the q timeout fn, so
> they know that they are going to be handling the request in a special
> way.
>
> I attached the last idea here. Made over Linus's tree. Only compile
> tested.
>

Oops, nevermind. I think this is trying to solve a slightly different
problem. I saw your other mail. My patch will not handle the case where:

1. cmd is in scsi_request_fn, has run blk_start_request, and dropped the
   queue_lock. It has not yet taken the host lock and incremented the
   host busy counters.
2. blk_abort_queue runs q->rq_timed_out_fn and adds the cmd to the host
   eh list.
3. Somehow the scsi eh runs and finishes its work before #1 has done
   anything, so the cmd was just processed by the scsi eh *and* at the
   same time is still lingering in scsi_request_fn (somehow #1 has still
   not taken the host lock). The scsi eh then does
   scsi_eh_flush_done_q->scsi_queue_insert->blk_requeue_request on the
   request while the request is also in scsi_request_fn processing. So
   now we hit some BUG_ONs in the blk code. The cmd from #1 then finally
   grabs the host lock, but it is too late.

^ permalink raw reply	[flat|nested] 8+ messages in thread
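[Context: a simplified model of where this window sits in scsi_request_fn,
trimmed to the lock ordering only -- not the real function body.
scsi_host_queue_ready and the not_ready handling exist in the real code;
everything else is elided or renamed for illustration:]

static void scsi_request_fn_model(struct request_queue *q)
{
	struct scsi_device *sdev = q->queuedata;
	struct Scsi_Host *shost = sdev->host;
	struct request *req;

	/* entered with q->queue_lock held */
	for (;;) {
		req = blk_peek_request(q);
		if (!req)
			break;
		blk_start_request(req);	/* req goes on q->timeout_list */

		spin_unlock_irq(q->queue_lock);
		/*
		 * WINDOW: queue_lock dropped, host_lock not yet taken,
		 * host_busy not yet incremented.  blk_abort_queue() can run
		 * q->rq_timed_out_fn on req here, and the scsi eh can then
		 * run to completion and requeue req while this thread still
		 * references it.
		 */
		spin_lock_irq(shost->host_lock);
		if (!scsi_host_queue_ready(q, shost, sdev)) {
			/* not_ready: requeues a request the eh may now own */
			spin_unlock_irq(shost->host_lock);
			spin_lock_irq(q->queue_lock);
			blk_requeue_request(q, req);
			break;
		}
		shost->host_busy++;	/* the window closes here */
		spin_unlock_irq(shost->host_lock);

		scsi_dispatch_cmd(req->special);	/* busy paths race too */
		spin_lock_irq(q->queue_lock);
	}
}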
* Re: [dm-devel] block_abort_queue (blk_abort_request) racing with scsi_request_fn
  2010-11-10  7:30   ` Mike Christie
@ 2010-11-10 16:30     ` Mike Anderson
  2010-11-10 21:16       ` Mike Christie

  0 siblings, 1 reply; 8+ messages in thread
From: Mike Anderson @ 2010-11-10 16:30 UTC (permalink / raw)
  To: device-mapper development; +Cc: James Bottomley, linux-scsi

Mike Christie <michaelc@cs.wisc.edu> wrote:
> On 11/10/2010 01:09 AM, Mike Christie wrote:
>> On 05/12/2010 12:23 AM, Mike Anderson wrote:
>>> I was looking at a dump from a weekend run and I believe I am seeing
>>> a case where blk_abort_request, through blk_abort_queue, picked up a
>>> request for timeout that scsi_request_fn decided not to start. This
>>> test was under error injection.
>>> [...]
>>
>> Did you ever solve this? I am hitting this with the dm-multipath
>> blk_abort_queue case (the email I sent you a couple weeks ago).
>>

No. I am also not seeing it in my recent error injection testing.

Is your test configuration / error injection testing able to reproduce it
fairly reliably? If so, can you provide some general details on how you
are generating this error?

>> It seems we could fix this by just having blk_requeue_request do a
>> check for whether the request timed out, similar to what scsi used to
>> do. A hacky way might be to have 2 requeue functions:
>> [...]
>>
>> I attached the last idea here. Made over Linus's tree. Only compile
>> tested.
>
> Oops, nevermind. I think this is trying to solve a slightly different
> problem. I saw your other mail. My patch will not handle the case
> where:
>
> 1. cmd is in scsi_request_fn, has run blk_start_request, and dropped
>    the queue_lock. It has not yet taken the host lock and incremented
>    the host busy counters.
> 2. blk_abort_queue runs q->rq_timed_out_fn and adds the cmd to the host
>    eh list.
> 3. Somehow the scsi eh runs and finishes its work before #1 has done
>    anything, so the cmd was just processed by the scsi eh *and* at the
>    same time is still lingering in scsi_request_fn (somehow #1 has
>    still not taken the host lock).
>

Note that #1 could also return with a busy from queuecommand, which will
call scsi_queue_insert with no check for complete. One could add a
blk_mark_rq_complete check prior to calling scsi_queue_insert. This does
not cover the not_ready label case in scsi_request_fn.

Another option might be to make blk_abort less aggressive if we cannot
close all the paths, and switch it to more of a drain model, but then we
may be in the same boat in selecting how fast we can drain based on what
we perceive to be a safe time value. This option leaves the existing
races open in scsi_request_fn / scsi_dispatch_cmd.

-andmike
--
Michael Anderson
andmike@linux.vnet.ibm.com

^ permalink raw reply	[flat|nested] 8+ messages in thread
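[Context: a sketch of the kind of guard meant above. blk_mark_rq_complete()
is test_and_set_bit(), so using it as a plain check would itself claim the
request; a non-mutating helper would be needed, and REQ_ATOM_COMPLETE lives
in the block layer's private blk.h, so it would have to be exposed. Both
blk_rq_is_complete() and scsi_queue_insert_checked() below are hypothetical:]

/* hypothetical non-mutating test of REQ_ATOM_COMPLETE */
static inline int blk_rq_is_complete(struct request *rq)
{
	return test_bit(REQ_ATOM_COMPLETE, &rq->atomic_flags);
}

/* guard scsi_queue_insert() against requests the eh already owns */
static void scsi_queue_insert_checked(struct scsi_cmnd *cmd, int reason)
{
	if (blk_rq_is_complete(cmd->request))
		return;	/* timeout/abort handling claimed the request */
	scsi_queue_insert(cmd, reason);
}

[As the mail notes, this only covers the queuecommand-returned-busy path;
the not_ready label in scsi_request_fn would still requeue unchecked.]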
* Re: [dm-devel] block_abort_queue (blk_abort_request) racing with scsi_request_fn
  2010-11-10 16:30     ` Mike Anderson
@ 2010-11-10 21:16       ` Mike Christie
  2010-11-12 17:54         ` Mike Anderson

  0 siblings, 1 reply; 8+ messages in thread
From: Mike Christie @ 2010-11-10 21:16 UTC (permalink / raw)
  To: Mike Anderson; +Cc: device-mapper development, James Bottomley, linux-scsi

On 11/10/2010 10:30 AM, Mike Anderson wrote:
> Mike Christie <michaelc@cs.wisc.edu> wrote:
>> On 11/10/2010 01:09 AM, Mike Christie wrote:
>>> [...]
>>
> No. I am also not seeing it in my recent error injection testing.
>
> Is your test configuration / error injection testing able to reproduce
> it fairly reliably? If so, can you provide some general details on how
> you are generating this error?

In one test we just run dm-multipath over FC with the default timeouts,
then disable the target controller. This leads to IO timing out and the
scsi eh running. We then bring the controller back up, and depending on
timing, that can lead to either the scsi eh failing and IO being failed
upwards, or the scsi eh succeeding and IO being retried.

Also, a really easy, but probably unrealistic, way to hit something
similar (more like what you hit in your 1 second timeout case in the
other mail) is to just set the queue timeout to 0, set the queue_depth
lower than the number of commands you are sending, and let the IO test
run on the scsi disk. This sort of replicates the problem, because the
request times out while it is in scsi_request_fn, similar to how
blk_abort_queue can be called on the cmd while it is in there.

>>> It seems we could fix this by just having blk_requeue_request do a
>>> check for whether the request timed out, similar to what scsi used
>>> to do. A hacky way might be to have 2 requeue functions:
>>> [...]
>>
>> Oops, nevermind. I think this is trying to solve a slightly different
>> problem. I saw your other mail. My patch will not handle the case
>> where:
>>
>> 1. cmd is in scsi_request_fn, has run blk_start_request, and dropped
>>    the queue_lock. It has not yet taken the host lock and incremented
>>    the host busy counters.
>> 2. blk_abort_queue runs q->rq_timed_out_fn and adds the cmd to the
>>    host eh list.
>> 3. Somehow the scsi eh runs and finishes its work before #1 has done
>>    anything, so the cmd was just processed by the scsi eh *and* at the
>>    same time is still lingering in scsi_request_fn (somehow #1 has
>>    still not taken the host lock).
>>
>
> Note that #1 could also return with a busy from queuecommand, which
> will call scsi_queue_insert with no check for complete. One could add a
> blk_mark_rq_complete check prior to calling scsi_queue_insert. This
> does not cover the not_ready label case in scsi_request_fn.
>

Yeah, I was trying not to have to add blk_mark_rq_complete checks in the
scsi layer and just have the block layer handle it. I think my patch in
the previous mail covers both the scsi_request_fn and
scsi_dispatch_cmd/queuecommand cases. The patch should work as long as
the scsi eh has not already completed and run blk_requeue_request on the
request. The problem with my patch is only the case above, where the cmd
gets blk_abort_queue/timeout run on it and the scsi eh somehow is able
to complete and run scsi_queue_insert while scsi_request_fn is still
trying to process the request.

This could be solved by just having the scsi eh thread flush the queue
before running. This would make sure the cmd is not running in
scsi_request_fn while the scsi eh is also processing it. We sort of do
this today with the host_busy checks in scsi_error_handler. The problem
is the window from the time scsi_request_fn drops the queue_lock to when
it grabs the host_lock to increment host_busy. In that window,
blk_abort_request can run, the scsi_eh thread can see host_busy ==
host_failed, start the scsi eh, run to completion, and hit the problem I
described above.

> Another option might be to make blk_abort less aggressive if we cannot
> close all the paths, and switch it to more of a drain model, but then
> we may be in the same boat in selecting how fast we can drain based on
> what we perceive to be a safe time value. This option leaves the
> existing races open in scsi_request_fn / scsi_dispatch_cmd.
>
> -andmike
> --
> Michael Anderson
> andmike@linux.vnet.ibm.com

^ permalink raw reply	[flat|nested] 8+ messages in thread
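[Context: the host_busy check referred to above is the gate in
scsi_error_handler's main loop, which looked roughly like this at the time
(trimmed, approximate sketch, not verbatim source):]

	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		/*
		 * Only run the eh once every outstanding command has
		 * failed, i.e. host_failed == host_busy.  This is the
		 * "sort of" flush: it assumes every command the eh will
		 * touch is counted in host_busy.  A command inside
		 * scsi_request_fn's window can be handed to the eh by
		 * blk_abort_request while its dispatch is still in flight,
		 * so the counts can match while the command is still live
		 * in scsi_request_fn.
		 */
		if (shost->host_failed == 0 ||
		    shost->host_failed != shost->host_busy) {
			schedule();
			continue;
		}
		__set_current_state(TASK_RUNNING);
		scsi_unjam_host(shost);	/* or the transport's eh handler */
	}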
* Re: [dm-devel] block_abort_queue (blk_abort_request) racing with scsi_request_fn
  2010-11-10 21:16       ` Mike Christie
@ 2010-11-12 17:54         ` Mike Anderson
  2010-11-16 21:39           ` Mike Snitzer

  0 siblings, 1 reply; 8+ messages in thread
From: Mike Anderson @ 2010-11-12 17:54 UTC (permalink / raw)
  To: Mike Christie; +Cc: device-mapper development, James Bottomley, linux-scsi

Mike Christie <michaelc@cs.wisc.edu> wrote:
> On 11/10/2010 10:30 AM, Mike Anderson wrote:
>>
>> Is your test configuration / error injection testing able to reproduce
>> it fairly reliably? If so, can you provide some general details on how
>> you are generating this error?
>
> In one test we just run dm-multipath over FC with the default timeouts,
> then disable the target controller. This leads to IO timing out and the
> scsi eh running. We then bring the controller back up, and depending on
> timing, that can lead to either the scsi eh failing and IO being failed
> upwards, or the scsi eh succeeding and IO being retried.
>
> Also, a really easy, but probably unrealistic, way to hit something
> similar (more like what you hit in your 1 second timeout case in the
> other mail) is to just set the queue timeout to 0, set the queue_depth
> lower than the number of commands you are sending, and let the IO test
> run on the scsi disk. This sort of replicates the problem, because the
> request times out while it is in scsi_request_fn, similar to how
> blk_abort_queue can be called on the cmd while it is in there.
>

I ran the timeout-set-to-zero test using "modprobe scsi_debug max_queue=2
virtual_gb=20" along with fio. Without any changes the I/O locks up
within a minute.

I applied your patch, and the I/O locked up in about the same amount of
time, which, based on your previous mail, I think is to be expected.

I then reordered scsi_request_fn to do the host_busy work up front, prior
to getting a request and starting it. It has been running for ~18 hours
(although it is slow, due to all the timeouts occurring). The reordering
in scsi_request_fn concerns me, as we are bouncing the locks more, and it
is unclear how much impact the reordering has on I/O rates. I did not
attach a patch, as currently it does not look very good and I think I am
missing the handling of a couple of error cases.

> [...]
>
> This could be solved by just having the scsi eh thread flush the queue
> before running. This would make sure the cmd is not running in
> scsi_request_fn while the scsi eh is also processing it. We sort of do
> this today with the host_busy checks in scsi_error_handler. The problem
> is the window from the time scsi_request_fn drops the queue_lock to
> when it grabs the host_lock to increment host_busy. In that window,
> blk_abort_request can run, the scsi_eh thread can see host_busy ==
> host_failed, start the scsi eh, run to completion, and hit the problem
> I described above.
>
>> Another option might be to make blk_abort less aggressive if we cannot
>> close all the paths, and switch it to more of a drain model, but then
>> we may be in the same boat in selecting how fast we can drain based on
>> what we perceive to be a safe time value. This option leaves the
>> existing races open in scsi_request_fn / scsi_dispatch_cmd.

With the lock push down work and other cleanups, I assume some of the
lock / unlocks in scsi_request_fn might be reduced, making a reordering /
addressing of this issue less of an impact to the I/O path. This might be
incorrect.

As mentioned above, an option might be to provide a floor for the timeout
part of the abort instead of timing the request out directly. While this
seems like leaving a hole for others to fall into, it might be the least
disruptive for most users, and it may have the side effect of avoiding an
eh wakeup in some cases. At the same time, a value that is safe in all
cases cannot really be selected. We currently provide some protection to
the SG interfaces from hitting this race by providing a floor for the
timeout value of BLK_MIN_SG_TIMEOUT. We also default the sd interface to
SD_TIMEOUT, making it unlikely that a user would hit this case unless
they modified the timeout to a low value, like in the experiment above.

Instead of directly timing out the I/O, we could accelerate the timeout
by a factor. The value could be calculated as a percentage of the queue
timeout value for a default, with the option of exposing a sysfs
attribute similar to fast_io_fail_tmo. The attribute could also provide
an off method, which we do not have today, and it is my bad that we do
not have one (I posted the features patch to multipath but did not
follow up, which would have provided an off switch). Since the queue is
cleared, increasing the lifetime of the I/O by a small amount should
still provide a good latency reduction. The intent was to reduce high
I/O latency during some failure cases (since we do not fast-fail
timeouts anymore: the timeout * N retries plus a large number of queued
I/Os case). At the same time, this interface needs to be safe.

Is this pushing off the real fix, and a bad idea / direction to look at?

-andmike
--
Michael Anderson
andmike@linux.vnet.ibm.com

^ permalink raw reply	[flat|nested] 8+ messages in thread
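[Context: a sketch of the accelerate-rather-than-abort idea above,
assuming a made-up blk_accel_queue_timeout() and divisor -- hypothetical
code, not a posted patch. Instead of forcing an immediate timeout, it
pulls every started request's deadline in to a fraction of the queue
timeout, so the normal timer path, with its existing
blk_mark_rq_complete claim, still does the work:]

void blk_accel_queue_timeout(struct request_queue *q, unsigned int divisor)
{
	struct request *rq;
	unsigned long flags;
	unsigned long accel = q->rq_timeout / divisor;

	if (!accel)
		accel = 1;	/* floor: never force an immediate timeout */

	spin_lock_irqsave(q->queue_lock, flags);
	list_for_each_entry(rq, &q->timeout_list, timeout_list)
		rq->deadline = min(rq->deadline, jiffies + accel);
	/* re-arm the timer so the pulled-in deadlines are noticed promptly */
	mod_timer(&q->timeout, jiffies + accel);
	spin_unlock_irqrestore(q->queue_lock, flags);
}

[Because the request keeps a nonzero residual lifetime, a command caught
in scsi_request_fn's window has time to finish its dispatch before the
timer fires, which is the drain behavior discussed above.]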
* Re: block_abort_queue (blk_abort_request) racing with scsi_request_fn
  2010-11-12 17:54         ` Mike Anderson
@ 2010-11-16 21:39           ` Mike Snitzer
  2010-11-17 17:49             ` [dm-devel] " Mike Anderson

  0 siblings, 1 reply; 8+ messages in thread
From: Mike Snitzer @ 2010-11-16 21:39 UTC (permalink / raw)
  To: Mike Anderson
  Cc: Mike Christie, James Bottomley, linux-scsi, device-mapper development

Hi Mike,

On Fri, Nov 12 2010 at 12:54pm -0500,
Mike Anderson <andmike@linux.vnet.ibm.com> wrote:

> Instead of directly timing out the I/O, we could accelerate the timeout
> by a factor. The value could be calculated as a percentage of the queue
> timeout value for a default, with the option of exposing a sysfs
> attribute similar to fast_io_fail_tmo. The attribute could also provide
> an off method, which we do not have today, and it is my bad that we do
> not have one (I posted the features patch to multipath but did not
> follow up, which would have provided an off switch).

You're referring to these patches:
https://patchwork.kernel.org/patch/96674/
https://patchwork.kernel.org/patch/96673/

Do you have an interest in pursuing these further? In the near term,
should we default to off (so introduce MP_FEATURE_ABORT_Q), given the
current race which exposes corruption?

Or are you now interested in accelerating the timeout? I'd need to
review this thread in more detail to give you an opinion. But I do know
that simply disabling dm-mpath's call to blk_abort_queue() enables some
extensive path-failure load testing to _not_ cause the list corruption
that leads to a crash.

Mike

^ permalink raw reply	[flat|nested] 8+ messages in thread
* Re: [dm-devel] block_abort_queue (blk_abort_request) racing with scsi_request_fn
  2010-11-16 21:39           ` Mike Snitzer
@ 2010-11-17 17:49             ` Mike Anderson

  0 siblings, 0 replies; 8+ messages in thread
From: Mike Anderson @ 2010-11-17 17:49 UTC (permalink / raw)
  To: device-mapper development; +Cc: Mike Christie, James Bottomley, linux-scsi

Mike Snitzer <snitzer@redhat.com> wrote:
> Hi Mike,
>
> On Fri, Nov 12 2010 at 12:54pm -0500,
> Mike Anderson <andmike@linux.vnet.ibm.com> wrote:
>
>> Instead of directly timing out the I/O, we could accelerate the
>> timeout by a factor. The value could be calculated as a percentage of
>> the queue timeout value for a default, with the option of exposing a
>> sysfs attribute similar to fast_io_fail_tmo. The attribute could also
>> provide an off method, which we do not have today, and it is my bad
>> that we do not have one (I posted the features patch to multipath but
>> did not follow up, which would have provided an off switch).
>
> You're referring to these patches:
> https://patchwork.kernel.org/patch/96674/
> https://patchwork.kernel.org/patch/96673/
>

Yes, these are the patches that I was referring to.

> Do you have an interest in pursuing these further?

Yes.

> In the near term, should we default to off (so introduce
> MP_FEATURE_ABORT_Q), given the current race which exposes corruption?
>

Given the current race exposure, defaulting to off might be the best
choice.

> Or are you now interested in accelerating the timeout? I'd need to
> review this thread in more detail to give you an opinion. But I do know
> that simply disabling dm-mpath's call to blk_abort_queue() enables some
> extensive path-failure load testing to _not_ cause the list corruption
> that leads to a crash.

I think the on/off control, plus a fix to address the issue when it is
on, would be good. Since I do not believe we want to impact the normal IO
path with more lock bouncing, modifying the blk_abort_queue function
appeared to be one of the least disruptive options. There might be
others.

-andmike
--
Michael Anderson
andmike@linux.vnet.ibm.com

^ permalink raw reply	[flat|nested] 8+ messages in thread
end of thread, other threads:[~2010-11-17 17:49 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-05-12  5:23 block_abort_queue (blk_abort_request) racing with scsi_request_fn Mike Anderson
2010-11-10  7:09 ` [dm-devel] " Mike Christie
2010-11-10  7:30   ` Mike Christie
2010-11-10 16:30     ` Mike Anderson
2010-11-10 21:16       ` Mike Christie
2010-11-12 17:54         ` Mike Anderson
2010-11-16 21:39           ` Mike Snitzer
2010-11-17 17:49             ` [dm-devel] " Mike Anderson
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox; as well as URLs for NNTP newsgroup(s).