linux-scsi.vger.kernel.org archive mirror
* [PATCH] p00001_scsi_tcq_queue_lock
@ 2004-08-03 17:49 Brian King
  2004-08-07  4:09 ` James Bottomley
  0 siblings, 1 reply; 5+ messages in thread
From: Brian King @ 2004-08-03 17:49 UTC (permalink / raw)
  To: James Bottomley; +Cc: SCSI Mailing List

[-- Attachment #1: Type: text/plain, Size: 64 bytes --]


-- 
Brian King
eServer Storage I/O
IBM Linux Technology Center

[-- Attachment #2: p00001_scsi_tcq_queue_lock.patch --]
[-- Type: text/plain, Size: 1474 bytes --]


Add locking to scsi_activate_tcq and scsi_deactivate_tcq to fix a race
condition that can occur when disabling tcqing with commands in flight.

Signed-off-by: Brian King <brking@us.ibm.com>
---

 linux-2.6.8-rc2-bjking1/include/scsi/scsi_tcq.h |    8 ++++++++
 1 files changed, 8 insertions(+)

diff -puN include/scsi/scsi_tcq.h~scsi_tcq_queue_lock include/scsi/scsi_tcq.h
--- linux-2.6.8-rc2/include/scsi/scsi_tcq.h~scsi_tcq_queue_lock	2004-08-03 11:05:28.000000000 -0500
+++ linux-2.6.8-rc2-bjking1/include/scsi/scsi_tcq.h	2004-08-03 11:12:19.000000000 -0500
@@ -25,9 +25,13 @@
  **/
 static inline void scsi_activate_tcq(struct scsi_device *sdev, int depth)
 {
+	unsigned long flags;
+
         if (sdev->tagged_supported) {
+		spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
 		if (!blk_queue_tagged(sdev->request_queue))
 			blk_queue_init_tags(sdev->request_queue, depth, NULL);
+		spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
 		scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, depth);
         }
 }
@@ -38,8 +42,12 @@ static inline void scsi_activate_tcq(str
  **/
 static inline void scsi_deactivate_tcq(struct scsi_device *sdev, int depth)
 {
+	unsigned long flags;
+
+	spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
 	if (blk_queue_tagged(sdev->request_queue))
 		blk_queue_free_tags(sdev->request_queue);
+	spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
 	scsi_adjust_queue_depth(sdev, 0, depth);
 }
 
_


* Re: [PATCH] p00001_scsi_tcq_queue_lock
  2004-08-03 17:49 [PATCH] p00001_scsi_tcq_queue_lock Brian King
@ 2004-08-07  4:09 ` James Bottomley
  2004-08-09 14:04   ` Brian King
  0 siblings, 1 reply; 5+ messages in thread
From: James Bottomley @ 2004-08-07  4:09 UTC (permalink / raw)
  To: brking; +Cc: SCSI Mailing List

On Tue, 2004-08-03 at 10:49, Brian King wrote:
> Add locking to scsi_activate_tcq and scsi_deactivate_tcq to fix a race
> condition that can occur when disabling tcqing with commands in flight.

It's possible to do it like this.  However, we should really quiesce the
SCSI device before disabling tcq.  It is not legal in the scsi spec to
have both tagged and untagged commands outstanding.  Most drives do the
right thing and return BUSY to an untagged request if they have tags in
flight.  Also, we have to wait until all tags return before freeing the
block layer tag structures...and there's always the devices that do
strange things in this situation...

James




* Re: [PATCH] p00001_scsi_tcq_queue_lock
  2004-08-07  4:09 ` James Bottomley
@ 2004-08-09 14:04   ` Brian King
  2004-08-09 14:14     ` Brian King
  2004-09-17 17:55     ` Brian King
  0 siblings, 2 replies; 5+ messages in thread
From: Brian King @ 2004-08-09 14:04 UTC (permalink / raw)
  To: James Bottomley; +Cc: SCSI Mailing List

[-- Attachment #1: Type: text/plain, Size: 1266 bytes --]

James Bottomley wrote:
> On Tue, 2004-08-03 at 10:49, Brian King wrote:
> 
>>Add locking to scsi_activate_tcq and scsi_deactivate_tcq to fix a race
>>condition that can occur when disabling tcqing with commands in flight.
> 
> 
> It's possible to do it like this.  However, we should really quiesce the
> SCSI device before disabling tcq.  It is not legal in the scsi spec to
> have both tagged and untagged commands outstanding.  Most drives do the
> right thing and return BUSY to an untagged request if they have tags in
> flight.  

Ok. I agree that we do not want both tagged and untagged requests going 
to the device. In the case of the ipr driver, the adapter microcode 
handles this for me. If there are tagged requests outstanding to a 
device and an untagged request is sent, the tagged requests must first 
finish, then the untagged request is sent to the device with no other 
requests outstanding.

> Also, we have to wait until all tags return before freeing the
> block layer tag structures...and there's always the devices that do
> strange things in this situation...

I sent the following patch to Jens last week as well and he applied it. 
This fixes the free problem you mention.


-- 
Brian King
eServer Storage I/O
IBM Linux Technology Center

[-- Attachment #2: blk_queue_free_tags.patch --]
[-- Type: text/plain, Size: 4623 bytes --]



Signed-off-by: Brian King <brking@us.ibm.com>
---

 linux-2.6.8-rc2-bjking1/drivers/block/ll_rw_blk.c |   47 +++++++++++++++++-----
 linux-2.6.8-rc2-bjking1/include/scsi/scsi_tcq.h   |    8 +++
 2 files changed, 46 insertions(+), 9 deletions(-)

diff -puN drivers/block/ll_rw_blk.c~blk_queue_free_tags_bug2 drivers/block/ll_rw_blk.c
--- linux-2.6.8-rc2/drivers/block/ll_rw_blk.c~blk_queue_free_tags_bug2	2004-08-02 17:02:43.000000000 -0500
+++ linux-2.6.8-rc2-bjking1/drivers/block/ll_rw_blk.c	2004-08-03 10:45:14.000000000 -0500
@@ -482,15 +482,14 @@ struct request *blk_queue_find_tag(reque
 EXPORT_SYMBOL(blk_queue_find_tag);
 
 /**
- * blk_queue_free_tags - release tag maintenance info
+ * _blk_queue_free_tags - release tag maintenance info
  * @q:  the request queue for the device
  *
  *  Notes:
  *    blk_cleanup_queue() will take care of calling this function, if tagging
- *    has been used. So there's usually no need to call this directly, unless
- *    tagging is just being disabled but the queue remains in function.
+ *    has been used.
  **/
-void blk_queue_free_tags(request_queue_t *q)
+static void _blk_queue_free_tags(request_queue_t *q)
 {
 	struct blk_queue_tag *bqt = q->queue_tags;
 
@@ -511,7 +510,21 @@ void blk_queue_free_tags(request_queue_t
 	}
 
 	q->queue_tags = NULL;
-	q->queue_flags &= ~(1 << QUEUE_FLAG_QUEUED);
+	clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
+}
+
+/**
+ * blk_queue_free_tags - release tag maintenance info
+ * @q:  the request queue for the device
+ *
+ *  Notes:
+ *    blk_cleanup_queue() will take care of calling this function, if tagging
+ *    has been used. So there's usually no need to call this directly, unless
+ *    tagging is just being disabled but the queue remains in function.
+ **/
+void blk_queue_free_tags(request_queue_t *q)
+{
+	clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
 }
 
 EXPORT_SYMBOL(blk_queue_free_tags);
@@ -564,13 +577,22 @@ fail:
 int blk_queue_init_tags(request_queue_t *q, int depth,
 			struct blk_queue_tag *tags)
 {
-	if (!tags) {
+	int rc;
+
+	BUG_ON(tags && q->queue_tags && tags != q->queue_tags);
+
+	if (!tags && !q->queue_tags) {
 		tags = kmalloc(sizeof(struct blk_queue_tag), GFP_ATOMIC);
 		if (!tags)
 			goto fail;
 
 		if (init_tag_map(q, tags, depth))
 			goto fail;
+	} else if (q->queue_tags) {
+		if ((rc = blk_queue_resize_tags(q, depth)))
+			return rc;
+		set_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
+		return 0;
 	} else
 		atomic_inc(&tags->refcnt);
 
@@ -578,7 +600,7 @@ int blk_queue_init_tags(request_queue_t 
 	 * assign it, all done
 	 */
 	q->queue_tags = tags;
-	q->queue_flags |= (1 << QUEUE_FLAG_QUEUED);
+	set_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
 	return 0;
 fail:
 	kfree(tags);
@@ -1333,8 +1355,8 @@ void blk_cleanup_queue(request_queue_t *
 	if (rl->rq_pool)
 		mempool_destroy(rl->rq_pool);
 
-	if (blk_queue_tagged(q))
-		blk_queue_free_tags(q);
+	if (q->queue_tags)
+		_blk_queue_free_tags(q);
 
 	kmem_cache_free(requestq_cachep, q);
 }
@@ -2034,6 +2056,13 @@ void __blk_put_request(request_queue_t *
 
 		elv_completed_request(q, req);
 
+		if (!list_empty(&req->queuelist)) {
+			if (req->flags & REQ_QUEUED)
+				printk(KERN_ERR "REQ_QUEUED still set in __blk_put_request\n");
+			printk(KERN_ERR "req: %p, q: %p, req_flags: %lx, q_flags: %lx\n",
+			       req, q, req->flags, q->queue_flags);
+		}
+
 		BUG_ON(!list_empty(&req->queuelist));
 
 		blk_free_request(q, req);
diff -puN include/scsi/scsi_tcq.h~blk_queue_free_tags_bug2 include/scsi/scsi_tcq.h
--- linux-2.6.8-rc2/include/scsi/scsi_tcq.h~blk_queue_free_tags_bug2	2004-08-02 17:02:43.000000000 -0500
+++ linux-2.6.8-rc2-bjking1/include/scsi/scsi_tcq.h	2004-08-02 17:02:43.000000000 -0500
@@ -25,9 +25,13 @@
  **/
 static inline void scsi_activate_tcq(struct scsi_device *sdev, int depth)
 {
+        unsigned long flags;
+
         if (sdev->tagged_supported) {
+		spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
 		if (!blk_queue_tagged(sdev->request_queue))
 			blk_queue_init_tags(sdev->request_queue, depth, NULL);
+		spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
 		scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, depth);
         }
 }
@@ -38,8 +42,12 @@ static inline void scsi_activate_tcq(str
  **/
 static inline void scsi_deactivate_tcq(struct scsi_device *sdev, int depth)
 {
+	unsigned long flags;
+
+	spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
 	if (blk_queue_tagged(sdev->request_queue))
 		blk_queue_free_tags(sdev->request_queue);
+	spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
 	scsi_adjust_queue_depth(sdev, 0, depth);
 }
 
_


* Re: [PATCH] p00001_scsi_tcq_queue_lock
  2004-08-09 14:04   ` Brian King
@ 2004-08-09 14:14     ` Brian King
  2004-09-17 17:55     ` Brian King
  1 sibling, 0 replies; 5+ messages in thread
From: Brian King @ 2004-08-09 14:14 UTC (permalink / raw)
  To: brking; +Cc: James Bottomley, SCSI Mailing List

[-- Attachment #1: Type: text/plain, Size: 1382 bytes --]

Oops. I guess this was the actual patch I sent.

Brian King wrote:
> James Bottomley wrote:
> 
>> On Tue, 2004-08-03 at 10:49, Brian King wrote:
>>
>>> Add locking to scsi_activate_tcq and scsi_deactivate_tcq to fix a race
>>> condition that can occur when disabling tcqing with commands in flight.
>>
>>
>>
>> It's possible to do it like this.  However, we should really quiesce the
>> SCSI device before disabling tcq.  It is not legal in the scsi spec to
>> have both tagged and untagged commands outstanding.  Most drives do the
>> right thing and return BUSY to an untagged request if they have tags in
>> flight.  
> 
> 
> Ok. I agree that we do not want both tagged and untagged requests going 
> to the device. In the case of the ipr driver, the adapter microcode 
> handles this for me. If there are tagged requests outstanding to a 
> device and an untagged request is sent, the tagged requests must first 
> finish, then the untagged request is sent to the device with no other 
> requests outstanding.
> 
>> Also, we have to wait until all tags return before freeing the
>> block layer tag structures...and there's always the devices that do
>> strange things in this situation...
> 
> 
> I sent the following patch to Jens last week as well and he applied it. 
> This fixes the free problem you mention.


-- 
Brian King
eServer Storage I/O
IBM Linux Technology Center

[-- Attachment #2: blk_queue_free_tags.patch --]
[-- Type: text/plain, Size: 2920 bytes --]


Currently blk_queue_free_tags cannot be called with ops outstanding. However,
the scsi_tcq API exposed to SCSI LLDs allows scsi_deactivate_tcq (which calls
blk_queue_free_tags) to be called with ops outstanding. Change
blk_queue_free_tags to no longer free the tags, but rather just disable
tagged queuing, and modify blk_queue_init_tags to handle re-enabling
tagged queuing after it has been disabled.

Signed-off-by: Brian King <brking@us.ibm.com>
---

 linux-2.6.8-rc2-bjking1/drivers/block/ll_rw_blk.c |   35 +++++++++++++++++-----
 1 files changed, 28 insertions(+), 7 deletions(-)

diff -puN drivers/block/ll_rw_blk.c~blk_queue_free_tags drivers/block/ll_rw_blk.c
--- linux-2.6.8-rc2/drivers/block/ll_rw_blk.c~blk_queue_free_tags	2004-08-03 10:53:50.000000000 -0500
+++ linux-2.6.8-rc2-bjking1/drivers/block/ll_rw_blk.c	2004-08-03 11:05:06.000000000 -0500
@@ -482,15 +482,14 @@ struct request *blk_queue_find_tag(reque
 EXPORT_SYMBOL(blk_queue_find_tag);
 
 /**
- * blk_queue_free_tags - release tag maintenance info
+ * _blk_queue_free_tags - release tag maintenance info
  * @q:  the request queue for the device
  *
  *  Notes:
  *    blk_cleanup_queue() will take care of calling this function, if tagging
- *    has been used. So there's usually no need to call this directly, unless
- *    tagging is just being disabled but the queue remains in function.
+ *    has been used. So there's no need to call this directly.
  **/
-void blk_queue_free_tags(request_queue_t *q)
+static void _blk_queue_free_tags(request_queue_t *q)
 {
 	struct blk_queue_tag *bqt = q->queue_tags;
 
@@ -514,6 +513,19 @@ void blk_queue_free_tags(request_queue_t
 	q->queue_flags &= ~(1 << QUEUE_FLAG_QUEUED);
 }
 
+/**
+ * blk_queue_free_tags - release tag maintenance info
+ * @q:  the request queue for the device
+ *
+ *  Notes:
+ *	This is used to disable tagged queuing to a device, yet leave
+ *	the queue in function.
+ **/
+void blk_queue_free_tags(request_queue_t *q)
+{
+	clear_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
+}
+
 EXPORT_SYMBOL(blk_queue_free_tags);
 
 static int
@@ -564,13 +576,22 @@ fail:
 int blk_queue_init_tags(request_queue_t *q, int depth,
 			struct blk_queue_tag *tags)
 {
-	if (!tags) {
+	int rc;
+
+	BUG_ON(tags && q->queue_tags && tags != q->queue_tags);
+
+	if (!tags && !q->queue_tags) {
 		tags = kmalloc(sizeof(struct blk_queue_tag), GFP_ATOMIC);
 		if (!tags)
 			goto fail;
 
 		if (init_tag_map(q, tags, depth))
 			goto fail;
+	} else if (q->queue_tags) {
+		if ((rc = blk_queue_resize_tags(q, depth)))
+			return rc;
+		set_bit(QUEUE_FLAG_QUEUED, &q->queue_flags);
+		return 0;
 	} else
 		atomic_inc(&tags->refcnt);
 
@@ -1333,8 +1354,8 @@ void blk_cleanup_queue(request_queue_t *
 	if (rl->rq_pool)
 		mempool_destroy(rl->rq_pool);
 
-	if (blk_queue_tagged(q))
-		blk_queue_free_tags(q);
+	if (q->queue_tags)
+		_blk_queue_free_tags(q);
 
 	kmem_cache_free(requestq_cachep, q);
 }
_
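
(Not part of the patch, just to spell out what this means for an LLD. After
this change scsi_deactivate_tcq() can be called with commands still in
flight, since blk_queue_free_tags() now only clears QUEUE_FLAG_QUEUED. The
tag map itself survives until blk_cleanup_queue(), and a later
scsi_activate_tcq() re-enables and resizes it via blk_queue_init_tags(). The
function below is purely an illustration of that sequence, not something any
driver needs to add.)

#include <scsi/scsi_device.h>
#include <scsi/scsi_tcq.h>

/* Illustrative LLD-side sequence under the new semantics (sketch only). */
static void example_retune_tcq(struct scsi_device *sdev, int new_depth)
{
	/*
	 * Drop back to untagged operation.  Outstanding tagged commands can
	 * still complete and release their tags because the tag map is no
	 * longer freed here.
	 */
	scsi_deactivate_tcq(sdev, 2);

	/* Later: turn tagging back on, resizing the surviving tag map. */
	scsi_activate_tcq(sdev, new_depth);
}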


* Re: [PATCH] p00001_scsi_tcq_queue_lock
  2004-08-09 14:04   ` Brian King
  2004-08-09 14:14     ` Brian King
@ 2004-09-17 17:55     ` Brian King
  1 sibling, 0 replies; 5+ messages in thread
From: Brian King @ 2004-09-17 17:55 UTC (permalink / raw)
  To: James Bottomley; +Cc: SCSI Mailing List

James,

Any chance of applying this patch? I think I've answered most of the issues
you raised. This patch certainly does not preclude other LLDs from
quiescing the device before changing the TCQ state if they need to.

Or would you prefer adding code to the scsi core to quiesce the device
before changing its TCQ state? The scsi core could feasibly quiesce the
device, enable or disable tagged queuing, then resume the queue. I haven't
looked at generating a patch for this yet, so I'm not sure how simple it
would be to do what I am suggesting...
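
Roughly, what I have in mind is something like the sketch below.
scsi_device_quiesce()/scsi_device_resume() are just placeholder names for
whatever quiesce/resume helpers the midlayer would need to grow, so treat
this as an illustration of the idea rather than anything close to a patch.

#include <scsi/scsi_device.h>
#include <scsi/scsi_tcq.h>

/*
 * Sketch only: change the tagged queuing state with the device quiesced,
 * so tagged and untagged commands are never outstanding at the same time.
 * The quiesce/resume helpers are assumed to stop new commands from being
 * issued and to wait for in-flight commands to complete.
 */
static void scsi_set_tcq_state(struct scsi_device *sdev, int enable, int depth)
{
	scsi_device_quiesce(sdev);		/* drain outstanding commands */

	if (enable)
		scsi_activate_tcq(sdev, depth);
	else
		scsi_deactivate_tcq(sdev, depth);

	scsi_device_resume(sdev);		/* restart the request queue */
}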

Thanks

-Brian


Brian King wrote:
> James Bottomley wrote:
> 
>> On Tue, 2004-08-03 at 10:49, Brian King wrote:
>>
>>> Add locking to scsi_activate_tcq and scsi_deactivate_tcq to fix a race
>>> condition that can occur when disabling tcqing with commands in flight.
>>
>>
>>
>> It's possible to do it like this.  However, we should really quiesce the
>> SCSI device before disabling tcq.  It is not legal in the scsi spec to
>> have both tagged and untagged commands outstanding.  Most drives do the
>> right thing and return BUSY to an untagged request if they have tags in
>> flight.  
> 
> 
> Ok. I agree that we do not want both tagged and untagged requests going 
> to the device. In the case of the ipr driver, the adapter microcode 
> handles this for me. If there are tagged requests outstanding to a 
> device and an untagged request is sent, the tagged requests must first 
> finish, then the untagged request is sent to the device with no other 
> requests outstanding.
> 
>> Also, we have to wait until all tags return before freeing the
>> block layer tag structures...and there's always the devices that do
>> strange things in this situation...
> 

The block layer now handles this appropriately and does not free the tags
until blk_cleanup_queue is called.


> diff -puN include/scsi/scsi_tcq.h~blk_queue_free_tags_bug2 include/scsi/scsi_tcq.h
> --- linux-2.6.8-rc2/include/scsi/scsi_tcq.h~blk_queue_free_tags_bug2	2004-08-02 17:02:43.000000000 -0500
> +++ linux-2.6.8-rc2-bjking1/include/scsi/scsi_tcq.h	2004-08-02 17:02:43.000000000 -0500
> @@ -25,9 +25,13 @@
>   **/
>  static inline void scsi_activate_tcq(struct scsi_device *sdev, int depth)
>  {
> +        unsigned long flags;
> +
>          if (sdev->tagged_supported) {
> +		spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
>  		if (!blk_queue_tagged(sdev->request_queue))
>  			blk_queue_init_tags(sdev->request_queue, depth, NULL);
> +		spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
>  		scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, depth);
>          }
>  }
> @@ -38,8 +42,12 @@ static inline void scsi_activate_tcq(str
>   **/
>  static inline void scsi_deactivate_tcq(struct scsi_device *sdev, int depth)
>  {
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
>  	if (blk_queue_tagged(sdev->request_queue))
>  		blk_queue_free_tags(sdev->request_queue);
> +	spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
>  	scsi_adjust_queue_depth(sdev, 0, depth);
>  }
>  
> _

-- 
Brian King
eServer Storage I/O
IBM Linux Technology Center


