From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Mike Christie <michaelc@cs.wisc.edu>
Cc: bugme-daemon@bugzilla.kernel.org, linux-scsi@vger.kernel.org
Subject: Re: [Bug 11898] mke2fs hang on AIC79 device.
Date: Sun, 09 Nov 2008 09:47:17 -0600
Message-ID: <1226245637.19841.7.camel@localhost.localdomain>
In-Reply-To: <4911D6F2.2080309@cs.wisc.edu>

On Wed, 2008-11-05 at 11:25 -0600, Mike Christie wrote:
> James Bottomley wrote:
> > The reason for doing it like this is so that if someone slices the loop
> > apart again (which is how this crept in) they won't get a continue or
> > something which allows this to happen.
> > 
> > It shouldn't be conditional on the starved list (or anything else)
> > because it's probably a register and should happen at the same point as
> > the list deletion but before we drop the problem lock (because once we
> > drop that lock we'll need to recompute starvation).
> > 
> > James
> > 
> > ---
> > 
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index f5d3b96..f9a531f 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -606,6 +606,7 @@ static void scsi_run_queue(struct request_queue *q)
> >  		}
> >  
> >  		list_del_init(&sdev->starved_entry);
> > +		starved_entry = NULL;
> 
> Should this be starved_head?
> 
> >  		spin_unlock(shost->host_lock);
> >  
> >  		spin_lock(sdev->request_queue->queue_lock);
> > 
> 
> Do you think we can just splice the list, like the attached patch does
> (the patch is an example only and is not tested)?
> 
> I think the code is clearer, but it may be less efficient. If 
> scsi_run_queue is run on multiple processors, then with the attached 
> patch one processor would splice the whole list and might have to 
> execute __blk_run_queue for every device on the list serially.
> 
> Currently we can at least prep the devices in parallel. One processor 
> would grab one entry from the list and drop the host lock, so another 
> processor could grab the next entry and start the execution process (I 
> write "start the process" because the second entry's execution might 
> still have to wait on the first one when the scsi layer grabs the queue 
> lock again).

I reconsidered: I think this would work well if we simply run through
the starved list once each time, giving each device the chance to
execute.  Something like this:

James

---

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f5d3b96..979e07a 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -567,15 +567,18 @@ static inline int scsi_host_is_busy(struct Scsi_Host *shost)
  */
 static void scsi_run_queue(struct request_queue *q)
 {
-	struct scsi_device *starved_head = NULL, *sdev = q->queuedata;
+	struct scsi_device *tmp, *sdev = q->queuedata;
 	struct Scsi_Host *shost = sdev->host;
+	LIST_HEAD(starved_list);
 	unsigned long flags;
 
 	if (scsi_target(sdev)->single_lun)
 		scsi_single_lun_run(sdev);
 
 	spin_lock_irqsave(shost->host_lock, flags);
-	while (!list_empty(&shost->starved_list) && !scsi_host_is_busy(shost)) {
+	list_splice_init(&shost->starved_list, &starved_list);
+
+	list_for_each_entry_safe(sdev, tmp, &starved_list, starved_entry) {
 		int flagset;
 
 		/*
@@ -588,22 +591,10 @@ static void scsi_run_queue(struct request_queue *q)
 		 * scsi_request_fn must get the host_lock before checking
 		 * or modifying starved_list or starved_entry.
 		 */
-		sdev = list_entry(shost->starved_list.next,
-					  struct scsi_device, starved_entry);
-		/*
-		 * The *queue_ready functions can add a device back onto the
-		 * starved list's tail, so we must check for a infinite loop.
-		 */
-		if (sdev == starved_head)
+		if (scsi_host_is_busy(shost))
 			break;
-		if (!starved_head)
-			starved_head = sdev;
-
-		if (scsi_target_is_busy(scsi_target(sdev))) {
-			list_move_tail(&sdev->starved_entry,
-				       &shost->starved_list);
+		if (scsi_target_is_busy(scsi_target(sdev)))
 			continue;
-		}
 
 		list_del_init(&sdev->starved_entry);
 		spin_unlock(shost->host_lock);
@@ -621,6 +612,9 @@ static void scsi_run_queue(struct request_queue *q)
 
 		spin_lock(shost->host_lock);
 	}
+
+	/* put any unprocessed entries back */
+	list_splice(&starved_list, &shost->starved_list);
 	spin_unlock_irqrestore(shost->host_lock, flags);
 
 	blk_run_queue(q);


