From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Chris Boot <bootc@bootc.net>
Cc: "Woodhouse, David" <david.woodhouse@intel.com>,
adam radford <aradford@gmail.com>,
lkml <linux-kernel@vger.kernel.org>,
Adam Radford <linuxraid@lsi.com>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>
Subject: Re: iommu_iova leak [inside 3w-9xxx]
Date: Sun, 18 Sep 2011 18:56:20 +0400
Message-ID: <1316357780.27722.9.camel@dabdike>
In-Reply-To: <1316355934.27722.6.camel@dabdike>

On Sun, 2011-09-18 at 18:25 +0400, James Bottomley wrote:
> On Sun, 2011-09-18 at 15:05 +0100, Chris Boot wrote:
> > > Hardly ... all it's saying is that twa_exit doesn't wait for pending I/O
> > > to complete, so when you remove the module it tears down in the middle
> > > of an I/O. A bug, yes, but it's not indicative of any sort of leak in
> > > the maps/unmaps.
> >
> >
> > James,
> >
> > I don't think that's the case - I had unmounted all filesystems, deactivated all volume groups, and performed a sync before waiting a few seconds and running rmmod. Next time I'll also 'echo 1 > /sys/block/sdX/device/delete' if that's helpful.
>
> Actually, I take all that back: the driver has a bug in QUEUE_FULL
> handling: twa_scsi_queue() calls twa_scsiop_execute_scsi(), which maps
> the dma buffer, but if the card responds QUEUE_FULL it just returns
> SCSI_MLQUEUE_HOST_BUSY without ever unmapping. That leg in the code
> frees the request but also doesn't unmap it. In fact any error return
> from twa_scsiop_execute_scsi() seems to have the same problem (but
> QUEUE_FULL is the only silent one).
>
> I trust Adam will fix this.
Actually, while Adam's mulling this, try the following. It should at
least confirm we're on the right track.

James
---
diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
index b7bd5b0..3868ab2 100644
--- a/drivers/scsi/3w-9xxx.c
+++ b/drivers/scsi/3w-9xxx.c
@@ -1800,10 +1800,12 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
switch (retval) {
case SCSI_MLQUEUE_HOST_BUSY:
twa_free_request_id(tw_dev, request_id);
+ twa_unmap_scsi_data(tw_dev, request_id);
break;
case 1:
tw_dev->state[request_id] = TW_S_COMPLETED;
twa_free_request_id(tw_dev, request_id);
+ twa_unmap_scsi_data(tw_dev, request_id);
SCpnt->result = (DID_ERROR << 16);
done(SCpnt);
retval = 0;
Thread overview: 17+ messages
2011-09-16 12:43 iommu_iova leak Chris Boot
2011-09-17 10:45 ` Woodhouse, David
2011-09-17 11:57 ` Chris Boot
2011-09-17 14:29 ` iommu_iova leak [inside 3w-9xxx] Chris Boot
2011-09-17 19:22 ` adam radford
2011-09-17 19:31 ` Woodhouse, David
2011-09-17 20:17 ` Chris Boot
2011-09-18 12:06 ` Chris Boot
2011-09-18 13:39 ` Woodhouse, David
2011-09-18 14:01 ` James Bottomley
2011-09-18 14:05 ` Chris Boot
2011-09-18 14:25 ` James Bottomley
2011-09-18 14:56 ` James Bottomley [this message]
2011-09-18 16:19 ` Chris Boot
2011-09-19 15:32 ` Chris Boot
2011-09-19 19:26 ` adam radford
2011-09-19 20:54 ` Chris Boot