From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Tejun Heo <htejun@gmail.com>
Cc: Jeff Garzik <jeff@garzik.org>,
linux-scsi <linux-scsi@vger.kernel.org>,
linux-ide <linux-ide@vger.kernel.org>,
Jens Axboe <Jens.Axboe@oracle.com>,
FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Subject: Re: [PATCH RESEND number 2] libata: eliminate the home grown dma padding in favour of that provided by the block layer
Date: Sun, 03 Feb 2008 10:12:36 -0600
Message-ID: <1202055156.3318.58.camel@localhost.localdomain>
In-Reply-To: <47A5DA5F.3070209@gmail.com>

On Mon, 2008-02-04 at 00:14 +0900, Tejun Heo wrote:
> James Bottomley wrote:
> > I'm reluctant to add another parameter to the request, but this one you
> > can calculate: you just do it wherever you work out the size of the
> > request. If data_len is the true data length and total_data_len is the
> > data length plus the drain length, the calculation fragment is
> >
> > 	if (blk_pc_request(req))
> > 		data_len = req->data_len;
> > 	else
> > 		data_len = req->nr_sectors << 9;
> > 	total_data_len = data_len + req->q->dma_drain_size;
> >
> > If the request has already been mapped by scsi, then data_len is
> > actually scsi_cmnd->sdb.length
>
> We either need to add a field or a helper: rq->data_len should
> probably record the size with the drain buffer attached, and we then
> add either a raw_data_len field or a blk_rq_raw_data_len() helper.
> The padded size is the length of data in the sg and is what should be
> programmed into the controller etc... For ATAPI the raw size is only
> used to program the chunk size for odd devices.
OK, could you show me an example of where you need it and I'll come up
with the macro ... that should also help us decide whether it needs to
be in block or in libata alone. Note that aic94xx only wants the true
size (we effectively treat the drain element as non-existent), and I
anticipate this being true of most conforming implementations. It's
only the problem HBAs that need to know how much slack they have for DMA
overruns.
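
To make that concrete, here's a rough sketch of the kind of helper I
have in mind -- untested, and it assumes the scheme above where
rq->data_len records the length with the drain buffer attached; none
of the names are final:

	/*
	 * Sketch only: assumes the drain element is attached to every
	 * PC request on a queue that has dma_drain_size set, which is
	 * not necessarily how the final patch will end up working.
	 */
	static inline unsigned int blk_rq_raw_data_len(struct request *rq)
	{
		/* recover the true transfer length, minus the drain */
		if (blk_pc_request(rq) && rq->q->dma_drain_size)
			return rq->data_len - rq->q->dma_drain_size;
		return rq->data_len;
	}

Conforming HBAs (aic94xx included) would be programmed with the raw
length and never know the drain element exists; only the problem HBAs
would program the padded rq->data_len.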
> >> What's needed is updating libata accordingly and testing it.
> >
> > Actually, I sent the patch to do this a few days ago:
> >
> > http://marc.info/?l=linux-ide&m=120189565418258
> >
> > But I've attached it again.
>
> Thanks a lot.
>
> >> I'm currently away from all my toys. I'll implement the ATA part,
> >> test it and submit the patch on Monday.
> >
> > Great, will look forward to the results. Like I said, I think the
> > turn-on of draining in slave_configure should be narrowed from all
> > ATAPI devices to AHCI ATAPI devices only, unless there's evidence
> > that some other implementation that uses DMA to transfer PIO is
> > similarly broken (I know the aic94xx works fine, for instance ...
>
> What do you mean by aic94xx working fine? Does the controller
> automatically throw away extra data FISes?
The aic94xx sequencer has a very finely honed sense of DMA transfers.
It's fully automated, and handles both ATA DMA and ATA PIO in the
sequencer engine (so all the driver sees is DMA).
It reports both underrun and overrun conditions. For a DMA underrun
(the device transfers less than expected), it just returns what it has
and reports how much was missing as the residual. For an overrun (the
device tried to take more than it was programmed to send, on either
read or write): in PIO it does seem to zero fill or discard the excess
and then simply report task complete with overrun, letting libsas sort
it out; for DMA I suspect it first tries DMAT before taking other
actions, but I'd need a protocol analyser (or the sequencer docs) to
be sure.
We handle overruns as error conditions in both SAS and ATA at the
moment, but the point is that the ATAPI device is fully happy and
quiesced when we do this.
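
For what it's worth, the driver side of that looks roughly like the
fragment below. This is an illustrative sketch, not the literal
aic94xx code: the names dma_status, residual and complete_task() are
made up for the example.

	/* sketch of the completion path; these names are hypothetical */
	switch (stat->dma_status) {
	case DMA_UNDERRUN:
		/* device transferred less than programmed: complete
		 * normally and report the shortfall as the residual */
		complete_task(task, SAM_STAT_GOOD, stat->residual);
		break;
	case DMA_OVERRUN:
		/* device tried to take more than programmed: the
		 * sequencer has already quiesced it, so just flag the
		 * error and let libsas sort it out */
		complete_task(task, SAS_DATA_OVERRUN, 0);
		break;
	}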
James