From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: Pavel Machek <pavel@suse.cz>
Cc: Tejun Heo <tj@kernel.org>, Andreas Schwab <schwab@suse.de>,
kernel list <linux-kernel@vger.kernel.org>,
jgarzik@pobox.com,
IDE/ATA development list <linux-ide@vger.kernel.org>,
Trivial patch monkey <trivial@kernel.org>
Subject: Re: sata_svw data corruption, strange problems
Date: Mon, 23 Jun 2008 19:04:19 +1000
Message-ID: <1214211859.8011.250.camel@pasglop>
In-Reply-To: <20080623090130.GF1850@elf.ucw.cz>
On Mon, 2008-06-23 at 11:01 +0200, Pavel Machek wrote:
> On Mon 2008-06-23 17:56:32, Tejun Heo wrote:
> > Pavel Machek wrote:
> > > On Mon 2008-06-23 10:39:40, Andreas Schwab wrote:
> > >> Pavel Machek <pavel@suse.cz> writes:
> > >>
> > >>> + controller, the controller could hang. In other cases it
> > >>> + could return partial data returning in data
> > >>> + corruption. This problem has been seen in PPC systems and
> > >> s/returning/resulting/ ?
> > >
> > > Fix thinko in sata_svw comment.
> > >
> > > Signed-off-by: Pavel Machek <pavel@suse.cz>
> >
> > Please collapse into one patch. Thanks.
Am I the only one who finds Pavel's variant almost as obscure as
the original one? :-)
It should explain precisely what the workaround is, i.e. that the
r/w command is issued from bmdma_start(), after the DMA engine has
been started, instead of from bmdma_setup() where it is normally
issued.
BTW, Tejun, I suppose that starting the DMA only after issuing the
command is standard practice for legacy/SFF-type controllers? Or is
it just because that's how Linux has done it until now?
Ben.
> ---
>
> Clarify comment in sata_svw.c.
>
> Signed-off-by: Pavel Machek <pavel@suse.cz>
>
> diff --git a/drivers/ata/sata_svw.c b/drivers/ata/sata_svw.c
> index 16aa683..fb13b82 100644
> --- a/drivers/ata/sata_svw.c
> +++ b/drivers/ata/sata_svw.c
> @@ -253,21 +253,29 @@ static void k2_bmdma_start_mmio(struct a
> /* start host DMA transaction */
> dmactl = readb(mmio + ATA_DMA_CMD);
> writeb(dmactl | ATA_DMA_START, mmio + ATA_DMA_CMD);
> - /* There is a race condition in certain SATA controllers that can
> - be seen when the r/w command is given to the controller before the
> - host DMA is started. On a Read command, the controller would initiate
> - the command to the drive even before it sees the DMA start. When there
> - are very fast drives connected to the controller, or when the data request
> - hits in the drive cache, there is the possibility that the drive returns a part
> - or all of the requested data to the controller before the DMA start is issued.
> - In this case, the controller would become confused as to what to do with the data.
> - In the worst case when all the data is returned back to the controller, the
> - controller could hang. In other cases it could return partial data returning
> - in data corruption. This problem has been seen in PPC systems and can also appear
> - on an system with very fast disks, where the SATA controller is sitting behind a
> - number of bridges, and hence there is significant latency between the r/w command
> - and the start command. */
> - /* issue r/w command if the access is to ATA*/
> + /* This works around possible data corruption.
> +
> + On certain SATA controllers, corruption can be seen when the
> + r/w command is given to the controller before the host DMA is
> + started.
> +
> + On a Read command, the controller would initiate the
> + command to the drive even before it sees the DMA
> + start. When there are very fast drives connected to the
> + controller, or when the data request hits in the drive
> + cache, there is the possibility that the drive returns a
> + part or all of the requested data to the controller before
> + the DMA start is issued. In this case, the controller
> + would become confused as to what to do with the data. In
> + the worst case when all the data is returned back to the
> + controller, the controller could hang. In other cases it
> + could return partial data, resulting in data
> + corruption. This problem has been seen in PPC systems and
> + can also appear on a system with very fast disks, where
> + the SATA controller is sitting behind a number of bridges,
> + and hence there is significant latency between the r/w
> + command and the start command. */
> + /* issue r/w command if the access is to ATA */
> if (qc->tf.protocol == ATA_PROT_DMA)
> ap->ops->sff_exec_command(ap, &qc->tf);
> }
>
>
Thread overview: 14+ messages
[not found] <20080617093602.GA28140@elf.ucw.cz>
2008-06-23 0:37 ` sata_svw data corruption, strange problems Tejun Heo
2008-06-23 8:20 ` Pavel Machek
2008-06-23 8:22 ` Tejun Heo
2008-06-23 8:39 ` Andreas Schwab
2008-06-23 8:53 ` Pavel Machek
2008-06-23 8:56 ` Tejun Heo
2008-06-23 9:01 ` Pavel Machek
2008-06-23 9:04 ` Benjamin Herrenschmidt [this message]
2008-06-23 9:26 ` Pavel Machek
2008-06-23 9:48 ` Tejun Heo
2008-06-23 9:42 ` Alan Cox
2008-06-23 10:23 ` Benjamin Herrenschmidt
2008-06-23 13:05 ` Tejun Heo
2008-06-27 6:41 ` Jeff Garzik