From: Mark Lord <liml@rtr.ca>
To: Jeff Garzik <jgarzik@pobox.com>,
	IDE/ATA development list <linux-ide@vger.kernel.org>
Subject: [PATCH 01/02] sata_mv: tidy up qc->tf usage in qc_prep() functions
Date: Mon, 13 Apr 2009 11:27:18 -0400
Message-ID: <49E359D6.1090309@rtr.ca>

Tidy up qc->tf accesses in the mv_qc_prep() and mv_qc_prep_iie() functions:
initialize the local taskfile pointer once at declaration, rather than
mixing qc->tf dereferences with a late tf assignment.

Signed-off-by: Mark Lord <mlord@pobox.com>
---

For upstream-linus, please.
In preparation for the multi_count errata patch which follows.
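
For readers unfamiliar with the pattern, the change simply hoists the
repeated qc->tf accesses into a local pointer initialized at declaration,
so every later reference goes through tf->.  A minimal, self-contained
sketch of the idea follows; the struct and enum definitions below are
trimmed stand-ins invented for the example so it compiles on its own, not
the real libata definitions:

	#include <stdio.h>

	/* Trimmed stand-ins for the libata types used in the hunks below. */
	enum { ATA_PROT_DMA = 1, ATA_PROT_NCQ = 2, ATA_TFLAG_WRITE = 0x100 };

	struct ata_taskfile {
		unsigned int protocol;
		unsigned int flags;
	};

	struct ata_queued_cmd {
		struct ata_taskfile tf;
	};

	/* As after the patch: tf is bound to &qc->tf once, at declaration. */
	static void qc_prep_sketch(struct ata_queued_cmd *qc)
	{
		struct ata_taskfile *tf = &qc->tf;

		if (tf->protocol != ATA_PROT_DMA &&
		    tf->protocol != ATA_PROT_NCQ)
			return;

		if (!(tf->flags & ATA_TFLAG_WRITE))
			printf("read request\n");
	}

	int main(void)
	{
		struct ata_queued_cmd qc = { .tf = { .protocol = ATA_PROT_DMA } };

		qc_prep_sketch(&qc);
		return 0;
	}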

--- old/drivers/ata/sata_mv.c.upstream	2009-04-13 09:46:51.000000000 -0400
+++ new/drivers/ata/sata_mv.c	2009-04-13 10:09:54.000000000 -0400
@@ -1898,17 +1898,17 @@
 	struct ata_port *ap = qc->ap;
 	struct mv_port_priv *pp = ap->private_data;
 	__le16 *cw;
-	struct ata_taskfile *tf;
+	struct ata_taskfile *tf = &qc->tf;
 	u16 flags = 0;
 	unsigned in_index;
 
-	if ((qc->tf.protocol != ATA_PROT_DMA) &&
-	    (qc->tf.protocol != ATA_PROT_NCQ))
+	if ((tf->protocol != ATA_PROT_DMA) &&
+	    (tf->protocol != ATA_PROT_NCQ))
 		return;
 
 	/* Fill in command request block
 	 */
-	if (!(qc->tf.flags & ATA_TFLAG_WRITE))
+	if (!(tf->flags & ATA_TFLAG_WRITE))
 		flags |= CRQB_FLAG_READ;
 	WARN_ON(MV_MAX_Q_DEPTH <= qc->tag);
 	flags |= qc->tag << CRQB_TAG_SHIFT;
@@ -1924,7 +1924,6 @@
 	pp->crqb[in_index].ctrl_flags = cpu_to_le16(flags);
 
 	cw = &pp->crqb[in_index].ata_cmd[0];
-	tf = &qc->tf;
 
 	/* Sadly, the CRQB cannot accomodate all registers--there are
 	 * only 11 bytes...so we must pick and choose required
@@ -1990,16 +1989,16 @@
 	struct ata_port *ap = qc->ap;
 	struct mv_port_priv *pp = ap->private_data;
 	struct mv_crqb_iie *crqb;
-	struct ata_taskfile *tf;
+	struct ata_taskfile *tf = &qc->tf;
 	unsigned in_index;
 	u32 flags = 0;
 
-	if ((qc->tf.protocol != ATA_PROT_DMA) &&
-	    (qc->tf.protocol != ATA_PROT_NCQ))
+	if ((tf->protocol != ATA_PROT_DMA) &&
+	    (tf->protocol != ATA_PROT_NCQ))
 		return;
 
 	/* Fill in Gen IIE command request block */
-	if (!(qc->tf.flags & ATA_TFLAG_WRITE))
+	if (!(tf->flags & ATA_TFLAG_WRITE))
 		flags |= CRQB_FLAG_READ;
 
 	WARN_ON(MV_MAX_Q_DEPTH <= qc->tag);
@@ -2015,7 +2014,6 @@
 	crqb->addr_hi = cpu_to_le32((pp->sg_tbl_dma[qc->tag] >> 16) >> 16);
 	crqb->flags = cpu_to_le32(flags);
 
-	tf = &qc->tf;
 	crqb->ata_cmd[0] = cpu_to_le32(
 			(tf->command << 16) |
 			(tf->feature << 24)
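
As a side note on the last hunk: the CRQB stores taskfile registers packed
into little-endian 32-bit words, so the command and feature bytes are
shifted into position before the endian conversion.  A small stand-alone
sketch of that shift-and-pack step; cpu_to_le32_sketch() here is a
hypothetical stand-in for the kernel's cpu_to_le32(), which byte-swaps on
big-endian hosts and is a no-op on little-endian ones:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for cpu_to_le32(); assumes a little-endian host. */
	static uint32_t cpu_to_le32_sketch(uint32_t x)
	{
		return x;
	}

	int main(void)
	{
		uint8_t command = 0xC8;	/* READ DMA, as an example opcode */
		uint8_t feature = 0x00;

		/* Same packing as crqb->ata_cmd[0] in the hunk above. */
		uint32_t word0 = cpu_to_le32_sketch(((uint32_t)command << 16) |
						    ((uint32_t)feature << 24));

		printf("crqb->ata_cmd[0] = 0x%08x\n", word0);
		return 0;
	}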

