linux-spi.vger.kernel.org archive mirror
From: Charles Keepax <ckeepax@opensource.cirrus.com>
To: <broonie@kernel.org>
Cc: <srinivas.goud@amd.com>, <linux-spi@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <patches@opensource.cirrus.com>
Subject: [PATCH 1/2] spi: spi-cadence: Avoid read of RX FIFO before it's ready
Date: Tue, 9 May 2023 17:41:52 +0100
Message-ID: <20230509164153.3907694-1-ckeepax@opensource.cirrus.com>

Recent changes to cdns_spi_irq introduced some issues.

Firstly, when writing the end of a longer transaction, the code in
cdns_spi_irq will write data into the TX FIFO, then immediately
fall into the if (!xspi->tx_bytes) path and attempt to read data
from the RX FIFO. However, that data is not ready until the TX FIFO
has emptied, so the read happens too early.
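
A toy user-space model of the problem (not driver code, values made
up): bytes only appear in the RX FIFO as the controller shifts them
out of the TX FIFO, so a read issued straight after queuing the TX
data finds nothing there yet:

  #include <stdio.h>

  struct fifo_model {
  	int tx_level;	/* bytes queued but not yet shifted out */
  	int rx_level;	/* bytes shifted out and waiting to be read */
  };

  static void queue_tx(struct fifo_model *f, int n)
  {
  	f->tx_level += n;
  }

  /* the hardware moves bytes from TX to RX as they go out on the wire */
  static void shift_bytes(struct fifo_model *f, int n)
  {
  	while (n-- && f->tx_level) {
  		f->tx_level--;
  		f->rx_level++;
  	}
  }

  int main(void)
  {
  	struct fifo_model f = { 0, 0 };

  	queue_tx(&f, 16);
  	printf("readable right after queuing: %d\n", f.rx_level); /* 0 */
  	shift_bytes(&f, 16);
  	printf("readable once TX has drained: %d\n", f.rx_level); /* 16 */
  	return 0;
  }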

Secondly, the variable trans_cnt is now rather inaccurately named,
since in cases where the watermark is set to 1, trans_cnt will be 1
but the number of bytes transferred may be much larger.
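
For illustration, this is my reading of the accounting the new code
switches to (the helper and values below are made up; in the driver
the counts live in the xspi state): rx_bytes and tx_bytes track what
is still left to read and write, so the bytes already completed come
to rx_bytes - tx_bytes, less up to threshold bytes that may still be
waiting in the TX FIFO when the watermark interrupt fires:

  #include <stdio.h>

  static int completed_bytes(int rx_bytes, int tx_bytes, int threshold)
  {
  	int cnt = rx_bytes - tx_bytes;

  	/* up to threshold bytes may still be sitting in the TX FIFO */
  	if (threshold > 1)
  		cnt -= threshold;

  	return cnt;
  }

  int main(void)
  {
  	/* 200-byte transfer, 128-byte FIFO: 128 queued, none read back
  	 * yet, watermark at 64 */
  	printf("%d\n", completed_bytes(200, 72, 64));	/* 64 */
  	/* tail of the transfer: all queued, 40 left to read, watermark 1 */
  	printf("%d\n", completed_bytes(40, 0, 1));	/* 40 */
  	return 0;
  }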

Finally, when setting up the transaction we set the watermark to 50%
of the FIFO if the transaction is greater than 50% of the FIFO. However,
there is no need to split a transaction that is smaller than the
whole FIFO, so anything up to the FIFO size can be done in a single
transaction.
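
As a sketch of the resulting watermark choice (assuming an example
FIFO depth of 128 and that the watermark otherwise stays at its
default of 1):

  #include <stdio.h>

  #define FIFO_DEPTH 128	/* example value; the driver reads the real depth */

  static int pick_threshold(int tx_bytes)
  {
  	/* only transfers larger than the FIFO need a 50% refill watermark */
  	if (tx_bytes > FIFO_DEPTH)
  		return FIFO_DEPTH >> 1;

  	return 1;
  }

  int main(void)
  {
  	printf("%d\n", pick_threshold(100));	/* 1: fits in the FIFO */
  	printf("%d\n", pick_threshold(300));	/* 64: refilled mid-transfer */
  	return 0;
  }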

Tidy up the code a little to avoid repeatedly calling
cdns_spi_read_rx_fifo with a count of 1, and correct the three issues
noted above.

Fixes: b1b90514eaa3 ("spi: spi-cadence: Add support for Slave mode")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
---
 drivers/spi/spi-cadence.c | 42 ++++++++++++++-------------------------
 1 file changed, 15 insertions(+), 27 deletions(-)

diff --git a/drivers/spi/spi-cadence.c b/drivers/spi/spi-cadence.c
index ac85d55622127..b0ccb138e3566 100644
--- a/drivers/spi/spi-cadence.c
+++ b/drivers/spi/spi-cadence.c
@@ -304,13 +304,11 @@ static int cdns_spi_setup_transfer(struct spi_device *spi,
  * cdns_spi_fill_tx_fifo - Fills the TX FIFO with as many bytes as possible
  * @xspi:	Pointer to the cdns_spi structure
  */
-static void cdns_spi_fill_tx_fifo(struct cdns_spi *xspi)
+static void cdns_spi_fill_tx_fifo(struct cdns_spi *xspi, unsigned int avail)
 {
 	unsigned long trans_cnt = 0;
 
-	while ((trans_cnt < xspi->tx_fifo_depth) &&
-	       (xspi->tx_bytes > 0)) {
-
+	while ((trans_cnt < avail) && (xspi->tx_bytes > 0)) {
 		/* When xspi in busy condition, bytes may send failed,
 		 * then spi control did't work thoroughly, add one byte delay
 		 */
@@ -381,33 +379,23 @@ static irqreturn_t cdns_spi_irq(int irq, void *dev_id)
 		spi_finalize_current_transfer(ctlr);
 		status = IRQ_HANDLED;
 	} else if (intr_status & CDNS_SPI_IXR_TXOW) {
-		int trans_cnt = cdns_spi_read(xspi, CDNS_SPI_THLD);
+		int threshold = cdns_spi_read(xspi, CDNS_SPI_THLD);
+		int trans_cnt = xspi->rx_bytes - xspi->tx_bytes;
+
+		if (threshold > 1)
+			trans_cnt -= threshold;
+
 		/* Set threshold to one if number of pending are
 		 * less than half fifo
 		 */
 		if (xspi->tx_bytes < xspi->tx_fifo_depth >> 1)
 			cdns_spi_write(xspi, CDNS_SPI_THLD, 1);
 
-		while (trans_cnt) {
-			cdns_spi_read_rx_fifo(xspi, 1);
-
-			if (xspi->tx_bytes) {
-				if (xspi->txbuf)
-					cdns_spi_write(xspi, CDNS_SPI_TXD,
-						       *xspi->txbuf++);
-				else
-					cdns_spi_write(xspi, CDNS_SPI_TXD, 0);
-				xspi->tx_bytes--;
-			}
-			trans_cnt--;
-		}
-		if (!xspi->tx_bytes) {
-			/* Fixed delay due to controller limitation with
-			 * RX_NEMPTY incorrect status
-			 * Xilinx AR:65885 contains more details
-			 */
-			udelay(10);
-			cdns_spi_read_rx_fifo(xspi, xspi->rx_bytes);
+		cdns_spi_read_rx_fifo(xspi, trans_cnt);
+
+		if (xspi->tx_bytes) {
+			cdns_spi_fill_tx_fifo(xspi, trans_cnt);
+		} else {
 			cdns_spi_write(xspi, CDNS_SPI_IDR,
 				       CDNS_SPI_IXR_DEFAULT);
 			spi_finalize_current_transfer(ctlr);
@@ -456,10 +444,10 @@ static int cdns_transfer_one(struct spi_controller *ctlr,
 	/* Set TX empty threshold to half of FIFO depth
 	 * only if TX bytes are more than half FIFO depth.
 	 */
-	if (xspi->tx_bytes > (xspi->tx_fifo_depth >> 1))
+	if (xspi->tx_bytes > xspi->tx_fifo_depth)
 		cdns_spi_write(xspi, CDNS_SPI_THLD, xspi->tx_fifo_depth >> 1);
 
-	cdns_spi_fill_tx_fifo(xspi);
+	cdns_spi_fill_tx_fifo(xspi, xspi->tx_fifo_depth);
 	spi_transfer_delay_exec(transfer);
 
 	cdns_spi_write(xspi, CDNS_SPI_IER, CDNS_SPI_IXR_DEFAULT);
-- 
2.30.2



Thread overview: 7+ messages
2023-05-09 16:41 Charles Keepax [this message]
2023-05-09 16:41 ` [PATCH 2/2] spi: spi-cadence: Only overlap FIFO transactions in slave mode Charles Keepax
2023-05-15 11:09 ` [PATCH 1/2] spi: spi-cadence: Avoid read of RX FIFO before it's ready Mark Brown
2023-05-15 12:04 ` Goud, Srinivas
2023-05-15 12:54   ` Charles Keepax
2023-05-17  5:24     ` Goud, Srinivas
2023-05-17 10:46       ` Charles Keepax
