From: Ulf Hansson <ulf.hansson@linaro.org>
To: linux-mmc@vger.kernel.org, Ulf Hansson <ulf.hansson@linaro.org>
Cc: Sascha Sommer <saschasommer@freenet.de>
Subject: [PATCH v2 13/19] mmc: sdricoh_cs: Throttle polling rate for commands
Date: Fri, 8 May 2020 11:52:18 +0200
Message-ID: <20200508095218.14177-1-ulf.hansson@linaro.org>
Rather than polling in a busy-loop, let's convert to using
read_poll_timeout() and insert a small delay between each polling
attempt. In particular, this avoids hogging the CPU.

Additionally, converting to read_poll_timeout() means switching from a
fixed number of polling attempts to a timeout in us. The previous
100000 attempts are translated into a total timeout of 1s, which seems
like a reasonable value to pick.
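
For clarity, here is a simplified sketch of roughly what the
read_poll_timeout() call below boils down to (the real macro lives in
<linux/iopoll.h> and has a few more details, such as an optional sleep
before the first read):

	ktime_t timeout = ktime_add_us(ktime_get(), SDRICOH_CMD_TIMEOUT_US);

	for (;;) {
		status = sdricoh_readl(host, R21C_STATUS);
		if (sdricoh_status_ok(host, status, STATUS_CMD_FINISHED))
			break;
		if (ktime_compare(ktime_get(), timeout) > 0) {
			/* do one final read after the timeout has expired */
			status = sdricoh_readl(host, R21C_STATUS);
			break;
		}
		/* sleep_us = 32: back off instead of busy-looping */
		usleep_range((32 >> 2) + 1, 32);
	}
	ret = sdricoh_status_ok(host, status, STATUS_CMD_FINISHED) ?
		0 : -ETIMEDOUT;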
Cc: Sascha Sommer <saschasommer@freenet.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
Changes in v2:
- Use read_poll_timeout() instead of readl_poll_timeout(), so as to
preserve the debug print in sdricoh_readl(). A rough comparison of the
two macros is sketched below.
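
For reference, a rough comparison of the two macros from
<linux/iopoll.h> (simplified; parameter names are approximate):

	/*
	 * readl_poll_timeout() hard-codes the accessor to readl(addr), so
	 * the debug print in sdricoh_readl() would be bypassed:
	 */
	readl_poll_timeout(addr, val, cond, delay_us, timeout_us);

	/*
	 * read_poll_timeout() takes an arbitrary accessor plus its
	 * arguments, so sdricoh_readl() stays in the polling path:
	 */
	read_poll_timeout(op, val, cond, sleep_us, timeout_us,
			  sleep_before_read, args...);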
---
drivers/mmc/host/sdricoh_cs.c | 33 ++++++++++++++++-----------------
1 file changed, 16 insertions(+), 17 deletions(-)
diff --git a/drivers/mmc/host/sdricoh_cs.c b/drivers/mmc/host/sdricoh_cs.c
index 8392158e2e9f..0594b5ffe151 100644
--- a/drivers/mmc/host/sdricoh_cs.c
+++ b/drivers/mmc/host/sdricoh_cs.c
@@ -59,7 +59,7 @@ static unsigned int switchlocked;
#define STATUS_BUSY 0x40000000
/* timeouts */
-#define CMD_TIMEOUT 100000
+#define SDRICOH_CMD_TIMEOUT_US 1000000
#define SDRICOH_DATA_TIMEOUT_US 1000000
/* list of supported pcmcia devices */
@@ -158,8 +158,7 @@ static int sdricoh_query_status(struct sdricoh_host *host, unsigned int wanted)
static int sdricoh_mmc_cmd(struct sdricoh_host *host, struct mmc_command *cmd)
{
unsigned int status;
- int result = 0;
- unsigned int loop = 0;
+ int ret;
unsigned char opcode = cmd->opcode;
/* reset status reg? */
@@ -175,24 +174,24 @@ static int sdricoh_mmc_cmd(struct sdricoh_host *host, struct mmc_command *cmd)
/* fill parameters */
sdricoh_writel(host, R204_CMD_ARG, cmd->arg);
sdricoh_writel(host, R200_CMD, (0x10000 << 8) | opcode);
+
/* wait for command completion */
- if (opcode) {
- for (loop = 0; loop < CMD_TIMEOUT; loop++) {
- status = sdricoh_readl(host, R21C_STATUS);
- sdricoh_writel(host, R2E4_STATUS_RESP, status);
- if (status & STATUS_CMD_FINISHED)
- break;
- }
- /* don't check for timeout in the loop it is not always
- reset correctly
- */
- if (loop == CMD_TIMEOUT || status & STATUS_CMD_TIMEOUT)
- result = -ETIMEDOUT;
+ if (!opcode)
+ return 0;
- }
+ ret = read_poll_timeout(sdricoh_readl, status,
+ sdricoh_status_ok(host, status, STATUS_CMD_FINISHED),
+ 32, SDRICOH_CMD_TIMEOUT_US, false,
+ host, R21C_STATUS);
- return result;
+ /*
+ * Don't check for timeout status in the loop, as it's not always reset
+ * correctly.
+ */
+ if (ret || status & STATUS_CMD_TIMEOUT)
+ return -ETIMEDOUT;
+ return 0;
}
static int sdricoh_reset(struct sdricoh_host *host)
--
2.20.1