From: Adrian Hunter <adrian.hunter@intel.com>
To: "Kornel Dulęba" <korneld@chromium.org>
Cc: linux-mmc@vger.kernel.org, linux-kernel@vger.kernel.org,
Ulf Hansson <ulf.hansson@linaro.org>,
Radoslaw Biernacki <biernacki@google.com>,
Gwendal Grignou <gwendal@chromium.org>
Subject: Re: [PATCH 1/2] mmc: cqhci: Add a quirk to clear stale TC
Date: Thu, 2 Nov 2023 13:01:22 +0200
Message-ID: <63e54bfd-9bb3-423b-a965-e0a9b399671c@intel.com>
In-Reply-To: <CAD=Nsqzg2zRNFx7E800vnpNH6bZeHv9zdAnZksJjiRcXOQ4t=A@mail.gmail.com>
On 2/11/23 11:21, Kornel Dulęba wrote:
> On Mon, Oct 30, 2023 at 8:31 PM Adrian Hunter <adrian.hunter@intel.com> wrote:
>>
>> On 27/10/23 17:56, Kornel Dulęba wrote:
>>> This fix addresses a stale task completion event issued right after
>>> CQE recovery. As it is a hardware issue, the fix is done in the form
>>> of a quirk.
>>>
>>> When an error interrupt is received, the driver runs its recovery
>>> logic. It halts the controller, clears all pending tasks, and then
>>> re-enables it. On some platforms a stale task completion event is
>>> observed even though the CQHCI_CLEAR_ALL_TASKS bit has been set.
>>>
>>> This results in either:
>>> a) Spurious TC completion event for an empty slot.
>>> b) Corrupted data being passed up the stack, as a result of premature
>>> completion for a newly added task.
>>>
>>> To fix that, re-enable the controller, clear the task completion
>>> bits and the interrupt status register, and then halt the controller
>>> again. This is done at the end of the recovery process, right before
>>> interrupts are re-enabled.
>>>
>>> Signed-off-by: Kornel Dulęba <korneld@chromium.org>
>>> ---
>>> drivers/mmc/host/cqhci-core.c | 42 +++++++++++++++++++++++++++++++++++
>>> drivers/mmc/host/cqhci.h | 1 +
>>> 2 files changed, 43 insertions(+)
>>>
>>> diff --git a/drivers/mmc/host/cqhci-core.c b/drivers/mmc/host/cqhci-core.c
>>> index b3d7d6d8d654..e534222df90c 100644
>>> --- a/drivers/mmc/host/cqhci-core.c
>>> +++ b/drivers/mmc/host/cqhci-core.c
>>> @@ -1062,6 +1062,45 @@ static void cqhci_recover_mrqs(struct cqhci_host *cq_host)
>>> /* CQHCI could be expected to clear it's internal state pretty quickly */
>>> #define CQHCI_CLEAR_TIMEOUT 20
>>>
>>> +/*
>>> + * During CQE recovery all pending tasks are cleared from the
>>> + * controller and its state is reset.
>>> + * On some platforms the controller sets a task completion bit for
>>> + * a stale (previously cleared) task right after being re-enabled.
>>> + * This results in a spurious interrupt at best and corrupted data
>>> + * being passed up the stack at worst. The latter happens when
>>> + * the driver enqueues a new request on the problematic task slot
>>> + * before the "spurious" task completion interrupt is handled.
>>> + * To fix it:
>>> + * 1. Re-enable controller by clearing the halt flag.
>>> + * 2. Clear interrupt status and the task completion register.
>>> + * 3. Halt the controller again to be consistent with quirkless logic.
>>> + *
>>> + * This assumes that there are no pending requests on the queue.
>>> + */
>>> +static void cqhci_quirk_clear_stale_tc(struct cqhci_host *cq_host)
>>> +{
>>> + u32 reg;
>>> +
>>> + WARN_ON(cq_host->qcnt);
>>> + cqhci_writel(cq_host, 0, CQHCI_CTL);
>>> + if ((cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT)) {
>>> + pr_err("%s: cqhci: CQE failed to exit halt state\n",
>>> + mmc_hostname(cq_host->mmc));
>>> + }
>>> + reg = cqhci_readl(cq_host, CQHCI_TCN);
>>> + cqhci_writel(cq_host, reg, CQHCI_TCN);
>>> + reg = cqhci_readl(cq_host, CQHCI_IS);
>>> + cqhci_writel(cq_host, reg, CQHCI_IS);
>>> +
>>> + /*
>>> + * Halt the controller again.
>>> + * This is only needed so that we're consistent across quirk
>>> + * and quirkless logic.
>>> + */
>>> + cqhci_halt(cq_host->mmc, CQHCI_FINISH_HALT_TIMEOUT);
>>> +}
>>
>> Thanks a lot for tracking this down!
>>
>> It could be that the "un-halt" starts a task, so it would be
>> better to force the "clear" to work if possible, which
>> should be the case if CQE is disabled.
>>
>> Would you mind trying the code below? Note the increased
>> CQHCI_START_HALT_TIMEOUT helps avoid trying to clear tasks
>> when CQE has not halted.
>
> I've run a quick test and it works just fine.
Thank you!
> Your approach looks better than what I proposed, since, as you
> mentioned, doing it like this avoids some weird side effects, e.g. DMA
> to freed memory.
> Do you plan to include it in the other series that you posted yesterday?
Yes, I will do that.
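Roughly, the idea is along these lines (just a sketch to show the shape
of it, not the exact code from the other series - the helper name here is
made up and the register handling details may differ):

static void cqhci_recovery_clear_tasks(struct mmc_host *mmc)
{
	struct cqhci_host *cq_host = mmc->cqe_private;
	bool ok;

	/*
	 * Give CQE more time to halt before attempting to clear tasks
	 * (assumes CQHCI_START_HALT_TIMEOUT has been increased).
	 */
	ok = cqhci_halt(mmc, CQHCI_START_HALT_TIMEOUT);

	/* Attempt the clear even if the halt timed out */
	if (!cqhci_clear_all_tasks(mmc, CQHCI_CLEAR_TIMEOUT))
		ok = false;

	/*
	 * Disable CQE so that stale task state, including pending task
	 * completion bits, cannot survive into the next request.
	 * Re-enabling is left to the normal activation path.
	 */
	cqhci_writel(cq_host, 0, CQHCI_CFG);

	if (!ok)
		pr_debug("%s: cqhci: recovery task clear incomplete\n",
			 mmc_hostname(mmc));
}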