From: Stephen Boyd <swboyd@chromium.org>
To: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: Hans de Goede <hdegoede@redhat.com>,
Mark Gross <markgross@kernel.org>,
linux-kernel@vger.kernel.org, patches@lists.linux.dev,
platform-driver-x86@vger.kernel.org,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@linux.intel.com>,
Prashant Malani <pmalani@chromium.org>
Subject: Re: [PATCH v2 1/3] platform/x86: intel_scu_ipc: Check status after timeout in busy_loop()
Date: Thu, 7 Sep 2023 13:11:17 -0700 [thread overview]
Message-ID: <CAE-0n51Ut296M2ZetuzXGpX32pS11bbWzfcbaFfqNxgSjzafJw@mail.gmail.com> (raw)
In-Reply-To: <20230907053513.GH1599918@black.fi.intel.com>
Quoting Mika Westerberg (2023-09-06 22:35:13)
> On Wed, Sep 06, 2023 at 11:09:41AM -0700, Stephen Boyd wrote:
> > It's possible for the polling loop in busy_loop() to get scheduled away
> > for a long time.
> >
> > status = ipc_read_status(scu); // status = IPC_STATUS_BUSY
> > <long time scheduled away>
> > if (!(status & IPC_STATUS_BUSY))
> >
> > If this happens, then the status bit could change while the task is
> > scheduled away and this function would never read the status again after
> > timing out. Instead, the function will return -ETIMEDOUT even though the
> > status bit may have been cleared while the task was scheduled away.
> > Bit polling code should always check the bit being polled one more time
> > after the timeout in case this happens.
> >
> > Fix this by reading the status once more after the while loop breaks.
> >
> > Cc: Prashant Malani <pmalani@chromium.org>
> > Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > Fixes: e7b7ab3847c9 ("platform/x86: intel_scu_ipc: Sleeping is fine when polling")
> > Signed-off-by: Stephen Boyd <swboyd@chromium.org>
> > ---
> >
> > This is sufficiently different, so I didn't add any tags from the
> > previous round.
> >
> > drivers/platform/x86/intel_scu_ipc.c | 11 +++++++----
> > 1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
> > index 6851d10d6582..b2a2de22b8ff 100644
> > --- a/drivers/platform/x86/intel_scu_ipc.c
> > +++ b/drivers/platform/x86/intel_scu_ipc.c
> > @@ -232,18 +232,21 @@ static inline u32 ipc_data_readl(struct intel_scu_ipc_dev *scu, u32 offset)
> > static inline int busy_loop(struct intel_scu_ipc_dev *scu)
> > {
> > unsigned long end = jiffies + IPC_TIMEOUT;
> > + u32 status;
> >
> > do {
> > - u32 status;
> > -
> > status = ipc_read_status(scu);
> > if (!(status & IPC_STATUS_BUSY))
> > - return (status & IPC_STATUS_ERR) ? -EIO : 0;
> > + goto not_busy;
> >
> > usleep_range(50, 100);
> > } while (time_before(jiffies, end));
> >
> > - return -ETIMEDOUT;
> > + status = ipc_read_status(scu);
>
> Does the issue happen again if we get scheduled away here for a long
> time? ;-)

Given the smiley I'll assume you're joking, but to clarify: the issue
can't happen again because we've already waited at least IPC_TIMEOUT
jiffies, and possibly quite a bit more, so being scheduled away again is
harmless. If the status is still busy at this point, it's a genuine
timeout.
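
Concretely, the post-loop check is along these lines (a sketch; the
rest of the hunk was trimmed in the quote above):

	/* One final read in case we slept past 'end' while still busy */
	status = ipc_read_status(scu);
	if (status & IPC_STATUS_BUSY)
		return -ETIMEDOUT;

not_busy:
	return (status & IPC_STATUS_ERR) ? -EIO : 0;
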
>
> Regardless, I'm fine with this as is but if you make any changes, I
> would prefer to see readl_busy_timeout() used here instead (as was in the
> previous version).

We can't use readl_busy_timeout() (you mean readl_poll_timeout(),
right?) because that implements the timeout with timekeeping, and we
don't know whether this is called from suspend paths after timekeeping
has been suspended or from early boot paths before timekeeping has
started.

We could use readl_poll_timeout_atomic(), but then the usleep_range()
would become a udelay(). I'm not sure it's acceptable to busy-wait for
50 microseconds at a time instead of intentionally scheduling away like
the usleep_range() call does.
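
For reference, here is an untested sketch of that alternative. It uses
the generic read_poll_timeout_atomic() from <linux/iopoll.h>, since
ipc_read_status() is a driver helper rather than a bare readl():

	static inline int busy_loop(struct intel_scu_ipc_dev *scu)
	{
		u32 status;
		int err;

		/* Polls ipc_read_status() every 50 us, busy-waiting with udelay() */
		err = read_poll_timeout_atomic(ipc_read_status, status,
					       !(status & IPC_STATUS_BUSY),
					       50, jiffies_to_usecs(IPC_TIMEOUT),
					       false, scu);
		if (err)
			return err;

		return (status & IPC_STATUS_ERR) ? -EIO : 0;
	}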