From: Stephen Boyd <swboyd@chromium.org>
To: Mika Westerberg <mika.westerberg@linux.intel.com>,
Hans de Goede <hdegoede@redhat.com>,
Mark Gross <markgross@kernel.org>
Cc: linux-kernel@vger.kernel.org, patches@lists.linux.dev,
platform-driver-x86@vger.kernel.org,
Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
Kuppuswamy Sathyanarayanan
<sathyanarayanan.kuppuswamy@linux.intel.com>,
Prashant Malani <pmalani@chromium.org>
Subject: [PATCH 3/3] platform/x86: intel_scu_ipc: Fail IPC send if still busy
Date: Wed, 30 Aug 2023 18:14:03 -0700
Message-ID: <20230831011405.3246849-4-swboyd@chromium.org>
In-Reply-To: <20230831011405.3246849-1-swboyd@chromium.org>

It's possible for interrupts to get significantly delayed, to the point
that callers of intel_scu_ipc_dev_command() and friends can call the
function once, hit a timeout, and call it again while the interrupt for
the first IPC still hasn't been processed. This driver gets seriously
confused if that interrupt is finally processed after the second IPC
has been sent with ipc_command(): it has no way to tell which of the
two IPCs has completed. That could be quite disastrous if calling code
assumes something has happened upon return from
intel_scu_ipc_dev_simple_command() when it actually hasn't.
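
To make the failure mode concrete, consider a caller that retries on
timeout. This is an illustrative sketch only; example_send() is a
made-up helper, not code from this driver:

  static int example_send(struct intel_scu_ipc_dev *scu, int cmd, int sub)
  {
          int ret;

          ret = intel_scu_ipc_dev_simple_command(scu, cmd, sub);
          if (ret == -ETIMEDOUT) {
                  /*
                   * Without this patch, the retry below can issue a
                   * second ipc_command() while the first IPC is still
                   * in flight, so the eventual completion can't be
                   * matched to either IPC.
                   */
                  ret = intel_scu_ipc_dev_simple_command(scu, cmd, sub);
          }

          return ret;
  }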

Let's avoid this scenario by simply returning -EBUSY when the previous
IPC is still in flight. Hopefully higher layers will know to back off
or fail gracefully when this happens. It's all highly unlikely anyway,
but it's better to be correct here, since we have no way to know which
IPC the status register is telling us about if we send a second IPC
while the previous one is still processing.
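
A higher layer could, for example, treat -EBUSY as a hint to back off
and retry. A rough sketch, with a made-up helper name and an arbitrary
retry policy (again, not part of this patch):

  static int example_send_backoff(struct intel_scu_ipc_dev *scu,
                                  int cmd, int sub)
  {
          int retries = 3;
          int ret;

          do {
                  ret = intel_scu_ipc_dev_simple_command(scu, cmd, sub);
                  if (ret != -EBUSY)
                          break;
                  /* Give the in-flight IPC a chance to complete. */
                  usleep_range(1000, 2000);
          } while (--retries);

          return ret;
  }
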
Cc: Prashant Malani <pmalani@chromium.org>
Cc: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Fixes: ed12f295bfd5 ("ipc: Added support for IPC interrupt mode")
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
---
 drivers/platform/x86/intel_scu_ipc.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c
index 2a21153e3bf3..5a56be319f0e 100644
--- a/drivers/platform/x86/intel_scu_ipc.c
+++ b/drivers/platform/x86/intel_scu_ipc.c
@@ -266,6 +266,19 @@ static int intel_scu_ipc_check_status(struct intel_scu_ipc_dev *scu)
 	return scu->irq > 0 ? ipc_wait_for_interrupt(scu) : busy_loop(scu);
 }
 
+static bool intel_scu_ipc_busy(struct intel_scu_ipc_dev *scu)
+{
+	u8 status;
+
+	status = ipc_read_status(scu);
+	if (status & IPC_STATUS_BUSY) {
+		dev_err(&scu->dev, "device is busy\n");
+		return true;
+	}
+
+	return false;
+}
+
 /* Read/Write power control(PMIC in Langwell, MSIC in PenWell) registers */
 static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data,
 			u32 count, u32 op, u32 id)
@@ -285,6 +298,10 @@ static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data,
 		mutex_unlock(&ipclock);
 		return -ENODEV;
 	}
+	if (intel_scu_ipc_busy(scu)) {
+		mutex_unlock(&ipclock);
+		return -EBUSY;
+	}
 
 	for (nc = 0; nc < count; nc++, offset += 2) {
 		cbuf[offset] = addr[nc];
@@ -445,6 +462,10 @@ int intel_scu_ipc_dev_simple_command(struct intel_scu_ipc_dev *scu, int cmd,
 		return -ENODEV;
 	}
 	scu = ipcdev;
+	if (intel_scu_ipc_busy(scu)) {
+		mutex_unlock(&ipclock);
+		return -EBUSY;
+	}
 	cmdval = sub << 12 | cmd;
 	ipc_command(scu, cmdval);
 	err = intel_scu_ipc_check_status(scu);
@@ -490,6 +511,10 @@ int intel_scu_ipc_dev_command_with_size(struct intel_scu_ipc_dev *scu, int cmd,
 		mutex_unlock(&ipclock);
 		return -ENODEV;
 	}
+	if (intel_scu_ipc_busy(scu)) {
+		mutex_unlock(&ipclock);
+		return -EBUSY;
+	}
 
 	memcpy(inbuf, in, inlen);
 	for (i = 0; i < inbuflen; i++)
--
https://chromeos.dev