From: Bhaumik Bhatt <bbhatt@codeaurora.org>
To: manivannan.sadhasivam@linaro.org
Cc: linux-arm-msm@vger.kernel.org, hemantk@codeaurora.org,
carl.yin@quectel.com, naveen.kumar@quectel.com,
jhugo@codeaurora.org, linux-kernel@vger.kernel.org,
loic.poulain@linaro.org, Bhaumik Bhatt <bbhatt@codeaurora.org>
Subject: [PATCH v2 3/3] bus: mhi: core: Process execution environment changes serially
Date: Thu, 14 Jan 2021 11:16:35 -0800
Message-ID: <1610651795-31287-4-git-send-email-bbhatt@codeaurora.org>
In-Reply-To: <1610651795-31287-1-git-send-email-bbhatt@codeaurora.org>
In the current design, the execution environment is updated whenever the
BHI interrupt fires. This can cause race conditions and impede any
ongoing power up/down processing. For example, if a power down is in
progress and the host has updated the execution environment to a
local "disabled" state, a BHI interrupt firing later could overwrite
it with the value read from the BHI EE register. As another example,
the device can enter mission mode while device creation for SBL is
still in progress, leading to multiple attempts at opening the same
channel.
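
A simplified sketch of the race-prone pre-patch pattern, for
illustration only (it reuses names from the existing driver code but is
not the actual handler, and omits the surrounding state handling):

/*
 * Illustrative sketch of the pre-patch behavior: the threaded BHI
 * interrupt handler overwrites the cached execution environment
 * unconditionally, so it can clobber a local "disabled" EE that a
 * concurrent power down has just written.
 */
static irqreturn_t sketch_intvec_threaded_handler(int irq, void *priv)
{
	struct mhi_controller *mhi_cntrl = priv;

	write_lock_irq(&mhi_cntrl->pm_lock);

	/* Racy: replaces whatever EE the power up/down path just set */
	mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);

	write_unlock_irq(&mhi_cntrl->pm_lock);
	wake_up_all(&mhi_cntrl->state_event);

	return IRQ_HANDLED;
}
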
Ensure that EE changes are handled only from appropriate places and
occur one after another, and handle only PBL or RDDM EE changes as
critical events directly from the interrupt handler. This also ensures
that the correct execution environment is used to notify the controller
driver when the device resets to one of the PBL execution environments.
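
For reference, a minimal, hypothetical controller driver callback (not
part of this patch) showing how the two critical notifications raised
from the interrupt handler might be consumed:

/*
 * Hypothetical status_cb implementation in a controller driver. With
 * EE changes serialized, MHI_CB_FATAL_ERROR is only delivered when the
 * device resets into a PBL EE, and MHI_CB_EE_RDDM only on a fresh
 * transition into RDDM.
 */
static void example_mhi_status_cb(struct mhi_controller *mhi_cntrl,
				  enum mhi_callback cb)
{
	switch (cb) {
	case MHI_CB_EE_RDDM:
		/* Device entered RDDM: collect the RAM dump */
		break;
	case MHI_CB_FATAL_ERROR:
		/* Device reset to PBL: controller decides the next step */
		break;
	default:
		break;
	}
}
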
Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
---
drivers/bus/mhi/core/main.c | 14 ++++++++------
drivers/bus/mhi/core/pm.c | 5 +++--
2 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 1a7192e..2929e9f 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -411,7 +411,7 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state state = MHI_STATE_MAX;
enum mhi_pm_state pm_state = 0;
- enum mhi_ee_type ee = 0;
+ enum mhi_ee_type ee = MHI_EE_MAX;
write_lock_irq(&mhi_cntrl->pm_lock);
if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
@@ -420,8 +420,7 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
}
state = mhi_get_mhi_state(mhi_cntrl);
- ee = mhi_cntrl->ee;
- mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+ ee = mhi_get_exec_env(mhi_cntrl);
dev_dbg(dev, "local ee:%s device ee:%s dev_state:%s\n",
TO_MHI_EXEC_STR(mhi_cntrl->ee), TO_MHI_EXEC_STR(ee),
TO_MHI_STATE_STR(state));
@@ -439,8 +438,9 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
if (!mhi_is_active(mhi_cntrl))
goto exit_intvec;
- if (mhi_cntrl->ee == MHI_EE_RDDM && mhi_cntrl->ee != ee) {
+ if (ee == MHI_EE_RDDM && mhi_cntrl->ee != MHI_EE_RDDM) {
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
+ mhi_cntrl->ee = ee;
wake_up_all(&mhi_cntrl->state_event);
}
goto exit_intvec;
@@ -450,10 +450,12 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
wake_up_all(&mhi_cntrl->state_event);
/* For fatal errors, we let controller decide next step */
- if (MHI_IN_PBL(ee))
+ if (MHI_IN_PBL(ee)) {
mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_FATAL_ERROR);
- else
+ mhi_cntrl->ee = ee;
+ } else {
mhi_pm_sys_err_handler(mhi_cntrl);
+ }
}
exit_intvec:
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index 44aa7eb..c870fa8 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -384,14 +384,15 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
write_lock_irq(&mhi_cntrl->pm_lock);
if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
- mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+ ee = mhi_get_exec_env(mhi_cntrl);
- if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee)) {
+ if (!MHI_IN_MISSION_MODE(ee)) {
mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
write_unlock_irq(&mhi_cntrl->pm_lock);
wake_up_all(&mhi_cntrl->state_event);
return -EIO;
}
+ mhi_cntrl->ee = ee;
write_unlock_irq(&mhi_cntrl->pm_lock);
wake_up_all(&mhi_cntrl->state_event);
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project