From: Bart Van Assche <bvanassche@acm.org>
To: "Martin K . Petersen" <martin.petersen@oracle.com>
Cc: linux-scsi@vger.kernel.org, Bart Van Assche <bvanassche@acm.org>,
Manivannan Sadhasivam <mani@kernel.org>,
"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Subject: [PATCH v2 2/2] ufs: qcom: Reduce interrupt latency
Date: Thu, 2 Apr 2026 10:14:02 -0700
Message-ID: <20260402171404.3008494-3-bvanassche@acm.org>
In-Reply-To: <20260402171404.3008494-1-bvanassche@acm.org>
Defer completion processing to thread context on slower CPU cores to
prevent interrupt latency spikes. On the fastest CPU cores, keep
processing all completions in interrupt context.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
drivers/ufs/host/ufs-qcom.c | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index 5a58ffef3d27..4aabbd14631e 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -2370,6 +2370,16 @@ struct ufs_qcom_irq {
struct ufs_hba *hba;
};
+static irqreturn_t ufs_qcom_mcq_threaded_esi_handler(int irq, void *data)
+{
+ struct ufs_qcom_irq *qi = data;
+ struct ufs_hba *hba = qi->hba;
+
+ ufshcd_mcq_poll_cqe_lock(hba, &hba->uhq[qi->idx]);
+
+ return IRQ_HANDLED;
+}
+
static irqreturn_t ufs_qcom_mcq_esi_handler(int irq, void *data)
{
struct ufs_qcom_irq *qi = data;
@@ -2377,9 +2387,22 @@ static irqreturn_t ufs_qcom_mcq_esi_handler(int irq, void *data)
struct ufs_hw_queue *hwq = &hba->uhq[qi->idx];
ufshcd_mcq_write_cqis(hba, 0x1, qi->idx);
- ufshcd_mcq_poll_cqe_lock(hba, hwq);
- return IRQ_HANDLED;
+ if (arch_scale_cpu_capacity(raw_smp_processor_id()) ==
+ SCHED_CAPACITY_SCALE) {
+ ufshcd_mcq_poll_cqe_lock(hba, hwq);
+ return IRQ_HANDLED;
+ }
+
+ if (ufshcd_mcq_poll_n_cqe_lock(hba, hwq, 4) < 4)
+ return IRQ_HANDLED;
+
+ /*
+ * Defer further completion processing to thread context because
+ * processing a large number of completions in interrupt context on
+ * slower CPU cores can result in unacceptably high interrupt latencies.
+ */
+ return IRQ_WAKE_THREAD;
}
static int ufs_qcom_config_esi(struct ufs_hba *hba)
@@ -2415,8 +2438,10 @@ static int ufs_qcom_config_esi(struct ufs_hba *hba)
qi[idx].idx = idx;
qi[idx].hba = hba;
- ret = devm_request_irq(hba->dev, qi[idx].irq, ufs_qcom_mcq_esi_handler,
- IRQF_SHARED, "qcom-mcq-esi", qi + idx);
+ ret = devm_request_threaded_irq(hba->dev, qi[idx].irq,
+ ufs_qcom_mcq_esi_handler,
+ ufs_qcom_mcq_threaded_esi_handler,
+ IRQF_SHARED | IRQF_ONESHOT, "qcom-mcq-esi", qi + idx);
if (ret) {
dev_err(hba->dev, "%s: Failed to request IRQ for %d, err = %d\n",
__func__, qi[idx].irq, ret);
Thread overview: 3+ messages
2026-04-02 17:14 [PATCH v2 0/2] ufs-qcom: Reduce interrupt latency Bart Van Assche
2026-04-02 17:14 ` [PATCH v2 1/2] ufs: core: Introduce ufshcd_mcq_poll_n_cqe_lock() Bart Van Assche
2026-04-02 17:14 ` Bart Van Assche [this message]