From: Bart Van Assche <bvanassche@acm.org>
To: "Peter Wang (王信友)" <peter.wang@mediatek.com>,
"martin.petersen@oracle.com" <martin.petersen@oracle.com>
Cc: "linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
"beanhuo@micron.com" <beanhuo@micron.com>,
"vamshigajjela@google.com" <vamshigajjela@google.com>,
"James.Bottomley@HansenPartnership.com"
<James.Bottomley@HansenPartnership.com>,
"chenyuan0y@gmail.com" <chenyuan0y@gmail.com>,
"ping.gao@samsung.com" <ping.gao@samsung.com>,
"alok.a.tiwari@oracle.com" <alok.a.tiwari@oracle.com>
Subject: Re: [PATCH 2/3] ufs: core: Introduce ufshcd_mcq_poll_cqe_lock_n()
Date: Tue, 31 Mar 2026 12:31:36 -0700
Message-ID: <a043946b-14e7-43da-8b68-dc47a08c12ff@acm.org>
In-Reply-To: <b76180520d10f2811c072f0944a0864e779e3868.camel@mediatek.com>

On 3/31/26 2:43 AM, Peter Wang (王信友) wrote:
> On Mon, 2026-03-30 at 11:33 -0700, Bart Van Assche wrote:
>> Introduce a new function for processing completions that accepts an
>> upper limit for the number of completions to poll. Tell
>> ufshcd_mcq_poll_cqe_lock() to poll at most hwq->max_entries. This is
>> sufficient to poll all pending completions since there are never more
>> than hwq->max_entries - 1 completions on a completion queue. This
>> patch prepares for reducing the interrupt latency.
>
> Could it be more than (hwq->max_entries - 1) if the host keeps
> sending requests to the same hardware queue from different CPU cores?

No, because the completion queue head is read before processing
completion queue entries starts. No matter how many completions are
pushed onto the completion queue while ufshcd_mcq_poll_cqe_lock() is
in progress, it won't process more than (hwq->max_entries - 1) because
that's the maximum difference between the tail and the head pointers.
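
To make that concrete, here is a minimal sketch of the bounded-poll
idea. This is illustrative only, not the actual ufshcd code: the
struct, field and function names below are made up for the example,
and the hardware-owned produce index is simply sampled once up front.

struct example_hwq {
	u32 max_entries;	/* number of slots in the ring */
	u32 cq_head;		/* driver-owned consume index */
	u32 cq_tail;		/* hardware-owned produce index */
};

/* Poll at most max_cqes completions; returns how many were processed. */
static u32 example_poll_cqes(struct example_hwq *hwq, u32 max_cqes)
{
	/* Sample the produce index once, before processing starts. */
	u32 tail = READ_ONCE(hwq->cq_tail);
	u32 completed = 0;

	/*
	 * head and tail can differ by at most max_entries - 1 because
	 * a ring is full with one slot left unused. Hence this loop
	 * runs at most min(max_cqes, max_entries - 1) times, no matter
	 * how many completions the controller posts meanwhile.
	 */
	while (hwq->cq_head != tail && completed < max_cqes) {
		/* example_process_cqe(hwq, hwq->cq_head); */
		hwq->cq_head = (hwq->cq_head + 1) % hwq->max_entries;
		completed++;
	}
	return completed;
}

Passing max_cqes = hwq->max_entries therefore never truncates
anything: the loop always exits on head == tail first.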

Thanks,

Bart.
Thread overview: 21+ messages
2026-03-30 18:33 [PATCH 0/3] Reduce interrupt latency Bart Van Assche
2026-03-30 18:33 ` [PATCH 1/3] ufs: core: Fix ufshcd_mcq_force_compl_one() Bart Van Assche
2026-03-31 9:41 ` Peter Wang (王信友)
2026-03-31 19:29 ` Bart Van Assche
2026-04-01 13:10 ` Peter Wang (王信友)
2026-04-01 15:51 ` Bart Van Assche
2026-04-02 7:19 ` Peter Wang (王信友)
2026-03-30 18:33 ` [PATCH 2/3] ufs: core: Introduce ufshcd_mcq_poll_cqe_lock_n() Bart Van Assche
2026-03-31 9:43 ` Peter Wang (王信友)
2026-03-31 19:31 ` Bart Van Assche [this message]
2026-04-01 13:11 ` Peter Wang (王信友)
2026-04-01 15:51 ` Bart Van Assche
2026-04-02 7:20 ` Peter Wang (王信友)
2026-03-30 18:33 ` [PATCH 3/3] ufs: qcom: Reduce interrupt latency Bart Van Assche
2026-03-31 7:09 ` Manivannan Sadhasivam
2026-03-31 19:44 ` Bart Van Assche
2026-04-04 3:11 ` Manivannan Sadhasivam
2026-04-06 16:02 ` Bart Van Assche
2026-03-30 18:36 ` [PATCH 0/3] " Bart Van Assche
2026-03-31 2:01 ` Can Guo
2026-04-07 8:20 ` Xiaosen