From: Tudor Ambarus <tudor.ambarus@linaro.org>
To: Jassi Brar <jassisinghbrar@gmail.com>
Cc: peter.griffin@linaro.org, andre.draszik@linaro.org,
willmcvicker@google.com, cristian.marussi@arm.com,
sudeep.holla@arm.com, kernel-team@android.com,
arm-scmi@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mailbox: stop the release and reacquire of the chan lock
Date: Fri, 4 Jul 2025 13:30:50 +0100
Message-ID: <fac9a5fb-7a39-4c12-9dca-d2338b6dad8c@linaro.org>
In-Reply-To: <CABb+yY2HYgS25xouVJpq+Aia1M=b1_ocbHiyrnVqZcf0c0xcGg@mail.gmail.com>
Hi, Jassi,
Sorry for the delay, I was out for a while.
On 6/23/25 12:41 AM, Jassi Brar wrote:
> On Fri, Jun 6, 2025 at 8:41 AM Tudor Ambarus <tudor.ambarus@linaro.org> wrote:
>>
>> There are two cases where the chan lock is released and reacquired
>> when it really shouldn't be:
>>
>> 1/ released at the end of add_to_rbuf() and reacquired at the beginning
>> of msg_submit(). Once add_to_rbuf() drops the lock, if the mailbox core
>> is under heavy load, the software queue may fill up before any of the
>> threads gets the chance to drain it.
>> T#0 acquires chan lock, fills rbuf, releases the lock, then
>> T#1 acquires chan lock, fills rbuf, releases the lock, then
>> ...
>> T#MBOX_TX_QUEUE_LEN returns -ENOBUFS.
>> We shall drain the software queue as fast as we can, while still holding
>> the channel lock.
>>
> I don't see any issue to fix to begin with.
> T#0 does drain the queue by moving on to submit the message after
> adding it to the rbuf.
The problem is that the code releases chan->lock after adding the
message to the rbuf and then reacquires it on submit. A thread can be
preempted right after add_to_rbuf(), before it ever reaches
msg_submit().
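
For context, this is roughly the flow in drivers/mailbox/mailbox.c (a
trimmed sketch, with elided parts marked by ...):

static int add_to_rbuf(struct mbox_chan *chan, void *mssg)
{
        unsigned long flags;
        int idx;

        spin_lock_irqsave(&chan->lock, flags);

        /* See if there is any space left */
        if (chan->msg_count == MBOX_TX_QUEUE_LEN) {
                spin_unlock_irqrestore(&chan->lock, flags);
                return -ENOBUFS;
        }

        idx = chan->msg_free;
        chan->msg_data[idx] = mssg;
        chan->msg_count++;
        ...
        spin_unlock_irqrestore(&chan->lock, flags); /* lock dropped here */

        return idx;
}

int mbox_send_message(struct mbox_chan *chan, void *mssg)
{
        int t;
        ...
        t = add_to_rbuf(chan, mssg);
        if (t < 0)
                return t;

        /* window: other producers can run add_to_rbuf() here */
        msg_submit(chan); /* reacquires chan->lock internally */
        ...
}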
Let's assume that
T#0 adds to rbuf and gets preempted by T#1
T#1 adds to rbuf and gets preempted by T#2
...
T#n-1 adds to rbuf and gets preempted by T#n
The mailbox software queue fills up without any thread ever reaching
msg_submit().
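
One way to close the window (a sketch only, not necessarily the exact
patch; __add_to_rbuf() and __msg_submit() are hypothetical lockless
variants that assume chan->lock is already held) would be to take the
lock once around both steps:

int mbox_send_message(struct mbox_chan *chan, void *mssg)
{
        unsigned long flags;
        int t;
        ...
        spin_lock_irqsave(&chan->lock, flags);
        t = __add_to_rbuf(chan, mssg);  /* enqueue, lock held */
        if (t >= 0)
                __msg_submit(chan);     /* drain, same critical section */
        spin_unlock_irqrestore(&chan->lock, flags);

        if (t < 0)
                return t;
        ...
}

That way no producer can slip in between the enqueue and the submit.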
Thanks,
ta
> And until the tx is done, T#1 would still be only adding to the rbuf
> because of chan->active_req.
>
>> 2/ tx_tick() releases the lock after setting chan->active_req = NULL.
>> This again opens a window for the software queue to fill up, as
>> described in case 1/.
>>
> This again is not an issue. The user(s) should account for the fact
> that the message bus may be busy and that the queue has only a limited
> number of buffers.
>
> Thanks
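
For reference, the tx_tick() window described in 2/ looks roughly like
this (again a trimmed sketch of drivers/mailbox/mailbox.c):

static void tx_tick(struct mbox_chan *chan, int r)
{
        unsigned long flags;
        void *mssg;

        spin_lock_irqsave(&chan->lock, flags);
        mssg = chan->active_req;
        chan->active_req = NULL;
        spin_unlock_irqrestore(&chan->lock, flags);

        /* window: producers can fill the rbuf before the next submit */
        msg_submit(chan); /* reacquires chan->lock */
        ...
}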