From: Cristian Marussi <cristian.marussi@arm.com>
To: Justin Chen <justin.chen@broadcom.com>
Cc: Cristian Marussi <cristian.marussi@arm.com>,
	Sudeep Holla <sudeep.holla@arm.com>,
	arm-scmi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	peng.fan@nxp.com, bcm-kernel-feedback-list@broadcom.com,
	florian.fainelli@broadcom.com
Subject: Re: [PATCH] firmware: arm_scmi: Queue in scmi layer for mailbox implementation
Date: Wed, 9 Oct 2024 09:32:42 +0100	[thread overview]
Message-ID: <ZwY_nrvxCEkDgN8u@pluto> (raw)
In-Reply-To: <ed8b9825-c655-4712-806e-fee22f30c113@broadcom.com>

On Tue, Oct 08, 2024 at 12:23:28PM -0700, Justin Chen wrote:
> 
> 
> On 10/8/24 6:37 AM, Cristian Marussi wrote:
> > On Tue, Oct 08, 2024 at 02:34:59PM +0100, Sudeep Holla wrote:
> > > On Tue, Oct 08, 2024 at 02:23:00PM +0100, Sudeep Holla wrote:
> > > > On Tue, Oct 08, 2024 at 01:10:39PM +0100, Cristian Marussi wrote:
> > > > > On Mon, Oct 07, 2024 at 10:58:47AM -0700, Justin Chen wrote:
> > > > > > Thanks for the response. I'll try to elaborate.
> > > > > > 
> > > > > > When comparing the SMC and mailbox transports, we noticed the mailbox
> > > > > > transport times out much more quickly under load. Originally we
> > > > > > thought this was latency in the mailbox implementation, but after
> > > > > > debugging we noticed some weird behavior: we saw SCMI transactions
> > > > > > timing out before the mailbox had even transmitted the message.
> > > > > > 
> > > > > > The issue lies in the SCMI layer, in the do_xfer() function in
> > > > > > drivers/firmware/arm_scmi/driver.c.
> > > > > > 
> > > > > > The fundamental issue is that send_message() blocks for the SMC
> > > > > > transport but not for the mailbox transport. Since send_message()
> > > > > > doesn't block, we can have multiple messages waiting at
> > > > > > scmi_wait_for_message_response().
> > > > > > 
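For reference, the sequence in do_xfer() looks roughly like this (heavily
simplified; error paths, polling mode and tracing omitted):

    /* drivers/firmware/arm_scmi/driver.c: do_xfer(), simplified sketch */
    ret = info->desc->ops->send_message(cinfo, xfer);
    if (ret < 0)
            return ret;

    /*
     * The per-message timeout clock starts ticking here, even if the
     * mailbox layer has only queued the message and not yet sent it.
     */
    ret = scmi_wait_for_message_response(cinfo, xfer);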
> > > > > 
> > > > > oh...yes...now I can see it...tx_prepare is really never called, given
> > > > > how the mailbox subsystem de-queues messages one at a time...so we end
> > > > > up waiting for a reply to a message that has yet to be sent...the
> > > > > message in flight is really NOT corrupted, because the next one remains
> > > > > pending until the reply in the shmem is read back, BUT the timeouts
> > > > > will drift if multiple in-flight messages are pending to be sent...
> > > > > 
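For context, this one-at-a-time behaviour comes from the mailbox core:
msg_submit() in drivers/mailbox/mailbox.c hands the next queued message to
the controller only when no request is active, and the client's tx_prepare
callback runs only at that point. An abridged gist (not verbatim):

    /* drivers/mailbox/mailbox.c: msg_submit(), abridged */
    if (!chan->msg_count || chan->active_req)
            goto exit;      /* a message is in flight: keep the rest queued */

    data = chan->msg_data[idx];
    if (chan->cl->tx_prepare)
            chan->cl->tx_prepare(chan->cl, data);   /* SCMI fills the shmem here */
    err = chan->mbox->ops->send_data(chan, data);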
> > > > 
> > > > Indeed.
> > > > 
> > > > > > SMC looks like this:
> > > > > > CPU #0, SCMI message 0 -> calls send_message(), then calls
> > > > > > scmi_wait_for_message_response(); times out after 30ms.
> > > > > > CPU #1, SCMI message 1 -> blocks at send_message(), waiting for SCMI
> > > > > > message 0 to complete.
> > > > > > 
> > > > > > Mailbox looks like this:
> > > > > > CPU #0, SCMI message 0 -> calls send_message(); the mailbox layer
> > > > > > queues the message, sees nothing outgoing, and sends it. The CPU
> > > > > > waits at scmi_wait_for_message_response(); times out after 30ms.
> > > > > > CPU #1, SCMI message 1 -> calls send_message(); the mailbox layer
> > > > > > queues the message, sees a message pending, and holds the new one in
> > > > > > its queue. The CPU waits at scmi_wait_for_message_response(); times
> > > > > > out after 30ms.
> > > > > > 
> > > > > > Let's say the transport takes 25ms. The first message would succeed;
> > > > > > the second would time out only 5ms after it is actually transmitted.
> > > > > > 
> > > > > > Hopefully this makes sense.
> > > > > 
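Spelling that timeline out (using the 30ms SCMI timeout and the hypothetical
25ms platform turnaround from the example above):

    t=0ms    msg0: send_message() -> transmitted; its 30ms timer starts
             msg1: send_message() -> only queued; its 30ms timer starts too
    t=25ms   msg0 reply arrives; the mailbox core now transmits msg1
    t=30ms   msg1 times out, having been on the wire for only 5ms
    t=50ms   msg1 reply would arrive, but it is already marked as timed out
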
> > > > > Yes, of course...thanks for reporting this and taking the time to
> > > > > explain...
> > > > > 
> > > > > ...in general the patch LGTM...I also think your patch is good because
> > > > > it can easily be backported as a fix...can you add a Fixes tag in your
> > > > > next version?
> > > > > 
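For readers following along, the shape of the fix is to serialize senders on
a per-channel lock held from send_message() until the transmission is marked
done, so the SCMI timeout clock only ever runs for the one message actually
handed to the mailbox. A minimal sketch (the chan_lock naming is
illustrative, not necessarily the literal patch):

    /* drivers/firmware/arm_scmi/mailbox.c: sketch of the queuing fix */
    static int mailbox_send_message(struct scmi_chan_info *cinfo,
                                    struct scmi_xfer *xfer)
    {
            struct scmi_mailbox *smbox = cinfo->transport_info;
            int ret;

            /* Block here until the previous message is fully done */
            mutex_lock(&smbox->chan_lock);

            ret = mbox_send_message(smbox->chan, xfer);
            /* mbox_send_message() returns a non-negative token on success */
            if (ret < 0) {
                    mutex_unlock(&smbox->chan_lock);
                    return ret;
            }

            return 0;
    }

    static void mailbox_mark_txdone(struct scmi_chan_info *cinfo, int ret,
                                    struct scmi_xfer *__unused)
    {
            struct scmi_mailbox *smbox = cinfo->transport_info;

            mbox_client_txdone(smbox->chan, ret);

            /* Release the channel to the next waiting sender */
            mutex_unlock(&smbox->chan_lock);
    }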
> > > > 
> > > > Are you seeing this issue a lot? IOW, do we need this to be backported?
> > > > 
> 
> I wouldn't say a lot, but we are seeing it in standard use of our devices
> running over an extended period of time. Yes, we would like this backported.
> 
> > > > > Also, can you explain the issue and the solution in more detail in the
> > > > > commit message...that will help get it merged as a fix into the stable
> > > > > trees...
> > > > > 
> > > > > ...for the future (definitely NOT in this series) we could probably
> > > > > think about getting rid of the sleeping mutex in favour of some other,
> > > > > non-sleeping form of mutual exclusion around the channel (as in the SMC
> > > > > transport) and optionally enabling atomic transmission support, AND
> > > > > also review whether the shmem-layer busy-waiting in tx_prepare is
> > > > > still needed at all...
> > > > > 
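For reference, the non-sleeping scheme in the SMC transport claims the
channel with an atomic cmpxchg on an "inflight" token and spins until it
wins, instead of sleeping on a mutex. A gist, with details elided:

    /* drivers/firmware/arm_scmi/smc.c: gist of the atomic channel claim */
    static bool smc_xfer_inflight(struct scmi_xfer *xfer, atomic_t *inflight)
    {
            /* Claim the channel only if no transfer is currently in flight */
            return atomic_cmpxchg(inflight, INFLIGHT_NONE,
                                  xfer->hdr.seq) == INFLIGHT_NONE;
    }

    /* in the acquire path, when atomic mode is enabled: */
    spin_until_cond(smc_xfer_inflight(xfer, &scmi_info->inflight));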
> > > > 
> > > > Agreed. If we are locking the channel in SCMI, we can drop the
> > > > busy-waiting in tx_prepare, and the associated details in the comment,
> > > > as this locking makes them redundant. It is better to have both changes
> > > > in the same patch to indicate the relation between them.
> > > 
> > > Actually, scratch that last point. The waiting in tx_prepare until the
> > > platform marks the shmem free for the agent to use is still needed. One
> > > use case is when the agent/OS times out but the platform continues
> > > processing and eventually releases the shmem. Sorry, I completely forgot
> > > about that.
> > > 
> > 
> > Yes, indeed, it is the mechanism by which we avoid forcibly reclaiming the
> > shmem when a transmission times out...and we should keep it, to avoid
> > corruption of newer messages by late replies to earlier ones that have
> > timed out.
> > 
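For context, the wait being discussed is the poll at the top of the shmem
tx_prepare path, which refuses to touch the shared memory until the platform
has set the channel-free bit again. In gist (recent kernels also bound this
wait with a timeout):

    /* drivers/firmware/arm_scmi/shmem.c: gist of the tx_prepare wait */
    spin_until_cond(ioread32(&shmem->channel_status) &
                    SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE);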
> 
> Yup. I saw an interesting interaction arising from this. Since modifying the
> shmem and ringing the doorbell are often two separate operations, the
> modification of the shmem can race with the platform's processing of
> timed-out messages. This usually leads to an early ACK and a spurious
> interrupt, which is mostly harmless. We did see lockups when multiple
> timeouts occurred, but it was unclear whether this was an issue in the SCMI
> transport layer or in our driver/platform.
> 

Are you talking about something similar to this:

https://lore.kernel.org/all/20231220172112.763539-1-cristian.marussi@arm.com/

... reported as a side effect of a spurious IRQ on late timed-out replies?
It should have been fixed by the above commit in v6.8.

Thanks,
Cristian

