public inbox for linux-kernel@vger.kernel.org
From: Mathieu Poirier <mathieu.poirier@linaro.org>
To: Tanmay Shah <tanmay.shah@amd.com>
Cc: jassisinghbrar@gmail.com, andersson@kernel.org,
	linux-kernel@vger.kernel.org, linux-remoteproc@vger.kernel.org
Subject: Re: [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full
Date: Mon, 5 Jan 2026 14:21:56 -0700	[thread overview]
Message-ID: <aVwrdGiZZ9E-5Nkq@p14s> (raw)
In-Reply-To: <20251217212728.1540043-3-tanmay.shah@amd.com>

Good day,

On Wed, Dec 17, 2025 at 01:27:28PM -0800, Tanmay Shah wrote:
> If MBOX_TX_QUEUE_LEN number of kicks are pending, then no need to keep
> doing more kicks because it will fail anyway. Preventing further kicks
> is needed because it avoids printing false positive warning messages
> from mailbox framework. Functionally nothing changes from RPMsg or
> remoteproc point of view.
> 
> Also, allocate different mbox client data structure for tx channel, as
> it's a requirement from the mailbox framework.
> 

The semantics of these two changes are different enough to mandate two separate
patches.  I'm fine with the changes themselves.

Thanks,
Mathieu 

> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
>  drivers/remoteproc/xlnx_r5_remoteproc.c | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index a7b75235f53e..2db158c189be 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -9,6 +9,7 @@
>  #include <linux/firmware/xlnx-zynqmp.h>
>  #include <linux/kernel.h>
>  #include <linux/mailbox_client.h>
> +#include <linux/mailbox_controller.h>
>  #include <linux/mailbox/zynqmp-ipi-message.h>
>  #include <linux/module.h>
>  #include <linux/of_address.h>
> @@ -74,7 +75,8 @@ struct zynqmp_sram_bank {
>   * @tx_mc_buf: to copy data to mailbox tx channel
>   * @r5_core: this mailbox's corresponding r5_core pointer
>   * @mbox_work: schedule work after receiving data from mailbox
> - * @mbox_cl: mailbox client
> + * @mbox_tx_cl: tx channel mailbox client
> + * @mbox_rx_cl: rx channel mailbox client
>   * @tx_chan: mailbox tx channel
>   * @rx_chan: mailbox rx channel
>   */
> @@ -83,7 +85,8 @@ struct mbox_info {
>  	unsigned char tx_mc_buf[MBOX_CLIENT_BUF_MAX];
>  	struct zynqmp_r5_core *r5_core;
>  	struct work_struct mbox_work;
> -	struct mbox_client mbox_cl;
> +	struct mbox_client mbox_tx_cl;
> +	struct mbox_client mbox_rx_cl;
>  	struct mbox_chan *tx_chan;
>  	struct mbox_chan *rx_chan;
>  };
> @@ -230,7 +233,7 @@ static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *msg)
>  	struct mbox_info *ipi;
>  	size_t len;
>  
> -	ipi = container_of(cl, struct mbox_info, mbox_cl);
> +	ipi = container_of(cl, struct mbox_info, mbox_rx_cl);
>  
>  	/* copy data from ipi buffer to r5_core */
>  	ipi_msg = (struct zynqmp_ipi_message *)msg;
> @@ -269,8 +272,8 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
>  	if (!ipi)
>  		return NULL;
>  
> -	mbox_cl = &ipi->mbox_cl;
> -	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
> +	mbox_cl = &ipi->mbox_tx_cl;
> +	mbox_cl->rx_callback = NULL;
>  	mbox_cl->tx_block = false;
>  	mbox_cl->knows_txdone = false;
>  	mbox_cl->tx_done = NULL;
> @@ -285,6 +288,13 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
>  		return NULL;
>  	}
>  
> +	mbox_cl = &ipi->mbox_rx_cl;
> +	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
> +	mbox_cl->tx_block = false;
> +	mbox_cl->knows_txdone = false;
> +	mbox_cl->tx_done = NULL;
> +	mbox_cl->dev = cdev;
> +
>  	ipi->rx_chan = mbox_request_channel_byname(mbox_cl, "rx");
>  	if (IS_ERR(ipi->rx_chan)) {
>  		mbox_free_channel(ipi->tx_chan);
> @@ -335,6 +345,10 @@ static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
>  	if (!ipi)
>  		return;
>  
> +	/* Do not need new kick as already many kicks are pending. */
> +	if (ipi->tx_chan->cl->msg_slot_ro == 0)
> +		return;
> +
>  	mb_msg = (struct zynqmp_ipi_message *)ipi->tx_mc_buf;
>  	memcpy(mb_msg->data, &vqid, sizeof(vqid));
>  	mb_msg->len = sizeof(vqid);
> -- 
> 2.34.1
> 


Thread overview: 7+ messages
2025-12-17 21:27 [PATCH v3 0/2] mailbox queue length check Tanmay Shah
2025-12-17 21:27 ` [PATCH v3 1/2] mailbox: check mailbox queue is full or not Tanmay Shah
2026-01-18 19:53   ` Jassi Brar
2026-01-30 18:13     ` Shah, Tanmay
2025-12-17 21:27 ` [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full Tanmay Shah
2026-01-05 21:21   ` Mathieu Poirier [this message]
2026-01-30 18:14     ` Shah, Tanmay
