public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v3 0/2] mailbox queue length check
@ 2025-12-17 21:27 Tanmay Shah
  2025-12-17 21:27 ` [PATCH v3 1/2] mailbox: check mailbox queue is full or not Tanmay Shah
  2025-12-17 21:27 ` [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full Tanmay Shah
  0 siblings, 2 replies; 7+ messages in thread
From: Tanmay Shah @ 2025-12-17 21:27 UTC (permalink / raw)
  To: jassisinghbrar, andersson, mathieu.poirier
  Cc: linux-kernel, linux-remoteproc, Tanmay Shah

Provide a mechanism to check the mailbox queue length before posting a
new message to the mailbox queue.

Changes in v3:
  - move msg_slot_ro to mbox_client instead of mbox_chan
  - allocate separate mailbox client to the "tx" mbox channel
  - modify rest of the patch to use msg_slot_ro from mbox_client

Changes in v2:
  - Split patchset in two patches for two different subsystems.
  - Change design and introduce new field per channel to track mbox
    queue length.
  - use this new field in xlnx remoteproc driver to prevent sending new
    messages if no slots available in the mbox tx channel queue.


Note: This patch series is developed on top of the latest remoteproc
for-next branch.

Tanmay Shah (2):
  mailbox: check mailbox queue is full or not
  remoteproc: xlnx: do not kick if mbox queue is full

 drivers/mailbox/mailbox.c               |  3 +++
 drivers/remoteproc/xlnx_r5_remoteproc.c | 24 +++++++++++++++++++-----
 include/linux/mailbox_client.h          |  2 ++
 3 files changed, 24 insertions(+), 5 deletions(-)


base-commit: 6a95c70e60a03d83b31b855ce9af8cf6aeec7f90
-- 
2.34.1


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v3 1/2] mailbox: check mailbox queue is full or not
  2025-12-17 21:27 [PATCH v3 0/2] mailbox queue length check Tanmay Shah
@ 2025-12-17 21:27 ` Tanmay Shah
  2026-01-18 19:53   ` Jassi Brar
  2025-12-17 21:27 ` [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full Tanmay Shah
  1 sibling, 1 reply; 7+ messages in thread
From: Tanmay Shah @ 2025-12-17 21:27 UTC (permalink / raw)
  To: jassisinghbrar, andersson, mathieu.poirier
  Cc: linux-kernel, linux-remoteproc, Tanmay Shah

Sometimes clients need to know whether the mailbox queue is full before
posting a new message via the mailbox. If the queue is full, clients can
choose not to post a new message. This doesn't mean the queue length
should be increased; rather, clients may want to wait until the previous
Tx is done. Introduce a variable in the mbox client to track the
available msg slots of its channel. Clients can check this variable and
decide not to send a new message if it is 0. This helps avoid the false
positive warning from the mailbox framework: "Try increasing
MBOX_TX_QUEUE_LEN".

Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---
 drivers/mailbox/mailbox.c      | 3 +++
 include/linux/mailbox_client.h | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
index 2acc6ec229a4..b7ae5c173143 100644
--- a/drivers/mailbox/mailbox.c
+++ b/drivers/mailbox/mailbox.c
@@ -36,6 +36,7 @@ static int add_to_rbuf(struct mbox_chan *chan, void *mssg)
 	idx = chan->msg_free;
 	chan->msg_data[idx] = mssg;
 	chan->msg_count++;
+	chan->cl->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);
 
 	if (idx == MBOX_TX_QUEUE_LEN - 1)
 		chan->msg_free = 0;
@@ -71,6 +72,7 @@ static void msg_submit(struct mbox_chan *chan)
 		if (!err) {
 			chan->active_req = data;
 			chan->msg_count--;
+			chan->cl->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);
 		}
 	}
 
@@ -321,6 +323,7 @@ static int __mbox_bind_client(struct mbox_chan *chan, struct mbox_client *cl)
 		chan->msg_count = 0;
 		chan->active_req = NULL;
 		chan->cl = cl;
+		chan->cl->msg_slot_ro = MBOX_TX_QUEUE_LEN;
 		init_completion(&chan->tx_complete);
 
 		if (chan->txdone_method	== TXDONE_BY_POLL && cl->knows_txdone)
diff --git a/include/linux/mailbox_client.h b/include/linux/mailbox_client.h
index c6eea9afb943..a073fb0c03d1 100644
--- a/include/linux/mailbox_client.h
+++ b/include/linux/mailbox_client.h
@@ -17,6 +17,7 @@ struct mbox_chan;
  * @dev:		The client device
  * @tx_block:		If the mbox_send_message should block until data is
  *			transmitted.
+ * @msg_slot_ro:	msg slots remaining for this client's channel.
  * @tx_tout:		Max block period in ms before TX is assumed failure
  * @knows_txdone:	If the client could run the TX state machine. Usually
  *			if the client receives some ACK packet for transmission.
@@ -29,6 +30,7 @@ struct mbox_chan;
 struct mbox_client {
 	struct device *dev;
 	bool tx_block;
+	unsigned int msg_slot_ro;
 	unsigned long tx_tout;
 	bool knows_txdone;
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full
  2025-12-17 21:27 [PATCH v3 0/2] mailbox queue length check Tanmay Shah
  2025-12-17 21:27 ` [PATCH v3 1/2] mailbox: check mailbox queue is full or not Tanmay Shah
@ 2025-12-17 21:27 ` Tanmay Shah
  2026-01-05 21:21   ` Mathieu Poirier
  1 sibling, 1 reply; 7+ messages in thread
From: Tanmay Shah @ 2025-12-17 21:27 UTC (permalink / raw)
  To: jassisinghbrar, andersson, mathieu.poirier
  Cc: linux-kernel, linux-remoteproc, Tanmay Shah

If MBOX_TX_QUEUE_LEN kicks are already pending, there is no need to
issue more kicks because they will fail anyway. Preventing further kicks
avoids printing false positive warning messages from the mailbox
framework. Functionally, nothing changes from the RPMsg or remoteproc
point of view.

Also, allocate a separate mbox client data structure for the tx channel,
as required by the mailbox framework.

Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
---
 drivers/remoteproc/xlnx_r5_remoteproc.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
index a7b75235f53e..2db158c189be 100644
--- a/drivers/remoteproc/xlnx_r5_remoteproc.c
+++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
@@ -9,6 +9,7 @@
 #include <linux/firmware/xlnx-zynqmp.h>
 #include <linux/kernel.h>
 #include <linux/mailbox_client.h>
+#include <linux/mailbox_controller.h>
 #include <linux/mailbox/zynqmp-ipi-message.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
@@ -74,7 +75,8 @@ struct zynqmp_sram_bank {
  * @tx_mc_buf: to copy data to mailbox tx channel
  * @r5_core: this mailbox's corresponding r5_core pointer
  * @mbox_work: schedule work after receiving data from mailbox
- * @mbox_cl: mailbox client
+ * @mbox_tx_cl: tx channel mailbox client
+ * @mbox_rx_cl: rx channel mailbox client
  * @tx_chan: mailbox tx channel
  * @rx_chan: mailbox rx channel
  */
@@ -83,7 +85,8 @@ struct mbox_info {
 	unsigned char tx_mc_buf[MBOX_CLIENT_BUF_MAX];
 	struct zynqmp_r5_core *r5_core;
 	struct work_struct mbox_work;
-	struct mbox_client mbox_cl;
+	struct mbox_client mbox_tx_cl;
+	struct mbox_client mbox_rx_cl;
 	struct mbox_chan *tx_chan;
 	struct mbox_chan *rx_chan;
 };
@@ -230,7 +233,7 @@ static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *msg)
 	struct mbox_info *ipi;
 	size_t len;
 
-	ipi = container_of(cl, struct mbox_info, mbox_cl);
+	ipi = container_of(cl, struct mbox_info, mbox_rx_cl);
 
 	/* copy data from ipi buffer to r5_core */
 	ipi_msg = (struct zynqmp_ipi_message *)msg;
@@ -269,8 +272,8 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
 	if (!ipi)
 		return NULL;
 
-	mbox_cl = &ipi->mbox_cl;
-	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
+	mbox_cl = &ipi->mbox_tx_cl;
+	mbox_cl->rx_callback = NULL;
 	mbox_cl->tx_block = false;
 	mbox_cl->knows_txdone = false;
 	mbox_cl->tx_done = NULL;
@@ -285,6 +288,13 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
 		return NULL;
 	}
 
+	mbox_cl = &ipi->mbox_rx_cl;
+	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
+	mbox_cl->tx_block = false;
+	mbox_cl->knows_txdone = false;
+	mbox_cl->tx_done = NULL;
+	mbox_cl->dev = cdev;
+
 	ipi->rx_chan = mbox_request_channel_byname(mbox_cl, "rx");
 	if (IS_ERR(ipi->rx_chan)) {
 		mbox_free_channel(ipi->tx_chan);
@@ -335,6 +345,10 @@ static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
 	if (!ipi)
 		return;
 
+	/* Do not need new kick as already many kicks are pending. */
+	if (ipi->tx_chan->cl->msg_slot_ro == 0)
+		return;
+
 	mb_msg = (struct zynqmp_ipi_message *)ipi->tx_mc_buf;
 	memcpy(mb_msg->data, &vqid, sizeof(vqid));
 	mb_msg->len = sizeof(vqid);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full
  2025-12-17 21:27 ` [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full Tanmay Shah
@ 2026-01-05 21:21   ` Mathieu Poirier
  2026-01-30 18:14     ` Shah, Tanmay
  0 siblings, 1 reply; 7+ messages in thread
From: Mathieu Poirier @ 2026-01-05 21:21 UTC (permalink / raw)
  To: Tanmay Shah; +Cc: jassisinghbrar, andersson, linux-kernel, linux-remoteproc

Good day,

On Wed, Dec 17, 2025 at 01:27:28PM -0800, Tanmay Shah wrote:
> If MBOX_TX_QUEUE_LEN number of kicks are pending, then no need to keep
> doing more kicks because it will fail anyway. Preventing further kicks
> is needed because it avoids printing false positive warning messages
> from mailbox framework. Functionally nothing changes from RPMsg or
> remoteproc point of view.
> 
> Also, allocate different mbox client data structure for tx channel, as
> it's a requirement from the mailbox framework.
> 

The semantic of these two changes is different enough to mandate two separate
patches.  I'm fine with the changes themselves.

Thanks,
Mathieu 

> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
>  drivers/remoteproc/xlnx_r5_remoteproc.c | 24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
> index a7b75235f53e..2db158c189be 100644
> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
> @@ -9,6 +9,7 @@
>  #include <linux/firmware/xlnx-zynqmp.h>
>  #include <linux/kernel.h>
>  #include <linux/mailbox_client.h>
> +#include <linux/mailbox_controller.h>
>  #include <linux/mailbox/zynqmp-ipi-message.h>
>  #include <linux/module.h>
>  #include <linux/of_address.h>
> @@ -74,7 +75,8 @@ struct zynqmp_sram_bank {
>   * @tx_mc_buf: to copy data to mailbox tx channel
>   * @r5_core: this mailbox's corresponding r5_core pointer
>   * @mbox_work: schedule work after receiving data from mailbox
> - * @mbox_cl: mailbox client
> + * @mbox_tx_cl: tx channel mailbox client
> + * @mbox_rx_cl: rx channel mailbox client
>   * @tx_chan: mailbox tx channel
>   * @rx_chan: mailbox rx channel
>   */
> @@ -83,7 +85,8 @@ struct mbox_info {
>  	unsigned char tx_mc_buf[MBOX_CLIENT_BUF_MAX];
>  	struct zynqmp_r5_core *r5_core;
>  	struct work_struct mbox_work;
> -	struct mbox_client mbox_cl;
> +	struct mbox_client mbox_tx_cl;
> +	struct mbox_client mbox_rx_cl;
>  	struct mbox_chan *tx_chan;
>  	struct mbox_chan *rx_chan;
>  };
> @@ -230,7 +233,7 @@ static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *msg)
>  	struct mbox_info *ipi;
>  	size_t len;
>  
> -	ipi = container_of(cl, struct mbox_info, mbox_cl);
> +	ipi = container_of(cl, struct mbox_info, mbox_rx_cl);
>  
>  	/* copy data from ipi buffer to r5_core */
>  	ipi_msg = (struct zynqmp_ipi_message *)msg;
> @@ -269,8 +272,8 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
>  	if (!ipi)
>  		return NULL;
>  
> -	mbox_cl = &ipi->mbox_cl;
> -	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
> +	mbox_cl = &ipi->mbox_tx_cl;
> +	mbox_cl->rx_callback = NULL;
>  	mbox_cl->tx_block = false;
>  	mbox_cl->knows_txdone = false;
>  	mbox_cl->tx_done = NULL;
> @@ -285,6 +288,13 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
>  		return NULL;
>  	}
>  
> +	mbox_cl = &ipi->mbox_rx_cl;
> +	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
> +	mbox_cl->tx_block = false;
> +	mbox_cl->knows_txdone = false;
> +	mbox_cl->tx_done = NULL;
> +	mbox_cl->dev = cdev;
> +
>  	ipi->rx_chan = mbox_request_channel_byname(mbox_cl, "rx");
>  	if (IS_ERR(ipi->rx_chan)) {
>  		mbox_free_channel(ipi->tx_chan);
> @@ -335,6 +345,10 @@ static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
>  	if (!ipi)
>  		return;
>  
> +	/* Do not need new kick as already many kicks are pending. */
> +	if (ipi->tx_chan->cl->msg_slot_ro == 0)
> +		return;
> +
>  	mb_msg = (struct zynqmp_ipi_message *)ipi->tx_mc_buf;
>  	memcpy(mb_msg->data, &vqid, sizeof(vqid));
>  	mb_msg->len = sizeof(vqid);
> -- 
> 2.34.1
> 

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/2] mailbox: check mailbox queue is full or not
  2025-12-17 21:27 ` [PATCH v3 1/2] mailbox: check mailbox queue is full or not Tanmay Shah
@ 2026-01-18 19:53   ` Jassi Brar
  2026-01-30 18:13     ` Shah, Tanmay
  0 siblings, 1 reply; 7+ messages in thread
From: Jassi Brar @ 2026-01-18 19:53 UTC (permalink / raw)
  To: Tanmay Shah; +Cc: andersson, mathieu.poirier, linux-kernel, linux-remoteproc

On Wed, Dec 17, 2025 at 3:27 PM Tanmay Shah <tanmay.shah@amd.com> wrote:
>
> Sometimes clients need to know if mailbox queue is full or not before
> posting new message via mailbox. If mailbox queue is full clients can
> choose not to post new message. This doesn't mean current queue length
> should be increased, but clients may want to wait till previous Tx is
> done. Introduce variable per channel to track available msg slots.
> Clients can check this variable and decide not to send new message if
> it is 0. This will help avoid false positive warning from mailbox
> framework "Try increasing MBOX_TX_QUEUE_LEN".
>
> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
> ---
>  drivers/mailbox/mailbox.c      | 3 +++
>  include/linux/mailbox_client.h | 2 ++
>  2 files changed, 5 insertions(+)
>
> diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
> index 2acc6ec229a4..b7ae5c173143 100644
> --- a/drivers/mailbox/mailbox.c
> +++ b/drivers/mailbox/mailbox.c
> @@ -36,6 +36,7 @@ static int add_to_rbuf(struct mbox_chan *chan, void *mssg)
>         idx = chan->msg_free;
>         chan->msg_data[idx] = mssg;
>         chan->msg_count++;
> +       chan->cl->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);
>
msg_count is protected by a lock and limited within MBOX_TX_QUEUE_LEN,
so how about simply   chan->cl->msg_slot_ro--;

>         if (idx == MBOX_TX_QUEUE_LEN - 1)
>                 chan->msg_free = 0;
> @@ -71,6 +72,7 @@ static void msg_submit(struct mbox_chan *chan)
>                 if (!err) {
>                         chan->active_req = data;
>                         chan->msg_count--;
> +                       chan->cl->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);
>
Similarly   chan->cl->msg_slot_ro++  ?

>                 }
>         }
>
> @@ -321,6 +323,7 @@ static int __mbox_bind_client(struct mbox_chan *chan, struct mbox_client *cl)
>                 chan->msg_count = 0;
>                 chan->active_req = NULL;
>                 chan->cl = cl;
> +               chan->cl->msg_slot_ro = MBOX_TX_QUEUE_LEN;
>                 init_completion(&chan->tx_complete);
>
>                 if (chan->txdone_method == TXDONE_BY_POLL && cl->knows_txdone)
> diff --git a/include/linux/mailbox_client.h b/include/linux/mailbox_client.h
> index c6eea9afb943..a073fb0c03d1 100644
> --- a/include/linux/mailbox_client.h
> +++ b/include/linux/mailbox_client.h
> @@ -17,6 +17,7 @@ struct mbox_chan;
>   * @dev:               The client device
>   * @tx_block:          If the mbox_send_message should block until data is
>   *                     transmitted.
> + * @msg_slot_ro:       msg slots remaining for this client's channel.
>   * @tx_tout:           Max block period in ms before TX is assumed failure
>   * @knows_txdone:      If the client could run the TX state machine. Usually
>   *                     if the client receives some ACK packet for transmission.
> @@ -29,6 +30,7 @@ struct mbox_chan;
>  struct mbox_client {
>         struct device *dev;
>         bool tx_block;
> +       unsigned int msg_slot_ro;
>
maybe call it 'tx_slots_available_ro' ?

Thanks

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 1/2] mailbox: check mailbox queue is full or not
  2026-01-18 19:53   ` Jassi Brar
@ 2026-01-30 18:13     ` Shah, Tanmay
  0 siblings, 0 replies; 7+ messages in thread
From: Shah, Tanmay @ 2026-01-30 18:13 UTC (permalink / raw)
  To: Jassi Brar, Tanmay Shah
  Cc: andersson, mathieu.poirier, linux-kernel, linux-remoteproc

Hi Jassi,

Thank you for the reviews.

I agree with all your comments and will address them in the next
revision. I am just returning from leave, hence the delay in response.

Tanmay

On 1/18/2026 1:53 PM, Jassi Brar wrote:
> On Wed, Dec 17, 2025 at 3:27 PM Tanmay Shah <tanmay.shah@amd.com> wrote:
>>
>> Sometimes clients need to know if mailbox queue is full or not before
>> posting new message via mailbox. If mailbox queue is full clients can
>> choose not to post new message. This doesn't mean current queue length
>> should be increased, but clients may want to wait till previous Tx is
>> done. Introduce variable per channel to track available msg slots.
>> Clients can check this variable and decide not to send new message if
>> it is 0. This will help avoid false positive warning from mailbox
>> framework "Try increasing MBOX_TX_QUEUE_LEN".
>>
>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>> ---
>>  drivers/mailbox/mailbox.c      | 3 +++
>>  include/linux/mailbox_client.h | 2 ++
>>  2 files changed, 5 insertions(+)
>>
>> diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
>> index 2acc6ec229a4..b7ae5c173143 100644
>> --- a/drivers/mailbox/mailbox.c
>> +++ b/drivers/mailbox/mailbox.c
>> @@ -36,6 +36,7 @@ static int add_to_rbuf(struct mbox_chan *chan, void *mssg)
>>         idx = chan->msg_free;
>>         chan->msg_data[idx] = mssg;
>>         chan->msg_count++;
>> +       chan->cl->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);
>>
> msg_count is protected by a lock and limited within MBOX_TX_QUEUE_LEN,
> so how about simply   chan->cl->msg_slot_ro--;
> 
>>         if (idx == MBOX_TX_QUEUE_LEN - 1)
>>                 chan->msg_free = 0;
>> @@ -71,6 +72,7 @@ static void msg_submit(struct mbox_chan *chan)
>>                 if (!err) {
>>                         chan->active_req = data;
>>                         chan->msg_count--;
>> +                       chan->cl->msg_slot_ro = (MBOX_TX_QUEUE_LEN - chan->msg_count);
>>
> Similarly   chan->cl->msg_slot_ro++  ?
> 
>>                 }
>>         }
>>
>> @@ -321,6 +323,7 @@ static int __mbox_bind_client(struct mbox_chan *chan, struct mbox_client *cl)
>>                 chan->msg_count = 0;
>>                 chan->active_req = NULL;
>>                 chan->cl = cl;
>> +               chan->cl->msg_slot_ro = MBOX_TX_QUEUE_LEN;
>>                 init_completion(&chan->tx_complete);
>>
>>                 if (chan->txdone_method == TXDONE_BY_POLL && cl->knows_txdone)
>> diff --git a/include/linux/mailbox_client.h b/include/linux/mailbox_client.h
>> index c6eea9afb943..a073fb0c03d1 100644
>> --- a/include/linux/mailbox_client.h
>> +++ b/include/linux/mailbox_client.h
>> @@ -17,6 +17,7 @@ struct mbox_chan;
>>   * @dev:               The client device
>>   * @tx_block:          If the mbox_send_message should block until data is
>>   *                     transmitted.
>> + * @msg_slot_ro:       msg slots remaining for this client's channel.
>>   * @tx_tout:           Max block period in ms before TX is assumed failure
>>   * @knows_txdone:      If the client could run the TX state machine. Usually
>>   *                     if the client receives some ACK packet for transmission.
>> @@ -29,6 +30,7 @@ struct mbox_chan;
>>  struct mbox_client {
>>         struct device *dev;
>>         bool tx_block;
>> +       unsigned int msg_slot_ro;
>>
> maybe call it 'tx_slots_available_ro' ?
> 
> Thanks


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full
  2026-01-05 21:21   ` Mathieu Poirier
@ 2026-01-30 18:14     ` Shah, Tanmay
  0 siblings, 0 replies; 7+ messages in thread
From: Shah, Tanmay @ 2026-01-30 18:14 UTC (permalink / raw)
  To: Mathieu Poirier, Tanmay Shah
  Cc: jassisinghbrar, andersson, linux-kernel, linux-remoteproc


On 1/5/2026 3:21 PM, Mathieu Poirier wrote:
> Good day,
> 
> On Wed, Dec 17, 2025 at 01:27:28PM -0800, Tanmay Shah wrote:
>> If MBOX_TX_QUEUE_LEN number of kicks are pending, then no need to keep
>> doing more kicks because it will fail anyway. Preventing further kicks
>> is needed because it avoids printing false positive warning messages
>> from mailbox framework. Functionally nothing changes from RPMsg or
>> remoteproc point of view.
>>
>> Also, allocate different mbox client data structure for tx channel, as
>> it's a requirement from the mailbox framework.
>>
> 
> The semantic of these two changes is different enough to mandate two separate
> patches.  I'm fine with the changes themselves.
> 

Thanks Mathieu.

Ack, I will split this into two separate patches, one for each change.

Thanks,
Tanmay

> Thanks,
> Mathieu 
> 
>> Signed-off-by: Tanmay Shah <tanmay.shah@amd.com>
>> ---
>>  drivers/remoteproc/xlnx_r5_remoteproc.c | 24 +++++++++++++++++++-----
>>  1 file changed, 19 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/remoteproc/xlnx_r5_remoteproc.c b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> index a7b75235f53e..2db158c189be 100644
>> --- a/drivers/remoteproc/xlnx_r5_remoteproc.c
>> +++ b/drivers/remoteproc/xlnx_r5_remoteproc.c
>> @@ -9,6 +9,7 @@
>>  #include <linux/firmware/xlnx-zynqmp.h>
>>  #include <linux/kernel.h>
>>  #include <linux/mailbox_client.h>
>> +#include <linux/mailbox_controller.h>
>>  #include <linux/mailbox/zynqmp-ipi-message.h>
>>  #include <linux/module.h>
>>  #include <linux/of_address.h>
>> @@ -74,7 +75,8 @@ struct zynqmp_sram_bank {
>>   * @tx_mc_buf: to copy data to mailbox tx channel
>>   * @r5_core: this mailbox's corresponding r5_core pointer
>>   * @mbox_work: schedule work after receiving data from mailbox
>> - * @mbox_cl: mailbox client
>> + * @mbox_tx_cl: tx channel mailbox client
>> + * @mbox_rx_cl: rx channel mailbox client
>>   * @tx_chan: mailbox tx channel
>>   * @rx_chan: mailbox rx channel
>>   */
>> @@ -83,7 +85,8 @@ struct mbox_info {
>>  	unsigned char tx_mc_buf[MBOX_CLIENT_BUF_MAX];
>>  	struct zynqmp_r5_core *r5_core;
>>  	struct work_struct mbox_work;
>> -	struct mbox_client mbox_cl;
>> +	struct mbox_client mbox_tx_cl;
>> +	struct mbox_client mbox_rx_cl;
>>  	struct mbox_chan *tx_chan;
>>  	struct mbox_chan *rx_chan;
>>  };
>> @@ -230,7 +233,7 @@ static void zynqmp_r5_mb_rx_cb(struct mbox_client *cl, void *msg)
>>  	struct mbox_info *ipi;
>>  	size_t len;
>>  
>> -	ipi = container_of(cl, struct mbox_info, mbox_cl);
>> +	ipi = container_of(cl, struct mbox_info, mbox_rx_cl);
>>  
>>  	/* copy data from ipi buffer to r5_core */
>>  	ipi_msg = (struct zynqmp_ipi_message *)msg;
>> @@ -269,8 +272,8 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
>>  	if (!ipi)
>>  		return NULL;
>>  
>> -	mbox_cl = &ipi->mbox_cl;
>> -	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
>> +	mbox_cl = &ipi->mbox_tx_cl;
>> +	mbox_cl->rx_callback = NULL;
>>  	mbox_cl->tx_block = false;
>>  	mbox_cl->knows_txdone = false;
>>  	mbox_cl->tx_done = NULL;
>> @@ -285,6 +288,13 @@ static struct mbox_info *zynqmp_r5_setup_mbox(struct device *cdev)
>>  		return NULL;
>>  	}
>>  
>> +	mbox_cl = &ipi->mbox_rx_cl;
>> +	mbox_cl->rx_callback = zynqmp_r5_mb_rx_cb;
>> +	mbox_cl->tx_block = false;
>> +	mbox_cl->knows_txdone = false;
>> +	mbox_cl->tx_done = NULL;
>> +	mbox_cl->dev = cdev;
>> +
>>  	ipi->rx_chan = mbox_request_channel_byname(mbox_cl, "rx");
>>  	if (IS_ERR(ipi->rx_chan)) {
>>  		mbox_free_channel(ipi->tx_chan);
>> @@ -335,6 +345,10 @@ static void zynqmp_r5_rproc_kick(struct rproc *rproc, int vqid)
>>  	if (!ipi)
>>  		return;
>>  
>> +	/* Do not need new kick as already many kicks are pending. */
>> +	if (ipi->tx_chan->cl->msg_slot_ro == 0)
>> +		return;
>> +
>>  	mb_msg = (struct zynqmp_ipi_message *)ipi->tx_mc_buf;
>>  	memcpy(mb_msg->data, &vqid, sizeof(vqid));
>>  	mb_msg->len = sizeof(vqid);
>> -- 
>> 2.34.1
>>


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2026-01-30 18:15 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-12-17 21:27 [PATCH v3 0/2] mailbox queue length check Tanmay Shah
2025-12-17 21:27 ` [PATCH v3 1/2] mailbox: check mailbox queue is full or not Tanmay Shah
2026-01-18 19:53   ` Jassi Brar
2026-01-30 18:13     ` Shah, Tanmay
2025-12-17 21:27 ` [PATCH v3 2/2] remoteproc: xlnx: do not kick if mbox queue is full Tanmay Shah
2026-01-05 21:21   ` Mathieu Poirier
2026-01-30 18:14     ` Shah, Tanmay

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox