* [RFC PATCH 0/1] Proposal for in-band firmware update over PLDM-MCTP
@ 2026-05-04 19:34 Badal Nilawar
2026-05-04 19:34 ` [RFC PATCH 1/1] xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox Badal Nilawar
2026-05-05 2:15 ` [RFC PATCH 0/1] Proposal for in-band firmware update over PLDM-MCTP Jeremy Kerr
0 siblings, 2 replies; 5+ messages in thread
From: Badal Nilawar @ 2026-05-04 19:34 UTC (permalink / raw)
To: dri-devel, intel-xe, netdev, linux-kernel
Cc: badal.nilawar, rodrigo.vivi, wojciech.drewek, michael.brooks,
heikki.krogerus, michael.j.ruhl, thomas.hellstrom,
michal.winiarski, anshuman.gupta, jacob.e.keller,
maarten.lankhorst, matthew.brost, anthony.l.nguyen,
przemyslaw.kitszel, mika.westerberg, andriy.shevchenko,
singaravelan.nallasellan, kelvin.gardiner, jk, matt,
andrew+netdev, davem, edumazet, kuba, pabeni, james.ausmus
Problem statement:
The CRI platform includes a dedicated I2C controller for in-band access to the AMC.
AMC firmware updates leverage PLDM over MCTP, using the I2C controller as the
physical transport, enabled by the Linux kernel's MCTP-I2C driver
(drivers/net/mctp/mctp-i2c.c), which manages MCTP framing and delivery over the
I2C bus.
CRI in-band firmware update flow:
+-----------------------------------+
| User Space |
| (PLDM requester/responder) |
| PLDM client via AF_MCTP socket |
+-----------------------------------+
|
+-----------------------------------+
| net/mctp core |
| (routing, EID resolution, |
| MCTP framing) |
+-----------------------------------+
|
+-----------------------------------+
| mctp-i2c.c |
| MCTP transport over i2c |
+-----------------------------------+
|
+-----------------------------------+
| AMC Firmware |
| (PLDM requester/responder) |
| MCTP Decode -> PLDM Processing |
+-----------------------------------+
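For either transport, the user-space PLDM requester at the top of the diagram uses the same AF_MCTP socket API. A minimal sketch of opening a socket and building the address of a PLDM endpoint (the EID value is an arbitrary example, not anything from this platform):

```c
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/mctp.h>

/*
 * Open an AF_MCTP datagram socket and fill in the address of a PLDM
 * endpoint (MCTP message type 0x01). Returns the socket fd, or -1 if
 * the kernel lacks MCTP support (CONFIG_MCTP disabled).
 */
static int pldm_socket(struct sockaddr_mctp *addr, unsigned char eid)
{
	int sd = socket(AF_MCTP, SOCK_DGRAM, 0);

	if (sd < 0)
		return -1;

	memset(addr, 0, sizeof(*addr));
	addr->smctp_family = AF_MCTP;
	addr->smctp_network = MCTP_NET_ANY;
	addr->smctp_addr.s_addr = eid;	/* endpoint ID, e.g. the AMC */
	addr->smctp_type = 0x01;	/* MCTP message type: PLDM */
	addr->smctp_tag = MCTP_TAG_OWNER; /* we own the tag for this request */
	return sd;
}
```

Requests would then go out with sendto() using this address; the kernel's net/mctp core handles routing to whichever transport (mctp-i2c or the proposed mailbox) owns the route.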
Future GPU platforms may not include a dedicated I2C controller for MCTP
access; however, the requirement to support PLDM over MCTP persists. To
address this, a vendor-specific mailbox-based transport is proposed to
carry MCTP messages between the host and the GPU/AMC.
Solution:
The proposed approach involves implementing an MCTP transport layer with
binding type MCTP_PHYS_BINDING_VENDOR (0xFF) and incorporating it either
into the drivers/net/mctp/ directory or the drivers/gpu/drm/xe subsystem.
The latter is particularly fitting, as Xe is the exclusive consumer of
this functionality.
Option 1: MCTP Transport as Part of drivers/gpu/drm/xe Subsystem
Under this design, the Xe KMD takes sole ownership of the MCTP mailbox
transport layer. The implementation is introduced as a new source file,
xe_mctp_mailbox.c, placed within the drivers/gpu/drm/xe/ driver tree.
Although xe_mctp_mailbox.c resides within the Xe subsystem, it does not
operate in isolation from the broader networking stack. It integrates
with the standard Linux MCTP stack by registering an MCTP net device
via mctp_register_netdev(), allowing the net/mctp core to manage
routing, EID resolution, and MCTP framing in the usual way.
This means Xe retains direct ownership of the mailbox MMIO interface
(status and data registers via BAR), while still participating as a
transport in the MCTP network subsystem.
+--------------------------------------------------+
| User Space |
| (PLDM Requester / Responder) |
| PLDM Client via AF_MCTP Socket |
+--------------------------------------------------+
|
AF_MCTP API
|
+--------------------------------------------------+
| net/mctp Core |
| (Routing, EID Resolution, MCTP Framing) |
+--------------------------------------------------+
| ^
| mctp_register_netdev() | netdev / mctp_dev ops
| (called by Xe KMD) |
+--------------------------------------------------+
| [NEW] drivers/gpu/drm/xe/xe_mctp_mailbox.c |
| |
| Owned by : Xe KMD (drivers/gpu/drm/xe/) |
| - Registers directly as MCTP net device |
| - Implements MCTP transport over Mailbox MMIO |
+--------------------------------------------------+
|
Direct MMIO access (BAR registers)
|
+--------------------------------------------------+
| Intel Mailbox MMIO Registers |
| (Doorbell, Status, Data Registers via BAR) |
+--------------------------------------------------+
|
Hardware Mailbox Interface
|
+--------------------------------------------------+
| GPU / AMC Firmware |
| (PLDM Requester / Responder) |
| Mailbox Handler --> MCTP Decode |
| --> PLDM Processing |
+--------------------------------------------------+
Pros:
+ Tightly coupled with Xe and doesn't pollute the generic MCTP
subsystem with a vendor-specific mailbox solution.
Cons:
- Not a generic layer; cannot be easily reused or consumed
by other drivers or subsystems outside of Xe.
RFC Patch: "xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox"
Option 2: MCTP Transport as Part of net/mctp Subsystem
The MCTP mailbox driver (intel-mctp-mailbox.c) resides under
drivers/net/mctp/ and registers itself as an Auxiliary Bus Driver on the
Linux auxiliary bus. The Xe KMD is responsible for constructing an
auxiliary device that encapsulates the mailbox ops and registering it
on the auxiliary bus, where it binds against the MCTP auxiliary
driver. This binding allows the MCTP layer to send and receive
messages through the GPU mailbox interface.
+--------------------------------------------------+
| User Space |
| (PLDM Requester / Responder) |
| PLDM Client via AF_MCTP Socket |
+--------------------------------------------------+
|
AF_MCTP API
|
+--------------------------------------------------+
| net/mctp Core |
| (Routing, EID Resolution, MCTP Framing) |
+--------------------------------------------------+
|
mctp_dev ops / netdev interface
|
+--------------------------------------------------+
| [NEW] drivers/net/mctp/intel-mctp-mailbox.c |
| |
| Registered as: Auxiliary Bus Driver |
| - Binds to Auxiliary Device exposed by Xe KMD |
| - Accesses mailbox ops shared via aux device |
+--------------------------------------------------+
| |
| auxiliary_driver_register() | auxiliary_device_register()
| |
+--------------------+ +-------------------------+
| Auxiliary Bus |<->| Auxiliary Device |
| Driver | | (created by Xe KMD) |
| (intel-mctp- | | - Contains mailbox_ops |
| mailbox.c) | | |
| drivers/net/mctp/ | | |
+--------------------+ +-------------------------+
|
mailbox_ops (send/recv via MMIO)
|
+--------------------------------------------------+
| Intel Mailbox MMIO Registers |
| (Status, Data Registers via BAR) |
+--------------------------------------------------+
|
Hardware Mailbox Interface
|
+--------------------------------------------------+
| GPU / AMC Firmware |
| (PLDM Requester / Responder) |
| Mailbox Handler --> MCTP Decode |
| --> PLDM Processing |
+--------------------------------------------------+
Pros:
+ Natively part of the MCTP stack.
+ Consistent with existing MCTP transport implementations
such as mctp-i2c and mctp-i3c. Probe occurs naturally
when the auxiliary device is created, following standard
kernel device model patterns.
Cons:
- Introduces vendor-specific code alongside the current generic
MCTP entries.
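The core of Option 2 is an ops table handed from the Xe KMD to the net/mctp-side driver. A plain-C sketch of that hand-off (userspace model; the auxiliary-bus plumbing is elided and all names here are illustrative, not the actual API):

```c
#include <stddef.h>

/* Illustrative ops table the Xe KMD would attach to its auxiliary device. */
struct mailbox_ops {
	int (*send)(void *priv, const void *buf, size_t len);
	int (*recv)(void *priv, void *buf, size_t len);
};

/* Provider side (Xe KMD): in the real driver this would write the
 * doorbell/data registers over MMIO; here it just counts transmits. */
static int xe_mbox_send(void *priv, const void *buf, size_t len)
{
	int *tx_count = priv;

	(void)buf;
	(void)len;
	(*tx_count)++;
	return 0;
}

/* Consumer side (intel-mctp-mailbox.c): bound via the auxiliary bus,
 * it only ever sees the ops table, never the MMIO details. */
static int mctp_xmit_via_ops(const struct mailbox_ops *ops, void *priv,
			     const void *pkt, size_t len)
{
	return ops->send(priv, pkt, len);
}
```

The design point is the layering: the net/mctp driver stays hardware-agnostic, and everything Intel-mailbox-specific stays behind the ops table owned by Xe.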
Open for other ideas and suggestions.
Note: This RFC is prepared with AI assistance (e.g., GitHub Copilot).
Badal Nilawar (1):
xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_device.c | 3 +
drivers/gpu/drm/xe/xe_device_types.h | 4 +
drivers/gpu/drm/xe/xe_mctp_mailbox.c | 186 +++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_mctp_mailbox.h | 14 ++
5 files changed, 208 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_mctp_mailbox.c
create mode 100644 drivers/gpu/drm/xe/xe_mctp_mailbox.h
--
2.54.0
* [RFC PATCH 1/1] xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox
2026-05-04 19:34 [RFC PATCH 0/1] Proposal for in-band firmware update over PLDM-MCTP Badal Nilawar
@ 2026-05-04 19:34 ` Badal Nilawar
2026-05-05 2:15 ` Jeremy Kerr
2026-05-05 8:23 ` Andy Shevchenko
2026-05-05 2:15 ` [RFC PATCH 0/1] Proposal for in-band firmware update over PLDM-MCTP Jeremy Kerr
1 sibling, 2 replies; 5+ messages in thread
From: Badal Nilawar @ 2026-05-04 19:34 UTC (permalink / raw)
To: dri-devel, intel-xe, netdev, linux-kernel
Cc: badal.nilawar, rodrigo.vivi, wojciech.drewek, michael.brooks,
heikki.krogerus, michael.j.ruhl, thomas.hellstrom,
michal.winiarski, anshuman.gupta, jacob.e.keller,
maarten.lankhorst, matthew.brost, anthony.l.nguyen,
przemyslaw.kitszel, mika.westerberg, andriy.shevchenko,
singaravelan.nallasellan, kelvin.gardiner, jk, matt,
andrew+netdev, davem, edumazet, kuba, pabeni, james.ausmus
Add support for MCTP transport over the Intel vendor-specific mailbox
protocol to enable in-band firmware updates for the GPU/AMC via PLDM.
Signed-off-by: Badal Nilawar <badal.nilawar@intel.com>
---
drivers/gpu/drm/xe/Makefile | 1 +
drivers/gpu/drm/xe/xe_device.c | 3 +
drivers/gpu/drm/xe/xe_device_types.h | 4 +
drivers/gpu/drm/xe/xe_mctp_mailbox.c | 186 +++++++++++++++++++++++++++
drivers/gpu/drm/xe/xe_mctp_mailbox.h | 14 ++
5 files changed, 208 insertions(+)
create mode 100644 drivers/gpu/drm/xe/xe_mctp_mailbox.c
create mode 100644 drivers/gpu/drm/xe/xe_mctp_mailbox.h
diff --git a/drivers/gpu/drm/xe/Makefile b/drivers/gpu/drm/xe/Makefile
index 09661f079d03..d427152bc3fd 100644
--- a/drivers/gpu/drm/xe/Makefile
+++ b/drivers/gpu/drm/xe/Makefile
@@ -89,6 +89,7 @@ xe-y += xe_bb.o \
xe_late_bind_fw.o \
xe_lrc.o \
xe_mem_pool.o \
+ xe_mctp_mailbox.o \
xe_migrate.o \
xe_mmio.o \
xe_mmio_gem.o \
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 4b45b617a039..ebf456f580d1 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -49,6 +49,7 @@
#include "xe_i2c.h"
#include "xe_irq.h"
#include "xe_late_bind_fw.h"
+#include "xe_mctp_mailbox.h"
#include "xe_mmio.h"
#include "xe_module.h"
#include "xe_nvm.h"
@@ -1065,6 +1066,8 @@ int xe_device_probe(struct xe_device *xe)
for_each_gt(gt, xe, id)
xe_gt_sanitize_freq(gt);
+ xe_mctp_mailbox_init(xe);
+
xe_vsec_init(xe);
err = xe_sriov_init_late(xe);
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 89437de3001a..9cfe70428c71 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -38,6 +38,7 @@
struct drm_pagemap_shrinker;
struct intel_display;
struct intel_dg_nvm_dev;
+struct xe_mctp_mailbox;
struct xe_ggtt;
struct xe_i2c;
struct xe_pat_ops;
@@ -500,6 +501,9 @@ struct xe_device {
struct llist_head async_list;
} bo_device;
+ /** @mctp_mailbox: mctp mailbox */
+ struct xe_mctp_mailbox *mctp_mailbox;
+
/** @pmu: performance monitoring unit */
struct xe_pmu pmu;
diff --git a/drivers/gpu/drm/xe/xe_mctp_mailbox.c b/drivers/gpu/drm/xe/xe_mctp_mailbox.c
new file mode 100644
index 000000000000..f1a81208a9a1
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_mctp_mailbox.c
@@ -0,0 +1,186 @@
+// SPDX-License-Identifier: MIT
+/*
+ * MCTP-over-MAILBOX transport for Xe.
+ *
+ * Copyright 2026 Intel Corporation
+ */
+
+#include "xe_device_types.h"
+
+#include <linux/netdevice.h>
+#include <linux/jiffies.h>
+#include <linux/workqueue.h>
+
+#include <net/mctp.h>
+#include <net/mctpdevice.h>
+#include <net/pkt_sched.h>
+
+#include <uapi/linux/if_arp.h>
+
+#include "xe_mctp_mailbox.h"
+
+#define XE_MCTP_MAILBOX_RX_POLL_MS 100
+
+/** struct xe_mctp_mailbox - state for the MCTP-over-mailbox transport */
+struct xe_mctp_mailbox {
+ /** @netdev: network device */
+ struct net_device *netdev;
+ /** @running: true while netdev is opened */
+ bool running;
+ /** @work: worker to handle mctp requests from firmware */
+ struct delayed_work work;
+ /** @wq: workqueue to schedule mctp rx worker */
+ struct workqueue_struct *wq;
+};
+
+/*
+ * The mailbox protocol is interrupt-free: there is no IRQ for the receive
+ * path (endpoint to host), so the RX handler must be polled periodically
+ * from a worker.
+ */
+static void mctp_mailbox_rx_handler(struct work_struct *work)
+{
+ struct xe_mctp_mailbox *mctp_mailbox =
+ container_of(work, struct xe_mctp_mailbox, work.work);
+ struct net_device *netdev = mctp_mailbox->netdev;
+
+ if (!netdev)
+ return;
+
+ dev_hold(netdev);
+
+ /*
+ * if (mctp_mailbox_rx_ready()) {
+ * Get data over MAILBOX
+ * Allocate skb and copy rx data to skb
+ * Queue skb to upper layer
+ * netif_rx(skb);
+ }
+ */
+ dev_put(netdev);
+
+ if (mctp_mailbox->running)
+ queue_delayed_work(mctp_mailbox->wq, &mctp_mailbox->work,
+ msecs_to_jiffies(XE_MCTP_MAILBOX_RX_POLL_MS));
+}
+
+static netdev_tx_t mctp_mailbox_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ /* RFC stub: send skb over MAILBOX */
+ dev_dstats_tx_dropped(dev);
+ kfree_skb(skb);
+
+ return NETDEV_TX_OK;
+}
+
+static int mctp_mailbox_open(struct net_device *dev)
+{
+ struct xe_mctp_mailbox *mctp_mailbox = netdev_priv(dev);
+
+ mctp_mailbox->running = true;
+ netif_start_queue(dev);
+
+ queue_delayed_work(mctp_mailbox->wq, &mctp_mailbox->work, 0);
+
+ return 0;
+}
+
+static int mctp_mailbox_stop(struct net_device *dev)
+{
+ struct xe_mctp_mailbox *mctp_mailbox = netdev_priv(dev);
+
+ mctp_mailbox->running = false;
+ netif_stop_queue(dev);
+
+ cancel_delayed_work_sync(&mctp_mailbox->work);
+ flush_workqueue(mctp_mailbox->wq);
+
+ return 0;
+}
+
+static const struct net_device_ops mctp_mailbox_netdev_ops = {
+ .ndo_start_xmit = mctp_mailbox_start_xmit,
+ .ndo_open = mctp_mailbox_open,
+ .ndo_stop = mctp_mailbox_stop,
+};
+
+static void mctp_mailbox_netdev_setup(struct net_device *dev)
+{
+ /* Populate netdev structure */
+ dev->type = ARPHRD_MCTP;
+ /*
+ * dev->mtu = MCTP_MAILBOX_MTU_MIN;
+ * dev->min_mtu = MCTP_MAILBOX_MTU_MIN;
+ * dev->max_mtu = MCTP_MAILBOX_MTU_MAX;
+ *
+ * dev->hard_header_len = sizeof(struct mctp_mailbox_hdr);
+ * dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN;
+ */
+ dev->flags = IFF_NOARP;
+ dev->netdev_ops = &mctp_mailbox_netdev_ops;
+ dev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS;
+}
+
+static void xe_mctp_mailbox_fini(void *arg)
+{
+ struct xe_device *xe = arg;
+ struct xe_mctp_mailbox *mctp_mailbox = xe->mctp_mailbox;
+ struct net_device *netdev;
+
+ if (!mctp_mailbox)
+ return;
+
+ netdev = mctp_mailbox->netdev;
+ if (!netdev) {
+ xe->mctp_mailbox = NULL;
+ return;
+ }
+
+ if (mctp_mailbox->wq) {
+ mctp_mailbox->running = false;
+ cancel_delayed_work_sync(&mctp_mailbox->work);
+ destroy_workqueue(mctp_mailbox->wq);
+ mctp_mailbox->wq = NULL;
+ }
+
+ xe->mctp_mailbox = NULL;
+ mctp_unregister_netdev(netdev);
+ free_netdev(netdev);
+}
+
+int xe_mctp_mailbox_init(struct xe_device *xe)
+{
+ struct xe_mctp_mailbox *mctp_mailbox;
+ struct net_device *netdev;
+ int ret, err;
+
+ netdev = alloc_netdev(sizeof(*mctp_mailbox), "mctp_mailbox%d", NET_NAME_ENUM,
+ mctp_mailbox_netdev_setup);
+ if (!netdev)
+ return -ENOMEM;
+
+ SET_NETDEV_DEV(netdev, xe->drm.dev);
+ mctp_mailbox = netdev_priv(netdev);
+ mctp_mailbox->netdev = netdev;
+
+ ret = mctp_register_netdev(netdev, NULL, MCTP_PHYS_BINDING_VENDOR);
+ if (ret) {
+ free_netdev(netdev);
+ return ret;
+ }
+
+ INIT_DELAYED_WORK(&mctp_mailbox->work, mctp_mailbox_rx_handler);
+ mctp_mailbox->wq = alloc_ordered_workqueue("mctp-mailbox-ordered-wq", 0);
+ if (!mctp_mailbox->wq) {
+ mctp_unregister_netdev(netdev);
+ free_netdev(netdev);
+ return -ENOMEM;
+ }
+
+ xe->mctp_mailbox = mctp_mailbox;
+ err = devm_add_action_or_reset(xe->drm.dev, xe_mctp_mailbox_fini, xe);
+ if (err)
+ return err;
+
+ return 0;
+}
diff --git a/drivers/gpu/drm/xe/xe_mctp_mailbox.h b/drivers/gpu/drm/xe/xe_mctp_mailbox.h
new file mode 100644
index 000000000000..318ae173aaa1
--- /dev/null
+++ b/drivers/gpu/drm/xe/xe_mctp_mailbox.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ *
+ * Copyright 2026 Intel Corporation
+ */
+
+#ifndef _XE_MCTP_MAILBOX_H_
+#define _XE_MCTP_MAILBOX_H_
+
+struct xe_device;
+
+int xe_mctp_mailbox_init(struct xe_device *xe);
+
+#endif /* _XE_MCTP_MAILBOX_H_ */
--
2.54.0
* Re: [RFC PATCH 0/1] Proposal for in-band firmware update over PLDM-MCTP
2026-05-04 19:34 [RFC PATCH 0/1] Proposal for in-band firmware update over PLDM-MCTP Badal Nilawar
2026-05-04 19:34 ` [RFC PATCH 1/1] xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox Badal Nilawar
@ 2026-05-05 2:15 ` Jeremy Kerr
1 sibling, 0 replies; 5+ messages in thread
From: Jeremy Kerr @ 2026-05-05 2:15 UTC (permalink / raw)
To: Badal Nilawar, dri-devel, intel-xe, netdev, linux-kernel
Cc: rodrigo.vivi, wojciech.drewek, michael.brooks, heikki.krogerus,
michael.j.ruhl, thomas.hellstrom, michal.winiarski,
anshuman.gupta, jacob.e.keller, maarten.lankhorst, matthew.brost,
anthony.l.nguyen, przemyslaw.kitszel, mika.westerberg,
andriy.shevchenko, singaravelan.nallasellan, kelvin.gardiner,
matt, andrew+netdev, davem, edumazet, kuba, pabeni, james.ausmus
Hi Badal,
Thanks for sending this RFC through! It's good to have some early input
on the structure here.
> Problem statement:
This is exceptionally verbose for "we would like to add an MCTP transport
driver". :)
> Option 1: MCTP Transport as Part of drivers/gpu/drm/xe Subsystem
This sounds like the best approach to me. The MCTP transport drivers in
drivers/net/mctp are intended to be fairly hardware-agnostic, and all
are spec compliant. I see no issue with having your own code do a
mctp_register_netdev() from elsewhere in the tree.
I'll also reply on 1/1 with some implementation comments.
> Note: This RFC is prepared with AI assistance (e.g. GitHub Copilot etc).
Then please ensure you have read
Documentation/process/coding-assistants.rst, as you are missing the
Attribution requirements from that.
Cheers,
Jeremy
* Re: [RFC PATCH 1/1] xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox
2026-05-04 19:34 ` [RFC PATCH 1/1] xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox Badal Nilawar
@ 2026-05-05 2:15 ` Jeremy Kerr
2026-05-05 8:23 ` Andy Shevchenko
1 sibling, 0 replies; 5+ messages in thread
From: Jeremy Kerr @ 2026-05-05 2:15 UTC (permalink / raw)
To: Badal Nilawar, dri-devel, intel-xe, netdev, linux-kernel
Cc: rodrigo.vivi, wojciech.drewek, michael.brooks, heikki.krogerus,
michael.j.ruhl, thomas.hellstrom, michal.winiarski,
anshuman.gupta, jacob.e.keller, maarten.lankhorst, matthew.brost,
anthony.l.nguyen, przemyslaw.kitszel, mika.westerberg,
andriy.shevchenko, singaravelan.nallasellan, kelvin.gardiner,
matt, andrew+netdev, davem, edumazet, kuba, pabeni, james.ausmus
Hi Badal,
> Add support for MCTP transport over the Intel vendor-specific mailbox
> protocol to enable in-band firmware updates for GPU/AMC via PLDM
I know this is an RFC, but you're missing the distinctive part, around
the hardware interactions on the transport side. The skeleton of an MCTP
driver is mostly the same as existing transport drivers, so there's not
a lot to comment on at present.
Some items on the parts here though:
I'd suggest not using the term "MCTP over mailbox" / mctp_mailbox in
general; there are lots of "mailbox" implementations around, including
the proposed PCC mailbox driver. Perhaps "MCTP over Xe mailbox"?
> diff --git a/drivers/gpu/drm/xe/xe_mctp_mailbox.c b/drivers/gpu/drm/xe/xe_mctp_mailbox.c
> new file mode 100644
> index 000000000000..f1a81208a9a1
> --- /dev/null
> +++ b/drivers/gpu/drm/xe/xe_mctp_mailbox.c
> @@ -0,0 +1,186 @@
> +// SPDX-License-Identifier: MIT
This will need to include GPLv2.
> +/*
> + * MCTP-over-MAILBOX transport for Xe.
> + *
> + * Copyright 2026 Intel Corporation
> + */
> +
> +#include "xe_device_types.h"
> +
> +#include <linux/netdevice.h>
> +#include <linux/jiffies.h>
> +#include <linux/workqueue.h>
> +
> +#include <net/mctp.h>
> +#include <net/mctpdevice.h>
> +#include <net/pkt_sched.h>
> +
> +#include <uapi/linux/if_arp.h>
> +
> +#include "xe_mctp_mailbox.h"
> +
> +#define XE_MCTP_MAILBOX_RX_POLL_MS 100
> +
> +/** @mctp_mailbox: Struct for mctp over mailbox */
> +struct xe_mctp_mailbox {
> + /** @mctp_mailbox.netdev: network device */
> + struct net_device *netdev;
> + /** @running: true while netdev is opened */
> + bool running;
> + /** @work: worker to handle mctp requests from firmware */
> + struct delayed_work work;
> + /** @wq: workqueue to schecdule mctp rx worker */
> + struct workqueue_struct *wq;
> +};
> +
> +/*
> + * mailbox protocol is interrupt free so for receive path i.e. endpoint to host
> + * there is no irq available so rx handler need to be polled in worker periodically.
> + */
This is unfortunate - there's no facility you can use to trigger RX?
> +static void mctp_mailbox_rx_handler(struct work_struct *work)
> +{
> + struct xe_mctp_mailbox *mctp_mailbox =
> + container_of(work, struct xe_mctp_mailbox, work.work);
> + struct net_device *netdev = mctp_mailbox->netdev;
> +
> + if (!netdev)
> + return;
> +
> + dev_hold(netdev);
> +
> + /*
> + * if (mctp_mailbox_rx_ready()) {
> + * Get data over MAILBOX
> + * Allocate skb and copy rx data to skb
> + * Queue skb to upper layer
> + * netif_rx(skb);
> + }
> + */
> + dev_put(netdev);
> +
> + if (mctp_mailbox->running)
> + queue_delayed_work(mctp_mailbox->wq, &mctp_mailbox->work,
> + msecs_to_jiffies(XE_MCTP_MAILBOX_RX_POLL_MS));
> +}
One packet per 100ms will mean you will very likely miss a retry timeout
on fragmented messages. You probably want to schedule the next RX
immediately until the mailbox is empty, assuming that will work with the
semantics of the Xe mailbox.
Even then, a single-packet (worst-case) transmission delay of 100ms is
getting a bit large. It may be necessary to increase the polling
frequency, but that has downsides. Hence the question about an RX
notification facility.
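The drain-until-empty scheme suggested here can be modeled in plain C (userspace simulation; the mbox_* names are hypothetical stand-ins for the real mailbox accessors):

```c
#include <stdbool.h>

/* Hypothetical mailbox model: rx_pending counts frames queued by firmware. */
struct mbox_sim {
	int rx_pending;
};

static bool mbox_rx_ready(struct mbox_sim *m) { return m->rx_pending > 0; }
static void mbox_rx_pop(struct mbox_sim *m) { m->rx_pending--; }

/*
 * One poll pass: consume every pending frame before re-arming the delayed
 * work, instead of taking a single frame per 100 ms tick. Returns the
 * number of frames delivered to the stack this pass.
 */
static int mbox_poll_drain(struct mbox_sim *m)
{
	int delivered = 0;

	while (mbox_rx_ready(m)) {
		/* in the driver: read registers, build an skb, netif_rx() */
		mbox_rx_pop(m);
		delivered++;
	}
	return delivered;	/* caller re-arms the delayed work afterwards */
}
```

With this shape, a burst of fragments is consumed in one pass, so the 100 ms interval only bounds the latency of the *first* packet of a message rather than every packet.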
> +
> +static netdev_tx_t mctp_mailbox_start_xmit(struct sk_buff *skb,
> + struct net_device *dev)
> +{
> + /* RFC stub: send skb over MAILBOX */
> + dev_dstats_tx_dropped(dev);
> + kfree_skb(skb);
> +
> + return NETDEV_TX_OK;
> +}
> +
> +static int mctp_mailbox_open(struct net_device *dev)
> +{
> + struct xe_mctp_mailbox *mctp_mailbox = netdev_priv(dev);
> +
> + mctp_mailbox->running = true;
> + netif_start_queue(dev);
> +
> + queue_delayed_work(mctp_mailbox->wq, &mctp_mailbox->work, 0);
> +
> + return 0;
> +}
> +
> +static int mctp_mailbox_stop(struct net_device *dev)
> +{
> + struct xe_mctp_mailbox *mctp_mailbox = netdev_priv(dev);
> +
> + mctp_mailbox->running = false;
> + netif_stop_queue(dev);
> +
> + cancel_delayed_work_sync(&mctp_mailbox->work);
> + flush_workqueue(mctp_mailbox->wq);
> +
> + return 0;
> +}
> +
> +static const struct net_device_ops mctp_mailbox_netdev_ops = {
> + .ndo_start_xmit = mctp_mailbox_start_xmit,
> + .ndo_open = mctp_mailbox_open,
> + .ndo_stop = mctp_mailbox_stop,
> +};
> +
> +static void mctp_mailbox_netdev_setup(struct net_device *dev)
> +{
> + /* Populate netdev structure */
> + dev->type = ARPHRD_MCTP;
> + /*
> + * dev->mtu = MCTP_MAILBOX_MTU_MIN;
> + * dev->min_mtu = MCTP_MAILBOX_MTU_MIN;
> + * dev->max_mtu = MCTP_MAILBOX_MTU_MAX;
> + *
> + * dev->hard_header_len = sizeof(struct mctp_mailbox_hdr);
What's in struct mctp_mailbox_hdr? I assume you don't need to handle any
physical addressing, but can you confirm?
> + * dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN;
> + */
> + dev->flags = IFF_NOARP;
> + dev->netdev_ops = &mctp_mailbox_netdev_ops;
> + dev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS;
> +}
> +
> +static void xe_mctp_mailbox_fini(void *arg)
> +{
> + struct xe_device *xe = arg;
> + struct xe_mctp_mailbox *mctp_mailbox = xe->mctp_mailbox;
> + struct net_device *netdev;
> +
> + if (!mctp_mailbox)
> + return;
> +
> + netdev = mctp_mailbox->netdev;
> + if (!netdev) {
> + xe->mctp_mailbox = NULL;
> + return;
> + }
> +
> + if (mctp_mailbox->wq) {
> + mctp_mailbox->running = false;
> + cancel_delayed_work_sync(&mctp_mailbox->work);
> + destroy_workqueue(mctp_mailbox->wq);
> + mctp_mailbox->wq = NULL;
> + }
> +
> + xe->mctp_mailbox = NULL;
> + mctp_unregister_netdev(netdev);
> + free_netdev(netdev);
> +}
> +
> +int xe_mctp_mailbox_init(struct xe_device *xe)
> +{
> + struct xe_mctp_mailbox *mctp_mailbox;
> + struct net_device *netdev;
> + int ret, err;
> +
> + netdev = alloc_netdev(sizeof(*mctp_mailbox), "mctp_mailbox%d", NET_NAME_ENUM,
> + mctp_mailbox_netdev_setup);
I'd suggest mctpxe%d here. We don't tend to have underscores in netdev
names, and 'mailbox' is quite generic.
Can you tie the name to the instance of the GPU (and then use
NET_NAME_PREDICTABLE) perhaps?
> + if (!netdev)
> + return -ENOMEM;
> +
> + SET_NETDEV_DEV(netdev, xe->drm.dev);
> + mctp_mailbox = netdev_priv(netdev);
> + mctp_mailbox->netdev = netdev;
> +
> + ret = mctp_register_netdev(netdev, NULL, MCTP_PHYS_BINDING_VENDOR);
> + if (ret) {
> + free_netdev(netdev);
> + return ret;
> + }
> +
> + INIT_DELAYED_WORK(&mctp_mailbox->work, mctp_mailbox_rx_handler);
> + mctp_mailbox->wq = alloc_ordered_workqueue("mctp-mailbox-ordered-wq", 0);
> + if (!mctp_mailbox->wq) {
> + mctp_unregister_netdev(netdev);
> + free_netdev(netdev);
> + return -ENOMEM;
> + }
> +
> + xe->mctp_mailbox = mctp_mailbox;
> + err = devm_add_action_or_reset(xe->drm.dev, xe_mctp_mailbox_fini, xe);
> + if (err)
> + return err;
> +
> + return 0;
> +}
Cheers,
Jeremy
* Re: [RFC PATCH 1/1] xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox
2026-05-04 19:34 ` [RFC PATCH 1/1] xe/xe_mctp_mailbox: Add support for MCTP transport over mailbox Badal Nilawar
2026-05-05 2:15 ` Jeremy Kerr
@ 2026-05-05 8:23 ` Andy Shevchenko
1 sibling, 0 replies; 5+ messages in thread
From: Andy Shevchenko @ 2026-05-05 8:23 UTC (permalink / raw)
To: Badal Nilawar
Cc: dri-devel, intel-xe, netdev, linux-kernel, rodrigo.vivi,
wojciech.drewek, michael.brooks, heikki.krogerus, michael.j.ruhl,
thomas.hellstrom, michal.winiarski, anshuman.gupta,
jacob.e.keller, maarten.lankhorst, matthew.brost,
anthony.l.nguyen, przemyslaw.kitszel, mika.westerberg,
singaravelan.nallasellan, kelvin.gardiner, jk, matt,
andrew+netdev, davem, edumazet, kuba, pabeni, james.ausmus
On Tue, May 05, 2026 at 01:04:22AM +0530, Badal Nilawar wrote:
> Add support for MCTP transport over the Intel vendor-specific mailbox
> protocol to enable in-band firmware updates for GPU/AMC via PLDM
...
> +#include "xe_device_types.h"
Why is this inclusion located here?
> +#include <linux/netdevice.h>
> +#include <linux/jiffies.h>
> +#include <linux/workqueue.h>
> +
> +#include <net/mctp.h>
> +#include <net/mctpdevice.h>
> +#include <net/pkt_sched.h>
> +
> +#include <uapi/linux/if_arp.h>
> +
The above don't look any more generic than the linux/* ones. Move them here.
> +#include "xe_mctp_mailbox.h"
...
> +static void mctp_mailbox_rx_handler(struct work_struct *work)
> +{
> + struct xe_mctp_mailbox *mctp_mailbox =
> + container_of(work, struct xe_mctp_mailbox, work.work);
> + struct net_device *netdev = mctp_mailbox->netdev;
> +
> + if (!netdev)
> + return;
This is bad style from maintenance perspective. Use
struct net_device *netdev;
netdev = mctp_mailbox->netdev;
if (!netdev)
return;
> + dev_hold(netdev);
> +
> + /*
> + * if (mctp_mailbox_rx_ready()) {
> + * Get data over MAILBOX
> + * Allocate skb and copy rx data to skb
> + * Queue skb to upper layer
> + * netif_rx(skb);
> + }
> + */
What is this?! If you want to put a nice comment, format it accordingly.
> + dev_put(netdev);
> +
> + if (mctp_mailbox->running)
> + queue_delayed_work(mctp_mailbox->wq, &mctp_mailbox->work,
> + msecs_to_jiffies(XE_MCTP_MAILBOX_RX_POLL_MS));
> +}
...
> +static void mctp_mailbox_netdev_setup(struct net_device *dev)
> +{
> + /* Populate netdev structure */
> + dev->type = ARPHRD_MCTP;
> + /*
> + * dev->mtu = MCTP_MAILBOX_MTU_MIN;
> + * dev->min_mtu = MCTP_MAILBOX_MTU_MIN;
> + * dev->max_mtu = MCTP_MAILBOX_MTU_MAX;
> + *
> + * dev->hard_header_len = sizeof(struct mctp_mailbox_hdr);
> + * dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN;
> + */
Even for RFC these should not exist in this form. Always add the respective
FIXME/TODO/et cetera to explain the commented-out code.
> + dev->flags = IFF_NOARP;
> + dev->netdev_ops = &mctp_mailbox_netdev_ops;
> + dev->pcpu_stat_type = NETDEV_PCPU_STAT_DSTATS;
> +}
--
With Best Regards,
Andy Shevchenko