linux-kernel.vger.kernel.org archive mirror
From: Jeremy Kerr <jk@codeconstruct.com.au>
To: aspeedyh <yh_chung@aspeedtech.com>,
	matt@codeconstruct.com.au,  andrew+netdev@lunn.ch,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	 pabeni@redhat.com, linux-kernel@vger.kernel.org,
	netdev@vger.kernel.org,  bmc-sw@aspeedtech.com
Cc: Khang D Nguyen <khangng@amperemail.onmicrosoft.com>
Subject: Re: [PATCH] net: mctp: Add MCTP PCIe VDM transport driver
Date: Mon, 14 Jul 2025 16:54:27 +0800	[thread overview]
Message-ID: <a01f2ed55c69fc22dac9c8e5c2e84b557346aa4d.camel@codeconstruct.com.au> (raw)
In-Reply-To: <20250714062544.2612693-1-yh_chung@aspeedtech.com>

Hi YH,

[+CC Khang]

I have some overall questions before we get into a full review of the
series, inline below.

> Add an implementation for DMTF DSP0238 MCTP PCIe VDM transport spec.
> 
> Introduce struct mctp_pcie_vdm_dev to represent each PCIe VDM
> interface and its send/receive operations.  Register a
> net_device with the MCTP core so packets traverse the standard
> networking socket API.
> 
> Because there is no generic PCIe VDM bus framework in-tree, this
> driver provides a transport interface for lower layers to
> implement vendor-specific read/write callbacks.

Do we really need an abstraction for MCTP VDM drivers? How many are you
expecting? Can you point us to a client of the VDM abstraction?

There is some value in keeping consistency for the MCTP lladdr formats
across PCIe transports, but I'm not convinced we need a whole
abstraction layer for this.

> TX path uses a dedicated kernel thread and ptr_ring: skbs queued by the
> MCTP stack are enqueued on the ring and processed in-thread context.

Is this somehow more suitable than the existing netdev queues?

> +struct mctp_pcie_vdm_route_info {
> +       u8 eid;
> +       u8 dirty;
> +       u16 bdf_addr;
> +       struct hlist_node hnode;
> +};

Why are you keeping your own routing table in the transport driver? We
already have the route and neighbour tables in the MCTP core code.

Your assumption that you can intercept MCTP control messages to keep a
separate routing table will not work.

> +static void mctp_pcie_vdm_net_setup(struct net_device *ndev)
> +{
> +       ndev->type = ARPHRD_MCTP;
> +
> +       ndev->mtu = MCTP_PCIE_VDM_MIN_MTU;
> +       ndev->min_mtu = MCTP_PCIE_VDM_MIN_MTU;
> +       ndev->max_mtu = MCTP_PCIE_VDM_MAX_MTU;
> +       ndev->tx_queue_len = MCTP_PCIE_VDM_NET_DEV_TX_QUEUE_LEN;
> +       ndev->addr_len = 2; //PCIe bdf is 2 bytes

One of the critical things to get right is the lladdr format for PCIe
VDM devices, as it will be visible to userspace. While the PCIe b/d/fn
data is indeed two bytes, the MCTP addressing format for VDM data is not
entirely representable in two bytes: we also have the routing type to
encode. DSP0238 requires us to use Route to Root Complex and Broadcast
from Root Complex for certain messages, so we need some way to represent
that in your proposed lladdr format.

For this reason, you may want to encode this as [type, bdfn] data.

In flit mode, we also have the segment byte to encode.

Cheers,


Jeremy


Thread overview: 17+ messages
2025-07-14  6:25 [PATCH] net: mctp: Add MCTP PCIe VDM transport driver aspeedyh
2025-07-14  8:54 ` Jeremy Kerr [this message]
2025-07-14 12:37   ` YH Chung
2025-07-15  1:08     ` Jeremy Kerr
2025-07-17  3:07       ` YH Chung
2025-07-17  5:37         ` Jeremy Kerr
2025-07-17  7:17           ` YH Chung
2025-07-17  7:31             ` Jeremy Kerr
2025-07-17  8:21               ` YH Chung
2025-07-17 21:40                 ` Adam Young
2025-07-18  3:24                 ` Jeremy Kerr
2025-07-17 21:43             ` Adam Young
2025-07-18  5:48           ` Khang D Nguyen
2025-07-18  6:03             ` Jeremy Kerr
2025-07-18 11:05               ` YH Chung
2025-07-15  2:26 ` kernel test robot
2025-07-17 10:39 ` kernel test robot
