From: Simon Horman <horms@kernel.org>
To: aleksander.lobakin@intel.com
Cc: 'Simon Horman' <horms@kernel.org>,
intel-wired-lan@lists.osuosl.org, anthony.l.nguyen@intel.com,
przemyslaw.kitszel@intel.com, andrew+netdev@lunn.ch,
davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, kohei@enjuk.jp, jacob.e.keller@intel.com,
aleksandr.loktionov@intel.com,
nxne.cnse.osdt.itp.upstreaming@intel.com, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v5 5/5] ice: add support for transmitting unreadable frags
Date: Fri, 8 May 2026 16:42:00 +0100 [thread overview]
Message-ID: <20260508154159.1250696-2-horms@kernel.org> (raw)
In-Reply-To: <20260505152923.1040589-6-aleksander.lobakin@intel.com>
From: 'Simon Horman' <horms@kernel.org>
This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
Full review at: https://sashiko.dev
---
ice: add support for transmitting unreadable frags
This commit advertises netmem Tx support in the ice driver. It modifies
the transmit path to set the buffer type to ICE_TX_BUF_FRAG only when
the fragment is not a net_iov, allowing the driver to skip DMA
unmapping operations for unreadable fragments.
> diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
> index a88f9e3c00771..0e61b38e53a5e 100644
> --- a/drivers/net/ethernet/intel/ice/ice_main.c
> +++ b/drivers/net/ethernet/intel/ice/ice_main.c
> @@ -3453,6 +3453,7 @@ static void ice_set_ops(struct ice_vsi *vsi)
>
> netdev->netdev_ops = &ice_netdev_ops;
> netdev->queue_mgmt_ops = &ice_queue_mgmt_ops;
> + netdev->netmem_tx = true;
> netdev->udp_tunnel_nic_info = &pf->hw.udp_tunnel_nic;
By setting netmem_tx to true here, the driver accepts packets containing
net_iov fragments. If a net_iov packet exceeds ICE_MAX_BUF_TXD fragments, it
appears it will fall back to linearization in the transmit path.
Since __skb_linearize() uses CPU memory accessors that expect a valid struct
page, will this cause an invalid memory access when skb_frag_page() returns a
net_iov pointer?
Should the driver explicitly drop highly fragmented net_iov packets instead of
attempting to linearize them?
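For illustration only (not compile-tested, and based on my possibly stale
reading of the linearize fallback in ice_xmit_frame_ring()), one way to do
that would be a guard along these lines, assuming skb_frags_readable() is
usable at this point:

```c
	if (ice_chk_linearize(skb, count)) {
		/* Hypothetical sketch: net_iov frags have no kernel-mapped
		 * struct page, so refuse to linearize and drop instead.
		 */
		if (unlikely(!skb_frags_readable(skb)))
			goto out_drop;
		if (__skb_linearize(skb))
			goto out_drop;
		count = ice_txd_use_count(skb->len);
		tx_ring->ring_stats->tx_stats.tx_linearize++;
	}
```

The exact call site and the stats field name are assumptions on my part;
the point is only that unreadable skbs need a drop path before
__skb_linearize() is reached.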
[ ... ]
> diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
> index 43b4670760273..1d97e2cc2adee 100644
> --- a/drivers/net/ethernet/intel/ice/ice_txrx.c
> +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
> @@ -113,11 +113,17 @@ ice_prgm_fdir_fltr(struct ice_vsi *vsi, struct ice_fltr_desc *fdir_desc,
> static void
> ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf)
> {
> - if (tx_buf->type != ICE_TX_BUF_XDP_TX && dma_unmap_len(tx_buf, len))
> + switch (tx_buf->type) {
> + case ICE_TX_BUF_DUMMY:
> + case ICE_TX_BUF_FRAG:
> + case ICE_TX_BUF_SKB:
> + case ICE_TX_BUF_XDP_XMIT:
> dma_unmap_page(ring->dev,
> dma_unmap_addr(tx_buf, dma),
> dma_unmap_len(tx_buf, len),
> DMA_TO_DEVICE);
> + break;
> + }
Does removing the dma_unmap_len() check alter the safety of the DMA error
recovery path?
If a mapping fails in ice_tx_map(), execution jumps to dma_error and calls
ice_unmap_and_free_tx_buf(). Since the failure happens before
dma_unmap_len_set() updates the descriptor, it seems this might unconditionally
unmap using stale descriptor state.
Could this result in 0-length unmaps or erroneously unmapping persistent
net_iov memory?
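If the zero-length guard still matters for that unwind path, a minimal
variant (sketch only, not compile-tested) would keep it inside the new
switch:

```c
	switch (tx_buf->type) {
	case ICE_TX_BUF_DUMMY:
	case ICE_TX_BUF_FRAG:
	case ICE_TX_BUF_SKB:
	case ICE_TX_BUF_XDP_XMIT:
		/* Skip buffers the dma_error path only partially set up:
		 * dma_unmap_len_set() has not run for them yet.
		 */
		if (dma_unmap_len(tx_buf, len))
			dma_unmap_page(ring->dev,
				       dma_unmap_addr(tx_buf, dma),
				       dma_unmap_len(tx_buf, len),
				       DMA_TO_DEVICE);
		break;
	}
```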
[ ... ]
> @@ -338,12 +344,14 @@ static bool ice_clean_tx_irq(struct ice_tx_ring *tx_ring, int napi_budget)
> }
>
> /* unmap any remaining paged data */
> - if (dma_unmap_len(tx_buf, len)) {
> + if (tx_buf->type != ICE_TX_BUF_EMPTY) {
> dma_unmap_page(tx_ring->dev,
> dma_unmap_addr(tx_buf, dma),
> dma_unmap_len(tx_buf, len),
> DMA_TO_DEVICE);
> +
> dma_unmap_len_set(tx_buf, len, 0);
> + tx_buf->type = ICE_TX_BUF_EMPTY;
> }
> }
When mapping a net_iov fragment in ice_tx_map(), the descriptor is populated
with the length and DMA address, but tx_buf->type remains ICE_TX_BUF_EMPTY.
Because dma_unmap_len_set(tx_buf, len, 0) is now placed inside the
tx_buf->type != ICE_TX_BUF_EMPTY check, does this skip clearing the len and
dma fields for net_iov descriptors when they are returned to the free pool?
If these descriptor fields are leaked, could they be incorrectly unmapped later
if the descriptor is reused and a subsequent mapping fails?
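One way to rule that out (again only a sketch, not compile-tested) is to
gate only the unmap on the buffer type and clear the bookkeeping
unconditionally:

```c
		/* unmap any remaining paged data */
		if (tx_buf->type != ICE_TX_BUF_EMPTY)
			dma_unmap_page(tx_ring->dev,
				       dma_unmap_addr(tx_buf, dma),
				       dma_unmap_len(tx_buf, len),
				       DMA_TO_DEVICE);

		/* Always reset, so a reused tx_buf never carries stale
		 * dma/len state into a later dma_error unwind.
		 */
		dma_unmap_len_set(tx_buf, len, 0);
		tx_buf->type = ICE_TX_BUF_EMPTY;
```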
2026-05-05 15:29 [Intel-wired-lan] [PATCH iwl-next v5 0/5] ice: add support for devmem/io_uring Rx and Tx Alexander Lobakin
2026-05-05 15:29 ` [Intel-wired-lan] [PATCH iwl-next v5 1/5] libeth: pass Rx queue index to PP when creating a fill queue Alexander Lobakin
2026-05-05 15:29 ` [Intel-wired-lan] [PATCH iwl-next v5 2/5] libeth: handle creating pools with unreadable buffers Alexander Lobakin
2026-05-05 15:29 ` [Intel-wired-lan] [PATCH iwl-next v5 3/5] ice: migrate to netdev ops lock Alexander Lobakin
2026-05-08 13:28   ` Simon Horman
2026-05-08 13:57     ` Alexander Lobakin
2026-05-10  8:57       ` Simon Horman
2026-05-05 15:29 ` [Intel-wired-lan] [PATCH iwl-next v5 4/5] ice: implement Rx queue management ops Alexander Lobakin
2026-05-08 14:08   ` Simon Horman
2026-05-08 14:26     ` Simon Horman
2026-05-08 15:07       ` Alexander Lobakin
2026-05-08 14:25   ` Simon Horman
2026-05-05 15:29 ` [Intel-wired-lan] [PATCH iwl-next v5 5/5] ice: add support for transmitting unreadable frags Alexander Lobakin
2026-05-08 15:42   ` Simon Horman [this message]
2026-05-08 12:06 ` [Intel-wired-lan] [PATCH iwl-next v5 0/5] ice: add support for devmem/io_uring Rx and Tx Alexander Lobakin