From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>, <davem@davemloft.net>,
	<pabeni@redhat.com>, <edumazet@google.com>,
	<andrew+netdev@lunn.ch>, <netdev@vger.kernel.org>,
	<maciej.fijalkowski@intel.com>, <magnus.karlsson@intel.com>,
	<michal.kubiak@intel.com>, <przemyslaw.kitszel@intel.com>,
	<ast@kernel.org>, <daniel@iogearbox.net>, <hawk@kernel.org>,
	<john.fastabend@gmail.com>, <horms@kernel.org>,
	<bpf@vger.kernel.org>, Mina Almasry <almasrymina@google.com>
Subject: Re: [PATCH net-next 01/16] libeth: convert to netmem
Date: Wed, 28 May 2025 16:54:39 +0200	[thread overview]
Message-ID: <20d9b588-f813-4bad-a4da-e058508322df@intel.com> (raw)
In-Reply-To: <20250527185749.5053f557@kernel.org>

From: Jakub Kicinski <kuba@kernel.org>
Date: Tue, 27 May 2025 18:57:49 -0700

> On Tue, 20 May 2025 13:59:02 -0700 Tony Nguyen wrote:
>> @@ -3277,16 +3277,20 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
>>  			     struct libeth_fqe *buf, u32 data_len)
>>  {
>>  	u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;
>> +	struct page *hdr_page, *buf_page;
>>  	const void *src;
>>  	void *dst;
>>  
>> -	if (!libeth_rx_sync_for_cpu(buf, copy))
>> +	if (unlikely(netmem_is_net_iov(buf->netmem)) ||
>> +	    !libeth_rx_sync_for_cpu(buf, copy))
>>  		return 0;
> 
> So what happens to the packet that landed in a netmem buffer in case
> when HDS failed? I don't see the handling.

In general this should not happen: to use TCP devmem, you have to
isolate the traffic, and idpf parses all TCP frames correctly. If this
condition is ever true, napi_build_skb() will be called with the devmem
buffer passed as the head. Should I drop such packets, so that this can
never happen?
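
If dropping is preferred, an untested sketch of the caller-side handling
could look roughly like the below (the call shape and the stats counter
name are illustrative, not the actual idpf code):

```c
	/* Sketch: if HDS failed and the payload sits in net_iov (devmem)
	 * memory, the header cannot be copied into host memory, so discard
	 * the frame instead of building an skb with a devmem head.
	 */
	hdr_len = idpf_rx_hsplit_wa(hdr, rx_buf, pkt_len);
	if (unlikely(!hdr_len && netmem_is_net_iov(rx_buf->netmem))) {
		/* hsplit_hdr_miss is a hypothetical drop counter */
		u64_stats_update_begin(&rxq->stats_sync);
		u64_stats_inc(&rxq->q_stats.hsplit_hdr_miss);
		u64_stats_update_end(&rxq->stats_sync);
		return NULL;
	}
```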

> 
>> -	dst = page_address(hdr->page) + hdr->offset + hdr->page->pp->p.offset;
>> -	src = page_address(buf->page) + buf->offset + buf->page->pp->p.offset;
>> -	memcpy(dst, src, LARGEST_ALIGN(copy));
>> +	hdr_page = __netmem_to_page(hdr->netmem);
>> +	buf_page = __netmem_to_page(buf->netmem);
>> +	dst = page_address(hdr_page) + hdr->offset + hdr_page->pp->p.offset;
>> +	src = page_address(buf_page) + buf->offset + buf_page->pp->p.offset;
>>  
>> +	memcpy(dst, src, LARGEST_ALIGN(copy));
>>  	buf->offset += copy;
>>  
>>  	return copy;
>> @@ -3302,11 +3306,12 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
>>   */
>>  struct sk_buff *idpf_rx_build_skb(const struct libeth_fqe *buf, u32 size)
>>  {
>> -	u32 hr = buf->page->pp->p.offset;
>> +	struct page *buf_page = __netmem_to_page(buf->netmem);
>> +	u32 hr = buf_page->pp->p.offset;
>>  	struct sk_buff *skb;
>>  	void *va;
>>  
>> -	va = page_address(buf->page) + buf->offset;
>> +	va = page_address(buf_page) + buf->offset;
>>  	prefetch(va + hr);
> 
> If you don't want to have to validate the low bit during netmem -> page
> conversions - you need to clearly maintain the separation between 
> the two in the driver. These __netmem_to_page() calls are too much of 
> a liability.

This is only for the header buffers, and those are always in host
memory. The separation is maintained in the idpf_buf_queue structure:
it has two page pools, hdr_pp and pp, and the first one is always
backed by host memory.
We never allow building an skb with a devmem head, right? So why check
it here...

Thanks,
Olek


Thread overview: 30+ messages
2025-05-20 20:59 [PATCH net-next 00/16][pull request] libeth: add libeth_xdp helper lib Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 01/16] libeth: convert to netmem Tony Nguyen
2025-05-28  1:57   ` Jakub Kicinski
2025-05-28  3:49     ` Mina Almasry
2025-05-29  0:23       ` Jakub Kicinski
2025-05-28 14:54     ` Alexander Lobakin [this message]
2025-05-29  0:25       ` Jakub Kicinski
2025-05-20 20:59 ` [PATCH net-next 02/16] libeth: support native XDP and register memory model Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 03/16] libeth: xdp: add XDP_TX buffers sending Tony Nguyen
2025-05-28  2:03   ` Jakub Kicinski
2025-05-28 14:57     ` Alexander Lobakin
2025-05-29  0:18       ` Jakub Kicinski
2025-05-20 20:59 ` [PATCH net-next 04/16] libeth: xdp: add .ndo_xdp_xmit() helpers Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 05/16] libeth: xdp: add XDPSQE completion helpers Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 06/16] libeth: xdp: add XDPSQ locking helpers Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 07/16] libeth: xdp: add XDPSQ cleanup timers Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 08/16] libeth: xdp: add helpers for preparing/processing &libeth_xdp_buff Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 09/16] libeth: xdp: add XDP prog run and verdict result handling Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 10/16] libeth: xdp: add templates for building driver-side callbacks Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 11/16] libeth: xdp: add RSS hash hint and XDP features setup helpers Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 12/16] libeth: xsk: add XSk XDP_TX sending helpers Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 13/16] libeth: xsk: add XSk xmit functions Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 14/16] libeth: xsk: add XSk Rx processing support Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 15/16] libeth: xsk: add XSkFQ refill and XSk wakeup helpers Tony Nguyen
2025-05-20 20:59 ` [PATCH net-next 16/16] libeth: xdp, xsk: access adjacent u32s as u64 where applicable Tony Nguyen
2025-05-27 13:49 ` [PATCH net-next 00/16][pull request] libeth: add libeth_xdp helper lib Alexander Lobakin
  -- strict thread matches above, loose matches on Subject: below --
2025-03-05 16:21 [PATCH net-next 00/16] idpf: add XDP support Alexander Lobakin
2025-03-05 16:21 ` [PATCH net-next 01/16] libeth: convert to netmem Alexander Lobakin
2025-03-06  0:13   ` Mina Almasry
2025-03-11 17:22     ` Alexander Lobakin
2025-03-11 17:43       ` Mina Almasry
