* [PATCH net-next v12 00/14] Replace page_frag with page_frag_cache for sk_page_frag()
@ 2024-07-31 12:44 Yunsheng Lin
2024-07-31 12:44 ` [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin
2024-07-31 12:45 ` [PATCH net-next v12 12/14] net: replace page_frag with page_frag_cache Yunsheng Lin
0 siblings, 2 replies; 13+ messages in thread
From: Yunsheng Lin @ 2024-07-31 12:44 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
John Fastabend, Matthias Brugger, AngeloGioacchino Del Regno, bpf,
linux-arm-kernel, linux-mediatek
After [1], there are still two page frag implementations:
1. mm/page_alloc.c: the net stack seems to be using it on the
   rx side, with 'struct page_frag_cache' and
   page_frag_alloc_align() as the main API.
2. net/core/sock.c: the net stack seems to be using it on the
   tx side, with 'struct page_frag' and skb_page_frag_refill()
   as the main API.
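For readers unfamiliar with the rx-side API, the core of page_frag_alloc_align() is a bump allocator over a (possibly compound) page. Below is a minimal userspace model of that idea, including the mask trick the kernel uses for alignment; names and sizes are illustrative, not the kernel code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Models PAGE_FRAG_CACHE_MAX_SIZE: the kernel prefers a 32KB
 * compound page and falls back to a single page on failure. */
#define FRAG_CACHE_SIZE 32768u

struct frag_cache {
	char *va;            /* base address of the backing "page" */
	unsigned int offset; /* counts down toward 0 as frags are handed out */
};

/* Bump-allocate fragsz bytes.  Like the kernel helper, alignment is
 * done by passing -align as a mask: clearing the low bits rounds the
 * downward-counting offset to an align boundary. */
static void *frag_alloc_align(struct frag_cache *nc, unsigned int fragsz,
			      unsigned int align_mask)
{
	if (!nc->va) {
		nc->va = aligned_alloc(FRAG_CACHE_SIZE, FRAG_CACHE_SIZE);
		if (!nc->va)
			return NULL;
		nc->offset = FRAG_CACHE_SIZE;
	}

	if (nc->offset < fragsz)
		return NULL; /* the kernel would refill the cache here */

	nc->offset = (nc->offset - fragsz) & align_mask;
	return nc->va + nc->offset;
}
```

Passing `~0u` as the mask gives the unaligned variant, which is how page_frag_alloc() is built on top of the aligned helper in the real implementation.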
This patchset tries to unify the page frag implementation
by replacing page_frag with page_frag_cache for sk_page_frag()
first. The net_high_order_alloc_disable_key used by the
implementation in net/core/sock.c doesn't seem to matter that
much now, as the per-cpu page (pcp) lists also support
high-order pages:
commit 44042b449872 ("mm/page_alloc: allow high-order pages to
be stored on the per-cpu lists")
As the change is mostly related to networking, this targets
net-next. The rest of the page_frag users will be replaced in
a follow-up patchset.
After this patchset:
1. The page frag implementation is unified by taking the best of
the two existing implementations: we save some space for
'page_frag_cache' API users, and avoid a 'get_page()' per
allocation for old 'page_frag' API users.
2. Future bug fixes and performance work can be done in one
place, improving the maintainability of the page_frag
implementation.
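The 'avoid get_page()' saving in item 1 comes from the reference-count biasing trick the rx-side cache already uses: the page's refcount is raised by a large bias once when the page enters the cache, allocations then only decrement a cache-local counter, and the unused bias is returned when the cache moves on. A small userspace model of the mechanism (struct and function names are illustrative, not the kernel's):

```c
#include <assert.h>

#define FRAG_BIAS 64 /* stands in for the kernel's large bias value */

struct frag_page {
	int refcount;     /* models the atomic page refcount */
	int pagecnt_bias; /* cache-local, touched without atomics */
};

/* One refcount bump up front instead of one get_page() per frag. */
static void frag_page_init(struct frag_page *p)
{
	p->refcount = FRAG_BIAS;
	p->pagecnt_bias = FRAG_BIAS;
}

/* Handing out a fragment is a plain decrement, no atomic op. */
static void frag_alloc_one(struct frag_page *p)
{
	p->pagecnt_bias--;
}

/* page_frag_free(): drop one reference; free the page at zero. */
static int frag_free_one(struct frag_page *p)
{
	return --p->refcount == 0; /* 1 => really free the page */
}

/* When the cache abandons this page, return the unused bias. */
static int frag_page_drain(struct frag_page *p)
{
	p->refcount -= p->pagecnt_bias;
	p->pagecnt_bias = 0;
	return p->refcount == 0;
}
```

The tx-side 'page_frag' code, by contrast, takes a real get_page() per fragment, which is what the conversion removes.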
Kernel image size change:
Linux Kernel total | text data bss
------------------------------------------------------
after 45250307 | 27274279 17209996 766032
before 45254134 | 27278118 17209984 766032
delta -3827 | -3839 +12 +0
Performance validation:
1. Using the micro-benchmark module added in patch 1 to test the
aligned and non-aligned API performance impact for the existing
users, there is no noticeable performance degradation. Instead
there seems to be a minor performance boost for both the aligned
and non-aligned APIs after this patchset, as shown below.
2. Using the netcat test case below, there is also a minor
performance boost from replacing 'page_frag' with
'page_frag_cache' after this patchset.
server: taskset -c 32 nc -l -k 1234 > /dev/null
client: perf stat -r 200 -- taskset -c 0 head -c 20G /dev/zero | taskset -c 1 nc 127.0.0.1 1234
To avoid performance noise as much as possible, the testing was
done on a system without any other load, with enough iterations
to show that the data is stable. The complete testing log is
below:
taskset -c 32 nc -l -k 1234 > /dev/null
perf stat -r 200 -- insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17
perf stat -r 200 -- insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_align=1
perf stat -r 200 -- taskset -c 0 head -c 20G /dev/zero | taskset -c 1 nc 127.0.0.1 1234
*After* this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17' (200 runs):
17.829030 task-clock (msec) # 0.001 CPUs utilized ( +- 0.30% )
7 context-switches # 0.386 K/sec ( +- 0.35% )
0 cpu-migrations # 0.003 K/sec ( +- 28.06% )
83 page-faults # 0.005 M/sec ( +- 0.10% )
46303585 cycles # 2.597 GHz ( +- 0.30% )
61119216 instructions # 1.32 insn per cycle ( +- 0.01% )
14811318 branches # 830.742 M/sec ( +- 0.01% )
21046 branch-misses # 0.14% of all branches ( +- 0.09% )
23.856064365 seconds time elapsed ( +- 0.08% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_align=1' (200 runs):
17.628569 task-clock (msec) # 0.001 CPUs utilized ( +- 0.01% )
7 context-switches # 0.397 K/sec ( +- 0.12% )
0 cpu-migrations # 0.000 K/sec
83 page-faults # 0.005 M/sec ( +- 0.10% )
45785943 cycles # 2.597 GHz ( +- 0.01% )
60043610 instructions # 1.31 insn per cycle ( +- 0.01% )
14550182 branches # 825.375 M/sec ( +- 0.01% )
21492 branch-misses # 0.15% of all branches ( +- 0.08% )
23.443927103 seconds time elapsed ( +- 0.05% )
Performance counter stats for 'taskset -c 0 head -c 20G /dev/zero' (200 runs):
16626.042731 task-clock (msec) # 0.607 CPUs utilized ( +- 0.03% )
3291020 context-switches # 0.198 M/sec ( +- 0.05% )
1 cpu-migrations # 0.000 K/sec ( +- 0.50% )
85 page-faults # 0.005 K/sec ( +- 0.16% )
30581044838 cycles # 1.839 GHz ( +- 0.05% )
34962744631 instructions # 1.14 insn per cycle ( +- 0.01% )
6483883671 branches # 389.984 M/sec ( +- 0.02% )
99624551 branch-misses # 1.54% of all branches ( +- 0.17% )
27.370305077 seconds time elapsed ( +- 0.01% )
*Before* this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17' (200 runs):
18.143552 task-clock (msec) # 0.001 CPUs utilized ( +- 0.28% )
7 context-switches # 0.382 K/sec ( +- 0.28% )
1 cpu-migrations # 0.056 K/sec ( +- 0.97% )
83 page-faults # 0.005 M/sec ( +- 0.10% )
47105569 cycles # 2.596 GHz ( +- 0.28% )
60628757 instructions # 1.29 insn per cycle ( +- 0.04% )
14686743 branches # 809.475 M/sec ( +- 0.04% )
21826 branch-misses # 0.15% of all branches ( +- 0.12% )
23.918006193 seconds time elapsed ( +- 0.10% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_align=1' (200 runs):
21.726393 task-clock (msec) # 0.001 CPUs utilized ( +- 0.72% )
7 context-switches # 0.321 K/sec ( +- 0.24% )
1 cpu-migrations # 0.047 K/sec ( +- 0.85% )
83 page-faults # 0.004 M/sec ( +- 0.10% )
56422898 cycles # 2.597 GHz ( +- 0.72% )
61271860 instructions # 1.09 insn per cycle ( +- 0.05% )
14837500 branches # 682.925 M/sec ( +- 0.05% )
21484 branch-misses # 0.14% of all branches ( +- 0.10% )
23.876306259 seconds time elapsed ( +- 0.13% )
Performance counter stats for 'taskset -c 0 head -c 20G /dev/zero' (200 runs):
17364.040855 task-clock (msec) # 0.624 CPUs utilized ( +- 0.02% )
3340375 context-switches # 0.192 M/sec ( +- 0.06% )
1 cpu-migrations # 0.000 K/sec
85 page-faults # 0.005 K/sec ( +- 0.15% )
32077623335 cycles # 1.847 GHz ( +- 0.03% )
35121047596 instructions # 1.09 insn per cycle ( +- 0.01% )
6519872824 branches # 375.481 M/sec ( +- 0.02% )
101877022 branch-misses # 1.56% of all branches ( +- 0.14% )
27.842745343 seconds time elapsed ( +- 0.02% )
Note: ipv4-udp, ipv6-tcp and ipv6-udp were also tested, with the scripts below:
nc -u -l -k 1234 > /dev/null
perf stat -r 4 -- head -c 51200000000 /dev/zero | nc -N -u 127.0.0.1 1234
nc -l6 -k 1234 > /dev/null
perf stat -r 4 -- head -c 51200000000 /dev/zero | nc -N ::1 1234
nc -l6 -k -u 1234 > /dev/null
perf stat -r 4 -- head -c 51200000000 /dev/zero | nc -u -N ::1 1234
CC: Alexander Duyck <alexander.duyck@gmail.com>
1. https://lore.kernel.org/all/20240228093013.8263-1-linyunsheng@huawei.com/
Change log:
V12:
1. Do not treat the page_frag_test module as a DEBUG feature.
2. Improve the refactoring in patch 8.
3. Other minor improvements per Alexander's comments.
RFC v11:
1. Fold the 'page_frag_cache' moving change into patch 2.
2. Optimize patch 3 according to the discussion in v9.
V10:
1. Change Subject to "Replace page_frag with page_frag_cache for sk_page_frag()".
2. Move 'struct page_frag_cache' to sched.h as suggested by Alexander.
3. Rename skb_copy_to_page_nocache().
4. Adjust changes between patches to make them more reviewable, per Alexander's
comment.
5. Use an 'aligned_remaining' variable to generate the virtual address, per
Alexander's comment.
6. Fix some included headers and typos per Alexander's comments.
7. Add back the get_order() optimization patch for the xtensa arch.
V9:
1. Add a check for test_alloc_len and change the perm of module_param()
to 0, per Wang Wei's comment.
2. Rebased on latest net-next.
V8: Remove patch 2 & 3 in V7, as free_unref_page() is changed to call
pcp_allowed_order() and used in page_frag API recently in:
commit 5b8d75913a0e ("mm: combine free_the_page() and free_unref_page()")
V7: Fix doc build warning and error.
V6:
1. Fix some typo and compiler error for x86 pointed out by Jakub and
Simon.
2. Add two refactoring and optimization patches.
V5:
1. Add a page_frag_alloc_pg() API for the tls_device.c case and refactor
some implementation; update the kernel image size change, as the size
increased after that.
2. Add an ack from Mat.
RFC v4:
1. Update doc according to Randy and Mat's suggestion.
2. Change the probe API to "probe" for a specific amount of available
space, rather than "nonzero" space, per Mat's suggestion.
3. Retest and update the test result.
v3:
1. Use a new layout for 'struct page_frag_cache', per the discussion
with Alexander and his other suggestions.
2. Add a probe API to address Mat's comment about the mptcp use case.
3. Some doc updates per Bagas' suggestion.
v2:
1. Reorder the test module to patch 1.
2. Split the doc and MAINTAINERS updates into two patches.
3. Refactor the page_frag code before moving it.
4. Fix a typo and a 'static' warning in the test module.
5. Add a patch for the xtensa arch to enable using get_order() in
BUILD_BUG_ON().
6. Add a test case and performance data for the socket code.
Yunsheng Lin (14):
mm: page_frag: add a test module for page_frag
mm: move the page fragment allocator from page_alloc into its own file
mm: page_frag: use initial zero offset for page_frag_alloc_align()
mm: page_frag: add '_va' suffix to page_frag API
mm: page_frag: avoid caller accessing 'page_frag_cache' directly
xtensa: remove the get_order() implementation
mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
mm: page_frag: some minor refactoring before adding new API
mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
net: rename skb_copy_to_page_nocache() helper
mm: page_frag: introduce prepare/probe/commit API
net: replace page_frag with page_frag_cache
mm: page_frag: update documentation for page_frag
mm: page_frag: add an entry in MAINTAINERS for page_frag
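Of the patches above, the prepare/probe/commit API in patch 11 is what lets tx-path callers reserve space, write into it (e.g. via copy_from_iter), and then commit only the bytes actually used. The exact kernel API is defined in that patch; the following is only a hedged userspace sketch of the pattern, with illustrative function names and signatures:

```c
#include <assert.h>
#include <string.h>

#define CACHE_SIZE 4096u

struct frag_cache {
	char buf[CACHE_SIZE];
	unsigned int offset; /* bytes already committed */
};

/* Prepare: hand back the remaining space; nothing is consumed yet. */
static void *frag_prepare(struct frag_cache *nc, unsigned int *avail)
{
	*avail = CACHE_SIZE - nc->offset;
	return nc->buf + nc->offset;
}

/* Probe: succeed only if at least 'need' bytes are available,
 * without consuming anything. */
static void *frag_probe(struct frag_cache *nc, unsigned int need)
{
	if (CACHE_SIZE - nc->offset < need)
		return NULL;
	return nc->buf + nc->offset;
}

/* Commit: consume only the bytes the caller actually wrote. */
static void frag_commit(struct frag_cache *nc, unsigned int used)
{
	nc->offset += used;
}
```

The split matters for callers like skb_page_frag_refill() users, where the amount of data to be written is not known until the copy has happened; a plain alloc-sized-chunk API would either over-reserve or force a second copy.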
Documentation/mm/page_frags.rst | 169 +++++++-
MAINTAINERS | 11 +
arch/xtensa/include/asm/page.h | 18 -
.../chelsio/inline_crypto/chtls/chtls.h | 3 -
.../chelsio/inline_crypto/chtls/chtls_io.c | 100 ++---
.../chelsio/inline_crypto/chtls/chtls_main.c | 3 -
drivers/net/ethernet/google/gve/gve_rx.c | 4 +-
drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
.../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 +-
.../marvell/octeontx2/nic/otx2_common.c | 2 +-
drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 +-
drivers/net/tun.c | 48 +--
drivers/nvme/host/tcp.c | 8 +-
drivers/nvme/target/tcp.c | 22 +-
drivers/vhost/net.c | 8 +-
include/linux/gfp.h | 22 -
include/linux/mm_types.h | 18 -
include/linux/mm_types_task.h | 18 +
include/linux/page_frag_cache.h | 272 ++++++++++++
include/linux/sched.h | 2 +-
include/linux/skbuff.h | 3 +-
include/net/sock.h | 23 +-
kernel/bpf/cpumap.c | 2 +-
kernel/exit.c | 3 +-
kernel/fork.c | 3 +-
mm/Kconfig | 8 +
mm/Makefile | 2 +
mm/page_alloc.c | 136 ------
mm/page_frag_cache.c | 367 ++++++++++++++++
mm/page_frag_test.c | 396 ++++++++++++++++++
net/core/skbuff.c | 79 ++--
net/core/skmsg.c | 22 +-
net/core/sock.c | 46 +-
net/core/xdp.c | 2 +-
net/ipv4/ip_output.c | 33 +-
net/ipv4/tcp.c | 35 +-
net/ipv4/tcp_output.c | 28 +-
net/ipv6/ip6_output.c | 33 +-
net/kcm/kcmsock.c | 30 +-
net/mptcp/protocol.c | 67 +--
net/rxrpc/conn_object.c | 4 +-
net/rxrpc/local_object.c | 4 +-
net/rxrpc/txbuf.c | 15 +-
net/sched/em_meta.c | 2 +-
net/sunrpc/svcsock.c | 12 +-
net/tls/tls_device.c | 137 +++---
48 files changed, 1652 insertions(+), 582 deletions(-)
create mode 100644 include/linux/page_frag_cache.h
create mode 100644 mm/page_frag_cache.c
create mode 100644 mm/page_frag_test.c
--
2.33.0
^ permalink raw reply [flat|nested] 13+ messages in thread* [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-07-31 12:44 [PATCH net-next v12 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin @ 2024-07-31 12:44 ` Yunsheng Lin 2024-07-31 13:36 ` Chuck Lever ` (2 more replies) 2024-07-31 12:45 ` [PATCH net-next v12 12/14] net: replace page_frag with page_frag_cache Yunsheng Lin 1 sibling, 3 replies; 13+ messages in thread From: Yunsheng Lin @ 2024-07-31 12:44 UTC (permalink / raw) To: davem, kuba, pabeni Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs Currently the page_frag API is returning 'virtual address' or 'va' when allocing and expecting 'virtual address' or 'va' as input when freeing. As we are about to support new use cases that the caller need to deal with 'struct page' or need to deal with both 'va' and 'struct page'. In order to differentiate the API handling between 'va' and 'struct page', add '_va' suffix to the corresponding API mirroring the page_pool_alloc_va() API of the page_pool. 
So that callers expecting to deal with va, page or both va and page may call page_frag_alloc_va*, page_frag_alloc_pg*, or page_frag_alloc* API accordingly. CC: Alexander Duyck <alexander.duyck@gmail.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com> --- drivers/net/ethernet/google/gve/gve_rx.c | 4 ++-- drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +- drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +- drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +- .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++-- .../marvell/octeontx2/nic/otx2_common.c | 2 +- drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++-- drivers/nvme/host/tcp.c | 8 +++---- drivers/nvme/target/tcp.c | 22 +++++++++---------- drivers/vhost/net.c | 6 ++--- include/linux/page_frag_cache.h | 21 +++++++++--------- include/linux/skbuff.h | 2 +- kernel/bpf/cpumap.c | 2 +- mm/page_frag_cache.c | 12 +++++----- mm/page_frag_test.c | 13 ++++++----- net/core/skbuff.c | 14 ++++++------ net/core/xdp.c | 2 +- net/rxrpc/txbuf.c | 15 +++++++------ net/sunrpc/svcsock.c | 6 ++--- 19 files changed, 74 insertions(+), 69 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index acb73d4d0de6..b6c10100e462 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -729,7 +729,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx, total_len = headroom + SKB_DATA_ALIGN(len) + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); - frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC); + frame = page_frag_alloc_va(&rx->page_cache, total_len, GFP_ATOMIC); if (!frame) { u64_stats_update_begin(&rx->statss); rx->xdp_alloc_fails++; @@ -742,7 +742,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx, err = xdp_do_redirect(dev, &new, xdp_prog); if (err) - page_frag_free(frame); + page_frag_free_va(frame); return err; } 
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index 8bb743f78fcb..399b317c509d 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -126,7 +126,7 @@ ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf) dev_kfree_skb_any(tx_buf->skb); break; case ICE_TX_BUF_XDP_TX: - page_frag_free(tx_buf->raw_buf); + page_frag_free_va(tx_buf->raw_buf); break; case ICE_TX_BUF_XDP_XMIT: xdp_return_frame(tx_buf->xdpf); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index feba314a3fe4..6379f57d8228 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -148,7 +148,7 @@ static inline int ice_skb_pad(void) * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree() * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats - * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats + * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free_va(), stats * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats */ diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 2719f0e20933..a1a41a14df0d 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -250,7 +250,7 @@ ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf, switch (tx_buf->type) { case ICE_TX_BUF_XDP_TX: - page_frag_free(tx_buf->raw_buf); + page_frag_free_va(tx_buf->raw_buf); break; case ICE_TX_BUF_XDP_XMIT: xdp_return_frame_bulk(tx_buf->xdpf, bq); diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index 149911e3002a..eef16a909f85 100644 
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -302,7 +302,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector, /* free the skb */ if (ring_is_xdp(tx_ring)) - page_frag_free(tx_buffer->data); + page_frag_free_va(tx_buffer->data); else napi_consume_skb(tx_buffer->skb, napi_budget); @@ -2412,7 +2412,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring) /* Free all the Tx ring sk_buffs */ if (ring_is_xdp(tx_ring)) - page_frag_free(tx_buffer->data); + page_frag_free_va(tx_buffer->data); else dev_kfree_skb_any(tx_buffer->skb); diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c index 87d5776e3b88..a485e988fa1d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c @@ -553,7 +553,7 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, *dma = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize, DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); if (unlikely(dma_mapping_error(pfvf->dev, *dma))) { - page_frag_free(buf); + page_frag_free_va(buf); return -ENOMEM; } diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c index 7063c78bd35f..c4228719f8a4 100644 --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c @@ -142,8 +142,8 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, dma_addr_t addr; void *buf; - buf = page_frag_alloc(&q->cache, q->buf_size, - GFP_ATOMIC | GFP_DMA32); + buf = page_frag_alloc_va(&q->cache, q->buf_size, + GFP_ATOMIC | GFP_DMA32); if (!buf) break; diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index a2a47d3ab99f..86906bc505de 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -506,7 +506,7 @@ static void nvme_tcp_exit_request(struct 
blk_mq_tag_set *set, { struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); - page_frag_free(req->pdu); + page_frag_free_va(req->pdu); } static int nvme_tcp_init_request(struct blk_mq_tag_set *set, @@ -520,7 +520,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set, struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx]; u8 hdgst = nvme_tcp_hdgst_len(queue); - req->pdu = page_frag_alloc(&queue->pf_cache, + req->pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(struct nvme_tcp_cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!req->pdu) @@ -1337,7 +1337,7 @@ static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl) { struct nvme_tcp_request *async = &ctrl->async_req; - page_frag_free(async->pdu); + page_frag_free_va(async->pdu); } static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) @@ -1346,7 +1346,7 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) struct nvme_tcp_request *async = &ctrl->async_req; u8 hdgst = nvme_tcp_hdgst_len(queue); - async->pdu = page_frag_alloc(&queue->pf_cache, + async->pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(struct nvme_tcp_cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!async->pdu) diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index 5bff0d5464d1..560df3db2f82 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -1463,24 +1463,24 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue, c->queue = queue; c->req.port = queue->port->nport; - c->cmd_pdu = page_frag_alloc(&queue->pf_cache, + c->cmd_pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->cmd_pdu) return -ENOMEM; c->req.cmd = &c->cmd_pdu->cmd; - c->rsp_pdu = page_frag_alloc(&queue->pf_cache, + c->rsp_pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->rsp_pdu) goto out_free_cmd; c->req.cqe = &c->rsp_pdu->cqe; - c->data_pdu = page_frag_alloc(&queue->pf_cache, + c->data_pdu = 
page_frag_alloc_va(&queue->pf_cache, sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->data_pdu) goto out_free_rsp; - c->r2t_pdu = page_frag_alloc(&queue->pf_cache, + c->r2t_pdu = page_frag_alloc_va(&queue->pf_cache, sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); if (!c->r2t_pdu) goto out_free_data; @@ -1495,20 +1495,20 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue, return 0; out_free_data: - page_frag_free(c->data_pdu); + page_frag_free_va(c->data_pdu); out_free_rsp: - page_frag_free(c->rsp_pdu); + page_frag_free_va(c->rsp_pdu); out_free_cmd: - page_frag_free(c->cmd_pdu); + page_frag_free_va(c->cmd_pdu); return -ENOMEM; } static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c) { - page_frag_free(c->r2t_pdu); - page_frag_free(c->data_pdu); - page_frag_free(c->rsp_pdu); - page_frag_free(c->cmd_pdu); + page_frag_free_va(c->r2t_pdu); + page_frag_free_va(c->data_pdu); + page_frag_free_va(c->rsp_pdu); + page_frag_free_va(c->cmd_pdu); } static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index f16279351db5..6691fac01e0d 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -686,8 +686,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, return -ENOSPC; buflen += SKB_DATA_ALIGN(len + pad); - buf = page_frag_alloc_align(&net->pf_cache, buflen, GFP_KERNEL, - SMP_CACHE_BYTES); + buf = page_frag_alloc_va_align(&net->pf_cache, buflen, GFP_KERNEL, + SMP_CACHE_BYTES); if (unlikely(!buf)) return -ENOMEM; @@ -734,7 +734,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, return 0; err: - page_frag_free(buf); + page_frag_free_va(buf); return ret; } diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h index a758cb65a9b3..ef038a07925c 100644 --- a/include/linux/page_frag_cache.h +++ b/include/linux/page_frag_cache.h @@ -9,23 +9,24 @@ void page_frag_cache_drain(struct page_frag_cache *nc); void 
__page_frag_cache_drain(struct page *page, unsigned int count); -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, - gfp_t gfp_mask, unsigned int align_mask); +void *__page_frag_alloc_va_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask); -static inline void *page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align) +static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc, + unsigned int fragsz, + gfp_t gfp_mask, unsigned int align) { WARN_ON_ONCE(!is_power_of_2(align)); - return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); + return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align); } -static inline void *page_frag_alloc(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask) +static inline void *page_frag_alloc_va(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask) { - return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); + return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u); } -void page_frag_free(void *addr); +void page_frag_free_va(void *addr); #endif diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index e057db1c63e9..8d50cb3b161e 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -3381,7 +3381,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev, static inline void skb_free_frag(void *addr) { - page_frag_free(addr); + page_frag_free_va(addr); } void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index fbdf5a1aabfe..3b70b6b071b9 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -323,7 +323,7 @@ static int cpu_map_kthread_run(void *data) /* Bring struct page memory area to curr CPU. Read by * build_skb_around via page_is_pfmemalloc(), and when - * freed written by page_frag_free call. 
+ * freed written by page_frag_free_va call. */ prefetchw(page); } diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c index c5bc72cf018a..70fb6dead624 100644 --- a/mm/page_frag_cache.c +++ b/mm/page_frag_cache.c @@ -59,9 +59,9 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) } EXPORT_SYMBOL(__page_frag_cache_drain); -void *__page_frag_alloc_align(struct page_frag_cache *nc, - unsigned int fragsz, gfp_t gfp_mask, - unsigned int align_mask) +void *__page_frag_alloc_va_align(struct page_frag_cache *nc, + unsigned int fragsz, gfp_t gfp_mask, + unsigned int align_mask) { #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) unsigned int size = nc->size; @@ -130,16 +130,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, return nc->va + (size - remaining); } -EXPORT_SYMBOL(__page_frag_alloc_align); +EXPORT_SYMBOL(__page_frag_alloc_va_align); /* * Frees a page fragment allocated out of either a compound or order 0 page. */ -void page_frag_free(void *addr) +void page_frag_free_va(void *addr) { struct page *page = virt_to_head_page(addr); if (unlikely(put_page_testzero(page))) free_unref_page(page, compound_order(page)); } -EXPORT_SYMBOL(page_frag_free); +EXPORT_SYMBOL(page_frag_free_va); diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c index b7a5affb92f2..9eaa3ab74b29 100644 --- a/mm/page_frag_test.c +++ b/mm/page_frag_test.c @@ -276,7 +276,7 @@ static int page_frag_pop_thread(void *arg) if (obj) { nr--; - page_frag_free(obj); + page_frag_free_va(obj); } else { cond_resched(); } @@ -304,13 +304,16 @@ static int page_frag_push_thread(void *arg) int ret; if (test_align) { - va = page_frag_alloc_align(&test_frag, test_alloc_len, - GFP_KERNEL, SMP_CACHE_BYTES); + va = page_frag_alloc_va_align(&test_frag, + test_alloc_len, + GFP_KERNEL, + SMP_CACHE_BYTES); WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1), "unaligned va returned\n"); } else { - va = page_frag_alloc(&test_frag, test_alloc_len, GFP_KERNEL); + va = 
page_frag_alloc_va(&test_frag, test_alloc_len, + GFP_KERNEL); } if (!va) @@ -318,7 +321,7 @@ static int page_frag_push_thread(void *arg) ret = objpool_push(va, pool); if (ret) { - page_frag_free(va); + page_frag_free_va(va); cond_resched(); } else { nr--; diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 83f8cd8aa2d1..4b8acd967793 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -314,8 +314,8 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) fragsz = SKB_DATA_ALIGN(fragsz); local_lock_nested_bh(&napi_alloc_cache.bh_lock); - data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, - align_mask); + data = __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC, + align_mask); local_unlock_nested_bh(&napi_alloc_cache.bh_lock); return data; @@ -330,8 +330,8 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache); fragsz = SKB_DATA_ALIGN(fragsz); - data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, - align_mask); + data = __page_frag_alloc_va_align(nc, fragsz, GFP_ATOMIC, + align_mask); } else { local_bh_disable(); data = __napi_alloc_frag_align(fragsz, align_mask); @@ -748,14 +748,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, if (in_hardirq() || irqs_disabled()) { nc = this_cpu_ptr(&netdev_alloc_cache); - data = page_frag_alloc(nc, len, gfp_mask); + data = page_frag_alloc_va(nc, len, gfp_mask); pfmemalloc = nc->pfmemalloc; } else { local_bh_disable(); local_lock_nested_bh(&napi_alloc_cache.bh_lock); nc = this_cpu_ptr(&napi_alloc_cache.page); - data = page_frag_alloc(nc, len, gfp_mask); + data = page_frag_alloc_va(nc, len, gfp_mask); pfmemalloc = nc->pfmemalloc; local_unlock_nested_bh(&napi_alloc_cache.bh_lock); @@ -845,7 +845,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) } else { len = SKB_HEAD_ALIGN(len); - data = page_frag_alloc(&nc->page, len, gfp_mask); + 
data = page_frag_alloc_va(&nc->page, len, gfp_mask); pfmemalloc = nc->page.pfmemalloc; } local_unlock_nested_bh(&napi_alloc_cache.bh_lock); diff --git a/net/core/xdp.c b/net/core/xdp.c index bcc5551c6424..7d4e09fb478f 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -387,7 +387,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, page_pool_put_full_page(page->pp, page, napi_direct); break; case MEM_TYPE_PAGE_SHARED: - page_frag_free(data); + page_frag_free_va(data); break; case MEM_TYPE_PAGE_ORDER0: page = virt_to_page(data); /* Assumes order0 page*/ diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c index c3913d8a50d3..dccb0353ee84 100644 --- a/net/rxrpc/txbuf.c +++ b/net/rxrpc/txbuf.c @@ -33,8 +33,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_ data_align = umax(data_align, L1_CACHE_BYTES); mutex_lock(&call->conn->tx_data_alloc_lock); - buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, - data_align); + buf = page_frag_alloc_va_align(&call->conn->tx_data_alloc, total, gfp, + data_align); mutex_unlock(&call->conn->tx_data_alloc_lock); if (!buf) { kfree(txb); @@ -96,17 +96,18 @@ struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_s if (!txb) return NULL; - buf = page_frag_alloc(&call->local->tx_alloc, - sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); + buf = page_frag_alloc_va(&call->local->tx_alloc, + sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); if (!buf) { kfree(txb); return NULL; } if (sack_size) { - buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp); + buf2 = page_frag_alloc_va(&call->local->tx_alloc, sack_size, + gfp); if (!buf2) { - page_frag_free(buf); + page_frag_free_va(buf); kfree(txb); return NULL; } @@ -180,7 +181,7 @@ static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb) rxrpc_txbuf_free); for (i = 0; i < txb->nr_kvec; i++) if (txb->kvec[i].iov_base) - page_frag_free(txb->kvec[i].iov_base); + 
page_frag_free_va(txb->kvec[i].iov_base); kfree(txb); atomic_dec(&rxrpc_nr_txbuf); } diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c index 6b3f01beb294..42d20412c1c3 100644 --- a/net/sunrpc/svcsock.c +++ b/net/sunrpc/svcsock.c @@ -1222,8 +1222,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, /* The stream record marker is copied into a temporary page * fragment buffer so that it can be included in rq_bvec. */ - buf = page_frag_alloc(&svsk->sk_frag_cache, sizeof(marker), - GFP_KERNEL); + buf = page_frag_alloc_va(&svsk->sk_frag_cache, sizeof(marker), + GFP_KERNEL); if (!buf) return -ENOMEM; memcpy(buf, &marker, sizeof(marker)); @@ -1235,7 +1235,7 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec, 1 + count, sizeof(marker) + rqstp->rq_res.len); ret = sock_sendmsg(svsk->sk_sock, &msg); - page_frag_free(buf); + page_frag_free_va(buf); if (ret < 0) return ret; *sentp += ret; -- 2.33.0 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-07-31 12:44 ` [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin @ 2024-07-31 13:36 ` Chuck Lever 2024-07-31 18:13 ` Alexander Duyck 2024-08-04 6:44 ` Sagi Grimberg 2 siblings, 0 replies; 13+ messages in thread From: Chuck Lever @ 2024-07-31 13:36 UTC (permalink / raw) To: Yunsheng Lin Cc: davem, kuba, pabeni, netdev, linux-kernel, Alexander Duyck, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On Wed, Jul 31, 2024 at 08:44:54PM +0800, Yunsheng Lin wrote: > Currently the page_frag API is returning 'virtual address' > or 'va' when allocing and expecting 'virtual address' or > 'va' as input when freeing. > > As we are about to support new use cases that the caller > need to deal with 'struct page' or need to deal with both > 'va' and 'struct page'. In order to differentiate the API > handling between 'va' and 'struct page', add '_va' suffix > to the corresponding API mirroring the page_pool_alloc_va() > API of the page_pool. 
So that callers expecting to deal with > va, page or both va and page may call page_frag_alloc_va*, > page_frag_alloc_pg*, or page_frag_alloc* API accordingly. > > CC: Alexander Duyck <alexander.duyck@gmail.com> > Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> > Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com> For the net/sunrpc/svcsock.c hunk: Acked-by: Chuck Lever <chuck.lever@oracle.com> > --- > drivers/net/ethernet/google/gve/gve_rx.c | 4 ++-- > drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +- > drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +- > drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +- > .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++-- > .../marvell/octeontx2/nic/otx2_common.c | 2 +- > drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++-- > drivers/nvme/host/tcp.c | 8 +++---- > drivers/nvme/target/tcp.c | 22 +++++++++---------- > drivers/vhost/net.c | 6 ++--- > include/linux/page_frag_cache.h | 21 +++++++++--------- > include/linux/skbuff.h | 2 +- > kernel/bpf/cpumap.c | 2 +- > mm/page_frag_cache.c | 12 +++++----- > mm/page_frag_test.c | 13 ++++++----- > net/core/skbuff.c | 14 ++++++------ > net/core/xdp.c | 2 +- > net/rxrpc/txbuf.c | 15 +++++++------ > net/sunrpc/svcsock.c | 6 ++--- > 19 files changed, 74 insertions(+), 69 deletions(-) > > diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c > index acb73d4d0de6..b6c10100e462 100644 > --- a/drivers/net/ethernet/google/gve/gve_rx.c > +++ b/drivers/net/ethernet/google/gve/gve_rx.c > @@ -729,7 +729,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx, > > total_len = headroom + SKB_DATA_ALIGN(len) + > SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); > - frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC); > + frame = page_frag_alloc_va(&rx->page_cache, total_len, GFP_ATOMIC); > if (!frame) { > u64_stats_update_begin(&rx->statss); > rx->xdp_alloc_fails++; > @@ -742,7 +742,7 @@ static int 
gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx, > > err = xdp_do_redirect(dev, &new, xdp_prog); > if (err) > - page_frag_free(frame); > + page_frag_free_va(frame); > > return err; > } > diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c > index 8bb743f78fcb..399b317c509d 100644 > --- a/drivers/net/ethernet/intel/ice/ice_txrx.c > +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c > @@ -126,7 +126,7 @@ ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf) > dev_kfree_skb_any(tx_buf->skb); > break; > case ICE_TX_BUF_XDP_TX: > - page_frag_free(tx_buf->raw_buf); > + page_frag_free_va(tx_buf->raw_buf); > break; > case ICE_TX_BUF_XDP_XMIT: > xdp_return_frame(tx_buf->xdpf); > diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h > index feba314a3fe4..6379f57d8228 100644 > --- a/drivers/net/ethernet/intel/ice/ice_txrx.h > +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h > @@ -148,7 +148,7 @@ static inline int ice_skb_pad(void) > * @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree() > * @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA > * @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats > - * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats > + * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free_va(), stats > * @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats > * @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats > */ > diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c > index 2719f0e20933..a1a41a14df0d 100644 > --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c > +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c > @@ -250,7 +250,7 @@ ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf, > > switch (tx_buf->type) { > case ICE_TX_BUF_XDP_TX: > - 
page_frag_free(tx_buf->raw_buf); > + page_frag_free_va(tx_buf->raw_buf); > break; > case ICE_TX_BUF_XDP_XMIT: > xdp_return_frame_bulk(tx_buf->xdpf, bq); > diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c > index 149911e3002a..eef16a909f85 100644 > --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c > +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c > @@ -302,7 +302,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector, > > /* free the skb */ > if (ring_is_xdp(tx_ring)) > - page_frag_free(tx_buffer->data); > + page_frag_free_va(tx_buffer->data); > else > napi_consume_skb(tx_buffer->skb, napi_budget); > > @@ -2412,7 +2412,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring) > > /* Free all the Tx ring sk_buffs */ > if (ring_is_xdp(tx_ring)) > - page_frag_free(tx_buffer->data); > + page_frag_free_va(tx_buffer->data); > else > dev_kfree_skb_any(tx_buffer->skb); > > diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c > index 87d5776e3b88..a485e988fa1d 100644 > --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c > +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c > @@ -553,7 +553,7 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool, > *dma = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize, > DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC); > if (unlikely(dma_mapping_error(pfvf->dev, *dma))) { > - page_frag_free(buf); > + page_frag_free_va(buf); > return -ENOMEM; > } > > diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c > index 7063c78bd35f..c4228719f8a4 100644 > --- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c > +++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c > @@ -142,8 +142,8 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q, > dma_addr_t addr; > void *buf; > > 
- buf = page_frag_alloc(&q->cache, q->buf_size, > - GFP_ATOMIC | GFP_DMA32); > + buf = page_frag_alloc_va(&q->cache, q->buf_size, > + GFP_ATOMIC | GFP_DMA32); > if (!buf) > break; > > diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c > index a2a47d3ab99f..86906bc505de 100644 > --- a/drivers/nvme/host/tcp.c > +++ b/drivers/nvme/host/tcp.c > @@ -506,7 +506,7 @@ static void nvme_tcp_exit_request(struct blk_mq_tag_set *set, > { > struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); > > - page_frag_free(req->pdu); > + page_frag_free_va(req->pdu); > } > > static int nvme_tcp_init_request(struct blk_mq_tag_set *set, > @@ -520,7 +520,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set, > struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx]; > u8 hdgst = nvme_tcp_hdgst_len(queue); > > - req->pdu = page_frag_alloc(&queue->pf_cache, > + req->pdu = page_frag_alloc_va(&queue->pf_cache, > sizeof(struct nvme_tcp_cmd_pdu) + hdgst, > GFP_KERNEL | __GFP_ZERO); > if (!req->pdu) > @@ -1337,7 +1337,7 @@ static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl) > { > struct nvme_tcp_request *async = &ctrl->async_req; > > - page_frag_free(async->pdu); > + page_frag_free_va(async->pdu); > } > > static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) > @@ -1346,7 +1346,7 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl) > struct nvme_tcp_request *async = &ctrl->async_req; > u8 hdgst = nvme_tcp_hdgst_len(queue); > > - async->pdu = page_frag_alloc(&queue->pf_cache, > + async->pdu = page_frag_alloc_va(&queue->pf_cache, > sizeof(struct nvme_tcp_cmd_pdu) + hdgst, > GFP_KERNEL | __GFP_ZERO); > if (!async->pdu) > diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c > index 5bff0d5464d1..560df3db2f82 100644 > --- a/drivers/nvme/target/tcp.c > +++ b/drivers/nvme/target/tcp.c > @@ -1463,24 +1463,24 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue, > c->queue = queue; > c->req.port = queue->port->nport; > > - 
c->cmd_pdu = page_frag_alloc(&queue->pf_cache, > + c->cmd_pdu = page_frag_alloc_va(&queue->pf_cache, > sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); > if (!c->cmd_pdu) > return -ENOMEM; > c->req.cmd = &c->cmd_pdu->cmd; > > - c->rsp_pdu = page_frag_alloc(&queue->pf_cache, > + c->rsp_pdu = page_frag_alloc_va(&queue->pf_cache, > sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); > if (!c->rsp_pdu) > goto out_free_cmd; > c->req.cqe = &c->rsp_pdu->cqe; > > - c->data_pdu = page_frag_alloc(&queue->pf_cache, > + c->data_pdu = page_frag_alloc_va(&queue->pf_cache, > sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); > if (!c->data_pdu) > goto out_free_rsp; > > - c->r2t_pdu = page_frag_alloc(&queue->pf_cache, > + c->r2t_pdu = page_frag_alloc_va(&queue->pf_cache, > sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO); > if (!c->r2t_pdu) > goto out_free_data; > @@ -1495,20 +1495,20 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue, > > return 0; > out_free_data: > - page_frag_free(c->data_pdu); > + page_frag_free_va(c->data_pdu); > out_free_rsp: > - page_frag_free(c->rsp_pdu); > + page_frag_free_va(c->rsp_pdu); > out_free_cmd: > - page_frag_free(c->cmd_pdu); > + page_frag_free_va(c->cmd_pdu); > return -ENOMEM; > } > > static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c) > { > - page_frag_free(c->r2t_pdu); > - page_frag_free(c->data_pdu); > - page_frag_free(c->rsp_pdu); > - page_frag_free(c->cmd_pdu); > + page_frag_free_va(c->r2t_pdu); > + page_frag_free_va(c->data_pdu); > + page_frag_free_va(c->rsp_pdu); > + page_frag_free_va(c->cmd_pdu); > } > > static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue) > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index f16279351db5..6691fac01e0d 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -686,8 +686,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, > return -ENOSPC; > > buflen += SKB_DATA_ALIGN(len + pad); > - buf = 
page_frag_alloc_align(&net->pf_cache, buflen, GFP_KERNEL, > - SMP_CACHE_BYTES); > + buf = page_frag_alloc_va_align(&net->pf_cache, buflen, GFP_KERNEL, > + SMP_CACHE_BYTES); > if (unlikely(!buf)) > return -ENOMEM; > > @@ -734,7 +734,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, > return 0; > > err: > - page_frag_free(buf); > + page_frag_free_va(buf); > return ret; > } > > diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h > index a758cb65a9b3..ef038a07925c 100644 > --- a/include/linux/page_frag_cache.h > +++ b/include/linux/page_frag_cache.h > @@ -9,23 +9,24 @@ > > void page_frag_cache_drain(struct page_frag_cache *nc); > void __page_frag_cache_drain(struct page *page, unsigned int count); > -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz, > - gfp_t gfp_mask, unsigned int align_mask); > +void *__page_frag_alloc_va_align(struct page_frag_cache *nc, > + unsigned int fragsz, gfp_t gfp_mask, > + unsigned int align_mask); > > -static inline void *page_frag_alloc_align(struct page_frag_cache *nc, > - unsigned int fragsz, gfp_t gfp_mask, > - unsigned int align) > +static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc, > + unsigned int fragsz, > + gfp_t gfp_mask, unsigned int align) > { > WARN_ON_ONCE(!is_power_of_2(align)); > - return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align); > + return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align); > } > > -static inline void *page_frag_alloc(struct page_frag_cache *nc, > - unsigned int fragsz, gfp_t gfp_mask) > +static inline void *page_frag_alloc_va(struct page_frag_cache *nc, > + unsigned int fragsz, gfp_t gfp_mask) > { > - return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u); > + return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u); > } > > -void page_frag_free(void *addr); > +void page_frag_free_va(void *addr); > > #endif > diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h > index 
e057db1c63e9..8d50cb3b161e 100644 > --- a/include/linux/skbuff.h > +++ b/include/linux/skbuff.h > @@ -3381,7 +3381,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev, > > static inline void skb_free_frag(void *addr) > { > - page_frag_free(addr); > + page_frag_free_va(addr); > } > > void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask); > diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c > index fbdf5a1aabfe..3b70b6b071b9 100644 > --- a/kernel/bpf/cpumap.c > +++ b/kernel/bpf/cpumap.c > @@ -323,7 +323,7 @@ static int cpu_map_kthread_run(void *data) > > /* Bring struct page memory area to curr CPU. Read by > * build_skb_around via page_is_pfmemalloc(), and when > - * freed written by page_frag_free call. > + * freed written by page_frag_free_va call. > */ > prefetchw(page); > } > diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c > index c5bc72cf018a..70fb6dead624 100644 > --- a/mm/page_frag_cache.c > +++ b/mm/page_frag_cache.c > @@ -59,9 +59,9 @@ void __page_frag_cache_drain(struct page *page, unsigned int count) > } > EXPORT_SYMBOL(__page_frag_cache_drain); > > -void *__page_frag_alloc_align(struct page_frag_cache *nc, > - unsigned int fragsz, gfp_t gfp_mask, > - unsigned int align_mask) > +void *__page_frag_alloc_va_align(struct page_frag_cache *nc, > + unsigned int fragsz, gfp_t gfp_mask, > + unsigned int align_mask) > { > #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) > unsigned int size = nc->size; > @@ -130,16 +130,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc, > > return nc->va + (size - remaining); > } > -EXPORT_SYMBOL(__page_frag_alloc_align); > +EXPORT_SYMBOL(__page_frag_alloc_va_align); > > /* > * Frees a page fragment allocated out of either a compound or order 0 page. 
> */ > -void page_frag_free(void *addr) > +void page_frag_free_va(void *addr) > { > struct page *page = virt_to_head_page(addr); > > if (unlikely(put_page_testzero(page))) > free_unref_page(page, compound_order(page)); > } > -EXPORT_SYMBOL(page_frag_free); > +EXPORT_SYMBOL(page_frag_free_va); > diff --git a/mm/page_frag_test.c b/mm/page_frag_test.c > index b7a5affb92f2..9eaa3ab74b29 100644 > --- a/mm/page_frag_test.c > +++ b/mm/page_frag_test.c > @@ -276,7 +276,7 @@ static int page_frag_pop_thread(void *arg) > > if (obj) { > nr--; > - page_frag_free(obj); > + page_frag_free_va(obj); > } else { > cond_resched(); > } > @@ -304,13 +304,16 @@ static int page_frag_push_thread(void *arg) > int ret; > > if (test_align) { > - va = page_frag_alloc_align(&test_frag, test_alloc_len, > - GFP_KERNEL, SMP_CACHE_BYTES); > + va = page_frag_alloc_va_align(&test_frag, > + test_alloc_len, > + GFP_KERNEL, > + SMP_CACHE_BYTES); > > WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1), > "unaligned va returned\n"); > } else { > - va = page_frag_alloc(&test_frag, test_alloc_len, GFP_KERNEL); > + va = page_frag_alloc_va(&test_frag, test_alloc_len, > + GFP_KERNEL); > } > > if (!va) > @@ -318,7 +321,7 @@ static int page_frag_push_thread(void *arg) > > ret = objpool_push(va, pool); > if (ret) { > - page_frag_free(va); > + page_frag_free_va(va); > cond_resched(); > } else { > nr--; > diff --git a/net/core/skbuff.c b/net/core/skbuff.c > index 83f8cd8aa2d1..4b8acd967793 100644 > --- a/net/core/skbuff.c > +++ b/net/core/skbuff.c > @@ -314,8 +314,8 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) > fragsz = SKB_DATA_ALIGN(fragsz); > > local_lock_nested_bh(&napi_alloc_cache.bh_lock); > - data = __page_frag_alloc_align(&nc->page, fragsz, GFP_ATOMIC, > - align_mask); > + data = __page_frag_alloc_va_align(&nc->page, fragsz, GFP_ATOMIC, > + align_mask); > local_unlock_nested_bh(&napi_alloc_cache.bh_lock); > return data; > > @@ -330,8 +330,8 @@ void 
*__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask) > struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache); > > fragsz = SKB_DATA_ALIGN(fragsz); > - data = __page_frag_alloc_align(nc, fragsz, GFP_ATOMIC, > - align_mask); > + data = __page_frag_alloc_va_align(nc, fragsz, GFP_ATOMIC, > + align_mask); > } else { > local_bh_disable(); > data = __napi_alloc_frag_align(fragsz, align_mask); > @@ -748,14 +748,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len, > > if (in_hardirq() || irqs_disabled()) { > nc = this_cpu_ptr(&netdev_alloc_cache); > - data = page_frag_alloc(nc, len, gfp_mask); > + data = page_frag_alloc_va(nc, len, gfp_mask); > pfmemalloc = nc->pfmemalloc; > } else { > local_bh_disable(); > local_lock_nested_bh(&napi_alloc_cache.bh_lock); > > nc = this_cpu_ptr(&napi_alloc_cache.page); > - data = page_frag_alloc(nc, len, gfp_mask); > + data = page_frag_alloc_va(nc, len, gfp_mask); > pfmemalloc = nc->pfmemalloc; > > local_unlock_nested_bh(&napi_alloc_cache.bh_lock); > @@ -845,7 +845,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len) > } else { > len = SKB_HEAD_ALIGN(len); > > - data = page_frag_alloc(&nc->page, len, gfp_mask); > + data = page_frag_alloc_va(&nc->page, len, gfp_mask); > pfmemalloc = nc->page.pfmemalloc; > } > local_unlock_nested_bh(&napi_alloc_cache.bh_lock); > diff --git a/net/core/xdp.c b/net/core/xdp.c > index bcc5551c6424..7d4e09fb478f 100644 > --- a/net/core/xdp.c > +++ b/net/core/xdp.c > @@ -387,7 +387,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, > page_pool_put_full_page(page->pp, page, napi_direct); > break; > case MEM_TYPE_PAGE_SHARED: > - page_frag_free(data); > + page_frag_free_va(data); > break; > case MEM_TYPE_PAGE_ORDER0: > page = virt_to_page(data); /* Assumes order0 page*/ > diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c > index c3913d8a50d3..dccb0353ee84 100644 > --- a/net/rxrpc/txbuf.c > +++ 
b/net/rxrpc/txbuf.c > @@ -33,8 +33,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_ > > data_align = umax(data_align, L1_CACHE_BYTES); > mutex_lock(&call->conn->tx_data_alloc_lock); > - buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, > - data_align); > + buf = page_frag_alloc_va_align(&call->conn->tx_data_alloc, total, gfp, > + data_align); > mutex_unlock(&call->conn->tx_data_alloc_lock); > if (!buf) { > kfree(txb); > @@ -96,17 +96,18 @@ struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_s > if (!txb) > return NULL; > > - buf = page_frag_alloc(&call->local->tx_alloc, > - sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); > + buf = page_frag_alloc_va(&call->local->tx_alloc, > + sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp); > if (!buf) { > kfree(txb); > return NULL; > } > > if (sack_size) { > - buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp); > + buf2 = page_frag_alloc_va(&call->local->tx_alloc, sack_size, > + gfp); > if (!buf2) { > - page_frag_free(buf); > + page_frag_free_va(buf); > kfree(txb); > return NULL; > } > @@ -180,7 +181,7 @@ static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb) > rxrpc_txbuf_free); > for (i = 0; i < txb->nr_kvec; i++) > if (txb->kvec[i].iov_base) > - page_frag_free(txb->kvec[i].iov_base); > + page_frag_free_va(txb->kvec[i].iov_base); > kfree(txb); > atomic_dec(&rxrpc_nr_txbuf); > } > diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c > index 6b3f01beb294..42d20412c1c3 100644 > --- a/net/sunrpc/svcsock.c > +++ b/net/sunrpc/svcsock.c > @@ -1222,8 +1222,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, > /* The stream record marker is copied into a temporary page > * fragment buffer so that it can be included in rq_bvec. 
> */ > - buf = page_frag_alloc(&svsk->sk_frag_cache, sizeof(marker), > - GFP_KERNEL); > + buf = page_frag_alloc_va(&svsk->sk_frag_cache, sizeof(marker), > + GFP_KERNEL); > if (!buf) > return -ENOMEM; > memcpy(buf, &marker, sizeof(marker)); > @@ -1235,7 +1235,7 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp, > iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec, > 1 + count, sizeof(marker) + rqstp->rq_res.len); > ret = sock_sendmsg(svsk->sk_sock, &msg); > - page_frag_free(buf); > + page_frag_free_va(buf); > if (ret < 0) > return ret; > *sentp += ret; > -- > 2.33.0 > -- Chuck Lever ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-07-31 12:44 ` [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin 2024-07-31 13:36 ` Chuck Lever @ 2024-07-31 18:13 ` Alexander Duyck 2024-08-01 13:01 ` Yunsheng Lin 2024-08-04 6:44 ` Sagi Grimberg 2 siblings, 1 reply; 13+ messages in thread From: Alexander Duyck @ 2024-07-31 18:13 UTC (permalink / raw) To: Yunsheng Lin Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On Wed, Jul 31, 2024 at 5:50 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: > > Currently the page_frag API is returning 'virtual address' > or 'va' when allocing and expecting 'virtual address' or > 'va' as input when freeing. > > As we are about to support new use cases that the caller > need to deal with 'struct page' or need to deal with both > 'va' and 'struct page'. In order to differentiate the API > handling between 'va' and 'struct page', add '_va' suffix > to the corresponding API mirroring the page_pool_alloc_va() > API of the page_pool. 
So that callers expecting to deal with > va, page or both va and page may call page_frag_alloc_va*, > page_frag_alloc_pg*, or page_frag_alloc* API accordingly. > > CC: Alexander Duyck <alexander.duyck@gmail.com> > Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> > Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com> I am naking this patch. It is a pointless rename that is just going to obfuscate the git history for these callers. As I believe I said before I would prefer to see this work more like the handling of __get_free_pages and __free_pages in terms of the use of pages versus pointers and/or longs. Pushing this API aside because you want to reuse the name for something different isn't a valid reason to rename an existing API and will just lead to confusion. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-07-31 18:13 ` Alexander Duyck @ 2024-08-01 13:01 ` Yunsheng Lin 2024-08-01 15:21 ` Alexander Duyck 0 siblings, 1 reply; 13+ messages in thread From: Yunsheng Lin @ 2024-08-01 13:01 UTC (permalink / raw) To: Alexander Duyck Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On 2024/8/1 2:13, Alexander Duyck wrote: > On Wed, Jul 31, 2024 at 5:50 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: >> >> Currently the page_frag API is returning 'virtual address' >> or 'va' when allocing and expecting 'virtual address' or >> 'va' as input when freeing. >> >> As we are about to support new use cases that the caller >> need to deal with 'struct page' or need to deal with both >> 'va' and 'struct page'. In order to differentiate the API >> handling between 'va' and 'struct page', add '_va' suffix >> to the corresponding API mirroring the page_pool_alloc_va() >> API of the page_pool. So that callers expecting to deal with >> va, page or both va and page may call page_frag_alloc_va*, >> page_frag_alloc_pg*, or page_frag_alloc* API accordingly. 
>> >> CC: Alexander Duyck <alexander.duyck@gmail.com> >> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> >> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com> > > I am naking this patch. It is a pointless rename that is just going to > obfuscate the git history for these callers. I responded to your above similar comment in v2, and then responded more detailedly in v11, both got not direct responding, it would be good to have more concrete feedback here instead of abstract argument. https://lore.kernel.org/all/74e7259a-c462-e3c1-73ac-8e3f49fb80b8@huawei.com/ https://lore.kernel.org/all/11187fe4-9419-4341-97b5-6dad7583b5b6@huawei.com/ > > As I believe I said before I would prefer to see this work more like > the handling of __get_free_pages and __free_pages in terms of the use I am not even sure why are you bringing up __get_free_pages() and __free_pages() here, as the declaration of them is something like below: unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order); void __free_pages(struct page *page, unsigned int order); And I add another related one for completeness here: extern void free_pages(unsigned long addr, unsigned int order); I am failing to see there is any pattern or rule for the above API naming. If there is some pattern for the above existing APIs, please describe them in detail so that we have common understanding. After the renaming, the declaration for both new and old APIs is below, please be more specific about what exactly is the confusion about them, what is the better naming for the below APIs in your mind: struct page *page_frag_alloc_pg(struct page_frag_cache *nc, unsigned int *offset, unsigned int fragsz, gfp_t gfp); void *page_frag_alloc_va(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask); struct page *page_frag_alloc(struct page_frag_cache *nc, unsigned int *offset, unsigned int fragsz, void **va, gfp_t gfp); > of pages versus pointers and/or longs. 
Pushing this API aside because > you want to reuse the name for something different isn't a valid > reason to rename an existing API and will just lead to confusion. Before this patchset, all the page_frag API renamed with a '_va' suffix in this patch are dealing with virtual address, it would be better to be more specific about what exactly is the confusion here by adding a explicit 'va' suffix for them in this patch? I would argue that the renaming may avoid some confusion about whether page_frag_alloc() returning a 'struct page' or returning a virtual address instead of leading to confusion. Anyway, naming is always hard, any better naming is welcome. But don't deny any existing API renaming when we have not really started doing detailed comparison between different API naming options yet. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-08-01 13:01 ` Yunsheng Lin @ 2024-08-01 15:21 ` Alexander Duyck 2024-08-02 10:05 ` Yunsheng Lin 0 siblings, 1 reply; 13+ messages in thread From: Alexander Duyck @ 2024-08-01 15:21 UTC (permalink / raw) To: Yunsheng Lin Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On Thu, Aug 1, 2024 at 6:01 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: > > On 2024/8/1 2:13, Alexander Duyck wrote: > > On Wed, Jul 31, 2024 at 5:50 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: > >> > >> Currently the page_frag API is returning 'virtual address' > >> or 'va' when allocing and expecting 'virtual address' or > >> 'va' as input when freeing. > >> > >> As we are about to support new use cases that the caller > >> need to deal with 'struct page' or need to deal with both > >> 'va' and 'struct page'. In order to differentiate the API > >> handling between 'va' and 'struct page', add '_va' suffix > >> to the corresponding API mirroring the page_pool_alloc_va() > >> API of the page_pool. 
So that callers expecting to deal with
> >> va, page or both va and page may call page_frag_alloc_va*,
> >> page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
> >>
> >> CC: Alexander Duyck <alexander.duyck@gmail.com>
> >> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com>
> >
> > I am naking this patch. It is a pointless rename that is just going to
> > obfuscate the git history for these callers.
>
> I responded to your above similar comment in v2, and then responded more
> detailedly in v11, both got not direct responding, it would be good to
> have more concrete feedback here instead of abstract argument.
>
> https://lore.kernel.org/all/74e7259a-c462-e3c1-73ac-8e3f49fb80b8@huawei.com/
> https://lore.kernel.org/all/11187fe4-9419-4341-97b5-6dad7583b5b6@huawei.com/

I will make this much more understandable: this patch is one of the
ones that will permanently block this set in my opinion. As such I
will never ack this patch, as I see no benefit to it. Arguing with me
on this is moot as you aren't going to change my mind, and I don't
have all day to argue back and forth with you on every single patch.

As far as your API extension and naming go, maybe you should look at
something like bio_vec and borrow the naming from that, since what you
are passing back and forth is essentially that, instead of a page frag,
which is normally a virtual address.

^ permalink raw reply	[flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-08-01 15:21 ` Alexander Duyck @ 2024-08-02 10:05 ` Yunsheng Lin 2024-08-02 17:00 ` Alexander Duyck 0 siblings, 1 reply; 13+ messages in thread From: Yunsheng Lin @ 2024-08-02 10:05 UTC (permalink / raw) To: Alexander Duyck Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On 2024/8/1 23:21, Alexander Duyck wrote: > On Thu, Aug 1, 2024 at 6:01 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: >> >> On 2024/8/1 2:13, Alexander Duyck wrote: >>> On Wed, Jul 31, 2024 at 5:50 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: >>>> >>>> Currently the page_frag API is returning 'virtual address' >>>> or 'va' when allocing and expecting 'virtual address' or >>>> 'va' as input when freeing. >>>> >>>> As we are about to support new use cases that the caller >>>> need to deal with 'struct page' or need to deal with both >>>> 'va' and 'struct page'. In order to differentiate the API >>>> handling between 'va' and 'struct page', add '_va' suffix >>>> to the corresponding API mirroring the page_pool_alloc_va() >>>> API of the page_pool. 
So that callers expecting to deal with >>>> va, page or both va and page may call page_frag_alloc_va*, >>>> page_frag_alloc_pg*, or page_frag_alloc* API accordingly. >>>> >>>> CC: Alexander Duyck <alexander.duyck@gmail.com> >>>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> >>>> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com> >>> >>> I am naking this patch. It is a pointless rename that is just going to >>> obfuscate the git history for these callers. >> >> I responded to your above similar comment in v2, and then responded more >> detailedly in v11, both got not direct responding, it would be good to >> have more concrete feedback here instead of abstract argument. >> >> https://lore.kernel.org/all/74e7259a-c462-e3c1-73ac-8e3f49fb80b8@huawei.com/ >> https://lore.kernel.org/all/11187fe4-9419-4341-97b5-6dad7583b5b6@huawei.com/ > > I will make this much more understandable. This patch is one of the > ones that will permanently block this set in my opinion. As such I > will never ack this patch as I see no benefit to it. Arguing with me > on this is moot as you aren't going to change my mind, and I don't > have all day to argue back and forth with you on every single patch. Let's move on to more specific technical discussion then. > > As far as your API extension and naming maybe you should look like > something like bio_vec and borrow the naming from that since that is > essentially what you are passing back and forth is essentially that > instead of a page frag which is normally a virtual address. I thought about adding something like bio_vec before, but I am not sure whether what you have in mind is something like what I considered before?
Let's say that we reuse bio_vec, extended like something below, for the new APIs: struct bio_vec { struct page *bv_page; void *va; unsigned int bv_len; unsigned int bv_offset; }; It seems we have the below options for the new API: option 1: it seems like the better option from an API naming point of view, but it needs to return a bio_vec pointer to the caller, so it seems we need extra space for that pointer, and I am not sure how we can avoid the memory waste for the sk_page_frag() case in patch 12: struct bio_vec *page_frag_alloc_bio(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask); option 2: it needs both the caller and the callee to have their own local space for 'struct bio_vec', and I am not sure whether passing the struct by value through the function return is a common pattern, or whether it has any performance impact: struct bio_vec page_frag_alloc_bio(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask); option 3: the caller passes a pointer to 'struct bio_vec' to the callee, and page_frag_alloc_bio() fills in the data; I am not sure what the point is of going indirectly through 'struct bio_vec' instead of passing 'va' & 'fragsz' & 'offset' through pointers directly: bool page_frag_alloc_bio(struct page_frag_cache *nc, unsigned int fragsz, gfp_t gfp_mask, struct bio_vec *bio); Is one of the above options what you have in mind? If yes, please be more specific about which one is the preferred option, and why it is preferable to the one introduced in this patchset. If no, please be more specific about what you do have in mind. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-08-02 10:05 ` Yunsheng Lin @ 2024-08-02 17:00 ` Alexander Duyck 2024-08-04 4:30 ` Yunsheng Lin 0 siblings, 1 reply; 13+ messages in thread From: Alexander Duyck @ 2024-08-02 17:00 UTC (permalink / raw) To: Yunsheng Lin Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On Fri, Aug 2, 2024 at 3:05 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: > > On 2024/8/1 23:21, Alexander Duyck wrote: > > On Thu, Aug 1, 2024 at 6:01 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: > >> > >> On 2024/8/1 2:13, Alexander Duyck wrote: > >>> On Wed, Jul 31, 2024 at 5:50 AM Yunsheng Lin <linyunsheng@huawei.com> wrote: > >>>> > >>>> Currently the page_frag API is returning 'virtual address' > >>>> or 'va' when allocing and expecting 'virtual address' or > >>>> 'va' as input when freeing. > >>>> > >>>> As we are about to support new use cases that the caller > >>>> need to deal with 'struct page' or need to deal with both > >>>> 'va' and 'struct page'. 
In order to differentiate the API > >>>> handling between 'va' and 'struct page', add '_va' suffix > >>>> to the corresponding API mirroring the page_pool_alloc_va() > >>>> API of the page_pool. So that callers expecting to deal with > >>>> va, page or both va and page may call page_frag_alloc_va*, > >>>> page_frag_alloc_pg*, or page_frag_alloc* API accordingly. > >>>> > >>>> CC: Alexander Duyck <alexander.duyck@gmail.com> > >>>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> > >>>> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com> > >>> > >>> I am naking this patch. It is a pointless rename that is just going to > >>> obfuscate the git history for these callers. > >> > >> I responded to your above similar comment in v2, and then responded more > >> detailedly in v11, both got not direct responding, it would be good to > >> have more concrete feedback here instead of abstract argument. > >> > >> https://lore.kernel.org/all/74e7259a-c462-e3c1-73ac-8e3f49fb80b8@huawei.com/ > >> https://lore.kernel.org/all/11187fe4-9419-4341-97b5-6dad7583b5b6@huawei.com/ > > > > I will make this much more understandable. This patch is one of the > > ones that will permanently block this set in my opinion. As such I > > will never ack this patch as I see no benefit to it. Arguing with me > > on this is moot as you aren't going to change my mind, and I don't > > have all day to argue back and forth with you on every single patch. > > Let's move on to more specific technical discussion then. > > > > > As far as your API extension and naming maybe you should look like > > something like bio_vec and borrow the naming from that since that is > > essentially what you are passing back and forth is essentially that > > instead of a page frag which is normally a virtual address. > > I thought about adding something like bio_vec before, but I am not sure > what you have in mind is somthing like I considered before? 
> Let's say that we reuse bio_vec like something below for the new APIs: > > struct bio_vec { > struct page *bv_page; > void *va; > unsigned int bv_len; > unsigned int bv_offset; > }; I wasn't suggesting changing the bio_vec. I was suggesting that be what you pass as a pointer reference instead of the offset. Basically your use case is mostly just for populating bio_vec style structures anyway. > It seems we have the below options for the new API: > > option 1, it seems like a better option from API naming point of view, but > it needs to return a bio_vec pointer to the caller, it seems we need to have > extra space for the pointer, I am not sure how we can avoid the memory waste > for sk_page_frag() case in patch 12: > struct bio_vec *page_frag_alloc_bio(struct page_frag_cache *nc, > unsigned int fragsz, gfp_t gfp_mask); > > option 2, it need both the caller and callee to have a its own local space > for 'struct bio_vec ', I am not sure if passing the content instead of > the pointer of a struct through the function returning is the common pattern > and if it has any performance impact yet: > struct bio_vec page_frag_alloc_bio(struct page_frag_cache *nc, > unsigned int fragsz, gfp_t gfp_mask); > > option 3, the caller passes the pointer of 'struct bio_vec ' to the callee, > and page_frag_alloc_bio() fills in the data, I am not sure what is the point > of indirect using 'struct bio_vec ' instead of passing 'va' & 'fragsz' & > 'offset' through pointers directly: > bool page_frag_alloc_bio(struct page_frag_cache *nc, > unsigned int fragsz, gfp_t gfp_mask, struct bio_vec *bio); > > If one of the above option is something in your mind? Yes, please be more specific > about which one is the prefer option, and why it is the prefer option than the one > introduced in this patchset? > > If no, please be more specific what that is in your mind? Option 3 is more or less what I had in mind. 
Basically you would return an int to indicate any errors, and you would populate a bio_vec during your allocation. In addition you would use the bio_vec as a tracker of the actual fragsz, so when you commit you are committing with the fragsz as it was determined at the time the bio_vec was put together; that way you can theoretically catch things like the underlying offset having somehow changed since you set up the allocation. It would fit well into your probe routines, since they are all essentially passing the page, offset, and fragsz throughout the code. ^ permalink raw reply [flat|nested] 13+ messages in thread
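To make the option-3 shape discussed above concrete, here is a rough userspace sketch of those semantics. Everything here is an illustrative stand-in, not the kernel implementation: the struct names and layouts are simplified, a malloc'ed buffer plays the role of the backing page, and -1 stands in for a kernel error code.

```c
#include <assert.h>
#include <stdlib.h>

struct bio_vec_like {                /* shape of struct bio_vec in bvec.h  */
	void *bv_page;               /* 'struct page *' in the kernel      */
	unsigned int bv_len;
	unsigned int bv_offset;
};

struct frag_cache {                  /* simplified page_frag_cache mock    */
	char *buf;                   /* backing "page", malloc'ed here     */
	unsigned int size;
	unsigned int offset;
};

/* Option 3: the caller passes the vec, the callee fills it in and
 * returns an error code. The actual fragsz is recorded in the vec. */
static int frag_alloc_fill(struct frag_cache *nc, unsigned int fragsz,
			   struct bio_vec_like *bv)
{
	if (!nc->buf || nc->offset + fragsz > nc->size)
		return -1;           /* would be -ENOMEM in the kernel     */
	bv->bv_page = nc->buf;       /* page identity                      */
	bv->bv_offset = nc->offset;  /* offset at allocation time          */
	bv->bv_len = fragsz;
	nc->offset += fragsz;
	return 0;
}

/* "Commit" can then cross-check the vec against the cache state,
 * catching a page or offset that changed since the allocation. */
static int frag_commit_ok(const struct frag_cache *nc,
			  const struct bio_vec_like *bv)
{
	return bv->bv_page == (void *)nc->buf &&
	       bv->bv_offset + bv->bv_len <= nc->offset;
}
```

The point of the shape, as argued above, is that page, offset, and length travel together in one structure instead of through separate out-parameters.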
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-08-02 17:00 ` Alexander Duyck @ 2024-08-04 4:30 ` Yunsheng Lin 2024-08-06 0:52 ` Alexander Duyck 0 siblings, 1 reply; 13+ messages in thread From: Yunsheng Lin @ 2024-08-04 4:30 UTC (permalink / raw) To: Alexander Duyck, Yunsheng Lin Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On 8/3/2024 1:00 AM, Alexander Duyck wrote: >> >>> >>> As far as your API extension and naming maybe you should look like >>> something like bio_vec and borrow the naming from that since that is >>> essentially what you are passing back and forth is essentially that >>> instead of a page frag which is normally a virtual address. >> >> I thought about adding something like bio_vec before, but I am not sure >> what you have in mind is somthing like I considered before? >> Let's say that we reuse bio_vec like something below for the new APIs: >> >> struct bio_vec { >> struct page *bv_page; >> void *va; >> unsigned int bv_len; >> unsigned int bv_offset; >> }; > > I wasn't suggesting changing the bio_vec. 
I was suggesting that be > what you pass as a pointer reference instead of the offset. Basically > your use case is mostly just for populating bio_vec style structures > anyway. I wasn't trying to reuse or change bio_vec for page_frag; I was just having a hard time coming up with a good new name for it. The best one I came up with is pfrag_vec, but I am not sure about the 'vec' part: as the "vec" portion of the name would suggest, iovec structures tend to come in arrays, as mentioned in the article below: https://lwn.net/Articles/625077/ Another one is page_frag, which is currently in use. Or is there any better one in your mind? > >> It seems we have the below options for the new API: >> >> option 1, it seems like a better option from API naming point of view, but >> it needs to return a bio_vec pointer to the caller, it seems we need to have >> extra space for the pointer, I am not sure how we can avoid the memory waste >> for sk_page_frag() case in patch 12: >> struct bio_vec *page_frag_alloc_bio(struct page_frag_cache *nc, >> unsigned int fragsz, gfp_t gfp_mask); >> >> option 2, it need both the caller and callee to have a its own local space >> for 'struct bio_vec ', I am not sure if passing the content instead of >> the pointer of a struct through the function returning is the common pattern >> and if it has any performance impact yet: >> struct bio_vec page_frag_alloc_bio(struct page_frag_cache *nc, >> unsigned int fragsz, gfp_t gfp_mask); >> >> option 3, the caller passes the pointer of 'struct bio_vec ' to the callee, >> and page_frag_alloc_bio() fills in the data, I am not sure what is the point >> of indirect using 'struct bio_vec ' instead of passing 'va' & 'fragsz' & >> 'offset' through pointers directly: >> bool page_frag_alloc_bio(struct page_frag_cache *nc, >> unsigned int fragsz, gfp_t gfp_mask, struct bio_vec *bio); >> >> If one of the above option is something in your mind?
Yes, please be more specific >> about which one is the prefer option, and why it is the prefer option than the one >> introduced in this patchset? >> >> If no, please be more specific what that is in your mind? > > Option 3 is more or less what I had in mind. Basically you would > return an int to indicate any errors and you would be populating a > bio_vec during your allocation. In addition you would use the bio_vec Actually, using this new bio_vec style structure does not seem to solve the API naming issue this patch is trying to solve, as I understand it, since the new struct only changes whether one pointer or multiple pointers are passed. That is part of the API naming, but not all of it. > as a tracker of the actual fragsz so when you commit you are > committing with the fragsz as it was determined at the time of putting > the bio_vec together so you can theoretically catch things like if the > underlying offset had somehow changed from the time you setup the I think we need a stronger argument for using the new *vec thing than the above debugging feature. I looked through the bio_vec related info, and came across the somewhat unrelated, but really helpful, "What’s all this get us" section: https://docs.kernel.org/block/biovecs.html So the question seems to be: what does this new struct for page_frag get us? Generally, I would agree with the new struct if it does bring us something other than the above debugging feature. Otherwise we should avoid introducing a new thing whose existence is hard to justify. > allocation. It would fit well into your probe routines since they are > all essentially passing the page, offset, and fragsz throughout the > code. For the current probe routines, the 'va' needs to be passed; do you expect the 'va' to be passed by function return, by double pointer, or via the new *_vec pointer? > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-08-04 4:30 ` Yunsheng Lin @ 2024-08-06 0:52 ` Alexander Duyck 2024-08-06 11:37 ` Yunsheng Lin 0 siblings, 1 reply; 13+ messages in thread From: Alexander Duyck @ 2024-08-06 0:52 UTC (permalink / raw) To: Yunsheng Lin Cc: Yunsheng Lin, davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On Sun, Aug 4, 2024 at 10:00 AM Yunsheng Lin <yunshenglin0825@gmail.com> wrote: > > On 8/3/2024 1:00 AM, Alexander Duyck wrote: > > >> > >>> > >>> As far as your API extension and naming maybe you should look like > >>> something like bio_vec and borrow the naming from that since that is > >>> essentially what you are passing back and forth is essentially that > >>> instead of a page frag which is normally a virtual address. > >> > >> I thought about adding something like bio_vec before, but I am not sure > >> what you have in mind is somthing like I considered before? 
> >> Let's say that we reuse bio_vec like something below for the new APIs: > >> > >> struct bio_vec { > >> struct page *bv_page; > >> void *va; > >> unsigned int bv_len; > >> unsigned int bv_offset; > >> }; > > > > I wasn't suggesting changing the bio_vec. I was suggesting that be > > what you pass as a pointer reference instead of the offset. Basically > > your use case is mostly just for populating bio_vec style structures > > anyway. > > I wasn't trying/going to reuse/change bio_vec for page_frag, I was just > having a hard time coming with a good new name for it. > The best one I came up with is pfrag_vec, but I am not sure about the > 'vec' as the "vec" portion of the name would suggest, iovec structures > tend to come in arrays, mentioned in the below article: > https://lwn.net/Articles/625077/ > > Anther one is page_frag, which is currently in use. > > Or any better one in your mind? I was suggesting using bio_vec, not some new structure. The general idea is that almost all the values you are using are exposed by that structure already in the case of the page based calls you were adding, so it makes sense to use what is there rather than reinventing the wheel. 
> > > >> It seems we have the below options for the new API: > >> > >> option 1, it seems like a better option from API naming point of view, but > >> it needs to return a bio_vec pointer to the caller, it seems we need to have > >> extra space for the pointer, I am not sure how we can avoid the memory waste > >> for sk_page_frag() case in patch 12: > >> struct bio_vec *page_frag_alloc_bio(struct page_frag_cache *nc, > >> unsigned int fragsz, gfp_t gfp_mask); > >> > >> option 2, it need both the caller and callee to have a its own local space > >> for 'struct bio_vec ', I am not sure if passing the content instead of > >> the pointer of a struct through the function returning is the common pattern > >> and if it has any performance impact yet: > >> struct bio_vec page_frag_alloc_bio(struct page_frag_cache *nc, > >> unsigned int fragsz, gfp_t gfp_mask); > >> > >> option 3, the caller passes the pointer of 'struct bio_vec ' to the callee, > >> and page_frag_alloc_bio() fills in the data, I am not sure what is the point > >> of indirect using 'struct bio_vec ' instead of passing 'va' & 'fragsz' & > >> 'offset' through pointers directly: > >> bool page_frag_alloc_bio(struct page_frag_cache *nc, > >> unsigned int fragsz, gfp_t gfp_mask, struct bio_vec *bio); > >> > >> If one of the above option is something in your mind? Yes, please be more specific > >> about which one is the prefer option, and why it is the prefer option than the one > >> introduced in this patchset? > >> > >> If no, please be more specific what that is in your mind? > > > > Option 3 is more or less what I had in mind. Basically you would > > return an int to indicate any errors and you would be populating a > > bio_vec during your allocation. 
In addition you would use the bio_vec > > Actually using this new bio_vec style structures does not seem to solve > the APIs naming issue this patch is trying to solve as my understanding, > as the new struct is only about passing one pointer or multi-pointers > from API naming perspective. It is part of the API naming, but not all > of it. I have no idea what you are talking about. The issue was that you were splitting things into page_frag_alloc_va and page_frag_alloc_pg. Now it would be page_frag_alloc and page_frag_alloc_bio, or maybe page_frag_fill_bio, which would better explain what you are doing with this function. > > as a tracker of the actual fragsz so when you commit you are > > committing with the fragsz as it was determined at the time of putting > > the bio_vec together so you can theoretically catch things like if the > > underlying offset had somehow changed from the time you setup the > > I think we might need a stronger argument than the above to use the new > *vec thing other than the above debugging feature. > > I looked throught the bio_vec related info, and come along somewhat not > really related, but really helpful "What’s all this get us" section: > https://docs.kernel.org/block/biovecs.html > > So the question seems to be: what is this new struct for page_frag get > us? > > Generally, I am argeed with the new struct thing if it does bring us > something other than the above debugging feature. Otherwise we should > avoid introducing a new thing which is hard to argue about its existent.
I would suggest doing so via the *_vec pointer. The problem, as I see it, is that the existing code exposes too much of the internals and sets up the possibility for the system to get corrupted really easily. At least if you are doing this with a bio_vec, you can verify that you have the correct page and offset before you move the offset up by the length, which should have been provided by the API in the first place rather than guessed at based on the fragsz you requested. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-08-06 0:52 ` Alexander Duyck @ 2024-08-06 11:37 ` Yunsheng Lin 0 siblings, 0 replies; 13+ messages in thread From: Yunsheng Lin @ 2024-08-06 11:37 UTC (permalink / raw) To: Alexander Duyck, Yunsheng Lin Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs On 2024/8/6 8:52, Alexander Duyck wrote: > On Sun, Aug 4, 2024 at 10:00 AM Yunsheng Lin <yunshenglin0825@gmail.com> wrote: >> >> On 8/3/2024 1:00 AM, Alexander Duyck wrote: >> >>>> >>>>> >>>>> As far as your API extension and naming maybe you should look like >>>>> something like bio_vec and borrow the naming from that since that is >>>>> essentially what you are passing back and forth is essentially that >>>>> instead of a page frag which is normally a virtual address. >>>> >>>> I thought about adding something like bio_vec before, but I am not sure >>>> what you have in mind is somthing like I considered before? 
>>>> Let's say that we reuse bio_vec like something below for the new APIs: >>>> >>>> struct bio_vec { >>>> struct page *bv_page; >>>> void *va; >>>> unsigned int bv_len; >>>> unsigned int bv_offset; >>>> }; >>> >>> I wasn't suggesting changing the bio_vec. I was suggesting that be >>> what you pass as a pointer reference instead of the offset. Basically >>> your use case is mostly just for populating bio_vec style structures >>> anyway. >> >> I wasn't trying/going to reuse/change bio_vec for page_frag, I was just >> having a hard time coming with a good new name for it. >> The best one I came up with is pfrag_vec, but I am not sure about the >> 'vec' as the "vec" portion of the name would suggest, iovec structures >> tend to come in arrays, mentioned in the below article: >> https://lwn.net/Articles/625077/ >> >> Anther one is page_frag, which is currently in use. >> >> Or any better one in your mind? > > I was suggesting using bio_vec, not some new structure. The general > idea is that almost all the values you are using are exposed by that > structure already in the case of the page based calls you were adding, > so it makes sense to use what is there rather than reinventing the > wheel. From a quick look, there seem to be at least three structs which have similar values: struct bio_vec & struct skb_frag & struct page_frag. By your above argument about using bio_vec, it seems it is OK to use any one of them, as each of them seems to have almost all the values we are using? Personally, my preference is: 'struct page_frag' > 'struct skb_frag' > 'struct bio_vec': the naming of 'struct page_frag' seems to best match the page_frag API, 'struct skb_frag' is the second preference because we mostly need to fill skb frags anyway, and 'struct bio_vec' is the last preference because it just happens to have almost all the values needed.
Is there any specific reason, other than that "almost all the values you are using are exposed by that structure already", why you prefer bio_vec? > >>> >>>> It seems we have the below options for the new API: >>>> >>>> option 1, it seems like a better option from API naming point of view, but >>>> it needs to return a bio_vec pointer to the caller, it seems we need to have >>>> extra space for the pointer, I am not sure how we can avoid the memory waste >>>> for sk_page_frag() case in patch 12: >>>> struct bio_vec *page_frag_alloc_bio(struct page_frag_cache *nc, >>>> unsigned int fragsz, gfp_t gfp_mask); >>>> >>>> option 2, it need both the caller and callee to have a its own local space >>>> for 'struct bio_vec ', I am not sure if passing the content instead of >>>> the pointer of a struct through the function returning is the common pattern >>>> and if it has any performance impact yet: >>>> struct bio_vec page_frag_alloc_bio(struct page_frag_cache *nc, >>>> unsigned int fragsz, gfp_t gfp_mask); >>>> >>>> option 3, the caller passes the pointer of 'struct bio_vec ' to the callee, >>>> and page_frag_alloc_bio() fills in the data, I am not sure what is the point >>>> of indirect using 'struct bio_vec ' instead of passing 'va' & 'fragsz' & >>>> 'offset' through pointers directly: >>>> bool page_frag_alloc_bio(struct page_frag_cache *nc, >>>> unsigned int fragsz, gfp_t gfp_mask, struct bio_vec *bio); >>>> >>>> If one of the above option is something in your mind? Yes, please be more specific >>>> about which one is the prefer option, and why it is the prefer option than the one >>>> introduced in this patchset? >>>> >>>> If no, please be more specific what that is in your mind? >>> >>> Option 3 is more or less what I had in mind. Basically you would >>> return an int to indicate any errors and you would be populating a >>> bio_vec during your allocation.
In addition you would use the bio_vec >> >> Actually using this new bio_vec style structures does not seem to solve >> the APIs naming issue this patch is trying to solve as my understanding, >> as the new struct is only about passing one pointer or multi-pointers >> from API naming perspective. It is part of the API naming, but not all >> of it. > > I have no idea what you are talking about. The issue was you were > splitting things page_frag_alloc_va and page_frag_alloc_pg. Now it > would be page_frag_alloc and page_frag_alloc_bio or maybe > page_frag_fill_bio which would better explain what you are doing with > this function. There are three types of API proposed in this patchset, instead of two: 1. page_frag_alloc_va() returns [va]. 2. page_frag_alloc_pg() returns [page, offset]. 3. page_frag_alloc() returns both [va] & [page, offset]. You seem to have missed that we need a third naming for the type 3 API. Do you see the type 3 API as a valid API? If yes, what naming are you suggesting for it? If no, why is it not a valid API? > >>> as a tracker of the actual fragsz so when you commit you are >>> committing with the fragsz as it was determined at the time of putting >>> the bio_vec together so you can theoretically catch things like if the >>> underlying offset had somehow changed from the time you setup the >> >> I think we might need a stronger argument than the above to use the new >> *vec thing other than the above debugging feature. >> >> I looked throught the bio_vec related info, and come along somewhat not >> really related, but really helpful "What’s all this get us" section: >> https://docs.kernel.org/block/biovecs.html >> >> So the question seems to be: what is this new struct for page_frag get >> us? >> >> Generally, I am argeed with the new struct thing if it does bring us >> something other than the above debugging feature. Otherwise we should >> avoid introducing a new thing which is hard to argue about its existent.
> > I don't want a new structure. I just want you to use the bio_vec for > spots where you need to use a page because you are populating a > bio_vec. > >>> allocation. It would fit well into your probe routines since they are >>> all essentially passing the page, offset, and fragsz throughout the >>> code. >> >> For the current probe routines, the 'va' needs to be passed; do you >> expect the 'va' to be passed via the function return, a double pointer, or >> the new *_vec pointer? > > I would suggest doing so via the *_vec pointer. The problem as I see From your above suggestion, I can safely assume *_vec is 'struct bio_vec', right? I am really confused here: you just clarified that you weren't suggesting changing the bio_vec, and now you are suggesting passing 'va' via the 'struct bio_vec' pointer? How is that possible with the current definition of 'struct bio_vec'? struct bio_vec { struct page *bv_page; unsigned int bv_len; unsigned int bv_offset; }; Or am I missing something obvious here? > it is that the existing code is exposing too much of the internals and > setting up the possibility for a system to get corrupted really If most of the page_frag API callers didn't access 'struct bio_vec' directly and instead used something like the bvec_iter_* API to do the accessing, then I would agree with the above argument. But right now, most of the page_frag API callers access 'va' directly to do the memcpy'ing, and access 'page & off & len' directly to do the skb frag filling, so I am not really sure what the point of the indirection through 'struct bio_vec' is here. And adding 'struct bio_vec' for page_frag while accessing its members directly may go against the design of 'struct bio_vec', as there seems to be no inline helper defined in bvec.h for accessing the members of 'struct bio_vec' directly. ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API 2024-07-31 12:44 ` [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin 2024-07-31 13:36 ` Chuck Lever 2024-07-31 18:13 ` Alexander Duyck @ 2024-08-04 6:44 ` Sagi Grimberg 2 siblings, 0 replies; 13+ messages in thread From: Sagi Grimberg @ 2024-08-04 6:44 UTC (permalink / raw) To: Yunsheng Lin, davem, kuba, pabeni Cc: netdev, linux-kernel, Alexander Duyck, Subbaraya Sundeep, Jeroen de Borst, Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch, Jens Axboe, Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells, Marc Dionne, Chuck Lever, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust, Anna Schumaker, intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs Regardless of the API discussion, The nvme-tcp bits look straight-forward: Acked-by: Sagi Grimberg <sagi@grimberg.me> ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH net-next v12 12/14] net: replace page_frag with page_frag_cache 2024-07-31 12:44 [PATCH net-next v12 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin 2024-07-31 12:44 ` [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin @ 2024-07-31 12:45 ` Yunsheng Lin 1 sibling, 0 replies; 13+ messages in thread From: Yunsheng Lin @ 2024-07-31 12:45 UTC (permalink / raw) To: davem, kuba, pabeni Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck, Mat Martineau, Ayush Sawal, Eric Dumazet, Willem de Bruijn, Jason Wang, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, John Fastabend, Jakub Sitnicki, David Ahern, Matthieu Baerts, Geliang Tang, Jamal Hadi Salim, Cong Wang, Jiri Pirko, Boris Pismenny, bpf, mptcp Use the newly introduced prepare/probe/commit API to replace page_frag with page_frag_cache for sk_page_frag(). CC: Alexander Duyck <alexander.duyck@gmail.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Acked-by: Mat Martineau <martineau@kernel.org> --- .../chelsio/inline_crypto/chtls/chtls.h | 3 - .../chelsio/inline_crypto/chtls/chtls_io.c | 100 ++++--------- .../chelsio/inline_crypto/chtls/chtls_main.c | 3 - drivers/net/tun.c | 48 +++--- include/linux/sched.h | 2 +- include/net/sock.h | 14 +- kernel/exit.c | 3 +- kernel/fork.c | 3 +- net/core/skbuff.c | 59 +++++--- net/core/skmsg.c | 22 +-- net/core/sock.c | 46 ++++-- net/ipv4/ip_output.c | 33 +++-- net/ipv4/tcp.c | 32 ++-- net/ipv4/tcp_output.c | 28 ++-- net/ipv6/ip6_output.c | 33 +++-- net/kcm/kcmsock.c | 27 ++-- net/mptcp/protocol.c | 67 +++++---- net/sched/em_meta.c | 2 +- net/tls/tls_device.c | 137 ++++++++++-------- 19 files changed, 347 insertions(+), 315 deletions(-) diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h index 7ff82b6778ba..fe2b6a8ef718 100644 --- 
a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h @@ -234,7 +234,6 @@ struct chtls_dev { struct list_head list_node; struct list_head rcu_node; struct list_head na_node; - unsigned int send_page_order; int max_host_sndbuf; u32 round_robin_cnt; struct key_map kmap; @@ -453,8 +452,6 @@ enum { /* The ULP mode/submode of an skbuff */ #define skb_ulp_mode(skb) (ULP_SKB_CB(skb)->ulp_mode) -#define TCP_PAGE(sk) (sk->sk_frag.page) -#define TCP_OFF(sk) (sk->sk_frag.offset) static inline struct chtls_dev *to_chtls_dev(struct tls_toe_device *tlsdev) { diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c index d567e42e1760..334381c1587f 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c @@ -825,12 +825,6 @@ void skb_entail(struct sock *sk, struct sk_buff *skb, int flags) ULP_SKB_CB(skb)->flags = flags; __skb_queue_tail(&csk->txq, skb); sk->sk_wmem_queued += skb->truesize; - - if (TCP_PAGE(sk) && TCP_OFF(sk)) { - put_page(TCP_PAGE(sk)); - TCP_PAGE(sk) = NULL; - TCP_OFF(sk) = 0; - } } static struct sk_buff *get_tx_skb(struct sock *sk, int size) @@ -882,16 +876,12 @@ static void push_frames_if_head(struct sock *sk) chtls_push_frames(csk, 1); } -static int chtls_skb_copy_to_page_nocache(struct sock *sk, - struct iov_iter *from, - struct sk_buff *skb, - struct page *page, - int off, int copy) +static int chtls_skb_copy_to_va_nocache(struct sock *sk, struct iov_iter *from, + struct sk_buff *skb, char *va, int copy) { int err; - err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) + - off, copy, skb->len); + err = skb_do_copy_data_nocache(sk, skb, from, va, copy, skb->len); if (err) return err; @@ -1114,82 +1104,44 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) if (err) goto do_fault; } else { + struct 
page_frag_cache *pfrag = &sk->sk_frag; int i = skb_shinfo(skb)->nr_frags; - struct page *page = TCP_PAGE(sk); - int pg_size = PAGE_SIZE; - int off = TCP_OFF(sk); - bool merge; - - if (page) - pg_size = page_size(page); - if (off < pg_size && - skb_can_coalesce(skb, i, page, off)) { + unsigned int offset, fragsz; + bool merge = false; + struct page *page; + void *va; + + fragsz = 32U; + page = page_frag_alloc_prepare(pfrag, &offset, &fragsz, + &va, sk->sk_allocation); + if (unlikely(!page)) + goto wait_for_memory; + + if (skb_can_coalesce(skb, i, page, offset)) merge = true; - goto copy; - } - merge = false; - if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) : - MAX_SKB_FRAGS)) + else if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) : + MAX_SKB_FRAGS)) goto new_buf; - if (page && off == pg_size) { - put_page(page); - TCP_PAGE(sk) = page = NULL; - pg_size = PAGE_SIZE; - } - - if (!page) { - gfp_t gfp = sk->sk_allocation; - int order = cdev->send_page_order; - - if (order) { - page = alloc_pages(gfp | __GFP_COMP | - __GFP_NOWARN | - __GFP_NORETRY, - order); - if (page) - pg_size <<= order; - } - if (!page) { - page = alloc_page(gfp); - pg_size = PAGE_SIZE; - } - if (!page) - goto wait_for_memory; - off = 0; - } -copy: - if (copy > pg_size - off) - copy = pg_size - off; + copy = min_t(int, copy, fragsz); if (is_tls_tx(csk)) copy = min_t(int, copy, csk->tlshws.txleft); - err = chtls_skb_copy_to_page_nocache(sk, &msg->msg_iter, - skb, page, - off, copy); - if (unlikely(err)) { - if (!TCP_PAGE(sk)) { - TCP_PAGE(sk) = page; - TCP_OFF(sk) = 0; - } + err = chtls_skb_copy_to_va_nocache(sk, &msg->msg_iter, + skb, va, copy); + if (unlikely(err)) goto do_fault; - } + /* Update the skb. 
*/ if (merge) { skb_frag_size_add( &skb_shinfo(skb)->frags[i - 1], copy); + page_frag_alloc_commit_noref(pfrag, copy); } else { - skb_fill_page_desc(skb, i, page, off, copy); - if (off + copy < pg_size) { - /* space left keep page */ - get_page(page); - TCP_PAGE(sk) = page; - } else { - TCP_PAGE(sk) = NULL; - } + skb_fill_page_desc(skb, i, page, offset, copy); + page_frag_alloc_commit(pfrag, copy); } - TCP_OFF(sk) = off + copy; } if (unlikely(skb->len == mss)) tx_skb_finalize(skb); diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c index 455a54708be4..ba88b2fc7cd8 100644 --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c @@ -34,7 +34,6 @@ static DEFINE_MUTEX(notify_mutex); static RAW_NOTIFIER_HEAD(listen_notify_list); static struct proto chtls_cpl_prot, chtls_cpl_protv6; struct request_sock_ops chtls_rsk_ops, chtls_rsk_opsv6; -static uint send_page_order = (14 - PAGE_SHIFT < 0) ? 
0 : 14 - PAGE_SHIFT; static void register_listen_notifier(struct notifier_block *nb) { @@ -273,8 +272,6 @@ static void *chtls_uld_add(const struct cxgb4_lld_info *info) INIT_WORK(&cdev->deferq_task, process_deferq); spin_lock_init(&cdev->listen_lock); spin_lock_init(&cdev->idr_lock); - cdev->send_page_order = min_t(uint, get_order(32768), - send_page_order); cdev->max_host_sndbuf = 48 * 1024; if (lldi->vr->key.size) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 1d06c560c5e6..51df92fd60db 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1598,21 +1598,19 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile, } static struct sk_buff *__tun_build_skb(struct tun_file *tfile, - struct page_frag *alloc_frag, char *buf, - int buflen, int len, int pad) + char *buf, int buflen, int len, int pad) { struct sk_buff *skb = build_skb(buf, buflen); - if (!skb) + if (!skb) { + page_frag_free_va(buf); return ERR_PTR(-ENOMEM); + } skb_reserve(skb, pad); skb_put(skb, len); skb_set_owner_w(skb, tfile->socket.sk); - get_page(alloc_frag->page); - alloc_frag->offset += buflen; - return skb; } @@ -1660,7 +1658,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, struct virtio_net_hdr *hdr, int len, int *skb_xdp) { - struct page_frag *alloc_frag = &current->task_frag; + struct page_frag_cache *alloc_frag = &current->task_frag; struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx; struct bpf_prog *xdp_prog; int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)); @@ -1676,16 +1674,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, buflen += SKB_DATA_ALIGN(len + pad); rcu_read_unlock(); - alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES); - if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL))) + buf = page_frag_alloc_va_align(alloc_frag, buflen, GFP_KERNEL, + SMP_CACHE_BYTES); + if (unlikely(!buf)) return ERR_PTR(-ENOMEM); - buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset; - 
copied = copy_page_from_iter(alloc_frag->page, - alloc_frag->offset + pad, - len, from); - if (copied != len) + copied = copy_from_iter(buf + pad, len, from); + if (copied != len) { + page_frag_alloc_abort(alloc_frag, buflen); return ERR_PTR(-EFAULT); + } /* There's a small window that XDP may be set after the check * of xdp_prog above, this should be rare and for simplicity @@ -1693,8 +1691,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, */ if (hdr->gso_type || !xdp_prog) { *skb_xdp = 1; - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, - pad); + return __tun_build_skb(tfile, buf, buflen, len, pad); } *skb_xdp = 0; @@ -1711,21 +1708,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, xdp_prepare_buff(&xdp, buf, pad, len, false); act = bpf_prog_run_xdp(xdp_prog, &xdp); - if (act == XDP_REDIRECT || act == XDP_TX) { - get_page(alloc_frag->page); - alloc_frag->offset += buflen; - } err = tun_xdp_act(tun, xdp_prog, &xdp, act); - if (err < 0) { - if (act == XDP_REDIRECT || act == XDP_TX) - put_page(alloc_frag->page); - goto out; - } - if (err == XDP_REDIRECT) xdp_do_flush(); - if (err != XDP_PASS) + + if (err == XDP_REDIRECT || err == XDP_TX) { + goto out; + } else if (err < 0 || err != XDP_PASS) { + page_frag_alloc_abort(alloc_frag, buflen); goto out; + } pad = xdp.data - xdp.data_hard_start; len = xdp.data_end - xdp.data; @@ -1734,7 +1726,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun, rcu_read_unlock(); local_bh_enable(); - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, pad); + return __tun_build_skb(tfile, buf, buflen, len, pad); out: bpf_net_ctx_clear(bpf_net_ctx); diff --git a/include/linux/sched.h b/include/linux/sched.h index f8d150343d42..bb9a8e9d6d2d 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1355,7 +1355,7 @@ struct task_struct { /* Cache last used pipe for splice(): */ struct pipe_inode_info *splice_pipe; - struct page_frag task_frag; + struct page_frag_cache 
task_frag; #ifdef CONFIG_TASK_DELAY_ACCT struct task_delay_info *delays; diff --git a/include/net/sock.h b/include/net/sock.h index b5e702298ab7..8f6cc0dd2f4f 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -461,7 +461,7 @@ struct sock { struct sk_buff_head sk_write_queue; u32 sk_dst_pending_confirm; u32 sk_pacing_status; /* see enum sk_pacing */ - struct page_frag sk_frag; + struct page_frag_cache sk_frag; struct timer_list sk_timer; unsigned long sk_pacing_rate; /* bytes per second */ @@ -2484,7 +2484,7 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk) * Return: a per task page_frag if context allows that, * otherwise a per socket one. */ -static inline struct page_frag *sk_page_frag(struct sock *sk) +static inline struct page_frag_cache *sk_page_frag(struct sock *sk) { if (sk->sk_use_task_frag) return ¤t->task_frag; @@ -2492,7 +2492,15 @@ static inline struct page_frag *sk_page_frag(struct sock *sk) return &sk->sk_frag; } -bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag); +struct page *sk_page_frag_alloc_prepare(struct sock *sk, + struct page_frag_cache *pfrag, + unsigned int *size, + unsigned int *offset, void **va); + +struct page *sk_page_frag_alloc_pg_prepare(struct sock *sk, + struct page_frag_cache *pfrag, + unsigned int *size, + unsigned int *offset); /* * Default write policy as shown to user space via poll/select/SIGIO diff --git a/kernel/exit.c b/kernel/exit.c index 7430852a8571..b5257e74ec1c 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -917,8 +917,7 @@ void __noreturn do_exit(long code) if (tsk->splice_pipe) free_pipe_info(tsk->splice_pipe); - if (tsk->task_frag.page) - put_page(tsk->task_frag.page); + page_frag_cache_drain(&tsk->task_frag); exit_task_stack_account(tsk); diff --git a/kernel/fork.c b/kernel/fork.c index cc760491f201..7d380a6fd64a 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -80,6 +80,7 @@ #include <linux/tty.h> #include <linux/fs_struct.h> #include <linux/magic.h> +#include 
<linux/page_frag_cache.h> #include <linux/perf_event.h> #include <linux/posix-timers.h> #include <linux/user-return-notifier.h> @@ -1157,10 +1158,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) tsk->btrace_seq = 0; #endif tsk->splice_pipe = NULL; - tsk->task_frag.page = NULL; tsk->wake_q.next = NULL; tsk->worker_private = NULL; + page_frag_cache_init(&tsk->task_frag); kcov_task_init(tsk); kmsan_task_create(tsk); kmap_local_fork(tsk); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index 76a473b1072d..b8967ffd2d92 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -3037,25 +3037,6 @@ static void sock_spd_release(struct splice_pipe_desc *spd, unsigned int i) put_page(spd->pages[i]); } -static struct page *linear_to_page(struct page *page, unsigned int *len, - unsigned int *offset, - struct sock *sk) -{ - struct page_frag *pfrag = sk_page_frag(sk); - - if (!sk_page_frag_refill(sk, pfrag)) - return NULL; - - *len = min_t(unsigned int, *len, pfrag->size - pfrag->offset); - - memcpy(page_address(pfrag->page) + pfrag->offset, - page_address(page) + *offset, *len); - *offset = pfrag->offset; - pfrag->offset += *len; - - return pfrag->page; -} - static bool spd_can_coalesce(const struct splice_pipe_desc *spd, struct page *page, unsigned int offset) @@ -3066,6 +3047,38 @@ static bool spd_can_coalesce(const struct splice_pipe_desc *spd, spd->partial[spd->nr_pages - 1].len == offset); } +static bool spd_fill_linear_page(struct splice_pipe_desc *spd, + struct page *page, unsigned int offset, + unsigned int *len, struct sock *sk) +{ + struct page_frag_cache *pfrag = sk_page_frag(sk); + unsigned int frag_len, frag_offset; + struct page *frag_page; + void *va; + + frag_page = sk_page_frag_alloc_prepare(sk, pfrag, &frag_offset, + &frag_len, &va); + if (!frag_page) + return true; + + *len = min_t(unsigned int, *len, frag_len); + memcpy(va, page_address(page) + offset, *len); + + if (spd_can_coalesce(spd, frag_page, frag_offset)) { + 
spd->partial[spd->nr_pages - 1].len += *len; + page_frag_alloc_commit_noref(pfrag, *len); + return false; + } + + page_frag_alloc_commit(pfrag, *len); + spd->pages[spd->nr_pages] = frag_page; + spd->partial[spd->nr_pages].len = *len; + spd->partial[spd->nr_pages].offset = frag_offset; + spd->nr_pages++; + + return false; +} + /* * Fill page/offset/length into spd, if it can hold more pages. */ @@ -3078,11 +3091,9 @@ static bool spd_fill_page(struct splice_pipe_desc *spd, if (unlikely(spd->nr_pages == MAX_SKB_FRAGS)) return true; - if (linear) { - page = linear_to_page(page, len, &offset, sk); - if (!page) - return true; - } + if (linear) + return spd_fill_linear_page(spd, page, offset, len, sk); + if (spd_can_coalesce(spd, page, offset)) { spd->partial[spd->nr_pages - 1].len += *len; return false; diff --git a/net/core/skmsg.c b/net/core/skmsg.c index bbf40b999713..956fd6103909 100644 --- a/net/core/skmsg.c +++ b/net/core/skmsg.c @@ -27,23 +27,25 @@ static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce) int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len, int elem_first_coalesce) { - struct page_frag *pfrag = sk_page_frag(sk); + struct page_frag_cache *pfrag = sk_page_frag(sk); u32 osize = msg->sg.size; int ret = 0; len -= msg->sg.size; while (len > 0) { + unsigned int frag_offset, frag_len; struct scatterlist *sge; - u32 orig_offset; + struct page *page; int use, i; - if (!sk_page_frag_refill(sk, pfrag)) { + page = sk_page_frag_alloc_pg_prepare(sk, pfrag, &frag_offset, + &frag_len); + if (!page) { ret = -ENOMEM; goto msg_trim; } - orig_offset = pfrag->offset; - use = min_t(int, len, pfrag->size - orig_offset); + use = min_t(int, len, frag_len); if (!sk_wmem_schedule(sk, use)) { ret = -ENOMEM; goto msg_trim; @@ -54,9 +56,10 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len, sge = &msg->sg.data[i]; if (sk_msg_try_coalesce_ok(msg, elem_first_coalesce) && - sg_page(sge) == pfrag->page && - sge->offset + sge->length == 
orig_offset) { + sg_page(sge) == page && + sge->offset + sge->length == frag_offset) { sge->length += use; + page_frag_alloc_commit_noref(pfrag, use); } else { if (sk_msg_full(msg)) { ret = -ENOSPC; @@ -65,14 +68,13 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len, sge = &msg->sg.data[msg->sg.end]; sg_unmark_end(sge); - sg_set_page(sge, pfrag->page, use, orig_offset); - get_page(pfrag->page); + sg_set_page(sge, page, use, frag_offset); + page_frag_alloc_commit(pfrag, use); sk_msg_iter_next(msg, end); } sk_mem_charge(sk, use); msg->sg.size += use; - pfrag->offset += use; len -= use; } diff --git a/net/core/sock.c b/net/core/sock.c index 9abc4fe25953..26c100ee9001 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -2207,10 +2207,7 @@ static void __sk_destruct(struct rcu_head *head) pr_debug("%s: optmem leakage (%d bytes) detected\n", __func__, atomic_read(&sk->sk_omem_alloc)); - if (sk->sk_frag.page) { - put_page(sk->sk_frag.page); - sk->sk_frag.page = NULL; - } + page_frag_cache_drain(&sk->sk_frag); /* We do not need to acquire sk->sk_peer_lock, we are the last user. 
*/ put_cred(sk->sk_peer_cred); @@ -2956,16 +2953,43 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp) } EXPORT_SYMBOL(skb_page_frag_refill); -bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag) +struct page *sk_page_frag_alloc_prepare(struct sock *sk, + struct page_frag_cache *pfrag, + unsigned int *offset, + unsigned int *size, void **va) { - if (likely(skb_page_frag_refill(32U, pfrag, sk->sk_allocation))) - return true; + struct page *page; + + *size = 32U; + page = page_frag_alloc_prepare(pfrag, offset, size, va, + sk->sk_allocation); + if (likely(page)) + return page; sk_enter_memory_pressure(sk); sk_stream_moderate_sndbuf(sk); - return false; + return NULL; +} +EXPORT_SYMBOL(sk_page_frag_alloc_prepare); + +struct page *sk_page_frag_alloc_pg_prepare(struct sock *sk, + struct page_frag_cache *pfrag, + unsigned int *offset, + unsigned int *size) +{ + struct page *page; + + *size = 32U; + page = page_frag_alloc_pg_prepare(pfrag, offset, size, + sk->sk_allocation); + if (likely(page)) + return page; + + sk_enter_memory_pressure(sk); + sk_stream_moderate_sndbuf(sk); + return NULL; } -EXPORT_SYMBOL(sk_page_frag_refill); +EXPORT_SYMBOL(sk_page_frag_alloc_pg_prepare); void __lock_sock(struct sock *sk) __releases(&sk->sk_lock.slock) @@ -3487,8 +3511,8 @@ void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid) sk->sk_error_report = sock_def_error_report; sk->sk_destruct = sock_def_destruct; - sk->sk_frag.page = NULL; - sk->sk_frag.offset = 0; + page_frag_cache_init(&sk->sk_frag); + sk->sk_peek_off = -1; sk->sk_peer_pid = NULL; diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c index 8a10a7c67834..57499e3ed9e5 100644 --- a/net/ipv4/ip_output.c +++ b/net/ipv4/ip_output.c @@ -952,7 +952,7 @@ static int __ip_append_data(struct sock *sk, struct flowi4 *fl4, struct sk_buff_head *queue, struct inet_cork *cork, - struct page_frag *pfrag, + struct page_frag_cache *pfrag, int getfrag(void *from, char *to, int 
offset, int len, int odd, struct sk_buff *skb), void *from, int length, int transhdrlen, @@ -1228,31 +1228,38 @@ static int __ip_append_data(struct sock *sk, wmem_alloc_delta += copy; } else if (!zc) { int i = skb_shinfo(skb)->nr_frags; + unsigned int frag_offset, frag_size; + struct page *page; + void *va; err = -ENOMEM; - if (!sk_page_frag_refill(sk, pfrag)) + page = sk_page_frag_alloc_prepare(sk, pfrag, + &frag_offset, + &frag_size, &va); + if (!page) goto error; skb_zcopy_downgrade_managed(skb); - if (!skb_can_coalesce(skb, i, pfrag->page, - pfrag->offset)) { + copy = min_t(int, copy, frag_size); + + if (!skb_can_coalesce(skb, i, page, frag_offset)) { err = -EMSGSIZE; if (i == MAX_SKB_FRAGS) goto error; - __skb_fill_page_desc(skb, i, pfrag->page, - pfrag->offset, 0); + __skb_fill_page_desc(skb, i, page, frag_offset, + copy); skb_shinfo(skb)->nr_frags = ++i; - get_page(pfrag->page); + page_frag_alloc_commit(pfrag, copy); + } else { + skb_frag_size_add( + &skb_shinfo(skb)->frags[i - 1], copy); + page_frag_alloc_commit_noref(pfrag, copy); } - copy = min_t(int, copy, pfrag->size - pfrag->offset); - if (getfrag(from, - page_address(pfrag->page) + pfrag->offset, - offset, copy, skb->len, skb) < 0) + + if (getfrag(from, va, offset, copy, skb->len, skb) < 0) goto error_efault; - pfrag->offset += copy; - skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); skb_len_add(skb, copy); wmem_alloc_delta += copy; } else { diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c index 7c392710ae15..815ec53b16d5 100644 --- a/net/ipv4/tcp.c +++ b/net/ipv4/tcp.c @@ -1189,13 +1189,17 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) if (zc == 0) { bool merge = true; int i = skb_shinfo(skb)->nr_frags; - struct page_frag *pfrag = sk_page_frag(sk); - - if (!sk_page_frag_refill(sk, pfrag)) + struct page_frag_cache *pfrag = sk_page_frag(sk); + unsigned int frag_offset, frag_size; + struct page *page; + void *va; + + page = sk_page_frag_alloc_prepare(sk, pfrag, 
&frag_offset, + &frag_size, &va); + if (!page) goto wait_for_space; - if (!skb_can_coalesce(skb, i, pfrag->page, - pfrag->offset)) { + if (!skb_can_coalesce(skb, i, page, frag_offset)) { if (i >= READ_ONCE(net_hotdata.sysctl_max_skb_frags)) { tcp_mark_push(tp, skb); goto new_segment; @@ -1203,7 +1207,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) merge = false; } - copy = min_t(int, copy, pfrag->size - pfrag->offset); + copy = min_t(int, copy, frag_size); if (unlikely(skb_zcopy_pure(skb) || skb_zcopy_managed(skb))) { if (tcp_downgrade_zcopy_pure(sk, skb)) @@ -1216,20 +1220,18 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size) goto wait_for_space; err = skb_copy_to_va_nocache(sk, &msg->msg_iter, skb, - page_address(pfrag->page) + - pfrag->offset, copy); + va, copy); if (err) goto do_error; /* Update the skb. */ if (merge) { skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy); + page_frag_alloc_commit_noref(pfrag, copy); } else { - skb_fill_page_desc(skb, i, pfrag->page, - pfrag->offset, copy); - page_ref_inc(pfrag->page); + skb_fill_page_desc(skb, i, page, frag_offset, copy); + page_frag_alloc_commit(pfrag, copy); } - pfrag->offset += copy; } else if (zc == MSG_ZEROCOPY) { /* First append to a fragless skb builds initial * pure zerocopy skb @@ -3131,11 +3133,7 @@ int tcp_disconnect(struct sock *sk, int flags) WARN_ON(inet->inet_num && !icsk->icsk_bind_hash); - if (sk->sk_frag.page) { - put_page(sk->sk_frag.page); - sk->sk_frag.page = NULL; - sk->sk_frag.offset = 0; - } + page_frag_cache_drain(&sk->sk_frag); sk_error_report(sk); return 0; } diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c index 16c48df8df4c..43208092b89c 100644 --- a/net/ipv4/tcp_output.c +++ b/net/ipv4/tcp_output.c @@ -3970,9 +3970,12 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) struct inet_connection_sock *icsk = inet_csk(sk); struct tcp_sock *tp = tcp_sk(sk); struct tcp_fastopen_request *fo = 
tp->fastopen_req; - struct page_frag *pfrag = sk_page_frag(sk); + struct page_frag_cache *pfrag = sk_page_frag(sk); + unsigned int offset, size; struct sk_buff *syn_data; int space, err = 0; + struct page *page; + void *va; tp->rx_opt.mss_clamp = tp->advmss; /* If MSS is not cached */ if (!tcp_fastopen_cookie_check(sk, &tp->rx_opt.mss_clamp, &fo->cookie)) @@ -3991,30 +3994,31 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) space = min_t(size_t, space, fo->size); - if (space && - !skb_page_frag_refill(min_t(size_t, space, PAGE_SIZE), - pfrag, sk->sk_allocation)) - goto fallback; + if (space) { + size = min_t(size_t, space, PAGE_SIZE); + page = page_frag_alloc_prepare(pfrag, &offset, &size, &va, + sk->sk_allocation); + if (!page) + goto fallback; + } + syn_data = tcp_stream_alloc_skb(sk, sk->sk_allocation, false); if (!syn_data) goto fallback; memcpy(syn_data->cb, syn->cb, sizeof(syn->cb)); if (space) { - space = min_t(size_t, space, pfrag->size - pfrag->offset); + space = min_t(size_t, space, size); space = tcp_wmem_schedule(sk, space); } if (space) { - space = copy_page_from_iter(pfrag->page, pfrag->offset, - space, &fo->data->msg_iter); + space = _copy_from_iter(va, space, &fo->data->msg_iter); if (unlikely(!space)) { tcp_skb_tsorted_anchor_cleanup(syn_data); kfree_skb(syn_data); goto fallback; } - skb_fill_page_desc(syn_data, 0, pfrag->page, - pfrag->offset, space); - page_ref_inc(pfrag->page); - pfrag->offset += space; + skb_fill_page_desc(syn_data, 0, page, offset, space); + page_frag_alloc_commit(pfrag, space); skb_len_add(syn_data, space); skb_zcopy_set(syn_data, fo->uarg, NULL); } diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c index ab504d31f0cd..86086b4bb55c 100644 --- a/net/ipv6/ip6_output.c +++ b/net/ipv6/ip6_output.c @@ -1405,7 +1405,7 @@ static int __ip6_append_data(struct sock *sk, struct sk_buff_head *queue, struct inet_cork_full *cork_full, struct inet6_cork *v6_cork, - struct page_frag *pfrag, + struct 
page_frag_cache *pfrag,
 			    int getfrag(void *from, char *to, int offset,
 					int len, int odd, struct sk_buff *skb),
 			    void *from, size_t length, int transhdrlen,
@@ -1746,32 +1746,39 @@ static int __ip6_append_data(struct sock *sk,
 			copy = err;
 			wmem_alloc_delta += copy;
 		} else if (!zc) {
+			unsigned int frag_offset, frag_size;
 			int i = skb_shinfo(skb)->nr_frags;
+			struct page *page;
+			void *va;
 
 			err = -ENOMEM;
-			if (!sk_page_frag_refill(sk, pfrag))
+			page = sk_page_frag_alloc_prepare(sk, pfrag,
+							  &frag_offset,
+							  &frag_size, &va);
+			if (!page)
 				goto error;
 
 			skb_zcopy_downgrade_managed(skb);
-			if (!skb_can_coalesce(skb, i, pfrag->page,
-					      pfrag->offset)) {
+			copy = min_t(int, copy, frag_size);
+
+			if (!skb_can_coalesce(skb, i, page, frag_offset)) {
 				err = -EMSGSIZE;
 				if (i == MAX_SKB_FRAGS)
 					goto error;
 
-				__skb_fill_page_desc(skb, i, pfrag->page,
-						     pfrag->offset, 0);
+				__skb_fill_page_desc(skb, i, page, frag_offset,
+						     copy);
 				skb_shinfo(skb)->nr_frags = ++i;
-				get_page(pfrag->page);
+				page_frag_alloc_commit(pfrag, copy);
+			} else {
+				skb_frag_size_add(
+					&skb_shinfo(skb)->frags[i - 1], copy);
+				page_frag_alloc_commit_noref(pfrag, copy);
 			}
-			copy = min_t(int, copy, pfrag->size - pfrag->offset);
-			if (getfrag(from,
-				    page_address(pfrag->page) + pfrag->offset,
-				    offset, copy, skb->len, skb) < 0)
+
+			if (getfrag(from, va, offset, copy, skb->len, skb) < 0)
 				goto error_efault;
 
-			pfrag->offset += copy;
-			skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
 			skb->len += copy;
 			skb->data_len += copy;
 			skb->truesize += copy;
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index eec6c56b7f3e..e52ddf716fa5 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -803,13 +803,17 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 	while (msg_data_left(msg)) {
 		bool merge = true;
 		int i = skb_shinfo(skb)->nr_frags;
-		struct page_frag *pfrag = sk_page_frag(sk);
-
-		if (!sk_page_frag_refill(sk, pfrag))
+		struct page_frag_cache *pfrag = sk_page_frag(sk);
+		unsigned int offset, size;
+		struct page *page;
+		void *va;
+
+		page = sk_page_frag_alloc_prepare(sk, pfrag, &offset, &size,
+						  &va);
+		if (!page)
 			goto wait_for_memory;
 
-		if (!skb_can_coalesce(skb, i, pfrag->page,
-				      pfrag->offset)) {
+		if (!skb_can_coalesce(skb, i, page, offset)) {
 			if (i == MAX_SKB_FRAGS) {
 				struct sk_buff *tskb;
@@ -850,14 +854,12 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 			if (head != skb)
 				head->truesize += copy;
 		} else {
-			copy = min_t(int, msg_data_left(msg),
-				     pfrag->size - pfrag->offset);
+			copy = min_t(int, msg_data_left(msg), size);
 			if (!sk_wmem_schedule(sk, copy))
 				goto wait_for_memory;
 
 			err = skb_copy_to_va_nocache(sk, &msg->msg_iter, skb,
-						     page_address(pfrag->page) +
-						     pfrag->offset, copy);
+						     va, copy);
 			if (err)
 				goto out_error;
@@ -865,13 +867,12 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 		if (merge) {
 			skb_frag_size_add(
 				&skb_shinfo(skb)->frags[i - 1], copy);
+			page_frag_alloc_commit_noref(pfrag, copy);
 		} else {
-			skb_fill_page_desc(skb, i, pfrag->page,
-					   pfrag->offset, copy);
-			get_page(pfrag->page);
+			skb_fill_page_desc(skb, i, page, offset, copy);
+			page_frag_alloc_commit(pfrag, copy);
 		}
-
-		pfrag->offset += copy;
 
 		copied += copy;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index a26c2c840fd9..9a09c1460460 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -960,17 +960,16 @@ static bool mptcp_skb_can_collapse_to(u64 write_seq,
 }
 
 /* we can append data to the given data frag if:
- * - there is space available in the backing page_frag
- * - the data frag tail matches the current page_frag free offset
+ * - the data frag tail matches the current page and offset
  * - the data frag end sequence number matches the current write seq
  */
 static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk,
-				       const struct page_frag *pfrag,
+				       const struct page *page,
+				       const unsigned int offset,
 				       const struct mptcp_data_frag *df)
 {
-	return df && pfrag->page == df->page &&
-		pfrag->size - pfrag->offset > 0 &&
-		pfrag->offset == (df->offset + df->data_len) &&
+	return df && page == df->page &&
+		offset == (df->offset + df->data_len) &&
 		df->data_seq + df->data_len == msk->write_seq;
 }
@@ -1085,30 +1084,36 @@ static void mptcp_enter_memory_pressure(struct sock *sk)
 /* ensure we get enough memory for the frag hdr, beyond some minimal amount of
  * data
  */
-static bool mptcp_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
+static struct page *mptcp_page_frag_alloc_prepare(struct sock *sk,
+						  struct page_frag_cache *pfrag,
+						  unsigned int *offset,
+						  unsigned int *size, void **va)
 {
-	if (likely(skb_page_frag_refill(32U + sizeof(struct mptcp_data_frag),
-					pfrag, sk->sk_allocation)))
-		return true;
+	struct page *page;
+
+	page = page_frag_alloc_prepare(pfrag, offset, size, va,
+				       sk->sk_allocation);
+	if (likely(page))
+		return page;
 
 	mptcp_enter_memory_pressure(sk);
-	return false;
+	return NULL;
 }
 
 static struct mptcp_data_frag *
-mptcp_carve_data_frag(const struct mptcp_sock *msk, struct page_frag *pfrag,
-		      int orig_offset)
+mptcp_carve_data_frag(const struct mptcp_sock *msk, struct page *page,
+		      unsigned int orig_offset)
 {
 	int offset = ALIGN(orig_offset, sizeof(long));
 	struct mptcp_data_frag *dfrag;
 
-	dfrag = (struct mptcp_data_frag *)(page_to_virt(pfrag->page) + offset);
+	dfrag = (struct mptcp_data_frag *)(page_to_virt(page) + offset);
 	dfrag->data_len = 0;
 	dfrag->data_seq = msk->write_seq;
 	dfrag->overhead = offset - orig_offset + sizeof(struct mptcp_data_frag);
 	dfrag->offset = offset + sizeof(struct mptcp_data_frag);
 	dfrag->already_sent = 0;
-	dfrag->page = pfrag->page;
+	dfrag->page = page;
 
 	return dfrag;
 }
@@ -1793,7 +1798,7 @@ static u32 mptcp_send_limit(const struct sock *sk)
 static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 {
 	struct mptcp_sock *msk = mptcp_sk(sk);
-	struct page_frag *pfrag;
+	struct page_frag_cache *pfrag;
 	size_t copied = 0;
 	int ret = 0;
 	long timeo;
@@ -1832,9 +1837,12 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 	while (msg_data_left(msg)) {
 		int total_ts, frag_truesize = 0;
 		struct mptcp_data_frag *dfrag;
+		unsigned int offset = 0, size;
 		bool dfrag_collapsed;
-		size_t psize, offset;
+		struct page *page;
 		u32 copy_limit;
+		size_t psize;
+		void *va;
 
 		/* ensure fitting the notsent_lowat() constraint */
 		copy_limit = mptcp_send_limit(sk);
@@ -1845,21 +1853,27 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		 * page allocator
 		 */
 		dfrag = mptcp_pending_tail(sk);
-		dfrag_collapsed = mptcp_frag_can_collapse_to(msk, pfrag, dfrag);
+		size = 1;
+		page = page_frag_alloc_probe(pfrag, &offset, &size, &va);
+		dfrag_collapsed = mptcp_frag_can_collapse_to(msk, page, offset,
+							     dfrag);
 		if (!dfrag_collapsed) {
-			if (!mptcp_page_frag_refill(sk, pfrag))
+			size = 32U + sizeof(struct mptcp_data_frag);
+			page = mptcp_page_frag_alloc_prepare(sk, pfrag, &offset,
+							     &size, &va);
+			if (!page)
 				goto wait_for_memory;
 
-			dfrag = mptcp_carve_data_frag(msk, pfrag, pfrag->offset);
+			dfrag = mptcp_carve_data_frag(msk, page, offset);
 			frag_truesize = dfrag->overhead;
+			va += dfrag->overhead;
 		}
 
 		/* we do not bound vs wspace, to allow a single packet.
 		 * memory accounting will prevent execessive memory usage
 		 * anyway
 		 */
-		offset = dfrag->offset + dfrag->data_len;
-		psize = pfrag->size - offset;
+		psize = size - frag_truesize;
 		psize = min_t(size_t, psize, msg_data_left(msg));
 		psize = min_t(size_t, psize, copy_limit);
 		total_ts = psize + frag_truesize;
@@ -1867,8 +1881,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		if (!sk_wmem_schedule(sk, total_ts))
 			goto wait_for_memory;
 
-		ret = do_copy_data_nocache(sk, psize, &msg->msg_iter,
-					   page_address(dfrag->page) + offset);
+		ret = do_copy_data_nocache(sk, psize, &msg->msg_iter, va);
 		if (ret)
 			goto do_error;
@@ -1877,7 +1890,6 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		copied += psize;
 		dfrag->data_len += psize;
 		frag_truesize += psize;
-		pfrag->offset += frag_truesize;
 		WRITE_ONCE(msk->write_seq, msk->write_seq + psize);
 
 		/* charge data on mptcp pending queue to the msk socket
@@ -1885,11 +1897,14 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
 		 */
 		sk_wmem_queued_add(sk, frag_truesize);
 		if (!dfrag_collapsed) {
-			get_page(dfrag->page);
+			page_frag_alloc_commit(pfrag, frag_truesize);
 			list_add_tail(&dfrag->list, &msk->rtx_queue);
 			if (!msk->first_pending)
 				WRITE_ONCE(msk->first_pending, dfrag);
+		} else {
+			page_frag_alloc_commit_noref(pfrag, frag_truesize);
 		}
+
 		pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d", msk,
 			 dfrag->data_seq, dfrag->data_len, dfrag->already_sent,
 			 !dfrag_collapsed);
diff --git a/net/sched/em_meta.c b/net/sched/em_meta.c
index 8996c73c9779..4da465af972f 100644
--- a/net/sched/em_meta.c
+++ b/net/sched/em_meta.c
@@ -590,7 +590,7 @@ META_COLLECTOR(int_sk_sendmsg_off)
 		*err = -1;
 		return;
 	}
-	dst->value = sk->sk_frag.offset;
+	dst->value = page_frag_cache_page_offset(&sk->sk_frag);
 }
 
 META_COLLECTOR(int_sk_write_pend)
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index dc063c2c7950..02925c25ae12 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
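[Editor's illustration, not part of the patch] The prepare/commit split that the hunks above apply in ipv6, kcm, and mptcp follows one contract: a prepare call hands out the unused tail of the cached page, and the caller then either commits with a fresh page reference (a new skb fragment) or commits without one (data coalesced into the previous fragment, which already holds a reference). The following is a hedged, userspace-only model of that contract; the struct layout and function names are simplified stand-ins, not the kernel implementation.

```c
#include <assert.h>
#include <stdlib.h>

#define FRAG_PAGE_SIZE 4096

/* Illustrative stand-in for struct page_frag_cache. */
struct frag_cache_model {
	char *page;		/* backing "page" */
	unsigned int offset;	/* first unused byte */
	unsigned int refcount;	/* models the page refcount */
};

/* Hand out the unused tail of the page without consuming it yet. */
static char *frag_alloc_prepare(struct frag_cache_model *c,
				unsigned int *offset, unsigned int *size)
{
	if (!c->page) {
		c->page = malloc(FRAG_PAGE_SIZE);
		c->offset = 0;
		c->refcount = 1;	/* the cache's own reference */
	}
	if (!c->page || c->offset >= FRAG_PAGE_SIZE)
		return NULL;
	*offset = c->offset;
	*size = FRAG_PAGE_SIZE - c->offset;
	return c->page + c->offset;
}

/* Caller created a new fragment: advance and take a new reference. */
static void frag_alloc_commit(struct frag_cache_model *c, unsigned int used)
{
	c->refcount++;
	c->offset += used;
}

/* Caller coalesced into an existing fragment: advance, no new reference. */
static void frag_alloc_commit_noref(struct frag_cache_model *c,
				    unsigned int used)
{
	c->offset += used;
}
```

This mirrors why the converted call sites can drop their explicit get_page() calls: the reference bookkeeping moves into the commit step, and the coalescing path skips it entirely.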
@@ -253,24 +253,42 @@ static void tls_device_resync_tx(struct sock *sk, struct tls_context *tls_ctx,
 }
 
 static void tls_append_frag(struct tls_record_info *record,
-			    struct page_frag *pfrag,
-			    int size)
+			    struct page_frag_cache *pfrag, struct page *page,
+			    unsigned int offset, unsigned int size)
 {
 	skb_frag_t *frag;
 
 	frag = &record->frags[record->num_frags - 1];
-	if (skb_frag_page(frag) == pfrag->page &&
-	    skb_frag_off(frag) + skb_frag_size(frag) == pfrag->offset) {
+	if (skb_frag_page(frag) == page &&
+	    skb_frag_off(frag) + skb_frag_size(frag) == offset) {
 		skb_frag_size_add(frag, size);
+		page_frag_alloc_commit_noref(pfrag, size);
 	} else {
 		++frag;
-		skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
-					size);
+		skb_frag_fill_page_desc(frag, page, offset, size);
 		++record->num_frags;
-		get_page(pfrag->page);
+		page_frag_alloc_commit(pfrag, size);
+	}
+
+	record->len += size;
+}
+
+static void tls_append_page(struct tls_record_info *record, struct page *page,
+			    unsigned int offset, unsigned int size)
+{
+	skb_frag_t *frag;
+
+	frag = &record->frags[record->num_frags - 1];
+	if (skb_frag_page(frag) == page &&
+	    skb_frag_off(frag) + skb_frag_size(frag) == offset) {
+		skb_frag_size_add(frag, size);
+	} else {
+		++frag;
+		skb_frag_fill_page_desc(frag, page, offset, size);
+		++record->num_frags;
+		get_page(page);
 	}
 
-	pfrag->offset += size;
 	record->len += size;
 }
@@ -311,11 +329,12 @@ static int tls_push_record(struct sock *sk,
 
 static void tls_device_record_close(struct sock *sk,
 				    struct tls_context *ctx,
 				    struct tls_record_info *record,
-				    struct page_frag *pfrag,
+				    struct page_frag_cache *pfrag,
 				    unsigned char record_type)
 {
 	struct tls_prot_info *prot = &ctx->prot_info;
-	struct page_frag dummy_tag_frag;
+	unsigned int offset, size;
+	struct page *page;
 
 	/* append tag
 	 * device will fill in the tag, we just need to append a placeholder
 	 * increases frag count)
 	 * if we can't allocate memory now use the dummy page
 	 */
-	if (unlikely(pfrag->size - pfrag->offset < prot->tag_size) &&
-	    !skb_page_frag_refill(prot->tag_size, pfrag, sk->sk_allocation)) {
-		dummy_tag_frag.page = dummy_page;
-		dummy_tag_frag.offset = 0;
-		pfrag = &dummy_tag_frag;
+	size = prot->tag_size;
+	page = page_frag_alloc_pg_prepare(pfrag, &offset, &size,
+					  sk->sk_allocation);
+	if (unlikely(!page)) {
+		tls_append_page(record, dummy_page, 0, prot->tag_size);
+	} else {
+		tls_append_frag(record, pfrag, page, offset, prot->tag_size);
 	}
-	tls_append_frag(record, pfrag, prot->tag_size);
 
 	/* fill prepend */
 	tls_fill_prepend(ctx, skb_frag_address(&record->frags[0]),
@@ -337,57 +357,52 @@ static void tls_device_record_close(struct sock *sk,
 			 record_type);
 }
 
-static int tls_create_new_record(struct tls_offload_context_tx *offload_ctx,
-				 struct page_frag *pfrag,
+static int tls_create_new_record(struct sock *sk,
+				 struct tls_offload_context_tx *offload_ctx,
+				 struct page_frag_cache *pfrag,
 				 size_t prepend_size)
 {
 	struct tls_record_info *record;
+	unsigned int offset;
+	struct page *page;
 	skb_frag_t *frag;
 
 	record = kmalloc(sizeof(*record), GFP_KERNEL);
 	if (!record)
 		return -ENOMEM;
 
-	frag = &record->frags[0];
-	skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
-				prepend_size);
-
-	get_page(pfrag->page);
-	pfrag->offset += prepend_size;
+	page = page_frag_alloc_pg(pfrag, &offset, prepend_size,
+				  sk->sk_allocation);
+	if (!page) {
+		kfree(record);
+		READ_ONCE(sk->sk_prot)->enter_memory_pressure(sk);
+		sk_stream_moderate_sndbuf(sk);
+		return -ENOMEM;
+	}
 
+	frag = &record->frags[0];
+	skb_frag_fill_page_desc(frag, page, offset, prepend_size);
 	record->num_frags = 1;
 	record->len = prepend_size;
 	offload_ctx->open_record = record;
 	return 0;
 }
 
-static int tls_do_allocation(struct sock *sk,
-			     struct tls_offload_context_tx *offload_ctx,
-			     struct page_frag *pfrag,
-			     size_t prepend_size)
+static struct page *tls_do_allocation(struct sock *sk,
+				      struct tls_offload_context_tx *ctx,
+				      struct page_frag_cache *pfrag,
+				      size_t prepend_size, unsigned int *offset,
+				      unsigned int *size, void **va)
 {
-	int ret;
-
-	if (!offload_ctx->open_record) {
-		if (unlikely(!skb_page_frag_refill(prepend_size, pfrag,
-						   sk->sk_allocation))) {
-			READ_ONCE(sk->sk_prot)->enter_memory_pressure(sk);
-			sk_stream_moderate_sndbuf(sk);
-			return -ENOMEM;
-		}
+	if (!ctx->open_record) {
+		int ret;
 
-		ret = tls_create_new_record(offload_ctx, pfrag, prepend_size);
+		ret = tls_create_new_record(sk, ctx, pfrag, prepend_size);
 		if (ret)
-			return ret;
-
-		if (pfrag->size > pfrag->offset)
-			return 0;
+			return NULL;
 	}
 
-	if (!sk_page_frag_refill(sk, pfrag))
-		return -ENOMEM;
-
-	return 0;
+	return sk_page_frag_alloc_prepare(sk, pfrag, offset, size, va);
 }
 
 static int tls_device_copy_data(void *addr, size_t bytes, struct iov_iter *i)
@@ -424,8 +439,8 @@ static int tls_push_data(struct sock *sk,
 	struct tls_prot_info *prot = &tls_ctx->prot_info;
 	struct tls_offload_context_tx *ctx = tls_offload_ctx_tx(tls_ctx);
 	struct tls_record_info *record;
+	struct page_frag_cache *pfrag;
 	int tls_push_record_flags;
-	struct page_frag *pfrag;
 	size_t orig_size = size;
 	u32 max_open_record_len;
 	bool more = false;
@@ -462,8 +477,13 @@ static int tls_push_data(struct sock *sk,
 	max_open_record_len = TLS_MAX_PAYLOAD_SIZE + prot->prepend_size;
 
 	do {
-		rc = tls_do_allocation(sk, ctx, pfrag, prot->prepend_size);
-		if (unlikely(rc)) {
+		unsigned int frag_offset, frag_size;
+		struct page *page;
+		void *va;
+
+		page = tls_do_allocation(sk, ctx, pfrag, prot->prepend_size,
+					 &frag_offset, &frag_size, &va);
+		if (unlikely(!page)) {
 			rc = sk_stream_wait_memory(sk, &timeo);
 			if (!rc)
 				continue;
@@ -491,8 +511,8 @@ static int tls_push_data(struct sock *sk,
 		copy = min_t(size_t, size, max_open_record_len - record->len);
 		if (copy && (flags & MSG_SPLICE_PAGES)) {
-			struct page_frag zc_pfrag;
-			struct page **pages = &zc_pfrag.page;
+			struct page *splice_page;
+			struct page **pages = &splice_page;
 			size_t off;
 
 			rc = iov_iter_extract_pages(iter, &pages,
@@ -504,24 +524,21 @@ static int tls_push_data(struct sock *sk,
 			}
 			copy = rc;
 
-			if (WARN_ON_ONCE(!sendpage_ok(zc_pfrag.page))) {
+			if (WARN_ON_ONCE(!sendpage_ok(splice_page))) {
 				iov_iter_revert(iter, copy);
 				rc = -EIO;
 				goto handle_error;
 			}
 
-			zc_pfrag.offset = off;
-			zc_pfrag.size = copy;
-			tls_append_frag(record, &zc_pfrag, copy);
+			tls_append_page(record, splice_page, off, copy);
 		} else if (copy) {
-			copy = min_t(size_t, copy, pfrag->size - pfrag->offset);
+			copy = min_t(size_t, copy, frag_size);
 
-			rc = tls_device_copy_data(page_address(pfrag->page) +
-						  pfrag->offset, copy,
-						  iter);
+			rc = tls_device_copy_data(va, copy, iter);
 			if (rc)
 				goto handle_error;
-			tls_append_frag(record, pfrag, copy);
+
+			tls_append_frag(record, pfrag, page, frag_offset, copy);
 		}
 
 		size -= copy;
-- 
2.33.0
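[Editor's illustration, not part of the patch] Beyond prepare/commit, the mptcp hunk in this patch introduces a probe step: page_frag_alloc_probe() only peeks at what is already in the cache (called with size = 1 above), so the caller can check whether new data can collapse into the pending fragment before deciding to allocate. A hedged userspace sketch of that probe-then-collapse decision follows; the names and the fixed page size are illustrative assumptions, not the kernel API.

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_PAGE_SIZE 4096

/* Illustrative stand-in for the probed state of a page_frag_cache. */
struct probe_cache_model {
	char page[MODEL_PAGE_SIZE];
	unsigned int offset;	/* first unused byte */
	int active;		/* non-zero once a page is cached */
};

/* Peek at the current page/offset without allocating anything; fails when
 * the cache is empty or fewer than 'want' bytes remain. */
static char *probe_cache(struct probe_cache_model *c, unsigned int *offset,
			 unsigned int want)
{
	if (!c->active || MODEL_PAGE_SIZE - c->offset < want)
		return NULL;
	*offset = c->offset;
	return c->page + c->offset;
}

/* The collapse test reduced to its essence: new data can be appended in
 * place only if the probed offset is exactly the tail of the previously
 * written fragment. */
static int can_collapse(const struct probe_cache_model *c,
			unsigned int frag_offset, unsigned int frag_len,
			unsigned int probed_offset)
{
	return c->active && probed_offset == frag_offset + frag_len;
}
```

When the probe fails or the tail does not line up, the caller falls back to the full prepare path and carves a new fragment, which is exactly the !dfrag_collapsed branch in the mptcp hunk.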
end of thread, other threads:[~2024-08-06 11:37 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-31 12:44 [PATCH net-next v12 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
2024-07-31 12:44 ` [PATCH net-next v12 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin
2024-07-31 13:36   ` Chuck Lever
2024-07-31 18:13   ` Alexander Duyck
2024-08-01 13:01     ` Yunsheng Lin
2024-08-01 15:21       ` Alexander Duyck
2024-08-02 10:05         ` Yunsheng Lin
2024-08-02 17:00           ` Alexander Duyck
2024-08-04  4:30             ` Yunsheng Lin
2024-08-06  0:52               ` Alexander Duyck
2024-08-06 11:37                 ` Yunsheng Lin
2024-08-04  6:44   ` Sagi Grimberg
2024-07-31 12:45 ` [PATCH net-next v12 12/14] net: replace page_frag with page_frag_cache Yunsheng Lin