* [PATCH net-next v13 01/14] mm: page_frag: add a test module for page_frag
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-09 11:08 ` Muhammad Usama Anjum
2024-08-08 12:37 ` [PATCH net-next v13 02/14] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
` (13 subsequent siblings)
14 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, Shuah Khan, linux-mm, linux-kselftest
The testing is done by ensuring that a fragment allocated
from a page_frag_cache instance is pushed into a ptr_ring
instance by a kthread bound to a specified cpu, while a
kthread bound to another specified cpu pops the fragment
from the ptr_ring and frees it.
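For context, the module can also be driven by hand; a sketch of doing so
(the parameter names come from the module source in this patch, the paths
assume the selftests tree with a configured kernel at the default KDIR,
and loading the module requires root):

```sh
cd tools/testing/selftests/mm/page_frag
make                    # builds page_frag_test.ko against ../../../../..
# Pin the push and pop kthreads to CPUs 0 and 1; insmod is expected to
# "fail" with EAGAIN because init returns -EAGAIN to avoid needing rmmod.
insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=1 test_alloc_len=2048
dmesg | tail -n 4       # the timing result line is printed here
```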
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
tools/testing/selftests/mm/Makefile | 2 +
tools/testing/selftests/mm/page_frag/Makefile | 18 ++
.../selftests/mm/page_frag/page_frag_test.c | 170 ++++++++++++++++++
tools/testing/selftests/mm/run_vmtests.sh | 9 +-
4 files changed, 198 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/mm/page_frag/Makefile
create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 901e0d07765b..e91ed29378fc 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -36,6 +36,8 @@ MAKEFLAGS += --no-builtin-rules
CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
LDLIBS = -lrt -lpthread -lm
+TEST_GEN_MODS_DIR := page_frag
+
TEST_GEN_FILES = cow
TEST_GEN_FILES += compaction_test
TEST_GEN_FILES += gup_longterm
diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile
new file mode 100644
index 000000000000..58dda74d50a3
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/Makefile
@@ -0,0 +1,18 @@
+PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= $(abspath $(PAGE_FRAG_TEST_DIR)/../../../../..)
+
+ifeq ($(V),1)
+Q =
+else
+Q = @
+endif
+
+MODULES = page_frag_test.ko
+
+obj-m += page_frag_test.o
+
+all:
+ +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) modules
+
+clean:
+ +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
new file mode 100644
index 000000000000..0e803db1ad79
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -0,0 +1,170 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for page_frag cache
+ *
+ * Copyright: linyunsheng@huawei.com
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/cpumask.h>
+#include <linux/completion.h>
+#include <linux/ptr_ring.h>
+#include <linux/kthread.h>
+
+static struct ptr_ring ptr_ring;
+static int nr_objs = 512;
+static atomic_t nthreads;
+static struct completion wait;
+static struct page_frag_cache test_frag;
+
+static int nr_test = 5120000;
+module_param(nr_test, int, 0);
+MODULE_PARM_DESC(nr_test, "number of iterations to test");
+
+static bool test_align;
+module_param(test_align, bool, 0);
+MODULE_PARM_DESC(test_align, "use align API for testing");
+
+static int test_alloc_len = 2048;
+module_param(test_alloc_len, int, 0);
+MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
+
+static int test_push_cpu;
+module_param(test_push_cpu, int, 0);
+MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment");
+
+static int test_pop_cpu;
+module_param(test_pop_cpu, int, 0);
+MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment");
+
+static int page_frag_pop_thread(void *arg)
+{
+ struct ptr_ring *ring = arg;
+ int nr = nr_test;
+
+ pr_info("page_frag pop test thread begins on cpu %d\n",
+ smp_processor_id());
+
+ while (nr > 0) {
+ void *obj = __ptr_ring_consume(ring);
+
+ if (obj) {
+ nr--;
+ page_frag_free(obj);
+ } else {
+ cond_resched();
+ }
+ }
+
+ if (atomic_dec_and_test(&nthreads))
+ complete(&wait);
+
+ pr_info("page_frag pop test thread exits on cpu %d\n",
+ smp_processor_id());
+
+ return 0;
+}
+
+static int page_frag_push_thread(void *arg)
+{
+ struct ptr_ring *ring = arg;
+ int nr = nr_test;
+
+ pr_info("page_frag push test thread begins on cpu %d\n",
+ smp_processor_id());
+
+ while (nr > 0) {
+ void *va;
+ int ret;
+
+ if (test_align) {
+ va = page_frag_alloc_align(&test_frag, test_alloc_len,
+ GFP_KERNEL, SMP_CACHE_BYTES);
+
+ WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1),
+ "unaligned va returned\n");
+ } else {
+ va = page_frag_alloc(&test_frag, test_alloc_len, GFP_KERNEL);
+ }
+
+ if (!va)
+ continue;
+
+ ret = __ptr_ring_produce(ring, va);
+ if (ret) {
+ page_frag_free(va);
+ cond_resched();
+ } else {
+ nr--;
+ }
+ }
+
+ pr_info("page_frag push test thread exits on cpu %d\n",
+ smp_processor_id());
+
+ if (atomic_dec_and_test(&nthreads))
+ complete(&wait);
+
+ return 0;
+}
+
+static int __init page_frag_test_init(void)
+{
+ struct task_struct *tsk_push, *tsk_pop;
+ ktime_t start;
+ u64 duration;
+ int ret;
+
+ test_frag.va = NULL;
+ atomic_set(&nthreads, 2);
+ init_completion(&wait);
+
+ if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0 ||
+ !cpu_active(test_push_cpu) || !cpu_active(test_pop_cpu))
+ return -EINVAL;
+
+ ret = ptr_ring_init(&ptr_ring, nr_objs, GFP_KERNEL);
+ if (ret)
+ return ret;
+
+ tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_ring,
+ test_push_cpu, "page_frag_push");
+ if (IS_ERR(tsk_push))
+ return PTR_ERR(tsk_push);
+
+ tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_ring,
+ test_pop_cpu, "page_frag_pop");
+ if (IS_ERR(tsk_pop)) {
+ kthread_stop(tsk_push);
+ return PTR_ERR(tsk_pop);
+ }
+
+ start = ktime_get();
+ wake_up_process(tsk_push);
+ wake_up_process(tsk_pop);
+
+ pr_info("waiting for test to complete\n");
+ wait_for_completion(&wait);
+
+ duration = (u64)ktime_us_delta(ktime_get(), start);
+ pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
+ test_align ? "aligned" : "non-aligned", duration);
+
+ ptr_ring_cleanup(&ptr_ring, NULL);
+ page_frag_cache_drain(&test_frag);
+
+ return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yunsheng Lin <linyunsheng@huawei.com>");
+MODULE_DESCRIPTION("Test module for page_frag");
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 03ac4f2e1cce..3636d984b786 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -75,6 +75,8 @@ separated by spaces:
read-only VMAs
- mdwe
test prctl(PR_SET_MDWE, ...)
+- page_frag
+ test handling of page fragment allocation and freeing
example: ./run_vmtests.sh -t "hmm mmap ksm"
EOF
@@ -231,7 +233,8 @@ run_test() {
("$@" 2>&1) | tap_prefix
local ret=${PIPESTATUS[0]}
count_total=$(( count_total + 1 ))
- if [ $ret -eq 0 ]; then
+ # page_frag_test.ko returns 11 (EAGAIN) when insmod'ing to avoid rmmod
+ if [ $ret -eq 0 ] || [ $ret -eq 11 -a "${CATEGORY}" == "page_frag" ]; then
count_pass=$(( count_pass + 1 ))
echo "[PASS]" | tap_prefix
echo "ok ${count_total} ${test}" | tap_output
@@ -453,6 +456,10 @@ CATEGORY="mkdirty" run_test ./mkdirty
CATEGORY="mdwe" run_test ./mdwe_test
+CATEGORY="page_frag" run_test insmod ./page_frag/page_frag_test.ko
+
+CATEGORY="page_frag" run_test insmod ./page_frag/page_frag_test.ko test_alloc_len=12 test_align=1
+
echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
echo "1..${count_total}" | tap_output
--
2.33.0
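Run as a standalone benchmark, the only result the module emits is the
pr_info() timing line. A minimal sketch of pulling the duration out of the
kernel log, assuming the format string in page_frag_test_init() above (the
sample line below is fabricated, not real output):

```shell
# Extract the microsecond figure from the module's result line, e.g. from
# `dmesg | grep 'testing took'`. A made-up sample line stands in here.
line='[  123.456789] 5120000 of iterations for non-aligned testing took: 1843672us'
us=$(printf '%s\n' "$line" | sed -n 's/.*took: \([0-9][0-9]*\)us.*/\1/p')
echo "$us"
```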
* Re: [PATCH net-next v13 01/14] mm: page_frag: add a test module for page_frag
2024-08-08 12:37 ` [PATCH net-next v13 01/14] mm: page_frag: add a test module for page_frag Yunsheng Lin
@ 2024-08-09 11:08 ` Muhammad Usama Anjum
2024-08-09 12:29 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Muhammad Usama Anjum @ 2024-08-09 11:08 UTC
To: Yunsheng Lin, davem, kuba, pabeni
Cc: Usama.Anjum, netdev, linux-kernel, Alexander Duyck, Andrew Morton,
Shuah Khan, linux-mm, linux-kselftest
On 8/8/24 5:37 PM, Yunsheng Lin wrote:
> The testing is done by ensuring that the fragment allocated
> from a frag_frag_cache instance is pushed into a ptr_ring
> instance in a kthread binded to a specified cpu, and a kthread
> binded to a specified cpu will pop the fragment from the
> ptr_ring and free the fragment.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> tools/testing/selftests/mm/Makefile | 2 +
> tools/testing/selftests/mm/page_frag/Makefile | 18 ++
> .../selftests/mm/page_frag/page_frag_test.c | 170 ++++++++++++++++++
Why are you adding a test module in kselftests? Have you considered
adding a KUnit test instead? KUnit is better suited to testing the
kernel's internal APIs, which aren't exposed to userspace.
> tools/testing/selftests/mm/run_vmtests.sh | 9 +-
> 4 files changed, 198 insertions(+), 1 deletion(-)
> create mode 100644 tools/testing/selftests/mm/page_frag/Makefile
> create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c

[...]

> +CATEGORY="page_frag" run_test insmod ./page_frag/page_frag_test.ko
> +
> +CATEGORY="page_frag" run_test insmod ./page_frag/page_frag_test.ko test_alloc_len=12 test_align=1
> +
You are loading the test module. How will we verify whether the test
passed or failed? There must be a way to mark the test as passed or
failed after running it. You can certainly parse dmesg to get the
results, but that would be complex to do. KUnit is the way to go, as
all such tooling is already present there.
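For what it's worth, the pass condition used by run_vmtests.sh can be
expressed as a small helper so the intent is explicit; a sketch
(`check_ret` is a hypothetical name, not part of the patch):

```shell
# Map the exit status of "run_test insmod page_frag_test.ko" to pass/fail.
# The module deliberately returns -EAGAIN (11) on success so it never stays
# loaded and no rmmod is needed; 0 remains the pass status for other tests.
check_ret() {
	local ret=$1 category=$2

	if [ "$ret" -eq 0 ]; then
		return 0
	elif [ "$ret" -eq 11 ] && [ "$category" = "page_frag" ]; then
		return 0
	fi
	return 1
}

check_ret 11 page_frag && echo PASS || echo FAIL
```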
> echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
> echo "1..${count_total}" | tap_output
>
--
BR,
Muhammad Usama Anjum
* Re: [PATCH net-next v13 01/14] mm: page_frag: add a test module for page_frag
2024-08-09 11:08 ` Muhammad Usama Anjum
@ 2024-08-09 12:29 ` Yunsheng Lin
0 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-09 12:29 UTC
To: Muhammad Usama Anjum, davem, kuba, pabeni
Cc: netdev, linux-kernel, Alexander Duyck, Andrew Morton, Shuah Khan,
linux-mm, linux-kselftest
On 2024/8/9 19:08, Muhammad Usama Anjum wrote:
> On 8/8/24 5:37 PM, Yunsheng Lin wrote:
>> The testing is done by ensuring that the fragment allocated
>> from a frag_frag_cache instance is pushed into a ptr_ring
>> instance in a kthread binded to a specified cpu, and a kthread
>> binded to a specified cpu will pop the fragment from the
>> ptr_ring and free the fragment.
>>
>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> ---
>> tools/testing/selftests/mm/Makefile | 2 +
>> tools/testing/selftests/mm/page_frag/Makefile | 18 ++
>> .../selftests/mm/page_frag/page_frag_test.c | 170 ++++++++++++++++++
> Why are you adding a test module in kselftests? Have you considered
> adding Kunit instead? Kunit is more suited to test kernel's internal
> APIs which aren't exposed to userspace.
The main intent is to measure the performance impact of changes
related to page_frag, which is very much performance sensitive, so I
am guessing KUnit is not the right choice here, if I am understanding
it correctly.
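Given that intent, a before/after comparison of two reported durations
could be sketched like this (both microsecond values are invented, not
measured):

```shell
# Percentage delta between a baseline and a patched run of page_frag_test,
# using the microsecond values reported in dmesg (fabricated numbers here).
base_us=2000000
new_us=1900000
awk -v a="$base_us" -v b="$new_us" \
	'BEGIN { printf "%.1f%% change\n", (b - a) * 100.0 / a }'
```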
>
>> tools/testing/selftests/mm/run_vmtests.sh | 9 +-
>> 4 files changed, 198 insertions(+), 1 deletion(-)

[...]

>> +CATEGORY="page_frag" run_test insmod ./page_frag/page_frag_test.ko
>> +
>> +CATEGORY="page_frag" run_test insmod ./page_frag/page_frag_test.ko test_alloc_len=12 test_align=1
>> +
> You are loading the test module. How will we verify if the test passed
> or failed? There must be a way to mark the test passed or failed after
I am not sure that matters much for the page_frag_test module, as it
already returns -EAGAIN for the normal case, as mentioned in:
https://patchwork.kernel.org/project/netdevbpf/patch/20240731124505.2903877-2-linyunsheng@huawei.com/#25960885
> running it. You can definitely parse the dmesg to get results. But it
> would be complex to do it. KUnit is way to go as all such tools are
> already present there.
* [PATCH net-next v13 02/14] mm: move the page fragment allocator from page_alloc into its own file
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 01/14] mm: page_frag: add a test module for page_frag Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-14 15:33 ` Alexander H Duyck
2024-08-14 20:22 ` Andrew Morton
2024-08-08 12:37 ` [PATCH net-next v13 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
` (12 subsequent siblings)
14 siblings, 2 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, David Howells,
Alexander Duyck, Andrew Morton, Eric Dumazet, Shuah Khan,
linux-mm, linux-kselftest
Inspired by [1], move the page fragment allocator from page_alloc
into its own .c file and header file, as we are about to make more
changes to it so that it can replace another page_frag implementation
in sock.c.

As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h
in sched.h causes a compiler error due to the interdependence between
mm_types.h and mm.h for asm-offsets.c, see [2]. Avoid the compiler
error by moving 'struct page_frag_cache' to mm_types_task.h, as
suggested by Alexander, see [3].
1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/
CC: David Howells <dhowells@redhat.com>
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/gfp.h | 22 ---
include/linux/mm_types.h | 18 ---
include/linux/mm_types_task.h | 18 +++
include/linux/page_frag_cache.h | 31 ++++
include/linux/skbuff.h | 1 +
mm/Makefile | 1 +
mm/page_alloc.c | 136 ----------------
mm/page_frag_cache.c | 145 ++++++++++++++++++
.../selftests/mm/page_frag/page_frag_test.c | 2 +-
9 files changed, 197 insertions(+), 177 deletions(-)
create mode 100644 include/linux/page_frag_cache.h
create mode 100644 mm/page_frag_cache.c
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index f53f76e0b17e..01a49be7c98d 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
extern void __free_pages(struct page *page, unsigned int order);
extern void free_pages(unsigned long addr, unsigned int order);
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
- gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align)
-{
- WARN_ON_ONCE(!is_power_of_2(align));
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask)
-{
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
#define __free_page(page) __free_pages((page), 0)
#define free_page(addr) free_pages((addr), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 485424979254..843d75412105 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
*/
#define STRUCT_PAGE_MAX_SHIFT (order_base_2(sizeof(struct page)))
-#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
/*
* page_private can be used on tail pages. However, PagePrivate is only
* checked by the VM on the head page. So page_private on the tail pages
@@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio)
return folio->private;
}
-struct page_frag_cache {
- void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- __u16 offset;
- __u16 size;
-#else
- __u32 offset;
-#endif
- /* we maintain a pagecount bias, so that we dont dirty cache line
- * containing page->_refcount every time we allocate a fragment.
- */
- unsigned int pagecnt_bias;
- bool pfmemalloc;
-};
-
typedef unsigned long vm_flags_t;
/*
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index a2f6179b672b..cdc1e3696439 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -8,6 +8,7 @@
* (These are defined separately to decouple sched.h from mm_types.h as much as possible.)
*/
+#include <linux/align.h>
#include <linux/types.h>
#include <asm/page.h>
@@ -46,6 +47,23 @@ struct page_frag {
#endif
};
+#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+struct page_frag_cache {
+ void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ __u16 offset;
+ __u16 size;
+#else
+ __u32 offset;
+#endif
+ /* we maintain a pagecount bias, so that we dont dirty cache line
+ * containing page->_refcount every time we allocate a fragment.
+ */
+ unsigned int pagecnt_bias;
+ bool pfmemalloc;
+};
+
/* Track pages that require TLB flushes */
struct tlbflush_unmap_batch {
#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..a758cb65a9b3
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include <linux/log2.h>
+#include <linux/types.h>
+#include <linux/mm_types_task.h>
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+ gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align)
+{
+ WARN_ON_ONCE(!is_power_of_2(align));
+ return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask)
+{
+ return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index cf8f6ce06742..7482997c719f 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
#include <linux/in6.h>
#include <linux/if_packet.h>
#include <linux/llist.h>
+#include <linux/page_frag_cache.h>
#include <net/flow.h>
#if IS_ENABLED(CONFIG_NF_CONNTRACK)
#include <linux/netfilter/nf_conntrack_common.h>
diff --git a/mm/Makefile b/mm/Makefile
index d2915f8c9dc0..e9d342fa8058 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
obj-y += page-alloc.o
+obj-y += page_frag_cache.o
obj-y += init-mm.o
obj-y += memblock.o
obj-y += $(memory-hotplug-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 28f80daf5c04..7e830613da1b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4803,142 +4803,6 @@ void free_pages(unsigned long addr, unsigned int order)
EXPORT_SYMBOL(free_pages);
-/*
- * Page Fragment:
- * An arbitrary-length arbitrary-offset area of memory which resides
- * within a 0 or higher order page. Multiple fragments within that page
- * are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments. This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
- gfp_t gfp_mask)
-{
- struct page *page = NULL;
- gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
- __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
- page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
- PAGE_FRAG_CACHE_MAX_ORDER);
- nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
- if (unlikely(!page))
- page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
- nc->va = page ? page_address(page) : NULL;
-
- return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
- if (!nc->va)
- return;
-
- __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
- nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
- VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
- if (page_ref_sub_and_test(page, count))
- free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align_mask)
-{
- unsigned int size = PAGE_SIZE;
- struct page *page;
- int offset;
-
- if (unlikely(!nc->va)) {
-refill:
- page = __page_frag_cache_refill(nc, gfp_mask);
- if (!page)
- return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
- /* Even if we own the page, we do not use atomic_set().
- * This would break get_page_unless_zero() users.
- */
- page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
- /* reset page count bias and offset to start of new frag */
- nc->pfmemalloc = page_is_pfmemalloc(page);
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- nc->offset = size;
- }
-
- offset = nc->offset - fragsz;
- if (unlikely(offset < 0)) {
- page = virt_to_page(nc->va);
-
- if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
- goto refill;
-
- if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
- goto refill;
- }
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
- /* OK, page count is 0, we can safely set it */
- set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
- /* reset page count bias and offset to start of new frag */
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- offset = size - fragsz;
- if (unlikely(offset < 0)) {
- /*
- * The caller is trying to allocate a fragment
- * with fragsz > PAGE_SIZE but the cache isn't big
- * enough to satisfy the request, this may
- * happen in low memory conditions.
- * We don't release the cache page because
- * it could make memory pressure worse
- * so we simply return NULL here.
- */
- return NULL;
- }
- }
-
- nc->pagecnt_bias--;
- offset &= align_mask;
- nc->offset = offset;
-
- return nc->va + offset;
-}
-EXPORT_SYMBOL(__page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
- struct page *page = virt_to_head_page(addr);
-
- if (unlikely(put_page_testzero(page)))
- free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
static void *make_alloc_exact(unsigned long addr, unsigned int order,
size_t size)
{
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
new file mode 100644
index 000000000000..609a485cd02a
--- /dev/null
+++ b/mm/page_frag_cache.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ * An arbitrary-length arbitrary-offset area of memory which resides within a
+ * 0 or higher order page. Multiple fragments within that page are
+ * individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments. This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include <linux/export.h>
+#include <linux/gfp_types.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/page_frag_cache.h>
+#include "internal.h"
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
+{
+ struct page *page = NULL;
+ gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+ __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+ page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+ PAGE_FRAG_CACHE_MAX_ORDER);
+ nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+ if (unlikely(!page))
+ page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+ nc->va = page ? page_address(page) : NULL;
+
+ return page;
+}
+
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+ if (!nc->va)
+ return;
+
+ __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+ nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
+void __page_frag_cache_drain(struct page *page, unsigned int count)
+{
+ VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+
+ if (page_ref_sub_and_test(page, count))
+ free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(__page_frag_cache_drain);
+
+void *__page_frag_alloc_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align_mask)
+{
+ unsigned int size = PAGE_SIZE;
+ struct page *page;
+ int offset;
+
+ if (unlikely(!nc->va)) {
+refill:
+ page = __page_frag_cache_refill(nc, gfp_mask);
+ if (!page)
+ return NULL;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* if size can vary use size else just use PAGE_SIZE */
+ size = nc->size;
+#endif
+ /* Even if we own the page, we do not use atomic_set().
+ * This would break get_page_unless_zero() users.
+ */
+ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+ /* reset page count bias and offset to start of new frag */
+ nc->pfmemalloc = page_is_pfmemalloc(page);
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ nc->offset = size;
+ }
+
+ offset = nc->offset - fragsz;
+ if (unlikely(offset < 0)) {
+ page = virt_to_page(nc->va);
+
+ if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+ goto refill;
+
+ if (unlikely(nc->pfmemalloc)) {
+ free_unref_page(page, compound_order(page));
+ goto refill;
+ }
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* if size can vary use size else just use PAGE_SIZE */
+ size = nc->size;
+#endif
+ /* OK, page count is 0, we can safely set it */
+ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+ /* reset page count bias and offset to start of new frag */
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ offset = size - fragsz;
+ if (unlikely(offset < 0)) {
+ /*
+ * The caller is trying to allocate a fragment
+ * with fragsz > PAGE_SIZE but the cache isn't big
+ * enough to satisfy the request, this may
+ * happen in low memory conditions.
+ * We don't release the cache page because
+ * it could make memory pressure worse
+ * so we simply return NULL here.
+ */
+ return NULL;
+ }
+ }
+
+ nc->pagecnt_bias--;
+ offset &= align_mask;
+ nc->offset = offset;
+
+ return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+ struct page *page = virt_to_head_page(addr);
+
+ if (unlikely(put_page_testzero(page)))
+ free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 0e803db1ad79..4a009122991e 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -6,12 +6,12 @@
* Copyright: linyunsheng@huawei.com
*/
-#include <linux/mm.h>
#include <linux/module.h>
#include <linux/cpumask.h>
#include <linux/completion.h>
#include <linux/ptr_ring.h>
#include <linux/kthread.h>
+#include <linux/page_frag_cache.h>
static struct ptr_ring ptr_ring;
static int nr_objs = 512;
--
2.33.0
^ permalink raw reply related [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 02/14] mm: move the page fragment allocator from page_alloc into its own file
2024-08-08 12:37 ` [PATCH net-next v13 02/14] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
@ 2024-08-14 15:33 ` Alexander H Duyck
2024-08-14 20:22 ` Andrew Morton
1 sibling, 0 replies; 47+ messages in thread
From: Alexander H Duyck @ 2024-08-14 15:33 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, David Howells, Andrew Morton, Eric Dumazet,
Shuah Khan, linux-mm, linux-kselftest
On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> Inspired by [1], move the page fragment allocator from page_alloc
> into its own .c file and header file, as we are about to make more
> changes to it so it can replace another page_frag implementation in
> sock.c.
>
> As this patchset is going to replace 'struct page_frag' with
> 'struct page_frag_cache' in sched.h, including page_frag_cache.h
> in sched.h has a compiler error caused by interdependence between
> mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid the compiler
> error by moving 'struct page_frag_cache' to mm_types_task.h as
> suggested by Alexander, see [3].
>
> 1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
> 2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
> 3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/
> CC: David Howells <dhowells@redhat.com>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> include/linux/gfp.h | 22 ---
> include/linux/mm_types.h | 18 ---
> include/linux/mm_types_task.h | 18 +++
> include/linux/page_frag_cache.h | 31 ++++
> include/linux/skbuff.h | 1 +
> mm/Makefile | 1 +
> mm/page_alloc.c | 136 ----------------
> mm/page_frag_cache.c | 145 ++++++++++++++++++
> .../selftests/mm/page_frag/page_frag_test.c | 2 +-
> 9 files changed, 197 insertions(+), 177 deletions(-)
> create mode 100644 include/linux/page_frag_cache.h
> create mode 100644 mm/page_frag_cache.c
>
>
...
> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> new file mode 100644
> index 000000000000..a758cb65a9b3
> --- /dev/null
> +++ b/include/linux/page_frag_cache.h
> @@ -0,0 +1,31 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _LINUX_PAGE_FRAG_CACHE_H
> +#define _LINUX_PAGE_FRAG_CACHE_H
> +
> +#include <linux/log2.h>
> +#include <linux/types.h>
> +#include <linux/mm_types_task.h>
> +
Minor nit. These should usually be in alphabetical order. So
mm_types_task.h should be between log2.h and types.h.
* Re: [PATCH net-next v13 02/14] mm: move the page fragment allocator from page_alloc into its own file
2024-08-08 12:37 ` [PATCH net-next v13 02/14] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
2024-08-14 15:33 ` Alexander H Duyck
@ 2024-08-14 20:22 ` Andrew Morton
1 sibling, 0 replies; 47+ messages in thread
From: Andrew Morton @ 2024-08-14 20:22 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, David Howells,
Alexander Duyck, Eric Dumazet, Shuah Khan, linux-mm,
linux-kselftest
On Thu, 8 Aug 2024 20:37:02 +0800 Yunsheng Lin <linyunsheng@huawei.com> wrote:
> Inspired by [1], move the page fragment allocator from page_alloc
> into its own .c file and header file, as we are about to make more
> changes to it so it can replace another page_frag implementation in
> sock.c.
Acked-by: Andrew Morton <akpm@linux-foundation.org>
This presently has no conflicts with mm.git.
* [PATCH net-next v13 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align()
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 01/14] mm: page_frag: add a test module for page_frag Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 02/14] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin
` (11 subsequent siblings)
14 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
We are about to use the page_frag_alloc_*() API not just to
allocate memory for skb->data, but also to do the memory
allocation for skb frags. Currently the page_frag
implementation in the mm subsystem runs the offset as a
countdown rather than a count-up value; there may be several
advantages to that as mentioned in [1], but it also has some
disadvantages, for example, it may prevent skb frag coalescing
and more accurate cache prefetching.
We have a trade-off to make in order to have a unified
implementation and API for page_frag, so use an initial zero
offset in this patch; the following patch will make some
optimizations to avoid the disadvantages as much as possible.
Rename 'offset' to 'remaining' to retain the countdown
behavior as a 'remaining countdown' instead of an 'offset
countdown'. The renaming also enables a single
'fragsz > remaining' check for the case of the cache not being
big enough, which should be the fast path if we ensure
'remaining' is zero when 'va' == NULL by memset'ing
'struct page_frag_cache' in page_frag_cache_init() and
page_frag_cache_drain().
1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/mm_types_task.h | 4 +--
mm/page_frag_cache.c | 52 +++++++++++++++++------------------
2 files changed, 28 insertions(+), 28 deletions(-)
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index cdc1e3696439..b1c54b2b9308 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -52,10 +52,10 @@ struct page_frag {
struct page_frag_cache {
void *va;
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- __u16 offset;
+ __u16 remaining;
__u16 size;
#else
- __u32 offset;
+ __u32 remaining;
#endif
/* we maintain a pagecount bias, so that we dont dirty cache line
* containing page->_refcount every time we allocate a fragment.
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 609a485cd02a..c5bc72cf018a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
{
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ unsigned int size = nc->size;
+#else
unsigned int size = PAGE_SIZE;
+#endif
+ unsigned int remaining;
struct page *page;
- int offset;
if (unlikely(!nc->va)) {
refill:
@@ -82,14 +86,27 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
*/
page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
- /* reset page count bias and offset to start of new frag */
+ /* reset page count bias and remaining to start of new frag */
nc->pfmemalloc = page_is_pfmemalloc(page);
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- nc->offset = size;
+ nc->remaining = size;
}
- offset = nc->offset - fragsz;
- if (unlikely(offset < 0)) {
+ remaining = nc->remaining & align_mask;
+ if (unlikely(remaining < fragsz)) {
+ if (unlikely(fragsz > PAGE_SIZE)) {
+ /*
+ * The caller is trying to allocate a fragment
+ * with fragsz > PAGE_SIZE but the cache isn't big
+ * enough to satisfy the request, this may
+ * happen in low memory conditions.
+ * We don't release the cache page because
+ * it could make memory pressure worse
+ * so we simply return NULL here.
+ */
+ return NULL;
+ }
+
page = virt_to_page(nc->va);
if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -100,35 +117,18 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
goto refill;
}
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
/* OK, page count is 0, we can safely set it */
set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
- /* reset page count bias and offset to start of new frag */
+ /* reset page count bias and remaining to start of new frag */
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- offset = size - fragsz;
- if (unlikely(offset < 0)) {
- /*
- * The caller is trying to allocate a fragment
- * with fragsz > PAGE_SIZE but the cache isn't big
- * enough to satisfy the request, this may
- * happen in low memory conditions.
- * We don't release the cache page because
- * it could make memory pressure worse
- * so we simply return NULL here.
- */
- return NULL;
- }
+ remaining = size;
}
nc->pagecnt_bias--;
- offset &= align_mask;
- nc->offset = offset;
+ nc->remaining = remaining - fragsz;
- return nc->va + offset;
+ return nc->va + (size - remaining);
}
EXPORT_SYMBOL(__page_frag_alloc_align);
--
2.33.0
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (2 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-14 15:49 ` Alexander H Duyck
2024-08-08 12:37 ` [PATCH net-next v13 05/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
` (10 subsequent siblings)
14 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Subbaraya Sundeep, Chuck Lever, Sagi Grimberg, Jeroen de Borst,
Praveen Kaligineedi, Shailend Chand, Eric Dumazet, Tony Nguyen,
Przemek Kitszel, Sunil Goutham, Geetha sowjanya, hariprasad,
Felix Fietkau, Sean Wang, Mark Lee, Lorenzo Bianconi,
Matthias Brugger, AngeloGioacchino Del Regno, Keith Busch,
Jens Axboe, Christoph Hellwig, Chaitanya Kulkarni,
Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Andrew Morton,
Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
John Fastabend, Andrii Nakryiko, Martin KaFai Lau,
Eduard Zingerman, Song Liu, Yonghong Song, KP Singh,
Stanislav Fomichev, Hao Luo, Jiri Olsa, David Howells,
Marc Dionne, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo,
Tom Talpey, Trond Myklebust, Anna Schumaker, Shuah Khan,
intel-wired-lan, linux-arm-kernel, linux-mediatek, linux-nvme,
kvm, virtualization, linux-mm, bpf, linux-afs, linux-nfs,
linux-kselftest
Currently the page_frag API returns a 'virtual address' or
'va' when allocating and expects a 'virtual address' or 'va'
as input when freeing.
We are about to support new use cases in which the caller
needs to deal with 'struct page', or with both 'va' and
'struct page'. In order to differentiate the API handling
between 'va' and 'struct page', add a '_va' suffix to the
corresponding APIs, mirroring the page_pool_alloc_va() API of
the page_pool, so that callers expecting to deal with va,
page, or both va and page may call the page_frag_alloc_va*,
page_frag_alloc_pg*, or page_frag_alloc* APIs accordingly.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com>
Acked-by: Chuck Lever <chuck.lever@oracle.com>
Acked-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
.../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++--
.../marvell/octeontx2/nic/otx2_common.c | 2 +-
drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++--
drivers/nvme/host/tcp.c | 8 +++----
drivers/nvme/target/tcp.c | 22 +++++++++----------
drivers/vhost/net.c | 6 ++---
include/linux/page_frag_cache.h | 21 +++++++++---------
include/linux/skbuff.h | 2 +-
kernel/bpf/cpumap.c | 2 +-
mm/page_frag_cache.c | 12 +++++-----
net/core/skbuff.c | 16 +++++++-------
net/core/xdp.c | 2 +-
net/rxrpc/txbuf.c | 15 +++++++------
net/sunrpc/svcsock.c | 6 ++---
.../selftests/mm/page_frag/page_frag_test.c | 13 ++++++-----
19 files changed, 75 insertions(+), 70 deletions(-)
diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c
index acb73d4d0de6..b6c10100e462 100644
--- a/drivers/net/ethernet/google/gve/gve_rx.c
+++ b/drivers/net/ethernet/google/gve/gve_rx.c
@@ -729,7 +729,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
total_len = headroom + SKB_DATA_ALIGN(len) +
SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
- frame = page_frag_alloc(&rx->page_cache, total_len, GFP_ATOMIC);
+ frame = page_frag_alloc_va(&rx->page_cache, total_len, GFP_ATOMIC);
if (!frame) {
u64_stats_update_begin(&rx->statss);
rx->xdp_alloc_fails++;
@@ -742,7 +742,7 @@ static int gve_xdp_redirect(struct net_device *dev, struct gve_rx_ring *rx,
err = xdp_do_redirect(dev, &new, xdp_prog);
if (err)
- page_frag_free(frame);
+ page_frag_free_va(frame);
return err;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8d25b6981269..00c706de2b82 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -126,7 +126,7 @@ ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf)
dev_kfree_skb_any(tx_buf->skb);
break;
case ICE_TX_BUF_XDP_TX:
- page_frag_free(tx_buf->raw_buf);
+ page_frag_free_va(tx_buf->raw_buf);
break;
case ICE_TX_BUF_XDP_XMIT:
xdp_return_frame(tx_buf->xdpf);
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index feba314a3fe4..6379f57d8228 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -148,7 +148,7 @@ static inline int ice_skb_pad(void)
* @ICE_TX_BUF_DUMMY: dummy Flow Director packet, unmap and kfree()
* @ICE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA
* @ICE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats
- * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free(), stats
+ * @ICE_TX_BUF_XDP_TX: &xdp_buff, unmap and page_frag_free_va(), stats
* @ICE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats
* @ICE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats
*/
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index 2719f0e20933..a1a41a14df0d 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -250,7 +250,7 @@ ice_clean_xdp_tx_buf(struct device *dev, struct ice_tx_buf *tx_buf,
switch (tx_buf->type) {
case ICE_TX_BUF_XDP_TX:
- page_frag_free(tx_buf->raw_buf);
+ page_frag_free_va(tx_buf->raw_buf);
break;
case ICE_TX_BUF_XDP_XMIT:
xdp_return_frame_bulk(tx_buf->xdpf, bq);
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index 149911e3002a..eef16a909f85 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -302,7 +302,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
/* free the skb */
if (ring_is_xdp(tx_ring))
- page_frag_free(tx_buffer->data);
+ page_frag_free_va(tx_buffer->data);
else
napi_consume_skb(tx_buffer->skb, napi_budget);
@@ -2412,7 +2412,7 @@ static void ixgbevf_clean_tx_ring(struct ixgbevf_ring *tx_ring)
/* Free all the Tx ring sk_buffs */
if (ring_is_xdp(tx_ring))
- page_frag_free(tx_buffer->data);
+ page_frag_free_va(tx_buffer->data);
else
dev_kfree_skb_any(tx_buffer->skb);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 87d5776e3b88..a485e988fa1d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -553,7 +553,7 @@ static int __otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
*dma = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
if (unlikely(dma_mapping_error(pfvf->dev, *dma))) {
- page_frag_free(buf);
+ page_frag_free_va(buf);
return -ENOMEM;
}
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.c b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
index 7063c78bd35f..c4228719f8a4 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.c
@@ -142,8 +142,8 @@ mtk_wed_wo_queue_refill(struct mtk_wed_wo *wo, struct mtk_wed_wo_queue *q,
dma_addr_t addr;
void *buf;
- buf = page_frag_alloc(&q->cache, q->buf_size,
- GFP_ATOMIC | GFP_DMA32);
+ buf = page_frag_alloc_va(&q->cache, q->buf_size,
+ GFP_ATOMIC | GFP_DMA32);
if (!buf)
break;
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index a2a47d3ab99f..86906bc505de 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -506,7 +506,7 @@ static void nvme_tcp_exit_request(struct blk_mq_tag_set *set,
{
struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
- page_frag_free(req->pdu);
+ page_frag_free_va(req->pdu);
}
static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
@@ -520,7 +520,7 @@ static int nvme_tcp_init_request(struct blk_mq_tag_set *set,
struct nvme_tcp_queue *queue = &ctrl->queues[queue_idx];
u8 hdgst = nvme_tcp_hdgst_len(queue);
- req->pdu = page_frag_alloc(&queue->pf_cache,
+ req->pdu = page_frag_alloc_va(&queue->pf_cache,
sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
GFP_KERNEL | __GFP_ZERO);
if (!req->pdu)
@@ -1337,7 +1337,7 @@ static void nvme_tcp_free_async_req(struct nvme_tcp_ctrl *ctrl)
{
struct nvme_tcp_request *async = &ctrl->async_req;
- page_frag_free(async->pdu);
+ page_frag_free_va(async->pdu);
}
static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
@@ -1346,7 +1346,7 @@ static int nvme_tcp_alloc_async_req(struct nvme_tcp_ctrl *ctrl)
struct nvme_tcp_request *async = &ctrl->async_req;
u8 hdgst = nvme_tcp_hdgst_len(queue);
- async->pdu = page_frag_alloc(&queue->pf_cache,
+ async->pdu = page_frag_alloc_va(&queue->pf_cache,
sizeof(struct nvme_tcp_cmd_pdu) + hdgst,
GFP_KERNEL | __GFP_ZERO);
if (!async->pdu)
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 5bff0d5464d1..560df3db2f82 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -1463,24 +1463,24 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
c->queue = queue;
c->req.port = queue->port->nport;
- c->cmd_pdu = page_frag_alloc(&queue->pf_cache,
+ c->cmd_pdu = page_frag_alloc_va(&queue->pf_cache,
sizeof(*c->cmd_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
if (!c->cmd_pdu)
return -ENOMEM;
c->req.cmd = &c->cmd_pdu->cmd;
- c->rsp_pdu = page_frag_alloc(&queue->pf_cache,
+ c->rsp_pdu = page_frag_alloc_va(&queue->pf_cache,
sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
if (!c->rsp_pdu)
goto out_free_cmd;
c->req.cqe = &c->rsp_pdu->cqe;
- c->data_pdu = page_frag_alloc(&queue->pf_cache,
+ c->data_pdu = page_frag_alloc_va(&queue->pf_cache,
sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
if (!c->data_pdu)
goto out_free_rsp;
- c->r2t_pdu = page_frag_alloc(&queue->pf_cache,
+ c->r2t_pdu = page_frag_alloc_va(&queue->pf_cache,
sizeof(*c->r2t_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
if (!c->r2t_pdu)
goto out_free_data;
@@ -1495,20 +1495,20 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
return 0;
out_free_data:
- page_frag_free(c->data_pdu);
+ page_frag_free_va(c->data_pdu);
out_free_rsp:
- page_frag_free(c->rsp_pdu);
+ page_frag_free_va(c->rsp_pdu);
out_free_cmd:
- page_frag_free(c->cmd_pdu);
+ page_frag_free_va(c->cmd_pdu);
return -ENOMEM;
}
static void nvmet_tcp_free_cmd(struct nvmet_tcp_cmd *c)
{
- page_frag_free(c->r2t_pdu);
- page_frag_free(c->data_pdu);
- page_frag_free(c->rsp_pdu);
- page_frag_free(c->cmd_pdu);
+ page_frag_free_va(c->r2t_pdu);
+ page_frag_free_va(c->data_pdu);
+ page_frag_free_va(c->rsp_pdu);
+ page_frag_free_va(c->cmd_pdu);
}
static int nvmet_tcp_alloc_cmds(struct nvmet_tcp_queue *queue)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f16279351db5..6691fac01e0d 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -686,8 +686,8 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
return -ENOSPC;
buflen += SKB_DATA_ALIGN(len + pad);
- buf = page_frag_alloc_align(&net->pf_cache, buflen, GFP_KERNEL,
- SMP_CACHE_BYTES);
+ buf = page_frag_alloc_va_align(&net->pf_cache, buflen, GFP_KERNEL,
+ SMP_CACHE_BYTES);
if (unlikely(!buf))
return -ENOMEM;
@@ -734,7 +734,7 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
return 0;
err:
- page_frag_free(buf);
+ page_frag_free_va(buf);
return ret;
}
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index a758cb65a9b3..ef038a07925c 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -9,23 +9,24 @@
void page_frag_cache_drain(struct page_frag_cache *nc);
void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
- gfp_t gfp_mask, unsigned int align_mask);
+void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align_mask);
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align)
+static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ gfp_t gfp_mask, unsigned int align)
{
WARN_ON_ONCE(!is_power_of_2(align));
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+ return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
}
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask)
+static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask)
{
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+ return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u);
}
-void page_frag_free(void *addr);
+void page_frag_free_va(void *addr);
#endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 7482997c719f..f9f0393e6b12 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -3381,7 +3381,7 @@ static inline struct sk_buff *netdev_alloc_skb_ip_align(struct net_device *dev,
static inline void skb_free_frag(void *addr)
{
- page_frag_free(addr);
+ page_frag_free_va(addr);
}
void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask);
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index fbdf5a1aabfe..3b70b6b071b9 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -323,7 +323,7 @@ static int cpu_map_kthread_run(void *data)
/* Bring struct page memory area to curr CPU. Read by
* build_skb_around via page_is_pfmemalloc(), and when
- * freed written by page_frag_free call.
+ * freed written by page_frag_free_va call.
*/
prefetchw(page);
}
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index c5bc72cf018a..70fb6dead624 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -59,9 +59,9 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
}
EXPORT_SYMBOL(__page_frag_cache_drain);
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align_mask)
+void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align_mask)
{
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
unsigned int size = nc->size;
@@ -130,16 +130,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
return nc->va + (size - remaining);
}
-EXPORT_SYMBOL(__page_frag_alloc_align);
+EXPORT_SYMBOL(__page_frag_alloc_va_align);
/*
* Frees a page fragment allocated out of either a compound or order 0 page.
*/
-void page_frag_free(void *addr)
+void page_frag_free_va(void *addr)
{
struct page *page = virt_to_head_page(addr);
if (unlikely(put_page_testzero(page)))
free_unref_page(page, compound_order(page));
}
-EXPORT_SYMBOL(page_frag_free);
+EXPORT_SYMBOL(page_frag_free_va);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index de2a044cc665..6cf2c51a34e1 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -314,8 +314,8 @@ void *__napi_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
fragsz = SKB_DATA_ALIGN(fragsz);
local_lock_nested_bh(&napi_alloc_cache.bh_lock);
- data = __page_frag_alloc_align(&nc->page, fragsz,
- GFP_ATOMIC | __GFP_NOWARN, align_mask);
+ data = __page_frag_alloc_va_align(&nc->page, fragsz,
+ GFP_ATOMIC | __GFP_NOWARN, align_mask);
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
return data;
@@ -330,9 +330,9 @@ void *__netdev_alloc_frag_align(unsigned int fragsz, unsigned int align_mask)
struct page_frag_cache *nc = this_cpu_ptr(&netdev_alloc_cache);
fragsz = SKB_DATA_ALIGN(fragsz);
- data = __page_frag_alloc_align(nc, fragsz,
- GFP_ATOMIC | __GFP_NOWARN,
- align_mask);
+ data = __page_frag_alloc_va_align(nc, fragsz,
+ GFP_ATOMIC | __GFP_NOWARN,
+ align_mask);
} else {
local_bh_disable();
data = __napi_alloc_frag_align(fragsz, align_mask);
@@ -751,14 +751,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
if (in_hardirq() || irqs_disabled()) {
nc = this_cpu_ptr(&netdev_alloc_cache);
- data = page_frag_alloc(nc, len, gfp_mask);
+ data = page_frag_alloc_va(nc, len, gfp_mask);
pfmemalloc = nc->pfmemalloc;
} else {
local_bh_disable();
local_lock_nested_bh(&napi_alloc_cache.bh_lock);
nc = this_cpu_ptr(&napi_alloc_cache.page);
- data = page_frag_alloc(nc, len, gfp_mask);
+ data = page_frag_alloc_va(nc, len, gfp_mask);
pfmemalloc = nc->pfmemalloc;
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
@@ -848,7 +848,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
} else {
len = SKB_HEAD_ALIGN(len);
- data = page_frag_alloc(&nc->page, len, gfp_mask);
+ data = page_frag_alloc_va(&nc->page, len, gfp_mask);
pfmemalloc = nc->page.pfmemalloc;
}
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
diff --git a/net/core/xdp.c b/net/core/xdp.c
index bcc5551c6424..7d4e09fb478f 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -387,7 +387,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct,
page_pool_put_full_page(page->pp, page, napi_direct);
break;
case MEM_TYPE_PAGE_SHARED:
- page_frag_free(data);
+ page_frag_free_va(data);
break;
case MEM_TYPE_PAGE_ORDER0:
page = virt_to_page(data); /* Assumes order0 page*/
diff --git a/net/rxrpc/txbuf.c b/net/rxrpc/txbuf.c
index c3913d8a50d3..dccb0353ee84 100644
--- a/net/rxrpc/txbuf.c
+++ b/net/rxrpc/txbuf.c
@@ -33,8 +33,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_
data_align = umax(data_align, L1_CACHE_BYTES);
mutex_lock(&call->conn->tx_data_alloc_lock);
- buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp,
- data_align);
+ buf = page_frag_alloc_va_align(&call->conn->tx_data_alloc, total, gfp,
+ data_align);
mutex_unlock(&call->conn->tx_data_alloc_lock);
if (!buf) {
kfree(txb);
@@ -96,17 +96,18 @@ struct rxrpc_txbuf *rxrpc_alloc_ack_txbuf(struct rxrpc_call *call, size_t sack_s
if (!txb)
return NULL;
- buf = page_frag_alloc(&call->local->tx_alloc,
- sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp);
+ buf = page_frag_alloc_va(&call->local->tx_alloc,
+ sizeof(*whdr) + sizeof(*ack) + 1 + 3 + sizeof(*trailer), gfp);
if (!buf) {
kfree(txb);
return NULL;
}
if (sack_size) {
- buf2 = page_frag_alloc(&call->local->tx_alloc, sack_size, gfp);
+ buf2 = page_frag_alloc_va(&call->local->tx_alloc, sack_size,
+ gfp);
if (!buf2) {
- page_frag_free(buf);
+ page_frag_free_va(buf);
kfree(txb);
return NULL;
}
@@ -180,7 +181,7 @@ static void rxrpc_free_txbuf(struct rxrpc_txbuf *txb)
rxrpc_txbuf_free);
for (i = 0; i < txb->nr_kvec; i++)
if (txb->kvec[i].iov_base)
- page_frag_free(txb->kvec[i].iov_base);
+ page_frag_free_va(txb->kvec[i].iov_base);
kfree(txb);
atomic_dec(&rxrpc_nr_txbuf);
}
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 6b3f01beb294..42d20412c1c3 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1222,8 +1222,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp,
/* The stream record marker is copied into a temporary page
* fragment buffer so that it can be included in rq_bvec.
*/
- buf = page_frag_alloc(&svsk->sk_frag_cache, sizeof(marker),
- GFP_KERNEL);
+ buf = page_frag_alloc_va(&svsk->sk_frag_cache, sizeof(marker),
+ GFP_KERNEL);
if (!buf)
return -ENOMEM;
memcpy(buf, &marker, sizeof(marker));
@@ -1235,7 +1235,7 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp,
iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec,
1 + count, sizeof(marker) + rqstp->rq_res.len);
ret = sock_sendmsg(svsk->sk_sock, &msg);
- page_frag_free(buf);
+ page_frag_free_va(buf);
if (ret < 0)
return ret;
*sentp += ret;
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 4a009122991e..e522611452c9 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -52,7 +52,7 @@ static int page_frag_pop_thread(void *arg)
if (obj) {
nr--;
- page_frag_free(obj);
+ page_frag_free_va(obj);
} else {
cond_resched();
}
@@ -80,13 +80,16 @@ static int page_frag_push_thread(void *arg)
int ret;
if (test_align) {
- va = page_frag_alloc_align(&test_frag, test_alloc_len,
- GFP_KERNEL, SMP_CACHE_BYTES);
+ va = page_frag_alloc_va_align(&test_frag,
+ test_alloc_len,
+ GFP_KERNEL,
+ SMP_CACHE_BYTES);
WARN_ONCE((unsigned long)va & (SMP_CACHE_BYTES - 1),
"unaligned va returned\n");
} else {
- va = page_frag_alloc(&test_frag, test_alloc_len, GFP_KERNEL);
+ va = page_frag_alloc_va(&test_frag, test_alloc_len,
+ GFP_KERNEL);
}
if (!va)
@@ -94,7 +97,7 @@ static int page_frag_push_thread(void *arg)
ret = __ptr_ring_produce(ring, va);
if (ret) {
- page_frag_free(va);
+ page_frag_free_va(va);
cond_resched();
} else {
nr--;
--
2.33.0
^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-08 12:37 ` [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin
@ 2024-08-14 15:49 ` Alexander H Duyck
2024-08-15 2:59 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander H Duyck @ 2024-08-14 15:49 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, Subbaraya Sundeep, Chuck Lever,
Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> Currently the page_frag API is returning 'virtual address'
> or 'va' when allocing and expecting 'virtual address' or
> 'va' as input when freeing.
>
> As we are about to support new use cases that the caller
> need to deal with 'struct page' or need to deal with both
> 'va' and 'struct page'. In order to differentiate the API
> handling between 'va' and 'struct page', add '_va' suffix
> to the corresponding API mirroring the page_pool_alloc_va()
> API of the page_pool. So that callers expecting to deal with
> va, page or both va and page may call page_frag_alloc_va*,
> page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com>
> Acked-by: Chuck Lever <chuck.lever@oracle.com>
> Acked-by: Sagi Grimberg <sagi@grimberg.me>
> ---
> drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
> drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
> drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
> .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++--
> .../marvell/octeontx2/nic/otx2_common.c | 2 +-
> drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++--
> drivers/nvme/host/tcp.c | 8 +++----
> drivers/nvme/target/tcp.c | 22 +++++++++----------
> drivers/vhost/net.c | 6 ++---
> include/linux/page_frag_cache.h | 21 +++++++++---------
> include/linux/skbuff.h | 2 +-
> kernel/bpf/cpumap.c | 2 +-
> mm/page_frag_cache.c | 12 +++++-----
> net/core/skbuff.c | 16 +++++++-------
> net/core/xdp.c | 2 +-
> net/rxrpc/txbuf.c | 15 +++++++------
> net/sunrpc/svcsock.c | 6 ++---
> .../selftests/mm/page_frag/page_frag_test.c | 13 ++++++-----
> 19 files changed, 75 insertions(+), 70 deletions(-)
>
I still say no to this patch. It is an unnecessary name change and adds
no value. If you insist on this patch I will reject the set every time.
The fact is it is polluting the git history and just makes things
harder to maintain without adding any value as you aren't changing what
the function does and there is no need for this. In addition it just
makes it that much harder to backport fixes in the future as people
will have to work around the rename.
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-14 15:49 ` Alexander H Duyck
@ 2024-08-15 2:59 ` Yunsheng Lin
2024-08-15 15:00 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-15 2:59 UTC (permalink / raw)
To: Alexander H Duyck, davem, kuba, pabeni
Cc: netdev, linux-kernel, Subbaraya Sundeep, Chuck Lever,
Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On 2024/8/14 23:49, Alexander H Duyck wrote:
> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>> Currently the page_frag API is returning 'virtual address'
>> or 'va' when allocing and expecting 'virtual address' or
>> 'va' as input when freeing.
>>
>> As we are about to support new use cases that the caller
>> need to deal with 'struct page' or need to deal with both
>> 'va' and 'struct page'. In order to differentiate the API
>> handling between 'va' and 'struct page', add '_va' suffix
>> to the corresponding API mirroring the page_pool_alloc_va()
>> API of the page_pool. So that callers expecting to deal with
>> va, page or both va and page may call page_frag_alloc_va*,
>> page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
>>
>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com>
>> Acked-by: Chuck Lever <chuck.lever@oracle.com>
>> Acked-by: Sagi Grimberg <sagi@grimberg.me>
>> ---
>> drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
>> drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
>> drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
>> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
>> .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++--
>> .../marvell/octeontx2/nic/otx2_common.c | 2 +-
>> drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++--
>> drivers/nvme/host/tcp.c | 8 +++----
>> drivers/nvme/target/tcp.c | 22 +++++++++----------
>> drivers/vhost/net.c | 6 ++---
>> include/linux/page_frag_cache.h | 21 +++++++++---------
>> include/linux/skbuff.h | 2 +-
>> kernel/bpf/cpumap.c | 2 +-
>> mm/page_frag_cache.c | 12 +++++-----
>> net/core/skbuff.c | 16 +++++++-------
>> net/core/xdp.c | 2 +-
>> net/rxrpc/txbuf.c | 15 +++++++------
>> net/sunrpc/svcsock.c | 6 ++---
>> .../selftests/mm/page_frag/page_frag_test.c | 13 ++++++-----
>> 19 files changed, 75 insertions(+), 70 deletions(-)
>>
>
> I still say no to this patch. It is an unnecessary name change and adds
> no value. If you insist on this patch I will reject the set every time.
>
> The fact is it is polluting the git history and just makes things
> harder to maintain without adding any value as you aren't changing what
> the function does and there is no need for this. In addition it just
I guess I have to disagree with the above 'no need for this' part for
now, as mentioned in [1]:
"There are three types of API as proposed in this patchset instead of
two types of API:
1. page_frag_alloc_va() returns [va].
2. page_frag_alloc_pg() returns [page, offset].
3. page_frag_alloc() returns [va] & [page, offset].
You seemed to miss that we need a third naming for the type 3 API.
Do you see type 3 API as a valid API? if yes, what naming are you
suggesting for it? if no, why it is not a valid API?"
1. https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
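To make the distinction between the three API types concrete, here is a
userspace toy model of the three signatures being proposed. It is an
illustrative sketch only: struct toy_frag_cache and the toy_* names are
invented stand-ins for the kernel's struct page_frag_cache and the
page_frag_alloc* API, with a flat buffer standing in for the cached page.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct page_frag_cache: one backing buffer that is
 * carved into fragments.  The three functions below mirror the three
 * API types from the thread; the bodies are illustrative only. */
struct toy_frag_cache {
	unsigned char buf[4096];	/* stands in for the cached page */
	unsigned int offset;		/* next free offset */
};

/* Type 1: return only the virtual address [va]. */
static void *toy_frag_alloc_va(struct toy_frag_cache *nc, unsigned int fragsz)
{
	void *va;

	if (nc->offset + fragsz > sizeof(nc->buf))
		return NULL;
	va = nc->buf + nc->offset;
	nc->offset += fragsz;
	return va;
}

/* Type 2: return only [page, offset]; "page" is the backing buffer here. */
static int toy_frag_alloc_pg(struct toy_frag_cache *nc, unsigned int fragsz,
			     unsigned char **page, unsigned int *offset)
{
	if (nc->offset + fragsz > sizeof(nc->buf))
		return -1;
	*page = nc->buf;
	*offset = nc->offset;
	nc->offset += fragsz;
	return 0;
}

/* Type 3: return [va] and fill in [page, offset] as well. */
static void *toy_frag_alloc(struct toy_frag_cache *nc, unsigned int fragsz,
			    unsigned char **page, unsigned int *offset)
{
	if (toy_frag_alloc_pg(nc, fragsz, page, offset))
		return NULL;
	return *page + *offset;
}
```

Type 3 differs from type 2 only in also handing back the va, which is why
the naming question (what suffix, if any, each variant gets) arises at all.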
> makes it that much harder to backport fixes in the future as people
> will have to work around the rename.
>
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-15 2:59 ` Yunsheng Lin
@ 2024-08-15 15:00 ` Alexander Duyck
2024-08-16 11:55 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-15 15:00 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep,
Chuck Lever, Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On Wed, Aug 14, 2024 at 8:00 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/14 23:49, Alexander H Duyck wrote:
> > On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> >> Currently the page_frag API is returning 'virtual address'
> >> or 'va' when allocing and expecting 'virtual address' or
> >> 'va' as input when freeing.
> >>
> >> As we are about to support new use cases that the caller
> >> need to deal with 'struct page' or need to deal with both
> >> 'va' and 'struct page'. In order to differentiate the API
> >> handling between 'va' and 'struct page', add '_va' suffix
> >> to the corresponding API mirroring the page_pool_alloc_va()
> >> API of the page_pool. So that callers expecting to deal with
> >> va, page or both va and page may call page_frag_alloc_va*,
> >> page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
> >>
> >> CC: Alexander Duyck <alexander.duyck@gmail.com>
> >> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com>
> >> Acked-by: Chuck Lever <chuck.lever@oracle.com>
> >> Acked-by: Sagi Grimberg <sagi@grimberg.me>
> >> ---
> >> drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
> >> drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
> >> drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
> >> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
> >> .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++--
> >> .../marvell/octeontx2/nic/otx2_common.c | 2 +-
> >> drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++--
> >> drivers/nvme/host/tcp.c | 8 +++----
> >> drivers/nvme/target/tcp.c | 22 +++++++++----------
> >> drivers/vhost/net.c | 6 ++---
> >> include/linux/page_frag_cache.h | 21 +++++++++---------
> >> include/linux/skbuff.h | 2 +-
> >> kernel/bpf/cpumap.c | 2 +-
> >> mm/page_frag_cache.c | 12 +++++-----
> >> net/core/skbuff.c | 16 +++++++-------
> >> net/core/xdp.c | 2 +-
> >> net/rxrpc/txbuf.c | 15 +++++++------
> >> net/sunrpc/svcsock.c | 6 ++---
> >> .../selftests/mm/page_frag/page_frag_test.c | 13 ++++++-----
> >> 19 files changed, 75 insertions(+), 70 deletions(-)
> >>
> >
> > I still say no to this patch. It is an unnecessary name change and adds
> > no value. If you insist on this patch I will reject the set every time.
> >
> > The fact is it is polluting the git history and just makes things
> > harder to maintain without adding any value as you aren't changing what
> > the function does and there is no need for this. In addition it just
>
> I guess I have to disagree with the above 'no need for this' part for
> now, as mentioned in [1]:
>
> "There are three types of API as proposed in this patchset instead of
> two types of API:
> 1. page_frag_alloc_va() returns [va].
> 2. page_frag_alloc_pg() returns [page, offset].
> 3. page_frag_alloc() returns [va] & [page, offset].
>
> You seemed to miss that we need a third naming for the type 3 API.
> Do you see type 3 API as a valid API? if yes, what naming are you
> suggesting for it? if no, why it is not a valid API?"
I didn't. I just don't see the point in pushing out the existing API
to support that. In reality 2 and 3 are redundant. You probably only
need 3. Like I mentioned earlier you can essentially just pass a
page_frag via pointer to the function. With that you could also look
at just returning a virtual address as well if you insist on having
something that returns all of the above. No point in having 2 and 3 be
separate functions.
I am going to nack this patch set if you insist on this pointless
renaming. The fact is it is just adding noise that adds no value.
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-15 15:00 ` Alexander Duyck
@ 2024-08-16 11:55 ` Yunsheng Lin
2024-08-19 15:54 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-16 11:55 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep,
Chuck Lever, Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On 2024/8/15 23:00, Alexander Duyck wrote:
> On Wed, Aug 14, 2024 at 8:00 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/8/14 23:49, Alexander H Duyck wrote:
>>> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>>>> Currently the page_frag API is returning 'virtual address'
>>>> or 'va' when allocing and expecting 'virtual address' or
>>>> 'va' as input when freeing.
>>>>
>>>> As we are about to support new use cases that the caller
>>>> need to deal with 'struct page' or need to deal with both
>>>> 'va' and 'struct page'. In order to differentiate the API
>>>> handling between 'va' and 'struct page', add '_va' suffix
>>>> to the corresponding API mirroring the page_pool_alloc_va()
>>>> API of the page_pool. So that callers expecting to deal with
>>>> va, page or both va and page may call page_frag_alloc_va*,
>>>> page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
>>>>
>>>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>>>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>>>> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com>
>>>> Acked-by: Chuck Lever <chuck.lever@oracle.com>
>>>> Acked-by: Sagi Grimberg <sagi@grimberg.me>
>>>> ---
>>>> drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
>>>> drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
>>>> drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
>>>> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
>>>> .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++--
>>>> .../marvell/octeontx2/nic/otx2_common.c | 2 +-
>>>> drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++--
>>>> drivers/nvme/host/tcp.c | 8 +++----
>>>> drivers/nvme/target/tcp.c | 22 +++++++++----------
>>>> drivers/vhost/net.c | 6 ++---
>>>> include/linux/page_frag_cache.h | 21 +++++++++---------
>>>> include/linux/skbuff.h | 2 +-
>>>> kernel/bpf/cpumap.c | 2 +-
>>>> mm/page_frag_cache.c | 12 +++++-----
>>>> net/core/skbuff.c | 16 +++++++-------
>>>> net/core/xdp.c | 2 +-
>>>> net/rxrpc/txbuf.c | 15 +++++++------
>>>> net/sunrpc/svcsock.c | 6 ++---
>>>> .../selftests/mm/page_frag/page_frag_test.c | 13 ++++++-----
>>>> 19 files changed, 75 insertions(+), 70 deletions(-)
>>>>
>>>
>>> I still say no to this patch. It is an unnecessary name change and adds
>>> no value. If you insist on this patch I will reject the set every time.
>>>
>>> The fact is it is polluting the git history and just makes things
>>> harder to maintain without adding any value as you aren't changing what
>>> the function does and there is no need for this. In addition it just
>>
>> I guess I have to disagree with the above 'no need for this' part for
>> now, as mentioned in [1]:
>>
>> "There are three types of API as proposed in this patchset instead of
>> two types of API:
>> 1. page_frag_alloc_va() returns [va].
>> 2. page_frag_alloc_pg() returns [page, offset].
>> 3. page_frag_alloc() returns [va] & [page, offset].
>>
>> You seemed to miss that we need a third naming for the type 3 API.
>> Do you see type 3 API as a valid API? if yes, what naming are you
>> suggesting for it? if no, why it is not a valid API?"
>
> I didn't. I just don't see the point in pushing out the existing API
> to support that. In reality 2 and 3 are redundant. You probably only
> need 3. Like I mentioned earlier you can essentially just pass a
If the caller just expects [page, offset], do you expect the caller to
use the type 3 API, which returns both [va] and [page, offset]?
I am not sure I understand why you think 2 and 3 are redundant here.
If 2 and 3 are redundant, aren't 1 and 3 also redundant by the same
argument?
> page_frag via pointer to the function. With that you could also look
> at just returning a virtual address as well if you insist on having
> something that returns all of the above. No point in having 2 and 3 be
> seperate functions.
Let's be more specific about your suggestion here: which is the
preferred way to return the virtual address? It seems there are two
options:
1. Return the virtual address as the function's return value, as below:
void *page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio);
2. Return the virtual address through a double pointer, as below:
int page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio,
			void **va);
If one of the above options is what you have in mind, please be
specific about which one is the preferred option, and why.
If neither is what you have in mind, please list out the declaration
of the API you have in mind.
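A userspace toy model of the two calling conventions listed above may
make the trade-off clearer. This is a sketch under stated assumptions:
struct toy_bio_vec and struct toy_cache are invented stand-ins for the
kernel's struct bio_vec and struct page_frag_cache, and the function
names are hypothetical, not part of any proposed patch.

```c
#include <assert.h>
#include <stddef.h>

/* Invented stand-ins for the kernel structures. */
struct toy_bio_vec {
	unsigned char *bv_page;	/* stands in for struct page * */
	unsigned int bv_offset;
	unsigned int bv_len;
};

struct toy_cache {
	unsigned char buf[4096];
	unsigned int offset;
};

/* Option 1: return the va directly, fill the bio_vec as a side effect.
 * NULL signals allocation failure. */
static void *frag_alloc_bio_ret(struct toy_cache *nc, unsigned int sz,
				struct toy_bio_vec *bv)
{
	if (nc->offset + sz > sizeof(nc->buf))
		return NULL;
	bv->bv_page = nc->buf;
	bv->bv_offset = nc->offset;
	bv->bv_len = sz;
	nc->offset += sz;
	return bv->bv_page + bv->bv_offset;
}

/* Option 2: return an error code, hand the va back via double pointer. */
static int frag_alloc_bio_out(struct toy_cache *nc, unsigned int sz,
			      struct toy_bio_vec *bv, void **va)
{
	*va = frag_alloc_bio_ret(nc, sz, bv);
	return *va ? 0 : -1;
}
```

Option 1 keeps the familiar "NULL means failure" idiom; option 2 frees
the return value to carry a distinct error code at the cost of an extra
output parameter.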
>
> I am going to nack this patch set if you insist on this pointless
> renaming. The fact is it is just adding noise that adds no value.
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-16 11:55 ` Yunsheng Lin
@ 2024-08-19 15:54 ` Alexander Duyck
2024-08-20 13:07 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-19 15:54 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep,
Chuck Lever, Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On Fri, Aug 16, 2024 at 4:55 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/15 23:00, Alexander Duyck wrote:
> > On Wed, Aug 14, 2024 at 8:00 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >>
> >> On 2024/8/14 23:49, Alexander H Duyck wrote:
> >>> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> >>>> Currently the page_frag API is returning 'virtual address'
> >>>> or 'va' when allocing and expecting 'virtual address' or
> >>>> 'va' as input when freeing.
> >>>>
> >>>> As we are about to support new use cases that the caller
> >>>> need to deal with 'struct page' or need to deal with both
> >>>> 'va' and 'struct page'. In order to differentiate the API
> >>>> handling between 'va' and 'struct page', add '_va' suffix
> >>>> to the corresponding API mirroring the page_pool_alloc_va()
> >>>> API of the page_pool. So that callers expecting to deal with
> >>>> va, page or both va and page may call page_frag_alloc_va*,
> >>>> page_frag_alloc_pg*, or page_frag_alloc* API accordingly.
> >>>>
> >>>> CC: Alexander Duyck <alexander.duyck@gmail.com>
> >>>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >>>> Reviewed-by: Subbaraya Sundeep <sbhatta@marvell.com>
> >>>> Acked-by: Chuck Lever <chuck.lever@oracle.com>
> >>>> Acked-by: Sagi Grimberg <sagi@grimberg.me>
> >>>> ---
> >>>> drivers/net/ethernet/google/gve/gve_rx.c | 4 ++--
> >>>> drivers/net/ethernet/intel/ice/ice_txrx.c | 2 +-
> >>>> drivers/net/ethernet/intel/ice/ice_txrx.h | 2 +-
> >>>> drivers/net/ethernet/intel/ice/ice_txrx_lib.c | 2 +-
> >>>> .../net/ethernet/intel/ixgbevf/ixgbevf_main.c | 4 ++--
> >>>> .../marvell/octeontx2/nic/otx2_common.c | 2 +-
> >>>> drivers/net/ethernet/mediatek/mtk_wed_wo.c | 4 ++--
> >>>> drivers/nvme/host/tcp.c | 8 +++----
> >>>> drivers/nvme/target/tcp.c | 22 +++++++++----------
> >>>> drivers/vhost/net.c | 6 ++---
> >>>> include/linux/page_frag_cache.h | 21 +++++++++---------
> >>>> include/linux/skbuff.h | 2 +-
> >>>> kernel/bpf/cpumap.c | 2 +-
> >>>> mm/page_frag_cache.c | 12 +++++-----
> >>>> net/core/skbuff.c | 16 +++++++-------
> >>>> net/core/xdp.c | 2 +-
> >>>> net/rxrpc/txbuf.c | 15 +++++++------
> >>>> net/sunrpc/svcsock.c | 6 ++---
> >>>> .../selftests/mm/page_frag/page_frag_test.c | 13 ++++++-----
> >>>> 19 files changed, 75 insertions(+), 70 deletions(-)
> >>>>
> >>>
> >>> I still say no to this patch. It is an unnecessary name change and adds
> >>> no value. If you insist on this patch I will reject the set every time.
> >>>
> >>> The fact is it is polluting the git history and just makes things
> >>> harder to maintain without adding any value as you aren't changing what
> >>> the function does and there is no need for this. In addition it just
> >>
> >> I guess I have to disagree with the above 'no need for this' part for
> >> now, as mentioned in [1]:
> >>
> >> "There are three types of API as proposed in this patchset instead of
> >> two types of API:
> >> 1. page_frag_alloc_va() returns [va].
> >> 2. page_frag_alloc_pg() returns [page, offset].
> >> 3. page_frag_alloc() returns [va] & [page, offset].
> >>
> >> You seemed to miss that we need a third naming for the type 3 API.
> >> Do you see type 3 API as a valid API? if yes, what naming are you
> >> suggesting for it? if no, why it is not a valid API?"
> >
> > I didn't. I just don't see the point in pushing out the existing API
> > to support that. In reality 2 and 3 are redundant. You probably only
> > need 3. Like I mentioned earlier you can essentially just pass a
>
> If the caller just expects [page, offset], do you expect the caller to
> also use the type 3 API, which returns both [va] and [page, offset]?
>
> I am not sure I understand why you think 2 and 3 are redundant here.
> If you think 2 and 3 are redundant, aren't 1 and 3 also redundant
> by the same argument?
The big difference is the need to return page and offset. Basically to
support returning page and offset you need to pass at least one value
as a pointer so you can store the return there.
The reason why 3 is just a redundant form of 2 is that you will
normally just be converting from a va to a page and offset so the va
should already be easily accessible.
> > page_frag via pointer to the function. With that you could also look
> > at just returning a virtual address as well if you insist on having
> > something that returns all of the above. No point in having 2 and 3 be
> > seperate functions.
>
> Let's be more specific about what your suggestion is here: which is
> the preferred way to return the virtual address? It seems there are two
> options:
>
> 1. Return the virtual address by function returning as below:
> void *page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio);
>
> 2. Return the virtual address by double pointer as below:
> int page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio,
> void **va);
I was thinking more of option 1. Basically this is a superset of
page_frag_alloc_va that is also returning the page and offset via a
page frag. However instead of bio_vec I would be good with "struct
page_frag *" being the value passed to the function to play the role
of container. Basically the big difference between 1 and 2/3 if I am
not mistaken is the fact that for 1 you pass the size, whereas with
2/3 you are peeling off the page frag from the larger page frag cache
after the fact via a commit type action.
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-19 15:54 ` Alexander Duyck
@ 2024-08-20 13:07 ` Yunsheng Lin
2024-08-20 16:02 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-20 13:07 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep,
Chuck Lever, Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On 2024/8/19 23:54, Alexander Duyck wrote:
...
>>>>
>>>> "There are three types of API as proposed in this patchset instead of
>>>> two types of API:
>>>> 1. page_frag_alloc_va() returns [va].
>>>> 2. page_frag_alloc_pg() returns [page, offset].
>>>> 3. page_frag_alloc() returns [va] & [page, offset].
>>>>
>>>> You seemed to miss that we need a third naming for the type 3 API.
>>>> Do you see type 3 API as a valid API? if yes, what naming are you
>>>> suggesting for it? if no, why it is not a valid API?"
>>>
>>> I didn't. I just don't see the point in pushing out the existing API
>>> to support that. In reality 2 and 3 are redundant. You probably only
>>> need 3. Like I mentioned earlier you can essentially just pass a
>>
>> If the caller just expects [page, offset], do you expect the caller to
>> also use the type 3 API, which returns both [va] and [page, offset]?
>>
>> I am not sure I understand why you think 2 and 3 are redundant here.
>> If you think 2 and 3 are redundant, aren't 1 and 3 also redundant
>> by the same argument?
>
> The big difference is the need to return page and offset. Basically to
> support returning page and offset you need to pass at least one value
> as a pointer so you can store the return there.
>
> The reason why 3 is just a redundant form of 2 is that you will
> normally just be converting from a va to a page and offset so the va
> should already be easily accessible.
I am assuming that by 'easily accessible', you meant the 'va' can be
calculated as below, right?
va = encoded_page_address(encoded_va) +
(page_frag_cache_page_size(encoded_va) - remaining);
I guess it is easily accessible, but it is not without some overhead
to calculate the 'va' here.
>
>>> page_frag via pointer to the function. With that you could also look
>>> at just returning a virtual address as well if you insist on having
>>> something that returns all of the above. No point in having 2 and 3 be
>>> seperate functions.
>>
>> Let's be more specific about what your suggestion is here: which is
>> the preferred way to return the virtual address? It seems there are two
>> options:
>>
>> 1. Return the virtual address by function returning as below:
>> void *page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio);
>>
>> 2. Return the virtual address by double pointer as below:
>> int page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio,
>> void **va);
>
> I was thinking more of option 1. Basically this is a superset of
> page_frag_alloc_va that is also returning the page and offset via a
> page frag. However instead of bio_vec I would be good with "struct
> page_frag *" being the value passed to the function to play the role
> of container. Basically the big difference between 1 and 2/3 if I am
> not mistaken is the fact that for 1 you pass the size, whereas with
> 2/3 you are peeling off the page frag from the larger page frag cache
Let's be clear here: do the callers expecting just [page, offset] also
need to call the type 3 API, which returns both [va] and [page, offset]?
And is it ok to ignore the overhead of calculating the 'va' for those
kinds of callers just because we don't want to rename an existing API
and can't come up with a good name for the new one?
> after the fact via a commit type action.
Just to be clear, there is no commit type action for some subtypes of
the type 2/3 API.
For example, for type 2 API in this patchset, it has below subtypes:
subtype 1: it does not need a commit type action; it just returns
[page, offset] instead of the [va] that page_frag_alloc_va()
returns, and, like page_frag_alloc_va(), it does not return the
allocated fragsz back to the caller:
struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
                                unsigned int *offset, unsigned int fragsz,
                                gfp_t gfp)
subtype 2: it does need a commit type action: @fragsz is returned to
the caller, and the caller uses that to decide how much fragsz
to commit.
struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
                                        unsigned int *offset,
                                        unsigned int *fragsz, gfp_t gfp)
Do you see subtype 1 as a valid API? If no, why?
If yes, do you also expect the caller to use "struct page_frag *" as the
container? If yes, what is the caller expected to do with the size field in
"struct page_frag *" from an API perspective? Just ignore it?
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-20 13:07 ` Yunsheng Lin
@ 2024-08-20 16:02 ` Alexander Duyck
2024-08-21 12:30 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-20 16:02 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep,
Chuck Lever, Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On Tue, Aug 20, 2024 at 6:07 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/19 23:54, Alexander Duyck wrote:
>
> ...
>
> >>>>
> >>>> "There are three types of API as proposed in this patchset instead of
> >>>> two types of API:
> >>>> 1. page_frag_alloc_va() returns [va].
> >>>> 2. page_frag_alloc_pg() returns [page, offset].
> >>>> 3. page_frag_alloc() returns [va] & [page, offset].
> >>>>
> >>>> You seemed to miss that we need a third naming for the type 3 API.
> >>>> Do you see type 3 API as a valid API? if yes, what naming are you
> >>>> suggesting for it? if no, why it is not a valid API?"
> >>>
> >>> I didn't. I just don't see the point in pushing out the existing API
> >>> to support that. In reality 2 and 3 are redundant. You probably only
> >>> need 3. Like I mentioned earlier you can essentially just pass a
> >>
> >> If the caller just expects [page, offset], do you expect the caller to
> >> also use the type 3 API, which returns both [va] and [page, offset]?
> >>
> >> I am not sure I understand why you think 2 and 3 are redundant here.
> >> If you think 2 and 3 are redundant, aren't 1 and 3 also redundant
> >> by the same argument?
> >
> > The big difference is the need to return page and offset. Basically to
> > support returning page and offset you need to pass at least one value
> > as a pointer so you can store the return there.
> >
> > The reason why 3 is just a redundant form of 2 is that you will
> > normally just be converting from a va to a page and offset so the va
> > should already be easily accessible.
>
> I am assuming that by 'easily accessible', you meant the 'va' can be
> calculated as below, right?
>
> va = encoded_page_address(encoded_va) +
> (page_frag_cache_page_size(encoded_va) - remaining);
>
> I guess it is easily accessible, but it is not without some overhead
> to calculate the 'va' here.
It is just the encoded_page_address + offset that you have to
calculate anyway. So the only bit you actually have to do is 2
instructions, one to mask the encoded_va and then the addition of the
offset that you provided to the page. As it stands those instructions
can easily be slipped in while you are working on converting the va to
a page.
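The mask-then-add sequence described above can be illustrated with a standalone userspace sketch; the encoding below (flags packed into the low, alignment-guaranteed bits of a page-aligned pointer) is a simplified stand-in for the kernel's encoded_va, not its actual bit layout:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* The low bits of a page-aligned address are always zero, so they are
 * free to carry metadata (e.g. a pfmemalloc flag or an order field).
 * This assumes 4 KiB alignment of the base address. */
#define ENCODED_FLAG_BITS 0xFFFUL

static uintptr_t encode(void *base, uintptr_t flags)
{
	return (uintptr_t)base | (flags & ENCODED_FLAG_BITS);
}

/* Instruction 1: mask off the metadata bits to recover the base. */
static void *decode_base(uintptr_t encoded)
{
	return (void *)(encoded & ~ENCODED_FLAG_BITS);
}

/* Instruction 2: add the fragment offset to get the fragment's va. */
static void *decode_va(uintptr_t encoded, size_t offset)
{
	return (char *)decode_base(encoded) + offset;
}
```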
> >
> >>> page_frag via pointer to the function. With that you could also look
> >>> at just returning a virtual address as well if you insist on having
> >>> something that returns all of the above. No point in having 2 and 3 be
> >>> seperate functions.
> >>
> >> Let's be more specific about what your suggestion is here: which is
> >> the preferred way to return the virtual address? It seems there are two
> >> options:
> >>
> >> 1. Return the virtual address by function returning as below:
> >> void *page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio);
> >>
> >> 2. Return the virtual address by double pointer as below:
> >> int page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio,
> >> void **va);
> >
> > I was thinking more of option 1. Basically this is a superset of
> > page_frag_alloc_va that is also returning the page and offset via a
> > page frag. However instead of bio_vec I would be good with "struct
> > page_frag *" being the value passed to the function to play the role
> > of container. Basically the big difference between 1 and 2/3 if I am
> > not mistaken is the fact that for 1 you pass the size, whereas with
> > 2/3 you are peeling off the page frag from the larger page frag cache
>
> Let's be clear here: do the callers expecting just [page, offset] also
> need to call the type 3 API, which returns both [va] and [page, offset]?
> And is it ok to ignore the overhead of calculating the 'va' for those
> kinds of callers just because we don't want to rename an existing API
> and can't come up with a good name for the new one?
>
> > after the fact via a commit type action.
>
> Just to be clear, there is no commit type action for some subtypes of
> the type 2/3 API.
>
> For example, for type 2 API in this patchset, it has below subtypes:
>
> subtype 1: it does not need a commit type action; it just returns
> [page, offset] instead of the [va] that page_frag_alloc_va()
> returns, and, like page_frag_alloc_va(), it does not return the
> allocated fragsz back to the caller:
> struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
>                                 unsigned int *offset, unsigned int fragsz,
>                                 gfp_t gfp)
>
> subtype 2: it does need a commit type action: @fragsz is returned to
> the caller, and the caller uses that to decide how much fragsz
> to commit.
> struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
>                                         unsigned int *offset,
>                                         unsigned int *fragsz, gfp_t gfp)
>
> Do you see subtype 1 as a valid API? If no, why?
Not really, it is just a wrapper for page_frag_alloc that is
converting the virtual address to a page and offset. They are the same
data and don't justify the need for two functions. It kind of explains
one of the complaints I had about this code. Supposedly it was
refactoring and combining several different callers into one, but what
it is actually doing is fracturing the code path into 3 different
variants based on little if any actual difference as it is doing
unnecessary optimization.
> If yes, do you also expect the caller to use "struct page_frag *" as the
> container? If yes, what is the caller expected to do with the size field in
> "struct page_frag *" from an API perspective? Just ignore it?
It should be populated. You passed a fragsz, so you should populate
the output fragsz so you can get the truesize in the case of network
packets. The removal of the page_frag from the other callers is making
it much harder to review your code anyway. If we keep the page_frag
there it should reduce the amount of change needed when you replace
page_frag with the page_frag_cache.
Honestly this is eating up too much of my time. As I said before this
patch set is too big and it is trying to squeeze in more than it
really should for a single patch set to be reviewable. Going forward
please split up the patch set as I had suggested before and address my
comments. Ideally you would have your first patch just be some
refactor and cleanup to get the "offset" pointer moving in the
direction you want. With that we can at least get half of this set
digested before we start chewing into all this refactor for the
replacement of page_frag with the page_frag_cache.
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API
2024-08-20 16:02 ` Alexander Duyck
@ 2024-08-21 12:30 ` Yunsheng Lin
0 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-21 12:30 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Subbaraya Sundeep,
Chuck Lever, Sagi Grimberg, Jeroen de Borst, Praveen Kaligineedi,
Shailend Chand, Eric Dumazet, Tony Nguyen, Przemek Kitszel,
Sunil Goutham, Geetha sowjanya, hariprasad, Felix Fietkau,
Sean Wang, Mark Lee, Lorenzo Bianconi, Matthias Brugger,
AngeloGioacchino Del Regno, Keith Busch, Jens Axboe,
Christoph Hellwig, Chaitanya Kulkarni, Michael S. Tsirkin,
Jason Wang, Eugenio Pérez, Andrew Morton, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
David Howells, Marc Dionne, Jeff Layton, Neil Brown,
Olga Kornievskaia, Dai Ngo, Tom Talpey, Trond Myklebust,
Anna Schumaker, Shuah Khan, intel-wired-lan, linux-arm-kernel,
linux-mediatek, linux-nvme, kvm, virtualization, linux-mm, bpf,
linux-afs, linux-nfs, linux-kselftest
On 2024/8/21 0:02, Alexander Duyck wrote:
> On Tue, Aug 20, 2024 at 6:07 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/8/19 23:54, Alexander Duyck wrote:
>>
>> ...
>>
>>>>>>
>>>>>> "There are three types of API as proposed in this patchset instead of
>>>>>> two types of API:
>>>>>> 1. page_frag_alloc_va() returns [va].
>>>>>> 2. page_frag_alloc_pg() returns [page, offset].
>>>>>> 3. page_frag_alloc() returns [va] & [page, offset].
>>>>>>
>>>>>> You seemed to miss that we need a third naming for the type 3 API.
>>>>>> Do you see type 3 API as a valid API? if yes, what naming are you
>>>>>> suggesting for it? if no, why it is not a valid API?"
>>>>>
>>>>> I didn't. I just don't see the point in pushing out the existing API
>>>>> to support that. In reality 2 and 3 are redundant. You probably only
>>>>> need 3. Like I mentioned earlier you can essentially just pass a
>>>>
>>>> If the caller just expects [page, offset], do you expect the caller to
>>>> also use the type 3 API, which returns both [va] and [page, offset]?
>>>>
>>>> I am not sure I understand why you think 2 and 3 are redundant here.
>>>> If you think 2 and 3 are redundant, aren't 1 and 3 also redundant
>>>> by the same argument?
>>>
>>> The big difference is the need to return page and offset. Basically to
>>> support returning page and offset you need to pass at least one value
>>> as a pointer so you can store the return there.
>>>
>>> The reason why 3 is just a redundant form of 2 is that you will
>>> normally just be converting from a va to a page and offset so the va
>>> should already be easily accessible.
>>
>> I am assuming that by 'easily accessible', you meant the 'va' can be
>> calculated as below, right?
>>
>> va = encoded_page_address(encoded_va) +
>> (page_frag_cache_page_size(encoded_va) - remaining);
>>
>> I guess it is easily accessible, but it is not without some overhead
>> to calculate the 'va' here.
>
> It is just the encoded_page_address + offset that you have to
> calculate anyway. So the only bit you actually have to do is 2
> instructions, one to mask the encoded_va and then the addition of the
> offset that you provided to the page. As it stands those instructions
> can easily be slipped in while you are working on converting the va to
> a page.
Well, with your suggestions against other optimizations, like avoiding
a check in the fast path and avoiding calls to virt_to_page(), the
overhead kind of adds up.
And I am really surprised by your above suggestion about deciding the
API for users according to the internal implementation detail here. As
the overhead of calculating 'va' is really depending on the layout of
'struct page_frag_cache' here, what if we change the implementation and
the overhead of calculating 'va' becomes bigger? Do we expect to change
the API for the callers when we change the internal implementation of
page_frag_cache?
>
>
>>>
>>>>> page_frag via pointer to the function. With that you could also look
>>>>> at just returning a virtual address as well if you insist on having
>>>>> something that returns all of the above. No point in having 2 and 3 be
>>>>> seperate functions.
>>>>
>>>> Let's be more specific about what your suggestion is here: which is
>>>> the preferred way to return the virtual address? It seems there are two
>>>> options:
>>>>
>>>> 1. Return the virtual address by function returning as below:
>>>> void *page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio);
>>>>
>>>> 2. Return the virtual address by double pointer as below:
>>>> int page_frag_alloc_bio(struct page_frag_cache *nc, struct bio_vec *bio,
>>>> void **va);
>>>
>>> I was thinking more of option 1. Basically this is a superset of
>>> page_frag_alloc_va that is also returning the page and offset via a
>>> page frag. However instead of bio_vec I would be good with "struct
>>> page_frag *" being the value passed to the function to play the role
>>> of container. Basically the big difference between 1 and 2/3 if I am
>>> not mistaken is the fact that for 1 you pass the size, whereas with
>>> 2/3 you are peeling off the page frag from the larger page frag cache
>>
>> Let's be clear here: do the callers expecting just [page, offset] also
>> need to call the type 3 API, which returns both [va] and [page, offset]?
>> And is it ok to ignore the overhead of calculating the 'va' for those
>> kinds of callers just because we don't want to rename an existing API
>> and can't come up with a good name for the new one?
>>
>>> after the fact via a commit type action.
>>
>> Just to be clear, there is no commit type action for some subtypes of
>> the type 2/3 API.
>>
>> For example, for type 2 API in this patchset, it has below subtypes:
>>
>> subtype 1: it does not need a commit type action; it just returns
>> [page, offset] instead of the [va] that page_frag_alloc_va()
>> returns, and, like page_frag_alloc_va(), it does not return the
>> allocated fragsz back to the caller:
>> struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
>>                                 unsigned int *offset, unsigned int fragsz,
>>                                 gfp_t gfp)
>>
>> subtype 2: it does need a commit type action: @fragsz is returned to
>> the caller, and the caller uses that to decide how much fragsz
>> to commit.
>> struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
>>                                         unsigned int *offset,
>>                                         unsigned int *fragsz, gfp_t gfp)
>>
>> Do you see subtype 1 as a valid API? If no, why?
>
> Not really, it is just a wrapper for page_frag_alloc that is
> converting the virtual address to a page and offset. They are the same
> data and don't justify the need for two functions. It kind of explains
I am supposing you meant something like below:
struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
				unsigned int *offset, unsigned int fragsz,
				gfp_t gfp)
{
	struct page *page;
	void *va;

	va = page_frag_alloc_va(nc, fragsz, gfp);
	if (!va)
		return NULL;

	page = virt_to_head_page(va);
	*offset = va - page_to_virt(page);

	return page;
}
If yes, I really think you are trading off too much performance for
maintainability here, not only by recalculating the offset but also by
sometimes calling virt_to_head_page() unnecessarily.
If no, please share the pseudo code you have in mind.
> one of the complaints I had about this code. Supposedly it was
> refactoring and combining several different callers into one, but what
> it is actually doing is fracturing the code path into 3 different
> variants based on little if any actual difference as it is doing
> unnecessary optimization.
I am supposing the 3 different variants mean the ones below, right?
1. page_frag_alloc_va() returns [va].
2. page_frag_alloc_pg() returns [page, offset].
3. page_frag_alloc() returns [va] & [page, offset].
And there are 3 other variants for the prepare API too:
4. page_frag_alloc_va_prepare() returns [va].
5. page_frag_alloc_pg_prepare() returns [page, offset].
6. page_frag_alloc_prepare() returns [va] & [page, offset].
Side note: I just found that the '4. page_frag_alloc_va_prepare()' API is
not used/called currently and can be removed in the next revision of this
patchset.
It seems what you really want is 3 & 2 to be wrappers for 1, and
5 & 6 to be wrappers for 4?
If yes, my understanding is that too much performance is traded off here.
Doesn't the introduction of __page_frag_cache_reload() in patch 8 already
balance performance and maintainability as much as possible?
>
>> If yes, do you also expect the caller to use "struct page_frag *" as the
>> container? If yes, what is the caller expected to do with the size field in
>> "struct page_frag *" from an API perspective? Just ignore it?
>
> It should be populated. You passed a fragsz, so you should populate
> the output fragsz so you can get the truesize in the case of network
> packets. The removal of the page_frag from the other callers is making
> it much harder to review your code anyway. If we keep the page_frag
> there it should reduce the amount of change needed when you replace
> page_frag with the page_frag_cache.
I am not starting to use page_frag as the container yet, but the above
part is something that I probably agree with.
>
> Honestly this is eating up too much of my time. As I said before this
> patch set is too big and it is trying to squeeze in more than it
> really should for a single patch set to be reviewable. Going forward
> please split up the patch set as I had suggested before and address my
> comments. Ideally you would have your first patch just be some
> refactor and cleanup to get the "offset" pointer moving in the
> direction you want. With that we can at least get half of this set
> digested before we start chewing into all this refactor for the
> replacement of page_frag with the page_frag_cache.
I don't really think breaking this patchset into more patchsets without
a new use case helps to speed up the process here; it might slow down
the process instead, as the differing ideas about the refactoring and
new API naming are not going to disappear by splitting the patchset, and
the splitting may make the discussion harder without the bigger picture
and context.
^ permalink raw reply [flat|nested] 47+ messages in thread
* [PATCH net-next v13 05/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (3 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 04/14] mm: page_frag: add '_va' suffix to page_frag API Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 06/14] xtensa: remove the get_order() implementation Yunsheng Lin
` (9 subsequent siblings)
14 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Alexander Duyck, Chuck Lever, Michael S. Tsirkin, Jason Wang,
Eugenio Pérez, Andrew Morton, Eric Dumazet, David Howells,
Marc Dionne, Trond Myklebust, Anna Schumaker, Jeff Layton,
Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey, Shuah Khan,
kvm, virtualization, linux-mm, linux-afs, linux-nfs,
linux-kselftest
Use the appropriate page_frag API instead of the caller accessing
'page_frag_cache' directly.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Chuck Lever <chuck.lever@oracle.com>
---
drivers/vhost/net.c | 2 +-
include/linux/page_frag_cache.h | 10 ++++++++++
net/core/skbuff.c | 6 +++---
net/rxrpc/conn_object.c | 4 +---
net/rxrpc/local_object.c | 4 +---
net/sunrpc/svcsock.c | 6 ++----
tools/testing/selftests/mm/page_frag/page_frag_test.c | 2 +-
7 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 6691fac01e0d..b2737dc0dc50 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
vqs[VHOST_NET_VQ_RX]);
f->private_data = n;
- n->pf_cache.va = NULL;
+ page_frag_cache_init(&n->pf_cache);
return 0;
}
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index ef038a07925c..7c9125a9aed3 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -7,6 +7,16 @@
#include <linux/types.h>
#include <linux/mm_types_task.h>
+static inline void page_frag_cache_init(struct page_frag_cache *nc)
+{
+ nc->va = NULL;
+}
+
+static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
+{
+ return !!nc->pfmemalloc;
+}
+
void page_frag_cache_drain(struct page_frag_cache *nc);
void __page_frag_cache_drain(struct page *page, unsigned int count);
void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 6cf2c51a34e1..bb77c3fd192f 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -752,14 +752,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
if (in_hardirq() || irqs_disabled()) {
nc = this_cpu_ptr(&netdev_alloc_cache);
data = page_frag_alloc_va(nc, len, gfp_mask);
- pfmemalloc = nc->pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
} else {
local_bh_disable();
local_lock_nested_bh(&napi_alloc_cache.bh_lock);
nc = this_cpu_ptr(&napi_alloc_cache.page);
data = page_frag_alloc_va(nc, len, gfp_mask);
- pfmemalloc = nc->pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
local_bh_enable();
@@ -849,7 +849,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
len = SKB_HEAD_ALIGN(len);
data = page_frag_alloc_va(&nc->page, len, gfp_mask);
- pfmemalloc = nc->page.pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page);
}
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 1539d315afe7..694c4df7a1a3 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work)
*/
rxrpc_purge_queue(&conn->rx_queue);
- if (conn->tx_data_alloc.va)
- __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va),
- conn->tx_data_alloc.pagecnt_bias);
+ page_frag_cache_drain(&conn->tx_data_alloc);
call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
}
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index 504453c688d7..a8cffe47cf01 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local)
#endif
rxrpc_purge_queue(&local->rx_queue);
rxrpc_purge_client_connections(local);
- if (local->tx_alloc.va)
- __page_frag_cache_drain(virt_to_page(local->tx_alloc.va),
- local->tx_alloc.pagecnt_bias);
+ page_frag_cache_drain(&local->tx_alloc);
}
/*
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 42d20412c1c3..4b1e87187614 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1609,7 +1609,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt)
static void svc_sock_free(struct svc_xprt *xprt)
{
struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
- struct page_frag_cache *pfc = &svsk->sk_frag_cache;
struct socket *sock = svsk->sk_sock;
trace_svcsock_free(svsk, sock);
@@ -1619,8 +1618,7 @@ static void svc_sock_free(struct svc_xprt *xprt)
sockfd_put(sock);
else
sock_release(sock);
- if (pfc->va)
- __page_frag_cache_drain(virt_to_head_page(pfc->va),
- pfc->pagecnt_bias);
+
+ page_frag_cache_drain(&svsk->sk_frag_cache);
kfree(svsk);
}
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index e522611452c9..3b98df379ce6 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -120,7 +120,7 @@ static int __init page_frag_test_init(void)
u64 duration;
int ret;
- test_frag.va = NULL;
+ page_frag_cache_init(&test_frag);
atomic_set(&nthreads, 2);
init_completion(&wait);
--
2.33.0
* [PATCH net-next v13 06/14] xtensa: remove the get_order() implementation
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (4 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 05/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
` (8 subsequent siblings)
14 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck, Max Filippov,
Chris Zankel
The get_order() implemented by xtensa using the 'nsau'
instruction seems to be the same as the generic implementation
in include/asm-generic/getorder.h when size is not a constant
value, as the generic implementation's fls*() calls also
utilize the 'nsau' instruction on xtensa.
So remove the get_order() implemented by xtensa: using the
generic implementation may enable the compiler to do the
computation at compile time when size is a constant value
instead of at runtime, and it enables the use of get_order()
in the BUILD_BUG_ON() macro in the next patch.
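As a rough userspace sketch of the behavior being relied on (this is not the kernel code; PAGE_SHIFT and the fls() stand-in below are assumptions mirroring the asm-generic version):

```c
#include <assert.h>

#define PAGE_SHIFT 12 /* assumption: 4K pages */

/* stand-in for fls(): 1-based index of the highest set bit, 0 for 0 */
static int fls_stub(unsigned long x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* mirrors the asm-generic get_order() for non-constant sizes */
static int get_order_stub(unsigned long size)
{
	return fls_stub((size - 1) >> PAGE_SHIFT);
}
```

With a constant size the compiler can fold the whole expression, which is what allows get_order() to appear in BUILD_BUG_ON().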
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
---
arch/xtensa/include/asm/page.h | 18 ------------------
1 file changed, 18 deletions(-)
diff --git a/arch/xtensa/include/asm/page.h b/arch/xtensa/include/asm/page.h
index 4db56ef052d2..8665d57991dd 100644
--- a/arch/xtensa/include/asm/page.h
+++ b/arch/xtensa/include/asm/page.h
@@ -109,26 +109,8 @@ typedef struct page *pgtable_t;
#define __pgd(x) ((pgd_t) { (x) } )
#define __pgprot(x) ((pgprot_t) { (x) } )
-/*
- * Pure 2^n version of get_order
- * Use 'nsau' instructions if supported by the processor or the generic version.
- */
-
-#if XCHAL_HAVE_NSA
-
-static inline __attribute_const__ int get_order(unsigned long size)
-{
- int lz;
- asm ("nsau %0, %1" : "=r" (lz) : "r" ((size - 1) >> PAGE_SHIFT));
- return 32 - lz;
-}
-
-#else
-
# include <asm-generic/getorder.h>
-#endif
-
struct page;
struct vm_area_struct;
extern void clear_page(void *page);
--
2.33.0
* [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (5 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 06/14] xtensa: remove the get_order() implementation Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-14 16:13 ` Alexander H Duyck
2024-08-08 12:37 ` [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API Yunsheng Lin
` (7 subsequent siblings)
14 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
Currently there is one 'struct page_frag' for every 'struct
sock' and 'struct task_struct'; we are about to replace
'struct page_frag' with 'struct page_frag_cache' for them.
Before beginning the replacement, we need to ensure that the
size of 'struct page_frag_cache' is not bigger than the size
of 'struct page_frag', as there may be tens of thousands of
'struct sock' and 'struct task_struct' instances in the
system.
By or'ing the page order and pfmemalloc bit into the lower
bits of 'va', instead of using a 'u16' or 'u32' for the page
size and a 'u8' for pfmemalloc, we avoid wasting 3 or 5 bytes
of space. And since the page address, pfmemalloc bit and order
are unchanged for the same page in the same 'page_frag_cache'
instance, it makes sense to pack them together.
After this patch, the size of 'struct page_frag_cache' should
be the same as the size of 'struct page_frag'.
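A minimal userspace sketch of the packing scheme (the mask values mirror the PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE case; PAGE_SHIFT and the shortened macro names are assumptions for illustration):

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_SHIFT	12			/* assumption: 4K pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

#define ORDER_MASK	0xffUL			/* bits 0-7: page order */
#define PFMEMALLOC_BIT	(1UL << 8)		/* bit 8: pfmemalloc flag */

/* va is page aligned, so its low PAGE_SHIFT bits are free for metadata */
static unsigned long encode_va(void *va, unsigned int order, bool pfmemalloc)
{
	return (unsigned long)va | (order & ORDER_MASK) |
	       (pfmemalloc ? PFMEMALLOC_BIT : 0);
}

static unsigned int decoded_order(unsigned long encoded_va)
{
	return encoded_va & ORDER_MASK;
}

static bool decoded_pfmemalloc(unsigned long encoded_va)
{
	return !!(encoded_va & PFMEMALLOC_BIT);
}

static void *decoded_address(unsigned long encoded_va)
{
	return (void *)(encoded_va & PAGE_MASK);
}
```

Because the address is page aligned, masking with PAGE_MASK recovers it exactly, so the three fields fit in a single 'unsigned long'.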
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/mm_types_task.h | 16 +++++-----
include/linux/page_frag_cache.h | 52 +++++++++++++++++++++++++++++++--
mm/page_frag_cache.c | 49 +++++++++++++++++--------------
3 files changed, 85 insertions(+), 32 deletions(-)
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index b1c54b2b9308..f2610112a642 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -50,18 +50,18 @@ struct page_frag {
#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
struct page_frag_cache {
- void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* encoded_va consists of the virtual address, pfmemalloc bit and order
+ * of a page.
+ */
+ unsigned long encoded_va;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
__u16 remaining;
- __u16 size;
+ __u16 pagecnt_bias;
#else
__u32 remaining;
+ __u32 pagecnt_bias;
#endif
- /* we maintain a pagecount bias, so that we dont dirty cache line
- * containing page->_refcount every time we allocate a fragment.
- */
- unsigned int pagecnt_bias;
- bool pfmemalloc;
};
/* Track pages that require TLB flushes */
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 7c9125a9aed3..4ce924eaf1b1 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,66 @@
#ifndef _LINUX_PAGE_FRAG_CACHE_H
#define _LINUX_PAGE_FRAG_CACHE_H
+#include <linux/bits.h>
+#include <linux/build_bug.h>
#include <linux/log2.h>
#include <linux/types.h>
#include <linux/mm_types_task.h>
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0)
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(8)
+#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 8
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK 0
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(0)
+#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 0
+#endif
+
+static inline unsigned long encode_aligned_va(void *va, unsigned int order,
+ bool pfmemalloc)
+{
+ BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+ BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT >= PAGE_SHIFT);
+
+ return (unsigned long)va | (order & PAGE_FRAG_CACHE_ORDER_MASK) |
+ ((unsigned long)pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT);
+}
+
+static inline unsigned long encoded_page_order(unsigned long encoded_va)
+{
+ return encoded_va & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static inline bool encoded_page_pfmemalloc(unsigned long encoded_va)
+{
+ return !!(encoded_va & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
+static inline void *encoded_page_address(unsigned long encoded_va)
+{
+ return (void *)(encoded_va & PAGE_MASK);
+}
+
static inline void page_frag_cache_init(struct page_frag_cache *nc)
{
- nc->va = NULL;
+ nc->encoded_va = 0;
}
static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
{
- return !!nc->pfmemalloc;
+ return encoded_page_pfmemalloc(nc->encoded_va);
+}
+
+static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va)
+{
+ return PAGE_SIZE << encoded_page_order(encoded_va);
}
void page_frag_cache_drain(struct page_frag_cache *nc);
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 70fb6dead624..2544b292375a 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -22,6 +22,7 @@
static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
gfp_t gfp_mask)
{
+ unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
struct page *page = NULL;
gfp_t gfp = gfp_mask;
@@ -30,23 +31,31 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
__GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
PAGE_FRAG_CACHE_MAX_ORDER);
- nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
#endif
- if (unlikely(!page))
+ if (unlikely(!page)) {
page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+ if (unlikely(!page)) {
+ nc->encoded_va = 0;
+ return NULL;
+ }
- nc->va = page ? page_address(page) : NULL;
+ order = 0;
+ }
+
+ nc->encoded_va = encode_aligned_va(page_address(page), order,
+ page_is_pfmemalloc(page));
return page;
}
void page_frag_cache_drain(struct page_frag_cache *nc)
{
- if (!nc->va)
+ if (!nc->encoded_va)
return;
- __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
- nc->va = NULL;
+ __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va),
+ nc->pagecnt_bias);
+ nc->encoded_va = 0;
}
EXPORT_SYMBOL(page_frag_cache_drain);
@@ -63,33 +72,29 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
{
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- unsigned int size = nc->size;
-#else
- unsigned int size = PAGE_SIZE;
-#endif
- unsigned int remaining;
+ unsigned long encoded_va = nc->encoded_va;
+ unsigned int size, remaining;
struct page *page;
- if (unlikely(!nc->va)) {
+ if (unlikely(!encoded_va)) {
refill:
page = __page_frag_cache_refill(nc, gfp_mask);
if (!page)
return NULL;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
+ encoded_va = nc->encoded_va;
+ size = page_frag_cache_page_size(encoded_va);
+
/* Even if we own the page, we do not use atomic_set().
* This would break get_page_unless_zero() users.
*/
page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
/* reset page count bias and remaining to start of new frag */
- nc->pfmemalloc = page_is_pfmemalloc(page);
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
nc->remaining = size;
+ } else {
+ size = page_frag_cache_page_size(encoded_va);
}
remaining = nc->remaining & align_mask;
@@ -107,13 +112,13 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
return NULL;
}
- page = virt_to_page(nc->va);
+ page = virt_to_page((void *)encoded_va);
if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
goto refill;
- if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
+ if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
+ free_unref_page(page, encoded_page_order(encoded_va));
goto refill;
}
@@ -128,7 +133,7 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
nc->pagecnt_bias--;
nc->remaining = remaining - fragsz;
- return nc->va + (size - remaining);
+ return encoded_page_address(encoded_va) + (size - remaining);
}
EXPORT_SYMBOL(__page_frag_alloc_va_align);
--
2.33.0
* Re: [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-08-08 12:37 ` [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
@ 2024-08-14 16:13 ` Alexander H Duyck
2024-08-15 3:10 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander H Duyck @ 2024-08-14 16:13 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, Andrew Morton, linux-mm
On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> Currently there is one 'struct page_frag' for every 'struct
> sock' and 'struct task_struct', we are about to replace the
> 'struct page_frag' with 'struct page_frag_cache' for them.
> Before begin the replacing, we need to ensure the size of
> 'struct page_frag_cache' is not bigger than the size of
> 'struct page_frag', as there may be tens of thousands of
> 'struct sock' and 'struct task_struct' instances in the
> system.
>
> By or'ing the page order & pfmemalloc with lower bits of
> 'va' instead of using 'u16' or 'u32' for page size and 'u8'
> for pfmemalloc, we are able to avoid 3 or 5 bytes space waste.
> And page address & pfmemalloc & order is unchanged for the
> same page in the same 'page_frag_cache' instance, it makes
> sense to fit them together.
>
> After this patch, the size of 'struct page_frag_cache' should be
> the same as the size of 'struct page_frag'.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> include/linux/mm_types_task.h | 16 +++++-----
> include/linux/page_frag_cache.h | 52 +++++++++++++++++++++++++++++++--
> mm/page_frag_cache.c | 49 +++++++++++++++++--------------
> 3 files changed, 85 insertions(+), 32 deletions(-)
>
> diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
> index b1c54b2b9308..f2610112a642 100644
> --- a/include/linux/mm_types_task.h
> +++ b/include/linux/mm_types_task.h
> @@ -50,18 +50,18 @@ struct page_frag {
> #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
> #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
> struct page_frag_cache {
> - void *va;
> -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> + /* encoded_va consists of the virtual address, pfmemalloc bit and order
> + * of a page.
> + */
> + unsigned long encoded_va;
> +
Rather than calling this an "encoded_va" we might want to call this an
"encoded_page" as that would be closer to what we are actually working
with. We are just using the virtual address as the page pointer instead
of the page struct itself since we need quicker access to the virtual
address than we do the page struct.
> +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
> __u16 remaining;
> - __u16 size;
> + __u16 pagecnt_bias;
> #else
> __u32 remaining;
> + __u32 pagecnt_bias;
> #endif
> - /* we maintain a pagecount bias, so that we dont dirty cache line
> - * containing page->_refcount every time we allocate a fragment.
> - */
> - unsigned int pagecnt_bias;
> - bool pfmemalloc;
> };
>
> /* Track pages that require TLB flushes */
> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> index 7c9125a9aed3..4ce924eaf1b1 100644
> --- a/include/linux/page_frag_cache.h
> +++ b/include/linux/page_frag_cache.h
> @@ -3,18 +3,66 @@
> #ifndef _LINUX_PAGE_FRAG_CACHE_H
> #define _LINUX_PAGE_FRAG_CACHE_H
>
> +#include <linux/bits.h>
> +#include <linux/build_bug.h>
> #include <linux/log2.h>
> #include <linux/types.h>
> #include <linux/mm_types_task.h>
>
> +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> +/* Use a full byte here to enable assembler optimization as the shift
> + * operation is usually expecting a byte.
> + */
> +#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0)
> +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(8)
> +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 8
> +#else
> +/* Compiler should be able to figure out we don't read things as any value
> + * ANDed with 0 is 0.
> + */
> +#define PAGE_FRAG_CACHE_ORDER_MASK 0
> +#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT BIT(0)
> +#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 0
> +#endif
> +
You should probably pull out PAGE_FRAG_CACHE_PFMEMALLOC_BIT and just
define it as:
#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT \
BIT(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT)
That way there is no risk of the bit and the shift somehow getting
split up and being different values.
> +static inline unsigned long encode_aligned_va(void *va, unsigned int order,
> + bool pfmemalloc)
Rather than passing the virtual address it might make more sense to
pass the page. With that you know it should be PAGE_SIZE aligned versus
just being passed some random virtual address.
> +{
> + BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
> + BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT >= PAGE_SHIFT);
Rather than test the shift I would test the bit versus PAGE_SIZE.
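A minimal sketch of the two suggestions above (deriving the bit from the shift, and checking the bit against PAGE_SIZE), using _Static_assert in place of BUILD_BUG_ON and an assumed PAGE_SIZE:

```c
#define PAGE_SHIFT 12				/* assumption: 4K pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

#define PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT 8
/* derive the bit from the shift so the two values can never disagree */
#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT \
	(1UL << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT)

/* the flag must live below the page-alignment bits of the address */
_Static_assert(PAGE_FRAG_CACHE_PFMEMALLOC_BIT < PAGE_SIZE,
	       "pfmemalloc bit must fit in the low bits of a page-aligned va");
```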
> +
> + return (unsigned long)va | (order & PAGE_FRAG_CACHE_ORDER_MASK) |
> + ((unsigned long)pfmemalloc << PAGE_FRAG_CACHE_PFMEMALLOC_SHIFT);
> +}
> +
> +static inline unsigned long encoded_page_order(unsigned long encoded_va)
> +{
> + return encoded_va & PAGE_FRAG_CACHE_ORDER_MASK;
> +}
> +
> +static inline bool encoded_page_pfmemalloc(unsigned long encoded_va)
> +{
> + return !!(encoded_va & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
> +}
> +
> +static inline void *encoded_page_address(unsigned long encoded_va)
> +{
> + return (void *)(encoded_va & PAGE_MASK);
> +}
> +
This is one of the reasons why I am thinking "encoded_page" might be a
better name for it. The 3 functions above all have their equivalent for
a page struct, but we pulled that data out and packed it all into the
encoded_page.
* Re: [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-08-14 16:13 ` Alexander H Duyck
@ 2024-08-15 3:10 ` Yunsheng Lin
2024-08-15 15:03 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-15 3:10 UTC (permalink / raw)
To: Alexander H Duyck, davem, kuba, pabeni
Cc: netdev, linux-kernel, Andrew Morton, linux-mm
On 2024/8/15 0:13, Alexander H Duyck wrote:
> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>> Currently there is one 'struct page_frag' for every 'struct
>> sock' and 'struct task_struct', we are about to replace the
>> 'struct page_frag' with 'struct page_frag_cache' for them.
>> Before begin the replacing, we need to ensure the size of
>> 'struct page_frag_cache' is not bigger than the size of
>> 'struct page_frag', as there may be tens of thousands of
>> 'struct sock' and 'struct task_struct' instances in the
>> system.
>>
>> By or'ing the page order & pfmemalloc with lower bits of
>> 'va' instead of using 'u16' or 'u32' for page size and 'u8'
>> for pfmemalloc, we are able to avoid 3 or 5 bytes space waste.
>> And page address & pfmemalloc & order is unchanged for the
>> same page in the same 'page_frag_cache' instance, it makes
>> sense to fit them together.
>>
>> After this patch, the size of 'struct page_frag_cache' should be
>> the same as the size of 'struct page_frag'.
>>
>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> ---
>> include/linux/mm_types_task.h | 16 +++++-----
>> include/linux/page_frag_cache.h | 52 +++++++++++++++++++++++++++++++--
>> mm/page_frag_cache.c | 49 +++++++++++++++++--------------
>> 3 files changed, 85 insertions(+), 32 deletions(-)
>>
>> diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
>> index b1c54b2b9308..f2610112a642 100644
>> --- a/include/linux/mm_types_task.h
>> +++ b/include/linux/mm_types_task.h
>> @@ -50,18 +50,18 @@ struct page_frag {
>> #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
>> #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
>> struct page_frag_cache {
>> - void *va;
>> -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>> + /* encoded_va consists of the virtual address, pfmemalloc bit and order
>> + * of a page.
>> + */
>> + unsigned long encoded_va;
>> +
>
> Rather than calling this an "encoded_va" we might want to call this an
> "encoded_page" as that would be closer to what we are actually working
> with. We are just using the virtual address as the page pointer instead
> of the page struct itself since we need quicker access to the virtual
> address than we do the page struct.
Calling it "encoded_page" seems confusing enough when we then pass
"encoded_page" to virt_to_page(), which is expecting a 'va', no?
>
^ permalink raw reply [flat|nested] 47+ messages in thread* Re: [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-08-15 3:10 ` Yunsheng Lin
@ 2024-08-15 15:03 ` Alexander Duyck
2024-08-16 11:55 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-15 15:03 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On Wed, Aug 14, 2024 at 8:10 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/15 0:13, Alexander H Duyck wrote:
> > On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> >> Currently there is one 'struct page_frag' for every 'struct
> >> sock' and 'struct task_struct', we are about to replace the
> >> 'struct page_frag' with 'struct page_frag_cache' for them.
> >> Before begin the replacing, we need to ensure the size of
> >> 'struct page_frag_cache' is not bigger than the size of
> >> 'struct page_frag', as there may be tens of thousands of
> >> 'struct sock' and 'struct task_struct' instances in the
> >> system.
> >>
> >> By or'ing the page order & pfmemalloc with lower bits of
> >> 'va' instead of using 'u16' or 'u32' for page size and 'u8'
> >> for pfmemalloc, we are able to avoid 3 or 5 bytes space waste.
> >> And page address & pfmemalloc & order is unchanged for the
> >> same page in the same 'page_frag_cache' instance, it makes
> >> sense to fit them together.
> >>
> >> After this patch, the size of 'struct page_frag_cache' should be
> >> the same as the size of 'struct page_frag'.
> >>
> >> CC: Alexander Duyck <alexander.duyck@gmail.com>
> >> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >> ---
> >> include/linux/mm_types_task.h | 16 +++++-----
> >> include/linux/page_frag_cache.h | 52 +++++++++++++++++++++++++++++++--
> >> mm/page_frag_cache.c | 49 +++++++++++++++++--------------
> >> 3 files changed, 85 insertions(+), 32 deletions(-)
> >>
> >> diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
> >> index b1c54b2b9308..f2610112a642 100644
> >> --- a/include/linux/mm_types_task.h
> >> +++ b/include/linux/mm_types_task.h
> >> @@ -50,18 +50,18 @@ struct page_frag {
> >> #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
> >> #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
> >> struct page_frag_cache {
> >> - void *va;
> >> -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> >> + /* encoded_va consists of the virtual address, pfmemalloc bit and order
> >> + * of a page.
> >> + */
> >> + unsigned long encoded_va;
> >> +
> >
> > Rather than calling this an "encoded_va" we might want to call this an
> > "encoded_page" as that would be closer to what we are actually working
> > with. We are just using the virtual address as the page pointer instead
> > of the page struct itself since we need quicker access to the virtual
> > address than we do the page struct.
>
> Calling it "encoded_page" seems confusing enough when calling virt_to_page()
> with "encoded_page" when virt_to_page() is expecting a 'va', no?
It makes about as much sense as calling it an "encoded_va". What you
have is essentially a packed page struct that contains the virtual
address, pfmemalloc flag, and order. So if you want you could call it
"packed_page" too I suppose. Basically this isn't a valid virtual
address it is a page pointer with some extra metadata packed in.
* Re: [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-08-15 15:03 ` Alexander Duyck
@ 2024-08-16 11:55 ` Yunsheng Lin
2024-08-19 16:00 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-16 11:55 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On 2024/8/15 23:03, Alexander Duyck wrote:
> On Wed, Aug 14, 2024 at 8:10 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/8/15 0:13, Alexander H Duyck wrote:
>>> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>>>> Currently there is one 'struct page_frag' for every 'struct
>>>> sock' and 'struct task_struct', we are about to replace the
>>>> 'struct page_frag' with 'struct page_frag_cache' for them.
>>>> Before begin the replacing, we need to ensure the size of
>>>> 'struct page_frag_cache' is not bigger than the size of
>>>> 'struct page_frag', as there may be tens of thousands of
>>>> 'struct sock' and 'struct task_struct' instances in the
>>>> system.
>>>>
>>>> By or'ing the page order & pfmemalloc with lower bits of
>>>> 'va' instead of using 'u16' or 'u32' for page size and 'u8'
>>>> for pfmemalloc, we are able to avoid 3 or 5 bytes space waste.
>>>> And page address & pfmemalloc & order is unchanged for the
>>>> same page in the same 'page_frag_cache' instance, it makes
>>>> sense to fit them together.
>>>>
>>>> After this patch, the size of 'struct page_frag_cache' should be
>>>> the same as the size of 'struct page_frag'.
>>>>
>>>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>>>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>>>> ---
>>>> include/linux/mm_types_task.h | 16 +++++-----
>>>> include/linux/page_frag_cache.h | 52 +++++++++++++++++++++++++++++++--
>>>> mm/page_frag_cache.c | 49 +++++++++++++++++--------------
>>>> 3 files changed, 85 insertions(+), 32 deletions(-)
>>>>
>>>> diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
>>>> index b1c54b2b9308..f2610112a642 100644
>>>> --- a/include/linux/mm_types_task.h
>>>> +++ b/include/linux/mm_types_task.h
>>>> @@ -50,18 +50,18 @@ struct page_frag {
>>>> #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
>>>> #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
>>>> struct page_frag_cache {
>>>> - void *va;
>>>> -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
>>>> + /* encoded_va consists of the virtual address, pfmemalloc bit and order
>>>> + * of a page.
>>>> + */
>>>> + unsigned long encoded_va;
>>>> +
>>>
>>> Rather than calling this an "encoded_va" we might want to call this an
>>> "encoded_page" as that would be closer to what we are actually working
>>> with. We are just using the virtual address as the page pointer instead
>>> of the page struct itself since we need quicker access to the virtual
>>> address than we do the page struct.
>>
>> Calling it "encoded_page" seems confusing enough when calling virt_to_page()
>> with "encoded_page" when virt_to_page() is expecting a 'va', no?
>
> It makes about as much sense as calling it an "encoded_va". What you
> have is essentially a packed page struct that contains the virtual
> address, pfmemalloc flag, and order. So if you want you could call it
> "packed_page" too I suppose. Basically this isn't a valid virtual
> address it is a page pointer with some extra metadata packed in.
I think we are all agreed that it is not a valid virtual address, which
adding the 'encoded_' part conveys.
I am not really sure that "encoded_page" or "packed_page" is better than
'encoded_va' here, as no page pointer is implied by "encoded_page" or
"packed_page". For 'encoded_va', at least a virtual address is implied,
and that virtual address just happens to be usable as a page pointer.
Yes, you may say the 'pfmemalloc flag and order' part is about the page,
not about the 'va'; I guess there is a trade-off to make here if there
is no perfect name for it, and 'va' does occupy most bits of
'encoded_va'.
* Re: [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-08-16 11:55 ` Yunsheng Lin
@ 2024-08-19 16:00 ` Alexander Duyck
0 siblings, 0 replies; 47+ messages in thread
From: Alexander Duyck @ 2024-08-19 16:00 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On Fri, Aug 16, 2024 at 4:56 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/15 23:03, Alexander Duyck wrote:
> > On Wed, Aug 14, 2024 at 8:10 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >>
> >> On 2024/8/15 0:13, Alexander H Duyck wrote:
> >>> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> >>>> Currently there is one 'struct page_frag' for every 'struct
> >>>> sock' and 'struct task_struct', we are about to replace the
> >>>> 'struct page_frag' with 'struct page_frag_cache' for them.
> >>>> Before begin the replacing, we need to ensure the size of
> >>>> 'struct page_frag_cache' is not bigger than the size of
> >>>> 'struct page_frag', as there may be tens of thousands of
> >>>> 'struct sock' and 'struct task_struct' instances in the
> >>>> system.
> >>>>
> >>>> By or'ing the page order & pfmemalloc with lower bits of
> >>>> 'va' instead of using 'u16' or 'u32' for page size and 'u8'
> >>>> for pfmemalloc, we are able to avoid 3 or 5 bytes space waste.
> >>>> And page address & pfmemalloc & order is unchanged for the
> >>>> same page in the same 'page_frag_cache' instance, it makes
> >>>> sense to fit them together.
> >>>>
> >>>> After this patch, the size of 'struct page_frag_cache' should be
> >>>> the same as the size of 'struct page_frag'.
> >>>>
> >>>> CC: Alexander Duyck <alexander.duyck@gmail.com>
> >>>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >>>> ---
> >>>> include/linux/mm_types_task.h | 16 +++++-----
> >>>> include/linux/page_frag_cache.h | 52 +++++++++++++++++++++++++++++++--
> >>>> mm/page_frag_cache.c | 49 +++++++++++++++++--------------
> >>>> 3 files changed, 85 insertions(+), 32 deletions(-)
> >>>>
> >>>> diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
> >>>> index b1c54b2b9308..f2610112a642 100644
> >>>> --- a/include/linux/mm_types_task.h
> >>>> +++ b/include/linux/mm_types_task.h
> >>>> @@ -50,18 +50,18 @@ struct page_frag {
> >>>> #define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
> >>>> #define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
> >>>> struct page_frag_cache {
> >>>> - void *va;
> >>>> -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> >>>> + /* encoded_va consists of the virtual address, pfmemalloc bit and order
> >>>> + * of a page.
> >>>> + */
> >>>> + unsigned long encoded_va;
> >>>> +
> >>>
> >>> Rather than calling this an "encoded_va" we might want to call this an
> >>> "encoded_page" as that would be closer to what we are actually working
> >>> with. We are just using the virtual address as the page pointer instead
> >>> of the page struct itself since we need quicker access to the virtual
> >>> address than we do the page struct.
> >>
> >> Calling it "encoded_page" seems confusing enough when calling virt_to_page()
> >> with "encoded_page" when virt_to_page() is expecting a 'va', no?
> >
> > It makes about as much sense as calling it an "encoded_va". What you
> > have is essentially a packed page struct that contains the virtual
> > address, pfmemalloc flag, and order. So if you want you could call it
> > "packed_page" too I suppose. Basically this isn't a valid virtual
> > address it is a page pointer with some extra metadata packed in.
>
> I think we are all argeed that is not a valid virtual address by adding
> the 'encoded_' part.
> I am not really sure if "encoded_page" or "packed_page" is better than
> 'encoded_va' here, as there is no 'page pointer' that is implied by
> "encoded_page" or "packed_page" here. For 'encoded_va', at least there
> is 'virtual address' that is implied by 'encoded_va', and that 'virtual
> address' just happen to be page pointer.
Basically we are using the page's virtual address to encode the page
into the struct. If you look, "virtual" is a pointer stored in the
page to provide the virtual address on some architectures. It also
happens that we have virt_to_page which provides an easy way to get
back and forth between the values.
> Yes, you may say the pfmemalloc flag and order are about the page, not
> the 'va'; I guess there is a trade-off to make when there is no perfect
> name for it, and the 'va' does occupy most bits of 'encoded_va'.
The naming isn't really a show stopper one way or another. It was more
the fact that you had several functions accessing it that were using
the name "encoded_page" as I recall. That is why I thought it might
make sense to rename it to that. Why have functions called
"encoded_page_order" work with an "encoded_va" versus an
"encoded_page". It makes it easier to logically lump them all
together.
^ permalink raw reply [flat|nested] 47+ messages in thread
* [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (6 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 07/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-14 17:54 ` Alexander H Duyck
2024-08-08 12:37 ` [PATCH net-next v13 09/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
` (6 subsequent siblings)
14 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
Refactor the common code from __page_frag_alloc_va_align()
into __page_frag_cache_reload(), so that the new API can
make use of it.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/page_frag_cache.h | 2 +-
mm/page_frag_cache.c | 138 ++++++++++++++++++--------------
2 files changed, 81 insertions(+), 59 deletions(-)
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 4ce924eaf1b1..0abffdd10a1c 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -52,7 +52,7 @@ static inline void *encoded_page_address(unsigned long encoded_va)
static inline void page_frag_cache_init(struct page_frag_cache *nc)
{
- nc->encoded_va = 0;
+ memset(nc, 0, sizeof(*nc));
}
static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 2544b292375a..4e6b1c4684f0 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -19,8 +19,27 @@
#include <linux/page_frag_cache.h>
#include "internal.h"
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
- gfp_t gfp_mask)
+static bool __page_frag_cache_reuse(unsigned long encoded_va,
+ unsigned int pagecnt_bias)
+{
+ struct page *page;
+
+ page = virt_to_page((void *)encoded_va);
+ if (!page_ref_sub_and_test(page, pagecnt_bias))
+ return false;
+
+ if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
+ free_unref_page(page, encoded_page_order(encoded_va));
+ return false;
+ }
+
+ /* OK, page count is 0, we can safely set it */
+ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+ return true;
+}
+
+static bool __page_frag_cache_refill(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
{
unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
struct page *page = NULL;
@@ -35,8 +54,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
if (unlikely(!page)) {
page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
if (unlikely(!page)) {
- nc->encoded_va = 0;
- return NULL;
+ memset(nc, 0, sizeof(*nc));
+ return false;
}
order = 0;
@@ -45,7 +64,33 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
nc->encoded_va = encode_aligned_va(page_address(page), order,
page_is_pfmemalloc(page));
- return page;
+ /* Even if we own the page, we do not use atomic_set().
+ * This would break get_page_unless_zero() users.
+ */
+ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+ return true;
+}
+
+/* Reload cache by reusing the old cache if it is possible, or
+ * refilling from the page allocator.
+ */
+static bool __page_frag_cache_reload(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
+{
+ if (likely(nc->encoded_va)) {
+ if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
+ goto out;
+ }
+
+ if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
+ return false;
+
+out:
+ /* reset page count bias and remaining to start of new frag */
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ nc->remaining = page_frag_cache_page_size(nc->encoded_va);
+ return true;
}
void page_frag_cache_drain(struct page_frag_cache *nc)
@@ -55,7 +100,7 @@ void page_frag_cache_drain(struct page_frag_cache *nc)
__page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va),
nc->pagecnt_bias);
- nc->encoded_va = 0;
+ memset(nc, 0, sizeof(*nc));
}
EXPORT_SYMBOL(page_frag_cache_drain);
@@ -73,67 +118,44 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
unsigned int align_mask)
{
unsigned long encoded_va = nc->encoded_va;
- unsigned int size, remaining;
- struct page *page;
-
- if (unlikely(!encoded_va)) {
-refill:
- page = __page_frag_cache_refill(nc, gfp_mask);
- if (!page)
- return NULL;
-
- encoded_va = nc->encoded_va;
- size = page_frag_cache_page_size(encoded_va);
-
- /* Even if we own the page, we do not use atomic_set().
- * This would break get_page_unless_zero() users.
- */
- page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
- /* reset page count bias and remaining to start of new frag */
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- nc->remaining = size;
- } else {
- size = page_frag_cache_page_size(encoded_va);
- }
+ unsigned int remaining;
remaining = nc->remaining & align_mask;
- if (unlikely(remaining < fragsz)) {
- if (unlikely(fragsz > PAGE_SIZE)) {
- /*
- * The caller is trying to allocate a fragment
- * with fragsz > PAGE_SIZE but the cache isn't big
- * enough to satisfy the request, this may
- * happen in low memory conditions.
- * We don't release the cache page because
- * it could make memory pressure worse
- * so we simply return NULL here.
- */
- return NULL;
- }
-
- page = virt_to_page((void *)encoded_va);
- if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
- goto refill;
-
- if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
- free_unref_page(page, encoded_page_order(encoded_va));
- goto refill;
- }
+ /* As we have ensured that remaining is zero when initializing and when
+ * draining the old cache, checking 'remaining >= fragsz' is enough to
+ * indicate there is enough available space for the new fragment.
+ */
+ if (likely(remaining >= fragsz)) {
+ nc->pagecnt_bias--;
+ nc->remaining = remaining - fragsz;
- /* OK, page count is 0, we can safely set it */
- set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+ return encoded_page_address(encoded_va) +
+ (page_frag_cache_page_size(encoded_va) - remaining);
+ }
- /* reset page count bias and remaining to start of new frag */
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- remaining = size;
+ if (unlikely(fragsz > PAGE_SIZE)) {
+ /*
+ * The caller is trying to allocate a fragment with
+ * fragsz > PAGE_SIZE but the cache isn't big enough to satisfy
+ * the request, this may happen in low memory conditions. We don't
+ * release the cache page because it could make memory pressure
+ * worse so we simply return NULL here.
+ */
+ return NULL;
}
+ if (unlikely(!__page_frag_cache_reload(nc, gfp_mask)))
+ return NULL;
+
+ /* As we allocate fragments from the cache in a count-up fashion, the
+ * offset of a fragment from a just-reloaded cache is zero, so the
+ * remaining alignment and offset calculation are not needed.
+ */
nc->pagecnt_bias--;
- nc->remaining = remaining - fragsz;
+ nc->remaining -= fragsz;
- return encoded_page_address(encoded_va) + (size - remaining);
+ return encoded_page_address(nc->encoded_va);
}
EXPORT_SYMBOL(__page_frag_alloc_va_align);
--
2.33.0
* Re: [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API
2024-08-08 12:37 ` [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API Yunsheng Lin
@ 2024-08-14 17:54 ` Alexander H Duyck
2024-08-15 3:04 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander H Duyck @ 2024-08-14 17:54 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, Andrew Morton, linux-mm
On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> Refactor common codes from __page_frag_alloc_va_align()
> to __page_frag_cache_reload(), so that the new API can
> make use of them.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> include/linux/page_frag_cache.h | 2 +-
> mm/page_frag_cache.c | 138 ++++++++++++++++++--------------
> 2 files changed, 81 insertions(+), 59 deletions(-)
>
> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> index 4ce924eaf1b1..0abffdd10a1c 100644
> --- a/include/linux/page_frag_cache.h
> +++ b/include/linux/page_frag_cache.h
> @@ -52,7 +52,7 @@ static inline void *encoded_page_address(unsigned long encoded_va)
>
> static inline void page_frag_cache_init(struct page_frag_cache *nc)
> {
> - nc->encoded_va = 0;
> + memset(nc, 0, sizeof(*nc));
> }
>
Still not a fan of this. Just setting encoded_va to 0 should be enough
as the other fields will automatically be overwritten when the new page
is allocated.
Relying on memset is problematic at best since you then introduce the
potential for issues where remaining somehow gets corrupted while
encoded_va/page is 0. I would rather have both of these checked
as part of allocation than just assume the cache is valid whenever
remaining is set.
I would prefer to keep the check for a non-0 encoded_page value and
then check remaining rather than just rely on remaining as it creates a
single point of failure. With that we can safely tear away a page, and
the next caller that tries to allocate will populate a new page and the
associated fields.
> static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index 2544b292375a..4e6b1c4684f0 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -19,8 +19,27 @@
> #include <linux/page_frag_cache.h>
> #include "internal.h"
>
> -static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
> - gfp_t gfp_mask)
> +static bool __page_frag_cache_reuse(unsigned long encoded_va,
> + unsigned int pagecnt_bias)
> +{
> + struct page *page;
> +
> + page = virt_to_page((void *)encoded_va);
> + if (!page_ref_sub_and_test(page, pagecnt_bias))
> + return false;
> +
> + if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
> + free_unref_page(page, encoded_page_order(encoded_va));
> + return false;
> + }
> +
> + /* OK, page count is 0, we can safely set it */
> + set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
> + return true;
> +}
> +
> +static bool __page_frag_cache_refill(struct page_frag_cache *nc,
> + gfp_t gfp_mask)
> {
> unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
> struct page *page = NULL;
> @@ -35,8 +54,8 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
> if (unlikely(!page)) {
> page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
> if (unlikely(!page)) {
> - nc->encoded_va = 0;
> - return NULL;
> + memset(nc, 0, sizeof(*nc));
> + return false;
> }
>
> order = 0;
> @@ -45,7 +64,33 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
> nc->encoded_va = encode_aligned_va(page_address(page), order,
> page_is_pfmemalloc(page));
>
> - return page;
> + /* Even if we own the page, we do not use atomic_set().
> + * This would break get_page_unless_zero() users.
> + */
> + page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
> +
> + return true;
> +}
> +
> +/* Reload cache by reusing the old cache if it is possible, or
> + * refilling from the page allocator.
> + */
> +static bool __page_frag_cache_reload(struct page_frag_cache *nc,
> + gfp_t gfp_mask)
> +{
> + if (likely(nc->encoded_va)) {
> + if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
> + goto out;
> + }
> +
> + if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
> + return false;
> +
> +out:
> + /* reset page count bias and remaining to start of new frag */
> + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> + nc->remaining = page_frag_cache_page_size(nc->encoded_va);
One thought I am having is that it might be better to have the
pagecnt_bias get set at the same time as the page_ref_add or the
set_page_count call. In addition setting the remaining value at the
same time probably would make sense as in the refill case you can make
use of the "order" value directly instead of having to write/read it
out of the encoded va/page.
With that we could simplify this function and get something closer to
what we had for the original alloc_va_align code.
> + return true;
> }
>
> void page_frag_cache_drain(struct page_frag_cache *nc)
> @@ -55,7 +100,7 @@ void page_frag_cache_drain(struct page_frag_cache *nc)
>
> __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va),
> nc->pagecnt_bias);
> - nc->encoded_va = 0;
> + memset(nc, 0, sizeof(*nc));
> }
> EXPORT_SYMBOL(page_frag_cache_drain);
>
> @@ -73,67 +118,44 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
> unsigned int align_mask)
> {
> unsigned long encoded_va = nc->encoded_va;
> - unsigned int size, remaining;
> - struct page *page;
> -
> - if (unlikely(!encoded_va)) {
We should still be checking this before we even touch remaining.
Otherwise we greatly increase the risk of providing a bad virtual
address and have greatly decreased the likelihood of us catching
potential errors gracefully.
> -refill:
> - page = __page_frag_cache_refill(nc, gfp_mask);
> - if (!page)
> - return NULL;
> -
> - encoded_va = nc->encoded_va;
> - size = page_frag_cache_page_size(encoded_va);
> -
> - /* Even if we own the page, we do not use atomic_set().
> - * This would break get_page_unless_zero() users.
> - */
> - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
> -
> - /* reset page count bias and remaining to start of new frag */
> - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> - nc->remaining = size;
With my suggested change above you could essentially just drop the
block starting from the comment and this function wouldn't need to
change as much as it is.
> - } else {
> - size = page_frag_cache_page_size(encoded_va);
> - }
> + unsigned int remaining;
>
> remaining = nc->remaining & align_mask;
> - if (unlikely(remaining < fragsz)) {
> - if (unlikely(fragsz > PAGE_SIZE)) {
> - /*
> - * The caller is trying to allocate a fragment
> - * with fragsz > PAGE_SIZE but the cache isn't big
> - * enough to satisfy the request, this may
> - * happen in low memory conditions.
> - * We don't release the cache page because
> - * it could make memory pressure worse
> - * so we simply return NULL here.
> - */
> - return NULL;
> - }
> -
> - page = virt_to_page((void *)encoded_va);
>
> - if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
> - goto refill;
> -
> - if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
> - free_unref_page(page, encoded_page_order(encoded_va));
> - goto refill;
> - }
Likewise for this block here. We can essentially just make use of the
__page_frag_cache_reuse function without the need to do a complete
rework of the code.
> + /* As we have ensured remaining is zero when initializing and draining old
> + * cache, 'remaining >= fragsz' checking is enough to indicate there is
> + * enough available space for the new fragment allocation.
> + */
> + if (likely(remaining >= fragsz)) {
> + nc->pagecnt_bias--;
> + nc->remaining = remaining - fragsz;
>
> - /* OK, page count is 0, we can safely set it */
> - set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
> + return encoded_page_address(encoded_va) +
> + (page_frag_cache_page_size(encoded_va) - remaining);
> + }
>
> - /* reset page count bias and remaining to start of new frag */
> - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> - remaining = size;
> + if (unlikely(fragsz > PAGE_SIZE)) {
> + /*
> + * The caller is trying to allocate a fragment with
> + * fragsz > PAGE_SIZE but the cache isn't big enough to satisfy
> + * the request, this may happen in low memory conditions. We don't
> + * release the cache page because it could make memory pressure
> + * worse so we simply return NULL here.
> + */
> + return NULL;
> }
>
> + if (unlikely(!__page_frag_cache_reload(nc, gfp_mask)))
> + return NULL;
> +
> + /* As the we are allocating fragment from cache by count-up way, the offset
> + * of allocated fragment from the just reloaded cache is zero, so remaining
> + * aligning and offset calculation are not needed.
> + */
> nc->pagecnt_bias--;
> - nc->remaining = remaining - fragsz;
> + nc->remaining -= fragsz;
>
> - return encoded_page_address(encoded_va) + (size - remaining);
> + return encoded_page_address(nc->encoded_va);
> }
> EXPORT_SYMBOL(__page_frag_alloc_va_align);
>
* Re: [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API
2024-08-14 17:54 ` Alexander H Duyck
@ 2024-08-15 3:04 ` Yunsheng Lin
2024-08-15 15:09 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-15 3:04 UTC (permalink / raw)
To: Alexander H Duyck, davem, kuba, pabeni
Cc: netdev, linux-kernel, Andrew Morton, linux-mm
On 2024/8/15 1:54, Alexander H Duyck wrote:
> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>> Refactor common codes from __page_frag_alloc_va_align()
>> to __page_frag_cache_reload(), so that the new API can
>> make use of them.
>>
>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> ---
>> include/linux/page_frag_cache.h | 2 +-
>> mm/page_frag_cache.c | 138 ++++++++++++++++++--------------
>> 2 files changed, 81 insertions(+), 59 deletions(-)
>>
>> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
>> index 4ce924eaf1b1..0abffdd10a1c 100644
>> --- a/include/linux/page_frag_cache.h
>> +++ b/include/linux/page_frag_cache.h
>> @@ -52,7 +52,7 @@ static inline void *encoded_page_address(unsigned long encoded_va)
>>
>> static inline void page_frag_cache_init(struct page_frag_cache *nc)
>> {
>> - nc->encoded_va = 0;
>> + memset(nc, 0, sizeof(*nc));
>> }
>>
>
> Still not a fan of this. Just setting encoded_va to 0 should be enough
> as the other fields will automatically be overwritten when the new page
> is allocated.
>
> Relying on memset is problematic at best since you then introduce the
> potential for issues where remaining somehow gets corrupted but
> encoded_va/page is 0. I would rather have both of these being checked
> as a part of allocation than just just assuming it is valid if
> remaining is set.
Does adding something like VM_BUG_ON(!nc->encoded_va && nc->remaining) to
catch the above problem address your above concern?
>
> I would prefer to keep the check for a non-0 encoded_page value and
> then check remaining rather than just rely on remaining as it creates a
> single point of failure. With that we can safely tear away a page and
> the next caller to try to allocate will populated a new page and the
> associated fields.
As mentioned before, the memset() is used mainly to:
1. avoid a check in the fast path.
2. avoid duplicating the checking pattern you mentioned above in the
new API.
>
>> static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
>> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
>> index 2544b292375a..4e6b1c4684f0 100644
>> --- a/mm/page_frag_cache.c
>> +++ b/mm/page_frag_cache.c
>> @@ -19,8 +19,27 @@
>> #include <linux/page_frag_cache.h>
>> #include "internal.h"
>>
...
>> +
>> +/* Reload cache by reusing the old cache if it is possible, or
>> + * refilling from the page allocator.
>> + */
>> +static bool __page_frag_cache_reload(struct page_frag_cache *nc,
>> + gfp_t gfp_mask)
>> +{
>> + if (likely(nc->encoded_va)) {
>> + if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
>> + goto out;
>> + }
>> +
>> + if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
>> + return false;
>> +
>> +out:
>> + /* reset page count bias and remaining to start of new frag */
>> + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>> + nc->remaining = page_frag_cache_page_size(nc->encoded_va);
>
> One thought I am having is that it might be better to have the
> pagecnt_bias get set at the same time as the page_ref_add or the
> set_page_count call. In addition setting the remaining value at the
> same time probably would make sense as in the refill case you can make
> use of the "order" value directly instead of having to write/read it
> out of the encoded va/page.
Probably. There is always a trade-off to make between avoiding code
duplication and avoiding a read of the order; I am not sure it matters
much in either case, so I would rather keep the above pattern unless
there is an obvious benefit to the other one.
>
> With that we could simplify this function and get something closer to
> what we had for the original alloc_va_align code.
>
>> + return true;
>> }
>>
>> void page_frag_cache_drain(struct page_frag_cache *nc)
>> @@ -55,7 +100,7 @@ void page_frag_cache_drain(struct page_frag_cache *nc)
>>
>> __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va),
>> nc->pagecnt_bias);
>> - nc->encoded_va = 0;
>> + memset(nc, 0, sizeof(*nc));
>> }
>> EXPORT_SYMBOL(page_frag_cache_drain);
>>
>> @@ -73,67 +118,44 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
>> unsigned int align_mask)
>> {
>> unsigned long encoded_va = nc->encoded_va;
>> - unsigned int size, remaining;
>> - struct page *page;
>> -
>> - if (unlikely(!encoded_va)) {
>
> We should still be checking this before we even touch remaining.
> Otherwise we greatly increase the risk of providing a bad virtual
> address and have greatly decreased the likelihood of us catching
> potential errors gracefully.
>
>> -refill:
>> - page = __page_frag_cache_refill(nc, gfp_mask);
>> - if (!page)
>> - return NULL;
>> -
>> - encoded_va = nc->encoded_va;
>> - size = page_frag_cache_page_size(encoded_va);
>> -
>> - /* Even if we own the page, we do not use atomic_set().
>> - * This would break get_page_unless_zero() users.
>> - */
>> - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
>> -
>> - /* reset page count bias and remaining to start of new frag */
>> - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>> - nc->remaining = size;
>
> With my suggested change above you could essentially just drop the
> block starting from the comment and this function wouldn't need to
> change as much as it is.
It seems you are still suggesting that the new API also duplicate the
old checking pattern in __page_frag_alloc_va_align()?
I would rather avoid that if something like VM_BUG_ON() can address
your concern.
* Re: [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API
2024-08-15 3:04 ` Yunsheng Lin
@ 2024-08-15 15:09 ` Alexander Duyck
2024-08-16 11:58 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-15 15:09 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On Wed, Aug 14, 2024 at 8:04 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/15 1:54, Alexander H Duyck wrote:
> > On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> >> Refactor common codes from __page_frag_alloc_va_align()
> >> to __page_frag_cache_reload(), so that the new API can
> >> make use of them.
> >>
> >> CC: Alexander Duyck <alexander.duyck@gmail.com>
> >> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> >> ---
> >> include/linux/page_frag_cache.h | 2 +-
> >> mm/page_frag_cache.c | 138 ++++++++++++++++++--------------
> >> 2 files changed, 81 insertions(+), 59 deletions(-)
> >>
> >> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> >> index 4ce924eaf1b1..0abffdd10a1c 100644
> >> --- a/include/linux/page_frag_cache.h
> >> +++ b/include/linux/page_frag_cache.h
> >> @@ -52,7 +52,7 @@ static inline void *encoded_page_address(unsigned long encoded_va)
> >>
> >> static inline void page_frag_cache_init(struct page_frag_cache *nc)
> >> {
> >> - nc->encoded_va = 0;
> >> + memset(nc, 0, sizeof(*nc));
> >> }
> >>
> >
> > Still not a fan of this. Just setting encoded_va to 0 should be enough
> > as the other fields will automatically be overwritten when the new page
> > is allocated.
> >
> > Relying on memset is problematic at best since you then introduce the
> > potential for issues where remaining somehow gets corrupted but
> > encoded_va/page is 0. I would rather have both of these being checked
> > as a part of allocation than just just assuming it is valid if
> > remaining is set.
>
> Does adding something like VM_BUG_ON(!nc->encoded_va && nc->remaining) to
> catch the above problem address your above concern?
Not really. I would prefer to just retain the existing behavior.
> >
> > I would prefer to keep the check for a non-0 encoded_page value and
> > then check remaining rather than just rely on remaining as it creates a
> > single point of failure. With that we can safely tear away a page and
> > the next caller to try to allocate will populated a new page and the
> > associated fields.
>
> As mentioned before, the memset() is used mainly because of:
> 1. avoid a checking in the fast path.
> 2. avoid duplicating the checking pattern you mentioned above for the
> new API.
I'm not a fan of the new code flow after getting rid of the checking
in the fast path. The code is becoming a tangled mess of spaghetti
code in my opinion. Arguably the patches don't help as you are taking
huge steps in many of these patches and it is making it hard to read.
In addition the code becomes more obfuscated with each patch which is
one of the reasons why I would have preferred to see this set broken
into a couple sets so we can give it some time for any of the kinks to
get worked out.
> >
> >> static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
> >> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> >> index 2544b292375a..4e6b1c4684f0 100644
> >> --- a/mm/page_frag_cache.c
> >> +++ b/mm/page_frag_cache.c
> >> @@ -19,8 +19,27 @@
> >> #include <linux/page_frag_cache.h>
> >> #include "internal.h"
> >>
>
> ...
>
> >> +
> >> +/* Reload cache by reusing the old cache if it is possible, or
> >> + * refilling from the page allocator.
> >> + */
> >> +static bool __page_frag_cache_reload(struct page_frag_cache *nc,
> >> + gfp_t gfp_mask)
> >> +{
> >> + if (likely(nc->encoded_va)) {
> >> + if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
> >> + goto out;
> >> + }
> >> +
> >> + if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
> >> + return false;
> >> +
> >> +out:
> >> + /* reset page count bias and remaining to start of new frag */
> >> + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> >> + nc->remaining = page_frag_cache_page_size(nc->encoded_va);
> >
> > One thought I am having is that it might be better to have the
> > pagecnt_bias get set at the same time as the page_ref_add or the
> > set_page_count call. In addition setting the remaining value at the
> > same time probably would make sense as in the refill case you can make
> > use of the "order" value directly instead of having to write/read it
> > out of the encoded va/page.
>
> Probably, there is always tradeoff to make regarding avoid code
> duplication and avoid reading the order, I am not sure it matters
> for both for case, I would rather keep the above pattern if there
> is not obvious benefit for the other pattern.
Part of it is more about keeping each function limited to generating a
self-contained object. I am not a fan of splitting the page
init into a few sections, as it makes it much easier to mess up a page
by changing one spot and overlooking the fact that an additional change
is needed somewhere else.
> >
> > With that we could simplify this function and get something closer to
> > what we had for the original alloc_va_align code.
> >
> >> + return true;
> >> }
> >>
> >> void page_frag_cache_drain(struct page_frag_cache *nc)
> >> @@ -55,7 +100,7 @@ void page_frag_cache_drain(struct page_frag_cache *nc)
> >>
> >> __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va),
> >> nc->pagecnt_bias);
> >> - nc->encoded_va = 0;
> >> + memset(nc, 0, sizeof(*nc));
> >> }
> >> EXPORT_SYMBOL(page_frag_cache_drain);
> >>
> >> @@ -73,67 +118,44 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
> >> unsigned int align_mask)
> >> {
> >> unsigned long encoded_va = nc->encoded_va;
> >> - unsigned int size, remaining;
> >> - struct page *page;
> >> -
> >> - if (unlikely(!encoded_va)) {
> >
> > We should still be checking this before we even touch remaining.
> > Otherwise we greatly increase the risk of providing a bad virtual
> > address and have greatly decreased the likelihood of us catching
> > potential errors gracefully.
> >
> >> -refill:
> >> - page = __page_frag_cache_refill(nc, gfp_mask);
> >> - if (!page)
> >> - return NULL;
> >> -
> >> - encoded_va = nc->encoded_va;
> >> - size = page_frag_cache_page_size(encoded_va);
> >> -
> >> - /* Even if we own the page, we do not use atomic_set().
> >> - * This would break get_page_unless_zero() users.
> >> - */
> >> - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
> >> -
> >> - /* reset page count bias and remaining to start of new frag */
> >> - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> >> - nc->remaining = size;
> >
> > With my suggested change above you could essentially just drop the
> > block starting from the comment and this function wouldn't need to
> > change as much as it is.
>
> It seems you are still suggesting that new API also duplicates the old
> checking pattern in __page_frag_alloc_va_align()?
>
> I would rather avoid the above if something like VM_BUG_ON() can address
> your above concern.
Yes, that is what I am suggesting. It makes the code much less prone
to possible races, as resetting encoded_va ensures that reads of all
the other fields are skipped, versus having to use a memset whose
implementation differs depending on the architecture.
* Re: [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API
2024-08-15 15:09 ` Alexander Duyck
@ 2024-08-16 11:58 ` Yunsheng Lin
0 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-16 11:58 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On 2024/8/15 23:09, Alexander Duyck wrote:
> On Wed, Aug 14, 2024 at 8:04 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/8/15 1:54, Alexander H Duyck wrote:
>>> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>>>> Refactor common codes from __page_frag_alloc_va_align()
>>>> to __page_frag_cache_reload(), so that the new API can
>>>> make use of them.
>>>>
>>>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>>>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>>>> ---
>>>> include/linux/page_frag_cache.h | 2 +-
>>>> mm/page_frag_cache.c | 138 ++++++++++++++++++--------------
>>>> 2 files changed, 81 insertions(+), 59 deletions(-)
>>>>
>>>> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
>>>> index 4ce924eaf1b1..0abffdd10a1c 100644
>>>> --- a/include/linux/page_frag_cache.h
>>>> +++ b/include/linux/page_frag_cache.h
>>>> @@ -52,7 +52,7 @@ static inline void *encoded_page_address(unsigned long encoded_va)
>>>>
>>>> static inline void page_frag_cache_init(struct page_frag_cache *nc)
>>>> {
>>>> - nc->encoded_va = 0;
>>>> + memset(nc, 0, sizeof(*nc));
>>>> }
>>>>
>>>
>>> Still not a fan of this. Just setting encoded_va to 0 should be enough
>>> as the other fields will automatically be overwritten when the new page
>>> is allocated.
>>>
>>> Relying on memset is problematic at best since you then introduce the
>>> potential for issues where remaining somehow gets corrupted but
>>> encoded_va/page is 0. I would rather have both of these being checked
>>> as a part of allocation than just just assuming it is valid if
>>> remaining is set.
>>
>> Does adding something like VM_BUG_ON(!nc->encoded_va && nc->remaining) to
>> catch the above problem address your above concern?
>
> Not really. I would prefer to just retain the existing behavior.
As I understand it, this is an implementation detail that the API caller does
not need to care about as long as the API is used correctly. If not, we have a
bigger problem than the above.
If there is an error in that implementation, it would be good to point it
out. And there is already a comment explaining that implementation detail
in this patch; doesn't adding an explicit VM_BUG_ON() make it more obvious?
>
>>>
>>> I would prefer to keep the check for a non-0 encoded_page value and
>>> then check remaining rather than just rely on remaining as it creates a
>>> single point of failure. With that we can safely tear away a page and
>>> the next caller to try to allocate will populated a new page and the
>>> associated fields.
>>
>> As mentioned before, the memset() is used mainly because of:
>> 1. avoid a checking in the fast path.
>> 2. avoid duplicating the checking pattern you mentioned above for the
>> new API.
>
> I'm not a fan of the new code flow after getting rid of the checking
> in the fast path. The code is becoming a tangled mess of spaghetti
I am not sure if you get the point that getting rid of the nc->encoded_va
check in the fast path is what makes it possible to refactor the common
code into __page_frag_cache_reload(), so that both the old API and the new
APIs can reuse that common code.
> code in my opinion. Arguably the patches don't help as you are taking
> huge steps in many of these patches and it is making it hard to read.
> In addition the code becomes more obfuscated with each patch which is
> one of the reasons why I would have preferred to see this set broken
> into a couple sets so we can give it some time for any of the kinks to
> get worked out.
If no new APIs were being added, I would agree with your above argument.
With the new API for the new use case, the refactoring in this patch makes
the code more reusable and maintainable, which is why I would prefer not to
break this patchset into more patchsets: it is already hard enough to argue
the reason behind the refactoring.
>
>>>
>>>> static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
>>>> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
>>>> index 2544b292375a..4e6b1c4684f0 100644
>>>> --- a/mm/page_frag_cache.c
>>>> +++ b/mm/page_frag_cache.c
>>>> @@ -19,8 +19,27 @@
>>>> #include <linux/page_frag_cache.h>
>>>> #include "internal.h"
>>>>
>>
>> ...
>>
>>>> +
>>>> +/* Reload cache by reusing the old cache if it is possible, or
>>>> + * refilling from the page allocator.
>>>> + */
>>>> +static bool __page_frag_cache_reload(struct page_frag_cache *nc,
>>>> + gfp_t gfp_mask)
>>>> +{
>>>> + if (likely(nc->encoded_va)) {
>>>> + if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
>>>> + goto out;
>>>> + }
>>>> +
>>>> + if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
>>>> + return false;
>>>> +
>>>> +out:
>>>> + /* reset page count bias and remaining to start of new frag */
>>>> + nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>>>> + nc->remaining = page_frag_cache_page_size(nc->encoded_va);
>>>
>>> One thought I am having is that it might be better to have the
>>> pagecnt_bias get set at the same time as the page_ref_add or the
>>> set_page_count call. In addition setting the remaining value at the
>>> same time probably would make sense as in the refill case you can make
>>> use of the "order" value directly instead of having to write/read it
>>> out of the encoded va/page.
>>
>> Probably, there is always tradeoff to make regarding avoid code
>> duplication and avoid reading the order, I am not sure it matters
>> for both for case, I would rather keep the above pattern if there
>> is not obvious benefit for the other pattern.
>
> Part of it is more about keeping the functions contained to generating
> self contained objects. I am not a fan of us splitting up the page
> init into a few sections as it makes it much easier to mess up a page
> by changing one spot and overlooking the fact that an additional page
> is needed somewhere else.
To be honest, I am not so concerned about where pagecnt_bias and remaining
are set; I am concerned about whether __page_frag_cache_reload() is needed.
Let's be more specific about your suggestion here: are you suggesting
removing __page_frag_cache_reload()?
If yes, are you really expecting both the old and new APIs to duplicate the
checking pattern below? Why?
	if (likely(nc->encoded_va)) {
		if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
			...
	}

	if (unlikely(remaining < fragsz)) {
		page = __page_frag_cache_refill(nc, gfp_mask);
		....
	}
If no, doesn't it make sense to call __page_frag_cache_reload() for both
the old and new APIs?
>
>>>
>>> With that we could simplify this function and get something closer to
>>> what we had for the original alloc_va_align code.
>>>
>>>> + return true;
>>>> }
>>>>
>>>> void page_frag_cache_drain(struct page_frag_cache *nc)
>>>> @@ -55,7 +100,7 @@ void page_frag_cache_drain(struct page_frag_cache *nc)
>>>>
>>>> __page_frag_cache_drain(virt_to_head_page((void *)nc->encoded_va),
>>>> nc->pagecnt_bias);
>>>> - nc->encoded_va = 0;
>>>> + memset(nc, 0, sizeof(*nc));
>>>> }
>>>> EXPORT_SYMBOL(page_frag_cache_drain);
>>>>
>>>> @@ -73,67 +118,44 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
>>>> unsigned int align_mask)
>>>> {
>>>> unsigned long encoded_va = nc->encoded_va;
>>>> - unsigned int size, remaining;
>>>> - struct page *page;
>>>> -
>>>> - if (unlikely(!encoded_va)) {
>>>
>>> We should still be checking this before we even touch remaining.
>>> Otherwise we greatly increase the risk of providing a bad virtual
>>> address and have greatly decreased the likelihood of us catching
>>> potential errors gracefully.
I would argue that duplicating the above checking for both the old and new
API makes the code less maintainable; for example, when fixing a bug in, or
making a change to, the duplicated checking, it is more likely that some of
the copies will be missed.
>>>
>>>> -refill:
>>>> - page = __page_frag_cache_refill(nc, gfp_mask);
>>>> - if (!page)
>>>> - return NULL;
>>>> -
>>>> - encoded_va = nc->encoded_va;
>>>> - size = page_frag_cache_page_size(encoded_va);
>>>> -
>>>> - /* Even if we own the page, we do not use atomic_set().
>>>> - * This would break get_page_unless_zero() users.
>>>> - */
>>>> - page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
>>>> -
>>>> - /* reset page count bias and remaining to start of new frag */
>>>> - nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>>>> - nc->remaining = size;
>>>
>>> With my suggested change above you could essentially just drop the
>>> block starting from the comment and this function wouldn't need to
>>> change as much as it is.
>>
>> It seems you are still suggesting that new API also duplicates the old
>> checking pattern in __page_frag_alloc_va_align()?
>>
>> I would rather avoid the above if something like VM_BUG_ON() can address
>> your above concern.
>
> Yes, that is what I am suggesting. It makes the code much less prone
> to any sort of possible races as resetting encoded_va would make it so
> that reads for all the other fields would be skipped versus having to
> use a memset which differs in implementation depending on the
> architecture.
It would be good to be more specific about what the race here is; if it does
exist, we can fix it.
And it would be good to have a more specific suggestion and argument too;
otherwise it is just you arguing for preserving the old behavior to make
the code less prone to any sort of possible races, and me arguing for
making the old code more reusable and maintainable for the new API.
^ permalink raw reply [flat|nested] 47+ messages in thread
* [PATCH net-next v13 09/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (7 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 08/14] mm: page_frag: some minor refactoring before adding new API Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 10/14] net: rename skb_copy_to_page_nocache() helper Yunsheng Lin
` (5 subsequent siblings)
14 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
There are more new APIs calling __page_frag_cache_refill() in
this patchset, which may prevent the compiler from inlining
__page_frag_cache_refill() into __page_frag_alloc_va_align().
Not being able to do the inlining seems to cause some noticeable
performance degradation on an arm64 system with 64K PAGE_SIZE after
adding the new APIs calling __page_frag_cache_refill().
There is about a 24-byte binary size increase for
__page_frag_cache_refill() on an arm64 system with 64K PAGE_SIZE.
From gdb disassembling, it seems we can decrease the binary size by
more than 100 bytes by using __alloc_pages() to replace
alloc_pages_node(), as there is some unnecessary checking for nid
being NUMA_NO_NODE, especially while page_frag is still part of the
mm system.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
mm/page_frag_cache.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4e6b1c4684f0..27596b84b452 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -48,11 +48,11 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
__GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
- page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
- PAGE_FRAG_CACHE_MAX_ORDER);
+ page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+ numa_mem_id(), NULL);
#endif
if (unlikely(!page)) {
- page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+ page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
if (unlikely(!page)) {
memset(nc, 0, sizeof(*nc));
return false;
--
2.33.0
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH net-next v13 10/14] net: rename skb_copy_to_page_nocache() helper
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (8 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 09/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API Yunsheng Lin
` (4 subsequent siblings)
14 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck, Eric Dumazet,
David Ahern
Rename skb_copy_to_page_nocache() to skb_copy_to_va_nocache()
to avoid calling virt_to_page(), as we are about to pass the
virtual address directly.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/net/sock.h | 9 +++------
net/ipv4/tcp.c | 7 +++----
net/kcm/kcmsock.c | 7 +++----
3 files changed, 9 insertions(+), 14 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index cce23ac4d514..b5e702298ab7 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -2183,15 +2183,12 @@ static inline int skb_add_data_nocache(struct sock *sk, struct sk_buff *skb,
return err;
}
-static inline int skb_copy_to_page_nocache(struct sock *sk, struct iov_iter *from,
- struct sk_buff *skb,
- struct page *page,
- int off, int copy)
+static inline int skb_copy_to_va_nocache(struct sock *sk, struct iov_iter *from,
+ struct sk_buff *skb, char *va, int copy)
{
int err;
- err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) + off,
- copy, skb->len);
+ err = skb_do_copy_data_nocache(sk, skb, from, va, copy, skb->len);
if (err)
return err;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index e03a342c9162..7c392710ae15 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1215,10 +1215,9 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
if (!copy)
goto wait_for_space;
- err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
- pfrag->page,
- pfrag->offset,
- copy);
+ err = skb_copy_to_va_nocache(sk, &msg->msg_iter, skb,
+ page_address(pfrag->page) +
+ pfrag->offset, copy);
if (err)
goto do_error;
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 2f191e50d4fc..eec6c56b7f3e 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -855,10 +855,9 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
if (!sk_wmem_schedule(sk, copy))
goto wait_for_memory;
- err = skb_copy_to_page_nocache(sk, &msg->msg_iter, skb,
- pfrag->page,
- pfrag->offset,
- copy);
+ err = skb_copy_to_va_nocache(sk, &msg->msg_iter, skb,
+ page_address(pfrag->page) +
+ pfrag->offset, copy);
if (err)
goto out_error;
--
2.33.0
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (9 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 10/14] net: rename skb_copy_to_page_nocache() helper Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-14 21:00 ` Alexander H Duyck
2024-08-08 12:37 ` [PATCH net-next v13 12/14] net: replace page_frag with page_frag_cache Yunsheng Lin
` (3 subsequent siblings)
14 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
There are many use cases that need a minimum amount of memory in
order to make forward progress, but that perform better if more
memory is available, or that need to probe the cache info in order
to use any available memory for fragment coalescing.
Currently the skb_page_frag_refill() API is used to handle the
above use cases, but the caller needs to know about the internal
details and access the data fields of 'struct page_frag' to meet
those requirements, and its implementation is similar to the one
in the mm subsystem.
To unify those two page_frag implementations, introduce a
prepare API to ensure the minimum memory is satisfied and return
how much memory is actually available to the caller, and a
probe API to report the currently available memory to the caller
without doing a cache refill. The caller then needs to either call
the commit API to report how much memory it actually used, or do
nothing if it decides not to use any memory.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/page_frag_cache.h | 75 ++++++++++++++++
mm/page_frag_cache.c | 152 ++++++++++++++++++++++++++++----
2 files changed, 212 insertions(+), 15 deletions(-)
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0abffdd10a1c..ba5d7f8a03cd 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -7,6 +7,8 @@
#include <linux/build_bug.h>
#include <linux/log2.h>
#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/mmdebug.h>
#include <linux/mm_types_task.h>
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
@@ -67,6 +69,9 @@ static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va)
void page_frag_cache_drain(struct page_frag_cache *nc);
void __page_frag_cache_drain(struct page *page, unsigned int count);
+struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
+ unsigned int *offset, unsigned int fragsz,
+ gfp_t gfp);
void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask);
@@ -79,12 +84,82 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
}
+static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
+{
+ return page_frag_cache_page_size(nc->encoded_va) - nc->remaining;
+}
+
static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask)
{
return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u);
}
+void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz,
+ gfp_t gfp);
+
+static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
+ unsigned int *fragsz,
+ gfp_t gfp,
+ unsigned int align)
+{
+ WARN_ON_ONCE(!is_power_of_2(align) || align > PAGE_SIZE);
+ nc->remaining = nc->remaining & -align;
+ return page_frag_alloc_va_prepare(nc, fragsz, gfp);
+}
+
+struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
+ unsigned int *offset,
+ unsigned int *fragsz, gfp_t gfp);
+
+struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
+ unsigned int *offset,
+ unsigned int *fragsz,
+ void **va, gfp_t gfp);
+
+static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
+ unsigned int *offset,
+ unsigned int *fragsz,
+ void **va)
+{
+ unsigned long encoded_va = nc->encoded_va;
+ struct page *page;
+
+ VM_BUG_ON(!*fragsz);
+ if (unlikely(nc->remaining < *fragsz))
+ return NULL;
+
+ *va = encoded_page_address(encoded_va);
+ page = virt_to_page(*va);
+ *fragsz = nc->remaining;
+ *offset = page_frag_cache_page_size(encoded_va) - *fragsz;
+ *va += *offset;
+
+ return page;
+}
+
+static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
+ unsigned int fragsz)
+{
+ VM_BUG_ON(fragsz > nc->remaining || !nc->pagecnt_bias);
+ nc->pagecnt_bias--;
+ nc->remaining -= fragsz;
+}
+
+static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
+ unsigned int fragsz)
+{
+ VM_BUG_ON(fragsz > nc->remaining);
+ nc->remaining -= fragsz;
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
+ unsigned int fragsz)
+{
+ nc->pagecnt_bias++;
+ nc->remaining += fragsz;
+}
+
void page_frag_free_va(void *addr);
#endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 27596b84b452..f8fad7d2cca8 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -19,27 +19,27 @@
#include <linux/page_frag_cache.h>
#include "internal.h"
-static bool __page_frag_cache_reuse(unsigned long encoded_va,
- unsigned int pagecnt_bias)
+static struct page *__page_frag_cache_reuse(unsigned long encoded_va,
+ unsigned int pagecnt_bias)
{
struct page *page;
page = virt_to_page((void *)encoded_va);
if (!page_ref_sub_and_test(page, pagecnt_bias))
- return false;
+ return NULL;
if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
free_unref_page(page, encoded_page_order(encoded_va));
- return false;
+ return NULL;
}
/* OK, page count is 0, we can safely set it */
set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
- return true;
+ return page;
}
-static bool __page_frag_cache_refill(struct page_frag_cache *nc,
- gfp_t gfp_mask)
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
{
unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
struct page *page = NULL;
@@ -55,7 +55,7 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
if (unlikely(!page)) {
memset(nc, 0, sizeof(*nc));
- return false;
+ return NULL;
}
order = 0;
@@ -69,29 +69,151 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
*/
page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
- return true;
+ return page;
}
/* Reload cache by reusing the old cache if it is possible, or
* refilling from the page allocator.
*/
-static bool __page_frag_cache_reload(struct page_frag_cache *nc,
- gfp_t gfp_mask)
+static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
{
+ struct page *page;
+
if (likely(nc->encoded_va)) {
- if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
+ page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
+ if (page)
goto out;
}
- if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
- return false;
+ page = __page_frag_cache_refill(nc, gfp_mask);
+ if (unlikely(!page))
+ return NULL;
out:
/* reset page count bias and remaining to start of new frag */
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
nc->remaining = page_frag_cache_page_size(nc->encoded_va);
- return true;
+ return page;
+}
+
+void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
+ unsigned int *fragsz, gfp_t gfp)
+{
+ unsigned int remaining = nc->remaining;
+
+ VM_BUG_ON(!*fragsz);
+ if (likely(remaining >= *fragsz)) {
+ unsigned long encoded_va = nc->encoded_va;
+
+ *fragsz = remaining;
+
+ return encoded_page_address(encoded_va) +
+ (page_frag_cache_page_size(encoded_va) - remaining);
+ }
+
+ if (unlikely(*fragsz > PAGE_SIZE))
+ return NULL;
+
+ /* When reload fails, nc->encoded_va and nc->remaining are both reset
+ * to zero, so there is no need to check the return value here.
+ */
+ __page_frag_cache_reload(nc, gfp);
+
+ *fragsz = nc->remaining;
+ return encoded_page_address(nc->encoded_va);
+}
+EXPORT_SYMBOL(page_frag_alloc_va_prepare);
+
+struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
+ unsigned int *offset,
+ unsigned int *fragsz, gfp_t gfp)
+{
+ unsigned int remaining = nc->remaining;
+ struct page *page;
+
+ VM_BUG_ON(!*fragsz);
+ if (likely(remaining >= *fragsz)) {
+ unsigned long encoded_va = nc->encoded_va;
+
+ *offset = page_frag_cache_page_size(encoded_va) - remaining;
+ *fragsz = remaining;
+
+ return virt_to_page((void *)encoded_va);
+ }
+
+ if (unlikely(*fragsz > PAGE_SIZE))
+ return NULL;
+
+ page = __page_frag_cache_reload(nc, gfp);
+ *offset = 0;
+ *fragsz = nc->remaining;
+ return page;
+}
+EXPORT_SYMBOL(page_frag_alloc_pg_prepare);
+
+struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
+ unsigned int *offset,
+ unsigned int *fragsz,
+ void **va, gfp_t gfp)
+{
+ unsigned int remaining = nc->remaining;
+ struct page *page;
+
+ VM_BUG_ON(!*fragsz);
+ if (likely(remaining >= *fragsz)) {
+ unsigned long encoded_va = nc->encoded_va;
+
+ *offset = page_frag_cache_page_size(encoded_va) - remaining;
+ *va = encoded_page_address(encoded_va) + *offset;
+ *fragsz = remaining;
+
+ return virt_to_page((void *)encoded_va);
+ }
+
+ if (unlikely(*fragsz > PAGE_SIZE))
+ return NULL;
+
+ page = __page_frag_cache_reload(nc, gfp);
+ *offset = 0;
+ *fragsz = nc->remaining;
+ *va = encoded_page_address(nc->encoded_va);
+
+ return page;
+}
+EXPORT_SYMBOL(page_frag_alloc_prepare);
+
+struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
+ unsigned int *offset, unsigned int fragsz,
+ gfp_t gfp)
+{
+ unsigned int remaining = nc->remaining;
+ struct page *page;
+
+ VM_BUG_ON(!fragsz);
+ if (likely(remaining >= fragsz)) {
+ unsigned long encoded_va = nc->encoded_va;
+
+ *offset = page_frag_cache_page_size(encoded_va) -
+ remaining;
+
+ return virt_to_page((void *)encoded_va);
+ }
+
+ if (unlikely(fragsz > PAGE_SIZE))
+ return NULL;
+
+ page = __page_frag_cache_reload(nc, gfp);
+ if (unlikely(!page))
+ return NULL;
+
+ *offset = 0;
+ nc->remaining = remaining - fragsz;
+ nc->pagecnt_bias--;
+
+ return page;
}
+EXPORT_SYMBOL(page_frag_alloc_pg);
void page_frag_cache_drain(struct page_frag_cache *nc)
{
--
2.33.0
^ permalink raw reply related [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API
2024-08-08 12:37 ` [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API Yunsheng Lin
@ 2024-08-14 21:00 ` Alexander H Duyck
2024-08-15 3:05 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander H Duyck @ 2024-08-14 21:00 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, Andrew Morton, linux-mm
On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> There are many use cases that need minimum memory in order
> for forward progress, but more performant if more memory is
> available or need to probe the cache info to use any memory
> available for frag coalescing reason.
>
> Currently skb_page_frag_refill() API is used to solve the
> above use cases, but caller needs to know about the internal
> detail and access the data field of 'struct page_frag' to
> meet the requirement of the above use cases and its
> implementation is similar to the one in mm subsystem.
>
> To unify those two page_frag implementations, introduce a
> prepare API to ensure minimum memory is satisfied and return
> how much the actual memory is available to the caller and a
> probe API to report the current available memory to caller
> without doing cache refilling. The caller needs to either call
> the commit API to report how much memory it actually uses, or
> not do so if deciding to not use any memory.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> include/linux/page_frag_cache.h | 75 ++++++++++++++++
> mm/page_frag_cache.c | 152 ++++++++++++++++++++++++++++----
> 2 files changed, 212 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> index 0abffdd10a1c..ba5d7f8a03cd 100644
> --- a/include/linux/page_frag_cache.h
> +++ b/include/linux/page_frag_cache.h
> @@ -7,6 +7,8 @@
> #include <linux/build_bug.h>
> #include <linux/log2.h>
> #include <linux/types.h>
> +#include <linux/mm.h>
> +#include <linux/mmdebug.h>
> #include <linux/mm_types_task.h>
>
> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> @@ -67,6 +69,9 @@ static inline unsigned int page_frag_cache_page_size(unsigned long encoded_va)
>
> void page_frag_cache_drain(struct page_frag_cache *nc);
> void __page_frag_cache_drain(struct page *page, unsigned int count);
> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
> + unsigned int *offset, unsigned int fragsz,
> + gfp_t gfp);
> void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
> unsigned int fragsz, gfp_t gfp_mask,
> unsigned int align_mask);
> @@ -79,12 +84,82 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
> return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
> }
>
> +static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
> +{
> + return page_frag_cache_page_size(nc->encoded_va) - nc->remaining;
> +}
> +
> static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
> unsigned int fragsz, gfp_t gfp_mask)
> {
> return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, ~0u);
> }
>
> +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz,
> + gfp_t gfp);
> +
> +static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
> + unsigned int *fragsz,
> + gfp_t gfp,
> + unsigned int align)
> +{
> + WARN_ON_ONCE(!is_power_of_2(align) || align > PAGE_SIZE);
> + nc->remaining = nc->remaining & -align;
> + return page_frag_alloc_va_prepare(nc, fragsz, gfp);
> +}
> +
> +struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
> + unsigned int *offset,
> + unsigned int *fragsz, gfp_t gfp);
> +
> +struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
> + unsigned int *offset,
> + unsigned int *fragsz,
> + void **va, gfp_t gfp);
> +
> +static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
> + unsigned int *offset,
> + unsigned int *fragsz,
> + void **va)
> +{
> + unsigned long encoded_va = nc->encoded_va;
> + struct page *page;
> +
> + VM_BUG_ON(!*fragsz);
> + if (unlikely(nc->remaining < *fragsz))
> + return NULL;
> +
> + *va = encoded_page_address(encoded_va);
> + page = virt_to_page(*va);
> + *fragsz = nc->remaining;
> + *offset = page_frag_cache_page_size(encoded_va) - *fragsz;
> + *va += *offset;
> +
> + return page;
> +}
> +
I still think this should be populating a bio_vec instead of passing
multiple arguments by pointer. With that you would be able to get all
the fields without as many arguments having to be passed.
> +static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
> + unsigned int fragsz)
> +{
> + VM_BUG_ON(fragsz > nc->remaining || !nc->pagecnt_bias);
> + nc->pagecnt_bias--;
> + nc->remaining -= fragsz;
> +}
> +
I would really like to see this accept a bio_vec as well. With that you
could verify the page and offset matches the expected value before
applying fragsz.
> +static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
> + unsigned int fragsz)
> +{
> + VM_BUG_ON(fragsz > nc->remaining);
> + nc->remaining -= fragsz;
> +}
> +
Same here.
> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
> + unsigned int fragsz)
> +{
> + nc->pagecnt_bias++;
> + nc->remaining += fragsz;
> +}
> +
This doesn't add up. Why would you need abort if you have commit? Isn't
this more of a revert? I wouldn't think that would be valid as it is
possible you took some sort of action that might have resulted in this
memory already being shared. We shouldn't allow rewinding the offset
pointer without knowing that there are no other entities sharing the
page.
> void page_frag_free_va(void *addr);
>
> #endif
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index 27596b84b452..f8fad7d2cca8 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -19,27 +19,27 @@
> #include <linux/page_frag_cache.h>
> #include "internal.h"
>
> -static bool __page_frag_cache_reuse(unsigned long encoded_va,
> - unsigned int pagecnt_bias)
> +static struct page *__page_frag_cache_reuse(unsigned long encoded_va,
> + unsigned int pagecnt_bias)
> {
> struct page *page;
>
> page = virt_to_page((void *)encoded_va);
> if (!page_ref_sub_and_test(page, pagecnt_bias))
> - return false;
> + return NULL;
>
> if (unlikely(encoded_page_pfmemalloc(encoded_va))) {
> free_unref_page(page, encoded_page_order(encoded_va));
> - return false;
> + return NULL;
> }
>
> /* OK, page count is 0, we can safely set it */
> set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
> - return true;
> + return page;
> }
>
> -static bool __page_frag_cache_refill(struct page_frag_cache *nc,
> - gfp_t gfp_mask)
> +static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
> + gfp_t gfp_mask)
> {
> unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
> struct page *page = NULL;
> @@ -55,7 +55,7 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
> page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
> if (unlikely(!page)) {
> memset(nc, 0, sizeof(*nc));
> - return false;
> + return NULL;
> }
>
> order = 0;
> @@ -69,29 +69,151 @@ static bool __page_frag_cache_refill(struct page_frag_cache *nc,
> */
> page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
>
> - return true;
> + return page;
> }
>
> /* Reload cache by reusing the old cache if it is possible, or
> * refilling from the page allocator.
> */
> -static bool __page_frag_cache_reload(struct page_frag_cache *nc,
> - gfp_t gfp_mask)
> +static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
> + gfp_t gfp_mask)
> {
> + struct page *page;
> +
> if (likely(nc->encoded_va)) {
> - if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
> + page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
> + if (page)
> goto out;
> }
>
> - if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
> - return false;
> + page = __page_frag_cache_refill(nc, gfp_mask);
> + if (unlikely(!page))
> + return NULL;
>
> out:
> /* reset page count bias and remaining to start of new frag */
> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> nc->remaining = page_frag_cache_page_size(nc->encoded_va);
> - return true;
> + return page;
> +}
> +
None of the functions above need to be returning a page.
> +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
> + unsigned int *fragsz, gfp_t gfp)
> +{
> + unsigned int remaining = nc->remaining;
> +
> + VM_BUG_ON(!*fragsz);
> + if (likely(remaining >= *fragsz)) {
> + unsigned long encoded_va = nc->encoded_va;
> +
> + *fragsz = remaining;
> +
> + return encoded_page_address(encoded_va) +
> + (page_frag_cache_page_size(encoded_va) - remaining);
> + }
> +
> + if (unlikely(*fragsz > PAGE_SIZE))
> + return NULL;
> +
> + /* When reload fails, nc->encoded_va and nc->remaining are both reset
> + * to zero, so there is no need to check the return value here.
> + */
> + __page_frag_cache_reload(nc, gfp);
> +
> + *fragsz = nc->remaining;
> + return encoded_page_address(nc->encoded_va);
> +}
> +EXPORT_SYMBOL(page_frag_alloc_va_prepare);
> +
> +struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
> + unsigned int *offset,
> + unsigned int *fragsz, gfp_t gfp)
> +{
> + unsigned int remaining = nc->remaining;
> + struct page *page;
> +
> + VM_BUG_ON(!*fragsz);
> + if (likely(remaining >= *fragsz)) {
> + unsigned long encoded_va = nc->encoded_va;
> +
> + *offset = page_frag_cache_page_size(encoded_va) - remaining;
> + *fragsz = remaining;
> +
> + return virt_to_page((void *)encoded_va);
> + }
> +
> + if (unlikely(*fragsz > PAGE_SIZE))
> + return NULL;
> +
> + page = __page_frag_cache_reload(nc, gfp);
> + *offset = 0;
> + *fragsz = nc->remaining;
> + return page;
> +}
> +EXPORT_SYMBOL(page_frag_alloc_pg_prepare);
> +
> +struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
> + unsigned int *offset,
> + unsigned int *fragsz,
> + void **va, gfp_t gfp)
> +{
> + unsigned int remaining = nc->remaining;
> + struct page *page;
> +
> + VM_BUG_ON(!*fragsz);
> + if (likely(remaining >= *fragsz)) {
> + unsigned long encoded_va = nc->encoded_va;
> +
> + *offset = page_frag_cache_page_size(encoded_va) - remaining;
> + *va = encoded_page_address(encoded_va) + *offset;
> + *fragsz = remaining;
> +
> + return virt_to_page((void *)encoded_va);
> + }
> +
> + if (unlikely(*fragsz > PAGE_SIZE))
> + return NULL;
> +
> + page = __page_frag_cache_reload(nc, gfp);
> + *offset = 0;
> + *fragsz = nc->remaining;
> + *va = encoded_page_address(nc->encoded_va);
> +
> + return page;
> +}
> +EXPORT_SYMBOL(page_frag_alloc_prepare);
> +
> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
> + unsigned int *offset, unsigned int fragsz,
> + gfp_t gfp)
> +{
> + unsigned int remaining = nc->remaining;
> + struct page *page;
> +
> + VM_BUG_ON(!fragsz);
> + if (likely(remaining >= fragsz)) {
> + unsigned long encoded_va = nc->encoded_va;
> +
> + *offset = page_frag_cache_page_size(encoded_va) -
> + remaining;
> +
> + return virt_to_page((void *)encoded_va);
> + }
> +
> + if (unlikely(fragsz > PAGE_SIZE))
> + return NULL;
> +
> + page = __page_frag_cache_reload(nc, gfp);
> + if (unlikely(!page))
> + return NULL;
> +
> + *offset = 0;
> + nc->remaining = remaining - fragsz;
> + nc->pagecnt_bias--;
> +
> + return page;
> }
> +EXPORT_SYMBOL(page_frag_alloc_pg);
Again, this isn't returning a page. It is essentially returning a
bio_vec without calling it as such. You might as well pass the bio_vec
pointer as an argument and just have it populate it directly.
It would be identical to the existing page_frag for all intents and
purposes. In addition, you could use that as an intermediate value
between the page_frag_cache and your prepare/commit call setup, as you
could limit the size/bv_len to being the only item that can be
adjusted, specifically reduced, between the prepare and commit calls.
* Re: [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API
2024-08-14 21:00 ` Alexander H Duyck
@ 2024-08-15 3:05 ` Yunsheng Lin
2024-08-15 15:25 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-15 3:05 UTC (permalink / raw)
To: Alexander H Duyck, davem, kuba, pabeni
Cc: netdev, linux-kernel, Andrew Morton, linux-mm
On 2024/8/15 5:00, Alexander H Duyck wrote:
...
>
>> +static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
>> + unsigned int *offset,
>> + unsigned int *fragsz,
>> + void **va)
>> +{
>> + unsigned long encoded_va = nc->encoded_va;
>> + struct page *page;
>> +
>> + VM_BUG_ON(!*fragsz);
>> + if (unlikely(nc->remaining < *fragsz))
>> + return NULL;
>> +
>> + *va = encoded_page_address(encoded_va);
>> + page = virt_to_page(*va);
>> + *fragsz = nc->remaining;
>> + *offset = page_frag_cache_page_size(encoded_va) - *fragsz;
>> + *va += *offset;
>> +
>> + return page;
>> +}
>> +
>
> I still think this should be populating a bio_vec instead of passing
> multiple arguments by pointer. With that you would be able to get all
> the fields without as many arguments having to be passed.
As I was already arguing in [1]:
If most of the page_frag API callers didn't access 'struct bio_vec'
directly and instead used something like the bvec_iter_* API to do the
accessing, then I would agree with the above argument.
But right now, most of the page_frag API callers access 'va' directly
to do the memcpy'ing, and access 'page & off & len' directly to do skb
frag filling, so I am not really sure what the point of the indirection
through 'struct bio_vec' would be here.
And adding 'struct bio_vec' for page_frag while accessing its fields
directly may go against the design choice of 'struct bio_vec', as there
seems to be no inline helper defined in bvec.h for accessing the fields
of 'struct bio_vec' directly.
1. https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
>
>> +static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
>> + unsigned int fragsz)
>> +{
>> + VM_BUG_ON(fragsz > nc->remaining || !nc->pagecnt_bias);
>> + nc->pagecnt_bias--;
>> + nc->remaining -= fragsz;
>> +}
>> +
>
>
>> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
>> + unsigned int fragsz)
>> +{
>> + nc->pagecnt_bias++;
>> + nc->remaining += fragsz;
>> +}
>> +
>
> This doesn't add up. Why would you need abort if you have commit? Isn't
> this more of a revert? I wouldn't think that would be valid as it is
> possible you took some sort of action that might have resulted in this
> memory already being shared. We shouldn't allow rewinding the offset
> pointer without knowing that there are no other entities sharing the
> page.
This is used for __tun_build_skb() in drivers/net/tun.c as below,
mainly to avoid a performance penalty in the XDP drop case:
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1598,21 +1598,19 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
}
static struct sk_buff *__tun_build_skb(struct tun_file *tfile,
- struct page_frag *alloc_frag, char *buf,
- int buflen, int len, int pad)
+ char *buf, int buflen, int len, int pad)
{
struct sk_buff *skb = build_skb(buf, buflen);
- if (!skb)
+ if (!skb) {
+ page_frag_free_va(buf);
return ERR_PTR(-ENOMEM);
+ }
skb_reserve(skb, pad);
skb_put(skb, len);
skb_set_owner_w(skb, tfile->socket.sk);
- get_page(alloc_frag->page);
- alloc_frag->offset += buflen;
-
return skb;
}
@@ -1660,7 +1658,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
struct virtio_net_hdr *hdr,
int len, int *skb_xdp)
{
- struct page_frag *alloc_frag = &current->task_frag;
+ struct page_frag_cache *alloc_frag = &current->task_frag;
struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
struct bpf_prog *xdp_prog;
int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
@@ -1676,16 +1674,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
buflen += SKB_DATA_ALIGN(len + pad);
rcu_read_unlock();
- alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES);
- if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL)))
+ buf = page_frag_alloc_va_align(alloc_frag, buflen, GFP_KERNEL,
+ SMP_CACHE_BYTES);
+ if (unlikely(!buf))
return ERR_PTR(-ENOMEM);
- buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
- copied = copy_page_from_iter(alloc_frag->page,
- alloc_frag->offset + pad,
- len, from);
- if (copied != len)
+ copied = copy_from_iter(buf + pad, len, from);
+ if (copied != len) {
+ page_frag_alloc_abort(alloc_frag, buflen);
return ERR_PTR(-EFAULT);
+ }
/* There's a small window that XDP may be set after the check
* of xdp_prog above, this should be rare and for simplicity
@@ -1693,8 +1691,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
*/
if (hdr->gso_type || !xdp_prog) {
*skb_xdp = 1;
- return __tun_build_skb(tfile, alloc_frag, buf, buflen, len,
- pad);
+ return __tun_build_skb(tfile, buf, buflen, len, pad);
}
*skb_xdp = 0;
@@ -1711,21 +1708,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
xdp_prepare_buff(&xdp, buf, pad, len, false);
act = bpf_prog_run_xdp(xdp_prog, &xdp);
- if (act == XDP_REDIRECT || act == XDP_TX) {
- get_page(alloc_frag->page);
- alloc_frag->offset += buflen;
- }
err = tun_xdp_act(tun, xdp_prog, &xdp, act);
- if (err < 0) {
- if (act == XDP_REDIRECT || act == XDP_TX)
- put_page(alloc_frag->page);
- goto out;
- }
-
if (err == XDP_REDIRECT)
xdp_do_flush();
- if (err != XDP_PASS)
+
+ if (err == XDP_REDIRECT || err == XDP_TX) {
+ goto out;
+ } else if (err < 0 || err != XDP_PASS) {
+ page_frag_alloc_abort(alloc_frag, buflen);
goto out;
+ }
pad = xdp.data - xdp.data_hard_start;
len = xdp.data_end - xdp.data;
@@ -1734,7 +1726,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
rcu_read_unlock();
local_bh_enable();
- return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, pad);
+ return __tun_build_skb(tfile, buf, buflen, len, pad);
out:
bpf_net_ctx_clear(bpf_net_ctx);
>
>> void page_frag_free_va(void *addr);
>>
>> #endif
>> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
...
>> +static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
>> + gfp_t gfp_mask)
>> {
>> + struct page *page;
>> +
>> if (likely(nc->encoded_va)) {
>> - if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
>> + page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
>> + if (page)
>> goto out;
>> }
>>
>> - if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
>> - return false;
>> + page = __page_frag_cache_refill(nc, gfp_mask);
>> + if (unlikely(!page))
>> + return NULL;
>>
>> out:
>> /* reset page count bias and remaining to start of new frag */
>> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>> nc->remaining = page_frag_cache_page_size(nc->encoded_va);
>> - return true;
>> + return page;
>> +}
>> +
>
> None of the functions above need to be returning a page.
Are you still suggesting to always use virt_to_page() even when it is
not really necessary? Why not return the page here to avoid the
virt_to_page()?
>
>> +void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
>> + unsigned int *fragsz, gfp_t gfp)
>> +{
>> + unsigned int remaining = nc->remaining;
>> +
>> + VM_BUG_ON(!*fragsz);
>> + if (likely(remaining >= *fragsz)) {
>> + unsigned long encoded_va = nc->encoded_va;
>> +
>> + *fragsz = remaining;
>> +
>> + return encoded_page_address(encoded_va) +
>> + (page_frag_cache_page_size(encoded_va) - remaining);
>> + }
>> +
>> + if (unlikely(*fragsz > PAGE_SIZE))
>> + return NULL;
>> +
>> + /* When reload fails, nc->encoded_va and nc->remaining are both reset
>> + * to zero, so there is no need to check the return value here.
>> + */
>> + __page_frag_cache_reload(nc, gfp);
>> +
>> + *fragsz = nc->remaining;
>> + return encoded_page_address(nc->encoded_va);
>> +}
>> +EXPORT_SYMBOL(page_frag_alloc_va_prepare);
...
>> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
>> + unsigned int *offset, unsigned int fragsz,
>> + gfp_t gfp)
>> +{
>> + unsigned int remaining = nc->remaining;
>> + struct page *page;
>> +
>> + VM_BUG_ON(!fragsz);
>> + if (likely(remaining >= fragsz)) {
>> + unsigned long encoded_va = nc->encoded_va;
>> +
>> + *offset = page_frag_cache_page_size(encoded_va) -
>> + remaining;
>> +
>> + return virt_to_page((void *)encoded_va);
>> + }
>> +
>> + if (unlikely(fragsz > PAGE_SIZE))
>> + return NULL;
>> +
>> + page = __page_frag_cache_reload(nc, gfp);
>> + if (unlikely(!page))
>> + return NULL;
>> +
>> + *offset = 0;
>> + nc->remaining = remaining - fragsz;
>> + nc->pagecnt_bias--;
>> +
>> + return page;
>> }
>> +EXPORT_SYMBOL(page_frag_alloc_pg);
>
> Again, this isn't returning a page. It is essentially returning a
> bio_vec without calling it as such. You might as well pass the bio_vec
> pointer as an argument and just have it populate it directly.
I really don't think your bio_vec suggestion makes much sense for now,
for the reasons mentioned below:
"Through a quick look, there seems to be at least three structs which have
similar values: struct bio_vec & struct skb_frag & struct page_frag.
As per your above argument about using bio_vec, it seems it is ok to use any
one of them as each one of them seems to have almost all the values we
are using?
Personally, my preference over them: 'struct page_frag' > 'struct skb_frag'
> 'struct bio_vec', as the naming of 'struct page_frag' seems to best match
the page_frag API, 'struct skb_frag' is the second preference because we
mostly need to fill skb frag anyway, and 'struct bio_vec' is the last
preference because it just happen to have almost all the values needed.
Is there any specific reason other than the above "almost all the values you
are using are exposed by that structure already" that you prefer bio_vec?"
1. https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
>
* Re: [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API
2024-08-15 3:05 ` Yunsheng Lin
@ 2024-08-15 15:25 ` Alexander Duyck
2024-08-16 12:01 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-15 15:25 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On Wed, Aug 14, 2024 at 8:05 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/15 5:00, Alexander H Duyck wrote:
...
> >> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
> >> + unsigned int fragsz)
> >> +{
> >> + nc->pagecnt_bias++;
> >> + nc->remaining += fragsz;
> >> +}
> >> +
> >
> > This doesn't add up. Why would you need abort if you have commit? Isn't
> > this more of a revert? I wouldn't think that would be valid as it is
> > possible you took some sort of action that might have resulted in this
> > memory already being shared. We shouldn't allow rewinding the offset
> > pointer without knowing that there are no other entities sharing the
> > page.
>
> This is used for __tun_build_skb() in drivers/net/tun.c as below, mainly
> used to avoid performance penalty for XDP drop case:
Yeah, I reviewed that patch. As I said there, rewinding the offset
should be avoided unless you can verify you are the only owner of the
page, as you have no guarantee that somebody else didn't take a
reference to the page/data to send it off somewhere else. Once you
expose the page to any other entity it should be written off, or
committed in your case, and you should move on to the next block.
>
> >> +static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
> >> + gfp_t gfp_mask)
> >> {
> >> + struct page *page;
> >> +
> >> if (likely(nc->encoded_va)) {
> >> - if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
> >> + page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
> >> + if (page)
> >> goto out;
> >> }
> >>
> >> - if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
> >> - return false;
> >> + page = __page_frag_cache_refill(nc, gfp_mask);
> >> + if (unlikely(!page))
> >> + return NULL;
> >>
> >> out:
> >> /* reset page count bias and remaining to start of new frag */
> >> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> >> nc->remaining = page_frag_cache_page_size(nc->encoded_va);
> >> - return true;
> >> + return page;
> >> +}
> >> +
> >
> > None of the functions above need to be returning page.
>
> Are you still suggesting to always use virt_to_page() even when it is
> not really necessary? why not return the page here to avoid the
> virt_to_page()?
Yes. The likelihood of you needing to pass this out as a page should
be low, as most cases will just be you using the virtual address
anyway. You are essentially trading off branching for not having to
use virt_to_page(). It is an unnecessary optimization.
>
> >> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
> >> + unsigned int *offset, unsigned int fragsz,
> >> + gfp_t gfp)
> >> +{
> >> + unsigned int remaining = nc->remaining;
> >> + struct page *page;
> >> +
> >> + VM_BUG_ON(!fragsz);
> >> + if (likely(remaining >= fragsz)) {
> >> + unsigned long encoded_va = nc->encoded_va;
> >> +
> >> + *offset = page_frag_cache_page_size(encoded_va) -
> >> + remaining;
> >> +
> >> + return virt_to_page((void *)encoded_va);
> >> + }
> >> +
> >> + if (unlikely(fragsz > PAGE_SIZE))
> >> + return NULL;
> >> +
> >> + page = __page_frag_cache_reload(nc, gfp);
> >> + if (unlikely(!page))
> >> + return NULL;
> >> +
> >> + *offset = 0;
> >> + nc->remaining = remaining - fragsz;
> >> + nc->pagecnt_bias--;
> >> +
> >> + return page;
> >> }
> >> +EXPORT_SYMBOL(page_frag_alloc_pg);
> >
> > Again, this isn't returning a page. It is essentially returning a
> > bio_vec without calling it as such. You might as well pass the bio_vec
> > pointer as an argument and just have it populate it directly.
>
> I really don't think your bio_vec suggestion make much sense for now as
> the reason mentioned in below:
>
> "Through a quick look, there seems to be at least three structs which have
> similar values: struct bio_vec & struct skb_frag & struct page_frag.
>
> As per your above argument about using bio_vec, it seems it is ok to use any
> one of them as each one of them seems to have almost all the values we
> are using?
>
> Personally, my preference over them: 'struct page_frag' > 'struct skb_frag'
> > 'struct bio_vec', as the naming of 'struct page_frag' seems to best match
> the page_frag API, 'struct skb_frag' is the second preference because we
> mostly need to fill skb frag anyway, and 'struct bio_vec' is the last
> preference because it just happen to have almost all the values needed.
That is why I said I would be okay with us passing page_frag in patch
12 after looking closer at the code. The fact is it should make the
review of that patch set much easier if you essentially just pass the
page_frag back out of the call. Then it could be used in exactly the
same way it was before and should reduce the total number of lines of
code that need to be changed.
> Is there any specific reason other than the above "almost all the values you
> are using are exposed by that structure already " that you prefer bio_vec?"
>
> 1. https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
My reason for preferring bio_vec is that, of the 3, it is the best set
up to be used as a local variable, versus something stored in a struct
such as page_frag, or used for some specialty use case such as
skb_frag_t. In addition it already has a set of helpers for converting
it to a virtual address or copying data to and from it, which would
make it easier to get rid of a bunch of duplicate code.
* Re: [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API
2024-08-15 15:25 ` Alexander Duyck
@ 2024-08-16 12:01 ` Yunsheng Lin
2024-08-19 15:52 ` Alexander Duyck
0 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-16 12:01 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On 2024/8/15 23:25, Alexander Duyck wrote:
> On Wed, Aug 14, 2024 at 8:05 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/8/15 5:00, Alexander H Duyck wrote:
>
> ...
>
>>>> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
>>>> + unsigned int fragsz)
>>>> +{
>>>> + nc->pagecnt_bias++;
>>>> + nc->remaining += fragsz;
>>>> +}
>>>> +
>>>
>>> This doesn't add up. Why would you need abort if you have commit? Isn't
>>> this more of a revert? I wouldn't think that would be valid as it is
>>> possible you took some sort of action that might have resulted in this
>>> memory already being shared. We shouldn't allow rewinding the offset
>>> pointer without knowing that there are no other entities sharing the
>>> page.
>>
>> This is used for __tun_build_skb() in drivers/net/tun.c as below, mainly
>> used to avoid performance penalty for XDP drop case:
>
> Yeah, I reviewed that patch. As I said there, rewinding the offset
> should be avoided unless you can verify you are the only owner of the
> page as you have no guarantees that somebody else didn't take an
> access to the page/data to send it off somewhere else. Once you expose
> the page to any other entity it should be written off or committed in
> your case and you should move on to the next block.
Yes, the expectation is that somebody else didn't take a reference to
the page/data to send it off somewhere else between page_frag_alloc_va()
and page_frag_alloc_abort(). Did you see that expectation being broken
in that patch? If yes, we should fix it by using the page_frag_free_va()
related API instead of page_frag_alloc_abort().
>
>
>>
>>>> +static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
>>>> + gfp_t gfp_mask)
>>>> {
>>>> + struct page *page;
>>>> +
>>>> if (likely(nc->encoded_va)) {
>>>> - if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
>>>> + page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
>>>> + if (page)
>>>> goto out;
>>>> }
>>>>
>>>> - if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
>>>> - return false;
>>>> + page = __page_frag_cache_refill(nc, gfp_mask);
>>>> + if (unlikely(!page))
>>>> + return NULL;
>>>>
>>>> out:
>>>> /* reset page count bias and remaining to start of new frag */
>>>> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>>>> nc->remaining = page_frag_cache_page_size(nc->encoded_va);
>>>> - return true;
>>>> + return page;
>>>> +}
>>>> +
>>>
>>> None of the functions above need to be returning page.
>>
>> Are you still suggesting to always use virt_to_page() even when it is
>> not really necessary? why not return the page here to avoid the
>> virt_to_page()?
>
> Yes. The likelihood of you needing to pass this out as a page should
> be low as most cases will just be you using the virtual address
> anyway. You are essentially trading off branching for not having to
> use virt_to_page. It is unnecessary optimization.
To my understanding, I am not trading off branching for not having to
use virt_to_page(); the branching is already needed whether we utilize
it to avoid calling virt_to_page() or not. Please be more specific about
which branch is traded off for not having to use virt_to_page() here.
>
>
>>
>>>> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
>>>> + unsigned int *offset, unsigned int fragsz,
>>>> + gfp_t gfp)
>>>> +{
>>>> + unsigned int remaining = nc->remaining;
>>>> + struct page *page;
>>>> +
>>>> + VM_BUG_ON(!fragsz);
>>>> + if (likely(remaining >= fragsz)) {
>>>> + unsigned long encoded_va = nc->encoded_va;
>>>> +
>>>> + *offset = page_frag_cache_page_size(encoded_va) -
>>>> + remaining;
>>>> +
>>>> + return virt_to_page((void *)encoded_va);
>>>> + }
>>>> +
>>>> + if (unlikely(fragsz > PAGE_SIZE))
>>>> + return NULL;
>>>> +
>>>> + page = __page_frag_cache_reload(nc, gfp);
>>>> + if (unlikely(!page))
>>>> + return NULL;
>>>> +
>>>> + *offset = 0;
>>>> + nc->remaining = remaining - fragsz;
>>>> + nc->pagecnt_bias--;
>>>> +
>>>> + return page;
>>>> }
>>>> +EXPORT_SYMBOL(page_frag_alloc_pg);
>>>
>>> Again, this isn't returning a page. It is essentially returning a
>>> bio_vec without calling it as such. You might as well pass the bio_vec
>>> pointer as an argument and just have it populate it directly.
>>
>> I really don't think your bio_vec suggestion make much sense for now as
>> the reason mentioned in below:
>>
>> "Through a quick look, there seems to be at least three structs which have
>> similar values: struct bio_vec & struct skb_frag & struct page_frag.
>>
> >> As per your above argument about using bio_vec, it seems it is ok to use any
>> one of them as each one of them seems to have almost all the values we
>> are using?
>>
>> Personally, my preference over them: 'struct page_frag' > 'struct skb_frag'
>>> 'struct bio_vec', as the naming of 'struct page_frag' seems to best match
>> the page_frag API, 'struct skb_frag' is the second preference because we
>> mostly need to fill skb frag anyway, and 'struct bio_vec' is the last
>> preference because it just happen to have almost all the values needed.
>
> That is why I said I would be okay with us passing page_frag in patch
> 12 after looking closer at the code. The fact is it should make the
> review of that patch set much easier if you essentially just pass the
> page_frag back out of the call. Then it could be used in exactly the
> same way it was before and should reduce the total number of lines of
> code that need to be changed.
So your suggestion changed to something like below?
int page_frag_alloc_pfrag(struct page_frag_cache *nc, struct page_frag *pfrag);
The API naming of 'page_frag_alloc_pfrag' seems a little odd to me;
any better name in mind?
>
>> Is there any specific reason other than the above "almost all the values you
>> are using are exposed by that structure already " that you prefer bio_vec?"
>>
>> 1. https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
>
> My reason for preferring bio_vec is that of the 3 it is the most setup
> to be used as a local variable versus something stored in a struct
> such as page_frag or used for some specialty user case such as
> skb_frag_t. In addition it already has a set of helpers for converting
> it to a virtual address or copying data to and from it which would
> make it easier to get rid of a bunch of duplicate code.
* Re: [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API
2024-08-16 12:01 ` Yunsheng Lin
@ 2024-08-19 15:52 ` Alexander Duyck
2024-08-20 13:08 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-19 15:52 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On Fri, Aug 16, 2024 at 5:01 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/15 23:25, Alexander Duyck wrote:
> > On Wed, Aug 14, 2024 at 8:05 PM Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >>
> >> On 2024/8/15 5:00, Alexander H Duyck wrote:
> >
> > ...
> >
> >>>> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
> >>>> + unsigned int fragsz)
> >>>> +{
> >>>> + nc->pagecnt_bias++;
> >>>> + nc->remaining += fragsz;
> >>>> +}
> >>>> +
> >>>
> >>> This doesn't add up. Why would you need abort if you have commit? Isn't
> >>> this more of a revert? I wouldn't think that would be valid as it is
> >>> possible you took some sort of action that might have resulted in this
> >>> memory already being shared. We shouldn't allow rewinding the offset
> >>> pointer without knowing that there are no other entities sharing the
> >>> page.
> >>
> >> This is used for __tun_build_skb() in drivers/net/tun.c as below, mainly
> >> used to avoid performance penalty for XDP drop case:
> >
> > Yeah, I reviewed that patch. As I said there, rewinding the offset
> > should be avoided unless you can verify you are the only owner of the
> > page as you have no guarantees that somebody else didn't take an
> > access to the page/data to send it off somewhere else. Once you expose
> > the page to any other entity it should be written off or committed in
> > your case and you should move on to the next block.
>
> Yes, the expectation is that somebody else didn't take an access to the
> page/data to send it off somewhere else between page_frag_alloc_va()
> and page_frag_alloc_abort(), did you see expectation was broken in that
> patch? If yes, we should fix that by using page_frag_free_va() related
> API instead of using page_frag_alloc_abort().
The problem is that when you expose it to XDP there are a number of
different paths it can take. As such you shouldn't be expecting XDP
not to do something like that. Basically you have to check the
reference count before you can rewind the page.
> >
> >
> >>
> >>>> +static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
> >>>> + gfp_t gfp_mask)
> >>>> {
> >>>> + struct page *page;
> >>>> +
> >>>> if (likely(nc->encoded_va)) {
> >>>> - if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
> >>>> + page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
> >>>> + if (page)
> >>>> goto out;
> >>>> }
> >>>>
> >>>> - if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
> >>>> - return false;
> >>>> + page = __page_frag_cache_refill(nc, gfp_mask);
> >>>> + if (unlikely(!page))
> >>>> + return NULL;
> >>>>
> >>>> out:
> >>>> /* reset page count bias and remaining to start of new frag */
> >>>> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> >>>> nc->remaining = page_frag_cache_page_size(nc->encoded_va);
> >>>> - return true;
> >>>> + return page;
> >>>> +}
> >>>> +
> >>>
> >>> None of the functions above need to be returning page.
> >>
> >> Are you still suggesting to always use virt_to_page() even when it is
> >> not really necessary? why not return the page here to avoid the
> >> virt_to_page()?
> >
> > Yes. The likelihood of you needing to pass this out as a page should
> > be low as most cases will just be you using the virtual address
> > anyway. You are essentially trading off branching for not having to
> > use virt_to_page. It is unnecessary optimization.
>
> As my understanding, I am not trading off branching for not having to
> use virt_to_page, the branching is already needed no matter we utilize
> it to avoid calling virt_to_page() or not, please be more specific about
> which branching is traded off for not having to use virt_to_page() here.
The virt_to_page overhead isn't that high. It would be better to just
use a consistent path rather than try to optimize for an unlikely
branch in your datapath.
> >
> >
> >>
> >>>> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
> >>>> + unsigned int *offset, unsigned int fragsz,
> >>>> + gfp_t gfp)
> >>>> +{
> >>>> + unsigned int remaining = nc->remaining;
> >>>> + struct page *page;
> >>>> +
> >>>> + VM_BUG_ON(!fragsz);
> >>>> + if (likely(remaining >= fragsz)) {
> >>>> + unsigned long encoded_va = nc->encoded_va;
> >>>> +
> >>>> + *offset = page_frag_cache_page_size(encoded_va) -
> >>>> + remaining;
> >>>> +
> >>>> + return virt_to_page((void *)encoded_va);
> >>>> + }
> >>>> +
> >>>> + if (unlikely(fragsz > PAGE_SIZE))
> >>>> + return NULL;
> >>>> +
> >>>> + page = __page_frag_cache_reload(nc, gfp);
> >>>> + if (unlikely(!page))
> >>>> + return NULL;
> >>>> +
> >>>> + *offset = 0;
> >>>> + nc->remaining = remaining - fragsz;
> >>>> + nc->pagecnt_bias--;
> >>>> +
> >>>> + return page;
> >>>> }
> >>>> +EXPORT_SYMBOL(page_frag_alloc_pg);
> >>>
> >>> Again, this isn't returning a page. It is essentially returning a
> >>> bio_vec without calling it as such. You might as well pass the bio_vec
> >>> pointer as an argument and just have it populate it directly.
> >>
> >> I really don't think your bio_vec suggestion makes much sense for now,
> >> for the reason mentioned below:
> >>
> >> "Through a quick look, there seem to be at least three structs with
> >> similar values: struct bio_vec & struct skb_frag & struct page_frag.
> >>
> >> As per your argument above about using bio_vec, it seems it is OK to use
> >> any one of them, as each one of them seems to have almost all the values
> >> we are using?
> >>
> >> Personally, my preference is 'struct page_frag' > 'struct skb_frag' >
> >> 'struct bio_vec': the naming of 'struct page_frag' seems to best match
> >> the page_frag API, 'struct skb_frag' is the second preference because we
> >> mostly need to fill an skb frag anyway, and 'struct bio_vec' is the last
> >> preference because it just happens to have almost all the values needed.
> >
> > That is why I said I would be okay with us passing page_frag in patch
> > 12 after looking closer at the code. The fact is it should make the
> > review of that patch set much easier if you essentially just pass the
> > page_frag back out of the call. Then it could be used in exactly the
> > same way it was before and should reduce the total number of lines of
> > code that need to be changed.
>
> So your suggestion has changed to something like below?
>
> int page_frag_alloc_pfrag(struct page_frag_cache *nc, struct page_frag *pfrag);
>
> The API naming of 'page_frag_alloc_pfrag' seems a little odd to me; do you
> have a better one in mind?
Well at this point we are populating/getting/pulling a page frag from
the page frag cache. Maybe look for a word other than alloc, such as
populate. Essentially what you are doing is populating the pfrag from
the frag cache, although I thought there was a size value passed in
for that, isn't there?
^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API
2024-08-19 15:52 ` Alexander Duyck
@ 2024-08-20 13:08 ` Yunsheng Lin
0 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-20 13:08 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
linux-mm
On 2024/8/19 23:52, Alexander Duyck wrote:
>>
>> Yes, the expectation is that nobody else took a reference to the
>> page/data to send it off somewhere else between page_frag_alloc_va()
>> and page_frag_alloc_abort(). Did you see that expectation broken in that
>> patch? If yes, we should fix it by using the page_frag_free_va() related
>> API instead of page_frag_alloc_abort().
>
> The problem is when you expose it to XDP there are a number of
> different paths it can take. As such you shouldn't be expecting XDP to
> not do something like that. Basically you have to check the reference
Even if XDP operations like xdp_do_redirect() or tun_xdp_xmit() return
failure, we still can not do that? It seems odd if that is the case.
If not, can we use page_frag_alloc_abort() with fragsz being zero to
avoid the atomic operation?
> count before you can rewind the page.
>
>>>
>>>
>>>>
>>>>>> +static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
>>>>>> + gfp_t gfp_mask)
>>>>>> {
>>>>>> + struct page *page;
>>>>>> +
>>>>>> if (likely(nc->encoded_va)) {
>>>>>> - if (__page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias))
>>>>>> + page = __page_frag_cache_reuse(nc->encoded_va, nc->pagecnt_bias);
>>>>>> + if (page)
>>>>>> goto out;
>>>>>> }
>>>>>>
>>>>>> - if (unlikely(!__page_frag_cache_refill(nc, gfp_mask)))
>>>>>> - return false;
>>>>>> + page = __page_frag_cache_refill(nc, gfp_mask);
>>>>>> + if (unlikely(!page))
>>>>>> + return NULL;
>>>>>>
>>>>>> out:
>>>>>> /* reset page count bias and remaining to start of new frag */
>>>>>> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>>>>>> nc->remaining = page_frag_cache_page_size(nc->encoded_va);
>>>>>> - return true;
>>>>>> + return page;
>>>>>> +}
>>>>>> +
>>>>>
>>>>> None of the functions above need to be returning page.
>>>>
>>>> Are you still suggesting we always use virt_to_page() even when it is
>>>> not really necessary? Why not return the page here to avoid the
>>>> virt_to_page()?
>>>
>>> Yes. The likelihood of you needing to pass this out as a page should
>>> be low as most cases will just be you using the virtual address
>>> anyway. You are essentially trading off branching for not having to
>>> use virt_to_page. It is unnecessary optimization.
>>
>> As I understand it, I am not trading off branching for avoiding
>> virt_to_page(); the branching is already needed whether or not we use
>> it to avoid calling virt_to_page(). Please be more specific about
>> which branch is traded off for not having to use virt_to_page() here.
>
> The virt_to_page overhead isn't that high. It would be better to just
> use a consistent path rather than try to optimize for an unlikely
> branch in your datapath.
I am not sure I understand what you mean by 'consistent path' here.
If I understand your comment correctly, the path is already not
consistent, in order to avoid fetching the size multiple times in
multiple ways, as mentioned in [1]. As below, doesn't it seem natural
to avoid the virt_to_page() call while also avoiding the
page_frag_cache_page_size() call, even if it is an unlikely case as
you mentioned:
struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
				unsigned int *offset, unsigned int fragsz,
				gfp_t gfp)
{
	unsigned int remaining = nc->remaining;
	struct page *page;

	VM_BUG_ON(!fragsz);
	if (likely(remaining >= fragsz)) {
		unsigned long encoded_va = nc->encoded_va;

		*offset = page_frag_cache_page_size(encoded_va) -
			  remaining;

		return virt_to_page((void *)encoded_va);
	}

	if (unlikely(fragsz > PAGE_SIZE))
		return NULL;

	page = __page_frag_cache_reload(nc, gfp);
	if (unlikely(!page))
		return NULL;

	*offset = 0;
	nc->remaining -= fragsz;
	nc->pagecnt_bias--;

	return page;
}
1. https://lore.kernel.org/all/CAKgT0UeQ9gwYo7qttak0UgXC9+kunO2gedm_yjtPiMk4VJp9yQ@mail.gmail.com/
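As a side note for readers following the thread, the offset bookkeeping being argued over can be modeled in plain userspace C. This is a minimal sketch, not the kernel code: `frag_cache_model` and `frag_alloc_offset` are illustrative stand-ins, and the real implementation also handles page reload and reference biasing.

```c
#include <assert.h>

/* Simplified userspace model of the page_frag_cache bookkeeping:
 * a fragment's offset is derived from the page size minus the
 * remaining bytes, so the fast path never has to recompute it.
 */
struct frag_cache_model {
	unsigned int page_size;	/* size of the backing page */
	unsigned int remaining;	/* bytes still available in it */
};

/* Fast-path analogue of page_frag_alloc_pg(): if the current page
 * has room, hand out offset = page_size - remaining and consume
 * fragsz bytes; otherwise signal that a reload would be needed.
 */
static int frag_alloc_offset(struct frag_cache_model *nc,
			     unsigned int fragsz, unsigned int *offset)
{
	if (nc->remaining >= fragsz) {
		*offset = nc->page_size - nc->remaining;
		nc->remaining -= fragsz;
		return 0;
	}
	return -1;
}
```

Successive allocations then pack fragments back to back: the first 256-byte fragment lands at offset 0, the next at offset 256, and so on until `remaining` can no longer satisfy `fragsz`.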
>
>>>
>>>
>>>>
>>>>>> +struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
>>>>>> + unsigned int *offset, unsigned int fragsz,
>>>>>> + gfp_t gfp)
>>>>>> +{
>>>>>> + unsigned int remaining = nc->remaining;
>>>>>> + struct page *page;
>>>>>> +
>>>>>> + VM_BUG_ON(!fragsz);
>>>>>> + if (likely(remaining >= fragsz)) {
>>>>>> + unsigned long encoded_va = nc->encoded_va;
>>>>>> +
>>>>>> + *offset = page_frag_cache_page_size(encoded_va) -
>>>>>> + remaining;
>>>>>> +
>>>>>> + return virt_to_page((void *)encoded_va);
>>>>>> + }
>>>>>> +
>>>>>> + if (unlikely(fragsz > PAGE_SIZE))
>>>>>> + return NULL;
>>>>>> +
>>>>>> + page = __page_frag_cache_reload(nc, gfp);
>>>>>> + if (unlikely(!page))
>>>>>> + return NULL;
>>>>>> +
>>>>>> + *offset = 0;
>>>>>> + nc->remaining = remaining - fragsz;
>>>>>> + nc->pagecnt_bias--;
>>>>>> +
>>>>>> + return page;
>>>>>> }
>>>>>> +EXPORT_SYMBOL(page_frag_alloc_pg);
>>>>>
>>>>> Again, this isn't returning a page. It is essentially returning a
>>>>> bio_vec without calling it as such. You might as well pass the bio_vec
>>>>> pointer as an argument and just have it populate it directly.
>>>>
>>>> I really don't think your bio_vec suggestion makes much sense for now,
>>>> for the reason mentioned below:
>>>>
>>>> "Through a quick look, there seem to be at least three structs with
>>>> similar values: struct bio_vec & struct skb_frag & struct page_frag.
>>>>
>>>> As per your argument above about using bio_vec, it seems it is OK to use
>>>> any one of them, as each one of them seems to have almost all the values
>>>> we are using?
>>>>
>>>> Personally, my preference is 'struct page_frag' > 'struct skb_frag' >
>>>> 'struct bio_vec': the naming of 'struct page_frag' seems to best match
>>>> the page_frag API, 'struct skb_frag' is the second preference because we
>>>> mostly need to fill an skb frag anyway, and 'struct bio_vec' is the last
>>>> preference because it just happens to have almost all the values needed.
>>>
>>> That is why I said I would be okay with us passing page_frag in patch
>>> 12 after looking closer at the code. The fact is it should make the
>>> review of that patch set much easier if you essentially just pass the
>>> page_frag back out of the call. Then it could be used in exactly the
>>> same way it was before and should reduce the total number of lines of
>>> code that need to be changed.
>>
>> So your suggestion has changed to something like below?
>>
>> int page_frag_alloc_pfrag(struct page_frag_cache *nc, struct page_frag *pfrag);
>>
>> The API naming of 'page_frag_alloc_pfrag' seems a little odd to me; do you
>> have a better one in mind?
>
> Well at this point we are populating/getting/pulling a page frag from
> the page frag cache. Maybe look for a word other than alloc, such as
> populate. Essentially what you are doing is populating the pfrag from
> the frag cache, although I thought there was a size value passed in
> for that, isn't there?
'struct page_frag' does have a size field, but I am not sure yet that
I understand what you mean about a size value being passed for it.
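For what the page_frag-returning variant under discussion might look like, here is a hedged userspace sketch: the struct layouts and the "populate" naming are only stand-ins for the API shape being debated (no final name was settled in the thread), and the real call would also take gfp flags and return a struct page.

```c
#include <assert.h>

/* Illustrative model of populating a page_frag-like pair from a
 * frag cache in one call; these are not the kernel definitions.
 */
struct pfrag_model {
	unsigned int offset;
	unsigned int size;
};

struct pfrag_cache_model {
	unsigned int page_size;
	unsigned int remaining;
};

/* "populate" rather than "alloc": nothing is consumed here; the call
 * only exposes the currently available fragment, and a later commit
 * step would consume the bytes actually used.
 */
static int pfrag_populate(struct pfrag_cache_model *nc,
			  struct pfrag_model *pfrag)
{
	if (!nc->remaining)
		return -1;	/* the real code would reload the cache */
	pfrag->offset = nc->page_size - nc->remaining;
	pfrag->size = nc->remaining;
	return 0;
}
```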
* [PATCH net-next v13 12/14] net: replace page_frag with page_frag_cache
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (10 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 11/14] mm: page_frag: introduce prepare/probe/commit API Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-14 22:01 ` Alexander H Duyck
2024-08-08 12:37 ` [PATCH net-next v13 13/14] mm: page_frag: update documentation for page_frag Yunsheng Lin
` (2 subsequent siblings)
14 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Mat Martineau, Ayush Sawal, Eric Dumazet, Willem de Bruijn,
Jason Wang, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, John Fastabend, Jakub Sitnicki,
David Ahern, Matthieu Baerts, Geliang Tang, Jamal Hadi Salim,
Cong Wang, Jiri Pirko, Boris Pismenny, bpf, mptcp
Use the newly introduced prepare/probe/commit API to
replace page_frag with page_frag_cache for sk_page_frag().
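The conversions in this patch follow one recurring prepare/commit pattern, sketched here as a userspace model (types and names are illustrative stand-ins, not the kernel signatures): prepare exposes the available fragment, the caller copies into it, then commits either taking a page reference for a new skb frag, or without one when coalescing into an existing frag.

```c
#include <assert.h>

/* Userspace model of the prepare/commit flow; illustrative only. */
struct frag_cache {
	unsigned int remaining;	/* bytes left in the current page */
	unsigned int bias;	/* stand-in for pagecnt_bias */
};

/* prepare: report how much room the caller may fill. */
static unsigned int frag_prepare(const struct frag_cache *nc)
{
	return nc->remaining;
}

/* commit: the caller created a new frag, so consume the bytes and
 * pay one page reference (modeled as decrementing the bias).
 */
static void frag_commit(struct frag_cache *nc, unsigned int used)
{
	nc->remaining -= used;
	nc->bias--;
}

/* commit_noref: the caller coalesced into an existing frag that
 * already holds a reference, so only consume the bytes.
 */
static void frag_commit_noref(struct frag_cache *nc, unsigned int used)
{
	nc->remaining -= used;
}
```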
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Mat Martineau <martineau@kernel.org>
---
.../chelsio/inline_crypto/chtls/chtls.h | 3 -
.../chelsio/inline_crypto/chtls/chtls_io.c | 100 ++++---------
.../chelsio/inline_crypto/chtls/chtls_main.c | 3 -
drivers/net/tun.c | 48 +++---
include/linux/sched.h | 2 +-
include/net/sock.h | 14 +-
kernel/exit.c | 3 +-
kernel/fork.c | 3 +-
net/core/skbuff.c | 59 +++++---
net/core/skmsg.c | 22 +--
net/core/sock.c | 46 ++++--
net/ipv4/ip_output.c | 33 +++--
net/ipv4/tcp.c | 32 ++--
net/ipv4/tcp_output.c | 28 ++--
net/ipv6/ip6_output.c | 33 +++--
net/kcm/kcmsock.c | 27 ++--
net/mptcp/protocol.c | 67 +++++----
net/sched/em_meta.c | 2 +-
net/tls/tls_device.c | 137 ++++++++++--------
19 files changed, 347 insertions(+), 315 deletions(-)
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
index 7ff82b6778ba..fe2b6a8ef718 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
@@ -234,7 +234,6 @@ struct chtls_dev {
struct list_head list_node;
struct list_head rcu_node;
struct list_head na_node;
- unsigned int send_page_order;
int max_host_sndbuf;
u32 round_robin_cnt;
struct key_map kmap;
@@ -453,8 +452,6 @@ enum {
/* The ULP mode/submode of an skbuff */
#define skb_ulp_mode(skb) (ULP_SKB_CB(skb)->ulp_mode)
-#define TCP_PAGE(sk) (sk->sk_frag.page)
-#define TCP_OFF(sk) (sk->sk_frag.offset)
static inline struct chtls_dev *to_chtls_dev(struct tls_toe_device *tlsdev)
{
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
index d567e42e1760..334381c1587f 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
@@ -825,12 +825,6 @@ void skb_entail(struct sock *sk, struct sk_buff *skb, int flags)
ULP_SKB_CB(skb)->flags = flags;
__skb_queue_tail(&csk->txq, skb);
sk->sk_wmem_queued += skb->truesize;
-
- if (TCP_PAGE(sk) && TCP_OFF(sk)) {
- put_page(TCP_PAGE(sk));
- TCP_PAGE(sk) = NULL;
- TCP_OFF(sk) = 0;
- }
}
static struct sk_buff *get_tx_skb(struct sock *sk, int size)
@@ -882,16 +876,12 @@ static void push_frames_if_head(struct sock *sk)
chtls_push_frames(csk, 1);
}
-static int chtls_skb_copy_to_page_nocache(struct sock *sk,
- struct iov_iter *from,
- struct sk_buff *skb,
- struct page *page,
- int off, int copy)
+static int chtls_skb_copy_to_va_nocache(struct sock *sk, struct iov_iter *from,
+ struct sk_buff *skb, char *va, int copy)
{
int err;
- err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) +
- off, copy, skb->len);
+ err = skb_do_copy_data_nocache(sk, skb, from, va, copy, skb->len);
if (err)
return err;
@@ -1114,82 +1104,44 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
if (err)
goto do_fault;
} else {
+ struct page_frag_cache *pfrag = &sk->sk_frag;
int i = skb_shinfo(skb)->nr_frags;
- struct page *page = TCP_PAGE(sk);
- int pg_size = PAGE_SIZE;
- int off = TCP_OFF(sk);
- bool merge;
-
- if (page)
- pg_size = page_size(page);
- if (off < pg_size &&
- skb_can_coalesce(skb, i, page, off)) {
+ unsigned int offset, fragsz;
+ bool merge = false;
+ struct page *page;
+ void *va;
+
+ fragsz = 32U;
+ page = page_frag_alloc_prepare(pfrag, &offset, &fragsz,
+ &va, sk->sk_allocation);
+ if (unlikely(!page))
+ goto wait_for_memory;
+
+ if (skb_can_coalesce(skb, i, page, offset))
merge = true;
- goto copy;
- }
- merge = false;
- if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) :
- MAX_SKB_FRAGS))
+ else if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) :
+ MAX_SKB_FRAGS))
goto new_buf;
- if (page && off == pg_size) {
- put_page(page);
- TCP_PAGE(sk) = page = NULL;
- pg_size = PAGE_SIZE;
- }
-
- if (!page) {
- gfp_t gfp = sk->sk_allocation;
- int order = cdev->send_page_order;
-
- if (order) {
- page = alloc_pages(gfp | __GFP_COMP |
- __GFP_NOWARN |
- __GFP_NORETRY,
- order);
- if (page)
- pg_size <<= order;
- }
- if (!page) {
- page = alloc_page(gfp);
- pg_size = PAGE_SIZE;
- }
- if (!page)
- goto wait_for_memory;
- off = 0;
- }
-copy:
- if (copy > pg_size - off)
- copy = pg_size - off;
+ copy = min_t(int, copy, fragsz);
if (is_tls_tx(csk))
copy = min_t(int, copy, csk->tlshws.txleft);
- err = chtls_skb_copy_to_page_nocache(sk, &msg->msg_iter,
- skb, page,
- off, copy);
- if (unlikely(err)) {
- if (!TCP_PAGE(sk)) {
- TCP_PAGE(sk) = page;
- TCP_OFF(sk) = 0;
- }
+ err = chtls_skb_copy_to_va_nocache(sk, &msg->msg_iter,
+ skb, va, copy);
+ if (unlikely(err))
goto do_fault;
- }
+
/* Update the skb. */
if (merge) {
skb_frag_size_add(
&skb_shinfo(skb)->frags[i - 1],
copy);
+ page_frag_alloc_commit_noref(pfrag, copy);
} else {
- skb_fill_page_desc(skb, i, page, off, copy);
- if (off + copy < pg_size) {
- /* space left keep page */
- get_page(page);
- TCP_PAGE(sk) = page;
- } else {
- TCP_PAGE(sk) = NULL;
- }
+ skb_fill_page_desc(skb, i, page, offset, copy);
+ page_frag_alloc_commit(pfrag, copy);
}
- TCP_OFF(sk) = off + copy;
}
if (unlikely(skb->len == mss))
tx_skb_finalize(skb);
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
index 455a54708be4..ba88b2fc7cd8 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
@@ -34,7 +34,6 @@ static DEFINE_MUTEX(notify_mutex);
static RAW_NOTIFIER_HEAD(listen_notify_list);
static struct proto chtls_cpl_prot, chtls_cpl_protv6;
struct request_sock_ops chtls_rsk_ops, chtls_rsk_opsv6;
-static uint send_page_order = (14 - PAGE_SHIFT < 0) ? 0 : 14 - PAGE_SHIFT;
static void register_listen_notifier(struct notifier_block *nb)
{
@@ -273,8 +272,6 @@ static void *chtls_uld_add(const struct cxgb4_lld_info *info)
INIT_WORK(&cdev->deferq_task, process_deferq);
spin_lock_init(&cdev->listen_lock);
spin_lock_init(&cdev->idr_lock);
- cdev->send_page_order = min_t(uint, get_order(32768),
- send_page_order);
cdev->max_host_sndbuf = 48 * 1024;
if (lldi->vr->key.size)
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 1d06c560c5e6..51df92fd60db 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -1598,21 +1598,19 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
}
static struct sk_buff *__tun_build_skb(struct tun_file *tfile,
- struct page_frag *alloc_frag, char *buf,
- int buflen, int len, int pad)
+ char *buf, int buflen, int len, int pad)
{
struct sk_buff *skb = build_skb(buf, buflen);
- if (!skb)
+ if (!skb) {
+ page_frag_free_va(buf);
return ERR_PTR(-ENOMEM);
+ }
skb_reserve(skb, pad);
skb_put(skb, len);
skb_set_owner_w(skb, tfile->socket.sk);
- get_page(alloc_frag->page);
- alloc_frag->offset += buflen;
-
return skb;
}
@@ -1660,7 +1658,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
struct virtio_net_hdr *hdr,
int len, int *skb_xdp)
{
- struct page_frag *alloc_frag = &current->task_frag;
+ struct page_frag_cache *alloc_frag = &current->task_frag;
struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
struct bpf_prog *xdp_prog;
int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
@@ -1676,16 +1674,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
buflen += SKB_DATA_ALIGN(len + pad);
rcu_read_unlock();
- alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES);
- if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL)))
+ buf = page_frag_alloc_va_align(alloc_frag, buflen, GFP_KERNEL,
+ SMP_CACHE_BYTES);
+ if (unlikely(!buf))
return ERR_PTR(-ENOMEM);
- buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
- copied = copy_page_from_iter(alloc_frag->page,
- alloc_frag->offset + pad,
- len, from);
- if (copied != len)
+ copied = copy_from_iter(buf + pad, len, from);
+ if (copied != len) {
+ page_frag_alloc_abort(alloc_frag, buflen);
return ERR_PTR(-EFAULT);
+ }
/* There's a small window that XDP may be set after the check
* of xdp_prog above, this should be rare and for simplicity
@@ -1693,8 +1691,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
*/
if (hdr->gso_type || !xdp_prog) {
*skb_xdp = 1;
- return __tun_build_skb(tfile, alloc_frag, buf, buflen, len,
- pad);
+ return __tun_build_skb(tfile, buf, buflen, len, pad);
}
*skb_xdp = 0;
@@ -1711,21 +1708,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
xdp_prepare_buff(&xdp, buf, pad, len, false);
act = bpf_prog_run_xdp(xdp_prog, &xdp);
- if (act == XDP_REDIRECT || act == XDP_TX) {
- get_page(alloc_frag->page);
- alloc_frag->offset += buflen;
- }
err = tun_xdp_act(tun, xdp_prog, &xdp, act);
- if (err < 0) {
- if (act == XDP_REDIRECT || act == XDP_TX)
- put_page(alloc_frag->page);
- goto out;
- }
-
if (err == XDP_REDIRECT)
xdp_do_flush();
- if (err != XDP_PASS)
+
+ if (err == XDP_REDIRECT || err == XDP_TX) {
+ goto out;
+ } else if (err < 0 || err != XDP_PASS) {
+ page_frag_alloc_abort(alloc_frag, buflen);
goto out;
+ }
pad = xdp.data - xdp.data_hard_start;
len = xdp.data_end - xdp.data;
@@ -1734,7 +1726,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
rcu_read_unlock();
local_bh_enable();
- return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, pad);
+ return __tun_build_skb(tfile, buf, buflen, len, pad);
out:
bpf_net_ctx_clear(bpf_net_ctx);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index f8d150343d42..bb9a8e9d6d2d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1355,7 +1355,7 @@ struct task_struct {
/* Cache last used pipe for splice(): */
struct pipe_inode_info *splice_pipe;
- struct page_frag task_frag;
+ struct page_frag_cache task_frag;
#ifdef CONFIG_TASK_DELAY_ACCT
struct task_delay_info *delays;
diff --git a/include/net/sock.h b/include/net/sock.h
index b5e702298ab7..8f6cc0dd2f4f 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -461,7 +461,7 @@ struct sock {
struct sk_buff_head sk_write_queue;
u32 sk_dst_pending_confirm;
u32 sk_pacing_status; /* see enum sk_pacing */
- struct page_frag sk_frag;
+ struct page_frag_cache sk_frag;
struct timer_list sk_timer;
unsigned long sk_pacing_rate; /* bytes per second */
@@ -2484,7 +2484,7 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk)
* Return: a per task page_frag if context allows that,
* otherwise a per socket one.
*/
-static inline struct page_frag *sk_page_frag(struct sock *sk)
+static inline struct page_frag_cache *sk_page_frag(struct sock *sk)
{
if (sk->sk_use_task_frag)
return &current->task_frag;
@@ -2492,7 +2492,15 @@ static inline struct page_frag *sk_page_frag(struct sock *sk)
return &sk->sk_frag;
}
-bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag);
+struct page *sk_page_frag_alloc_prepare(struct sock *sk,
+ struct page_frag_cache *pfrag,
+ unsigned int *size,
+ unsigned int *offset, void **va);
+
+struct page *sk_page_frag_alloc_pg_prepare(struct sock *sk,
+ struct page_frag_cache *pfrag,
+ unsigned int *size,
+ unsigned int *offset);
/*
* Default write policy as shown to user space via poll/select/SIGIO
diff --git a/kernel/exit.c b/kernel/exit.c
index 7430852a8571..b5257e74ec1c 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -917,8 +917,7 @@ void __noreturn do_exit(long code)
if (tsk->splice_pipe)
free_pipe_info(tsk->splice_pipe);
- if (tsk->task_frag.page)
- put_page(tsk->task_frag.page);
+ page_frag_cache_drain(&tsk->task_frag);
exit_task_stack_account(tsk);
diff --git a/kernel/fork.c b/kernel/fork.c
index cc760491f201..7d380a6fd64a 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -80,6 +80,7 @@
#include <linux/tty.h>
#include <linux/fs_struct.h>
#include <linux/magic.h>
+#include <linux/page_frag_cache.h>
#include <linux/perf_event.h>
#include <linux/posix-timers.h>
#include <linux/user-return-notifier.h>
@@ -1157,10 +1158,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
tsk->btrace_seq = 0;
#endif
tsk->splice_pipe = NULL;
- tsk->task_frag.page = NULL;
tsk->wake_q.next = NULL;
tsk->worker_private = NULL;
+ page_frag_cache_init(&tsk->task_frag);
kcov_task_init(tsk);
kmsan_task_create(tsk);
kmap_local_fork(tsk);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index bb77c3fd192f..a7e86984bbcd 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3040,25 +3040,6 @@ static void sock_spd_release(struct splice_pipe_desc *spd, unsigned int i)
put_page(spd->pages[i]);
}
-static struct page *linear_to_page(struct page *page, unsigned int *len,
- unsigned int *offset,
- struct sock *sk)
-{
- struct page_frag *pfrag = sk_page_frag(sk);
-
- if (!sk_page_frag_refill(sk, pfrag))
- return NULL;
-
- *len = min_t(unsigned int, *len, pfrag->size - pfrag->offset);
-
- memcpy(page_address(pfrag->page) + pfrag->offset,
- page_address(page) + *offset, *len);
- *offset = pfrag->offset;
- pfrag->offset += *len;
-
- return pfrag->page;
-}
-
static bool spd_can_coalesce(const struct splice_pipe_desc *spd,
struct page *page,
unsigned int offset)
@@ -3069,6 +3050,38 @@ static bool spd_can_coalesce(const struct splice_pipe_desc *spd,
spd->partial[spd->nr_pages - 1].len == offset);
}
+static bool spd_fill_linear_page(struct splice_pipe_desc *spd,
+ struct page *page, unsigned int offset,
+ unsigned int *len, struct sock *sk)
+{
+ struct page_frag_cache *pfrag = sk_page_frag(sk);
+ unsigned int frag_len, frag_offset;
+ struct page *frag_page;
+ void *va;
+
+ frag_page = sk_page_frag_alloc_prepare(sk, pfrag, &frag_offset,
+ &frag_len, &va);
+ if (!frag_page)
+ return true;
+
+ *len = min_t(unsigned int, *len, frag_len);
+ memcpy(va, page_address(page) + offset, *len);
+
+ if (spd_can_coalesce(spd, frag_page, frag_offset)) {
+ spd->partial[spd->nr_pages - 1].len += *len;
+ page_frag_alloc_commit_noref(pfrag, *len);
+ return false;
+ }
+
+ page_frag_alloc_commit(pfrag, *len);
+ spd->pages[spd->nr_pages] = frag_page;
+ spd->partial[spd->nr_pages].len = *len;
+ spd->partial[spd->nr_pages].offset = frag_offset;
+ spd->nr_pages++;
+
+ return false;
+}
+
/*
* Fill page/offset/length into spd, if it can hold more pages.
*/
@@ -3081,11 +3094,9 @@ static bool spd_fill_page(struct splice_pipe_desc *spd,
if (unlikely(spd->nr_pages == MAX_SKB_FRAGS))
return true;
- if (linear) {
- page = linear_to_page(page, len, &offset, sk);
- if (!page)
- return true;
- }
+ if (linear)
+ return spd_fill_linear_page(spd, page, offset, len, sk);
+
if (spd_can_coalesce(spd, page, offset)) {
spd->partial[spd->nr_pages - 1].len += *len;
return false;
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index bbf40b999713..956fd6103909 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -27,23 +27,25 @@ static bool sk_msg_try_coalesce_ok(struct sk_msg *msg, int elem_first_coalesce)
int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
int elem_first_coalesce)
{
- struct page_frag *pfrag = sk_page_frag(sk);
+ struct page_frag_cache *pfrag = sk_page_frag(sk);
u32 osize = msg->sg.size;
int ret = 0;
len -= msg->sg.size;
while (len > 0) {
+ unsigned int frag_offset, frag_len;
struct scatterlist *sge;
- u32 orig_offset;
+ struct page *page;
int use, i;
- if (!sk_page_frag_refill(sk, pfrag)) {
+ page = sk_page_frag_alloc_pg_prepare(sk, pfrag, &frag_offset,
+ &frag_len);
+ if (!page) {
ret = -ENOMEM;
goto msg_trim;
}
- orig_offset = pfrag->offset;
- use = min_t(int, len, pfrag->size - orig_offset);
+ use = min_t(int, len, frag_len);
if (!sk_wmem_schedule(sk, use)) {
ret = -ENOMEM;
goto msg_trim;
@@ -54,9 +56,10 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
sge = &msg->sg.data[i];
if (sk_msg_try_coalesce_ok(msg, elem_first_coalesce) &&
- sg_page(sge) == pfrag->page &&
- sge->offset + sge->length == orig_offset) {
+ sg_page(sge) == page &&
+ sge->offset + sge->length == frag_offset) {
sge->length += use;
+ page_frag_alloc_commit_noref(pfrag, use);
} else {
if (sk_msg_full(msg)) {
ret = -ENOSPC;
@@ -65,14 +68,13 @@ int sk_msg_alloc(struct sock *sk, struct sk_msg *msg, int len,
sge = &msg->sg.data[msg->sg.end];
sg_unmark_end(sge);
- sg_set_page(sge, pfrag->page, use, orig_offset);
- get_page(pfrag->page);
+ sg_set_page(sge, page, use, frag_offset);
+ page_frag_alloc_commit(pfrag, use);
sk_msg_iter_next(msg, end);
}
sk_mem_charge(sk, use);
msg->sg.size += use;
- pfrag->offset += use;
len -= use;
}
diff --git a/net/core/sock.c b/net/core/sock.c
index 9abc4fe25953..26c100ee9001 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2207,10 +2207,7 @@ static void __sk_destruct(struct rcu_head *head)
pr_debug("%s: optmem leakage (%d bytes) detected\n",
__func__, atomic_read(&sk->sk_omem_alloc));
- if (sk->sk_frag.page) {
- put_page(sk->sk_frag.page);
- sk->sk_frag.page = NULL;
- }
+ page_frag_cache_drain(&sk->sk_frag);
/* We do not need to acquire sk->sk_peer_lock, we are the last user. */
put_cred(sk->sk_peer_cred);
@@ -2956,16 +2953,43 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
}
EXPORT_SYMBOL(skb_page_frag_refill);
-bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
+struct page *sk_page_frag_alloc_prepare(struct sock *sk,
+ struct page_frag_cache *pfrag,
+ unsigned int *offset,
+ unsigned int *size, void **va)
{
- if (likely(skb_page_frag_refill(32U, pfrag, sk->sk_allocation)))
- return true;
+ struct page *page;
+
+ *size = 32U;
+ page = page_frag_alloc_prepare(pfrag, offset, size, va,
+ sk->sk_allocation);
+ if (likely(page))
+ return page;
sk_enter_memory_pressure(sk);
sk_stream_moderate_sndbuf(sk);
- return false;
+ return NULL;
+}
+EXPORT_SYMBOL(sk_page_frag_alloc_prepare);
+
+struct page *sk_page_frag_alloc_pg_prepare(struct sock *sk,
+ struct page_frag_cache *pfrag,
+ unsigned int *offset,
+ unsigned int *size)
+{
+ struct page *page;
+
+ *size = 32U;
+ page = page_frag_alloc_pg_prepare(pfrag, offset, size,
+ sk->sk_allocation);
+ if (likely(page))
+ return page;
+
+ sk_enter_memory_pressure(sk);
+ sk_stream_moderate_sndbuf(sk);
+ return NULL;
}
-EXPORT_SYMBOL(sk_page_frag_refill);
+EXPORT_SYMBOL(sk_page_frag_alloc_pg_prepare);
void __lock_sock(struct sock *sk)
__releases(&sk->sk_lock.slock)
@@ -3487,8 +3511,8 @@ void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid)
sk->sk_error_report = sock_def_error_report;
sk->sk_destruct = sock_def_destruct;
- sk->sk_frag.page = NULL;
- sk->sk_frag.offset = 0;
+ page_frag_cache_init(&sk->sk_frag);
+
sk->sk_peek_off = -1;
sk->sk_peer_pid = NULL;
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 8a10a7c67834..57499e3ed9e5 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -952,7 +952,7 @@ static int __ip_append_data(struct sock *sk,
struct flowi4 *fl4,
struct sk_buff_head *queue,
struct inet_cork *cork,
- struct page_frag *pfrag,
+ struct page_frag_cache *pfrag,
int getfrag(void *from, char *to, int offset,
int len, int odd, struct sk_buff *skb),
void *from, int length, int transhdrlen,
@@ -1228,31 +1228,38 @@ static int __ip_append_data(struct sock *sk,
wmem_alloc_delta += copy;
} else if (!zc) {
int i = skb_shinfo(skb)->nr_frags;
+ unsigned int frag_offset, frag_size;
+ struct page *page;
+ void *va;
err = -ENOMEM;
- if (!sk_page_frag_refill(sk, pfrag))
+ page = sk_page_frag_alloc_prepare(sk, pfrag,
+ &frag_offset,
+ &frag_size, &va);
+ if (!page)
goto error;
skb_zcopy_downgrade_managed(skb);
- if (!skb_can_coalesce(skb, i, pfrag->page,
- pfrag->offset)) {
+ copy = min_t(int, copy, frag_size);
+
+ if (!skb_can_coalesce(skb, i, page, frag_offset)) {
err = -EMSGSIZE;
if (i == MAX_SKB_FRAGS)
goto error;
- __skb_fill_page_desc(skb, i, pfrag->page,
- pfrag->offset, 0);
+ __skb_fill_page_desc(skb, i, page, frag_offset,
+ copy);
skb_shinfo(skb)->nr_frags = ++i;
- get_page(pfrag->page);
+ page_frag_alloc_commit(pfrag, copy);
+ } else {
+ skb_frag_size_add(
+ &skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_alloc_commit_noref(pfrag, copy);
}
- copy = min_t(int, copy, pfrag->size - pfrag->offset);
- if (getfrag(from,
- page_address(pfrag->page) + pfrag->offset,
- offset, copy, skb->len, skb) < 0)
+
+ if (getfrag(from, va, offset, copy, skb->len, skb) < 0)
goto error_efault;
- pfrag->offset += copy;
- skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
skb_len_add(skb, copy);
wmem_alloc_delta += copy;
} else {
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 7c392710ae15..815ec53b16d5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1189,13 +1189,17 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
if (zc == 0) {
bool merge = true;
int i = skb_shinfo(skb)->nr_frags;
- struct page_frag *pfrag = sk_page_frag(sk);
-
- if (!sk_page_frag_refill(sk, pfrag))
+ struct page_frag_cache *pfrag = sk_page_frag(sk);
+ unsigned int frag_offset, frag_size;
+ struct page *page;
+ void *va;
+
+ page = sk_page_frag_alloc_prepare(sk, pfrag, &frag_offset,
+ &frag_size, &va);
+ if (!page)
goto wait_for_space;
- if (!skb_can_coalesce(skb, i, pfrag->page,
- pfrag->offset)) {
+ if (!skb_can_coalesce(skb, i, page, frag_offset)) {
if (i >= READ_ONCE(net_hotdata.sysctl_max_skb_frags)) {
tcp_mark_push(tp, skb);
goto new_segment;
@@ -1203,7 +1207,7 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
merge = false;
}
- copy = min_t(int, copy, pfrag->size - pfrag->offset);
+ copy = min_t(int, copy, frag_size);
if (unlikely(skb_zcopy_pure(skb) || skb_zcopy_managed(skb))) {
if (tcp_downgrade_zcopy_pure(sk, skb))
@@ -1216,20 +1220,18 @@ int tcp_sendmsg_locked(struct sock *sk, struct msghdr *msg, size_t size)
goto wait_for_space;
err = skb_copy_to_va_nocache(sk, &msg->msg_iter, skb,
- page_address(pfrag->page) +
- pfrag->offset, copy);
+ va, copy);
if (err)
goto do_error;
/* Update the skb. */
if (merge) {
skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_alloc_commit_noref(pfrag, copy);
} else {
- skb_fill_page_desc(skb, i, pfrag->page,
- pfrag->offset, copy);
- page_ref_inc(pfrag->page);
+ skb_fill_page_desc(skb, i, page, frag_offset, copy);
+ page_frag_alloc_commit(pfrag, copy);
}
- pfrag->offset += copy;
} else if (zc == MSG_ZEROCOPY) {
/* First append to a fragless skb builds initial
* pure zerocopy skb
@@ -3131,11 +3133,7 @@ int tcp_disconnect(struct sock *sk, int flags)
WARN_ON(inet->inet_num && !icsk->icsk_bind_hash);
- if (sk->sk_frag.page) {
- put_page(sk->sk_frag.page);
- sk->sk_frag.page = NULL;
- sk->sk_frag.offset = 0;
- }
+ page_frag_cache_drain(&sk->sk_frag);
sk_error_report(sk);
return 0;
}
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 16c48df8df4c..43208092b89c 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -3970,9 +3970,12 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
struct inet_connection_sock *icsk = inet_csk(sk);
struct tcp_sock *tp = tcp_sk(sk);
struct tcp_fastopen_request *fo = tp->fastopen_req;
- struct page_frag *pfrag = sk_page_frag(sk);
+ struct page_frag_cache *pfrag = sk_page_frag(sk);
+ unsigned int offset, size;
struct sk_buff *syn_data;
int space, err = 0;
+ struct page *page;
+ void *va;
tp->rx_opt.mss_clamp = tp->advmss; /* If MSS is not cached */
if (!tcp_fastopen_cookie_check(sk, &tp->rx_opt.mss_clamp, &fo->cookie))
@@ -3991,30 +3994,31 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn)
space = min_t(size_t, space, fo->size);
- if (space &&
- !skb_page_frag_refill(min_t(size_t, space, PAGE_SIZE),
- pfrag, sk->sk_allocation))
- goto fallback;
+ if (space) {
+ size = min_t(size_t, space, PAGE_SIZE);
+ page = page_frag_alloc_prepare(pfrag, &offset, &size, &va,
+ sk->sk_allocation);
+ if (!page)
+ goto fallback;
+ }
+
syn_data = tcp_stream_alloc_skb(sk, sk->sk_allocation, false);
if (!syn_data)
goto fallback;
memcpy(syn_data->cb, syn->cb, sizeof(syn->cb));
if (space) {
- space = min_t(size_t, space, pfrag->size - pfrag->offset);
+ space = min_t(size_t, space, size);
space = tcp_wmem_schedule(sk, space);
}
if (space) {
- space = copy_page_from_iter(pfrag->page, pfrag->offset,
- space, &fo->data->msg_iter);
+ space = _copy_from_iter(va, space, &fo->data->msg_iter);
if (unlikely(!space)) {
tcp_skb_tsorted_anchor_cleanup(syn_data);
kfree_skb(syn_data);
goto fallback;
}
- skb_fill_page_desc(syn_data, 0, pfrag->page,
- pfrag->offset, space);
- page_ref_inc(pfrag->page);
- pfrag->offset += space;
+ skb_fill_page_desc(syn_data, 0, page, offset, space);
+ page_frag_alloc_commit(pfrag, space);
skb_len_add(syn_data, space);
skb_zcopy_set(syn_data, fo->uarg, NULL);
}
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index ab504d31f0cd..86086b4bb55c 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -1405,7 +1405,7 @@ static int __ip6_append_data(struct sock *sk,
struct sk_buff_head *queue,
struct inet_cork_full *cork_full,
struct inet6_cork *v6_cork,
- struct page_frag *pfrag,
+ struct page_frag_cache *pfrag,
int getfrag(void *from, char *to, int offset,
int len, int odd, struct sk_buff *skb),
void *from, size_t length, int transhdrlen,
@@ -1746,32 +1746,39 @@ static int __ip6_append_data(struct sock *sk,
copy = err;
wmem_alloc_delta += copy;
} else if (!zc) {
+ unsigned int frag_offset, frag_size;
int i = skb_shinfo(skb)->nr_frags;
+ struct page *page;
+ void *va;
err = -ENOMEM;
- if (!sk_page_frag_refill(sk, pfrag))
+ page = sk_page_frag_alloc_prepare(sk, pfrag,
+ &frag_offset,
+ &frag_size, &va);
+ if (!page)
goto error;
skb_zcopy_downgrade_managed(skb);
- if (!skb_can_coalesce(skb, i, pfrag->page,
- pfrag->offset)) {
+ copy = min_t(int, copy, frag_size);
+
+ if (!skb_can_coalesce(skb, i, page, frag_offset)) {
err = -EMSGSIZE;
if (i == MAX_SKB_FRAGS)
goto error;
- __skb_fill_page_desc(skb, i, pfrag->page,
- pfrag->offset, 0);
+ __skb_fill_page_desc(skb, i, page, frag_offset,
+ copy);
skb_shinfo(skb)->nr_frags = ++i;
- get_page(pfrag->page);
+ page_frag_alloc_commit(pfrag, copy);
+ } else {
+ skb_frag_size_add(
+ &skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_alloc_commit_noref(pfrag, copy);
}
- copy = min_t(int, copy, pfrag->size - pfrag->offset);
- if (getfrag(from,
- page_address(pfrag->page) + pfrag->offset,
- offset, copy, skb->len, skb) < 0)
+
+ if (getfrag(from, va, offset, copy, skb->len, skb) < 0)
goto error_efault;
- pfrag->offset += copy;
- skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
skb->len += copy;
skb->data_len += copy;
skb->truesize += copy;
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index eec6c56b7f3e..e52ddf716fa5 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -803,13 +803,17 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
while (msg_data_left(msg)) {
bool merge = true;
int i = skb_shinfo(skb)->nr_frags;
- struct page_frag *pfrag = sk_page_frag(sk);
-
- if (!sk_page_frag_refill(sk, pfrag))
+ struct page_frag_cache *pfrag = sk_page_frag(sk);
+ unsigned int offset, size;
+ struct page *page;
+ void *va;
+
+ page = sk_page_frag_alloc_prepare(sk, pfrag, &offset, &size,
+ &va);
+ if (!page)
goto wait_for_memory;
- if (!skb_can_coalesce(skb, i, pfrag->page,
- pfrag->offset)) {
+ if (!skb_can_coalesce(skb, i, page, offset)) {
if (i == MAX_SKB_FRAGS) {
struct sk_buff *tskb;
@@ -850,14 +854,12 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
if (head != skb)
head->truesize += copy;
} else {
- copy = min_t(int, msg_data_left(msg),
- pfrag->size - pfrag->offset);
+ copy = min_t(int, msg_data_left(msg), size);
if (!sk_wmem_schedule(sk, copy))
goto wait_for_memory;
err = skb_copy_to_va_nocache(sk, &msg->msg_iter, skb,
- page_address(pfrag->page) +
- pfrag->offset, copy);
+ va, copy);
if (err)
goto out_error;
@@ -865,13 +867,12 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
if (merge) {
skb_frag_size_add(
&skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_alloc_commit_noref(pfrag, copy);
} else {
- skb_fill_page_desc(skb, i, pfrag->page,
- pfrag->offset, copy);
- get_page(pfrag->page);
+ skb_fill_page_desc(skb, i, page, offset, copy);
+ page_frag_alloc_commit(pfrag, copy);
}
- pfrag->offset += copy;
}
copied += copy;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 0d536b183a6c..3d27ede2c781 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -960,17 +960,16 @@ static bool mptcp_skb_can_collapse_to(u64 write_seq,
}
/* we can append data to the given data frag if:
- * - there is space available in the backing page_frag
- * - the data frag tail matches the current page_frag free offset
+ * - the data frag tail matches the current page and offset
* - the data frag end sequence number matches the current write seq
*/
static bool mptcp_frag_can_collapse_to(const struct mptcp_sock *msk,
- const struct page_frag *pfrag,
+ const struct page *page,
+ const unsigned int offset,
const struct mptcp_data_frag *df)
{
- return df && pfrag->page == df->page &&
- pfrag->size - pfrag->offset > 0 &&
- pfrag->offset == (df->offset + df->data_len) &&
+ return df && page == df->page &&
+ offset == (df->offset + df->data_len) &&
df->data_seq + df->data_len == msk->write_seq;
}
@@ -1085,30 +1084,36 @@ static void mptcp_enter_memory_pressure(struct sock *sk)
/* ensure we get enough memory for the frag hdr, beyond some minimal amount of
* data
*/
-static bool mptcp_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
+static struct page *mptcp_page_frag_alloc_prepare(struct sock *sk,
+ struct page_frag_cache *pfrag,
+ unsigned int *offset,
+ unsigned int *size, void **va)
{
- if (likely(skb_page_frag_refill(32U + sizeof(struct mptcp_data_frag),
- pfrag, sk->sk_allocation)))
- return true;
+ struct page *page;
+
+ page = page_frag_alloc_prepare(pfrag, offset, size, va,
+ sk->sk_allocation);
+ if (likely(page))
+ return page;
mptcp_enter_memory_pressure(sk);
- return false;
+ return NULL;
}
static struct mptcp_data_frag *
-mptcp_carve_data_frag(const struct mptcp_sock *msk, struct page_frag *pfrag,
- int orig_offset)
+mptcp_carve_data_frag(const struct mptcp_sock *msk, struct page *page,
+ unsigned int orig_offset)
{
int offset = ALIGN(orig_offset, sizeof(long));
struct mptcp_data_frag *dfrag;
- dfrag = (struct mptcp_data_frag *)(page_to_virt(pfrag->page) + offset);
+ dfrag = (struct mptcp_data_frag *)(page_to_virt(page) + offset);
dfrag->data_len = 0;
dfrag->data_seq = msk->write_seq;
dfrag->overhead = offset - orig_offset + sizeof(struct mptcp_data_frag);
dfrag->offset = offset + sizeof(struct mptcp_data_frag);
dfrag->already_sent = 0;
- dfrag->page = pfrag->page;
+ dfrag->page = page;
return dfrag;
}
@@ -1795,7 +1800,7 @@ static u32 mptcp_send_limit(const struct sock *sk)
static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
{
struct mptcp_sock *msk = mptcp_sk(sk);
- struct page_frag *pfrag;
+ struct page_frag_cache *pfrag;
size_t copied = 0;
int ret = 0;
long timeo;
@@ -1834,9 +1839,12 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
while (msg_data_left(msg)) {
int total_ts, frag_truesize = 0;
struct mptcp_data_frag *dfrag;
+ unsigned int offset = 0, size;
bool dfrag_collapsed;
- size_t psize, offset;
+ struct page *page;
u32 copy_limit;
+ size_t psize;
+ void *va;
/* ensure fitting the notsent_lowat() constraint */
copy_limit = mptcp_send_limit(sk);
@@ -1847,21 +1855,27 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
* page allocator
*/
dfrag = mptcp_pending_tail(sk);
- dfrag_collapsed = mptcp_frag_can_collapse_to(msk, pfrag, dfrag);
+ size = 1;
+ page = page_frag_alloc_probe(pfrag, &offset, &size, &va);
+ dfrag_collapsed = mptcp_frag_can_collapse_to(msk, page, offset,
+ dfrag);
if (!dfrag_collapsed) {
- if (!mptcp_page_frag_refill(sk, pfrag))
+ size = 32U + sizeof(struct mptcp_data_frag);
+ page = mptcp_page_frag_alloc_prepare(sk, pfrag, &offset,
+ &size, &va);
+ if (!page)
goto wait_for_memory;
- dfrag = mptcp_carve_data_frag(msk, pfrag, pfrag->offset);
+ dfrag = mptcp_carve_data_frag(msk, page, offset);
frag_truesize = dfrag->overhead;
+ va += dfrag->overhead;
}
/* we do not bound vs wspace, to allow a single packet.
* memory accounting will prevent execessive memory usage
* anyway
*/
- offset = dfrag->offset + dfrag->data_len;
- psize = pfrag->size - offset;
+ psize = size - frag_truesize;
psize = min_t(size_t, psize, msg_data_left(msg));
psize = min_t(size_t, psize, copy_limit);
total_ts = psize + frag_truesize;
@@ -1869,8 +1883,7 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
if (!sk_wmem_schedule(sk, total_ts))
goto wait_for_memory;
- ret = do_copy_data_nocache(sk, psize, &msg->msg_iter,
- page_address(dfrag->page) + offset);
+ ret = do_copy_data_nocache(sk, psize, &msg->msg_iter, va);
if (ret)
goto do_error;
@@ -1879,7 +1892,6 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
copied += psize;
dfrag->data_len += psize;
frag_truesize += psize;
- pfrag->offset += frag_truesize;
WRITE_ONCE(msk->write_seq, msk->write_seq + psize);
/* charge data on mptcp pending queue to the msk socket
@@ -1887,11 +1899,14 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
*/
sk_wmem_queued_add(sk, frag_truesize);
if (!dfrag_collapsed) {
- get_page(dfrag->page);
+ page_frag_alloc_commit(pfrag, frag_truesize);
list_add_tail(&dfrag->list, &msk->rtx_queue);
if (!msk->first_pending)
WRITE_ONCE(msk->first_pending, dfrag);
+ } else {
+ page_frag_alloc_commit_noref(pfrag, frag_truesize);
}
+
pr_debug("msk=%p dfrag at seq=%llu len=%u sent=%u new=%d", msk,
dfrag->data_seq, dfrag->data_len, dfrag->already_sent,
!dfrag_collapsed);
diff --git a/net/sched/em_meta.c b/net/sched/em_meta.c
index 8996c73c9779..4da465af972f 100644
--- a/net/sched/em_meta.c
+++ b/net/sched/em_meta.c
@@ -590,7 +590,7 @@ META_COLLECTOR(int_sk_sendmsg_off)
*err = -1;
return;
}
- dst->value = sk->sk_frag.offset;
+ dst->value = page_frag_cache_page_offset(&sk->sk_frag);
}
META_COLLECTOR(int_sk_write_pend)
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index dc063c2c7950..02925c25ae12 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -253,24 +253,42 @@ static void tls_device_resync_tx(struct sock *sk, struct tls_context *tls_ctx,
}
static void tls_append_frag(struct tls_record_info *record,
- struct page_frag *pfrag,
- int size)
+ struct page_frag_cache *pfrag, struct page *page,
+ unsigned int offset, unsigned int size)
{
skb_frag_t *frag;
frag = &record->frags[record->num_frags - 1];
- if (skb_frag_page(frag) == pfrag->page &&
- skb_frag_off(frag) + skb_frag_size(frag) == pfrag->offset) {
+ if (skb_frag_page(frag) == page &&
+ skb_frag_off(frag) + skb_frag_size(frag) == offset) {
skb_frag_size_add(frag, size);
+ page_frag_alloc_commit_noref(pfrag, size);
} else {
++frag;
- skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
- size);
+ skb_frag_fill_page_desc(frag, page, offset, size);
++record->num_frags;
- get_page(pfrag->page);
+ page_frag_alloc_commit(pfrag, size);
+ }
+
+ record->len += size;
+}
+
+static void tls_append_page(struct tls_record_info *record, struct page *page,
+ unsigned int offset, unsigned int size)
+{
+ skb_frag_t *frag;
+
+ frag = &record->frags[record->num_frags - 1];
+ if (skb_frag_page(frag) == page &&
+ skb_frag_off(frag) + skb_frag_size(frag) == offset) {
+ skb_frag_size_add(frag, size);
+ } else {
+ ++frag;
+ skb_frag_fill_page_desc(frag, page, offset, size);
+ ++record->num_frags;
+ get_page(page);
}
- pfrag->offset += size;
record->len += size;
}
@@ -311,11 +329,12 @@ static int tls_push_record(struct sock *sk,
static void tls_device_record_close(struct sock *sk,
struct tls_context *ctx,
struct tls_record_info *record,
- struct page_frag *pfrag,
+ struct page_frag_cache *pfrag,
unsigned char record_type)
{
struct tls_prot_info *prot = &ctx->prot_info;
- struct page_frag dummy_tag_frag;
+ unsigned int offset, size;
+ struct page *page;
/* append tag
* device will fill in the tag, we just need to append a placeholder
@@ -323,13 +342,14 @@ static void tls_device_record_close(struct sock *sk,
* increases frag count)
* if we can't allocate memory now use the dummy page
*/
- if (unlikely(pfrag->size - pfrag->offset < prot->tag_size) &&
- !skb_page_frag_refill(prot->tag_size, pfrag, sk->sk_allocation)) {
- dummy_tag_frag.page = dummy_page;
- dummy_tag_frag.offset = 0;
- pfrag = &dummy_tag_frag;
+ size = prot->tag_size;
+ page = page_frag_alloc_pg_prepare(pfrag, &offset, &size,
+ sk->sk_allocation);
+ if (unlikely(!page)) {
+ tls_append_page(record, dummy_page, 0, prot->tag_size);
+ } else {
+ tls_append_frag(record, pfrag, page, offset, prot->tag_size);
}
- tls_append_frag(record, pfrag, prot->tag_size);
/* fill prepend */
tls_fill_prepend(ctx, skb_frag_address(&record->frags[0]),
@@ -337,57 +357,52 @@ static void tls_device_record_close(struct sock *sk,
record_type);
}
-static int tls_create_new_record(struct tls_offload_context_tx *offload_ctx,
- struct page_frag *pfrag,
+static int tls_create_new_record(struct sock *sk,
+ struct tls_offload_context_tx *offload_ctx,
+ struct page_frag_cache *pfrag,
size_t prepend_size)
{
struct tls_record_info *record;
+ unsigned int offset;
+ struct page *page;
skb_frag_t *frag;
record = kmalloc(sizeof(*record), GFP_KERNEL);
if (!record)
return -ENOMEM;
- frag = &record->frags[0];
- skb_frag_fill_page_desc(frag, pfrag->page, pfrag->offset,
- prepend_size);
-
- get_page(pfrag->page);
- pfrag->offset += prepend_size;
+ page = page_frag_alloc_pg(pfrag, &offset, prepend_size,
+ sk->sk_allocation);
+ if (!page) {
+ kfree(record);
+ READ_ONCE(sk->sk_prot)->enter_memory_pressure(sk);
+ sk_stream_moderate_sndbuf(sk);
+ return -ENOMEM;
+ }
+ frag = &record->frags[0];
+ skb_frag_fill_page_desc(frag, page, offset, prepend_size);
record->num_frags = 1;
record->len = prepend_size;
offload_ctx->open_record = record;
return 0;
}
-static int tls_do_allocation(struct sock *sk,
- struct tls_offload_context_tx *offload_ctx,
- struct page_frag *pfrag,
- size_t prepend_size)
+static struct page *tls_do_allocation(struct sock *sk,
+ struct tls_offload_context_tx *ctx,
+ struct page_frag_cache *pfrag,
+ size_t prepend_size, unsigned int *offset,
+ unsigned int *size, void **va)
{
- int ret;
-
- if (!offload_ctx->open_record) {
- if (unlikely(!skb_page_frag_refill(prepend_size, pfrag,
- sk->sk_allocation))) {
- READ_ONCE(sk->sk_prot)->enter_memory_pressure(sk);
- sk_stream_moderate_sndbuf(sk);
- return -ENOMEM;
- }
+ if (!ctx->open_record) {
+ int ret;
- ret = tls_create_new_record(offload_ctx, pfrag, prepend_size);
+ ret = tls_create_new_record(sk, ctx, pfrag, prepend_size);
if (ret)
- return ret;
-
- if (pfrag->size > pfrag->offset)
- return 0;
+ return NULL;
}
- if (!sk_page_frag_refill(sk, pfrag))
- return -ENOMEM;
-
- return 0;
+ return sk_page_frag_alloc_prepare(sk, pfrag, offset, size, va);
}
static int tls_device_copy_data(void *addr, size_t bytes, struct iov_iter *i)
@@ -424,8 +439,8 @@ static int tls_push_data(struct sock *sk,
struct tls_prot_info *prot = &tls_ctx->prot_info;
struct tls_offload_context_tx *ctx = tls_offload_ctx_tx(tls_ctx);
struct tls_record_info *record;
+ struct page_frag_cache *pfrag;
int tls_push_record_flags;
- struct page_frag *pfrag;
size_t orig_size = size;
u32 max_open_record_len;
bool more = false;
@@ -462,8 +477,13 @@ static int tls_push_data(struct sock *sk,
max_open_record_len = TLS_MAX_PAYLOAD_SIZE +
prot->prepend_size;
do {
- rc = tls_do_allocation(sk, ctx, pfrag, prot->prepend_size);
- if (unlikely(rc)) {
+ unsigned int frag_offset, frag_size;
+ struct page *page;
+ void *va;
+
+ page = tls_do_allocation(sk, ctx, pfrag, prot->prepend_size,
+ &frag_offset, &frag_size, &va);
+ if (unlikely(!page)) {
rc = sk_stream_wait_memory(sk, &timeo);
if (!rc)
continue;
@@ -491,8 +511,8 @@ static int tls_push_data(struct sock *sk,
copy = min_t(size_t, size, max_open_record_len - record->len);
if (copy && (flags & MSG_SPLICE_PAGES)) {
- struct page_frag zc_pfrag;
- struct page **pages = &zc_pfrag.page;
+ struct page *splice_page;
+ struct page **pages = &splice_page;
size_t off;
rc = iov_iter_extract_pages(iter, &pages,
@@ -504,24 +524,21 @@ static int tls_push_data(struct sock *sk,
}
copy = rc;
- if (WARN_ON_ONCE(!sendpage_ok(zc_pfrag.page))) {
+ if (WARN_ON_ONCE(!sendpage_ok(splice_page))) {
iov_iter_revert(iter, copy);
rc = -EIO;
goto handle_error;
}
- zc_pfrag.offset = off;
- zc_pfrag.size = copy;
- tls_append_frag(record, &zc_pfrag, copy);
+ tls_append_page(record, splice_page, off, copy);
} else if (copy) {
- copy = min_t(size_t, copy, pfrag->size - pfrag->offset);
+ copy = min_t(size_t, copy, frag_size);
- rc = tls_device_copy_data(page_address(pfrag->page) +
- pfrag->offset, copy,
- iter);
+ rc = tls_device_copy_data(va, copy, iter);
if (rc)
goto handle_error;
- tls_append_frag(record, pfrag, copy);
+
+ tls_append_frag(record, pfrag, page, frag_offset, copy);
}
size -= copy;
--
2.33.0
* Re: [PATCH net-next v13 12/14] net: replace page_frag with page_frag_cache
2024-08-08 12:37 ` [PATCH net-next v13 12/14] net: replace page_frag with page_frag_cache Yunsheng Lin
@ 2024-08-14 22:01 ` Alexander H Duyck
2024-08-18 14:17 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander H Duyck @ 2024-08-14 22:01 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, Mat Martineau, Ayush Sawal, Eric Dumazet,
Willem de Bruijn, Jason Wang, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, John Fastabend,
Jakub Sitnicki, David Ahern, Matthieu Baerts, Geliang Tang,
Jamal Hadi Salim, Cong Wang, Jiri Pirko, Boris Pismenny, bpf,
mptcp
On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
> Use the newly introduced prepare/probe/commit API to
> replace page_frag with page_frag_cache for sk_page_frag().
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> Acked-by: Mat Martineau <martineau@kernel.org>
> ---
> .../chelsio/inline_crypto/chtls/chtls.h | 3 -
> .../chelsio/inline_crypto/chtls/chtls_io.c | 100 ++++---------
> .../chelsio/inline_crypto/chtls/chtls_main.c | 3 -
> drivers/net/tun.c | 48 +++---
> include/linux/sched.h | 2 +-
> include/net/sock.h | 14 +-
> kernel/exit.c | 3 +-
> kernel/fork.c | 3 +-
> net/core/skbuff.c | 59 +++++---
> net/core/skmsg.c | 22 +--
> net/core/sock.c | 46 ++++--
> net/ipv4/ip_output.c | 33 +++--
> net/ipv4/tcp.c | 32 ++--
> net/ipv4/tcp_output.c | 28 ++--
> net/ipv6/ip6_output.c | 33 +++--
> net/kcm/kcmsock.c | 27 ++--
> net/mptcp/protocol.c | 67 +++++----
> net/sched/em_meta.c | 2 +-
> net/tls/tls_device.c | 137 ++++++++++--------
> 19 files changed, 347 insertions(+), 315 deletions(-)
>
> diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
> index 7ff82b6778ba..fe2b6a8ef718 100644
> --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
> +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
> @@ -234,7 +234,6 @@ struct chtls_dev {
> struct list_head list_node;
> struct list_head rcu_node;
> struct list_head na_node;
> - unsigned int send_page_order;
> int max_host_sndbuf;
> u32 round_robin_cnt;
> struct key_map kmap;
> @@ -453,8 +452,6 @@ enum {
>
> /* The ULP mode/submode of an skbuff */
> #define skb_ulp_mode(skb) (ULP_SKB_CB(skb)->ulp_mode)
> -#define TCP_PAGE(sk) (sk->sk_frag.page)
> -#define TCP_OFF(sk) (sk->sk_frag.offset)
>
> static inline struct chtls_dev *to_chtls_dev(struct tls_toe_device *tlsdev)
> {
> diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
> index d567e42e1760..334381c1587f 100644
> --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
> +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
> @@ -825,12 +825,6 @@ void skb_entail(struct sock *sk, struct sk_buff *skb, int flags)
> ULP_SKB_CB(skb)->flags = flags;
> __skb_queue_tail(&csk->txq, skb);
> sk->sk_wmem_queued += skb->truesize;
> -
> - if (TCP_PAGE(sk) && TCP_OFF(sk)) {
> - put_page(TCP_PAGE(sk));
> - TCP_PAGE(sk) = NULL;
> - TCP_OFF(sk) = 0;
> - }
> }
>
> static struct sk_buff *get_tx_skb(struct sock *sk, int size)
> @@ -882,16 +876,12 @@ static void push_frames_if_head(struct sock *sk)
> chtls_push_frames(csk, 1);
> }
>
> -static int chtls_skb_copy_to_page_nocache(struct sock *sk,
> - struct iov_iter *from,
> - struct sk_buff *skb,
> - struct page *page,
> - int off, int copy)
> +static int chtls_skb_copy_to_va_nocache(struct sock *sk, struct iov_iter *from,
> + struct sk_buff *skb, char *va, int copy)
> {
> int err;
>
> - err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) +
> - off, copy, skb->len);
> + err = skb_do_copy_data_nocache(sk, skb, from, va, copy, skb->len);
> if (err)
> return err;
>
> @@ -1114,82 +1104,44 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
> if (err)
> goto do_fault;
> } else {
> + struct page_frag_cache *pfrag = &sk->sk_frag;
Is this even valid? Shouldn't it be using sk_page_frag to get the
reference here? Seems like it might be trying to instantiate an unused
cache.
As per my earlier suggestion this could be made very simple if we are
just pulling a bio_vec out from the page cache at the start. With that
we could essentially plug it into the TCP_PAGE/TCP_OFF block here and
most of it would just function the same.
> int i = skb_shinfo(skb)->nr_frags;
> - struct page *page = TCP_PAGE(sk);
> - int pg_size = PAGE_SIZE;
> - int off = TCP_OFF(sk);
> - bool merge;
> -
> - if (page)
> - pg_size = page_size(page);
> - if (off < pg_size &&
> - skb_can_coalesce(skb, i, page, off)) {
> + unsigned int offset, fragsz;
> + bool merge = false;
> + struct page *page;
> + void *va;
> +
> + fragsz = 32U;
> + page = page_frag_alloc_prepare(pfrag, &offset, &fragsz,
> + &va, sk->sk_allocation);
> + if (unlikely(!page))
> + goto wait_for_memory;
> +
> + if (skb_can_coalesce(skb, i, page, offset))
> merge = true;
> - goto copy;
> - }
> - merge = false;
> - if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) :
> - MAX_SKB_FRAGS))
> + else if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) :
> + MAX_SKB_FRAGS))
> goto new_buf;
>
> - if (page && off == pg_size) {
> - put_page(page);
> - TCP_PAGE(sk) = page = NULL;
> - pg_size = PAGE_SIZE;
> - }
> -
> - if (!page) {
> - gfp_t gfp = sk->sk_allocation;
> - int order = cdev->send_page_order;
> -
> - if (order) {
> - page = alloc_pages(gfp | __GFP_COMP |
> - __GFP_NOWARN |
> - __GFP_NORETRY,
> - order);
> - if (page)
> - pg_size <<= order;
> - }
> - if (!page) {
> - page = alloc_page(gfp);
> - pg_size = PAGE_SIZE;
> - }
> - if (!page)
> - goto wait_for_memory;
> - off = 0;
> - }
> -copy:
> - if (copy > pg_size - off)
> - copy = pg_size - off;
> + copy = min_t(int, copy, fragsz);
> if (is_tls_tx(csk))
> copy = min_t(int, copy, csk->tlshws.txleft);
>
> - err = chtls_skb_copy_to_page_nocache(sk, &msg->msg_iter,
> - skb, page,
> - off, copy);
> - if (unlikely(err)) {
> - if (!TCP_PAGE(sk)) {
> - TCP_PAGE(sk) = page;
> - TCP_OFF(sk) = 0;
> - }
> + err = chtls_skb_copy_to_va_nocache(sk, &msg->msg_iter,
> + skb, va, copy);
> + if (unlikely(err))
> goto do_fault;
> - }
> +
> /* Update the skb. */
> if (merge) {
> skb_frag_size_add(
> &skb_shinfo(skb)->frags[i - 1],
> copy);
> + page_frag_alloc_commit_noref(pfrag, copy);
> } else {
> - skb_fill_page_desc(skb, i, page, off, copy);
> - if (off + copy < pg_size) {
> - /* space left keep page */
> - get_page(page);
> - TCP_PAGE(sk) = page;
> - } else {
> - TCP_PAGE(sk) = NULL;
> - }
> + skb_fill_page_desc(skb, i, page, offset, copy);
> + page_frag_alloc_commit(pfrag, copy);
> }
> - TCP_OFF(sk) = off + copy;
> }
> if (unlikely(skb->len == mss))
> tx_skb_finalize(skb);
Really there is so much refactoring here that it is hard to tell what
is what. I would suggest just plugging in an intermediary value and
saving the refactor for later.
> diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
> index 455a54708be4..ba88b2fc7cd8 100644
> --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
> +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
> @@ -34,7 +34,6 @@ static DEFINE_MUTEX(notify_mutex);
> static RAW_NOTIFIER_HEAD(listen_notify_list);
> static struct proto chtls_cpl_prot, chtls_cpl_protv6;
> struct request_sock_ops chtls_rsk_ops, chtls_rsk_opsv6;
> -static uint send_page_order = (14 - PAGE_SHIFT < 0) ? 0 : 14 - PAGE_SHIFT;
>
> static void register_listen_notifier(struct notifier_block *nb)
> {
> @@ -273,8 +272,6 @@ static void *chtls_uld_add(const struct cxgb4_lld_info *info)
> INIT_WORK(&cdev->deferq_task, process_deferq);
> spin_lock_init(&cdev->listen_lock);
> spin_lock_init(&cdev->idr_lock);
> - cdev->send_page_order = min_t(uint, get_order(32768),
> - send_page_order);
> cdev->max_host_sndbuf = 48 * 1024;
>
> if (lldi->vr->key.size)
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index 1d06c560c5e6..51df92fd60db 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1598,21 +1598,19 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
> }
>
> static struct sk_buff *__tun_build_skb(struct tun_file *tfile,
> - struct page_frag *alloc_frag, char *buf,
> - int buflen, int len, int pad)
> + char *buf, int buflen, int len, int pad)
> {
> struct sk_buff *skb = build_skb(buf, buflen);
>
> - if (!skb)
> + if (!skb) {
> + page_frag_free_va(buf);
> return ERR_PTR(-ENOMEM);
> + }
>
> skb_reserve(skb, pad);
> skb_put(skb, len);
> skb_set_owner_w(skb, tfile->socket.sk);
>
> - get_page(alloc_frag->page);
> - alloc_frag->offset += buflen;
> -
Rather than freeing the buf it would be better if you were to just
stick to the existing pattern and commit the alloc_frag at the end here
instead of calling get_page.
> return skb;
> }
>
> @@ -1660,7 +1658,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> struct virtio_net_hdr *hdr,
> int len, int *skb_xdp)
> {
> - struct page_frag *alloc_frag = ¤t->task_frag;
> + struct page_frag_cache *alloc_frag = ¤t->task_frag;
> struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
> struct bpf_prog *xdp_prog;
> int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> @@ -1676,16 +1674,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> buflen += SKB_DATA_ALIGN(len + pad);
> rcu_read_unlock();
>
> - alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES);
> - if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL)))
> + buf = page_frag_alloc_va_align(alloc_frag, buflen, GFP_KERNEL,
> + SMP_CACHE_BYTES);
> + if (unlikely(!buf))
> return ERR_PTR(-ENOMEM);
>
> - buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
> - copied = copy_page_from_iter(alloc_frag->page,
> - alloc_frag->offset + pad,
> - len, from);
> - if (copied != len)
> + copied = copy_from_iter(buf + pad, len, from);
> + if (copied != len) {
> + page_frag_alloc_abort(alloc_frag, buflen);
> return ERR_PTR(-EFAULT);
> + }
>
> /* There's a small window that XDP may be set after the check
> * of xdp_prog above, this should be rare and for simplicity
> @@ -1693,8 +1691,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> */
> if (hdr->gso_type || !xdp_prog) {
> *skb_xdp = 1;
> - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len,
> - pad);
> + return __tun_build_skb(tfile, buf, buflen, len, pad);
> }
>
> *skb_xdp = 0;
> @@ -1711,21 +1708,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> xdp_prepare_buff(&xdp, buf, pad, len, false);
>
> act = bpf_prog_run_xdp(xdp_prog, &xdp);
> - if (act == XDP_REDIRECT || act == XDP_TX) {
> - get_page(alloc_frag->page);
> - alloc_frag->offset += buflen;
> - }
> err = tun_xdp_act(tun, xdp_prog, &xdp, act);
> - if (err < 0) {
> - if (act == XDP_REDIRECT || act == XDP_TX)
> - put_page(alloc_frag->page);
> - goto out;
> - }
> -
> if (err == XDP_REDIRECT)
> xdp_do_flush();
> - if (err != XDP_PASS)
> +
> + if (err == XDP_REDIRECT || err == XDP_TX) {
> + goto out;
> + } else if (err < 0 || err != XDP_PASS) {
> + page_frag_alloc_abort(alloc_frag, buflen);
> goto out;
> + }
>
Your abort function here is not necessarily safe. It is assuming that
nothing else might have taken a reference to the page or modified it in
some way. Generally we shouldn't allow rewinding the pointer until we
check the page count to guarantee nobody else is now working with a
copy of the page.
> pad = xdp.data - xdp.data_hard_start;
> len = xdp.data_end - xdp.data;
> @@ -1734,7 +1726,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
> rcu_read_unlock();
> local_bh_enable();
>
> - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, pad);
> + return __tun_build_skb(tfile, buf, buflen, len, pad);
>
> out:
> bpf_net_ctx_clear(bpf_net_ctx);
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index f8d150343d42..bb9a8e9d6d2d 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1355,7 +1355,7 @@ struct task_struct {
> /* Cache last used pipe for splice(): */
> struct pipe_inode_info *splice_pipe;
>
> - struct page_frag task_frag;
> + struct page_frag_cache task_frag;
>
> #ifdef CONFIG_TASK_DELAY_ACCT
> struct task_delay_info *delays;
> diff --git a/include/net/sock.h b/include/net/sock.h
> index b5e702298ab7..8f6cc0dd2f4f 100644
>
It occurs to me that bio_vec and page_frag are essentially the same
thing. Instead of having your functions pass a bio_vec it might make
more sense to work with just a page_frag as the unit to be probed and
committed with the page_frag_cache being what is borrowed from.
With that I think you could clean up a bunch of the churn this code is
generating, as there is too much refactoring here to make it easy to
review.
If you were to change things so that you keep working with a page_frag,
just probing it out of the page_frag_cache and committing your change
back in, it would make the diff much more readable in my opinion.
The general idea would be that the page and offset should not change
from probe to commit, and the size could only be reduced relative to
what remains. That would be more readable than returning a page while
passing pointers to offset and length/size.
Also, as I mentioned earlier, we cannot roll the offset backwards. It
needs to always make forward progress unless we own all instances of
the page, as it is possible that a section was shared out from
underneath us when we showed some other entity the memory.
^ permalink raw reply [flat|nested] 47+ messages in thread

* Re: [PATCH net-next v13 12/14] net: replace page_frag with page_frag_cache
2024-08-14 22:01 ` Alexander H Duyck
@ 2024-08-18 14:17 ` Yunsheng Lin
0 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-18 14:17 UTC (permalink / raw)
To: Alexander H Duyck, Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, Mat Martineau, Ayush Sawal, Eric Dumazet,
Willem de Bruijn, Jason Wang, Ingo Molnar, Peter Zijlstra,
Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, John Fastabend,
Jakub Sitnicki, David Ahern, Matthieu Baerts, Geliang Tang,
Jamal Hadi Salim, Cong Wang, Jiri Pirko, Boris Pismenny, bpf,
mptcp
On 8/15/2024 6:01 AM, Alexander H Duyck wrote:
> On Thu, 2024-08-08 at 20:37 +0800, Yunsheng Lin wrote:
>> Use the newly introduced prepare/probe/commit API to
>> replace page_frag with page_frag_cache for sk_page_frag().
>>
>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
>> Acked-by: Mat Martineau <martineau@kernel.org>
>> ---
>> .../chelsio/inline_crypto/chtls/chtls.h | 3 -
>> .../chelsio/inline_crypto/chtls/chtls_io.c | 100 ++++---------
>> .../chelsio/inline_crypto/chtls/chtls_main.c | 3 -
>> drivers/net/tun.c | 48 +++---
>> include/linux/sched.h | 2 +-
>> include/net/sock.h | 14 +-
>> kernel/exit.c | 3 +-
>> kernel/fork.c | 3 +-
>> net/core/skbuff.c | 59 +++++---
>> net/core/skmsg.c | 22 +--
>> net/core/sock.c | 46 ++++--
>> net/ipv4/ip_output.c | 33 +++--
>> net/ipv4/tcp.c | 32 ++--
>> net/ipv4/tcp_output.c | 28 ++--
>> net/ipv6/ip6_output.c | 33 +++--
>> net/kcm/kcmsock.c | 27 ++--
>> net/mptcp/protocol.c | 67 +++++----
>> net/sched/em_meta.c | 2 +-
>> net/tls/tls_device.c | 137 ++++++++++--------
>> 19 files changed, 347 insertions(+), 315 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
>> index 7ff82b6778ba..fe2b6a8ef718 100644
>> --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
>> +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls.h
>> @@ -234,7 +234,6 @@ struct chtls_dev {
>> struct list_head list_node;
>> struct list_head rcu_node;
>> struct list_head na_node;
>> - unsigned int send_page_order;
>> int max_host_sndbuf;
>> u32 round_robin_cnt;
>> struct key_map kmap;
>> @@ -453,8 +452,6 @@ enum {
>>
>> /* The ULP mode/submode of an skbuff */
>> #define skb_ulp_mode(skb) (ULP_SKB_CB(skb)->ulp_mode)
>> -#define TCP_PAGE(sk) (sk->sk_frag.page)
>> -#define TCP_OFF(sk) (sk->sk_frag.offset)
>>
>> static inline struct chtls_dev *to_chtls_dev(struct tls_toe_device *tlsdev)
>> {
>> diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
>> index d567e42e1760..334381c1587f 100644
>> --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
>> +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_io.c
>> @@ -825,12 +825,6 @@ void skb_entail(struct sock *sk, struct sk_buff *skb, int flags)
>> ULP_SKB_CB(skb)->flags = flags;
>> __skb_queue_tail(&csk->txq, skb);
>> sk->sk_wmem_queued += skb->truesize;
>> -
>> - if (TCP_PAGE(sk) && TCP_OFF(sk)) {
>> - put_page(TCP_PAGE(sk));
>> - TCP_PAGE(sk) = NULL;
>> - TCP_OFF(sk) = 0;
>> - }
>> }
>>
>> static struct sk_buff *get_tx_skb(struct sock *sk, int size)
>> @@ -882,16 +876,12 @@ static void push_frames_if_head(struct sock *sk)
>> chtls_push_frames(csk, 1);
>> }
>>
>> -static int chtls_skb_copy_to_page_nocache(struct sock *sk,
>> - struct iov_iter *from,
>> - struct sk_buff *skb,
>> - struct page *page,
>> - int off, int copy)
>> +static int chtls_skb_copy_to_va_nocache(struct sock *sk, struct iov_iter *from,
>> + struct sk_buff *skb, char *va, int copy)
>> {
>> int err;
>>
>> - err = skb_do_copy_data_nocache(sk, skb, from, page_address(page) +
>> - off, copy, skb->len);
>> + err = skb_do_copy_data_nocache(sk, skb, from, va, copy, skb->len);
>> if (err)
>> return err;
>>
>> @@ -1114,82 +1104,44 @@ int chtls_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
>> if (err)
>> goto do_fault;
>> } else {
>> + struct page_frag_cache *pfrag = &sk->sk_frag;
>
> Is this even valid? Shouldn't it be using sk_page_frag to get the
chtls_sendmsg() only uses sk->sk_frag, see below.
> reference here? Seems like it might be trying to instantiate an unused
> cache.
I am not sure if I understand what you meant by "trying to instantiate
an unused cache". sk->sk_frag is supposed to be instantiated in
sock_init_data_uid() by calling page_frag_cache_init() in this patch.
>
> As per my earlier suggestion this could be made very simple if we are
> just pulling a bio_vec out from the page cache at the start. With that
> we could essentially plug it into the TCP_PAGE/TCP_OFF block here and
> most of it would just function the same.
I am not sure we have the same understanding of why chtls needs more
changes than other places when replacing page_frag with
page_frag_cache.
chtls_sendmsg() duplicated its own implementation of page_frag
refilling instead of using skb_page_frag_refill(); we can remove that
implementation by using the new API, which is why there are more
changes for chtls than elsewhere. Are you suggesting keeping chtls's
own implementation of page_frag refilling by 'plugging it into the
TCP_PAGE/TCP_OFF block' here?
>
>> int i = skb_shinfo(skb)->nr_frags;
>> - struct page *page = TCP_PAGE(sk);
The TCP_PAGE macro is defined as below; that is why sk->sk_frag is used
instead of sk_page_frag() in the chtls case, if I understand your
question correctly:
#define TCP_PAGE(sk) (sk->sk_frag.page)
#define TCP_OFF(sk) (sk->sk_frag.offset)
>> - int pg_size = PAGE_SIZE;
>> - int off = TCP_OFF(sk);
>> - bool merge;
>> -
>> - if (page)
>> - pg_size = page_size(page);
>> - if (off < pg_size &&
>> - skb_can_coalesce(skb, i, page, off)) {
>> + unsigned int offset, fragsz;
>> + bool merge = false;
>> + struct page *page;
>> + void *va;
>> +
>> + fragsz = 32U;
>> + page = page_frag_alloc_prepare(pfrag, &offset, &fragsz,
>> + &va, sk->sk_allocation);
>> + if (unlikely(!page))
>> + goto wait_for_memory;
>> +
>> + if (skb_can_coalesce(skb, i, page, offset))
>> merge = true;
>> - goto copy;
>> - }
>> - merge = false;
>> - if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) :
>> - MAX_SKB_FRAGS))
>> + else if (i == (is_tls_tx(csk) ? (MAX_SKB_FRAGS - 1) :
>> + MAX_SKB_FRAGS))
>> goto new_buf;
>>
>> - if (page && off == pg_size) {
>> - put_page(page);
>> - TCP_PAGE(sk) = page = NULL;
>> - pg_size = PAGE_SIZE;
>> - }
>> -
>> - if (!page) {
>> - gfp_t gfp = sk->sk_allocation;
>> - int order = cdev->send_page_order;
>> -
>> - if (order) {
>> - page = alloc_pages(gfp | __GFP_COMP |
>> - __GFP_NOWARN |
>> - __GFP_NORETRY,
>> - order);
>> - if (page)
>> - pg_size <<= order;
>> - }
>> - if (!page) {
>> - page = alloc_page(gfp);
>> - pg_size = PAGE_SIZE;
>> - }
>> - if (!page)
>> - goto wait_for_memory;
>> - off = 0;
>> - }
>> -copy:
>> - if (copy > pg_size - off)
>> - copy = pg_size - off;
>> + copy = min_t(int, copy, fragsz);
>> if (is_tls_tx(csk))
>> copy = min_t(int, copy, csk->tlshws.txleft);
>>
>> - err = chtls_skb_copy_to_page_nocache(sk, &msg->msg_iter,
>> - skb, page,
>> - off, copy);
>> - if (unlikely(err)) {
>> - if (!TCP_PAGE(sk)) {
>> - TCP_PAGE(sk) = page;
>> - TCP_OFF(sk) = 0;
>> - }
>> + err = chtls_skb_copy_to_va_nocache(sk, &msg->msg_iter,
>> + skb, va, copy);
>> + if (unlikely(err))
>> goto do_fault;
>> - }
>> +
>> /* Update the skb. */
>> if (merge) {
>> skb_frag_size_add(
>> &skb_shinfo(skb)->frags[i - 1],
>> copy);
>> + page_frag_alloc_commit_noref(pfrag, copy);
>> } else {
>> - skb_fill_page_desc(skb, i, page, off, copy);
>> - if (off + copy < pg_size) {
>> - /* space left keep page */
>> - get_page(page);
>> - TCP_PAGE(sk) = page;
>> - } else {
>> - TCP_PAGE(sk) = NULL;
>> - }
>> + skb_fill_page_desc(skb, i, page, offset, copy);
>> + page_frag_alloc_commit(pfrag, copy);
>> }
>> - TCP_OFF(sk) = off + copy;
>> }
>> if (unlikely(skb->len == mss))
>> tx_skb_finalize(skb);
>
> Really there is so much refactor here it is hard to tell what is what.
> I would suggest just trying to plug in an intermediary value and you
> can save the refactor for later.
I am not sure if your above suggestion works; even if it does, I am not
sure it is simple enough to just plug in an intermediary value, as the
fields in 'struct page_frag_cache' are quite different from the fields
in 'struct page_frag' as below, when replacing page_frag with
page_frag_cache for sk->sk_frag:
struct page_frag_cache {
unsigned long encoded_va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
__u16 remaining;
__u16 pagecnt_bias;
#else
__u32 remaining;
__u32 pagecnt_bias;
#endif
};
struct page_frag {
struct page *page;
#if (BITS_PER_LONG > 32) || (PAGE_SIZE >= 65536)
__u32 offset;
__u32 size;
#else
__u16 offset;
__u16 size;
#endif
};
>
>> diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
>> index 455a54708be4..ba88b2fc7cd8 100644
>> --- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
>> +++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
>> @@ -34,7 +34,6 @@ static DEFINE_MUTEX(notify_mutex);
>> static RAW_NOTIFIER_HEAD(listen_notify_list);
>> static struct proto chtls_cpl_prot, chtls_cpl_protv6;
>> struct request_sock_ops chtls_rsk_ops, chtls_rsk_opsv6;
>> -static uint send_page_order = (14 - PAGE_SHIFT < 0) ? 0 : 14 - PAGE_SHIFT;
>>
>> static void register_listen_notifier(struct notifier_block *nb)
>> {
>> @@ -273,8 +272,6 @@ static void *chtls_uld_add(const struct cxgb4_lld_info *info)
>> INIT_WORK(&cdev->deferq_task, process_deferq);
>> spin_lock_init(&cdev->listen_lock);
>> spin_lock_init(&cdev->idr_lock);
>> - cdev->send_page_order = min_t(uint, get_order(32768),
>> - send_page_order);
>> cdev->max_host_sndbuf = 48 * 1024;
>>
>> if (lldi->vr->key.size)
>> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
>> index 1d06c560c5e6..51df92fd60db 100644
>> --- a/drivers/net/tun.c
>> +++ b/drivers/net/tun.c
>> @@ -1598,21 +1598,19 @@ static bool tun_can_build_skb(struct tun_struct *tun, struct tun_file *tfile,
>> }
>>
>> static struct sk_buff *__tun_build_skb(struct tun_file *tfile,
>> - struct page_frag *alloc_frag, char *buf,
>> - int buflen, int len, int pad)
>> + char *buf, int buflen, int len, int pad)
>> {
>> struct sk_buff *skb = build_skb(buf, buflen);
>>
>> - if (!skb)
>> + if (!skb) {
>> + page_frag_free_va(buf);
>> return ERR_PTR(-ENOMEM);
>> + }
>>
>> skb_reserve(skb, pad);
>> skb_put(skb, len);
>> skb_set_owner_w(skb, tfile->socket.sk);
>>
>> - get_page(alloc_frag->page);
>> - alloc_frag->offset += buflen;
>> -
>
> Rather than freeing the buf it would be better if you were to just
> stick to the existing pattern and commit the alloc_frag at the end here
> instead of calling get_page.
>
>> return skb;
>> }
>>
>> @@ -1660,7 +1658,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>> struct virtio_net_hdr *hdr,
>> int len, int *skb_xdp)
>> {
>> - struct page_frag *alloc_frag = &current->task_frag;
>> + struct page_frag_cache *alloc_frag = &current->task_frag;
>> struct bpf_net_context __bpf_net_ctx, *bpf_net_ctx;
>> struct bpf_prog *xdp_prog;
>> int buflen = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>> @@ -1676,16 +1674,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>> buflen += SKB_DATA_ALIGN(len + pad);
>> rcu_read_unlock();
>>
>> - alloc_frag->offset = ALIGN((u64)alloc_frag->offset, SMP_CACHE_BYTES);
>> - if (unlikely(!skb_page_frag_refill(buflen, alloc_frag, GFP_KERNEL)))
>> + buf = page_frag_alloc_va_align(alloc_frag, buflen, GFP_KERNEL,
>> + SMP_CACHE_BYTES);
>> + if (unlikely(!buf))
>> return ERR_PTR(-ENOMEM);
>>
>> - buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
>> - copied = copy_page_from_iter(alloc_frag->page,
>> - alloc_frag->offset + pad,
>> - len, from);
>> - if (copied != len)
>> + copied = copy_from_iter(buf + pad, len, from);
>> + if (copied != len) {
>> + page_frag_alloc_abort(alloc_frag, buflen);
>> return ERR_PTR(-EFAULT);
>> + }
>>
>> /* There's a small window that XDP may be set after the check
>> * of xdp_prog above, this should be rare and for simplicity
>> @@ -1693,8 +1691,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>> */
>> if (hdr->gso_type || !xdp_prog) {
>> *skb_xdp = 1;
>> - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len,
>> - pad);
>> + return __tun_build_skb(tfile, buf, buflen, len, pad);
>> }
>>
>> *skb_xdp = 0;
>> @@ -1711,21 +1708,16 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>> xdp_prepare_buff(&xdp, buf, pad, len, false);
>>
>> act = bpf_prog_run_xdp(xdp_prog, &xdp);
>> - if (act == XDP_REDIRECT || act == XDP_TX) {
>> - get_page(alloc_frag->page);
>> - alloc_frag->offset += buflen;
The above is only executed for the XDP_REDIRECT and XDP_TX cases.
>> - }
>> err = tun_xdp_act(tun, xdp_prog, &xdp, act);
>> - if (err < 0) {
>> - if (act == XDP_REDIRECT || act == XDP_TX)
>> - put_page(alloc_frag->page);
And there is a put_page() immediately when xdp_do_redirect() or
tun_xdp_tx() fails in tun_xdp_act(), so could something else have
taken a reference to the page and modified it in some way even when
tun_xdp_act() returns an error? Could you be more specific about why
the above happens?
>> - goto out;
>> - }
>> -
>> if (err == XDP_REDIRECT)
>> xdp_do_flush();
>> - if (err != XDP_PASS)
>> +
>> + if (err == XDP_REDIRECT || err == XDP_TX) {
>> + goto out;
>> + } else if (err < 0 || err != XDP_PASS) {
>> + page_frag_alloc_abort(alloc_frag, buflen);
>> goto out;
>> + }
>>
>
> Your abort function here is not necessarily safe. It is assuming that
> nothing else might have taken a reference to the page or modified it in
> some way. Generally we shouldn't allow rewinding the pointer until we
> check the page count to guarantee nobody else is now working with a
> copy of the page.
>
>> pad = xdp.data - xdp.data_hard_start;
>> len = xdp.data_end - xdp.data;
>> @@ -1734,7 +1726,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
>> rcu_read_unlock();
>> local_bh_enable();
>>
>> - return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, pad);
>> + return __tun_build_skb(tfile, buf, buflen, len, pad);
>>
>> out:
>> bpf_net_ctx_clear(bpf_net_ctx);
>
...
^ permalink raw reply [flat|nested] 47+ messages in thread
* [PATCH net-next v13 13/14] mm: page_frag: update documentation for page_frag
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (11 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 12/14] net: replace page_frag with page_frag_cache Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-08 12:37 ` [PATCH net-next v13 14/14] mm: page_frag: add an entry in MAINTAINERS " Yunsheng Lin
2024-08-13 11:30 ` [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
14 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Jonathan Corbet, Andrew Morton, linux-mm, linux-doc
Update documentation about the design, implementation and API usage
of page_frag.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
Documentation/mm/page_frags.rst | 169 +++++++++++++++++++++++++++++++-
include/linux/page_frag_cache.h | 107 ++++++++++++++++++++
mm/page_frag_cache.c | 77 ++++++++++++++-
3 files changed, 350 insertions(+), 3 deletions(-)
diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..abdab415a8e2 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
==============
Page fragments
==============
@@ -40,4 +42,169 @@ page via a single call. The advantage to doing this is that it allows for
cleaning up the multiple references that were added to a page in order to
avoid calling get_page per allocation.
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+ +----------------------+
+ | page_frag API caller |
+ +----------------------+
+ |
+ |
+ v
+ +------------------------------------------------------------------+
+ | request page fragment |
+ +------------------------------------------------------------------+
+ | | |
+ | | |
+ | Cache not enough |
+ | | |
+ | +-----------------+ |
+ | | reuse old cache |--Usable-->|
+ | +-----------------+ |
+ | | |
+ | Not usable |
+ | | |
+ | v |
+ Cache empty +-----------------+ |
+ | | drain old cache | |
+ | +-----------------+ |
+ | | |
+ v_________________________________v |
+ | |
+ | |
+ _________________v_______________ |
+ | | Cache is enough
+ | | |
+ PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE | |
+ | | |
+ | PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE |
+ v | |
+ +----------------------------------+ | |
+ | refill cache with order > 0 page | | |
+ +----------------------------------+ | |
+ | | | |
+ | | | |
+ | Refill failed | |
+ | | | |
+ | v v |
+ | +------------------------------------+ |
+ | | refill cache with order 0 page | |
+ | +------------------------------------+ |
+ | | |
+ Refill succeed | |
+ | Refill succeed |
+ | | |
+ v v v
+ +------------------------------------------------------------------+
+ | allocate fragment from cache |
+ +------------------------------------------------------------------+
+
+API interface
+=============
+As the design and implementation of the page_frag API implies, the allocation
+side does not allow concurrent calling. Instead it is assumed that the caller
+ensures there are no concurrent alloc calls to the same page_frag_cache
+instance, either by using its own lock or by relying on some lockless guarantee
+such as NAPI softirq.
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_alloc*_align*() to ensure the returned virtual address or offset of
+the page is aligned according to the 'align/alignment' parameter. Note that the
+size of the allocated fragment is not aligned; the caller needs to provide an
+aligned fragsz if there is an alignment requirement for the size of the fragment.
+
+Depending on the use case, callers expecting to deal with the va, the page, or
+both the va and the page may call the page_frag_alloc_va*, page_frag_alloc_pg*,
+or page_frag_alloc* API accordingly.
+
+There is also a use case that needs a minimum amount of memory in order to make
+forward progress, but performs better if more memory is available. Using the
+page_frag_alloc_prepare() and page_frag_alloc_commit() related APIs, the caller
+requests the minimum memory it needs and the prepare API returns the maximum
+size of the fragment returned. The caller needs to either call the commit API to
+report how much memory it actually uses, or not do so if it decides not to use any memory.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+ :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+ page_frag_cache_page_offset page_frag_alloc_va
+ page_frag_alloc_va_align page_frag_alloc_va_prepare_align
+ page_frag_alloc_probe page_frag_alloc_commit
+ page_frag_alloc_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+ :identifiers: __page_frag_alloc_va_align page_frag_alloc_pg
+ page_frag_alloc_va_prepare page_frag_alloc_pg_prepare
+ page_frag_alloc_prepare page_frag_cache_drain
+ page_frag_free_va
+
+Coding examples
+===============
+
+Init & Drain API
+----------------
+
+.. code-block:: c
+
+ page_frag_cache_init(pfrag);
+ ...
+ page_frag_cache_drain(pfrag);
+
+
+Alloc & Free API
+----------------
+
+.. code-block:: c
+
+ void *va;
+
+ va = page_frag_alloc_va_align(pfrag, size, gfp, align);
+ if (!va)
+ goto do_error;
+
+ err = do_something(va, size);
+ if (err) {
+ page_frag_free_va(va);
+ goto do_error;
+ }
+
+Prepare & Commit API
+--------------------
+
+.. code-block:: c
+
+ unsigned int offset, size;
+ bool merge = true;
+ struct page *page;
+ void *va;
+
+ size = 32U;
+ page = page_frag_alloc_prepare(pfrag, &offset, &size, &va);
+ if (!page)
+ goto wait_for_space;
+
+ copy = min_t(unsigned int, copy, size);
+ if (!skb_can_coalesce(skb, i, page, offset)) {
+ if (i >= max_skb_frags)
+ goto new_segment;
+
+ merge = false;
+ }
+
+ copy = mem_schedule(copy);
+ if (!copy)
+ goto wait_for_space;
+
+ err = copy_from_iter_full_nocache(va, copy, iter);
+ if (err)
+ goto do_error;
+
+ if (merge) {
+ skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_alloc_commit_noref(pfrag, offset, copy);
+ } else {
+ skb_fill_page_desc(skb, i, page, offset, copy);
+ page_frag_alloc_commit(pfrag, offset, copy);
+ }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index ba5d7f8a03cd..9a2c9abd23d0 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -52,11 +52,28 @@ static inline void *encoded_page_address(unsigned long encoded_va)
return (void *)(encoded_va & PAGE_MASK);
}
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache from which to init
+ *
+ * Inline helper to init the page_frag cache.
+ */
static inline void page_frag_cache_init(struct page_frag_cache *nc)
{
memset(nc, 0, sizeof(*nc));
}
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache from which to check
+ *
+ * Used to check if the current page in page_frag cache is pfmemalloc'ed.
+ * It has the same calling context expectation as the alloc API.
+ *
+ * Return:
+ * true if the current page in page_frag cache is pfmemalloc'ed, otherwise
+ * return false.
+ */
static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
{
return encoded_page_pfmemalloc(nc->encoded_va);
@@ -76,6 +93,19 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask);
+/**
+ * page_frag_alloc_va_align() - Alloc a page fragment with an alignment requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment requirement for the virtual address of the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from the
+ * page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
unsigned int fragsz,
gfp_t gfp_mask, unsigned int align)
@@ -84,11 +114,32 @@ static inline void *page_frag_alloc_va_align(struct page_frag_cache *nc,
return __page_frag_alloc_va_align(nc, fragsz, gfp_mask, -align);
}
+/**
+ * page_frag_cache_page_offset() - Return the current page fragment's offset.
+ * @nc: page_frag cache from which to check
+ *
+ * The API is only used in net/sched/em_meta.c for historical reasons; do not use
+ * it for new callers unless there is a strong reason.
+ *
+ * Return:
+ * the offset of the current page fragment in the page_frag cache.
+ */
static inline unsigned int page_frag_cache_page_offset(const struct page_frag_cache *nc)
{
return page_frag_cache_page_size(nc->encoded_va) - nc->remaining;
}
+/**
+ * page_frag_alloc_va() - Alloc a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ *
+ * Get a page fragment from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask)
{
@@ -98,6 +149,21 @@ static inline void *page_frag_alloc_va(struct page_frag_cache *nc,
void *page_frag_alloc_va_prepare(struct page_frag_cache *nc, unsigned int *fragsz,
gfp_t gfp);
+/**
+ * page_frag_alloc_va_prepare_align() - Prepare allocating a page fragment with
+ * an alignment requirement.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when the cache needs to be refilled
+ * @align: the requested alignment requirement
+ *
+ * WARN_ON_ONCE() checking for @align before preparing an aligned page fragment
+ * with a minimum size of @fragsz; @fragsz is also used to report the maximum size
+ * of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_va_prepare_align(struct page_frag_cache *nc,
unsigned int *fragsz,
gfp_t gfp,
@@ -117,6 +183,21 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
unsigned int *fragsz,
void **va, gfp_t gfp);
+/**
+ * page_frag_alloc_probe - Probe the available page fragment.
+ * @nc: page_frag cache from which to probe
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ *
+ * Probe the currently available memory for the caller without doing cache
+ * refilling. If no space is available in the page_frag cache, return NULL.
+ * If the requested space is available, up to @fragsz bytes may be added to the
+ * fragment using the commit API.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
unsigned int *offset,
unsigned int *fragsz,
@@ -138,6 +219,14 @@ static inline struct page *page_frag_alloc_probe(struct page_frag_cache *nc,
return page;
}
+/**
+ * page_frag_alloc_commit - Commit allocating a page fragment.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared or
+ * probed.
+ */
static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
unsigned int fragsz)
{
@@ -146,6 +235,16 @@ static inline void page_frag_alloc_commit(struct page_frag_cache *nc,
nc->remaining -= fragsz;
}
+/**
+ * page_frag_alloc_commit_noref - Commit allocating a page fragment without
+ * taking a page refcount.
+ * @nc: page_frag cache from which to commit
+ * @fragsz: size of the page fragment that has been used
+ *
+ * Commit the alloc preparing or probing by passing the actual used size, but
+ * without taking a refcount. Mostly used for the fragment coalescing case when
+ * the current fragment can share the same refcount with the previous fragment.
+ */
static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
unsigned int fragsz)
{
@@ -153,6 +252,14 @@ static inline void page_frag_alloc_commit_noref(struct page_frag_cache *nc,
nc->remaining -= fragsz;
}
+/**
+ * page_frag_alloc_abort - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is returned
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the alloc API.
+ * Mostly used for error handling cases where the fragment is no longer needed.
+ */
static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
unsigned int fragsz)
{
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index f8fad7d2cca8..509bcc4603d3 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -97,6 +97,18 @@ static struct page *__page_frag_cache_reload(struct page_frag_cache *nc,
return page;
}
+/**
+ * page_frag_alloc_va_prepare() - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
unsigned int *fragsz, gfp_t gfp)
{
@@ -125,6 +137,19 @@ void *page_frag_alloc_va_prepare(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(page_frag_alloc_va_prepare);
+/**
+ * page_frag_alloc_pg_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @gfp: the allocation gfp to use when cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment the caller can use.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
unsigned int *offset,
unsigned int *fragsz, gfp_t gfp)
@@ -152,6 +177,21 @@ struct page *page_frag_alloc_pg_prepare(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(page_frag_alloc_pg_prepare);
+/**
+ * page_frag_alloc_prepare - Prepare allocating a page fragment.
+ * @nc: page_frag cache from which to prepare
+ * @offset: out as the offset of the page fragment
+ * @fragsz: in as the requested size, out as the available size
+ * @va: out as the virtual address of the returned page fragment
+ * @gfp: the allocation gfp to use when cache needs to be refilled
+ *
+ * Prepare a page fragment with a minimum size of @fragsz; @fragsz is also used
+ * to report the maximum size of the page fragment. Return both 'struct page'
+ * and virtual address of the fragment to the caller.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
unsigned int *offset,
unsigned int *fragsz,
@@ -183,6 +223,18 @@ struct page *page_frag_alloc_prepare(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(page_frag_alloc_prepare);
+/**
+ * page_frag_alloc_pg - Allocate a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @offset: out as the offset of the page fragment
+ * @fragsz: the requested fragment size
+ * @gfp: the allocation gfp to use when cache needs to be refilled
+ *
+ * Get a page fragment from page_frag cache.
+ *
+ * Return:
+ * the page fragment, otherwise return NULL.
+ */
struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
unsigned int *offset, unsigned int fragsz,
gfp_t gfp)
@@ -215,6 +267,10 @@ struct page *page_frag_alloc_pg(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(page_frag_alloc_pg);
+/**
+ * page_frag_cache_drain - Drain the current page from page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
void page_frag_cache_drain(struct page_frag_cache *nc)
{
if (!nc->encoded_va)
@@ -235,6 +291,19 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
}
EXPORT_SYMBOL(__page_frag_cache_drain);
+/**
+ * __page_frag_alloc_va_align() - Allocate a page fragment with an
+ * alignment requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when the cache needs to be refilled
+ * @align_mask: the requested alignment mask for the returned 'va'
+ *
+ * Get a page fragment from the page_frag cache with an alignment requirement.
+ *
+ * Return:
+ * virtual address of the page fragment on success, otherwise NULL.
+ */
void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
@@ -281,8 +350,12 @@ void *__page_frag_alloc_va_align(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(__page_frag_alloc_va_align);
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free_va - Free a page fragment.
+ * @addr: va of page fragment to be freed
+ *
+ * Free a page fragment, identified by its virtual address, that was
+ * allocated out of either a compound or an order-0 page.
*/
void page_frag_free_va(void *addr)
{
--
2.33.0
^ permalink raw reply related [flat|nested] 47+ messages in thread

* [PATCH net-next v13 14/14] mm: page_frag: add an entry in MAINTAINERS for page_frag
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (12 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 13/14] mm: page_frag: update documentation for page_frag Yunsheng Lin
@ 2024-08-08 12:37 ` Yunsheng Lin
2024-08-13 11:30 ` [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
14 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-08 12:37 UTC (permalink / raw)
To: davem, kuba, pabeni; +Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck
After this patchset, page_frag is a small subsystem/library
on its own, so add an entry in MAINTAINERS to indicate the
new subsystem/library's maintainers, mailing lists, status and
file list for page_frag.
Alexander is the original author of page_frag, so add him to
MAINTAINERS too.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
MAINTAINERS | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index a9dace908305..4ad9a53a4faf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -17235,6 +17235,17 @@ F: mm/page-writeback.c
F: mm/readahead.c
F: mm/truncate.c
+PAGE FRAG
+M: Alexander Duyck <alexander.duyck@gmail.com>
+M: Yunsheng Lin <linyunsheng@huawei.com>
+L: linux-mm@kvack.org
+L: netdev@vger.kernel.org
+S: Supported
+F: Documentation/mm/page_frags.rst
+F: include/linux/page_frag_cache.h
+F: mm/page_frag_cache.c
+F: tools/testing/selftests/mm/page_frag
+
PAGE POOL
M: Jesper Dangaard Brouer <hawk@kernel.org>
M: Ilias Apalodimas <ilias.apalodimas@linaro.org>
--
2.33.0
^ permalink raw reply related [flat|nested] 47+ messages in thread

* Re: [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag()
2024-08-08 12:37 [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
` (13 preceding siblings ...)
2024-08-08 12:37 ` [PATCH net-next v13 14/14] mm: page_frag: add an entry in MAINTAINERS " Yunsheng Lin
@ 2024-08-13 11:30 ` Yunsheng Lin
2024-08-13 15:11 ` Alexander Duyck
14 siblings, 1 reply; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-13 11:30 UTC (permalink / raw)
To: davem, kuba, pabeni, Alexander Duyck
Cc: netdev, linux-kernel, Alexei Starovoitov, Daniel Borkmann,
Jesper Dangaard Brouer, John Fastabend, Matthias Brugger,
AngeloGioacchino Del Regno, bpf, linux-arm-kernel, linux-mediatek
On 2024/8/8 20:37, Yunsheng Lin wrote:
...
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
>
> 1. https://lore.kernel.org/all/20240228093013.8263-1-linyunsheng@huawei.com/
>
> Change log:
> V13:
> 1. Move page_frag_test from mm/ to tools/testing/selftest/mm
> 2. Use ptr_ring to replace ptr_pool for page_frag_test.c
> 3. Retest based on the new testing ko, which shows a big different
> result than using ptr_pool.
Hi, Davem & Jakub & Paolo
It seems the state of this patchset was changed to 'Deferred' in
patchwork. Per the info regarding that state in 'Documentation/process/
maintainer-netdev.rst':
Deferred patch needs to be reposted later, usually due to dependency
or because it was posted for a closed tree
Obviously it was not the closed-tree reason here, so I guess it was a
dependency causing the 'Deferred' here? I am not sure I understand what
sort of dependency this patchset is running into. It would be good to
mention what needs to be done to avoid that kind of dependency too.
Hi, Alexander
The v13 mainly addresses your comments about the page_frag_test module.
It seems your main remaining comment about this patchset is about the
new API naming, and there was no feedback on the previous version for
about a week:
https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
If there is still disagreement about the new API naming or other things, it
would be good to continue the discussion, so that we can have a better
understanding of each other's main concerns, and a better idea might come
up too, like the discussion about the new layout for
'struct page_frag_cache' and the new refactoring in patch 8.
>
^ permalink raw reply [flat|nested] 47+ messages in thread

* Re: [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag()
2024-08-13 11:30 ` [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Yunsheng Lin
@ 2024-08-13 15:11 ` Alexander Duyck
2024-08-14 11:55 ` Yunsheng Lin
0 siblings, 1 reply; 47+ messages in thread
From: Alexander Duyck @ 2024-08-13 15:11 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Matthias Brugger, AngeloGioacchino Del Regno, bpf,
linux-arm-kernel, linux-mediatek
On Tue, Aug 13, 2024 at 4:30 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/8/8 20:37, Yunsheng Lin wrote:
>
> ...
>
> >
> > CC: Alexander Duyck <alexander.duyck@gmail.com>
> >
> > 1. https://lore.kernel.org/all/20240228093013.8263-1-linyunsheng@huawei.com/
> >
> > Change log:
> > V13:
> > 1. Move page_frag_test from mm/ to tools/testing/selftest/mm
> > 2. Use ptr_ring to replace ptr_pool for page_frag_test.c
> > 3. Retest based on the new testing ko, which shows a big different
> > result than using ptr_pool.
>
> Hi, Davem & Jakub & Paolo
> It seems the state of this patchset was changed to 'Deferred' in
> patchwork. Per the info regarding that state in 'Documentation/process/
> maintainer-netdev.rst':
>
> Deferred patch needs to be reposted later, usually due to dependency
> or because it was posted for a closed tree
>
> Obviously it was not the closed-tree reason here, so I guess it was a
> dependency causing the 'Deferred' here? I am not sure I understand what
> sort of dependency this patchset is running into. It would be good to
> mention what needs to be done to avoid that kind of dependency too.
>
>
> Hi, Alexander
> The v13 mainly addresses your comments about the page_frag_test module.
> It seems your main remaining comment about this patchset is about the
> new API naming, and there was no feedback on the previous version for
> about a week:
>
> https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
>
> If there is still disagreement about the new API naming or other things, it
> would be good to continue the discussion, so that we can have a better
> understanding of each other's main concerns, and a better idea might come
> up too, like the discussion about the new layout for
> 'struct page_frag_cache' and the new refactoring in patch 8.
Sorry for not getting to this sooner. I have been travelling for the
last week and a half. I just got home on Sunday and I am suffering
from a pretty bad bout of jet lag as I am overcoming a 12.5 hour time
change. The earliest I can probably get to this for review would be
tomorrow morning (8/14 in the AM PDT) as my calendar has me fully
booked with meetings most of today.
Thanks,
- Alex
^ permalink raw reply [flat|nested] 47+ messages in thread
* Re: [PATCH net-next v13 00/14] Replace page_frag with page_frag_cache for sk_page_frag()
2024-08-13 15:11 ` Alexander Duyck
@ 2024-08-14 11:55 ` Yunsheng Lin
0 siblings, 0 replies; 47+ messages in thread
From: Yunsheng Lin @ 2024-08-14 11:55 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Alexei Starovoitov,
Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
Matthias Brugger, AngeloGioacchino Del Regno, bpf,
linux-arm-kernel, linux-mediatek
On 2024/8/13 23:11, Alexander Duyck wrote:
> On Tue, Aug 13, 2024 at 4:30 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>>
>> On 2024/8/8 20:37, Yunsheng Lin wrote:
>>
>> ...
>>
>>>
>>> CC: Alexander Duyck <alexander.duyck@gmail.com>
>>>
>>> 1. https://lore.kernel.org/all/20240228093013.8263-1-linyunsheng@huawei.com/
>>>
>>> Change log:
>>> V13:
>>> 1. Move page_frag_test from mm/ to tools/testing/selftest/mm
>>> 2. Use ptr_ring to replace ptr_pool for page_frag_test.c
>>> 3. Retest based on the new testing ko, which shows a big different
>>> result than using ptr_pool.
>>
>> Hi, Davem & Jakub & Paolo
>> It seems the state of this patchset was changed to 'Deferred' in
>> patchwork. Per the info regarding that state in 'Documentation/process/
>> maintainer-netdev.rst':
>>
>> Deferred patch needs to be reposted later, usually due to dependency
>> or because it was posted for a closed tree
>>
>> Obviously it was not the closed-tree reason here, so I guess it was a
>> dependency causing the 'Deferred' here? I am not sure I understand what
>> sort of dependency this patchset is running into. It would be good to
>> mention what needs to be done to avoid that kind of dependency too.
>>
>>
>> Hi, Alexander
>> The v13 mainly addresses your comments about the page_frag_test module.
>> It seems your main remaining comment about this patchset is about the
>> new API naming, and there was no feedback on the previous version for
>> about a week:
>>
>> https://lore.kernel.org/all/ca6be29e-ab53-4673-9624-90d41616a154@huawei.com/
>>
>> If there is still disagreement about the new API naming or other things, it
>> would be good to continue the discussion, so that we can have a better
>> understanding of each other's main concerns, and a better idea might come
>> up too, like the discussion about the new layout for
>> 'struct page_frag_cache' and the new refactoring in patch 8.
>
> Sorry for not getting to this sooner. I have been travelling for the
> last week and a half. I just got home on Sunday and I am suffering
> from a pretty bad bout of jet lag as I am overcoming a 12.5 hour time
> change. The earliest I can probably get to this for review would be
> tomorrow morning (8/14 in the AM PDT) as my calendar has me fully
> booked with meetings most of today.
Thanks for the clarification.
I appreciate the time and effort spent on reviewing.
^ permalink raw reply [flat|nested] 47+ messages in thread