* [PATCH mptcp-next 0/3] add bpf_stale scheduler
From: Geliang Tang @ 2023-08-02 5:10 UTC
To: mptcp; +Cc: Geliang Tang
This patchset adds a new bpf_stale scheduler. It keeps the stale map in
sk_storage instead of managing staleness through the subflow->stale flag.
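For reference, the stale map added in patch 2 is a per-msk sk_storage
entry, defined in the new scheduler as:

    struct mptcp_stale_storage {
            __u8 nr;
            struct mptcp_subflow_context *stale[MPTCP_SUBFLOWS_MAX];
    };

    struct {
            __uint(type, BPF_MAP_TYPE_SK_STORAGE);
            __uint(map_flags, BPF_F_NO_PREALLOC);
            __type(key, int);
            __type(value, struct mptcp_stale_storage);
    } mptcp_stale_map SEC(".maps");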
Geliang Tang (3):
Squash to "bpf: Add bpf_mptcp_sched_ops"
selftests/bpf: Add bpf_stale scheduler
selftests/bpf: Add bpf_stale test
net/mptcp/bpf.c | 3 -
.../testing/selftests/bpf/prog_tests/mptcp.c | 38 +++++
.../selftests/bpf/progs/mptcp_bpf_stale.c | 155 ++++++++++++++++++
3 files changed, 193 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c
--
2.35.3
* [PATCH mptcp-next 1/3] Squash to "bpf: Add bpf_mptcp_sched_ops"
From: Geliang Tang @ 2023-08-02 5:10 UTC
To: mptcp; +Cc: Geliang Tang
No need to change the subflow->stale bit-field anymore, now that
staleness is tracked in sk_storage, so drop write access to it.
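For context, the dropped case granted write access to the range of
struct mptcp_subflow_context between map_csum_len and data_avail, i.e.
the bit-field block that contains the stale bit. A simplified sketch of
the layout (field order only, not the full struct):

    struct mptcp_subflow_context {
            /* ... */
            u32 map_csum_len;
            u32 /* ... other bits ... */
                stale : 1, /* unable to snd/rcv data, do not use for xmit */
                /* ... other bits ... */;
            enum mptcp_data_avail data_avail;
            /* ... */
    };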
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
net/mptcp/bpf.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
index 51634efe4741..5c82cc35bfe0 100644
--- a/net/mptcp/bpf.c
+++ b/net/mptcp/bpf.c
@@ -52,9 +52,6 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
case offsetof(struct mptcp_subflow_context, scheduled):
end = offsetofend(struct mptcp_subflow_context, scheduled);
break;
- case offsetofend(struct mptcp_subflow_context, map_csum_len):
- end = offsetof(struct mptcp_subflow_context, data_avail);
- break;
case offsetof(struct mptcp_subflow_context, avg_pacing_rate):
end = offsetofend(struct mptcp_subflow_context, avg_pacing_rate);
break;
--
2.35.3
* [PATCH mptcp-next 2/3] selftests/bpf: Add bpf_stale scheduler
From: Geliang Tang @ 2023-08-02 5:10 UTC
To: mptcp; +Cc: Geliang Tang
This patch implements marking a subflow as stale or unstale in a BPF
MPTCP scheduler named bpf_stale. Stale subflows are recorded in a stale
map kept in sk_storage. Two helpers, mptcp_subflow_set_stale() and
mptcp_subflow_unstale(), are added to update that map. Subflows are
marked stale in bpf_stale_data_init(), which exercises both helpers and
leaves the second subflow stale, and the stale map is checked in
bpf_stale_get_subflow() when picking a subflow to send on.
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../selftests/bpf/progs/mptcp_bpf_stale.c | 155 ++++++++++++++++++
1 file changed, 155 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c
diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c
new file mode 100644
index 000000000000..44010ca83af3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c
@@ -0,0 +1,155 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023, SUSE. */
+
+#include <linux/bpf.h>
+#include "bpf_tcp_helpers.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct mptcp_stale_storage {
+ __u8 nr;
+ struct mptcp_subflow_context *stale[MPTCP_SUBFLOWS_MAX];
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_SK_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, struct mptcp_stale_storage);
+} mptcp_stale_map SEC(".maps");
+
+static void mptcp_subflow_set_stale(struct mptcp_stale_storage *storage,
+ struct mptcp_subflow_context *subflow)
+{
+ for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (storage->stale[i] == subflow)
+ return;
+ }
+
+ if (storage->nr < MPTCP_SUBFLOWS_MAX - 1)
+ storage->stale[storage->nr++] = subflow;
+}
+
+static void mptcp_subflow_unstale(struct mptcp_stale_storage *storage,
+ struct mptcp_subflow_context *subflow)
+{
+ for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (storage->stale[i] == subflow) {
+ for (int j = i; j < MPTCP_SUBFLOWS_MAX - 1; j++) {
+ if (!storage->stale[j + 1])
+ break;
+ storage->stale[j] = storage->stale[j + 1];
+ storage->stale[j + 1] = NULL;
+ }
+ storage->nr--;
+ return;
+ }
+ }
+}
+
+static bool mptcp_subflow_is_stale(struct mptcp_stale_storage *storage,
+ struct mptcp_subflow_context *subflow)
+{
+ for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (storage->stale[i] == subflow)
+ return true;
+ }
+
+ return false;
+}
+
+static bool mptcp_subflow_is_active(struct mptcp_sched_data *data,
+ struct mptcp_subflow_context *stale)
+{
+ for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
+ struct mptcp_subflow_context *subflow;
+
+ subflow = mptcp_subflow_ctx_by_pos(data, i);
+ if (!subflow)
+ break;
+ if (subflow == stale)
+ return true;
+ }
+
+ return false;
+}
+
+SEC("struct_ops/mptcp_sched_stale_init")
+void BPF_PROG(mptcp_sched_stale_init, struct mptcp_sock *msk)
+{
+}
+
+SEC("struct_ops/mptcp_sched_stale_release")
+void BPF_PROG(mptcp_sched_stale_release, struct mptcp_sock *msk)
+{
+}
+
+void BPF_STRUCT_OPS(bpf_stale_data_init, struct mptcp_sock *msk,
+ struct mptcp_sched_data *data)
+{
+ struct mptcp_subflow_context *subflow;
+ struct mptcp_stale_storage *storage;
+
+ mptcp_sched_data_set_contexts(msk, data);
+
+ storage = bpf_sk_storage_get(&mptcp_stale_map, msk, 0,
+ BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (!storage)
+ return;
+
+ for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
+ if (!mptcp_subflow_is_active(data, storage->stale[i]))
+ mptcp_subflow_unstale(storage, storage->stale[i]);
+ }
+
+ subflow = mptcp_subflow_ctx_by_pos(data, 0);
+ if (subflow) {
+ mptcp_subflow_set_stale(storage, subflow);
+ mptcp_subflow_unstale(storage, subflow);
+ }
+
+ subflow = mptcp_subflow_ctx_by_pos(data, 1);
+ if (subflow) {
+ mptcp_subflow_set_stale(storage, subflow);
+ mptcp_subflow_unstale(storage, subflow);
+ mptcp_subflow_set_stale(storage, subflow);
+ }
+}
+
+int BPF_STRUCT_OPS(bpf_stale_get_subflow, struct mptcp_sock *msk,
+ const struct mptcp_sched_data *data)
+{
+ struct mptcp_stale_storage *storage;
+ int nr = -1;
+
+ storage = bpf_sk_storage_get(&mptcp_stale_map, msk, 0,
+ BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (!storage)
+ return -1;
+
+ for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
+ struct mptcp_subflow_context *subflow;
+
+ subflow = mptcp_subflow_ctx_by_pos(data, i);
+ if (!subflow)
+ break;
+
+ if (mptcp_subflow_is_stale(storage, subflow))
+ continue;
+
+ nr = i;
+ }
+
+ if (nr != -1)
+ mptcp_subflow_set_scheduled(mptcp_subflow_ctx_by_pos(data, nr), true);
+ return 0;
+}
+
+SEC(".struct_ops")
+struct mptcp_sched_ops stale = {
+ .init = (void *)mptcp_sched_stale_init,
+ .release = (void *)mptcp_sched_stale_release,
+ .data_init = (void *)bpf_stale_data_init,
+ .get_subflow = (void *)bpf_stale_get_subflow,
+ .name = "bpf_stale",
+};
--
2.35.3
* [PATCH mptcp-next 3/3] selftests/bpf: Add bpf_stale test
From: Geliang Tang @ 2023-08-02 5:10 UTC
To: mptcp; +Cc: Geliang Tang
This patch adds test_stale(), a test for the bpf_stale scheduler. It
uses sysctl to set net.mptcp.scheduler to this scheduler, adds two veth
net devices to simulate the multiple-address case, and uses the
'ip mptcp endpoint' command to add the new endpoint ADDR_2 to the PM
netlink. After sending data, the bytes_sent counters in the 'ss' output
are checked to make sure the data has been sent only on ADDR_1, since
ADDR_2 is marked as stale.
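For reference, the bytes_sent check boils down to grepping the 'ss'
output for the given source address. A rough sketch of such a helper
(has_bytes_sent() already exists in mptcp.c; the exact 'ss' arguments
shown here are illustrative only):

    /* Returns the system() status of an 'ss' pipeline: 0 when a
     * bytes_sent counter is found for the given source address,
     * non-zero when nothing was sent from it.
     */
    static int has_bytes_sent_sketch(const char *addr)
    {
            char cmd[128];

            snprintf(cmd, sizeof(cmd),
                     "ss -it src %s | grep -q bytes_sent:", addr);
            return system(cmd);
    }

This is why the test below uses ASSERT_OK() for ADDR_1 (data was sent)
and ASSERT_GT(..., 0) for ADDR_2 (nothing was sent).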
Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
.../testing/selftests/bpf/prog_tests/mptcp.c | 38 +++++++++++++++++++
1 file changed, 38 insertions(+)
diff --git a/tools/testing/selftests/bpf/prog_tests/mptcp.c b/tools/testing/selftests/bpf/prog_tests/mptcp.c
index 09770aa31f4a..5accc34d35e5 100644
--- a/tools/testing/selftests/bpf/prog_tests/mptcp.c
+++ b/tools/testing/selftests/bpf/prog_tests/mptcp.c
@@ -12,6 +12,7 @@
#include "mptcp_bpf_rr.skel.h"
#include "mptcp_bpf_red.skel.h"
#include "mptcp_bpf_burst.skel.h"
+#include "mptcp_bpf_stale.skel.h"
char NS_TEST[32];
@@ -524,6 +525,41 @@ static void test_burst(void)
mptcp_bpf_burst__destroy(burst_skel);
}
+static void test_stale(void)
+{
+ struct mptcp_bpf_stale *stale_skel;
+ int server_fd, client_fd;
+ struct nstoken *nstoken;
+ struct bpf_link *link;
+
+ stale_skel = mptcp_bpf_stale__open_and_load();
+ if (!ASSERT_OK_PTR(stale_skel, "bpf_stale__open_and_load"))
+ return;
+
+ link = bpf_map__attach_struct_ops(stale_skel->maps.stale);
+ if (!ASSERT_OK_PTR(link, "bpf_map__attach_struct_ops")) {
+ mptcp_bpf_stale__destroy(stale_skel);
+ return;
+ }
+
+ nstoken = sched_init("subflow", "bpf_stale");
+ if (!ASSERT_OK_PTR(nstoken, "sched_init:bpf_stale"))
+ goto fail;
+ server_fd = start_mptcp_server(AF_INET, ADDR_1, PORT_1, 0);
+ client_fd = connect_to_fd(server_fd, 0);
+
+ send_data(server_fd, client_fd, "bpf_stale");
+ ASSERT_OK(has_bytes_sent(ADDR_1), "has_bytes_sent addr_1");
+ ASSERT_GT(has_bytes_sent(ADDR_2), 0, "has_bytes_sent addr_2");
+
+ close(client_fd);
+ close(server_fd);
+fail:
+ cleanup_netns(nstoken);
+ bpf_link__destroy(link);
+ mptcp_bpf_stale__destroy(stale_skel);
+}
+
void test_mptcp(void)
{
if (test__start_subtest("base"))
@@ -540,4 +576,6 @@ void test_mptcp(void)
test_red();
if (test__start_subtest("burst"))
test_burst();
+ if (test__start_subtest("stale"))
+ test_stale();
}
--
2.35.3
* Re: selftests/bpf: Add bpf_stale test: Tests Results
From: MPTCP CI @ 2023-08-02 6:19 UTC
To: Geliang Tang; +Cc: mptcp
Hi Geliang,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal (except selftest_mptcp_join):
- Success! ✅:
- Task: https://cirrus-ci.com/task/6597383929200640
- Summary: https://api.cirrus-ci.com/v1/artifact/task/6597383929200640/summary/summary.txt
- KVM Validation: normal (only selftest_mptcp_join):
- Success! ✅:
- Task: https://cirrus-ci.com/task/4767796580581376
- Summary: https://api.cirrus-ci.com/v1/artifact/task/4767796580581376/summary/summary.txt
- KVM Validation: debug (only selftest_mptcp_join):
- Success! ✅:
- Task: https://cirrus-ci.com/task/5330746534002688
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5330746534002688/summary/summary.txt
- KVM Validation: debug (except selftest_mptcp_join):
- Success! ✅:
- Task: https://cirrus-ci.com/task/5893696487424000
- Summary: https://api.cirrus-ci.com/v1/artifact/task/5893696487424000/summary/summary.txt
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/b56b7cd37f92
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-debug
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts already made to have a stable
test suite when executed on a public CI like this one, it is possible
that some reported issues are not due to your modifications. Still, do
not hesitate to help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)
* Re: [PATCH mptcp-next 1/3] Squash to "bpf: Add bpf_mptcp_sched_ops"
From: Mat Martineau @ 2023-08-02 16:52 UTC
To: Geliang Tang; +Cc: mptcp
On Wed, 2 Aug 2023, Geliang Tang wrote:
> No need to change the subflow->stale bit-field anymore, now that
> staleness is tracked in sk_storage, so drop write access to it.
>
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> net/mptcp/bpf.c | 3 ---
> 1 file changed, 3 deletions(-)
This looks ok to squash, thanks Geliang.
- Mat
>
> diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
> index 51634efe4741..5c82cc35bfe0 100644
> --- a/net/mptcp/bpf.c
> +++ b/net/mptcp/bpf.c
> @@ -52,9 +52,6 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
> case offsetof(struct mptcp_subflow_context, scheduled):
> end = offsetofend(struct mptcp_subflow_context, scheduled);
> break;
> - case offsetofend(struct mptcp_subflow_context, map_csum_len):
> - end = offsetof(struct mptcp_subflow_context, data_avail);
> - break;
> case offsetof(struct mptcp_subflow_context, avg_pacing_rate):
> end = offsetofend(struct mptcp_subflow_context, avg_pacing_rate);
> break;
> --
> 2.35.3
>
>
>
* Re: [PATCH mptcp-next 2/3] selftests/bpf: Add bpf_stale scheduler
From: Mat Martineau @ 2023-08-02 17:08 UTC
To: Geliang Tang; +Cc: mptcp
On Wed, 2 Aug 2023, Geliang Tang wrote:
> This patch implements marking a subflow as stale or unstale in a BPF
> MPTCP scheduler named bpf_stale. Stale subflows are recorded in a stale
> map kept in sk_storage. Two helpers, mptcp_subflow_set_stale() and
> mptcp_subflow_unstale(), are added to update that map. Subflows are
> marked stale in bpf_stale_data_init(), which exercises both helpers and
> leaves the second subflow stale, and the stale map is checked in
> bpf_stale_get_subflow() when picking a subflow to send on.
>
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> .../selftests/bpf/progs/mptcp_bpf_stale.c | 155 ++++++++++++++++++
> 1 file changed, 155 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c
>
> diff --git a/tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c b/tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c
> new file mode 100644
> index 000000000000..44010ca83af3
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/mptcp_bpf_stale.c
> @@ -0,0 +1,155 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (c) 2023, SUSE. */
> +
> +#include <linux/bpf.h>
> +#include "bpf_tcp_helpers.h"
> +
> +char _license[] SEC("license") = "GPL";
> +
> +struct mptcp_stale_storage {
> + __u8 nr;
> + struct mptcp_subflow_context *stale[MPTCP_SUBFLOWS_MAX];
Hi Geliang -
How about storing an array of u32 subflow_id values instead? That way
there's no chance we have old/invalid pointers stored when subflows are
removed.
I realize we're only comparing against these subflow pointers and not
dereferencing them, but scheduler developers will be looking at these test
programs as examples and the subflow_id would be a better fit as example
code.
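Something along these lines, perhaps (an untested sketch, assuming
subflow->subflow_id is readable from the scheduler context):

    struct mptcp_stale_storage {
            __u8 nr;
            __u32 stale[MPTCP_SUBFLOWS_MAX];
    };

    static bool mptcp_subflow_is_stale(struct mptcp_stale_storage *storage,
                                       __u32 subflow_id)
    {
            /* same bounded walk as before, but over ids, not pointers */
            for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
                    if (storage->stale[i] == subflow_id)
                            return true;
            }

            return false;
    }

with the other helpers updated the same way, and callers passing
subflow->subflow_id instead of the pointer itself.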
Thanks,
Mat
> +};
> +
> +struct {
> + __uint(type, BPF_MAP_TYPE_SK_STORAGE);
> + __uint(map_flags, BPF_F_NO_PREALLOC);
> + __type(key, int);
> + __type(value, struct mptcp_stale_storage);
> +} mptcp_stale_map SEC(".maps");
> +
> +static void mptcp_subflow_set_stale(struct mptcp_stale_storage *storage,
> + struct mptcp_subflow_context *subflow)
> +{
> + for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
> + if (storage->stale[i] == subflow)
> + return;
> + }
> +
> + if (storage->nr < MPTCP_SUBFLOWS_MAX - 1)
> + storage->stale[storage->nr++] = subflow;
> +}
> +
> +static void mptcp_subflow_unstale(struct mptcp_stale_storage *storage,
> + struct mptcp_subflow_context *subflow)
> +{
> + for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
> + if (storage->stale[i] == subflow) {
> + for (int j = i; j < MPTCP_SUBFLOWS_MAX - 1; j++) {
> + if (!storage->stale[j + 1])
> + break;
> + storage->stale[j] = storage->stale[j + 1];
> + storage->stale[j + 1] = NULL;
> + }
> + storage->nr--;
> + return;
> + }
> + }
> +}
> +
> +static bool mptcp_subflow_is_stale(struct mptcp_stale_storage *storage,
> + struct mptcp_subflow_context *subflow)
> +{
> + for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
> + if (storage->stale[i] == subflow)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static bool mptcp_subflow_is_active(struct mptcp_sched_data *data,
> + struct mptcp_subflow_context *stale)
> +{
> + for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
> + struct mptcp_subflow_context *subflow;
> +
> + subflow = mptcp_subflow_ctx_by_pos(data, i);
> + if (!subflow)
> + break;
> + if (subflow == stale)
> + return true;
> + }
> +
> + return false;
> +}
> +
> +SEC("struct_ops/mptcp_sched_stale_init")
> +void BPF_PROG(mptcp_sched_stale_init, struct mptcp_sock *msk)
> +{
> +}
> +
> +SEC("struct_ops/mptcp_sched_stale_release")
> +void BPF_PROG(mptcp_sched_stale_release, struct mptcp_sock *msk)
> +{
> +}
> +
> +void BPF_STRUCT_OPS(bpf_stale_data_init, struct mptcp_sock *msk,
> + struct mptcp_sched_data *data)
> +{
> + struct mptcp_subflow_context *subflow;
> + struct mptcp_stale_storage *storage;
> +
> + mptcp_sched_data_set_contexts(msk, data);
> +
> + storage = bpf_sk_storage_get(&mptcp_stale_map, msk, 0,
> + BPF_LOCAL_STORAGE_GET_F_CREATE);
> + if (!storage)
> + return;
> +
> + for (int i = 0; i < storage->nr && i < MPTCP_SUBFLOWS_MAX; i++) {
> + if (!mptcp_subflow_is_active(data, storage->stale[i]))
> + mptcp_subflow_unstale(storage, storage->stale[i]);
> + }
> +
> + subflow = mptcp_subflow_ctx_by_pos(data, 0);
> + if (subflow) {
> + mptcp_subflow_set_stale(storage, subflow);
> + mptcp_subflow_unstale(storage, subflow);
> + }
> +
> + subflow = mptcp_subflow_ctx_by_pos(data, 1);
> + if (subflow) {
> + mptcp_subflow_set_stale(storage, subflow);
> + mptcp_subflow_unstale(storage, subflow);
> + mptcp_subflow_set_stale(storage, subflow);
> + }
> +}
> +
> +int BPF_STRUCT_OPS(bpf_stale_get_subflow, struct mptcp_sock *msk,
> + const struct mptcp_sched_data *data)
> +{
> + struct mptcp_stale_storage *storage;
> + int nr = -1;
> +
> + storage = bpf_sk_storage_get(&mptcp_stale_map, msk, 0,
> + BPF_LOCAL_STORAGE_GET_F_CREATE);
> + if (!storage)
> + return -1;
> +
> + for (int i = 0; i < data->subflows && i < MPTCP_SUBFLOWS_MAX; i++) {
> + struct mptcp_subflow_context *subflow;
> +
> + subflow = mptcp_subflow_ctx_by_pos(data, i);
> + if (!subflow)
> + break;
> +
> + if (mptcp_subflow_is_stale(storage, subflow))
> + continue;
> +
> + nr = i;
> + }
> +
> + if (nr != -1)
> + mptcp_subflow_set_scheduled(mptcp_subflow_ctx_by_pos(data, nr), true);
> + return 0;
> +}
> +
> +SEC(".struct_ops")
> +struct mptcp_sched_ops stale = {
> + .init = (void *)mptcp_sched_stale_init,
> + .release = (void *)mptcp_sched_stale_release,
> + .data_init = (void *)bpf_stale_data_init,
> + .get_subflow = (void *)bpf_stale_get_subflow,
> + .name = "bpf_stale",
> +};
> --
> 2.35.3
>
>
>
* Re: [PATCH mptcp-next 1/3] Squash to "bpf: Add bpf_mptcp_sched_ops"
From: Matthieu Baerts @ 2023-08-03 7:25 UTC
To: Geliang Tang, mptcp
Hi Geliang,
On 02/08/2023 07:10, Geliang Tang wrote:
> No need to change the subflow->stale bit-field anymore, now that
> staleness is tracked in sk_storage, so drop write access to it.
Just to be sure, do I still need to apply this patch that is no longer
part of the v2 series?
Cheers,
Matt
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
> net/mptcp/bpf.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
> index 51634efe4741..5c82cc35bfe0 100644
> --- a/net/mptcp/bpf.c
> +++ b/net/mptcp/bpf.c
> @@ -52,9 +52,6 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
> case offsetof(struct mptcp_subflow_context, scheduled):
> end = offsetofend(struct mptcp_subflow_context, scheduled);
> break;
> - case offsetofend(struct mptcp_subflow_context, map_csum_len):
> - end = offsetof(struct mptcp_subflow_context, data_avail);
> - break;
> case offsetof(struct mptcp_subflow_context, avg_pacing_rate):
> end = offsetofend(struct mptcp_subflow_context, avg_pacing_rate);
> break;
--
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net
* Re: [PATCH mptcp-next 1/3] Squash to "bpf: Add bpf_mptcp_sched_ops"
From: Geliang Tang @ 2023-08-03 7:36 UTC
To: Matthieu Baerts; +Cc: mptcp
On Thu, Aug 03, 2023 at 09:25:42AM +0200, Matthieu Baerts wrote:
> Hi Geliang,
>
> On 02/08/2023 07:10, Geliang Tang wrote:
> > No need to change the subflow->stale bit-field anymore, now that
> > staleness is tracked in sk_storage, so drop write access to it.
>
> Just to be sure, do I still need to apply this patch that is no longer
> part of the v2 series?
Yes, this patch is still needed. Please apply it to the export branch.
Thanks,
-Geliang
>
> Cheers,
> Matt
>
> > Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> > ---
> > net/mptcp/bpf.c | 3 ---
> > 1 file changed, 3 deletions(-)
> >
> > diff --git a/net/mptcp/bpf.c b/net/mptcp/bpf.c
> > index 51634efe4741..5c82cc35bfe0 100644
> > --- a/net/mptcp/bpf.c
> > +++ b/net/mptcp/bpf.c
> > @@ -52,9 +52,6 @@ static int bpf_mptcp_sched_btf_struct_access(struct bpf_verifier_log *log,
> > case offsetof(struct mptcp_subflow_context, scheduled):
> > end = offsetofend(struct mptcp_subflow_context, scheduled);
> > break;
> > - case offsetofend(struct mptcp_subflow_context, map_csum_len):
> > - end = offsetof(struct mptcp_subflow_context, data_avail);
> > - break;
> > case offsetof(struct mptcp_subflow_context, avg_pacing_rate):
> > end = offsetofend(struct mptcp_subflow_context, avg_pacing_rate);
> > break;
>
> --
> Tessares | Belgium | Hybrid Access Solutions
> www.tessares.net
* Re: [PATCH mptcp-next 1/3] Squash to "bpf: Add bpf_mptcp_sched_ops"
From: Matthieu Baerts @ 2023-08-04 12:27 UTC
To: Geliang Tang, Mat Martineau; +Cc: mptcp
Hi Geliang, Mat,
On 03/08/2023 09:36, Geliang Tang wrote:
> On Thu, Aug 03, 2023 at 09:25:42AM +0200, Matthieu Baerts wrote:
>> On 02/08/2023 07:10, Geliang Tang wrote:
>>> No need to change the subflow->stale bit-field anymore, now that
>>> staleness is tracked in sk_storage, so drop write access to it.
>>
>> Just to be sure, do I still need to apply this patch that is no longer
>> part of the v2 series?
>
> Yes, this patch is still needed. Please apply it to the export branch.
Thank you for the confirmation, the patch and the review!
Now in our tree:
New patches for t/upstream:
- e8eec68a01f0: "squashed" patch 1/3 in "bpf: Add bpf_mptcp_sched_ops"
- Results: e54740d33eef..49d27f1d2549 (export)
Tests are now in progress:
https://cirrus-ci.com/github/multipath-tcp/mptcp_net-next/export/20230804T122556
Cheers,
Matt
--
Tessares | Belgium | Hybrid Access Solutions
www.tessares.net
Thread overview: 10 messages:
2023-08-02 5:10 [PATCH mptcp-next 0/3] add bpf_stale scheduler Geliang Tang
2023-08-02 5:10 ` [PATCH mptcp-next 1/3] Squash to "bpf: Add bpf_mptcp_sched_ops" Geliang Tang
2023-08-02 16:52 ` Mat Martineau
2023-08-03 7:25 ` Matthieu Baerts
2023-08-03 7:36 ` Geliang Tang
2023-08-04 12:27 ` Matthieu Baerts
2023-08-02 5:10 ` [PATCH mptcp-next 2/3] selftests/bpf: Add bpf_stale scheduler Geliang Tang
2023-08-02 17:08 ` Mat Martineau
2023-08-02 5:10 ` [PATCH mptcp-next 3/3] selftests/bpf: Add bpf_stale test Geliang Tang
2023-08-02 6:19 ` selftests/bpf: Add bpf_stale test: Tests Results MPTCP CI