* [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN
@ 2018-12-04 4:00 Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 1/5] net/flow_dissector: move bpf case into __skb_flow_bpf_dissect Stanislav Fomichev
` (5 more replies)
0 siblings, 6 replies; 14+ messages in thread
From: Stanislav Fomichev @ 2018-12-04 4:00 UTC (permalink / raw)
To: netdev; +Cc: davem, ast, daniel, simon.horman, liu.song.a23,
Stanislav Fomichev
This patch series adds support for testing flow dissector BPF programs by
extending the already existing BPF_PROG_TEST_RUN. The goal is to have a
packet as an input and `struct bpf_flow_keys' as an output. That way
we can easily test flow dissector programs' behavior.
I've also modified existing test_progs.c test to do a simple flow
dissector run as well.
* first patch introduces a new __skb_flow_bpf_dissect to simplify
sharing between __skb_flow_dissect and BPF_PROG_TEST_RUN
* second patch adds actual BPF_PROG_TEST_RUN support
* third patch converts the BPF flow dissector to use thoff
* fourth patch correctly caps nhoff and thoff returned from the BPF flow
dissector
* fifth patch adds example usage to the selftests
v2 changes:
* new patch to use thoff instead of nhoff in bpf flow dissector
* new patch to correctly cap thoff for BPF case
* add missing memset(flow_keys, 0, ...) to __skb_flow_bpf_dissect
* set test iterations to 10
Stanislav Fomichev (5):
net/flow_dissector: move bpf case into __skb_flow_bpf_dissect
bpf: add BPF_PROG_TEST_RUN support for flow dissector
selftests/bpf: use thoff instead of nhoff in BPF flow dissector
net/flow_dissector: correctly cap nhoff and thoff in case of BPF
selftests/bpf: add simple BPF_PROG_TEST_RUN examples for flow
dissector
include/linux/bpf.h | 3 +
include/linux/skbuff.h | 5 ++
net/bpf/test_run.c | 71 ++++++++++++++-
net/core/filter.c | 1 +
net/core/flow_dissector.c | 88 ++++++++++++-------
tools/testing/selftests/bpf/Makefile | 3 +
tools/testing/selftests/bpf/bpf_flow.c | 36 ++++----
.../selftests/bpf/flow_dissector_load.c | 43 ++-------
.../selftests/bpf/flow_dissector_load.h | 55 ++++++++++++
tools/testing/selftests/bpf/test_progs.c | 78 +++++++++++++++-
10 files changed, 289 insertions(+), 94 deletions(-)
create mode 100644 tools/testing/selftests/bpf/flow_dissector_load.h
--
2.20.0.rc1.387.gf8505762e3-goog
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH bpf-next v2 1/5] net/flow_dissector: move bpf case into __skb_flow_bpf_dissect
2018-12-04 4:00 [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Stanislav Fomichev
@ 2018-12-04 4:00 ` Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 2/5] bpf: add BPF_PROG_TEST_RUN support for flow dissector Stanislav Fomichev
From: Stanislav Fomichev @ 2018-12-04 4:00 UTC (permalink / raw)
To: netdev; +Cc: davem, ast, daniel, simon.horman, liu.song.a23,
Stanislav Fomichev
This way, we can reuse it for the flow dissector in BPF_PROG_TEST_RUN.
No functional changes.
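The extracted helper follows a save/run/restore pattern on skb->cb: stash the
control block, zero it, point it at the caller's flow_keys, run the program,
then restore the control block byte for byte so the skb is left unchanged. A
minimal userspace sketch of that pattern (fake_cb, fake_flow_keys, sample_prog
and run_dissector are hypothetical stand-ins for the kernel types, not the
actual kernel code):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct fake_flow_keys { uint16_t nhoff, thoff; };
struct fake_cb { void *flow_keys; uint8_t qdisc_data[40]; };

/* Stand-in for an attached BPF program: fills in the output keys. */
static int sample_prog(struct fake_cb *cb)
{
	struct fake_flow_keys *keys = cb->flow_keys;

	keys->nhoff = 14;	/* pretend the network header starts at 14 */
	return 1;		/* BPF_OK equivalent */
}

/* Mirrors the shape of __skb_flow_bpf_dissect. */
static int run_dissector(struct fake_cb *cb, struct fake_flow_keys *keys,
			 int (*prog)(struct fake_cb *))
{
	struct fake_cb saved;
	int ret;

	memcpy(&saved, cb, sizeof(saved));	/* save control block */
	memset(cb, 0, sizeof(*cb));

	memset(keys, 0, sizeof(*keys));		/* pass parameters to prog */
	cb->flow_keys = keys;

	ret = prog(cb);

	memcpy(cb, &saved, sizeof(saved));	/* restore state */
	return ret;
}
```

After the call, the caller's control block is bitwise identical to what it was
before, which is what lets __skb_flow_dissect keep its const skb argument.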
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
include/linux/skbuff.h | 5 +++
net/core/flow_dissector.c | 85 +++++++++++++++++++++++----------------
2 files changed, 56 insertions(+), 34 deletions(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 73902acf2b71..c10b27bb0cab 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1215,6 +1215,11 @@ static inline int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
}
#endif
+struct bpf_flow_keys;
+bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
+ const struct sk_buff *skb,
+ struct flow_dissector *flow_dissector,
+ struct bpf_flow_keys *flow_keys);
bool __skb_flow_dissect(const struct sk_buff *skb,
struct flow_dissector *flow_dissector,
void *target_container,
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 2e8d91e54179..3c8a78decbc0 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -683,6 +683,41 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
}
}
+bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
+ const struct sk_buff *skb,
+ struct flow_dissector *flow_dissector,
+ struct bpf_flow_keys *flow_keys)
+{
+ struct bpf_skb_data_end cb_saved;
+ struct bpf_skb_data_end *cb;
+ u32 result;
+
+ /* Note that even though the const qualifier is discarded
+ * throughout the execution of the BPF program, all changes(the
+ * control block) are reverted after the BPF program returns.
+ * Therefore, __skb_flow_dissect does not alter the skb.
+ */
+
+ cb = (struct bpf_skb_data_end *)skb->cb;
+
+ /* Save Control Block */
+ memcpy(&cb_saved, cb, sizeof(cb_saved));
+ memset(cb, 0, sizeof(*cb));
+
+ /* Pass parameters to the BPF program */
+ memset(flow_keys, 0, sizeof(*flow_keys));
+ cb->qdisc_cb.flow_keys = flow_keys;
+ flow_keys->nhoff = skb_network_offset(skb);
+
+ bpf_compute_data_pointers((struct sk_buff *)skb);
+ result = BPF_PROG_RUN(prog, skb);
+
+ /* Restore state */
+ memcpy(cb, &cb_saved, sizeof(cb_saved));
+
+ return result == BPF_OK;
+}
+
/**
* __skb_flow_dissect - extract the flow_keys struct and return it
* @skb: sk_buff to extract the flow from, can be NULL if the rest are specified
@@ -714,7 +749,6 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
struct flow_dissector_key_vlan *key_vlan;
enum flow_dissect_ret fdret;
enum flow_dissector_key_id dissector_vlan = FLOW_DISSECTOR_KEY_MAX;
- struct bpf_prog *attached = NULL;
int num_hdrs = 0;
u8 ip_proto = 0;
bool ret;
@@ -754,49 +788,32 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
FLOW_DISSECTOR_KEY_BASIC,
target_container);
- rcu_read_lock();
if (skb) {
+ struct bpf_flow_keys flow_keys;
+ struct bpf_prog *attached = NULL;
+
+ rcu_read_lock();
+
if (skb->dev)
attached = rcu_dereference(dev_net(skb->dev)->flow_dissector_prog);
else if (skb->sk)
attached = rcu_dereference(sock_net(skb->sk)->flow_dissector_prog);
else
WARN_ON_ONCE(1);
- }
- if (attached) {
- /* Note that even though the const qualifier is discarded
- * throughout the execution of the BPF program, all changes(the
- * control block) are reverted after the BPF program returns.
- * Therefore, __skb_flow_dissect does not alter the skb.
- */
- struct bpf_flow_keys flow_keys = {};
- struct bpf_skb_data_end cb_saved;
- struct bpf_skb_data_end *cb;
- u32 result;
-
- cb = (struct bpf_skb_data_end *)skb->cb;
-
- /* Save Control Block */
- memcpy(&cb_saved, cb, sizeof(cb_saved));
- memset(cb, 0, sizeof(cb_saved));
- /* Pass parameters to the BPF program */
- cb->qdisc_cb.flow_keys = &flow_keys;
- flow_keys.nhoff = nhoff;
-
- bpf_compute_data_pointers((struct sk_buff *)skb);
- result = BPF_PROG_RUN(attached, skb);
-
- /* Restore state */
- memcpy(cb, &cb_saved, sizeof(cb_saved));
-
- __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
- target_container);
- key_control->thoff = min_t(u16, key_control->thoff, skb->len);
+ if (attached) {
+ ret = __skb_flow_bpf_dissect(attached, skb,
+ flow_dissector,
+ &flow_keys);
+ __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
+ target_container);
+ key_control->thoff = min_t(u16, key_control->thoff,
+ skb->len);
+ rcu_read_unlock();
+ return ret;
+ }
rcu_read_unlock();
- return result == BPF_OK;
}
- rcu_read_unlock();
if (dissector_uses_key(flow_dissector,
FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
--
2.20.0.rc1.387.gf8505762e3-goog
* [PATCH bpf-next v2 2/5] bpf: add BPF_PROG_TEST_RUN support for flow dissector
2018-12-04 4:00 [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 1/5] net/flow_dissector: move bpf case into __skb_flow_bpf_dissect Stanislav Fomichev
@ 2018-12-04 4:00 ` Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF " Stanislav Fomichev
From: Stanislav Fomichev @ 2018-12-04 4:00 UTC (permalink / raw)
To: netdev; +Cc: davem, ast, daniel, simon.horman, liu.song.a23,
Stanislav Fomichev
The input is packet data, the output is struct bpf_flow_keys. This should
make it easy to test flow dissector programs without an elaborate
setup.
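The contract can be sketched in plain C: the caller hands in raw packet bytes,
the harness runs the dissector `repeat` times, and the resulting flow_keys
struct is copied out as the test output (fake_dissector and fake_test_run are
hypothetical stand-ins mirroring the shape of the kernel code, not the code
itself):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct fake_flow_keys { uint16_t nhoff, thoff, sport, dport; };

/* Stand-in "program": pretends the packet starts with a 20-byte
 * IPv4 header and fills in the transport offset. */
static int fake_dissector(const uint8_t *data, uint32_t size,
			  struct fake_flow_keys *keys)
{
	(void)data;
	if (size < 20)
		return 0;	/* BPF_DROP equivalent */
	keys->nhoff = 0;
	keys->thoff = 20;
	return 1;		/* BPF_OK equivalent */
}

/* Mirrors the shape of bpf_prog_test_run_flow_dissector: input is raw
 * packet bytes, output is the flow_keys struct plus the retval. */
static int fake_test_run(const uint8_t *data_in, uint32_t size_in,
			 int repeat, struct fake_flow_keys *keys_out,
			 uint32_t *size_out)
{
	struct fake_flow_keys keys;
	int ret = 0, i;

	for (i = 0; i < (repeat > 0 ? repeat : 1); i++) {
		memset(&keys, 0, sizeof(keys));
		ret = fake_dissector(data_in, size_in, &keys);
	}
	memcpy(keys_out, &keys, sizeof(keys));
	*size_out = sizeof(keys);
	return ret;
}
```

In the real patch the per-iteration work happens inside bpf_test_run(), with
the skb's cb pointing at the on-stack flow_keys.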
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
include/linux/bpf.h | 3 ++
net/bpf/test_run.c | 71 ++++++++++++++++++++++++++++++++++++++++++++-
net/core/filter.c | 1 +
3 files changed, 74 insertions(+), 1 deletion(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index e82b7039fc66..7a572d15d5dd 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -373,6 +373,9 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
union bpf_attr __user *uattr);
+int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+ const union bpf_attr *kattr,
+ union bpf_attr __user *uattr);
/* an array of programs to be executed under rcu_lock.
*
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index c89c22c49015..0368ee9eec14 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -16,12 +16,26 @@
static __always_inline u32 bpf_test_run_one(struct bpf_prog *prog, void *ctx,
struct bpf_cgroup_storage *storage[MAX_BPF_CGROUP_STORAGE_TYPE])
{
+ struct bpf_skb_data_end *cb;
+ struct sk_buff *skb;
u32 ret;
preempt_disable();
rcu_read_lock();
bpf_cgroup_storage_set(storage);
- ret = BPF_PROG_RUN(prog, ctx);
+
+ switch (prog->type) {
+ case BPF_PROG_TYPE_FLOW_DISSECTOR:
+ skb = (struct sk_buff *)ctx;
+ cb = (struct bpf_skb_data_end *)skb->cb;
+ ret = __skb_flow_bpf_dissect(prog, ctx, &flow_keys_dissector,
+ cb->qdisc_cb.flow_keys);
+ break;
+ default:
+ ret = BPF_PROG_RUN(prog, ctx);
+ break;
+ }
+
rcu_read_unlock();
preempt_enable();
@@ -220,3 +234,58 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
kfree(data);
return ret;
}
+
+int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
+ const union bpf_attr *kattr,
+ union bpf_attr __user *uattr)
+{
+ u32 size = kattr->test.data_size_in;
+ u32 repeat = kattr->test.repeat;
+ struct bpf_flow_keys flow_keys;
+ struct bpf_skb_data_end *cb;
+ u32 retval, duration;
+ struct sk_buff *skb;
+ struct sock *sk;
+ void *data;
+ int ret;
+
+ if (prog->type != BPF_PROG_TYPE_FLOW_DISSECTOR)
+ return -EINVAL;
+
+ data = bpf_test_init(kattr, size, NET_SKB_PAD + NET_IP_ALIGN,
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+
+ sk = kzalloc(sizeof(*sk), GFP_USER);
+ if (!sk) {
+ kfree(data);
+ return -ENOMEM;
+ }
+ sock_net_set(sk, current->nsproxy->net_ns);
+ sock_init_data(NULL, sk);
+
+ skb = build_skb(data, 0);
+ if (!skb) {
+ kfree(data);
+ kfree(sk);
+ return -ENOMEM;
+ }
+ skb->sk = sk;
+
+ skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
+ __skb_put(skb, size);
+ skb->protocol = eth_type_trans(skb,
+ current->nsproxy->net_ns->loopback_dev);
+ skb_reset_network_header(skb);
+
+ cb = (struct bpf_skb_data_end *)skb->cb;
+ cb->qdisc_cb.flow_keys = &flow_keys;
+ retval = bpf_test_run(prog, skb, repeat, &duration);
+
+ ret = bpf_test_finish(kattr, uattr, &flow_keys, sizeof(flow_keys),
+ retval, duration);
+ kfree_skb(skb);
+ kfree(sk);
+ return ret;
+}
diff --git a/net/core/filter.c b/net/core/filter.c
index bd0df75dc7b6..4eae6102399d 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -7657,6 +7657,7 @@ const struct bpf_verifier_ops flow_dissector_verifier_ops = {
};
const struct bpf_prog_ops flow_dissector_prog_ops = {
+ .test_run = bpf_prog_test_run_flow_dissector,
};
int sk_detach_filter(struct sock *sk)
--
2.20.0.rc1.387.gf8505762e3-goog
* [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF flow dissector
2018-12-04 4:00 [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 1/5] net/flow_dissector: move bpf case into __skb_flow_bpf_dissect Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 2/5] bpf: add BPF_PROG_TEST_RUN support for flow dissector Stanislav Fomichev
@ 2018-12-04 4:00 ` Stanislav Fomichev
2018-12-04 23:16 ` Song Liu
2018-12-04 4:00 ` [PATCH bpf-next v2 4/5] net/flow_dissector: correctly cap nhoff and thoff in case of BPF Stanislav Fomichev
From: Stanislav Fomichev @ 2018-12-04 4:00 UTC (permalink / raw)
To: netdev; +Cc: davem, ast, daniel, simon.horman, liu.song.a23,
Stanislav Fomichev
We are returning thoff from the flow dissector, not the nhoff. Pass
thoff along with nhoff to the BPF program (initially thoff == nhoff)
and expect the flow dissector to amend/return thoff, not nhoff.
This avoids the confusion where, by the time the BPF flow dissector
exits, nhoff == thoff, which doesn't make much sense (this is relevant
in the context of the next patch, where I add a simple selftest
and manually construct the expected flow_keys).
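Concretely, after this change nhoff stays pinned at the start of the network
header while thoff starts equal to nhoff and is advanced past each parsed
header. A plain-C sketch of the IPv4 step (step_over_ipv4 is a hypothetical
helper; the real logic in bpf_flow.c is keys->thoff += iph->ihl << 2):

```c
#include <assert.h>
#include <stdint.h>

/* nhoff is left alone; thoff starts at nhoff and steps past the IPv4
 * header, whose length is ihl 32-bit words, i.e. ihl * 4 bytes. */
static void step_over_ipv4(uint16_t nhoff, uint16_t *thoff, uint8_t ihl)
{
	*thoff = nhoff;			/* initially thoff == nhoff */
	*thoff += (uint16_t)(ihl << 2);	/* advance past the IP header */
}
```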
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
net/core/flow_dissector.c | 1 +
tools/testing/selftests/bpf/bpf_flow.c | 36 ++++++++++++--------------
2 files changed, 18 insertions(+), 19 deletions(-)
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index 3c8a78decbc0..ac19da6f390b 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -708,6 +708,7 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
memset(flow_keys, 0, sizeof(*flow_keys));
cb->qdisc_cb.flow_keys = flow_keys;
flow_keys->nhoff = skb_network_offset(skb);
+ flow_keys->thoff = flow_keys->nhoff;
bpf_compute_data_pointers((struct sk_buff *)skb);
result = BPF_PROG_RUN(prog, skb);
diff --git a/tools/testing/selftests/bpf/bpf_flow.c b/tools/testing/selftests/bpf/bpf_flow.c
index b9798f558ca7..284660f5aa95 100644
--- a/tools/testing/selftests/bpf/bpf_flow.c
+++ b/tools/testing/selftests/bpf/bpf_flow.c
@@ -70,18 +70,18 @@ static __always_inline void *bpf_flow_dissect_get_header(struct __sk_buff *skb,
{
void *data_end = (void *)(long)skb->data_end;
void *data = (void *)(long)skb->data;
- __u16 nhoff = skb->flow_keys->nhoff;
+ __u16 thoff = skb->flow_keys->thoff;
__u8 *hdr;
/* Verifies this variable offset does not overflow */
- if (nhoff > (USHRT_MAX - hdr_size))
+ if (thoff > (USHRT_MAX - hdr_size))
return NULL;
- hdr = data + nhoff;
+ hdr = data + thoff;
if (hdr + hdr_size <= data_end)
return hdr;
- if (bpf_skb_load_bytes(skb, nhoff, buffer, hdr_size))
+ if (bpf_skb_load_bytes(skb, thoff, buffer, hdr_size))
return NULL;
return buffer;
@@ -158,13 +158,13 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
/* Only inspect standard GRE packets with version 0 */
return BPF_OK;
- keys->nhoff += sizeof(*gre); /* Step over GRE Flags and Proto */
+ keys->thoff += sizeof(*gre); /* Step over GRE Flags and Proto */
if (GRE_IS_CSUM(gre->flags))
- keys->nhoff += 4; /* Step over chksum and Padding */
+ keys->thoff += 4; /* Step over chksum and Padding */
if (GRE_IS_KEY(gre->flags))
- keys->nhoff += 4; /* Step over key */
+ keys->thoff += 4; /* Step over key */
if (GRE_IS_SEQ(gre->flags))
- keys->nhoff += 4; /* Step over sequence number */
+ keys->thoff += 4; /* Step over sequence number */
keys->is_encap = true;
@@ -174,7 +174,7 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
if (!eth)
return BPF_DROP;
- keys->nhoff += sizeof(*eth);
+ keys->thoff += sizeof(*eth);
return parse_eth_proto(skb, eth->h_proto);
} else {
@@ -191,7 +191,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
if ((__u8 *)tcp + (tcp->doff << 2) > data_end)
return BPF_DROP;
- keys->thoff = keys->nhoff;
keys->sport = tcp->source;
keys->dport = tcp->dest;
return BPF_OK;
@@ -201,7 +200,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
if (!udp)
return BPF_DROP;
- keys->thoff = keys->nhoff;
keys->sport = udp->source;
keys->dport = udp->dest;
return BPF_OK;
@@ -252,8 +250,8 @@ PROG(IP)(struct __sk_buff *skb)
keys->ipv4_src = iph->saddr;
keys->ipv4_dst = iph->daddr;
- keys->nhoff += iph->ihl << 2;
- if (data + keys->nhoff > data_end)
+ keys->thoff += iph->ihl << 2;
+ if (data + keys->thoff > data_end)
return BPF_DROP;
if (iph->frag_off & bpf_htons(IP_MF | IP_OFFSET)) {
@@ -285,7 +283,7 @@ PROG(IPV6)(struct __sk_buff *skb)
keys->addr_proto = ETH_P_IPV6;
memcpy(&keys->ipv6_src, &ip6h->saddr, 2*sizeof(ip6h->saddr));
- keys->nhoff += sizeof(struct ipv6hdr);
+ keys->thoff += sizeof(struct ipv6hdr);
return parse_ipv6_proto(skb, ip6h->nexthdr);
}
@@ -301,7 +299,7 @@ PROG(IPV6OP)(struct __sk_buff *skb)
/* hlen is in 8-octets and does not include the first 8 bytes
* of the header
*/
- skb->flow_keys->nhoff += (1 + ip6h->hdrlen) << 3;
+ skb->flow_keys->thoff += (1 + ip6h->hdrlen) << 3;
return parse_ipv6_proto(skb, ip6h->nexthdr);
}
@@ -315,7 +313,7 @@ PROG(IPV6FR)(struct __sk_buff *skb)
if (!fragh)
return BPF_DROP;
- keys->nhoff += sizeof(*fragh);
+ keys->thoff += sizeof(*fragh);
keys->is_frag = true;
if (!(fragh->frag_off & bpf_htons(IP6_OFFSET)))
keys->is_first_frag = true;
@@ -341,7 +339,7 @@ PROG(VLAN)(struct __sk_buff *skb)
__be16 proto;
/* Peek back to see if single or double-tagging */
- if (bpf_skb_load_bytes(skb, keys->nhoff - sizeof(proto), &proto,
+ if (bpf_skb_load_bytes(skb, keys->thoff - sizeof(proto), &proto,
sizeof(proto)))
return BPF_DROP;
@@ -354,14 +352,14 @@ PROG(VLAN)(struct __sk_buff *skb)
if (vlan->h_vlan_encapsulated_proto != bpf_htons(ETH_P_8021Q))
return BPF_DROP;
- keys->nhoff += sizeof(*vlan);
+ keys->thoff += sizeof(*vlan);
}
vlan = bpf_flow_dissect_get_header(skb, sizeof(*vlan), &_vlan);
if (!vlan)
return BPF_DROP;
- keys->nhoff += sizeof(*vlan);
+ keys->thoff += sizeof(*vlan);
/* Only allow 8021AD + 8021Q double tagging and no triple tagging.*/
if (vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021AD) ||
vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021Q))
--
2.20.0.rc1.387.gf8505762e3-goog
* [PATCH bpf-next v2 4/5] net/flow_dissector: correctly cap nhoff and thoff in case of BPF
2018-12-04 4:00 [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF " Stanislav Fomichev
@ 2018-12-04 4:00 ` Stanislav Fomichev
2018-12-04 23:17 ` Song Liu
2018-12-04 4:01 ` [PATCH bpf-next v2 5/5] selftests/bpf: add simple BPF_PROG_TEST_RUN examples for flow dissector Stanislav Fomichev
2018-12-05 17:55 ` [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Song Liu
From: Stanislav Fomichev @ 2018-12-04 4:00 UTC (permalink / raw)
To: netdev; +Cc: davem, ast, daniel, simon.horman, liu.song.a23,
Stanislav Fomichev
We want to make sure that the following condition holds:
0 <= nhoff <= thoff <= skb->len
A BPF program can set out-of-bounds nhoff and thoff, which is dangerous; see
the recent commit d0c081b49137 ("flow_dissector: properly cap thoff field").
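The capping can be modeled in userspace with the same clamp semantics the
patch uses (clamp_u16 and cap_offsets below are hypothetical stand-ins for the
kernel's clamp_t calls): nhoff is clamped to [0, skb->len], then thoff is
clamped to [nhoff, skb->len], which enforces the full invariant.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace model of the kernel's clamp_t(u16, val, lo, hi). */
static uint16_t clamp_u16(uint16_t val, uint16_t lo, uint16_t hi)
{
	return val < lo ? lo : (val > hi ? hi : val);
}

/* Enforce 0 <= nhoff <= thoff <= skb_len on program-supplied offsets. */
static void cap_offsets(uint16_t *nhoff, uint16_t *thoff, uint16_t skb_len)
{
	*nhoff = clamp_u16(*nhoff, 0, skb_len);
	*thoff = clamp_u16(*thoff, *nhoff, skb_len);
}
```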
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
net/core/flow_dissector.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
index ac19da6f390b..bb1a54747d64 100644
--- a/net/core/flow_dissector.c
+++ b/net/core/flow_dissector.c
@@ -716,6 +716,10 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
/* Restore state */
memcpy(cb, &cb_saved, sizeof(cb_saved));
+ flow_keys->nhoff = clamp_t(u16, flow_keys->nhoff, 0, skb->len);
+ flow_keys->thoff = clamp_t(u16, flow_keys->thoff,
+ flow_keys->nhoff, skb->len);
+
return result == BPF_OK;
}
@@ -808,8 +812,6 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
&flow_keys);
__skb_flow_bpf_to_target(&flow_keys, flow_dissector,
target_container);
- key_control->thoff = min_t(u16, key_control->thoff,
- skb->len);
rcu_read_unlock();
return ret;
}
--
2.20.0.rc1.387.gf8505762e3-goog
* [PATCH bpf-next v2 5/5] selftests/bpf: add simple BPF_PROG_TEST_RUN examples for flow dissector
2018-12-04 4:00 [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 4/5] net/flow_dissector: correctly cap nhoff and thoff in case of BPF Stanislav Fomichev
@ 2018-12-04 4:01 ` Stanislav Fomichev
2018-12-05 17:55 ` [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Song Liu
From: Stanislav Fomichev @ 2018-12-04 4:01 UTC (permalink / raw)
To: netdev; +Cc: davem, ast, daniel, simon.horman, liu.song.a23,
Stanislav Fomichev
Use the existing pkt_v4 and pkt_v6 test vectors to make sure the returned
flow_keys are what we want. Also, add a new bpf_flow_load routine (and a
flow_dissector_load.h header) that loads the bpf_flow.o program and does
all the required setup.
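For reference, the expected flow_keys in the new test follow from the packet
layout: eth_type_trans() consumes the Ethernet header, so the dissector sees
nhoff == 0 and, for a plain IPv4+TCP packet, thoff == sizeof(struct iphdr).
A self-contained sketch of the pkt_v4 layout (hand-rolled packed structs
standing in for the uapi ethhdr/iphdr/tcphdr; the real selftest uses the
kernel headers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative packed layouts matching the on-wire header sizes. */
struct eth { uint8_t dst[6], src[6]; uint16_t proto; } __attribute__((packed));
struct ip4 {
	uint8_t vihl, tos; uint16_t tot_len, id, frag_off;
	uint8_t ttl, proto; uint16_t check; uint32_t saddr, daddr;
} __attribute__((packed));
struct tcp {
	uint16_t sport, dport; uint32_t seq, ack;
	uint16_t flags, win, check, urg;
} __attribute__((packed));

/* Mirrors the shape of pkt_v4 in test_progs.c. */
struct pkt_v4 {
	struct eth eth;
	struct ip4 iph;
	struct tcp tcp;
} __attribute__((packed));
```

With this layout, the expected flow_keys for the IPv4 vector are
nhoff = 0 and thoff = 20 (sizeof the IPv4 header), matching
pkt_v4_flow_keys in the patch.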
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
tools/testing/selftests/bpf/Makefile | 3 +
.../selftests/bpf/flow_dissector_load.c | 43 ++--------
.../selftests/bpf/flow_dissector_load.h | 55 +++++++++++++
tools/testing/selftests/bpf/test_progs.c | 78 ++++++++++++++++++-
4 files changed, 139 insertions(+), 40 deletions(-)
create mode 100644 tools/testing/selftests/bpf/flow_dissector_load.h
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 73aa6d8f4a2f..387763e6d137 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -19,6 +19,9 @@ all: $(TEST_CUSTOM_PROGS)
$(TEST_CUSTOM_PROGS): $(OUTPUT)/%: %.c
$(CC) -o $(TEST_CUSTOM_PROGS) -static $< -Wl,--build-id
+flow_dissector_load.c: flow_dissector_load.h
+test_run.c: flow_dissector_load.h
+
# Order correspond to 'make run_tests' order
TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test_progs \
test_align test_verifier_log test_dev_cgroup test_tcpbpf_user \
diff --git a/tools/testing/selftests/bpf/flow_dissector_load.c b/tools/testing/selftests/bpf/flow_dissector_load.c
index ae8180b11d5f..77cafa66d048 100644
--- a/tools/testing/selftests/bpf/flow_dissector_load.c
+++ b/tools/testing/selftests/bpf/flow_dissector_load.c
@@ -12,6 +12,7 @@
#include <bpf/libbpf.h>
#include "bpf_rlimit.h"
+#include "flow_dissector_load.h"
const char *cfg_pin_path = "/sys/fs/bpf/flow_dissector";
const char *cfg_map_name = "jmp_table";
@@ -21,46 +22,13 @@ char *cfg_path_name;
static void load_and_attach_program(void)
{
- struct bpf_program *prog, *main_prog;
- struct bpf_map *prog_array;
- int i, fd, prog_fd, ret;
+ int prog_fd, ret;
struct bpf_object *obj;
- int prog_array_fd;
- ret = bpf_prog_load(cfg_path_name, BPF_PROG_TYPE_FLOW_DISSECTOR, &obj,
- &prog_fd);
+ ret = bpf_flow_load(&obj, cfg_path_name, cfg_section_name,
+ cfg_map_name, &prog_fd);
if (ret)
- error(1, 0, "bpf_prog_load %s", cfg_path_name);
-
- main_prog = bpf_object__find_program_by_title(obj, cfg_section_name);
- if (!main_prog)
- error(1, 0, "bpf_object__find_program_by_title %s",
- cfg_section_name);
-
- prog_fd = bpf_program__fd(main_prog);
- if (prog_fd < 0)
- error(1, 0, "bpf_program__fd");
-
- prog_array = bpf_object__find_map_by_name(obj, cfg_map_name);
- if (!prog_array)
- error(1, 0, "bpf_object__find_map_by_name %s", cfg_map_name);
-
- prog_array_fd = bpf_map__fd(prog_array);
- if (prog_array_fd < 0)
- error(1, 0, "bpf_map__fd %s", cfg_map_name);
-
- i = 0;
- bpf_object__for_each_program(prog, obj) {
- fd = bpf_program__fd(prog);
- if (fd < 0)
- error(1, 0, "bpf_program__fd");
-
- if (fd != prog_fd) {
- printf("%d: %s\n", i, bpf_program__title(prog, false));
- bpf_map_update_elem(prog_array_fd, &i, &fd, BPF_ANY);
- ++i;
- }
- }
+ error(1, 0, "bpf_flow_load %s", cfg_path_name);
ret = bpf_prog_attach(prog_fd, 0 /* Ignore */, BPF_FLOW_DISSECTOR, 0);
if (ret)
@@ -69,7 +37,6 @@ static void load_and_attach_program(void)
ret = bpf_object__pin(obj, cfg_pin_path);
if (ret)
error(1, 0, "bpf_object__pin %s", cfg_pin_path);
-
}
static void detach_program(void)
diff --git a/tools/testing/selftests/bpf/flow_dissector_load.h b/tools/testing/selftests/bpf/flow_dissector_load.h
new file mode 100644
index 000000000000..41dd6959feb0
--- /dev/null
+++ b/tools/testing/selftests/bpf/flow_dissector_load.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+#ifndef FLOW_DISSECTOR_LOAD
+#define FLOW_DISSECTOR_LOAD
+
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+
+static inline int bpf_flow_load(struct bpf_object **obj,
+ const char *path,
+ const char *section_name,
+ const char *map_name,
+ int *prog_fd)
+{
+ struct bpf_program *prog, *main_prog;
+ struct bpf_map *prog_array;
+ int prog_array_fd;
+ int ret, fd, i;
+
+ ret = bpf_prog_load(path, BPF_PROG_TYPE_FLOW_DISSECTOR, obj,
+ prog_fd);
+ if (ret)
+ return ret;
+
+ main_prog = bpf_object__find_program_by_title(*obj, section_name);
+ if (!main_prog)
+ return ret;
+
+ *prog_fd = bpf_program__fd(main_prog);
+ if (*prog_fd < 0)
+ return ret;
+
+ prog_array = bpf_object__find_map_by_name(*obj, map_name);
+ if (!prog_array)
+ return ret;
+
+ prog_array_fd = bpf_map__fd(prog_array);
+ if (prog_array_fd < 0)
+ return ret;
+
+ i = 0;
+ bpf_object__for_each_program(prog, *obj) {
+ fd = bpf_program__fd(prog);
+ if (fd < 0)
+ return fd;
+
+ if (fd != *prog_fd) {
+ bpf_map_update_elem(prog_array_fd, &i, &fd, BPF_ANY);
+ ++i;
+ }
+ }
+
+ return 0;
+}
+
+#endif /* FLOW_DISSECTOR_LOAD */
diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index 1c57abbe0945..b276e3d6685b 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -39,6 +39,7 @@ typedef __u16 __sum16;
#include "bpf_endian.h"
#include "bpf_rlimit.h"
#include "trace_helpers.h"
+#include "flow_dissector_load.h"
static int error_cnt, pass_cnt;
static bool jit_enabled;
@@ -53,9 +54,10 @@ static struct {
} __packed pkt_v4 = {
.eth.h_proto = bpf_htons(ETH_P_IP),
.iph.ihl = 5,
- .iph.protocol = 6,
+ .iph.protocol = IPPROTO_TCP,
.iph.tot_len = bpf_htons(MAGIC_BYTES),
.tcp.urg_ptr = 123,
+ .tcp.doff = 5,
};
/* ipv6 test vector */
@@ -65,9 +67,10 @@ static struct {
struct tcphdr tcp;
} __packed pkt_v6 = {
.eth.h_proto = bpf_htons(ETH_P_IPV6),
- .iph.nexthdr = 6,
+ .iph.nexthdr = IPPROTO_TCP,
.iph.payload_len = bpf_htons(MAGIC_BYTES),
.tcp.urg_ptr = 123,
+ .tcp.doff = 5,
};
#define CHECK(condition, tag, format...) ({ \
@@ -1830,6 +1833,76 @@ static void test_queue_stack_map(int type)
bpf_object__close(obj);
}
+#define CHECK_FLOW_KEYS(desc, got, expected) \
+ CHECK(memcmp(&got, &expected, sizeof(got)) != 0, \
+ desc, \
+ "nhoff=%u/%u " \
+ "thoff=%u/%u " \
+ "addr_proto=0x%x/0x%x " \
+ "is_frag=%u/%u " \
+ "is_first_frag=%u/%u " \
+ "is_encap=%u/%u " \
+ "n_proto=0x%x/0x%x " \
+ "sport=%u/%u " \
+ "dport=%u/%u\n", \
+ got.nhoff, expected.nhoff, \
+ got.thoff, expected.thoff, \
+ got.addr_proto, expected.addr_proto, \
+ got.is_frag, expected.is_frag, \
+ got.is_first_frag, expected.is_first_frag, \
+ got.is_encap, expected.is_encap, \
+ got.n_proto, expected.n_proto, \
+ got.sport, expected.sport, \
+ got.dport, expected.dport)
+
+static struct bpf_flow_keys pkt_v4_flow_keys = {
+ .nhoff = 0,
+ .thoff = sizeof(struct iphdr),
+ .addr_proto = ETH_P_IP,
+ .ip_proto = IPPROTO_TCP,
+ .n_proto = bpf_htons(ETH_P_IP),
+};
+
+static struct bpf_flow_keys pkt_v6_flow_keys = {
+ .nhoff = 0,
+ .thoff = sizeof(struct ipv6hdr),
+ .addr_proto = ETH_P_IPV6,
+ .ip_proto = IPPROTO_TCP,
+ .n_proto = bpf_htons(ETH_P_IPV6),
+};
+
+static void test_flow_dissector(void)
+{
+ struct bpf_flow_keys flow_keys;
+ struct bpf_object *obj;
+ __u32 duration, retval;
+ int err, prog_fd;
+ __u32 size;
+
+ err = bpf_flow_load(&obj, "./bpf_flow.o", "flow_dissector",
+ "jmp_table", &prog_fd);
+ if (err) {
+ error_cnt++;
+ return;
+ }
+
+ err = bpf_prog_test_run(prog_fd, 10, &pkt_v4, sizeof(pkt_v4),
+ &flow_keys, &size, &retval, &duration);
+ CHECK(size != sizeof(flow_keys) || err || retval != 1, "ipv4",
+ "err %d errno %d retval %d duration %d size %u/%lu\n",
+ err, errno, retval, duration, size, sizeof(flow_keys));
+ CHECK_FLOW_KEYS("ipv4_flow_keys", flow_keys, pkt_v4_flow_keys);
+
+ err = bpf_prog_test_run(prog_fd, 10, &pkt_v6, sizeof(pkt_v6),
+ &flow_keys, &size, &retval, &duration);
+ CHECK(size != sizeof(flow_keys) || err || retval != 1, "ipv6",
+ "err %d errno %d retval %d duration %d size %u/%lu\n",
+ err, errno, retval, duration, size, sizeof(flow_keys));
+ CHECK_FLOW_KEYS("ipv6_flow_keys", flow_keys, pkt_v6_flow_keys);
+
+ bpf_object__close(obj);
+}
+
int main(void)
{
srand(time(NULL));
@@ -1856,6 +1929,7 @@ int main(void)
test_reference_tracking();
test_queue_stack_map(QUEUE);
test_queue_stack_map(STACK);
+ test_flow_dissector();
printf("Summary: %d PASSED, %d FAILED\n", pass_cnt, error_cnt);
return error_cnt ? EXIT_FAILURE : EXIT_SUCCESS;
--
2.20.0.rc1.387.gf8505762e3-goog
* Re: [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF flow dissector
2018-12-04 4:00 ` [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF " Stanislav Fomichev
@ 2018-12-04 23:16 ` Song Liu
2018-12-04 23:26 ` Stanislav Fomichev
From: Song Liu @ 2018-12-04 23:16 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Networking, David S . Miller, Alexei Starovoitov, Daniel Borkmann,
simon.horman
On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
>
> We are returning thoff from the flow dissector, not the nhoff. Pass
> thoff along with nhoff to the bpf program (initially thoff == nhoff)
> and expect flow dissector amend/return thoff, not nhoff.
>
> This avoids confusion, when by the time bpf flow dissector exits,
> nhoff == thoff, which doesn't make much sense (this is relevant
> in the context of the next patch, where I add simple selftest
> and manually construct expected flow_keys).
>
> Signed-off-by: Stanislav Fomichev <sdf@google.com>
> ---
> net/core/flow_dissector.c | 1 +
> tools/testing/selftests/bpf/bpf_flow.c | 36 ++++++++++++--------------
> 2 files changed, 18 insertions(+), 19 deletions(-)
>
> diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> index 3c8a78decbc0..ac19da6f390b 100644
> --- a/net/core/flow_dissector.c
> +++ b/net/core/flow_dissector.c
> @@ -708,6 +708,7 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
> memset(flow_keys, 0, sizeof(*flow_keys));
> cb->qdisc_cb.flow_keys = flow_keys;
> flow_keys->nhoff = skb_network_offset(skb);
> + flow_keys->thoff = flow_keys->nhoff;
Do we need this fix without this set? If yes, do we need it for bpf
tree as well?
Thanks,
Song
>
> bpf_compute_data_pointers((struct sk_buff *)skb);
> result = BPF_PROG_RUN(prog, skb);
> diff --git a/tools/testing/selftests/bpf/bpf_flow.c b/tools/testing/selftests/bpf/bpf_flow.c
> index b9798f558ca7..284660f5aa95 100644
> --- a/tools/testing/selftests/bpf/bpf_flow.c
> +++ b/tools/testing/selftests/bpf/bpf_flow.c
> @@ -70,18 +70,18 @@ static __always_inline void *bpf_flow_dissect_get_header(struct __sk_buff *skb,
> {
> void *data_end = (void *)(long)skb->data_end;
> void *data = (void *)(long)skb->data;
> - __u16 nhoff = skb->flow_keys->nhoff;
> + __u16 thoff = skb->flow_keys->thoff;
> __u8 *hdr;
>
> /* Verifies this variable offset does not overflow */
> - if (nhoff > (USHRT_MAX - hdr_size))
> + if (thoff > (USHRT_MAX - hdr_size))
> return NULL;
>
> - hdr = data + nhoff;
> + hdr = data + thoff;
> if (hdr + hdr_size <= data_end)
> return hdr;
>
> - if (bpf_skb_load_bytes(skb, nhoff, buffer, hdr_size))
> + if (bpf_skb_load_bytes(skb, thoff, buffer, hdr_size))
> return NULL;
>
> return buffer;
> @@ -158,13 +158,13 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> /* Only inspect standard GRE packets with version 0 */
> return BPF_OK;
>
> - keys->nhoff += sizeof(*gre); /* Step over GRE Flags and Proto */
> + keys->thoff += sizeof(*gre); /* Step over GRE Flags and Proto */
> if (GRE_IS_CSUM(gre->flags))
> - keys->nhoff += 4; /* Step over chksum and Padding */
> + keys->thoff += 4; /* Step over chksum and Padding */
> if (GRE_IS_KEY(gre->flags))
> - keys->nhoff += 4; /* Step over key */
> + keys->thoff += 4; /* Step over key */
> if (GRE_IS_SEQ(gre->flags))
> - keys->nhoff += 4; /* Step over sequence number */
> + keys->thoff += 4; /* Step over sequence number */
>
> keys->is_encap = true;
>
> @@ -174,7 +174,7 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> if (!eth)
> return BPF_DROP;
>
> - keys->nhoff += sizeof(*eth);
> + keys->thoff += sizeof(*eth);
>
> return parse_eth_proto(skb, eth->h_proto);
> } else {
> @@ -191,7 +191,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> if ((__u8 *)tcp + (tcp->doff << 2) > data_end)
> return BPF_DROP;
>
> - keys->thoff = keys->nhoff;
> keys->sport = tcp->source;
> keys->dport = tcp->dest;
> return BPF_OK;
> @@ -201,7 +200,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> if (!udp)
> return BPF_DROP;
>
> - keys->thoff = keys->nhoff;
> keys->sport = udp->source;
> keys->dport = udp->dest;
> return BPF_OK;
> @@ -252,8 +250,8 @@ PROG(IP)(struct __sk_buff *skb)
> keys->ipv4_src = iph->saddr;
> keys->ipv4_dst = iph->daddr;
>
> - keys->nhoff += iph->ihl << 2;
> - if (data + keys->nhoff > data_end)
> + keys->thoff += iph->ihl << 2;
> + if (data + keys->thoff > data_end)
> return BPF_DROP;
>
> if (iph->frag_off & bpf_htons(IP_MF | IP_OFFSET)) {
> @@ -285,7 +283,7 @@ PROG(IPV6)(struct __sk_buff *skb)
> keys->addr_proto = ETH_P_IPV6;
> memcpy(&keys->ipv6_src, &ip6h->saddr, 2*sizeof(ip6h->saddr));
>
> - keys->nhoff += sizeof(struct ipv6hdr);
> + keys->thoff += sizeof(struct ipv6hdr);
>
> return parse_ipv6_proto(skb, ip6h->nexthdr);
> }
> @@ -301,7 +299,7 @@ PROG(IPV6OP)(struct __sk_buff *skb)
> /* hlen is in 8-octets and does not include the first 8 bytes
> * of the header
> */
> - skb->flow_keys->nhoff += (1 + ip6h->hdrlen) << 3;
> + skb->flow_keys->thoff += (1 + ip6h->hdrlen) << 3;
>
> return parse_ipv6_proto(skb, ip6h->nexthdr);
> }
> @@ -315,7 +313,7 @@ PROG(IPV6FR)(struct __sk_buff *skb)
> if (!fragh)
> return BPF_DROP;
>
> - keys->nhoff += sizeof(*fragh);
> + keys->thoff += sizeof(*fragh);
> keys->is_frag = true;
> if (!(fragh->frag_off & bpf_htons(IP6_OFFSET)))
> keys->is_first_frag = true;
> @@ -341,7 +339,7 @@ PROG(VLAN)(struct __sk_buff *skb)
> __be16 proto;
>
> /* Peek back to see if single or double-tagging */
> - if (bpf_skb_load_bytes(skb, keys->nhoff - sizeof(proto), &proto,
> + if (bpf_skb_load_bytes(skb, keys->thoff - sizeof(proto), &proto,
> sizeof(proto)))
> return BPF_DROP;
>
> @@ -354,14 +352,14 @@ PROG(VLAN)(struct __sk_buff *skb)
> if (vlan->h_vlan_encapsulated_proto != bpf_htons(ETH_P_8021Q))
> return BPF_DROP;
>
> - keys->nhoff += sizeof(*vlan);
> + keys->thoff += sizeof(*vlan);
> }
>
> vlan = bpf_flow_dissect_get_header(skb, sizeof(*vlan), &_vlan);
> if (!vlan)
> return BPF_DROP;
>
> - keys->nhoff += sizeof(*vlan);
> + keys->thoff += sizeof(*vlan);
> /* Only allow 8021AD + 8021Q double tagging and no triple tagging.*/
> if (vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021AD) ||
> vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021Q))
> --
> 2.20.0.rc1.387.gf8505762e3-goog
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH bpf-next v2 4/5] net/flow_dissector: correctly cap nhoff and thoff in case of BPF
2018-12-04 4:00 ` [PATCH bpf-next v2 4/5] net/flow_dissector: correctly cap nhoff and thoff in case of BPF Stanislav Fomichev
@ 2018-12-04 23:17 ` Song Liu
2018-12-04 23:30 ` Stanislav Fomichev
0 siblings, 1 reply; 14+ messages in thread
From: Song Liu @ 2018-12-04 23:17 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Networking, David S . Miller, Alexei Starovoitov, Daniel Borkmann,
simon.horman
On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
>
> We want to make sure that the following condition holds:
> 0 <= nhoff <= thoff <= skb->len
>
> A BPF program can set out-of-bounds nhoff and thoff, which is dangerous; see
> recent commit d0c081b49137 ("flow_dissector: properly cap thoff field").
>
> Signed-off-by: Stanislav Fomichev <sdf@google.com>
> ---
> net/core/flow_dissector.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> index ac19da6f390b..bb1a54747d64 100644
> --- a/net/core/flow_dissector.c
> +++ b/net/core/flow_dissector.c
> @@ -716,6 +716,10 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
> /* Restore state */
> memcpy(cb, &cb_saved, sizeof(cb_saved));
>
> + flow_keys->nhoff = clamp_t(u16, flow_keys->nhoff, 0, skb->len);
> + flow_keys->thoff = clamp_t(u16, flow_keys->thoff,
> + flow_keys->nhoff, skb->len);
> +
> return result == BPF_OK;
> }
>
> @@ -808,8 +812,6 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
> &flow_keys);
> __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
> target_container);
> - key_control->thoff = min_t(u16, key_control->thoff,
> - skb->len);
> rcu_read_unlock();
> return ret;
> }
> --
> 2.20.0.rc1.387.gf8505762e3-goog
>
Same question as 3/5:
Do we need this fix without this set? If yes, do we need it for bpf
tree as well?
Thanks,
Song
* Re: [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF flow dissector
2018-12-04 23:16 ` Song Liu
@ 2018-12-04 23:26 ` Stanislav Fomichev
2018-12-05 17:53 ` Song Liu
2018-12-06 2:47 ` Alexei Starovoitov
0 siblings, 2 replies; 14+ messages in thread
From: Stanislav Fomichev @ 2018-12-04 23:26 UTC (permalink / raw)
To: Song Liu
Cc: Stanislav Fomichev, Networking, David S . Miller,
Alexei Starovoitov, Daniel Borkmann, simon.horman
On 12/04, Song Liu wrote:
> On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > We are returning thoff from the flow dissector, not nhoff. Pass
> > thoff along with nhoff to the bpf program (initially thoff == nhoff)
> > and expect the flow dissector to amend/return thoff, not nhoff.
> >
> > This avoids confusion, when by the time bpf flow dissector exits,
> > nhoff == thoff, which doesn't make much sense (this is relevant
> > in the context of the next patch, where I add simple selftest
> > and manually construct expected flow_keys).
> >
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > ---
> > net/core/flow_dissector.c | 1 +
> > tools/testing/selftests/bpf/bpf_flow.c | 36 ++++++++++++--------------
> > 2 files changed, 18 insertions(+), 19 deletions(-)
> >
> > diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> > index 3c8a78decbc0..ac19da6f390b 100644
> > --- a/net/core/flow_dissector.c
> > +++ b/net/core/flow_dissector.c
> > @@ -708,6 +708,7 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
> > memset(flow_keys, 0, sizeof(*flow_keys));
> > cb->qdisc_cb.flow_keys = flow_keys;
> > flow_keys->nhoff = skb_network_offset(skb);
> > + flow_keys->thoff = flow_keys->nhoff;
>
> Do we need this fix without this set? If yes, do we need it for bpf
> tree as well?
No, I don't think so. This just changes the input to the flow dissector
slightly (going forward).
It used to be nhoff in, thoff out. Now it's thoff in (with nhoff kept for
backwards compatibility) and thoff out.
>
> Thanks,
> Song
>
> >
> > bpf_compute_data_pointers((struct sk_buff *)skb);
> > result = BPF_PROG_RUN(prog, skb);
> > diff --git a/tools/testing/selftests/bpf/bpf_flow.c b/tools/testing/selftests/bpf/bpf_flow.c
> > index b9798f558ca7..284660f5aa95 100644
> > --- a/tools/testing/selftests/bpf/bpf_flow.c
> > +++ b/tools/testing/selftests/bpf/bpf_flow.c
> > @@ -70,18 +70,18 @@ static __always_inline void *bpf_flow_dissect_get_header(struct __sk_buff *skb,
> > {
> > void *data_end = (void *)(long)skb->data_end;
> > void *data = (void *)(long)skb->data;
> > - __u16 nhoff = skb->flow_keys->nhoff;
> > + __u16 thoff = skb->flow_keys->thoff;
> > __u8 *hdr;
> >
> > /* Verifies this variable offset does not overflow */
> > - if (nhoff > (USHRT_MAX - hdr_size))
> > + if (thoff > (USHRT_MAX - hdr_size))
> > return NULL;
> >
> > - hdr = data + nhoff;
> > + hdr = data + thoff;
> > if (hdr + hdr_size <= data_end)
> > return hdr;
> >
> > - if (bpf_skb_load_bytes(skb, nhoff, buffer, hdr_size))
> > + if (bpf_skb_load_bytes(skb, thoff, buffer, hdr_size))
> > return NULL;
> >
> > return buffer;
> > @@ -158,13 +158,13 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > /* Only inspect standard GRE packets with version 0 */
> > return BPF_OK;
> >
> > - keys->nhoff += sizeof(*gre); /* Step over GRE Flags and Proto */
> > + keys->thoff += sizeof(*gre); /* Step over GRE Flags and Proto */
> > if (GRE_IS_CSUM(gre->flags))
> > - keys->nhoff += 4; /* Step over chksum and Padding */
> > + keys->thoff += 4; /* Step over chksum and Padding */
> > if (GRE_IS_KEY(gre->flags))
> > - keys->nhoff += 4; /* Step over key */
> > + keys->thoff += 4; /* Step over key */
> > if (GRE_IS_SEQ(gre->flags))
> > - keys->nhoff += 4; /* Step over sequence number */
> > + keys->thoff += 4; /* Step over sequence number */
> >
> > keys->is_encap = true;
> >
> > @@ -174,7 +174,7 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > if (!eth)
> > return BPF_DROP;
> >
> > - keys->nhoff += sizeof(*eth);
> > + keys->thoff += sizeof(*eth);
> >
> > return parse_eth_proto(skb, eth->h_proto);
> > } else {
> > @@ -191,7 +191,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > if ((__u8 *)tcp + (tcp->doff << 2) > data_end)
> > return BPF_DROP;
> >
> > - keys->thoff = keys->nhoff;
> > keys->sport = tcp->source;
> > keys->dport = tcp->dest;
> > return BPF_OK;
> > @@ -201,7 +200,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > if (!udp)
> > return BPF_DROP;
> >
> > - keys->thoff = keys->nhoff;
> > keys->sport = udp->source;
> > keys->dport = udp->dest;
> > return BPF_OK;
> > @@ -252,8 +250,8 @@ PROG(IP)(struct __sk_buff *skb)
> > keys->ipv4_src = iph->saddr;
> > keys->ipv4_dst = iph->daddr;
> >
> > - keys->nhoff += iph->ihl << 2;
> > - if (data + keys->nhoff > data_end)
> > + keys->thoff += iph->ihl << 2;
> > + if (data + keys->thoff > data_end)
> > return BPF_DROP;
> >
> > if (iph->frag_off & bpf_htons(IP_MF | IP_OFFSET)) {
> > @@ -285,7 +283,7 @@ PROG(IPV6)(struct __sk_buff *skb)
> > keys->addr_proto = ETH_P_IPV6;
> > memcpy(&keys->ipv6_src, &ip6h->saddr, 2*sizeof(ip6h->saddr));
> >
> > - keys->nhoff += sizeof(struct ipv6hdr);
> > + keys->thoff += sizeof(struct ipv6hdr);
> >
> > return parse_ipv6_proto(skb, ip6h->nexthdr);
> > }
> > @@ -301,7 +299,7 @@ PROG(IPV6OP)(struct __sk_buff *skb)
> > /* hlen is in 8-octets and does not include the first 8 bytes
> > * of the header
> > */
> > - skb->flow_keys->nhoff += (1 + ip6h->hdrlen) << 3;
> > + skb->flow_keys->thoff += (1 + ip6h->hdrlen) << 3;
> >
> > return parse_ipv6_proto(skb, ip6h->nexthdr);
> > }
> > @@ -315,7 +313,7 @@ PROG(IPV6FR)(struct __sk_buff *skb)
> > if (!fragh)
> > return BPF_DROP;
> >
> > - keys->nhoff += sizeof(*fragh);
> > + keys->thoff += sizeof(*fragh);
> > keys->is_frag = true;
> > if (!(fragh->frag_off & bpf_htons(IP6_OFFSET)))
> > keys->is_first_frag = true;
> > @@ -341,7 +339,7 @@ PROG(VLAN)(struct __sk_buff *skb)
> > __be16 proto;
> >
> > /* Peek back to see if single or double-tagging */
> > - if (bpf_skb_load_bytes(skb, keys->nhoff - sizeof(proto), &proto,
> > + if (bpf_skb_load_bytes(skb, keys->thoff - sizeof(proto), &proto,
> > sizeof(proto)))
> > return BPF_DROP;
> >
> > @@ -354,14 +352,14 @@ PROG(VLAN)(struct __sk_buff *skb)
> > if (vlan->h_vlan_encapsulated_proto != bpf_htons(ETH_P_8021Q))
> > return BPF_DROP;
> >
> > - keys->nhoff += sizeof(*vlan);
> > + keys->thoff += sizeof(*vlan);
> > }
> >
> > vlan = bpf_flow_dissect_get_header(skb, sizeof(*vlan), &_vlan);
> > if (!vlan)
> > return BPF_DROP;
> >
> > - keys->nhoff += sizeof(*vlan);
> > + keys->thoff += sizeof(*vlan);
> > /* Only allow 8021AD + 8021Q double tagging and no triple tagging.*/
> > if (vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021AD) ||
> > vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021Q))
> > --
> > 2.20.0.rc1.387.gf8505762e3-goog
> >
* Re: [PATCH bpf-next v2 4/5] net/flow_dissector: correctly cap nhoff and thoff in case of BPF
2018-12-04 23:17 ` Song Liu
@ 2018-12-04 23:30 ` Stanislav Fomichev
0 siblings, 0 replies; 14+ messages in thread
From: Stanislav Fomichev @ 2018-12-04 23:30 UTC (permalink / raw)
To: Song Liu
Cc: Stanislav Fomichev, Networking, David S . Miller,
Alexei Starovoitov, Daniel Borkmann, simon.horman
On 12/04, Song Liu wrote:
> On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > We want to make sure that the following condition holds:
> > 0 <= nhoff <= thoff <= skb->len
> >
> > A BPF program can set out-of-bounds nhoff and thoff, which is dangerous; see
> > recent commit d0c081b49137 ("flow_dissector: properly cap thoff field").
> >
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > ---
> > net/core/flow_dissector.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> > index ac19da6f390b..bb1a54747d64 100644
> > --- a/net/core/flow_dissector.c
> > +++ b/net/core/flow_dissector.c
> > @@ -716,6 +716,10 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
> > /* Restore state */
> > memcpy(cb, &cb_saved, sizeof(cb_saved));
> >
> > + flow_keys->nhoff = clamp_t(u16, flow_keys->nhoff, 0, skb->len);
> > + flow_keys->thoff = clamp_t(u16, flow_keys->thoff,
> > + flow_keys->nhoff, skb->len);
> > +
> > return result == BPF_OK;
> > }
> >
> > @@ -808,8 +812,6 @@ bool __skb_flow_dissect(const struct sk_buff *skb,
> > &flow_keys);
> > __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
> > target_container);
> > - key_control->thoff = min_t(u16, key_control->thoff,
> > - skb->len);
> > rcu_read_unlock();
> > return ret;
> > }
> > --
> > 2.20.0.rc1.387.gf8505762e3-goog
> >
>
> Same question as 3/5:
>
> Do we need this fix without this set? If yes, do we need it for bpf
> tree as well?
No, for the older versions we do this capping when copying to key_control:
key_control->thoff = min_t(u16, key_control->thoff,
skb->len);
I just moved this logic to flow_keys and made it more approachable (using
actual clamping with min/max boundaries instead of a cryptic min).
I think my commit message might be confusing. There is no real issue
here; it's done mostly for testing (so we can see the result of the clamping).
>
> Thanks,
> Song
* Re: [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF flow dissector
2018-12-04 23:26 ` Stanislav Fomichev
@ 2018-12-05 17:53 ` Song Liu
2018-12-06 2:47 ` Alexei Starovoitov
1 sibling, 0 replies; 14+ messages in thread
From: Song Liu @ 2018-12-05 17:53 UTC (permalink / raw)
To: sdf
Cc: Stanislav Fomichev, Networking, David S . Miller,
Alexei Starovoitov, Daniel Borkmann, simon.horman
On Tue, Dec 4, 2018 at 3:26 PM Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> On 12/04, Song Liu wrote:
> > On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
> > >
> > > We are returning thoff from the flow dissector, not nhoff. Pass
> > > thoff along with nhoff to the bpf program (initially thoff == nhoff)
> > > and expect the flow dissector to amend/return thoff, not nhoff.
> > >
> > > This avoids confusion, when by the time bpf flow dissector exits,
> > > nhoff == thoff, which doesn't make much sense (this is relevant
> > > in the context of the next patch, where I add simple selftest
> > > and manually construct expected flow_keys).
> > >
> > > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > > ---
> > > net/core/flow_dissector.c | 1 +
> > > tools/testing/selftests/bpf/bpf_flow.c | 36 ++++++++++++--------------
> > > 2 files changed, 18 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> > > index 3c8a78decbc0..ac19da6f390b 100644
> > > --- a/net/core/flow_dissector.c
> > > +++ b/net/core/flow_dissector.c
> > > @@ -708,6 +708,7 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
> > > memset(flow_keys, 0, sizeof(*flow_keys));
> > > cb->qdisc_cb.flow_keys = flow_keys;
> > > flow_keys->nhoff = skb_network_offset(skb);
> > > + flow_keys->thoff = flow_keys->nhoff;
> >
> > Do we need this fix without this set? If yes, do we need it for bpf
> > tree as well?
> No, I don't think so. This just changes the input to the flow dissector
> slightly (going forward).
> It used to be nhoff in, thoff out. Now it's thoff in (with nhoff kept for
> backwards compatibility) and thoff out.
Thanks for the explanation.
Acked-by: Song Liu <songliubraving@fb.com>
>
> >
> > Thanks,
> > Song
> >
> > >
> > > bpf_compute_data_pointers((struct sk_buff *)skb);
> > > result = BPF_PROG_RUN(prog, skb);
> > > diff --git a/tools/testing/selftests/bpf/bpf_flow.c b/tools/testing/selftests/bpf/bpf_flow.c
> > > index b9798f558ca7..284660f5aa95 100644
> > > --- a/tools/testing/selftests/bpf/bpf_flow.c
> > > +++ b/tools/testing/selftests/bpf/bpf_flow.c
> > > @@ -70,18 +70,18 @@ static __always_inline void *bpf_flow_dissect_get_header(struct __sk_buff *skb,
> > > {
> > > void *data_end = (void *)(long)skb->data_end;
> > > void *data = (void *)(long)skb->data;
> > > - __u16 nhoff = skb->flow_keys->nhoff;
> > > + __u16 thoff = skb->flow_keys->thoff;
> > > __u8 *hdr;
> > >
> > > /* Verifies this variable offset does not overflow */
> > > - if (nhoff > (USHRT_MAX - hdr_size))
> > > + if (thoff > (USHRT_MAX - hdr_size))
> > > return NULL;
> > >
> > > - hdr = data + nhoff;
> > > + hdr = data + thoff;
> > > if (hdr + hdr_size <= data_end)
> > > return hdr;
> > >
> > > - if (bpf_skb_load_bytes(skb, nhoff, buffer, hdr_size))
> > > + if (bpf_skb_load_bytes(skb, thoff, buffer, hdr_size))
> > > return NULL;
> > >
> > > return buffer;
> > > @@ -158,13 +158,13 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > > /* Only inspect standard GRE packets with version 0 */
> > > return BPF_OK;
> > >
> > > - keys->nhoff += sizeof(*gre); /* Step over GRE Flags and Proto */
> > > + keys->thoff += sizeof(*gre); /* Step over GRE Flags and Proto */
> > > if (GRE_IS_CSUM(gre->flags))
> > > - keys->nhoff += 4; /* Step over chksum and Padding */
> > > + keys->thoff += 4; /* Step over chksum and Padding */
> > > if (GRE_IS_KEY(gre->flags))
> > > - keys->nhoff += 4; /* Step over key */
> > > + keys->thoff += 4; /* Step over key */
> > > if (GRE_IS_SEQ(gre->flags))
> > > - keys->nhoff += 4; /* Step over sequence number */
> > > + keys->thoff += 4; /* Step over sequence number */
> > >
> > > keys->is_encap = true;
> > >
> > > @@ -174,7 +174,7 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > > if (!eth)
> > > return BPF_DROP;
> > >
> > > - keys->nhoff += sizeof(*eth);
> > > + keys->thoff += sizeof(*eth);
> > >
> > > return parse_eth_proto(skb, eth->h_proto);
> > > } else {
> > > @@ -191,7 +191,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > > if ((__u8 *)tcp + (tcp->doff << 2) > data_end)
> > > return BPF_DROP;
> > >
> > > - keys->thoff = keys->nhoff;
> > > keys->sport = tcp->source;
> > > keys->dport = tcp->dest;
> > > return BPF_OK;
> > > @@ -201,7 +200,6 @@ static __always_inline int parse_ip_proto(struct __sk_buff *skb, __u8 proto)
> > > if (!udp)
> > > return BPF_DROP;
> > >
> > > - keys->thoff = keys->nhoff;
> > > keys->sport = udp->source;
> > > keys->dport = udp->dest;
> > > return BPF_OK;
> > > @@ -252,8 +250,8 @@ PROG(IP)(struct __sk_buff *skb)
> > > keys->ipv4_src = iph->saddr;
> > > keys->ipv4_dst = iph->daddr;
> > >
> > > - keys->nhoff += iph->ihl << 2;
> > > - if (data + keys->nhoff > data_end)
> > > + keys->thoff += iph->ihl << 2;
> > > + if (data + keys->thoff > data_end)
> > > return BPF_DROP;
> > >
> > > if (iph->frag_off & bpf_htons(IP_MF | IP_OFFSET)) {
> > > @@ -285,7 +283,7 @@ PROG(IPV6)(struct __sk_buff *skb)
> > > keys->addr_proto = ETH_P_IPV6;
> > > memcpy(&keys->ipv6_src, &ip6h->saddr, 2*sizeof(ip6h->saddr));
> > >
> > > - keys->nhoff += sizeof(struct ipv6hdr);
> > > + keys->thoff += sizeof(struct ipv6hdr);
> > >
> > > return parse_ipv6_proto(skb, ip6h->nexthdr);
> > > }
> > > @@ -301,7 +299,7 @@ PROG(IPV6OP)(struct __sk_buff *skb)
> > > /* hlen is in 8-octets and does not include the first 8 bytes
> > > * of the header
> > > */
> > > - skb->flow_keys->nhoff += (1 + ip6h->hdrlen) << 3;
> > > + skb->flow_keys->thoff += (1 + ip6h->hdrlen) << 3;
> > >
> > > return parse_ipv6_proto(skb, ip6h->nexthdr);
> > > }
> > > @@ -315,7 +313,7 @@ PROG(IPV6FR)(struct __sk_buff *skb)
> > > if (!fragh)
> > > return BPF_DROP;
> > >
> > > - keys->nhoff += sizeof(*fragh);
> > > + keys->thoff += sizeof(*fragh);
> > > keys->is_frag = true;
> > > if (!(fragh->frag_off & bpf_htons(IP6_OFFSET)))
> > > keys->is_first_frag = true;
> > > @@ -341,7 +339,7 @@ PROG(VLAN)(struct __sk_buff *skb)
> > > __be16 proto;
> > >
> > > /* Peek back to see if single or double-tagging */
> > > - if (bpf_skb_load_bytes(skb, keys->nhoff - sizeof(proto), &proto,
> > > + if (bpf_skb_load_bytes(skb, keys->thoff - sizeof(proto), &proto,
> > > sizeof(proto)))
> > > return BPF_DROP;
> > >
> > > @@ -354,14 +352,14 @@ PROG(VLAN)(struct __sk_buff *skb)
> > > if (vlan->h_vlan_encapsulated_proto != bpf_htons(ETH_P_8021Q))
> > > return BPF_DROP;
> > >
> > > - keys->nhoff += sizeof(*vlan);
> > > + keys->thoff += sizeof(*vlan);
> > > }
> > >
> > > vlan = bpf_flow_dissect_get_header(skb, sizeof(*vlan), &_vlan);
> > > if (!vlan)
> > > return BPF_DROP;
> > >
> > > - keys->nhoff += sizeof(*vlan);
> > > + keys->thoff += sizeof(*vlan);
> > > /* Only allow 8021AD + 8021Q double tagging and no triple tagging.*/
> > > if (vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021AD) ||
> > > vlan->h_vlan_encapsulated_proto == bpf_htons(ETH_P_8021Q))
> > > --
> > > 2.20.0.rc1.387.gf8505762e3-goog
> > >
* Re: [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN
2018-12-04 4:00 [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Stanislav Fomichev
` (4 preceding siblings ...)
2018-12-04 4:01 ` [PATCH bpf-next v2 5/5] selftests/bpf: add simple BPF_PROG_TEST_RUN examples for flow dissector Stanislav Fomichev
@ 2018-12-05 17:55 ` Song Liu
2018-12-05 17:57 ` Stanislav Fomichev
5 siblings, 1 reply; 14+ messages in thread
From: Song Liu @ 2018-12-05 17:55 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Networking, David S . Miller, Alexei Starovoitov, Daniel Borkmann,
simon.horman
On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
>
> This patch series adds support for testing flow dissector BPF programs by
> extending already existing BPF_PROG_TEST_RUN. The goal is to have a
> packet as an input and `struct bpf_flow_key' as an output. That way
> we can easily test flow dissector programs' behavior.
> I've also modified existing test_progs.c test to do a simple flow
> dissector run as well.
>
> * first patch introduces new __skb_flow_bpf_dissect to simplify
> sharing between __skb_flow_bpf_dissect and BPF_PROG_TEST_RUN
> * second patch adds actual BPF_PROG_TEST_RUN support
> * third patch converts BPF flow dissector to thoff
> * fourth patch correctly caps nhoff and thoff returned from bpf flow
> dissector
> * fifth patch adds example usage to the selftests
>
> v2 changes:
>
> * new patch to use thoff instead of nhoff in bpf flow dissector
> * new patch to correctly cap thoff for BPF case
> * add missing memset(flow_keys, 0, ...) to __skb_flow_bpf_dissect
> * set test iterations to 10
>
> Stanislav Fomichev (5):
> net/flow_dissector: move bpf case into __skb_flow_bpf_dissect
> bpf: add BPF_PROG_TEST_RUN support for flow dissector
> selftests/bpf: use thoff instead of nhoff in BPF flow dissector
> net/flow_dissector: correctly cap nhoff and thoff in case of BPF
> selftests/bpf: add simple BPF_PROG_TEST_RUN examples for flow
> dissector
>
> include/linux/bpf.h | 3 +
> include/linux/skbuff.h | 5 ++
> net/bpf/test_run.c | 71 ++++++++++++++-
> net/core/filter.c | 1 +
> net/core/flow_dissector.c | 88 ++++++++++++-------
> tools/testing/selftests/bpf/Makefile | 3 +
> tools/testing/selftests/bpf/bpf_flow.c | 36 ++++----
> .../selftests/bpf/flow_dissector_load.c | 43 ++-------
> .../selftests/bpf/flow_dissector_load.h | 55 ++++++++++++
> tools/testing/selftests/bpf/test_progs.c | 78 +++++++++++++++-
> 10 files changed, 289 insertions(+), 94 deletions(-)
> create mode 100644 tools/testing/selftests/bpf/flow_dissector_load.h
>
> --
> 2.20.0.rc1.387.gf8505762e3-goog
>
For the series:
Acked-by: Song Liu <songliubraving@fb.com>
* Re: [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN
2018-12-05 17:55 ` [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Song Liu
@ 2018-12-05 17:57 ` Stanislav Fomichev
0 siblings, 0 replies; 14+ messages in thread
From: Stanislav Fomichev @ 2018-12-05 17:57 UTC (permalink / raw)
To: Song Liu
Cc: Stanislav Fomichev, Networking, David S . Miller,
Alexei Starovoitov, Daniel Borkmann, simon.horman
On 12/05, Song Liu wrote:
> On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
> >
> > This patch series adds support for testing flow dissector BPF programs by
> > extending already existing BPF_PROG_TEST_RUN. The goal is to have a
> > packet as an input and `struct bpf_flow_key' as an output. That way
> > we can easily test flow dissector programs' behavior.
> > I've also modified existing test_progs.c test to do a simple flow
> > dissector run as well.
> >
> > * first patch introduces new __skb_flow_bpf_dissect to simplify
> > sharing between __skb_flow_bpf_dissect and BPF_PROG_TEST_RUN
> > * second patch adds actual BPF_PROG_TEST_RUN support
> > * third patch converts BPF flow dissector to thoff
> > * fourth patch correctly caps nhoff and thoff returned from bpf flow
> > dissector
> > * fifth patch adds example usage to the selftests
> >
> > v2 changes:
> >
> > * new patch to use thoff instead of nhoff in bpf flow dissector
> > * new patch to correctly cap thoff for BPF case
> > * add missing memset(flow_keys, 0, ...) to __skb_flow_bpf_dissect
> > * set test iterations to 10
> >
> > Stanislav Fomichev (5):
> > net/flow_dissector: move bpf case into __skb_flow_bpf_dissect
> > bpf: add BPF_PROG_TEST_RUN support for flow dissector
> > selftests/bpf: use thoff instead of nhoff in BPF flow dissector
> > net/flow_dissector: correctly cap nhoff and thoff in case of BPF
> > selftests/bpf: add simple BPF_PROG_TEST_RUN examples for flow
> > dissector
> >
> > include/linux/bpf.h | 3 +
> > include/linux/skbuff.h | 5 ++
> > net/bpf/test_run.c | 71 ++++++++++++++-
> > net/core/filter.c | 1 +
> > net/core/flow_dissector.c | 88 ++++++++++++-------
> > tools/testing/selftests/bpf/Makefile | 3 +
> > tools/testing/selftests/bpf/bpf_flow.c | 36 ++++----
> > .../selftests/bpf/flow_dissector_load.c | 43 ++-------
> > .../selftests/bpf/flow_dissector_load.h | 55 ++++++++++++
> > tools/testing/selftests/bpf/test_progs.c | 78 +++++++++++++++-
> > 10 files changed, 289 insertions(+), 94 deletions(-)
> > create mode 100644 tools/testing/selftests/bpf/flow_dissector_load.h
> >
> > --
> > 2.20.0.rc1.387.gf8505762e3-goog
> >
>
> For the series:
>
> Acked-by: Song Liu <songliubraving@fb.com>
Thank you for the review!
* Re: [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF flow dissector
2018-12-04 23:26 ` Stanislav Fomichev
2018-12-05 17:53 ` Song Liu
@ 2018-12-06 2:47 ` Alexei Starovoitov
1 sibling, 0 replies; 14+ messages in thread
From: Alexei Starovoitov @ 2018-12-06 2:47 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: Song Liu, Stanislav Fomichev, Networking, David S . Miller,
Alexei Starovoitov, Daniel Borkmann, simon.horman
On Tue, Dec 04, 2018 at 03:26:15PM -0800, Stanislav Fomichev wrote:
> On 12/04, Song Liu wrote:
> > On Mon, Dec 3, 2018 at 8:01 PM Stanislav Fomichev <sdf@google.com> wrote:
> > >
> > > We are returning thoff from the flow dissector, not nhoff. Pass
> > > thoff along with nhoff to the bpf program (initially thoff == nhoff)
> > > and expect the flow dissector to amend/return thoff, not nhoff.
> > >
> > > This avoids confusion, when by the time bpf flow dissector exits,
> > > nhoff == thoff, which doesn't make much sense (this is relevant
> > > in the context of the next patch, where I add simple selftest
> > > and manually construct expected flow_keys).
> > >
> > > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > > ---
> > > net/core/flow_dissector.c | 1 +
> > > tools/testing/selftests/bpf/bpf_flow.c | 36 ++++++++++++--------------
> > > 2 files changed, 18 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
> > > index 3c8a78decbc0..ac19da6f390b 100644
> > > --- a/net/core/flow_dissector.c
> > > +++ b/net/core/flow_dissector.c
> > > @@ -708,6 +708,7 @@ bool __skb_flow_bpf_dissect(struct bpf_prog *prog,
> > > memset(flow_keys, 0, sizeof(*flow_keys));
> > > cb->qdisc_cb.flow_keys = flow_keys;
> > > flow_keys->nhoff = skb_network_offset(skb);
> > > + flow_keys->thoff = flow_keys->nhoff;
> >
> > Do we need this fix without this set? If yes, do we need it for bpf
> > tree as well?
> No, I don't think so. This just changes the input to the flow dissector
> slightly (going forward).
> It used to be nhoff in, thoff out. Now it's thoff in (with nhoff kept for
> backwards compatibility) and thoff out.
That is still an api change.
Also patch 4 is a fix.
I think patches 3 and 4 need to go into bpf tree first.
Then wait for them to merge into bpf-next and resubmit the rest.
Thread overview: 14+ messages
2018-12-04 4:00 [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 1/5] net/flow_dissector: move bpf case into __skb_flow_bpf_dissect Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 2/5] bpf: add BPF_PROG_TEST_RUN support for flow dissector Stanislav Fomichev
2018-12-04 4:00 ` [PATCH bpf-next v2 3/5] selftests/bpf: use thoff instead of nhoff in BPF " Stanislav Fomichev
2018-12-04 23:16 ` Song Liu
2018-12-04 23:26 ` Stanislav Fomichev
2018-12-05 17:53 ` Song Liu
2018-12-06 2:47 ` Alexei Starovoitov
2018-12-04 4:00 ` [PATCH bpf-next v2 4/5] net/flow_dissector: correctly cap nhoff and thoff in case of BPF Stanislav Fomichev
2018-12-04 23:17 ` Song Liu
2018-12-04 23:30 ` Stanislav Fomichev
2018-12-04 4:01 ` [PATCH bpf-next v2 5/5] selftests/bpf: add simple BPF_PROG_TEST_RUN examples for flow dissector Stanislav Fomichev
2018-12-05 17:55 ` [PATCH bpf-next v2 0/5] support flow dissector in BPF_PROG_TEST_RUN Song Liu
2018-12-05 17:57 ` Stanislav Fomichev