[PATCH bpf] bpf: fix mlock precharge on arraymaps

From: Daniel Borkmann @ 2018-02-16  0:10 UTC
  To: ast; +Cc: netdev, Daniel Borkmann, Dennis Zhou

syzkaller recently triggered an OOM during percpu map allocation;
while there is work in progress by Dennis Zhou to add __GFP_NORETRY
semantics for the percpu allocator under pressure, there is also a
missing bpf_map_precharge_memlock() check in array map allocation.

Given that today the actual bpf_map_charge_memlock() happens only
after find_and_alloc_map() in the syscall path,
bpf_map_precharge_memlock() is there to bail out early, before we do
the map setup work, when we find that we would hit the limits anyway.
Therefore, add this check to array map allocation as well.
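
To illustrate the ordering, the creation path looks roughly as
follows; this is a simplified sketch loosely modeled on map_create()
in kernel/bpf/syscall.c, not verbatim kernel source:

/*
 * Simplified sketch of the map creation path. Without a precharge
 * inside ->map_alloc(), all of the map setup work would run before
 * any rlimit check.
 */
static struct bpf_map *map_create_sketch(union bpf_attr *attr)
{
        struct bpf_map *map;
        int err;

        /* Dispatches to e.g. array_map_alloc(); the precharge added
         * by this patch lets that callback bail out early.
         */
        map = find_and_alloc_map(attr);
        if (IS_ERR(map))
                return map;

        /* The actual memlock charge happens only here, after all of
         * the allocation work above is already done.
         */
        err = bpf_map_charge_memlock(map);
        if (err)
                goto free_map;

        return map;

free_map:
        map->ops->map_free(map);
        return ERR_PTR(err);
}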

Fixes: 6c9059817432 ("bpf: pre-allocate hash map elements")
Fixes: a10423b87a7e ("bpf: introduce BPF_MAP_TYPE_PERCPU_ARRAY map")
Reported-by: syzbot+adb03f3f0bb57ce3acda@syzkaller.appspotmail.com
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dennis Zhou <dennisszhou@gmail.com>
---
 kernel/bpf/arraymap.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)
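
As a note for context (not part of the commit): the precharge cost
computed in the first hunk below can be sanity-checked with a small
userspace sketch. The map attributes, page size, and the stand-in for
sizeof(struct bpf_array) are hypothetical, chosen only to illustrate
the arithmetic:

#include <stdint.h>
#include <stdio.h>

#define SK_PAGE_SIZE  4096ULL  /* assumed page size, for illustration */
#define SK_PAGE_SHIFT 12

/* Mirrors the kernel's round_up() for power-of-two alignments. */
static uint64_t round_up_u64(uint64_t x, uint64_t align)
{
        return (x + align - 1) & ~(align - 1);
}

int main(void)
{
        /* Hypothetical map attributes, not taken from the patch. */
        uint64_t max_entries = 1024, value_size = 100, ncpus = 4;
        uint64_t struct_size = 256;  /* stand-in for sizeof(struct bpf_array) */

        uint64_t elem_size  = round_up_u64(value_size, 8);  /* 104 */
        uint64_t array_size = struct_size + max_entries * elem_size;
        uint64_t cost       = array_size;

        /* Percpu arrays also charge one element per possible CPU. */
        cost += max_entries * elem_size * ncpus;
        cost  = round_up_u64(cost, SK_PAGE_SIZE) >> SK_PAGE_SHIFT;

        printf("pages precharged: %llu\n", (unsigned long long)cost);
        return 0;
}

With these hypothetical numbers the sketch prints 131 pages; the same
cost value is what array->map.pages is set to later in the patch, so
the early check and the final accounting stay in sync.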

diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index b1f6648..a364c40 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -73,11 +73,11 @@ static int array_map_alloc_check(union bpf_attr *attr)
 static struct bpf_map *array_map_alloc(union bpf_attr *attr)
 {
 	bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
-	int numa_node = bpf_map_attr_numa_node(attr);
+	int ret, numa_node = bpf_map_attr_numa_node(attr);
 	u32 elem_size, index_mask, max_entries;
 	bool unpriv = !capable(CAP_SYS_ADMIN);
+	u64 cost, array_size, mask64;
 	struct bpf_array *array;
-	u64 array_size, mask64;
 
 	elem_size = round_up(attr->value_size, 8);
 
@@ -109,8 +109,19 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
 		array_size += (u64) max_entries * elem_size;
 
 	/* make sure there is no u32 overflow later in round_up() */
-	if (array_size >= U32_MAX - PAGE_SIZE)
+	cost = array_size;
+	if (cost >= U32_MAX - PAGE_SIZE)
 		return ERR_PTR(-ENOMEM);
+	if (percpu) {
+		cost += (u64)attr->max_entries * elem_size * num_possible_cpus();
+		if (cost >= U32_MAX - PAGE_SIZE)
+			return ERR_PTR(-ENOMEM);
+	}
+	cost = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT;
+
+	ret = bpf_map_precharge_memlock(cost);
+	if (ret < 0)
+		return ERR_PTR(ret);
 
 	/* allocate all map elements and zero-initialize them */
 	array = bpf_map_area_alloc(array_size, numa_node);
@@ -121,20 +132,13 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
 
 	/* copy mandatory map attributes */
 	bpf_map_init_from_attr(&array->map, attr);
+	array->map.pages = cost;
 	array->elem_size = elem_size;
 
-	if (!percpu)
-		goto out;
-
-	array_size += (u64) attr->max_entries * elem_size * num_possible_cpus();
-
-	if (array_size >= U32_MAX - PAGE_SIZE ||
-	    bpf_array_alloc_percpu(array)) {
+	if (percpu && bpf_array_alloc_percpu(array)) {
 		bpf_map_area_free(array);
 		return ERR_PTR(-ENOMEM);
 	}
-out:
-	array->map.pages = round_up(array_size, PAGE_SIZE) >> PAGE_SHIFT;
 
 	return &array->map;
 }
-- 
2.9.5


Re: [PATCH bpf] bpf: fix mlock precharge on arraymaps

From: Alexei Starovoitov @ 2018-02-16  5:37 UTC
  To: Daniel Borkmann; +Cc: netdev, Dennis Zhou

On 2/15/18 4:10 PM, Daniel Borkmann wrote:
> syzkaller recently triggered an OOM during percpu map allocation;
> while there is work in progress by Dennis Zhou to add __GFP_NORETRY
> semantics for the percpu allocator under pressure, there is also a
> missing bpf_map_precharge_memlock() check in array map allocation.
>
> Given that today the actual bpf_map_charge_memlock() happens only
> after find_and_alloc_map() in the syscall path,
> bpf_map_precharge_memlock() is there to bail out early, before we do
> the map setup work, when we find that we would hit the limits anyway.
> Therefore, add this check to array map allocation as well.
>
> Fixes: 6c9059817432 ("bpf: pre-allocate hash map elements")
> Fixes: a10423b87a7e ("bpf: introduce BPF_MAP_TYPE_PERCPU_ARRAY map")
> Reported-by: syzbot+adb03f3f0bb57ce3acda@syzkaller.appspotmail.com
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> Cc: Dennis Zhou <dennisszhou@gmail.com>

Applied to bpf tree, Thanks Daniel.

