From: Yafang Shao <laoar.shao@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
haoluo@google.com, jolsa@kernel.org, horenc@vt.edu,
xiyou.wangcong@gmail.com, houtao1@huawei.com
Cc: bpf@vger.kernel.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH bpf-next v4 10/18] bpf: devmap memory usage
Date: Sun, 5 Mar 2023 12:46:07 +0000 [thread overview]
Message-ID: <20230305124615.12358-11-laoar.shao@gmail.com> (raw)
In-Reply-To: <20230305124615.12358-1-laoar.shao@gmail.com>
A new helper is introduced to calculate the memory usage of devmap and
devmap_hash. The number of dynamically allocated elements is already
recorded for devmap_hash, but not for devmap. To track the memory size
of dynamically allocated elements, this patch also counts them for
devmap. Note that an update which replaces an existing element leaves
the count unchanged; the counter is incremented only when an empty slot
is filled, and decremented when an element is deleted or its netdev is
unregistered.
The result is as follows:
- before
40: devmap name count_map flags 0x80
key 4B value 4B max_entries 65536 memlock 524288B
41: devmap_hash name count_map flags 0x80
key 4B value 4B max_entries 65536 memlock 524288B
- after
40: devmap name count_map flags 0x80 <<<< no elements
key 4B value 4B max_entries 65536 memlock 524608B
41: devmap_hash name count_map flags 0x80 <<<< no elements
key 4B value 4B max_entries 65536 memlock 524608B
Note that the number of buckets is the same as max_entries for
devmap_hash in this case.
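To see where the numbers above come from (sizes inferred from this
build, assuming 8-byte pointers and sizeof(struct bpf_dtab) == 320):

  before: 65536 * 8       = 524288
  after:  65536 * 8 + 320 = 524608

i.e. the new value additionally accounts for struct bpf_dtab itself.
With no elements allocated, the per-element term
(items * sizeof(struct bpf_dtab_netdev)) contributes nothing.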
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
kernel/bpf/devmap.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 2675fef..19b036a 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -819,8 +819,10 @@ static int dev_map_delete_elem(struct bpf_map *map, void *key)
return -EINVAL;
old_dev = unrcu_pointer(xchg(&dtab->netdev_map[k], NULL));
- if (old_dev)
+ if (old_dev) {
call_rcu(&old_dev->rcu, __dev_map_entry_free);
+ atomic_dec((atomic_t *)&dtab->items);
+ }
return 0;
}
@@ -931,6 +933,8 @@ static int __dev_map_update_elem(struct net *net, struct bpf_map *map,
old_dev = unrcu_pointer(xchg(&dtab->netdev_map[i], RCU_INITIALIZER(dev)));
if (old_dev)
call_rcu(&old_dev->rcu, __dev_map_entry_free);
+ else
+ atomic_inc((atomic_t *)&dtab->items);
return 0;
}
@@ -1016,6 +1020,20 @@ static int dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
__dev_map_hash_lookup_elem);
}
+static u64 dev_map_mem_usage(const struct bpf_map *map)
+{
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ u64 usage = sizeof(struct bpf_dtab);
+
+ if (map->map_type == BPF_MAP_TYPE_DEVMAP_HASH)
+ usage += (u64)dtab->n_buckets * sizeof(struct hlist_head);
+ else
+ usage += (u64)map->max_entries * sizeof(struct bpf_dtab_netdev *);
+ usage += atomic_read((atomic_t *)&dtab->items) *
+ (u64)sizeof(struct bpf_dtab_netdev);
+ return usage;
+}
+
BTF_ID_LIST_SINGLE(dev_map_btf_ids, struct, bpf_dtab)
const struct bpf_map_ops dev_map_ops = {
.map_meta_equal = bpf_map_meta_equal,
@@ -1026,6 +1044,7 @@ static int dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
.map_update_elem = dev_map_update_elem,
.map_delete_elem = dev_map_delete_elem,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = dev_map_mem_usage,
.map_btf_id = &dev_map_btf_ids[0],
.map_redirect = dev_map_redirect,
};
@@ -1039,6 +1058,7 @@ static int dev_hash_map_redirect(struct bpf_map *map, u64 ifindex, u64 flags)
.map_update_elem = dev_map_hash_update_elem,
.map_delete_elem = dev_map_hash_delete_elem,
.map_check_btf = map_check_no_btf,
+ .map_mem_usage = dev_map_mem_usage,
.map_btf_id = &dev_map_btf_ids[0],
.map_redirect = dev_hash_map_redirect,
};
@@ -1109,9 +1129,11 @@ static int dev_map_notification(struct notifier_block *notifier,
if (!dev || netdev != dev->dev)
continue;
odev = unrcu_pointer(cmpxchg(&dtab->netdev_map[i], RCU_INITIALIZER(dev), NULL));
- if (dev == odev)
+ if (dev == odev) {
call_rcu(&dev->rcu,
__dev_map_entry_free);
+ atomic_dec((atomic_t *)&dtab->items);
+ }
}
}
rcu_read_unlock();
--
1.8.3.1
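A minimal userspace sketch (not part of the patch, and assuming a valid
BPF map file descriptor) of how the reported memlock value can be read
back: the kernel exposes it through the map fd's fdinfo, which is also
where bpftool gets it. Error handling is kept minimal.

#include <stdio.h>

/* Scan /proc/self/fdinfo/<map_fd> for the "memlock:" line and
 * return its value, or -1 on failure.
 */
static long long map_memlock(int map_fd)
{
	char path[64];
	char line[256];
	long long memlock = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", map_fd);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "memlock: %lld", &memlock) == 1)
			break;
	}
	fclose(f);
	return memlock;
}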
Thread overview: 20+ messages
2023-03-05 12:45 [PATCH bpf-next v4 00/18] bpf: bpf memory usage Yafang Shao
2023-03-05 12:45 ` [PATCH bpf-next v4 01/18] bpf: add new map ops ->map_mem_usage Yafang Shao
2023-03-05 12:45 ` [PATCH bpf-next v4 02/18] bpf: lpm_trie memory usage Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 03/18] bpf: hashtab " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 04/18] bpf: arraymap " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 05/18] bpf: stackmap " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 06/18] bpf: reuseport_array " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 07/18] bpf: ringbuf " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 08/18] bpf: bloom_filter " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 09/18] bpf: cpumap " Yafang Shao
2023-03-05 12:46 ` Yafang Shao [this message]
2023-03-05 12:46 ` [PATCH bpf-next v4 11/18] bpf: queue_stack_maps " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 12/18] bpf: bpf_struct_ops " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 13/18] bpf: local_storage " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 14/18] bpf, net: bpf_local_storage " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 15/18] bpf, net: sock_map " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 16/18] bpf, net: xskmap " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 17/18] bpf: offload map " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 18/18] bpf: enforce all maps having memory usage callback Yafang Shao
2023-03-07 17:40 ` [PATCH bpf-next v4 00/18] bpf: bpf memory usage patchwork-bot+netdevbpf