From: Yafang Shao <laoar.shao@gmail.com>
To: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	kafai@fb.com, songliubraving@fb.com, yhs@fb.com,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
	haoluo@google.com, jolsa@kernel.org, horenc@vt.edu,
	xiyou.wangcong@gmail.com, houtao1@huawei.com
Cc: bpf@vger.kernel.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH bpf-next v4 16/18] bpf, net: xskmap memory usage
Date: Sun,  5 Mar 2023 12:46:13 +0000
Message-ID: <20230305124615.12358-17-laoar.shao@gmail.com>
In-Reply-To: <20230305124615.12358-1-laoar.shao@gmail.com>

A new helper is introduced to calculate xskmap memory usage.

The xskmap memory usage can change dynamically as we add or remove an
xsk_map_node. Hence we need to track the number of xsk_map_nodes to
compute its memory usage.
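
Concretely, for the 65536-entry map in the example below, assuming
8-byte entry pointers and a struct xsk_map header of 320 bytes
(inferred from the output below; the exact header size depends on the
kernel config), the fixed part works out as:

  struct_size(m, xsk_map, 65536) = 320 + 65536 * 8 = 524608 bytes

with an extra sizeof(struct xsk_map_node) on top for each element
currently present in the map.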

The result is as follows:
- before
10: xskmap  name count_map  flags 0x0
        key 4B  value 4B  max_entries 65536  memlock 524288B

- after
10: xskmap  name count_map  flags 0x0 <<< no elements case
        key 4B  value 4B  max_entries 65536  memlock 524608B
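
For a quick sanity check from userspace, a sketch like the following
(hypothetical, not part of this patch; assumes libbpf v0.7+ for
bpf_map_create() and enough privileges to create the map) creates an
empty xskmap and prints the memlock line derived from the new
callback:

#include <stdio.h>
#include <string.h>
#include <bpf/bpf.h>

int main(void)
{
	/* 65536-entry XSKMAP matching the example above (4B key/value) */
	int fd = bpf_map_create(BPF_MAP_TYPE_XSKMAP, "count_map",
				sizeof(int), sizeof(int), 65536, NULL);
	char path[64], line[256];
	FILE *f;

	if (fd < 0) {
		perror("bpf_map_create");
		return 1;
	}
	/* The map's fdinfo exposes the value computed by ->map_mem_usage */
	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);
	f = fopen(path, "r");
	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "memlock:", 8))
			fputs(line, stdout); /* expect 524608 with no elements */
	fclose(f);
	return 0;
}

The same number is what bpftool map show reports as memlock in the
output above.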

Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/net/xdp_sock.h |  1 +
 net/xdp/xskmap.c       | 13 +++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index 3057e1a..e96a115 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -38,6 +38,7 @@ struct xdp_umem {
 struct xsk_map {
 	struct bpf_map map;
 	spinlock_t lock; /* Synchronize map updates */
+	atomic_t count;
 	struct xdp_sock __rcu *xsk_map[];
 };
 
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 771d0fa..0c38d71 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -24,6 +24,7 @@ static struct xsk_map_node *xsk_map_node_alloc(struct xsk_map *map,
 		return ERR_PTR(-ENOMEM);
 
 	bpf_map_inc(&map->map);
+	atomic_inc(&map->count);
 
 	node->map = map;
 	node->map_entry = map_entry;
@@ -32,8 +33,11 @@ static struct xsk_map_node *xsk_map_node_alloc(struct xsk_map *map,
 
 static void xsk_map_node_free(struct xsk_map_node *node)
 {
+	struct xsk_map *map = node->map;
+
 	bpf_map_put(&node->map->map);
 	kfree(node);
+	atomic_dec(&map->count);
 }
 
 static void xsk_map_sock_add(struct xdp_sock *xs, struct xsk_map_node *node)
@@ -85,6 +89,14 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
 	return &m->map;
 }
 
+static u64 xsk_map_mem_usage(const struct bpf_map *map)
+{
+	struct xsk_map *m = container_of(map, struct xsk_map, map);
+
+	return struct_size(m, xsk_map, map->max_entries) +
+		   (u64)atomic_read(&m->count) * sizeof(struct xsk_map_node);
+}
+
 static void xsk_map_free(struct bpf_map *map)
 {
 	struct xsk_map *m = container_of(map, struct xsk_map, map);
@@ -267,6 +279,7 @@ static bool xsk_map_meta_equal(const struct bpf_map *meta0,
 	.map_update_elem = xsk_map_update_elem,
 	.map_delete_elem = xsk_map_delete_elem,
 	.map_check_btf = map_check_no_btf,
+	.map_mem_usage = xsk_map_mem_usage,
 	.map_btf_id = &xsk_map_btf_ids[0],
 	.map_redirect = xsk_map_redirect,
 };
-- 
1.8.3.1


Thread overview: 20+ messages
2023-03-05 12:45 [PATCH bpf-next v4 00/18] bpf: bpf memory usage Yafang Shao
2023-03-05 12:45 ` [PATCH bpf-next v4 01/18] bpf: add new map ops ->map_mem_usage Yafang Shao
2023-03-05 12:45 ` [PATCH bpf-next v4 02/18] bpf: lpm_trie memory usage Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 03/18] bpf: hashtab " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 04/18] bpf: arraymap " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 05/18] bpf: stackmap " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 06/18] bpf: reuseport_array " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 07/18] bpf: ringbuf " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 08/18] bpf: bloom_filter " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 09/18] bpf: cpumap " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 10/18] bpf: devmap " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 11/18] bpf: queue_stack_maps " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 12/18] bpf: bpf_struct_ops " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 13/18] bpf: local_storage " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 14/18] bpf, net: bpf_local_storage " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 15/18] bpf, net: sock_map " Yafang Shao
2023-03-05 12:46 ` Yafang Shao [this message]
2023-03-05 12:46 ` [PATCH bpf-next v4 17/18] bpf: offload map " Yafang Shao
2023-03-05 12:46 ` [PATCH bpf-next v4 18/18] bpf: enforce all maps having memory usage callback Yafang Shao
2023-03-07 17:40 ` [PATCH bpf-next v4 00/18] bpf: bpf memory usage patchwork-bot+netdevbpf
