From: Shakeel Butt <shakeel.butt@linux.dev>
To: Daniel Sedlak <daniel.sedlak@cdn77.com>
Cc: "David S. Miller" <davem@davemloft.net>,
"Eric Dumazet" <edumazet@google.com>,
"Jakub Kicinski" <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
"Simon Horman" <horms@kernel.org>,
"Jonathan Corbet" <corbet@lwn.net>,
"Neal Cardwell" <ncardwell@google.com>,
"Kuniyuki Iwashima" <kuniyu@google.com>,
"David Ahern" <dsahern@kernel.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Yosry Ahmed" <yosry.ahmed@linux.dev>,
linux-mm@kvack.org, netdev@vger.kernel.org,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Hocko" <mhocko@kernel.org>,
"Roman Gushchin" <roman.gushchin@linux.dev>,
"Muchun Song" <muchun.song@linux.dev>,
cgroups@vger.kernel.org, "Tejun Heo" <tj@kernel.org>,
"Michal Koutný" <mkoutny@suse.com>,
"Matyas Hurtik" <matyas.hurtik@cdn77.com>
Subject: Re: [PATCH v4] memcg: expose socket memory pressure in a cgroup
Date: Tue, 5 Aug 2025 16:02:21 -0700
Message-ID: <fcnlbvljynxu5qlzmnjeagll7nf5mje7rwkimbqok6doso37gl@lwepk3ztjga7>
In-Reply-To: <20250805064429.77876-1-daniel.sedlak@cdn77.com>

On Tue, Aug 05, 2025 at 08:44:29AM +0200, Daniel Sedlak wrote:
> This patch is a result of our long-standing debug sessions, where it all
> started as "networking is slow": TCP throughput suddenly dropped from
> tens of Gbps to a few Mbps, yet nothing showed up in the kernel log or
> the netstat counters.
>
> Currently, we have two memory pressure counters for TCP sockets [1],
> which we manipulate only when the memory pressure is signaled through
> the proto struct [2]. However, memory pressure can also be signaled
> through the cgroup memory subsystem, which is not reflected in the
> netstat counters. In the end, when the cgroup memory subsystem signals
> that it is under pressure, we silently reduce the advertised TCP window
> with tcp_adjust_rcv_ssthresh() to 4*advmss, which causes a significant
> throughput reduction.
>
> Keep in mind that when the cgroup memory subsystem signals the socket
> memory pressure, it affects all sockets used in that cgroup.
>
> This patch exposes a new file for each cgroup in the cgroup filesystem
> which signals the cgroup socket memory pressure. The file is
> accessible at the following path.
>
> /sys/fs/cgroup/**/<cgroup name>/memory.net.socket_pressure
Let's keep the name concise. Maybe memory.net.pressure?
>
> The output value is a cumulative sum of microseconds spent
> under pressure for that particular cgroup.
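Since the value is a monotonically increasing sum, consumers would sample it
twice and take the difference; something like the following sketch (the cgroup
path and numbers are illustrative, not from the patch):

```shell
# Report time spent under socket pressure over an interval. The counter
# is cumulative, so only the delta between two reads is meaningful.
pressure_delta() {
    # $1 = earlier sample, $2 = later sample, both in microseconds
    echo $(( $2 - $1 ))
}

# Intended use (cgroup path is illustrative):
#   f=/sys/fs/cgroup/mygroup/memory.net.socket_pressure
#   a=$(cat "$f"); sleep 10; b=$(cat "$f")
#   echo "under pressure for $(pressure_delta "$a" "$b") us of the last 10s"
pressure_delta 1000000 3500000
```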
>
> Link: https://elixir.bootlin.com/linux/v6.15.4/source/include/uapi/linux/snmp.h#L231-L232 [1]
> Link: https://elixir.bootlin.com/linux/v6.15.4/source/include/net/sock.h#L1300-L1301 [2]
> Co-developed-by: Matyas Hurtik <matyas.hurtik@cdn77.com>
> Signed-off-by: Matyas Hurtik <matyas.hurtik@cdn77.com>
> Signed-off-by: Daniel Sedlak <daniel.sedlak@cdn77.com>
> ---
> Changes:
> v3 -> v4:
> - Add documentation
> - Expose pressure as a cumulative counter in microseconds
> - Link to v3: https://lore.kernel.org/netdev/20250722071146.48616-1-daniel.sedlak@cdn77.com/
>
> v2 -> v3:
> - Expose the socket memory pressure on the cgroups instead of netstat
> - Split patch
> - Link to v2: https://lore.kernel.org/netdev/20250714143613.42184-1-daniel.sedlak@cdn77.com/
>
> v1 -> v2:
> - Add tracepoint
> - Link to v1: https://lore.kernel.org/netdev/20250707105205.222558-1-daniel.sedlak@cdn77.com/
>
> Documentation/admin-guide/cgroup-v2.rst | 7 +++++++
> include/linux/memcontrol.h | 2 ++
> mm/memcontrol.c | 15 +++++++++++++++
> mm/vmpressure.c | 9 ++++++++-
> 4 files changed, 32 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> index 0cc35a14afbe..c810b449fb3d 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1884,6 +1884,13 @@ The following nested keys are defined.
> Shows pressure stall information for memory. See
> :ref:`Documentation/accounting/psi.rst <psi>` for details.
>
> + memory.net.socket_pressure
> + A read-only single value file showing the cumulative time, in
> + microseconds, that sockets within that cgroup have spent under
> + memory pressure.
> +
> + Note that while sockets are under pressure, networking
> + throughput can be significantly degraded.
> +
>
> Usage Guidelines
> ~~~~~~~~~~~~~~~~
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 87b6688f124a..6a1cb9a99b88 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -252,6 +252,8 @@ struct mem_cgroup {
> * where socket memory is accounted/charged separately.
> */
> unsigned long socket_pressure;
> + /* exported statistic for memory.net.socket_pressure */
> + unsigned long socket_pressure_duration;
I think atomic_long_t would be better.
>
> int kmemcg_id;
> /*
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 902da8a9c643..8e299d94c073 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3758,6 +3758,7 @@ static struct mem_cgroup *mem_cgroup_alloc(struct mem_cgroup *parent)
> INIT_LIST_HEAD(&memcg->swap_peaks);
> spin_lock_init(&memcg->peaks_lock);
> memcg->socket_pressure = jiffies;
> + memcg->socket_pressure_duration = 0;
> memcg1_memcg_init(memcg);
> memcg->kmemcg_id = -1;
> INIT_LIST_HEAD(&memcg->objcg_list);
> @@ -4647,6 +4648,15 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> return nbytes;
> }
>
> +static int memory_socket_pressure_show(struct seq_file *m, void *v)
> +{
> + struct mem_cgroup *memcg = mem_cgroup_from_seq(m);
> +
> + seq_printf(m, "%lu\n", READ_ONCE(memcg->socket_pressure_duration));
> +
> + return 0;
> +}
> +
> static struct cftype memory_files[] = {
> {
> .name = "current",
> @@ -4718,6 +4728,11 @@ static struct cftype memory_files[] = {
> .flags = CFTYPE_NS_DELEGATABLE,
> .write = memory_reclaim,
> },
> + {
> + .name = "net.socket_pressure",
> + .flags = CFTYPE_NOT_ON_ROOT,
> + .seq_show = memory_socket_pressure_show,
> + },
> { } /* terminate */
> };
>
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index bd5183dfd879..1e767cd8aa08 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -308,6 +308,8 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
> level = vmpressure_calc_level(scanned, reclaimed);
>
> if (level > VMPRESSURE_LOW) {
> + unsigned long socket_pressure;
> + unsigned long jiffies_diff;
> /*
> * Let the socket buffer allocator know that
> * we are having trouble reclaiming LRU pages.
> @@ -316,7 +318,12 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
> * asserted for a second in which subsequent
> * pressure events can occur.
> */
> - WRITE_ONCE(memcg->socket_pressure, jiffies + HZ);
> + socket_pressure = jiffies + HZ;
> +
> + jiffies_diff = min(socket_pressure - READ_ONCE(memcg->socket_pressure), HZ);
> + memcg->socket_pressure_duration += jiffies_to_usecs(jiffies_diff);
KCSAN will complain about this. I think we can use atomic_long_add();
we don't need the strictly ordered variant here.
> +
> + WRITE_ONCE(memcg->socket_pressure, socket_pressure);
> }
> }
> }
>
> base-commit: e96ee511c906c59b7c4e6efd9d9b33917730e000
> --
> 2.39.5
>