linux-mm.kvack.org archive mirror
From: Matyas Hurtik <matyas.hurtik@cdn77.com>
To: Shakeel Butt <shakeel.butt@linux.dev>,
	Daniel Sedlak <daniel.sedlak@cdn77.com>
Cc: "Kuniyuki Iwashima" <kuniyu@google.com>,
	"David S. Miller" <davem@davemloft.net>,
	"Eric Dumazet" <edumazet@google.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Paolo Abeni" <pabeni@redhat.com>,
	"Simon Horman" <horms@kernel.org>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Neal Cardwell" <ncardwell@google.com>,
	"David Ahern" <dsahern@kernel.org>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Yosry Ahmed" <yosry.ahmed@linux.dev>,
	linux-mm@kvack.org, netdev@vger.kernel.org,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Michal Hocko" <mhocko@kernel.org>,
	"Roman Gushchin" <roman.gushchin@linux.dev>,
	"Muchun Song" <muchun.song@linux.dev>,
	cgroups@vger.kernel.org, "Tejun Heo" <tj@kernel.org>,
	"Michal Koutný" <mkoutny@suse.com>
Subject: Re: [PATCH v4] memcg: expose socket memory pressure in a cgroup
Date: Thu, 14 Aug 2025 18:27:22 +0200	[thread overview]
Message-ID: <4937aca8-8ebb-47d5-986f-7bb27ddbdaba@cdn77.com> (raw)
In-Reply-To: <qsncixzj7s7jd7f3l2erjjs7cx3fanmlbkh4auaapsvon45rx3@62o2nqwrb43e>

On 8/7/25 10:52 PM, Shakeel Butt wrote:

> We definitely don't need a global lock. For memcg->net_pressure_lock, we
> need to be very clear why we need this lock. Basically we are doing RMW
> on memcg->socket_pressure and we want to know 'consistently' how much
> further we are pushing memcg->socket_pressure. In other words, the
> consistent value of diff. The lock is one way to get that consistent
> diff. We can also play some atomic ops trick to get the consistent value
> without a lock but I don't think that complexity is worth it.

Hello,


I tried implementing the second option, making the diff consistent with
atomics instead of a lock. Would something like the following work?

if (level > VMPRESSURE_LOW) {
        unsigned long new_socket_pressure;
        unsigned long old_socket_pressure;
        unsigned long duration_to_add;

        /*
         * Let the socket buffer allocator know that
         * we are having trouble reclaiming LRU pages.
         *
         * For hysteresis keep the pressure state
         * asserted for a second in which subsequent
         * pressure events can occur.
         */
        new_socket_pressure = jiffies + HZ;
        old_socket_pressure = atomic_long_xchg(&memcg->socket_pressure,
                                               new_socket_pressure);

        duration_to_add = jiffies_to_usecs(min(new_socket_pressure -
                                               old_socket_pressure, HZ));

        do {
                atomic_long_add(duration_to_add,
                                &memcg->socket_pressure_duration);
        } while ((memcg = parent_mem_cgroup(memcg)));
}

memcg->socket_pressure would need to be changed to an atomic_long_t, but this
way we avoid adding memcg->net_pressure_lock. Since atomic_long_xchg() returns
the previously stored value, consecutive callers see non-overlapping diffs
even when multiple reclaimers race.
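
On the read side, consumers would then use atomic_long_read() instead of a
plain load. Below is a minimal, untested sketch of what the pressure check
could look like, walking the hierarchy the same way the duration propagation
above does (the name follows the existing mem_cgroup_under_socket_pressure()
helper; the cgroup v1 tcpmem path is omitted):

/*
 * Sketch only: assumes memcg->socket_pressure has been converted to
 * atomic_long_t and that the parent walk is otherwise kept unchanged.
 */
static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
{
        do {
                if (time_before(jiffies,
                                atomic_long_read(&memcg->socket_pressure)))
                        return true;
        } while ((memcg = parent_mem_cgroup(memcg)));

        return false;
}

Since atomic_long_xchg() is a fully ordered read-modify-write, the reader
should not need any additional barriers around this load.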

> We don't need memcg->net_pressure_lock's protection for
> sk_pressure_duration of the memcg and its ancestors if additions to
> sk_pressure_duration are atomic.

Regarding the hierarchical propagation, I noticed during testing that
vmpressure() was sometimes called with memcgs (created for systemd oneshot
services) that were at that point no longer present in the /sys/fs/cgroup
tree. This made their parents' counters considerably larger than the sum over
the visible subtree plus the parents' own value. Would this behavior be
correct?
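
To make the comparison concrete, here is an untested userspace sketch of such
a check (the memory.net.socket_pressure_duration file name below is only a
placeholder, not the interface name proposed by this patch):

/*
 * Compare a parent cgroup's counter with the sum over its *visible*
 * children. The interface file name is a placeholder; adjust it to
 * whatever this patch ends up exposing.
 */
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

static unsigned long long read_counter(const char *cgroup)
{
        char path[4352];
        unsigned long long val = 0;
        FILE *f;

        snprintf(path, sizeof(path),
                 "%s/memory.net.socket_pressure_duration", cgroup);
        f = fopen(path, "r");
        if (!f)
                return 0;
        if (fscanf(f, "%llu", &val) != 1)
                val = 0;
        fclose(f);
        return val;
}

int main(int argc, char **argv)
{
        const char *parent = argc > 1 ? argv[1] : "/sys/fs/cgroup";
        unsigned long long children_sum = 0;
        struct dirent *de;
        char child[4096];
        struct stat st;
        DIR *dir;

        dir = opendir(parent);
        if (!dir) {
                perror("opendir");
                return 1;
        }

        while ((de = readdir(dir))) {
                if (de->d_name[0] == '.')
                        continue;
                snprintf(child, sizeof(child), "%s/%s", parent, de->d_name);
                if (stat(child, &st) || !S_ISDIR(st.st_mode))
                        continue;
                children_sum += read_counter(child);
        }
        closedir(dir);

        printf("parent: %llu, sum of visible children: %llu\n",
               read_counter(parent), children_sum);
        return 0;
}

With hierarchical propagation, the gap between the parent and the sum over its
visible children covers both the parent's own direct contribution and anything
charged through such no-longer-visible child memcgs.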


Thanks,

Matyas



Thread overview: 25+ messages
2025-08-05  6:44 [PATCH v4] memcg: expose socket memory pressure in a cgroup Daniel Sedlak
2025-08-05 15:54 ` Kuniyuki Iwashima
2025-08-05 23:02 ` Shakeel Butt
2025-08-06 19:20   ` Kuniyuki Iwashima
2025-08-06 21:54     ` Shakeel Butt
2025-08-06 22:01       ` Kuniyuki Iwashima
2025-08-06 23:34         ` Shakeel Butt
2025-08-06 23:40           ` Kuniyuki Iwashima
2025-08-06 23:51             ` Shakeel Butt
2025-08-07 10:22           ` Daniel Sedlak
2025-08-07 20:52             ` Shakeel Butt
2025-08-14 16:27               ` Matyas Hurtik [this message]
2025-08-14 17:31                 ` Shakeel Butt
2025-08-14 17:43                   ` Shakeel Butt
2025-08-07 10:42   ` Daniel Sedlak
2025-08-09 18:32 ` Tejun Heo
2025-08-11 21:31   ` Shakeel Butt
2025-08-13 12:03   ` Michal Koutný
2025-08-13 18:03     ` Tejun Heo
2025-08-20 16:51       ` Matyas Hurtik
2025-08-20 19:03         ` Tejun Heo
2025-08-20 19:31           ` Shakeel Butt
2025-08-20 20:37           ` Matyas Hurtik
2025-08-20 21:34             ` Shakeel Butt
2025-08-21 18:44               ` Matyas Hurtik
