linux-mm.kvack.org archive mirror
* [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
@ 2025-07-16  4:29 Kuniyuki Iwashima
  2025-07-16 19:59 ` Shakeel Butt
  2025-07-16 22:43 ` Andrew Morton
  0 siblings, 2 replies; 9+ messages in thread
From: Kuniyuki Iwashima @ 2025-07-16  4:29 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, Kuniyuki Iwashima, linux-mm, Neal Cardwell

memcg->socket_pressure is initialised with jiffies when the memcg
is created.

Once vmpressure detects that the cgroup is under memory pressure,
the field is updated with jiffies + HZ to signal the fact to the
socket layer and suppress memory allocation for one second.

Otherwise, the field is not updated.

mem_cgroup_under_socket_pressure() uses time_before() to check if
jiffies is less than memcg->socket_pressure, and this has a bug on
32-bit kernel.

  if (time_before(jiffies, memcg->socket_pressure))
          return true;

As time_before() casts the final result to long, the acceptable delta
between two timestamps is 2 ^ (BITS_PER_LONG - 1).

On 32-bit kernel with CONFIG_HZ=1000, this is about 24 days.

  >>> (2 ** 31) / 1000 / 60 / 60 / 24
  24.855134814814818

Once 24 days have passed since the last update of socket_pressure,
mem_cgroup_under_socket_pressure() starts to lie until the next
24 days pass.
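
For example (illustration only, not part of the patch; assuming
CONFIG_HZ=1000 and a 32-bit unsigned long):

  unsigned long pressure = 1000;              /* socket_pressure, set ~24.8 days ago */
  unsigned long now = pressure + (1UL << 31); /* jiffies, 2^31 ticks later           */

  /*
   * time_before(now, pressure) is ((long)(now - pressure) < 0);
   * now - pressure == 0x80000000UL, which is negative as a 32-bit
   * long, so the check falsely reports pressure.
   */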

Thus, we need to update socket_pressure to a recent timestamp
periodically on 32-bit kernel.

Let's do that every 24 hours, with a variation of about 0 to 4 hours.

The variation is to avoid bursting by cgroups created within a small
timeframe, like boot-up.

The work could be racy but does not take vmpr->sr_lock nor re-evaluate
vmpressure_calc_level() under the assumption that socket_pressure will
get updated soon if under memory pressure, because it persists only
for one second.

Note that we don't need to worry about 64-bit machines unless they
serve for 300 million years.

  >>> (2 ** 63) / 1000 / 60 / 60 / 24 / 365
  292471208.6775361

Fixes: 8e8ae645249b8 ("mm: memcontrol: hook up vmpressure to socket pressure")
Reported-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
---
 include/linux/vmpressure.h |  3 +++
 mm/vmpressure.c            | 52 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/linux/vmpressure.h b/include/linux/vmpressure.h
index 6a2f51ebbfd35..946c1b284d4d4 100644
--- a/include/linux/vmpressure.h
+++ b/include/linux/vmpressure.h
@@ -25,6 +25,9 @@ struct vmpressure {
 	struct mutex events_lock;
 
 	struct work_struct work;
+#if BITS_PER_LONG == 32
+	struct delayed_work delayed_work;
+#endif
 };
 
 struct mem_cgroup;
diff --git a/mm/vmpressure.c b/mm/vmpressure.c
index bd5183dfd8791..cd978d580d9a7 100644
--- a/mm/vmpressure.c
+++ b/mm/vmpressure.c
@@ -215,6 +215,51 @@ static void vmpressure_work_fn(struct work_struct *work)
 	} while ((vmpr = vmpressure_parent(vmpr)));
 }
 
+#if BITS_PER_LONG == 32
+static void vmpressure_update_socket_pressure(struct vmpressure *vmpr)
+{
+	struct mem_cgroup *memcg = vmpressure_to_memcg(vmpr);
+	unsigned long delay, variation;
+
+	/*
+	 * The acceptable delta for time_before() is up to 2 ^ (BITS_PER_LONG - 1),
+	 * which is 24 days with CONFIG_HZ=1000, so once-per-day is enough.
+	 */
+	delay = HZ * 60 * 60 * 24;
+
+	/*
+	 * Add a variation (0 ~ 4 hours) to avoid bursting by cgroups created
+	 * during boot.
+	 */
+	variation = (unsigned long)memcg;
+	variation ^= variation >> 16;
+	variation ^= variation >> 8;
+	variation %= 256;
+
+	delay += HZ * 60 * variation;
+
+	mod_delayed_work(system_unbound_wq, &vmpr->delayed_work, delay);
+}
+
+static void vmpressure_update_socket_pressure_fn(struct work_struct *work)
+{
+	struct mem_cgroup *memcg;
+	struct vmpressure *vmpr;
+
+	vmpr = container_of(to_delayed_work(work), struct vmpressure, delayed_work);
+	memcg = vmpressure_to_memcg(vmpr);
+
+	/*
+	 * Update socket_pressure to a recent timestamp, but don't signal
+	 * a false positive to mem_cgroup_under_socket_pressure(), hence the -HZ.
+	 */
+	if (time_before(READ_ONCE(memcg->socket_pressure), jiffies))
+		WRITE_ONCE(memcg->socket_pressure, jiffies - HZ);
+
+	vmpressure_update_socket_pressure(vmpr);
+}
+#endif
+
 /**
  * vmpressure() - Account memory pressure through scanned/reclaimed ratio
  * @gfp:	reclaimer's gfp mask
@@ -462,6 +507,10 @@ void vmpressure_init(struct vmpressure *vmpr)
 	mutex_init(&vmpr->events_lock);
 	INIT_LIST_HEAD(&vmpr->events);
 	INIT_WORK(&vmpr->work, vmpressure_work_fn);
+#if BITS_PER_LONG == 32
+	INIT_DELAYED_WORK(&vmpr->delayed_work, vmpressure_update_socket_pressure_fn);
+	vmpressure_update_socket_pressure(vmpr);
+#endif
 }
 
 /**
@@ -478,4 +527,7 @@ void vmpressure_cleanup(struct vmpressure *vmpr)
 	 * goes away.
 	 */
 	flush_work(&vmpr->work);
+#if BITS_PER_LONG == 32
+	cancel_delayed_work_sync(&vmpr->delayed_work);
+#endif
 }
-- 
2.50.0.727.gbf7dc18ff4-goog




* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-16  4:29 [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel Kuniyuki Iwashima
@ 2025-07-16 19:59 ` Shakeel Butt
  2025-07-16 21:16   ` Kuniyuki Iwashima
  2025-07-16 22:43 ` Andrew Morton
  1 sibling, 1 reply; 9+ messages in thread
From: Shakeel Butt @ 2025-07-16 19:59 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: Andrew Morton, Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, linux-mm, Neal Cardwell

On Wed, Jul 16, 2025 at 04:29:12AM +0000, Kuniyuki Iwashima wrote:
> memcg->socket_pressure is initialised with jiffies when the memcg
> is created.
> 
> Once vmpressure detects that the cgroup is under memory pressure,
> the field is updated with jiffies + HZ to signal the fact to the
> socket layer and suppress memory allocation for one second.
> 
> Otherwise, the field is not updated.
> 
> mem_cgroup_under_socket_pressure() uses time_before() to check if
> jiffies is less than memcg->socket_pressure, and this has a bug on
> 32-bit kernel.
> 
>   if (time_before(jiffies, memcg->socket_pressure))
>           return true;
> 
> As time_before() casts the final result to long, the acceptable delta
> between two timestamps is 2 ^ (BITS_PER_LONG - 1).
> 
> On 32-bit kernel with CONFIG_HZ=1000, this is about 24 days.
> 
>   >>> (2 ** 31) / 1000 / 60 / 60 / 24
>   24.855134814814818
> 
> Once 24 days have passed since the last update of socket_pressure,
> mem_cgroup_under_socket_pressure() starts to lie until the next
> 24 days pass.
> 
> Thus, we need to update socket_pressure to a recent timestamp
> periodically on 32-bit kernel.
> 
> Let's do that every 24 hours, with a variation of about 0 to 4 hours.
> 
> The variation is to avoid bursting by cgroups created within a small
> timeframe, like boot-up.
> 
> The work could be racy but does not take vmpr->sr_lock nor re-evaluate
> vmpressure_calc_level() under the assumption that socket_pressure will
> get updated soon if under memory pressure, because it persists only
> for one second.
> 
> Note that we don't need to worry about 64-bit machines unless they
> serve for 300 million years.
> 
>   >>> (2 ** 63) / 1000 / 60 / 60 / 24 / 365
>   292471208.6775361
> 
> Fixes: 8e8ae645249b8 ("mm: memcontrol: hook up vmpressure to socket pressure")
> Reported-by: Neal Cardwell <ncardwell@google.com>
> Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>

Is this a real issue that you have seen in production? I wonder if
we can just reset memcg->socket_pressure in
mem_cgroup_under_socket_pressure() for 32 bit systems if we see
overflow?




* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-16 19:59 ` Shakeel Butt
@ 2025-07-16 21:16   ` Kuniyuki Iwashima
  0 siblings, 0 replies; 9+ messages in thread
From: Kuniyuki Iwashima @ 2025-07-16 21:16 UTC (permalink / raw)
  To: shakeel.butt
  Cc: akpm, davem, hannes, kuni1840, kuniyu, linux-mm, ncardwell,
	vdavydov.dev

From: Shakeel Butt <shakeel.butt@linux.dev>
Date: Wed, 16 Jul 2025 12:59:13 -0700
> On Wed, Jul 16, 2025 at 04:29:12AM +0000, Kuniyuki Iwashima wrote:
> > memcg->socket_pressure is initialised with jiffies when the memcg
> > is created.
> > 
> > Once vmpressure detects that the cgroup is under memory pressure,
> > the field is updated with jiffies + HZ to signal the fact to the
> > socket layer and suppress memory allocation for one second.
> > 
> > Otherwise, the field is not updated.
> > 
> > mem_cgroup_under_socket_pressure() uses time_before() to check if
> > jiffies is less than memcg->socket_pressure, and this has a bug on
> > 32-bit kernel.
> > 
> >   if (time_before(jiffies, memcg->socket_pressure))
> >           return true;
> > 
> > As time_before() casts the final result to long, the acceptable delta
> > between two timestamps is 2 ^ (BITS_PER_LONG - 1).
> > 
> > On 32-bit kernel with CONFIG_HZ=1000, this is about 24 days.
> > 
> >   >>> (2 ** 31) / 1000 / 60 / 60 / 24
> >   24.855134814814818
> > 
> > Once 24 days have passed since the last update of socket_pressure,
> > mem_cgroup_under_socket_pressure() starts to lie until the next
> > 24 days pass.
> > 
> > Thus, we need to update socket_pressure to a recent timestamp
> > periodically on 32-bit kernel.
> > 
> > Let's do that every 24 hours, with a variation of about 0 to 4 hours.
> > 
> > The variation is to avoid bursting by cgroups created within a small
> > timeframe, like boot-up.
> > 
> > The work could be racy but does not take vmpr->sr_lock nor re-evaluate
> > vmpressure_calc_level() under the assumption that socket_pressure will
> > get updated soon if under memory pressure, because it persists only
> > for one second.
> > 
> > Note that we don't need to worry about 64-bit machines unless they
> > serve for 300 million years.
> > 
> >   >>> (2 ** 63) / 1000 / 60 / 60 / 24 / 365
> >   292471208.6775361
> > 
> > Fixes: 8e8ae645249b8 ("mm: memcontrol: hook up vmpressure to socket pressure")
> > Reported-by: Neal Cardwell <ncardwell@google.com>
> > Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
> 
> Is this a real issue that you have seen in production?

No, this is a theoretical issue that Neal found a while ago.


> I wonder if
> we can just reset memcg->socket_pressure in
> mem_cgroup_under_socket_pressure() for 32 bit systems if we see
> overflow?

I guess you mean something like below?  This doesn't work when jiffies is
close to ULONG_MAX, socket_pressure (jiffies + HZ) has wrapped around to a
value close to 0, and the 'actual' time delta is still within HZ.

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 87b6688f124a7..2bf92514b67ff 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1609,6 +1609,10 @@ static inline bool mem_cgroup_under_socket_pressure(struct mem_cgroup *memcg)
 		return !!memcg->tcpmem_pressure;
 #endif /* CONFIG_MEMCG_V1 */
 	do {
+#if BITS_PER_LONG == 32
+		if (jiffies - READ_ONCE(memcg->socket_pressure) > LONG_MAX)
+			WRITE_ONCE(memcg->socket_pressure, jiffies - HZ);
+#endif
 		if (time_before(jiffies, READ_ONCE(memcg->socket_pressure)))
 			return true;
 	} while ((memcg = parent_mem_cgroup(memcg)));
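
Concretely, with hypothetical values (HZ=1000, 32-bit unsigned long):

  unsigned long now = ULONG_MAX - 10;   /* jiffies, just before wrapping */
  unsigned long sp  = now + HZ;         /* socket_pressure, wraps to 989 */

  /*
   * now - sp == ULONG_MAX - 999 > LONG_MAX, so the check above would
   * reset socket_pressure even though time_before(now, sp) correctly
   * returns true and the one-second suppression is still valid.
   */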



* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-16  4:29 [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel Kuniyuki Iwashima
  2025-07-16 19:59 ` Shakeel Butt
@ 2025-07-16 22:43 ` Andrew Morton
  2025-07-16 23:13   ` Kuniyuki Iwashima
  1 sibling, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2025-07-16 22:43 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, linux-mm, Neal Cardwell

On Wed, 16 Jul 2025 04:29:12 +0000 Kuniyuki Iwashima <kuniyu@google.com> wrote:

> memcg->socket_pressure is initialised with jiffies when the memcg
> is created.
> 
> Once vmpressure detects that the cgroup is under memory pressure,
> the field is updated with jiffies + HZ to signal the fact to the
> socket layer and suppress memory allocation for one second.
> 
> Otherwise, the field is not updated.
> 
> mem_cgroup_under_socket_pressure() uses time_before() to check if
> jiffies is less than memcg->socket_pressure, and this has a bug on
> 32-bit kernel.
> 
>   if (time_before(jiffies, memcg->socket_pressure))
>           return true;
> 
> As time_before() casts the final result to long, the acceptable delta
> between two timestamps is 2 ^ (BITS_PER_LONG - 1).
> 
> On 32-bit kernel with CONFIG_HZ=1000, this is about 24 days.
> 
>   >>> (2 ** 31) / 1000 / 60 / 60 / 24
>   24.855134814814818
> 
> Once 24 days have passed since the last update of socket_pressure,
> mem_cgroup_under_socket_pressure() starts to lie until the next
> 24 days pass.
> 
> Thus, we need to update socket_pressure to a recent timestamp
> periodically on 32-bit kernel.
> 
> Let's do that every 24 hours, with a variation of about 0 to 4 hours.

Can't we simply convert ->socket_pressure to a 64-bit type?
timespec64/time64_t/etc?




* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-16 22:43 ` Andrew Morton
@ 2025-07-16 23:13   ` Kuniyuki Iwashima
  2025-07-16 23:52     ` Andrew Morton
  0 siblings, 1 reply; 9+ messages in thread
From: Kuniyuki Iwashima @ 2025-07-16 23:13 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, linux-mm, Neal Cardwell

On Wed, Jul 16, 2025 at 3:43 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 16 Jul 2025 04:29:12 +0000 Kuniyuki Iwashima <kuniyu@google.com> wrote:
>
> > memcg->socket_pressure is initialised with jiffies when the memcg
> > is created.
> >
> > Once vmpressure detects that the cgroup is under memory pressure,
> > the field is updated with jiffies + HZ to signal the fact to the
> > socket layer and suppress memory allocation for one second.
> >
> > Otherwise, the field is not updated.
> >
> > mem_cgroup_under_socket_pressure() uses time_before() to check if
> > jiffies is less than memcg->socket_pressure, and this has a bug on
> > 32-bit kernel.
> >
> >   if (time_before(jiffies, memcg->socket_pressure))
> >           return true;
> >
> > As time_before() casts the final result to long, the acceptable delta
> > between two timestamps is 2 ^ (BITS_PER_LONG - 1).
> >
> > On 32-bit kernel with CONFIG_HZ=1000, this is about 24 days.
> >
> >   >>> (2 ** 31) / 1000 / 60 / 60 / 24
> >   24.855134814814818
> >
> > Once 24 days have passed since the last update of socket_pressure,
> > mem_cgroup_under_socket_pressure() starts to lie until the next
> > 24 days pass.
> >
> > Thus, we need to update socket_pressure to a recent timestamp
> > periodically on 32-bit kernel.
> >
> > Let's do that every 24 hours, with a variation of about 0 to 4 hours.
>
> Can't we simply convert ->socket_pressure to a 64-bit type?
> timespec64/time64_t/etc?

I think it's doable with get_jiffies_64() & time_before64().

My thought was a delayed work would be better than adding
two seqlock reads in the networking fast path.
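
Just to illustrate the u64 direction (untested sketch, not an actual
v2 patch; it assumes socket_pressure becomes u64 and is set with
get_jiffies_64() + HZ):

	do {
		if (time_before64(get_jiffies_64(),
				  READ_ONCE(memcg->socket_pressure)))
			return true;
	} while ((memcg = parent_mem_cgroup(memcg)));

On 32-bit, each get_jiffies_64() call is a read_seqbegin()/read_seqretry()
loop on jiffies_lock, which is the seqlock cost mentioned above.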



* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-16 23:13   ` Kuniyuki Iwashima
@ 2025-07-16 23:52     ` Andrew Morton
  2025-07-16 23:58       ` Kuniyuki Iwashima
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2025-07-16 23:52 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, linux-mm, Neal Cardwell

On Wed, 16 Jul 2025 16:13:54 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:

> > > Thus, we need to update socket_pressure to a recent timestamp
> > > periodically on 32-bit kernel.
> > >
> > > Let's do that every 24 hours, with a variation of about 0 to 4 hours.
> >
> > Can't we simply convert ->socket_pressure to a 64-bit type?
> > timespec64/time64_t/etc?
> 
> I think it's doable with get_jiffies_64() & time_before64().
> 
> My thought was a delayed work would be better than adding
> two seqlock in the networking fast path.

Is it on a very fast path?  mem_cgroup_under_socket_pressure() itself
doesn't look very fastpath-friendly and seqcounts are fast.  Bearing in
mind that this affects 32-bit machines only.

If a get_jiffies_64() call is demonstrated to be a performance issue
then perhaps there's something sneaky we can do along the lines of
reading jiffies_64 directly then falling back to get_jiffies_64() in
the rare something-went-wrong path.  Haven't thought about it :)

Dunno, the proposed patch just feels like overkill for a silly
32/64-bit issue?



* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-16 23:52     ` Andrew Morton
@ 2025-07-16 23:58       ` Kuniyuki Iwashima
  2025-07-17  1:02         ` Andrew Morton
  0 siblings, 1 reply; 9+ messages in thread
From: Kuniyuki Iwashima @ 2025-07-16 23:58 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, linux-mm, Neal Cardwell

On Wed, Jul 16, 2025 at 4:52 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 16 Jul 2025 16:13:54 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:
>
> > > > Thus, we need to update socket_pressure to a recent timestamp
> > > > periodically on 32-bit kernel.
> > > >
> > > > Let's do that every 24 hours, with a variation of about 0 to 4 hours.
> > >
> > > Can't we simply convert ->socket_pressure to a 64-bit type?
> > > timespec64/time64_t/etc?
> >
> > I think it's doable with get_jiffies_64() & time_before64().
> >
> > My thought was a delayed work would be better than adding
> > two seqlock reads in the networking fast path.
>
> Is it on a very fast path?  mem_cgroup_under_socket_pressure() itself
> doesn't look very fastpath-friendly and seqcounts are fast.  Bearing in
> mind that this affects 32-bit machines only.
>
> If a get_jiffies_64() call is demonstrated to be a performance issue
> then perhaps there's something sneaky we can do along the lines of
> reading jiffies_64 directly then falling back to get_jiffies_64() in
> the rare something-went-wrong path.  Haven't thought about it :)
>
> Dunno, the proposed patch just feels like overkill for a silly
> 32/66-bit issue?

Fair enough, I'll make it u64 in v2.

Thank you!



* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-16 23:58       ` Kuniyuki Iwashima
@ 2025-07-17  1:02         ` Andrew Morton
  2025-07-17 19:37           ` Kuniyuki Iwashima
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2025-07-17  1:02 UTC (permalink / raw)
  To: Kuniyuki Iwashima
  Cc: Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, linux-mm, Neal Cardwell

On Wed, 16 Jul 2025 16:58:29 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:

> On Wed, Jul 16, 2025 at 4:52 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Wed, 16 Jul 2025 16:13:54 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:
> >
> > > > > Thus, we need to update socket_pressure to a recent timestamp
> > > > > periodically on 32-bit kernel.
> > > > >
> > > > > Let's do that every 24 hours, with a variation of about 0 to 4 hours.
> > > >
> > > > Can't we simply convert ->socket_pressure to a 64-bit type?
> > > > timespec64/time64_t/etc?
> > >
> > > I think it's doable with get_jiffies_64() & time_before64().
> > >
> > > My thought was a delayed work would be better than adding
> > > two seqlock reads in the networking fast path.
> >
> > Is it on a very fast path?  mem_cgroup_under_socket_pressure() itself
> > doesn't look very fastpath-friendly and seqcounts are fast.  Bearing in
> > mind that this affects 32-bit machines only.
> >
> > If a get_jiffies_64() call is demonstrated to be a performance issue
> > then perhaps there's something sneaky we can do along the lines of
> > reading jiffies_64 directly then falling back to get_jiffies_64() in
> > the rare something-went-wrong path.  Haven't thought about it :)
> >
> > Dunno, the proposed patch just feels like overkill for a silly
> > 32/64-bit issue?
> 
> Fair enough, I'll make it u64 in v2.
> 

Well, please do pay attention to any performance impact.  Measurements
with some simple microbenchmark if possible?



* Re: [PATCH] memcg: Keep socket_pressure fresh on 32-bit kernel.
  2025-07-17  1:02         ` Andrew Morton
@ 2025-07-17 19:37           ` Kuniyuki Iwashima
  0 siblings, 0 replies; 9+ messages in thread
From: Kuniyuki Iwashima @ 2025-07-17 19:37 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, David S. Miller, Vladimir Davydov,
	Kuniyuki Iwashima, linux-mm, Neal Cardwell

On Wed, Jul 16, 2025 at 6:02 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Wed, 16 Jul 2025 16:58:29 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:
>
> > On Wed, Jul 16, 2025 at 4:52 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> > >
> > > On Wed, 16 Jul 2025 16:13:54 -0700 Kuniyuki Iwashima <kuniyu@google.com> wrote:
> > >
> > > > > > Thus, we need to update socket_pressure to a recent timestamp
> > > > > > periodically on 32-bit kernel.
> > > > > >
> > > > > > Let's do that every 24 hours, with a variation of about 0 to 4 hours.
> > > > >
> > > > > Can't we simply convert ->socket_pressure to a 64-bit type?
> > > > > timespec64/time64_t/etc?
> > > >
> > > > I think it's doable with get_jiffies_64() & time_before64().
> > > >
> > > > My thought was a delayed work would be better than adding
> > > > two seqlock reads in the networking fast path.
> > >
> > > Is it on a very fast path?  mem_cgroup_under_socket_pressure() itself
> > > doesn't look very fastpath-friendly and seqcounts are fast.  Bearing in
> > > mind that this affects 32-bit machines only.
> > >
> > > If a get_jiffies_64() call is demonstrated to be a performance issue
> > > then perhaps there's something sneaky we can do along the lines of
> > > reading jiffies_64 directly then falling back to get_jiffies_64() in
> > > the rare something-went-wrong path.  Haven't thought about it :)
> > >
> > > Dunno, the proposed patch just feels like overkill for a silly
> > > 32/64-bit issue?
> >
> > Fair enough, I'll make it u64 in v2.
> >
>
> Well, please do pay attention to any performance impact.  Measurements
> with some simple microbenchmark if possible?

I don't have a real 32-bit machine, so these are results from QEMU,
but with and without the u64 jiffies patch, the time spent in
mem_cgroup_under_socket_pressure() was 1~5us and I didn't see any
measurable delta.

no patch applied:
iperf3   273 [000]   137.296248: probe:mem_cgroup_under_socket_pressure: (c13660d0)
                c13660d1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3   273 [000]   137.296249: probe:mem_cgroup_under_socket_pressure__return: (c13660d0 <- c1d8fd7f)
iperf3   273 [000]   137.296251: probe:mem_cgroup_under_socket_pressure: (c13660d0)
                c13660d1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3   273 [000]   137.296253: probe:mem_cgroup_under_socket_pressure__return: (c13660d0 <- c1d8fd7f)


u64 jiffies patch applied:
iperf3   308 [001]   330.669370: probe:mem_cgroup_under_socket_pressure: (c12ddba0)
                c12ddba1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3   308 [001]   330.669371: probe:mem_cgroup_under_socket_pressure__return: (c12ddba0 <- c1ce98bf)
iperf3   308 [001]   330.669382: probe:mem_cgroup_under_socket_pressure: (c12ddba0)
                c12ddba1 mem_cgroup_under_socket_pressure+0x1 ([kernel.kallsyms])
iperf3   308 [001]   330.669384: probe:mem_cgroup_under_socket_pressure__return: (c12ddba0 <- c1ce98bf)


So, the u64 approach is good enough :)

Thanks!


