From: Shakeel Butt <shakeelb@google.com>
To: Abel Wu <wuyun.abel@bytedance.com>
Cc: "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	 Jakub Kicinski <kuba@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	 Kuniyuki Iwashima <kuniyu@amazon.com>,
	Breno Leitao <leitao@debian.org>,
	 Alexander Mikhalitsyn <alexander@mihalicyn.com>,
	David Howells <dhowells@redhat.com>,
	 Jason Xing <kernelxing@tencent.com>,
	Xin Long <lucien.xin@gmail.com>,
	 KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujtsu.com>,
	 "open list:NETWORKING [GENERAL]" <netdev@vger.kernel.org>,
	open list <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next 2/2] sock: Fix improper heuristic on raising memory
Date: Sun, 24 Sep 2023 07:28:16 +0000	[thread overview]
Message-ID: <20230924072816.6ywgoe7ab2max672@google.com> (raw)
In-Reply-To: <71ac08d3-9f36-e0de-870e-3e252abcb66a@bytedance.com>

On Fri, Sep 22, 2023 at 06:10:06PM +0800, Abel Wu wrote:
[...]
> 
> After a second thought, it is still vague to me where memcg pressure
> should sit in the socket memory allocation path. It lacks a
> convincing design. I think the above hunk helps, but not much.
> 
> I wonder if we should take option (3) first. Thoughts?
> 

Let's take a step further and decouple the memcg accounting from the
global skmem accounting. __sk_mem_raise_allocated() is already very
hard to reason about, and there are a couple of heuristics in it which
may or may not apply to both accounting infrastructures.

Let's explicitly document which heuristics are allowed to force an
allocation to succeed, i.e. irrespective of pressure or being over the
limit, for each accounting infra. I think decoupling them would make
the flow of the code much clearer.
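
Just to illustrate the rough shape (the helper names below are made up
for this sketch and are not existing APIs, and the bodies are heavily
simplified):

/* Illustrative sketch only: sk_memcg_raise_sketch() and
 * sk_global_raise_sketch() are made-up names, not existing helpers.
 * The point is just that each accounting infra gets its own explicit
 * decision, with its own documented force-success conditions.
 */
static bool sk_memcg_raise_sketch(struct sock *sk, int amt)
{
	if (!mem_cgroup_sockets_enabled || !sk->sk_memcg)
		return true;

	/* memcg-only decision, with whatever force-success heuristics
	 * we agree apply to memcg
	 */
	return mem_cgroup_charge_skmem(sk->sk_memcg, amt,
				       gfp_memcg_charge());
}

static bool sk_global_raise_sketch(struct sock *sk, int amt)
{
	long allocated;

	sk_memory_allocated_add(sk, amt);
	allocated = sk_memory_allocated(sk);

	/* under the soft limit, always succeed */
	if (allocated <= sk_prot_mem_limits(sk, 0))
		return true;

	/* pressure handling and the global force-success heuristics
	 * would live here, documented separately from memcg
	 */
	return false;
}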

There are three heuristics (a paraphrased sketch of where each lives
in the current code follows the list):

1. Guarantee a minimum buffer size, even under pressure.

2. Allow the allocation if the socket's usage is below the system-wide
per-socket average.

3. Allow the allocation for a stream sender that is already over its
sndbuf.
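
This is roughly where each of these sits in __sk_mem_raise_allocated()
today; simplified and paraphrased, not a verbatim copy:

/* Paraphrased sketch of the force-success paths in
 * __sk_mem_raise_allocated() (net/core/sock.c), assuming we are
 * already over the soft limit but not over the hard limit.
 * Returning 1 lets the allocation through.
 */
static int sk_force_success_sketch(struct sock *sk, int size, int kind)
{
	struct proto *prot = sk->sk_prot;

	/* 1. The minimum buffer size is granted even under pressure. */
	if (kind == SK_MEM_RECV) {
		if (atomic_read(&sk->sk_rmem_alloc) < sk_get_rmem0(sk, prot))
			return 1;
	} else if (sk->sk_type == SOCK_STREAM &&
		   sk->sk_wmem_queued < sk_get_wmem0(sk, prot)) {
		return 1;
	}

	/* 2. Under global pressure, a socket whose usage is below the
	 *    system-wide per-socket average (hard limit / number of
	 *    sockets) may still grow.
	 */
	if (sk_prot_mem_limits(sk, 2) >
	    sk_sockets_allocated_read_positive(sk) *
	    sk_mem_pages(sk->sk_wmem_queued +
			 atomic_read(&sk->sk_rmem_alloc) +
			 sk->sk_forward_alloc))
		return 1;

	/* 3. A stream sender already at or over its (moderated) sndbuf
	 *    gets this allocation; the sndbuf will throttle it from
	 *    here on.
	 */
	if (kind == SK_MEM_SEND && sk->sk_type == SOCK_STREAM) {
		sk_stream_moderate_sndbuf(sk);
		if (sk->sk_wmem_queued + size >= sk->sk_sndbuf)
			return 1;
	}

	return 0;
}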

Let's discuss which heuristic applies to which accounting infra and
under which state (under pressure or over limit).

thanks,
Shakeel


Thread overview: 9+ messages
2023-09-20 13:25 [PATCH net-next 1/2] sock: Code cleanup on __sk_mem_raise_allocated() Abel Wu
2023-09-20 13:25 ` [PATCH net-next 2/2] sock: Fix improper heuristic on raising memory Abel Wu
2023-09-21 19:01   ` Shakeel Butt
2023-09-22  8:36     ` Abel Wu
2023-09-22 10:10       ` Abel Wu
2023-09-24  7:28         ` Shakeel Butt [this message]
2023-10-03 12:49           ` Abel Wu
2023-10-11  3:04             ` Abel Wu
2023-10-13  1:09             ` Shakeel Butt
