netdev.vger.kernel.org archive mirror
From: Soheil Hassas Yeganeh <soheil@google.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: "David S . Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	netdev <netdev@vger.kernel.org>, Wei Wang <weiwan@google.com>,
	Shakeel Butt <shakeelb@google.com>,
	Neal Cardwell <ncardwell@google.com>,
	Eric Dumazet <edumazet@google.com>
Subject: Re: [PATCH net-next 4/7] net: implement per-cpu reserves for memory_allocated
Date: Thu, 9 Jun 2022 09:33:25 -0400	[thread overview]
Message-ID: <CACSApvYEwczGVvOxOfDXNHd_x5LDb1vXT03y-=6CcrTv1uR9Kw@mail.gmail.com> (raw)
In-Reply-To: <20220609063412.2205738-5-eric.dumazet@gmail.com>

On Thu, Jun 9, 2022 at 2:34 AM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> We plan to keep sk->sk_forward_alloc as small as possible
> in future patches.
>
> This means we are going to call sk_memory_allocated_add()
> and sk_memory_allocated_sub() more often.
>
> Implement a per-cpu cache of +1/-1 MB to reduce the number
> of changes to sk->sk_prot->memory_allocated, which
> would otherwise be a cause of false sharing.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Acked-by: Soheil Hassas Yeganeh <soheil@google.com>

> ---
>  include/net/sock.h | 38 +++++++++++++++++++++++++++++---------
>  1 file changed, 29 insertions(+), 9 deletions(-)
>
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 825f8cbf791f02d798f17dd4f7a2659cebb0e98a..59040fee74e7de8d63fbf719f46e172906c134bb 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1397,22 +1397,48 @@ static inline bool sk_under_memory_pressure(const struct sock *sk)
>         return !!*sk->sk_prot->memory_pressure;
>  }
>
> +static inline long
> +proto_memory_allocated(const struct proto *prot)
> +{
> +       return max(0L, atomic_long_read(prot->memory_allocated));
> +}
> +
>  static inline long
>  sk_memory_allocated(const struct sock *sk)
>  {
> -       return atomic_long_read(sk->sk_prot->memory_allocated);
> +       return proto_memory_allocated(sk->sk_prot);
>  }
>
> +/* 1 MB per cpu, in page units */
> +#define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
> +
>  static inline long
>  sk_memory_allocated_add(struct sock *sk, int amt)
>  {
> -       return atomic_long_add_return(amt, sk->sk_prot->memory_allocated);
> +       int local_reserve;
> +
> +       preempt_disable();
> +       local_reserve = __this_cpu_add_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
> +       if (local_reserve >= SK_MEMORY_PCPU_RESERVE) {
> +               __this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);

This is just a nitpick, but we could use
__this_cpu_write(*sk->sk_prot->per_cpu_fw_alloc, 0) instead, which
should be slightly faster.
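
For illustration only, the suggested tweak would be a one-line change
along these lines (a sketch, not part of the posted patch):

-		__this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
+		__this_cpu_write(*sk->sk_prot->per_cpu_fw_alloc, 0);

With preemption disabled, the per-cpu counter still holds local_reserve
at this point, so writing 0 is equivalent and skips one read-modify-write.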

> +               atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
> +       }
> +       preempt_enable();
> +       return sk_memory_allocated(sk);
>  }
>
>  static inline void
>  sk_memory_allocated_sub(struct sock *sk, int amt)
>  {
> -       atomic_long_sub(amt, sk->sk_prot->memory_allocated);
> +       int local_reserve;
> +
> +       preempt_disable();
> +       local_reserve = __this_cpu_sub_return(*sk->sk_prot->per_cpu_fw_alloc, amt);
> +       if (local_reserve <= -SK_MEMORY_PCPU_RESERVE) {
> +               __this_cpu_sub(*sk->sk_prot->per_cpu_fw_alloc, local_reserve);
> +               atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
> +       }
> +       preempt_enable();
>  }
>
>  #define SK_ALLOC_PERCPU_COUNTER_BATCH 16
> @@ -1441,12 +1467,6 @@ proto_sockets_allocated_sum_positive(struct proto *prot)
>         return percpu_counter_sum_positive(prot->sockets_allocated);
>  }
>
> -static inline long
> -proto_memory_allocated(struct proto *prot)
> -{
> -       return atomic_long_read(prot->memory_allocated);
> -}
> -
>  static inline bool
>  proto_memory_pressure(struct proto *prot)
>  {
> --
> 2.36.1.255.ge46751e96f-goog
>
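As a rough user-space model of the batching scheme described in the
changelog above (illustration only: it stands in C11 atomics and a
thread-local counter for the kernel's atomic_long_t and per-cpu APIs,
and assumes 4 KB pages, i.e. PAGE_SHIFT == 12):

/*
 * Rough user-space model of the per-cpu reserve (illustration only:
 * the kernel uses __this_cpu_*() on prot->per_cpu_fw_alloc and an
 * atomic_long_t, not C11 atomics or thread-local storage).
 *
 * With 4 KB pages (PAGE_SHIFT == 12), SK_MEMORY_PCPU_RESERVE is
 * 1 << (20 - 12) = 256 pages, i.e. 1 MB per cpu.
 */
#include <stdatomic.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))	/* 256 pages */

static atomic_long memory_allocated;		/* shared, contended counter */
static _Thread_local long per_cpu_fw_alloc;	/* stand-in for the per-cpu reserve */

static long model_memory_allocated_add(int amt)
{
	per_cpu_fw_alloc += amt;
	if (per_cpu_fw_alloc >= SK_MEMORY_PCPU_RESERVE) {
		/* flush the whole local batch to the shared counter */
		atomic_fetch_add(&memory_allocated, per_cpu_fw_alloc);
		per_cpu_fw_alloc = 0;
	}
	return atomic_load(&memory_allocated);
}

static void model_memory_allocated_sub(int amt)
{
	per_cpu_fw_alloc -= amt;
	if (per_cpu_fw_alloc <= -SK_MEMORY_PCPU_RESERVE) {
		atomic_fetch_add(&memory_allocated, per_cpu_fw_alloc);
		per_cpu_fw_alloc = 0;
	}
}

int main(void)
{
	int i;

	/* 300 single-page charges hit the shared counter exactly once */
	for (i = 0; i < 300; i++)
		model_memory_allocated_add(1);
	printf("shared=%ld local=%ld\n",
	       atomic_load(&memory_allocated), per_cpu_fw_alloc);

	model_memory_allocated_sub(300);	/* flushes on the way down too */
	return 0;
}

In this toy run, the 300 single-page charges update the shared counter
once (shared=256, local=44) rather than 300 times, which is the
contention/false-sharing reduction the patch is after.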

Thread overview: 32+ messages
2022-06-09  6:34 [PATCH net-next 0/7] net: reduce tcp_memory_allocated inflation Eric Dumazet
2022-06-09  6:34 ` [PATCH net-next 1/7] Revert "net: set SK_MEM_QUANTUM to 4096" Eric Dumazet
2022-06-09 15:08   ` Shakeel Butt
2022-06-09  6:34 ` [PATCH net-next 2/7] net: remove SK_MEM_QUANTUM and SK_MEM_QUANTUM_SHIFT Eric Dumazet
2022-06-09 15:09   ` Shakeel Butt
2022-06-09  6:34 ` [PATCH net-next 3/7] net: add per_cpu_fw_alloc field to struct proto Eric Dumazet
2022-06-09 15:11   ` Shakeel Butt
2022-06-09  6:34 ` [PATCH net-next 4/7] net: implement per-cpu reserves for memory_allocated Eric Dumazet
2022-06-09 13:33   ` Soheil Hassas Yeganeh [this message]
2022-06-09 13:47     ` Eric Dumazet
2022-06-09 13:48       ` Soheil Hassas Yeganeh
2022-06-09 14:46   ` Neal Cardwell
2022-06-09 15:07     ` Shakeel Butt
2022-06-09 15:09       ` Neal Cardwell
2022-06-09 15:43         ` Eric Dumazet
2022-06-09 15:12   ` Shakeel Butt
2022-06-09  6:34 ` [PATCH net-next 5/7] net: fix sk_wmem_schedule() and sk_rmem_schedule() errors Eric Dumazet
2022-06-09 15:18   ` Shakeel Butt
2022-06-09  6:34 ` [PATCH net-next 6/7] net: keep sk->sk_forward_alloc as small as possible Eric Dumazet
2022-06-09 16:38   ` Shakeel Butt
2022-06-10 23:00   ` Mat Martineau
2022-10-13 13:15   ` K Prateek Nayak
2022-10-13 14:35     ` Eric Dumazet
2022-10-13 15:52       ` Shakeel Butt
2022-10-14  8:32         ` K Prateek Nayak
2022-10-14  8:30       ` K Prateek Nayak
2022-10-15 20:19         ` Eric Dumazet
2022-10-17  4:04           ` K Prateek Nayak
2022-06-09  6:34 ` [PATCH net-next 7/7] net: unexport __sk_mem_{raise|reduce}_allocated Eric Dumazet
2022-06-09 16:38   ` Shakeel Butt
2022-06-09 13:33 ` [PATCH net-next 0/7] net: reduce tcp_memory_allocated inflation Soheil Hassas Yeganeh
2022-06-11  0:10 ` patchwork-bot+netdevbpf
