From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shakeel Butt <shakeelb@google.com>
Date: Wed, 24 Jan 2024 09:38:18 -0800
Subject: Re: [PATCH] mm: memcg: optimize parent iteration in memcg_rstat_updated()
To: Yosry Ahmed
Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Roman Gushchin, Muchun Song,
 cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel test robot
In-Reply-To: <20240124100023.660032-1-yosryahmed@google.com>
References: <20240124100023.660032-1-yosryahmed@google.com>
Content-Type: text/plain; charset="UTF-8"

On Wed, Jan 24, 2024 at 2:00 AM Yosry Ahmed wrote:
>
> In memcg_rstat_updated(), we iterate the memcg being updated and its
> parents to update memcg->vmstats_percpu->stats_updates in the fast path
> (i.e. no atomic updates). According to my math, this is 3 memory loads
> (and potentially 3 cache misses) per memcg:
> - Load the address of memcg->vmstats_percpu.
> - Load vmstats_percpu->stats_updates (based on some percpu calculation).
> - Load the address of the parent memcg.
>
> Avoid most of the cache misses by caching a pointer from each struct
> memcg_vmstats_percpu to its parent on the corresponding CPU. In this
> case, for the first memcg we have 2 memory loads (same as above):
> - Load the address of memcg->vmstats_percpu.
> - Load vmstats_percpu->stats_updates (based on some percpu calculation).
>
> Then for each additional memcg, we need a single load to get the
> parent's stats_updates directly. This reduces the number of loads from
> O(3N) to O(2+N) -- where N is the number of memcgs we need to iterate.
>
> Additionally, stash a pointer to memcg->vmstats in each struct
> memcg_vmstats_percpu such that we can access the atomic counter that all
> CPUs fold into, memcg->vmstats->stats_updates.
> memcg_should_flush_stats() is changed to memcg_vmstats_needs_flush() to
> accept a struct memcg_vmstats pointer accordingly.
>
> In struct memcg_vmstats_percpu, make sure both pointers together with
> stats_updates live on the same cacheline. Finally, update
> mem_cgroup_alloc() to take in a parent pointer and initialize the new
> cache pointers on each CPU. The percpu loop in mem_cgroup_alloc() may
> look concerning, but there are multiple similar loops in the cgroup
> creation path (e.g. cgroup_rstat_init()), most of which are hidden
> within alloc_percpu().
>
> According to Oliver's testing [1], this fixes multiple 30-38%
> regressions in vm-scalability, will-it-scale-tlb_flush2, and
> will-it-scale-fallocate1. This comes at a cost of 2 more pointers per
> CPU (<2KB on a machine with 128 CPUs).
>
> [1] https://lore.kernel.org/lkml/ZbDJsfsZt2ITyo61@xsang-OptiPlex-9020/
>
> Fixes: 8d59d2214c23 ("mm: memcg: make stats flushing threshold per-memcg")
> Tested-by: kernel test robot
> Reported-by: kernel test robot
> Closes: https://lore.kernel.org/oe-lkp/202401221624.cb53a8ca-oliver.sang@intel.com
> Signed-off-by: Yosry Ahmed
> ---

Nice work.

Acked-by: Shakeel Butt
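
For anyone skimming the thread, here is a rough C sketch of the caching
scheme the changelog describes, with made-up names (the _sketch structs,
rstat_updated_sketch, and the plain additions in place of atomics are
illustrative only, not the actual kernel code):

/*
 * Sketch only: each per-CPU stats struct caches a pointer to its
 * parent's per-CPU struct and to the owner's shared stats, so the
 * fast path walks the ancestors with a single load each.
 */
struct memcg_vmstats_sketch {
	long stats_updates;		/* atomic64_t in the real code */
};

struct memcg_vmstats_percpu_sketch {
	/* Kept adjacent so they share a cacheline with stats_updates. */
	struct memcg_vmstats_percpu_sketch *parent;	/* parent memcg, same CPU */
	struct memcg_vmstats_sketch *vmstats;		/* owner's shared stats */
	unsigned int stats_updates;			/* updates pending on this CPU */
};

/* Fast path: batch updates per CPU, fold into the shared counter on overflow. */
static void rstat_updated_sketch(struct memcg_vmstats_percpu_sketch *statc,
				 unsigned int abs_val, unsigned int batch)
{
	for (; statc; statc = statc->parent) {		/* one load per ancestor */
		statc->stats_updates += abs_val;
		if (statc->stats_updates < batch)
			continue;
		/* atomic64_add() on vmstats->stats_updates in the real code */
		statc->vmstats->stats_updates += statc->stats_updates;
		statc->stats_updates = 0;
	}
}

Keeping both cached pointers next to stats_updates is what turns the
per-ancestor work into roughly one cacheline touch instead of chasing
the parent memcg and its percpu address separately.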