Subject: Re: [PATCH mm-unstable RFC 0/5] cgroup: eliminate atomic rstat
From: Tim Chen
To: Yosry Ahmed, Alexander Viro, Christian Brauner, Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org
Date: Thu, 06 Apr 2023 11:23:47 -0700
In-Reply-To: <20230403220337.443510-1-yosryahmed@google.com>
References: <20230403220337.443510-1-yosryahmed@google.com>
List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org

On Mon, 2023-04-03 at 22:03 +0000, Yosry Ahmed wrote:
> A previous patch series ([1] currently in mm-unstable) changed most

Can you include the link to [1]? Thanks.

Tim

> atomic rstat flushing contexts to become non-atomic. This was done to
> avoid an expensive operation that scales with # cgroups and # cpus to
> happen with irqs disabled and scheduling not permitted. There were two
> remaining atomic flushing contexts after that series. This series tries
> to eliminate them as well, eliminating atomic rstat flushing completely.
>
> The two remaining atomic flushing contexts are:
> (a) wb_over_bg_thresh()->mem_cgroup_wb_stats()
> (b) mem_cgroup_threshold()->mem_cgroup_usage()
>
> For (a), flushing needs to be atomic as wb_writeback() calls
> wb_over_bg_thresh() with a spinlock held. However, it seems like the
> call to wb_over_bg_thresh() doesn't need to be protected by that
> spinlock, so this series proposes a refactoring that moves the call
> outside the lock critical section and makes the stats flushing
> in mem_cgroup_wb_stats() non-atomic.
>
> For (b), flushing needs to be atomic as mem_cgroup_threshold() is called
> with irqs disabled. We only flush the stats when calculating the root
> usage, as it is approximated as the sum of some memcg stats (file, anon,
> and optionally swap) instead of the conventional page counter. This
> series proposes changing this calculation to use the global stats
> instead, eliminating the need for a memcg stat flush.
>
> After these 2 contexts are eliminated, we no longer need
> mem_cgroup_flush_stats_atomic() or cgroup_rstat_flush_atomic(). We can
> remove them and simplify the code.
>
> Yosry Ahmed (5):
>   writeback: move wb_over_bg_thresh() call outside lock section
>   memcg: flush stats non-atomically in mem_cgroup_wb_stats()
>   memcg: calculate root usage from global state
>   memcg: remove mem_cgroup_flush_stats_atomic()
>   cgroup: remove cgroup_rstat_flush_atomic()
>
>  fs/fs-writeback.c          | 16 +++++++----
>  include/linux/cgroup.h     |  1 -
>  include/linux/memcontrol.h |  5 ----
>  kernel/cgroup/rstat.c      | 26 ++++--------------
>  mm/memcontrol.c            | 54 ++++++------------------------
>  5 files changed, 27 insertions(+), 75 deletions(-)