From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: Thomas Gleixner, Ingo Molnar, Vlastimil Babka, Hugh Dickins,
 Linux-MM, Linux-RT-Users, LKML, Mel Gorman
Subject: [PATCH 2/2] mm/vmstat: Protect per cpu variables with preempt disable on RT
Date: Fri, 23 Jul 2021 11:00:34 +0100
Message-Id: <20210723100034.13353-3-mgorman@techsingularity.net>
In-Reply-To: <20210723100034.13353-1-mgorman@techsingularity.net>
References: <20210723100034.13353-1-mgorman@techsingularity.net>

From: Ingo Molnar

Disable preemption on -RT for the vmstat code. On vanilla the code runs
in IRQ-off regions while on -RT it may not when stats are updated under
a local_lock. preempt_disable() ensures that the same resource is not
updated in parallel due to preemption.

This patch differs from the preempt-rt version where __count_vm_event and
__count_vm_events are also protected. The counters are explicitly "allowed
to be racy" so there is no need to protect them from preemption.
Only the accurate page stats that are updated by a read-modify-write need
protection.

Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
---
 mm/vmstat.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index b0534e068166..d06332c221b1 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -319,6 +319,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 	long x;
 	long t;
 
+	preempt_disable_rt();
 	x = delta + __this_cpu_read(*p);
 
 	t = __this_cpu_read(pcp->stat_threshold);
@@ -328,6 +329,7 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	preempt_enable_rt();
 }
 EXPORT_SYMBOL(__mod_zone_page_state);
 
@@ -350,6 +352,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
 		delta >>= PAGE_SHIFT;
 	}
 
+	preempt_disable_rt();
 	x = delta + __this_cpu_read(*p);
 
 	t = __this_cpu_read(pcp->stat_threshold);
@@ -359,6 +362,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
 		x = 0;
 	}
 	__this_cpu_write(*p, x);
+	preempt_enable_rt();
 }
 EXPORT_SYMBOL(__mod_node_page_state);
 
@@ -391,6 +395,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_inc_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v > t)) {
@@ -399,6 +404,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 		zone_page_state_add(v + overstep, zone, item);
 		__this_cpu_write(*p, -overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
@@ -409,6 +415,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 
 	VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));
 
+	preempt_disable_rt();
 	v = __this_cpu_inc_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v > t)) {
@@ -417,6 +424,7 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 		node_page_state_add(v + overstep, pgdat, item);
 		__this_cpu_write(*p, -overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
@@ -437,6 +445,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
+	preempt_disable_rt();
 	v = __this_cpu_dec_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v < - t)) {
@@ -445,6 +454,7 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 		zone_page_state_add(v - overstep, zone, item);
 		__this_cpu_write(*p, overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
@@ -455,6 +465,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 
 	VM_WARN_ON_ONCE(vmstat_item_in_bytes(item));
 
+	preempt_disable_rt();
 	v = __this_cpu_dec_return(*p);
 	t = __this_cpu_read(pcp->stat_threshold);
 	if (unlikely(v < - t)) {
@@ -463,6 +474,7 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 		node_page_state_add(v - overstep, pgdat, item);
 		__this_cpu_write(*p, overstep);
 	}
+	preempt_enable_rt();
 }
 
 void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
-- 
2.26.2