From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:44382 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1751787AbeDVNy7 (ORCPT );
        Sun, 22 Apr 2018 09:54:59 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Vlastimil Babka,
        Pekka Enberg, Christoph Lameter, Joonsoo Kim, David Rientjes,
        Tejun Heo, Lai Jiangshan, John Stultz, Thomas Gleixner,
        Stephen Boyd, Andrew Morton, Linus Torvalds
Subject: [PATCH 4.16 016/196] mm, slab: reschedule cache_reap() on the same CPU
Date: Sun, 22 Apr 2018 15:50:36 +0200
Message-Id: <20180422135104.957692051@linuxfoundation.org>
In-Reply-To: <20180422135104.278511750@linuxfoundation.org>
References: <20180422135104.278511750@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID:

4.16-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Vlastimil Babka

commit a9f2a846f0503e7d729f552e3ccfe2279010fe94 upstream.

cache_reap() is initially scheduled in start_cpu_timer() via
schedule_delayed_work_on().  But then the next iterations are scheduled
via schedule_delayed_work(), i.e. using WORK_CPU_UNBOUND.

Thus since commit ef557180447f ("workqueue: schedule WORK_CPU_UNBOUND
work on wq_unbound_cpumask CPUs") there is no guarantee the future
iterations will run on the originally intended CPU, although it's still
preferred.  I was able to demonstrate this with
/sys/module/workqueue/parameters/debug_force_rr_cpu.  IIUC, it may also
happen due to migrating timers in nohz context.  As a result, some CPUs
would be calling cache_reap() more frequently and others never.

This patch uses schedule_delayed_work_on() with the current CPU when
scheduling the next iteration.

Link: http://lkml.kernel.org/r/20180411070007.32225-1-vbabka@suse.cz
Fixes: ef557180447f ("workqueue: schedule WORK_CPU_UNBOUND work on wq_unbound_cpumask CPUs")
Signed-off-by: Vlastimil Babka
Acked-by: Pekka Enberg
Acked-by: Christoph Lameter
Cc: Joonsoo Kim
Cc: David Rientjes
Cc: Tejun Heo
Cc: Lai Jiangshan
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
 mm/slab.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4074,7 +4074,8 @@ next:
 	next_reap_node();
 out:
 	/* Set up the next iteration */
-	schedule_delayed_work(work, round_jiffies_relative(REAPTIMEOUT_AC));
+	schedule_delayed_work_on(smp_processor_id(), work,
+				 round_jiffies_relative(REAPTIMEOUT_AC));
 }

 void get_slabinfo(struct kmem_cache *cachep, struct slabinfo *sinfo)
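
[Editor's note, not part of the patch: for anyone who wants to experiment
with the re-arming pattern the fix uses outside of mm/slab.c, below is a
minimal, self-contained kernel-module sketch.  Names such as demo_work,
demo_fn and DEMO_PERIOD are hypothetical; the only point it illustrates is
that each iteration queues the next one with schedule_delayed_work_on() on
the CPU it is currently running on, instead of schedule_delayed_work(),
which would queue on WORK_CPU_UNBOUND.]

/*
 * Illustrative sketch: a self-rearming delayed work item that stays on
 * the CPU it last ran on, mirroring the fixed cache_reap() behaviour.
 */
#include <linux/module.h>
#include <linux/init.h>
#include <linux/workqueue.h>
#include <linux/smp.h>
#include <linux/cpumask.h>
#include <linux/timer.h>

#define DEMO_PERIOD	(2 * HZ)	/* hypothetical reap-style interval */

static struct delayed_work demo_work;

static void demo_fn(struct work_struct *w)
{
	/*
	 * The work was queued with schedule_delayed_work_on(), so the
	 * worker is bound to one CPU and smp_processor_id() is stable
	 * here, just as in cache_reap().
	 */
	pr_info("demo work running on CPU %d\n", smp_processor_id());

	/*
	 * schedule_delayed_work() would queue on WORK_CPU_UNBOUND and a
	 * later iteration could migrate to another CPU.  Passing the
	 * current CPU keeps every iteration local.
	 */
	schedule_delayed_work_on(smp_processor_id(), &demo_work,
				 round_jiffies_relative(DEMO_PERIOD));
}

static int __init demo_init(void)
{
	INIT_DELAYED_WORK(&demo_work, demo_fn);
	/* Initial arming on an explicit CPU, as start_cpu_timer() does. */
	schedule_delayed_work_on(cpumask_first(cpu_online_mask), &demo_work,
				 round_jiffies_relative(DEMO_PERIOD));
	return 0;
}

static void __exit demo_exit(void)
{
	cancel_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch: reschedule delayed work on the same CPU");

[Loading such a module and watching dmesg should show every iteration
reporting the same CPU; switching the call in demo_fn() back to
schedule_delayed_work() and enabling
/sys/module/workqueue/parameters/debug_force_rr_cpu reproduces the
round-robin behaviour described in the commit message.]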