From: Uladzislau Rezki
Date: Thu, 19 Mar 2026 10:39:49 +0100
To: lirongqing
Cc: Andrew Morton, Uladzislau Rezki, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm/vmalloc: use dedicated unbound workqueue for vmap area draining
References: <20260319074307.2325-1-lirongqing@baidu.com>
In-Reply-To: <20260319074307.2325-1-lirongqing@baidu.com>

On Thu, Mar 19, 2026 at 03:43:07AM -0400, lirongqing wrote:
> From: Li RongQing
>
> The drain_vmap_area_work() function can take >10ms to complete when
> there are many accumulated vmap areas in a system with a high CPU
> count, causing workqueue watchdog warnings when run via
> schedule_work():
>
> [ 2069.796205] workqueue: drain_vmap_area_work hogged CPU for >10000us 4 times, consider switching to WQ_UNBOUND
> [ 2192.823225] workqueue: drain_vmap_area_work hogged CPU for >10000us 5 times, consider switching to WQ_UNBOUND
>
> Switch to a dedicated WQ_UNBOUND workqueue to allow the scheduler to
> run this background task on any available CPU, improving responsiveness.
> Use WQ_MEM_RECLAIM to ensure forward progress under memory pressure.
>
> Create vmap_drain_wq in vmalloc_init_late(), which is called after
> workqueue_init_early() in start_kernel(), to avoid boot-time crashes.
>
> Suggested-by: Uladzislau Rezki
> Signed-off-by: Li RongQing
> ---
> Diff with v1: create dedicated unbound workqueue
>
>  include/linux/vmalloc.h |  2 ++
>  init/main.c             |  1 +
>  mm/vmalloc.c            | 14 +++++++++++++-
>  3 files changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index e8e94f9..c028603 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -301,11 +301,13 @@ static inline void set_vm_flush_reset_perms(void *addr)
>  	if (vm)
>  		vm->flags |= VM_FLUSH_RESET_PERMS;
>  }
> +void __init vmalloc_init_late(void);
>  #else /* !CONFIG_MMU */
>  #define VMALLOC_TOTAL 0UL
>
>  static inline unsigned long vmalloc_nr_pages(void) { return 0; }
>  static inline void set_vm_flush_reset_perms(void *addr) {}
> +static inline void __init vmalloc_init_late(void) {}
>  #endif /* CONFIG_MMU */
>
>  #if defined(CONFIG_MMU) && defined(CONFIG_SMP)
> diff --git a/init/main.c b/init/main.c
> index 1cb395d..50b497f 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -1099,6 +1099,7 @@ void start_kernel(void)
>  	 * workqueue_init().
>  	 */
>  	workqueue_init_early();
> +	vmalloc_init_late();
>
No, no.
We should not patch main.c for such purpose :)

>  	rcu_init();
>  	kvfree_rcu_init();
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 61caa55..a52ccd4 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1067,6 +1067,7 @@ static void reclaim_and_purge_vmap_areas(void);
>  static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
>  static void drain_vmap_area_work(struct work_struct *work);
>  static DECLARE_WORK(drain_vmap_work, drain_vmap_area_work);
> +static struct workqueue_struct *vmap_drain_wq;
>
>  static __cacheline_aligned_in_smp atomic_long_t nr_vmalloc_pages;
>  static __cacheline_aligned_in_smp atomic_long_t vmap_lazy_nr;
> @@ -2471,7 +2472,7 @@ static void free_vmap_area_noflush(struct vmap_area *va)
>
>  	/* After this point, we may free va at any time */
>  	if (unlikely(nr_lazy > nr_lazy_max))
> -		schedule_work(&drain_vmap_work);
> +		queue_work(vmap_drain_wq, &drain_vmap_work);
>  }
>
>  /*
> @@ -5422,6 +5423,17 @@ vmap_node_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
>  	return SHRINK_STOP;
>  }
>
> +void __init vmalloc_init_late(void)
> +{
> +	vmap_drain_wq = alloc_workqueue("vmap_drain",
> +			WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
> +	if (!vmap_drain_wq) {
> +		pr_warn("vmap_drain_wq creation failed, using system_unbound_wq\n");
> +		vmap_drain_wq = system_unbound_wq;
> +	}
> +
> +}
> +
>  void __init vmalloc_init(void)
>  {
>  	struct shrinker *vmap_node_shrinker;
> --
> 2.9.4
>
Why can't you add this into the vmalloc_init()?

--
Uladzislau Rezki