From: Uladzislau Rezki
Date: Thu, 2 Apr 2026 18:05:48 +0200
To: Baoquan He
Cc: "Uladzislau Rezki (Sony)", linux-mm@kvack.org, Andrew Morton, LKML,
 stable@vger.kernel.org, lirongqing
Subject: Re: [PATCH v3] mm/vmalloc: Use dedicated unbound workqueues for vmap drain
Message-ID:
References: <20260331202352.879718-1-urezki@gmail.com>
In-Reply-To:

On Thu, Apr 02, 2026 at 08:22:36AM +0800, Baoquan He wrote:
> On 04/01/26 at 05:47pm, Baoquan He wrote:
> > On 03/31/26 at 10:23pm, Uladzislau Rezki (Sony) wrote:
> > > The drain_vmap_area_work() function can take more than 10 ms
> > > to complete when many vmap areas have accumulated on a system
> > > with a high CPU count, causing workqueue watchdog warnings
> > > when it is run via schedule_work():
> > >
> > >   workqueue: drain_vmap_area_work hogged CPU for >10000us
> > >
> > > Move the top-level drain work to a dedicated WQ_UNBOUND
> > > workqueue so the scheduler can run this background work on
> > > any available CPU, improving responsiveness. Use
> > > WQ_MEM_RECLAIM to ensure forward progress under memory
> > > pressure.
> > >
> > > Move the purge helpers to a separate WQ_UNBOUND | WQ_MEM_RECLAIM
> > > workqueue. This allows drain_vmap_work to wait for the helpers'
> > > completion without creating a dependency on the same rescuer
> > > thread, avoiding a potential parent/child deadlock.
...snip...
> > > @@ -2385,29 +2390,31 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
> > >  	nr_purge_helpers = atomic_long_read(&vmap_lazy_nr) / lazy_max_pages();
> > >  	nr_purge_helpers = clamp(nr_purge_helpers, 1U, nr_purge_nodes) - 1;
> > >
> > > -	for_each_cpu(i, &purge_nodes) {
> > > -		vn = &vmap_nodes[i];
> > > +	for_each_vmap_node(vn) {
> > > +		vn->work_queued = false;
> > > +
> > > +		if (list_empty(&vn->purge_list))
> > > +			continue;
> > >
> > >  		if (nr_purge_helpers > 0) {
> > >  			INIT_WORK(&vn->purge_work, purge_vmap_node);
> > > +			vn->work_queued = schedule_drain_vmap_work(
> > > +				READ_ONCE(drain_vmap_helpers_wq), &vn->purge_work);
> >
> > The new schedule_drain_vmap_work() could submit all purge_work on one
> > CPU; do we need to use queue_work_on(cpu, wq, work) instead?
>
> I forgot about the WQ_UNBOUND flag specified on alloc_workqueue();
> sorry for the noise. This patch looks great to me.
>
Right. When a worker is created for an UNBOUND workqueue, its cpumask is
set up so it can be woken on any CPU; the scheduler decides where it runs.

--
Uladzislau Rezki
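[Editor's note: for readers following the thread, below is a minimal kernel-side sketch of the two-workqueue arrangement the patch describes. The queue names, the schedule_drain_vmap_work() body, and the early-boot fallback are assumptions for illustration, not the actual patch.]

```c
/* Sketch only: kernel-module context, not standalone-runnable. */
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *drain_vmap_wq;         /* top-level drain */
static struct workqueue_struct *drain_vmap_helpers_wq; /* purge helpers   */

/*
 * Queue @work on the dedicated @wq when it exists; fall back to the
 * system workqueue if the dedicated queue has not been created yet
 * (e.g. during early boot). Returns true if the work was queued.
 */
static bool schedule_drain_vmap_work(struct workqueue_struct *wq,
				     struct work_struct *work)
{
	if (wq)
		return queue_work(wq, work);

	return schedule_work(work);
}

static int __init vmap_wq_init(void)
{
	/*
	 * WQ_UNBOUND lets the scheduler place workers on any CPU, so a
	 * long-running drain does not hog one CPU's bound worker pool.
	 * WQ_MEM_RECLAIM gives each queue its own rescuer thread,
	 * guaranteeing forward progress under memory pressure. Using two
	 * separate queues means the drain work can wait on the purge
	 * helpers without sharing a rescuer with them, which is what
	 * avoids the parent/child deadlock mentioned in the changelog.
	 */
	drain_vmap_wq = alloc_workqueue("vmap_drain",
					WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	drain_vmap_helpers_wq = alloc_workqueue("vmap_drain_helpers",
					WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!drain_vmap_wq || !drain_vmap_helpers_wq)
		return -ENOMEM;

	return 0;
}
```

This also illustrates Baoquan's point above: because both queues are WQ_UNBOUND, queue_work() does not pin the work to the submitting CPU, so queue_work_on() is unnecessary.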