Date: Fri, 20 Feb 2026 13:31:29 +0100
From: Michal Hocko
To: Vlastimil Babka
Cc: Marcelo Tosatti, Leonardo Bras, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-mm@kvack.org, Johannes Weiner,
	Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Vlastimil Babka, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Thomas Gleixner, Waiman Long, Boqun Feng, Frederic Weisbecker
Subject: Re: [PATCH 0/4] Introduce QPW for per-cpu operations
References: <20260206143430.021026873@redhat.com>
	<3f2b985a-2fb0-4d63-9dce-8a9cad8ce464@suse.com>
In-Reply-To: <3f2b985a-2fb0-4d63-9dce-8a9cad8ce464@suse.com>

On Fri 20-02-26 11:48:00, Vlastimil Babka wrote:
> On 2/19/26 16:27, Marcelo Tosatti wrote:
> > On Mon, Feb 16, 2026 at 12:00:55PM +0100, Michal Hocko wrote:
> > 
> > Michal,
> > 
> > Again, I don't see how moving the operations to happen at return to
> > the kernel would help (assuming you are talking about
> > "context_tracking,x86: Defer some IPIs until a user->kernel transition").
> > 
> > The IPIs in the patchset above can be deferred until the user->kernel
> > transition because they are TLB flushes, for addresses which do not
> > exist in the userspace address space mapping.
> > 
> > What are the per-CPU objects in SLUB?
> > 
> > struct slab_sheaf {
> > 	union {
> > 		struct rcu_head rcu_head;
> > 		struct list_head barn_list;
> > 		/* only used for prefilled sheafs */
> > 		struct {
> > 			unsigned int capacity;
> > 			bool pfmemalloc;
> > 		};
> > 	};
> > 	struct kmem_cache *cache;
> > 	unsigned int size;
> > 	int node; /* only used for rcu_sheaf */
> > 	void *objects[];
> > };
> > 
> > struct slub_percpu_sheaves {
> > 	local_trylock_t lock;
> > 	struct slab_sheaf *main; /* never NULL when unlocked */
> > 	struct slab_sheaf *spare; /* empty or full, may be NULL */
> > 	struct slab_sheaf *rcu_free; /* for batching kfree_rcu() */
> > };
> > 
> > Examples of local CPU operations that manipulate these data structures:
> > 1) kmalloc, which allocates an object from the local per-CPU list.
> > 2) kfree, which returns an object to the local per-CPU list.
> > 
> > Examples of operations that perform changes on the per-CPU lists
> > remotely: kmem_cache_destroy (cache shutdown) and kmem_cache_shrink.
> > 
> > You can't delay kmalloc (removal of an object from the per-CPU
> > freelist), kfree (return of an object to the per-CPU freelist),
> > kmem_cache_destroy, or kmem_cache_shrink until the return to
> > userspace.
> > 
> > Am I missing something here? (Or do you have something in mind which
> > I can't see?)
> 
> Let's try and analyze when we need to do the flushing in SLUB:
> 
> - memory offline - would anyone do that with isolcpus? If yes, they
>   probably deserve the disruption.
> 
> - cache shrinking (mainly from the sysfs handler) - not necessary for
>   correctness, can probably skip the cpu if needed; it is also kind of
>   shooting your own foot on isolcpus systems.
> 
> - kmem_cache being destroyed (__kmem_cache_shutdown()) - this is
>   important for correctness. Destroying caches should be rare, but we
>   can't rule it out.
> 
> - kvfree_rcu_barrier() - a very tricky one; it currently has only a
>   debugging caller, but that can change.
> 
> (BTW, see the note in flush_rcu_sheaves_on_cache() and how it relies
> on the flush actually happening on the cpu.
> Won't QPW violate that?)

Thanks, this is a very useful insight.

> How would this work with the housekeeping-on-return-to-userspace
> approach?
> 
> - Would we just walk the list of all caches to flush them? That could
>   be expensive. Or would we somehow note only those that need it? That
>   would make the fast paths do something extra.
> 
> - If some other CPU executed kmem_cache_destroy(), it would have to
>   wait for the isolated cpu to return to userspace. Do we have the
>   means to synchronize on that? Would that risk a deadlock? We used to
>   have a deferred finishing of the destroy for other reasons but were
>   glad to get rid of it when it was possible; now it might be
>   necessary to revive it?

This would be tricky because there is no time guarantee when the
isolated workload enters the kernel again. Maybe never, if all the
pre-initialization was sufficient. On the other hand, if the flush
happens on the way to userspace then you only need to wait for the
isolated workload to return from a syscall (modulo the task dying and
similar edge cases).

> How would this work with QPW?
> 
> - probably more expensive fast paths due to a spin lock vs.
>   local_trylock_t
> 
> - flush_rcu_sheaves_on_cache() needs to be solved safely (see above)
> 
> What if we avoid percpu sheaves completely on isolated cpus and
> instead allocate/free using the slowpaths?

That seems like a reasonable performance price to pay for a very narrow
edge case (isolated workloads).
-- 
Michal Hocko
SUSE Labs