From: Leonardo Bras
To: Marcelo Tosatti
Cc: Leonardo Bras, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
        linux-mm@kvack.org, Johannes Weiner, Michal Hocko, Roman Gushchin,
        Shakeel Butt, Muchun Song, Andrew Morton, Christoph Lameter,
        Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka,
        Hyeonggon Yoo <42.hyeyoo@gmail.com>, Leonardo Bras, Thomas Gleixner,
        Waiman Long, Boqun Feng
Subject: Re: [PATCH 4/4] slub: apply new queue_percpu_work_on() interface
Date: Fri, 6 Feb 2026 22:27:39 -0300
In-Reply-To: <20260206143741.621816322@redhat.com>
References: <20260206143430.021026873@redhat.com> <20260206143741.621816322@redhat.com>

On Fri, Feb 06, 2026 at 11:34:34AM -0300, Marcelo Tosatti wrote:
> Make use of the new qpw_{un,}lock*() and queue_percpu_work_on()
> interface to improve performance & latency on PREEMPT_RT kernels.
>
> For functions that may be scheduled in a different cpu, replace
> local_{un,}lock*() by qpw_{un,}lock*(), and replace schedule_work_on() by
> queue_percpu_work_on(). The same happens for flush_work() and
> flush_percpu_work().
>
> This change requires allocation of qpw_structs instead of a work_structs,
> and changing parameters of a few functions to include the cpu parameter.
>
> This should bring no relevant performance impact on non-RT kernels:

Same as prev patch

> For functions that may be scheduled in a different cpu, the local_*lock's
> this_cpu_ptr() becomes a per_cpu_ptr(smp_processor_id()).
>
> Signed-off-by: Leonardo Bras
> Signed-off-by: Marcelo Tosatti
>
> ---
>  mm/slub.c |  218 ++++++++++++++++++++++++++++++++++++++++----------------------
>  1 file changed, 142 insertions(+), 76 deletions(-)
>
> Index: slab/mm/slub.c
> ===================================================================
> --- slab.orig/mm/slub.c
> +++ slab/mm/slub.c
> @@ -49,6 +49,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>
>  #include "internal.h"
> @@ -128,7 +129,7 @@
>   * For debug caches, all allocations are forced to go through a list_lock
>   * protected region to serialize against concurrent validation.
>   *
> - * cpu_sheaves->lock (local_trylock)
> + * cpu_sheaves->lock (qpw_trylock)
>   *
>   * This lock protects fastpath operations on the percpu sheaves. On !RT it
>   * only disables preemption and does no atomic operations. As long as the main
> @@ -156,7 +157,7 @@
>   * Interrupts are disabled as part of list_lock or barn lock operations, or
>   * around the slab_lock operation, in order to make the slab allocator safe
>   * to use in the context of an irq.
> - * Preemption is disabled as part of local_trylock operations.
> + * Preemption is disabled as part of qpw_trylock operations.
>   * kmalloc_nolock() and kfree_nolock() are safe in NMI context but see
>   * their limitations.
>   *
> @@ -417,7 +418,7 @@ struct slab_sheaf {
>  };
>
>  struct slub_percpu_sheaves {
> -	local_trylock_t lock;
> +	qpw_trylock_t lock;
>  	struct slab_sheaf *main;	/* never NULL when unlocked */
>  	struct slab_sheaf *spare;	/* empty or full, may be NULL */
>  	struct slab_sheaf *rcu_free;	/* for batching kfree_rcu() */
> @@ -479,7 +480,7 @@ static nodemask_t slab_nodes;
>  static struct workqueue_struct *flushwq;
>
>  struct slub_flush_work {
> -	struct work_struct work;
> +	struct qpw_struct qpw;
>  	struct kmem_cache *s;
>  	bool skip;
>  };
> @@ -2826,7 +2827,7 @@ static void __kmem_cache_free_bulk(struc
>   *
>   * returns true if at least partially flushed
>   */
> -static bool sheaf_flush_main(struct kmem_cache *s)
> +static bool sheaf_flush_main(struct kmem_cache *s, int cpu)
>  {
>  	struct slub_percpu_sheaves *pcs;
>  	unsigned int batch, remaining;
> @@ -2835,10 +2836,10 @@ static bool sheaf_flush_main(struct kmem
>  	bool ret = false;
>
>  next_batch:
> -	if (!local_trylock(&s->cpu_sheaves->lock))
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, cpu))
>  		return ret;
>
> -	pcs = this_cpu_ptr(s->cpu_sheaves);
> +	pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
>  	sheaf = pcs->main;
>
>  	batch = min(PCS_BATCH_MAX, sheaf->size);
> @@ -2848,7 +2849,7 @@ next_batch:
>
>  	remaining = sheaf->size;
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
>
>  	__kmem_cache_free_bulk(s, batch, &objects[0]);
>
> @@ -2932,13 +2933,13 @@ static void rcu_free_sheaf_nobarn(struct
>   * flushing operations are rare so let's keep it simple and flush to slabs
>   * directly, skipping the barn
>   */
> -static void pcs_flush_all(struct kmem_cache *s)
> +static void pcs_flush_all(struct kmem_cache *s, int cpu)
>  {
>  	struct slub_percpu_sheaves *pcs;
>  	struct slab_sheaf *spare, *rcu_free;
>
> -	local_lock(&s->cpu_sheaves->lock);
> -	pcs = this_cpu_ptr(s->cpu_sheaves);
> +	qpw_lock(&s->cpu_sheaves->lock, cpu);
> +	pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
>
>  	spare = pcs->spare;
>  	pcs->spare = NULL;
> @@ -2946,7 +2947,7 @@ static void pcs_flush_all(struct kmem_ca
>  	rcu_free = pcs->rcu_free;
>  	pcs->rcu_free = NULL;
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
>
>  	if (spare) {
>  		sheaf_flush_unused(s, spare);
> @@ -2956,7 +2957,7 @@ static void pcs_flush_all(struct kmem_ca
>  	if (rcu_free)
>  		call_rcu(&rcu_free->rcu_head, rcu_free_sheaf_nobarn);
>
> -	sheaf_flush_main(s);
> +	sheaf_flush_main(s, cpu);
>  }
>
>  static void __pcs_flush_all_cpu(struct kmem_cache *s, unsigned int cpu)
> @@ -3881,13 +3882,13 @@ static void flush_cpu_sheaves(struct wor
>  {
>  	struct kmem_cache *s;
>  	struct slub_flush_work *sfw;
> +	int cpu = qpw_get_cpu(w);
>
> -	sfw = container_of(w, struct slub_flush_work, work);
> -
> +	sfw = &per_cpu(slub_flush, cpu);
>  	s = sfw->s;
>
>  	if (cache_has_sheaves(s))
> -		pcs_flush_all(s);
> +		pcs_flush_all(s, cpu);
>  }
>
>  static void flush_all_cpus_locked(struct kmem_cache *s)
> @@ -3904,17 +3905,17 @@ static void flush_all_cpus_locked(struct
>  			sfw->skip = true;
>  			continue;
>  		}
> -		INIT_WORK(&sfw->work, flush_cpu_sheaves);
> +		INIT_QPW(&sfw->qpw, flush_cpu_sheaves, cpu);
>  		sfw->skip = false;
>  		sfw->s = s;
> -		queue_work_on(cpu, flushwq, &sfw->work);
> +		queue_percpu_work_on(cpu, flushwq, &sfw->qpw);
>  	}
>
>  	for_each_online_cpu(cpu) {
>  		sfw = &per_cpu(slub_flush, cpu);
>  		if (sfw->skip)
>  			continue;
> -		flush_work(&sfw->work);
> +		flush_percpu_work(&sfw->qpw);
>  	}
>
>  	mutex_unlock(&flush_lock);
> @@ -3933,17 +3934,18 @@ static void flush_rcu_sheaf(struct work_
>  	struct slab_sheaf *rcu_free;
>  	struct slub_flush_work *sfw;
>  	struct kmem_cache *s;
> +	int cpu = qpw_get_cpu(w);
>
> -	sfw = container_of(w, struct slub_flush_work, work);
> +	sfw = &per_cpu(slub_flush, cpu);
>  	s = sfw->s;
>
> -	local_lock(&s->cpu_sheaves->lock);
> -	pcs = this_cpu_ptr(s->cpu_sheaves);
> +	qpw_lock(&s->cpu_sheaves->lock, cpu);
> +	pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
>
>  	rcu_free = pcs->rcu_free;
>  	pcs->rcu_free = NULL;
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
>
>  	if (rcu_free)
>  		call_rcu(&rcu_free->rcu_head, rcu_free_sheaf_nobarn);
> @@ -3968,14 +3970,14 @@ void flush_rcu_sheaves_on_cache(struct k
>  		 * sure the __kfree_rcu_sheaf() finished its call_rcu()
>  		 */
>
> -		INIT_WORK(&sfw->work, flush_rcu_sheaf);
> +		INIT_QPW(&sfw->qpw, flush_rcu_sheaf, cpu);
>  		sfw->s = s;
> -		queue_work_on(cpu, flushwq, &sfw->work);
> +		queue_percpu_work_on(cpu, flushwq, &sfw->qpw);
>  	}
>
>  	for_each_online_cpu(cpu) {
>  		sfw = &per_cpu(slub_flush, cpu);
> -		flush_work(&sfw->work);
> +		flush_percpu_work(&sfw->qpw);
>  	}
>
>  	mutex_unlock(&flush_lock);
> @@ -4472,22 +4474,24 @@ bool slab_post_alloc_hook(struct kmem_ca
>   *
>   * Must be called with the cpu_sheaves local lock locked. If successful, returns
>   * the pcs pointer and the local lock locked (possibly on a different cpu than
> - * initially called). If not successful, returns NULL and the local lock
> - * unlocked.
> + * initially called), and migration disabled. If not successful, returns NULL
> + * and the local lock unlocked, with migration enabled.
>   */
>  static struct slub_percpu_sheaves *
> -__pcs_replace_empty_main(struct kmem_cache *s, struct slub_percpu_sheaves *pcs, gfp_t gfp)
> +__pcs_replace_empty_main(struct kmem_cache *s, struct slub_percpu_sheaves *pcs, gfp_t gfp,
> +			 int *cpu)
>  {
>  	struct slab_sheaf *empty = NULL;
>  	struct slab_sheaf *full;
>  	struct node_barn *barn;
>  	bool can_alloc;
>
> -	lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock));
> +	qpw_lockdep_assert_held(&s->cpu_sheaves->lock);
>
>  	/* Bootstrap or debug cache, back off */
>  	if (unlikely(!cache_has_sheaves(s))) {
> -		local_unlock(&s->cpu_sheaves->lock);
> +		qpw_unlock(&s->cpu_sheaves->lock, *cpu);
> +		migrate_enable();
>  		return NULL;
>  	}
>
> @@ -4498,7 +4502,8 @@ __pcs_replace_empty_main(struct kmem_cac
>
>  	barn = get_barn(s);
>  	if (!barn) {
> -		local_unlock(&s->cpu_sheaves->lock);
> +		qpw_unlock(&s->cpu_sheaves->lock, *cpu);
> +		migrate_enable();
>  		return NULL;
>  	}
>
> @@ -4524,7 +4529,8 @@ __pcs_replace_empty_main(struct kmem_cac
>  		}
>  	}
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, *cpu);
> +	migrate_enable();
>
>  	if (!can_alloc)
>  		return NULL;
> @@ -4550,7 +4556,9 @@ __pcs_replace_empty_main(struct kmem_cac
>  	 * we can reach here only when gfpflags_allow_blocking
>  	 * so this must not be an irq
>  	 */
> -	local_lock(&s->cpu_sheaves->lock);
> +	migrate_disable();
> +	*cpu = smp_processor_id();
> +	qpw_lock(&s->cpu_sheaves->lock, *cpu);
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>
>  	/*
> @@ -4593,6 +4601,7 @@ void *alloc_from_pcs(struct kmem_cache *
>  	struct slub_percpu_sheaves *pcs;
>  	bool node_requested;
>  	void *object;
> +	int cpu;
>
>  #ifdef CONFIG_NUMA
>  	if (static_branch_unlikely(&strict_numa) &&
> @@ -4627,13 +4636,17 @@ void *alloc_from_pcs(struct kmem_cache *
>  			return NULL;
>  	}
>
> -	if (!local_trylock(&s->cpu_sheaves->lock))
> +	migrate_disable();
> +	cpu = smp_processor_id();
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, cpu)) {
> +		migrate_enable();
>  		return NULL;
> +	}
>
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>
>  	if (unlikely(pcs->main->size == 0)) {
> -		pcs = __pcs_replace_empty_main(s, pcs, gfp);
> +		pcs = __pcs_replace_empty_main(s, pcs, gfp, &cpu);
>  		if (unlikely(!pcs))
>  			return NULL;
>  	}
> @@ -4647,7 +4660,8 @@ void *alloc_from_pcs(struct kmem_cache *
>  		 * the current allocation or previous freeing process.
>  		 */
>  		if (page_to_nid(virt_to_page(object)) != node) {
> -			local_unlock(&s->cpu_sheaves->lock);
> +			qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +			migrate_enable();
>  			stat(s, ALLOC_NODE_MISMATCH);
>  			return NULL;
>  		}
> @@ -4655,7 +4669,8 @@
>
>  	pcs->main->size--;
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>  	stat(s, ALLOC_FASTPATH);
>
> @@ -4670,10 +4685,15 @@ unsigned int alloc_from_pcs_bulk(struct
>  	struct slab_sheaf *main;
>  	unsigned int allocated = 0;
>  	unsigned int batch;
> +	int cpu;
>
> next_batch:
> -	if (!local_trylock(&s->cpu_sheaves->lock))
> +	migrate_disable();
> +	cpu = smp_processor_id();
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, cpu)) {
> +		migrate_enable();
>  		return allocated;
> +	}
>
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>
> @@ -4683,7 +4703,8 @@ next_batch:
>  		struct node_barn *barn;
>
>  		if (unlikely(!cache_has_sheaves(s))) {
> -			local_unlock(&s->cpu_sheaves->lock);
> +			qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +			migrate_enable();
>  			return allocated;
>  		}
>
> @@ -4694,7 +4715,8 @@ next_batch:
>
>  		barn = get_barn(s);
>  		if (!barn) {
> -			local_unlock(&s->cpu_sheaves->lock);
> +			qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +			migrate_enable();
>  			return allocated;
>  		}
>
> @@ -4709,7 +4731,8 @@ next_batch:
>
>  			stat(s, BARN_GET_FAIL);
>
> -			local_unlock(&s->cpu_sheaves->lock);
> +			qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +			migrate_enable();
>
>  			/*
>  			 * Once full sheaves in barn are depleted, let the bulk
> @@ -4727,7 +4750,8 @@ do_alloc:
>  	main->size -= batch;
>  	memcpy(p, main->objects + main->size, batch * sizeof(void *));
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>  	stat_add(s, ALLOC_FASTPATH, batch);
>
> @@ -4877,6 +4901,7 @@ kmem_cache_prefill_sheaf(struct kmem_cac
>  	struct slub_percpu_sheaves *pcs;
>  	struct slab_sheaf *sheaf = NULL;
>  	struct node_barn *barn;
> +	int cpu;
>
>  	if (unlikely(!size))
>  		return NULL;
> @@ -4906,7 +4931,9 @@ kmem_cache_prefill_sheaf(struct kmem_cac
>  		return sheaf;
>  	}
>
> -	local_lock(&s->cpu_sheaves->lock);
> +	migrate_disable();
> +	cpu = smp_processor_id();
> +	qpw_lock(&s->cpu_sheaves->lock, cpu);
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>
>  	if (pcs->spare) {
> @@ -4925,7 +4952,8 @@ kmem_cache_prefill_sheaf(struct kmem_cac
>  			stat(s, BARN_GET_FAIL);
>  	}
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>
>  	if (!sheaf)
> @@ -4961,6 +4989,7 @@ void kmem_cache_return_sheaf(struct kmem
>  {
>  	struct slub_percpu_sheaves *pcs;
>  	struct node_barn *barn;
> +	int cpu;
>
>  	if (unlikely((sheaf->capacity != s->sheaf_capacity)
>  		     || sheaf->pfmemalloc)) {
> @@ -4969,7 +4998,9 @@ void kmem_cache_return_sheaf(struct kmem
>  		return;
>  	}
>
> -	local_lock(&s->cpu_sheaves->lock);
> +	migrate_disable();
> +	cpu = smp_processor_id();
> +	qpw_lock(&s->cpu_sheaves->lock, cpu);
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>  	barn = get_barn(s);
>
> @@ -4979,7 +5010,8 @@ void kmem_cache_return_sheaf(struct kmem
>  		stat(s, SHEAF_RETURN_FAST);
>  	}
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>  	if (!sheaf)
>  		return;
> @@ -5507,9 +5539,9 @@ slab_empty:
>   */
>  static void __pcs_install_empty_sheaf(struct kmem_cache *s,
>  		struct slub_percpu_sheaves *pcs, struct slab_sheaf *empty,
> -		struct node_barn *barn)
> +		struct node_barn *barn, int cpu)
>  {
> -	lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock));
> +	qpw_lockdep_assert_held(&s->cpu_sheaves->lock);
>
>  	/* This is what we expect to find if nobody interrupted us. */
>  	if (likely(!pcs->spare)) {
> @@ -5546,31 +5578,34 @@ static void __pcs_install_empty_sheaf(st
>  /*
>   * Replace the full main sheaf with a (at least partially) empty sheaf.
>   *
> - * Must be called with the cpu_sheaves local lock locked. If successful, returns
> - * the pcs pointer and the local lock locked (possibly on a different cpu than
> - * initially called). If not successful, returns NULL and the local lock
> - * unlocked.
> + * Must be called with the cpu_sheaves local lock locked, and migration counter
                                          ^~ qpw?
> + * increased. If successful, returns the pcs pointer and the local lock locked
> + * (possibly on a different cpu than initially called), with migration counter
> + * increased. If not successful, returns NULL and the local lock unlocked,
                                                           ^~ qpw?
> + * and migration counter decreased.
>   */
>  static struct slub_percpu_sheaves *
>  __pcs_replace_full_main(struct kmem_cache *s, struct slub_percpu_sheaves *pcs,
> -		bool allow_spin)
> +		bool allow_spin, int *cpu)
>  {
>  	struct slab_sheaf *empty;
>  	struct node_barn *barn;
>  	bool put_fail;
>
> restart:
> -	lockdep_assert_held(this_cpu_ptr(&s->cpu_sheaves->lock));
> +	qpw_lockdep_assert_held(&s->cpu_sheaves->lock);
>
>  	/* Bootstrap or debug cache, back off */
>  	if (unlikely(!cache_has_sheaves(s))) {
> -		local_unlock(&s->cpu_sheaves->lock);
> +		qpw_unlock(&s->cpu_sheaves->lock, *cpu);
> +		migrate_enable();
>  		return NULL;
>  	}
>
>  	barn = get_barn(s);
>  	if (!barn) {
> -		local_unlock(&s->cpu_sheaves->lock);
> +		qpw_unlock(&s->cpu_sheaves->lock, *cpu);
> +		migrate_enable();
>  		return NULL;
>  	}
>
> @@ -5607,7 +5642,8 @@ restart:
>  			stat(s, BARN_PUT_FAIL);
>
>  		pcs->spare = NULL;
> -		local_unlock(&s->cpu_sheaves->lock);
> +		qpw_unlock(&s->cpu_sheaves->lock, *cpu);
> +		migrate_enable();
>
>  		sheaf_flush_unused(s, to_flush);
>  		empty = to_flush;
> @@ -5623,7 +5659,8 @@ restart:
>  	put_fail = true;
>
> alloc_empty:
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, *cpu);
> +	migrate_enable();
>
>  	/*
>  	 * alloc_empty_sheaf() doesn't support !allow_spin and it's
> @@ -5640,11 +5677,17 @@ alloc_empty:
>  	if (put_fail)
>  		stat(s, BARN_PUT_FAIL);
>
> -	if (!sheaf_flush_main(s))
> +	migrate_disable();
> +	*cpu = smp_processor_id();
> +	if (!sheaf_flush_main(s, *cpu)) {
> +		migrate_enable();
>  		return NULL;
> +	}
>
> -	if (!local_trylock(&s->cpu_sheaves->lock))
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, *cpu)) {
> +		migrate_enable();
>  		return NULL;
> +	}
>
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>
> @@ -5659,13 +5702,14 @@ alloc_empty:
>  	return pcs;
>
> got_empty:
> -	if (!local_trylock(&s->cpu_sheaves->lock)) {
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, *cpu)) {
> +		migrate_enable();
>  		barn_put_empty_sheaf(barn, empty);
>  		return NULL;
>  	}
>
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
> -	__pcs_install_empty_sheaf(s, pcs, empty, barn);
> +	__pcs_install_empty_sheaf(s, pcs, empty, barn, *cpu);
>
>  	return pcs;
>  }
> @@ -5678,22 +5722,28 @@ static __fastpath_inline
>  bool free_to_pcs(struct kmem_cache *s, void *object, bool allow_spin)
>  {
>  	struct slub_percpu_sheaves *pcs;
> +	int cpu;
>
> -	if (!local_trylock(&s->cpu_sheaves->lock))
> +	migrate_disable();
> +	cpu = smp_processor_id();
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, cpu)) {
> +		migrate_enable();
>  		return false;
> +	}
>
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>
>  	if (unlikely(pcs->main->size == s->sheaf_capacity)) {
>
> -		pcs = __pcs_replace_full_main(s, pcs, allow_spin);
> +		pcs = __pcs_replace_full_main(s, pcs, allow_spin, &cpu);
>  		if (unlikely(!pcs))
>  			return false;
>  	}
>
>  	pcs->main->objects[pcs->main->size++] = object;
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>  	stat(s, FREE_FASTPATH);
>
> @@ -5777,14 +5827,19 @@ bool __kfree_rcu_sheaf(struct kmem_cache
>  {
>  	struct slub_percpu_sheaves *pcs;
>  	struct slab_sheaf *rcu_sheaf;
> +	int cpu;
>
>  	if (WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_RT)))
>  		return false;
>
>  	lock_map_acquire_try(&kfree_rcu_sheaf_map);
>
> -	if (!local_trylock(&s->cpu_sheaves->lock))
> +	migrate_disable();
> +	cpu = smp_processor_id();
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, cpu)) {
> +		migrate_enable();
>  		goto fail;
> +	}
>
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
>
> @@ -5795,7 +5850,8 @@ bool __kfree_rcu_sheaf(struct kmem_cache
>
>  	/* Bootstrap or debug cache, fall back */
>  	if (unlikely(!cache_has_sheaves(s))) {
> -		local_unlock(&s->cpu_sheaves->lock);
> +		qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +		migrate_enable();
>  		goto fail;
>  	}
>
> @@ -5807,7 +5863,8 @@ bool __kfree_rcu_sheaf(struct kmem_cache
>
>  		barn = get_barn(s);
>  		if (!barn) {
> -			local_unlock(&s->cpu_sheaves->lock);
> +			qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +			migrate_enable();
>  			goto fail;
>  		}
>
> @@ -5818,15 +5875,18 @@ bool __kfree_rcu_sheaf(struct kmem_cache
>  			goto do_free;
>  		}
>
> -		local_unlock(&s->cpu_sheaves->lock);
> +		qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +		migrate_enable();
>
>  		empty = alloc_empty_sheaf(s, GFP_NOWAIT);
>
>  		if (!empty)
>  			goto fail;
>
> -		if (!local_trylock(&s->cpu_sheaves->lock)) {
> +		migrate_disable();
> +		if (!qpw_trylock(&s->cpu_sheaves->lock, cpu)) {
>  			barn_put_empty_sheaf(barn, empty);
> +			migrate_enable();
>  			goto fail;
>  		}
>
> @@ -5862,7 +5922,8 @@ do_free:
>  	if (rcu_sheaf)
>  		call_rcu(&rcu_sheaf->rcu_head, rcu_free_sheaf);
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>  	stat(s, FREE_RCU_SHEAF);
>  	lock_map_release(&kfree_rcu_sheaf_map);
> @@ -5889,6 +5950,7 @@ static void free_to_pcs_bulk(struct kmem
>  	void *remote_objects[PCS_BATCH_MAX];
>  	unsigned int remote_nr = 0;
>  	int node = numa_mem_id();
> +	int cpu;
>
> next_remote_batch:
>  	while (i < size) {
> @@ -5918,7 +5980,9 @@ next_remote_batch:
>  		goto flush_remote;
>
> next_batch:
> -	if (!local_trylock(&s->cpu_sheaves->lock))
> +	migrate_disable();
> +	cpu = smp_processor_id();
> +	if (!qpw_trylock(&s->cpu_sheaves->lock, cpu))
>  		goto fallback;
>
>  	pcs = this_cpu_ptr(s->cpu_sheaves);
> @@ -5961,7 +6025,8 @@ do_free:
>  	memcpy(main->objects + main->size, p, batch * sizeof(void *));
>  	main->size += batch;
>
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>  	stat_add(s, FREE_FASTPATH, batch);
>
> @@ -5977,7 +6042,8 @@ do_free:
>  	return;
>
> no_empty:
> -	local_unlock(&s->cpu_sheaves->lock);
> +	qpw_unlock(&s->cpu_sheaves->lock, cpu);
> +	migrate_enable();
>
>  	/*
>  	 * if we depleted all empty sheaves in the barn or there are too
> @@ -7377,7 +7443,7 @@ static int init_percpu_sheaves(struct km
>
>  		pcs = per_cpu_ptr(s->cpu_sheaves, cpu);
>
> -		local_trylock_init(&pcs->lock);
> +		qpw_trylock_init(&pcs->lock);
>
>  		/*
>  		 * Bootstrap sheaf has zero size so fast-path allocation fails.
>

Conversions look correct.

I have some ideas, but I'm still not sure migrate_{dis,en}able() is really
needed here. If it is, though, I think we should work on having it inside
helpers dedicated to local_cpu-only functions, instead of open-coded in the
callers like this.

What do you think?

Thanks for getting this upstream!
Leo