Date: Fri, 11 Jul 2025 18:55:47 -0700
From: Alexei Starovoitov
To: Vlastimil Babka
Cc: Harry Yoo, bpf, linux-mm, Shakeel Butt, Michal Hocko, Sebastian Sewior,
	Andrii Nakryiko, Kumar Kartikeya Dwivedi, Andrew Morton, Peter Zijlstra,
	Steven Rostedt, Johannes Weiner
Subject: Re: [PATCH v2 6/6] slab: Introduce kmalloc_nolock() and kfree_nolock().
References: <20250709015303.8107-1-alexei.starovoitov@gmail.com>
 <20250709015303.8107-7-alexei.starovoitov@gmail.com>
 <683189c3-934e-4398-b970-34584ac70a69@suse.cz>
 <59c621bf-17cd-40ac-af99-3c7cb6ecafc3@suse.cz>
In-Reply-To: <59c621bf-17cd-40ac-af99-3c7cb6ecafc3@suse.cz>

On Fri, Jul 11, 2025 at 12:30:19PM +0200, Vlastimil Babka wrote:
> > and
> > static DEFINE_PER_CPU(struct llist_head, defer_deactivate_slabs);
>
> Should work. Also deactivate_slab() should be the correct operation for both
> a slab from partial list and a newly allocated one.
> But oops, where do we store all the parameters for deactivate_slab()? We can
> probably reuse the union with "struct list_head slab_list" for queueing.
> kmem_cache pointer can be simply taken from struct slab, it's already there.
> But the separate flush_freelist pointer? Maybe take advantage of list_head
> being two pointers and struct llist_node just one pointer, so what we need
> will still fit?
>
> Otherwise we could do the first two phases of deactivate_slab() immediately
> and only defer the third phase where the freelists are already merged and
> there's no freelist pointer to handle anymore. But if it's not necessary,
> let's not complicate.
>
> Also should kmem_cache_destroy() path now get a barrier to flush all pending
> irq_work? Does it exist?

Thanks a lot, everyone, for the great feedback. Here is what I have so far
to address the comments.
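One note on the "will it still fit" question: struct llist_node is a single
pointer and struct list_head is two, so an llist_node plus one extra pointer
occupies exactly the space of slab_list. A quick compile-time sketch of that
size argument (the struct name below is made up for illustration; it is not
part of the patch):

#include <linux/list.h>
#include <linux/llist.h>
#include <linux/build_bug.h>

/* Mirrors the two members added to the slab_list union in mm/slab.h below. */
struct deferred_deactivate_fields {
	struct llist_node llnode;	/* one pointer */
	void *flush_freelist;		/* plus one more */
};

/* Two pointers vs. two pointers: the new union member doesn't grow struct slab. */
static_assert(sizeof(struct deferred_deactivate_fields) <= sizeof(struct list_head));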
The only thing I struggle with is how to properly test the
"if (unlikely(c->slab))" condition in retry_load_slab: I couldn't trigger it
no matter what I tried, so I manually unit-tested the defer_deactivate_slab()
bits with hacks. Will fold and respin next week.

--
From 7efd089831b1e1968f12c7c4e058375bd126f9f6 Mon Sep 17 00:00:00 2001
From: Alexei Starovoitov
Date: Fri, 11 Jul 2025 16:56:12 -0700
Subject: [PATCH slab] slab: fixes

Signed-off-by: Alexei Starovoitov
---
 mm/Kconfig       |   1 +
 mm/slab.h        |   6 +++
 mm/slab_common.c |   3 ++
 mm/slub.c        | 112 ++++++++++++++++++++++++++++++-----------------
 4 files changed, 83 insertions(+), 39 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index 0287e8d94aea..331a14d678b3 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -206,6 +206,7 @@ menu "Slab allocator options"
 config SLUB
 	def_bool y
+	select IRQ_WORK
 
 config KVFREE_RCU_BATCHED
 	def_bool y
 
diff --git a/mm/slab.h b/mm/slab.h
index 05a21dc796e0..65f4616b41de 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -57,6 +57,10 @@ struct slab {
 		struct {
 			union {
 				struct list_head slab_list;
+				struct { /* For deferred deactivate_slab() */
+					struct llist_node llnode;
+					void *flush_freelist;
+				};
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 				struct {
 					struct slab *next;
@@ -680,6 +684,8 @@ void __kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 void __check_heap_object(const void *ptr, unsigned long n,
 			 const struct slab *slab, bool to_user);
 
+void defer_free_barrier(void);
+
 static inline bool slub_debug_orig_size(struct kmem_cache *s)
 {
 	return (kmem_cache_debug_flags(s, SLAB_STORE_USER) &&
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bfe7c40eeee1..937af8ab2501 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -507,6 +507,9 @@ void kmem_cache_destroy(struct kmem_cache *s)
 		rcu_barrier();
 	}
 
+	/* Wait for deferred work from kmalloc/kfree_nolock() */
+	defer_free_barrier();
+
 	cpus_read_lock();
 	mutex_lock(&slab_mutex);
diff --git a/mm/slub.c b/mm/slub.c
index dbecfd412e41..dc889cc59809 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2460,10 +2460,10 @@ static inline struct slab *alloc_slab_page(gfp_t flags, int node,
 	struct slab *slab;
 	unsigned int order = oo_order(oo);
 
-	if (unlikely(!allow_spin)) {
+	if (unlikely(!allow_spin))
 		folio = (struct folio *)alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */,
 								  node, order);
-	} else if (node == NUMA_NO_NODE)
+	else if (node == NUMA_NO_NODE)
 		folio = (struct folio *)alloc_frozen_pages(flags, order);
 	else
 		folio = (struct folio *)__alloc_frozen_pages(flags, order, node, NULL);
@@ -3694,6 +3694,8 @@ static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab)
 	return freelist;
 }
 
+static void defer_deactivate_slab(struct slab *slab);
+
 /*
  * Slow path. The lockless freelist is empty or we need to perform
  * debugging duties.
@@ -3742,14 +3744,13 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	if (unlikely(!node_match(slab, node))) {
 		/*
 		 * same as above but node_match() being false already
-		 * implies node != NUMA_NO_NODE
+		 * implies node != NUMA_NO_NODE.
+		 * Reentrant slub cannot take locks necessary to
+		 * deactivate_slab, hence ignore node preference.
+		 * kmalloc_nolock() doesn't allow __GFP_THISNODE.
 		 */
 		if (!node_isset(node, slab_nodes) || !allow_spin) {
-			/*
-			 * Reentrant slub cannot take locks necessary
-			 * to deactivate_slab, hence downgrade to any node
-			 */
 			node = NUMA_NO_NODE;
 		} else {
 			stat(s, ALLOC_NODE_MISMATCH);
@@ -3953,19 +3954,19 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		void *flush_freelist = c->freelist;
 		struct slab *flush_slab = c->slab;
 
-		if (unlikely(!allow_spin))
-			/*
-			 * Reentrant slub cannot take locks
-			 * necessary for deactivate_slab()
-			 */
-			return NULL;
 		c->slab = NULL;
 		c->freelist = NULL;
 		c->tid = next_tid(c->tid);
 
 		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
-		deactivate_slab(s, flush_slab, flush_freelist);
+		if (unlikely(!allow_spin)) {
+			/* Reentrant slub cannot take locks, defer */
+			flush_slab->flush_freelist = flush_freelist;
+			defer_deactivate_slab(flush_slab);
+		} else {
+			deactivate_slab(s, flush_slab, flush_freelist);
+		}
 
 		stat(s, CPUSLAB_FLUSH);
@@ -4707,18 +4708,36 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 	discard_slab(s, slab);
 }
 
-static DEFINE_PER_CPU(struct llist_head, defer_free_objects);
-static DEFINE_PER_CPU(struct irq_work, defer_free_work);
+struct defer_free {
+	struct llist_head objects;
+	struct llist_head slabs;
+	struct irq_work work;
+};
+
+static void free_deferred_objects(struct irq_work *work);
+static DEFINE_PER_CPU(struct defer_free, defer_free_objects) = {
+	.objects = LLIST_HEAD_INIT(objects),
+	.slabs = LLIST_HEAD_INIT(slabs),
+	.work = IRQ_WORK_INIT(free_deferred_objects),
+};
+
+/*
+ * In PREEMPT_RT irq_work runs in per-cpu kthread, so it's safe
+ * to take sleeping spin_locks from __slab_free() and deactivate_slab().
+ * In !PREEMPT_RT irq_work will run after local_unlock_irqrestore().
+ */
 static void free_deferred_objects(struct irq_work *work)
 {
-	struct llist_head *llhead = this_cpu_ptr(&defer_free_objects);
+	struct defer_free *df = container_of(work, struct defer_free, work);
+	struct llist_head *objs = &df->objects;
+	struct llist_head *slabs = &df->slabs;
 	struct llist_node *llnode, *pos, *t;
 
-	if (llist_empty(llhead))
+	if (llist_empty(objs) && llist_empty(slabs))
 		return;
 
-	llnode = llist_del_all(llhead);
+	llnode = llist_del_all(objs);
 	llist_for_each_safe(pos, t, llnode) {
 		struct kmem_cache *s;
 		struct slab *slab;
@@ -4727,6 +4746,7 @@ static void free_deferred_objects(struct irq_work *work)
 
 		slab = virt_to_slab(x);
 		s = slab->slab_cache;
+		x -= s->offset;
 		/*
 		 * memcg, kasan_slab_pre are already done for 'x'.
 		 * The only thing left is kasan_poison.
@@ -4734,26 +4754,39 @@ static void free_deferred_objects(struct irq_work *work)
 		kasan_slab_free(s, x, false, false, true);
 		__slab_free(s, slab, x, x, 1, _THIS_IP_);
 	}
+
+	llnode = llist_del_all(slabs);
+	llist_for_each_safe(pos, t, llnode) {
+		struct slab *slab = container_of(pos, struct slab, llnode);
+
+		deactivate_slab(slab->slab_cache, slab, slab->flush_freelist);
+	}
 }
 
-static int __init init_defer_work(void)
+static void defer_free(struct kmem_cache *s, void *head)
 {
-	int cpu;
+	struct defer_free *df = this_cpu_ptr(&defer_free_objects);
 
-	for_each_possible_cpu(cpu) {
-		init_llist_head(per_cpu_ptr(&defer_free_objects, cpu));
-		init_irq_work(per_cpu_ptr(&defer_free_work, cpu),
-			      free_deferred_objects);
-	}
-	return 0;
+	if (llist_add(head + s->offset, &df->objects))
+		irq_work_queue(&df->work);
 }
-late_initcall(init_defer_work);
 
-static void defer_free(void *head)
+static void defer_deactivate_slab(struct slab *slab)
 {
-	if (llist_add(head, this_cpu_ptr(&defer_free_objects)))
-		irq_work_queue(this_cpu_ptr(&defer_free_work));
+	struct defer_free *df = this_cpu_ptr(&defer_free_objects);
+
+	if (llist_add(&slab->llnode, &df->slabs))
+		irq_work_queue(&df->work);
+}
+
+void defer_free_barrier(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		irq_work_sync(&per_cpu_ptr(&defer_free_objects, cpu)->work);
 }
+
 #ifndef CONFIG_SLUB_TINY
 /*
  * Fastpath with forced inlining to produce a kfree and kmem_cache_free that
@@ -4774,6 +4807,8 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 				struct slab *slab, void *head, void *tail,
 				int cnt, unsigned long addr)
 {
+	/* cnt == 0 signals that it's called from kfree_nolock() */
+	bool allow_spin = cnt;
 	struct kmem_cache_cpu *c;
 	unsigned long tid;
 	void **freelist;
@@ -4792,28 +4827,27 @@ static __always_inline void do_slab_free(struct kmem_cache *s,
 	barrier();
 
 	if (unlikely(slab != c->slab)) {
-		/* cnt == 0 signals that it's called from kfree_nolock() */
-		if (unlikely(!cnt)) {
+		if (unlikely(!allow_spin)) {
 			/*
 			 * __slab_free() can locklessly cmpxchg16 into a slab,
 			 * but then it might need to take spin_lock or local_lock
 			 * in put_cpu_partial() for further processing.
 			 * Avoid the complexity and simply add to a deferred list.
 			 */
-			defer_free(head);
+			defer_free(s, head);
 		} else {
 			__slab_free(s, slab, head, tail, cnt, addr);
 		}
 		return;
 	}
 
-	if (unlikely(!cnt)) {
+	if (unlikely(!allow_spin)) {
 		if ((in_nmi() || !USE_LOCKLESS_FAST_PATH()) &&
 		    local_lock_is_locked(&s->cpu_slab->lock)) {
-			defer_free(head);
+			defer_free(s, head);
 			return;
 		}
-		cnt = 1;
+		cnt = 1; /* restore cnt. kfree_nolock() frees one object at a time */
 		kasan_slab_free(s, head, false, false, /* skip quarantine */true);
 	}
@@ -5065,7 +5099,7 @@ void kfree(const void *object)
 EXPORT_SYMBOL(kfree);
 
 /*
- * Can be called while holding raw_spin_lock or from IRQ and NMI,
+ * Can be called while holding raw_spinlock_t or from IRQ and NMI,
  * but only for objects allocated by kmalloc_nolock(),
  * since some debug checks (like kmemleak and kfence) were
  * skipped on allocation. large_kmalloc is not supported either.
@@ -5115,7 +5149,7 @@ void kfree_nolock(const void *object)
 #ifndef CONFIG_SLUB_TINY
 	do_slab_free(s, slab, x, x, 0, _RET_IP_);
 #else
-	defer_free(s, x);
+	defer_free(s, x);
 #endif
 }
 EXPORT_SYMBOL_GPL(kfree_nolock);
-- 
2.47.1
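P.S. A rough caller-side sketch of how the pair is meant to be used, e.g. from
a context that may hold a raw_spinlock_t or run in NMI. The
kmalloc_nolock(size, gfp_flags, node) signature is assumed from the rest of the
series (it is not visible in the hunks above); struct trace_rec and both
helpers are made up for illustration:

#include <linux/slab.h>
#include <linux/types.h>
#include <linux/numa.h>

struct trace_rec {
	u64 ts;
	u32 cpu;
};

/* Assumed safe to call from NMI/IRQ or under a raw_spinlock_t. */
static struct trace_rec *trace_rec_alloc(void)
{
	/* May return NULL where a regular kmalloc() would have spun or slept.
	 * gfp_flags are kept at 0 here since this API accepts only a
	 * restricted set of flags.
	 */
	return kmalloc_nolock(sizeof(struct trace_rec), 0, NUMA_NO_NODE);
}

static void trace_rec_free(struct trace_rec *rec)
{
	/* Only valid for objects that came from kmalloc_nolock(). */
	kfree_nolock(rec);
}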