From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: bpf@vger.kernel.org, linux-mm@kvack.org
Cc: vbabka@suse.cz, harry.yoo@oracle.com, shakeel.butt@linux.dev, mhocko@suse.com, bigeasy@linutronix.de, andrii@kernel.org, memxor@gmail.com, akpm@linux-foundation.org, peterz@infradead.org, rostedt@goodmis.org, hannes@cmpxchg.org
Subject: [PATCH v2 6/6] slab: Introduce kmalloc_nolock() and kfree_nolock().
Date: Tue, 8 Jul 2025 18:53:03 -0700
Message-Id: <20250709015303.8107-7-alexei.starovoitov@gmail.com>
In-Reply-To: <20250709015303.8107-1-alexei.starovoitov@gmail.com>
References: <20250709015303.8107-1-alexei.starovoitov@gmail.com>

From: Alexei Starovoitov

kmalloc_nolock() relies on the ability of local_lock to detect the
situation when it is already locked.

In !PREEMPT_RT, local_lock_is_locked() is true only when an NMI
happened in the irq-saved region that protects _that specific_ per-cpu
kmem_cache_cpu. In that case, retry the operation in a different
kmalloc bucket. The second attempt will likely succeed, since this cpu
locked a different kmem_cache_cpu.

Similarly, in PREEMPT_RT, local_lock_is_locked() returns true when the
per-cpu rt_spin_lock is locked by the current task. In this case
re-entrance into the same kmalloc bucket is unsafe, and kmalloc_nolock()
tries a different bucket that is most likely not locked by the current
task. Though it may be locked by a different task, it is safe to
rt_spin_lock() on it.

Similar to alloc_pages_nolock(), kmalloc_nolock() returns NULL
immediately if called from hard irq or NMI in PREEMPT_RT.

kfree_nolock() defers freeing to irq_work when local_lock_is_locked()
and in_nmi(), or in PREEMPT_RT.

The SLUB_TINY config doesn't use local_lock_is_locked() and relies on
spin_trylock_irqsave(&n->list_lock) to allocate, while kfree_nolock()
always defers to irq_work.
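For illustration only (this snippet is not part of the patch; the names
trace_rec, grab_rec() and drop_rec() are made up), a caller running in
NMI context or under a raw_spin_lock could use the new API roughly like
this:

	struct trace_rec {
		u64 ts;
		u32 cpu;
	};

	static struct trace_rec *grab_rec(void)
	{
		struct trace_rec *rec;

		/* only __GFP_ACCOUNT and __GFP_ZERO are accepted here */
		rec = kmalloc_nolock(sizeof(*rec), __GFP_ZERO, NUMA_NO_NODE);
		if (!rec)
			return NULL;	/* genuine ENOMEM, retrying won't help */

		rec->ts = ktime_get_mono_fast_ns();	/* NMI-safe clock */
		rec->cpu = raw_smp_processor_id();
		return rec;
	}

	static void drop_rec(struct trace_rec *rec)
	{
		/* only objects from kmalloc_nolock() may be freed this way */
		kfree_nolock(rec);
	}

On PREEMPT_RT the same code still works from task context, but
kmalloc_nolock() returns NULL when called from hard irq or NMI, so such
callers must tolerate the allocation failing.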
Signed-off-by: Alexei Starovoitov --- include/linux/kasan.h | 13 +- include/linux/slab.h | 4 + mm/kasan/common.c | 5 +- mm/slub.c | 330 ++++++++++++++++++++++++++++++++++++++---- 4 files changed, 319 insertions(+), 33 deletions(-) diff --git a/include/linux/kasan.h b/include/linux/kasan.h index 890011071f2b..acdc8cb0152e 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -200,7 +200,7 @@ static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s, } bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init, - bool still_accessible); + bool still_accessible, bool no_quarantine); /** * kasan_slab_free - Poison, initialize, and quarantine a slab object. * @object: Object to be freed. @@ -226,11 +226,13 @@ bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init, * @Return true if KASAN took ownership of the object; false otherwise. */ static __always_inline bool kasan_slab_free(struct kmem_cache *s, - void *object, bool init, - bool still_accessible) + void *object, bool init, + bool still_accessible, + bool no_quarantine) { if (kasan_enabled()) - return __kasan_slab_free(s, object, init, still_accessible); + return __kasan_slab_free(s, object, init, still_accessible, + no_quarantine); return false; } @@ -427,7 +429,8 @@ static inline bool kasan_slab_pre_free(struct kmem_cache *s, void *object) } static inline bool kasan_slab_free(struct kmem_cache *s, void *object, - bool init, bool still_accessible) + bool init, bool still_accessible, + bool no_quarantine) { return false; } diff --git a/include/linux/slab.h b/include/linux/slab.h index d5a8ab98035c..743f6d196d57 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -470,6 +470,7 @@ void * __must_check krealloc_noprof(const void *objp, size_t new_size, #define krealloc(...) alloc_hooks(krealloc_noprof(__VA_ARGS__)) void kfree(const void *objp); +void kfree_nolock(const void *objp); void kfree_sensitive(const void *objp); size_t __ksize(const void *objp); @@ -910,6 +911,9 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f } #define kmalloc(...) alloc_hooks(kmalloc_noprof(__VA_ARGS__)) +void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node); +#define kmalloc_nolock(...) alloc_hooks(kmalloc_nolock_noprof(__VA_ARGS__)) + #define kmem_buckets_alloc(_b, _size, _flags) \ alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE)) diff --git a/mm/kasan/common.c b/mm/kasan/common.c index ed4873e18c75..67042e07baee 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -256,13 +256,16 @@ bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object, } bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init, - bool still_accessible) + bool still_accessible, bool no_quarantine) { if (!kasan_arch_is_ready() || is_kfence_address(object)) return false; poison_slab_object(cache, object, init, still_accessible); + if (no_quarantine) + return false; + /* * If the object is put into quarantine, do not let slab put the object * onto the freelist for now. 
The object's metadata is kept until the diff --git a/mm/slub.c b/mm/slub.c index c4b64821e680..f0844b44ee09 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -44,6 +44,7 @@ #include #include #include +#include #include #include @@ -393,7 +394,7 @@ struct kmem_cache_cpu { #ifdef CONFIG_SLUB_CPU_PARTIAL struct slab *partial; /* Partially allocated slabs */ #endif - local_lock_t lock; /* Protects the fields above */ + local_trylock_t lock; /* Protects the fields above */ #ifdef CONFIG_SLUB_STATS unsigned int stat[NR_SLUB_STAT_ITEMS]; #endif @@ -1982,6 +1983,7 @@ static inline void init_slab_obj_exts(struct slab *slab) int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, gfp_t gfp, bool new_slab) { + bool allow_spin = gfpflags_allow_spinning(gfp); unsigned int objects = objs_per_slab(s, slab); unsigned long new_exts; unsigned long old_exts; @@ -1990,8 +1992,14 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, gfp &= ~OBJCGS_CLEAR_MASK; /* Prevent recursive extension vector allocation */ gfp |= __GFP_NO_OBJ_EXT; - vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, - slab_nid(slab)); + if (unlikely(!allow_spin)) { + size_t sz = objects * sizeof(struct slabobj_ext); + + vec = kmalloc_nolock(sz, __GFP_ZERO, slab_nid(slab)); + } else { + vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp, + slab_nid(slab)); + } if (!vec) { /* Mark vectors which failed to allocate */ if (new_slab) @@ -2021,7 +2029,10 @@ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s, * objcg vector should be reused. */ mark_objexts_empty(vec); - kfree(vec); + if (unlikely(!allow_spin)) + kfree_nolock(vec); + else + kfree(vec); return 0; } @@ -2379,7 +2390,7 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init, } /* KASAN might put x into memory quarantine, delaying its reuse. */ - return !kasan_slab_free(s, x, init, still_accessible); + return !kasan_slab_free(s, x, init, still_accessible, false); } static __fastpath_inline @@ -2442,13 +2453,17 @@ static void *setup_object(struct kmem_cache *s, void *object) * Slab allocation and freeing */ static inline struct slab *alloc_slab_page(gfp_t flags, int node, - struct kmem_cache_order_objects oo) + struct kmem_cache_order_objects oo, + bool allow_spin) { struct folio *folio; struct slab *slab; unsigned int order = oo_order(oo); - if (node == NUMA_NO_NODE) + if (unlikely(!allow_spin)) { + folio = (struct folio *)alloc_frozen_pages_nolock(0/* __GFP_COMP is implied */, + node, order); + } else if (node == NUMA_NO_NODE) folio = (struct folio *)alloc_frozen_pages(flags, order); else folio = (struct folio *)__alloc_frozen_pages(flags, order, node, NULL); @@ -2598,6 +2613,7 @@ static __always_inline void unaccount_slab(struct slab *slab, int order, static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) { + bool allow_spin = gfpflags_allow_spinning(flags); struct slab *slab; struct kmem_cache_order_objects oo = s->oo; gfp_t alloc_gfp; @@ -2617,7 +2633,11 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min)) alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~__GFP_RECLAIM; - slab = alloc_slab_page(alloc_gfp, node, oo); + /* + * __GFP_RECLAIM could be cleared on the first allocation attempt, + * so pass allow_spin flag directly. 
+ */ + slab = alloc_slab_page(alloc_gfp, node, oo, allow_spin); if (unlikely(!slab)) { oo = s->min; alloc_gfp = flags; @@ -2625,7 +2645,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node) * Allocation may have failed due to fragmentation. * Try a lower order alloc if possible */ - slab = alloc_slab_page(alloc_gfp, node, oo); + slab = alloc_slab_page(alloc_gfp, node, oo, allow_spin); if (unlikely(!slab)) return NULL; stat(s, ORDER_FALLBACK); @@ -2803,8 +2823,8 @@ static void *alloc_single_from_partial(struct kmem_cache *s, * allocated slab. Allocate a single object instead of whole freelist * and put the slab to the partial (or full) list. */ -static void *alloc_single_from_new_slab(struct kmem_cache *s, - struct slab *slab, int orig_size) +static void *alloc_single_from_new_slab(struct kmem_cache *s, struct slab *slab, + int orig_size, gfp_t gfpflags) { int nid = slab_nid(slab); struct kmem_cache_node *n = get_node(s, nid); @@ -2824,7 +2844,10 @@ static void *alloc_single_from_new_slab(struct kmem_cache *s, */ return NULL; - spin_lock_irqsave(&n->list_lock, flags); + if (gfpflags_allow_spinning(gfpflags)) + spin_lock_irqsave(&n->list_lock, flags); + else if (!spin_trylock_irqsave(&n->list_lock, flags)) + return NULL; if (slab->inuse == slab->objects) add_full(s, n, slab); @@ -2865,7 +2888,10 @@ static struct slab *get_partial_node(struct kmem_cache *s, if (!n || !n->nr_partial) return NULL; - spin_lock_irqsave(&n->list_lock, flags); + if (gfpflags_allow_spinning(pc->flags)) + spin_lock_irqsave(&n->list_lock, flags); + else if (!spin_trylock_irqsave(&n->list_lock, flags)) + return NULL; list_for_each_entry_safe(slab, slab2, &n->partial, slab_list) { if (!pfmemalloc_match(slab, pc->flags)) continue; @@ -3056,7 +3082,7 @@ static void init_kmem_cache_cpus(struct kmem_cache *s) for_each_possible_cpu(cpu) { c = per_cpu_ptr(s->cpu_slab, cpu); - local_lock_init(&c->lock); + local_trylock_init(&c->lock); c->tid = init_tid(cpu); } } @@ -3690,6 +3716,7 @@ static inline void *freeze_slab(struct kmem_cache *s, struct slab *slab) static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, unsigned long addr, struct kmem_cache_cpu *c, unsigned int orig_size) { + bool allow_spin = gfpflags_allow_spinning(gfpflags); void *freelist; struct slab *slab; unsigned long flags; @@ -3717,7 +3744,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * same as above but node_match() being false already * implies node != NUMA_NO_NODE */ - if (!node_isset(node, slab_nodes)) { + if (!node_isset(node, slab_nodes) || + !allow_spin) { + /* + * Reentrant slub cannot take locks necessary + * to deactivate_slab, hence downgrade to any node + */ node = NUMA_NO_NODE; } else { stat(s, ALLOC_NODE_MISMATCH); @@ -3730,7 +3762,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * PFMEMALLOC but right now, we are losing the pfmemalloc * information when the page leaves the per-cpu allocator */ - if (unlikely(!pfmemalloc_match(slab, gfpflags))) + if (unlikely(!pfmemalloc_match(slab, gfpflags) && allow_spin)) goto deactivate_slab; /* must check again c->slab in case we got preempted and it changed */ @@ -3803,7 +3835,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, slub_set_percpu_partial(c, slab); if (likely(node_match(slab, node) && - pfmemalloc_match(slab, gfpflags))) { + pfmemalloc_match(slab, gfpflags)) || + /* + * Reentrant slub cannot take locks necessary + * for __put_partials(), hence downgrade 
to any node + */ + !allow_spin) { c->slab = slab; freelist = get_freelist(s, slab); VM_BUG_ON(!freelist); @@ -3833,8 +3870,13 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, * allocating new page from other nodes */ if (unlikely(node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) - && try_thisnode)) - pc.flags = GFP_NOWAIT | __GFP_THISNODE; + && try_thisnode)) { + if (unlikely(!allow_spin)) + /* Do not upgrade gfp to NOWAIT from more restrictive mode */ + pc.flags = gfpflags | __GFP_THISNODE; + else + pc.flags = GFP_NOWAIT | __GFP_THISNODE; + } pc.orig_size = orig_size; slab = get_partial(s, node, &pc); @@ -3873,7 +3915,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, stat(s, ALLOC_SLAB); if (kmem_cache_debug(s)) { - freelist = alloc_single_from_new_slab(s, slab, orig_size); + freelist = alloc_single_from_new_slab(s, slab, orig_size, gfpflags); if (unlikely(!freelist)) goto new_objects; @@ -3895,7 +3937,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, inc_slabs_node(s, slab_nid(slab), slab->objects); - if (unlikely(!pfmemalloc_match(slab, gfpflags))) { + if (unlikely(!pfmemalloc_match(slab, gfpflags) && allow_spin)) { /* * For !pfmemalloc_match() case we don't load freelist so that * we don't make further mismatched allocations easier. @@ -3911,6 +3953,12 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, void *flush_freelist = c->freelist; struct slab *flush_slab = c->slab; + if (unlikely(!allow_spin)) + /* + * Reentrant slub cannot take locks + * necessary for deactivate_slab() + */ + return NULL; c->slab = NULL; c->freelist = NULL; c->tid = next_tid(c->tid); @@ -3946,8 +3994,23 @@ static void *__slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node, */ c = slub_get_cpu_ptr(s->cpu_slab); #endif - - p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size); + if (unlikely(!gfpflags_allow_spinning(gfpflags))) { + if (local_lock_is_locked(&s->cpu_slab->lock)) { + /* + * EBUSY is an internal signal to kmalloc_nolock() to + * retry a different bucket. It's not propagated + * to the caller. + */ + p = ERR_PTR(-EBUSY); + goto out; + } + local_lock_lockdep_start(&s->cpu_slab->lock); + p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size); + local_lock_lockdep_end(&s->cpu_slab->lock); + } else { + p = ___slab_alloc(s, gfpflags, node, addr, c, orig_size); + } +out: #ifdef CONFIG_PREEMPT_COUNT slub_put_cpu_ptr(s->cpu_slab); #endif @@ -4071,7 +4134,7 @@ static void *__slab_alloc_node(struct kmem_cache *s, return NULL; } - object = alloc_single_from_new_slab(s, slab, orig_size); + object = alloc_single_from_new_slab(s, slab, orig_size, gfpflags); return object; } @@ -4150,8 +4213,9 @@ bool slab_post_alloc_hook(struct kmem_cache *s, struct list_lru *lru, if (p[i] && init && (!kasan_init || !kasan_has_integrated_init())) memset(p[i], 0, zero_size); - kmemleak_alloc_recursive(p[i], s->object_size, 1, - s->flags, init_flags); + if (gfpflags_allow_spinning(flags)) + kmemleak_alloc_recursive(p[i], s->object_size, 1, + s->flags, init_flags); kmsan_slab_alloc(s, p[i], init_flags); alloc_tagging_slab_alloc_hook(s, p[i], flags); } @@ -4342,6 +4406,94 @@ void *__kmalloc_noprof(size_t size, gfp_t flags) } EXPORT_SYMBOL(__kmalloc_noprof); +/** + * kmalloc_nolock - Allocate an object of given size from any context. + * @size: size to allocate + * @gfp_flags: GFP flags. Only __GFP_ACCOUNT, __GFP_ZERO allowed. + * @node: node number of the target node. 
+ * + * Return: pointer to the new object or NULL in case of error. + * NULL does not mean EBUSY or EAGAIN. It means ENOMEM. + * There is no reason to call it again and expect !NULL. + */ +void *kmalloc_nolock_noprof(size_t size, gfp_t gfp_flags, int node) +{ + gfp_t alloc_gfp = __GFP_NOWARN | __GFP_NOMEMALLOC | gfp_flags; + struct kmem_cache *s; + bool can_retry = true; + void *ret = ERR_PTR(-EBUSY); + + VM_WARN_ON_ONCE(gfp_flags & ~(__GFP_ACCOUNT | __GFP_ZERO)); + + if (unlikely(!size)) + return ZERO_SIZE_PTR; + + if (!USE_LOCKLESS_FAST_PATH() && (in_nmi() || in_hardirq())) + /* kmalloc_nolock() in PREEMPT_RT is not supported from irq */ + return NULL; +retry: + if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) + return NULL; + s = kmalloc_slab(size, NULL, alloc_gfp, _RET_IP_); + + if (!(s->flags & __CMPXCHG_DOUBLE) && !kmem_cache_debug(s)) + /* + * kmalloc_nolock() is not supported on architectures that + * don't implement cmpxchg16b, but debug caches don't use + * per-cpu slab and per-cpu partial slabs. They rely on + * kmem_cache_node->list_lock, so kmalloc_nolock() can + * attempt to allocate from debug caches by + * spin_trylock_irqsave(&n->list_lock, ...) + */ + return NULL; + + /* + * Do not call slab_alloc_node(), since trylock mode isn't + * compatible with slab_pre_alloc_hook/should_failslab and + * kfence_alloc. Hence call __slab_alloc_node() (at most twice) + * and slab_post_alloc_hook() directly. + * + * In !PREEMPT_RT ___slab_alloc() manipulates (freelist,tid) pair + * in irq saved region. It assumes that the same cpu will not + * __update_cpu_freelist_fast() into the same (freelist,tid) pair. + * Therefore use in_nmi() to check whether particular bucket is in + * irq protected section. + * + * If in_nmi() && local_lock_is_locked(s->cpu_slab) then it means that + * this cpu was interrupted somewhere inside ___slab_alloc() after + * it did local_lock_irqsave(&s->cpu_slab->lock, flags). + * In this case fast path with __update_cpu_freelist_fast() is not safe. + */ +#ifndef CONFIG_SLUB_TINY + if (!in_nmi() || !local_lock_is_locked(&s->cpu_slab->lock)) +#endif + ret = __slab_alloc_node(s, alloc_gfp, node, _RET_IP_, size); + + if (PTR_ERR(ret) == -EBUSY) { + if (can_retry) { + /* pick the next kmalloc bucket */ + size = s->object_size + 1; + /* + * Another alternative is to + * if (memcg) alloc_gfp &= ~__GFP_ACCOUNT; + * else if (!memcg) alloc_gfp |= __GFP_ACCOUNT; + * to retry from bucket of the same size. 
+ */ + can_retry = false; + goto retry; + } + ret = NULL; + } + + maybe_wipe_obj_freeptr(s, ret); + slab_post_alloc_hook(s, NULL, alloc_gfp, 1, &ret, + slab_want_init_on_alloc(alloc_gfp, s), size); + + ret = kasan_kmalloc(s, ret, size, alloc_gfp); + return ret; +} +EXPORT_SYMBOL_GPL(kmalloc_nolock_noprof); + void *__kmalloc_node_track_caller_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node, unsigned long caller) { @@ -4555,6 +4707,53 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab, discard_slab(s, slab); } +static DEFINE_PER_CPU(struct llist_head, defer_free_objects); +static DEFINE_PER_CPU(struct irq_work, defer_free_work); + +static void free_deferred_objects(struct irq_work *work) +{ + struct llist_head *llhead = this_cpu_ptr(&defer_free_objects); + struct llist_node *llnode, *pos, *t; + + if (llist_empty(llhead)) + return; + + llnode = llist_del_all(llhead); + llist_for_each_safe(pos, t, llnode) { + struct kmem_cache *s; + struct slab *slab; + void *x = pos; + + slab = virt_to_slab(x); + s = slab->slab_cache; + + /* + * memcg, kasan_slab_pre are already done for 'x'. + * The only thing left is kasan_poison. + */ + kasan_slab_free(s, x, false, false, true); + __slab_free(s, slab, x, x, 1, _THIS_IP_); + } +} + +static int __init init_defer_work(void) +{ + int cpu; + + for_each_possible_cpu(cpu) { + init_llist_head(per_cpu_ptr(&defer_free_objects, cpu)); + init_irq_work(per_cpu_ptr(&defer_free_work, cpu), + free_deferred_objects); + } + return 0; +} +late_initcall(init_defer_work); + +static void defer_free(void *head) +{ + if (llist_add(head, this_cpu_ptr(&defer_free_objects))) + irq_work_queue(this_cpu_ptr(&defer_free_work)); +} #ifndef CONFIG_SLUB_TINY /* * Fastpath with forced inlining to produce a kfree and kmem_cache_free that @@ -4593,10 +4792,31 @@ static __always_inline void do_slab_free(struct kmem_cache *s, barrier(); if (unlikely(slab != c->slab)) { - __slab_free(s, slab, head, tail, cnt, addr); + /* cnt == 0 signals that it's called from kfree_nolock() */ + if (unlikely(!cnt)) { + /* + * __slab_free() can locklessly cmpxchg16 into a slab, + * but then it might need to take spin_lock or local_lock + * in put_cpu_partial() for further processing. + * Avoid the complexity and simply add to a deferred list. + */ + defer_free(head); + } else { + __slab_free(s, slab, head, tail, cnt, addr); + } return; } + if (unlikely(!cnt)) { + if ((in_nmi() || !USE_LOCKLESS_FAST_PATH()) && + local_lock_is_locked(&s->cpu_slab->lock)) { + defer_free(head); + return; + } + cnt = 1; + kasan_slab_free(s, head, false, false, /* skip quarantine */true); + } + if (USE_LOCKLESS_FAST_PATH()) { freelist = READ_ONCE(c->freelist); @@ -4844,6 +5064,62 @@ void kfree(const void *object) } EXPORT_SYMBOL(kfree); +/* + * Can be called while holding raw_spin_lock or from IRQ and NMI, + * but only for objects allocated by kmalloc_nolock(), + * since some debug checks (like kmemleak and kfence) were + * skipped on allocation. large_kmalloc is not supported either. 
+ */ +void kfree_nolock(const void *object) +{ + struct folio *folio; + struct slab *slab; + struct kmem_cache *s; + void *x = (void *)object; + + if (unlikely(ZERO_OR_NULL_PTR(object))) + return; + + folio = virt_to_folio(object); + if (unlikely(!folio_test_slab(folio))) { + WARN(1, "Buggy usage of kfree_nolock"); + return; + } + + slab = folio_slab(folio); + s = slab->slab_cache; + + memcg_slab_free_hook(s, slab, &x, 1); + alloc_tagging_slab_free_hook(s, slab, &x, 1); + /* + * Unlike slab_free() do NOT call the following: + * kmemleak_free_recursive(x, s->flags); + * debug_check_no_locks_freed(x, s->object_size); + * debug_check_no_obj_freed(x, s->object_size); + * __kcsan_check_access(x, s->object_size, ..); + * kfence_free(x); + * since they take spinlocks. + */ + kmsan_slab_free(s, x); + /* + * If KASAN finds a kernel bug it will do kasan_report_invalid_free() + * which will call raw_spin_lock_irqsave() which is technically + * unsafe from NMI, but take chance and report kernel bug. + * The sequence of + * kasan_report_invalid_free() -> raw_spin_lock_irqsave() -> NMI + * -> kfree_nolock() -> kasan_report_invalid_free() on the same CPU + * is double buggy and deserves to deadlock. + */ + if (kasan_slab_pre_free(s, x)) + return; +#ifndef CONFIG_SLUB_TINY + do_slab_free(s, slab, x, x, 0, _RET_IP_); +#else + defer_free(x); +#endif +} +EXPORT_SYMBOL_GPL(kfree_nolock); + static __always_inline __realloc_size(2) void * __do_krealloc(const void *p, size_t new_size, gfp_t flags) { -- 2.47.1