Date: Wed, 6 Dec 2023 18:05:38 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Vlastimil Babka
Cc: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim, Andrew Morton, Roman Gushchin, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko, Shakeel Butt, Muchun Song, Kees Cook, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, cgroups@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: Re: [PATCH v2 07/21] mm/slab: remove CONFIG_SLAB code from slab common code
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz> <20231120-slab-remove-slab-v2-7-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-7-9c9c70177183@suse.cz>

On Mon, Nov 20, 2023 at 07:34:18PM +0100, Vlastimil Babka wrote:
> In slab_common.c and slab.h headers, we can now remove all code behind
> CONFIG_SLAB and CONFIG_DEBUG_SLAB ifdefs, and remove all CONFIG_SLUB
> ifdefs.
> 
> Reviewed-by: Kees Cook
> Signed-off-by: Vlastimil Babka
> ---
>  include/linux/slab.h | 14 ++---------
>  mm/slab.h            | 69 ++++------------------------------------------
>  mm/slab_common.c     | 22 ++---------------
>  3 files changed, 9 insertions(+), 96 deletions(-)
> 
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 34e43cddc520..b2015d0e01ad 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -24,7 +24,7 @@
>  
>  /*
>   * Flags to pass to kmem_cache_create().
> - * The ones marked DEBUG are only valid if CONFIG_DEBUG_SLAB is set.
> + * The ones marked DEBUG need CONFIG_SLUB_DEBUG enabled, otherwise are no-op
>   */
>  /* DEBUG: Perform (expensive) checks on alloc/free */
>  #define SLAB_CONSISTENCY_CHECKS ((slab_flags_t __force)0x00000100U)
> @@ -302,25 +302,15 @@ static inline unsigned int arch_slab_minalign(void)
>   * Kmalloc array related definitions
>   */
>  
> -#ifdef CONFIG_SLAB
>  /*
> - * SLAB and SLUB directly allocates requests fitting in to an order-1 page
> + * SLUB directly allocates requests fitting in to an order-1 page
>   * (PAGE_SIZE*2). Larger requests are passed to the page allocator.
>   */
>  #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
>  #define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT)
>  #ifndef KMALLOC_SHIFT_LOW
> -#define KMALLOC_SHIFT_LOW 5
> -#endif
> -#endif
> -
> -#ifdef CONFIG_SLUB
> -#define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1)
> -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT)
> -#ifndef KMALLOC_SHIFT_LOW
>  #define KMALLOC_SHIFT_LOW 3
>  #endif
> -#endif
>  
>  /* Maximum allocatable size */
>  #define KMALLOC_MAX_SIZE (1UL << KMALLOC_SHIFT_MAX)
> diff --git a/mm/slab.h b/mm/slab.h
> index 3d07fb428393..014c36ea51fa 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -42,21 +42,6 @@ typedef union {
>  struct slab {
>  	unsigned long __page_flags;
>  
> -#if defined(CONFIG_SLAB)
> -
> -	struct kmem_cache *slab_cache;
> -	union {
> -		struct {
> -			struct list_head slab_list;
> -			void *freelist;	/* array of free object indexes */
> -			void *s_mem;	/* first object */
> -		};
> -		struct rcu_head rcu_head;
> -	};
> -	unsigned int active;
> -
> -#elif defined(CONFIG_SLUB)
> -
>  	struct kmem_cache *slab_cache;
>  	union {
>  		struct {
> @@ -91,10 +76,6 @@ struct slab {
>  	};
>  	unsigned int __unused;
>  
> -#else
> -#error "Unexpected slab allocator configured"
> -#endif
> -
>  	atomic_t __page_refcount;
>  #ifdef CONFIG_MEMCG
>  	unsigned long memcg_data;
> @@ -111,7 +92,7 @@ SLAB_MATCH(memcg_data, memcg_data);
>  #endif
>  #undef SLAB_MATCH
>  static_assert(sizeof(struct slab) <= sizeof(struct page));
> -#if defined(system_has_freelist_aba) && defined(CONFIG_SLUB)
> +#if defined(system_has_freelist_aba)
>  static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)));
>  #endif
>  
> @@ -228,13 +209,7 @@ static inline size_t slab_size(const struct slab *slab)
>  	return PAGE_SIZE << slab_order(slab);
>  }
>  
> -#ifdef CONFIG_SLAB
> -#include
> -#endif
> -
> -#ifdef CONFIG_SLUB
>  #include
> -#endif
>  
>  #include
>  #include
> @@ -320,26 +295,16 @@ static inline bool is_kmalloc_cache(struct kmem_cache *s)
>  			      SLAB_CACHE_DMA32 | SLAB_PANIC | \
>  			      SLAB_TYPESAFE_BY_RCU | SLAB_DEBUG_OBJECTS )
>  
> -#if defined(CONFIG_DEBUG_SLAB)
> -#define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
> -#elif defined(CONFIG_SLUB_DEBUG)
> +#ifdef CONFIG_SLUB_DEBUG
>  #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
>  			  SLAB_TRACE | SLAB_CONSISTENCY_CHECKS)
>  #else
>  #define SLAB_DEBUG_FLAGS (0)
>  #endif
>  
> -#if defined(CONFIG_SLAB)
> -#define SLAB_CACHE_FLAGS (SLAB_MEM_SPREAD | SLAB_NOLEAKTRACE | \
> -			  SLAB_RECLAIM_ACCOUNT | SLAB_TEMPORARY | \
> -			  SLAB_ACCOUNT | SLAB_NO_MERGE)
> -#elif defined(CONFIG_SLUB)
>  #define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE | SLAB_RECLAIM_ACCOUNT | \
>  			  SLAB_TEMPORARY | SLAB_ACCOUNT | \
>  			  SLAB_NO_USER_FLAGS | SLAB_KMALLOC | SLAB_NO_MERGE)
> -#else
> -#define SLAB_CACHE_FLAGS (SLAB_NOLEAKTRACE)
> -#endif
>  
>  /* Common flags available with current configuration */
>  #define CACHE_CREATE_MASK (SLAB_CORE_FLAGS | SLAB_DEBUG_FLAGS | SLAB_CACHE_FLAGS)
> @@ -672,18 +637,14 @@ size_t __ksize(const void *objp);
>  
>  static inline size_t slab_ksize(const struct kmem_cache *s)
>  {
> -#ifndef CONFIG_SLUB
> -	return s->object_size;
> -
> -#else /* CONFIG_SLUB */
> -# ifdef CONFIG_SLUB_DEBUG
> +#ifdef CONFIG_SLUB_DEBUG
>  	/*
>  	 * Debugging requires use of the padding between object
>  	 * and whatever may come after it.
>  	 */
>  	if (s->flags & (SLAB_RED_ZONE | SLAB_POISON))
>  		return s->object_size;
> -# endif
> +#endif
>  	if (s->flags & SLAB_KASAN)
>  		return s->object_size;
>  	/*
> @@ -697,7 +658,6 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
>  	 * Else we can use all the padding etc for the allocation
>  	 */
>  	return s->size;
> -#endif
>  }
>  
>  static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
> @@ -775,23 +735,6 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
>   * The slab lists for all objects.
>   */
>  struct kmem_cache_node {
> -#ifdef CONFIG_SLAB
> -	raw_spinlock_t list_lock;
> -	struct list_head slabs_partial;	/* partial list first, better asm code */
> -	struct list_head slabs_full;
> -	struct list_head slabs_free;
> -	unsigned long total_slabs;	/* length of all slab lists */
> -	unsigned long free_slabs;	/* length of free slab list only */
> -	unsigned long free_objects;
> -	unsigned int free_limit;
> -	unsigned int colour_next;	/* Per-node cache coloring */
> -	struct array_cache *shared;	/* shared per node */
> -	struct alien_cache **alien;	/* on other nodes */
> -	unsigned long next_reap;	/* updated without locking */
> -	int free_touched;		/* updated without locking */
> -#endif
> -
> -#ifdef CONFIG_SLUB
>  	spinlock_t list_lock;
>  	unsigned long nr_partial;
>  	struct list_head partial;
> @@ -800,8 +743,6 @@ struct kmem_cache_node {
>  	atomic_long_t total_objects;
>  	struct list_head full;
>  #endif
> -#endif
> -
>  };
>  
>  static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
> @@ -818,7 +759,7 @@ static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
>  		if ((__n = get_node(__s, __node)))
>  
>  
> -#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
> +#ifdef CONFIG_SLUB_DEBUG
>  void dump_unreclaimable_slab(void);
>  #else
>  static inline void dump_unreclaimable_slab(void)
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 8d431193c273..63b8411db7ce 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -71,10 +71,8 @@ static int __init setup_slab_merge(char *str)
>  	return 1;
>  }
>  
> -#ifdef CONFIG_SLUB
>  __setup_param("slub_nomerge", slub_nomerge, setup_slab_nomerge, 0);
>  __setup_param("slub_merge", slub_merge, setup_slab_merge, 0);
> -#endif
>  
>  __setup("slab_nomerge", setup_slab_nomerge);
>  __setup("slab_merge", setup_slab_merge);
> @@ -197,10 +195,6 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
>  		if (s->size - size >= sizeof(void *))
>  			continue;
>  
> -		if (IS_ENABLED(CONFIG_SLAB) && align &&
> -		    (align > s->align || s->align % align))
> -			continue;
> -
>  		return s;
>  	}
>  	return NULL;
> @@ -1222,12 +1216,8 @@ void cache_random_seq_destroy(struct kmem_cache *cachep)
>  }
>  #endif /* CONFIG_SLAB_FREELIST_RANDOM */
>  
> -#if defined(CONFIG_SLAB) || defined(CONFIG_SLUB_DEBUG)
> -#ifdef CONFIG_SLAB
> -#define SLABINFO_RIGHTS (0600)
> -#else
> +#ifdef CONFIG_SLUB_DEBUG
>  #define SLABINFO_RIGHTS (0400)
> -#endif
>  
>  static void print_slabinfo_header(struct seq_file *m)
>  {
> @@ -1235,18 +1225,10 @@ static void print_slabinfo_header(struct seq_file *m)
>  	 * Output format version, so at least we can change it
>  	 * without _too_ many complaints.
>  	 */
> -#ifdef CONFIG_DEBUG_SLAB
> -	seq_puts(m, "slabinfo - version: 2.1 (statistics)\n");
> -#else
>  	seq_puts(m, "slabinfo - version: 2.1\n");
> -#endif
>  	seq_puts(m, "# name ");
>  	seq_puts(m, " : tunables ");
>  	seq_puts(m, " : slabdata ");
> -#ifdef CONFIG_DEBUG_SLAB
> -	seq_puts(m, " : globalstat ");
> -	seq_puts(m, " : cpustat ");
> -#endif
>  	seq_putc(m, '\n');
>  }
>  
> @@ -1370,7 +1352,7 @@ static int __init slab_proc_init(void)
>  }
>  module_init(slab_proc_init);
>  
> -#endif /* CONFIG_SLAB || CONFIG_SLUB_DEBUG */
> +#endif /* CONFIG_SLUB_DEBUG */
>  
>  static __always_inline __realloc_size(2) void *
>  __do_krealloc(const void *p, size_t new_size, gfp_t flags)
> 
> --

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

> 2.42.1
> 
> 