From: kernel test robot <lkp@intel.com>
To: Alexei Starovoitov <ast@kernel.org>
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
linux-mm@kvack.org, Vlastimil Babka <vbabka@suse.cz>
Subject: [vbabka-slab:slab/for-6.18/kmalloc_nolock 14/14] mm/slub.c:3866:2: warning: variable 'flags' is used uninitialized whenever '&&' condition is false
Date: Fri, 12 Sep 2025 18:25:35 +0800 [thread overview]
Message-ID: <202509121822.aOV6H1ts-lkp@intel.com> (raw)
tree: https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git slab/for-6.18/kmalloc_nolock
head: 8014922e0e72dde7684abebeab3404720401679f
commit: 8014922e0e72dde7684abebeab3404720401679f [14/14] slab: Introduce kmalloc_nolock() and kfree_nolock().
config: x86_64-buildonly-randconfig-004-20250912 (https://download.01.org/0day-ci/archive/20250912/202509121822.aOV6H1ts-lkp@intel.com/config)
compiler: clang version 20.1.8 (https://github.com/llvm/llvm-project 87f0227cb60147a26a1eeb4fb06e3b505e9c7261)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250912/202509121822.aOV6H1ts-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202509121822.aOV6H1ts-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> mm/slub.c:3866:2: warning: variable 'flags' is used uninitialized whenever '&&' condition is false [-Wsometimes-uninitialized]
3866 | local_lock_cpu_slab(s, flags);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
mm/slub.c:3776:2: note: expanded from macro 'local_lock_cpu_slab'
3776 | lockdep_assert(local_trylock_irqsave(&(s)->cpu_slab->lock, flags))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/lockdep.h:279:15: note: expanded from macro 'lockdep_assert'
279 | do { WARN_ON(debug_locks && !(cond)); } while (0)
| ^~~~~~~~~~~
include/asm-generic/bug.h:171:25: note: expanded from macro 'WARN_ON'
171 | int __ret_warn_on = !!(condition); \
| ^~~~~~~~~
mm/slub.c:3891:27: note: uninitialized use occurs here
3891 | local_unlock_cpu_slab(s, flags);
| ^~~~~
mm/slub.c:3780:48: note: expanded from macro 'local_unlock_cpu_slab'
3780 | local_unlock_irqrestore(&(s)->cpu_slab->lock, flags)
| ^~~~~
include/linux/local_lock.h:52:48: note: expanded from macro 'local_unlock_irqrestore'
52 | __local_unlock_irqrestore(this_cpu_ptr(lock), flags)
| ^~~~~
include/linux/local_lock_internal.h:202:21: note: expanded from macro '__local_unlock_irqrestore'
202 | local_irq_restore(flags); \
| ^~~~~
include/linux/irqflags.h:240:61: note: expanded from macro 'local_irq_restore'
240 | #define local_irq_restore(flags) do { raw_local_irq_restore(flags); } while (0)
| ^~~~~
include/linux/irqflags.h:179:26: note: expanded from macro 'raw_local_irq_restore'
179 | arch_local_irq_restore(flags); \
| ^~~~~
mm/slub.c:3866:2: note: remove the '&&' if its condition is always true
3866 | local_lock_cpu_slab(s, flags);
| ^
mm/slub.c:3776:2: note: expanded from macro 'local_lock_cpu_slab'
3776 | lockdep_assert(local_trylock_irqsave(&(s)->cpu_slab->lock, flags))
| ^
include/linux/lockdep.h:279:15: note: expanded from macro 'lockdep_assert'
279 | do { WARN_ON(debug_locks && !(cond)); } while (0)
| ^
mm/slub.c:3863:21: note: initialize the variable 'flags' to silence this warning
3863 | unsigned long flags;
| ^
| = 0
1 warning generated.
vim +3866 mm/slub.c
3852
3853 /*
3854 * Put a slab into a partial slab slot if available.
3855 *
3856 * If we did not find a slot then simply move all the partials to the
3857 * per node partial list.
3858 */
3859 static void put_cpu_partial(struct kmem_cache *s, struct slab *slab, int drain)
3860 {
3861 struct slab *oldslab;
3862 struct slab *slab_to_put = NULL;
3863 unsigned long flags;
3864 int slabs = 0;
3865
> 3866 local_lock_cpu_slab(s, flags);
3867
3868 oldslab = this_cpu_read(s->cpu_slab->partial);
3869
3870 if (oldslab) {
3871 if (drain && oldslab->slabs >= s->cpu_partial_slabs) {
3872 /*
3873 * Partial array is full. Move the existing set to the
3874 * per node partial list. Postpone the actual unfreezing
3875 * outside of the critical section.
3876 */
3877 slab_to_put = oldslab;
3878 oldslab = NULL;
3879 } else {
3880 slabs = oldslab->slabs;
3881 }
3882 }
3883
3884 slabs++;
3885
3886 slab->slabs = slabs;
3887 slab->next = oldslab;
3888
3889 this_cpu_write(s->cpu_slab->partial, slab);
3890
3891 local_unlock_cpu_slab(s, flags);
3892
3893 if (slab_to_put) {
3894 __put_partials(s, slab_to_put);
3895 stat(s, CPU_PARTIAL_DRAIN);
3896 }
3897 }
3898
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
Thread overview: 4+ messages
2025-09-12 10:25 kernel test robot [this message]
2025-09-12 13:23 ` [vbabka-slab:slab/for-6.18/kmalloc_nolock 14/14] mm/slub.c:3866:2: warning: variable 'flags' is used uninitialized whenever '&&' condition is false Vlastimil Babka
2025-09-12 16:39 ` Alexei Starovoitov
2025-09-13 20:46 ` Vlastimil Babka