linux-mm.kvack.org archive mirror
From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
To: kernel test robot <oliver.sang@intel.com>
Cc: lkp@lists.01.org, lkp@intel.com, akpm@linux-foundation.org,
	 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	frederic@kernel.org,  tglx@linutronix.de, peterz@infradead.org,
	mtosatti@redhat.com, nilal@redhat.com,  mgorman@suse.de,
	linux-rt-users@vger.kernel.org, vbabka@suse.cz, cl@linux.com,
	 ppandit@redhat.com
Subject: Re: [mm/page_alloc]  5541e53659: BUG:spinlock_bad_magic_on_CPU
Date: Thu, 04 Nov 2021 17:39:44 +0100	[thread overview]
Message-ID: <3190fc080138108c189626e22676aaae80b72414.camel@redhat.com> (raw)
In-Reply-To: <20211104143809.GB6499@xsang-OptiPlex-9020>

On Thu, 2021-11-04 at 22:38 +0800, kernel test robot wrote:
> 
> Greeting,
> 
> FYI, we noticed the following commit (built with gcc-9):
> 
> commit: 5541e5365954069e4c7b649831c0e41bc9e5e081 ("[PATCH v2 2/3] mm/page_alloc: Convert per-cpu lists' local locks to per-cpu spin locks")
> url: https://github.com/0day-ci/linux/commits/Nicolas-Saenz-Julienne/mm-page_alloc-Remote-per-cpu-page-list-drain-support/20211104-010825
> base: https://github.com/hnaz/linux-mm master
> patch link: https://lore.kernel.org/lkml/20211103170512.2745765-3-nsaenzju@redhat.com
> 
> in testcase: boot
> 
> on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G
> 
> caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
> 
> 
> +--------------------------------------------+------------+------------+
> |                                            | 69c421f2b4 | 5541e53659 |
> +--------------------------------------------+------------+------------+
> | boot_successes                             | 11         | 0          |
> | boot_failures                              | 0          | 11         |
> | BUG:spinlock_bad_magic_on_CPU              | 0          | 11         |
> | BUG:using_smp_processor_id()in_preemptible | 0          | 11         |
> +--------------------------------------------+------------+------------+
> 
> 
> If you fix the issue, kindly add following tag
> Reported-by: kernel test robot <oliver.sang@intel.com>
> 
> 
> [    0.161872][    T0] BUG: spinlock bad magic on CPU#0, swapper/0
> [    0.162248][    T0]  lock: 0xeb24bef0, .magic: 00000000, .owner: swapper/0, .owner_cpu: 0
> [    0.162767][    T0] CPU: 0 PID: 0 Comm: swapper Not tainted 5.15.0-rc7-mm1-00437-g5541e5365954 #1
> [    0.163325][    T0] Call Trace:
> [ 0.163524][ T0] dump_stack_lvl (lib/dump_stack.c:107 (discriminator 4)) 
> [ 0.163802][ T0] dump_stack (lib/dump_stack.c:114) 
> [ 0.164050][ T0] spin_bug (kernel/locking/spinlock_debug.c:70 kernel/locking/spinlock_debug.c:77) 
> [ 0.164296][ T0] do_raw_spin_unlock (arch/x86/include/asm/atomic.h:29 include/linux/atomic/atomic-instrumented.h:28 include/asm-generic/qspinlock.h:28 kernel/locking/spinlock_debug.c:100 kernel/locking/spinlock_debug.c:140) 
> [ 0.164624][ T0] _raw_spin_unlock_irqrestore (include/linux/spinlock_api_smp.h:160 kernel/locking/spinlock.c:194) 
> [ 0.164971][ T0] free_unref_page (include/linux/spinlock.h:423 mm/page_alloc.c:3400) 
> [ 0.165253][ T0] free_the_page (mm/page_alloc.c:699) 
> [ 0.165521][ T0] __free_pages (mm/page_alloc.c:5453) 
> [ 0.165785][ T0] add_highpages_with_active_regions (include/linux/mm.h:2511 arch/x86/mm/init_32.c:416) 
> [ 0.166179][ T0] set_highmem_pages_init (arch/x86/mm/highmem_32.c:30) 
> [ 0.166501][ T0] mem_init (arch/x86/mm/init_32.c:749 (discriminator 2)) 
> [ 0.166749][ T0] start_kernel (init/main.c:842 init/main.c:988) 
> [ 0.167026][ T0] ? early_idt_handler_common (arch/x86/kernel/head_32.S:417) 
> [ 0.167369][ T0] i386_start_kernel (arch/x86/kernel/head32.c:57) 
> [ 0.167662][ T0] startup_32_smp (arch/x86/kernel/head_32.S:328) 

I did test this with lock debugging enabled, but I somehow missed this stack
trace. The boot pagesets are used to free pages before setup_zone_pageset()
runs, so their lock has to be initialized earlier, in per_cpu_pages_init().
Here's the fix:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7dbdab100461..c8964e28aa59 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6853,6 +6853,7 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonesta
        pcp->high = BOOT_PAGESET_HIGH;
        pcp->batch = BOOT_PAGESET_BATCH;
        pcp->free_factor = 0;
+       spin_lock_init(&pcp->lock);
 }
 
 static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high,
@@ -6902,7 +6903,6 @@ void __meminit setup_zone_pageset(struct zone *zone)
                struct per_cpu_zonestat *pzstats;
 
                pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
-               spin_lock_init(&pcp->lock);
                pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
                per_cpu_pages_init(pcp, pzstats);
        }

-- 
Nicolás Sáenz




Thread overview: 17+ messages
2021-11-03 17:05 [PATCH v2 0/3] mm/page_alloc: Remote per-cpu page list drain support Nicolas Saenz Julienne
2021-11-03 17:05 ` [PATCH v2 1/3] mm/page_alloc: Don't pass pfn to free_unref_page_commit() Nicolas Saenz Julienne
2021-11-23 14:41   ` Vlastimil Babka
2021-11-03 17:05 ` [PATCH v2 2/3] mm/page_alloc: Convert per-cpu lists' local locks to per-cpu spin locks Nicolas Saenz Julienne
2021-11-04 14:38   ` [mm/page_alloc] 5541e53659: BUG:spinlock_bad_magic_on_CPU kernel test robot
2021-11-04 16:39     ` Nicolas Saenz Julienne [this message]
2021-11-03 17:05 ` [PATCH v2 3/3] mm/page_alloc: Remotely drain per-cpu lists Nicolas Saenz Julienne
2021-12-03 14:13   ` Mel Gorman
2021-12-09 10:50     ` Nicolas Saenz Julienne
2021-12-09 17:45     ` Marcelo Tosatti
2021-12-10 10:55       ` Mel Gorman
2021-12-14 10:58         ` Marcelo Tosatti
2021-12-14 11:42           ` Christoph Lameter
2021-12-14 12:25             ` Marcelo Tosatti
2021-11-23 14:58 ` [PATCH v2 0/3] mm/page_alloc: Remote per-cpu page list drain support Vlastimil Babka
2021-11-30 18:09   ` Nicolas Saenz Julienne
2021-12-01 14:01     ` Marcelo Tosatti
