Message-ID: <1569337401.5576.217.camel@lca.pw>
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock
From: Qian Cai
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Oscar Salvador, Michal Hocko, Pavel Tatashin, Dan Williams, Thomas Gleixner
Date: Tue, 24 Sep 2019 11:03:21 -0400
In-Reply-To: <20190924143615.19628-1-david@redhat.com>
References: <20190924143615.19628-1-david@redhat.com>

On Tue, 2019-09-24 at 16:36 +0200, David Hildenbrand wrote:
> Since commit 3f906ba23689 ("mm/memory-hotplug: switch locking to a percpu
> rwsem") we do a cpus_read_lock() in mem_hotplug_begin(). This was
> introduced to fix a potential deadlock between get_online_mems() and
> get_online_cpus() - the memory and cpu hotplug lock. The root issue was
> that build_all_zonelists() -> stop_machine() required the cpu hotplug lock:
>     The reason is that memory hotplug takes the memory hotplug lock and
>     then calls stop_machine() which calls get_online_cpus(). That's the
>     reverse lock order to get_online_cpus(); get_online_mems(); in
>     mm/slub_common.c
>
> So memory hotplug never really required any cpu lock itself, only
> stop_machine() and lru_add_drain_all() required it. Back then,
> stop_machine_cpuslocked() and lru_add_drain_all_cpuslocked() were used
> as the cpu hotplug lock was now obtained in the caller.
>
> Since commit 11cd8638c37f ("mm, page_alloc: remove stop_machine from build
> all_zonelists"), the stop_machine_cpuslocked() call is gone.
> build_all_zonelists() no longer requires the cpu lock and no longer
> makes use of stop_machine().
>
> Since commit 9852a7212324 ("mm: drop hotplug lock from
> lru_add_drain_all()"), lru_add_drain_all() "Doesn't need any cpu hotplug
> locking because we do rely on per-cpu kworkers being shut down before our
> page_alloc_cpu_dead callback is executed on the offlined cpu.". The
> lru_add_drain_all_cpuslocked() variant was removed.
>
> So there is nothing left that requires the cpu hotplug lock. The memory
> hotplug lock and the device hotplug lock are sufficient.
>
> Cc: Andrew Morton
> Cc: Oscar Salvador
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Dan Williams
> Cc: Thomas Gleixner
> Signed-off-by: David Hildenbrand
> ---
>
> RFC -> v1:
> - Reword and add more details why the cpu hotplug lock was needed here
>   in the first place, and why we no longer require it.
>
> ---
>  mm/memory_hotplug.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c3e9aed6023f..5fa30f3010e1 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -88,14 +88,12 @@ __setup("memhp_default_state=", setup_memhp_default_state);
>  
>  void mem_hotplug_begin(void)
>  {
> -	cpus_read_lock();
>  	percpu_down_write(&mem_hotplug_lock);
>  }
>  
>  void mem_hotplug_done(void)
>  {
>  	percpu_up_write(&mem_hotplug_lock);
> -	cpus_read_unlock();
>  }
>  
>  u64 max_mem_size = U64_MAX;

While at it, it might be a good time to rethink the whole locking over there,
as right now reading files under /sys/kernel/slab/ could trigger a possible
deadlock anyway.
[  442.258806][ T5224] WARNING: possible circular locking dependency detected
[  442.265678][ T5224] 5.3.0-rc7-mm1+ #6 Tainted: G             L
[  442.271766][ T5224] ------------------------------------------------------
[  442.278635][ T5224] cat/5224 is trying to acquire lock:
[  442.283857][ T5224] ffff900012ac3120 (mem_hotplug_lock.rw_sem){++++}, at: show_slab_objects+0x94/0x3a8
[  442.293189][ T5224]
[  442.293189][ T5224] but task is already holding lock:
[  442.300404][ T5224] b8ff009693eee398 (kn->count#45){++++}, at: kernfs_seq_start+0x44/0xf0
[  442.308587][ T5224]
[  442.308587][ T5224] which lock already depends on the new lock.
[  442.308587][ T5224]
[  442.318841][ T5224]
[  442.318841][ T5224] the existing dependency chain (in reverse order) is:
[  442.327705][ T5224]
[  442.327705][ T5224] -> #2 (kn->count#45){++++}:
[  442.334413][ T5224]        lock_acquire+0x31c/0x360
[  442.339286][ T5224]        __kernfs_remove+0x290/0x490
[  442.344428][ T5224]        kernfs_remove+0x30/0x44
[  442.349224][ T5224]        sysfs_remove_dir+0x70/0x88
[  442.354276][ T5224]        kobject_del+0x50/0xb0
[  442.358890][ T5224]        sysfs_slab_unlink+0x2c/0x38
[  442.364025][ T5224]        shutdown_cache+0xa0/0xf0
[  442.368898][ T5224]        kmemcg_cache_shutdown_fn+0x1c/0x34
[  442.374640][ T5224]        kmemcg_workfn+0x44/0x64
[  442.379428][ T5224]        process_one_work+0x4f4/0x950
[  442.384649][ T5224]        worker_thread+0x390/0x4bc
[  442.389610][ T5224]        kthread+0x1cc/0x1e8
[  442.394052][ T5224]        ret_from_fork+0x10/0x18
[  442.398835][ T5224]
[  442.398835][ T5224] -> #1 (slab_mutex){+.+.}:
[  442.405365][ T5224]        lock_acquire+0x31c/0x360
[  442.410240][ T5224]        __mutex_lock_common+0x16c/0xf78
[  442.415722][ T5224]        mutex_lock_nested+0x40/0x50
[  442.420855][ T5224]        memcg_create_kmem_cache+0x38/0x16c
[  442.426598][ T5224]        memcg_kmem_cache_create_func+0x3c/0x70
[  442.432687][ T5224]        process_one_work+0x4f4/0x950
[  442.437908][ T5224]        worker_thread+0x390/0x4bc
[  442.442868][ T5224]        kthread+0x1cc/0x1e8
[  442.447307][ T5224]        ret_from_fork+0x10/0x18
[  442.452090][ T5224]
[  442.452090][ T5224] -> #0 (mem_hotplug_lock.rw_sem){++++}:
[  442.459748][ T5224]        validate_chain+0xd10/0x2bcc
[  442.464883][ T5224]        __lock_acquire+0x7f4/0xb8c
[  442.469930][ T5224]        lock_acquire+0x31c/0x360
[  442.474803][ T5224]        get_online_mems+0x54/0x150
[  442.479850][ T5224]        show_slab_objects+0x94/0x3a8
[  442.485072][ T5224]        total_objects_show+0x28/0x34
[  442.490292][ T5224]        slab_attr_show+0x38/0x54
[  442.495166][ T5224]        sysfs_kf_seq_show+0x198/0x2d4
[  442.500473][ T5224]        kernfs_seq_show+0xa4/0xcc
[  442.505433][ T5224]        seq_read+0x30c/0x8a8
[  442.509958][ T5224]        kernfs_fop_read+0xa8/0x314
[  442.515007][ T5224]        __vfs_read+0x88/0x20c
[  442.519620][ T5224]        vfs_read+0xd8/0x10c
[  442.524060][ T5224]        ksys_read+0xb0/0x120
[  442.528586][ T5224]        __arm64_sys_read+0x54/0x88
[  442.533634][ T5224]        el0_svc_handler+0x170/0x240
[  442.538768][ T5224]        el0_svc+0x8/0xc
[  442.542858][ T5224]
[  442.542858][ T5224] other info that might help us debug this:
[  442.542858][ T5224]
[  442.552936][ T5224] Chain exists of:
[  442.552936][ T5224]   mem_hotplug_lock.rw_sem --> slab_mutex --> kn->count#45
[  442.552936][ T5224]
[  442.565803][ T5224]  Possible unsafe locking scenario:
[  442.565803][ T5224]
[  442.573105][ T5224]        CPU0                    CPU1
[  442.578322][ T5224]        ----                    ----
[  442.583539][ T5224]   lock(kn->count#45);
[  442.587545][ T5224]                                lock(slab_mutex);
[  442.593898][ T5224]                                lock(kn->count#45);
[  442.600433][ T5224]   lock(mem_hotplug_lock.rw_sem);
[  442.605393][ T5224]
[  442.605393][ T5224]  *** DEADLOCK ***
[  442.605393][ T5224]
[  442.613390][ T5224] 3 locks held by cat/5224:
[  442.617740][ T5224]  #0: 9eff00095b14b2a0 (&p->lock){+.+.}, at: seq_read+0x4c/0x8a8
[  442.625399][ T5224]  #1: 0eff008997041480 (&of->mutex){+.+.}, at: kernfs_seq_start+0x34/0xf0
[  442.633842][ T5224]  #2: b8ff009693eee398 (kn->count#45){++++}, at: kernfs_seq_start+0x44/0xf0
[  442.642477][ T5224]
[  442.642477][ T5224] stack backtrace:
[  442.648221][ T5224] CPU: 117 PID: 5224 Comm: cat Tainted: G             L    5.3.0-rc7-mm1+ #6
[  442.656826][ T5224] Hardware name: HPE Apollo 70             /C01_APACHE_MB         , BIOS L50_5.13_1.11 06/18/2019
[  442.667253][ T5224] Call trace:
[  442.670391][ T5224]  dump_backtrace+0x0/0x248
[  442.674743][ T5224]  show_stack+0x20/0x2c
[  442.678750][ T5224]  dump_stack+0xd0/0x140
[  442.682841][ T5224]  print_circular_bug+0x368/0x380
[  442.687715][ T5224]  check_noncircular+0x248/0x250
[  442.692501][ T5224]  validate_chain+0xd10/0x2bcc
[  442.697115][ T5224]  __lock_acquire+0x7f4/0xb8c
[  442.701641][ T5224]  lock_acquire+0x31c/0x360
[  442.705993][ T5224]  get_online_mems+0x54/0x150
[  442.710519][ T5224]  show_slab_objects+0x94/0x3a8
[  442.715219][ T5224]  total_objects_show+0x28/0x34
[  442.719918][ T5224]  slab_attr_show+0x38/0x54
[  442.724271][ T5224]  sysfs_kf_seq_show+0x198/0x2d4
[  442.729056][ T5224]  kernfs_seq_show+0xa4/0xcc
[  442.733494][ T5224]  seq_read+0x30c/0x8a8
[  442.737498][ T5224]  kernfs_fop_read+0xa8/0x314
[  442.742025][ T5224]  __vfs_read+0x88/0x20c
[  442.746118][ T5224]  vfs_read+0xd8/0x10c
[  442.750036][ T5224]  ksys_read+0xb0/0x120
[  442.754042][ T5224]  __arm64_sys_read+0x54/0x88
[  442.758569][ T5224]  el0_svc_handler+0x170/0x240
[  442.763180][ T5224]  el0_svc+0x8/0xc