From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <59098a73-636a-497d-bc20-0abc90b4868c@arm.com>
Date: Sun, 10 Mar 2024 08:23:12 +0000
Subject: Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch processed
From: Ryan Roberts
To: Matthew Wilcox
Cc: Zi Yan, Andrew Morton, linux-mm@kvack.org, Yang Shi, Huang Ying
References: <03CE3A00-917C-48CC-8E1C-6A98713C817C@nvidia.com>
 <090c9d68-9296-4338-9afa-5369bb1db66c@arm.com>
 <95f91a7f-13dd-4212-97ce-cf0b4060828a@arm.com>
 <08c01e9d-beda-435c-93ac-f303a89379df@arm.com>

On 10/03/2024 04:23, Matthew Wilcox wrote:
> On Sat, Mar 09, 2024 at 09:38:42AM +0000, Ryan Roberts wrote:
>>> I think split_queue_len is getting out of sync with the number of items on the
>>> queue? We only decrement it if we lost the race with folio_put(). But we are
>>> unconditionally taking folios off the list here. So we are definitely out of
>>> sync until we take the lock again below. But we only put folios back on the list
>>> that failed to split. A successful split used to decrement this variable
>>> (because the folio was on _a_ list). But now it doesn't. So we are always
>>> mismatched after the first failed split?
>>
>> Oops, I meant first *successful* split.
>
> Agreed, nice fix.
>
>> I've run the full test 5 times, and haven't seen any slow down or RCU stall
>> warning. But on the 5th time, I saw the non-NULL mapping oops (your new check
>> did not trigger):
>>
>> [ 944.475632] BUG: Bad page state in process usemem pfn:252932
>> [ 944.477314] page:00000000ad4feba6 refcount:0 mapcount:0
>> mapping:000000003a777cd9 index:0x1 pfn:0x252932
>> [ 944.478575] aops:0x0 ino:dead000000000122
>> [ 944.479130] flags: 0xbfffc0000000000(node=0|zone=2|lastcpupid=0xffff)
>> [ 944.479934] page_type: 0xffffffff()
>> [ 944.480328] raw: 0bfffc0000000000 0000000000000000 fffffc00084a4c90
>> fffffc00084a4c90
>> [ 944.481734] raw: 0000000000000001 0000000000000000 00000000ffffffff
>> 0000000000000000
>> [ 944.482475] page dumped because: non-NULL mapping
>
>> So what do we know?
>>
>> - the above page looks like it was the 3rd page of a large folio
>> - words 3 and 4 are the same, meaning they are likely empty _deferred_list
>> - pfn alignment is correct for this
>> - The _deferred_list for all previously freed large folios was empty
>> - but the folio could have been in the new deferred split batch?
>
> I don't think it could be in a deferred split batch because we hold the
> refcount at that point ...
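
As background for the "non-NULL mapping" report above: in the tail page that
carries the folio's _deferred_list, the list node's prev pointer lives in the
same word that the bad-page check reads as ->mapping, so an empty
(self-pointing) _deferred_list is exactly what shows up as a non-NULL mapping
with "words 3 and 4 the same". Below is a minimal userspace sketch of that
overlap; the union and the names tail_page_words / page_view /
word_before_mapping are invented for illustration only, the real thing being
the struct page / struct folio overlay in the kernel:

#include <stdio.h>

struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *list)
{
	list->next = list;
	list->prev = list;
}

/*
 * Two views of the same pair of words, standing in for the struct page /
 * struct folio overlay: in the page view the second word is ->mapping,
 * in the folio view that same word is _deferred_list.prev.
 */
union tail_page_words {
	struct {
		void *word_before_mapping;	/* aliases _deferred_list.next */
		void *mapping;			/* aliases _deferred_list.prev */
	} page_view;
	struct list_head _deferred_list;
};

int main(void)
{
	union tail_page_words w = { .page_view = { NULL, NULL } };

	printf("mapping before: %p\n", w.page_view.mapping);	/* (nil) */

	/* "_deferred_list reinitialized to empty while on the free list" */
	INIT_LIST_HEAD(&w._deferred_list);

	printf("mapping after:  %p\n", w.page_view.mapping);	/* non-NULL: &w */
	printf("list node addr: %p\n", (void *)&w._deferred_list);
	return 0;
}

Run, it prints a NULL mapping before INIT_LIST_HEAD() and the node's own
non-NULL address afterwards, with both words equal, which is the same
signature as the raw dump above.
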
>
>> - free_tail_page_prepare() zeroed mapping/_deferred_list during free
>> - _deferred_list was subsequently reinitialized to "empty" while on free list
>>
>> So how about this for a rough hypothesis:
>>
>>
>> CPU1                                    CPU2
>> deferred_split_scan
>>   list_del_init
>>   folio_batch_add
>>                                         folio_put -> free
>>                                           free_tail_page_prepare
>>                                             is on deferred list? -> no
>> split_huge_page_to_list_to_order
>>   list_empty(folio->_deferred_list)
>>     -> yes
>>   list_del_init
>>     mapping = NULL
>>       -> (_deferred_list.prev = NULL)
>>                                         put page on free list
>>                                           INIT_LIST_HEAD(entry);
>>                                             -> "mapping" no longer NULL
>>
>>
>> But CPU1 is holding a reference, so that could only happen if a reference was
>> put one too many times. Ugh.
>
> Before we start blaming the CPU for doing something impossible,

It doesn't sound completely impossible to me that there is a rare error path
that accidentally folio_put()s an extra time...

> what if
> we're taking the wrong lock?

...but yeah, equally as plausible, I guess.

> I know that seems crazy, but if page->flags
> gets corrupted to the point where we change some of the bits in the
> nid, when we free the folio, we call folio_undo_large_rmappable(),
> get the wrong ds_queue back from get_deferred_split_queue(), take the
> wrong split_queue_lock, corrupt the deferred list of a different node,
> and bad things happen?
>
> I don't think we can detect that folio->nid has become corrupted in the
> page allocation/freeing code (can we?), but we can tell if a folio is
> on the wrong ds_queue in deferred_split_scan():
>
> 	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
> 						_deferred_list) {
> +		VM_BUG_ON_FOLIO(folio_nid(folio) != sc->nid, folio);
> +		VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
> 		list_del_init(&folio->_deferred_list);
>
> (also testing the hypothesis that somehow a split folio has ended up
> on the deferred split list)

OK, ran with these checks, and got the following oops:

[ 411.719461] page:0000000059c1826b refcount:0 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x8c6a40
[ 411.720807] page:0000000059c1826b refcount:0 mapcount:-128 mapping:0000000000000000 index:0x1 pfn:0x8c6a40
[ 411.721792] flags: 0xbfffc0000000000(node=0|zone=2|lastcpupid=0xffff)
[ 411.722453] page_type: 0xffffff7f(buddy)
[ 411.722870] raw: 0bfffc0000000000 fffffc001227e808 fffffc002a857408 0000000000000000
[ 411.723672] raw: 0000000000000001 0000000000000004 00000000ffffff7f 0000000000000000
[ 411.724470] page dumped because: VM_BUG_ON_FOLIO(!folio_test_large(folio))
[ 411.725176] ------------[ cut here ]------------
[ 411.725642] kernel BUG at include/linux/mm.h:1191!
[ 411.726341] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
[ 411.727021] Modules linked in:
[ 411.727329] CPU: 40 PID: 2704 Comm: usemem Not tainted 6.8.0-rc5-00391-g44b0dc848590-dirty #45
[ 411.728179] Hardware name: linux,dummy-virt (DT)
[ 411.728657] pstate: 604000c5 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 411.729381] pc : __dump_page+0x450/0x4a8
[ 411.729789] lr : __dump_page+0x450/0x4a8
[ 411.730187] sp : ffff80008b97b6f0
[ 411.730525] x29: ffff80008b97b6f0 x28: 00000000000000e2 x27: ffff80008b97b988
[ 411.731227] x26: ffff80008b97b988 x25: ffff800082105000 x24: 0000000000000001
[ 411.731926] x23: 0000000000000000 x22: 0000000000000001 x21: fffffc00221a9000
[ 411.732630] x20: fffffc00221a9000 x19: fffffc00221a9000 x18: ffffffffffffffff
[ 411.733331] x17: 3030303030303030 x16: 2066376666666666 x15: 076c076f07660721
[ 411.734035] x14: 0728074f0749074c x13: 076c076f07660721 x12: 0000000000000000
[ 411.734757] x11: 0720072007200729 x10: ffff0013f5e756c0 x9 : ffff80008014b604
[ 411.735473] x8 : 00000000ffffbfff x7 : ffff0013f5e756c0 x6 : 0000000000000000
[ 411.736198] x5 : ffff0013a5a24d88 x4 : 0000000000000000 x3 : 0000000000000000
[ 411.736923] x2 : 0000000000000000 x1 : ffff0000c2849b80 x0 : 000000000000003e
[ 411.737621] Call trace:
[ 411.737870]  __dump_page+0x450/0x4a8
[ 411.738229]  dump_page+0x2c/0x70
[ 411.738551]  deferred_split_scan+0x258/0x368
[ 411.738973]  do_shrink_slab+0x184/0x750
[ 411.739355]  shrink_slab+0x4d4/0x9c0
[ 411.739729]  shrink_node+0x214/0x860
[ 411.740098]  do_try_to_free_pages+0xd0/0x560
[ 411.740540]  try_to_free_mem_cgroup_pages+0x14c/0x330
[ 411.741048]  try_charge_memcg+0x1cc/0x788
[ 411.741456]  __mem_cgroup_charge+0x6c/0xd0
[ 411.741884]  __handle_mm_fault+0x1000/0x1a28
[ 411.742306]  handle_mm_fault+0x7c/0x418
[ 411.742698]  do_page_fault+0x100/0x690
[ 411.743080]  do_translation_fault+0xb4/0xd0
[ 411.743508]  do_mem_abort+0x4c/0xa8
[ 411.743876]  el0_da+0x54/0xb8
[ 411.744177]  el0t_64_sync_handler+0xe4/0x158
[ 411.744602]  el0t_64_sync+0x190/0x198
[ 411.744976] Code: f000de00 912c4021 9126a000 97f79727 (d4210000)
[ 411.745573] ---[ end trace 0000000000000000 ]---

The new VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio); is firing, but then
when dump_page() does this:

	if (compound) {
		pr_warn("head:%p order:%u entire_mapcount:%d nr_pages_mapped:%d pincount:%d\n",
				head, compound_order(head),
				folio_entire_mapcount(folio),
				folio_nr_pages_mapped(folio),
				atomic_read(&folio->_pincount));
	}

the VM_BUG_ON_FOLIO(!folio_test_large(folio), folio) inside
folio_entire_mapcount() fires, so we have a nested oops. So the very first
line is from the first oops and the rest is from the second. I guess we are
racing with the page being freed?

I find the change in mapcount interesting; 0 -> -128. Not sure why this would
happen?

Given the NID check didn't fire, I wonder if this points more towards an extra
folio_put() than a corrupt folio nid?
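
One hedged thought on the 0 -> -128: the second dump shows
page_type: 0xffffff7f(buddy), and _mapcount shares that word with page_type.
Assuming the dump prints page_mapcount()-style output, i.e. _mapcount + 1
(an assumption here, not verified against this exact tree), a page that has
just been handed back to the buddy allocator would read back as exactly -128,
which would fit the "racing with the page being freed" reading. A quick check
of the arithmetic:

#include <stdio.h>

int main(void)
{
	/* page_type word of the freed page, as shown in the second dump */
	unsigned int page_type = 0xffffff7f;

	/* _mapcount shares this word; reinterpret as signed (two's complement) */
	int mapcount_raw = (int)page_type;			/* -129 */

	/* assuming the dump prints _mapcount + 1, page_mapcount()-style */
	printf("printed mapcount would be: %d\n", mapcount_raw + 1);	/* -128 */
	return 0;
}
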