Message-ID: <22f0e262-982e-ea80-e52a-a3c924b31d58@redhat.com>
Date: Tue, 14 Feb 2023 12:22:58 +0100
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [PATCH] mm: page_alloc: don't allocate page from memoryless nodes
To: Qi Zheng, Mike Rapoport
Cc: Vlastimil Babka, Qi Zheng, akpm@linux-foundation.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Teng Hu, Matthew Wilcox, Mel Gorman,
 Oscar Salvador, Muchun Song
In-Reply-To: <67240e55-af49-f20a-2b4b-b7d574cd910d@gmail.com>
References: <20230212110305.93670-1-zhengqi.arch@bytedance.com>
 <2484666e-e78e-549d-e075-b2c39d460d71@suse.cz>
 <85af4ada-96c8-1f99-90fa-9b6d63d0016e@bytedance.com>
 <67240e55-af49-f20a-2b4b-b7d574cd910d@gmail.com>

On 14.02.23 11:26, Qi Zheng wrote:
> 
> 
> On 2023/2/14 17:43, Mike Rapoport wrote:
>> On Tue, Feb 14, 2023 at 10:17:03AM +0100, David Hildenbrand wrote:
>>> On 14.02.23 09:42, Vlastimil Babka wrote:
>>>> On 2/13/23 12:00, Qi Zheng wrote:
>>>>>
>>>>>
>>>>> On 2023/2/13 16:47, Vlastimil Babka wrote:
>>>>>> On 2/12/23 12:03, Qi Zheng wrote:
>>>>>>> In x86, numa_register_memblks() is only interested in
>>>>>>> those nodes which have enough memory, so it skips over
>>>>>>> all nodes with memory below NODE_MIN_SIZE (treating them
>>>>>>> as memoryless nodes). Later on, we still initialize these
>>>>>>> memoryless nodes (allocating their pgdat in free_area_init(),
>>>>>>> building their zonelists, etc.) and online them in
>>>>>>> init_cpu_to_node() and init_gi_nodes().
>>>>>>>
>>>>>>> After boot, these memoryless nodes are in the N_ONLINE
>>>>>>> state but not in the N_MEMORY state, yet we can still
>>>>>>> allocate pages from them.
>>>>>>>
>>>>>>> SLUB only processes nodes in the N_MEMORY state, e.g. when
>>>>>>> allocating their struct kmem_cache_node. So if SLUB is given
>>>>>>> a page from one of these memoryless nodes, the
>>>>>>> struct kmem_cache_node of the node that page belongs to is
>>>>>>> NULL, which causes a panic.
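To make the failure mode above concrete, here is a minimal sketch of how a
page coming from such a node ends up as a NULL-pointer dereference in the
spinlock path shown in the quoted trace below. This is not verbatim kernel
source; deactivate_slab_sketch() is a hypothetical stand-in for the real
deactivate_slab() path, and the structs are reduced to the relevant fields.

/*
 * SLUB only allocates a struct kmem_cache_node for nodes in N_MEMORY,
 * so s->node[nid] stays NULL for the "memoryless" node, and taking
 * n->list_lock dereferences a NULL pointer (matching the
 * _raw_spin_lock_irqsave RIP in the quoted oops).
 */
struct kmem_cache_node {
	spinlock_t list_lock;
	/* ... partial list, counters ... */
};

struct kmem_cache {
	/* NULL for any node that was not in N_MEMORY at init time */
	struct kmem_cache_node *node[MAX_NUMNODES];
};

static inline struct kmem_cache_node *get_node(struct kmem_cache *s, int node)
{
	return s->node[node];
}

/* Hypothetical stand-in for the deactivate_slab() / put-partial path. */
static void deactivate_slab_sketch(struct kmem_cache *s, struct page *page)
{
	int nid = page_to_nid(page);			/* page came from the 2M node */
	struct kmem_cache_node *n = get_node(s, nid);	/* NULL: never initialized */
	unsigned long flags;

	spin_lock_irqsave(&n->list_lock, flags);	/* NULL dereference -> panic */
	/* ... move the slab onto n's partial list ... */
	spin_unlock_irqrestore(&n->list_lock, flags);
}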
>>>>>>>
>>>>>>> For example, if we use qemu to start a kernel with two NUMA
>>>>>>> nodes, where one node has 2M of memory (less than NODE_MIN_SIZE)
>>>>>>> and the other has 2G, we encounter the following panic:
>>>>>>>
>>>>>>> [    0.149844] BUG: kernel NULL pointer dereference, address: 0000000000000000
>>>>>>> [    0.150783] #PF: supervisor write access in kernel mode
>>>>>>> [    0.151488] #PF: error_code(0x0002) - not-present page
>>>>>>> <...>
>>>>>>> [    0.156056] RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
>>>>>>> <...>
>>>>>>> [    0.169781] Call Trace:
>>>>>>> [    0.170159]  <TASK>
>>>>>>> [    0.170448]  deactivate_slab+0x187/0x3c0
>>>>>>> [    0.171031]  ? bootstrap+0x1b/0x10e
>>>>>>> [    0.171559]  ? preempt_count_sub+0x9/0xa0
>>>>>>> [    0.172145]  ? kmem_cache_alloc+0x12c/0x440
>>>>>>> [    0.172735]  ? bootstrap+0x1b/0x10e
>>>>>>> [    0.173236]  bootstrap+0x6b/0x10e
>>>>>>> [    0.173720]  kmem_cache_init+0x10a/0x188
>>>>>>> [    0.174240]  start_kernel+0x415/0x6ac
>>>>>>> [    0.174738]  secondary_startup_64_no_verify+0xe0/0xeb
>>>>>>> [    0.175417]  </TASK>
>>>>>>> [    0.175713] Modules linked in:
>>>>>>> [    0.176117] CR2: 0000000000000000
>>>>>>>
>>>>>>> We also encountered this panic in an actual production
>>>>>>> environment: we set up a 2c2g container with two NUMA nodes,
>>>>>>> reserved 128M for kdump, and then hit the above panic in the
>>>>>>> kdump kernel.
>>>>>>>
>>>>>>> To fix it, we can filter out memoryless nodes when allocating
>>>>>>> pages.
>>>>>>>
>>>>>>> Signed-off-by: Qi Zheng
>>>>>>> Reported-by: Teng Hu
>>>>>>
>>>>>> Well, AFAIK the key mechanism for only allocating from "good" nodes
>>>>>> is the zonelist; we shouldn't need to start putting extra checks
>>>>>> like this. So it seems to me that the code building the zonelists
>>>>>> should take the NODE_MIN_SIZE constraint into account.
>>>>>
>>>>> Indeed. How about the following patch:
>>>>
>>>> +Cc also David, forgot earlier.
>>>>
>>>> Looks good to me, at least.
>>>>
>>>>> @@ -6382,8 +6378,11 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
>>>>>      int min_val = INT_MAX;
>>>>>      int best_node = NUMA_NO_NODE;
>>>>>
>>>>> -    /* Use the local node if we haven't already */
>>>>> -    if (!node_isset(node, *used_node_mask)) {
>>>>> +    /*
>>>>> +     * Use the local node if we haven't already. But for a memoryless
>>>>> +     * local node, we should skip it and fall back to other nodes.
>>>>> +     */
>>>>> +    if (!node_isset(node, *used_node_mask) && node_state(node, N_MEMORY)) {
>>>>>          node_set(node, *used_node_mask);
>>>>>          return node;
>>>>>      }
>>>>>
>>>>> For a memoryless node, we skip it and fall back to other nodes when
>>>>> building its zonelists.
>>>>>
>>>>> Say we have node0 and node1, and node0 is memoryless; then:
>>>>>
>>>>> [    0.102400] Fallback order for Node 0: 1
>>>>> [    0.102931] Fallback order for Node 1: 1
>>>>>
>>>>> In this way, we will not allocate pages from the memoryless node0.
>>>>>
>>>
>>> In offline_pages(), we'll first call build_all_zonelists() and only then
>>> node_states_clear_node()->node_clear_state(node, N_MEMORY);
>>>
>>> So at least on the offlining path, we wouldn't detect it properly yet, I
>>> assume, and would build a zonelist that contains a now-memoryless node?
>>
>> Another question is what happens if new memory is plugged into a node
>> that had < NODE_MIN_SIZE of memory and after hotplug it stops being
>> "memoryless".
> 
> When memory goes online or offline, build_all_zonelists() is re-called
> to re-establish the zonelists (for the node itself and for the other
> nodes), so the node can stop being "memoryless" automatically.
> 
> But in online_pages(), I did not see any check against NODE_MIN_SIZE.
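The ordering on the offlining path quoted above can be sketched roughly as
follows. This is a simplified illustration, not the exact kernel code
(offline_pages_sketch() is a hypothetical stand-in): the zonelists are
rebuilt while the node still has N_MEMORY set, so a
node_state(node, N_MEMORY) check in find_next_best_node() would not yet see
the node as memoryless at that point.

static void offline_pages_sketch(int nid, struct zone *zone)
{
	/* ... the range has just been offlined ... */

	if (!populated_zone(zone))
		build_all_zonelists(NULL);	/* nid still has N_MEMORY set here */

	/* N_MEMORY is cleared only afterwards, via node_states_clear_node() */
	node_clear_state(nid, N_MEMORY);
}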
TBH, this is the first time I hear of NODE_MIN_SIZE and it seems to be a
pretty x86-specific thing.

Are we sure we want to get NODE_MIN_SIZE involved?

-- 
Thanks,

David / dhildenb