Date: Tue, 2 Nov 2021 10:04:22 +0100
From: Michal Hocko
To: Alexey Makhalov
Cc: David Hildenbrand, linux-mm@kvack.org, Andrew Morton,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org, Oscar Salvador
Subject: Re: [PATCH] mm: fix panic in __alloc_pages
References: <20211101201312.11589-1-amakhalov@vmware.com> <7136c959-63ff-b866-b8e4-f311e0454492@redhat.com>

It is hard to follow your reply as your email client is not quoting
properly. Let me try to reconstruct.

On Tue 02-11-21 08:48:27, Alexey Makhalov wrote:
> On 02.11.21 08:47, Michal Hocko wrote:
[...]
>>>> CPU2 has been hot-added
>>>> BUG: unable to handle page fault for address: 0000000000001608
>>>> #PF: supervisor read access in kernel mode
>>>> #PF: error_code(0x0000) - not-present page
>>>> PGD 0 P4D 0
>>>> Oops: 0000 [#1] SMP PTI
>>>> CPU: 0 PID: 1 Comm: systemd Tainted: G E 5.15.0-rc7+ #11
>>>> Hardware name: VMware, Inc. VMware7,1/440BX Desktop Reference Platform, BIOS VMW
>>>>
>>>> RIP: 0010:__alloc_pages+0x127/0x290
>>>
>>> Could you resolve this into a specific line of the source code please?

This probably went unnoticed. I would be really curious whether this is
a broken zonelist or something else.

>>>> Node can be in one of the following states:
>>>> 1. not present (nid == NUMA_NO_NODE)
>>>> 2. present, but offline (nid > NUMA_NO_NODE, node_online(nid) == 0,
>>>>    NODE_DATA(nid) == NULL)
>>>> 3. present and online (nid > NUMA_NO_NODE, node_online(nid) > 0,
>>>>    NODE_DATA(nid) != NULL)
>>>>
>>>> The alloc_page_{bulk_array}node() functions only verify that nid is
>>>> valid and do not check whether it is online. The enhanced check
>>>> allows page allocation to be handled when the node is in state #2.
>>>
>>> I do not think this is a correct approach. We should make sure that the
>>> proper fallback node is used instead. This means that the zone list is
>>> initialized properly. IIRC this has been a problem in the past and it
>>> has been fixed. The initialization code is quite subtle though so it is
>>> possible that this got broken again.
>
> This approach behaves in the same way as if the CPU had not yet been
> added (state #1). So we can think of state #2 as state #1 when the CPU
> is not present.

>> I'm a little confused:
>>
>> In add_memory_resource() we hotplug the new node if required and set it
>> online. Memory might get onlined later, via online_pages().
>
> You are correct. In case of memory hot add, that is true. But in case
> of adding a CPU with a memoryless node, try_online_node() will be
> called only during CPU onlining, see cpu_up().
>
> Is there any reason why try_online_node() resides in cpu_up() and not
> in add_cpu()? I think it would be correct to online the node during CPU
> hot add, to align with memory hot add.

I am not familiar with cpu hotplug, but this doesn't seem to be anything
new, so how come this has become a problem only now?

>> So after add_memory_resource()->__try_online_node() succeeded, we have
>> an online pgdat -- essentially 3.
>
> This patch detects if we're past 3., but it is said to be reproduced
> by disabling *memory* onlining.
> This is the hot add of both a new CPU and a new _memoryless_ node
> (with CPU only). Onlining the CPU makes its node online; disabling CPU
> onlining puts the new node into state #2, which leads to the repro.

>> Before we online memory for a hotplugged node, all zones are !populated.
>> So once we online memory for a !populated zone in online_pages(), we
>> trigger setup_zone_pageset().
>>
>> The confusing part is that this patch checks for 3. but says it can be
>> reproduced by not onlining *memory*. There seems to be something missing.
>
> Do we maybe need a proper populated_zone() check before accessing zone
> data?

No, we need them initialized properly.

-- 
Michal Hocko
SUSE Labs