From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754314AbZEFFTn (ORCPT ); Wed, 6 May 2009 01:19:43 -0400
Received: (majordomo@vger.kernel.org) by vger.kernel.org
	id S1750901AbZEFFTd (ORCPT ); Wed, 6 May 2009 01:19:33 -0400
Received: from mga07.intel.com ([143.182.124.22]:33700 "EHLO
	azsmga101.ch.intel.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org
	with ESMTP id S1750702AbZEFFTc (ORCPT ); Wed, 6 May 2009 01:19:32 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.40,301,1239001200"; d="scan'208";a="139644977"
Subject: Re: [PATCH] Fix early panic issue on machines with memless node
From: "Zhang, Yanmin"
To: Jack Steiner
Cc: David Rientjes , alex.shi@intel.com, LKML , Ingo Molnar , Andi Kleen
In-Reply-To: <20090505202730.GA9831@sgi.com>
References: <1241493327.27664.17.camel@ymzhang>
	<20090505163608.GA20385@sgi.com> <20090505202730.GA9831@sgi.com>
Content-Type: text/plain; charset=UTF-8
Date: Wed, 06 May 2009 13:19:52 +0800
Message-Id: <1241587192.27664.56.camel@ymzhang>
Mime-Version: 1.0
X-Mailer: Evolution 2.22.1 (2.22.1-2.fc9)
Content-Transfer-Encoding: 8bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 2009-05-05 at 15:27 -0500, Jack Steiner wrote:
> On Tue, May 05, 2009 at 12:52:54PM -0700, David Rientjes wrote:
> > On Tue, 5 May 2009, Jack Steiner wrote:
> > 
> > > I was able to duplicate your original problem. Your patch below solves the
> > > problem. AFAICT, it causes no new regressions to the various configurations
> > > that I'm testing. (I'll add "mem=2G" to the configs that I test.)
> > 
> > Great, it would be helpful to catch these problems before 2.6.30 is
> > released. I've passed my patch along to Ingo.
> > 
> > > However, I see a new regression that was not present a couple of weeks ago.
> > > Configurations that have nodes with cpus and no memory panic during
> > > boot.
> > > This occurs both with and without your patch and is not related to "mem=".
> > > 
> > > I need to isolate the problem, but here is the stack trace:
> > > 
> > > Pid: 0, comm: swapper Not tainted 2.6.30-rc4-next-20090505-medusa #12
> > > Call Trace:
> > >  [] early_idt_handler+0x5e/0x71
> > >  [] ? build_zonelists_node+0x4c/0x8d
> > >  [] __build_all_zonelists+0x1ae/0x55a
> > >  [] build_all_zonelists+0x1b5/0x263
> > >  [] start_kernel+0x17a/0x3c5
> > >  [] ? early_idt_handler+0x0/0x71
> > >  [] x86_64_start_reservations+0xae/0xb2
> > >  [] x86_64_start_kernel+0x152/0x161
> > 
> > Please post your .config since it apparently differs from x86_64 defconfig
> > judging by my debugging symbols, and also the full output of the panic.
> 
> I suspect I misled you when I mentioned "configurations". I did not mean
> the .config file. I use a more-or-less standard .config file.
> 
> I do much of my testing on a system simulator. Using a simulator config file,
> I specify the system configuration such as number of nodes, sockets per node,
> cpus per socket, memory per socket, address map, boot options, etc. This
> makes it easy to quickly test a lot of strange (but real) configurations.
> 
> The configuration above that is failing is a 2-socket Nehalem blade that has no
> memory on socket 0. All memory is located on socket 1. The panic is caused by a
> null dereference of NODE_DATA(0).
> 
> Still looking....

It seems that in function setup_node_bootmem, the early return

	if (!end)
		return;

stops the initialization of node_data[nodeid]. Later on, the kernel
panics when build_zonelists dereferences NODE_DATA(0).

Although a node may be memoryless, it usually still has small blocks of
memory, so function acpi_scan_nodes marks such nodes offline. However,
when node info is collected in acpi_numa_processor_affinity_init, a node
might have no memory at all, and acpi_scan_nodes doesn't mark it
offline. The logic became confusing with patch dc09855191809. Could you
revert it and retest?