linux-mm.kvack.org archive mirror
From: Ingo Molnar <mingo@elte.hu>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: travis@sgi.com, tglx@linutronix.de, ak@suse.de, clameter@sgi.com,
	steiner@sgi.com, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 2/2] x86_64: Cleanup non-smp usage of cpu maps v3
Date: Tue, 4 Mar 2008 09:35:07 +0100	[thread overview]
Message-ID: <20080304083507.GE5689@elte.hu> (raw)
In-Reply-To: <20080303173011.b0d9a89d.akpm@linux-foundation.org>

* Andrew Morton <akpm@linux-foundation.org> wrote:

> I now recall that it has been happening on every fifth-odd boot for a 
> few weeks now.  The machine prints
> 
> Time: tsc clocksource has been installed
> 
> then five instances of "system 00:01: iomem range 0x...", then it 
> hangs. ie: it never prints "system 00:01: iomem range 
> 0xfe600000-0xfe6fffff has been reserved" from 
> http://userweb.kernel.org/~akpm/dmesg-akpm2.txt.
> 
> It may have some correlation with whether the machine was booted via 
> poweron versus `reboot -f', dunno.

the tsc message seems like accidental proximity to me - a coincidence 
of timing, not a cause.

such a hard hang has a basic-system-setup feel to it: the PCI changes 
in 2.6.25 or perhaps some ACPI changes. But it could also be timer 
related (although in that case it typically doesn't hang in the middle 
of a system setup sequence).
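
as a quick way to double-check which clocksource actually got 
installed, the standard sysfs interface can be read at runtime - a 
sketch, assuming a sysfs-era (~2.6.23+) kernel; the paths are the 
generic clocksource ABI ones, not anything specific to akpm's box:

```shell
# clocksource sysfs interface, present on sysfs-era kernels
cs=/sys/devices/system/clocksource/clocksource0
if [ -d "$cs" ]; then
    # the clocksource the kernel settled on (e.g. tsc, hpet, acpi_pm)
    echo "current:   $(cat "$cs/current_clocksource")"
    # the clocksources the kernel registered and could switch to
    echo "available: $(cat "$cs/available_clocksource")"
else
    echo "no clocksource sysfs interface on this kernel"
fi
```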

i'd say pci=nommconf, but your dmesg has this:

  PCI: Not using MMCONFIG.

but, what does seem to be new in your dmesg (i happen to have a historic 
dmesg-akpm2.txt of yours saved away) is:

  hpet0: at MMIO 0xfed00000, IRQs 2, 8, 11
  hpet0: 3 64-bit timers, 14318180 Hz

was hpet active on this box before? Try hpet=disable perhaps - does that 
change anything? (But ... this is still only a 10% chance suggestion, 
there are way too many other possible causes for such bugs.)
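
for reference, hpet=disable is a kernel boot parameter, so it goes on 
the kernel line of the boot loader config - a sketch for a grub-0.97 
style menu.lst of that era (title, kernel version and root device are 
illustrative, not taken from the original thread); after reboot, 
/proc/cmdline should echo the parameter back:

```
# /boot/grub/menu.lst - illustrative entry only
title   2.6.25-rc, HPET disabled
root    (hd0,0)
kernel  /boot/vmlinuz-2.6.25-rc3 root=/dev/sda1 hpet=disable
```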

	Ingo

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


Thread overview: 14+ messages
2008-02-19 20:33 [PATCH 0/2] x86: Optimize percpu accesses v3 Mike Travis
2008-02-19 20:33 ` [PATCH 1/2] x86_64: Fold pda into per cpu area v3 Mike Travis
2008-02-20 12:07   ` Ingo Molnar
2008-02-20 13:16     ` Eric Dumazet
2008-02-20 15:54       ` Mike Travis
2008-02-20 18:57     ` Mike Travis
2008-02-19 20:33 ` [PATCH 2/2] x86_64: Cleanup non-smp usage of cpu maps v3 Mike Travis
2008-03-04  1:02   ` Andrew Morton
2008-03-04  1:30     ` Andrew Morton
2008-03-04  8:35       ` Ingo Molnar [this message]
2008-03-05  0:45         ` Andrew Morton
2008-03-04 13:21     ` Mike Travis
2008-02-20  9:15 ` [PATCH 0/2] x86: Optimize percpu accesses v3 Ingo Molnar
2008-02-20 15:28   ` Mike Travis

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20080304083507.GE5689@elte.hu \
    --to=mingo@elte.hu \
    --cc=ak@suse.de \
    --cc=akpm@linux-foundation.org \
    --cc=clameter@sgi.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=steiner@sgi.com \
    --cc=tglx@linutronix.de \
    --cc=travis@sgi.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link.

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.
This is a public inbox; see the mirroring instructions for how to
clone and mirror all data and code used for this inbox, as well as
URLs for NNTP newsgroup(s).