From: David Mosberger <davidm@napali.hpl.hp.com>
To: linux-ia64@vger.kernel.org
Subject: Re: [patch 2.6.11] __copy_user breaks on unaligned src
Date: Fri, 25 Mar 2005 20:27:04 +0000 [thread overview]
Message-ID: <16964.29720.937091.330552@napali.hpl.hp.com> (raw)
In-Reply-To: <12404.1111129477@kao2.melbourne.sgi.com>
>>>>> On Thu, 24 Mar 2005 23:59:41 -0800, David Mosberger <davidm@linux.hpl.hp.com> said:
David> After some more digging, it appears that we do get a
David> vhpt-miss fault first and for some reason, that handler
David> triggers a (nested) general exception fault with
David> ISR.code{7:4}=3 (IA-64 "Reserved Register/Field fault,
David> Unimplemented Data Address fault").  Not sure yet what
David> triggers the nested fault.
Well, this turned out to be a bit of a red herring: it was faulting
because the lfetch.fault happened before the Linux page-table-base
register (ar.k7) was initialized. On the real hardware, ar.k7 was
zero and since the lfetch-triggered fault was to address 0, this
caused the vhpt_miss handler to go down in flames.
The attached patch fixes this problem and the machine now boots fine
using lfetch.fault for prefetch()/prefetchw().
Keith: unfortunately, I doubt this will be of any help in tracking
down your problem.
Tony: this patch is perfectly safe and helps make the kernel more
robust, so I'd recommend applying it soonish.
Thanks,
--david
ia64: Initialize ar.k7 to empty_zero_page early on
Without this initialization, early TLB misses to any user-regions will
cause the TLB miss handlers to go down in flames. Normally, no such
early TLB misses occur, but aggressive use of lfetch.fault can trigger
it easily (e.g., when using lfetch.fault for the
prefetch()/prefetchw() macros we get an early miss for address 0 due
to a prefetch in find_pid()).
Signed-off-by: David Mosberger-Tang <davidm@hpl.hp.com>
=== arch/ia64/kernel/setup.c 1.90 vs edited ===
--- 1.90/arch/ia64/kernel/setup.c	2005-03-23 11:08:32 -08:00
+++ edited/arch/ia64/kernel/setup.c 2005-03-25 12:10:44 -08:00
@@ -711,6 +711,15 @@
ia64_set_kr(IA64_KR_FPU_OWNER, 0);
/*
+ * Initialize the page-table base register to a global
+ * directory with all zeroes. This ensures that we can handle
+ * TLB-misses to user address-space even before we have created
+ * the first user address-space. This may happen, e.g., due to
+ * aggressive use of lfetch.fault.
+ */
+ ia64_set_kr(IA64_KR_PT_BASE, __pa(ia64_imva(empty_zero_page)));
+
+ /*
* Initialize default control register to defer all speculative faults. The
* kernel MUST NOT depend on a particular setting of these bits (in other words,
* the kernel must have recovery code for all speculative accesses). Turn on