* ia64_mca_cpe_int_handler
@ 2008-02-22 17:14 Zoltan Menyhart
2008-02-22 22:32 ` ia64_mca_cpe_int_handler Luck, Tony
` (8 more replies)
0 siblings, 9 replies; 10+ messages in thread
From: Zoltan Menyhart @ 2008-02-22 17:14 UTC (permalink / raw)
To: linux-ia64
[-- Attachment #1: Type: text/plain, Size: 1251 bytes --]
The ia64_mca_cpe_int_handler() in 2.6.24 goes something like this:
ia64_mca_cpe_int_handler (int cpe_irq, void *arg)
{
...
/* SAL spec states this should run w/ interrupts enabled */
local_irq_enable();
spin_lock(&cpe_history_lock);
...
spin_unlock(&cpe_history_lock);
ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CPE);
}
I think the interrupts are enabled too early. I have just caught a deadlock:
I got nested ia64_mca_cpe_int_handler() invocations, where the first instance
was interrupted somewhere between spin_lock(&cpe_history_lock) and
spin_unlock(&cpe_history_lock). Obviously, the second instance will never get
the lock.
The previous versions, e.g. 2.6.18, were not safe either:
ia64_mca_cpe_int_handler (int cpe_irq, void *arg, struct pt_regs *ptregs)
{
...
/* SAL spec states this should run w/ interrupts enabled */
local_irq_enable();
/* Get the CPE error record and log it */
ia64_mca_log_sal_error_record(SAL_INFO_TYPE_CPE);
spin_lock(&cpe_history_lock);
...
spin_unlock(&cpe_history_lock);
}
I think the interrupts have to be blocked while we are inside the lock-
protected region.
I can think of something like this below. Please have a look at this patch.
Thanks,
Zoltan Menyhart
[-- Attachment #2: mca.c-patch --]
[-- Type: text/plain, Size: 1103 bytes --]
--- linux-2.6.24-old/arch/ia64/kernel/mca.c 2008-02-22 18:09:12.000000000 +0100
+++ linux-2.6.24/arch/ia64/kernel/mca.c 2008-02-22 18:09:26.000000000 +0100
@@ -436,6 +436,10 @@
static const char * const rec_name[] = { "MCA", "INIT", "CMC", "CPE" };
#endif
+ if (irq_safe){
+ /* SAL spec states this should run w/ interrupts enabled */
+ local_irq_enable();
+ }
size = ia64_log_get(sal_info_type, &buffer, irq_safe);
if (!size)
return;
@@ -512,9 +516,6 @@
IA64_MCA_DEBUG("%s: received interrupt vector = %#x on CPU %d\n",
__FUNCTION__, cpe_irq, smp_processor_id());
- /* SAL spec states this should run w/ interrupts enabled */
- local_irq_enable();
-
spin_lock(&cpe_history_lock);
if (!cpe_poll_enabled && cpe_vector >= 0) {
@@ -1324,9 +1325,6 @@
IA64_MCA_DEBUG("%s: received interrupt vector = %#x on CPU %d\n",
__FUNCTION__, cmc_irq, smp_processor_id());
- /* SAL spec states this should run w/ interrupts enabled */
- local_irq_enable();
-
spin_lock(&cmc_history_lock);
if (!cmc_polling_enabled) {
int i, count = 1; /* we know 1 happened now */
* RE: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
@ 2008-02-22 22:32 ` Luck, Tony
2008-02-22 23:44 ` ia64_mca_cpe_int_handler Russ Anderson
` (7 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Luck, Tony @ 2008-02-22 22:32 UTC (permalink / raw)
To: linux-ia64
> I think the interrupts are enabled too early.
This looks reasonable ... enabling the interrupts in code
where we will hold the cmc_history_lock (or the cpe_history_lock)
looks like a recipe for problems.
> Please have a look at this patch.
+ if (irq_safe){
+ /* SAL spec states this should run w/ interrupts enabled */
+ local_irq_enable();
+ }
On first reading this I was a bit worried that you didn't enable
interrupts for *all* SAL_GET_STATE_INFO calls (and
rather worried that we perhaps needed to for MCA ... which sounded
like a very, very bad idea). But I went back to re-read the SAL
spec, and figured out my confusion.
The "irq_safe" variable is being over-loaded with extra meaning
here ... it has the value we want (true for CMC or CPE) but the
comment about SAL spec doesn't quite match the "irq safe" meaning
implied by the name of the variable.
The SAL spec doesn't say that we must have interrupts enabled if
it is safe to do so, it says: "The operating system-corrected
error handler shall run with interrupts enabled".
Perhaps just update the comment to:
/* SAL spec says CMC and CPE handler must enable interrupts */
which at least points the reader a little more clearly to what is
going on. Either that or make the test:
if (sal_info_type == SAL_INFO_TYPE_CMC || sal_info_type == SAL_INFO_TYPE_CPE) {
/* SAL spec states these should run w/ interrupts enabled */
[This looks way too verbose]
-Tony
* Re: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
2008-02-22 22:32 ` ia64_mca_cpe_int_handler Luck, Tony
@ 2008-02-22 23:44 ` Russ Anderson
2008-02-23 0:02 ` ia64_mca_cpe_int_handler Luck, Tony
` (6 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Russ Anderson @ 2008-02-22 23:44 UTC (permalink / raw)
To: linux-ia64
On Fri, Feb 22, 2008 at 02:32:11PM -0800, Luck, Tony wrote:
>
> The SAL spec doesn't say that we must have interrupts enabled if
> it is safe to do so, it says: "The operating system-corrected
> error handler shall run with interrupts enabled".
The footnote explains: "It is required that the operating system
handlers operate with interrupts enabled, so that system firmware
can manage its resources (like NVRAM based error records) without
impacting system performance."
The goal of limiting the performance impact of corrected errors
is certainly reasonable. The system should not hold off other
interrupts while handling corrected errors. The trouble is if
we get a second corrected error while handling the first.
What the polling mode code does is disable just the CPE interrupt.
disable_irq_nosync(local_vector_to_irq(IA64_CPE_VECTOR));
Would it be unreasonable to disable the CPE interrupt, then
do the local_irq_enable()? It would functionally be the same as
going into polling mode for the length of time it takes to handle
the CPE, then returning to interrupt mode. I think that would
meet both the letter and the intent of the SAL spec.
> Perhaps just update the comment to:
>
> /* SAL spec says CMC and CPE handler must enable interrupts */
>
> which at least points the reader a little more clearly to what is
> going on. Either that or make the test:
>
> if (sal_info_type == SAL_INFO_TYPE_CMC || sal_info_type == SAL_INFO_TYPE_CPE) {
> /* SAL spec states these should run w/ interrupts enabled */
>
> [This looks way too verbose]
>
> -Tony
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com
* RE: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
2008-02-22 22:32 ` ia64_mca_cpe_int_handler Luck, Tony
2008-02-22 23:44 ` ia64_mca_cpe_int_handler Russ Anderson
@ 2008-02-23 0:02 ` Luck, Tony
2008-02-25 17:05 ` ia64_mca_cpe_int_handler Zoltan Menyhart
` (5 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Luck, Tony @ 2008-02-23 0:02 UTC (permalink / raw)
To: linux-ia64
> Would it be unreasonable to disable the CPE interrupt, then
> do the local_irq_enable()? It would functionally be the same as
> going into polling mode for the length of time it takes to handle
> the CPE, then returning to interrupt mode. I think that would
> meet both the letter and the intent of the SAL spec.
This should be OK ... same treatment would be needed in the CMC path
(Zoltan's patch already covers this by delaying the interrupt enable
there too).
Which approach is better?
Delaying the interrupt enable may be a simpler code change (disabling
and enabling the CPE/CMC interrupt may complicate the code that
already does this for high rates of corrected interrupts).
Keeping the enable early is better for interrupt latency.
-Tony
* Re: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
` (2 preceding siblings ...)
2008-02-23 0:02 ` ia64_mca_cpe_int_handler Luck, Tony
@ 2008-02-25 17:05 ` Zoltan Menyhart
2008-02-26 21:01 ` ia64_mca_cpe_int_handler Russ Anderson
` (4 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Zoltan Menyhart @ 2008-02-25 17:05 UTC (permalink / raw)
To: linux-ia64
1. irq_safe
"SAL runtime services are called from the following execution environment:
- Operating system runtime execution environment. The normal operating
system execution environment is with translation on and interrupts
enabled but the operating system may choose to call SAL runtime services
in physical mode.
- Operating system machine check and initialization handler. The execution
environment for these are provided by SAL and are in physical mode with
interrupts disabled."
I read these paragraphs as follows: we always call the SAL with interrupts enabled
unless we are in the MCA / INIT handlers. I.e. we could call
SAL_GET_STATE_INFO(SAL_INFO_TYPE_MCA)
SAL_GET_STATE_INFO(SAL_INFO_TYPE_INIT)
SAL_CLEAR_STATE_INFO(SAL_INFO_TYPE_MCA)
SAL_CLEAR_STATE_INFO(SAL_INFO_TYPE_INIT)
with interrupts enabled, when we are in a kernel process / interrupt handler
context.
Therefore "irq_safe" should come from the caller of ia64_mca_cpe_int_handler().
E.g. we could call SAL_GET_STATE_INFO(SAL_INFO_TYPE_MCA) at system start-up,
with interrupts enabled, to see if the last reboot was due to an MCA.
2. I do not think using the same buffering mechanism for MCA and INIT as
we do for CMCI and CPEI is safe. You cannot take any lock in the
MCA / INIT handlers.
I have no idea how the current code can be made safe.
3. disable_irq...() vs. local_irq_disable():
disable_irq...() is not local to the CPU, it costs ~10 microseconds.
You need to disable the polling interrupt, too.
I prefer to keep a spin_lock_irqsave() like semantics.
4. I'm thinking of a separate log buffer mechanism for the MCA handler:
- I'll have two log buffers for the MCA logs
- The 1st one keeps the 1st MCA, the 2nd one the last one
- I'll have an atomic variable that works as follows:
+ If it is 0, then there is nothing in the buffers
+ If it is 1, then there is exactly one, the 1st log there
+ If it is > 1, then:
* If it is even, then the 2nd buffer is being updated
* If it is odd, then it holds the last log
The MCA handler plays as follows:
- If the atomic variable is 0, it writes the 1st log, wmb(), sets the
atomic variable to 1
- Else it increments the atomic variable, wmb(), it writes the "last" log,
wmb(), increments the atomic variable. Should the "old value" of the
atomic variable have become 0, restart...
The salinfo_decode side polls the atomic variable. If it is > 0, then
rmb(), it fetches the 1st log, if it is > 2 and odd, then it
fetches the last one, rmb(), it re-reads the atomic variable,
if it has been incremented, then it re-fetches the last log...
Having read the logs, the salinfo_decode side can set the atomic
variable to 0.
The same mechanism duplicated for INIT.
I'd like to have your opinion on this idea: can you agree that this
mechanism will be 100% safe?
Thanks,
Zoltan Menyhart
* Re: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
` (3 preceding siblings ...)
2008-02-25 17:05 ` ia64_mca_cpe_int_handler Zoltan Menyhart
@ 2008-02-26 21:01 ` Russ Anderson
2008-02-27 10:06 ` ia64_mca_cpe_int_handler Zoltan Menyhart
` (3 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Russ Anderson @ 2008-02-26 21:01 UTC (permalink / raw)
To: linux-ia64
On Mon, Feb 25, 2008 at 06:05:29PM +0100, Zoltan Menyhart wrote:
> 1. irq_safe
>
> "SAL runtime services are called from the following execution environment:
> - Operating system runtime execution environment. The normal operating
> system execution environment is with translation on and interrupts
> enabled but the operating system may choose to call SAL runtime services
> in physical mode.
> - Operating system machine check and initialization handler. The execution
> environment for these are provided by SAL and are in physical mode with
> interrupts disabled."
>
> I read these paragraphs: we always call the SAL with interrupts enabled
> unless we are in the MCA / INIT handlers. I.e. we could call
>
> SAL_GET_STATE_INFO(SAL_INFO_TYPE_MCA)
> SAL_GET_STATE_INFO(SAL_INFO_TYPE_INIT)
> SAL_CLEAR_STATE_INFO(SAL_INFO_TYPE_MCA)
> SAL_CLEAR_STATE_INFO(SAL_INFO_TYPE_INIT)
An implication is those SAL calls need to work in both execution environments.
> with interrupts enabled, when we are in a kernel process / interrupt handler
> context.
> Therefore "irq_safe" should come from the caller of
> ia64_mca_cpe_int_handler().
>
> E.g. we could call SAL_GET_STATE_INFO(SAL_INFO_TYPE_MCA) at the start up of
> the system with interrupts enabled, to see if the last reboot was due to
> an MCA.
Yes. irq_safe should be based on the caller's execution environment, not
the record type. That would explain the comment "FIXME: remove MCA and irq_safe."
in mca.c. :-)
> 2. I do not think using the same buffering mechanism for MCA and INIT as
> we do for CMCI and CPEI is safe. You cannot take any lock in the
> MCA / INIT handlers.
Thinking out loud... MCA/INIT handlers cannot touch locks that could be
in use when the MCA/INIT surfaces. Common routines used by both CMCI/CPEI
handlers and MCA/INIT handlers cannot touch common locks. The CMCI/CPEI
does not look at MCA or INIT records (and vice versa).
> I have not got any idea how the current code can be made safe.
Thinking of the scenarios that need to be safe:
- Handling nested events of the same type, such as the original example of
nested CPEs.
- Handling nested events of different types, such as handling a CPE when
an MCA surfaces.
- Logging records of the same type when a new event of the same type
surfaces, such as salinfo logging a CPE as a new CPE surfaces.
- Logging records of one type when a new event of a different type surfaces,
such as salinfo logging an MCA record when a CPE surfaces.
> 3. disable_irq...() vs. local_irq_disable():
>
> disable_irq...() is not local to the CPU, it costs ~10 microseconds.
> You need to disable the polling interrupt, too.
>
> I prefer to keep a spin_lock_irqsave() like semantics.
The only reason for suggesting disable_irq...() was to stay within
the letter of the MCA spec.
> 4. I'm thinking of a separate log buffer mechanism for the MCA handler:
>
> - I'll have two log buffers for the MCA logs
> - The 1st one keeps the 1st MCA, the 2nd one the last one
> - I'll have an atomic variable that works as follows:
> + If it is 0, then there is nothing in the buffers
> + If it is 1, then there is exactly one, the 1st log there
> + If it is > 1, then:
> * If it is even, then the 2nd buffer is being updated
> * If it is odd, then it holds the last log
>
> The MCA handler plays as follows:
> - If the atomic variable is 0, it writes the 1st log, wmb(), sets the
> atomic variable to 1
> - Else it increments the atomic variable, wmb(), it writes the "last" log,
> wmb(), increments the atomic variable. Should the "old value" of the
> atomic variable have become 0, restart...
>
> The salinfo_decode side polls the atomic variable. If it is > 0, then
> rmb(), it fetches the 1st log, if it is > 2 and odd, then it
> fetches the last one, rmb(), it re-reads the atomic variable,
> if it has been incremented, then it re-fetches the last log...
> Having read the logs, the salinfo_decode side can set the atomic
> variable to 0.
>
> The same mechanism duplicated for INIT.
What would the CMCI/CPEI mechanism look like?
Would salinfo (presumably) handle CMCI/CPEI differently?
> I'd like to have your opinion on this idea: can you agree that this
> mechanism will be 100% safe?
Does it handle all the cases? This is a tricky area.
> Thanks,
>
> Zoltan Menyhart
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com
* Re: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
` (4 preceding siblings ...)
2008-02-26 21:01 ` ia64_mca_cpe_int_handler Russ Anderson
@ 2008-02-27 10:06 ` Zoltan Menyhart
2008-02-27 10:53 ` ia64_mca_cpe_int_handler Robin Holt
` (2 subsequent siblings)
8 siblings, 0 replies; 10+ messages in thread
From: Zoltan Menyhart @ 2008-02-27 10:06 UTC (permalink / raw)
To: linux-ia64
Russ,
Thank you for your remarks.
I want to add a new mechanism for the MCA/INIT handlers to be able to
store their logs in a safe way. (No locks, no waiting,...)
This will be based on the two buffers and the atomic counter for each
of the MCA/INIT handlers.
And I want to keep the existing mechanism when we are not in MCA/INIT
handlers' contexts.
Using separate mechanisms avoids the MCA/INIT handlers interfering with
the interrupts and the polling.
Should we receive too many CPEIs / CMCIs, chaining into polling mode
may make us lose events.
Should we receive too many recovered MCAs or INITs, there will
always be some logs lost. I want to keep the first and the last log.
The first one tells us how it all began.
Why the last one? I've got a _feeling_ that the last one will have
accumulated the consequences (whatever they are) of the preceding ones.
If they are not recovered, the game is over anyway.
I'll think it over again :-)
Thanks,
Zoltan
* Re: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
` (5 preceding siblings ...)
2008-02-27 10:06 ` ia64_mca_cpe_int_handler Zoltan Menyhart
@ 2008-02-27 10:53 ` Robin Holt
2008-02-27 12:18 ` ia64_mca_cpe_int_handler Zoltan Menyhart
2008-02-27 16:29 ` ia64_mca_cpe_int_handler Russ Anderson
8 siblings, 0 replies; 10+ messages in thread
From: Robin Holt @ 2008-02-27 10:53 UTC (permalink / raw)
To: linux-ia64
On Wed, Feb 27, 2008 at 11:06:31AM +0100, Zoltan Menyhart wrote:
> Russ,
>
> Thank you for your remarks.
>
> I want to add a new mechanism for the MCA/INIT handlers to be able to
> store their logs in a safe way. (No locks, no waiting,...)
> This will be based on the two buffers and the atomic counter for each
> of the MCA/INIT handlers.
>
> And I want to keep the existing mechanism when we are not in MCA/INIT
> handlers' contexts.
>
> Using separate mechanisms, avoids MCA/INIT handlers interfering with
> the interrupts and the polling.
>
> Should we receive too many CPEIs / CMCIs, chaining into polling,
> may make us lose events.
>
> Should we receive too many recovered MCAs or INITs, there will
> always be some logs lost. I want to keep the first and the last log.
> The first one to know how it begins.
> Why the last one? I've got a _feeling_ that the last one will have
> accumulated consequences (whatever it is) of the preceding ones.
Are these records per CPU, or records total?
In my experience, the first and last are somewhat random. Often, when
we have received multiple MCAs due to an event, the telling record is
in the middle of the heap and not easily determined without some
experience with that type of event group. Is there any reasonable way
to keep all the records, or at least a larger group than just two?
Russ, do you know what the max number of MCA/INIT records we would
expect to see during a worst-case type event?
Thanks,
Robin
* Re: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
` (6 preceding siblings ...)
2008-02-27 10:53 ` ia64_mca_cpe_int_handler Robin Holt
@ 2008-02-27 12:18 ` Zoltan Menyhart
2008-02-27 16:29 ` ia64_mca_cpe_int_handler Russ Anderson
8 siblings, 0 replies; 10+ messages in thread
From: Zoltan Menyhart @ 2008-02-27 12:18 UTC (permalink / raw)
To: linux-ia64
> Is this records per cpu or records total?
Well, I planned 2 + 2 global log buffers for MCA / INIT. (As today.)
The 1st CPU can use the 1st buffer (and increments the atomic variable).
For additional MCAs, it fills in the 2nd buffer.
The other CPUs fill in the 2nd buffer, too.
Yes, we overwrite the 2nd buffer. It goes like this:
- increment the atomic variable, wmb()
- memcpy() the log into the 2nd buffer, wmb()
- increment the atomic variable
- before starting, verify whether the atomic variable is odd,
using compare-and-swap for incrementing it
memcpy() should finish in a relatively short time.
Should it provoke another MCA...
(The actual code also assumes that the MCA handler is "hard core".)
We could have dedicated buffers to each CPU, and more than 2 of them
per CPU.
Yet all the resources have to be pre-allocated.
For some hopefully very unlikely events...
Eventually, this algorithm is easily modifiable to handle "n" buffers
(overwriting the last one).
Thanks,
Zoltan
* Re: ia64_mca_cpe_int_handler
2008-02-22 17:14 ia64_mca_cpe_int_handler Zoltan Menyhart
` (7 preceding siblings ...)
2008-02-27 12:18 ` ia64_mca_cpe_int_handler Zoltan Menyhart
@ 2008-02-27 16:29 ` Russ Anderson
8 siblings, 0 replies; 10+ messages in thread
From: Russ Anderson @ 2008-02-27 16:29 UTC (permalink / raw)
To: linux-ia64
On Wed, Feb 27, 2008 at 04:53:32AM -0600, Robin Holt wrote:
>
> Russ, do you know what the max number of MCA/INIT records we would
> expect to see during a worst-case type event?
At least NR_CPUS. In the case of an NMI of the system, for example, there
will be an INIT record for each CPU. SAL is expected to create a record
for each CPU.
I think the real question is how many MCA/INIT records Linux will
process at one time. For both MCA and INIT, the CPUs are rendezvoused
and one CPU becomes the monarch, so there is only one CPU calling down to
SAL at a time. Even in the case of multiple CPUs going into MCA at
the same time, the handling is done one CPU at a time (after the first
CPU handles its MCA it is demoted to a slave and the next CPU is promoted
to monarch to handle its MCA).
That said, the more parallel the processing of records, the more buffering
will be needed.
There is the potential for nested MCAs on a given CPU that should be
handled. Since CPUs are rendezvoused, it is extremely unlikely to have
more than one layer of nesting (two MCA records).
Nested CPEI/CMCI are more likely. Since the handling of those interrupts
is done with interrupts enabled (per the MCA Spec) there is a greater
potential for multiple records per CPU being processed at the same
time.
A big part of the issue is how quickly salinfo is able to process records.
The quicker it handles the records, the less buffering is needed.
Since salinfo is running in userland, it can get held off for long
periods of time.
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com