From: Gary Hade <garyhade@us.ibm.com>
To: Gary Hade <garyhade@us.ibm.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>,
mingo@elte.hu, mingo@redhat.com, linux-kernel@vger.kernel.org,
tglx@linutronix.de, hpa@zytor.com, x86@kernel.org,
yinghai@kernel.org, lcm@us.ibm.com
Subject: Re: [RESEND] [PATCH v2] [BUGFIX] x86/x86_64: fix CPU offlining triggered "active" device IRQ interrruption
Date: Thu, 4 Jun 2009 14:17:45 -0700 [thread overview]
Message-ID: <20090604211744.GB9213@us.ibm.com> (raw)
In-Reply-To: <20090604200437.GA9213@us.ibm.com>
On Thu, Jun 04, 2009 at 01:04:37PM -0700, Gary Hade wrote:
> On Wed, Jun 03, 2009 at 02:13:23PM -0700, Eric W. Biederman wrote:
> > Gary Hade <garyhade@us.ibm.com> writes:
> >
> > > Correct, after the fix was applied my testing did _not_ show
> > > the lockups that you are referring to. I wonder if there is a
> > > chance that the root cause of those old failures and the root
> > > cause of issue that my fix addresses are the same?
> > >
> > > Can you provide the test case that demonstrated the old failure
> > > cases so I can try it on our systems? Also, do you recall what
> > > mainline version demonstrated the old failure
> >
> > The irq migration had already been moved to interrupt context by the
> > time I started working on it. And I managed to verify that there were
> > indeed problems with moving it out of interrupt context before my code
> > merged.
> >
> > So if you want to reproduce it reduce your irq migration to the essentials.
> > Set IRQ_MOVE_PCNTXT, and always migrate the irqs from process context
> > immediately.
> >
> > Then migrate an irq that fires at a high rate rapidly from one cpu to
> > another.
> >
> > Right now you are insulated from most of the failures because you still
> > don't have IRQ_MOVE_PCNTXT. So you are only really testing your new code
> > in the cpu hotunplug path.
>
> OK, I'm confused.
>
> It sounds like you want me to force IRQ_MOVE_PCNTXT so that I can
> test in a configuration that you say is already broken. Why
> in the heck would this config, where you expect lockups without
> the fix, be a productive environment in which to test the fix?
Sorry, I did not say this well. Trying again:
Why would this config, where you already expect lockups
for reasons that you say are not addressed by the fix, be
a productive environment in which to test the fix?
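For anyone trying to reproduce this later, the "migrate an irq that fires
at a high rate rapidly from one cpu to another" half of the recipe can be
driven from user space along these lines. This is an untested sketch, not
part of the patch: the IRQ number is a placeholder (pick a busy one from
/proc/interrupts), and a real run needs root on an SMP box.

```shell
#!/bin/sh
# bounce_irq: rapidly flip an IRQ's CPU affinity between CPU0 and CPU1
# by rewriting its smp_affinity bitmask.
#   $1 = irq number, $2 = iterations, $3 = proc root (default /proc,
#        overridable so the loop can be dry-run against a scratch dir)
bounce_irq() {
    irq=$1; n=$2; root=${3:-/proc}
    i=0
    while [ "$i" -lt "$n" ]; do
        echo 1 > "$root/irq/$irq/smp_affinity"   # mask 0x1 -> CPU0
        echo 2 > "$root/irq/$irq/smp_affinity"   # mask 0x2 -> CPU1
        i=$((i + 1))
    done
    echo "bounced IRQ $irq $n times"
}
# Real usage (as root), e.g. for a busy NIC interrupt:
#   bounce_irq 28 100000
```

The affinity masks are hex CPU bitmasks, so 1 and 2 select CPU0 and CPU1;
whether the kernel honors the write immediately or defers it to the next
interrupt is exactly the IRQ_MOVE_PCNTXT distinction under discussion.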
Gary
--
Gary Hade
System x Enablement
IBM Linux Technology Center
503-578-4503 IBM T/L: 775-4503
garyhade@us.ibm.com
http://www.ibm.com/linux/ltc
Thread overview: 10+ messages
2009-06-02 19:32 [RESEND] [PATCH v2] [BUGFIX] x86/x86_64: fix CPU offlining triggered "active" device IRQ interrruption Gary Hade
2009-06-03 4:51 ` H. Peter Anvin
2009-06-03 16:40 ` Gary Hade
2009-06-03 12:03 ` Eric W. Biederman
2009-06-03 17:06 ` Gary Hade
2009-06-03 21:13 ` Eric W. Biederman
2009-06-04 20:04 ` Gary Hade
2009-06-04 21:17 ` Gary Hade [this message]
2009-06-04 23:16 ` Eric W. Biederman
2009-06-03 12:27 ` Eric W. Biederman