xen-devel.lists.xenproject.org archive mirror
From: George Dunlap <george.dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Keir Fraser <keir@xen.org>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH] x86: fix ordering of operations in destroy_irq()
Date: Thu, 30 May 2013 17:23:07 +0100
Message-ID: <51A77CEB.6030409@eu.citrix.com>
In-Reply-To: <51A5C33A02000078000D974A@nat28.tlf.novell.com>

On 05/29/2013 07:58 AM, Jan Beulich wrote:
> The fix for XSA-36, switching the default of vector map management to
> be per-device, exposed more readily a problem with the cleanup of these
> vector maps: dynamic_irq_cleanup() clearing desc->arch.used_vectors
> keeps the subsequently invoked clear_irq_vector() from clearing the
> bits for both the in-use and a possibly still outstanding old vector.
>
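For anyone trying to picture the hazard, here is a minimal stand-alone
sketch. Everything below is a simplified stand-in, not the actual Xen
code -- vmask_t is just a toy bitmask and the struct is invented for
illustration:

    #include <stdio.h>

    typedef struct { unsigned long bits; } vmask_t;

    struct irq_desc_arch {
        int vector;            /* vector currently in use */
        int old_vector;        /* possibly still outstanding old vector */
        vmask_t *used_vectors; /* per-device vector map */
    };

    /* Releases both vectors from the map -- but only while the
     * map pointer is still set. */
    static void clear_irq_vector(struct irq_desc_arch *arch)
    {
        if (arch->used_vectors) {
            arch->used_vectors->bits &= ~(1UL << arch->vector);
            arch->used_vectors->bits &= ~(1UL << arch->old_vector);
        }
    }

    int main(void)
    {
        vmask_t map = { (1UL << 5) | (1UL << 9) };
        struct irq_desc_arch arch = { 5, 9, &map };

        /* Old ordering: dynamic_irq_cleanup() drops the pointer... */
        arch.used_vectors = NULL;
        /* ...so the subsequent clear_irq_vector() is a no-op... */
        clear_irq_vector(&arch);
        /* ...and both vectors stay marked in use: prints 0x220. */
        printf("leaked map bits: %#lx\n", map.bits);
        return 0;
    }
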
> Fix this by folding dynamic_irq_cleanup() into destroy_irq(), which was
> its only caller, deferring the clearing of the vector map pointer until
> after clear_irq_vector().
>
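And the corrected ordering, reusing the toy types from the sketch above
(destroy_irq_tail is a hypothetical name -- in the real patch this is
simply folded into destroy_irq()):

    /* Hypothetical tail of destroy_irq() after folding in the
     * cleanup (sketch only, not the actual patch): */
    static void destroy_irq_tail(struct irq_desc_arch *arch)
    {
        clear_irq_vector(arch);    /* releases in-use + old vector bits */
        arch->used_vectors = NULL; /* pointer cleared only afterwards */
    }
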
> While at it, also defer resetting desc->handler until after the loop
> around smp_mb() that checks for IRQ_INPROGRESS to be clear, fixing a
> (mostly theoretical) issue in the interaction with do_IRQ(): if we
> don't defer the pointer reset, do_IRQ() could, for non-guest IRQs, call
> ->ack() and ->end() with different ->handler pointers, potentially
> leading to an IRQ remaining un-acked. The issue is mostly theoretical
> because non-guest IRQs are subject to destroy_irq() only on (boot time)
> error paths.
>
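A sketch of that ->ack()/->end() mismatch, replayed single-threaded
(the real problem is a race between do_IRQ() and destroy_irq(); all
names and state below are simplified stand-ins):

    #include <stdio.h>

    struct irq_desc;

    struct hw_interrupt_type {
        void (*ack)(struct irq_desc *);
        void (*end)(struct irq_desc *);
    };

    struct irq_desc {
        const struct hw_interrupt_type *handler;
        int in_service; /* stand-in for interrupt-controller state */
    };

    static void real_ack(struct irq_desc *d) { d->in_service = 1; }
    static void real_end(struct irq_desc *d) { d->in_service = 0; }
    static void noop(struct irq_desc *d)     { (void)d; }

    static const struct hw_interrupt_type real_type   = { real_ack, real_end };
    static const struct hw_interrupt_type no_irq_type = { noop, noop };

    int main(void)
    {
        struct irq_desc desc = { &real_type, 0 };

        desc.handler->ack(&desc);  /* do_IRQ(): ack via the real handler */

        /* destroy_irq() resetting the handler here -- i.e. before
         * waiting for IRQ_INPROGRESS to clear -- is the hazard: */
        desc.handler = &no_irq_type;

        desc.handler->end(&desc);  /* ...so end() is now a no-op... */
        printf("IRQ left in service: %s\n", desc.in_service ? "yes" : "no");
        return 0;
    }
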
> As to the changed locking: Invoking clear_irq_vector() with desc->lock
> held is okay because vector_lock already nests inside desc->lock (proven
> by set_desc_affinity(), which takes vector_lock and is called from
> various desc->handler->ack implementations, themselves invoked with
> desc->lock held).
>
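The nesting rule spelled out, as a toy illustration with pthread
mutexes rather than Xen's spinlocks:

    #include <pthread.h>

    static pthread_mutex_t desc_lock   = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t vector_lock = PTHREAD_MUTEX_INITIALIZER;

    /* vector_lock nests inside desc->lock: every path taking both
     * takes desc->lock first (set_desc_affinity() already does so
     * when reached from ->ack() with desc->lock held), so calling
     * clear_irq_vector() under desc->lock keeps a single, consistent
     * acquisition order and cannot introduce a deadlock. */
    int main(void)
    {
        pthread_mutex_lock(&desc_lock);    /* outer */
        pthread_mutex_lock(&vector_lock);  /* inner */
        /* ... update the vector maps ... */
        pthread_mutex_unlock(&vector_lock);
        pthread_mutex_unlock(&desc_lock);
        return 0;
    }
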
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

How big of an impact is this bug?  How many people are actually affected 
by it?

It's a bit hard for me to tell from the description, but it looks like 
it's mostly code motion plus fixes for some "theoretical" issues.

Remember our three goals:
- A bug-free release
- An awesome release
- An on-time release

Is the improvement this patch represents worth the potential risk of 
bugs at this point?

  -George


Thread overview: 9+ messages
2013-05-29  6:58 [PATCH] x86: fix ordering of operations in destroy_irq() Jan Beulich
2013-05-29  7:23 ` Jan Beulich
2013-05-29 22:17   ` Andrew Cooper
2013-05-29  7:29 ` Keir Fraser
2013-05-30 16:23 ` George Dunlap [this message]
2013-05-30 16:42   ` Jan Beulich
2013-05-30 16:51     ` George Dunlap
2013-05-30 17:22       ` Andrew Cooper
2013-05-31  6:36       ` Jan Beulich
