From: Thomas Gleixner <tglx@linutronix.de>
To: Jason Wang <jasowang@redhat.com>, mst@redhat.com, jasowang@redhat.com
Cc: "Paul E. McKenney" <paulmck@kernel.org>,
	david.kaplan@amd.com, konrad.wilk@oracle.com,
	Peter Zijlstra <peterz@infradead.org>,
	Boqun Feng <boqun.feng@gmail.com>,
	f.hetzelt@tu-berlin.de, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Will Deacon <will@kernel.org>
Subject: Re: [PATCH 7/9] virtio-pci: harden INTX interrupts
Date: Mon, 13 Sep 2021 23:36:24 +0200	[thread overview]
Message-ID: <875yv4f99j.ffs@tglx> (raw)
In-Reply-To: <20210913055353.35219-8-jasowang@redhat.com>

Jason,

On Mon, Sep 13 2021 at 13:53, Jason Wang wrote:
> This patch tries to make sure the virtio interrupt handler for INTX
> won't be called after a reset and before virtio_device_ready(). We
> can't use IRQF_NO_AUTOEN since we're using a shared interrupt
> (IRQF_SHARED). So this patch tracks the INTX enabling status in a new
> intx_soft_enabled variable and toggles it in
> vp_disable/enable_vectors(). The INTX interrupt handler will check
> intx_soft_enabled before processing the actual interrupt.

Ah, there it is :)

Cc'ed our memory ordering wizards as I might be wrong as usual.

> -	if (vp_dev->intx_enabled)
> +	if (vp_dev->intx_enabled) {
> +		vp_dev->intx_soft_enabled = false;
> +		/* ensure the vp_interrupt see this intx_soft_enabled value */
> +		smp_wmb();
>  		synchronize_irq(vp_dev->pci_dev->irq);

As you are synchronizing the interrupt here anyway, what is the value of
the barrier?

 		vp_dev->intx_soft_enabled = false;
  		synchronize_irq(vp_dev->pci_dev->irq);

is sufficient because of:

synchronize_irq()
   do {
   	raw_spin_lock(desc->lock);
        in_progress = check_inprogress(desc);
   	raw_spin_unlock(desc->lock);
   } while (in_progress);     

raw_spin_lock() has ACQUIRE semantics, so the store to intx_soft_enabled
can complete after the lock has been acquired, which is uninteresting.

raw_spin_unlock() has RELEASE semantics, so the store to intx_soft_enabled
has to be completed before the unlock completes.

So if the interrupt is in flight then it might or might not see
intx_soft_enabled == false. But that's true for your barrier construct
as well.

The important part is that any interrupt for this line arriving after
synchronize_irq() has completed is guaranteed to see intx_soft_enabled
== false.

That is what you want to achieve, right?
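
Put differently, a minimal sketch of the INTX branch of
vp_disable_vectors() without the extra barrier. The function shape
follows the quoted hunks; take this as a sketch of what I mean, not a
tested diff:

void vp_disable_vectors(struct virtio_device *vdev)
{
        struct virtio_pci_device *vp_dev = to_vp_device(vdev);
        int i;

        if (vp_dev->intx_enabled) {
                vp_dev->intx_soft_enabled = false;
                /*
                 * Once this returns no handler is in flight anymore and
                 * every new invocation observes intx_soft_enabled == false.
                 */
                synchronize_irq(vp_dev->pci_dev->irq);
        }

        for (i = 0; i < vp_dev->msix_vectors; ++i)
                disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
}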

>  	for (i = 0; i < vp_dev->msix_vectors; ++i)
>  		disable_irq(pci_irq_vector(vp_dev->pci_dev, i));
> @@ -43,8 +47,12 @@ void vp_enable_vectors(struct virtio_device *vdev)
>  	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
>  	int i;
>  
> -	if (vp_dev->intx_enabled)
> +	if (vp_dev->intx_enabled) {
> +		vp_dev->intx_soft_enabled = true;
> +		/* ensure the vp_interrupt see this intx_soft_enabled value */
> +		smp_wmb();

For the enable case the barrier is pointless vs. intx_soft_enabled

CPU 0                                           CPU 1

interrupt                                       vp_enable_vectors()
  vp_interrupt()                                
    if (!vp_dev->intx_soft_enabled)
       return IRQ_NONE;
                                                  vp_dev->intx_soft_enabled = true;

IOW, the concurrent interrupt might or might not see the store. That's
not a problem for legacy PCI interrupts. If it did not see the store and
the interrupt originated from that device, then it is accounted as a
spurious interrupt, which will be raised again because those interrupts
are level triggered and nothing acknowledged them at the device level.

Now, what's more interesting is that it has to be guaranteed that the
interrupt which observes

        vp_dev->intx_soft_enabled == true

also observes all preceding stores, i.e. those which make the interrupt
handler capable of handling the interrupt.

That's the real problem, and for that your barrier is in the wrong place,
because you want to make sure that those stores are visible before the
store to intx_soft_enabled becomes visible, i.e. this should be:


        /* Ensure that all preceding stores are visible before intx_soft_enabled */
	smp_wmb();
	vp_dev->intx_soft_enabled = true;

Now Michael is not really enthusiastic about the barrier in the interrupt
handler hot path, which is understandable.
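
For completeness, with the barrier placed as above, the handler hot path
would have to pair it roughly like this (a sketch, not the patch as
posted):

        /* enable path */
        smp_wmb();                      /* order the preceding stores ... */
        vp_dev->intx_soft_enabled = true;

        /* vp_interrupt() hot path */
        if (!vp_dev->intx_soft_enabled)
                return IRQ_NONE;
        smp_rmb();                      /* ... against the loads after this check */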

As device startup does not happen often, it's sensible to do
the following:

        disable_irq();
        vp_dev->intx_soft_enabled = true;
        enable_irq();

because:

        disable_irq()
          synchronize_irq()

acts as a barrier for the preceding stores:

        disable_irq()
   	  raw_spin_lock(desc->lock);
          __disable_irq(desc);
   	  raw_spin_unlock(desc->lock);

          synchronize_irq()
            do {
   	      raw_spin_lock(desc->lock);
              in_progress = check_inprogress(desc);
   	      raw_spin_unlock(desc->lock);
            } while (in_progress);     

        intx_soft_enabled = true;

        enable_irq();

In this case synchronize_irq() prevents the subsequent store to
intx_soft_enabled from leaking into the __disable_irq(desc) section,
which in turn makes it impossible for an interrupt handler to observe
intx_soft_enabled == true before the prerequisites which precede the
call to disable_irq() are visible.

Of course the memory ordering wizards might disagree, but if they do,
then we have a massive chase of ordering problems vs. similar constructs
all over the tree ahead of us.

From the interrupt perspective the sequence:

        disable_irq();
        vp_dev->intx_soft_enabled = true;
        enable_irq();

is perfectly fine as well. Any interrupt arriving during the disabled
section will be reraised on enable_irq() in hardware because it's a
level interrupt. Any resulting failure is either a hardware or a
hypervisor bug.
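
Put together, a minimal sketch of vp_enable_vectors() using that
approach. The MSI-X loop mirrors the quoted hunks; take this as a sketch
from my side, not a tested diff:

void vp_enable_vectors(struct virtio_device *vdev)
{
        struct virtio_pci_device *vp_dev = to_vp_device(vdev);
        int i;

        if (vp_dev->intx_enabled) {
                /*
                 * disable_irq() waits for a running handler to finish and
                 * thereby orders the preceding stores before the store to
                 * intx_soft_enabled below.
                 */
                disable_irq(vp_dev->pci_dev->irq);
                vp_dev->intx_soft_enabled = true;
                enable_irq(vp_dev->pci_dev->irq);
        }

        for (i = 0; i < vp_dev->msix_vectors; ++i)
                enable_irq(pci_irq_vector(vp_dev->pci_dev, i));
}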

Thanks,

        tglx