From: "Michael S. Tsirkin" <mst@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>,
	Arnd Bergmann <arnd@arndb.de>,
	linux-arch@vger.kernel.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	virtualization@lists.linux-foundation.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [PATCH 33/34] virtio: use __smp_load_acquire/__smp_store_release
Date: Wed, 30 Dec 2015 15:26:32 +0200	[thread overview]
Message-ID: <1451473761-30019-34-git-send-email-mst@redhat.com> (raw)
In-Reply-To: <1451473761-30019-1-git-send-email-mst@redhat.com>

virtio ring entries have exactly the acquire/release
semantics:
	- reading used index acquires a ring entry from host
	- updating the available index releases it to host

Thus when using weak barriers (as most people do), __smp_load_acquire
and __smp_store_release will do exactly the right thing to synchronize
with the host.

In fact, QEMU already uses __atomic_thread_fence(__ATOMIC_ACQUIRE) and
__atomic_thread_fence(__ATOMIC_RELEASE), and
Documentation/circular-buffers.txt suggests smp_load_acquire and
smp_store_release for head and tail updates.

Since we have to worry about strong barrier users,
let's add APIs to wrap these, and use them in virtio_ring.c:
in the case of strong barriers, we fall back on
rmb/wmb as before.

It is tempting to use this in vhost as well, but that needs
a bit more work to make the acquire/release macros work on
__user pointers.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_ring.h  | 26 ++++++++++++++++++++++++++
 drivers/virtio/virtio_ring.c | 20 ++++++++++----------
 2 files changed, 36 insertions(+), 10 deletions(-)

diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 71176be..528cf81 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -45,6 +45,32 @@ static inline void virtio_wmb(bool weak_barriers)
 		wmb();
 }
 
+/* a load + acquire barrier, but only guaranteed to order reads */
+static inline __virtio16 virtio_load_acquire_rmb(bool weak_barriers,
+						 __virtio16 *p)
+{
+	if (weak_barriers)
+		return __smp_load_acquire(p);
+	else {
+		__virtio16 v = READ_ONCE(*p);
+		rmb();
+		return v;
+	}
+}
+
+/* a release barrier + store, but only guaranteed to order writes */
+static inline void virtio_store_release_wmb(bool weak_barriers,
+					    __virtio16 *p, __virtio16 v)
+{
+	if (weak_barriers)
+		__smp_store_release(p, v);
+	else {
+		wmb();
+		WRITE_ONCE(*p, v);
+		return;
+	}
+}
+
 struct virtio_device;
 struct virtqueue;
 
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index ee663c4..edc01d0 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -244,11 +244,11 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	avail = vq->avail_idx_shadow & (vq->vring.num - 1);
 	vq->vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
 
+	vq->avail_idx_shadow++;
 	/* Descriptors and available array need to be set before we expose the
 	 * new available array entries. */
-	virtio_wmb(vq->weak_barriers);
-	vq->avail_idx_shadow++;
-	vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
+	virtio_store_release_wmb(vq->weak_barriers, &vq->vring.avail->idx,
+				 cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow));
 	vq->num_added++;
 
 	pr_debug("Added buffer head %i to %p\n", head, vq);
@@ -453,9 +453,10 @@ static void detach_buf(struct vring_virtqueue *vq, unsigned int head)
 	vq->vq.num_free++;
 }
 
-static inline bool more_used(const struct vring_virtqueue *vq)
+static inline
+bool more_used(const struct vring_virtqueue *vq, __virtio16 used_idx)
 {
-	return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, vq->vring.used->idx);
+	return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, used_idx);
 }
 
 /**
@@ -488,15 +489,14 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
 		return NULL;
 	}
 
-	if (!more_used(vq)) {
+	/* Only get used array entries after they have been exposed by host. */
+	if (!more_used(vq, virtio_load_acquire_rmb(vq->weak_barriers,
+						   &vq->vring.used->idx))) {
 		pr_debug("No more buffers in queue\n");
 		END_USE(vq);
 		return NULL;
 	}
 
-	/* Only get used array entries after they have been exposed by host. */
-	virtio_rmb(vq->weak_barriers);
-
 	last_used = (vq->last_used_idx & (vq->vring.num - 1));
 	i = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].id);
 	*len = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].len);
@@ -704,7 +704,7 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (!more_used(vq)) {
+	if (!more_used(vq, vq->vring.used->idx)) {
 		pr_debug("virtqueue interrupt with no work for %p\n", vq);
 		return IRQ_NONE;
 	}
-- 
MST
