* [Qemu-devel] [PATCH v3 0/9] virtio: use MemoryRegionCache for descriptors and rings
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
A few fixes caught by "make check" (yes, brown paper bag).
Paolo
v2->v3: patch 1 committed already (replaced by new patch 1)
fix error handling in patch 3
fix freeing uninitialized VRingMemoryRegionCache (patch 7)
new patch 8
Paolo Bonzini (9):
memory: make memory_listener_unregister idempotent
virtio: add virtio_*_phys_cached
virtio: use address_space_map/unmap to access descriptors
exec: make address_space_cache_destroy idempotent
virtio: use MemoryRegionCache to access descriptors
virtio: add MemoryListener to cache ring translations
virtio: use VRingMemoryRegionCaches for descriptor ring
virtio: check for ring setup in virtio_queue_update_used_idx
virtio: use VRingMemoryRegionCaches for avail and used rings
exec.c | 1 +
hw/net/virtio-net.c | 14 +-
hw/virtio/virtio.c | 338 ++++++++++++++++++++++++++++++--------
include/exec/memory.h | 2 +
include/hw/virtio/virtio-access.h | 52 ++++++
include/hw/virtio/virtio.h | 1 +
memory.c | 5 +
7 files changed, 345 insertions(+), 68 deletions(-)
--
1.8.3.1
* [Qemu-devel] [PATCH 1/9] memory: make memory_listener_unregister idempotent
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
Make it easy to unregister a MemoryListener without tracking whether it
had been registered before.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
v3: new
memory.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/memory.c b/memory.c
index 2bfc37f..8fafd4c 100644
--- a/memory.c
+++ b/memory.c
@@ -2371,8 +2371,13 @@ void memory_listener_register(MemoryListener *listener, AddressSpace *as)
void memory_listener_unregister(MemoryListener *listener)
{
+ if (!listener->address_space) {
+ return;
+ }
+
QTAILQ_REMOVE(&memory_listeners, listener, link);
QTAILQ_REMOVE(&listener->address_space->listeners, listener, link_as);
+ listener->address_space = NULL;
}
void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
--
1.8.3.1
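With this change, a teardown path can call memory_listener_unregister() unconditionally: if the listener was never registered, or was already unregistered, listener->address_space is NULL and the call does nothing. A minimal sketch of the pattern this enables, using a hypothetical device struct (names are illustrative; QEMU-internal headers assumed):

    /* Sketch only: hypothetical device, QEMU-internal APIs. */
    #include "qemu/osdep.h"
    #include "exec/memory.h"

    typedef struct MyDev {
        MemoryListener listener;   /* instance is zero-initialized, so address_space == NULL */
    } MyDev;

    static void mydev_realize(MyDev *d, AddressSpace *as)
    {
        memory_listener_register(&d->listener, as);  /* records as in listener->address_space */
    }

    static void mydev_unrealize(MyDev *d)
    {
        memory_listener_unregister(&d->listener);    /* unlinks and clears address_space */
    }

    static void mydev_finalize(MyDev *d)
    {
        /* No-op if realize never ran or unrealize already unregistered. */
        memory_listener_unregister(&d->listener);
    }

Patch 6 relies on exactly this when virtio_device_instance_finalize() unregisters the listener.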
* [Qemu-devel] [PATCH 2/9] virtio: add virtio_*_phys_cached
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/hw/virtio/virtio-access.h | 52 +++++++++++++++++++++++++++++++++++++++
1 file changed, 52 insertions(+)
diff --git a/include/hw/virtio/virtio-access.h b/include/hw/virtio/virtio-access.h
index 91ae14d..2e92074 100644
--- a/include/hw/virtio/virtio-access.h
+++ b/include/hw/virtio/virtio-access.h
@@ -156,6 +156,58 @@ static inline uint16_t virtio_tswap16(VirtIODevice *vdev, uint16_t s)
#endif
}
+static inline uint16_t virtio_lduw_phys_cached(VirtIODevice *vdev,
+ MemoryRegionCache *cache,
+ hwaddr pa)
+{
+ if (virtio_access_is_big_endian(vdev)) {
+ return lduw_be_phys_cached(cache, pa);
+ }
+ return lduw_le_phys_cached(cache, pa);
+}
+
+static inline uint32_t virtio_ldl_phys_cached(VirtIODevice *vdev,
+ MemoryRegionCache *cache,
+ hwaddr pa)
+{
+ if (virtio_access_is_big_endian(vdev)) {
+ return ldl_be_phys_cached(cache, pa);
+ }
+ return ldl_le_phys_cached(cache, pa);
+}
+
+static inline uint64_t virtio_ldq_phys_cached(VirtIODevice *vdev,
+ MemoryRegionCache *cache,
+ hwaddr pa)
+{
+ if (virtio_access_is_big_endian(vdev)) {
+ return ldq_be_phys_cached(cache, pa);
+ }
+ return ldq_le_phys_cached(cache, pa);
+}
+
+static inline void virtio_stw_phys_cached(VirtIODevice *vdev,
+ MemoryRegionCache *cache,
+ hwaddr pa, uint16_t value)
+{
+ if (virtio_access_is_big_endian(vdev)) {
+ stw_be_phys_cached(cache, pa, value);
+ } else {
+ stw_le_phys_cached(cache, pa, value);
+ }
+}
+
+static inline void virtio_stl_phys_cached(VirtIODevice *vdev,
+ MemoryRegionCache *cache,
+ hwaddr pa, uint32_t value)
+{
+ if (virtio_access_is_big_endian(vdev)) {
+ stl_be_phys_cached(cache, pa, value);
+ } else {
+ stl_le_phys_cached(cache, pa, value);
+ }
+}
+
static inline void virtio_tswap16s(VirtIODevice *vdev, uint16_t *s)
{
*s = virtio_tswap16(vdev, *s);
--
1.8.3.1
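These helpers mirror the existing virtio_*_phys accessors, but take a MemoryRegionCache and an offset relative to the cached region instead of a guest-physical address, while still honouring the device's endianness. A rough usage sketch (the cache setup is assumed to have been done elsewhere; VRingUsed is the ring layout struct from hw/virtio/virtio.c):

    /* Sketch: read the used ring's 16-bit idx field through a cache. */
    static uint16_t read_used_idx(VirtIODevice *vdev, MemoryRegionCache *used_cache)
    {
        hwaddr pa = offsetof(VRingUsed, idx);   /* offset within the cached used ring */
        return virtio_lduw_phys_cached(vdev, used_cache, pa);
    }

Patch 9 uses this exact shape when it converts vring_used_idx() and the other ring accessors.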
* [Qemu-devel] [PATCH 3/9] virtio: use address_space_map/unmap to access descriptors
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
This makes little difference by itself, but it keeps the code change smaller
in the next patch that introduces MemoryRegionCache, because map/unmap
pair up in the same way as MemoryRegionCache init/destroy.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
v3: "goto done" when a descriptor has been mapped partially
hw/virtio/virtio.c | 103 ++++++++++++++++++++++++++++++++++++++---------------
1 file changed, 75 insertions(+), 28 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 6365706..f44e072 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -120,10 +120,9 @@ void virtio_queue_update_rings(VirtIODevice *vdev, int n)
}
static void vring_desc_read(VirtIODevice *vdev, VRingDesc *desc,
- hwaddr desc_pa, int i)
+ uint8_t *desc_ptr, int i)
{
- address_space_read(vdev->dma_as, desc_pa + i * sizeof(VRingDesc),
- MEMTXATTRS_UNSPECIFIED, (void *)desc, sizeof(VRingDesc));
+ memcpy(desc, desc_ptr + i * sizeof(VRingDesc), sizeof(VRingDesc));
virtio_tswap64s(vdev, &desc->addr);
virtio_tswap32s(vdev, &desc->len);
virtio_tswap16s(vdev, &desc->flags);
@@ -408,7 +407,7 @@ enum {
};
static int virtqueue_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
- hwaddr desc_pa, unsigned int max,
+ void *desc_ptr, unsigned int max,
unsigned int *next)
{
/* If this descriptor says it doesn't chain, we're done. */
@@ -426,7 +425,7 @@ static int virtqueue_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
return VIRTQUEUE_READ_DESC_ERROR;
}
- vring_desc_read(vdev, desc, desc_pa, *next);
+ vring_desc_read(vdev, desc, desc_ptr, *next);
return VIRTQUEUE_READ_DESC_MORE;
}
@@ -434,31 +433,41 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
unsigned int *out_bytes,
unsigned max_in_bytes, unsigned max_out_bytes)
{
- unsigned int idx;
+ VirtIODevice *vdev = vq->vdev;
+ unsigned int max, idx;
unsigned int total_bufs, in_total, out_total;
+ void *vring_desc_ptr;
+ void *indirect_desc_ptr = NULL;
+ hwaddr len = 0;
int rc;
idx = vq->last_avail_idx;
-
total_bufs = in_total = out_total = 0;
+
+ max = vq->vring.num;
+ len = max * sizeof(VRingDesc);
+ vring_desc_ptr = address_space_map(vdev->dma_as, vq->vring.desc, &len, false);
+ if (len < max * sizeof(VRingDesc)) {
+ virtio_error(vdev, "Cannot map descriptor ring");
+ goto err;
+ }
+
while ((rc = virtqueue_num_heads(vq, idx)) > 0) {
- VirtIODevice *vdev = vq->vdev;
- unsigned int max, num_bufs, indirect = 0;
+ void *desc_ptr = vring_desc_ptr;
+ unsigned int num_bufs;
VRingDesc desc;
- hwaddr desc_pa;
unsigned int i;
- max = vq->vring.num;
num_bufs = total_bufs;
if (!virtqueue_get_head(vq, idx++, &i)) {
goto err;
}
- desc_pa = vq->vring.desc;
- vring_desc_read(vdev, &desc, desc_pa, i);
+ vring_desc_read(vdev, &desc, desc_ptr, i);
if (desc.flags & VRING_DESC_F_INDIRECT) {
+ len = desc.len;
if (desc.len % sizeof(VRingDesc)) {
virtio_error(vdev, "Invalid size for indirect buffer table");
goto err;
@@ -471,11 +480,17 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
}
/* loop over the indirect descriptor table */
- indirect = 1;
+ indirect_desc_ptr = address_space_map(vdev->dma_as, desc.addr,
+ &len, false);
+ desc_ptr = indirect_desc_ptr;
+ if (len < desc.len) {
+ virtio_error(vdev, "Cannot map indirect buffer");
+ goto err;
+ }
+
max = desc.len / sizeof(VRingDesc);
- desc_pa = desc.addr;
num_bufs = i = 0;
- vring_desc_read(vdev, &desc, desc_pa, i);
+ vring_desc_read(vdev, &desc, desc_ptr, i);
}
do {
@@ -494,17 +509,20 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
goto done;
}
- rc = virtqueue_read_next_desc(vdev, &desc, desc_pa, max, &i);
+ rc = virtqueue_read_next_desc(vdev, &desc, desc_ptr, max, &i);
} while (rc == VIRTQUEUE_READ_DESC_MORE);
if (rc == VIRTQUEUE_READ_DESC_ERROR) {
goto err;
}
- if (!indirect)
- total_bufs = num_bufs;
- else
+ if (desc_ptr == indirect_desc_ptr) {
+ address_space_unmap(vdev->dma_as, desc_ptr, len, false, 0);
+ indirect_desc_ptr = NULL;
total_bufs++;
+ } else {
+ total_bufs = num_bufs;
+ }
}
if (rc < 0) {
@@ -512,6 +530,10 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
}
done:
+ if (indirect_desc_ptr) {
+ address_space_unmap(vdev->dma_as, indirect_desc_ptr, len, false, 0);
+ }
+ address_space_unmap(vdev->dma_as, vring_desc_ptr, len, false, 0);
if (in_bytes) {
*in_bytes = in_total;
}
@@ -651,9 +673,12 @@ static void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_nu
void *virtqueue_pop(VirtQueue *vq, size_t sz)
{
unsigned int i, head, max;
- hwaddr desc_pa = vq->vring.desc;
+ void *vring_desc_ptr;
+ void *indirect_desc_ptr = NULL;
+ void *desc_ptr;
+ hwaddr len;
VirtIODevice *vdev = vq->vdev;
- VirtQueueElement *elem;
+ VirtQueueElement *elem = NULL;
unsigned out_num, in_num;
hwaddr addr[VIRTQUEUE_MAX_SIZE];
struct iovec iov[VIRTQUEUE_MAX_SIZE];
@@ -689,18 +714,34 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
}
i = head;
- vring_desc_read(vdev, &desc, desc_pa, i);
+
+ len = max * sizeof(VRingDesc);
+ vring_desc_ptr = address_space_map(vdev->dma_as, vq->vring.desc, &len, false);
+ if (len < max * sizeof(VRingDesc)) {
+ virtio_error(vdev, "Cannot map descriptor ring");
+ goto done;
+ }
+
+ desc_ptr = vring_desc_ptr;
+ vring_desc_read(vdev, &desc, desc_ptr, i);
if (desc.flags & VRING_DESC_F_INDIRECT) {
if (desc.len % sizeof(VRingDesc)) {
virtio_error(vdev, "Invalid size for indirect buffer table");
- return NULL;
+ goto done;
}
/* loop over the indirect descriptor table */
+ len = desc.len;
+ indirect_desc_ptr = address_space_map(vdev->dma_as, desc.addr, &len, false);
+ desc_ptr = indirect_desc_ptr;
+ if (len < desc.len) {
+ virtio_error(vdev, "Cannot map indirect buffer");
+ goto done;
+ }
+
max = desc.len / sizeof(VRingDesc);
- desc_pa = desc.addr;
i = 0;
- vring_desc_read(vdev, &desc, desc_pa, i);
+ vring_desc_read(vdev, &desc, desc_ptr, i);
}
/* Collect all the descriptors */
@@ -731,7 +772,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
goto err_undo_map;
}
- rc = virtqueue_read_next_desc(vdev, &desc, desc_pa, max, &i);
+ rc = virtqueue_read_next_desc(vdev, &desc, desc_ptr, max, &i);
} while (rc == VIRTQUEUE_READ_DESC_MORE);
if (rc == VIRTQUEUE_READ_DESC_ERROR) {
@@ -753,11 +794,17 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
vq->inuse++;
trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
+done:
+ if (indirect_desc_ptr) {
+ address_space_unmap(vdev->dma_as, indirect_desc_ptr, len, false, 0);
+ }
+ address_space_unmap(vdev->dma_as, vring_desc_ptr, len, false, 0);
+
return elem;
err_undo_map:
virtqueue_undo_map_desc(out_num, in_num, iov);
- return NULL;
+ goto done;
}
/* virtqueue_drop_all:
--
1.8.3.1
* [Qemu-devel] [PATCH 4/9] exec: make address_space_cache_destroy idempotent
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
Clear cache->mr so that address_space_cache_destroy does nothing
the second time it is called.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
exec.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/exec.c b/exec.c
index f2bed92..5de15cf 100644
--- a/exec.c
+++ b/exec.c
@@ -3165,6 +3165,7 @@ void address_space_cache_destroy(MemoryRegionCache *cache)
xen_invalidate_map_cache_entry(cache->ptr);
}
memory_region_unref(cache->mr);
+ cache->mr = NULL;
}
/* Called from RCU critical section. This function has the same
--
1.8.3.1
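Together with the existing early return when cache->mr is NULL, clearing the field makes repeated destroys (and destroys of a never-initialized cache) safe. A minimal sketch of the idiom the later patches rely on, using the MEMORY_REGION_CACHE_INVALID initializer added in patch 5:

    /* Sketch: unconditional destroy in the cleanup path is now always safe. */
    static void cache_read_example(AddressSpace *as, hwaddr addr, hwaddr size)
    {
        MemoryRegionCache cache = MEMORY_REGION_CACHE_INVALID;   /* mr == NULL */
        uint16_t val;
        int64_t len = address_space_cache_init(&cache, as, addr, size, false);

        if (len < (int64_t)size) {
            goto out;                            /* mapping failed or was partial */
        }
        address_space_read_cached(&cache, 0, &val, sizeof(val));
    out:
        address_space_cache_destroy(&cache);     /* releases the region, clears mr */
        address_space_cache_destroy(&cache);     /* now a no-op instead of a double unref */
    }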
* [Qemu-devel] [PATCH 5/9] virtio: use MemoryRegionCache to access descriptors
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
For now, the cache is created on every virtqueue_pop. Later patches will
let accesses to direct (non-indirect) descriptors reuse a cache that
persists across calls.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/virtio/virtio.c | 80 +++++++++++++++++++++++++--------------------------
include/exec/memory.h | 2 ++
2 files changed, 41 insertions(+), 41 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index f44e072..8e9667b 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -120,9 +120,10 @@ void virtio_queue_update_rings(VirtIODevice *vdev, int n)
}
static void vring_desc_read(VirtIODevice *vdev, VRingDesc *desc,
- uint8_t *desc_ptr, int i)
+ MemoryRegionCache *cache, int i)
{
- memcpy(desc, desc_ptr + i * sizeof(VRingDesc), sizeof(VRingDesc));
+ address_space_read_cached(cache, i * sizeof(VRingDesc),
+ desc, sizeof(VRingDesc));
virtio_tswap64s(vdev, &desc->addr);
virtio_tswap32s(vdev, &desc->len);
virtio_tswap16s(vdev, &desc->flags);
@@ -407,7 +408,7 @@ enum {
};
static int virtqueue_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
- void *desc_ptr, unsigned int max,
+ MemoryRegionCache *desc_cache, unsigned int max,
unsigned int *next)
{
/* If this descriptor says it doesn't chain, we're done. */
@@ -425,7 +426,7 @@ static int virtqueue_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
return VIRTQUEUE_READ_DESC_ERROR;
}
- vring_desc_read(vdev, desc, desc_ptr, *next);
+ vring_desc_read(vdev, desc, desc_cache, *next);
return VIRTQUEUE_READ_DESC_MORE;
}
@@ -436,24 +437,25 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
VirtIODevice *vdev = vq->vdev;
unsigned int max, idx;
unsigned int total_bufs, in_total, out_total;
- void *vring_desc_ptr;
- void *indirect_desc_ptr = NULL;
- hwaddr len = 0;
+ MemoryRegionCache vring_desc_cache;
+ MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
+ int64_t len = 0;
int rc;
idx = vq->last_avail_idx;
total_bufs = in_total = out_total = 0;
max = vq->vring.num;
- len = max * sizeof(VRingDesc);
- vring_desc_ptr = address_space_map(vdev->dma_as, vq->vring.desc, &len, false);
+ len = address_space_cache_init(&vring_desc_cache, vdev->dma_as,
+ vq->vring.desc, max * sizeof(VRingDesc),
+ false);
if (len < max * sizeof(VRingDesc)) {
virtio_error(vdev, "Cannot map descriptor ring");
goto err;
}
while ((rc = virtqueue_num_heads(vq, idx)) > 0) {
- void *desc_ptr = vring_desc_ptr;
+ MemoryRegionCache *desc_cache = &vring_desc_cache;
unsigned int num_bufs;
VRingDesc desc;
unsigned int i;
@@ -464,10 +466,9 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
goto err;
}
- vring_desc_read(vdev, &desc, desc_ptr, i);
+ vring_desc_read(vdev, &desc, desc_cache, i);
if (desc.flags & VRING_DESC_F_INDIRECT) {
- len = desc.len;
if (desc.len % sizeof(VRingDesc)) {
virtio_error(vdev, "Invalid size for indirect buffer table");
goto err;
@@ -480,9 +481,10 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
}
/* loop over the indirect descriptor table */
- indirect_desc_ptr = address_space_map(vdev->dma_as, desc.addr,
- &len, false);
- desc_ptr = indirect_desc_ptr;
+ len = address_space_cache_init(&indirect_desc_cache,
+ vdev->dma_as,
+ desc.addr, desc.len, false);
+ desc_cache = &indirect_desc_cache;
if (len < desc.len) {
virtio_error(vdev, "Cannot map indirect buffer");
goto err;
@@ -490,7 +492,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
max = desc.len / sizeof(VRingDesc);
num_bufs = i = 0;
- vring_desc_read(vdev, &desc, desc_ptr, i);
+ vring_desc_read(vdev, &desc, desc_cache, i);
}
do {
@@ -509,16 +511,15 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
goto done;
}
- rc = virtqueue_read_next_desc(vdev, &desc, desc_ptr, max, &i);
+ rc = virtqueue_read_next_desc(vdev, &desc, desc_cache, max, &i);
} while (rc == VIRTQUEUE_READ_DESC_MORE);
if (rc == VIRTQUEUE_READ_DESC_ERROR) {
goto err;
}
- if (desc_ptr == indirect_desc_ptr) {
- address_space_unmap(vdev->dma_as, desc_ptr, len, false, 0);
- indirect_desc_ptr = NULL;
+ if (desc_cache == &indirect_desc_cache) {
+ address_space_cache_destroy(&indirect_desc_cache);
total_bufs++;
} else {
total_bufs = num_bufs;
@@ -530,10 +531,8 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
}
done:
- if (indirect_desc_ptr) {
- address_space_unmap(vdev->dma_as, indirect_desc_ptr, len, false, 0);
- }
- address_space_unmap(vdev->dma_as, vring_desc_ptr, len, false, 0);
+ address_space_cache_destroy(&indirect_desc_cache);
+ address_space_cache_destroy(&vring_desc_cache);
if (in_bytes) {
*in_bytes = in_total;
}
@@ -673,10 +672,10 @@ static void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_nu
void *virtqueue_pop(VirtQueue *vq, size_t sz)
{
unsigned int i, head, max;
- void *vring_desc_ptr;
- void *indirect_desc_ptr = NULL;
- void *desc_ptr;
- hwaddr len;
+ MemoryRegionCache vring_desc_cache;
+ MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
+ MemoryRegionCache *desc_cache;
+ int64_t len;
VirtIODevice *vdev = vq->vdev;
VirtQueueElement *elem = NULL;
unsigned out_num, in_num;
@@ -715,15 +714,16 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
i = head;
- len = max * sizeof(VRingDesc);
- vring_desc_ptr = address_space_map(vdev->dma_as, vq->vring.desc, &len, false);
+ len = address_space_cache_init(&vring_desc_cache, vdev->dma_as,
+ vq->vring.desc, max * sizeof(VRingDesc),
+ false);
if (len < max * sizeof(VRingDesc)) {
virtio_error(vdev, "Cannot map descriptor ring");
goto done;
}
- desc_ptr = vring_desc_ptr;
- vring_desc_read(vdev, &desc, desc_ptr, i);
+ desc_cache = &vring_desc_cache;
+ vring_desc_read(vdev, &desc, desc_cache, i);
if (desc.flags & VRING_DESC_F_INDIRECT) {
if (desc.len % sizeof(VRingDesc)) {
virtio_error(vdev, "Invalid size for indirect buffer table");
@@ -731,9 +731,9 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
}
/* loop over the indirect descriptor table */
- len = desc.len;
- indirect_desc_ptr = address_space_map(vdev->dma_as, desc.addr, &len, false);
- desc_ptr = indirect_desc_ptr;
+ len = address_space_cache_init(&indirect_desc_cache, vdev->dma_as,
+ desc.addr, desc.len, false);
+ desc_cache = &indirect_desc_cache;
if (len < desc.len) {
virtio_error(vdev, "Cannot map indirect buffer");
goto done;
@@ -741,7 +741,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
max = desc.len / sizeof(VRingDesc);
i = 0;
- vring_desc_read(vdev, &desc, desc_ptr, i);
+ vring_desc_read(vdev, &desc, desc_cache, i);
}
/* Collect all the descriptors */
@@ -772,7 +772,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
goto err_undo_map;
}
- rc = virtqueue_read_next_desc(vdev, &desc, desc_ptr, max, &i);
+ rc = virtqueue_read_next_desc(vdev, &desc, desc_cache, max, &i);
} while (rc == VIRTQUEUE_READ_DESC_MORE);
if (rc == VIRTQUEUE_READ_DESC_ERROR) {
@@ -795,10 +795,8 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
done:
- if (indirect_desc_ptr) {
- address_space_unmap(vdev->dma_as, indirect_desc_ptr, len, false, 0);
- }
- address_space_unmap(vdev->dma_as, vring_desc_ptr, len, false, 0);
+ address_space_cache_destroy(&indirect_desc_cache);
+ address_space_cache_destroy(&vring_desc_cache);
return elem;
diff --git a/include/exec/memory.h b/include/exec/memory.h
index a10044f..cb40723 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1426,6 +1426,8 @@ struct MemoryRegionCache {
bool is_write;
};
+#define MEMORY_REGION_CACHE_INVALID ((MemoryRegionCache) { .mr = NULL })
+
/* address_space_cache_init: prepare for repeated access to a physical
* memory region
*
--
1.8.3.1
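Side by side with the address_space_map() version from patch 3, the cache-based code keeps the same init / check length / use / destroy shape, which is what keeps this diff small. Condensed sketch (QEMU-internal APIs, loop body elided):

    MemoryRegionCache desc_cache = MEMORY_REGION_CACHE_INVALID;
    int64_t len;
    VRingDesc desc;

    len = address_space_cache_init(&desc_cache, vdev->dma_as,
                                   vq->vring.desc, max * sizeof(VRingDesc), false);
    if (len < max * sizeof(VRingDesc)) {
        virtio_error(vdev, "Cannot map descriptor ring");
        goto done;
    }
    vring_desc_read(vdev, &desc, &desc_cache, i);   /* address_space_read_cached inside */
    /* ... */
    done:
    address_space_cache_destroy(&desc_cache);       /* safe even on the error path (patch 4) */

MEMORY_REGION_CACHE_INVALID guarantees the destroy is harmless when initialization never happened or failed.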
* [Qemu-devel] [PATCH 6/9] virtio: add MemoryListener to cache ring translations
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
The cached translations are RCU-protected to allow efficient use
when processing virtqueues.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
v2->v3: fix instance_finalize when caches were only partially
initialized or the listener had not been registered
hw/virtio/virtio.c | 105 +++++++++++++++++++++++++++++++++++++++++++--
include/hw/virtio/virtio.h | 1 +
2 files changed, 103 insertions(+), 3 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 8e9667b..44fe14a 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -60,6 +60,13 @@ typedef struct VRingUsed
VRingUsedElem ring[0];
} VRingUsed;
+typedef struct VRingMemoryRegionCaches {
+ struct rcu_head rcu;
+ MemoryRegionCache desc;
+ MemoryRegionCache avail;
+ MemoryRegionCache used;
+} VRingMemoryRegionCaches;
+
typedef struct VRing
{
unsigned int num;
@@ -68,6 +75,7 @@ typedef struct VRing
hwaddr desc;
hwaddr avail;
hwaddr used;
+ VRingMemoryRegionCaches *caches;
} VRing;
struct VirtQueue
@@ -104,6 +112,51 @@ struct VirtQueue
QLIST_ENTRY(VirtQueue) node;
};
+static void virtio_free_region_cache(VRingMemoryRegionCaches *caches)
+{
+ if (!caches) {
+ return;
+ }
+
+ address_space_cache_destroy(&caches->desc);
+ address_space_cache_destroy(&caches->avail);
+ address_space_cache_destroy(&caches->used);
+ g_free(caches);
+}
+
+static void virtio_init_region_cache(VirtIODevice *vdev, int n)
+{
+ VirtQueue *vq = &vdev->vq[n];
+ VRingMemoryRegionCaches *old = vq->vring.caches;
+ VRingMemoryRegionCaches *new;
+ hwaddr addr, size;
+ int event_size;
+
+ event_size = virtio_vdev_has_feature(vq->vdev, VIRTIO_RING_F_EVENT_IDX) ? 2 : 0;
+
+ addr = vq->vring.desc;
+ if (!addr) {
+ return;
+ }
+ new = g_new0(VRingMemoryRegionCaches, 1);
+ size = virtio_queue_get_desc_size(vdev, n);
+ address_space_cache_init(&new->desc, vdev->dma_as,
+ addr, size, false);
+
+ size = virtio_queue_get_used_size(vdev, n) + event_size;
+ address_space_cache_init(&new->used, vdev->dma_as,
+ vq->vring.used, size, true);
+
+ size = virtio_queue_get_avail_size(vdev, n) + event_size;
+ address_space_cache_init(&new->avail, vdev->dma_as,
+ vq->vring.avail, size, false);
+
+ atomic_rcu_set(&vq->vring.caches, new);
+ if (old) {
+ call_rcu(old, virtio_free_region_cache, rcu);
+ }
+}
+
/* virt queue functions */
void virtio_queue_update_rings(VirtIODevice *vdev, int n)
{
@@ -117,6 +170,7 @@ void virtio_queue_update_rings(VirtIODevice *vdev, int n)
vring->used = vring_align(vring->avail +
offsetof(VRingAvail, ring[vring->num]),
vring->align);
+ virtio_init_region_cache(vdev, n);
}
static void vring_desc_read(VirtIODevice *vdev, VRingDesc *desc,
@@ -1264,6 +1318,7 @@ void virtio_queue_set_rings(VirtIODevice *vdev, int n, hwaddr desc,
vdev->vq[n].vring.desc = desc;
vdev->vq[n].vring.avail = avail;
vdev->vq[n].vring.used = used;
+ virtio_init_region_cache(vdev, n);
}
void virtio_queue_set_num(VirtIODevice *vdev, int n, int num)
@@ -1982,9 +2037,6 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
void virtio_cleanup(VirtIODevice *vdev)
{
qemu_del_vm_change_state_handler(vdev->vmstate);
- g_free(vdev->config);
- g_free(vdev->vq);
- g_free(vdev->vector_queues);
}
static void virtio_vmstate_change(void *opaque, int running, RunState state)
@@ -2245,6 +2297,19 @@ void GCC_FMT_ATTR(2, 3) virtio_error(VirtIODevice *vdev, const char *fmt, ...)
}
}
+static void virtio_memory_listener_commit(MemoryListener *listener)
+{
+ VirtIODevice *vdev = container_of(listener, VirtIODevice, listener);
+ int i;
+
+ for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
+ if (vdev->vq[i].vring.num == 0) {
+ break;
+ }
+ virtio_init_region_cache(vdev, i);
+ }
+}
+
static void virtio_device_realize(DeviceState *dev, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -2267,6 +2332,9 @@ static void virtio_device_realize(DeviceState *dev, Error **errp)
error_propagate(errp, err);
return;
}
+
+ vdev->listener.commit = virtio_memory_listener_commit;
+ memory_listener_register(&vdev->listener, vdev->dma_as);
}
static void virtio_device_unrealize(DeviceState *dev, Error **errp)
@@ -2289,6 +2357,36 @@ static void virtio_device_unrealize(DeviceState *dev, Error **errp)
vdev->bus_name = NULL;
}
+static void virtio_device_free_virtqueues(VirtIODevice *vdev)
+{
+ int i;
+ if (!vdev->vq) {
+ return;
+ }
+
+ for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
+ VRingMemoryRegionCaches *caches;
+ if (vdev->vq[i].vring.num == 0) {
+ break;
+ }
+ caches = atomic_read(&vdev->vq[i].vring.caches);
+ atomic_set(&vdev->vq[i].vring.caches, NULL);
+ virtio_free_region_cache(caches);
+ }
+ g_free(vdev->vq);
+}
+
+static void virtio_device_instance_finalize(Object *obj)
+{
+ VirtIODevice *vdev = VIRTIO_DEVICE(obj);
+
+ memory_listener_unregister(&vdev->listener);
+ virtio_device_free_virtqueues(vdev);
+
+ g_free(vdev->config);
+ g_free(vdev->vector_queues);
+}
+
static Property virtio_properties[] = {
DEFINE_VIRTIO_COMMON_FEATURES(VirtIODevice, host_features),
DEFINE_PROP_END_OF_LIST(),
@@ -2415,6 +2513,7 @@ static const TypeInfo virtio_device_info = {
.parent = TYPE_DEVICE,
.instance_size = sizeof(VirtIODevice),
.class_init = virtio_device_class_init,
+ .instance_finalize = virtio_device_instance_finalize,
.abstract = true,
.class_size = sizeof(VirtioDeviceClass),
};
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 525da24..f1b2673 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -85,6 +85,7 @@ struct VirtIODevice
uint32_t generation;
int nvectors;
VirtQueue *vq;
+ MemoryListener listener;
uint16_t device_id;
bool vm_running;
bool broken; /* device in invalid state, needs reset */
--
1.8.3.1
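The caching follows the standard RCU publish/read/reclaim pattern: virtio_init_region_cache() builds a fresh VRingMemoryRegionCaches, publishes it with atomic_rcu_set(), and hands the old one to call_rcu() so it is freed only after all current readers have left their critical sections; readers must fetch the pointer with atomic_rcu_read() inside rcu_read_lock()/rcu_read_unlock(). Stripped down to the two sides:

    /* Writer (virtio_init_region_cache above): publish new, defer-free old. */
    atomic_rcu_set(&vq->vring.caches, new);
    if (old) {
        call_rcu(old, virtio_free_region_cache, rcu);   /* runs after readers drain */
    }

    /* Reader (patches 7 and 9): only valid inside an RCU critical section. */
    rcu_read_lock();
    caches = atomic_rcu_read(&vq->vring.caches);
    /* ... use caches->desc, caches->avail, caches->used ... */
    rcu_read_unlock();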
* [Qemu-devel] [PATCH 7/9] virtio: use VRingMemoryRegionCaches for descriptor ring
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/virtio/virtio.c | 26 ++++++++++++--------------
1 file changed, 12 insertions(+), 14 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 44fe14a..32bf364 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -491,25 +491,24 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
VirtIODevice *vdev = vq->vdev;
unsigned int max, idx;
unsigned int total_bufs, in_total, out_total;
- MemoryRegionCache vring_desc_cache;
+ VRingMemoryRegionCaches *caches;
MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
int64_t len = 0;
int rc;
+ rcu_read_lock();
idx = vq->last_avail_idx;
total_bufs = in_total = out_total = 0;
max = vq->vring.num;
- len = address_space_cache_init(&vring_desc_cache, vdev->dma_as,
- vq->vring.desc, max * sizeof(VRingDesc),
- false);
- if (len < max * sizeof(VRingDesc)) {
+ caches = atomic_rcu_read(&vq->vring.caches);
+ if (caches->desc.len < max * sizeof(VRingDesc)) {
virtio_error(vdev, "Cannot map descriptor ring");
goto err;
}
while ((rc = virtqueue_num_heads(vq, idx)) > 0) {
- MemoryRegionCache *desc_cache = &vring_desc_cache;
+ MemoryRegionCache *desc_cache = &caches->desc;
unsigned int num_bufs;
VRingDesc desc;
unsigned int i;
@@ -586,13 +585,13 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
done:
address_space_cache_destroy(&indirect_desc_cache);
- address_space_cache_destroy(&vring_desc_cache);
if (in_bytes) {
*in_bytes = in_total;
}
if (out_bytes) {
*out_bytes = out_total;
}
+ rcu_read_unlock();
return;
err:
@@ -726,7 +725,7 @@ static void *virtqueue_alloc_element(size_t sz, unsigned out_num, unsigned in_nu
void *virtqueue_pop(VirtQueue *vq, size_t sz)
{
unsigned int i, head, max;
- MemoryRegionCache vring_desc_cache;
+ VRingMemoryRegionCaches *caches;
MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
MemoryRegionCache *desc_cache;
int64_t len;
@@ -768,15 +767,14 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
i = head;
- len = address_space_cache_init(&vring_desc_cache, vdev->dma_as,
- vq->vring.desc, max * sizeof(VRingDesc),
- false);
- if (len < max * sizeof(VRingDesc)) {
+ rcu_read_lock();
+ caches = atomic_rcu_read(&vq->vring.caches);
+ if (caches->desc.len < max * sizeof(VRingDesc)) {
virtio_error(vdev, "Cannot map descriptor ring");
goto done;
}
- desc_cache = &vring_desc_cache;
+ desc_cache = &caches->desc;
vring_desc_read(vdev, &desc, desc_cache, i);
if (desc.flags & VRING_DESC_F_INDIRECT) {
if (desc.len % sizeof(VRingDesc)) {
@@ -850,7 +848,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
trace_virtqueue_pop(vq, elem, elem->in_num, elem->out_num);
done:
address_space_cache_destroy(&indirect_desc_cache);
- address_space_cache_destroy(&vring_desc_cache);
+ rcu_read_unlock();
return elem;
--
1.8.3.1
* [Qemu-devel] [PATCH 8/9] virtio: check for vring setup in virtio_queue_update_used_idx
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
If the vring has not been set up, there is no need to call vring_used_idx
at all (as is effectively already the case when the caller is virtio_load).
This is harmless for now, but it will become a problem once vring_used_idx
reads through a MemoryRegionCache that has not been set up.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/virtio/virtio.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 32bf364..23fac1e 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -2154,7 +2154,9 @@ void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
{
- vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
+ if (vdev->vq[n].vring.desc) {
+ vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
+ }
}
void virtio_queue_invalidate_signalled_used(VirtIODevice *vdev, int n)
--
1.8.3.1
* [Qemu-devel] [PATCH 9/9] virtio: use VRingMemoryRegionCaches for avail and used rings
From: Paolo Bonzini @ 2017-01-27 15:40 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
The virtio-net change is necessary because virtio-net calls virtqueue_fill
and virtqueue_flush directly instead of the more convenient virtqueue_push,
which now enters the RCU critical section on its own.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/net/virtio-net.c | 14 +++++-
hw/virtio/virtio.c | 132 ++++++++++++++++++++++++++++++++++++++--------------
2 files changed, 109 insertions(+), 37 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 7b3ad4a..6f0e397 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1130,7 +1130,8 @@ static int receive_filter(VirtIONet *n, const uint8_t *buf, int size)
return 0;
}
-static ssize_t virtio_net_receive(NetClientState *nc, const uint8_t *buf, size_t size)
+static ssize_t virtio_net_receive_rcu(NetClientState *nc, const uint8_t *buf,
+ size_t size)
{
VirtIONet *n = qemu_get_nic_opaque(nc);
VirtIONetQueue *q = virtio_net_get_subqueue(nc);
@@ -1233,6 +1234,17 @@ static ssize_t virtio_net_receive(NetClientState *nc, const uint8_t *buf, size_t
return size;
}
+static ssize_t virtio_net_receive(NetClientState *nc, const uint8_t *buf,
+ size_t size)
+{
+ ssize_t r;
+
+ rcu_read_lock();
+ r = virtio_net_receive_rcu(nc, buf, size);
+ rcu_read_unlock();
+ return r;
+}
+
static int32_t virtio_net_flush_tx(VirtIONetQueue *q);
static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 23fac1e..3668803 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -173,6 +173,7 @@ void virtio_queue_update_rings(VirtIODevice *vdev, int n)
virtio_init_region_cache(vdev, n);
}
+/* Called within rcu_read_lock(). */
static void vring_desc_read(VirtIODevice *vdev, VRingDesc *desc,
MemoryRegionCache *cache, int i)
{
@@ -184,88 +185,110 @@ static void vring_desc_read(VirtIODevice *vdev, VRingDesc *desc,
virtio_tswap16s(vdev, &desc->next);
}
+/* Called within rcu_read_lock(). */
static inline uint16_t vring_avail_flags(VirtQueue *vq)
{
- hwaddr pa;
- pa = vq->vring.avail + offsetof(VRingAvail, flags);
- return virtio_lduw_phys(vq->vdev, pa);
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
+ hwaddr pa = offsetof(VRingAvail, flags);
+ return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
}
+/* Called within rcu_read_lock(). */
static inline uint16_t vring_avail_idx(VirtQueue *vq)
{
- hwaddr pa;
- pa = vq->vring.avail + offsetof(VRingAvail, idx);
- vq->shadow_avail_idx = virtio_lduw_phys(vq->vdev, pa);
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
+ hwaddr pa = offsetof(VRingAvail, idx);
+ vq->shadow_avail_idx = virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
return vq->shadow_avail_idx;
}
+/* Called within rcu_read_lock(). */
static inline uint16_t vring_avail_ring(VirtQueue *vq, int i)
{
- hwaddr pa;
- pa = vq->vring.avail + offsetof(VRingAvail, ring[i]);
- return virtio_lduw_phys(vq->vdev, pa);
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
+ hwaddr pa = offsetof(VRingAvail, ring[i]);
+ return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
}
+/* Called within rcu_read_lock(). */
static inline uint16_t vring_get_used_event(VirtQueue *vq)
{
return vring_avail_ring(vq, vq->vring.num);
}
+/* Called within rcu_read_lock(). */
static inline void vring_used_write(VirtQueue *vq, VRingUsedElem *uelem,
int i)
{
- hwaddr pa;
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
+ hwaddr pa = offsetof(VRingUsed, ring[i]);
virtio_tswap32s(vq->vdev, &uelem->id);
virtio_tswap32s(vq->vdev, &uelem->len);
- pa = vq->vring.used + offsetof(VRingUsed, ring[i]);
- address_space_write(vq->vdev->dma_as, pa, MEMTXATTRS_UNSPECIFIED,
- (void *)uelem, sizeof(VRingUsedElem));
+ address_space_write_cached(&caches->used, pa, uelem, sizeof(VRingUsedElem));
+ address_space_cache_invalidate(&caches->used, pa, sizeof(VRingUsedElem));
}
+/* Called within rcu_read_lock(). */
static uint16_t vring_used_idx(VirtQueue *vq)
{
- hwaddr pa;
- pa = vq->vring.used + offsetof(VRingUsed, idx);
- return virtio_lduw_phys(vq->vdev, pa);
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
+ hwaddr pa = offsetof(VRingUsed, idx);
+ return virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
}
+/* Called within rcu_read_lock(). */
static inline void vring_used_idx_set(VirtQueue *vq, uint16_t val)
{
- hwaddr pa;
- pa = vq->vring.used + offsetof(VRingUsed, idx);
- virtio_stw_phys(vq->vdev, pa, val);
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
+ hwaddr pa = offsetof(VRingUsed, idx);
+ virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
+ address_space_cache_invalidate(&caches->used, pa, sizeof(val));
vq->used_idx = val;
}
+/* Called within rcu_read_lock(). */
static inline void vring_used_flags_set_bit(VirtQueue *vq, int mask)
{
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
VirtIODevice *vdev = vq->vdev;
- hwaddr pa;
- pa = vq->vring.used + offsetof(VRingUsed, flags);
- virtio_stw_phys(vdev, pa, virtio_lduw_phys(vdev, pa) | mask);
+ hwaddr pa = offsetof(VRingUsed, flags);
+ uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
+
+ virtio_stw_phys_cached(vdev, &caches->used, pa, flags | mask);
+ address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
}
+/* Called within rcu_read_lock(). */
static inline void vring_used_flags_unset_bit(VirtQueue *vq, int mask)
{
+ VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
VirtIODevice *vdev = vq->vdev;
- hwaddr pa;
- pa = vq->vring.used + offsetof(VRingUsed, flags);
- virtio_stw_phys(vdev, pa, virtio_lduw_phys(vdev, pa) & ~mask);
+ hwaddr pa = offsetof(VRingUsed, flags);
+ uint16_t flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
+
+ virtio_stw_phys_cached(vdev, &caches->used, pa, flags & ~mask);
+ address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
}
+/* Called within rcu_read_lock(). */
static inline void vring_set_avail_event(VirtQueue *vq, uint16_t val)
{
+ VRingMemoryRegionCaches *caches;
hwaddr pa;
if (!vq->notification) {
return;
}
- pa = vq->vring.used + offsetof(VRingUsed, ring[vq->vring.num]);
- virtio_stw_phys(vq->vdev, pa, val);
+
+ caches = atomic_rcu_read(&vq->vring.caches);
+ pa = offsetof(VRingUsed, ring[vq->vring.num]);
+ virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
}
void virtio_queue_set_notification(VirtQueue *vq, int enable)
{
vq->notification = enable;
+
+ rcu_read_lock();
if (virtio_vdev_has_feature(vq->vdev, VIRTIO_RING_F_EVENT_IDX)) {
vring_set_avail_event(vq, vring_avail_idx(vq));
} else if (enable) {
@@ -277,6 +300,7 @@ void virtio_queue_set_notification(VirtQueue *vq, int enable)
/* Expose avail event/used flags before caller checks the avail idx. */
smp_mb();
}
+ rcu_read_unlock();
}
int virtio_queue_ready(VirtQueue *vq)
@@ -285,8 +309,9 @@ int virtio_queue_ready(VirtQueue *vq)
}
/* Fetch avail_idx from VQ memory only when we really need to know if
- * guest has added some buffers. */
-int virtio_queue_empty(VirtQueue *vq)
+ * guest has added some buffers.
+ * Called within rcu_read_lock(). */
+static int virtio_queue_empty_rcu(VirtQueue *vq)
{
if (vq->shadow_avail_idx != vq->last_avail_idx) {
return 0;
@@ -295,6 +320,20 @@ int virtio_queue_empty(VirtQueue *vq)
return vring_avail_idx(vq) == vq->last_avail_idx;
}
+int virtio_queue_empty(VirtQueue *vq)
+{
+ bool empty;
+
+ if (vq->shadow_avail_idx != vq->last_avail_idx) {
+ return 0;
+ }
+
+ rcu_read_lock();
+ empty = vring_avail_idx(vq) == vq->last_avail_idx;
+ rcu_read_unlock();
+ return empty;
+}
+
static void virtqueue_unmap_sg(VirtQueue *vq, const VirtQueueElement *elem,
unsigned int len)
{
@@ -373,6 +412,7 @@ bool virtqueue_rewind(VirtQueue *vq, unsigned int num)
return true;
}
+/* Called within rcu_read_lock(). */
void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
unsigned int len, unsigned int idx)
{
@@ -393,6 +433,7 @@ void virtqueue_fill(VirtQueue *vq, const VirtQueueElement *elem,
vring_used_write(vq, &uelem, idx);
}
+/* Called within rcu_read_lock(). */
void virtqueue_flush(VirtQueue *vq, unsigned int count)
{
uint16_t old, new;
@@ -416,10 +457,13 @@ void virtqueue_flush(VirtQueue *vq, unsigned int count)
void virtqueue_push(VirtQueue *vq, const VirtQueueElement *elem,
unsigned int len)
{
+ rcu_read_lock();
virtqueue_fill(vq, elem, len, 0);
virtqueue_flush(vq, 1);
+ rcu_read_unlock();
}
+/* Called within rcu_read_lock(). */
static int virtqueue_num_heads(VirtQueue *vq, unsigned int idx)
{
uint16_t num_heads = vring_avail_idx(vq) - idx;
@@ -439,6 +483,7 @@ static int virtqueue_num_heads(VirtQueue *vq, unsigned int idx)
return num_heads;
}
+/* Called within rcu_read_lock(). */
static bool virtqueue_get_head(VirtQueue *vq, unsigned int idx,
unsigned int *head)
{
@@ -740,8 +785,9 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
if (unlikely(vdev->broken)) {
return NULL;
}
- if (virtio_queue_empty(vq)) {
- return NULL;
+ rcu_read_lock();
+ if (virtio_queue_empty_rcu(vq)) {
+ goto done;
}
/* Needed after virtio_queue_empty(), see comment in
* virtqueue_num_heads(). */
@@ -754,11 +800,11 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
if (vq->inuse >= vq->vring.num) {
virtio_error(vdev, "Virtqueue size exceeded");
- return NULL;
+ goto done;
}
if (!virtqueue_get_head(vq, vq->last_avail_idx++, &head)) {
- return NULL;
+ goto done;
}
if (virtio_vdev_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX)) {
@@ -767,7 +813,6 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
i = head;
- rcu_read_lock();
caches = atomic_rcu_read(&vq->vring.caches);
if (caches->desc.len < max * sizeof(VRingDesc)) {
virtio_error(vdev, "Cannot map descriptor ring");
@@ -1481,6 +1526,7 @@ static void virtio_set_isr(VirtIODevice *vdev, int value)
}
}
+/* Called within rcu_read_lock(). */
static bool virtio_should_notify(VirtIODevice *vdev, VirtQueue *vq)
{
uint16_t old, new;
@@ -1506,7 +1552,12 @@ static bool virtio_should_notify(VirtIODevice *vdev, VirtQueue *vq)
void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq)
{
- if (!virtio_should_notify(vdev, vq)) {
+ bool should_notify;
+ rcu_read_lock();
+ should_notify = virtio_should_notify(vdev, vq);
+ rcu_read_unlock();
+
+ if (!should_notify) {
return;
}
@@ -1533,7 +1584,12 @@ void virtio_notify_irqfd(VirtIODevice *vdev, VirtQueue *vq)
void virtio_notify(VirtIODevice *vdev, VirtQueue *vq)
{
- if (!virtio_should_notify(vdev, vq)) {
+ bool should_notify;
+ rcu_read_lock();
+ should_notify = virtio_should_notify(vdev, vq);
+ rcu_read_unlock();
+
+ if (!should_notify) {
return;
}
@@ -1994,6 +2050,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
}
}
+ rcu_read_lock();
for (i = 0; i < num; i++) {
if (vdev->vq[i].vring.desc) {
uint16_t nheads;
@@ -2028,6 +2085,7 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
}
}
}
+ rcu_read_unlock();
return 0;
}
@@ -2154,9 +2212,11 @@ void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
{
+ rcu_read_lock();
if (vdev->vq[n].vring.desc) {
vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
}
+ rcu_read_unlock();
}
void virtio_queue_invalidate_signalled_used(VirtIODevice *vdev, int n)
--
1.8.3.1
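After this patch virtqueue_push() enters the RCU critical section itself around virtqueue_fill() and virtqueue_flush(), so devices that use it need no changes; virtio-net calls fill and flush separately from its receive path, which is why virtio_net_receive() gains an explicit rcu_read_lock()/rcu_read_unlock() wrapper. The two calling conventions, in sketch form:

    /* Most devices: push() now takes care of the RCU read lock. */
    virtqueue_push(vq, elem, len);

    /* virtio-net style: fill/flush are called directly, so the caller must
     * hold the RCU read lock across both (as virtio_net_receive() now does
     * around virtio_net_receive_rcu()). */
    rcu_read_lock();
    virtqueue_fill(vq, elem, len, 0);
    virtqueue_flush(vq, 1);
    rcu_read_unlock();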
* Re: [Qemu-devel] [PATCH v3 0/9] virtio: use MemoryRegionCache for descriptors and rings
From: Paolo Bonzini @ 2017-02-07 15:22 UTC (permalink / raw)
To: qemu-devel; +Cc: mst
On 27/01/2017 16:40, Paolo Bonzini wrote:
> A few fixes caught by "make check" (yes, brown paper bag).
>
> Paolo
>
> v2->v3: patch 1 committed already (replaced by new patch 1)
> fix error handling in patch 3
> fix freeing uninitialized VRingMemoryRegionCache (patch 7)
> new patch 8
>
> Paolo Bonzini (9):
> memory: make memory_listener_unregister idempotent
> virtio: add virtio_*_phys_cached
> virtio: use address_space_map/unmap to access descriptors
> exec: make address_space_cache_destroy idempotent
> virtio: use MemoryRegionCache to access descriptors
> virtio: add MemoryListener to cache ring translations
> virtio: use VRingMemoryRegionCaches for descriptor ring
> virtio: check for ring setup in virtio_queue_update_used_idx
> virtio: use VRingMemoryRegionCaches for avail and used rings
>
> exec.c | 1 +
> hw/net/virtio-net.c | 14 +-
> hw/virtio/virtio.c | 338 ++++++++++++++++++++++++++++++--------
> include/exec/memory.h | 2 +
> include/hw/virtio/virtio-access.h | 52 ++++++
> include/hw/virtio/virtio.h | 1 +
> memory.c | 5 +
> 7 files changed, 345 insertions(+), 68 deletions(-)
>
Ping?
Paolo
* Re: [Qemu-devel] [PATCH 1/9] memory: make memory_listener_unregister idempotent
From: Philippe Mathieu-Daudé @ 2017-02-09 22:54 UTC (permalink / raw)
To: Paolo Bonzini, qemu-devel; +Cc: mst
On 01/27/2017 12:40 PM, Paolo Bonzini wrote:
> Make it easy to unregister a MemoryListener without tracking whether it
> had been registered before.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> v3: new
>
> memory.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/memory.c b/memory.c
> index 2bfc37f..8fafd4c 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -2371,8 +2371,13 @@ void memory_listener_register(MemoryListener *listener, AddressSpace *as)
>
> void memory_listener_unregister(MemoryListener *listener)
> {
> + if (!listener->address_space) {
> + return;
> + }
> +
> QTAILQ_REMOVE(&memory_listeners, listener, link);
> QTAILQ_REMOVE(&listener->address_space->listeners, listener, link_as);
> + listener->address_space = NULL;
> }
>
> void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
>
* Re: [Qemu-devel] [PATCH 4/9] exec: make address_space_cache_destroy idempotent
From: Philippe Mathieu-Daudé @ 2017-02-09 22:57 UTC (permalink / raw)
To: Paolo Bonzini, qemu-devel; +Cc: mst
On 01/27/2017 12:40 PM, Paolo Bonzini wrote:
> Clear cache->mr so that address_space_cache_destroy does nothing
> the second time it is called.
>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> exec.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/exec.c b/exec.c
> index f2bed92..5de15cf 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3165,6 +3165,7 @@ void address_space_cache_destroy(MemoryRegionCache *cache)
> xen_invalidate_map_cache_entry(cache->ptr);
> }
> memory_region_unref(cache->mr);
> + cache->mr = NULL;
> }
>
> /* Called from RCU critical section. This function has the same
>
* Re: [Qemu-devel] [PATCH 8/9] virtio: check for vring setup in virtio_queue_update_used_idx
From: Philippe Mathieu-Daudé @ 2017-02-09 22:59 UTC (permalink / raw)
To: Paolo Bonzini, qemu-devel; +Cc: mst
On 01/27/2017 12:40 PM, Paolo Bonzini wrote:
> If the vring has not been set up, it is not necessary for vring_used_idx
> to do anything (as is already the case when the caller is virtio_load).
> This is harmless for now, but it will be a problem when the
> MemoryRegionCache has not been set up.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> hw/virtio/virtio.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 32bf364..23fac1e 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -2154,7 +2154,9 @@ void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
>
> void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
> {
> - vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
> + if (vdev->vq[n].vring.desc) {
> + vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
> + }
> }
>
> void virtio_queue_invalidate_signalled_used(VirtIODevice *vdev, int n)
>