public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* [KVM PATCH v5 0/2] iosignalfd
@ 2009-06-03 20:17 Gregory Haskins
  2009-06-03 20:17 ` [KVM PATCH v5 1/2] kvm: make io_bus interface more robust Gregory Haskins
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Gregory Haskins @ 2009-06-03 20:17 UTC (permalink / raw)
  To: kvm; +Cc: linux-kernel, avi, davidel, mtosatti, paulmck, markmc

(Applies to kvm.git/master:25deed73)

This is v5 of the series.  For more details, please see the header to
patch 2/2.

This series has been tested against the kvm-eventfd unit test, and
appears to be functioning properly.  You can download this test here:

ftp://ftp.novell.com/dev/ghaskins/kvm-eventfd.tar.bz2

This series is ready to be considered for inclusion, pending any further
review comments.

[
   Changelog:

      v5:
           *) Removed "cookie" field, which was a misunderstanding on my
              part on what Avi wanted for a data-match feature
	   *) Added a new "trigger" data-match feature which I think is
              much closer to what we need.
	   *) We retain the dev_count field in the io_bus infrastructure
	      and instead back-fill the array on removal.
	   *) Various minor cleanups
	   *) Rebased to kvm.git/master:25deed73

      v4:
           *) Fixed a bug in the original 2/4 where the PIT failure case
              would potentially leave the io_bus components registered.
           *) Condensed the v3 2/4 and 3/4 into one patch (2/2) since
              the patches became interdependent with the fix described above
           *) Rebased to kvm.git/master:74dfca0a

      v3:
           *) fixed patch 2/4 to handle error cases instead of BUG_ON
           *) implemented same HAVE_EVENTFD protection mechanism as
              irqfd to prevent compilation errors on unsupported arches
           *) completed testing
           *) rebased to kvm.git/master:7391a6d5

      v2:
           *) added optional data-matching capability (via cookie field)
           *) changed name from iofd to iosignalfd
           *) added io_bus unregister function
           *) implemented deassign feature

      v1:
           *) original release (integrated into irqfd v7 series as "iofd")
]

---

Gregory Haskins (2):
      kvm: add iosignalfd support
      kvm: make io_bus interface more robust


 arch/x86/kvm/i8254.c      |   22 +++
 arch/x86/kvm/i8259.c      |    9 +
 arch/x86/kvm/x86.c        |    1 
 include/linux/kvm.h       |   15 ++
 include/linux/kvm_host.h  |   16 ++
 virt/kvm/coalesced_mmio.c |    8 +
 virt/kvm/eventfd.c        |  356 +++++++++++++++++++++++++++++++++++++++++++++
 virt/kvm/ioapic.c         |    9 +
 virt/kvm/kvm_main.c       |   41 +++++
 9 files changed, 462 insertions(+), 15 deletions(-)

-- 
Signature

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [KVM PATCH v5 1/2] kvm: make io_bus interface more robust
  2009-06-03 20:17 [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
@ 2009-06-03 20:17 ` Gregory Haskins
  2009-06-03 20:17 ` [KVM PATCH v5 2/2] kvm: add iosignalfd support Gregory Haskins
  2009-06-03 21:37 ` [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
  2 siblings, 0 replies; 10+ messages in thread
From: Gregory Haskins @ 2009-06-03 20:17 UTC (permalink / raw)
  To: kvm; +Cc: linux-kernel, avi, davidel, mtosatti, paulmck, markmc

Today kvm_io_bus_register_dev() returns void and will internally BUG_ON if it
fails.  We want to create dynamic MMIO/PIO entries driven from userspace later
in the series, so we need to enhance the code to be more robust with the
following changes:

   1) Add a return value to the registration function
   2) Fix up all the callsites to check the return code, handle any
      failures, and percolate the error up to the caller.
   3) Add an unregister function that collapses holes in the array

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
---

 arch/x86/kvm/i8254.c      |   22 ++++++++++++++++++++--
 arch/x86/kvm/i8259.c      |    9 ++++++++-
 include/linux/kvm_host.h  |    6 ++++--
 virt/kvm/coalesced_mmio.c |    8 ++++++--
 virt/kvm/ioapic.c         |    9 +++++++--
 virt/kvm/kvm_main.c       |   30 ++++++++++++++++++++++++++++--
 6 files changed, 73 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index f91b0e3..f01eabe 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -586,6 +586,7 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
 {
 	struct kvm_pit *pit;
 	struct kvm_kpit_state *pit_state;
+	int ret;
 
 	pit = kzalloc(sizeof(struct kvm_pit), GFP_KERNEL);
 	if (!pit)
@@ -620,14 +621,31 @@ struct kvm_pit *kvm_create_pit(struct kvm *kvm, u32 flags)
 	kvm_register_irq_mask_notifier(kvm, 0, &pit->mask_notifier);
 
 	kvm_iodevice_init(&pit->dev, &pit_dev_ops);
-	kvm_io_bus_register_dev(&kvm->pio_bus, &pit->dev);
+	ret = kvm_io_bus_register_dev(&kvm->pio_bus, &pit->dev);
+	if (ret < 0)
+		goto fail;
 
 	if (flags & KVM_PIT_SPEAKER_DUMMY) {
 		kvm_iodevice_init(&pit->speaker_dev, &speaker_dev_ops);
-		kvm_io_bus_register_dev(&kvm->pio_bus, &pit->speaker_dev);
+		ret = kvm_io_bus_register_dev(&kvm->pio_bus,
+					      &pit->speaker_dev);
+		if (ret < 0)
+			goto fail;
 	}
 
 	return pit;
+
+fail:
+	if (flags & KVM_PIT_SPEAKER_DUMMY)
+		kvm_io_bus_unregister_dev(&kvm->pio_bus, &pit->speaker_dev);
+
+	kvm_io_bus_unregister_dev(&kvm->pio_bus, &pit->dev);
+
+	if (pit->irq_source_id >= 0)
+		kvm_free_irq_source_id(kvm, pit->irq_source_id);
+
+	kfree(pit);
+	return NULL;
 }
 
 void kvm_free_pit(struct kvm *kvm)
diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index 2520922..af9b8b4 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -530,6 +530,8 @@ static const struct kvm_io_device_ops picdev_ops = {
 struct kvm_pic *kvm_create_pic(struct kvm *kvm)
 {
 	struct kvm_pic *s;
+	int ret;
+
 	s = kzalloc(sizeof(struct kvm_pic), GFP_KERNEL);
 	if (!s)
 		return NULL;
@@ -546,6 +548,11 @@ struct kvm_pic *kvm_create_pic(struct kvm *kvm)
 	 * Initialize PIO device
 	 */
 	kvm_iodevice_init(&s->dev, &picdev_ops);
-	kvm_io_bus_register_dev(&kvm->pio_bus, &s->dev);
+	ret = kvm_io_bus_register_dev(&kvm->pio_bus, &s->dev);
+	if (ret < 0) {
+		kfree(s);
+		return NULL;
+	}
+
 	return s;
 }
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index bcb94eb..216fe07 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -61,8 +61,10 @@ void kvm_io_bus_init(struct kvm_io_bus *bus);
 void kvm_io_bus_destroy(struct kvm_io_bus *bus);
 struct kvm_io_device *kvm_io_bus_find_dev(struct kvm_io_bus *bus,
 					  gpa_t addr, int len, int is_write);
-void kvm_io_bus_register_dev(struct kvm_io_bus *bus,
-			     struct kvm_io_device *dev);
+int kvm_io_bus_register_dev(struct kvm_io_bus *bus,
+			    struct kvm_io_device *dev);
+void kvm_io_bus_unregister_dev(struct kvm_io_bus *bus,
+			    struct kvm_io_device *dev);
 
 struct kvm_vcpu {
 	struct kvm *kvm;
diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index c4c7ec2..282fcde 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -97,6 +97,7 @@ static const struct kvm_io_device_ops coalesced_mmio_ops = {
 int kvm_coalesced_mmio_init(struct kvm *kvm)
 {
 	struct kvm_coalesced_mmio_dev *dev;
+	int ret;
 
 	dev = kzalloc(sizeof(struct kvm_coalesced_mmio_dev), GFP_KERNEL);
 	if (!dev)
@@ -104,9 +105,12 @@ int kvm_coalesced_mmio_init(struct kvm *kvm)
 	kvm_iodevice_init(&dev->dev, &coalesced_mmio_ops);
 	dev->kvm = kvm;
 	kvm->coalesced_mmio_dev = dev;
-	kvm_io_bus_register_dev(&kvm->mmio_bus, &dev->dev);
 
-	return 0;
+	ret = kvm_io_bus_register_dev(&kvm->mmio_bus, &dev->dev);
+	if (ret < 0)
+		kfree(dev);
+
+	return ret;
 }
 
 int kvm_vm_ioctl_register_coalesced_mmio(struct kvm *kvm,
diff --git a/virt/kvm/ioapic.c b/virt/kvm/ioapic.c
index 6b00433..4a37ea1 100644
--- a/virt/kvm/ioapic.c
+++ b/virt/kvm/ioapic.c
@@ -328,6 +328,7 @@ static const struct kvm_io_device_ops ioapic_mmio_ops = {
 int kvm_ioapic_init(struct kvm *kvm)
 {
 	struct kvm_ioapic *ioapic;
+	int ret;
 
 	ioapic = kzalloc(sizeof(struct kvm_ioapic), GFP_KERNEL);
 	if (!ioapic)
@@ -336,7 +337,11 @@ int kvm_ioapic_init(struct kvm *kvm)
 	kvm_ioapic_reset(ioapic);
 	kvm_iodevice_init(&ioapic->dev, &ioapic_mmio_ops);
 	ioapic->kvm = kvm;
-	kvm_io_bus_register_dev(&kvm->mmio_bus, &ioapic->dev);
-	return 0;
+
+	ret = kvm_io_bus_register_dev(&kvm->mmio_bus, &ioapic->dev);
+	if (ret < 0)
+		kfree(ioapic);
+
+	return ret;
 }
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 902fed9..179c650 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2457,11 +2457,37 @@ struct kvm_io_device *kvm_io_bus_find_dev(struct kvm_io_bus *bus,
 	return NULL;
 }
 
-void kvm_io_bus_register_dev(struct kvm_io_bus *bus, struct kvm_io_device *dev)
+/* assumes kvm->lock held */
+int kvm_io_bus_register_dev(struct kvm_io_bus *bus, struct kvm_io_device *dev)
 {
-	BUG_ON(bus->dev_count > (NR_IOBUS_DEVS-1));
+	if (bus->dev_count > (NR_IOBUS_DEVS-1))
+		return -ENOSPC;
 
 	bus->devs[bus->dev_count++] = dev;
+
+	return 0;
+}
+
+/* assumes kvm->lock held */
+void kvm_io_bus_unregister_dev(struct kvm_io_bus *bus,
+			       struct kvm_io_device *dev)
+{
+	int i;
+
+	for (i = 0; i < bus->dev_count; i++) {
+
+		if (bus->devs[i] == dev) {
+			int j;
+
+			/* backfill the hole */
+			for (j = i; j < bus->dev_count-1; j++)
+				bus->devs[j] = bus->devs[j+1];
+
+			bus->dev_count--;
+
+			break;
+		}
+	}
 }
 
 static struct notifier_block kvm_cpu_notifier = {


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [KVM PATCH v5 2/2] kvm: add iosignalfd support
  2009-06-03 20:17 [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
  2009-06-03 20:17 ` [KVM PATCH v5 1/2] kvm: make io_bus interface more robust Gregory Haskins
@ 2009-06-03 20:17 ` Gregory Haskins
  2009-06-03 20:37   ` Gregory Haskins
                     ` (2 more replies)
  2009-06-03 21:37 ` [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
  2 siblings, 3 replies; 10+ messages in thread
From: Gregory Haskins @ 2009-06-03 20:17 UTC (permalink / raw)
  To: kvm; +Cc: linux-kernel, avi, davidel, mtosatti, paulmck, markmc

iosignalfd is a mechanism to register PIO/MMIO regions to trigger an eventfd
signal when written to by a guest.  Host userspace can register any arbitrary
IO address with a corresponding eventfd and then pass the eventfd to a
specific end-point of interest for handling.

Normal IO requires a blocking round-trip since the operation may cause
side-effects in the emulated model or may return data to the caller.
Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM
"heavy-weight" exit back to userspace, and is ultimately serviced by qemu's
device model synchronously before returning control back to the vcpu.

However, there is a subclass of IO which acts purely as a trigger for
other IO (such as to kick off an out-of-band DMA request, etc).  For these
patterns, the synchronous call is particularly expensive since we really
only want our notification transmitted asynchronously and to
return as quickly as possible.  All the synchronous infrastructure to ensure
proper data-dependencies are met in the normal IO case is just unnecessary
overhead for signalling.  This adds additional computational load on the
system, as well as latency to the signalling path.

Therefore, we provide a mechanism for registration of an in-kernel trigger
point that allows the VCPU to only require a very brief, lightweight
exit just long enough to signal an eventfd.  This also means that any
clients compatible with the eventfd interface (which includes userspace
and kernelspace equally well) can now register to be notified. The end
result should be a more flexible and higher performance notification API
for the backend KVM hypervisor and peripheral components.

To test this theory, we built a test-harness called "doorbell".  This
module has a function called "doorbell_ring()" which simply increments a
counter for each time the doorbell is signaled.  It supports signalling
from either an eventfd, or an ioctl().

We then wired up two paths to the doorbell: one through QEMU via a
registered IO region and the doorbell ioctl(), and the other directly via
iosignalfd.

You can download this test harness here:

ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2

The measured results are as follows:

qemu-mmio:       110000 iops, 9.09us rtt
iosignalfd-mmio: 200100 iops, 5.00us rtt
iosignalfd-pio:  367300 iops, 2.72us rtt

I didn't measure qemu-pio, because I have to figure out how to register a
PIO region with qemu's device model, and I got lazy.  However, for now we
can extrapolate from the NULLIO-run deltas (+2.56us for MMIO, -350ns for
HC) to estimate:

qemu-pio:      153139 iops, 6.53us rtt
iosignalfd-hc: 412585 iops, 2.37us rtt

These are just for fun, for now, until I can gather more data.

Here is a graph for your convenience:

http://developer.novell.com/wiki/images/7/76/Iofd-chart.png

The conclusion to draw is that we save about 4us by skipping the userspace
hop.

--------------------

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
---

 arch/x86/kvm/x86.c       |    1 
 include/linux/kvm.h      |   15 ++
 include/linux/kvm_host.h |   10 +
 virt/kvm/eventfd.c       |  356 ++++++++++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      |   11 +
 5 files changed, 389 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c1ed485..c96c0e3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1097,6 +1097,7 @@ int kvm_dev_ioctl_check_extension(long ext)
 	case KVM_CAP_IRQ_INJECT_STATUS:
 	case KVM_CAP_ASSIGN_DEV_IRQ:
 	case KVM_CAP_IRQFD:
+	case KVM_CAP_IOSIGNALFD:
 	case KVM_CAP_PIT2:
 		r = 1;
 		break;
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 632a856..53b720d 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -300,6 +300,19 @@ struct kvm_guest_debug {
 	struct kvm_guest_debug_arch arch;
 };
 
+#define KVM_IOSIGNALFD_FLAG_TRIGGER   (1 << 0)
+#define KVM_IOSIGNALFD_FLAG_PIO       (1 << 1)
+#define KVM_IOSIGNALFD_FLAG_DEASSIGN  (1 << 2)
+
+struct kvm_iosignalfd {
+	__u64 trigger;
+	__u64 addr;
+	__u32 len;
+	__u32 fd;
+	__u32 flags;
+	__u8  pad[36];
+};
+
 #define KVM_TRC_SHIFT           16
 /*
  * kvm trace categories
@@ -430,6 +443,7 @@ struct kvm_trace_rec {
 #ifdef __KVM_HAVE_PIT
 #define KVM_CAP_PIT2 33
 #endif
+#define KVM_CAP_IOSIGNALFD 34
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
@@ -537,6 +551,7 @@ struct kvm_irqfd {
 #define KVM_DEASSIGN_DEV_IRQ       _IOW(KVMIO, 0x75, struct kvm_assigned_irq)
 #define KVM_IRQFD                  _IOW(KVMIO, 0x76, struct kvm_irqfd)
 #define KVM_CREATE_PIT2		   _IOW(KVMIO, 0x77, struct kvm_pit_config)
+#define KVM_IOSIGNALFD             _IOW(KVMIO, 0x78, struct kvm_iosignalfd)
 
 /*
  * ioctls for vcpu fds
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 216fe07..b705960 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -138,6 +138,7 @@ struct kvm {
 	struct kvm_io_bus pio_bus;
 #ifdef CONFIG_HAVE_KVM_EVENTFD
 	struct list_head irqfds;
+	struct list_head iosignalfds;
 #endif
 	struct kvm_vm_stat stat;
 	struct kvm_arch arch;
@@ -533,19 +534,24 @@ static inline void kvm_free_irq_routing(struct kvm *kvm) {}
 
 #ifdef CONFIG_HAVE_KVM_EVENTFD
 
-void kvm_irqfd_init(struct kvm *kvm);
+void kvm_eventfd_init(struct kvm *kvm);
 int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
 void kvm_irqfd_release(struct kvm *kvm);
+int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args);
 
 #else
 
-static inline void kvm_irqfd_init(struct kvm *kvm) {}
+static inline void kvm_eventfd_init(struct kvm *kvm) {}
 static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
 {
 	return -EINVAL;
 }
 
 static inline void kvm_irqfd_release(struct kvm *kvm) {}
+static inline int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
+{
+	return -EINVAL;
+}
 
 #endif /* CONFIG_HAVE_KVM_EVENTFD */
 
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index f3f2ea1..77befb3 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -21,6 +21,7 @@
  */
 
 #include <linux/kvm_host.h>
+#include <linux/kvm.h>
 #include <linux/workqueue.h>
 #include <linux/syscalls.h>
 #include <linux/wait.h>
@@ -29,6 +30,8 @@
 #include <linux/list.h>
 #include <linux/eventfd.h>
 
+#include "iodev.h"
+
 /*
  * --------------------------------------------------------------------
  * irqfd: Allows an fd to be used to inject an interrupt to the guest
@@ -208,9 +211,10 @@ kvm_deassign_irqfd(struct kvm *kvm, int fd, int gsi)
 }
 
 void
-kvm_irqfd_init(struct kvm *kvm)
+kvm_eventfd_init(struct kvm *kvm)
 {
 	INIT_LIST_HEAD(&kvm->irqfds);
+	INIT_LIST_HEAD(&kvm->iosignalfds);
 }
 
 int
@@ -233,3 +237,353 @@ kvm_irqfd_release(struct kvm *kvm)
 		irqfd_release(irqfd);
 	}
 }
+
+/*
+ * --------------------------------------------------------------------
+ * iosignalfd: translate a PIO/MMIO memory write to an eventfd signal.
+ *
+ * Userspace can register a PIO/MMIO address with an eventfd for receiving
+ * notification when the memory has been touched.
+ * --------------------------------------------------------------------
+ */
+
+/*
+ * Design note: We create one PIO/MMIO device (iosignalfd_group) which
+ * aggregates one or more iosignalfd_items.  Each item points to exactly one
+ * eventfd, and can be registered to trigger on any write to the group
+ * (wildcard), or to a write of a specific value.  If more than one item is to
+ * be supported, the addr/len ranges must all be identical in the group.  If a
+ * trigger value is to be supported on a particular item, the group range must
+ * be exactly the width of the trigger.
+ */
+
+struct _iosignalfd_item {
+	struct list_head     list;
+	struct file         *file;
+	unsigned char       *match;
+	struct rcu_head      rcu;
+};
+
+struct _iosignalfd_group {
+	struct list_head     list;
+	u64                  addr;
+	size_t               length;
+	struct list_head     items;
+	struct kvm_io_device dev;
+};
+
+static inline struct _iosignalfd_group *to_group(struct kvm_io_device *dev)
+{
+	return container_of(dev, struct _iosignalfd_group, dev);
+}
+
+static inline struct _iosignalfd_item *to_item(struct rcu_head *rhp)
+{
+	return container_of(rhp, struct _iosignalfd_item, rcu);
+}
+
+static int
+iosignalfd_group_in_range(struct kvm_io_device *this, gpa_t addr, int len,
+			  int is_write)
+{
+	struct _iosignalfd_group *p = to_group(this);
+
+	return ((addr >= p->addr && (addr < p->addr + p->length)));
+}
+
+static int
+iosignalfd_is_match(struct _iosignalfd_group *group,
+		    struct _iosignalfd_item *item,
+		    const void *val,
+		    int len)
+{
+	if (!item->match)
+		/* wildcard is a hit */
+		return true;
+
+	if (len != group->length)
+		/* mis-matched length is a miss */
+		return false;
+
+	/* otherwise, we have to actually compare the data */
+	return !memcmp(item->match, val, len) ? true : false;
+}
+
+/*
+ * MMIO/PIO writes trigger an event (if the data matches).
+ *
+ * This is invoked by the io_bus subsystem in response to an address match
+ * against the group.  We must then walk the list of individual items to check
+ * for a match and, if applicable, to send the appropriate signal. If the item
+ * in question does not have a "match" pointer, it is considered a wildcard
+ * and will always generate a signal.  There can be an arbitrary number
+ * of distinct matches or wildcards per group.
+ */
+static void
+iosignalfd_group_write(struct kvm_io_device *this, gpa_t addr, int len,
+		       const void *val)
+{
+	struct _iosignalfd_group *group = to_group(this);
+	struct _iosignalfd_item *item;
+
+	/* FIXME: We should probably use SRCU */
+	rcu_read_lock();
+
+	list_for_each_entry_rcu(item, &group->items, list) {
+		if (iosignalfd_is_match(group, item, val, len))
+			eventfd_signal(item->file, 1);
+	}
+
+	rcu_read_unlock();
+}
+
+/*
+ * MMIO/PIO reads against the group indiscriminately return all zeros
+ */
+static void
+iosignalfd_group_read(struct kvm_io_device *this, gpa_t addr, int len,
+		      void *val)
+{
+	memset(val, 0, len);
+}
+
+static void
+_iosignalfd_group_destructor(struct _iosignalfd_group *group)
+{
+	list_del(&group->list);
+	kfree(group);
+}
+
+static void
+iosignalfd_group_destructor(struct kvm_io_device *this)
+{
+	struct _iosignalfd_group *group = to_group(this);
+
+	_iosignalfd_group_destructor(group);
+}
+
+/* assumes kvm->lock held */
+static struct _iosignalfd_group *
+iosignalfd_group_find(struct kvm *kvm, u64 addr)
+{
+	struct _iosignalfd_group *group;
+
+	list_for_each_entry(group, &kvm->iosignalfds, list) {
+		if (group->addr == addr)
+			return group;
+	}
+
+	return NULL;
+}
+
+static const struct kvm_io_device_ops iosignalfd_ops = {
+	.read       = iosignalfd_group_read,
+	.write      = iosignalfd_group_write,
+	.in_range   = iosignalfd_group_in_range,
+	.destructor = iosignalfd_group_destructor,
+};
+
+/*
+ * Atomically find an existing group, or create a new one if it doesn't already
+ * exist.
+ *
+ * assumes kvm->lock is held
+ */
+static struct _iosignalfd_group *
+iosignalfd_group_get(struct kvm *kvm, struct kvm_io_bus *bus,
+		      u64 addr, size_t len)
+{
+	struct _iosignalfd_group *group;
+
+	group = iosignalfd_group_find(kvm, addr);
+	if (!group) {
+		int ret;
+
+		group = kzalloc(sizeof(*group), GFP_KERNEL);
+		if (!group)
+			return ERR_PTR(-ENOMEM);
+
+		INIT_LIST_HEAD(&group->list);
+		INIT_LIST_HEAD(&group->items);
+		group->addr   = addr;
+		group->length = len;
+		kvm_iodevice_init(&group->dev, &iosignalfd_ops);
+
+		ret = kvm_io_bus_register_dev(bus, &group->dev);
+		if (ret < 0) {
+			kfree(group);
+			return ERR_PTR(ret);
+		}
+
+		list_add_tail(&group->list, &kvm->iosignalfds);
+
+	} else if (group->length != len)
+		/*
+		 * Existing groups must have the same addr/len tuple or we
+		 * reject the request
+		 */
+		return ERR_PTR(-EINVAL);
+
+	return group;
+}
+
+static int
+kvm_assign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
+{
+	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
+	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
+	struct _iosignalfd_group *group = NULL;
+	struct _iosignalfd_item  *item = NULL;
+	struct file              *file;
+	int                       ret;
+
+	file = eventfd_fget(args->fd);
+	if (IS_ERR(file)) {
+		ret = PTR_ERR(file);
+		return ret;
+	}
+
+	item = kzalloc(sizeof(*item), GFP_KERNEL);
+	if (!item) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	INIT_LIST_HEAD(&item->list);
+	item->file = file;
+
+	/*
+	 * Registering a "trigger" address is optional.  If this flag
+	 * is not specified, we leave the item->match pointer NULL, which
+	 * indicates a wildcard
+	 */
+	if (args->flags & KVM_IOSIGNALFD_FLAG_TRIGGER) {
+		if (args->len > sizeof(u64)) {
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		item->match = kzalloc(args->len, GFP_KERNEL);
+		if (!item->match) {
+			ret = -ENOMEM;
+			goto fail;
+		}
+
+		if (copy_from_user(item->match,
+				   (void *)args->trigger,
+				   args->len)) {
+			ret = -EFAULT;
+			goto fail;
+		}
+	}
+
+	mutex_lock(&kvm->lock);
+
+	group = iosignalfd_group_get(kvm, bus, args->addr, args->len);
+	if (IS_ERR(group)) {
+		ret = PTR_ERR(group);
+		mutex_unlock(&kvm->lock);
+		goto fail;
+	}
+
+	/*
+	 * Note: We are committed to succeed at this point since we have
+	 * (potentially) published a new group-device.  Any failure handling
+	 * added in the future after this point will need to be handled
+	 * carefully.
+	 */
+
+	list_add_tail_rcu(&item->list, &group->items);
+
+	mutex_unlock(&kvm->lock);
+
+	return 0;
+
+fail:
+	if (item) {
+		/*
+		 * it would have never made it to the group->items list
+	 * in the failure path, so we don't need to worry about removing
+		 * it
+		 */
+		kfree(item->match);
+		kfree(item);
+	}
+
+	if (file)
+		fput(file);
+
+	return ret;
+}
+
+static void
+iosignalfd_item_free(struct rcu_head *rhp)
+{
+	struct _iosignalfd_item *item = to_item(rhp);
+
+	fput(item->file);
+	kfree(item->match);
+	kfree(item);
+}
+
+static int
+kvm_deassign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
+{
+	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
+	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
+	struct _iosignalfd_group *group;
+	struct _iosignalfd_item  *item, *tmp;
+	struct file              *file;
+	int                       ret = 0;
+
+	mutex_lock(&kvm->lock);
+
+	group = iosignalfd_group_find(kvm, args->addr);
+	if (!group) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	file = eventfd_fget(args->fd);
+	if (IS_ERR(file)) {
+		ret = PTR_ERR(file);
+		goto out;
+	}
+
+	list_for_each_entry_safe(item, tmp, &group->items, list) {
+		/*
+		 * any items registered at this group-address with the matching
+		 * eventfd will be removed
+		 */
+		if (item->file != file)
+			continue;
+
+		list_del_rcu(&item->list);
+		call_rcu(&item->rcu, iosignalfd_item_free);
+	}
+
+	if (list_empty(&group->items)) {
+		/*
+		 * We should unpublish our group device if we just removed
+		 * the last of its contained items
+		 */
+		kvm_io_bus_unregister_dev(bus, &group->dev);
+		_iosignalfd_group_destructor(group);
+	}
+
+	fput(file);
+
+out:
+	mutex_unlock(&kvm->lock);
+
+	return ret;
+}
+
+int
+kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
+{
+	if (args->flags & KVM_IOSIGNALFD_FLAG_DEASSIGN)
+		return kvm_deassign_iosignalfd(kvm, args);
+
+	return kvm_assign_iosignalfd(kvm, args);
+}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 179c650..91d0fe2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -977,7 +977,7 @@ static struct kvm *kvm_create_vm(void)
 	atomic_inc(&kvm->mm->mm_count);
 	spin_lock_init(&kvm->mmu_lock);
 	kvm_io_bus_init(&kvm->pio_bus);
-	kvm_irqfd_init(kvm);
+	kvm_eventfd_init(kvm);
 	mutex_init(&kvm->lock);
 	kvm_io_bus_init(&kvm->mmio_bus);
 	init_rwsem(&kvm->slots_lock);
@@ -2215,6 +2215,15 @@ static long kvm_vm_ioctl(struct file *filp,
 		r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags);
 		break;
 	}
+	case KVM_IOSIGNALFD: {
+		struct kvm_iosignalfd entry;
+
+		r = -EFAULT;
+		if (copy_from_user(&entry, argp, sizeof entry))
+			goto out;
+		r = kvm_iosignalfd(kvm, &entry);
+		break;
+	}
 	default:
 		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
 	}


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [KVM PATCH v5 2/2] kvm: add iosignalfd support
  2009-06-03 20:17 ` [KVM PATCH v5 2/2] kvm: add iosignalfd support Gregory Haskins
@ 2009-06-03 20:37   ` Gregory Haskins
  2009-06-03 21:45   ` Gregory Haskins
  2009-06-04  1:34   ` Paul E. McKenney
  2 siblings, 0 replies; 10+ messages in thread
From: Gregory Haskins @ 2009-06-03 20:37 UTC (permalink / raw)
  To: paulmck; +Cc: kvm, linux-kernel, avi, davidel, mtosatti, markmc

[-- Attachment #1: Type: text/plain, Size: 18531 bytes --]

Hi Paul,
  Sorry to bug you again, but here is yet another RCU-related patch.
See inline.

Gregory Haskins wrote:
> iosignalfd is a mechanism to register PIO/MMIO regions to trigger an eventfd
> signal when written to by a guest.  Host userspace can register any arbitrary
> IO address with a corresponding eventfd and then pass the eventfd to a
> specific end-point of interest for handling.
>
> Normal IO requires a blocking round-trip since the operation may cause
> side-effects in the emulated model or may return data to the caller.
> Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM
> "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's
> device model synchronously before returning control back to the vcpu.
>
> However, there is a subclass of IO which acts purely as a trigger for
> other IO (such as to kick off an out-of-band DMA request, etc).  For these
> patterns, the synchronous call is particularly expensive since we really
> only want our notification transmitted asynchronously and to
> return as quickly as possible.  All the synchronous infrastructure to ensure
> proper data-dependencies are met in the normal IO case is just unnecessary
> overhead for signalling.  This adds additional computational load on the
> system, as well as latency to the signalling path.
>
> Therefore, we provide a mechanism for registration of an in-kernel trigger
> point that allows the VCPU to only require a very brief, lightweight
> exit just long enough to signal an eventfd.  This also means that any
> clients compatible with the eventfd interface (which includes userspace
> and kernelspace equally well) can now register to be notified. The end
> result should be a more flexible and higher performance notification API
> for the backend KVM hypervisor and peripheral components.
>
> To test this theory, we built a test-harness called "doorbell".  This
> module has a function called "doorbell_ring()" which simply increments a
> counter for each time the doorbell is signaled.  It supports signalling
> from either an eventfd, or an ioctl().
>
> We then wired up two paths to the doorbell: one through QEMU via a
> registered IO region and the doorbell ioctl(), and the other directly via
> iosignalfd.
>
> You can download this test harness here:
>
> ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
>
> The measured results are as follows:
>
> qemu-mmio:       110000 iops, 9.09us rtt
> iosignalfd-mmio: 200100 iops, 5.00us rtt
> iosignalfd-pio:  367300 iops, 2.72us rtt
>
> I didn't measure qemu-pio, because I have to figure out how to register a
> PIO region with qemu's device model, and I got lazy.  However, for now we
> can extrapolate from the NULLIO-run deltas (+2.56us for MMIO, -350ns for
> HC) to estimate:
>
> qemu-pio:      153139 iops, 6.53us rtt
> iosignalfd-hc: 412585 iops, 2.37us rtt
>
> These are just for fun, for now, until I can gather more data.
>
> Here is a graph for your convenience:
>
> http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
>
> The conclusion to draw is that we save about 4us by skipping the userspace
> hop.
>
> --------------------
>
> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
> ---
>
>  arch/x86/kvm/x86.c       |    1 
>  include/linux/kvm.h      |   15 ++
>  include/linux/kvm_host.h |   10 +
>  virt/kvm/eventfd.c       |  356 ++++++++++++++++++++++++++++++++++++++++++++++
>  virt/kvm/kvm_main.c      |   11 +
>  5 files changed, 389 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c1ed485..c96c0e3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1097,6 +1097,7 @@ int kvm_dev_ioctl_check_extension(long ext)
>  	case KVM_CAP_IRQ_INJECT_STATUS:
>  	case KVM_CAP_ASSIGN_DEV_IRQ:
>  	case KVM_CAP_IRQFD:
> +	case KVM_CAP_IOSIGNALFD:
>  	case KVM_CAP_PIT2:
>  		r = 1;
>  		break;
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index 632a856..53b720d 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -300,6 +300,19 @@ struct kvm_guest_debug {
>  	struct kvm_guest_debug_arch arch;
>  };
>  
> +#define KVM_IOSIGNALFD_FLAG_TRIGGER   (1 << 0)
> +#define KVM_IOSIGNALFD_FLAG_PIO       (1 << 1)
> +#define KVM_IOSIGNALFD_FLAG_DEASSIGN  (1 << 2)
> +
> +struct kvm_iosignalfd {
> +	__u64 trigger;
> +	__u64 addr;
> +	__u32 len;
> +	__u32 fd;
> +	__u32 flags;
> +	__u8  pad[36];
> +};
> +
>  #define KVM_TRC_SHIFT           16
>  /*
>   * kvm trace categories
> @@ -430,6 +443,7 @@ struct kvm_trace_rec {
>  #ifdef __KVM_HAVE_PIT
>  #define KVM_CAP_PIT2 33
>  #endif
> +#define KVM_CAP_IOSIGNALFD 34
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> @@ -537,6 +551,7 @@ struct kvm_irqfd {
>  #define KVM_DEASSIGN_DEV_IRQ       _IOW(KVMIO, 0x75, struct kvm_assigned_irq)
>  #define KVM_IRQFD                  _IOW(KVMIO, 0x76, struct kvm_irqfd)
>  #define KVM_CREATE_PIT2		   _IOW(KVMIO, 0x77, struct kvm_pit_config)
> +#define KVM_IOSIGNALFD             _IOW(KVMIO, 0x78, struct kvm_iosignalfd)
>  
>  /*
>   * ioctls for vcpu fds
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 216fe07..b705960 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -138,6 +138,7 @@ struct kvm {
>  	struct kvm_io_bus pio_bus;
>  #ifdef CONFIG_HAVE_KVM_EVENTFD
>  	struct list_head irqfds;
> +	struct list_head iosignalfds;
>  #endif
>  	struct kvm_vm_stat stat;
>  	struct kvm_arch arch;
> @@ -533,19 +534,24 @@ static inline void kvm_free_irq_routing(struct kvm *kvm) {}
>  
>  #ifdef CONFIG_HAVE_KVM_EVENTFD
>  
> -void kvm_irqfd_init(struct kvm *kvm);
> +void kvm_eventfd_init(struct kvm *kvm);
>  int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
>  void kvm_irqfd_release(struct kvm *kvm);
> +int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args);
>  
>  #else
>  
> -static inline void kvm_irqfd_init(struct kvm *kvm) {}
> +static inline void kvm_eventfd_init(struct kvm *kvm) {}
>  static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
>  {
>  	return -EINVAL;
>  }
>  
>  static inline void kvm_irqfd_release(struct kvm *kvm) {}
> +static inline int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	return -EINVAL;
> +}
>  
>  #endif /* CONFIG_HAVE_KVM_EVENTFD */
>  
> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
> index f3f2ea1..77befb3 100644
> --- a/virt/kvm/eventfd.c
> +++ b/virt/kvm/eventfd.c
> @@ -21,6 +21,7 @@
>   */
>  
>  #include <linux/kvm_host.h>
> +#include <linux/kvm.h>
>  #include <linux/workqueue.h>
>  #include <linux/syscalls.h>
>  #include <linux/wait.h>
> @@ -29,6 +30,8 @@
>  #include <linux/list.h>
>  #include <linux/eventfd.h>
>  
> +#include "iodev.h"
> +
>  /*
>   * --------------------------------------------------------------------
>   * irqfd: Allows an fd to be used to inject an interrupt to the guest
> @@ -208,9 +211,10 @@ kvm_deassign_irqfd(struct kvm *kvm, int fd, int gsi)
>  }
>  
>  void
> -kvm_irqfd_init(struct kvm *kvm)
> +kvm_eventfd_init(struct kvm *kvm)
>  {
>  	INIT_LIST_HEAD(&kvm->irqfds);
> +	INIT_LIST_HEAD(&kvm->iosignalfds);
>  }
>  
>  int
> @@ -233,3 +237,353 @@ kvm_irqfd_release(struct kvm *kvm)
>  		irqfd_release(irqfd);
>  	}
>  }
> +
> +/*
> + * --------------------------------------------------------------------
> + * iosignalfd: translate a PIO/MMIO memory write to an eventfd signal.
> + *
> + * Userspace can register a PIO/MMIO address with an eventfd for receiving
> + * a notification when the memory has been touched.
> + * --------------------------------------------------------------------
> + */
> +
> +/*
> + * Design note: We create one PIO/MMIO device (iosignalfd_group) which
> + * aggregates one or more iosignalfd_items.  Each item points to exactly one
> + * eventfd, and can be registered to trigger on any write to the group
> + * (wildcard), or to a write of a specific value.  If more than one item is to
> + * be supported, the addr/len ranges must all be identical in the group.  If a
> + * trigger value is to be supported on a particular item, the group range must
> + * be exactly the width of the trigger.
> + */
> +
> +struct _iosignalfd_item {
> +	struct list_head     list;
> +	struct file         *file;
> +	unsigned char       *match;
> +	struct rcu_head      rcu;
> +};
> +
> +struct _iosignalfd_group {
> +	struct list_head     list;
> +	u64                  addr;
> +	size_t               length;
> +	struct list_head     items;
> +	struct kvm_io_device dev;
> +};
> +
> +static inline struct _iosignalfd_group *to_group(struct kvm_io_device *dev)
> +{
> +	return container_of(dev, struct _iosignalfd_group, dev);
> +}
> +
> +static inline struct _iosignalfd_item *to_item(struct rcu_head *rhp)
> +{
> +	return container_of(rhp, struct _iosignalfd_item, rcu);
> +}
> +
> +static int
> +iosignalfd_group_in_range(struct kvm_io_device *this, gpa_t addr, int len,
> +			  int is_write)
> +{
> +	struct _iosignalfd_group *p = to_group(this);
> +
> +	return (addr >= p->addr) && (addr < p->addr + p->length);
> +}
> +
> +static int
> +iosignalfd_is_match(struct _iosignalfd_group *group,
> +		    struct _iosignalfd_item *item,
> +		    const void *val,
> +		    int len)
> +{
> +	if (!item->match)
> +		/* wildcard is a hit */
> +		return true;
> +
> +	if (len != group->length)
> +		/* mismatched length is a miss */
> +		return false;
> +
> +	/* otherwise, we have to actually compare the data */
> +	return !memcmp(item->match, val, len);
> +}
> +
> +/*
> + * MMIO/PIO writes trigger an event (if the data matches).
> + *
> + * This is invoked by the io_bus subsystem in response to an address match
> + * against the group.  We must then walk the list of individual items to check
> + * for a match and, if applicable, to send the appropriate signal. If the item
> + * in question does not have a "match" pointer, it is considered a wildcard
> + * and will always generate a signal.  There can be an arbitrary number
> + * of distinct matches or wildcards per group.
> + */
> +static void
> +iosignalfd_group_write(struct kvm_io_device *this, gpa_t addr, int len,
> +		       const void *val)
> +{
> +	struct _iosignalfd_group *group = to_group(this);
> +	struct _iosignalfd_item *item;
> +
> +	/* FIXME: We should probably use SRCU */
> +	rcu_read_lock();
> +
> +	list_for_each_entry_rcu(item, &group->items, list) {
> +		if (iosignalfd_is_match(group, item, val, len))
> +			eventfd_signal(item->file, 1);
> +	}
> +
> +	rcu_read_unlock();
>   

[1]

> +}
> +
> +/*
> + * MMIO/PIO reads against the group indiscriminately return all zeros
> + */
> +static void
> +iosignalfd_group_read(struct kvm_io_device *this, gpa_t addr, int len,
> +		      void *val)
> +{
> +	memset(val, 0, len);
> +}
> +
> +static void
> +_iosignalfd_group_destructor(struct _iosignalfd_group *group)
> +{
> +	list_del(&group->list);
> +	kfree(group);
> +}
> +
> +static void
> +iosignalfd_group_destructor(struct kvm_io_device *this)
> +{
> +	struct _iosignalfd_group *group = to_group(this);
> +
> +	_iosignalfd_group_destructor(group);
> +}
> +
> +/* assumes kvm->lock held */
> +static struct _iosignalfd_group *
> +iosignalfd_group_find(struct kvm *kvm, u64 addr)
> +{
> +	struct _iosignalfd_group *group;
> +
> +	list_for_each_entry(group, &kvm->iosignalfds, list) {
> +		if (group->addr == addr)
> +			return group;
> +	}
> +
> +	return NULL;
> +}
> +
> +static const struct kvm_io_device_ops iosignalfd_ops = {
> +	.read       = iosignalfd_group_read,
> +	.write      = iosignalfd_group_write,
> +	.in_range   = iosignalfd_group_in_range,
> +	.destructor = iosignalfd_group_destructor,
> +};
> +
> +/*
> + * Atomically find an existing group, or create a new one if it doesn't already
> + * exist.
> + *
> + * assumes kvm->lock is held
> + */
> +static struct _iosignalfd_group *
> +iosignalfd_group_get(struct kvm *kvm, struct kvm_io_bus *bus,
> +		      u64 addr, size_t len)
> +{
> +	struct _iosignalfd_group *group;
> +
> +	group = iosignalfd_group_find(kvm, addr);
> +	if (!group) {
> +		int ret;
> +
> +		group = kzalloc(sizeof(*group), GFP_KERNEL);
> +		if (!group)
> +			return ERR_PTR(-ENOMEM);
> +
> +		INIT_LIST_HEAD(&group->list);
> +		INIT_LIST_HEAD(&group->items);
> +		group->addr   = addr;
> +		group->length = len;
> +		kvm_iodevice_init(&group->dev, &iosignalfd_ops);
> +
> +		ret = kvm_io_bus_register_dev(bus, &group->dev);
> +		if (ret < 0) {
> +			kfree(group);
> +			return ERR_PTR(ret);
> +		}
> +
> +		list_add_tail(&group->list, &kvm->iosignalfds);
> +
> +	} else if (group->length != len)
> +		/*
> +		 * Existing groups must have the same addr/len tuple or we
> +		 * reject the request
> +		 */
> +		return ERR_PTR(-EINVAL);
> +
> +	return group;
> +}
> +
> +static int
> +kvm_assign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> +	struct _iosignalfd_group *group = NULL;
> +	struct _iosignalfd_item  *item = NULL;
> +	struct file              *file;
> +	int                       ret;
> +
> +	file = eventfd_fget(args->fd);
> +	if (IS_ERR(file)) {
> +		ret = PTR_ERR(file);
> +		return ret;
> +	}
> +
> +	item = kzalloc(sizeof(*item), GFP_KERNEL);
> +	if (!item) {
> +		ret = -ENOMEM;
> +		goto fail;
> +	}
> +
> +	INIT_LIST_HEAD(&item->list);
> +	item->file = file;
> +
> +	/*
> +	 * Registering a "trigger" address is optional.  If this flag
> +	 * is not specified, we leave the item->match pointer NULL, which
> +	 * indicates a wildcard
> +	 */
> +	if (args->flags & KVM_IOSIGNALFD_FLAG_TRIGGER) {
> +		if (args->len > sizeof(u64)) {
> +			ret = -EINVAL;
> +			goto fail;
> +		}
> +
> +		item->match = kzalloc(args->len, GFP_KERNEL);
> +		if (!item->match) {
> +			ret = -ENOMEM;
> +			goto fail;
> +		}
> +
> +		if (copy_from_user(item->match,
> +				   (void *)args->trigger,
> +				   args->len)) {
> +			ret = -EFAULT;
> +			goto fail;
> +		}
> +	}
> +
> +	mutex_lock(&kvm->lock);
> +
> +	group = iosignalfd_group_get(kvm, bus, args->addr, args->len);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		mutex_unlock(&kvm->lock);
> +		goto fail;
> +	}
> +
> +	/*
> +	 * Note: We are committed to succeed at this point since we have
> +	 * (potentially) published a new group-device.  Any failure handling
> +	 * added in the future after this point will need to be handled
> +	 * carefully.
> +	 */
> +
> +	list_add_tail_rcu(&item->list, &group->items);
> +
> +	mutex_unlock(&kvm->lock);
> +
> +	return 0;
> +
> +fail:
> +	if (item) {
> +		/*
> +		 * The item never made it onto the group->items list in the
> +		 * failure path, so we don't need to worry about removing
> +		 * it
> +		 */
> +		kfree(item->match);
> +		kfree(item);
> +	}
> +
> +	if (file)
> +		fput(file);
> +
> +	return ret;
> +}
> +
> +static void
> +iosignalfd_item_free(struct rcu_head *rhp)
> +{
> +	struct _iosignalfd_item *item = to_item(rhp);
> +
> +	fput(item->file);
> +	kfree(item->match);
> +	kfree(item);
> +}
> +
> +static int
> +kvm_deassign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> +	struct _iosignalfd_group *group;
> +	struct _iosignalfd_item  *item, *tmp;
> +	struct file              *file;
> +	int                       ret = 0;
> +
> +	mutex_lock(&kvm->lock);
> +
> +	group = iosignalfd_group_find(kvm, args->addr);
> +	if (!group) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	file = eventfd_fget(args->fd);
> +	if (IS_ERR(file)) {
> +		ret = PTR_ERR(file);
> +		goto out;
> +	}
> +
> +	list_for_each_entry_safe(item, tmp, &group->items, list) {
> +		/*
> +		 * any items registered at this group-address with the matching
> +		 * eventfd will be removed
> +		 */
> +		if (item->file != file)
> +			continue;
> +
> +		list_del_rcu(&item->list);
> +		call_rcu(&item->rcu, iosignalfd_item_free);
>   

I am fairly certain that this is ok w.r.t. [1], but I wonder what would
happen if I convert [1] to SRCU.  Can I still use call_rcu(), or is
synchronize_srcu the only barrier compatible with srcu_read_lock()?

> +	}
> +
> +	if (list_empty(&group->items)) {
> +		/*
> +		 * We should unpublish our group device if we just removed
> +		 * the last of its contained items
> +		 */
> +		kvm_io_bus_unregister_dev(bus, &group->dev);
> +		_iosignalfd_group_destructor(group);
> +	}
> +
> +	fput(file);
> +
> +out:
> +	mutex_unlock(&kvm->lock);
> +
> +	return ret;
> +}
> +
> +int
> +kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	if (args->flags & KVM_IOSIGNALFD_FLAG_DEASSIGN)
> +		return kvm_deassign_iosignalfd(kvm, args);
> +
> +	return kvm_assign_iosignalfd(kvm, args);
> +}
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 179c650..91d0fe2 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -977,7 +977,7 @@ static struct kvm *kvm_create_vm(void)
>  	atomic_inc(&kvm->mm->mm_count);
>  	spin_lock_init(&kvm->mmu_lock);
>  	kvm_io_bus_init(&kvm->pio_bus);
> -	kvm_irqfd_init(kvm);
> +	kvm_eventfd_init(kvm);
>  	mutex_init(&kvm->lock);
>  	kvm_io_bus_init(&kvm->mmio_bus);
>  	init_rwsem(&kvm->slots_lock);
> @@ -2215,6 +2215,15 @@ static long kvm_vm_ioctl(struct file *filp,
>  		r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags);
>  		break;
>  	}
> +	case KVM_IOSIGNALFD: {
> +		struct kvm_iosignalfd entry;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&entry, argp, sizeof entry))
> +			goto out;
> +		r = kvm_iosignalfd(kvm, &entry);
> +		break;
> +	}
>  	default:
>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
>  	}
>
>   



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 266 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [KVM PATCH v5 0/2] iosignalfd
  2009-06-03 20:17 [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
  2009-06-03 20:17 ` [KVM PATCH v5 1/2] kvm: make io_bus interface more robust Gregory Haskins
  2009-06-03 20:17 ` [KVM PATCH v5 2/2] kvm: add iosignalfd support Gregory Haskins
@ 2009-06-03 21:37 ` Gregory Haskins
  2009-06-04 12:25   ` Avi Kivity
  2 siblings, 1 reply; 10+ messages in thread
From: Gregory Haskins @ 2009-06-03 21:37 UTC (permalink / raw)
  To: mtosatti; +Cc: kvm, linux-kernel, avi, davidel, paulmck, markmc

[-- Attachment #1: Type: text/plain, Size: 3172 bytes --]

Gregory Haskins wrote:
> (Applies to kvm.git/master:25deed73)
>
> This is v5 of the series.  For more details, please see the header to
> patch 2/2.
>
> This series has been tested against the kvm-eventfd unit test, and
> appears to be functioning properly.  You can download this test here:
>
> ftp://ftp.novell.com/dev/ghaskins/kvm-eventfd.tar.bz2
>
> This series is ready to be considered for inclusion, pending any further
> review comments.
>   

Sorry, Marcelo.  I re-used an old email when composing this. :)

Marcelo, Avi, and I have previously agreed that Marcelo's
mmio-locking cleanup should go in first.  When that happens, I will
need to rebase this series because it changes how you interface to the
io_bus code.  I should have mentioned that here, but forgot.  (Speaking
of which, is there an ETA for when that code will be merged, Avi?)

That aside, after I sent this series I went to get some coffee to clear
my head and I thought of an issue in the code.  I will reply inline to
patch 2/2.

-Greg

> [
>    Changelog:
>
>       v5:
>            *) Removed "cookie" field, which was a misunderstanding on my
>               part on what Avi wanted for a data-match feature
> 	   *) Added a new "trigger" data-match feature which I think is
>               much closer to what we need.
> 	   *) We retain the dev_count field in the io_bus infrastructure
> 	      and instead back-fill the array on removal.
> 	   *) Various minor cleanups
> 	   *) Rebased to kvm.git/master:25deed73
>
>       v4:
>            *) Fixed a bug in the original 2/4 where the PIT failure case
>               would potentially leave the io_bus components registered.
>            *) Condensed the v3 2/4 and 3/4 into one patch (2/2) since
>               the patches became interdependent with the fix described above
>            *) Rebased to kvm.git/master:74dfca0a
>
>       v3:
>            *) fixed patch 2/4 to handle error cases instead of BUG_ON
>            *) implemented same HAVE_EVENTFD protection mechanism as
>               irqfd to prevent compilation errors on unsupported arches
>            *) completed testing
>            *) rebased to kvm.git/master:7391a6d5
>
>       v2:
>            *) added optional data-matching capability (via cookie field)
>            *) changed name from iofd to iosignalfd
>            *) added io_bus unregister function
>            *) implemented deassign feature
>
>       v1:
>            *) original release (integrated into irqfd v7 series as "iofd")
> ]
>
> ---
>
> Gregory Haskins (2):
>       kvm: add iosignalfd support
>       kvm: make io_bus interface more robust
>
>
>  arch/x86/kvm/i8254.c      |   22 +++
>  arch/x86/kvm/i8259.c      |    9 +
>  arch/x86/kvm/x86.c        |    1 
>  include/linux/kvm.h       |   15 ++
>  include/linux/kvm_host.h  |   16 ++
>  virt/kvm/coalesced_mmio.c |    8 +
>  virt/kvm/eventfd.c        |  356 +++++++++++++++++++++++++++++++++++++++++++++
>  virt/kvm/ioapic.c         |    9 +
>  virt/kvm/kvm_main.c       |   41 +++++
>  9 files changed, 462 insertions(+), 15 deletions(-)
>
>   



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 266 bytes --]


* Re: [KVM PATCH v5 2/2] kvm: add iosignalfd support
  2009-06-03 20:17 ` [KVM PATCH v5 2/2] kvm: add iosignalfd support Gregory Haskins
  2009-06-03 20:37   ` Gregory Haskins
@ 2009-06-03 21:45   ` Gregory Haskins
  2009-06-04  1:34   ` Paul E. McKenney
  2 siblings, 0 replies; 10+ messages in thread
From: Gregory Haskins @ 2009-06-03 21:45 UTC (permalink / raw)
  To: kvm; +Cc: linux-kernel, avi, davidel, mtosatti, paulmck, markmc

[-- Attachment #1: Type: text/plain, Size: 18762 bytes --]

Gregory Haskins wrote:
> iosignalfd is a mechanism to register PIO/MMIO regions to trigger an eventfd
> signal when written to by a guest.  Host userspace can register any arbitrary
> IO address with a corresponding eventfd and then pass the eventfd to a
> specific end-point of interest for handling.
>
> Normal IO requires a blocking round-trip since the operation may cause
> side-effects in the emulated model or may return data to the caller.
> Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM
> "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's
> device model synchronously before returning control back to the vcpu.
>
> However, there is a subclass of IO which acts purely as a trigger for
> other IO (such as to kick off an out-of-band DMA request, etc).  For these
> patterns, the synchronous call is particularly expensive since we really
> only want to get our notification transmitted asynchronously and
> return as quickly as possible.  All the synchronous infrastructure that ensures
> proper data-dependencies are met in the normal IO case is just unnecessary
> overhead for signalling.  This adds additional computational load on the
> system, as well as latency to the signalling path.
>
> Therefore, we provide a mechanism for registration of an in-kernel trigger
> point that allows the VCPU to only require a very brief, lightweight
> exit just long enough to signal an eventfd.  This also means that any
> clients compatible with the eventfd interface (which includes userspace
> and kernelspace equally well) can now register to be notified. The end
> result should be a more flexible and higher performance notification API
> for the backend KVM hypervisor and peripheral components.
>
> To test this theory, we built a test-harness called "doorbell".  This
> module has a function called "doorbell_ring()" which simply increments a
> counter for each time the doorbell is signaled.  It supports signalling
> from either an eventfd, or an ioctl().
>
> We then wired up two paths to the doorbell: One via QEMU via a registered
> io region and through the doorbell ioctl().  The other is direct via
> iosignalfd.
>
> You can download this test harness here:
>
> ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
>
> The measured results are as follows:
>
> qemu-mmio:       110000 iops, 9.09us rtt
> iosignalfd-mmio: 200100 iops, 5.00us rtt
> iosignalfd-pio:  367300 iops, 2.72us rtt
>
> I didn't measure qemu-pio, because I would have to figure out how to register a
> PIO region with qemu's device model, and I got lazy.  However, extrapolating
> from the NULLIO-run deltas of +2.56us for MMIO and -350ns for HC, we get:
>
> qemu-pio:      153139 iops, 6.53us rtt
> iosignalfd-hc: 412585 iops, 2.37us rtt
>
> these are just for fun, for now, until I can gather more data.
>
> Here is a graph for your convenience:
>
> http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
>
> The conclusion to draw is that we save about 4us by skipping the userspace
> hop.
>
> --------------------
>
> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
> ---
>
>  arch/x86/kvm/x86.c       |    1 
>  include/linux/kvm.h      |   15 ++
>  include/linux/kvm_host.h |   10 +
>  virt/kvm/eventfd.c       |  356 ++++++++++++++++++++++++++++++++++++++++++++++
>  virt/kvm/kvm_main.c      |   11 +
>  5 files changed, 389 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c1ed485..c96c0e3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1097,6 +1097,7 @@ int kvm_dev_ioctl_check_extension(long ext)
>  	case KVM_CAP_IRQ_INJECT_STATUS:
>  	case KVM_CAP_ASSIGN_DEV_IRQ:
>  	case KVM_CAP_IRQFD:
> +	case KVM_CAP_IOSIGNALFD:
>  	case KVM_CAP_PIT2:
>  		r = 1;
>  		break;
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index 632a856..53b720d 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -300,6 +300,19 @@ struct kvm_guest_debug {
>  	struct kvm_guest_debug_arch arch;
>  };
>  
> +#define KVM_IOSIGNALFD_FLAG_TRIGGER   (1 << 0)
> +#define KVM_IOSIGNALFD_FLAG_PIO       (1 << 1)
> +#define KVM_IOSIGNALFD_FLAG_DEASSIGN  (1 << 2)
> +
> +struct kvm_iosignalfd {
> +	__u64 trigger;
> +	__u64 addr;
> +	__u32 len;
> +	__u32 fd;
> +	__u32 flags;
> +	__u8  pad[36];
> +};
> +
>  #define KVM_TRC_SHIFT           16
>  /*
>   * kvm trace categories
> @@ -430,6 +443,7 @@ struct kvm_trace_rec {
>  #ifdef __KVM_HAVE_PIT
>  #define KVM_CAP_PIT2 33
>  #endif
> +#define KVM_CAP_IOSIGNALFD 34
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> @@ -537,6 +551,7 @@ struct kvm_irqfd {
>  #define KVM_DEASSIGN_DEV_IRQ       _IOW(KVMIO, 0x75, struct kvm_assigned_irq)
>  #define KVM_IRQFD                  _IOW(KVMIO, 0x76, struct kvm_irqfd)
>  #define KVM_CREATE_PIT2		   _IOW(KVMIO, 0x77, struct kvm_pit_config)
> +#define KVM_IOSIGNALFD             _IOW(KVMIO, 0x78, struct kvm_iosignalfd)
>  
>  /*
>   * ioctls for vcpu fds
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 216fe07..b705960 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -138,6 +138,7 @@ struct kvm {
>  	struct kvm_io_bus pio_bus;
>  #ifdef CONFIG_HAVE_KVM_EVENTFD
>  	struct list_head irqfds;
> +	struct list_head iosignalfds;
>  #endif
>  	struct kvm_vm_stat stat;
>  	struct kvm_arch arch;
> @@ -533,19 +534,24 @@ static inline void kvm_free_irq_routing(struct kvm *kvm) {}
>  
>  #ifdef CONFIG_HAVE_KVM_EVENTFD
>  
> -void kvm_irqfd_init(struct kvm *kvm);
> +void kvm_eventfd_init(struct kvm *kvm);
>  int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
>  void kvm_irqfd_release(struct kvm *kvm);
> +int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args);
>  
>  #else
>  
> -static inline void kvm_irqfd_init(struct kvm *kvm) {}
> +static inline void kvm_eventfd_init(struct kvm *kvm) {}
>  static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
>  {
>  	return -EINVAL;
>  }
>  
>  static inline void kvm_irqfd_release(struct kvm *kvm) {}
> +static inline int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	return -EINVAL;
> +}
>  
>  #endif /* CONFIG_HAVE_KVM_EVENTFD */
>  
> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
> index f3f2ea1..77befb3 100644
> --- a/virt/kvm/eventfd.c
> +++ b/virt/kvm/eventfd.c
> @@ -21,6 +21,7 @@
>   */
>  
>  #include <linux/kvm_host.h>
> +#include <linux/kvm.h>
>  #include <linux/workqueue.h>
>  #include <linux/syscalls.h>
>  #include <linux/wait.h>
> @@ -29,6 +30,8 @@
>  #include <linux/list.h>
>  #include <linux/eventfd.h>
>  
> +#include "iodev.h"
> +
>  /*
>   * --------------------------------------------------------------------
>   * irqfd: Allows an fd to be used to inject an interrupt to the guest
> @@ -208,9 +211,10 @@ kvm_deassign_irqfd(struct kvm *kvm, int fd, int gsi)
>  }
>  
>  void
> -kvm_irqfd_init(struct kvm *kvm)
> +kvm_eventfd_init(struct kvm *kvm)
>  {
>  	INIT_LIST_HEAD(&kvm->irqfds);
> +	INIT_LIST_HEAD(&kvm->iosignalfds);
>  }
>  
>  int
> @@ -233,3 +237,353 @@ kvm_irqfd_release(struct kvm *kvm)
>  		irqfd_release(irqfd);
>  	}
>  }
> +
> +/*
> + * --------------------------------------------------------------------
> + * iosignalfd: translate a PIO/MMIO memory write to an eventfd signal.
> + *
> + * Userspace can register a PIO/MMIO address with an eventfd for receiving
> + * a notification when the memory has been touched.
> + * --------------------------------------------------------------------
> + */
> +
> +/*
> + * Design note: We create one PIO/MMIO device (iosignalfd_group) which
> + * aggregates one or more iosignalfd_items.  Each item points to exactly one
> + * eventfd, and can be registered to trigger on any write to the group
> + * (wildcard), or to a write of a specific value.  If more than one item is to
> + * be supported, the addr/len ranges must all be identical in the group.  If a
> + * trigger value is to be supported on a particular item, the group range must
> + * be exactly the width of the trigger.
> + */
> +
> +struct _iosignalfd_item {
> +	struct list_head     list;
> +	struct file         *file;
> +	unsigned char       *match;
> +	struct rcu_head      rcu;
> +};
> +
> +struct _iosignalfd_group {
> +	struct list_head     list;
> +	u64                  addr;
> +	size_t               length;
> +	struct list_head     items;
> +	struct kvm_io_device dev;
> +};
> +
> +static inline struct _iosignalfd_group *to_group(struct kvm_io_device *dev)
> +{
> +	return container_of(dev, struct _iosignalfd_group, dev);
> +}
> +
> +static inline struct _iosignalfd_item *to_item(struct rcu_head *rhp)
> +{
> +	return container_of(rhp, struct _iosignalfd_item, rcu);
> +}
> +
> +static int
> +iosignalfd_group_in_range(struct kvm_io_device *this, gpa_t addr, int len,
> +			  int is_write)
> +{
> +	struct _iosignalfd_group *p = to_group(this);
> +
> +	return (addr >= p->addr) && (addr < p->addr + p->length);
> +}
> +
> +static int
> +iosignalfd_is_match(struct _iosignalfd_group *group,
> +		    struct _iosignalfd_item *item,
> +		    const void *val,
> +		    int len)
> +{
> +	if (!item->match)
> +		/* wildcard is a hit */
> +		return true;
> +
> +	if (len != group->length)
> +		/* mismatched length is a miss */
> +		return false;
> +
> +	/* otherwise, we have to actually compare the data */
> +	return !memcmp(item->match, val, len);
> +}
> +
> +/*
> + * MMIO/PIO writes trigger an event (if the data matches).
> + *
> + * This is invoked by the io_bus subsystem in response to an address match
> + * against the group.  We must then walk the list of individual items to check
> + * for a match and, if applicable, to send the appropriate signal. If the item
> + * in question does not have a "match" pointer, it is considered a wildcard
> + * and will always generate a signal.  There can be an arbitrary number
> + * of distinct matches or wildcards per group.
> + */
> +static void
> +iosignalfd_group_write(struct kvm_io_device *this, gpa_t addr, int len,
> +		       const void *val)
> +{
> +	struct _iosignalfd_group *group = to_group(this);
> +	struct _iosignalfd_item *item;
> +
> +	/* FIXME: We should probably use SRCU */
> +	rcu_read_lock();
> +
> +	list_for_each_entry_rcu(item, &group->items, list) {
> +		if (iosignalfd_is_match(group, item, val, len))
> +			eventfd_signal(item->file, 1);
> +	}
> +
> +	rcu_read_unlock();
> +}
> +
> +/*
> + * MMIO/PIO reads against the group indiscriminately return all zeros
> + */
> +static void
> +iosignalfd_group_read(struct kvm_io_device *this, gpa_t addr, int len,
> +		      void *val)
> +{
> +	memset(val, 0, len);
> +}
> +
> +static void
> +_iosignalfd_group_destructor(struct _iosignalfd_group *group)
> +{
> +	list_del(&group->list);
> +	kfree(group);
> +}
> +
> +static void
> +iosignalfd_group_destructor(struct kvm_io_device *this)
> +{
> +	struct _iosignalfd_group *group = to_group(this);
> +
> +	_iosignalfd_group_destructor(group);
> +}
> +
> +/* assumes kvm->lock held */
> +static struct _iosignalfd_group *
> +iosignalfd_group_find(struct kvm *kvm, u64 addr)
> +{
> +	struct _iosignalfd_group *group;
> +
> +	list_for_each_entry(group, &kvm->iosignalfds, list) {
> +		if (group->addr == addr)
> +			return group;
> +	}
> +
> +	return NULL;
> +}
> +
> +static const struct kvm_io_device_ops iosignalfd_ops = {
> +	.read       = iosignalfd_group_read,
> +	.write      = iosignalfd_group_write,
> +	.in_range   = iosignalfd_group_in_range,
> +	.destructor = iosignalfd_group_destructor,
> +};
> +
> +/*
> + * Atomically find an existing group, or create a new one if it doesn't already
> + * exist.
> + *
> + * assumes kvm->lock is held
> + */
> +static struct _iosignalfd_group *
> +iosignalfd_group_get(struct kvm *kvm, struct kvm_io_bus *bus,
> +		      u64 addr, size_t len)
> +{
> +	struct _iosignalfd_group *group;
> +
> +	group = iosignalfd_group_find(kvm, addr);
> +	if (!group) {
> +		int ret;
> +
> +		group = kzalloc(sizeof(*group), GFP_KERNEL);
> +		if (!group)
> +			return ERR_PTR(-ENOMEM);
> +
> +		INIT_LIST_HEAD(&group->list);
> +		INIT_LIST_HEAD(&group->items);
> +		group->addr   = addr;
> +		group->length = len;
> +		kvm_iodevice_init(&group->dev, &iosignalfd_ops);
> +
> +		ret = kvm_io_bus_register_dev(bus, &group->dev);
> +		if (ret < 0) {
> +			kfree(group);
> +			return ERR_PTR(ret);
> +		}
> +
> +		list_add_tail(&group->list, &kvm->iosignalfds);
> +
> +	} else if (group->length != len)
> +		/*
> +		 * Existing groups must have the same addr/len tuple or we
> +		 * reject the request
> +		 */
> +		return ERR_PTR(-EINVAL);
> +
> +	return group;
> +}
> +
> +static int
> +kvm_assign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> +	struct _iosignalfd_group *group = NULL;
> +	struct _iosignalfd_item  *item = NULL;
> +	struct file              *file;
> +	int                       ret;
> +
> +	file = eventfd_fget(args->fd);
> +	if (IS_ERR(file)) {
> +		ret = PTR_ERR(file);
> +		return ret;
> +	}
> +
> +	item = kzalloc(sizeof(*item), GFP_KERNEL);
> +	if (!item) {
> +		ret = -ENOMEM;
> +		goto fail;
> +	}
> +
> +	INIT_LIST_HEAD(&item->list);
> +	item->file = file;
> +
> +	/*
> +	 * Registering a "trigger" address is optional.  If this flag
> +	 * is not specified, we leave the item->match pointer NULL, which
> +	 * indicates a wildcard
> +	 */
> +	if (args->flags & KVM_IOSIGNALFD_FLAG_TRIGGER) {
> +		if (args->len > sizeof(u64)) {
> +			ret = -EINVAL;
> +			goto fail;
> +		}
> +
> +		item->match = kzalloc(args->len, GFP_KERNEL);
> +		if (!item->match) {
> +			ret = -ENOMEM;
> +			goto fail;
> +		}
> +
> +		if (copy_from_user(item->match,
> +				   (void *)args->trigger,
> +				   args->len)) {
> +			ret = -EFAULT;
> +			goto fail;
> +		}
> +	}
> +
> +	mutex_lock(&kvm->lock);
> +
> +	group = iosignalfd_group_get(kvm, bus, args->addr, args->len);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		mutex_unlock(&kvm->lock);
> +		goto fail;
> +	}
> +
> +	/*
> +	 * Note: We are committed to succeed at this point since we have
> +	 * (potentially) published a new group-device.  Any failure handling
> +	 * added in the future after this point will need to be handled
> +	 * carefully.
> +	 */
> +
> +	list_add_tail_rcu(&item->list, &group->items);
> +
> +	mutex_unlock(&kvm->lock);
> +
> +	return 0;
> +
> +fail:
> +	if (item) {
> +		/*
> +		 * The item never made it onto the group->items list in the
> +		 * failure path, so we don't need to worry about removing
> +		 * it
> +		 */
> +		kfree(item->match);
> +		kfree(item);
> +	}
> +
> +	if (file)
> +		fput(file);
> +
> +	return ret;
> +}
> +
> +static void
> +iosignalfd_item_free(struct rcu_head *rhp)
> +{
> +	struct _iosignalfd_item *item = to_item(rhp);
> +
> +	fput(item->file);
> +	kfree(item->match);
> +	kfree(item);
> +}
> +
> +static int
> +kvm_deassign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> +	struct _iosignalfd_group *group;
> +	struct _iosignalfd_item  *item, *tmp;
> +	struct file              *file;
> +	int                       ret = 0;
> +
> +	mutex_lock(&kvm->lock);
> +
> +	group = iosignalfd_group_find(kvm, args->addr);
> +	if (!group) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	file = eventfd_fget(args->fd);
> +	if (IS_ERR(file)) {
> +		ret = PTR_ERR(file);
> +		goto out;
> +	}
> +
> +	list_for_each_entry_safe(item, tmp, &group->items, list) {
> +		/*
> +		 * any items registered at this group-address with the matching
> +		 * eventfd will be removed
> +		 */
> +		if (item->file != file)
> +			continue;
> +
> +		list_del_rcu(&item->list);
> +		call_rcu(&item->rcu, iosignalfd_item_free);
> +	}
> +
> +	if (list_empty(&group->items)) {
> +		/*
> +		 * We should unpublish our group device if we just removed
> +		 * the last of its contained items
> +		 */
> +		kvm_io_bus_unregister_dev(bus, &group->dev);
> +		_iosignalfd_group_destructor(group);
>   

This is the issue I mentioned in the last email (against 0/2).  I may
need to be concerned about racing to destroy the group before the next
grace period.

That aside, my whole use of RCU here is a bit dubious (at least in the
current code), since today all io_bus operations hold the kvm->lock while
they execute.  I think we would like to make this locking more fine
grained in the future, so thinking in this direction is probably not a bad
idea.  However, I really shouldn't do it halfway like I am right now
;)  I will fix this.

-Greg

> +	}
> +
> +	fput(file);
> +
> +out:
> +	mutex_unlock(&kvm->lock);
> +
> +	return ret;
> +}
> +
> +int
> +kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	if (args->flags & KVM_IOSIGNALFD_FLAG_DEASSIGN)
> +		return kvm_deassign_iosignalfd(kvm, args);
> +
> +	return kvm_assign_iosignalfd(kvm, args);
> +}
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 179c650..91d0fe2 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -977,7 +977,7 @@ static struct kvm *kvm_create_vm(void)
>  	atomic_inc(&kvm->mm->mm_count);
>  	spin_lock_init(&kvm->mmu_lock);
>  	kvm_io_bus_init(&kvm->pio_bus);
> -	kvm_irqfd_init(kvm);
> +	kvm_eventfd_init(kvm);
>  	mutex_init(&kvm->lock);
>  	kvm_io_bus_init(&kvm->mmio_bus);
>  	init_rwsem(&kvm->slots_lock);
> @@ -2215,6 +2215,15 @@ static long kvm_vm_ioctl(struct file *filp,
>  		r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags);
>  		break;
>  	}
> +	case KVM_IOSIGNALFD: {
> +		struct kvm_iosignalfd entry;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&entry, argp, sizeof entry))
> +			goto out;
> +		r = kvm_iosignalfd(kvm, &entry);
> +		break;
> +	}
>  	default:
>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
>  	}
>
>   





* Re: [KVM PATCH v5 2/2] kvm: add iosignalfd support
  2009-06-03 20:17 ` [KVM PATCH v5 2/2] kvm: add iosignalfd support Gregory Haskins
  2009-06-03 20:37   ` Gregory Haskins
  2009-06-03 21:45   ` Gregory Haskins
@ 2009-06-04  1:34   ` Paul E. McKenney
  2009-06-04  2:55     ` Gregory Haskins
  2 siblings, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2009-06-04  1:34 UTC (permalink / raw)
  To: Gregory Haskins; +Cc: kvm, linux-kernel, avi, davidel, mtosatti, markmc

On Wed, Jun 03, 2009 at 04:17:49PM -0400, Gregory Haskins wrote:
> iosignalfd is a mechanism to register PIO/MMIO regions to trigger an eventfd
> signal when written to by a guest.  Host userspace can register any arbitrary
> IO address with a corresponding eventfd and then pass the eventfd to a
> specific end-point of interest for handling.
> 
> Normal IO requires a blocking round-trip since the operation may cause
> side-effects in the emulated model or may return data to the caller.
> Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM
> "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's
> device model synchronously before returning control back to the vcpu.
> 
> However, there is a subclass of IO which acts purely as a trigger for
> other IO (such as to kick off an out-of-band DMA request, etc).  For these
> patterns, the synchronous call is particularly expensive since we really
> only want to get our notification transmitted asynchronously and
> return as quickly as possible.  All the synchronous infrastructure to ensure
> proper data-dependencies are met in the normal IO case is just unnecessary
> overhead for signalling.  This adds additional computational load on the
> system, as well as latency to the signalling path.
> 
> Therefore, we provide a mechanism for registration of an in-kernel trigger
> point that allows the VCPU to only require a very brief, lightweight
> exit just long enough to signal an eventfd.  This also means that any
> clients compatible with the eventfd interface (which includes userspace
> and kernelspace equally well) can now register to be notified. The end
> result should be a more flexible and higher performance notification API
> for the backend KVM hypervisor and peripheral components.
> 
> To test this theory, we built a test-harness called "doorbell".  This
> module has a function called "doorbell_ring()" which simply increments a
> counter for each time the doorbell is signaled.  It supports signalling
> from either an eventfd, or an ioctl().
> 
> We then wired up two paths to the doorbell: One via QEMU via a registered
> io region and through the doorbell ioctl().  The other is direct via
> iosignalfd.
> 
> You can download this test harness here:
> 
> ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
> 
> The measured results are as follows:
> 
> qemu-mmio:       110000 iops, 9.09us rtt
> iosignalfd-mmio: 200100 iops, 5.00us rtt
> iosignalfd-pio:  367300 iops, 2.72us rtt
> 
> I didn't measure qemu-pio, because I would have to figure out how to register
> a PIO region with qemu's device model, and I got lazy.  However, extrapolating
> from the NULLIO-run deltas of +2.56us for MMIO and -350ns for HC, we get:
> 
> qemu-pio:      153139 iops, 6.53us rtt
> iosignalfd-hc: 412585 iops, 2.37us rtt
> 
> These are just for fun for now, until I can gather more data.
> 
> Here is a graph for your convenience:
> 
> http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
> 
> The conclusion to draw is that we save about 4us by skipping the userspace
> hop.
> 
> --------------------

One set of decision criteria for RCU vs. SRCU below, and one question
about the RCU reader running concurrently with a deletion.  Looks pretty
good, in general!

							Thanx, Paul

> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
> ---
> 
>  arch/x86/kvm/x86.c       |    1 
>  include/linux/kvm.h      |   15 ++
>  include/linux/kvm_host.h |   10 +
>  virt/kvm/eventfd.c       |  356 ++++++++++++++++++++++++++++++++++++++++++++++
>  virt/kvm/kvm_main.c      |   11 +
>  5 files changed, 389 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index c1ed485..c96c0e3 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1097,6 +1097,7 @@ int kvm_dev_ioctl_check_extension(long ext)
>  	case KVM_CAP_IRQ_INJECT_STATUS:
>  	case KVM_CAP_ASSIGN_DEV_IRQ:
>  	case KVM_CAP_IRQFD:
> +	case KVM_CAP_IOSIGNALFD:
>  	case KVM_CAP_PIT2:
>  		r = 1;
>  		break;
> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> index 632a856..53b720d 100644
> --- a/include/linux/kvm.h
> +++ b/include/linux/kvm.h
> @@ -300,6 +300,19 @@ struct kvm_guest_debug {
>  	struct kvm_guest_debug_arch arch;
>  };
> 
> +#define KVM_IOSIGNALFD_FLAG_TRIGGER   (1 << 0)
> +#define KVM_IOSIGNALFD_FLAG_PIO       (1 << 1)
> +#define KVM_IOSIGNALFD_FLAG_DEASSIGN  (1 << 2)
> +
> +struct kvm_iosignalfd {
> +	__u64 trigger;
> +	__u64 addr;
> +	__u32 len;
> +	__u32 fd;
> +	__u32 flags;
> +	__u8  pad[36];
> +};
> +
>  #define KVM_TRC_SHIFT           16
>  /*
>   * kvm trace categories
> @@ -430,6 +443,7 @@ struct kvm_trace_rec {
>  #ifdef __KVM_HAVE_PIT
>  #define KVM_CAP_PIT2 33
>  #endif
> +#define KVM_CAP_IOSIGNALFD 34
> 
>  #ifdef KVM_CAP_IRQ_ROUTING
> 
> @@ -537,6 +551,7 @@ struct kvm_irqfd {
>  #define KVM_DEASSIGN_DEV_IRQ       _IOW(KVMIO, 0x75, struct kvm_assigned_irq)
>  #define KVM_IRQFD                  _IOW(KVMIO, 0x76, struct kvm_irqfd)
>  #define KVM_CREATE_PIT2		   _IOW(KVMIO, 0x77, struct kvm_pit_config)
> +#define KVM_IOSIGNALFD             _IOW(KVMIO, 0x78, struct kvm_iosignalfd)
> 
>  /*
>   * ioctls for vcpu fds
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 216fe07..b705960 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -138,6 +138,7 @@ struct kvm {
>  	struct kvm_io_bus pio_bus;
>  #ifdef CONFIG_HAVE_KVM_EVENTFD
>  	struct list_head irqfds;
> +	struct list_head iosignalfds;
>  #endif
>  	struct kvm_vm_stat stat;
>  	struct kvm_arch arch;
> @@ -533,19 +534,24 @@ static inline void kvm_free_irq_routing(struct kvm *kvm) {}
> 
>  #ifdef CONFIG_HAVE_KVM_EVENTFD
> 
> -void kvm_irqfd_init(struct kvm *kvm);
> +void kvm_eventfd_init(struct kvm *kvm);
>  int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
>  void kvm_irqfd_release(struct kvm *kvm);
> +int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args);
> 
>  #else
> 
> -static inline void kvm_irqfd_init(struct kvm *kvm) {}
> +static inline void kvm_eventfd_init(struct kvm *kvm) {}
>  static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
>  {
>  	return -EINVAL;
>  }
> 
>  static inline void kvm_irqfd_release(struct kvm *kvm) {}
> +static inline int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	return -EINVAL;
> +}
> 
>  #endif /* CONFIG_HAVE_KVM_EVENTFD */
> 
> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
> index f3f2ea1..77befb3 100644
> --- a/virt/kvm/eventfd.c
> +++ b/virt/kvm/eventfd.c
> @@ -21,6 +21,7 @@
>   */
> 
>  #include <linux/kvm_host.h>
> +#include <linux/kvm.h>
>  #include <linux/workqueue.h>
>  #include <linux/syscalls.h>
>  #include <linux/wait.h>
> @@ -29,6 +30,8 @@
>  #include <linux/list.h>
>  #include <linux/eventfd.h>
> 
> +#include "iodev.h"
> +
>  /*
>   * --------------------------------------------------------------------
>   * irqfd: Allows an fd to be used to inject an interrupt to the guest
> @@ -208,9 +211,10 @@ kvm_deassign_irqfd(struct kvm *kvm, int fd, int gsi)
>  }
> 
>  void
> -kvm_irqfd_init(struct kvm *kvm)
> +kvm_eventfd_init(struct kvm *kvm)
>  {
>  	INIT_LIST_HEAD(&kvm->irqfds);
> +	INIT_LIST_HEAD(&kvm->iosignalfds);
>  }
> 
>  int
> @@ -233,3 +237,353 @@ kvm_irqfd_release(struct kvm *kvm)
>  		irqfd_release(irqfd);
>  	}
>  }
> +
> +/*
> + * --------------------------------------------------------------------
> + * iosignalfd: translate a PIO/MMIO memory write to an eventfd signal.
> + *
> + * userspace can register a PIO/MMIO address with an eventfd for receiving
> + * notification when the memory has been touched.
> + * --------------------------------------------------------------------
> + */
> +
> +/*
> + * Design note: We create one PIO/MMIO device (iosignalfd_group) which
> + * aggregates one or more iosignalfd_items.  Each item points to exactly one
> + * eventfd, and can be registered to trigger on any write to the group
> + * (wildcard), or to a write of a specific value.  If more than one item is to
> + * be supported, the addr/len ranges must all be identical in the group.  If a
> + * trigger value is to be supported on a particular item, the group range must
> + * be exactly the width of the trigger.
> + */
> +
> +struct _iosignalfd_item {
> +	struct list_head     list;
> +	struct file         *file;
> +	unsigned char       *match;
> +	struct rcu_head      rcu;
> +};
> +
> +struct _iosignalfd_group {
> +	struct list_head     list;
> +	u64                  addr;
> +	size_t               length;
> +	struct list_head     items;
> +	struct kvm_io_device dev;
> +};
> +
> +static inline struct _iosignalfd_group *to_group(struct kvm_io_device *dev)
> +{
> +	return container_of(dev, struct _iosignalfd_group, dev);
> +}
> +
> +static inline struct _iosignalfd_item *to_item(struct rcu_head *rhp)
> +{
> +	return container_of(rhp, struct _iosignalfd_item, rcu);
> +}

Given that you only use to_item() in one place, not clear to me that it
is a win compared to open-coding it.  In contrast, to_group() is used
in several places, so it makes sense to define it.

> +static int
> +iosignalfd_group_in_range(struct kvm_io_device *this, gpa_t addr, int len,
> +			  int is_write)
> +{
> +	struct _iosignalfd_group *p = to_group(this);
> +
> +	return ((addr >= p->addr && (addr < p->addr + p->length)));
> +}
> +
> +static int
> +iosignalfd_is_match(struct _iosignalfd_group *group,
> +		    struct _iosignalfd_item *item,
> +		    const void *val,
> +		    int len)
> +{
> +	if (!item->match)
> +		/* wildcard is a hit */
> +		return true;
> +
> +	if (len != group->length)
> +		/* mis-matched length is a miss */
> +		return false;
> +
> +	/* otherwise, we have to actually compare the data */
> +	return !memcmp(item->match, val, len) ? true : false;
> +}
> +
> +/*
> + * MMIO/PIO writes trigger an event (if the data matches).
> + *
> + * This is invoked by the io_bus subsystem in response to an address match
> + * against the group.  We must then walk the list of individual items to check
> + * for a match and, if applicable, to send the appropriate signal. If the item
> + * in question does not have a "match" pointer, it is considered a wildcard
> + * and will always generate a signal.  There can be an arbitrary number
> + * of distinct matches or wildcards per group.
> + */
> +static void
> +iosignalfd_group_write(struct kvm_io_device *this, gpa_t addr, int len,
> +		       const void *val)
> +{
> +	struct _iosignalfd_group *group = to_group(this);
> +	struct _iosignalfd_item *item;
> +
> +	/* FIXME: We should probably use SRCU */

Some test questions to help with this decision:

1.	Ignoring lock-contention issues, would you be willing to hold a
	spinlock_t across this critical section?  If the answer is
	"yes", then you should also be willing to use rcu_read_lock().
	If the answer is "no", proceed to the next question.

2.	Is this critical section's execution time almost always less than
	a hundred microseconds or so?  If the answer is "yes", then you
	should also be willing to use rcu_read_lock().	If the answer is
	"no", proceed to the next question.

3.	Is this critical section executed infrequently (say, less than once
	per few jiffies, on average), and is the critical section's
	execution time less than a jiffy or so?  If the answer is "yes",
	then you should also be willing to use rcu_read_lock().  If the
	answer is "no", proceed to the next question.

4.	Did you get here?  If so, you probably don't want to use
	rcu_read_lock().  Consider using something like srcu_read_lock(),
	reference counts, or sleeplocks instead.

Of course, these questions should be regarded as rough rules of thumb.
And for every rough rule of thumb, there will be exceptions.  These
questions should still serve as a good general guide.

> +	rcu_read_lock();
> +
> +	list_for_each_entry_rcu(item, &group->items, list) {
> +		if (iosignalfd_is_match(group, item, val, len))
> +			eventfd_signal(item->file, 1);

From what I can see, this code could invoke eventfd_signal() on an
item that just got list_del_rcu()ed.  Is this really OK?

(No idea whether or not it is OK myself.  Looks like we might change
some state in the underlying file and attempt to wake up some processes.
Might be OK, might not, depending on what you are up to.)

> +	}
> +
> +	rcu_read_unlock();
> +}
> +
> +/*
> + * MMIO/PIO reads against the group indiscriminately return all zeros
> + */
> +static void
> +iosignalfd_group_read(struct kvm_io_device *this, gpa_t addr, int len,
> +		      void *val)
> +{
> +	memset(val, 0, len);
> +}
> +
> +static void
> +_iosignalfd_group_destructor(struct _iosignalfd_group *group)
> +{
> +	list_del(&group->list);
> +	kfree(group);
> +}
> +
> +static void
> +iosignalfd_group_destructor(struct kvm_io_device *this)
> +{
> +	struct _iosignalfd_group *group = to_group(this);
> +
> +	_iosignalfd_group_destructor(group);
> +}
> +
> +/* assumes kvm->lock held */
> +static struct _iosignalfd_group *
> +iosignalfd_group_find(struct kvm *kvm, u64 addr)
> +{
> +	struct _iosignalfd_group *group;
> +
> +	list_for_each_entry(group, &kvm->iosignalfds, list) {
> +		if (group->addr == addr)
> +			return group;
> +	}
> +
> +	return NULL;
> +}
> +
> +static const struct kvm_io_device_ops iosignalfd_ops = {
> +	.read       = iosignalfd_group_read,
> +	.write      = iosignalfd_group_write,
> +	.in_range   = iosignalfd_group_in_range,
> +	.destructor = iosignalfd_group_destructor,
> +};
> +
> +/*
> + * Atomically find an existing group, or create a new one if it doesn't already
> + * exist.
> + *
> + * assumes kvm->lock is held
> + */
> +static struct _iosignalfd_group *
> +iosignalfd_group_get(struct kvm *kvm, struct kvm_io_bus *bus,
> +		      u64 addr, size_t len)
> +{
> +	struct _iosignalfd_group *group;
> +
> +	group = iosignalfd_group_find(kvm, addr);
> +	if (!group) {
> +		int ret;
> +
> +		group = kzalloc(sizeof(*group), GFP_KERNEL);
> +		if (!group)
> +			return ERR_PTR(-ENOMEM);
> +
> +		INIT_LIST_HEAD(&group->list);
> +		INIT_LIST_HEAD(&group->items);
> +		group->addr   = addr;
> +		group->length = len;
> +		kvm_iodevice_init(&group->dev, &iosignalfd_ops);
> +
> +		ret = kvm_io_bus_register_dev(bus, &group->dev);
> +		if (ret < 0) {
> +			kfree(group);
> +			return ERR_PTR(ret);
> +		}
> +
> +		list_add_tail(&group->list, &kvm->iosignalfds);
> +
> +	} else if (group->length != len)
> +		/*
> +		 * Existing groups must have the same addr/len tuple or we
> +		 * reject the request
> +		 */
> +		return ERR_PTR(-EINVAL);
> +
> +	return group;
> +}
> +
> +static int
> +kvm_assign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> +	struct _iosignalfd_group *group = NULL;
> +	struct _iosignalfd_item  *item = NULL;
> +	struct file              *file;
> +	int                       ret;
> +
> +	file = eventfd_fget(args->fd);
> +	if (IS_ERR(file)) {
> +		ret = PTR_ERR(file);
> +		return ret;
> +	}
> +
> +	item = kzalloc(sizeof(*item), GFP_KERNEL);
> +	if (!item) {
> +		ret = -ENOMEM;
> +		goto fail;
> +	}
> +
> +	INIT_LIST_HEAD(&item->list);
> +	item->file = file;
> +
> +	/*
> +	 * Registering a "trigger" address is optional.  If this flag
> +	 * is not specified, we leave the item->match pointer NULL, which
> +	 * indicates a wildcard
> +	 */
> +	if (args->flags & KVM_IOSIGNALFD_FLAG_TRIGGER) {
> +		if (args->len > sizeof(u64)) {
> +			ret = -EINVAL;
> +			goto fail;
> +		}
> +
> +		item->match = kzalloc(args->len, GFP_KERNEL);
> +		if (!item->match) {
> +			ret = -ENOMEM;
> +			goto fail;
> +		}
> +
> +		if (copy_from_user(item->match,
> +				   (void *)args->trigger,
> +				   args->len)) {
> +			ret = -EFAULT;
> +			goto fail;
> +		}
> +	}
> +
> +	mutex_lock(&kvm->lock);
> +
> +	group = iosignalfd_group_get(kvm, bus, args->addr, args->len);
> +	if (IS_ERR(group)) {
> +		ret = PTR_ERR(group);
> +		mutex_unlock(&kvm->lock);
> +		goto fail;
> +	}
> +
> +	/*
> +	 * Note: We are committed to succeed at this point since we have
> +	 * (potentially) published a new group-device.  Any failure handling
> +	 * added in the future after this point will need to be handled
> +	 * carefully.
> +	 */
> +
> +	list_add_tail_rcu(&item->list, &group->items);

Good, protected by kvm->lock.

> +	mutex_unlock(&kvm->lock);
> +
> +	return 0;
> +
> +fail:
> +	if (item) {
> +		/*
> +		 * it would have never made it to the group->items list
> +		 * in the failure path, so we don't need to worry about removing
> +		 * it
> +		 */
> +		kfree(item->match);
> +		kfree(item);
> +	}
> +
> +	if (file)
> +		fput(file);
> +
> +	return ret;
> +}
> +
> +static void
> +iosignalfd_item_free(struct rcu_head *rhp)
> +{
> +	struct _iosignalfd_item *item = to_item(rhp);
> +
> +	fput(item->file);
> +	kfree(item->match);
> +	kfree(item);
> +}
> +
> +static int
> +kvm_deassign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> +	struct _iosignalfd_group *group;
> +	struct _iosignalfd_item  *item, *tmp;
> +	struct file              *file;
> +	int                       ret = 0;
> +
> +	mutex_lock(&kvm->lock);
> +
> +	group = iosignalfd_group_find(kvm, args->addr);
> +	if (!group) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	file = eventfd_fget(args->fd);
> +	if (IS_ERR(file)) {
> +		ret = PTR_ERR(file);
> +		goto out;
> +	}
> +
> +	list_for_each_entry_safe(item, tmp, &group->items, list) {
> +		/*
> +		 * any items registered at this group-address with the matching
> +		 * eventfd will be removed
> +		 */
> +		if (item->file != file)
> +			continue;
> +
> +		list_del_rcu(&item->list);

Good, also protected by kvm->lock.

> +		call_rcu(&item->rcu, iosignalfd_item_free);

And also good here, deferring free of the RCU-protected structures.

> +	}
> +
> +	if (list_empty(&group->items)) {
> +		/*
> +		 * We should unpublish our group device if we just removed
> +		 * the last of its contained items
> +		 */
> +		kvm_io_bus_unregister_dev(bus, &group->dev);
> +		_iosignalfd_group_destructor(group);
> +	}
> +
> +	fput(file);
> +
> +out:
> +	mutex_unlock(&kvm->lock);
> +
> +	return ret;
> +}
> +
> +int
> +kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> +{
> +	if (args->flags & KVM_IOSIGNALFD_FLAG_DEASSIGN)
> +		return kvm_deassign_iosignalfd(kvm, args);
> +
> +	return kvm_assign_iosignalfd(kvm, args);
> +}
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 179c650..91d0fe2 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -977,7 +977,7 @@ static struct kvm *kvm_create_vm(void)
>  	atomic_inc(&kvm->mm->mm_count);
>  	spin_lock_init(&kvm->mmu_lock);
>  	kvm_io_bus_init(&kvm->pio_bus);
> -	kvm_irqfd_init(kvm);
> +	kvm_eventfd_init(kvm);
>  	mutex_init(&kvm->lock);
>  	kvm_io_bus_init(&kvm->mmio_bus);
>  	init_rwsem(&kvm->slots_lock);
> @@ -2215,6 +2215,15 @@ static long kvm_vm_ioctl(struct file *filp,
>  		r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags);
>  		break;
>  	}
> +	case KVM_IOSIGNALFD: {
> +		struct kvm_iosignalfd entry;
> +
> +		r = -EFAULT;
> +		if (copy_from_user(&entry, argp, sizeof entry))
> +			goto out;
> +		r = kvm_iosignalfd(kvm, &entry);
> +		break;
> +	}
>  	default:
>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
>  	}
> 


* Re: [KVM PATCH v5 2/2] kvm: add iosignalfd support
  2009-06-04  1:34   ` Paul E. McKenney
@ 2009-06-04  2:55     ` Gregory Haskins
  2009-06-04 16:51       ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Gregory Haskins @ 2009-06-04  2:55 UTC (permalink / raw)
  To: paulmck; +Cc: kvm, linux-kernel, avi, davidel, mtosatti, markmc


Paul E. McKenney wrote:
> On Wed, Jun 03, 2009 at 04:17:49PM -0400, Gregory Haskins wrote:
>   
>> iosignalfd is a mechanism to register PIO/MMIO regions to trigger an eventfd
>> signal when written to by a guest.  Host userspace can register any arbitrary
>> IO address with a corresponding eventfd and then pass the eventfd to a
>> specific end-point of interest for handling.
>>
>> Normal IO requires a blocking round-trip since the operation may cause
>> side-effects in the emulated model or may return data to the caller.
>> Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM
>> "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's
>> device model synchronously before returning control back to the vcpu.
>>
>> However, there is a subclass of IO which acts purely as a trigger for
>> other IO (such as to kick off an out-of-band DMA request, etc).  For these
>> patterns, the synchronous call is particularly expensive since we really
>> only want to get our notification transmitted asynchronously and
>> return as quickly as possible.  All the synchronous infrastructure to ensure
>> proper data-dependencies are met in the normal IO case is just unnecessary
>> overhead for signalling.  This adds additional computational load on the
>> system, as well as latency to the signalling path.
>>
>> Therefore, we provide a mechanism for registration of an in-kernel trigger
>> point that allows the VCPU to only require a very brief, lightweight
>> exit just long enough to signal an eventfd.  This also means that any
>> clients compatible with the eventfd interface (which includes userspace
>> and kernelspace equally well) can now register to be notified. The end
>> result should be a more flexible and higher performance notification API
>> for the backend KVM hypervisor and peripheral components.
>>
>> To test this theory, we built a test-harness called "doorbell".  This
>> module has a function called "doorbell_ring()" which simply increments a
>> counter for each time the doorbell is signaled.  It supports signalling
>> from either an eventfd, or an ioctl().
>>
>> We then wired up two paths to the doorbell: One via QEMU via a registered
>> io region and through the doorbell ioctl().  The other is direct via
>> iosignalfd.
>>
>> You can download this test harness here:
>>
>> ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
>>
>> The measured results are as follows:
>>
>> qemu-mmio:       110000 iops, 9.09us rtt
>> iosignalfd-mmio: 200100 iops, 5.00us rtt
>> iosignalfd-pio:  367300 iops, 2.72us rtt
>>
>> I didn't measure qemu-pio, because I would have to figure out how to register
>> a PIO region with qemu's device model, and I got lazy.  However, extrapolating
>> from the NULLIO-run deltas of +2.56us for MMIO and -350ns for HC, we get:
>>
>> qemu-pio:      153139 iops, 6.53us rtt
>> iosignalfd-hc: 412585 iops, 2.37us rtt
>>
>> These are just for fun for now, until I can gather more data.
>>
>> Here is a graph for your convenience:
>>
>> http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
>>
>> The conclusion to draw is that we save about 4us by skipping the userspace
>> hop.
>>
>> --------------------
>>     
>
> One set of decision criteria for RCU vs. SRCU below, and one question
> about the RCU reader running concurrently with a deletion.  Looks pretty
> good, in general!
>
> 							Thanx, Paul
>
>   
>> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
>> ---
>>
>>  arch/x86/kvm/x86.c       |    1 
>>  include/linux/kvm.h      |   15 ++
>>  include/linux/kvm_host.h |   10 +
>>  virt/kvm/eventfd.c       |  356 ++++++++++++++++++++++++++++++++++++++++++++++
>>  virt/kvm/kvm_main.c      |   11 +
>>  5 files changed, 389 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index c1ed485..c96c0e3 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -1097,6 +1097,7 @@ int kvm_dev_ioctl_check_extension(long ext)
>>  	case KVM_CAP_IRQ_INJECT_STATUS:
>>  	case KVM_CAP_ASSIGN_DEV_IRQ:
>>  	case KVM_CAP_IRQFD:
>> +	case KVM_CAP_IOSIGNALFD:
>>  	case KVM_CAP_PIT2:
>>  		r = 1;
>>  		break;
>> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
>> index 632a856..53b720d 100644
>> --- a/include/linux/kvm.h
>> +++ b/include/linux/kvm.h
>> @@ -300,6 +300,19 @@ struct kvm_guest_debug {
>>  	struct kvm_guest_debug_arch arch;
>>  };
>>
>> +#define KVM_IOSIGNALFD_FLAG_TRIGGER   (1 << 0)
>> +#define KVM_IOSIGNALFD_FLAG_PIO       (1 << 1)
>> +#define KVM_IOSIGNALFD_FLAG_DEASSIGN  (1 << 2)
>> +
>> +struct kvm_iosignalfd {
>> +	__u64 trigger;
>> +	__u64 addr;
>> +	__u32 len;
>> +	__u32 fd;
>> +	__u32 flags;
>> +	__u8  pad[36];
>> +};
>> +
>>  #define KVM_TRC_SHIFT           16
>>  /*
>>   * kvm trace categories
>> @@ -430,6 +443,7 @@ struct kvm_trace_rec {
>>  #ifdef __KVM_HAVE_PIT
>>  #define KVM_CAP_PIT2 33
>>  #endif
>> +#define KVM_CAP_IOSIGNALFD 34
>>
>>  #ifdef KVM_CAP_IRQ_ROUTING
>>
>> @@ -537,6 +551,7 @@ struct kvm_irqfd {
>>  #define KVM_DEASSIGN_DEV_IRQ       _IOW(KVMIO, 0x75, struct kvm_assigned_irq)
>>  #define KVM_IRQFD                  _IOW(KVMIO, 0x76, struct kvm_irqfd)
>>  #define KVM_CREATE_PIT2		   _IOW(KVMIO, 0x77, struct kvm_pit_config)
>> +#define KVM_IOSIGNALFD             _IOW(KVMIO, 0x78, struct kvm_iosignalfd)
>>
>>  /*
>>   * ioctls for vcpu fds
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index 216fe07..b705960 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -138,6 +138,7 @@ struct kvm {
>>  	struct kvm_io_bus pio_bus;
>>  #ifdef CONFIG_HAVE_KVM_EVENTFD
>>  	struct list_head irqfds;
>> +	struct list_head iosignalfds;
>>  #endif
>>  	struct kvm_vm_stat stat;
>>  	struct kvm_arch arch;
>> @@ -533,19 +534,24 @@ static inline void kvm_free_irq_routing(struct kvm *kvm) {}
>>
>>  #ifdef CONFIG_HAVE_KVM_EVENTFD
>>
>> -void kvm_irqfd_init(struct kvm *kvm);
>> +void kvm_eventfd_init(struct kvm *kvm);
>>  int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
>>  void kvm_irqfd_release(struct kvm *kvm);
>> +int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args);
>>
>>  #else
>>
>> -static inline void kvm_irqfd_init(struct kvm *kvm) {}
>> +static inline void kvm_eventfd_init(struct kvm *kvm) {}
>>  static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
>>  {
>>  	return -EINVAL;
>>  }
>>
>>  static inline void kvm_irqfd_release(struct kvm *kvm) {}
>> +static inline int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
>> +{
>> +	return -EINVAL;
>> +}
>>
>>  #endif /* CONFIG_HAVE_KVM_EVENTFD */
>>
>> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
>> index f3f2ea1..77befb3 100644
>> --- a/virt/kvm/eventfd.c
>> +++ b/virt/kvm/eventfd.c
>> @@ -21,6 +21,7 @@
>>   */
>>
>>  #include <linux/kvm_host.h>
>> +#include <linux/kvm.h>
>>  #include <linux/workqueue.h>
>>  #include <linux/syscalls.h>
>>  #include <linux/wait.h>
>> @@ -29,6 +30,8 @@
>>  #include <linux/list.h>
>>  #include <linux/eventfd.h>
>>
>> +#include "iodev.h"
>> +
>>  /*
>>   * --------------------------------------------------------------------
>>   * irqfd: Allows an fd to be used to inject an interrupt to the guest
>> @@ -208,9 +211,10 @@ kvm_deassign_irqfd(struct kvm *kvm, int fd, int gsi)
>>  }
>>
>>  void
>> -kvm_irqfd_init(struct kvm *kvm)
>> +kvm_eventfd_init(struct kvm *kvm)
>>  {
>>  	INIT_LIST_HEAD(&kvm->irqfds);
>> +	INIT_LIST_HEAD(&kvm->iosignalfds);
>>  }
>>
>>  int
>> @@ -233,3 +237,353 @@ kvm_irqfd_release(struct kvm *kvm)
>>  		irqfd_release(irqfd);
>>  	}
>>  }
>> +
>> +/*
>> + * --------------------------------------------------------------------
>> + * iosignalfd: translate a PIO/MMIO memory write to an eventfd signal.
>> + *
>> + * userspace can register a PIO/MMIO address with an eventfd for receiving
>> + * notification when the memory has been touched.
>> + * --------------------------------------------------------------------
>> + */
>> +
>> +/*
>> + * Design note: We create one PIO/MMIO device (iosignalfd_group) which
>> + * aggregates one or more iosignalfd_items.  Each item points to exactly one
>> + * eventfd, and can be registered to trigger on any write to the group
>> + * (wildcard), or to a write of a specific value.  If more than one item is to
>> + * be supported, the addr/len ranges must all be identical in the group.  If a
>> + * trigger value is to be supported on a particular item, the group range must
>> + * be exactly the width of the trigger.
>> + */
>> +
>> +struct _iosignalfd_item {
>> +	struct list_head     list;
>> +	struct file         *file;
>> +	unsigned char       *match;
>> +	struct rcu_head      rcu;
>> +};
>> +
>> +struct _iosignalfd_group {
>> +	struct list_head     list;
>> +	u64                  addr;
>> +	size_t               length;
>> +	struct list_head     items;
>> +	struct kvm_io_device dev;
>> +};
>> +
>> +static inline struct _iosignalfd_group *to_group(struct kvm_io_device *dev)
>> +{
>> +	return container_of(dev, struct _iosignalfd_group, dev);
>> +}
>> +
>> +static inline struct _iosignalfd_item *to_item(struct rcu_head *rhp)
>> +{
>> +	return container_of(rhp, struct _iosignalfd_item, rcu);
>> +}
>>     
>
> Given that you only use to_item() in one place, not clear to me that it
> is a win compared to open-coding it.  In contrast, to_group() is used
> in several places, so it makes sense to define it.
>   

Yeah, I think open-coding it resulted in an 80+ character line that I had
to break, and I hate that.  So I went for consistency and tidiness
instead, even if it's a tad overkill. ;)
>   
>> +static int
>> +iosignalfd_group_in_range(struct kvm_io_device *this, gpa_t addr, int len,
>> +			  int is_write)
>> +{
>> +	struct _iosignalfd_group *p = to_group(this);
>> +
>> +	return ((addr >= p->addr && (addr < p->addr + p->length)));
>> +}
>> +
>> +static int
>> +iosignalfd_is_match(struct _iosignalfd_group *group,
>> +		    struct _iosignalfd_item *item,
>> +		    const void *val,
>> +		    int len)
>> +{
>> +	if (!item->match)
>> +		/* wildcard is a hit */
>> +		return true;
>> +
>> +	if (len != group->length)
>> +		/* mis-matched length is a miss */
>> +		return false;
>> +
>> +	/* otherwise, we have to actually compare the data */
>> +	return !memcmp(item->match, val, len) ? true : false;
>> +}
>> +
>> +/*
>> + * MMIO/PIO writes trigger an event (if the data matches).
>> + *
>> + * This is invoked by the io_bus subsystem in response to an address match
>> + * against the group.  We must then walk the list of individual items to check
>> + * for a match and, if applicable, to send the appropriate signal. If the item
>> + * in question does not have a "match" pointer, it is considered a wildcard
>> + * and will always generate a signal.  There can be an arbitrary number
>> + * of distinct matches or wildcards per group.
>> + */
>> +static void
>> +iosignalfd_group_write(struct kvm_io_device *this, gpa_t addr, int len,
>> +		       const void *val)
>> +{
>> +	struct _iosignalfd_group *group = to_group(this);
>> +	struct _iosignalfd_item *item;
>> +
>> +	/* FIXME: We should probably use SRCU */
>>     
>
> Some test questions to help with this decision:
>
> 1.	Ignoring lock-contention issues, would you be willing to hold a
> 	spinlock_t across this critical section?  If the answer is
> 	"yes", then you should also be willing to use rcu_read_lock().
> 	If the answer is "no", proceed to the next question.
>
> 2.	Is this critical section's execution time almost always less than
> 	a hundred microseconds or so?  If the answer is "yes", then you
> 	should also be willing to use rcu_read_lock().	If the answer is
> 	"no", proceed to the next question.
>
> 3.	Is this critical section executed infrequently (say, less than once
> 	per few jiffies, on average), and is the critical section's
> 	execution time less than a jiffy or so?  If the answer is "yes",
> 	then you should also be willing to use rcu_read_lock().  If the
> 	answer is "no", proceed to the next question.
>
> 4.	Did you get here?  If so, you probably don't want to use
> 	rcu_read_lock().  Consider using something like srcu_read_lock(),
> 	reference counts, or sleeplocks instead.
>
> Of course, these questions should be regarded as rough rules of thumb.
> And for every rough rule of thumb, there will be exceptions.  These
> questions should still serve as a good general guide.
>   

Actually as it stands right now the eventfd_signal() takes a
spin_lock_irq() for its duration, so a basic rcu_read_lock() may be in
the noise (at least for now).  I'd like to work with Davide at some
point in the future to change eventfd internals to allow signalling
callbacks to be optionally preemptible (at least when they are called
from preemptible code to begin with), but we can address this code path
at that time.

The biggest caveat here is that while today the eventfd_signal()
unavoidably introduces its own non-preemptible section, iosignalfd could
conceivably be composed of an arbitrarily long list of
iosignalfd_items.  We would therefore be extending this CS out a bit
further than the single eventfd_signal overhead alone would imply.  That
might be justification in and of itself to use something more friendly, like
srcu.

Out of curiosity, how much more expensive is an srcu read-side CS
compared to plain rcu?  How about compared to an atomic (like
atomic_inc)?  Don't go off running some microbenchmarks just for me.  I
am just wondering if you have measured it in the past and know the
relative numbers off the top of your head.
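
For concreteness, the read side in question differs between the two choices
only at the lock/unlock calls.  A kernel-style sketch (not compilable as-is;
the srcu_struct field is hypothetical, not part of the posted patch):

```c
/* RCU: read side must not sleep; near-zero overhead */
rcu_read_lock();
list_for_each_entry_rcu(item, &group->items, list)
	if (iosignalfd_is_match(group, item, val, len))
		eventfd_signal(item->file, 1);
rcu_read_unlock();

/* SRCU: read side may sleep, at slightly higher cost */
idx = srcu_read_lock(&kvm->iosignalfd_srcu);	/* hypothetical field */
list_for_each_entry_rcu(item, &group->items, list)
	if (iosignalfd_is_match(group, item, val, len))
		eventfd_signal(item->file, 1);
srcu_read_unlock(&kvm->iosignalfd_srcu, idx);
```

The writer side would change too: with SRCU there is no call_srcu(), so the
deassign path would need a blocking synchronize_srcu() instead of call_rcu().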

>   
>> +	rcu_read_lock();
>> +
>> +	list_for_each_entry_rcu(item, &group->items, list) {
>> +		if (iosignalfd_is_match(group, item, val, len))
>> +			eventfd_signal(item->file, 1);
>>     
>
> From what I can see, this code could invoke eventfd_signal() on an
> item that just got list_del_rcu()ed.  Is this really OK?
>   

If I understand the question, this is similar to the situation with the
irqfd review we just did:  that is, there is a window of time between
unlinking the object and its destruction, and the question is whether
it's still OK to use the object during this window?

If that is true, the answer is "yes".  As in the irqfd case, I really
only care about making sure that we cannot invoke the free()/fput()
function while this critical section referenced above is running.  We
hold a valid reference to item->file until the point that the
iosignal_item_free() method is called (via call_rcu), so the
eventfd_signal() should be ok as long as it completes before then.
> (No idea whether or not it is OK myself.  Looks like we might change
> some state in the underlying file and attempt to wake up some processes.
> Might be OK, might not, depending on what you are up to.)
>
>   
>> +	}
>> +
>> +	rcu_read_unlock();
>> +}
>> +
>> +/*
>> + * MMIO/PIO reads against the group indiscriminately return all zeros
>> + */
>> +static void
>> +iosignalfd_group_read(struct kvm_io_device *this, gpa_t addr, int len,
>> +		      void *val)
>> +{
>> +	memset(val, 0, len);
>> +}
>> +
>> +static void
>> +_iosignalfd_group_destructor(struct _iosignalfd_group *group)
>> +{
>> +	list_del(&group->list);
>> +	kfree(group);
>> +}
>> +
>> +static void
>> +iosignalfd_group_destructor(struct kvm_io_device *this)
>> +{
>> +	struct _iosignalfd_group *group = to_group(this);
>> +
>> +	_iosignalfd_group_destructor(group);
>> +}
>> +
>> +/* assumes kvm->lock held */
>> +static struct _iosignalfd_group *
>> +iosignalfd_group_find(struct kvm *kvm, u64 addr)
>> +{
>> +	struct _iosignalfd_group *group;
>> +
>> +	list_for_each_entry(group, &kvm->iosignalfds, list) {
>> +		if (group->addr == addr)
>> +			return group;
>> +	}
>> +
>> +	return NULL;
>> +}
>> +
>> +static const struct kvm_io_device_ops iosignalfd_ops = {
>> +	.read       = iosignalfd_group_read,
>> +	.write      = iosignalfd_group_write,
>> +	.in_range   = iosignalfd_group_in_range,
>> +	.destructor = iosignalfd_group_destructor,
>> +};
>> +
>> +/*
>> + * Atomically find an existing group, or create a new one if it doesn't already
>> + * exist.
>> + *
>> + * assumes kvm->lock is held
>> + */
>> +static struct _iosignalfd_group *
>> +iosignalfd_group_get(struct kvm *kvm, struct kvm_io_bus *bus,
>> +		      u64 addr, size_t len)
>> +{
>> +	struct _iosignalfd_group *group;
>> +
>> +	group = iosignalfd_group_find(kvm, addr);
>> +	if (!group) {
>> +		int ret;
>> +
>> +		group = kzalloc(sizeof(*group), GFP_KERNEL);
>> +		if (!group)
>> +			return ERR_PTR(-ENOMEM);
>> +
>> +		INIT_LIST_HEAD(&group->list);
>> +		INIT_LIST_HEAD(&group->items);
>> +		group->addr   = addr;
>> +		group->length = len;
>> +		kvm_iodevice_init(&group->dev, &iosignalfd_ops);
>> +
>> +		ret = kvm_io_bus_register_dev(bus, &group->dev);
>> +		if (ret < 0) {
>> +			kfree(group);
>> +			return ERR_PTR(ret);
>> +		}
>> +
>> +		list_add_tail(&group->list, &kvm->iosignalfds);
>> +
>> +	} else if (group->length != len)
>> +		/*
>> +		 * Existing groups must have the same addr/len tuple or we
>> +		 * reject the request
>> +		 */
>> +		return ERR_PTR(-EINVAL);
>> +
>> +	return group;
>> +}
>> +
>> +static int
>> +kvm_assign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
>> +{
>> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
>> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
>> +	struct _iosignalfd_group *group = NULL;
>> +	struct _iosignalfd_item  *item = NULL;
>> +	struct file              *file;
>> +	int                       ret;
>> +
>> +	file = eventfd_fget(args->fd);
>> +	if (IS_ERR(file)) {
>> +		ret = PTR_ERR(file);
>> +		return ret;
>> +	}
>> +
>> +	item = kzalloc(sizeof(*item), GFP_KERNEL);
>> +	if (!item) {
>> +		ret = -ENOMEM;
>> +		goto fail;
>> +	}
>> +
>> +	INIT_LIST_HEAD(&item->list);
>> +	item->file = file;
>> +
>> +	/*
>> +	 * Registering a "trigger" address is optional.  If this flag
>> +	 * is not specified, we leave the item->match pointer NULL, which
>> +	 * indicates a wildcard
>> +	 */
>> +	if (args->flags & KVM_IOSIGNALFD_FLAG_TRIGGER) {
>> +		if (args->len > sizeof(u64)) {
>> +			ret = -EINVAL;
>> +			goto fail;
>> +		}
>> +
>> +		item->match = kzalloc(args->len, GFP_KERNEL);
>> +		if (!item->match) {
>> +			ret = -ENOMEM;
>> +			goto fail;
>> +		}
>> +
>> +		if (copy_from_user(item->match,
>> +				   (void *)args->trigger,
>> +				   args->len)) {
>> +			ret = -EFAULT;
>> +			goto fail;
>> +		}
>> +	}
>> +
>> +	mutex_lock(&kvm->lock);
>> +
>> +	group = iosignalfd_group_get(kvm, bus, args->addr, args->len);
>> +	if (IS_ERR(group)) {
>> +		ret = PTR_ERR(group);
>> +		mutex_unlock(&kvm->lock);
>> +		goto fail;
>> +	}
>> +
>> +	/*
>> +	 * Note: We are committed to succeed at this point since we have
>> +	 * (potentially) published a new group-device.  Any failure handling
>> +	 * added in the future after this point will need to be handled
>> +	 * carefully.
>> +	 */
>> +
>> +	list_add_tail_rcu(&item->list, &group->items);
>>     
>
> Good, protected by kvm->lock.
>
>   
>> +	mutex_unlock(&kvm->lock);
>> +
>> +	return 0;
>> +
>> +fail:
>> +	if (item) {
>> +		/*
>> +		 * it would have never made it to the group->items list
>> +		 * in the failure path, so we don't need to worry about removing
>> +		 * it
>> +		 */
>> +		kfree(item->match);
>> +		kfree(item);
>> +	}
>> +
>> +	if (file)
>> +		fput(file);
>> +
>> +	return ret;
>> +}
>> +
>> +static void
>> +iosignalfd_item_free(struct rcu_head *rhp)
>> +{
>> +	struct _iosignalfd_item *item = to_item(rhp);
>> +
>> +	fput(item->file);
>> +	kfree(item->match);
>> +	kfree(item);
>> +}
>> +
>> +static int
>> +kvm_deassign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
>> +{
>> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
>> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
>> +	struct _iosignalfd_group *group;
>> +	struct _iosignalfd_item  *item, *tmp;
>> +	struct file              *file;
>> +	int                       ret = 0;
>> +
>> +	mutex_lock(&kvm->lock);
>> +
>> +	group = iosignalfd_group_find(kvm, args->addr);
>> +	if (!group) {
>> +		ret = -EINVAL;
>> +		goto out;
>> +	}
>> +
>> +	file = eventfd_fget(args->fd);
>> +	if (IS_ERR(file)) {
>> +		ret = PTR_ERR(file);
>> +		goto out;
>> +	}
>> +
>> +	list_for_each_entry_safe(item, tmp, &group->items, list) {
>>     

Just to be sure, it's OK to use a non-RCU list iterator in conjunction
with the list_add/del_rcu mutating primitives as long as I provide my
own mutual exclusion (e.g. kvm->lock), right?

>> +		/*
>> +		 * any items registered at this group-address with the matching
>> +		 * eventfd will be removed
>> +		 */
>> +		if (item->file != file)
>> +			continue;
>> +
>> +		list_del_rcu(&item->list);
>>     
>
> Good, also protected by kvm->lock.
>
>   
>> +		call_rcu(&item->rcu, iosignalfd_item_free);
>>     
>
> And also good here, deferring free of the RCU-protected structures.
>   

Out of curiosity: Would call_rcu() work if we were using srcu instead? 
Or is my only choice a blocking synchronize_srcu() here?  I couldn't
seem to find a call_srcu().

>   
>> +	}
>> +
>> +	if (list_empty(&group->items)) {
>> +		/*
>> +		 * We should unpublish our group device if we just removed
>> +		 * the last of its contained items
>> +		 */
>> +		kvm_io_bus_unregister_dev(bus, &group->dev);
>> +		_iosignalfd_group_destructor(group);
>>     

I think I screwed this part up.  I should technically defer the
destruction of the group object until after the next grace period as
well, right?  (We are using its group->items list in the read-side CS.)
Otherwise it's a race, I think.
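
A sketch of that deferred-free fix, mirroring how the items are already
handled (hypothetical, not part of the posted patch):

```c
struct _iosignalfd_group {
	/* ... existing fields ... */
	struct rcu_head rcu;
};

static void
iosignalfd_group_free(struct rcu_head *rhp)
{
	kfree(container_of(rhp, struct _iosignalfd_group, rcu));
}

/* in the deassign path, once the group has no more items: */
kvm_io_bus_unregister_dev(bus, &group->dev);
list_del(&group->list);
call_rcu(&group->rcu, iosignalfd_group_free);
```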

>> +	}
>> +
>> +	fput(file);
>> +
>> +out:
>> +	mutex_unlock(&kvm->lock);
>> +
>> +	return ret;
>> +}
>> +
>> +int
>> +kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
>> +{
>> +	if (args->flags & KVM_IOSIGNALFD_FLAG_DEASSIGN)
>> +		return kvm_deassign_iosignalfd(kvm, args);
>> +
>> +	return kvm_assign_iosignalfd(kvm, args);
>> +}
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 179c650..91d0fe2 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -977,7 +977,7 @@ static struct kvm *kvm_create_vm(void)
>>  	atomic_inc(&kvm->mm->mm_count);
>>  	spin_lock_init(&kvm->mmu_lock);
>>  	kvm_io_bus_init(&kvm->pio_bus);
>> -	kvm_irqfd_init(kvm);
>> +	kvm_eventfd_init(kvm);
>>  	mutex_init(&kvm->lock);
>>  	kvm_io_bus_init(&kvm->mmio_bus);
>>  	init_rwsem(&kvm->slots_lock);
>> @@ -2215,6 +2215,15 @@ static long kvm_vm_ioctl(struct file *filp,
>>  		r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags);
>>  		break;
>>  	}
>> +	case KVM_IOSIGNALFD: {
>> +		struct kvm_iosignalfd entry;
>> +
>> +		r = -EFAULT;
>> +		if (copy_from_user(&entry, argp, sizeof entry))
>> +			goto out;
>> +		r = kvm_iosignalfd(kvm, &entry);
>> +		break;
>> +	}
>>  	default:
>>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
>>  	}
>>
>>     

Thanks again, Paul!
-Greg


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 266 bytes --]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [KVM PATCH v5 0/2] iosignalfd
  2009-06-03 21:37 ` [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
@ 2009-06-04 12:25   ` Avi Kivity
  0 siblings, 0 replies; 10+ messages in thread
From: Avi Kivity @ 2009-06-04 12:25 UTC (permalink / raw)
  To: Gregory Haskins; +Cc: mtosatti, kvm, linux-kernel, davidel, paulmck, markmc

Gregory Haskins wrote:
> Marcello, Avi, and myself have previously agreed that Marcello's
> mmio-locking cleanup should go in first.   When that happens, I will
> need to rebase this series because it changes how you interface to the
> io_bus code.  I should have mentioned that here, but forgot.  (Speaking
> of, is there an ETA when that code will be merged Avi?)
>   

I had issues with the unbalanced locking the patchset introduced in
coalesced_mmio; once those are resolved, the patchset will be merged.

-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [KVM PATCH v5 2/2] kvm: add iosignalfd support
  2009-06-04  2:55     ` Gregory Haskins
@ 2009-06-04 16:51       ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2009-06-04 16:51 UTC (permalink / raw)
  To: Gregory Haskins; +Cc: kvm, linux-kernel, avi, davidel, mtosatti, markmc

[-- Attachment #1: Type: text/plain, Size: 26990 bytes --]

On Wed, Jun 03, 2009 at 10:55:17PM -0400, Gregory Haskins wrote:
> Paul E. McKenney wrote:
> > On Wed, Jun 03, 2009 at 04:17:49PM -0400, Gregory Haskins wrote:
> >   
> >> iosignalfd is a mechanism to register PIO/MMIO regions to trigger an eventfd
> >> signal when written to by a guest.  Host userspace can register any arbitrary
> >> IO address with a corresponding eventfd and then pass the eventfd to a
> >> specific end-point of interest for handling.
> >>
> >> Normal IO requires a blocking round-trip since the operation may cause
> >> side-effects in the emulated model or may return data to the caller.
> >> Therefore, an IO in KVM traps from the guest to the host, causes a VMX/SVM
> >> "heavy-weight" exit back to userspace, and is ultimately serviced by qemu's
> >> device model synchronously before returning control back to the vcpu.
> >>
> >> However, there is a subclass of IO which acts purely as a trigger for
> >> other IO (such as to kick off an out-of-band DMA request, etc).  For these
> >> patterns, the synchronous call is particularly expensive since we really
> >> only want to simply get our notification transmitted asynchronously and
> >> return as quickly as possible.  All the synchronous infrastructure to ensure
> >> proper data-dependencies are met in the normal IO case is just unnecessary
> >> overhead for signalling.  This adds additional computational load on the
> >> system, as well as latency to the signalling path.
> >>
> >> Therefore, we provide a mechanism for registration of an in-kernel trigger
> >> point that allows the VCPU to only require a very brief, lightweight
> >> exit just long enough to signal an eventfd.  This also means that any
> >> clients compatible with the eventfd interface (which includes userspace
> >> and kernelspace equally well) can now register to be notified. The end
> >> result should be a more flexible and higher performance notification API
> >> for the backend KVM hypervisor and peripheral components.
> >>
> >> To test this theory, we built a test-harness called "doorbell".  This
> >> module has a function called "doorbell_ring()" which simply increments a
> >> counter for each time the doorbell is signaled.  It supports signalling
> >> from either an eventfd, or an ioctl().
> >>
> >> We then wired up two paths to the doorbell: One via QEMU via a registered
> >> io region and through the doorbell ioctl().  The other is direct via
> >> iosignalfd.
> >>
> >> You can download this test harness here:
> >>
> >> ftp://ftp.novell.com/dev/ghaskins/doorbell.tar.bz2
> >>
> >> The measured results are as follows:
> >>
> >> qemu-mmio:       110000 iops, 9.09us rtt
> >> iosignalfd-mmio: 200100 iops, 5.00us rtt
> >> iosignalfd-pio:  367300 iops, 2.72us rtt
> >>
> >> I didn't measure qemu-pio, because I have to figure out how to register a
> >> PIO region with qemu's device model, and I got lazy.  However, for now we
> >> can extrapolate based on the data from the NULLIO runs of +2.56us for MMIO,
> >> and -350ns for HC, we get:
> >>
> >> qemu-pio:      153139 iops, 6.53us rtt
> >> iosignalfd-hc: 412585 iops, 2.37us rtt
> >>
> >> these are just for fun, for now, until I can gather more data.
> >>
> >> Here is a graph for your convenience:
> >>
> >> http://developer.novell.com/wiki/images/7/76/Iofd-chart.png
> >>
> >> The conclusion to draw is that we save about 4us by skipping the userspace
> >> hop.
> >>
> >> --------------------
> >>     
> >
> > One set of decision criteria for RCU vs. SRCU below, and one question
> > about the RCU reader running concurrently with a deletion.  Looks pretty
> > good, in general!
> >
> > 							Thanx, Paul
> >
> >   
> >> Signed-off-by: Gregory Haskins <ghaskins@novell.com>
> >> ---
> >>
> >>  arch/x86/kvm/x86.c       |    1 
> >>  include/linux/kvm.h      |   15 ++
> >>  include/linux/kvm_host.h |   10 +
> >>  virt/kvm/eventfd.c       |  356 ++++++++++++++++++++++++++++++++++++++++++++++
> >>  virt/kvm/kvm_main.c      |   11 +
> >>  5 files changed, 389 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >> index c1ed485..c96c0e3 100644
> >> --- a/arch/x86/kvm/x86.c
> >> +++ b/arch/x86/kvm/x86.c
> >> @@ -1097,6 +1097,7 @@ int kvm_dev_ioctl_check_extension(long ext)
> >>  	case KVM_CAP_IRQ_INJECT_STATUS:
> >>  	case KVM_CAP_ASSIGN_DEV_IRQ:
> >>  	case KVM_CAP_IRQFD:
> >> +	case KVM_CAP_IOSIGNALFD:
> >>  	case KVM_CAP_PIT2:
> >>  		r = 1;
> >>  		break;
> >> diff --git a/include/linux/kvm.h b/include/linux/kvm.h
> >> index 632a856..53b720d 100644
> >> --- a/include/linux/kvm.h
> >> +++ b/include/linux/kvm.h
> >> @@ -300,6 +300,19 @@ struct kvm_guest_debug {
> >>  	struct kvm_guest_debug_arch arch;
> >>  };
> >>
> >> +#define KVM_IOSIGNALFD_FLAG_TRIGGER   (1 << 0)
> >> +#define KVM_IOSIGNALFD_FLAG_PIO       (1 << 1)
> >> +#define KVM_IOSIGNALFD_FLAG_DEASSIGN  (1 << 2)
> >> +
> >> +struct kvm_iosignalfd {
> >> +	__u64 trigger;
> >> +	__u64 addr;
> >> +	__u32 len;
> >> +	__u32 fd;
> >> +	__u32 flags;
> >> +	__u8  pad[36];
> >> +};
> >> +
> >>  #define KVM_TRC_SHIFT           16
> >>  /*
> >>   * kvm trace categories
> >> @@ -430,6 +443,7 @@ struct kvm_trace_rec {
> >>  #ifdef __KVM_HAVE_PIT
> >>  #define KVM_CAP_PIT2 33
> >>  #endif
> >> +#define KVM_CAP_IOSIGNALFD 34
> >>
> >>  #ifdef KVM_CAP_IRQ_ROUTING
> >>
> >> @@ -537,6 +551,7 @@ struct kvm_irqfd {
> >>  #define KVM_DEASSIGN_DEV_IRQ       _IOW(KVMIO, 0x75, struct kvm_assigned_irq)
> >>  #define KVM_IRQFD                  _IOW(KVMIO, 0x76, struct kvm_irqfd)
> >>  #define KVM_CREATE_PIT2		   _IOW(KVMIO, 0x77, struct kvm_pit_config)
> >> +#define KVM_IOSIGNALFD             _IOW(KVMIO, 0x78, struct kvm_iosignalfd)
> >>
> >>  /*
> >>   * ioctls for vcpu fds
> >> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> >> index 216fe07..b705960 100644
> >> --- a/include/linux/kvm_host.h
> >> +++ b/include/linux/kvm_host.h
> >> @@ -138,6 +138,7 @@ struct kvm {
> >>  	struct kvm_io_bus pio_bus;
> >>  #ifdef CONFIG_HAVE_KVM_EVENTFD
> >>  	struct list_head irqfds;
> >> +	struct list_head iosignalfds;
> >>  #endif
> >>  	struct kvm_vm_stat stat;
> >>  	struct kvm_arch arch;
> >> @@ -533,19 +534,24 @@ static inline void kvm_free_irq_routing(struct kvm *kvm) {}
> >>
> >>  #ifdef CONFIG_HAVE_KVM_EVENTFD
> >>
> >> -void kvm_irqfd_init(struct kvm *kvm);
> >> +void kvm_eventfd_init(struct kvm *kvm);
> >>  int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags);
> >>  void kvm_irqfd_release(struct kvm *kvm);
> >> +int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args);
> >>
> >>  #else
> >>
> >> -static inline void kvm_irqfd_init(struct kvm *kvm) {}
> >> +static inline void kvm_eventfd_init(struct kvm *kvm) {}
> >>  static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
> >>  {
> >>  	return -EINVAL;
> >>  }
> >>
> >>  static inline void kvm_irqfd_release(struct kvm *kvm) {}
> >> +static inline int kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> >> +{
> >> +	return -EINVAL;
> >> +}
> >>
> >>  #endif /* CONFIG_HAVE_KVM_EVENTFD */
> >>
> >> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
> >> index f3f2ea1..77befb3 100644
> >> --- a/virt/kvm/eventfd.c
> >> +++ b/virt/kvm/eventfd.c
> >> @@ -21,6 +21,7 @@
> >>   */
> >>
> >>  #include <linux/kvm_host.h>
> >> +#include <linux/kvm.h>
> >>  #include <linux/workqueue.h>
> >>  #include <linux/syscalls.h>
> >>  #include <linux/wait.h>
> >> @@ -29,6 +30,8 @@
> >>  #include <linux/list.h>
> >>  #include <linux/eventfd.h>
> >>
> >> +#include "iodev.h"
> >> +
> >>  /*
> >>   * --------------------------------------------------------------------
> >>   * irqfd: Allows an fd to be used to inject an interrupt to the guest
> >> @@ -208,9 +211,10 @@ kvm_deassign_irqfd(struct kvm *kvm, int fd, int gsi)
> >>  }
> >>
> >>  void
> >> -kvm_irqfd_init(struct kvm *kvm)
> >> +kvm_eventfd_init(struct kvm *kvm)
> >>  {
> >>  	INIT_LIST_HEAD(&kvm->irqfds);
> >> +	INIT_LIST_HEAD(&kvm->iosignalfds);
> >>  }
> >>
> >>  int
> >> @@ -233,3 +237,353 @@ kvm_irqfd_release(struct kvm *kvm)
> >>  		irqfd_release(irqfd);
> >>  	}
> >>  }
> >> +
> >> +/*
> >> + * --------------------------------------------------------------------
> >> + * iosignalfd: translate a PIO/MMIO memory write to an eventfd signal.
> >> + *
> >> + * userspace can register a PIO/MMIO address with an eventfd for receiving
> >> + * notification when the memory has been touched.
> >> + * --------------------------------------------------------------------
> >> + */
> >> +
> >> +/*
> >> + * Design note: We create one PIO/MMIO device (iosignalfd_group) which
> >> + * aggregates one or more iosignalfd_items.  Each item points to exactly one
> >> + * eventfd, and can be registered to trigger on any write to the group
> >> + * (wildcard), or to a write of a specific value.  If more than one item is to
> >> + * be supported, the addr/len ranges must all be identical in the group.  If a
> >> + * trigger value is to be supported on a particular item, the group range must
> >> + * be exactly the width of the trigger.
> >> + */
> >> +
> >> +struct _iosignalfd_item {
> >> +	struct list_head     list;
> >> +	struct file         *file;
> >> +	unsigned char       *match;
> >> +	struct rcu_head      rcu;
> >> +};
> >> +
> >> +struct _iosignalfd_group {
> >> +	struct list_head     list;
> >> +	u64                  addr;
> >> +	size_t               length;
> >> +	struct list_head     items;
> >> +	struct kvm_io_device dev;
> >> +};
> >> +
> >> +static inline struct _iosignalfd_group *to_group(struct kvm_io_device *dev)
> >> +{
> >> +	return container_of(dev, struct _iosignalfd_group, dev);
> >> +}
> >> +
> >> +static inline struct _iosignalfd_item *to_item(struct rcu_head *rhp)
> >> +{
> >> +	return container_of(rhp, struct _iosignalfd_item, rcu);
> >> +}
> >>     
> >
> > Given that you only use to_item() in one place, not clear to me that it
> > is a win compared to open-coding it.  In contrast, to_group() is used
> > in several places, so it makes sense to define it.
> >   
> 
> Yeah, I think open-coding it resulted in an 80+ character line that I had
> to break, and I hate that.  So I went for consistency and tidiness
> instead, even if it's a tad overkill. ;)
> >   
> >> +static int
> >> +iosignalfd_group_in_range(struct kvm_io_device *this, gpa_t addr, int len,
> >> +			  int is_write)
> >> +{
> >> +	struct _iosignalfd_group *p = to_group(this);
> >> +
> >> +	return ((addr >= p->addr && (addr < p->addr + p->length)));
> >> +}
> >> +
> >> +static int
> >> +iosignalfd_is_match(struct _iosignalfd_group *group,
> >> +		    struct _iosignalfd_item *item,
> >> +		    const void *val,
> >> +		    int len)
> >> +{
> >> +	if (!item->match)
> >> +		/* wildcard is a hit */
> >> +		return true;
> >> +
> >> +	if (len != group->length)
> >> +		/* mis-matched length is a miss */
> >> +		return false;
> >> +
> >> +	/* otherwise, we have to actually compare the data */
> >> +	return !memcmp(item->match, val, len) ? true : false;
> >> +}
> >> +
> >> +/*
> >> + * MMIO/PIO writes trigger an event (if the data matches).
> >> + *
> >> + * This is invoked by the io_bus subsystem in response to an address match
> >> + * against the group.  We must then walk the list of individual items to check
> >> + * for a match and, if applicable, to send the appropriate signal. If the item
> >> + * in question does not have a "match" pointer, it is considered a wildcard
> >> + * and will always generate a signal.  There can be an arbitrary number
> >> + * of distinct matches or wildcards per group.
> >> + */
> >> +static void
> >> +iosignalfd_group_write(struct kvm_io_device *this, gpa_t addr, int len,
> >> +		       const void *val)
> >> +{
> >> +	struct _iosignalfd_group *group = to_group(this);
> >> +	struct _iosignalfd_item *item;
> >> +
> >> +	/* FIXME: We should probably use SRCU */
> >>     
> >
> > Some test questions to help with this decision:
> >
> > 1.	Ignoring lock-contention issues, would you be willing to hold a
> > 	spinlock_t across this critical section?  If the answer is
> > 	"yes", then you should also be willing to use rcu_read_lock().
> > 	If the answer is "no", proceed to the next question.
> >
> > 2.	Is this critical section's execution time almost always less than
> > 	a hundred microseconds or so?  If the answer is "yes", then you
> > 	should also be willing to use rcu_read_lock().	If the answer is
> > 	"no", proceed to the next question.
> >
> > 3.	Is this critical section executed infrequently (say, less than once
> > 	per few jiffies, on average), and is the critical section's
> > 	execution time less than a jiffy or so?  If the answer is "yes",
> > 	then you should also be willing to use rcu_read_lock().  If the
> > 	answer is "no", proceed to the next question.
> >
> > 4.	Did you get here?  If so, you probably don't want to use
> > 	rcu_read_lock().  Consider using something like srcu_read_lock(),
> > 	reference counts, or sleeplocks instead.
> >
> > Of course, these questions should be regarded as rough rules of thumb.
> > And for every rough rule of thumb, there will be exceptions.  These
> > questions should still serve as a good general guide.
> >   
> 
> Actually as it stands right now the eventfd_signal() takes a
> spin_lock_irq() for its duration, so a basic rcu_read_lock() may be in
> the noise (at least for now).  I'd like to work with Davide at some
> point in the future to change eventfd internals to allow signalling
> callbacks to be optionally preemptible (at least when they are called
> from preemptible code to begin with), but we can address this code path
> at that time.
> 
> The biggest caveat here is that while today the eventfd_signal()
> unavoidably introduces its own non-preemptible section, iosignalfd could
> conceivably be composed of an arbitrarily long list of
> iosignalfd_items.  We would therefore be extending this CS out a bit
> further than the single eventfd_signal overhead alone would imply.  That
> might be justification in and of itself to use something more friendly, like
> srcu.

How long would the list normally be?  What conditions/configurations
would result in it being longer than (say) 1,000 elements?

> Out of curiosity, how much more expensive is an srcu read-side CS
> compared to plain rcu?  How about compared to an atomic (like
> atomic_inc)?  Don't go off running some microbenchmarks just for me.  I
> am just wondering if you have measured it in the past and know the
> relative numbers off the top of your head.

The attached table appeared in IBM Systems Journal last year.  It shows
srcu_read_lock() and srcu_read_unlock() being about the same cost as
rcu_read_lock_bh() and rcu_read_unlock_bh().  Of course, different
systems will see slightly different costs.  Either way, -much- cheaper
than a cache miss.

> >> +	rcu_read_lock();
> >> +
> >> +	list_for_each_entry_rcu(item, &group->items, list) {
> >> +		if (iosignalfd_is_match(group, item, val, len))
> >> +			eventfd_signal(item->file, 1);
> >>     
> >
> > From what I can see, this code could invoke eventfd_signal() on an
> > item that just got list_del_rcu()ed.  Is this really OK?
> >   
> 
> If I understand the question, this is similar to the situation with the
> irqfd review we just did:  that is, there is a window of time between
> unlinking the object and its destruction, and the question is whether
> it's still OK to use the object during this window?
> 
> If that is true, the answer is "yes".  As in the irqfd case, I really
> only care about making sure that we cannot invoke the free()/fput()
> function while this critical section referenced above is running.  We
> hold a valid reference to item->file until the point that the
> iosignal_item_free() method is called (via call_rcu), so the
> eventfd_signal() should be ok as long as it completes before then.

Sounds good, then!

> > (No idea whether or not it is OK myself.  Looks like we might change
> > some state in the underlying file and attempt to wake up some processes.
> > Might be OK, might not, depending on what you are up to.)
> >
> >   
> >> +	}
> >> +
> >> +	rcu_read_unlock();
> >> +}
> >> +
> >> +/*
> >> + * MMIO/PIO reads against the group indiscriminately return all zeros
> >> + */
> >> +static void
> >> +iosignalfd_group_read(struct kvm_io_device *this, gpa_t addr, int len,
> >> +		      void *val)
> >> +{
> >> +	memset(val, 0, len);
> >> +}
> >> +
> >> +static void
> >> +_iosignalfd_group_destructor(struct _iosignalfd_group *group)
> >> +{
> >> +	list_del(&group->list);
> >> +	kfree(group);
> >> +}
> >> +
> >> +static void
> >> +iosignalfd_group_destructor(struct kvm_io_device *this)
> >> +{
> >> +	struct _iosignalfd_group *group = to_group(this);
> >> +
> >> +	_iosignalfd_group_destructor(group);
> >> +}
> >> +
> >> +/* assumes kvm->lock held */
> >> +static struct _iosignalfd_group *
> >> +iosignalfd_group_find(struct kvm *kvm, u64 addr)
> >> +{
> >> +	struct _iosignalfd_group *group;
> >> +
> >> +	list_for_each_entry(group, &kvm->iosignalfds, list) {
> >> +		if (group->addr == addr)
> >> +			return group;
> >> +	}
> >> +
> >> +	return NULL;
> >> +}
> >> +
> >> +static const struct kvm_io_device_ops iosignalfd_ops = {
> >> +	.read       = iosignalfd_group_read,
> >> +	.write      = iosignalfd_group_write,
> >> +	.in_range   = iosignalfd_group_in_range,
> >> +	.destructor = iosignalfd_group_destructor,
> >> +};
> >> +
> >> +/*
> >> + * Atomically find an existing group, or create a new one if it doesn't already
> >> + * exist.
> >> + *
> >> + * assumes kvm->lock is held
> >> + */
> >> +static struct _iosignalfd_group *
> >> +iosignalfd_group_get(struct kvm *kvm, struct kvm_io_bus *bus,
> >> +		      u64 addr, size_t len)
> >> +{
> >> +	struct _iosignalfd_group *group;
> >> +
> >> +	group = iosignalfd_group_find(kvm, addr);
> >> +	if (!group) {
> >> +		int ret;
> >> +
> >> +		group = kzalloc(sizeof(*group), GFP_KERNEL);
> >> +		if (!group)
> >> +			return ERR_PTR(-ENOMEM);
> >> +
> >> +		INIT_LIST_HEAD(&group->list);
> >> +		INIT_LIST_HEAD(&group->items);
> >> +		group->addr   = addr;
> >> +		group->length = len;
> >> +		kvm_iodevice_init(&group->dev, &iosignalfd_ops);
> >> +
> >> +		ret = kvm_io_bus_register_dev(bus, &group->dev);
> >> +		if (ret < 0) {
> >> +			kfree(group);
> >> +			return ERR_PTR(ret);
> >> +		}
> >> +
> >> +		list_add_tail(&group->list, &kvm->iosignalfds);
> >> +
> >> +	} else if (group->length != len)
> >> +		/*
> >> +		 * Existing groups must have the same addr/len tuple or we
> >> +		 * reject the request
> >> +		 */
> >> +		return ERR_PTR(-EINVAL);
> >> +
> >> +	return group;
> >> +}
> >> +
> >> +static int
> >> +kvm_assign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> >> +{
> >> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> >> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> >> +	struct _iosignalfd_group *group = NULL;
> >> +	struct _iosignalfd_item  *item = NULL;
> >> +	struct file              *file;
> >> +	int                       ret;
> >> +
> >> +	file = eventfd_fget(args->fd);
> >> +	if (IS_ERR(file)) {
> >> +		ret = PTR_ERR(file);
> >> +		return ret;
> >> +	}
> >> +
> >> +	item = kzalloc(sizeof(*item), GFP_KERNEL);
> >> +	if (!item) {
> >> +		ret = -ENOMEM;
> >> +		goto fail;
> >> +	}
> >> +
> >> +	INIT_LIST_HEAD(&item->list);
> >> +	item->file = file;
> >> +
> >> +	/*
> >> +	 * Registering a "trigger" address is optional.  If this flag
> >> +	 * is not specified, we leave the item->match pointer NULL, which
> >> +	 * indicates a wildcard
> >> +	 */
> >> +	if (args->flags & KVM_IOSIGNALFD_FLAG_TRIGGER) {
> >> +		if (args->len > sizeof(u64)) {
> >> +			ret = -EINVAL;
> >> +			goto fail;
> >> +		}
> >> +
> >> +		item->match = kzalloc(args->len, GFP_KERNEL);
> >> +		if (!item->match) {
> >> +			ret = -ENOMEM;
> >> +			goto fail;
> >> +		}
> >> +
> >> +		if (copy_from_user(item->match,
> >> +				   (void *)args->trigger,
> >> +				   args->len)) {
> >> +			ret = -EFAULT;
> >> +			goto fail;
> >> +		}
> >> +	}
> >> +
> >> +	mutex_lock(&kvm->lock);
> >> +
> >> +	group = iosignalfd_group_get(kvm, bus, args->addr, args->len);
> >> +	if (IS_ERR(group)) {
> >> +		ret = PTR_ERR(group);
> >> +		mutex_unlock(&kvm->lock);
> >> +		goto fail;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Note: We are committed to succeed at this point since we have
> >> +	 * (potentially) published a new group-device.  Any failure handling
> >> +	 * added in the future after this point will need to be handled
> >> +	 * carefully.
> >> +	 */
> >> +
> >> +	list_add_tail_rcu(&item->list, &group->items);
> >>     
> >
> > Good, protected by kvm->lock.
> >
> >   
> >> +	mutex_unlock(&kvm->lock);
> >> +
> >> +	return 0;
> >> +
> >> +fail:
> >> +	if (item) {
> >> +		/*
> >> +		 * it would have never made it to the group->items list
> >> +		 * in the failure path, so we dont need to worry about removing
> >> +		 * it
> >> +		 */
> >> +		kfree(item->match);
> >> +		kfree(item);
> >> +	}
> >> +
> >> +	if (file)
> >> +		fput(file);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static void
> >> +iosignalfd_item_free(struct rcu_head *rhp)
> >> +{
> >> +	struct _iosignalfd_item *item = to_item(rhp);
> >> +
> >> +	fput(item->file);
> >> +	kfree(item->match);
> >> +	kfree(item);
> >> +}
> >> +
> >> +static int
> >> +kvm_deassign_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> >> +{
> >> +	int                       pio = args->flags & KVM_IOSIGNALFD_FLAG_PIO;
> >> +	struct kvm_io_bus        *bus = pio ? &kvm->pio_bus : &kvm->mmio_bus;
> >> +	struct _iosignalfd_group *group;
> >> +	struct _iosignalfd_item  *item, *tmp;
> >> +	struct file              *file;
> >> +	int                       ret = 0;
> >> +
> >> +	mutex_lock(&kvm->lock);
> >> +
> >> +	group = iosignalfd_group_find(kvm, args->addr);
> >> +	if (!group) {
> >> +		ret = -EINVAL;
> >> +		goto out;
> >> +	}
> >> +
> >> +	file = eventfd_fget(args->fd);
> >> +	if (IS_ERR(file)) {
> >> +		ret = PTR_ERR(file);
> >> +		goto out;
> >> +	}
> >> +
> >> +	list_for_each_entry_safe(item, tmp, &group->items, list) {
> >>     
> 
> Just to be sure, it's OK to use a non-RCU list iterator in conjunction
> with the list_add/del_rcu mutating primitives as long as I provide my
> own mutual exclusion (e.g. kvm->lock), right?

Correct.  The _rcu variants of the list iterators are only required in
situations where the list might be concurrently changing.  If you are
preventing such change (in this case by holding the relevant lock), then
it is OK to use the non-_rcu variants of the list iterators.

> >> +		/*
> >> +		 * any items registered at this group-address with the matching
> >> +		 * eventfd will be removed
> >> +		 */
> >> +		if (item->file != file)
> >> +			continue;
> >> +
> >> +		list_del_rcu(&item->list);
> >>     
> >
> > Good, also protected by kvm->lock.
> >
> >   
> >> +		call_rcu(&item->rcu, iosignalfd_item_free);
> >>     
> >
> > And also good here, deferring free of the RCU-protected structures.
> >   
> 
> Out of curiosity: Would call_rcu() work if we were using srcu instead? 
> Or is my only choice a blocking synchronize_srcu() here?  I couldn't
> seem to find a call_srcu().

Indeed, there is no call_srcu().  The reason for omitting call_srcu() is
to prevent memory piling up in case an SRCU reader blocks indefinitely.
With synchronize_srcu(), the amount of memory waiting for a grace period
is limited by the number of tasks blocked in synchronize_srcu().

If you really do need call_rcu(), one approach would be to traverse the
list in pieces.  One way to do this is as follows:

o	Have a number of elements that you are willing to traverse in
	one go, say 512 of them.

o	If you reach that limit, mark the current element and exit
	the RCU read-side critical section, retaining a reference
	to the marked element.  The marking operation must be performed
	under the protection of the lock used to modify the list.

	Hmm...  Unfortunately, this is a mutex, which means that it
	cannot be acquired in an RCU read-side critical section.
	So the mark will need to be set with a cmpxchg operation.
	If it is zero, set it to one.  If it is not zero, advance
	to the next element and retry.

o	If you need to delete the element, use a cmpxchg operation
	to set the mark to two.  If successful, use call_rcu() just
	like you do now.  Otherwise, if the value is 1, remove it
	from the list, but do not invoke call_rcu().

o	Upon restarting the RCU read-side scan, use cmpxchg to
	change the mark value from 1 to 0.  If this fails due to
	the value being 2, invoke call_rcu() on the element and
	continue the scan.

Seem reasonable?

> >> +	}
> >> +
> >> +	if (list_empty(&group->items)) {
> >> +		/*
> >> +		 * We should unpublish our group device if we just removed
> >> +		 * the last of its contained items
> >> +		 */
> >> +		kvm_io_bus_unregister_dev(bus, &group->dev);
> >> +		_iosignalfd_group_destructor(group);
> >>     
> 
> I think I screwed this part up.  I should technically defer the
> destruction of the group object until after the next grace period as
> well, right?  (We are using its group->items list in the read-side
> CS.)  Otherwise it's a race, I think.

Ah, I missed that part, as I failed to trace out all the data structures.
Yes, everything that is accessed by an RCU reader without some other
protection must be defer-freed, e.g., by call_rcu().  The lock protects
some of the fields, but the lock itself is acquired under only RCU
protection, so the structure containing the lock must sit for a grace
period between the time it is made inaccessible to RCU readers and the
time that it is freed.

> >> +	}
> >> +
> >> +	fput(file);
> >> +
> >> +out:
> >> +	mutex_unlock(&kvm->lock);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +int
> >> +kvm_iosignalfd(struct kvm *kvm, struct kvm_iosignalfd *args)
> >> +{
> >> +	if (args->flags & KVM_IOSIGNALFD_FLAG_DEASSIGN)
> >> +		return kvm_deassign_iosignalfd(kvm, args);
> >> +
> >> +	return kvm_assign_iosignalfd(kvm, args);
> >> +}
> >> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> >> index 179c650..91d0fe2 100644
> >> --- a/virt/kvm/kvm_main.c
> >> +++ b/virt/kvm/kvm_main.c
> >> @@ -977,7 +977,7 @@ static struct kvm *kvm_create_vm(void)
> >>  	atomic_inc(&kvm->mm->mm_count);
> >>  	spin_lock_init(&kvm->mmu_lock);
> >>  	kvm_io_bus_init(&kvm->pio_bus);
> >> -	kvm_irqfd_init(kvm);
> >> +	kvm_eventfd_init(kvm);
> >>  	mutex_init(&kvm->lock);
> >>  	kvm_io_bus_init(&kvm->mmio_bus);
> >>  	init_rwsem(&kvm->slots_lock);
> >> @@ -2215,6 +2215,15 @@ static long kvm_vm_ioctl(struct file *filp,
> >>  		r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags);
> >>  		break;
> >>  	}
> >> +	case KVM_IOSIGNALFD: {
> >> +		struct kvm_iosignalfd entry;
> >> +
> >> +		r = -EFAULT;
> >> +		if (copy_from_user(&entry, argp, sizeof entry))
> >> +			goto out;
> >> +		r = kvm_iosignalfd(kvm, &entry);
> >> +		break;
> >> +	}
> >>  	default:
> >>  		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
> >>  	}
> >>
> >>     
> 
> Thanks again, Paul!
> -Greg
> 



[-- Attachment #2: RCUperftableCrop.png --]
[-- Type: image/png, Size: 71723 bytes --]


end of thread, other threads:[~2009-06-04 16:51 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2009-06-03 20:17 [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
2009-06-03 20:17 ` [KVM PATCH v5 1/2] kvm: make io_bus interface more robust Gregory Haskins
2009-06-03 20:17 ` [KVM PATCH v5 2/2] kvm: add iosignalfd support Gregory Haskins
2009-06-03 20:37   ` Gregory Haskins
2009-06-03 21:45   ` Gregory Haskins
2009-06-04  1:34   ` Paul E. McKenney
2009-06-04  2:55     ` Gregory Haskins
2009-06-04 16:51       ` Paul E. McKenney
2009-06-03 21:37 ` [KVM PATCH v5 0/2] iosignalfd Gregory Haskins
2009-06-04 12:25   ` Avi Kivity
