From: liu ping fan <qemulist@gmail.com>
To: qemu-devel@nongnu.org
Cc: Jan Kiszka <jan.kiszka@siemens.com>,
Cam Macdonell <cam@cs.ualberta.ca>,
Anthony Liguori <anthony@codemonkey.ws>
Subject: Re: [Qemu-devel] [PATCH v2 2/2] ivshmem: use irqfd to interrupt among VMs
Date: Thu, 13 Dec 2012 15:34:00 +0800
Message-ID: <CAJnKYQm7zSyDh72EaxBFjf-02+ajhiY4OHugE7rnhzi4XQwuqw@mail.gmail.com>
In-Reply-To: <1354775870-24944-2-git-send-email-qemulist@gmail.com>
Hi Jan and Cam,
It has been tested with the uio driver. Any other opinions on the code?
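
For reference, here is roughly what the guest-side doorbell test looks
like. This is a minimal sketch, not the actual test program: it assumes
the device is bound to a uio driver that exposes the register BAR (BAR0)
as mapping 0 of /dev/uio0, and it uses the usual ivshmem register layout
(IVPosition at offset 0x08, Doorbell at offset 0x0c). The device node
and offsets are assumptions, not part of this patch.

/* Hypothetical guest-side doorbell test (sketch, assumptions above). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

enum { IVPOSITION = 0x08, DOORBELL = 0x0c };

int main(int argc, char **argv)
{
    uint16_t peer = argc > 1 ? strtoul(argv[1], NULL, 0) : 0;
    uint16_t vector = argc > 2 ? strtoul(argv[2], NULL, 0) : 0;
    volatile uint32_t *regs;
    int fd = open("/dev/uio0", O_RDWR);

    if (fd < 0) {
        perror("open /dev/uio0");
        return 1;
    }
    /* uio maps region N at offset N * page size; BAR0 is region 0 here */
    regs = mmap(NULL, getpagesize(), PROT_READ | PROT_WRITE, MAP_SHARED,
                fd, 0);
    if (regs == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("our IVPosition (peer id): %u\n", regs[IVPOSITION / 4]);
    /* Doorbell format: high 16 bits = target peer id, low 16 = vector */
    regs[DOORBELL / 4] = ((uint32_t)peer << 16) | vector;
    return 0;
}

On the receiving side, a blocking read() on that VM's /dev/uio0 returns
when the interrupt arrives, regardless of whether the MSI-X vector was
injected from QEMU userspace or, with this patch, directly by KVM
through the irqfd.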
Regards,
Pingfan
On Thu, Dec 6, 2012 at 2:37 PM, Liu Ping Fan <qemulist@gmail.com> wrote:
> From: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
>
> By using irqfd, we can avoid switching between kernel and user space
> when VMs interrupt each other.
>
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
> ---
> hw/ivshmem.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 53 insertions(+), 1 deletion(-)
>
> diff --git a/hw/ivshmem.c b/hw/ivshmem.c
> index 7c8630c..b394b07 100644
> --- a/hw/ivshmem.c
> +++ b/hw/ivshmem.c
> @@ -19,6 +19,7 @@
> #include "hw.h"
> #include "pc.h"
> #include "pci.h"
> +#include "msi.h"
> #include "msix.h"
> #include "kvm.h"
> #include "migration.h"
> @@ -83,6 +84,7 @@ typedef struct IVShmemState {
> uint32_t vectors;
> uint32_t features;
> EventfdEntry *eventfd_table;
> + int *vector_virqs;
>
> Error *migration_blocker;
>
> @@ -625,16 +627,62 @@ static int ivshmem_load(QEMUFile* f, void *opaque, int version_id)
> return 0;
> }
>
> +static int ivshmem_vector_use(PCIDevice *dev, unsigned vector,
> + MSIMessage msg)
> +{
> + IVShmemState *s = DO_UPCAST(IVShmemState, dev, dev);
> + int virq;
> + EventNotifier *n = &s->peers[s->vm_id].eventfds[vector];
> +
> + virq = kvm_irqchip_add_msi_route(kvm_state, msg);
> + if (virq >= 0 && kvm_irqchip_add_irqfd_notifier(kvm_state, n, virq) >= 0) {
> + s->vector_virqs[vector] = virq;
> + qemu_chr_add_handlers(s->eventfd_chr[vector], NULL, NULL, NULL, NULL);
> + } else if (virq >= 0) {
> + kvm_irqchip_release_virq(kvm_state, virq);
> + error_report("ivshmem, can not setup irqfd\n");
> + return -1;
> + } else {
> + error_report("ivshmem, no enough msi route to setup irqfd\n");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +static void ivshmem_vector_release(PCIDevice *dev, unsigned vector)
> +{
> + IVShmemState *s = DO_UPCAST(IVShmemState, dev, dev);
> + EventNotifier *n = &s->peers[s->vm_id].eventfds[vector];
> + int virq = s->vector_virqs[vector];
> +
> + if (s->vector_virqs[vector] >= 0) {
> + kvm_irqchip_remove_irqfd_notifier(kvm_state, n, virq);
> + kvm_irqchip_release_virq(kvm_state, virq);
> + s->vector_virqs[vector] = -1;
> + }
> +}
> +
> static void ivshmem_write_config(PCIDevice *pci_dev, uint32_t address,
> uint32_t val, int len)
> {
> + bool is_enabled, was_enabled = msix_enabled(pci_dev);
> +
> pci_default_write_config(pci_dev, address, val, len);
> + is_enabled = msix_enabled(pci_dev);
> + if (!was_enabled && is_enabled) {
> + msix_set_vector_notifiers(pci_dev, ivshmem_vector_use,
> + ivshmem_vector_release);
> + } else if (was_enabled && !is_enabled) {
> + msix_unset_vector_notifiers(pci_dev);
> + }
> }
>
> static int pci_ivshmem_init(PCIDevice *dev)
> {
> IVShmemState *s = DO_UPCAST(IVShmemState, dev, dev);
> uint8_t *pci_conf;
> + int i;
>
> if (s->sizearg == NULL)
> s->ivshmem_size = 4 << 20; /* 4 MB default */
> @@ -758,7 +806,10 @@ static int pci_ivshmem_init(PCIDevice *dev)
> }
>
> s->dev.config_write = ivshmem_write_config;
> -
> + s->vector_virqs = g_new0(int, s->vectors);
> + for (i = 0; i < s->vectors; i++) {
> + s->vector_virqs[i] = -1;
> + }
> return 0;
> }
>
> @@ -770,6 +821,7 @@ static void pci_ivshmem_uninit(PCIDevice *dev)
> migrate_del_blocker(s->migration_blocker);
> error_free(s->migration_blocker);
> }
> + g_free(s->vector_virqs);
>
> memory_region_destroy(&s->ivshmem_mmio);
> memory_region_del_subregion(&s->bar, &s->ivshmem);
> --
> 1.7.4.4
>
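
For anyone unfamiliar with the irqfd plumbing: here is roughly what the
kvm_irqchip_add_msi_route() + kvm_irqchip_add_irqfd_notifier() pair in
ivshmem_vector_use() boils down to at the raw KVM ioctl level. This is a
standalone sketch, not QEMU code; the GSI number and the MSI
address/data values are invented for illustration.

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);

    /* irqfd needs the in-kernel irqchip */
    ioctl(vm, KVM_CREATE_IRQCHIP, 0);

    /* 1. Route a free GSI to the vector's MSI message; this is the job
     *    of kvm_irqchip_add_msi_route(). */
    struct kvm_irq_routing *route =
        calloc(1, sizeof(*route) + sizeof(struct kvm_irq_routing_entry));
    route->nr = 1;
    route->entries[0].gsi = 24;                      /* invented GSI */
    route->entries[0].type = KVM_IRQ_ROUTING_MSI;
    route->entries[0].u.msi.address_lo = 0xfee00000; /* invented message */
    route->entries[0].u.msi.data = 0x4041;
    ioctl(vm, KVM_SET_GSI_ROUTING, route);

    /* 2. Attach the peer's eventfd to that GSI; this is the job of
     *    kvm_irqchip_add_irqfd_notifier(). */
    int efd = eventfd(0, 0);
    struct kvm_irqfd req;
    memset(&req, 0, sizeof(req));
    req.fd = efd;
    req.gsi = 24;
    ioctl(vm, KVM_IRQFD, &req);

    /* From here on, any 8-byte write to efd makes KVM inject the MSI
     * directly, with no exit to QEMU userspace on the receiving side. */
    return 0;
}

The eventfds themselves are unchanged: they are the ones the ivshmem
server already hands out to the peers. The patch only changes who
consumes them on the receiving side, which is why ivshmem_vector_use()
also drops the qemu_chr handler for that vector.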