From: Alex Williamson <alex.williamson@redhat.com>
To: Lan Tianyu <tianyu.lan@intel.com>
Cc: qemu-devel@nongnu.org, kevin.tian@intel.com, mst@redhat.com,
	jan.kiszka@siemens.com, jasowang@redhat.com, peterx@redhat.com,
	david@gibson.dropbear.id.au, yi.l.liu@intel.com
Subject: Re: [Qemu-devel] [Resend RFC PATCH 4/4] VFIO: Read IOMMU fault info from kernel space when get fault event
Date: Mon, 20 Feb 2017 14:09:08 -0700	[thread overview]
Message-ID: <20170220140908.2057953b@t450s.home> (raw)
In-Reply-To: <1487554087-15347-5-git-send-email-tianyu.lan@intel.com>

On Mon, 20 Feb 2017 09:28:07 +0800
Lan Tianyu <tianyu.lan@intel.com> wrote:

> This patch implements the fault event handler: it uses a new vfio cmd
> to read fault info from kernel space and notifies the vIOMMU device model.
> 
> Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
> ---
>  hw/vfio/common.c           | 51 ++++++++++++++++++++++++++++++++++++++++++++++
>  linux-headers/linux/vfio.h | 22 ++++++++++++++++++++
>  2 files changed, 73 insertions(+)
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 628b424..4f76e26 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -297,6 +297,57 @@ static bool vfio_listener_skipped_section(MemoryRegionSection *section)
>  
>  static void vfio_iommu_fault(void *opaque)
>  {
> +    VFIOContainer *container = opaque;
> +    struct vfio_iommu_type1_get_fault_info *info;
> +    struct vfio_iommu_fault_info *fault_info;
> +    MemoryRegion *mr = container->space->as->root;
> +    int count = 0, i, ret;
> +    IOMMUFaultInfo tmp;
> +
> +    if (!event_notifier_test_and_clear(&container->fault_notifier)) {
> +        return;
> +    }
> +
> +    /* g_malloc0() aborts on allocation failure, so no NULL check is needed */
> +    info = g_malloc0(sizeof(*info));
> +
> +    info->argsz = sizeof(*info);
> +
> +    ret = ioctl(container->fd, VFIO_IOMMU_GET_FAULT_INFO, info);
> +    if (ret && errno != ENOSPC) {
> +        error_report("vfio: Can't get fault info: %s", strerror(errno));
> +        goto err_exit;
> +    }
> +
> +    count = info->count;
> +    if (count <= 0) {
> +        goto err_exit;
> +    }
> +
> +    info = g_realloc(info, sizeof(*info) + count * sizeof(*fault_info));
> +    info->argsz = sizeof(*info) + count * sizeof(*fault_info);
> +    fault_info = info->fault_info;
> +
> +    ret = ioctl(container->fd, VFIO_IOMMU_GET_FAULT_INFO, info);
> +    if (ret) {
> +        error_report("vfio: Can't get fault info: %s", strerror(errno));
> +        goto err_exit;
> +    }
> +
> +    for (i = 0; i < info->count; i++) {
> +        tmp.addr = fault_info[i].addr;
> +        tmp.sid = fault_info[i].sid;
> +        tmp.fault_reason = fault_info[i].fault_reason;
> +        tmp.is_write = fault_info[i].is_write;
> +
> +        memory_region_iommu_fault_notify(mr, &tmp);
> +    }

Are there service requirements for handling these faults?  Can the
device wait indefinitely?  Can userspace handling of such faults meet
the device's service and performance requirements?  Do we get one
eventfd signal per fault entry?  How do we know if the faults have
overflowed?  Would an overflow be fatal, or is there a retry mechanism?

> +
> +err_exit:
> +    g_free(info);
>  }
>  
>  static int vfio_set_iommu_fault_notifier(struct VFIOContainer *container)
> diff --git a/linux-headers/linux/vfio.h b/linux-headers/linux/vfio.h
> index ca890ee..8b172f5 100644
> --- a/linux-headers/linux/vfio.h
> +++ b/linux-headers/linux/vfio.h
> @@ -550,6 +550,28 @@ struct vfio_iommu_type1_set_fault_eventfd {
>  
>  #define VFIO_IOMMU_SET_FAULT_EVENTFD	_IO(VFIO_TYPE, VFIO_BASE + 17)
>  
> +/*
> + * VFIO_IOMMU_GET_FAULT_INFO		_IO(VFIO_TYPE, VFIO_BASE + 18)
> + *
> + * Return IOMMU fault info to userspace.
> + */
> +
> +struct vfio_iommu_fault_info {
> +	__u64	addr;
> +	__u16   sid;
> +	__u8    fault_reason;
> +	__u8	is_write:1;
> +};
> +
> +struct vfio_iommu_type1_get_fault_info {
> +	__u32	argsz;
> +	__u32   flags;
> +	__u32	count;
> +	struct vfio_iommu_fault_info fault_info[];
> +};
> +
> +#define VFIO_IOMMU_GET_FAULT_INFO	_IO(VFIO_TYPE, VFIO_BASE + 18)
> +
>  /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>  
>  /*


Thread overview: 7+ messages
2017-02-20  1:28 [Qemu-devel] [Resend RFC PATCH 0/4] VT-d: Inject fault event from IOMMU hardware Lan Tianyu
2017-02-20  1:28 ` [Qemu-devel] [Resend RFC PATCH 1/4] VFIO: Set eventfd for IOMMU fault event via new vfio cmd Lan Tianyu
2017-02-20 21:08   ` Alex Williamson
2017-02-20  1:28 ` [Qemu-devel] [Resend RFC PATCH 3/4] Intel iommu: Add Intel IOMMU fault event callback Lan Tianyu
2017-02-20 21:08   ` Alex Williamson
2017-02-20  1:28 ` [Qemu-devel] [Resend RFC PATCH 4/4] VFIO: Read IOMMU fault info from kernel space when get fault event Lan Tianyu
2017-02-20 21:09   ` Alex Williamson [this message]
