From: Stefan Hajnoczi <stefanha@redhat.com>
To: Martin Oliveira <Martin.Oliveira@eideticom.com>
Cc: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"qemu-block@nongnu.org" <qemu-block@nongnu.org>,
	"fam@euphon.net" <fam@euphon.net>,
	"f4bug@amsat.org" <f4bug@amsat.org>,
	Stephen Bates <stephen@eideticom.com>,
	Chaitanya Kulkarni <Chaitanyak@nvidia.com>,
	Alex Williamson <alex.williamson@redhat.com>
Subject: Re: EBUSY when using NVMe Block Driver with multiple devices in the same IOMMU group
Date: Wed, 24 Aug 2022 13:18:59 -0400	[thread overview]
Message-ID: <YwZdg/nExYoDNRR/@fedora> (raw)
In-Reply-To: <DM6PR19MB4248C040D8E12FAF3CD9D615E4709@DM6PR19MB4248.namprd19.prod.outlook.com>


On Tue, Aug 23, 2022 at 10:36:00PM +0000, Martin Oliveira wrote:
> Hello,
> 
> I'm trying to use the QEMU NVMe userspace driver and I'm hitting an error when trying to use more than one device from an IOMMU group:
> 
>     Failed to open VFIO group file: /dev/vfio/39: Device or resource busy
> 
> If devices belong to different IOMMU groups, then it works as expected.
> 
> For each device, I bind it to vfio-pci and then use something like this:
> 
>     -drive file=nvme://0000:26:00.0,if=none,id=drive0,format=raw
>     -device virtio-blk,drive=drive0,id=virtio0,serial=nvme0
> 
> Using the file-based protocol (file=/dev/nvme0n1) works with multiple devices from the same group.
> 
> My host is running a 5.19 kernel and QEMU is the latest upstream (a8cc5842b5cb).

First, multiple QEMU instances cannot access nvme:// devices sharing
the same IOMMU group. I don't think this will ever be possible: the
IOMMU cannot isolate devices within a group from each other, so handing
them to separate processes would let one process program its device to
DMA into the other process's memory, opening a backdoor around process
memory isolation.

However, a single QEMU (or qemu-storage-daemon) instance should be able
to access multiple nvme:// devices in the same IOMMU group.
Unfortunately the code doesn't support that today:
util/vfio-helpers.c:qemu_vfio_init_pci() has no logic for sharing
groups/containers, so every device tries to open its own group fd. The
second open of /dev/vfio/<group> then fails with EBUSY because the
kernel only allows the group file to be opened once at any given time.
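
For reference, the per-device setup looks roughly like this (a
simplified sketch of the VFIO uAPI call sequence, not the exact
util/vfio-helpers.c code); a second device in the same group fails at
the open() of the group file:

    /*
     * Simplified sketch of the VFIO uAPI sequence for one device; the
     * real code lives in util/vfio-helpers.c. A second device in the
     * same IOMMU group fails at open(group_path) with EBUSY because the
     * kernel only allows one open fd per group file at a time.
     */
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int vfio_open_one_device(const char *group_path, /* "/dev/vfio/39" */
                                    const char *bdf)        /* "0000:26:00.0" */
    {
        int container = open("/dev/vfio/vfio", O_RDWR);
        if (container < 0 ||
            ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION) {
            return -1;
        }

        int group = open(group_path, O_RDWR); /* EBUSY if already open */
        if (group < 0) {
            return -1;
        }

        struct vfio_group_status status = { .argsz = sizeof(status) };
        if (ioctl(group, VFIO_GROUP_GET_STATUS, &status) < 0 ||
            !(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
            return -1; /* not all devices in the group bound to vfio-pci */
        }

        if (ioctl(group, VFIO_GROUP_SET_CONTAINER, &container) < 0 ||
            ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU) < 0) {
            return -1;
        }

        /* Only this step is truly per device; the group and container
         * could, in principle, be shared between devices. */
        return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, bdf);
    }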

It's possible to extend the util/vfio-helpers.c code to reuse VFIO
groups (and share VFIO containers), but I'm not aware of anyone who is
currently working on that.
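
A purely illustrative sketch of what such reuse could look like (all of
these names are invented; nothing like this exists in the tree yet):

    /*
     * Hypothetical sketch only -- none of these names exist in QEMU.
     * The idea: key an open group fd by IOMMU group number and reuse it
     * (bumping a refcount) instead of re-opening /dev/vfio/<group>,
     * which is the open() that currently returns EBUSY. Sharing the
     * container (and its DMA mappings) would follow the same pattern.
     */
    #include <fcntl.h>
    #include <stdio.h>

    #define MAX_CACHED_GROUPS 16

    typedef struct {
        int groupid;   /* IOMMU group number, e.g. 39 */
        int fd;        /* fd from open("/dev/vfio/<groupid>") */
        int refcount;
    } CachedGroup;

    static CachedGroup cached_groups[MAX_CACHED_GROUPS];
    static int num_cached_groups;

    static int get_group_fd(int groupid)
    {
        char path[32];
        int i, fd;

        for (i = 0; i < num_cached_groups; i++) {
            if (cached_groups[i].groupid == groupid) {
                cached_groups[i].refcount++;
                return cached_groups[i].fd; /* reuse, no second open() */
            }
        }

        snprintf(path, sizeof(path), "/dev/vfio/%d", groupid);
        fd = open(path, O_RDWR);
        if (fd >= 0 && num_cached_groups < MAX_CACHED_GROUPS) {
            cached_groups[num_cached_groups].groupid = groupid;
            cached_groups[num_cached_groups].fd = fd;
            cached_groups[num_cached_groups].refcount = 1;
            num_cached_groups++;
        }
        return fd;
    }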

Stefan

