From: Francesco Zuliani <francesco.zuliani@neat.it>
To: "Alex Williamson" <alex.williamson@redhat.com>,
"Marc-André Lureau" <mlureau@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] hw/misc: slavepci_passthru driver
Date: Tue, 19 Jan 2016 11:30:20 +0100 [thread overview]
Message-ID: <569E103C.4060103@neat.it> (raw)
In-Reply-To: <1453135313.32741.92.camel@redhat.com>

Hi Alex,
On 01/18/2016 05:41 PM, Alex Williamson wrote:
> On Mon, 2016-01-18 at 10:16 -0500, Marc-André Lureau wrote:
>> Hi
>>
>> ----- Original Message -----
>>> Hi there,
>>>
>>> I'd like to submit this new pci driver (hw/misc) for inclusion,
>>> if you think it could be useful to others as well as ourselves.
>>>
>>> The driver "worked for our needs" BUT we haven't done extensive
>>> testing and this is our first attempt to submit a patch so I kindly
>>> ask for extra-forgiveness .
>>>
>>> The "slavepci_passthru" driver is useful in the scenario described
>>> below to implement a simplified passthru when the host CPU does not
>>> support IOMMU and one is interested only in pci target-mode (slave
>>> devices).
>> Let's CC Alex, who worked on the most recent framework for something related to that (VFIO).
>>
>>> Embedded system CPUs (e.g. Atom, AMD G-Series) often lack the VT-d
>>> extensions (IOMMU) needed to pass PCI peripherals through to the
>>> guest machine (i.e. the PCI pass-through feature cannot be used).
>>>
>>> If one is only interested in using the PCI board as a PCI target
>>> (slave device), this driver mmaps the host PCI BARs into the guest
>>> within a virtual PCI device.
> What exactly do you mean by pci-target/slave device? Does this mean
> that the device is not DMA capable, i.e. cannot enable BusMaster?
Yes, exactly. Our approach can be used ONLY if one is NOT interested in
DMA capability (i.e. it is not possible to enable BusMaster).
>>> This is useful in our case for debugging, via the qemu gdbserver
>>> facility (i.e. the '-s' option in qemu), a system running a
>>> bare-metal executable.
>>>
>>> Currently the driver assumes the custom PCI card has four 32-bit BARs
>>> to be mapped (in the current patch this is mandatory).
>>>
>>> HowTo:
>>> To use the new driver one shall:
>>> - define two environment variables assigning the VID and DID to
>>>   associate with the guest PCI card
>>> - give the host PCI BAR addresses to map into the guest.
>>>
>>> Example Usage:
>>>
>>> Let us suppose that we have in the host a slave PCI device with the
>>> following 4 BARs (i.e. the output of lspci -v -s YOUR-CARD | grep Memory):
>>> Memory at db800000 (32-bit, non-prefetchable) [size=4K]
>>> Memory at db900000 (32-bit, non-prefetchable) [size=8K]
>>> Memory at dba00000 (32-bit, non-prefetchable) [size=4K]
>>> Memory at dbb00000 (32-bit, non-prefetchable) [size=4K]
>>>
>>> We can map these BARs into a guest PCI device with VID=0xe33e
>>> DID=0x000a using:
>>>
>>> SLAVEPASSTHRU_VID="0xe33e" SLAVEPASSTHRU_DID="0xa" qemu-system-x86_64 \
>>> YOUR-SET-OF-FLAGS \
>>> -device \
>>> slavepassthru,size1=4096,baseaddr1=0xdb800000,size2=8192,baseaddr2=0xdb900000,size3=4096,baseaddr3=0xdba00000,size4=4096,baseaddr4=0xdbb00000
>>>
>>> Please note that if your device has fewer than four BARs you can give
>>> the same size and base address to the unused BARs.
> Those are some pretty serious usage restrictions and using /dev/mem is
> really not practical. The resource files in pci-sysfs would even be a
> better option.
Ours was a quick hack to fulfill our needs; the approach via sysfs is
of course the right one, and we would implement it if this patch is of
interest.
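
For reference, this is roughly what we have in mind (a minimal sketch
only; the helper name is made up and error handling is reduced, and the
real patch would take the PCI address and BAR number from device
properties rather than hard-coding anything):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Map BAR 'bar' of the host device at PCI address 'bdf'
 * (e.g. "0000:03:00.0") through its pci-sysfs resource file
 * instead of /dev/mem.  The resource file is exactly as large
 * as the BAR, so the size no longer has to be given by hand.
 */
static void *map_host_bar(const char *bdf, int bar, size_t *size)
{
    char path[128];
    struct stat st;
    void *ptr;
    int fd;

    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/resource%d", bdf, bar);

    fd = open(path, O_RDWR | O_SYNC);
    if (fd < 0) {
        perror(path);
        return NULL;
    }

    if (fstat(fd, &st) < 0) {
        perror("fstat");
        close(fd);
        return NULL;
    }
    *size = st.st_size;

    ptr = mmap(NULL, *size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping stays valid after close */
    if (ptr == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
    return ptr;
}

That way only the PCI address of the host card would be needed on the
command line, instead of the absolute physical BAR addresses and sizes.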
> I didn't see how IO and MMIO BARs get enabled on the
> physical device or whether you support any kind of interrupt scheme.
In our case the IO space is not used, and the MMIO space is already
enabled. Our custom board does not use interrupts, so our quick hack
does not implement any interrupt scheme.
> I
> had never really intended QEMU use of this, but you might want to
> consider vfio no-iommu mode:
>
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/drivers/vfio/vfio.c?id=03a76b60f8ba27974e2d252bc555d2c103420e15
>
> Using this taints the kernel, but maybe that's nothing you mind if
> you're already letting QEMU access /dev/mem. The QEMU vfio-pci driver
> would need to be modified to use the new device and of course it
> wouldn't have IOMMU translation capabilities. That means that the
> BusMaster bit should be protected and MSI/X capabilities should be hidden
> from the VM. It seems more flexible and featureful than what you have
> here. Thanks,
I was not aware of this interesting patch; I will study it to see if
it fits our use case.
Just for information: by "taint" you mean that "security" is weakened,
not that there are licensing issues, am I right?
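
Also, to make sure I understand the BusMaster point: the idea would be
that the virtual device filters the guest's config-space writes and
clears the Bus Master bit before forwarding them, roughly like the
sketch below (the hook name is ours, just to illustrate the idea, not
the actual vfio-pci code):

#include "qemu/osdep.h"
#include "hw/pci/pci.h"

/*
 * Illustrative config-space write hook: drop any attempt by the
 * guest to set PCI_COMMAND_MASTER, since without an IOMMU we must
 * never let the device start DMA.
 */
static void noiommu_write_config(PCIDevice *pdev, uint32_t addr,
                                 uint32_t val, int len)
{
    if (addr <= PCI_COMMAND && PCI_COMMAND < addr + len) {
        /* Clear the Bus Master bit in the byte being written. */
        val &= ~((uint32_t)PCI_COMMAND_MASTER <<
                 ((PCI_COMMAND - addr) * 8));
    }
    pci_default_write_config(pdev, addr, val, len);
}

Is that the right idea?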
Thanks a lot for your time
Francesco Zuliani
> Alex
Thread overview (5+ messages):
2016-01-18 14:28 [Qemu-devel] [PATCH] hw/misc: slavepci_passthru driver Francesco Zuliani
2016-01-18 15:16 ` Marc-André Lureau
2016-01-18 16:41 ` Alex Williamson
2016-01-19 10:30 ` Francesco Zuliani [this message]
2016-01-19 15:51 ` Alex Williamson