Subject: vIOMMU - PCI pass through to Layer 2 VMs (Nested Virtualization)
From: Markus Frank @ 2023-10-09 7:06 UTC
To: qemu-devel
Hello,
I have already sent this email to qemu-discuss but I did not get a reply.
https://lists.nongnu.org/archive/html/qemu-discuss/2023-09/msg00034.html
Maybe someone here could help me and reply to this email or the one on qemu-discuss?
I would like to pass through PCI devices to Layer-2 VMs via Nested Virtualization.
Is there current documentation for this topic somewhere?
I used these parameters:
-machine ...,kernel-irqchip=split
-device intel-iommu
With these parameters PCI pass through to L2-VMs worked fine.
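(For reference, the full L1 command line was roughly of this shape; the host
BDF and the intremap/caching-mode settings below are only illustrative
placeholders, not an exact copy of my setup:)

-machine q35,kernel-irqchip=split
-device intel-iommu,intremap=on,caching-mode=on
-device vfio-pci,host=0000:01:00.0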
Now I come to the part where I get confused.
https://wiki.qemu.org/Features/VT-d#With_Virtio_Devices
Is this documentation relevant for PCI pass through? Do I need DMAR for virtio devices?
And there is also the virtio-iommu device where I also could use the i440fx chipset.
https://michael2012z.medium.com/virtio-iommu-789369049443
When adding "-device virtio-iommu-pci" pci pass through also works
but I get "kvm: virtio_iommu_translate no mapping for 0x1002030f000 for sid=240"
when starting qemu. What could that mean?
What do these parameters "disable-legacy=on,disable-modern=off,iommu_platform=on,ats=on"
actually do? When do I need them and on which virtio devices?
And which device should I rather use: virtio-iommu or intel-iommu?
Thanks in advance,
Markus
Subject: Re: vIOMMU - PCI pass through to Layer 2 VMs (Nested Virtualization)
From: Eric Auger @ 2023-10-09 9:29 UTC
To: Markus Frank, qemu-devel
Hi Markus,
On 10/9/23 09:06, Markus Frank wrote:
> Hello,
>
> I have already sent this email to qemu-discuss but I did not get a reply.
> https://lists.nongnu.org/archive/html/qemu-discuss/2023-09/msg00034.html
> Maybe someone here could help me and reply to this email or the one on
> qemu-discuss?
>
> I would like to pass through PCI devices to Layer-2 VMs via Nested
> Virtualization.
>
> Is there current documentation for this topic somewhere?
>
> I used these parameters:
> -machine ...,kernel-irqchip=split
> -device intel-iommu
>
> With these parameters PCI pass through to L2-VMs worked fine.
>
>
> Now I come to the part where I get confused.
>
> https://wiki.qemu.org/Features/VT-d#With_Virtio_Devices
> Is this documentation relevant for PCI pass through? Do I need DMAR for
> virtio devices?
If you just want the host assigned devices to be protected by the
viommu, you don't need to add iommu_platform=on along with the
virtio-pci devices.
>
> And there is also the virtio-iommu device where I also could use the
> i440fx chipset.
> https://michael2012z.medium.com/virtio-iommu-789369049443
you can use virtio-iommu with q35 machine.
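For example, a minimal q35 setup could look like this (the host BDF is just a
placeholder):

-machine q35,accel=kvm
-device virtio-iommu-pci
-device vfio-pci,host=0000:01:00.0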
>
> When adding "-device virtio-iommu-pci" pci pass through also works
> but I get "kvm: virtio_iommu_translate no mapping for 0x1002030f000 for
> sid=240"
> when starting qemu. What could that mean?
Normally you shouldn't get any such error. This means there is no
mapping programmed by the iommu-driver for this requester id (0x240) and
this iova=0x1002030f000. But if I understand correctly this does not
prevent your device from working, correct?
>
> What do these parameters
> "disable-legacy=on,disable-modern=off,iommu_platform=on,ats=on"
> actually do? When do I need them and on which virtio devices?
you need them if you want your virtio devices to be protected by the
viommu. Otherwise the viommu is bypassed.
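For instance, to place a virtio-net device behind the vIOMMU you would add
something like this (the netdev id is just an example):

-netdev user,id=net0
-device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,ats=on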
>
> And which device should I rather use: virtio-iommu or intel-iommu?
Both should be working. virtio-iommu is more recent and less used in
production than intel-iommu though.
Thanks
Eric
>
> Thanks in advance,
> Markus
>
>
Subject: Re: vIOMMU - PCI pass through to Layer 2 VMs (Nested Virtualization)
From: Markus Frank @ 2023-10-09 10:25 UTC
To: Eric Auger, qemu-devel
Hi Eric,
thanks for the quick answer.
On 10/9/23 11:29, Eric Auger wrote:
> Hi Markus,
>
> On 10/9/23 09:06, Markus Frank wrote:
>> Hello,
>>
>> I have already sent this email to qemu-discuss but I did not get a reply.
>> https://lists.nongnu.org/archive/html/qemu-discuss/2023-09/msg00034.html
>> Maybe someone here could help me and reply to this email or the one on
>> qemu-discuss?
>>
>> I would like to pass through PCI devices to Layer-2 VMs via Nested
>> Virtualization.
>>
>> Is there current documentation for this topic somewhere?
>>
>> I used these parameters:
>> -machine ...,kernel-irqchip=split
>> -device intel-iommu
>>
>> With these parameters PCI pass through to L2-VMs worked fine.
>>
>>
>> Now I come to the part where I get confused.
>>
>> https://wiki.qemu.org/Features/VT-d#With_Virtio_Devices
>> Is this documentation relevant for PCI pass through? Do I need DMAR for
>> virtio devices?
> If you just want the host assigned devices to be protected by the
> viommu, you don't need to add iommu_platform=on along with the
> virtio-pci devices.
>>
>> And there is also the virtio-iommu device where I also could use the
>> i440fx chipset.
>> https://michael2012z.medium.com/virtio-iommu-789369049443
>
> you can use virtio-iommu with q35 machine.
Yes I know. I meant that intel-iommu does not support i440fx and virtio-iommu does.
>>
>> When adding "-device virtio-iommu-pci" pci pass through also works
>> but I get "kvm: virtio_iommu_translate no mapping for 0x1002030f000 for
>> sid=240"
>> when starting qemu. What could that mean?
> Normally you shouldn't get any such error. This means there is no
> mapping programmed by the iommu-driver for this requester id (0x240) and
> this iova=0x1002030f000. But if I understand correctly this does not
> prevent your device from working, correct?
Yes. I didn't notice any problems. How could I find out what the requester id 0x240 refers to?
>>
>> What do these parameters
>> "disable-legacy=on,disable-modern=off,iommu_platform=on,ats=on"
>> actually do? When do I need them and on which virtio devices?
> you need them if you want your virtio devices to be protected by the
> viommu. Otherwise the viommu is bypassed.
Okay, so iommu_platform=on is more of a decision you should make per virtio-pci device.
So, simplified: the advantage is more isolation and the disadvantage is less performance?
>>
>> And which device should I rather use: virtio-iommu or intel-iommu?
> Both should be working. virtio-iommu is more recent and less used in
> production than intel-iommu though.
>
> Thanks
>
> Eric
>>
>> Thanks in advance,
>> Markus
>>
>>
>
>
Subject: Re: vIOMMU - PCI pass through to Layer 2 VMs (Nested Virtualization)
From: Eric Auger @ 2023-10-09 12:02 UTC
To: Markus Frank, qemu-devel
Hi,
On 10/9/23 12:25, Markus Frank wrote:
> Hi Eric,
>
> thanks for the quick answer.
>
> On 10/9/23 11:29, Eric Auger wrote:
>> Hi Markus,
>>
>> On 10/9/23 09:06, Markus Frank wrote:
>>> Hello,
>>>
>>> I have already sent this email to qemu-discuss but I did not get a
>>> reply.
>>> https://lists.nongnu.org/archive/html/qemu-discuss/2023-09/msg00034.html
>>> Maybe someone here could help me and reply to this email or the one on
>>> qemu-discuss?
>>>
>>> I would like to pass through PCI devices to Layer-2 VMs via Nested
>>> Virtualization.
>>>
>>> Is there current documentation for this topic somewhere?
>>>
>>> I used these parameters:
>>> -machine ...,kernel-irqchip=split
>>> -device intel-iommu
>>>
>>> With these parameters PCI pass through to L2-VMs worked fine.
>>>
>>>
>>> Now I come to the part where I get confused.
>>>
>>> https://wiki.qemu.org/Features/VT-d#With_Virtio_Devices
>>> Is this documentation relevant for PCI pass through? Do I need DMAR for
>>> virtio devices?
>> If you just want the host assigned devices to be protected by the
>> viommu, you don't need to add iommu_platform=on along with the
>> virtio-pci devices.
>>>
>>> And there is also the virtio-iommu device where I also could use the
>>> i440fx chipset.
>>> https://michael2012z.medium.com/virtio-iommu-789369049443
>>
>> you can use virtio-iommu with q35 machine.
> Yes I know. I meant that intel-iommu does not support i440fx and
> virtio-iommu does.
>>>
>>> When adding "-device virtio-iommu-pci" pci pass through also works
>>> but I get "kvm: virtio_iommu_translate no mapping for 0x1002030f000 for
>>> sid=240"
>>> when starting qemu. What could that mean?
>> Normally you shouldn't get any such error. This means there is no
>> mapping programmed by the iommu-driver for this requester id (0x240) and
>> this iova=0x1002030f000. But if I understand correctly this does not
>> prevent your device from working, correct?
> Yes. I didn't notice any problems. How could I find out what the
> requester id 0x240 refers to?
In your guest, issue lspci and look for the endpoint whose BDF matches 0x240.
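The requester ID is just the PCI BDF packed as (bus << 8) | devfn, so assuming
the sid in that message is printed in hex, you can decode it like this (adjust
the value if it turns out to be decimal):

sid=0x240
printf '%02x:%02x.%x\n' $((sid >> 8)) $((sid >> 3 & 0x1f)) $((sid & 0x7))
# prints 02:08.0 -> then check which device that is with lspci in the guest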
>>>
>>> What do these parameters
>>> "disable-legacy=on,disable-modern=off,iommu_platform=on,ats=on"
>>> actually do? When do I need them and on which virtio devices?
>> you need them if you want your virtio devices to be protected by the
>> viommu. Otherwise the viommu is bypassed.
> Okay, so iommu_platform=on is more of a decision you should make per
> virtio-pci device.
> So, simplified: the advantage is more isolation and the disadvantage is
> less performance?
Yes, setting iommu_platform forces the guest driver to use the DMA API.
Eric
>>>
>>> And which device should I rather use: virtio-iommu or intel-iommu?
>> Both should be working. virtio-iommu is more recent and less used in
>> production than intel-iommu though.
>>
>> Thanks
>>
>> Eric
>>>
>>> Thanks in advance,
>>> Markus
>>>
>>>
>>
>>
>