From: Ben <figure1802@126.com>
To: "Nicolin Chen" <nicolinc@nvidia.com>
Cc: eric.auger@redhat.com,
linux-arm-kernel <linux-arm-kernel@lists.infradead.org>
Subject: Re:Re: Re: how test the Translation Support for SMMUv3?
Date: Wed, 3 Jan 2024 22:09:34 +0800 (CST) [thread overview]
Message-ID: <4bb04c20.66ed.18ccfa87b41.Coremail.figure1802@126.com> (raw)
In-Reply-To: <ZZRpPFAJ7AZH3aH3@Asurada-Nvidia>
At 2024-01-03 03:51:24, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>On Sun, Dec 31, 2023 at 10:18:12PM +0800, Ben wrote:
>
>> I am trying your patchset on FVP (Fixed Virtual Platforms), but it failed.
>>
>> Here is the host side running on FVP (the platform is rdn1edge).
>>
>> master:~# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind
>> master:~# echo 0abc aced > /sys/bus/pci/drivers/vfio-pci/new_id
>>
>> When I want to run QEMU to launch a VM, it fails, like below:
>>
>> root@master:/# cat qemu-iommufd.sh
>> ./build/qemu-system-aarch64 -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel /Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=/busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id="test0"
>> root@master:/# sh qemu-iommufd.sh
>> WARNING: Image format was not specified for '/busybox_arm64.ext4' and probing guessed raw.
>> Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
>> Specify the 'raw' format explicitly to remove the restrictions.
>> qemu-system-aarch64: -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id=test0: vfio 0000:05:00.0: vfio /sys/bus/pci/devices/0000:05:00.0/vfio-dev: failed to load "/sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev"
>>
>> It looks like it cannot find /sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev for this device.
>>
>> root@master:/# ls -l /sys/bus/pci/devices/0000\:05\:00.0/vfio-dev/vfio0/
>> total 0
>> lrwxrwxrwx 1 root root 0 Dec 31 13:29 device -> ../../../0000:05:00.0
>> drwxr-xr-x 2 root root 0 Dec 31 13:29 power
>> lrwxrwxrwx 1 root root 0 Dec 31 13:29 subsystem -> ../../../../../../../../class/vfio-dev
>> -rw-r--r-- 1 root root 4096 Dec 31 13:20 uevent
>>
>> any suggestion on that?
>
>CONFIG_VFIO_DEVICE_CDEV=y
>
>Do you have this enabled in kernel config?
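For anyone checking the same thing, one way to verify that the option is enabled is to grep the kernel config. This is only a sketch; the `check_option` helper is mine, not from this thread, and the config file locations in the comments are the usual ones and may differ on your build:

```shell
# Sketch: check whether a kernel config option is built in,
# e.g. CONFIG_VFIO_DEVICE_CDEV. Looks for "<option>=y" in a config file.
check_option() {
    config_file=$1
    option=$2
    grep -q "^${option}=y" "$config_file"
}

# Typical invocations on a running system (adjust paths to your setup):
#   check_option /boot/config-"$(uname -r)" CONFIG_VFIO_DEVICE_CDEV
#   zcat /proc/config.gz | grep CONFIG_VFIO_DEVICE_CDEV   # needs CONFIG_IKCONFIG_PROC
```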
Thanks for your suggestion. I can now run QEMU and launch a VM.
However, after assigning a device to the VM and binding the vfio-pci driver to that device inside the VM,
opening the "/dev/vfio/x" device file fails. Any suggestions?
Here are the log and steps:
On the host side:
root@master:~# lspci -k
02:00.0 Unassigned class [ff00]: ARM Device ff80
Subsystem: ARM Device 0000
echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
./qemu-system-aarch64-iommufd -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel ./Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=./busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:02:00.0,iommufd=iommufd0
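As an aside, instead of the new_id method above, the bind can also be done per device via driver_override. A rough sketch; the `bind_vfio` helper and its `PCI_ROOT` variable are mine (the variable is only a testing hook, on a real host the default /sys/bus/pci applies), and vfio-pci must already be loaded:

```shell
# Sketch: bind one specific PCI device to vfio-pci via driver_override.
PCI_ROOT=${PCI_ROOT:-/sys/bus/pci}

bind_vfio() {
    bdf=$1
    dev="$PCI_ROOT/devices/$bdf"
    # release the device from its current driver, if any
    if [ -e "$dev/driver" ]; then
        echo "$bdf" > "$dev/driver/unbind"
    fi
    # steer the next probe to vfio-pci, then trigger the probe
    echo vfio-pci > "$dev/driver_override"
    echo "$bdf" > "$PCI_ROOT/drivers_probe"
}

# Usage: bind_vfio 0000:02:00.0
```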
On the VM side:
/ # echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
/ # ./vfio_test 0000:00:02.0
Failed to open /dev/vfio/1, -1 (No such file or directory)
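While debugging this, a sketch like the following can show which VFIO node (if any) actually exists for the device: it tries the legacy group interface (/dev/vfio/&lt;group&gt;) first and then the cdev interface (/dev/vfio/devices/vfioX, which needs CONFIG_VFIO_DEVICE_CDEV=y). The `find_vfio_node` helper and the SYS/DEV override variables are mine, not from this thread; the overrides exist only so the logic can be tested:

```shell
# Sketch: locate the VFIO node for a PCI device (by BDF), checking both
# the legacy group interface and the cdev interface.
SYS=${SYS:-/sys}
DEV=${DEV:-/dev}

find_vfio_node() {
    bdf=$1
    # legacy interface: /dev/vfio/<iommu group number>
    if [ -e "$SYS/bus/pci/devices/$bdf/iommu_group" ]; then
        group=$(basename "$(readlink "$SYS/bus/pci/devices/$bdf/iommu_group")")
        if [ -e "$DEV/vfio/$group" ]; then
            echo "$DEV/vfio/$group"
            return 0
        fi
    fi
    # cdev interface (CONFIG_VFIO_DEVICE_CDEV=y): /dev/vfio/devices/vfioX
    for d in "$SYS/bus/pci/devices/$bdf"/vfio-dev/vfio*; do
        if [ -e "$d" ]; then
            echo "$DEV/vfio/devices/$(basename "$d")"
            return 0
        fi
    done
    return 1
}

# Usage: find_vfio_node 0000:00:02.0
```

If neither path exists in the guest, the guest kernel config (the same CONFIG_VFIO_DEVICE_CDEV question as on the host) is worth checking first.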
>
>> BTW, a couple of other questions:
>> 1. Can a device assigned to the VM via VFIO leverage the nested IOMMU?
>
>I think so, as long as it's behind IOMMU hardware that supports
>nesting (and it requires both host-kernel and VMM/QEMU patches).
>
>> How about a virtual device emulated by QEMU that is not assigned via VFIO?
>
>The basic nesting feature is about 2-stage translation setup (STE
>configuration in SMMU terms) and cache invalidation. An emulated
>device doesn't exist in the host kernel, so there is no nesting,
>IMHO.
>
>> 2. When are the S1 and S2 page tables filled for a device in the nested IOMMU
>> scenario? Is there a shadow page table for the vIOMMU in the VM, and does it trap
>> into the hypervisor to refill the real S1 and S2 page tables? The workflow of your
>> patchset is not clear to me.
>
>The S2 page table is created/filled at VM creation time. It's basically
>managed by the hypervisor or host kernel. The S1 page table, on the other
>hand, is created inside guest memory and thus managed by the guest
>OS. As I mentioned above, nesting is all about STE configuration
>besides cache invalidation. The VMM traps the S1 page table pointer from
>the guest and forwards it to the host kernel, which then sets up
>the device's STE for a 2-stage translation mode.
>
Thanks a lot!
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel