public inbox for linux-arm-kernel@lists.infradead.org
From: Ben  <figure1802@126.com>
To: "Nicolin Chen" <nicolinc@nvidia.com>
Cc: eric.auger@redhat.com,
	 linux-arm-kernel <linux-arm-kernel@lists.infradead.org>
Subject: Re: Re: Re: Re: how test the Translation Support for SMMUv3?
Date: Thu, 4 Jan 2024 21:13:39 +0800 (CST)	[thread overview]
Message-ID: <2fa4ae48.65c6.18cd49ba689.Coremail.figure1802@126.com> (raw)
In-Reply-To: <ZZWbg13e8r36pTC9@Asurada-Nvidia>



At 2024-01-04 01:38:11, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>On Wed, Jan 03, 2024 at 10:09:34PM +0800, Ben wrote:
>> At 2024-01-03 03:51:24, "Nicolin Chen" <nicolinc@nvidia.com> wrote:
>> >On Sun, Dec 31, 2023 at 10:18:12PM +0800, Ben wrote:
>> >
>> >> I am trying your patchset on FVP (Fixed Virtual Platforms) but failed.
>> >>
>> >> Here is the Host side running on FVP (platform is  rdn1egde).
>> >>
>> >> master:~# echo 0000:05:00.0 > /sys/bus/pci/devices/0000\:05\:00.0/driver/unbind
>> >> master:~# echo 0abc aced > /sys/bus/pci/drivers/vfio-pci/new_id
>> >>
>> >> When I try to run QEMU to launch a VM, it fails as below:
>> >>
>> >> root@master:/# cat qemu-iommufd.sh
>> >> ./build/qemu-system-aarch64 -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel /Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=/busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id="test0"
>> >> root@master:/# sh qemu-iommufd.sh
>> >> WARNING: Image format was not specified for '/busybox_arm64.ext4' and probing guessed raw.
>> >>          Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted.
>> >>          Specify the 'raw' format explicitly to remove the restrictions.
>> >> qemu-system-aarch64: -device vfio-pci,host=0000:05:00.0,iommufd=iommufd0,id=test0: vfio 0000:05:00.0: vfio /sys/bus/pci/devices/0000:05:00.0/vfio-dev: failed to load "/sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev"
>> >>
>> >> It looks like it cannot find /sys/bus/pci/devices/0000:05:00.0/vfio-dev/vfio0/dev for this device.
>> >>
>> >> root@master:/# ls -l /sys/bus/pci/devices/0000\:05\:00.0/vfio-dev/vfio0/
>> >> total 0
>> >> lrwxrwxrwx 1 root root    0 Dec 31 13:29 device -> ../../../0000:05:00.0
>> >> drwxr-xr-x 2 root root    0 Dec 31 13:29 power
>> >> lrwxrwxrwx 1 root root    0 Dec 31 13:29 subsystem -> ../../../../../../../../class/vfio-dev
>> >> -rw-r--r-- 1 root root 4096 Dec 31 13:20 uevent
>> >>
>> >> any suggestion on that?
>> >
>> >CONFIG_VFIO_DEVICE_CDEV=y
>> >
>> >Do you have this enabled in kernel config?
>> 
>> Thanks for your suggestion. With that enabled I can now run QEMU and launch a VM.
>> After assigning a device to the VM and binding the vfio-pci driver to it inside the VM,
>> it fails to open the "/dev/vfio/x" device file. Any suggestion?
>> 
>> Here is the log and steps:
>> 
>> On host side:
>> root@master:~# lspci -k
>> 02:00.0 Unassigned class [ff00]: ARM Device ff80
>>         Subsystem: ARM Device 0000
>> 
>> echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
>> 
>> ./qemu-system-aarch64-iommufd -L /usr/local/share/qemu -object iommufd,id=iommufd0 -machine virt,accel=kvm,gic-version=3,iommu=nested-smmuv3,iommufd=iommufd0 -cpu host -m 256m -nographic -kernel ./Image -append "noinintrd nokaslr root=/dev/vda rootfstype=ext4 rw" -drive if=none,file=./busybox_arm64.ext4,id=hd0 -device virtio-blk-device,drive=hd0 -device vfio-pci,host=0000:02:00.0,iommufd=iommufd0
>> 
>> 
>> On the VM side:
>> 
>> / # echo 13b5 ff80 > /sys/bus/pci/drivers/vfio-pci/new_id
>> / # ./vfio_test 0000:00:02.0
>> Failed to open /dev/vfio/1, -1 (No such file or directory)
>
>VM side? You mean in the guest? 

Yes, the guest side.


>No, you shouldn't configure
>it to a VFIO device in the guest. Just treat it as a native
>PCI device and let its driver in the guest kernel probe it.
>
>The "Unassigned class" returned by the lspci running in the
>host is likely telling you that your kernel doesn't support
>the device at all?

The device (13b5 ff80) is a special device (SMMUv3TestEngine) implemented in FVP.
I wrote a simple PCI driver for it, which just probes the device and calls the dma_alloc_coherent() API to allocate a DMA buffer.
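For reference, the driver is essentially the following sketch. This is illustrative only, not my exact module: the function names, buffer size, and log strings here are assumptions; only the 13b5:ff80 ID and the dma_alloc_coherent() call match what I described above.

```c
/* Minimal sketch of the test driver described above: probe the
 * SMMUv3TestEngine PCI function on FVP and allocate one coherent
 * DMA buffer. Names and sizes are illustrative.
 */
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>

#define SMMU_TEST_BUF_SIZE SZ_4K

static void *smmu_test_cpu_addr;
static dma_addr_t smmu_test_dma_addr;

static int smmu_test_pci_probe(struct pci_dev *pdev,
			       const struct pci_device_id *id)
{
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/* dma_addr is the device-visible address (an IOVA/bus address),
	 * not a CPU virtual address.
	 */
	smmu_test_cpu_addr = dma_alloc_coherent(&pdev->dev,
						SMMU_TEST_BUF_SIZE,
						&smmu_test_dma_addr,
						GFP_KERNEL);
	if (!smmu_test_cpu_addr) {
		pci_disable_device(pdev);
		return -ENOMEM;
	}

	dev_info(&pdev->dev, "cpu_addr %px dma_addr %pad\n",
		 smmu_test_cpu_addr, &smmu_test_dma_addr);
	return 0;
}

static void smmu_test_pci_remove(struct pci_dev *pdev)
{
	dma_free_coherent(&pdev->dev, SMMU_TEST_BUF_SIZE,
			  smmu_test_cpu_addr, smmu_test_dma_addr);
	pci_disable_device(pdev);
}

static const struct pci_device_id smmu_test_ids[] = {
	{ PCI_DEVICE(0x13b5, 0xff80) },	/* ARM SMMUv3TestEngine on FVP */
	{ }
};
MODULE_DEVICE_TABLE(pci, smmu_test_ids);

static struct pci_driver smmu_test_driver = {
	.name     = "smmu_test",
	.id_table = smmu_test_ids,
	.probe    = smmu_test_pci_probe,
	.remove   = smmu_test_pci_remove,
};
module_pci_driver(smmu_test_driver);
MODULE_LICENSE("GPL");
```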

Here is log on Guest side:
/ # insmod smmu_test.ko 
[ 8251.668308] smmu_test: module verification failed: signature and/or required key missing - tainting kernel
[ 8251.671198] smmu_test 0000:00:02.0: Adding to iommu group 1
[ 8251.672748] arm_smmu_attach_dev========
[ 8251.673823] arm_smmu_domain_finalise_s1 ====
[17991.095955] arm-smmu-v3 arm-smmu-v3.0.auto: arm_smmu_domain_finalise_nested ======
[ 8251.675278] smmu_test 0000:00:02.0: enabling device (0000 -> 0002)
qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x8000000000, 0x40000, 0xffffbc6a4000) = -14 (Bad address)
qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaaf4b56c60, 0x800004c000, 0x1000, 0xffffbf49d000) = -14 (Bad address)
[ 8251.678163] smmu_test_pci_probe === reg_phy 0x8000000000, len 0x40000
[ 8251.679978] smmu_test_pci_probe === reg 0xffff800009300000
[ 8251.681908] smmu_test_alloc_dma ---- iova 0xffff800008008000   dma_addr 0xfffff000
/ # 

So some QEMU error logs show up here; does this mean the nested SMMU is not working correctly?
Also, on the guest side, is the address (dma_addr) returned by the dma_alloc_coherent() API a GPA or an SPA?


>
>Try some simpler device that's supported first. What happens
>to the 0000:05:00.0 that you passed through previously?

05:00.0 is a SATA device on FVP, but strangely it failed to be assigned to the VM: "device busy" was reported. So I switched to the SMMUv3TestEngine device.
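For completeness, the rebind sequence I am attempting for passthrough is roughly the sketch below. The BDF and IDs are examples; whether the SATA controller is bound to ahci on this FVP image is my assumption, and "device busy" would fit the case where the original host driver still holds the device.

```shell
# Hypothetical vfio-pci rebind sequence; substitute the real BDF and IDs.
BDF=0000:05:00.0

# Release the device from its current host driver, if it has one.
if [ -e /sys/bus/pci/devices/$BDF/driver ]; then
    echo $BDF > /sys/bus/pci/devices/$BDF/driver/unbind
fi

# driver_override makes vfio-pci claim only this device, which avoids
# accidentally grabbing other devices that share the same vendor:device ID.
echo vfio-pci > /sys/bus/pci/devices/$BDF/driver_override
echo $BDF > /sys/bus/pci/drivers_probe
```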
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

Thread overview: 8+ messages
     [not found] <61332e87.404e.18caa3d6e15.Coremail.figure1802@126.com>
2023-12-27 21:30 ` how test the Translation Support for SMMUv3? Nicolin Chen
2023-12-31 14:18   ` Ben
2024-01-02 19:51     ` Nicolin Chen
2024-01-03 14:09       ` Ben
2024-01-03 17:38         ` Nicolin Chen
2024-01-04 13:13           ` Ben [this message]
2024-01-04 22:35             ` Nicolin Chen
2024-01-05  1:50               ` Ben
