From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shameerali Kolothum Thodi
To: Nicolin Chen
Cc: Jason Gunthorpe, acpica-devel@lists.linux.dev, Guohanjun (Hanjun Guo), iommu@lists.linux.dev, Joerg Roedel, Kevin Tian, kvm@vger.kernel.org, Len Brown, linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Lorenzo Pieralisi, Rafael J. Wysocki, Robert Moore, Robin Murphy, Sudeep Holla, Will Deacon, Alex Williamson, Eric Auger, Jean-Philippe Brucker, Moritz Fischer, Michael Shavit, patches@lists.linux.dev, Mostafa Saleh
Subject: RE: [PATCH v2 0/8] Initial support for SMMUv3 nested translation
Date: Thu, 29 Aug 2024 13:14:54 +0000
References: <0-v2-621370057090+91fec-smmuv3_nesting_jgg@nvidia.com> <7debe8f99afa4e33aa1872be0d4a63e1@huawei.com>
> -----Original Message-----
> From: Nicolin Chen
> Sent: Wednesday, August 28, 2024 7:13 PM
> To: Shameerali Kolothum Thodi
> Subject: Re: [PATCH v2 0/8] Initial support for SMMUv3 nested translation
>
> On Wed, Aug 28, 2024 at 06:06:36PM +0000, Shameerali Kolothum Thodi
> wrote:
> > > > > As mentioned above, the VIOMMU series would be required to test
> > > > > the entire nesting feature, which now has a v2 rebasing on this
> > > > > series. I tested it with a pairing QEMU branch. Please refer to:
> > > > > https://lore.kernel.org/linux-iommu/cover.1724776335.git.nicolinc@nvidia.com/
> > > >
> > > > Thanks for this. I haven't gone through the viommu and its Qemu
> > > > branch yet. The way we present nested-smmuv3/iommufd to Qemu seems
> > > > to have changed with the above Qemu branch (multiple nested SMMUs).
> > > > The old Qemu command line for nested setup doesn't work anymore.
> > > >
> > > > Could you please share an example Qemu command line to verify this
> > > > series (sorry if I missed it in the links/git).
> > >
> > > My bad. I updated those two "for_iommufd_" QEMU branches with a
> > > README commit on top of each for the reference command.
> >
> > Thanks.
> > I did give it a go and this is my command line based on the above,
> > but it fails to boot very early:
> >
> > root@ubuntu:/home/shameer/qemu-test# ./qemu_run-simple-iommufd-nicolin-2
> > qemu-system-aarch64-nicolin-viommu: Illegal numa node 2
> >
> > Any idea what I am missing? Do you have any special config enabled
> > while building Qemu?
>
> Looks like you are running on a multi-SMMU platform :)
>
> Would you please try syncing your local branch? That should work,
> as the update also had a small change to the virt code:
>
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index 161a28a311..a782909016 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1640,7 +1640,7 @@ static PCIBus *create_pcie_expander_bridge(VirtMachineState *vms, uint8_t idx)
>      }
>
>      qdev_prop_set_uint8(dev, "bus_nr", bus_nr);
> -    qdev_prop_set_uint16(dev, "numa_node", idx);
> +    qdev_prop_set_uint16(dev, "numa_node", 0);
>      qdev_realize_and_unref(dev, BUS(bus), &error_fatal);

That makes some progress, but I am still not seeing the assigned device
in the Guest:

-device vfio-pci-nohotplug,host=0000:75:00.1,iommufd=iommufd0

root@ubuntu:/# lspci -tv
-+-[0000:ca]---00.0-[cb]--
 \-[0000:00]-+-00.0 Red Hat, Inc. QEMU PCIe Host bridge
             +-01.0 Red Hat, Inc Virtio network device
             +-02.0 Red Hat, Inc. QEMU PCIe Expander bridge
             +-03.0 Red Hat, Inc. QEMU PCIe Expander bridge
             +-04.0 Red Hat, Inc. QEMU PCIe Expander bridge
             +-05.0 Red Hat, Inc. QEMU PCIe Expander bridge
             +-06.0 Red Hat, Inc. QEMU PCIe Expander bridge
             +-07.0 Red Hat, Inc. QEMU PCIe Expander bridge
             +-08.0 Red Hat, Inc. QEMU PCIe Expander bridge
             \-09.0 Red Hat, Inc. QEMU PCIe Expander bridge

The new root port is created, but no device is attached. But without
iommufd,

-device vfio-pci-nohotplug,host=0000:75:00.1

root@ubuntu:/# lspci -tv
-[0000:00]-+-00.0 Red Hat, Inc. QEMU PCIe Host bridge
           +-01.0 Red Hat, Inc Virtio network device
           +-02.0 Red Hat, Inc. QEMU PCIe Expander bridge
           +-03.0 Red Hat, Inc. QEMU PCIe Expander bridge
           +-04.0 Red Hat, Inc. QEMU PCIe Expander bridge
           +-05.0 Red Hat, Inc. QEMU PCIe Expander bridge
           +-06.0 Red Hat, Inc. QEMU PCIe Expander bridge
           +-07.0 Red Hat, Inc. QEMU PCIe Expander bridge
           +-08.0 Red Hat, Inc. QEMU PCIe Expander bridge
           +-09.0 Red Hat, Inc. QEMU PCIe Expander bridge
           \-0a.0 Huawei Technologies Co., Ltd. Device a251

We can see dev a251. And yes, the setup has multiple SMMUs (8).

Thanks,
Shameer
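[For anyone else trying to reproduce this, a rough sketch of the kind of invocation being discussed. Only the -object iommufd / -device vfio-pci lines match what is quoted above; the machine options, kernel, and disk paths are placeholders, and the nested-SMMU options specific to Nicolin's experimental branch are deliberately omitted, so treat the branch README as the authority for the full command.]

```shell
# Sketch only: paths, memory/CPU sizes and the guest images are placeholders.
# The iommufd backend wiring (-object iommufd + iommufd= on the vfio device)
# is the part under discussion; nested-SMMU options from the experimental
# branch are not shown here.
qemu-system-aarch64 \
    -machine virt,accel=kvm,gic-version=3 \
    -cpu host -smp 4 -m 4G \
    -object iommufd,id=iommufd0 \
    -device vfio-pci-nohotplug,host=0000:75:00.1,iommufd=iommufd0 \
    -kernel Image \
    -drive file=rootfs.img,if=virtio,format=raw \
    -append "root=/dev/vda rootwait" \
    -nographic
```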