Date: Wed, 11 Sep 2024 15:50:59 +0800
Subject: Re: [PATCH v2 17/19] iommu/arm-smmu-v3: Add arm_smmu_viommu_cache_invalidate
To: Nicolin Chen, "Tian, Kevin"
Cc: baolu.lu@linux.intel.com, Jason Gunthorpe, will@kernel.org, joro@8bytes.org, suravee.suthikulpanit@amd.com, robin.murphy@arm.com, dwmw2@infradead.org, shuah@kernel.org, linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org, linux-kselftest@vger.kernel.org, eric.auger@redhat.com, jean-philippe@linaro.org, mdf@kernel.org, mshavit@google.com, shameerali.kolothum.thodi@huawei.com, smostafa@google.com, "Liu, Yi L"
List-Id: linux-arm-kernel@lists.infradead.org
From: Baolu Lu

On 2024/9/11 15:20, Nicolin Chen wrote:
> On Wed, Sep 11, 2024 at 06:25:16AM +0000, Tian, Kevin wrote:
>>> From: Jason Gunthorpe
>>> Sent: Friday, September 6, 2024 2:22 AM
>>>
>>> On Thu, Sep 05, 2024 at 11:00:49AM -0700, Nicolin Chen wrote:
>>>> On Thu, Sep 05, 2024 at 01:20:39PM -0300, Jason Gunthorpe wrote:
>>>>> On Tue, Aug 27, 2024 at 09:59:54AM -0700, Nicolin Chen wrote:
>>>>>
>>>>>> +static int arm_smmu_viommu_cache_invalidate(struct iommufd_viommu *viommu,
>>>>>> +					    struct iommu_user_data_array *array)
>>>>>> +{
>>>>>> +	struct iommu_domain *domain = iommufd_viommu_to_parent_domain(viommu);
>>>>>> +
>>>>>> +	return __arm_smmu_cache_invalidate_user(
>>>>>> +			to_smmu_domain(domain), viommu, array);
>>>>>
>>>>> I'd like to have the viommu struct directly hold the VMID. The nested
>>>>> parent should be sharable between multiple viommus; it doesn't make
>>>>> any sense for it to hold the vmid.
>>>>>
>>>>> This is struggling because it is trying too hard not to have the
>>>>> driver allocate the viommu, and I think we should just go ahead and do
>>>>> that. Store the vmid, today copied from the nesting parent, in the
>>>>> viommu private struct. No need for iommufd_viommu_to_parent_domain();
>>>>> just rework the APIs to pass the vmid down, not a domain.
>>>>
>>>> OK. When I designed all this stuff, we still hadn't made up our minds
>>>> about sharing the s2 domain, i.e. moving the VMID, which might need a
>>>> couple more patches to achieve.
>>>
>>> Yes, many more patches, and don't try to do it now. But we can copy
>>> the vmid from the s2 and place it in the viommu struct at allocation
>>> time.
>>>
>> Does it assume that a viommu object cannot span multiple physical
>> IOMMUs, so there is only one vmid per viommu?
>
> I think so. One of the reasons for introducing the vIOMMU is to maintain
> shareability across physical IOMMUs at the s2 HWPT_PAGING level.

My understanding is that a VMID is something like the domain ID on the
x86 arch. Is that correct?

If a VMID for an S2 hwpt is valid on physical IOMMU A but has already
been allocated for another purpose on physical IOMMU B, how can it be
shared across both IOMMUs? Or is the VMID allocated globally?

Thanks,
baolu