From: Lu Baolu <baolu.lu@linux.intel.com>
To: Joerg Roedel, Jason Gunthorpe, Christoph Hellwig, Kevin Tian,
    Ashok Raj, Will Deacon, Robin Murphy, Jean-Philippe Brucker
Cc: Eric Auger, Liu Yi L, Jacob jun Pan,
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    Lu Baolu
Subject: [PATCH RFC v2 07/11] arm-smmu-v3/sva: Add SVA domain support
Date: Tue, 29 Mar 2022 13:37:56 +0800
Message-Id: <20220329053800.3049561-8-baolu.lu@linux.intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220329053800.3049561-1-baolu.lu@linux.intel.com>
References: <20220329053800.3049561-1-baolu.lu@linux.intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for SVA domain allocation and provide an SVA-specific
iommu_domain_ops.
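
To make the expected call flow concrete, below is a rough sketch (not part
of this patch) of how a core-layer caller could drive the new SVA domain
ops once arm_smmu_domain_alloc(IOMMU_DOMAIN_SVA) has handed back a domain.
The example_use_sva_domain() name is hypothetical; only the domain->ops
callbacks it invokes are introduced by this series.

/*
 * Illustrative sketch only: exercise the SVA domain ops added below.
 * "pasid" is assumed to have been allocated for dev/mm by the core.
 */
static int example_use_sva_domain(struct iommu_domain *domain,
				  struct device *dev, ioasid_t pasid)
{
	int ret;

	/* Binds dev/pasid to the mm backing this SVA domain. */
	ret = domain->ops->attach_dev_pasid(domain, dev, pasid);
	if (ret)
		return ret;

	/* ... device issues PASID-tagged DMA against the mm ... */

	/* Drops the bond; the mmu notifier is released on the last put. */
	domain->ops->detach_dev_pasid(domain, dev, pasid);
	return 0;
}

The domain itself is released through domain->ops->free(), which for the
SVA case is just a kfree() of the struct iommu_domain allocated in
arm_smmu_domain_alloc().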
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   | 14 +++++++
 .../iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c   | 42 +++++++++++++++++++
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 21 ++++++++++
 3 files changed, 77 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
index cd48590ada30..7631c00fdcbd 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
@@ -759,6 +759,10 @@ struct iommu_sva *arm_smmu_sva_bind(struct device *dev, struct mm_struct *mm,
 void arm_smmu_sva_unbind(struct iommu_sva *handle);
 u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle);
 void arm_smmu_sva_notifier_synchronize(void);
+int arm_smmu_sva_attach_dev_pasid(struct iommu_domain *domain,
+				  struct device *dev, ioasid_t id);
+void arm_smmu_sva_detach_dev_pasid(struct iommu_domain *domain,
+				   struct device *dev, ioasid_t id);
 #else /* CONFIG_ARM_SMMU_V3_SVA */
 static inline bool arm_smmu_sva_supported(struct arm_smmu_device *smmu)
 {
@@ -804,5 +808,15 @@ static inline u32 arm_smmu_sva_get_pasid(struct iommu_sva *handle)
 }
 
 static inline void arm_smmu_sva_notifier_synchronize(void) {}
+
+static inline int arm_smmu_sva_attach_dev_pasid(struct iommu_domain *domain,
+						struct device *dev, ioasid_t id)
+{
+	return -ENODEV;
+}
+
+static inline void arm_smmu_sva_detach_dev_pasid(struct iommu_domain *domain,
+						 struct device *dev,
+						 ioasid_t id) {}
 #endif /* CONFIG_ARM_SMMU_V3_SVA */
 #endif /* _ARM_SMMU_V3_H */
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
index 22ddd05bbdcd..ce229820086d 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c
@@ -534,3 +534,45 @@ void arm_smmu_sva_notifier_synchronize(void)
 	 */
 	mmu_notifier_synchronize();
 }
+
+int arm_smmu_sva_attach_dev_pasid(struct iommu_domain *domain,
+				  struct device *dev, ioasid_t id)
+{
+	int ret = 0;
+	struct iommu_sva *handle;
+	struct mm_struct *mm = iommu_sva_domain_mm(domain);
+
+	if (domain->type != IOMMU_DOMAIN_SVA || !mm)
+		return -EINVAL;
+
+	mutex_lock(&sva_lock);
+	handle = __arm_smmu_sva_bind(dev, mm);
+	if (IS_ERR(handle))
+		ret = PTR_ERR(handle);
+	mutex_unlock(&sva_lock);
+
+	return ret;
+}
+
+void arm_smmu_sva_detach_dev_pasid(struct iommu_domain *domain,
+				   struct device *dev, ioasid_t id)
+{
+	struct arm_smmu_bond *bond = NULL, *t;
+	struct mm_struct *mm = iommu_sva_domain_mm(domain);
+	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+	mutex_lock(&sva_lock);
+	list_for_each_entry(t, &master->bonds, list) {
+		if (t->mm == mm) {
+			bond = t;
+			break;
+		}
+	}
+
+	if (!WARN_ON(!bond) && refcount_dec_and_test(&bond->refs)) {
+		list_del(&bond->list);
+		arm_smmu_mmu_notifier_put(bond->smmu_mn);
+		kfree(bond);
+	}
+	mutex_unlock(&sva_lock);
+}
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index afc63fce6107..bd80de0bad98 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1995,10 +1995,31 @@ static bool arm_smmu_capable(enum iommu_cap cap)
 	}
 }
 
+static void arm_smmu_sva_domain_free(struct iommu_domain *domain)
+{
+	kfree(domain);
+}
+
+static const struct iommu_domain_ops arm_smmu_sva_domain_ops = {
+	.attach_dev_pasid	= arm_smmu_sva_attach_dev_pasid,
+	.detach_dev_pasid	= arm_smmu_sva_detach_dev_pasid,
+	.free			= arm_smmu_sva_domain_free,
+};
+
 static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
 {
 	struct arm_smmu_domain *smmu_domain;
 
+	if (type == IOMMU_DOMAIN_SVA) {
+		struct iommu_domain *domain;
+
+		domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+		if (domain)
+			domain->ops = &arm_smmu_sva_domain_ops;
+
+		return domain;
+	}
+
 	if (type != IOMMU_DOMAIN_UNMANAGED &&
 	    type != IOMMU_DOMAIN_DMA &&
 	    type != IOMMU_DOMAIN_DMA_FQ &&
-- 
2.25.1