Date: Tue, 21 Sep 2021 17:09:28 +0100
From: Jean-Philippe Brucker
To: Vivek Gautam
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 iommu@lists.linux-foundation.org, virtualization@lists.linux-foundation.org,
 joro@8bytes.org, will.deacon@arm.com, mst@redhat.com, robin.murphy@arm.com,
 eric.auger@redhat.com, kevin.tian@intel.com, jacob.jun.pan@linux.intel.com,
 yi.l.liu@intel.com, Lorenzo.Pieralisi@arm.com,
 shameerali.kolothum.thodi@huawei.com
Subject: Re: [PATCH RFC v1 09/11] iommu/virtio: Implement sva bind/unbind calls
References: <20210423095147.27922-1-vivek.gautam@arm.com>
 <20210423095147.27922-10-vivek.gautam@arm.com>
In-Reply-To: <20210423095147.27922-10-vivek.gautam@arm.com>

On Fri, Apr 23, 2021 at 03:21:45PM +0530, Vivek Gautam wrote:
> SVA bind and unbind implementations will allow to prepare translation
> context with CPU page tables that can be programmed into host iommu
> hardware to realize shared address space utilization between the CPU
> and virtualized devices using virtio-iommu.
> 
> Signed-off-by: Vivek Gautam
> ---
>  drivers/iommu/virtio-iommu.c      | 199 +++++++++++++++++++++++++++++-
>  include/uapi/linux/virtio_iommu.h |   2 +
>  2 files changed, 199 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 250c137a211b..08f1294baeab 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -14,6 +14,9 @@
>  #include
>  #include
>  #include
> +#include
> +#include
> +#include
>  #include
>  #include
>  #include
> @@ -28,6 +31,7 @@
>  #include
>  #include "iommu-pasid-table.h"
>  #include "iommu-sva-lib.h"
> +#include "io-pgtable-arm.h"

Is this used here?

> 
>  #define MSI_IOVA_BASE			0x8000000
>  #define MSI_IOVA_LENGTH			0x100000
> @@ -41,6 +45,7 @@ DEFINE_XARRAY_ALLOC1(viommu_asid_xa);
> 
>  static DEFINE_MUTEX(sva_lock);
>  static DEFINE_MUTEX(iopf_lock);
> +static DEFINE_MUTEX(viommu_asid_lock);
> 
>  struct viommu_dev_pri_work {
>  	struct work_struct	work;
> @@ -88,10 +93,22 @@ struct viommu_mapping {
>  struct viommu_mm {
>  	int			pasid;
>  	u64			archid;
> +	struct viommu_sva_bond	*bond;
>  	struct io_pgtable_ops	*ops;
>  	struct viommu_domain	*domain;
>  };
> 
> +struct viommu_sva_bond {
> +	struct iommu_sva	sva;
> +	struct mm_struct	*mm;
> +	struct iommu_psdtable_mmu_notifier	*viommu_mn;
> +	struct list_head	list;
> +	refcount_t		refs;
> +};
> +
> +#define sva_to_viommu_bond(handle) \
> +	container_of(handle, struct viommu_sva_bond, sva)
> +
>  struct viommu_domain {
>  	struct iommu_domain	domain;
>  	struct viommu_dev	*viommu;
> @@ -136,6 +153,7 @@ struct viommu_endpoint {
>  	bool			pri_supported;
>  	bool			sva_enabled;
>  	bool			iopf_enabled;
> +	struct list_head	bonds;
>  };
> 
>  struct viommu_ep_entry {
> @@ -1423,14 +1441,15 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
> 
>  	pst_cfg->iommu_dev = viommu->dev->parent;
> 
> +	mutex_lock(&viommu_asid_lock);
>  	/* Prepare PASID tables info to allocate a new table */
>  	ret = viommu_prepare_pst(vdev, pst_cfg, fmt);
>  	if (ret)
> -		return ret;
> +		goto err_out_unlock;
> 
>  	ret = iommu_psdtable_alloc(tbl, pst_cfg);
>  	if (ret)
> -		return ret;
> +		goto err_out_unlock;
> 
>  	pst_cfg->iommu_dev = viommu->dev->parent;
>  	pst_cfg->fmt = PASID_TABLE_ARM_SMMU_V3;
> @@ -1452,6 +1471,7 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
>  		if (ret)
>  			goto err_free_ops;
>  	}
> +	mutex_unlock(&viommu_asid_lock);
>  	} else {
>  		/* TODO: otherwise, check for compatibility with vdev. */
>  		return -ENOSYS;
> @@ -1467,6 +1487,8 @@ static int viommu_attach_pasid_table(struct viommu_endpoint *vdev,
>  err_free_psdtable:
>  	iommu_psdtable_free(tbl, &tbl->cfg);
> 
> +err_out_unlock:
> +	mutex_unlock(&viommu_asid_lock);
>  	return ret;
>  }
> 
> @@ -1706,6 +1728,7 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
>  	vdev->dev = dev;
>  	vdev->viommu = viommu;
>  	INIT_LIST_HEAD(&vdev->resv_regions);
> +	INIT_LIST_HEAD(&vdev->bonds);
>  	dev_iommu_priv_set(dev, vdev);
> 
>  	if (viommu->probe_size) {
> @@ -1755,6 +1778,175 @@ static int viommu_of_xlate(struct device *dev, struct of_phandle_args *args)
>  	return iommu_fwspec_add_ids(dev, args->args, 1);
>  }
> 
> +static u32 viommu_sva_get_pasid(struct iommu_sva *handle)
> +{
> +	struct viommu_sva_bond *bond = sva_to_viommu_bond(handle);
> +
> +	return bond->mm->pasid;
> +}
> +
> +static void viommu_mmu_notifier_free(struct mmu_notifier *mn)
> +{
> +	kfree(mn_to_pstiommu(mn));
> +}
> +
> +static struct mmu_notifier_ops viommu_mmu_notifier_ops = {
> +	.free_notifier	= viommu_mmu_notifier_free,

.invalidate_range and .release will be needed as well, to keep up to
date with changes to the address space

> +};
> +
> +/* Allocate or get existing MMU notifier for this {domain, mm} pair */
> +static struct iommu_psdtable_mmu_notifier *
> +viommu_mmu_notifier_get(struct viommu_domain *vdomain, struct mm_struct *mm,
> +			u32 asid_bits)
> +{
> +	int ret;
> +	struct iommu_psdtable_mmu_notifier *viommu_mn;
> +	struct iommu_pasid_table *tbl = vdomain->pasid_tbl;
> +
> +	list_for_each_entry(viommu_mn, &tbl->mmu_notifiers, list) {
> +		if (viommu_mn->mn.mm == mm) {
> +			refcount_inc(&viommu_mn->refs);
> +			return viommu_mn;
> +		}
> +	}
> +
> +	mutex_lock(&viommu_asid_lock);
> +	viommu_mn = iommu_psdtable_alloc_shared(tbl, mm, &viommu_asid_xa,
> +						asid_bits);
> +	mutex_unlock(&viommu_asid_lock);
> +	if (IS_ERR(viommu_mn))
> +		return ERR_CAST(viommu_mn);
> +
> +	refcount_set(&viommu_mn->refs, 1);
> +	viommu_mn->cookie = vdomain;
> +	viommu_mn->mn.ops = &viommu_mmu_notifier_ops;
> +
> +	ret = mmu_notifier_register(&viommu_mn->mn, mm);
> +	if (ret)
> +		goto err_free_cd;
> +
> +	ret = iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid,
> +				   viommu_mn->vendor.cd);
> +	if (ret)
> +		goto err_put_notifier;
> +
> +	list_add(&viommu_mn->list, &tbl->mmu_notifiers);
> +	return viommu_mn;
> +
> +err_put_notifier:
> +	/* Frees viommu_mn */
> +	mmu_notifier_put(&viommu_mn->mn);
> +err_free_cd:
> +	iommu_psdtable_free_shared(tbl, &viommu_asid_xa, viommu_mn->vendor.cd);
> +	return ERR_PTR(ret);
> +}
> +
> +static void
> +viommu_mmu_notifier_put(struct iommu_psdtable_mmu_notifier *viommu_mn)
> +{
> +	struct mm_struct *mm = viommu_mn->mn.mm;
> +	struct viommu_domain *vdomain = viommu_mn->cookie;
> +	struct iommu_pasid_table *tbl = vdomain->pasid_tbl;
> +	u16 asid = viommu_mn->vendor.cd->asid;
> +
> +	if (!refcount_dec_and_test(&viommu_mn->refs))
> +		return;
> +
> +	list_del(&viommu_mn->list);
> +	iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, NULL);
> +
> +	/*
> +	 * If we went through clear(), we've already invalidated, and no
> +	 * new TLB entry can have been formed.
> +	 */
> +	if (!viommu_mn->cleared)
> +		iommu_psdtable_flush_tlb(tbl, vdomain, asid);
> +
> +	/* Frees smmu_mn */
> +	mmu_notifier_put(&viommu_mn->mn);
> +	iommu_psdtable_free_shared(tbl, &viommu_asid_xa, viommu_mn->vendor.cd);
> +}
> +
> +static struct iommu_sva *
> +__viommu_sva_bind(struct device *dev, struct mm_struct *mm)
> +{
> +	int ret;
> +	struct viommu_sva_bond *bond;
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +	struct virtio_iommu_probe_table_format *desc = vdev->pgtf;
> +
> +	if (!vdev || !vdev->sva_enabled)
> +		return ERR_PTR(-ENODEV);
> +
> +	/* If bind() was already called for this {dev, mm} pair, reuse it. */
> +	list_for_each_entry(bond, &vdev->bonds, list) {
> +		if (bond->mm == mm) {
> +			refcount_inc(&bond->refs);
> +			return &bond->sva;
> +		}
> +	}
> +
> +	bond = kzalloc(sizeof(*bond), GFP_KERNEL);
> +	if (!bond)
> +		return ERR_PTR(-ENOMEM);
> +
> +	/* Allocate a PASID for this mm if necessary */
> +	ret = iommu_sva_alloc_pasid(mm, 1, (1U << vdev->pasid_bits) - 1);
> +	if (ret)
> +		goto err_free_bond;
> +
> +	bond->mm = mm;
> +	bond->sva.dev = dev;
> +	refcount_set(&bond->refs, 1);
> +
> +	bond->viommu_mn = viommu_mmu_notifier_get(vdomain, mm, desc->asid_bits);
> +	if (IS_ERR(bond->viommu_mn)) {
> +		ret = PTR_ERR(bond->viommu_mn);
> +		goto err_free_pasid;
> +	}
> +
> +	list_add(&bond->list, &vdev->bonds);
> +	return &bond->sva;
> +
> +err_free_pasid:
> +	iommu_sva_free_pasid(mm);
> +err_free_bond:
> +	kfree(bond);
> +	return ERR_PTR(ret);
> +}
> +
> +/* closely follows arm_smmu_sva_bind() */
> +static struct iommu_sva *viommu_sva_bind(struct device *dev,
> +					 struct mm_struct *mm, void *drvdata)
> +{
> +	struct iommu_sva *handle;
> +
> +	mutex_lock(&sva_lock);
> +	handle = __viommu_sva_bind(dev, mm);
> +	mutex_unlock(&sva_lock);
> +	return handle;
> +}
> +
> +void viommu_sva_unbind(struct iommu_sva *handle)
> +{
> +	struct viommu_sva_bond *bond = sva_to_viommu_bond(handle);
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(handle->dev);
> +
> +	if (vdev->pri_supported)
> +		iopf_queue_flush_dev(handle->dev);
> +
> +	mutex_lock(&sva_lock);
> +	if (refcount_dec_and_test(&bond->refs)) {
> +		list_del(&bond->list);
> +		viommu_mmu_notifier_put(bond->viommu_mn);
> +		iommu_sva_free_pasid(bond->mm);
> +		kfree(bond);
> +	}
> +	mutex_unlock(&sva_lock);
> +}
> +
>  static bool viommu_endpoint_iopf_supported(struct viommu_endpoint *vdev)
>  {
>  	/* TODO: support Stall model later */
> @@ -1960,6 +2152,9 @@ static struct iommu_ops viommu_ops = {
>  	.dev_feat_enabled	= viommu_dev_feature_enabled,
>  	.dev_enable_feat	= viommu_dev_enable_feature,
>  	.dev_disable_feat	= viommu_dev_disable_feature,
> +	.sva_bind		= viommu_sva_bind,
> +	.sva_unbind		= viommu_sva_unbind,
> +	.sva_get_pasid		= viommu_sva_get_pasid,
>  };
> 
>  static int viommu_init_vqs(struct viommu_dev *viommu)
> diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
> index 88a3db493108..c12d9b6a7243 100644
> --- a/include/uapi/linux/virtio_iommu.h
> +++ b/include/uapi/linux/virtio_iommu.h
> @@ -122,6 +122,8 @@ struct virtio_iommu_req_attach_pst_arm {
>  #define VIRTIO_IOMMU_PGTF_ARM_HPD0		(1ULL << 41)
>  #define VIRTIO_IOMMU_PGTF_ARM_EPD1		(1 << 23)
> 
> +#define VIRTIO_IOMMU_PGTF_ARM_IPS_SHIFT	32
> +#define VIRTIO_IOMMU_PGTF_ARM_IPS_MASK	0x7

Probably not the right place for this change

Thanks,
Jean

>  #define VIRTIO_IOMMU_PGTF_ARM_TG0_SHIFT	14
>  #define VIRTIO_IOMMU_PGTF_ARM_TG0_MASK	0x3
>  #define VIRTIO_IOMMU_PGTF_ARM_SH0_SHIFT	12
> -- 
> 2.17.1
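
[Editorial note: to make the .invalidate_range/.release comment above concrete, here is a rough, untested sketch of what those two callbacks could look like, modeled on the arm-smmu-v3-sva driver and reusing this patch's helpers. mn_to_pstiommu(), iommu_psdtable_write(), iommu_psdtable_flush_tlb() and the `cleared` flag come from the series; iommu_psdtable_flush_range() is a hypothetical helper that the pasid-table library would have to provide for range invalidation.]

```c
/* Sketch only, not from the patch: invalidate TLBs for a VA range of the mm */
static void viommu_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
						 struct mm_struct *mm,
						 unsigned long start,
						 unsigned long end)
{
	struct iommu_psdtable_mmu_notifier *viommu_mn = mn_to_pstiommu(mn);
	struct viommu_domain *vdomain = viommu_mn->cookie;

	/* Hypothetical range-invalidation primitive */
	iommu_psdtable_flush_range(vdomain->pasid_tbl, vdomain,
				   viommu_mn->vendor.cd->asid, start, end);
}

/* Sketch only: the mm is going away, detach the PASID and invalidate once */
static void viommu_mmu_notifier_release(struct mmu_notifier *mn,
					struct mm_struct *mm)
{
	struct iommu_psdtable_mmu_notifier *viommu_mn = mn_to_pstiommu(mn);
	struct viommu_domain *vdomain = viommu_mn->cookie;
	struct iommu_pasid_table *tbl = vdomain->pasid_tbl;

	mutex_lock(&sva_lock);
	if (!viommu_mn->cleared) {
		/* Clear the descriptor so no new translations can be formed */
		iommu_psdtable_write(tbl, &tbl->cfg, mm->pasid, NULL);
		iommu_psdtable_flush_tlb(tbl, vdomain,
					 viommu_mn->vendor.cd->asid);
		viommu_mn->cleared = true;
	}
	mutex_unlock(&sva_lock);
}
```

Both ops would then be added to viommu_mmu_notifier_ops alongside .free_notifier.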