From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <8ce8ba06-acb9-400f-acee-ef0dbd023dc1@huawei.com>
Date: Tue, 5 Nov 2024 10:59:54 +0800
From: Jijie Shao
To: , ,
CC: , Salil Mehta , Andrew Lunn , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Subject: Re: Patch "net: hns3: add sync command to sync io-pgtable" has been added to the 6.11-stable tree
X-Mailing-List: stable@vger.kernel.org
References: <20241101192250.3849110-1-sashal@kernel.org>
In-Reply-To: <20241101192250.3849110-1-sashal@kernel.org>
Content-Type: text/plain; charset="UTF-8"; format=flowed

on 2024/11/2 3:22, Sasha Levin wrote:
> This is a note to let you know that I've just added the patch titled
>
>     net: hns3: add sync command to sync io-pgtable
>
> to the 6.11-stable tree which can be found at:
>     http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
>
> The filename of the patch is:
>     net-hns3-add-sync-command-to-sync-io-pgtable.patch
> and it can be found in the queue-6.11 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let know about it.

Hi:

This patch has been reverted in netdev, so it also needs to be reverted
from the stable tree. I am sorry for that.

Revert link:
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/commit/?id=249cfa318fb1b77eb726c2ff4f74c9685f04e568

Thanks,
Jijie Shao

>
>
> commit 0ea8c71561bc40a678c7bf15e081737e1f2d15e2
> Author: Jian Shen
> Date:   Fri Oct 25 17:29:31 2024 +0800
>
>     net: hns3: add sync command to sync io-pgtable
>
>     [ Upstream commit f2c14899caba76da93ff3fff46b4d5a8f43ce07e ]
>
>     To avoid errors in pgtable prefetch, add a sync command to sync
>     io-pgtable.
>
>     This is a supplement for the previous patch.
>     We want all the tx packets to be handled via the tx bounce buffer path.
>     But it depends on the remaining space of the spare buffer, checked by
>     hns3_can_use_tx_bounce(). In most cases, maybe 99.99%, it returns true.
>     But once it returns false due to no available space, the packet will be
>     handled with the former path, which will map/unmap the skb buffer.
>     Then the driver will face the smmu prefetch risk again.
>
>     So add a sync command in this case to avoid smmu prefetch,
>     just to protect corner cases.
>
>     Fixes: 295ba232a8c3 ("net: hns3: add device version to replace pci revision")
>     Signed-off-by: Jian Shen
>     Signed-off-by: Peiyang Wang
>     Signed-off-by: Jijie Shao
>     Signed-off-by: Paolo Abeni
>     Signed-off-by: Sasha Levin
>
> diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
> index ac88e301f2211..8760b4e9ade6b 100644
> --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
> +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
> @@ -381,6 +381,24 @@ static const struct hns3_rx_ptype hns3_rx_ptype_tbl[] = {
>  #define HNS3_INVALID_PTYPE \
>  		ARRAY_SIZE(hns3_rx_ptype_tbl)
>
> +static void hns3_dma_map_sync(struct device *dev, unsigned long iova)
> +{
> +	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
> +	struct iommu_iotlb_gather iotlb_gather;
> +	size_t granule;
> +
> +	if (!domain || !iommu_is_dma_domain(domain))
> +		return;
> +
> +	granule = 1 << __ffs(domain->pgsize_bitmap);
> +	iova = ALIGN_DOWN(iova, granule);
> +	iotlb_gather.start = iova;
> +	iotlb_gather.end = iova + granule - 1;
> +	iotlb_gather.pgsize = granule;
> +
> +	iommu_iotlb_sync(domain, &iotlb_gather);
> +}
> +
>  static irqreturn_t hns3_irq_handle(int irq, void *vector)
>  {
>  	struct hns3_enet_tqp_vector *tqp_vector = vector;
> @@ -1728,7 +1746,9 @@ static int hns3_map_and_fill_desc(struct hns3_enet_ring *ring, void *priv,
>  				  unsigned int type)
>  {
>  	struct hns3_desc_cb *desc_cb = &ring->desc_cb[ring->next_to_use];
> +	struct hnae3_handle *handle = ring->tqp->handle;
>  	struct device *dev = ring_to_dev(ring);
> +	struct hnae3_ae_dev *ae_dev;
>  	unsigned int size;
>  	dma_addr_t dma;
>
> @@ -1760,6 +1780,13 @@ static int hns3_map_and_fill_desc(struct hns3_enet_ring *ring, void *priv,
>  		return -ENOMEM;
>  	}
>
> +	/* Add a SYNC command to sync io-pgtable to avoid errors in pgtable
> +	 * prefetch
> +	 */
> +	ae_dev = hns3_get_ae_dev(handle);
> +	if (ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V3)
> +		hns3_dma_map_sync(dev, dma);
> +
>  	desc_cb->priv = priv;
>  	desc_cb->length = size;
>  	desc_cb->dma = dma;