From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S1753699AbcIOPEa (ORCPT );
        Thu, 15 Sep 2016 11:04:30 -0400
Received: from mx1.redhat.com ([209.132.183.28]:35752 "EHLO mx1.redhat.com"
        rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
        id S1751564AbcIOPEK (ORCPT );
        Thu, 15 Sep 2016 11:04:10 -0400
From: Baoquan He <bhe@redhat.com>
To: joro@8bytes.org
Cc: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
        kexec@lists.infradead.org, dyoung@redhat.com, xlpang@redhat.com,
        Vincent.Wan@amd.com, Baoquan He <bhe@redhat.com>
Subject: [PATCH v5 8/8] iommu/amd: Update domain info to dte entry during device driver init
Date: Thu, 15 Sep 2016 23:03:26 +0800
Message-Id: <1473951806-25511-9-git-send-email-bhe@redhat.com>
In-Reply-To: <1473951806-25511-1-git-send-email-bhe@redhat.com>
References: <1473951806-25511-1-git-send-email-bhe@redhat.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
        (mx1.redhat.com [10.5.110.28]);
        Thu, 15 Sep 2016 15:04:05 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

All devices are supposed to reset themselves at device driver initialization
stage. In a kdump kernel, any in-flight DMA left over from the crashed kernel
is stopped by that device reset, so this is the best time to update the
protection domain info, especially pte_root, in the dte entry the device
relates to.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 drivers/iommu/amd_iommu.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 6c37300..00b64ee 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2310,6 +2310,10 @@ static dma_addr_t __map_single(struct device *dev,
 	unsigned int pages;
 	int prot = 0;
 	int i;
+	struct iommu_dev_data *dev_data = get_dev_data(dev);
+	struct protection_domain *domain = get_domain(dev);
+	u16 alias = amd_iommu_alias_table[dev_data->devid];
+	struct amd_iommu *iommu = amd_iommu_rlookup_table[dev_data->devid];
 
 	pages = iommu_num_pages(paddr, size, PAGE_SIZE);
 	paddr &= PAGE_MASK;
@@ -2319,6 +2323,13 @@ static dma_addr_t __map_single(struct device *dev,
 		goto out;
 
 	prot = dir2prot(direction);
+	if (translation_pre_enabled(iommu) && !dev_data->domain_updated) {
+		dev_data->domain_updated = true;
+		set_dte_entry(dev_data->devid, domain, dev_data->ats.enabled);
+		if (alias != dev_data->devid)
+			set_dte_entry(alias, domain, dev_data->ats.enabled);
+		device_flush_dte(dev_data);
+	}
 
 	start = address;
 	for (i = 0; i < pages; ++i) {
@@ -2470,6 +2481,9 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
 	struct scatterlist *s;
 	unsigned long address;
 	u64 dma_mask;
+	struct iommu_dev_data *dev_data = get_dev_data(dev);
+	u16 alias = amd_iommu_alias_table[dev_data->devid];
+	struct amd_iommu *iommu = amd_iommu_rlookup_table[dev_data->devid];
 
 	domain = get_domain(dev);
 	if (IS_ERR(domain))
@@ -2485,6 +2499,13 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
 		goto out_err;
 
 	prot = dir2prot(direction);
+	if (translation_pre_enabled(iommu) && !dev_data->domain_updated) {
+		dev_data->domain_updated = true;
+		set_dte_entry(dev_data->devid, domain, dev_data->ats.enabled);
+		if (alias != dev_data->devid)
+			set_dte_entry(alias, domain, dev_data->ats.enabled);
+		device_flush_dte(dev_data);
+	}
 
 	/* Map all sg entries */
 	for_each_sg(sglist, s, nelems, i) {
-- 
2.5.5
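
For reference, the hunk added to __map_single() and the one added to map_sg()
are identical. A possible way to express that logic once is sketched below;
this is only an illustration, not part of the patch, and the helper name
update_pre_enabled_dte() is made up here. It relies on the same amd_iommu
internals the patch already uses (translation_pre_enabled(), set_dte_entry(),
device_flush_dte(), amd_iommu_alias_table[], amd_iommu_rlookup_table[] and the
dev_data->domain_updated flag introduced earlier in this series), so it only
makes sense inside drivers/iommu/amd_iommu.c:

/*
 * Illustrative helper (not in the patch): update the DTE of a device, and of
 * its alias if it has one, the first time it maps DMA after the driver has
 * reset it. Only relevant when the IOMMU came up with translation
 * pre-enabled, i.e. in a kdump kernel that inherited a live translation.
 */
static void update_pre_enabled_dte(struct iommu_dev_data *dev_data,
				   struct protection_domain *domain)
{
	struct amd_iommu *iommu = amd_iommu_rlookup_table[dev_data->devid];
	u16 alias = amd_iommu_alias_table[dev_data->devid];

	if (!translation_pre_enabled(iommu) || dev_data->domain_updated)
		return;

	dev_data->domain_updated = true;
	set_dte_entry(dev_data->devid, domain, dev_data->ats.enabled);
	if (alias != dev_data->devid)
		set_dte_entry(alias, domain, dev_data->ats.enabled);
	device_flush_dte(dev_data);
}

Both DMA paths would then call update_pre_enabled_dte(dev_data, domain) right
after dir2prot(), keeping the "update the DTE once, after the device has been
reset" logic in a single place.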