linux-crypto.vger.kernel.org archive mirror
* [PATCH v5 0/4] Add host kdump support for SNP
@ 2025-07-30 21:55 Ashish Kalra
  2025-07-30 21:55 ` [PATCH v5 1/4] iommu/amd: Add support to remap/unmap IOMMU buffers for kdump Ashish Kalra
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Ashish Kalra @ 2025-07-30 21:55 UTC (permalink / raw)
  To: joro, suravee.suthikulpanit, thomas.lendacky, Sairaj.ArunKodilkar,
	Vasant.Hegde, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm

From: Ashish Kalra <ashish.kalra@amd.com>

When a crash is triggered, the kernel attempts to shut down SEV-SNP
using the SNP_SHUTDOWN_EX command. If active SNP VMs are present,
SNP_SHUTDOWN_EX fails because firmware checks all encryption-capable
ASIDs to ensure none are in use and that a DF_FLUSH is not required.

This causes the kdump kernel to boot with IOMMU SNP enforcement still
enabled, and the IOMMU completion wait buffer (CWB), command buffer,
device table and event buffer registers remain locked and exclusive
to the previous kernel. Attempts to allocate and use new buffers in
the kdump kernel fail, as the hardware ignores writes to the locked
MMIO registers (per AMD IOMMU spec Section 2.12.2.1).

As a result, the kdump kernel cannot initialize the IOMMU or enable IRQ
remapping which is required for proper operation.

This results in repeated "Completion-Wait loop timed out" errors and a
second kernel panic: "Kernel panic - not syncing: timer doesn't work
through Interrupt-remapped IO-APIC"

The MMIO registers that are locked and ignore writes after a failed
SNP shutdown are listed in the AMD IOMMU specification below:

Section 2.12.2.1.
https://docs.amd.com/v/u/en-US/48882_3.10_PUB

Instead of allocating new buffers, re-use the previous kernel's pages
for the completion wait buffers, command buffers, event buffers and
device tables, and operate with the already enabled SNP configuration
and existing data structures.

This approach is now used for kdump boot regardless of whether SNP is
enabled during kdump.

The patch-series enables successful crashkernel/kdump operation on SNP
hosts even when SNP_SHUTDOWN_EX fails.
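
As a rough illustration of the approach taken by the patches below (a
simplified sketch only, not the literal patch code; kdump_remap_buffer() is
a made-up name, while readq(), PM_ADDR_MASK, __sme_clr() and memremap() are
the existing kernel helpers the patches use):

static void *kdump_remap_buffer(struct amd_iommu *iommu, u32 offset,
				size_t size)
{
	u64 paddr;

	/* The locked MMIO register still holds the previous kernel's buffer address. */
	paddr = readq(iommu->mmio_base + offset) & PM_ADDR_MASK;

	/* Strip the SME encryption bit before remapping the old buffer. */
	return memremap(__sme_clr(paddr), size, MEMREMAP_WB);
}

The actual helper in patch 1 additionally uses ioremap_encrypted() instead
of memremap() when host memory encryption (SME) is active.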

v5:
- Fix sparse build warnings by using (__force void *) to cast the
  (void __iomem *) return of ioremap_encrypted() to (void *)
  in iommu_memremap().
- Add Tested-by tags.

v4:
- Fix commit logs.
- Explicitly call ioremap_encrypted() if SME is enabled and memremap()
otherwise in iommu_memremap().
- Rename remap_cwwb_sem() to remap_or_alloc_cwwb_sem().
- Fix inline comments.
- Skip both SEV and SNP INIT for kdump boot.
- Add a BUG_ON() if reuse_device_table() fails in case of SNP enabled.
- Drop "Fixes:" tag as this patch-series enables host kdump for SNP.

v3:
- Move to an AMD IOMMU driver fix so that there is no need to do SNP_DECOMMISSION
during panic(), and kdump kernel boot is agnostic to
whether or not SNP_SHUTDOWN was done properly (or even done at all),
i.e., even with active SNP guests. Fix crashkernel/kdump boot with IOMMU SNP/RMP
enforcement still enabled prior to kdump boot by reusing the pages of the previous
kernel for the IOMMU completion wait buffers, command buffer and device table, and
memremapping them during kdump boot.
- Rebased on linux-next.
- Split the original patch into smaller patches: separate patches for
adding the iommu_memremap() helper and remapping/unmapping of
IOMMU buffers for kdump, reusing the device table for kdump, and skipping
the enabling of IOMMU buffers for kdump.
- Add new functions for remapping/unmapping IOMMU buffers and call
them from alloc_iommu_buffers()/free_iommu_buffers() in case of kdump boot,
else call the existing alloc/free variants of the CWB, command and event buffers.
- Skip SNP INIT in case of kdump boot.
- The final patch skips enabling IOMMU command buffer and event buffer
for kdump boot which fixes kdump on SNP host.
- Add comment that completion wait buffers are only re-used when SNP is
enabled.

Ashish Kalra (4):
  iommu/amd: Add support to remap/unmap IOMMU buffers for kdump
  iommu/amd: Reuse device table for kdump
  crypto: ccp: Skip SEV and SNP INIT for kdump boot
  iommu/amd: Skip enabling command/event buffers for kdump

 drivers/crypto/ccp/sev-dev.c        |   8 +
 drivers/iommu/amd/amd_iommu_types.h |   5 +
 drivers/iommu/amd/init.c            | 298 +++++++++++++++++++---------
 drivers/iommu/amd/iommu.c           |   2 +-
 4 files changed, 221 insertions(+), 92 deletions(-)

-- 
2.34.1



* [PATCH v5 1/4] iommu/amd: Add support to remap/unmap IOMMU buffers for kdump
  2025-07-30 21:55 [PATCH v5 0/4] Add host kdump support for SNP Ashish Kalra
@ 2025-07-30 21:55 ` Ashish Kalra
  2025-08-21 11:29   ` Vasant Hegde
  2025-07-30 21:56 ` [PATCH v5 2/4] iommu/amd: Reuse device table " Ashish Kalra
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 9+ messages in thread
From: Ashish Kalra @ 2025-07-30 21:55 UTC (permalink / raw)
  To: joro, suravee.suthikulpanit, thomas.lendacky, Sairaj.ArunKodilkar,
	Vasant.Hegde, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm

From: Ashish Kalra <ashish.kalra@amd.com>

After a panic, if SNP was enabled in the previous kernel, the kdump
kernel boots with IOMMU SNP enforcement still enabled.

The IOMMU completion wait buffer (CWB), command buffer and event buffer
registers remain locked and exclusive to the previous kernel. Attempts
to allocate and use new buffers in the kdump kernel fail, as hardware
ignores writes to the locked MMIO registers as per AMD IOMMU spec
Section 2.12.2.1.

This results in repeated "Completion-Wait loop timed out" errors and a
second kernel panic: "Kernel panic - not syncing: timer doesn't work
through Interrupt-remapped IO-APIC"

The MMIO registers that are locked and ignore writes after a failed
SNP shutdown are listed in the AMD IOMMU specification below:

Section 2.12.2.1.
https://docs.amd.com/v/u/en-US/48882_3.10_PUB

Reuse the previous kernel's pages for the completion wait buffers,
command buffers and event buffers, memremap them during kdump boot,
and essentially operate with the already enabled IOMMU configuration,
re-using the previous kernel's data structures.

Command buffer and event buffer reuse is now done for kdump boot
irrespective of whether SNP is enabled during kdump.

Re-use of completion wait buffers is only done when SNP is enabled as
the exclusion base register is used for the completion wait buffer
(CWB) address only when SNP is enabled.

Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 drivers/iommu/amd/amd_iommu_types.h |   5 +
 drivers/iommu/amd/init.c            | 164 ++++++++++++++++++++++++++--
 drivers/iommu/amd/iommu.c           |   2 +-
 3 files changed, 158 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
index 5219d7ddfdaa..8a863cae99db 100644
--- a/drivers/iommu/amd/amd_iommu_types.h
+++ b/drivers/iommu/amd/amd_iommu_types.h
@@ -791,6 +791,11 @@ struct amd_iommu {
 	u32 flags;
 	volatile u64 *cmd_sem;
 	atomic64_t cmd_sem_val;
+	/*
+	 * Track physical address to directly use it in build_completion_wait()
+	 * and avoid adding any special checks and handling for kdump.
+	 */
+	u64 cmd_sem_paddr;
 
 #ifdef CONFIG_AMD_IOMMU_DEBUGFS
 	/* DebugFS Info */
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 7b5af6176de9..aae1aa7723a5 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -710,6 +710,26 @@ static void __init free_alias_table(struct amd_iommu_pci_seg *pci_seg)
 	pci_seg->alias_table = NULL;
 }
 
+static inline void *iommu_memremap(unsigned long paddr, size_t size)
+{
+	phys_addr_t phys;
+
+	if (!paddr)
+		return NULL;
+
+	/*
+	 * Obtain true physical address in kdump kernel when SME is enabled.
+	 * Currently, previous kernel with SME enabled and kdump kernel
+	 * with SME support disabled is not supported.
+	 */
+	phys = __sme_clr(paddr);
+
+	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
+		return (__force void *)ioremap_encrypted(phys, size);
+	else
+		return memremap(phys, size, MEMREMAP_WB);
+}
+
 /*
  * Allocates the command buffer. This buffer is per AMD IOMMU. We can
  * write commands to that buffer later and the IOMMU will execute them
@@ -942,8 +962,103 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
 static int __init alloc_cwwb_sem(struct amd_iommu *iommu)
 {
 	iommu->cmd_sem = iommu_alloc_4k_pages(iommu, GFP_KERNEL, 1);
+	if (!iommu->cmd_sem)
+		return -ENOMEM;
+	iommu->cmd_sem_paddr = iommu_virt_to_phys((void *)iommu->cmd_sem);
+	return 0;
+}
+
+static int __init remap_event_buffer(struct amd_iommu *iommu)
+{
+	u64 paddr;
+
+	pr_info_once("Re-using event buffer from the previous kernel\n");
+	/*
+	 * Read-back the event log base address register and apply
+	 * PM_ADDR_MASK to obtain the event log base address.
+	 */
+	paddr = readq(iommu->mmio_base + MMIO_EVT_BUF_OFFSET) & PM_ADDR_MASK;
+	iommu->evt_buf = iommu_memremap(paddr, EVT_BUFFER_SIZE);
 
-	return iommu->cmd_sem ? 0 : -ENOMEM;
+	return iommu->evt_buf ? 0 : -ENOMEM;
+}
+
+static int __init remap_command_buffer(struct amd_iommu *iommu)
+{
+	u64 paddr;
+
+	pr_info_once("Re-using command buffer from the previous kernel\n");
+	/*
+	 * Read-back the command buffer base address register and apply
+	 * PM_ADDR_MASK to obtain the command buffer base address.
+	 */
+	paddr = readq(iommu->mmio_base + MMIO_CMD_BUF_OFFSET) & PM_ADDR_MASK;
+	iommu->cmd_buf = iommu_memremap(paddr, CMD_BUFFER_SIZE);
+
+	return iommu->cmd_buf ? 0 : -ENOMEM;
+}
+
+static int __init remap_or_alloc_cwwb_sem(struct amd_iommu *iommu)
+{
+	u64 paddr;
+
+	if (check_feature(FEATURE_SNP)) {
+		/*
+		 * When SNP is enabled, the exclusion base register is used for the
+		 * completion wait buffer (CWB) address. Read and re-use it.
+		 */
+		pr_info_once("Re-using CWB buffers from the previous kernel\n");
+		/*
+		 * Read-back the exclusion base register and apply PM_ADDR_MASK
+		 * to obtain the exclusion range base address.
+		 */
+		paddr = readq(iommu->mmio_base + MMIO_EXCL_BASE_OFFSET) & PM_ADDR_MASK;
+		iommu->cmd_sem = iommu_memremap(paddr, PAGE_SIZE);
+		if (!iommu->cmd_sem)
+			return -ENOMEM;
+		iommu->cmd_sem_paddr = paddr;
+	} else {
+		return alloc_cwwb_sem(iommu);
+	}
+
+	return 0;
+}
+
+static int __init alloc_iommu_buffers(struct amd_iommu *iommu)
+{
+	int ret;
+
+	/*
+	 * Reuse/Remap the previous kernel's allocated completion wait
+	 * command and event buffers for kdump boot.
+	 */
+	if (is_kdump_kernel()) {
+		ret = remap_or_alloc_cwwb_sem(iommu);
+		if (ret)
+			return ret;
+
+		ret = remap_command_buffer(iommu);
+		if (ret)
+			return ret;
+
+		ret = remap_event_buffer(iommu);
+		if (ret)
+			return ret;
+	} else {
+		ret = alloc_cwwb_sem(iommu);
+		if (ret)
+			return ret;
+
+		ret = alloc_command_buffer(iommu);
+		if (ret)
+			return ret;
+
+		ret = alloc_event_buffer(iommu);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
 }
 
 static void __init free_cwwb_sem(struct amd_iommu *iommu)
@@ -951,6 +1066,38 @@ static void __init free_cwwb_sem(struct amd_iommu *iommu)
 	if (iommu->cmd_sem)
 		iommu_free_pages((void *)iommu->cmd_sem);
 }
+static void __init unmap_cwwb_sem(struct amd_iommu *iommu)
+{
+	if (iommu->cmd_sem) {
+		if (check_feature(FEATURE_SNP))
+			memunmap((void *)iommu->cmd_sem);
+		else
+			iommu_free_pages((void *)iommu->cmd_sem);
+	}
+}
+
+static void __init unmap_command_buffer(struct amd_iommu *iommu)
+{
+	memunmap((void *)iommu->cmd_buf);
+}
+
+static void __init unmap_event_buffer(struct amd_iommu *iommu)
+{
+	memunmap(iommu->evt_buf);
+}
+
+static void __init free_iommu_buffers(struct amd_iommu *iommu)
+{
+	if (is_kdump_kernel()) {
+		unmap_cwwb_sem(iommu);
+		unmap_command_buffer(iommu);
+		unmap_event_buffer(iommu);
+	} else {
+		free_cwwb_sem(iommu);
+		free_command_buffer(iommu);
+		free_event_buffer(iommu);
+	}
+}
 
 static void iommu_enable_xt(struct amd_iommu *iommu)
 {
@@ -1655,9 +1802,7 @@ static void __init free_sysfs(struct amd_iommu *iommu)
 static void __init free_iommu_one(struct amd_iommu *iommu)
 {
 	free_sysfs(iommu);
-	free_cwwb_sem(iommu);
-	free_command_buffer(iommu);
-	free_event_buffer(iommu);
+	free_iommu_buffers(iommu);
 	amd_iommu_free_ppr_log(iommu);
 	free_ga_log(iommu);
 	iommu_unmap_mmio_space(iommu);
@@ -1821,14 +1966,9 @@ static int __init init_iommu_one_late(struct amd_iommu *iommu)
 {
 	int ret;
 
-	if (alloc_cwwb_sem(iommu))
-		return -ENOMEM;
-
-	if (alloc_command_buffer(iommu))
-		return -ENOMEM;
-
-	if (alloc_event_buffer(iommu))
-		return -ENOMEM;
+	ret = alloc_iommu_buffers(iommu);
+	if (ret)
+		return ret;
 
 	iommu->int_enabled = false;
 
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index eb348c63a8d0..05a9ab3da1a3 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1195,7 +1195,7 @@ static void build_completion_wait(struct iommu_cmd *cmd,
 				  struct amd_iommu *iommu,
 				  u64 data)
 {
-	u64 paddr = iommu_virt_to_phys((void *)iommu->cmd_sem);
+	u64 paddr = iommu->cmd_sem_paddr;
 
 	memset(cmd, 0, sizeof(*cmd));
 	cmd->data[0] = lower_32_bits(paddr) | CMD_COMPL_WAIT_STORE_MASK;
-- 
2.34.1



* [PATCH v5 2/4] iommu/amd: Reuse device table for kdump
  2025-07-30 21:55 [PATCH v5 0/4] Add host kdump support for SNP Ashish Kalra
  2025-07-30 21:55 ` [PATCH v5 1/4] iommu/amd: Add support to remap/unmap IOMMU buffers for kdump Ashish Kalra
@ 2025-07-30 21:56 ` Ashish Kalra
  2025-08-21 11:45   ` Vasant Hegde
  2025-07-30 21:56 ` [PATCH v5 3/4] crypto: ccp: Skip SEV and SNP INIT for kdump boot Ashish Kalra
  2025-07-30 21:56 ` [PATCH v5 4/4] iommu/amd: Skip enabling command/event buffers for kdump Ashish Kalra
  3 siblings, 1 reply; 9+ messages in thread
From: Ashish Kalra @ 2025-07-30 21:56 UTC (permalink / raw)
  To: joro, suravee.suthikulpanit, thomas.lendacky, Sairaj.ArunKodilkar,
	Vasant.Hegde, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm

From: Ashish Kalra <ashish.kalra@amd.com>

After a panic, if SNP was enabled in the previous kernel, the kdump
kernel boots with IOMMU SNP enforcement still enabled.

The IOMMU device table base address register is locked and exclusive
to the previous kernel. Attempts to copy the old device table from the
previous kernel fail in the kdump kernel, as hardware ignores writes to
the locked device table base address register as per AMD IOMMU spec
Section 2.12.2.1.

This results in repeated "Completion-Wait loop timed out" errors and a
second kernel panic: "Kernel panic - not syncing: timer doesn't work
through Interrupt-remapped IO-APIC".

Reuse the device table instead of copying it in case of kdump boot and
remove all the device table copying code.
 
Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 drivers/iommu/amd/init.c | 106 +++++++++++++--------------------------
 1 file changed, 36 insertions(+), 70 deletions(-)

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index aae1aa7723a5..05d9c1764883 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -406,6 +406,9 @@ static void iommu_set_device_table(struct amd_iommu *iommu)
 
 	BUG_ON(iommu->mmio_base == NULL);
 
+	if (is_kdump_kernel())
+		return;
+
 	entry = iommu_virt_to_phys(dev_table);
 	entry |= (dev_table_size >> 12) - 1;
 	memcpy_toio(iommu->mmio_base + MMIO_DEV_TABLE_OFFSET,
@@ -646,7 +649,10 @@ static inline int __init alloc_dev_table(struct amd_iommu_pci_seg *pci_seg)
 
 static inline void free_dev_table(struct amd_iommu_pci_seg *pci_seg)
 {
-	iommu_free_pages(pci_seg->dev_table);
+	if (is_kdump_kernel())
+		memunmap((void *)pci_seg->dev_table);
+	else
+		iommu_free_pages(pci_seg->dev_table);
 	pci_seg->dev_table = NULL;
 }
 
@@ -1129,15 +1135,12 @@ static void set_dte_bit(struct dev_table_entry *dte, u8 bit)
 	dte->data[i] |= (1UL << _bit);
 }
 
-static bool __copy_device_table(struct amd_iommu *iommu)
+static bool __reuse_device_table(struct amd_iommu *iommu)
 {
-	u64 int_ctl, int_tab_len, entry = 0;
 	struct amd_iommu_pci_seg *pci_seg = iommu->pci_seg;
-	struct dev_table_entry *old_devtb = NULL;
-	u32 lo, hi, devid, old_devtb_size;
+	u32 lo, hi, old_devtb_size;
 	phys_addr_t old_devtb_phys;
-	u16 dom_id, dte_v, irq_v;
-	u64 tmp;
+	u64 entry;
 
 	/* Each IOMMU use separate device table with the same size */
 	lo = readl(iommu->mmio_base + MMIO_DEV_TABLE_OFFSET);
@@ -1162,66 +1165,22 @@ static bool __copy_device_table(struct amd_iommu *iommu)
 		pr_err("The address of old device table is above 4G, not trustworthy!\n");
 		return false;
 	}
-	old_devtb = (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) && is_kdump_kernel())
-		    ? (__force void *)ioremap_encrypted(old_devtb_phys,
-							pci_seg->dev_table_size)
-		    : memremap(old_devtb_phys, pci_seg->dev_table_size, MEMREMAP_WB);
-
-	if (!old_devtb)
-		return false;
 
-	pci_seg->old_dev_tbl_cpy = iommu_alloc_pages_sz(
-		GFP_KERNEL | GFP_DMA32, pci_seg->dev_table_size);
+	/*
+	 * IOMMU Device Table Base Address MMIO register is locked
+	 * if SNP is enabled during kdump, reuse the previous kernel's
+	 * device table.
+	 */
+	pci_seg->old_dev_tbl_cpy = iommu_memremap(old_devtb_phys, pci_seg->dev_table_size);
 	if (pci_seg->old_dev_tbl_cpy == NULL) {
-		pr_err("Failed to allocate memory for copying old device table!\n");
-		memunmap(old_devtb);
+		pr_err("Failed to remap memory for reusing old device table!\n");
 		return false;
 	}
 
-	for (devid = 0; devid <= pci_seg->last_bdf; ++devid) {
-		pci_seg->old_dev_tbl_cpy[devid] = old_devtb[devid];
-		dom_id = old_devtb[devid].data[1] & DEV_DOMID_MASK;
-		dte_v = old_devtb[devid].data[0] & DTE_FLAG_V;
-
-		if (dte_v && dom_id) {
-			pci_seg->old_dev_tbl_cpy[devid].data[0] = old_devtb[devid].data[0];
-			pci_seg->old_dev_tbl_cpy[devid].data[1] = old_devtb[devid].data[1];
-			/* Reserve the Domain IDs used by previous kernel */
-			if (ida_alloc_range(&pdom_ids, dom_id, dom_id, GFP_ATOMIC) != dom_id) {
-				pr_err("Failed to reserve domain ID 0x%x\n", dom_id);
-				memunmap(old_devtb);
-				return false;
-			}
-			/* If gcr3 table existed, mask it out */
-			if (old_devtb[devid].data[0] & DTE_FLAG_GV) {
-				tmp = (DTE_GCR3_30_15 | DTE_GCR3_51_31);
-				pci_seg->old_dev_tbl_cpy[devid].data[1] &= ~tmp;
-				tmp = (DTE_GCR3_14_12 | DTE_FLAG_GV);
-				pci_seg->old_dev_tbl_cpy[devid].data[0] &= ~tmp;
-			}
-		}
-
-		irq_v = old_devtb[devid].data[2] & DTE_IRQ_REMAP_ENABLE;
-		int_ctl = old_devtb[devid].data[2] & DTE_IRQ_REMAP_INTCTL_MASK;
-		int_tab_len = old_devtb[devid].data[2] & DTE_INTTABLEN_MASK;
-		if (irq_v && (int_ctl || int_tab_len)) {
-			if ((int_ctl != DTE_IRQ_REMAP_INTCTL) ||
-			    (int_tab_len != DTE_INTTABLEN_512 &&
-			     int_tab_len != DTE_INTTABLEN_2K)) {
-				pr_err("Wrong old irq remapping flag: %#x\n", devid);
-				memunmap(old_devtb);
-				return false;
-			}
-
-			pci_seg->old_dev_tbl_cpy[devid].data[2] = old_devtb[devid].data[2];
-		}
-	}
-	memunmap(old_devtb);
-
 	return true;
 }
 
-static bool copy_device_table(void)
+static bool reuse_device_table(void)
 {
 	struct amd_iommu *iommu;
 	struct amd_iommu_pci_seg *pci_seg;
@@ -1229,17 +1188,17 @@ static bool copy_device_table(void)
 	if (!amd_iommu_pre_enabled)
 		return false;
 
-	pr_warn("Translation is already enabled - trying to copy translation structures\n");
+	pr_warn("Translation is already enabled - trying to reuse translation structures\n");
 
 	/*
 	 * All IOMMUs within PCI segment shares common device table.
-	 * Hence copy device table only once per PCI segment.
+	 * Hence reuse device table only once per PCI segment.
 	 */
 	for_each_pci_segment(pci_seg) {
 		for_each_iommu(iommu) {
 			if (pci_seg->id != iommu->pci_seg->id)
 				continue;
-			if (!__copy_device_table(iommu))
+			if (!__reuse_device_table(iommu))
 				return false;
 			break;
 		}
@@ -2918,8 +2877,8 @@ static void early_enable_iommu(struct amd_iommu *iommu)
  * This function finally enables all IOMMUs found in the system after
  * they have been initialized.
  *
- * Or if in kdump kernel and IOMMUs are all pre-enabled, try to copy
- * the old content of device table entries. Not this case or copy failed,
+ * Or if in kdump kernel and IOMMUs are all pre-enabled, try to reuse
+ * the old content of device table entries. Not this case or reuse failed,
  * just continue as normal kernel does.
  */
 static void early_enable_iommus(void)
@@ -2927,18 +2886,25 @@ static void early_enable_iommus(void)
 	struct amd_iommu *iommu;
 	struct amd_iommu_pci_seg *pci_seg;
 
-	if (!copy_device_table()) {
+	if (!reuse_device_table()) {
 		/*
-		 * If come here because of failure in copying device table from old
+		 * If come here because of failure in reusing device table from old
 		 * kernel with all IOMMUs enabled, print error message and try to
 		 * free allocated old_dev_tbl_cpy.
 		 */
-		if (amd_iommu_pre_enabled)
-			pr_err("Failed to copy DEV table from previous kernel.\n");
+		if (amd_iommu_pre_enabled) {
+			pr_err("Failed to reuse DEV table from previous kernel.\n");
+			/*
+			 * Bail out early if unable to remap/reuse DEV table from
+			 * previous kernel if SNP enabled as IOMMU commands will
+			 * time out without DEV table and cause kdump boot panic.
+			 */
+			BUG_ON(check_feature(FEATURE_SNP));
+		}
 
 		for_each_pci_segment(pci_seg) {
 			if (pci_seg->old_dev_tbl_cpy != NULL) {
-				iommu_free_pages(pci_seg->old_dev_tbl_cpy);
+				memunmap((void *)pci_seg->old_dev_tbl_cpy);
 				pci_seg->old_dev_tbl_cpy = NULL;
 			}
 		}
@@ -2948,7 +2914,7 @@ static void early_enable_iommus(void)
 			early_enable_iommu(iommu);
 		}
 	} else {
-		pr_info("Copied DEV table from previous kernel.\n");
+		pr_info("Reused DEV table from previous kernel.\n");
 
 		for_each_pci_segment(pci_seg) {
 			iommu_free_pages(pci_seg->dev_table);
-- 
2.34.1



* [PATCH v5 3/4] crypto: ccp: Skip SEV and SNP INIT for kdump boot
  2025-07-30 21:55 [PATCH v5 0/4] Add host kdump support for SNP Ashish Kalra
  2025-07-30 21:55 ` [PATCH v5 1/4] iommu/amd: Add support to remap/unmap IOMMU buffers for kdump Ashish Kalra
  2025-07-30 21:56 ` [PATCH v5 2/4] iommu/amd: Reuse device table " Ashish Kalra
@ 2025-07-30 21:56 ` Ashish Kalra
  2025-08-22 16:48   ` Tom Lendacky
  2025-07-30 21:56 ` [PATCH v5 4/4] iommu/amd: Skip enabling command/event buffers for kdump Ashish Kalra
  3 siblings, 1 reply; 9+ messages in thread
From: Ashish Kalra @ 2025-07-30 21:56 UTC (permalink / raw)
  To: joro, suravee.suthikulpanit, thomas.lendacky, Sairaj.ArunKodilkar,
	Vasant.Hegde, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm

From: Ashish Kalra <ashish.kalra@amd.com>

If SNP is enabled and initialized in the previous kernel then SNP is
already initialized for kdump boot and attempting SNP INIT again
during kdump boot causes SNP INIT failure and eventually leads to
IOMMU failures.

For SEV avoid SEV INIT failure warnings during kdump boot if SEV
is enabled and initialized in the previous kernel.

Skip SNP and SEV INIT if doing kdump boot.

Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 drivers/crypto/ccp/sev-dev.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index e058ba027792..c204831ca4a6 100644
--- a/drivers/crypto/ccp/sev-dev.c
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -28,6 +28,7 @@
 #include <linux/fs_struct.h>
 #include <linux/psp.h>
 #include <linux/amd-iommu.h>
+#include <linux/crash_dump.h>
 
 #include <asm/smp.h>
 #include <asm/cacheflush.h>
@@ -1345,6 +1346,13 @@ static int _sev_platform_init_locked(struct sev_platform_init_args *args)
 	if (!psp_master || !psp_master->sev_data)
 		return -ENODEV;
 
+	/*
+	 * Skip SNP/SEV INIT for kdump boot as SEV/SNP is already initialized
+	 * in previous kernel if SEV/SNP is enabled.
+	 */
+	if (is_kdump_kernel())
+		return 0;
+
 	sev = psp_master->sev_data;
 
 	if (sev->state == SEV_STATE_INIT)
-- 
2.34.1



* [PATCH v5 4/4] iommu/amd: Skip enabling command/event buffers for kdump
  2025-07-30 21:55 [PATCH v5 0/4] Add host kdump support for SNP Ashish Kalra
                   ` (2 preceding siblings ...)
  2025-07-30 21:56 ` [PATCH v5 3/4] crypto: ccp: Skip SEV and SNP INIT for kdump boot Ashish Kalra
@ 2025-07-30 21:56 ` Ashish Kalra
  2025-08-21 11:32   ` Vasant Hegde
  3 siblings, 1 reply; 9+ messages in thread
From: Ashish Kalra @ 2025-07-30 21:56 UTC (permalink / raw)
  To: joro, suravee.suthikulpanit, thomas.lendacky, Sairaj.ArunKodilkar,
	Vasant.Hegde, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm

From: Ashish Kalra <ashish.kalra@amd.com>

After a panic, if SNP was enabled in the previous kernel, the kdump
kernel boots with IOMMU SNP enforcement still enabled.

The IOMMU command buffer and event buffer registers remain locked and
exclusive to the previous kernel. Attempts to enable the command and
event buffers in the kdump kernel will fail, as hardware ignores writes
to the locked MMIO registers as per AMD IOMMU spec Section 2.12.2.1.

Skip enabling command buffers and event buffers for kdump boot as they
are already enabled in the previous kernel.

Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
---
 drivers/iommu/amd/init.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index 05d9c1764883..4e792a112b48 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -821,11 +821,16 @@ static void iommu_enable_command_buffer(struct amd_iommu *iommu)
 
 	BUG_ON(iommu->cmd_buf == NULL);
 
-	entry = iommu_virt_to_phys(iommu->cmd_buf);
-	entry |= MMIO_CMD_SIZE_512;
-
-	memcpy_toio(iommu->mmio_base + MMIO_CMD_BUF_OFFSET,
-		    &entry, sizeof(entry));
+	if (!is_kdump_kernel()) {
+		/*
+		 * Command buffer is re-used for kdump kernel and setting
+		 * of MMIO register is not required.
+		 */
+		entry = iommu_virt_to_phys(iommu->cmd_buf);
+		entry |= MMIO_CMD_SIZE_512;
+		memcpy_toio(iommu->mmio_base + MMIO_CMD_BUF_OFFSET,
+			    &entry, sizeof(entry));
+	}
 
 	amd_iommu_reset_cmd_buffer(iommu);
 }
@@ -876,10 +881,15 @@ static void iommu_enable_event_buffer(struct amd_iommu *iommu)
 
 	BUG_ON(iommu->evt_buf == NULL);
 
-	entry = iommu_virt_to_phys(iommu->evt_buf) | EVT_LEN_MASK;
-
-	memcpy_toio(iommu->mmio_base + MMIO_EVT_BUF_OFFSET,
-		    &entry, sizeof(entry));
+	if (!is_kdump_kernel()) {
+		/*
+		 * Event buffer is re-used for kdump kernel and setting
+		 * of MMIO register is not required.
+		 */
+		entry = iommu_virt_to_phys(iommu->evt_buf) | EVT_LEN_MASK;
+		memcpy_toio(iommu->mmio_base + MMIO_EVT_BUF_OFFSET,
+			    &entry, sizeof(entry));
+	}
 
 	/* set head and tail to zero manually */
 	writel(0x00, iommu->mmio_base + MMIO_EVT_HEAD_OFFSET);
-- 
2.34.1



* Re: [PATCH v5 1/4] iommu/amd: Add support to remap/unmap IOMMU buffers for kdump
  2025-07-30 21:55 ` [PATCH v5 1/4] iommu/amd: Add support to remap/unmap IOMMU buffers for kdump Ashish Kalra
@ 2025-08-21 11:29   ` Vasant Hegde
  0 siblings, 0 replies; 9+ messages in thread
From: Vasant Hegde @ 2025-08-21 11:29 UTC (permalink / raw)
  To: Ashish Kalra, joro, suravee.suthikulpanit, thomas.lendacky,
	Sairaj.ArunKodilkar, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm

Hi Ashish,


On 7/31/2025 3:25 AM, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> After a panic if SNP is enabled in the previous kernel then the kdump
> kernel boots with IOMMU SNP enforcement still enabled.
> 
> IOMMU completion wait buffers (CWBs), command buffers and event buffer
> registers remain locked and exclusive to the previous kernel. Attempts
> to allocate and use new buffers in the kdump kernel fail, as hardware
> ignores writes to the locked MMIO registers as per AMD IOMMU spec
> Section 2.12.2.1.
> 
> This results in repeated "Completion-Wait loop timed out" errors and a
> second kernel panic: "Kernel panic - not syncing: timer doesn't work
> through Interrupt-remapped IO-APIC"
> 
> The list of MMIO registers locked and which ignore writes after failed
> SNP shutdown are mentioned in the AMD IOMMU specifications below:
> 
> Section 2.12.2.1.
> https://docs.amd.com/v/u/en-US/48882_3.10_PUB
> 
> Reuse the pages of the previous kernel for completion wait buffers,
> command buffers, event buffers and memremap them during kdump boot
> and essentially work with an already enabled IOMMU configuration and
> re-using the previous kernel’s data structures.
> 
> Reusing of command buffers and event buffers is now done for kdump boot
> irrespective of SNP being enabled during kdump.
> 
> Re-use of completion wait buffers is only done when SNP is enabled as
> the exclusion base register is used for the completion wait buffer
> (CWB) address only when SNP is enabled.
> 
> Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>

I'd still think introducing ops for the various callbacks makes more sense than
having explicit is_kdump_kernel() checks everywhere. I am fine with having a
follow-up patch to fix that.
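
Something along these lines is what I have in mind -- untested and purely
illustrative; the iommu_buf_ops struct and its member names are made up
here, only struct amd_iommu and the existing alloc_*/remap_* helpers come
from the series:

struct iommu_buf_ops {
	int (*init_cwwb_sem)(struct amd_iommu *iommu);
	int (*init_cmd_buf)(struct amd_iommu *iommu);
	int (*init_evt_buf)(struct amd_iommu *iommu);
	void (*free_bufs)(struct amd_iommu *iommu);
};

/* Points at the normal alloc ops or the kdump remap ops, chosen once at init. */
static const struct iommu_buf_ops *iommu_buf_ops;

static int __init alloc_iommu_buffers(struct amd_iommu *iommu)
{
	int ret;

	ret = iommu_buf_ops->init_cwwb_sem(iommu);
	if (ret)
		return ret;

	ret = iommu_buf_ops->init_cmd_buf(iommu);
	if (ret)
		return ret;

	return iommu_buf_ops->init_evt_buf(iommu);
}

With that and a few minor nits below: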

Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>


> ---
>  drivers/iommu/amd/amd_iommu_types.h |   5 +
>  drivers/iommu/amd/init.c            | 164 ++++++++++++++++++++++++++--
>  drivers/iommu/amd/iommu.c           |   2 +-
>  3 files changed, 158 insertions(+), 13 deletions(-)
> 
> diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
> index 5219d7ddfdaa..8a863cae99db 100644
> --- a/drivers/iommu/amd/amd_iommu_types.h
> +++ b/drivers/iommu/amd/amd_iommu_types.h
> @@ -791,6 +791,11 @@ struct amd_iommu {
>  	u32 flags;
>  	volatile u64 *cmd_sem;
>  	atomic64_t cmd_sem_val;
> +	/*
> +	 * Track physical address to directly use it in build_completion_wait()
> +	 * and avoid adding any special checks and handling for kdump.
> +	 */
> +	u64 cmd_sem_paddr;
>  
>  #ifdef CONFIG_AMD_IOMMU_DEBUGFS
>  	/* DebugFS Info */
> diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
> index 7b5af6176de9..aae1aa7723a5 100644
> --- a/drivers/iommu/amd/init.c
> +++ b/drivers/iommu/amd/init.c
> @@ -710,6 +710,26 @@ static void __init free_alias_table(struct amd_iommu_pci_seg *pci_seg)
>  	pci_seg->alias_table = NULL;
>  }
>  
> +static inline void *iommu_memremap(unsigned long paddr, size_t size)
> +{
> +	phys_addr_t phys;
> +
> +	if (!paddr)
> +		return NULL;
> +
> +	/*
> +	 * Obtain true physical address in kdump kernel when SME is enabled.
> +	 * Currently, previous kernel with SME enabled and kdump kernel
> +	 * with SME support disabled is not supported.
> +	 */
> +	phys = __sme_clr(paddr);
> +
> +	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
> +		return (__force void *)ioremap_encrypted(phys, size);
> +	else
> +		return memremap(phys, size, MEMREMAP_WB);
> +}
> +
>  /*
>   * Allocates the command buffer. This buffer is per AMD IOMMU. We can
>   * write commands to that buffer later and the IOMMU will execute them
> @@ -942,8 +962,103 @@ static int iommu_init_ga_log(struct amd_iommu *iommu)
>  static int __init alloc_cwwb_sem(struct amd_iommu *iommu)
>  {
>  	iommu->cmd_sem = iommu_alloc_4k_pages(iommu, GFP_KERNEL, 1);
> +	if (!iommu->cmd_sem)
> +		return -ENOMEM;
> +	iommu->cmd_sem_paddr = iommu_virt_to_phys((void *)iommu->cmd_sem);
> +	return 0;
> +}
> +
> +static int __init remap_event_buffer(struct amd_iommu *iommu)
> +{
> +	u64 paddr;
> +
> +	pr_info_once("Re-using event buffer from the previous kernel\n");
> +	/*
> +	 * Read-back the event log base address register and apply
> +	 * PM_ADDR_MASK to obtain the event log base address.

IMO this comment is redundant. It's implicit that we have to apply the addr_mask
to get the actual address.

> +	 */
> +	paddr = readq(iommu->mmio_base + MMIO_EVT_BUF_OFFSET) & PM_ADDR_MASK;
> +	iommu->evt_buf = iommu_memremap(paddr, EVT_BUFFER_SIZE);
>  
> -	return iommu->cmd_sem ? 0 : -ENOMEM;
> +	return iommu->evt_buf ? 0 : -ENOMEM;
> +}
> +
> +static int __init remap_command_buffer(struct amd_iommu *iommu)
> +{
> +	u64 paddr;
> +
> +	pr_info_once("Re-using command buffer from the previous kernel\n");
> +	/*
> +	 * Read-back the command buffer base address register and apply
> +	 * PM_ADDR_MASK to obtain the command buffer base address.

ditto.


-Vasant



* Re: [PATCH v5 4/4] iommu/amd: Skip enabling command/event buffers for kdump
  2025-07-30 21:56 ` [PATCH v5 4/4] iommu/amd: Skip enabling command/event buffers for kdump Ashish Kalra
@ 2025-08-21 11:32   ` Vasant Hegde
  0 siblings, 0 replies; 9+ messages in thread
From: Vasant Hegde @ 2025-08-21 11:32 UTC (permalink / raw)
  To: Ashish Kalra, joro, suravee.suthikulpanit, thomas.lendacky,
	Sairaj.ArunKodilkar, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm



On 7/31/2025 3:26 AM, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> After a panic if SNP is enabled in the previous kernel then the kdump
> kernel boots with IOMMU SNP enforcement still enabled.
> 
> IOMMU command buffers and event buffer registers remain locked and
> exclusive to the previous kernel. Attempts to enable command and event
> buffers in the kdump kernel will fail, as hardware ignores writes to
> the locked MMIO registers as per AMD IOMMU spec Section 2.12.2.1.
> 
> Skip enabling command buffers and event buffers for kdump boot as they
> are already enabled in the previous kernel.
> 
> Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>

Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>


-Vasant



* Re: [PATCH v5 2/4] iommu/amd: Reuse device table for kdump
  2025-07-30 21:56 ` [PATCH v5 2/4] iommu/amd: Reuse device table " Ashish Kalra
@ 2025-08-21 11:45   ` Vasant Hegde
  0 siblings, 0 replies; 9+ messages in thread
From: Vasant Hegde @ 2025-08-21 11:45 UTC (permalink / raw)
  To: Ashish Kalra, joro, suravee.suthikulpanit, thomas.lendacky,
	Sairaj.ArunKodilkar, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm



On 7/31/2025 3:26 AM, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> After a panic if SNP is enabled in the previous kernel then the kdump
> kernel boots with IOMMU SNP enforcement still enabled.
> 
> IOMMU device table register is locked and exclusive to the previous
> kernel. Attempts to copy old device table from the previous kernel
> fails in kdump kernel as hardware ignores writes to the locked device
> table base address register as per AMD IOMMU spec Section 2.12.2.1.
> 

Can you please expand and add something like below so that the actual issue is clear.

This causes the IOMMU driver (OS) and the hardware to reference different memory
locations. As a result, the IOMMU hardware cannot process the command.




> This results in repeated "Completion-Wait loop timed out" errors and a
> second kernel panic: "Kernel panic - not syncing: timer doesn't work
> through Interrupt-remapped IO-APIC".
> 
> Reuse device table instead of copying device table in case of kdump
> boot and remove all copying device table code.
>  
> Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>


Few minor nits. Otherwise patch looks ok.

Reviewed-by: Vasant Hegde <vasant.hegde@amd.com>


> ---
>  drivers/iommu/amd/init.c | 106 +++++++++++++--------------------------
>  1 file changed, 36 insertions(+), 70 deletions(-)
> 
> diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
> index aae1aa7723a5..05d9c1764883 100644
> --- a/drivers/iommu/amd/init.c
> +++ b/drivers/iommu/amd/init.c
> @@ -406,6 +406,9 @@ static void iommu_set_device_table(struct amd_iommu *iommu)
>  
>  	BUG_ON(iommu->mmio_base == NULL);
>  
> +	if (is_kdump_kernel())
> +		return;
> +
>  	entry = iommu_virt_to_phys(dev_table);
>  	entry |= (dev_table_size >> 12) - 1;
>  	memcpy_toio(iommu->mmio_base + MMIO_DEV_TABLE_OFFSET,
> @@ -646,7 +649,10 @@ static inline int __init alloc_dev_table(struct amd_iommu_pci_seg *pci_seg)
>  
>  static inline void free_dev_table(struct amd_iommu_pci_seg *pci_seg)
>  {
> -	iommu_free_pages(pci_seg->dev_table);
> +	if (is_kdump_kernel())
> +		memunmap((void *)pci_seg->dev_table);
> +	else
> +		iommu_free_pages(pci_seg->dev_table);
>  	pci_seg->dev_table = NULL;
>  }
>  
> @@ -1129,15 +1135,12 @@ static void set_dte_bit(struct dev_table_entry *dte, u8 bit)
>  	dte->data[i] |= (1UL << _bit);
>  }
>  
> -static bool __copy_device_table(struct amd_iommu *iommu)
> +static bool __reuse_device_table(struct amd_iommu *iommu)
>  {
> -	u64 int_ctl, int_tab_len, entry = 0;
>  	struct amd_iommu_pci_seg *pci_seg = iommu->pci_seg;
> -	struct dev_table_entry *old_devtb = NULL;
> -	u32 lo, hi, devid, old_devtb_size;
> +	u32 lo, hi, old_devtb_size;
>  	phys_addr_t old_devtb_phys;
> -	u16 dom_id, dte_v, irq_v;
> -	u64 tmp;
> +	u64 entry;
>  
>  	/* Each IOMMU use separate device table with the same size */
>  	lo = readl(iommu->mmio_base + MMIO_DEV_TABLE_OFFSET);
> @@ -1162,66 +1165,22 @@ static bool __copy_device_table(struct amd_iommu *iommu)
>  		pr_err("The address of old device table is above 4G, not trustworthy!\n");
>  		return false;
>  	}
> -	old_devtb = (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) && is_kdump_kernel())
> -		    ? (__force void *)ioremap_encrypted(old_devtb_phys,
> -							pci_seg->dev_table_size)
> -		    : memremap(old_devtb_phys, pci_seg->dev_table_size, MEMREMAP_WB);
> -
> -	if (!old_devtb)
> -		return false;
>  
> -	pci_seg->old_dev_tbl_cpy = iommu_alloc_pages_sz(
> -		GFP_KERNEL | GFP_DMA32, pci_seg->dev_table_size);
> +	/*
> +	 * IOMMU Device Table Base Address MMIO register is locked
> +	 * if SNP is enabled during kdump, reuse the previous kernel's
> +	 * device table.

Can you please reword, as it's reusing the crash kernel device table in all scenarios?

-Vasant



* Re: [PATCH v5 3/4] crypto: ccp: Skip SEV and SNP INIT for kdump boot
  2025-07-30 21:56 ` [PATCH v5 3/4] crypto: ccp: Skip SEV and SNP INIT for kdump boot Ashish Kalra
@ 2025-08-22 16:48   ` Tom Lendacky
  0 siblings, 0 replies; 9+ messages in thread
From: Tom Lendacky @ 2025-08-22 16:48 UTC (permalink / raw)
  To: Ashish Kalra, joro, suravee.suthikulpanit, Sairaj.ArunKodilkar,
	Vasant.Hegde, herbert
  Cc: seanjc, pbonzini, will, robin.murphy, john.allen, davem,
	michael.roth, iommu, linux-kernel, linux-crypto, kvm

On 7/30/25 16:56, Ashish Kalra wrote:
> From: Ashish Kalra <ashish.kalra@amd.com>
> 
> If SNP is enabled and initialized in the previous kernel then SNP is
> already initialized for kdump boot and attempting SNP INIT again

s/for/during/

s/SNP INIT/to initialize SNP/

> during kdump boot causes SNP INIT failure and eventually leads to

s/during kdump boot/in the kdump kernel/
s/causes SNP INIT failure/results in an SNP initialization failure/
s/ and eventually/. This leads to/

> IOMMU failures.

s/IOMMU failures/IOMMU initialization failure/

> 
> For SEV avoid SEV INIT failure warnings during kdump boot if SEV
> is enabled and initialized in the previous kernel.
> 
> Skip SNP and SEV INIT if doing kdump boot.

These last two paragraphs seem disjointed. Make a single paragraph that
combines the sentences and says why you're doing what you're doing, e.g.:

Since SEV guests will not be run under a kdump kernel, there is no reason
to attempt initialization of SEV or SNP. This can prevent initialization
errors that lead to IOMMU initialization failure if SEV or SNP were
previously initialized.

Massage as needed.

> 
> Tested-by: Sairaj Kodilkar <sarunkod@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  drivers/crypto/ccp/sev-dev.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
> index e058ba027792..c204831ca4a6 100644
> --- a/drivers/crypto/ccp/sev-dev.c
> +++ b/drivers/crypto/ccp/sev-dev.c
> @@ -28,6 +28,7 @@
>  #include <linux/fs_struct.h>
>  #include <linux/psp.h>
>  #include <linux/amd-iommu.h>
> +#include <linux/crash_dump.h>
>  
>  #include <asm/smp.h>
>  #include <asm/cacheflush.h>
> @@ -1345,6 +1346,13 @@ static int _sev_platform_init_locked(struct sev_platform_init_args *args)
>  	if (!psp_master || !psp_master->sev_data)
>  		return -ENODEV;
>  
> +	/*
> +	 * Skip SNP/SEV INIT for kdump boot as SEV/SNP is already initialized

s/INIT for kdump boot/initialization under a kdump kernel/
s/is already/may already be/

> +	 * in previous kernel if SEV/SNP is enabled.

s/in previous/in the previous/
s/ if SEV/SNP is enabled//

Also state here that since no guests will be run under a kdump kernel it
is ok do skip initialization.

Thanks,
Tom

> +	 */
> +	if (is_kdump_kernel())
> +		return 0;
> +
>  	sev = psp_master->sev_data;
>  
>  	if (sev->state == SEV_STATE_INIT)


