* [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache
@ 2018-01-22 19:29 James Morse
  2018-01-22 19:29 ` [RFC 1/7] ACPI / APEI: Move the estatus queue code up, and under its own ifdef James Morse
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

Hi guys, kbuild-robot,

This RFC is rough, and not at all ready. This is the current status of
my attempts to split up the ghes.c code to allow multiple notifications
to be NMI-like. On arm64 we have NOTIFY_{SEA, SEI, SDEI}, all of which
have NMI-like behaviour.

This series splits up APEI's in_nmi() path so that more than one
notification can use it. To support the asynchronous notifications (SEI
and SDEI) we move all the NMI-like handlers over to the estatus-cache.
This gives us the same APEI behaviour as x86, and means the multiple
notification methods can interact if firmware implements more than one.

estatus.. queue? ghes.c has three things all called 'estatus'. The first
is a pool of memory that has a static size, and is grown/shrunk as NMI
users are added and removed. The second is the cache, which holds recent
notifications so we can suppress notifications we've already handled.
The last is the queue, which holds data from NMI notifications (in pool
memory) that can't be handled immediately.
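
Roughly, the three 'estatus' objects in ghes.c are the following (a
simplified sketch of the existing declarations, not the exact code):

  /* Lock-less allocator backing the cache entries and the queue nodes */
  static struct gen_pool *ghes_estatus_pool;
  static unsigned long ghes_estatus_pool_size_request;

  /* Cache of recently reported records, used to suppress duplicates */
  static struct ghes_estatus_cache *ghes_estatus_caches[GHES_ESTATUS_CACHES_SIZE];

  /* Queue of records saved from NMI-like context, drained via irq_work */
  static struct llist_head ghes_estatus_llist;
  static struct irq_work ghes_proc_irq_work;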

So far this has only been tested using SDEI.

This RFC makes a known race worse. (I aim to fix the race before dropping
the RFC tag). Xie XiuQi reported that both the arch code and
memory_failure() will signal an affected process; what the process gets
depends on the order these run in, and how the signals get merged.
Using the estatus-cache makes this worse. My intention is for the arch
code's new 'apei_claim_x()' helpers to kick any queue that the claimed
RAS event may be stuck in, depending on which irq/preemptible flags the
notification caused to be set.

"Your CC list is wrong!": yes, given how ropey this is I want to keep the
noise low; it would only need posting again at rc1.


Comments on the overall approach welcome!


Thanks,

James Morse (7):
  ACPI / APEI: Move the estatus queue code up, and under its own ifdef
  ACPI / APEI: Generalise the estatus queue's add/remove and notify code
  ACPI / APEI: Switch NOTIFY_SEA to use the estatus queue
  KVM: arm/arm64: Add kvm_ras.h to collect kvm specific RAS plumbing
  arm64: KVM/mm: Move SEA handling behind a single 'claim' interface.
  ACPI / APEI: Make the fixmap_idx per-ghes to allow multiple in_nmi()
    users
  ACPI / APEI: Split fixmap pages for arm64 NMI-like notifications

 arch/arm/include/asm/kvm_ras.h       |  14 ++
 arch/arm/include/asm/system_misc.h   |   5 -
 arch/arm64/include/asm/acpi.h        |   2 +
 arch/arm64/include/asm/daifflags.h   |   1 +
 arch/arm64/include/asm/fixmap.h      |   4 +-
 arch/arm64/include/asm/kvm_ras.h     |  23 ++
 arch/arm64/include/asm/system_misc.h |   2 -
 arch/arm64/kernel/acpi.c             |  30 +++
 arch/arm64/mm/fault.c                |  30 +--
 drivers/acpi/apei/ghes.c             | 467 ++++++++++++++++++-----------------
 include/acpi/ghes.h                  |   5 +
 virt/kvm/arm/mmu.c                   |   4 +-
 12 files changed, 327 insertions(+), 260 deletions(-)
 create mode 100644 arch/arm/include/asm/kvm_ras.h
 create mode 100644 arch/arm64/include/asm/kvm_ras.h

-- 
2.15.1


* [RFC 1/7] ACPI / APEI: Move the estatus queue code up, and under its own ifdef
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
@ 2018-01-22 19:29 ` James Morse
  2018-01-22 19:29 ` [RFC 2/7] ACPI / APEI: Generalise the estatus queue's add/remove and notify code James Morse
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

We want to allow multiple NMI-like notifications on arm64, but the GHES
driver only has one in_nmi() path. To support asynchronous notifications
we also need to use the estatus-queue.

First we move the estatus-queue code so that notification types other
than NOTIFY_NMI can use it.

This patch moves code around ... and makes the following trivial changes:
 * Adds WANT_NMI_ESTATUS_QUEUE to allow this code to be built without
   CONFIG_HAVE_ACPI_APEI_NMI. This symbol later comes to mean NOTIFY_NMI,
   which would only be selected by x86.
 * Freshen the dated comment above ghes_estatus_llist. printk() is no
   longer the issue; it's helpers like memory_failure_queue() that
   still aren't NMI-safe.

If anyone prefers, I can split this so these two non-code changes are a
separate patch.

Not-signed-off: James Morse <james.morse@arm.com>
---
 drivers/acpi/apei/ghes.c | 281 +++++++++++++++++++++++++----------------------
 1 file changed, 148 insertions(+), 133 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 6402f7fad3bb..fef59ca1f7a7 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -59,6 +59,10 @@
 
 #define GHES_PFX	"GHES: "
 
+#ifdef CONFIG_HAVE_ACPI_APEI_NMI
+#define WANT_NMI_ESTATUS_QUEUE	1
+#endif
+
 #define GHES_ESTATUS_MAX_SIZE		65536
 #define GHES_ESOURCE_PREALLOC_MAX_SIZE	65536
 
@@ -529,6 +533,16 @@ static int ghes_print_estatus(const char *pfx,
 	return 0;
 }
 
+static void __ghes_panic(struct ghes *ghes)
+{
+	__ghes_print_estatus(KERN_EMERG, ghes->generic, ghes->estatus);
+
+	/* reboot to log the error! */
+	if (!panic_timeout)
+		panic_timeout = ghes_panic_timeout;
+	panic("Fatal hardware error!");
+}
+
 /*
  * GHES error status reporting throttle, to report more kinds of
  * errors, instead of just most frequently occurred errors.
@@ -656,6 +670,138 @@ static void ghes_estatus_cache_add(
 	rcu_read_unlock();
 }
 
+#ifdef WANT_NMI_ESTATUS_QUEUE
+/*
+ * While printk() now has an in_nmi() path, the handling for CPER records
+ * does not. For example, memory_failure_queue() takes spinlocks and calls
+ * schedule_work_on().
+ *
+ * So in any NMI-like handler, we allocate required memory from lock-less
+ * memory allocator (ghes_estatus_pool), save estatus into it, put them into
+ * lock-less list (ghes_estatus_llist), then delay printk into IRQ context via
+ * irq_work (ghes_proc_irq_work).  ghes_estatus_size_request record
+ * required pool size by all NMI error source.
+ *
+ * Memory from the ghes_estatus_pool is also used with the ghes_estatus_cache
+ * to suppress frequent messages.
+ */
+static struct llist_head ghes_estatus_llist;
+static struct irq_work ghes_proc_irq_work;
+
+static void ghes_print_queued_estatus(void)
+{
+	struct llist_node *llnode;
+	struct ghes_estatus_node *estatus_node;
+	struct acpi_hest_generic *generic;
+	struct acpi_hest_generic_status *estatus;
+	u32 len, node_len;
+
+	llnode = llist_del_all(&ghes_estatus_llist);
+	/*
+	 * Because the time order of estatus in list is reversed,
+	 * revert it back to proper order.
+	 */
+	llnode = llist_reverse_order(llnode);
+	while (llnode) {
+		estatus_node = llist_entry(llnode, struct ghes_estatus_node,
+					   llnode);
+		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
+		len = cper_estatus_len(estatus);
+		node_len = GHES_ESTATUS_NODE_LEN(len);
+		generic = estatus_node->generic;
+		ghes_print_estatus(NULL, generic, estatus);
+		llnode = llnode->next;
+	}
+}
+
+/* Save estatus for further processing in IRQ context */
+static void __process_error(struct ghes *ghes)
+{
+#ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
+	u32 len, node_len;
+	struct ghes_estatus_node *estatus_node;
+	struct acpi_hest_generic_status *estatus;
+
+	if (ghes_estatus_cached(ghes->estatus))
+		return;
+
+	len = cper_estatus_len(ghes->estatus);
+	node_len = GHES_ESTATUS_NODE_LEN(len);
+
+	estatus_node = (void *)gen_pool_alloc(ghes_estatus_pool, node_len);
+	if (!estatus_node)
+		return;
+
+	estatus_node->ghes = ghes;
+	estatus_node->generic = ghes->generic;
+	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
+	memcpy(estatus, ghes->estatus, len);
+	llist_add(&estatus_node->llnode, &ghes_estatus_llist);
+#endif
+}
+
+static unsigned long ghes_esource_prealloc_size(
+	const struct acpi_hest_generic *generic)
+{
+	unsigned long block_length, prealloc_records, prealloc_size;
+
+	block_length = min_t(unsigned long, generic->error_block_length,
+			     GHES_ESTATUS_MAX_SIZE);
+	prealloc_records = max_t(unsigned long,
+				 generic->records_to_preallocate, 1);
+	prealloc_size = min_t(unsigned long, block_length * prealloc_records,
+			      GHES_ESOURCE_PREALLOC_MAX_SIZE);
+
+	return prealloc_size;
+}
+
+static void ghes_estatus_pool_shrink(unsigned long len)
+{
+	ghes_estatus_pool_size_request -= PAGE_ALIGN(len);
+}
+
+static void ghes_proc_in_irq(struct irq_work *irq_work)
+{
+	struct llist_node *llnode, *next;
+	struct ghes_estatus_node *estatus_node;
+	struct acpi_hest_generic *generic;
+	struct acpi_hest_generic_status *estatus;
+	u32 len, node_len;
+
+	llnode = llist_del_all(&ghes_estatus_llist);
+	/*
+	 * Because the time order of estatus in list is reversed,
+	 * revert it back to proper order.
+	 */
+	llnode = llist_reverse_order(llnode);
+	while (llnode) {
+		next = llnode->next;
+		estatus_node = llist_entry(llnode, struct ghes_estatus_node,
+					   llnode);
+		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
+		len = cper_estatus_len(estatus);
+		node_len = GHES_ESTATUS_NODE_LEN(len);
+		ghes_do_proc(estatus_node->ghes, estatus);
+		if (!ghes_estatus_cached(estatus)) {
+			generic = estatus_node->generic;
+			if (ghes_print_estatus(NULL, generic, estatus))
+				ghes_estatus_cache_add(generic, estatus);
+		}
+		gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node,
+			      node_len);
+		llnode = next;
+	}
+}
+
+static void ghes_nmi_init_cxt(void)
+{
+	init_irq_work(&ghes_proc_irq_work, ghes_proc_in_irq);
+}
+
+#else
+static inline void ghes_nmi_init_cxt(void) { }
+#endif /* WANT_NMI_ESTATUS_QUEUE */
+
 static int ghes_ack_error(struct acpi_hest_generic_v2 *gv2)
 {
 	int rc;
@@ -671,16 +817,6 @@ static int ghes_ack_error(struct acpi_hest_generic_v2 *gv2)
 	return apei_write(val, &gv2->read_ack_register);
 }
 
-static void __ghes_panic(struct ghes *ghes)
-{
-	__ghes_print_estatus(KERN_EMERG, ghes->generic, ghes->estatus);
-
-	/* reboot to log the error! */
-	if (!panic_timeout)
-		panic_timeout = ghes_panic_timeout;
-	panic("Fatal hardware error!");
-}
-
 static int ghes_proc(struct ghes *ghes)
 {
 	int rc;
@@ -813,109 +949,13 @@ static inline void ghes_sea_remove(struct ghes *ghes) { }
 
 #ifdef CONFIG_HAVE_ACPI_APEI_NMI
 /*
- * printk is not safe in NMI context.  So in NMI handler, we allocate
- * required memory from lock-less memory allocator
- * (ghes_estatus_pool), save estatus into it, put them into lock-less
- * list (ghes_estatus_llist), then delay printk into IRQ context via
- * irq_work (ghes_proc_irq_work).  ghes_estatus_size_request record
- * required pool size by all NMI error source.
- */
-static struct llist_head ghes_estatus_llist;
-static struct irq_work ghes_proc_irq_work;
-
-/*
- * NMI may be triggered on any CPU, so ghes_in_nmi is used for
- * having only one concurrent reader.
+ * NOTIFY_NMI may be triggered on any CPU, so ghes_in_nmi is
+ * used for having only one concurrent reader.
  */
 static atomic_t ghes_in_nmi = ATOMIC_INIT(0);
 
 static LIST_HEAD(ghes_nmi);
 
-static void ghes_proc_in_irq(struct irq_work *irq_work)
-{
-	struct llist_node *llnode, *next;
-	struct ghes_estatus_node *estatus_node;
-	struct acpi_hest_generic *generic;
-	struct acpi_hest_generic_status *estatus;
-	u32 len, node_len;
-
-	llnode = llist_del_all(&ghes_estatus_llist);
-	/*
-	 * Because the time order of estatus in list is reversed,
-	 * revert it back to proper order.
-	 */
-	llnode = llist_reverse_order(llnode);
-	while (llnode) {
-		next = llnode->next;
-		estatus_node = llist_entry(llnode, struct ghes_estatus_node,
-					   llnode);
-		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
-		len = cper_estatus_len(estatus);
-		node_len = GHES_ESTATUS_NODE_LEN(len);
-		ghes_do_proc(estatus_node->ghes, estatus);
-		if (!ghes_estatus_cached(estatus)) {
-			generic = estatus_node->generic;
-			if (ghes_print_estatus(NULL, generic, estatus))
-				ghes_estatus_cache_add(generic, estatus);
-		}
-		gen_pool_free(ghes_estatus_pool, (unsigned long)estatus_node,
-			      node_len);
-		llnode = next;
-	}
-}
-
-static void ghes_print_queued_estatus(void)
-{
-	struct llist_node *llnode;
-	struct ghes_estatus_node *estatus_node;
-	struct acpi_hest_generic *generic;
-	struct acpi_hest_generic_status *estatus;
-	u32 len, node_len;
-
-	llnode = llist_del_all(&ghes_estatus_llist);
-	/*
-	 * Because the time order of estatus in list is reversed,
-	 * revert it back to proper order.
-	 */
-	llnode = llist_reverse_order(llnode);
-	while (llnode) {
-		estatus_node = llist_entry(llnode, struct ghes_estatus_node,
-					   llnode);
-		estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
-		len = cper_estatus_len(estatus);
-		node_len = GHES_ESTATUS_NODE_LEN(len);
-		generic = estatus_node->generic;
-		ghes_print_estatus(NULL, generic, estatus);
-		llnode = llnode->next;
-	}
-}
-
-/* Save estatus for further processing in IRQ context */
-static void __process_error(struct ghes *ghes)
-{
-#ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
-	u32 len, node_len;
-	struct ghes_estatus_node *estatus_node;
-	struct acpi_hest_generic_status *estatus;
-
-	if (ghes_estatus_cached(ghes->estatus))
-		return;
-
-	len = cper_estatus_len(ghes->estatus);
-	node_len = GHES_ESTATUS_NODE_LEN(len);
-
-	estatus_node = (void *)gen_pool_alloc(ghes_estatus_pool, node_len);
-	if (!estatus_node)
-		return;
-
-	estatus_node->ghes = ghes;
-	estatus_node->generic = ghes->generic;
-	estatus = GHES_ESTATUS_FROM_NODE(estatus_node);
-	memcpy(estatus, ghes->estatus, len);
-	llist_add(&estatus_node->llnode, &ghes_estatus_llist);
-#endif
-}
-
 static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 {
 	struct ghes *ghes;
@@ -954,26 +994,6 @@ static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 	return ret;
 }
 
-static unsigned long ghes_esource_prealloc_size(
-	const struct acpi_hest_generic *generic)
-{
-	unsigned long block_length, prealloc_records, prealloc_size;
-
-	block_length = min_t(unsigned long, generic->error_block_length,
-			     GHES_ESTATUS_MAX_SIZE);
-	prealloc_records = max_t(unsigned long,
-				 generic->records_to_preallocate, 1);
-	prealloc_size = min_t(unsigned long, block_length * prealloc_records,
-			      GHES_ESOURCE_PREALLOC_MAX_SIZE);
-
-	return prealloc_size;
-}
-
-static void ghes_estatus_pool_shrink(unsigned long len)
-{
-	ghes_estatus_pool_size_request -= PAGE_ALIGN(len);
-}
-
 static void ghes_nmi_add(struct ghes *ghes)
 {
 	unsigned long len;
@@ -1005,14 +1025,9 @@ static void ghes_nmi_remove(struct ghes *ghes)
 	ghes_estatus_pool_shrink(len);
 }
 
-static void ghes_nmi_init_cxt(void)
-{
-	init_irq_work(&ghes_proc_irq_work, ghes_proc_in_irq);
-}
 #else /* CONFIG_HAVE_ACPI_APEI_NMI */
 static inline void ghes_nmi_add(struct ghes *ghes) { }
 static inline void ghes_nmi_remove(struct ghes *ghes) { }
-static inline void ghes_nmi_init_cxt(void) { }
 #endif /* CONFIG_HAVE_ACPI_APEI_NMI */
 
 static int ghes_probe(struct platform_device *ghes_dev)
-- 
2.15.1


* [RFC 2/7] ACPI / APEI: Generalise the estatus queue's add/remove and notify code
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
  2018-01-22 19:29 ` [RFC 1/7] ACPI / APEI: Move the estatus queue code up, and under its own ifdef James Morse
@ 2018-01-22 19:29 ` James Morse
  2018-01-22 19:29 ` [RFC 3/7] ACPI / APEI: Switch NOTIFY_SEA to use the estatus queue James Morse
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

Refactor the estatus queue's pool grow/shrink code and notification
routine from NOTIFY_NMI's handlers. This will allow another notification
method to use the estatus queue without duplicating this code.

This patch adds rcu_read_lock()/rcu_read_unlock() around the
list_for_each_entry_rcu() walker. These aren't strictly necessary as
the whole nmi_enter/nmi_exit() window is a spooky RCU read-side
critical section.
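
For illustration, the calling pattern the queue's notify path relies on
looks roughly like this (a sketch assuming the SEA caller added later in
this series, not code from this patch):

  nmi_enter();                              /* implies an RCU read-side section */
  ghes_estatus_queue_notified(&ghes_sea);   /* takes rcu_read_lock() anyway */
  nmi_exit();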

This patch keeps the oops_begin() call only for x86: arm64 doesn't have
one of these, and APEI is the only thing outside arch code calling it.

The existing ghes_estatus_pool_shrink() is folded into the new
ghes_estatus_queue_shrink_pool() as only the queue uses it.
(In contrast, ghes_init() calls ghes_estatus_pool_expand() to allocate
 the pool memory used by the estatus-cache.)

Not-signed-off: James Morse <james.morse@arm.com>
---
 drivers/acpi/apei/ghes.c | 93 ++++++++++++++++++++++++++++++------------------
 1 file changed, 58 insertions(+), 35 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index fef59ca1f7a7..ca15f6537dbb 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -740,6 +740,44 @@ static void __process_error(struct ghes *ghes)
 #endif
 }
 
+static int ghes_estatus_queue_notified(struct list_head *rcu_list)
+{
+	int sev;
+	int ret = -ENOENT;
+	struct ghes *ghes;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(ghes, rcu_list, list) {
+		if (ghes_read_estatus(ghes, 1)) {
+			ghes_clear_estatus(ghes);
+			continue;
+		} else {
+			ret = 0;
+		}
+
+		sev = ghes_severity(ghes->estatus->error_severity);
+		if (sev >= GHES_SEV_PANIC) {
+#ifdef CONFIG_X86
+			oops_begin();
+#endif
+			ghes_print_queued_estatus();
+			__ghes_panic(ghes);
+		}
+
+		if (!(ghes->flags & GHES_TO_CLEAR))
+			continue;
+
+		__process_error(ghes);
+		ghes_clear_estatus(ghes);
+	}
+	rcu_read_unlock();
+
+	if (IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) && ret == 0)
+		irq_work_queue(&ghes_proc_irq_work);
+
+	return ret;
+}
+
 static unsigned long ghes_esource_prealloc_size(
 	const struct acpi_hest_generic *generic)
 {
@@ -755,11 +793,24 @@ static unsigned long ghes_esource_prealloc_size(
 	return prealloc_size;
 }
 
-static void ghes_estatus_pool_shrink(unsigned long len)
+/* After removing a queue user, we can shrink the pool */
+static void ghes_estatus_queue_shrink_pool(struct ghes *ghes)
 {
+	unsigned long len;
+
+	len = ghes_esource_prealloc_size(ghes->generic);
 	ghes_estatus_pool_size_request -= PAGE_ALIGN(len);
 }
 
+/* Before adding a queue user, grow the pool */
+static void ghes_estatus_queue_grow_pool(struct ghes *ghes)
+{
+	unsigned long len;
+
+	len = ghes_esource_prealloc_size(ghes->generic);
+	ghes_estatus_pool_expand(len);
+}
+
 static void ghes_proc_in_irq(struct irq_work *irq_work)
 {
 	struct llist_node *llnode, *next;
@@ -958,48 +1009,22 @@ static LIST_HEAD(ghes_nmi);
 
 static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 {
-	struct ghes *ghes;
-	int sev, ret = NMI_DONE;
+	int ret = NMI_DONE;
 
 	if (!atomic_add_unless(&ghes_in_nmi, 1, 1))
 		return ret;
 
-	list_for_each_entry_rcu(ghes, &ghes_nmi, list) {
-		if (ghes_read_estatus(ghes, 1)) {
-			ghes_clear_estatus(ghes);
-			continue;
-		} else {
-			ret = NMI_HANDLED;
-		}
-
-		sev = ghes_severity(ghes->estatus->error_severity);
-		if (sev >= GHES_SEV_PANIC) {
-			oops_begin();
-			ghes_print_queued_estatus();
-			__ghes_panic(ghes);
-		}
-
-		if (!(ghes->flags & GHES_TO_CLEAR))
-			continue;
+	if (!ghes_estatus_queue_notified(&ghes_nmi))
+		ret = NMI_HANDLED;
 
-		__process_error(ghes);
-		ghes_clear_estatus(ghes);
-	}
-
-#ifdef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG
-	if (ret == NMI_HANDLED)
-		irq_work_queue(&ghes_proc_irq_work);
-#endif
 	atomic_dec(&ghes_in_nmi);
 	return ret;
 }
 
 static void ghes_nmi_add(struct ghes *ghes)
 {
-	unsigned long len;
+	ghes_estatus_queue_grow_pool(ghes);
 
-	len = ghes_esource_prealloc_size(ghes->generic);
-	ghes_estatus_pool_expand(len);
 	mutex_lock(&ghes_list_mutex);
 	if (list_empty(&ghes_nmi))
 		register_nmi_handler(NMI_LOCAL, ghes_notify_nmi, 0, "ghes");
@@ -1009,8 +1034,6 @@ static void ghes_nmi_add(struct ghes *ghes)
 
 static void ghes_nmi_remove(struct ghes *ghes)
 {
-	unsigned long len;
-
 	mutex_lock(&ghes_list_mutex);
 	list_del_rcu(&ghes->list);
 	if (list_empty(&ghes_nmi))
@@ -1021,8 +1044,8 @@ static void ghes_nmi_remove(struct ghes *ghes)
 	 * freed after NMI handler finishes.
 	 */
 	synchronize_rcu();
-	len = ghes_esource_prealloc_size(ghes->generic);
-	ghes_estatus_pool_shrink(len);
+
+	ghes_estatus_queue_shrink_pool(ghes);
 }
 
 #else /* CONFIG_HAVE_ACPI_APEI_NMI */
-- 
2.15.1


* [RFC 3/7] ACPI / APEI: Switch NOTIFY_SEA to use the estatus queue
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
  2018-01-22 19:29 ` [RFC 1/7] ACPI / APEI: Move the estatus queue code up, and under its own ifdef James Morse
  2018-01-22 19:29 ` [RFC 2/7] ACPI / APEI: Generalise the estatus queue's add/remove and notify code James Morse
@ 2018-01-22 19:29 ` James Morse
  2018-01-22 19:29 ` [RFC 4/7] KVM: arm/arm64: Add kvm_ras.h to collect kvm specific RAS plumbing James Morse
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

Now that the estatus queue can be used by more than one notification
method, we can move notifications that have NMI-like behaviour over to
it, and start abstracting GHES's single in_nmi() path.

Switch NOTIFY_SEA over to use the estatus queue.

Not-signed-off: James Morse <james.morse@arm.com>
---
 drivers/acpi/apei/ghes.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ca15f6537dbb..7d58a791de90 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -59,7 +59,7 @@
 
 #define GHES_PFX	"GHES: "
 
-#ifdef CONFIG_HAVE_ACPI_APEI_NMI
+#if defined(CONFIG_HAVE_ACPI_APEI_NMI) || defined(CONFIG_ACPI_APEI_SEA)
 #define WANT_NMI_ESTATUS_QUEUE	1
 #endif
 
@@ -967,20 +967,13 @@ static LIST_HEAD(ghes_sea);
  */
 int ghes_notify_sea(void)
 {
-	struct ghes *ghes;
-	int ret = -ENOENT;
-
-	rcu_read_lock();
-	list_for_each_entry_rcu(ghes, &ghes_sea, list) {
-		if (!ghes_proc(ghes))
-			ret = 0;
-	}
-	rcu_read_unlock();
-	return ret;
+	return ghes_estatus_queue_notified(&ghes_sea);
 }
 
 static void ghes_sea_add(struct ghes *ghes)
 {
+	ghes_estatus_queue_grow_pool(ghes);
+
 	mutex_lock(&ghes_list_mutex);
 	list_add_rcu(&ghes->list, &ghes_sea);
 	mutex_unlock(&ghes_list_mutex);
@@ -992,6 +985,8 @@ static void ghes_sea_remove(struct ghes *ghes)
 	list_del_rcu(&ghes->list);
 	mutex_unlock(&ghes_list_mutex);
 	synchronize_rcu();
+
+	ghes_estatus_queue_shrink_pool(ghes);
 }
 #else /* CONFIG_ACPI_APEI_SEA */
 static inline void ghes_sea_add(struct ghes *ghes) { }
-- 
2.15.1


* [RFC 4/7] KVM: arm/arm64: Add kvm_ras.h to collect kvm specific RAS plumbing
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
                   ` (2 preceding siblings ...)
  2018-01-22 19:29 ` [RFC 3/7] ACPI / APEI: Switch NOTIFY_SEA to use the estatus queue James Morse
@ 2018-01-22 19:29 ` James Morse
  2018-01-22 19:29 ` [RFC 5/7] arm64: KVM/mm: Move SEA handling behind a single 'claim' interface James Morse
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

To split up APEI's in_nmi() path, we need any NMI-like callers to always
be in_nmi(). KVM shouldn't have to know about this, so pull the RAS
plumbing out into a header file.

Currently guest synchronous external aborts are claimed as RAS
notifications by handle_guest_sea(), which is hidden in the arch code's
mm/fault.c. 32-bit gets a dummy declaration in system_misc.h.

There is going to be more of this in the future if/when we support
the SError-based firmware-first notification mechanism and/or
kernel-first notifications for both synchronous external abort and
SError. Each of these will come with some Kconfig symbols and a
handful of header files.

Create a header file for all this.

This patch gives handle_guest_sea() a 'kvm_' prefix, and moves the
declarations to kvm_ras.h as preparation for a future patch that moves
the ACPI-specific RAS code out of mm/fault.c.

Not-signed-off: James Morse <james.morse@arm.com>
---
 arch/arm/include/asm/kvm_ras.h       | 14 ++++++++++++++
 arch/arm/include/asm/system_misc.h   |  5 -----
 arch/arm64/include/asm/kvm_ras.h     | 11 +++++++++++
 arch/arm64/include/asm/system_misc.h |  2 --
 arch/arm64/mm/fault.c                |  2 +-
 virt/kvm/arm/mmu.c                   |  4 ++--
 6 files changed, 28 insertions(+), 10 deletions(-)
 create mode 100644 arch/arm/include/asm/kvm_ras.h
 create mode 100644 arch/arm64/include/asm/kvm_ras.h

diff --git a/arch/arm/include/asm/kvm_ras.h b/arch/arm/include/asm/kvm_ras.h
new file mode 100644
index 000000000000..9e6cf0aab657
--- /dev/null
+++ b/arch/arm/include/asm/kvm_ras.h
@@ -0,0 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2017 - Arm Ltd
+
+#ifndef __ARM_KVM_RAS_H__
+#define __ARM_KVM_RAS_H__
+
+#include <linux/types.h>
+
+static inline int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr)
+{
+	return -1;
+}
+
+#endif /* __ARM_KVM_RAS_H__ */
diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h
index 78f6db114faf..51e5ab50b35f 100644
--- a/arch/arm/include/asm/system_misc.h
+++ b/arch/arm/include/asm/system_misc.h
@@ -23,11 +23,6 @@ extern void (*arm_pm_idle)(void);
 
 extern unsigned int user_debug;
 
-static inline int handle_guest_sea(phys_addr_t addr, unsigned int esr)
-{
-	return -1;
-}
-
 #endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_ARM_SYSTEM_MISC_H */
diff --git a/arch/arm64/include/asm/kvm_ras.h b/arch/arm64/include/asm/kvm_ras.h
new file mode 100644
index 000000000000..9a54576b759f
--- /dev/null
+++ b/arch/arm64/include/asm/kvm_ras.h
@@ -0,0 +1,11 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (C) 2017 - Arm Ltd
+
+#ifndef __ARM64_KVM_RAS_H__
+#define __ARM64_KVM_RAS_H__
+
+#include <linux/types.h>
+
+int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr);
+
+#endif /* __ARM64_KVM_RAS_H__ */
diff --git a/arch/arm64/include/asm/system_misc.h b/arch/arm64/include/asm/system_misc.h
index 07aa8e3c5630..d0beefeb6d25 100644
--- a/arch/arm64/include/asm/system_misc.h
+++ b/arch/arm64/include/asm/system_misc.h
@@ -56,8 +56,6 @@ extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 	__show_ratelimited;						\
 })
 
-int handle_guest_sea(phys_addr_t addr, unsigned int esr);
-
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __ASM_SYSTEM_MISC_H */
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 0e671ddf4855..39e607515e8f 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -674,7 +674,7 @@ static const struct fault_info fault_info[] = {
 	{ do_bad,		SIGBUS,  0,		"unknown 63"			},
 };
 
-int handle_guest_sea(phys_addr_t addr, unsigned int esr)
+int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr)
 {
 	int ret = -ENOENT;
 
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 876caf531d32..6bd3b082b23a 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -27,10 +27,10 @@
 #include <asm/kvm_arm.h>
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_mmio.h>
+#include <asm/kvm_ras.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_emulate.h>
 #include <asm/virt.h>
-#include <asm/system_misc.h>
 
 #include "trace.h"
 
@@ -1487,7 +1487,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		 * For RAS the host kernel may handle this abort.
 		 * There is no need to pass the error into the guest.
 		 */
-		if (!handle_guest_sea(fault_ipa, kvm_vcpu_get_hsr(vcpu)))
+		if (!kvm_handle_guest_sea(fault_ipa, kvm_vcpu_get_hsr(vcpu)))
 			return 1;
 
 		if (unlikely(!is_iabt)) {
-- 
2.15.1


* [RFC 5/7] arm64: KVM/mm: Move SEA handling behind a single 'claim' interface.
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
                   ` (3 preceding siblings ...)
  2018-01-22 19:29 ` [RFC 4/7] KVM: arm/arm64: Add kvm_ras.h to collect kvm specific RAS plumbing James Morse
@ 2018-01-22 19:29 ` James Morse
  2018-01-23  8:46   ` gengdongjiu
  2018-01-22 19:29 ` [RFC 6/7] ACPI / APEI: Make the fixmap_idx per-ghes to allow multiple in_nmi() users James Morse
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

To split up APEI's in_nmi() path, we need the NMI-like callers to always
be in_nmi(). Add a helper to do the work and claim the notification.

When KVM or the arch code takes an exception that might be a RAS
notification, it asks the APEI firmware-first code whether it wants
to claim the exception. We can then go on to see if (a future)
kernel-first mechanism wants to claim the notification, before
falling through to the existing default behaviour.

The NOTIFY_SEA code was merged before we had multiple, possibly-interacting,
NMI-like notifications and the need to consider kernel-first in the future.
Make the 'claiming' behaviour explicit, and give ourselves somewhere
to hook in kernel-first.

We're restructuring the APEI code to allow multiple NMI-like
notifications; any notification that might interrupt interrupts-masked
code must always be wrapped in nmi_enter()/nmi_exit().

We mask SError over this window to prevent an asynchronous RAS error
arriving and tripping nmi_enter()'s BUG_ON(in_nmi()).

Not-signed-off: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/acpi.h      |  2 ++
 arch/arm64/include/asm/daifflags.h |  1 +
 arch/arm64/include/asm/kvm_ras.h   | 14 +++++++++++++-
 arch/arm64/kernel/acpi.c           | 30 ++++++++++++++++++++++++++++++
 arch/arm64/mm/fault.c              | 30 ++++++------------------------
 5 files changed, 52 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
index 32f465a80e4e..cf844b8d6ab8 100644
--- a/arch/arm64/include/asm/acpi.h
+++ b/arch/arm64/include/asm/acpi.h
@@ -94,6 +94,8 @@ void __init acpi_init_cpus(void);
 static inline void acpi_init_cpus(void) { }
 #endif /* CONFIG_ACPI */
 
+bool apei_claim_sea(void);
+
 #ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
 bool acpi_parking_protocol_valid(int cpu);
 void __init
diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 22e4c83de5a5..cbd753855bf3 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -20,6 +20,7 @@
 
 #define DAIF_PROCCTX		0
 #define DAIF_PROCCTX_NOIRQ	PSR_I_BIT
+#define DAIF_ERRCTX		(PSR_I_BIT | PSR_A_BIT)
 
 /* mask/save/unmask/restore all exceptions, including interrupts. */
 static inline void local_daif_mask(void)
diff --git a/arch/arm64/include/asm/kvm_ras.h b/arch/arm64/include/asm/kvm_ras.h
index 9a54576b759f..7fd38408a602 100644
--- a/arch/arm64/include/asm/kvm_ras.h
+++ b/arch/arm64/include/asm/kvm_ras.h
@@ -4,8 +4,20 @@
 #ifndef __ARM64_KVM_RAS_H__
 #define __ARM64_KVM_RAS_H__
 
+#include <linux/acpi.h>
+#include <linux/errno.h>
 #include <linux/types.h>
 
-int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr);
+static inline int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr)
+{
+	int ret = -ENOENT;
+
+	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA)) {
+		if (apei_claim_sea())
+			ret = 0;
+	}
+
+	return ret;
+}
 
 #endif /* __ARM64_KVM_RAS_H__ */
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index 252396a96c78..b2fc9c7a807d 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -33,6 +33,8 @@
 
 #ifdef CONFIG_ACPI_APEI
 # include <linux/efi.h>
+# include <acpi/ghes.h>
+# include <asm/daifflags.h>
 # include <asm/pgtable.h>
 #endif
 
@@ -261,4 +263,32 @@ pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
 		return __pgprot(PROT_NORMAL_NC);
 	return __pgprot(PROT_DEVICE_nGnRnE);
 }
+
+
+/*
+ * Claim Synchronous External Aborts as a firmwre first notification.
+ *
+ * Used by KVM and the arch do_sea handler.
+ */
+bool apei_claim_sea(void)
+{
+	bool ret = false;
+
+	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA)) {
+		unsigned long flags = arch_local_save_flags();
+
+		/*
+		 * APEI expects an NMI-like notification to always be called
+		 * in NMI context.
+		 */
+		local_daif_restore(DAIF_ERRCTX);
+		nmi_enter();
+		if (ghes_notify_sea() == 0)
+			ret = true;
+		nmi_exit();
+		local_daif_restore(flags);
+	}
+
+	return ret;
+}
 #endif
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 39e607515e8f..360b37594649 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -18,6 +18,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/acpi.h>
 #include <linux/extable.h>
 #include <linux/signal.h>
 #include <linux/mm.h>
@@ -44,8 +45,6 @@
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
 
-#include <acpi/ghes.h>
-
 struct fault_info {
 	int	(*fn)(unsigned long addr, unsigned int esr,
 		      struct pt_regs *regs);
@@ -580,19 +579,12 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
 	pr_err("Synchronous External Abort: %s (0x%08x) at 0x%016lx\n",
 		inf->name, esr, addr);
 
-	/*
-	 * Synchronous aborts may interrupt code which had interrupts masked.
-	 * Before calling out into the wider kernel tell the interested
-	 * subsystems.
-	 */
 	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA)) {
-		if (interrupts_enabled(regs))
-			nmi_enter();
-
-		ret = ghes_notify_sea();
-
-		if (interrupts_enabled(regs))
-			nmi_exit();
+		/*
+		 * Return value ignored as we rely on signal merging.
+		 * Future patches will make this more robust.
+		 */
+	       apei_claim_sea();
 	}
 
 	info.si_signo = SIGBUS;
@@ -674,16 +666,6 @@ static const struct fault_info fault_info[] = {
 	{ do_bad,		SIGBUS,  0,		"unknown 63"			},
 };
 
-int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr)
-{
-	int ret = -ENOENT;
-
-	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA))
-		ret = ghes_notify_sea();
-
-	return ret;
-}
-
 asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
 					 struct pt_regs *regs)
 {
-- 
2.15.1


* [RFC 6/7] ACPI / APEI: Make the fixmap_idx per-ghes to allow multiple in_nmi() users
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
                   ` (4 preceding siblings ...)
  2018-01-22 19:29 ` [RFC 5/7] arm64: KVM/mm: Move SEA handling behind a single 'claim' interface James Morse
@ 2018-01-22 19:29 ` James Morse
  2018-01-22 19:29 ` [RFC 7/7] ACPI / APEI: Split fixmap pages for arm64 NMI-like notifications James Morse
  2018-01-23  8:51 ` [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache gengdongjiu
  7 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

Arm64 has multiple NMI-like notifications, but GHES only has one
in_nmi() path. The interactions between these multiple NMI-like
notifications are unclear.

Split this single path up by moving the fixmap idx and lock into
struct ghes. Each notification's init function can then specify which
other notifications it masks and which it can share a fixmap_idx with.

Two lock pointers are provided, but only one will be used by
ghes_copy_tofrom_phys(), depending on in_nmi(). This means any
notification that might arrive as an NMI must always be wrapped in
nmi_enter()/nmi_exit().

The double-underscore version of fix_to_virt() is used because
the index to be mapped can't be tested against the end of the
enum at compile time.
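
For reference, the generic fixmap helpers look roughly like this (from
include/asm-generic/fixmap.h); the BUILD_BUG_ON() is what prevents
fix_to_virt() being used with an index that isn't a compile-time
constant:

  #define __fix_to_virt(x)	(FIXADDR_TOP - ((x) << PAGE_SHIFT))

  static __always_inline unsigned long fix_to_virt(const unsigned int idx)
  {
  	BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
  	return __fix_to_virt(idx);
  }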

Not-signed-off: James Morse <james.morse@arm.com>
---
 drivers/acpi/apei/ghes.c | 79 ++++++++++++++++++------------------------------
 include/acpi/ghes.h      |  5 +++
 2 files changed, 35 insertions(+), 49 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 7d58a791de90..6c2391fd00f8 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -118,12 +118,9 @@ static DEFINE_MUTEX(ghes_list_mutex);
  * from BIOS to Linux can be determined only in NMI, IRQ or timer
  * handler, but general ioremap can not be used in atomic context, so
  * the fixmap is used instead.
- *
- * These 2 spinlocks are used to prevent the fixmap entries from being used
- * simultaneously.
  */
-static DEFINE_RAW_SPINLOCK(ghes_ioremap_lock_nmi);
-static DEFINE_SPINLOCK(ghes_ioremap_lock_irq);
+static DEFINE_RAW_SPINLOCK(ghes_fixmap_lock_nmi);
+static DEFINE_SPINLOCK(ghes_fixmap_lock_irq);
 
 static struct gen_pool *ghes_estatus_pool;
 static unsigned long ghes_estatus_pool_size_request;
@@ -133,38 +130,16 @@ static atomic_t ghes_estatus_cache_alloced;
 
 static int ghes_panic_timeout __read_mostly = 30;
 
-static void __iomem *ghes_ioremap_pfn_nmi(u64 pfn)
+static void __iomem *ghes_fixmap_pfn(int fixmap_idx, u64 pfn)
 {
 	phys_addr_t paddr;
 	pgprot_t prot;
 
 	paddr = pfn << PAGE_SHIFT;
 	prot = arch_apei_get_mem_attribute(paddr);
-	__set_fixmap(FIX_APEI_GHES_NMI, paddr, prot);
-
-	return (void __iomem *) fix_to_virt(FIX_APEI_GHES_NMI);
-}
-
-static void __iomem *ghes_ioremap_pfn_irq(u64 pfn)
-{
-	phys_addr_t paddr;
-	pgprot_t prot;
-
-	paddr = pfn << PAGE_SHIFT;
-	prot = arch_apei_get_mem_attribute(paddr);
-	__set_fixmap(FIX_APEI_GHES_IRQ, paddr, prot);
-
-	return (void __iomem *) fix_to_virt(FIX_APEI_GHES_IRQ);
-}
-
-static void ghes_iounmap_nmi(void)
-{
-	clear_fixmap(FIX_APEI_GHES_NMI);
-}
+	__set_fixmap(fixmap_idx, paddr, prot);
 
-static void ghes_iounmap_irq(void)
-{
-	clear_fixmap(FIX_APEI_GHES_IRQ);
+	return (void __iomem *) __fix_to_virt(fixmap_idx);
 }
 
 static int ghes_estatus_pool_init(void)
@@ -292,8 +267,8 @@ static inline int ghes_severity(int severity)
 	}
 }
 
-static void ghes_copy_tofrom_phys(void *buffer, u64 paddr, u32 len,
-				  int from_phys)
+static void ghes_copy_tofrom_phys(struct ghes *ghes, void *buffer, u64 paddr,
+				  u32 len, int from_phys)
 {
 	void __iomem *vaddr;
 	unsigned long flags = 0;
@@ -303,13 +278,11 @@ static void ghes_copy_tofrom_phys(void *buffer, u64 paddr, u32 len,
 
 	while (len > 0) {
 		offset = paddr - (paddr & PAGE_MASK);
-		if (in_nmi) {
-			raw_spin_lock(&ghes_ioremap_lock_nmi);
-			vaddr = ghes_ioremap_pfn_nmi(paddr >> PAGE_SHIFT);
-		} else {
-			spin_lock_irqsave(&ghes_ioremap_lock_irq, flags);
-			vaddr = ghes_ioremap_pfn_irq(paddr >> PAGE_SHIFT);
-		}
+		if (in_nmi)
+			raw_spin_lock(ghes->nmi_fixmap_lock);
+		else
+			spin_lock_irqsave(ghes->fixmap_lock, flags);
+		vaddr = ghes_fixmap_pfn(ghes->fixmap_idx, paddr >> PAGE_SHIFT);
 		trunk = PAGE_SIZE - offset;
 		trunk = min(trunk, len);
 		if (from_phys)
@@ -319,13 +292,11 @@ static void ghes_copy_tofrom_phys(void *buffer, u64 paddr, u32 len,
 		len -= trunk;
 		paddr += trunk;
 		buffer += trunk;
-		if (in_nmi) {
-			ghes_iounmap_nmi();
-			raw_spin_unlock(&ghes_ioremap_lock_nmi);
-		} else {
-			ghes_iounmap_irq();
-			spin_unlock_irqrestore(&ghes_ioremap_lock_irq, flags);
-		}
+		clear_fixmap(ghes->fixmap_idx);
+		if (in_nmi)
+			raw_spin_unlock(ghes->nmi_fixmap_lock);
+		else
+			spin_unlock_irqrestore(ghes->fixmap_lock, flags);
 	}
 }
 
@@ -347,7 +318,7 @@ static int ghes_read_estatus(struct ghes *ghes, int silent)
 	if (!buf_paddr)
 		return -ENOENT;
 
-	ghes_copy_tofrom_phys(ghes->estatus, buf_paddr,
+	ghes_copy_tofrom_phys(ghes, ghes->estatus, buf_paddr,
 			      sizeof(*ghes->estatus), 1);
 	if (!ghes->estatus->block_status)
 		return -ENOENT;
@@ -363,7 +334,7 @@ static int ghes_read_estatus(struct ghes *ghes, int silent)
 		goto err_read_block;
 	if (cper_estatus_check_header(ghes->estatus))
 		goto err_read_block;
-	ghes_copy_tofrom_phys(ghes->estatus + 1,
+	ghes_copy_tofrom_phys(ghes, ghes->estatus + 1,
 			      buf_paddr + sizeof(*ghes->estatus),
 			      len - sizeof(*ghes->estatus), 1);
 	if (cper_estatus_check(ghes->estatus))
@@ -382,7 +353,7 @@ static void ghes_clear_estatus(struct ghes *ghes)
 	ghes->estatus->block_status = 0;
 	if (!(ghes->flags & GHES_TO_CLEAR))
 		return;
-	ghes_copy_tofrom_phys(ghes->estatus, ghes->buffer_paddr,
+	ghes_copy_tofrom_phys(ghes, ghes->estatus, ghes->buffer_paddr,
 			      sizeof(ghes->estatus->block_status), 0);
 	ghes->flags &= ~GHES_TO_CLEAR;
 }
@@ -972,6 +943,8 @@ int ghes_notify_sea(void)
 
 static void ghes_sea_add(struct ghes *ghes)
 {
+	ghes->nmi_fixmap_lock = &ghes_fixmap_lock_nmi;
+	ghes->fixmap_idx = FIX_APEI_GHES_NMI;
 	ghes_estatus_queue_grow_pool(ghes);
 
 	mutex_lock(&ghes_list_mutex);
@@ -1018,6 +991,8 @@ static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 
 static void ghes_nmi_add(struct ghes *ghes)
 {
+	ghes->nmi_fixmap_lock = &ghes_fixmap_lock_nmi;
+	ghes->fixmap_idx = FIX_APEI_GHES_NMI;
 	ghes_estatus_queue_grow_pool(ghes);
 
 	mutex_lock(&ghes_list_mutex);
@@ -1113,11 +1088,15 @@ static int ghes_probe(struct platform_device *ghes_dev)
 
 	switch (generic->notify.type) {
 	case ACPI_HEST_NOTIFY_POLLED:
+		ghes->fixmap_lock = &ghes_fixmap_lock_irq;
+		ghes->fixmap_idx = FIX_APEI_GHES_IRQ;
 		timer_setup(&ghes->timer, ghes_poll_func, TIMER_DEFERRABLE);
 		ghes_add_timer(ghes);
 		break;
 	case ACPI_HEST_NOTIFY_EXTERNAL:
 		/* External interrupt vector is GSI */
+		ghes->fixmap_lock = &ghes_fixmap_lock_irq;
+		ghes->fixmap_idx = FIX_APEI_GHES_IRQ;
 		rc = acpi_gsi_to_irq(generic->notify.vector, &ghes->irq);
 		if (rc) {
 			pr_err(GHES_PFX "Failed to map GSI to IRQ for generic hardware error source: %d\n",
@@ -1136,6 +1115,8 @@ static int ghes_probe(struct platform_device *ghes_dev)
 	case ACPI_HEST_NOTIFY_SCI:
 	case ACPI_HEST_NOTIFY_GSIV:
 	case ACPI_HEST_NOTIFY_GPIO:
+		ghes->fixmap_lock = &ghes_fixmap_lock_irq;
+		ghes->fixmap_idx = FIX_APEI_GHES_IRQ;
 		mutex_lock(&ghes_list_mutex);
 		if (list_empty(&ghes_hed))
 			register_acpi_hed_notifier(&ghes_notifier_hed);
diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
index 8feb0c866ee0..74dbd164f3fe 100644
--- a/include/acpi/ghes.h
+++ b/include/acpi/ghes.h
@@ -29,6 +29,11 @@ struct ghes {
 		struct timer_list timer;
 		unsigned int irq;
 	};
+
+	spinlock_t *fixmap_lock;
+	raw_spinlock_t *nmi_fixmap_lock;
+
+	int fixmap_idx;
 };
 
 struct ghes_estatus_node {
-- 
2.15.1


* [RFC 7/7] ACPI / APEI: Split fixmap pages for arm64 NMI-like notifications
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
                   ` (5 preceding siblings ...)
  2018-01-22 19:29 ` [RFC 6/7] ACPI / APEI: Make the fixmap_idx per-ghes to allow multiple in_nmi() users James Morse
@ 2018-01-22 19:29 ` James Morse
  2018-01-23  8:51 ` [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache gengdongjiu
  7 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2018-01-22 19:29 UTC (permalink / raw)
  To: Dongjiu Geng; +Cc: linux-acpi, huangshaoyu, Tyler Baicar, James Morse

Now that ghes uses the fixmap addresses and locks via some indirection
we can support multiple NMI-like notifications on arm64.

These should be named after their notification method. x86's
NOTIFY_NMI is unchanged; change the SEA fixmap entry to use
FIX_APEI_GHES_SEA.

Future patches can add support for FIX_APEI_GHES_SEI and
FIX_APEI_GHES_SDEI_{NORMAL,CRITICAL}.

Not-signed-off: James Morse <james.morse@arm.com>
---
 arch/arm64/include/asm/fixmap.h | 4 +++-
 drivers/acpi/apei/ghes.c        | 7 ++++---
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
index ec1e6d6fa14c..c3974517c2cb 100644
--- a/arch/arm64/include/asm/fixmap.h
+++ b/arch/arm64/include/asm/fixmap.h
@@ -55,7 +55,9 @@ enum fixed_addresses {
 #ifdef CONFIG_ACPI_APEI_GHES
 	/* Used for GHES mapping from assorted contexts */
 	FIX_APEI_GHES_IRQ,
-	FIX_APEI_GHES_NMI,
+#ifdef CONFIG_ACPI_APEI_SEA
+	FIX_APEI_GHES_SEA,
+#endif
 #endif /* CONFIG_ACPI_APEI_GHES */
 
 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 6c2391fd00f8..225e546d1081 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -119,7 +119,6 @@ static DEFINE_MUTEX(ghes_list_mutex);
  * handler, but general ioremap can not be used in atomic context, so
  * the fixmap is used instead.
  */
-static DEFINE_RAW_SPINLOCK(ghes_fixmap_lock_nmi);
 static DEFINE_SPINLOCK(ghes_fixmap_lock_irq);
 
 static struct gen_pool *ghes_estatus_pool;
@@ -931,6 +930,7 @@ static struct notifier_block ghes_notifier_hed = {
 
 #ifdef CONFIG_ACPI_APEI_SEA
 static LIST_HEAD(ghes_sea);
+static DEFINE_RAW_SPINLOCK(ghes_fixmap_lock_sea);
 
 /*
  * Return 0 only if one of the SEA error sources successfully reported an error
@@ -943,8 +943,8 @@ int ghes_notify_sea(void)
 
 static void ghes_sea_add(struct ghes *ghes)
 {
-	ghes->nmi_fixmap_lock = &ghes_fixmap_lock_nmi;
-	ghes->fixmap_idx = FIX_APEI_GHES_NMI;
+	ghes->nmi_fixmap_lock = &ghes_fixmap_lock_sea;
+	ghes->fixmap_idx = FIX_APEI_GHES_SEA;
 	ghes_estatus_queue_grow_pool(ghes);
 
 	mutex_lock(&ghes_list_mutex);
@@ -974,6 +974,7 @@ static inline void ghes_sea_remove(struct ghes *ghes) { }
 static atomic_t ghes_in_nmi = ATOMIC_INIT(0);
 
 static LIST_HEAD(ghes_nmi);
+static DEFINE_RAW_SPINLOCK(ghes_fixmap_lock_nmi);
 
 static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 {
-- 
2.15.1


* Re: [RFC 5/7] arm64: KVM/mm: Move SEA handling behind a single 'claim' interface.
  2018-01-22 19:29 ` [RFC 5/7] arm64: KVM/mm: Move SEA handling behind a single 'claim' interface James Morse
@ 2018-01-23  8:46   ` gengdongjiu
  0 siblings, 0 replies; 10+ messages in thread
From: gengdongjiu @ 2018-01-23  8:46 UTC (permalink / raw)
  To: James Morse; +Cc: linux-acpi, huangshaoyu, Tyler Baicar



On 2018/1/23 3:29, James Morse wrote:
> To split up APEIs in_nmi() path, we need the nmi-like callers to always
> be in_nmi(). Add a helper to do the work and claim the notification.
> 
> When KVM or the arch code takes an exception that might be a RAS
> notification, it asks the APEI firmware-first code whether it wants
> to claim the exception. We can then go on to see if (a future)
> kernel-first mechanism wants to claim the notification, before
> falling through to the existing default behaviour.
> 
> The NOTIFY_SEA code was merged before we had multiple, possibly-interacting,
> NMI-like notifications and the need to consider kernel-first in the future.
> Make the 'claiming' behaviour explicit, and give ourselves somewhere
> to hook in kernel-first.
> 
> We're restructuring the APEI code to allow multiple NMI-like
> notifications, any notification that might interrupt interrupts-masked
> code must always be wrapped in nmi_enter()/nmi_exit().
> 
> We mask SError over this window to prevent an asynchronous RAS error
> arriving and tripping 'nmi_enter()'s BUG_ON(in_nmi()).
> 
> Not-signed-off: James Morse <james.morse@arm.com>
> ---
>  arch/arm64/include/asm/acpi.h      |  2 ++
>  arch/arm64/include/asm/daifflags.h |  1 +
>  arch/arm64/include/asm/kvm_ras.h   | 14 +++++++++++++-
>  arch/arm64/kernel/acpi.c           | 30 ++++++++++++++++++++++++++++++
>  arch/arm64/mm/fault.c              | 30 ++++++------------------------
>  5 files changed, 52 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
> index 32f465a80e4e..cf844b8d6ab8 100644
> --- a/arch/arm64/include/asm/acpi.h
> +++ b/arch/arm64/include/asm/acpi.h
> @@ -94,6 +94,8 @@ void __init acpi_init_cpus(void);
>  static inline void acpi_init_cpus(void) { }
>  #endif /* CONFIG_ACPI */
>  
> +bool apei_claim_sea(void);
> +
>  #ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL
>  bool acpi_parking_protocol_valid(int cpu);
>  void __init
> diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
> index 22e4c83de5a5..cbd753855bf3 100644
> --- a/arch/arm64/include/asm/daifflags.h
> +++ b/arch/arm64/include/asm/daifflags.h
> @@ -20,6 +20,7 @@
>  
>  #define DAIF_PROCCTX		0
>  #define DAIF_PROCCTX_NOIRQ	PSR_I_BIT
> +#define DAIF_ERRCTX		(PSR_I_BIT | PSR_A_BIT)
>  
>  /* mask/save/unmask/restore all exceptions, including interrupts. */
>  static inline void local_daif_mask(void)
> diff --git a/arch/arm64/include/asm/kvm_ras.h b/arch/arm64/include/asm/kvm_ras.h
> index 9a54576b759f..7fd38408a602 100644
> --- a/arch/arm64/include/asm/kvm_ras.h
> +++ b/arch/arm64/include/asm/kvm_ras.h
> @@ -4,8 +4,20 @@
>  #ifndef __ARM64_KVM_RAS_H__
>  #define __ARM64_KVM_RAS_H__
>  
> +#include <linux/acpi.h>
> +#include <linux/errno.h>
>  #include <linux/types.h>
>  
> -int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr);
> +static inline int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr)
The addr and esr parameters are not used; can we remove them?

> +{
> +	int ret = -ENOENT;
> +
> +	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA)) {
> +		if (apei_claim_sea())
> +			ret = 0;
> +	}
> +
> +	return ret;
> +}
>  
>  #endif /* __ARM64_KVM_RAS_H__ */
> diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
> index 252396a96c78..b2fc9c7a807d 100644
> --- a/arch/arm64/kernel/acpi.c
> +++ b/arch/arm64/kernel/acpi.c
> @@ -33,6 +33,8 @@
>  
>  #ifdef CONFIG_ACPI_APEI
>  # include <linux/efi.h>
> +# include <acpi/ghes.h>
> +# include <asm/daifflags.h>
>  # include <asm/pgtable.h>
>  #endif
>  
> @@ -261,4 +263,32 @@ pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
>  		return __pgprot(PROT_NORMAL_NC);
>  	return __pgprot(PROT_DEVICE_nGnRnE);
>  }
> +
> +
> +/*
> + * Claim Synchronous External Aborts as a firmwre first notification.

firmwre?
firmwre -->firmware

> + *
> + * Used by KVM and the arch do_sea handler.
> + */
> +bool apei_claim_sea(void)
> +{
> +	bool ret = false;
> +
> +	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA)) {
> +		unsigned long flags = arch_local_save_flags();
> +
> +		/*
> +		 * APEI expects an NMI-like notification to always be called
> +		 * in NMI context.
> +		 */
> +		local_daif_restore(DAIF_ERRCTX);
> +		nmi_enter();
> +		if (ghes_notify_sea() == 0)
> +			ret = true;
> +		nmi_exit();
> +		local_daif_restore(flags);
> +	}
> +
> +	return ret;
> +}
>  #endif
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 39e607515e8f..360b37594649 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -18,6 +18,7 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <linux/acpi.h>
>  #include <linux/extable.h>
>  #include <linux/signal.h>
>  #include <linux/mm.h>
> @@ -44,8 +45,6 @@
>  #include <asm/pgtable.h>
>  #include <asm/tlbflush.h>
>  
> -#include <acpi/ghes.h>
> -
>  struct fault_info {
>  	int	(*fn)(unsigned long addr, unsigned int esr,
>  		      struct pt_regs *regs);
> @@ -580,19 +579,12 @@ static int do_sea(unsigned long addr, unsigned int esr, struct pt_regs *regs)
>  	pr_err("Synchronous External Abort: %s (0x%08x) at 0x%016lx\n",
>  		inf->name, esr, addr);
>  
> -	/*
> -	 * Synchronous aborts may interrupt code which had interrupts masked.
> -	 * Before calling out into the wider kernel tell the interested
> -	 * subsystems.
> -	 */
>  	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA)) {
> -		if (interrupts_enabled(regs))
> -			nmi_enter();
> -
> -		ret = ghes_notify_sea();

  Your code needs a rebase; in the newest code, the return value is already ignored.

> -
> -		if (interrupts_enabled(regs))
> -			nmi_exit();
> +		/*
> +		 * Return value ignored as we rely on signal merging.
> +		 * Future patches will make this more robust.
> +		 */
> +	       apei_claim_sea();
>  	}
>  
>  	info.si_signo = SIGBUS;
> @@ -674,16 +666,6 @@ static const struct fault_info fault_info[] = {
>  	{ do_bad,		SIGBUS,  0,		"unknown 63"			},
>  };
>  
> -int kvm_handle_guest_sea(phys_addr_t addr, unsigned int esr)
> -{
> -	int ret = -ENOENT;
> -
> -	if (IS_ENABLED(CONFIG_ACPI_APEI_SEA))
> -		ret = ghes_notify_sea();
> -
> -	return ret;
> -}
> -
>  asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
>  					 struct pt_regs *regs)
>  {
> 


* Re: [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache
  2018-01-22 19:29 [RFC 0/7] APEI: Move arm64 NMI notifications to use estatus cache James Morse
                   ` (6 preceding siblings ...)
  2018-01-22 19:29 ` [RFC 7/7] ACPI / APEI: Split fixmap pages for arm64 NMI-like notifications James Morse
@ 2018-01-23  8:51 ` gengdongjiu
  7 siblings, 0 replies; 10+ messages in thread
From: gengdongjiu @ 2018-01-23  8:51 UTC (permalink / raw)
  To: James Morse, Zhengqiang (turing), wangxiongfeng (C)
  Cc: linux-acpi, huangshaoyu, Tyler Baicar

Adding more people from Huawei so they know the status and can review this solution.

Hi James,
   Thanks for your work.

On 2018/1/23 3:29, James Morse wrote:
> Hi guys, kbuild-robot,
> 
> This RFC is rough, and not at all ready. This is the current status of
> my attempts to split up the ghes.c code to allow multiple notifications
> to be NMI-like. On arm64 we have NOTIFY_{SEA, SEI, SDEI} all of which
> have NMI-like behaviour.
Ok.

> 
> This series splits up APEIs in_nmi() path so that more than one
> notification can use it. To support the asyncronous notifications: SEI
> and SDEI we move all the NMI-like handlers over to the estatus-cache.
> This gives us the same APEI behaviour as x86, and means the multiple
> notification methods can interact if firmware implements more than one.
> 
> estatus.. queue? ghes.c has three things all called 'estatus'. One is
> a pool of memory that has a static size, and is grown/shrunk when new
> NMI users are allocated. The second is the cache, this holds recent
> notifications so it can suppress notifications we've already handled.
> The last is the queue, which holds data from NMI notifications (in pool
> memory) that can't be handled immediatly.
thanks for the explanation.

> 
> So far this has only been tested using SDEI.
> 
> This RFC makes a know race worse. (I aim to fix the race before dropping
> the RFC tag). Xie XiuQi reported that both the arch code and
> memory_failure() will signal an affected process, what the process gets
Yes. Currently in the arch code we first call memory_failure(), which may signal an affected process,
then the arch code also signals the affected process.

> depends on the order these run in, and how the signals get merged.
> Using the estatus-cache makes this worse. My intention is for the arch
> code's new 'apei_claim_x()' helpers to give any queue that the claimed
I checked your patch and gave some comments. Using the apei_claim_x() helpers looks good;
they can be called by both the arch code and KVM in the NMI path.

> RAS event may be stuck in a kick, depending on which irq/preemptible
> flags the notification caused to be set.
>
> Your CC list is wrong! Yes, given how ropey this is I want to keep the
> noise low, it would only need posting again at rc1.
Great, hope these patches can be applied in rc1 or rc2.

> 
> 
> Comments on the overall approach welcome!
> 
> 
> Thanks,
> 
> James Morse (7):
>   ACPI / APEI: Move the estatus queue code up, and under its own ifdef
>   ACPI / APEI: Generalise the estatus queue's add/remove and notify code
>   ACPI / APEI: Switch NOTIFY_SEA to use the estatus queue
>   KVM: arm/arm64: Add kvm_ras.h to collect kvm specific RAS plumbing
>   arm64: KVM/mm: Move SEA handling behind a single 'claim' interface.
>   ACPI / APEI: Make the fixmap_idx per-ghes to allow multiple in_nmi()
>     users
>   ACPI / APEI: Split fixmap pages for arm64 NMI-like notifications
> 
>  arch/arm/include/asm/kvm_ras.h       |  14 ++
>  arch/arm/include/asm/system_misc.h   |   5 -
>  arch/arm64/include/asm/acpi.h        |   2 +
>  arch/arm64/include/asm/daifflags.h   |   1 +
>  arch/arm64/include/asm/fixmap.h      |   4 +-
>  arch/arm64/include/asm/kvm_ras.h     |  23 ++
>  arch/arm64/include/asm/system_misc.h |   2 -
>  arch/arm64/kernel/acpi.c             |  30 +++
>  arch/arm64/mm/fault.c                |  30 +--
>  drivers/acpi/apei/ghes.c             | 467 ++++++++++++++++++-----------------
>  include/acpi/ghes.h                  |   5 +
>  virt/kvm/arm/mmu.c                   |   4 +-
>  12 files changed, 327 insertions(+), 260 deletions(-)
>  create mode 100644 arch/arm/include/asm/kvm_ras.h
>  create mode 100644 arch/arm64/include/asm/kvm_ras.h
> 

