linuxppc-dev.lists.ozlabs.org archive mirror
* [RFC][PATCH 00/14] pseries exception cleanups
@ 2016-07-21  6:43 Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 01/14] powerpc: add arch/powerpc/tools directory Nicholas Piggin
                   ` (13 more replies)
  0 siblings, 14 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:43 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Hi,

This series does two major things: first, it changes how our feature
fixup code works; second, it reorganises the exception vectors for
pseries, which requires the first.

This has not had a huge amount of testing, in particular of different
endians, cross compiling, embedded configs, etc. At this point I want
to get something out for comments because it has become quite a large
change.

For now, it applies to quite an old -linus revision:
d325ea859490511322d1f151dc38577ee9a7c6da

Rebasing takes a bit of work, but I'll bring it up to date if
the response is positive.

Thanks,
Nick


Nicholas Piggin (14):
  powerpc: add arch/powerpc/tools directory
  powerpc/pseries: remove cross-fixup branches in exception code
  powerpc: build-time fixup alternate feature relative addresses
  powerpc/pseries: move decrementer exception vector out of line
  powerpc/pseries: 4GB exception handler offsets
  powerpc/pseries: h_facility_unavailable realmode exception location
  powerpc/pseries: improved exception vector macros
  powerpc/pseries: consolidate exception handler alignment
  powerpc/64: use gas sections for arranging exception vectors
  powerpc/pseries: move related exception code together
  powerpc/pseries: use single macro for both parts of OOL exception
  powerpc/pseries: remove unused exception code, small cleanups
  powerpc/pseries: consolidate slb exceptions
  powerpc/pseries: exceptions use short handler load again

 arch/powerpc/Makefile                             |   23 +-
 arch/powerpc/include/asm/exception-64s.h          |  155 +-
 arch/powerpc/include/asm/feature-fixups.h         |    5 +-
 arch/powerpc/include/asm/head-64.h                |  351 ++++
 arch/powerpc/include/asm/ppc_asm.h                |   29 +-
 arch/powerpc/kernel/exceptions-64s.S              | 2052 ++++++++++-----------
 arch/powerpc/kernel/head_64.S                     |   84 +-
 arch/powerpc/kernel/vmlinux.lds.S                 |   32 +-
 arch/powerpc/lib/feature-fixups.c                 |   19 +-
 arch/powerpc/relocs_check.sh                      |   59 -
 arch/powerpc/scripts/gcc-check-mprofile-kernel.sh |   23 -
 arch/powerpc/tools/Makefile                       |    3 +
 arch/powerpc/tools/gcc-check-mprofile-kernel.sh   |   23 +
 arch/powerpc/tools/relocs/.gitignore              |    1 +
 arch/powerpc/tools/relocs/Makefile                |    9 +
 arch/powerpc/tools/relocs/code-patching.c         |   82 +
 arch/powerpc/tools/relocs/code-patching.h         |    7 +
 arch/powerpc/tools/relocs/elf_sections.c          |  337 ++++
 arch/powerpc/tools/relocs/elf_sections.h          |   50 +
 arch/powerpc/tools/relocs/process_relocs.c        |  437 +++++
 arch/powerpc/tools/relocs_check.sh                |   59 +
 21 files changed, 2527 insertions(+), 1313 deletions(-)
 create mode 100644 arch/powerpc/include/asm/head-64.h
 delete mode 100755 arch/powerpc/relocs_check.sh
 delete mode 100755 arch/powerpc/scripts/gcc-check-mprofile-kernel.sh
 create mode 100644 arch/powerpc/tools/Makefile
 create mode 100755 arch/powerpc/tools/gcc-check-mprofile-kernel.sh
 create mode 100644 arch/powerpc/tools/relocs/.gitignore
 create mode 100644 arch/powerpc/tools/relocs/Makefile
 create mode 100644 arch/powerpc/tools/relocs/code-patching.c
 create mode 100644 arch/powerpc/tools/relocs/code-patching.h
 create mode 100644 arch/powerpc/tools/relocs/elf_sections.c
 create mode 100644 arch/powerpc/tools/relocs/elf_sections.h
 create mode 100644 arch/powerpc/tools/relocs/process_relocs.c
 create mode 100755 arch/powerpc/tools/relocs_check.sh

-- 
2.8.1


* [PATCH 01/14] powerpc: add arch/powerpc/tools directory
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 02/14] powerpc/pseries: remove cross-fixup branches in exception code Nicholas Piggin
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Move a couple of existing scripts under there, and remove the scripts
directory: a script is a tool, but a tool is not necessarily a script.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/Makefile                             |  8 ++-
 arch/powerpc/relocs_check.sh                      | 59 -----------------------
 arch/powerpc/scripts/gcc-check-mprofile-kernel.sh | 23 ---------
 arch/powerpc/tools/gcc-check-mprofile-kernel.sh   | 23 +++++++++
 arch/powerpc/tools/relocs_check.sh                | 59 +++++++++++++++++++++++
 5 files changed, 88 insertions(+), 84 deletions(-)
 delete mode 100755 arch/powerpc/relocs_check.sh
 delete mode 100755 arch/powerpc/scripts/gcc-check-mprofile-kernel.sh
 create mode 100755 arch/powerpc/tools/gcc-check-mprofile-kernel.sh
 create mode 100755 arch/powerpc/tools/relocs_check.sh

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 709a22a..d8d30fc 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -134,7 +134,7 @@ CFLAGS-$(CONFIG_GENERIC_CPU) += -mcpu=powerpc64
 endif
 
 ifdef CONFIG_MPROFILE_KERNEL
-    ifeq ($(shell $(srctree)/arch/powerpc/scripts/gcc-check-mprofile-kernel.sh $(CC) -I$(srctree)/include -D__KERNEL__),OK)
+    ifeq ($(shell $(srctree)/arch/powerpc/tools/gcc-check-mprofile-kernel.sh $(CC) -I$(srctree)/include -D__KERNEL__),OK)
         CC_FLAGS_FTRACE := -pg -mprofile-kernel
         KBUILD_CPPFLAGS += -DCC_USING_MPROFILE_KERNEL
     else
@@ -227,6 +227,9 @@ cpu-as-$(CONFIG_E200)		+= -Wa,-me200
 KBUILD_AFLAGS += $(cpu-as-y)
 KBUILD_CFLAGS += $(cpu-as-y)
 
+archscripts: scripts_basic
+	$(Q)$(MAKE) $(build)=arch/powerpc/tools
+
 head-y				:= arch/powerpc/kernel/head_$(CONFIG_WORD_SIZE).o
 head-$(CONFIG_8xx)		:= arch/powerpc/kernel/head_8xx.o
 head-$(CONFIG_40x)		:= arch/powerpc/kernel/head_40x.o
@@ -268,7 +271,7 @@ quiet_cmd_relocs_check = CALL    $<
       cmd_relocs_check = $(CONFIG_SHELL) $< "$(OBJDUMP)" "$(obj)/vmlinux"
 
 PHONY += relocs_check
-relocs_check: arch/powerpc/relocs_check.sh vmlinux
+relocs_check: arch/powerpc/tools/relocs_check.sh vmlinux
 	$(call cmd,relocs_check)
 
 zImage: relocs_check
@@ -368,6 +371,7 @@ endif
 
 archclean:
 	$(Q)$(MAKE) $(clean)=$(boot)
+	$(Q)$(MAKE) $(clean)=arch/powerpc/tools
 
 archprepare: checkbin
 
diff --git a/arch/powerpc/relocs_check.sh b/arch/powerpc/relocs_check.sh
deleted file mode 100755
index 2e4ebd0..0000000
--- a/arch/powerpc/relocs_check.sh
+++ /dev/null
@@ -1,59 +0,0 @@
-#!/bin/sh
-
-# Copyright © 2015 IBM Corporation
-
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version
-# 2 of the License, or (at your option) any later version.
-
-# This script checks the relocations of a vmlinux for "suspicious"
-# relocations.
-
-# based on relocs_check.pl
-# Copyright © 2009 IBM Corporation
-
-if [ $# -lt 2 ]; then
-	echo "$0 [path to objdump] [path to vmlinux]" 1>&2
-	exit 1
-fi
-
-# Have Kbuild supply the path to objdump so we handle cross compilation.
-objdump="$1"
-vmlinux="$2"
-
-bad_relocs=$(
-"$objdump" -R "$vmlinux" |
-	# Only look at relocation lines.
-	grep -E '\<R_' |
-	# These relocations are okay
-	# On PPC64:
-	#	R_PPC64_RELATIVE, R_PPC64_NONE
-	#	R_PPC64_ADDR64 mach_<name>
-	# On PPC:
-	#	R_PPC_RELATIVE, R_PPC_ADDR16_HI,
-	#	R_PPC_ADDR16_HA,R_PPC_ADDR16_LO,
-	#	R_PPC_NONE
-	grep -F -w -v 'R_PPC64_RELATIVE
-R_PPC64_NONE
-R_PPC_ADDR16_LO
-R_PPC_ADDR16_HI
-R_PPC_ADDR16_HA
-R_PPC_RELATIVE
-R_PPC_NONE' |
-	grep -E -v '\<R_PPC64_ADDR64[[:space:]]+mach_'
-)
-
-if [ -z "$bad_relocs" ]; then
-	exit 0
-fi
-
-num_bad=$(echo "$bad_relocs" | wc -l)
-echo "WARNING: $num_bad bad relocations"
-echo "$bad_relocs"
-
-# If we see this type of relocation it's an idication that
-# we /may/ be using an old version of binutils.
-if echo "$bad_relocs" | grep -q -F -w R_PPC64_UADDR64; then
-	echo "WARNING: You need at least binutils >= 2.19 to build a CONFIG_RELOCATABLE kernel"
-fi
diff --git a/arch/powerpc/scripts/gcc-check-mprofile-kernel.sh b/arch/powerpc/scripts/gcc-check-mprofile-kernel.sh
deleted file mode 100755
index c658d8c..0000000
--- a/arch/powerpc/scripts/gcc-check-mprofile-kernel.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-
-set -e
-set -o pipefail
-
-# To debug, uncomment the following line
-# set -x
-
-# Test whether the compile option -mprofile-kernel exists and generates
-# profiling code (ie. a call to _mcount()).
-echo "int func() { return 0; }" | \
-    $* -S -x c -O2 -p -mprofile-kernel - -o - 2> /dev/null | \
-    grep -q "_mcount"
-
-# Test whether the notrace attribute correctly suppresses calls to _mcount().
-
-echo -e "#include <linux/compiler.h>\nnotrace int func() { return 0; }" | \
-    $* -S -x c -O2 -p -mprofile-kernel - -o - 2> /dev/null | \
-    grep -q "_mcount" && \
-    exit 1
-
-echo "OK"
-exit 0
diff --git a/arch/powerpc/tools/gcc-check-mprofile-kernel.sh b/arch/powerpc/tools/gcc-check-mprofile-kernel.sh
new file mode 100755
index 0000000..c658d8c
--- /dev/null
+++ b/arch/powerpc/tools/gcc-check-mprofile-kernel.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+
+set -e
+set -o pipefail
+
+# To debug, uncomment the following line
+# set -x
+
+# Test whether the compile option -mprofile-kernel exists and generates
+# profiling code (ie. a call to _mcount()).
+echo "int func() { return 0; }" | \
+    $* -S -x c -O2 -p -mprofile-kernel - -o - 2> /dev/null | \
+    grep -q "_mcount"
+
+# Test whether the notrace attribute correctly suppresses calls to _mcount().
+
+echo -e "#include <linux/compiler.h>\nnotrace int func() { return 0; }" | \
+    $* -S -x c -O2 -p -mprofile-kernel - -o - 2> /dev/null | \
+    grep -q "_mcount" && \
+    exit 1
+
+echo "OK"
+exit 0
diff --git a/arch/powerpc/tools/relocs_check.sh b/arch/powerpc/tools/relocs_check.sh
new file mode 100755
index 0000000..2e4ebd0
--- /dev/null
+++ b/arch/powerpc/tools/relocs_check.sh
@@ -0,0 +1,59 @@
+#!/bin/sh
+
+# Copyright © 2015 IBM Corporation
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version
+# 2 of the License, or (at your option) any later version.
+
+# This script checks the relocations of a vmlinux for "suspicious"
+# relocations.
+
+# based on relocs_check.pl
+# Copyright © 2009 IBM Corporation
+
+if [ $# -lt 2 ]; then
+	echo "$0 [path to objdump] [path to vmlinux]" 1>&2
+	exit 1
+fi
+
+# Have Kbuild supply the path to objdump so we handle cross compilation.
+objdump="$1"
+vmlinux="$2"
+
+bad_relocs=$(
+"$objdump" -R "$vmlinux" |
+	# Only look at relocation lines.
+	grep -E '\<R_' |
+	# These relocations are okay
+	# On PPC64:
+	#	R_PPC64_RELATIVE, R_PPC64_NONE
+	#	R_PPC64_ADDR64 mach_<name>
+	# On PPC:
+	#	R_PPC_RELATIVE, R_PPC_ADDR16_HI,
+	#	R_PPC_ADDR16_HA,R_PPC_ADDR16_LO,
+	#	R_PPC_NONE
+	grep -F -w -v 'R_PPC64_RELATIVE
+R_PPC64_NONE
+R_PPC_ADDR16_LO
+R_PPC_ADDR16_HI
+R_PPC_ADDR16_HA
+R_PPC_RELATIVE
+R_PPC_NONE' |
+	grep -E -v '\<R_PPC64_ADDR64[[:space:]]+mach_'
+)
+
+if [ -z "$bad_relocs" ]; then
+	exit 0
+fi
+
+num_bad=$(echo "$bad_relocs" | wc -l)
+echo "WARNING: $num_bad bad relocations"
+echo "$bad_relocs"
+
+# If we see this type of relocation it's an idication that
+# we /may/ be using an old version of binutils.
+if echo "$bad_relocs" | grep -q -F -w R_PPC64_UADDR64; then
+	echo "WARNING: You need at least binutils >= 2.19 to build a CONFIG_RELOCATABLE kernel"
+fi
-- 
2.8.1


* [PATCH 02/14] powerpc/pseries: remove cross-fixup branches in exception code
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 01/14] powerpc: add arch/powerpc/tools directory Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 03/14] powerpc: build-time fixup alternate feature relative addresses Nicholas Piggin
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

In preparation for reworking the alternate feature patching code,
remove a case of cross-fixup branching in the exception code. That
is, a branch from one alt feature section targeting another alt
feature section.

We have the following:

BEGIN_FTR_SECTION
b	do_kvm_0x502
...
do_kvm_0x502:
...
FTR_SECTION_ELSE
b	do_kvm_0x500
...
do_kvm_0x500:
...
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)


BEGIN_FTR_SECTION
b	do_kvm_0x502
FTR_SECTION_ELSE
b	do_kvm_0x500
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)

This somehow manages to work today. The first branch to do_kvm_0x500
works as expected: it lands in the copy of the do_kvm_0x500 code that
is patched in at the destination location. The second branch to it
lands at the linked location of the do_kvm_0x500 code, in the alt
section. So two different copies of the code end up being executed.

A subsequent patch moves the patch code far away and discards it after
boot, so this type of branch will no longer work.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 4c94406..18befa5 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -257,13 +257,18 @@ hardware_interrupt_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
 					    EXC_HV, SOFTEN_TEST_HV)
-		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
 	FTR_SECTION_ELSE
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
 					    EXC_STD, SOFTEN_TEST_PR)
-		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 
+	/*
+	 * Relon code jumps to these KVM handlers too so can't put them
+	 * in the feature sections.
+	 */
+	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
+
 	STD_EXCEPTION_PSERIES(0x600, alignment)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x600)
 
-- 
2.8.1


* [PATCH 03/14] powerpc: build-time fixup alternate feature relative addresses
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 01/14] powerpc: add arch/powerpc/tools directory Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 02/14] powerpc/pseries: remove cross-fixup branches in exception code Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21 13:39   ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 04/14] powerpc/pseries: move decrementer exception vector out of line Nicholas Piggin
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Implement build-time fixup of alternate feature relative addresses for
the out-of-line ("else") patch code. This is done post-link with a new
powerpc build tool that parses relocations and fixup structures, and
adjusts branch instructions.

The "else" part of the feature patching system currently requires the
linker generate correct relative addresses for the linked location. The
the kernel instruction patching code copies instructions delta bytes to
runtime location, and applies negative delta to relative branches as it
does.
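
As a rough sketch of that runtime adjustment (illustrative only; the
real code is patch_alt_instruction()/translate_branch(), and the
function below and its names are made up for this example):

#include <stdint.h>

/*
 * Sketch: re-target an I-form branch (opcode 18) when the instruction
 * is copied from its linked address 'src' to its run address 'dest',
 * keeping the same absolute target.
 */
static uint32_t move_rel_branch(uint32_t insn, uint64_t src, uint64_t dest)
{
	int64_t offset = insn & 0x03FFFFFC;	/* 26-bit LI field, in bytes */

	if (offset & 0x02000000)		/* sign extend */
		offset -= 0x04000000;

	offset -= (int64_t)(dest - src);	/* apply the negative delta */

	/* the real code must also check the new offset still fits */
	return 0x48000000 | (insn & 0x3) | ((uint32_t)offset & 0x03FFFFFC);
}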

This is nice and simple; however, the requirement to generate valid
relative addresses at the linked location is constraining:

- Sometimes it can't be done at all, particularly in exception handlers
  that have fixed locations, and when both the if and else parts of the
  fixup have relative references to the same code. This has caused
  headaches in the exception code, where things have to be juggled
  around until they work.

- It requires patch code to be placed near its destination location,
  despite it never being executed in place. This compounds the real
  estate shortage in the exception code, and puts this unused code next
  to (likely) performance-critical code, which is not ideal.

- Alternate patch code can't be discarded after the alt fixups are
  completed. Not a significant problem in practice, but it would be
  tidier to discard it, and the feature patching infrastructure is
  likely to see increased use in future.

With this change, the relative branch adjustment is done at build time,
so patch code branches are already correct for their destination
location. The kernel's runtime patching then just copies the
instructions across.
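
The core of the build-time fixup amounts to the following (a simplified
sketch of what create_branch_patch()/set_branch_target() in the new
tool do; the function below and its names are made up, and error
handling and endian conversion are omitted):

#include <stdint.h>

/*
 * Re-encode the I-form branch found at 'reloc_addr' in the alt section
 * as if it lived at the address the patch will be copied to, so no
 * translation is needed at runtime.
 */
static uint32_t fixup_alt_branch(uint32_t insn, uint64_t reloc_addr,
				 uint64_t alt_start, uint64_t dst_start,
				 uint64_t target)
{
	uint64_t dst_addr = reloc_addr - alt_start + dst_start;
	int64_t offset = (int64_t)(target - dst_addr);

	/* must fit the 26-bit signed, word-aligned branch offset */
	if (offset < -0x2000000 || offset > 0x1fffffc || (offset & 0x3))
		return 0;	/* caller treats 0 as an error */

	return 0x48000000 | (insn & 0x3) | ((uint32_t)offset & 0x03FFFFFC);
}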

__ftr_alt sections get moved out of the main text section and into an
init region, which stops them cluttering the runtime text and allows
them to be discarded after boot.

Relative branches in the alt sections still need to keep the linker
happy, otherwise the link will fail. So the alt sections are made
allocatable and executable, which allows branch stub trampolines to be
added for these branches. The relocation and patching system simply
ignores the branch stubs.

Relocations have to be present in the vmlinux file after linking, so
--emit-relocs is added to the link flags. XXX: should these be stripped
afterwards?

fixup_entry data could be stripped at build time after this change.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/Makefile                      |  15 +-
 arch/powerpc/include/asm/feature-fixups.h  |   5 +-
 arch/powerpc/kernel/vmlinux.lds.S          |  12 +-
 arch/powerpc/lib/feature-fixups.c          |  19 +-
 arch/powerpc/tools/Makefile                |   3 +
 arch/powerpc/tools/relocs/.gitignore       |   1 +
 arch/powerpc/tools/relocs/Makefile         |   9 +
 arch/powerpc/tools/relocs/code-patching.c  |  82 ++++++
 arch/powerpc/tools/relocs/code-patching.h  |   7 +
 arch/powerpc/tools/relocs/elf_sections.c   | 337 ++++++++++++++++++++++
 arch/powerpc/tools/relocs/elf_sections.h   |  50 ++++
 arch/powerpc/tools/relocs/process_relocs.c | 437 +++++++++++++++++++++++++++++
 12 files changed, 959 insertions(+), 18 deletions(-)
 create mode 100644 arch/powerpc/tools/Makefile
 create mode 100644 arch/powerpc/tools/relocs/.gitignore
 create mode 100644 arch/powerpc/tools/relocs/Makefile
 create mode 100644 arch/powerpc/tools/relocs/code-patching.c
 create mode 100644 arch/powerpc/tools/relocs/code-patching.h
 create mode 100644 arch/powerpc/tools/relocs/elf_sections.c
 create mode 100644 arch/powerpc/tools/relocs/elf_sections.h
 create mode 100644 arch/powerpc/tools/relocs/process_relocs.c

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index d8d30fc..ca82aa2 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -99,6 +99,10 @@ endif
 LDFLAGS_vmlinux-y := -Bstatic
 LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
 LDFLAGS_vmlinux	:= $(LDFLAGS_vmlinux-y)
+# --emit-relocs required for post-link fixup of alternate feature
+# text section relocations.
+LDFLAGS_vmlinux	+= --emit-relocs
+KBUILD_LDFLAGS_MODULE += --emit-relocs
 
 ifeq ($(CONFIG_PPC64),y)
 ifeq ($(call cc-option-yn,-mcmodel=medium),y)
@@ -277,6 +281,16 @@ relocs_check: arch/powerpc/tools/relocs_check.sh vmlinux
 zImage: relocs_check
 endif
 
+CMD_PROCESS_RELOCS = arch/powerpc/tools/relocs/process_relocs
+quiet_cmd_process_relocs = CALL    $@
+      cmd_process_relocs = $(CMD_PROCESS_RELOCS) $(obj)/vmlinux
+PHONY += process_relocs
+process_relocs: vmlinux FORCE
+	$(call if_changed,process_relocs)
+zImage: process_relocs
+
+KBUILD_MODPOST_TOOL := $(CMD_PROCESS_RELOCS)
+
 $(BOOT_TARGETS1): vmlinux
 	$(Q)$(MAKE) ARCH=ppc64 $(build)=$(boot) $(patsubst %,$(boot)/%,$@)
 $(BOOT_TARGETS2): vmlinux
@@ -419,4 +433,3 @@ checkbin:
 
 
 CLEAN_FILES += $(TOUT)
-
diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index 9a67a38..160f7b1 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -16,6 +16,9 @@
  * useable with the vdso shared library. There is also an assumption
  * that values will be negative, that is, the fixup table has to be
  * located after the code it fixes up.
+ *
+ * Please ensure that new section names, modifications to FTR_ENTRY
+ * encoding, etc., is handled by arch/powerpc/tools/relocs/ code.
  */
 #if defined(CONFIG_PPC64) && !defined(__powerpc64__)
 /* 64 bits kernel, 32 bits code (ie. vdso32) */
@@ -33,7 +36,7 @@
 
 #define FTR_SECTION_ELSE_NESTED(label)			\
 label##2:						\
-	.pushsection __ftr_alt_##label,"a";		\
+	.pushsection __ftr_alt_##label,"ax";		\
 	.align 2;					\
 label##3:
 
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 2dd91f7..552dcbc 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -49,8 +49,7 @@ SECTIONS
 		ALIGN_FUNCTION();
 		HEAD_TEXT
 		_text = .;
-		/* careful! __ftr_alt_* sections need to be close to .text */
-		*(.text .fixup __ftr_alt_* .ref.text)
+		*(.text .fixup .ref.text)
 		SCHED_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
@@ -63,7 +62,6 @@ SECTIONS
 		*(.got2)
 		__got2_end = .;
 #endif /* CONFIG_PPC32 */
-
 	} :kernel
 
 	. = ALIGN(PAGE_SIZE);
@@ -92,6 +90,10 @@ SECTIONS
 	__init_begin = .;
 	INIT_TEXT_SECTION(PAGE_SIZE) :kernel
 
+	.__ftr_alternates.text : AT(ADDR(.__ftr_alternates.text) - LOAD_OFFSET) {
+		*(__ftr_alt*);
+	}
+
 	/* .exit.text is discarded at runtime, not link time,
 	 * to deal with references from __bug_table
 	 */
@@ -123,6 +125,10 @@ SECTIONS
 
 	SECURITY_INIT
 
+	/*
+	 * The _ftr_fixup sections could be discarded after the relocation
+	 * pass, which would save a few bytes.
+	 */
 	. = ALIGN(8);
 	__ftr_fixup : AT(ADDR(__ftr_fixup) - LOAD_OFFSET) {
 		__start___ftr_fixup = .;
diff --git a/arch/powerpc/lib/feature-fixups.c b/arch/powerpc/lib/feature-fixups.c
index 7ce3870..15aeda1 100644
--- a/arch/powerpc/lib/feature-fixups.c
+++ b/arch/powerpc/lib/feature-fixups.c
@@ -44,20 +44,13 @@ static unsigned int *calc_addr(struct fixup_entry *fcur, long offset)
 static int patch_alt_instruction(unsigned int *src, unsigned int *dest,
 				 unsigned int *alt_start, unsigned int *alt_end)
 {
-	unsigned int instr;
+	unsigned int instr = *src;
 
-	instr = *src;
-
-	if (instr_is_relative_branch(*src)) {
-		unsigned int *target = (unsigned int *)branch_target(src);
-
-		/* Branch within the section doesn't need translating */
-		if (target < alt_start || target >= alt_end) {
-			instr = translate_branch(dest, src);
-			if (!instr)
-				return 1;
-		}
-	}
+	/*
+	 * We used to translate relative branches here, however we now
+	 * do that by fixing up relocations after link with process_relocs
+	 * tool in arch/powerpc/tools/relocs/
+	 */
 
 	patch_instruction(dest, instr);
 
diff --git a/arch/powerpc/tools/Makefile b/arch/powerpc/tools/Makefile
new file mode 100644
index 0000000..38dbf04
--- /dev/null
+++ b/arch/powerpc/tools/Makefile
@@ -0,0 +1,3 @@
+always		:= $(hostprogs-y) $(hostprogs-m)
+
+subdir-y	+= relocs
diff --git a/arch/powerpc/tools/relocs/.gitignore b/arch/powerpc/tools/relocs/.gitignore
new file mode 100644
index 0000000..5cf4382
--- /dev/null
+++ b/arch/powerpc/tools/relocs/.gitignore
@@ -0,0 +1 @@
+process_relocs
diff --git a/arch/powerpc/tools/relocs/Makefile b/arch/powerpc/tools/relocs/Makefile
new file mode 100644
index 0000000..c843b85
--- /dev/null
+++ b/arch/powerpc/tools/relocs/Makefile
@@ -0,0 +1,9 @@
+HOST_EXTRACFLAGS		+= -Wno-unused-function
+
+hostprogs-y			:= process_relocs
+
+process_relocs-objs		:= process_relocs.o elf_sections.o code-patching.o
+
+always				:= $(hostprogs-y)
+
+HOSTLOADLIBES_process_relocs	+= -lelf
diff --git a/arch/powerpc/tools/relocs/code-patching.c b/arch/powerpc/tools/relocs/code-patching.c
new file mode 100644
index 0000000..db564a0
--- /dev/null
+++ b/arch/powerpc/tools/relocs/code-patching.c
@@ -0,0 +1,82 @@
+/*
+ *  Copyright 2008 Michael Ellerman, IBM Corporation.
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU General Public License
+ *  as published by the Free Software Foundation; either version
+ *  2 of the License, or (at your option) any later version.
+ */
+#include <stdlib.h>
+#include <stdio.h>
+#include <inttypes.h>
+#include <errno.h>
+#include "code-patching.h"
+
+#define BRANCH_SET_LINK 0x1
+#define BRANCH_ABSOLUTE 0x2
+
+static int set_uncond_branch_target(uint32_t *insn,
+		const uint64_t addr, uint64_t target)
+{
+	uint32_t i = *insn;
+	int64_t offset;
+
+	offset = target;
+	if (!(i & BRANCH_ABSOLUTE))
+		offset = offset - addr;
+
+	/* Check we can represent the target in the instruction format */
+	if (offset < -0x2000000 || offset > 0x1fffffc || offset & 0x3)
+		return -EOVERFLOW;
+
+	/* Mask out the flags and target, so they don't step on each other. */
+	*insn = 0x48000000 | (i & 0x3) | (offset & 0x03FFFFFC);
+
+	return 0;
+}
+
+static int set_cond_branch_target(uint32_t *insn,
+		const uint64_t addr, uint64_t target)
+{
+	uint32_t i = *insn;
+	int64_t offset;
+
+	offset = target;
+	if (!(i & BRANCH_ABSOLUTE))
+		offset = offset - addr;
+
+	/* Check we can represent the target in the instruction format */
+	if (offset < -0x8000 || offset > 0x7FFF || offset & 0x3)
+		return -EOVERFLOW;
+
+	/* Mask out the flags and target, so they don't step on each other. */
+	*insn = 0x40000000 | (i & 0x3FF0003) | (offset & 0xFFFC);
+
+	return 0;
+}
+
+static uint32_t branch_opcode(uint32_t instr)
+{
+	return (instr >> 26) & 0x3F;
+}
+
+static int instr_is_branch_iform(uint32_t instr)
+{
+	return branch_opcode(instr) == 18;
+}
+
+static int instr_is_branch_bform(uint32_t instr)
+{
+	return branch_opcode(instr) == 16;
+}
+
+int set_branch_target(uint32_t *insn,
+		const uint64_t addr, uint64_t target)
+{
+	if (instr_is_branch_iform(*insn))
+		return set_uncond_branch_target(insn, addr, target);
+	else if (instr_is_branch_bform(*insn))
+		return set_cond_branch_target(insn, addr, target);
+
+	return -EINVAL;
+}
diff --git a/arch/powerpc/tools/relocs/code-patching.h b/arch/powerpc/tools/relocs/code-patching.h
new file mode 100644
index 0000000..1d3cbbe
--- /dev/null
+++ b/arch/powerpc/tools/relocs/code-patching.h
@@ -0,0 +1,7 @@
+#ifndef __CODE_PATCHING_H__
+#define __CODE_PATCHING_H__
+
+int set_branch_target(uint32_t *insn,
+		const uint64_t addr, uint64_t target);
+
+#endif
diff --git a/arch/powerpc/tools/relocs/elf_sections.c b/arch/powerpc/tools/relocs/elf_sections.c
new file mode 100644
index 0000000..718020d
--- /dev/null
+++ b/arch/powerpc/tools/relocs/elf_sections.c
@@ -0,0 +1,337 @@
+#define _GNU_SOURCE
+#include <assert.h>
+#include <errno.h>
+#include <string.h>
+#include <fcntl.h>
+#include <gelf.h>
+#include <elf.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+
+#include "elf_sections.h"
+
+#define dbg_printf(...)
+
+static const char *rel_type_name(unsigned int type)
+{
+	static const char *const type_name[] = {
+#define REL_TYPE(X)[X] = #X
+		REL_TYPE(R_PPC64_NONE),
+		REL_TYPE(R_PPC64_ADDR32),
+		REL_TYPE(R_PPC64_ADDR24),
+		REL_TYPE(R_PPC64_ADDR16),
+		REL_TYPE(R_PPC64_ADDR16_LO),
+		REL_TYPE(R_PPC64_ADDR16_HI),
+		REL_TYPE(R_PPC64_ADDR16_HA),
+		REL_TYPE(R_PPC64_ADDR14),
+		REL_TYPE(R_PPC64_ADDR14_BRTAKEN),
+		REL_TYPE(R_PPC64_ADDR14_BRNTAKEN),
+		REL_TYPE(R_PPC64_REL24),
+		REL_TYPE(R_PPC64_REL14),
+		REL_TYPE(R_PPC64_REL14_BRTAKEN),
+		REL_TYPE(R_PPC64_REL14_BRNTAKEN),
+		REL_TYPE(R_PPC64_GOT16),
+		REL_TYPE(R_PPC64_GOT16_LO),
+		REL_TYPE(R_PPC64_GOT16_HI),
+		REL_TYPE(R_PPC64_GOT16_HA),
+		REL_TYPE(R_PPC64_COPY),
+		REL_TYPE(R_PPC64_GLOB_DAT),
+		REL_TYPE(R_PPC64_JMP_SLOT),
+		REL_TYPE(R_PPC64_RELATIVE),
+		REL_TYPE(R_PPC64_UADDR32),
+		REL_TYPE(R_PPC64_UADDR16),
+		REL_TYPE(R_PPC64_REL32),
+		REL_TYPE(R_PPC64_PLT32),
+		REL_TYPE(R_PPC64_PLTREL32),
+		REL_TYPE(R_PPC64_PLT16_LO),
+		REL_TYPE(R_PPC64_PLT16_HI),
+		REL_TYPE(R_PPC64_PLT16_HA),
+		REL_TYPE(R_PPC64_SECTOFF),
+		REL_TYPE(R_PPC64_SECTOFF_LO),
+		REL_TYPE(R_PPC64_SECTOFF_HI),
+		REL_TYPE(R_PPC64_SECTOFF_HA),
+		REL_TYPE(R_PPC64_ADDR30),
+		REL_TYPE(R_PPC64_ADDR64),
+		REL_TYPE(R_PPC64_ADDR16_HIGHER),
+		REL_TYPE(R_PPC64_ADDR16_HIGHERA),
+		REL_TYPE(R_PPC64_ADDR16_HIGHEST),
+		REL_TYPE(R_PPC64_ADDR16_HIGHESTA),
+		REL_TYPE(R_PPC64_UADDR64),
+		REL_TYPE(R_PPC64_REL64),
+		REL_TYPE(R_PPC64_PLT64),
+		REL_TYPE(R_PPC64_PLTREL64),
+		REL_TYPE(R_PPC64_TOC16),
+		REL_TYPE(R_PPC64_TOC16_LO),
+		REL_TYPE(R_PPC64_TOC16_HI),
+		REL_TYPE(R_PPC64_TOC16_HA),
+		REL_TYPE(R_PPC64_TOC),
+		REL_TYPE(R_PPC64_PLTGOT16),
+		REL_TYPE(R_PPC64_PLTGOT16_LO),
+		REL_TYPE(R_PPC64_PLTGOT16_HI),
+		REL_TYPE(R_PPC64_PLTGOT16_HA),
+		REL_TYPE(R_PPC64_ADDR16_DS),
+		REL_TYPE(R_PPC64_ADDR16_LO_DS),
+		REL_TYPE(R_PPC64_GOT16_DS),
+		REL_TYPE(R_PPC64_GOT16_LO_DS),
+		REL_TYPE(R_PPC64_PLT16_LO_DS),
+		REL_TYPE(R_PPC64_SECTOFF_DS),
+		REL_TYPE(R_PPC64_SECTOFF_LO_DS),
+		REL_TYPE(R_PPC64_TOC16_DS),
+		REL_TYPE(R_PPC64_TOC16_LO_DS),
+		REL_TYPE(R_PPC64_PLTGOT16_DS),
+		REL_TYPE(R_PPC64_PLTGOT16_LO_DS),
+		REL_TYPE(R_PPC64_TLS),
+		REL_TYPE(R_PPC64_DTPMOD64),
+		REL_TYPE(R_PPC64_TPREL16),
+		REL_TYPE(R_PPC64_TPREL16_LO),
+		REL_TYPE(R_PPC64_TPREL16_HI),
+		REL_TYPE(R_PPC64_TPREL16_HA),
+		REL_TYPE(R_PPC64_TPREL64),
+		REL_TYPE(R_PPC64_DTPREL16),
+		REL_TYPE(R_PPC64_DTPREL16_LO),
+		REL_TYPE(R_PPC64_DTPREL16_HI),
+		REL_TYPE(R_PPC64_DTPREL16_HA),
+		REL_TYPE(R_PPC64_DTPREL64),
+		REL_TYPE(R_PPC64_GOT_TLSGD16),
+		REL_TYPE(R_PPC64_GOT_TLSGD16_LO),
+		REL_TYPE(R_PPC64_GOT_TLSGD16_HI),
+		REL_TYPE(R_PPC64_GOT_TLSGD16_HA),
+		REL_TYPE(R_PPC64_GOT_TLSLD16),
+		REL_TYPE(R_PPC64_GOT_TLSLD16_LO),
+		REL_TYPE(R_PPC64_GOT_TLSLD16_HI),
+		REL_TYPE(R_PPC64_GOT_TLSLD16_HA),
+		REL_TYPE(R_PPC64_GOT_TPREL16_DS),
+		REL_TYPE(R_PPC64_GOT_TPREL16_LO_DS),
+		REL_TYPE(R_PPC64_GOT_TPREL16_HI),
+		REL_TYPE(R_PPC64_GOT_TPREL16_HA),
+		REL_TYPE(R_PPC64_GOT_DTPREL16_DS),
+		REL_TYPE(R_PPC64_GOT_DTPREL16_LO_DS),
+		REL_TYPE(R_PPC64_GOT_DTPREL16_HI),
+		REL_TYPE(R_PPC64_GOT_DTPREL16_HA),
+		REL_TYPE(R_PPC64_TPREL16_DS),
+		REL_TYPE(R_PPC64_TPREL16_LO_DS),
+		REL_TYPE(R_PPC64_TPREL16_HIGHER),
+		REL_TYPE(R_PPC64_TPREL16_HIGHERA),
+		REL_TYPE(R_PPC64_TPREL16_HIGHEST),
+		REL_TYPE(R_PPC64_TPREL16_HIGHESTA),
+		REL_TYPE(R_PPC64_DTPREL16_DS),
+		REL_TYPE(R_PPC64_DTPREL16_LO_DS),
+		REL_TYPE(R_PPC64_DTPREL16_HIGHER),
+		REL_TYPE(R_PPC64_DTPREL16_HIGHERA),
+		REL_TYPE(R_PPC64_DTPREL16_HIGHEST),
+		REL_TYPE(R_PPC64_DTPREL16_HIGHESTA),
+		REL_TYPE(R_PPC64_TLSGD),
+		REL_TYPE(R_PPC64_TLSLD),
+		REL_TYPE(R_PPC64_TOCSAVE),
+/*		REL_TYPE(R_PPC64_ENTRY), */
+		REL_TYPE(R_PPC64_REL16),
+		REL_TYPE(R_PPC64_REL16_LO),
+		REL_TYPE(R_PPC64_REL16_HI),
+		REL_TYPE(R_PPC64_REL16_HA),
+#undef REL_TYPE
+	};
+	const char *name = "UNKNOWN";
+
+	if (type < sizeof(type_name) / sizeof(typeof(type_name[0])) && type_name[type])
+		name = type_name[type];
+	return name;
+}
+
+static struct section *get_section(struct elf *elf, Elf_Scn *scn)
+{
+	struct section *section;
+
+	section = malloc(sizeof(struct section));
+
+	section->scn = scn;
+
+	if (gelf_getshdr(scn, &section->shdr) == NULL) {
+		fprintf(stderr, "gelf_getshdr failed: %s\n", elf_errmsg(-1));
+		exit(EXIT_FAILURE);
+	}
+
+	section->name = elf_strptr(elf->elf, elf->shstrndx, section->shdr.sh_name);
+	if (section->name == NULL) {
+		fprintf(stderr, "gelf_strptr failed: %s\n", elf_errmsg(-1));
+		exit(EXIT_FAILURE);
+	}
+
+	section->data = elf_getdata(scn, NULL);
+	if (section->data) {
+		assert(elf_getdata(scn, section->data) == NULL);
+	}
+
+	section->symtab = NULL;
+	if (section->shdr.sh_type == SHT_SYMTAB)
+		goto no_symtab;
+	if (section->shdr.sh_type == SHT_DYNSYM)
+		goto no_symtab;
+	if (section->shdr.sh_type == SHT_DYNAMIC)
+		goto no_symtab;
+
+	/* printf("symtab index:%d\n", elf_scnshndx(scn)); ??? */
+	if (section->shdr.sh_link) {
+		Elf_Scn *link_scn;
+
+		link_scn = elf_getscn(elf->elf, section->shdr.sh_link);
+		section->symtab = get_section(elf, link_scn);
+
+		assert(section->symtab->shdr.sh_type == SHT_SYMTAB ||
+			section->symtab->shdr.sh_type == SHT_DYNSYM);
+	}
+
+no_symtab:
+	section->strtab = NULL;
+	if (section->symtab == NULL) {
+		if (section->shdr.sh_link) {
+			Elf_Scn *link_scn;
+
+			link_scn = elf_getscn(elf->elf, section->shdr.sh_link);
+			section->strtab = get_section(elf, link_scn);
+
+			assert(section->strtab->shdr.sh_type == SHT_STRTAB);
+		}
+	}
+
+	return section;
+}
+
+struct symbol *elf_sections_get_symbol(struct elf *elf, struct section *section, unsigned long nr)
+{
+	struct symbol *symbol;
+	Elf_Scn *scn;
+
+	symbol = malloc(sizeof(struct symbol));
+
+	if (gelf_getsym(section->symtab->data, nr, &symbol->sym) == NULL) {
+		fprintf(stderr, "gelf_getsym failed: %s\n", elf_errmsg(-1));
+		exit(EXIT_FAILURE);
+	}
+
+	scn = elf_getscn(elf->elf, symbol->sym.st_shndx);
+	symbol->section = get_section(elf, scn);
+	symbol->_name = symbol->section->name;
+	if (symbol->sym.st_name) {
+		symbol->name = elf_strptr(elf->elf, elf_ndxscn(section->symtab->strtab->scn), symbol->sym.st_name);
+		symbol->_name = symbol->name;
+	} else {
+		symbol->name = NULL;
+	}
+
+	return symbol;
+}
+
+struct relocation *elf_sections_get_reloc(struct elf *elf, struct section *section, size_t n)
+{
+	struct relocation *relocation;
+
+	relocation = malloc(sizeof(struct relocation));
+
+	if (section->shdr.sh_type == SHT_REL) {
+		if (gelf_getrel(section->data, n, &relocation->rel) != &relocation->rel) {
+			return NULL;
+		}
+
+		relocation->type_name = rel_type_name(GELF_R_TYPE(relocation->rel.r_info));
+		relocation->symbol = elf_sections_get_symbol(elf, section, GELF_R_SYM(relocation->rel.r_info));
+		relocation->offset = relocation->rel.r_offset;
+		relocation->target = relocation->symbol->sym.st_value;
+
+	} else if (section->shdr.sh_type == SHT_RELA) {
+		if (gelf_getrela(section->data, n, &relocation->rela) != &relocation->rela) {
+			return NULL;
+		}
+
+		relocation->type_name = rel_type_name(GELF_R_TYPE(relocation->rela.r_info));
+		relocation->symbol = elf_sections_get_symbol(elf, section, GELF_R_SYM(relocation->rela.r_info));
+		relocation->offset = relocation->rela.r_offset;
+		relocation->target = relocation->symbol->sym.st_value;
+		relocation->target += relocation->rela.r_addend;
+
+	}  else {
+		assert(0);
+	}
+
+	return relocation;
+}
+
+struct elf *elf_sections_init(int fd)
+{
+	struct elf *elf;
+	Elf_Scn *scn;
+
+	elf = malloc(sizeof(struct elf));
+	assert(elf);
+
+	if (elf_version(EV_CURRENT) == EV_NONE) {
+		fprintf(stderr, "libelf not initialized: %s\n", elf_errmsg(-1));
+		exit(EXIT_FAILURE);
+	}
+
+	if ((elf->elf = elf_begin(fd, ELF_C_READ, NULL)) == NULL) {
+		fprintf(stderr, "elf_begin failed: %s\n", elf_errmsg(-1));
+		exit(EXIT_FAILURE);
+	}
+
+	if (elf_kind(elf->elf) != ELF_K_ELF) {
+		fprintf(stderr, "Not an ELF object.\n");
+		exit(EXIT_FAILURE);
+	}
+
+	if (gelf_getehdr(elf->elf, &elf->ehdr) == NULL) {
+		fprintf(stderr, "gelf_getehdr failed: %s\n", elf_errmsg(-1));
+		exit(EXIT_FAILURE);
+	}
+
+	if (elf->ehdr.e_version != EV_CURRENT) {
+		fprintf(stderr, "Unknown ELF version\n");
+		exit(EXIT_FAILURE);
+	}
+
+	if (elf->ehdr.e_machine != EM_PPC && elf->ehdr.e_machine != EM_PPC64) {
+		fprintf(stderr, "Not a PPC/PPC64 machine\n");
+		exit(EXIT_FAILURE);
+	}
+
+	if (elf_getshdrstrndx(elf->elf, &elf->shstrndx) != 0) {
+		fprintf(stderr, "elf_getshdrstrndx failed: %s\n", elf_errmsg(-1));
+		exit(EXIT_FAILURE);
+	}
+
+	scn = elf_getscn(elf->elf, elf->shstrndx);
+	elf->strtab = get_section(elf, scn);
+	assert(elf->strtab->shdr.sh_type == SHT_STRTAB);
+
+	return elf;
+}
+
+void elf_sections_exit(struct elf *elf)
+{
+	elf_end(elf->elf);
+}
+
+int elf_sections_processor(struct elf *elf,
+				int (*fn)(struct section *section, void *arg),
+				void *arg)
+{
+	Elf_Scn *scn;
+	int err;
+
+	scn = NULL ;
+	while ((scn = elf_nextscn(elf->elf, scn)) != NULL) {
+		struct section *section;
+
+		section = get_section(elf, scn);
+
+		err = fn(section, arg);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
diff --git a/arch/powerpc/tools/relocs/elf_sections.h b/arch/powerpc/tools/relocs/elf_sections.h
new file mode 100644
index 0000000..c3bc744
--- /dev/null
+++ b/arch/powerpc/tools/relocs/elf_sections.h
@@ -0,0 +1,50 @@
+#ifndef __ELF_SECTIONS_H__
+#define __ELF_SECTIONS_H__
+
+#include <gelf.h>
+#include <elf.h>
+
+struct section {
+	Elf_Scn *scn;
+	GElf_Shdr shdr;
+	const char *name;
+	Elf_Data *data;
+
+	struct section *symtab;
+	struct section *strtab;
+};
+
+struct symbol {
+	GElf_Sym sym;
+	struct section *section;
+	const char *name;
+	const char *_name;
+};
+
+struct relocation {
+	GElf_Rel rel;
+	GElf_Rela rela;
+
+	const char *type_name;
+	struct symbol *symbol;
+
+	uint64_t offset;
+	uint64_t target;
+};
+
+struct elf {
+	Elf *elf;
+	GElf_Ehdr ehdr;
+	size_t shstrndx;
+	struct section *strtab;
+};
+
+struct symbol *elf_sections_get_symbol(struct elf *elf, struct section *section, unsigned long nr);
+struct relocation *elf_sections_get_reloc(struct elf *elf, struct section *section, size_t n);
+struct elf *elf_sections_init(int fd);
+void elf_sections_exit(struct elf *elf);
+int elf_sections_processor(struct elf *elf,
+				int (*fn)(struct section *section, void *arg),
+				void *arg);
+
+#endif
diff --git a/arch/powerpc/tools/relocs/process_relocs.c b/arch/powerpc/tools/relocs/process_relocs.c
new file mode 100644
index 0000000..912ab1f
--- /dev/null
+++ b/arch/powerpc/tools/relocs/process_relocs.c
@@ -0,0 +1,437 @@
+#define _GNU_SOURCE
+#include <assert.h>
+#include <errno.h>
+#include <string.h>
+#include <fcntl.h>
+#include <gelf.h>
+#include <elf.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <asm/byteorder.h>
+#include "elf_sections.h"
+#include "code-patching.h"
+
+/*
+ * This program runs through relocation data in PPC/PPC64 vmlinux ELF
+ * image generated with --emit-relocs, and performs some processing and
+ * checks.
+ *
+ * Presently, it has the following functions:
+ * 1. Fix relocations for branches inside alternate feature sections
+ *    (the "else" patches), so that they are correct for their destination
+ *    address. They never get executed at their linked location.
+ *
+ *    This is done by parsing all fixup_entry structures in the _ftr_fixup
+ *    sections, and keeping those with non-zero alternate patch. Then all
+ *    relocations in the .__ftr_alternates.text section are parsed, and those
+ *    matching addresses in our fixup_entry alternates patches get
+ *    struct insn_patch created for them. Finally, all struct insn_patch'es
+ *    are iterated and written to the image in-place.
+ */
+
+#define dbg_printf(...)
+
+struct fixup_entry_64 {
+	uint64_t mask;
+	uint64_t value;
+	uint64_t start_off;
+	uint64_t end_off;
+	uint64_t alt_start_off;
+	uint64_t alt_end_off;
+} __attribute__((packed));
+
+#define fixup_entry fixup_entry_64
+
+struct fixup_entry_32 {
+	uint32_t mask;
+	uint32_t value;
+	uint32_t start_off;
+	uint32_t end_off;
+	uint32_t alt_start_off;
+	uint32_t alt_end_off;
+} __attribute__((packed));
+
+struct insn_patch {
+	uint32_t	insn;		/* New instruction */
+	off_t		offset;		/* Image location to patch */
+};
+
+static int is_64bit(struct elf *elf)
+{
+	return elf->ehdr.e_ident[EI_CLASS] == ELFCLASS64;
+}
+
+static int is_32bit(struct elf *elf)
+{
+	return elf->ehdr.e_ident[EI_CLASS] == ELFCLASS32;
+}
+
+static int is_le(struct elf *elf)
+{
+	return elf->ehdr.e_ident[EI_DATA] == ELFDATA2LSB;
+}
+
+static int is_be(struct elf *elf)
+{
+	return elf->ehdr.e_ident[EI_DATA] == ELFDATA2MSB;
+}
+
+
+static struct elf *elf;
+
+static uint16_t f16_to_cpu(uint16_t val)
+{
+	if (is_le(elf))
+		return __le16_to_cpu(val);
+	else
+		return __be16_to_cpu(val);
+}
+
+static uint32_t f32_to_cpu(uint32_t val)
+{
+	if (is_le(elf))
+		return __le32_to_cpu(val);
+	else
+		return __be32_to_cpu(val);
+}
+
+static uint64_t f64_to_cpu(uint64_t val)
+{
+	if (is_le(elf))
+		return __le64_to_cpu(val);
+	else
+		return __be64_to_cpu(val);
+}
+
+static uint16_t cpu_to_f16(uint16_t val)
+{
+	if (is_le(elf))
+		return __cpu_to_le16(val);
+	else
+		return __cpu_to_be16(val);
+}
+
+static uint32_t cpu_to_f32(uint32_t val)
+{
+	if (is_le(elf))
+		return __cpu_to_le32(val);
+	else
+		return __cpu_to_be32(val);
+}
+
+static uint64_t cpu_to_f64(uint64_t val)
+{
+	if (is_le(elf))
+		return __cpu_to_le64(val);
+	else
+		return __cpu_to_be64(val);
+}
+
+static struct section *ftr_alt;
+
+static unsigned int nr_fes = 0;
+static struct fixup_entry *fes = NULL;
+
+static struct fixup_entry *find_fe_altaddr(uint64_t addr)
+{
+	unsigned int i;
+
+	for (i = 0; i < nr_fes; i++) {
+		if (addr >= fes[i].alt_start_off && addr < fes[i].alt_end_off)
+			return &fes[i];
+	}
+	return NULL;
+}
+
+static unsigned int nr_ips = 0;
+static struct insn_patch *ips = NULL;
+
+static void create_branch_patch(struct relocation *relocation, struct fixup_entry *fe)
+{
+	struct insn_patch *ip;
+	uint64_t addr = relocation->offset;
+	uint64_t dst_addr;
+	uint64_t scn_delta;
+	uint64_t offset;
+	uint32_t insn;
+	uint32_t *i;
+
+	assert(addr >= ftr_alt->shdr.sh_addr &&
+		addr < ftr_alt->shdr.sh_addr + ftr_alt->shdr.sh_size);
+
+	scn_delta = addr - ftr_alt->shdr.sh_addr;
+
+	assert(scn_delta < ftr_alt->data->d_size);
+
+	i = ftr_alt->data->d_buf + scn_delta;
+
+	insn = f32_to_cpu(*i);
+
+	offset = ftr_alt->shdr.sh_offset + scn_delta;
+	dst_addr = addr - fe->alt_start_off + fe->start_off;
+
+	if (set_branch_target(&insn, dst_addr, relocation->target)) {
+		fprintf(stderr, "ftr_alt branch target out of range or not a branch\n");
+		exit(EXIT_FAILURE);
+	}
+
+	if (insn == *i) /* Nothing to do */
+		return;
+
+	ips = realloc(ips, (nr_ips + 1) * sizeof(struct insn_patch));
+	ip = &ips[nr_ips];
+	nr_ips++;
+
+	ip->insn = insn;
+	ip->offset = offset;
+
+	dbg_printf("update branch insn (%x->%x)\n", *i, ip->insn);
+}
+
+static int process_alt_data(struct section *section, void *arg)
+{
+	if (strcmp(section->name, ".__ftr_alternates.text") != 0)
+		return 0;
+
+	dbg_printf("section %-4.4jd %s\n", (uintmax_t)elf_ndxscn(section->scn), section->name);
+	assert(section->shdr.sh_type == SHT_PROGBITS);
+
+	ftr_alt = section;
+
+	return 0;
+}
+
+static int process_fixup_entries(struct section *section, void *arg)
+{
+	Elf_Data *data;
+	unsigned int nr, i;
+
+	if (strstr(section->name, "_ftr_fixup") == 0)
+		return 0;
+
+	if (section->shdr.sh_type != SHT_PROGBITS)
+		return 0;
+
+	dbg_printf("section %-4.4jd %s\n", (uintmax_t)elf_ndxscn(section->scn), section->name);
+
+	data = section->data;
+	assert(data);
+	assert(data->d_size > 0);
+
+	if (is_64bit(elf)) {
+		assert(data->d_size % sizeof(struct fixup_entry_64) == 0);
+		nr = data->d_size / sizeof(struct fixup_entry_64);
+	} else {
+		assert(data->d_size % sizeof(struct fixup_entry_32) == 0);
+		nr = data->d_size / sizeof(struct fixup_entry_32);
+	}
+
+	for (i = 0; i < nr; i++) {
+		struct fixup_entry *dst;
+		unsigned long idx;
+		unsigned long long off;
+
+		if (is_64bit(elf)) {
+			struct fixup_entry_64 *src;
+
+			idx = i * sizeof(struct fixup_entry_64);
+
+			off = section->shdr.sh_addr + data->d_off + idx;
+			src = data->d_buf + idx;
+
+			if (src->alt_start_off == src->alt_end_off)
+				continue;
+
+			fes = realloc(fes, (nr_fes + 1) * sizeof(struct fixup_entry));
+			dst = &fes[nr_fes];
+			nr_fes++;
+
+			dst->mask = f64_to_cpu(src->mask);
+			dst->value = f64_to_cpu(src->value);
+			dst->start_off = f64_to_cpu(src->start_off) + off;
+			dst->end_off = f64_to_cpu(src->end_off) + off;
+			dst->alt_start_off = f64_to_cpu(src->alt_start_off) + off;
+			dst->alt_end_off = f64_to_cpu(src->alt_end_off) + off;
+
+		} else {
+			struct fixup_entry_32 *src;
+
+			idx = i * sizeof(struct fixup_entry_32);
+
+			off = section->shdr.sh_addr + data->d_off + idx;
+			src = data->d_buf + idx;
+
+			if (src->alt_start_off == src->alt_end_off)
+				continue;
+
+			fes = realloc(fes, (nr_fes + 1) * sizeof(struct fixup_entry));
+			dst = &fes[nr_fes];
+			nr_fes++;
+
+			dst->mask = f32_to_cpu(src->mask);
+			dst->value = f32_to_cpu(src->value);
+			dst->start_off = f32_to_cpu(src->start_off) + off;
+			dst->end_off = f32_to_cpu(src->end_off) + off;
+			dst->alt_start_off = f32_to_cpu(src->alt_start_off) + off;
+			dst->alt_end_off = f32_to_cpu(src->alt_end_off) + off;
+
+		}
+
+		dbg_printf("%llx fixup entry %llx:%llx (%llx-%llx) <- (%llx-%llx)\n", off,
+			(unsigned long long)dst->mask, (unsigned long long)dst->value,
+			(unsigned long long)dst->start_off, (unsigned long long)dst->end_off,
+			(unsigned long long)dst->alt_start_off, (unsigned long long)dst->alt_end_off);
+	}
+
+	return 0;
+}
+
+static int process_alt_relocations(struct section *section, void *arg)
+{
+	struct relocation *relocation;
+	size_t n;
+
+	if (strcmp(section->name, ".rela.__ftr_alternates.text") != 0)
+		return 0;
+
+	assert(section->shdr.sh_type == SHT_RELA);
+
+	dbg_printf("section %-4.4jd %s\n", (uintmax_t)elf_ndxscn(section->scn), section->name);
+
+	n = 0;
+	while ((relocation = elf_sections_get_reloc(elf, section, n)) != NULL) {
+		struct fixup_entry *fe;
+
+		n++;
+
+		dbg_printf("%llx %s %s %llx + %llx\n",
+			(unsigned long long)relocation->offset,
+			relocation->type_name,
+			relocation->symbol->_name,
+			(unsigned long long)relocation->symbol->sym.st_value,
+			(unsigned long long)relocation->rela.r_addend);
+
+		fe = find_fe_altaddr(relocation->offset);
+		if (fe) {
+			dbg_printf("reloc has fe %llx:%llx (%llx-%llx) <- (%llx-%llx)\n",
+				(unsigned long long)fe->mask,
+				(unsigned long long)fe->value,
+				(unsigned long long)fe->start_off,
+				(unsigned long long)fe->end_off,
+				(unsigned long long)fe->alt_start_off,
+				(unsigned long long)fe->alt_end_off);
+
+			if (relocation->target >= fe->alt_start_off &&
+				relocation->target < fe->alt_end_off) {
+				dbg_printf("  reloc within patch code\n");
+				continue;
+			}
+
+			/*
+			 * We really should check for all branches either side
+			 * of fixup_entry from outside (including within
+			 * different fixup code). It's almost guaranteed to go
+			 * badly. Not just relocations, but branches too,
+			 * because nearby branches might get resolved without
+			 * a relocation.
+			 */
+			if (relocation->target >= ftr_alt->shdr.sh_addr &&
+				relocation->target < ftr_alt->shdr.sh_addr +
+						ftr_alt->shdr.sh_size) {
+				fprintf(stderr, "ftr_alt branch target is another ftr_alt region, which is not allowed\n");
+				exit(EXIT_FAILURE);
+			}
+
+			create_branch_patch(relocation, fe);
+		} else {
+			dbg_printf("  reloc has no fe\n");
+		}
+	}
+
+	return 0;
+}
+
+
+int main(int argc, char *argv[])
+{
+	int fd;
+	int err;
+	unsigned int i;
+	struct stat stat;
+	void *mem;
+
+	if (argc != 2)
+		exit(EXIT_FAILURE);
+
+	fd = open(argv[1], O_RDONLY, 0);
+	if (fd == -1) {
+		fprintf(stderr, "open %s failed: %s\n", argv[1], strerror(errno));
+		exit(EXIT_FAILURE);
+	}
+
+	elf = elf_sections_init(fd);
+
+	err = elf_sections_processor(elf, process_alt_data, NULL);
+	assert(!err);
+
+	err = elf_sections_processor(elf, process_fixup_entries, NULL);
+	assert(!err);
+
+	err = elf_sections_processor(elf, process_alt_relocations, NULL);
+	assert(!err);
+
+	elf_sections_exit(elf);
+
+	if (close(fd) == -1) {
+		fprintf(stderr, "close %s failed: %s\n", argv[1], strerror(errno));
+		exit(EXIT_FAILURE);
+	}
+
+	if (!nr_ips) {
+		dbg_printf("Nothing to do.\n");
+		exit(EXIT_SUCCESS);
+	}
+
+	dbg_printf("%u instructions to patch.\n", nr_ips);
+
+	fd = open(argv[1], O_RDWR, 0);
+	if (fd == -1) {
+		fprintf(stderr, "open %s failed: %s\n", argv[1], strerror(errno));
+		exit(EXIT_FAILURE);
+	}
+
+	if (fstat(fd, &stat) == -1) {
+		perror("stat");
+		exit(EXIT_FAILURE);
+	}
+
+	mem = mmap(0, stat.st_size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
+	if (mem == MAP_FAILED) {
+		perror("mmap");
+		exit(EXIT_FAILURE);
+	}
+
+	for (i = 0; i < nr_ips; i++) {
+		struct insn_patch *ip = &ips[i];
+
+		assert(ip->offset < stat.st_size);
+		*(uint32_t *)(mem + ip->offset) = ip->insn;
+	}
+
+	if (munmap(mem, stat.st_size) == -1) {
+		perror("mmap");
+		exit(EXIT_FAILURE);
+	}
+
+	if (close(fd) == -1) {
+		fprintf(stderr, "close %s failed: %s\n", argv[1], strerror(errno));
+		exit(EXIT_FAILURE);
+	}
+
+	exit(EXIT_SUCCESS);
+}
-- 
2.8.1


* [PATCH 04/14] powerpc/pseries: move decrementer exception vector out of line
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (2 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 03/14] powerpc: build-time fixup alternate feature relative addresses Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 05/14] powerpc/pseries: 4GB exception handler offsets Nicholas Piggin
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

In preparation for extending the reach of exception handler loading,
the decrementer interrupt has to be moved out of line. The in-line
version is already 0x80 bytes long in the worst case, so there is no
room for more instructions.

This is done to make intermediate patch steps build and run. It gets
reverted in a later patch.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/exception-64s.h |  6 ++++++
 arch/powerpc/kernel/exceptions-64s.S     | 11 ++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 93ae809..addc19b 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -438,6 +438,12 @@ label##_pSeries:							\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
 				    EXC_STD, SOFTEN_TEST_PR)
 
+#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label)			\
+	.globl label##_pSeries;						\
+label##_pSeries:							\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+
 #define MASKABLE_EXCEPTION_HV(loc, vec, label)				\
 	. = loc;							\
 	.globl label##_hv;						\
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 18befa5..79eb752 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -279,9 +279,11 @@ hardware_interrupt_hv:
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x800)
 
 	. = 0x900
-	.globl decrementer_pSeries
-decrementer_pSeries:
-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+	.globl decrementer_pseries_trampoline
+decrementer_pseries_trampoline:
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	decrementer_pSeries
 
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 
@@ -593,6 +595,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 #endif
 
 	.align	7
+	/* moved from 0x900 */
+	MASKABLE_EXCEPTION_PSERIES_OOL(0x900, decrementer)
+
 	/* moved from 0xe00 */
 	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
-- 
2.8.1


* [PATCH 05/14] powerpc/pseries: 4GB exception handler offsets
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (3 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 04/14] powerpc/pseries: move decrementer exception vector out of line Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21 14:34   ` David Laight
  2016-07-21  6:44 ` [PATCH 06/14] powerpc/pseries: h_facility_unavailable realmode exception location Nicholas Piggin
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Add a LOAD_HANDLER_4G variant which uses an extra instruction to extend
the reach of handlers to 32 bits (4GB).

With 16-bit handler offsets, it is very difficult to move exception
vector code around or reserve any more space for new exceptions, so
switch all handlers to the new variant. After subsequent patches, the
code will be in better shape to move back to the _64K variant.
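
For reference, a minimal C model of what the two variants compute (the
function names below are made up for illustration; as the macro
comments note, the register initially holds kbase, which must be 64K
aligned):

#include <stdint.h>
#include <assert.h>

static uint64_t load_handler_64k(uint64_t kbase, uint32_t off)
{
	assert((kbase & 0xffff) == 0);	/* kbase is 64K aligned */
	assert(off <= 0xffff);		/* handler within 64K of kbase */
	return kbase | off;		/* single ori */
}

static uint64_t load_handler_4g(uint64_t kbase, uint32_t off)
{
	uint64_t reg = kbase;

	assert((kbase & 0xffff) == 0);
	reg |= off & 0xffff;			/* ori   reg,reg,off@l */
	reg += (uint64_t)off & 0xffff0000;	/* addis reg,reg,off@h */
	return reg;				/* == kbase + off */
}

Because ori zero-extends its immediate, the plain @h value is enough
here; no @ha adjustment for a sign-extended low half is needed
(assuming handlers stay below 2GB, so the addis immediate stays
non-negative).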

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/exception-64s.h | 24 +++++++++++++++---------
 arch/powerpc/kernel/exceptions-64s.S     | 27 +++++++++++++--------------
 2 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index addc19b..cdb7dc7 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -54,7 +54,7 @@
 #define __EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)			\
 	ld	r12,PACAKBASE(r13);	/* get high part of &label */	\
 	mfspr	r11,SPRN_##h##SRR0;	/* save SRR0 */			\
-	LOAD_HANDLER(r12,label);					\
+	LOAD_HANDLER_4G(r12,label);					\
 	mtctr	r12;							\
 	mfspr	r12,SPRN_##h##SRR1;	/* and SRR1 */			\
 	li	r10,MSR_RI;						\
@@ -83,14 +83,20 @@
 	EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)
 
 /*
- * We're short on space and time in the exception prolog, so we can't
- * use the normal SET_REG_IMMEDIATE macro. Normally we just need the
- * low halfword of the address, but for Kdump we need the whole low
- * word.
+ * We're short on space and time in the exception prolog, so we use short
+ * sequences to load nearby handlers.
+ *
+ * Normally we just need the low halfword of the address, but for Kdump we need
+ * the whole low word.
+ *
+ * reg must contain kbase, and kbase must be 64K aligned.
  */
-#define LOAD_HANDLER(reg, label)					\
-	/* Handlers must be within 64K of kbase, which must be 64k aligned */ \
-	ori	reg,reg,(label)-_stext;	/* virt addr of handler ... */
+#define LOAD_HANDLER_64K(reg, label)					\
+	ori	reg,reg,(label)-_stext ;
+
+#define LOAD_HANDLER_4G(reg, label)					\
+	ori	reg,reg,((label)-_stext)@l ;				\
+	addis	reg,reg,((label)-_stext)@h ;
 
 /* Exception register prefixes */
 #define EXC_HV	H
@@ -178,7 +184,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	ld	r12,PACAKBASE(r13);	/* get high part of &label */	\
 	ld	r10,PACAKMSR(r13);	/* get MSR value for kernel */	\
 	mfspr	r11,SPRN_##h##SRR0;	/* save SRR0 */			\
-	LOAD_HANDLER(r12,label)						\
+	LOAD_HANDLER_4G(r12,label)					\
 	mtspr	SPRN_##h##SRR0,r12;					\
 	mfspr	r12,SPRN_##h##SRR1;	/* and SRR1 */			\
 	mtspr	SPRN_##h##SRR1,r10;					\
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 79eb752..111e327 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -42,7 +42,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 #define SYSCALL_PSERIES_2_RFID 					\
 	mfspr	r12,SPRN_SRR1 ;					\
 	ld	r10,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER(r10, system_call_entry) ; 			\
+	LOAD_HANDLER_4G(r10, system_call_common) ; 		\
 	mtspr	SPRN_SRR0,r10 ; 				\
 	ld	r10,PACAKMSR(r13) ;				\
 	mtspr	SPRN_SRR1,r10 ; 				\
@@ -65,7 +65,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 #define SYSCALL_PSERIES_2_DIRECT				\
 	mflr	r10 ;						\
 	ld	r12,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER(r12, system_call_entry) ;			\
+	LOAD_HANDLER_4G(r12, system_call_common) ;		\
 	mtctr	r12 ;						\
 	mfspr	r12,SPRN_SRR1 ;					\
 	/* Re-use of r13... No spare regs to do this */	\
@@ -220,7 +220,7 @@ data_access_slb_pSeries:
 	 */
 	mfctr	r11
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10, slb_miss_realmode)
+	LOAD_HANDLER_4G(r10, slb_miss_realmode)
 	mtctr	r10
 	bctr
 #endif
@@ -241,7 +241,7 @@ instruction_access_slb_pSeries:
 #else
 	mfctr	r11
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10, slb_miss_realmode)
+	LOAD_HANDLER_4G(r10, slb_miss_realmode)
 	mtctr	r10
 	bctr
 #endif
@@ -494,7 +494,7 @@ BEGIN_FTR_SECTION
 	ori	r11,r11,MSR_ME		/* turn on ME bit */
 	ori	r11,r11,MSR_RI		/* turn on RI bit */
 	ld	r12,PACAKBASE(r13)	/* get high part of &label */
-	LOAD_HANDLER(r12, machine_check_handle_early)
+	LOAD_HANDLER_4G(r12, machine_check_handle_early)
 1:	mtspr	SPRN_SRR0,r12
 	mtspr	SPRN_SRR1,r11
 	rfid
@@ -507,7 +507,7 @@ BEGIN_FTR_SECTION
 	addi	r1,r1,INT_FRAME_SIZE	/* go back to previous stack frame */
 	ld	r11,PACAKMSR(r13)
 	ld	r12,PACAKBASE(r13)
-	LOAD_HANDLER(r12, unrecover_mce)
+	LOAD_HANDLER_4G(r12, unrecover_mce)
 	li	r10,MSR_ME
 	andc	r11,r11,r10		/* Turn off MSR_ME */
 	b	1b
@@ -739,7 +739,8 @@ kvmppc_skip_Hinterrupt:
  * Ensure that any handlers that get invoked from the exception prologs
  * above are below the first 64KB (0x10000) of the kernel image because
  * the prologs assemble the addresses of these handlers using the
- * LOAD_HANDLER macro, which uses an ori instruction.
+ * LOAD_HANDLER_4G macro, which uses an ori instruction. Care must also
+ * be taken because relative branches can only address 32K in each direction.
  */
 
 /*** Common interrupt handlers ***/
@@ -813,7 +814,7 @@ data_access_slb_relon_pSeries:
 	 */
 	mfctr	r11
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10, slb_miss_realmode)
+	LOAD_HANDLER_4G(r10, slb_miss_realmode)
 	mtctr	r10
 	bctr
 #endif
@@ -833,7 +834,7 @@ instruction_access_slb_relon_pSeries:
 #else
 	mfctr	r11
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10, slb_miss_realmode)
+	LOAD_HANDLER_4G(r10, slb_miss_realmode)
 	mtctr	r10
 	bctr
 #endif
@@ -925,8 +926,6 @@ hv_facility_unavailable_relon_trampoline:
 	STD_RELON_EXCEPTION_PSERIES(0x5700, 0x1700, altivec_assist)
 
 	.align	7
-system_call_entry:
-	b	system_call_common
 
 ppc64_runlatch_on_trampoline:
 	b	__ppc64_runlatch_on
@@ -1159,7 +1158,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	 * handlers, so that they are copied to real address 0x100 when running
 	 * a relocatable kernel. This ensures they can be reached from the short
 	 * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
-	 * directly, without using LOAD_HANDLER().
+	 * directly, without using LOAD_HANDLER_4G().
 	 */
 	.align	7
 	.globl	__end_interrupts
@@ -1332,7 +1331,7 @@ machine_check_handle_early:
 	bne	2f
 1:	mfspr	r11,SPRN_SRR0
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10,unrecover_mce)
+	LOAD_HANDLER_4G(r10,unrecover_mce)
 	mtspr	SPRN_SRR0,r10
 	ld	r10,PACAKMSR(r13)
 	/*
@@ -1432,7 +1431,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
 
 2:	mfspr	r11,SPRN_SRR0
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10,unrecov_slb)
+	LOAD_HANDLER_4G(r10,unrecov_slb)
 	mtspr	SPRN_SRR0,r10
 	ld	r10,PACAKMSR(r13)
 	mtspr	SPRN_SRR1,r10
-- 
2.8.1


* [PATCH 06/14] powerpc/pseries: h_facility_unavailable realmode exception location
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (4 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 05/14] powerpc/pseries: 4GB exception handler offsets Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 07/14] powerpc/pseries: improved exception vector macros Nicholas Piggin
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

The 0xf80 hv_facility_unavailable trampoline currently branches to the
0xf60 handler. That happens to work because both handlers do the same
thing, but it should still be fixed.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 111e327..e567da6 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -620,7 +620,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf40)
 	STD_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf60)
-	STD_EXCEPTION_HV_OOL(0xf82, facility_unavailable)
+	STD_EXCEPTION_HV_OOL(0xf82, hv_facility_unavailable)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf82)
 
 /*
-- 
2.8.1


* [PATCH 07/14] powerpc/pseries: improved exception vector macros
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (5 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 06/14] powerpc/pseries: h_facility_unavailable realmode exception location Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 08/14] powerpc/pseries: consolidate exception handler alignment Nicholas Piggin
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Add macros that can specify exception vectors more completely (location,
type, etc.). Move most of the exception code naming out of exception-64s.h
and feed labels to the macros directly (not quite complete yet).

Code is unchanged except for names. The alignment directives scattered
around are annoying, but they are kept in place so that disassembly can
verify identical instruction generation before and after the patch. They
get cleaned up in a future patch.
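
For illustration, here is roughly what one of the new macros expands to,
based on the definitions added in head-64.h below (a sketch, not verified
assembler output). Using the alignment vector as an example,

  VECTOR_HANDLER_REAL(alignment, 0x600, 0x700)

becomes approximately:

	. = 0x600
	.global exc_0x600_alignment
exc_0x600_alignment:
	SET_SCRATCH0(r13)		/* save r13 */
	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, alignment_common,
				 EXC_STD, KVMTEST_PR, 0x600)

so the vector location, generated symbol name, and prolog all come from
the one macro invocation.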

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/exception-64s.h | 131 +++----
 arch/powerpc/include/asm/head-64.h       | 180 ++++++++++
 arch/powerpc/kernel/exceptions-64s.S     | 593 +++++++++++++++----------------
 3 files changed, 527 insertions(+), 377 deletions(-)
 create mode 100644 arch/powerpc/include/asm/head-64.h

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index cdb7dc7..01fd163 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -34,6 +34,7 @@
  * exception handlers (including pSeries LPAR) and iSeries LPAR
  * implementations as possible.
  */
+#include <asm/head-64.h>
 
 #define EX_R9		0
 #define EX_R10		8
@@ -177,6 +178,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
+
 #define EXCEPTION_PROLOG_1(area, extra, vec)				\
 	__EXCEPTION_PROLOG_1(area, extra, vec)
 
@@ -198,10 +200,10 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	EXCEPTION_PROLOG_1(area, extra, vec);				\
 	EXCEPTION_PROLOG_PSERIES_1(label, h);
 
-#define __KVMTEST(n)							\
-	lbz	r10,HSTATE_IN_GUEST(r13);			\
+#define __KVMTEST(h, n)							\
+	lbz	r10,HSTATE_IN_GUEST(r13);				\
 	cmpwi	r10,0;							\
-	bne	do_kvm_##n
+	bne	do_kvm_##h##n
 
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 /*
@@ -214,8 +216,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define kvmppc_interrupt kvmppc_interrupt_pr
 #endif
 
-#define __KVM_HANDLER(area, h, n)					\
-do_kvm_##n:								\
+#define __KVM_HANDLER_PROLOG(area, n)					\
 	BEGIN_FTR_SECTION_NESTED(947)					\
 	ld	r10,area+EX_CFAR(r13);					\
 	std	r10,HSTATE_CFAR(r13);					\
@@ -228,21 +229,23 @@ do_kvm_##n:								\
 	stw	r9,HSTATE_SCRATCH1(r13);				\
 	ld	r9,area+EX_R9(r13);					\
 	std	r12,HSTATE_SCRATCH0(r13);				\
+
+#define __KVM_HANDLER(area, h, n)					\
+	__KVM_HANDLER_PROLOG(area, n)					\
 	li	r12,n;							\
 	b	kvmppc_interrupt
 
 #define __KVM_HANDLER_SKIP(area, h, n)					\
-do_kvm_##n:								\
 	cmpwi	r10,KVM_GUEST_MODE_SKIP;				\
 	ld	r10,area+EX_R10(r13);					\
 	beq	89f;							\
-	stw	r9,HSTATE_SCRATCH1(r13);			\
+	stw	r9,HSTATE_SCRATCH1(r13);				\
 	BEGIN_FTR_SECTION_NESTED(948)					\
 	ld	r9,area+EX_PPR(r13);					\
 	std	r9,HSTATE_PPR(r13);					\
 	END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948);	\
 	ld	r9,area+EX_R9(r13);					\
-	std	r12,HSTATE_SCRATCH0(r13);			\
+	std	r12,HSTATE_SCRATCH0(r13);				\
 	li	r12,n;							\
 	b	kvmppc_interrupt;					\
 89:	mtocrf	0x80,r9;						\
@@ -250,12 +253,12 @@ do_kvm_##n:								\
 	b	kvmppc_skip_##h##interrupt
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#define KVMTEST(n)			__KVMTEST(n)
+#define KVMTEST(h, n)			__KVMTEST(h, n)
 #define KVM_HANDLER(area, h, n)		__KVM_HANDLER(area, h, n)
 #define KVM_HANDLER_SKIP(area, h, n)	__KVM_HANDLER_SKIP(area, h, n)
 
 #else
-#define KVMTEST(n)
+#define KVMTEST(h, n)
 #define KVM_HANDLER(area, h, n)
 #define KVM_HANDLER_SKIP(area, h, n)
 #endif
@@ -339,69 +342,50 @@ do_kvm_##n:								\
 /*
  * Exception vectors.
  */
-#define STD_EXCEPTION_PSERIES(vec, label)		\
-	. = vec;					\
-	.globl label##_pSeries;				\
-label##_pSeries:					\
+#define STD_EXCEPTION_PSERIES(vec, label)			\
 	SET_SCRATCH0(r13);		/* save r13 */		\
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
-				 EXC_STD, KVMTEST, vec)
+	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label,		\
+				 EXC_STD, KVMTEST_PR, vec);	\
 
 /* Version of above for when we have to branch out-of-line */
+#define __OOL_EXCEPTION(vec, label, hdlr)			\
+	SET_SCRATCH0(r13)					\
+	EXCEPTION_PROLOG_0(PACA_EXGEN)				\
+	b hdlr;	
+
 #define STD_EXCEPTION_PSERIES_OOL(vec, label)			\
-	.globl label##_pSeries;					\
-label##_pSeries:						\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, vec);	\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD)
-
-#define STD_EXCEPTION_HV(loc, vec, label)		\
-	. = loc;					\
-	.globl label##_hv;				\
-label##_hv:						\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, vec);	\
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD)
+
+#define STD_EXCEPTION_HV(loc, vec, label)			\
 	SET_SCRATCH0(r13);	/* save r13 */			\
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
-				 EXC_HV, KVMTEST, vec)
+	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label,		\
+				 EXC_HV, KVMTEST_HV, vec);
 
-/* Version of above for when we have to branch out-of-line */
-#define STD_EXCEPTION_HV_OOL(vec, label)		\
-	.globl label##_hv;				\
-label##_hv:						\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, vec);	\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV)
+#define STD_EXCEPTION_HV_OOL(vec, label)			\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, vec);	\
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
 #define STD_RELON_EXCEPTION_PSERIES(loc, vec, label)	\
-	. = loc;					\
-	.globl label##_relon_pSeries;			\
-label##_relon_pSeries:					\
 	/* No guest interrupts come through here */	\
 	SET_SCRATCH0(r13);		/* save r13 */	\
-	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
-				       EXC_STD, NOTEST, vec)
+	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label, EXC_STD, NOTEST, vec);
 
 #define STD_RELON_EXCEPTION_PSERIES_OOL(vec, label)		\
-	.globl label##_relon_pSeries;				\
-label##_relon_pSeries:						\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec);		\
-	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, EXC_STD)
+	EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_STD)
 
 #define STD_RELON_EXCEPTION_HV(loc, vec, label)		\
-	. = loc;					\
-	.globl label##_relon_hv;			\
-label##_relon_hv:					\
 	/* No guest interrupts come through here */	\
 	SET_SCRATCH0(r13);	/* save r13 */		\
-	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
-				       EXC_HV, NOTEST, vec)
+	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label, EXC_HV, NOTEST, vec);
 
 #define STD_RELON_EXCEPTION_HV_OOL(vec, label)			\
-	.globl label##_relon_hv;				\
-label##_relon_hv:						\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec);		\
-	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, EXC_HV)
+	EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV)
 
 /* This associate vector numbers with bits in paca->irq_happened */
 #define SOFTEN_VALUE_0x500	PACA_IRQ_EE
-#define SOFTEN_VALUE_0x502	PACA_IRQ_EE
 #define SOFTEN_VALUE_0x900	PACA_IRQ_DEC
 #define SOFTEN_VALUE_0x982	PACA_IRQ_DEC
 #define SOFTEN_VALUE_0xa00	PACA_IRQ_DBELL
@@ -415,16 +399,23 @@ label##_relon_hv:						\
 	cmpwi	r10,0;							\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	beq	masked_##h##interrupt
+
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
 
 #define SOFTEN_TEST_PR(vec)						\
-	KVMTEST(vec);							\
+	KVMTEST(EXC_STD, vec);						\
 	_SOFTEN_TEST(EXC_STD, vec)
 
 #define SOFTEN_TEST_HV(vec)						\
-	KVMTEST(vec);							\
+	KVMTEST(EXC_HV, vec);						\
 	_SOFTEN_TEST(EXC_HV, vec)
 
+#define KVMTEST_PR(vec)							\
+	KVMTEST(EXC_STD, vec)
+
+#define KVMTEST_HV(vec)							\
+	KVMTEST(EXC_HV, vec)
+
 #define SOFTEN_NOTEST_PR(vec)		_SOFTEN_TEST(EXC_STD, vec)
 #define SOFTEN_NOTEST_HV(vec)		_SOFTEN_TEST(EXC_HV, vec)
 
@@ -432,64 +423,47 @@ label##_relon_hv:						\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
 	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, h);
+	EXCEPTION_PROLOG_PSERIES_1(label, h);
 
 #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
 
 #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
-	. = loc;							\
-	.globl label##_pSeries;						\
-label##_pSeries:							\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
 				    EXC_STD, SOFTEN_TEST_PR)
 
 #define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label)			\
-	.globl label##_pSeries;						\
-label##_pSeries:							\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD);
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD)
 
 #define MASKABLE_EXCEPTION_HV(loc, vec, label)				\
-	. = loc;							\
-	.globl label##_hv;						\
-label##_hv:								\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
 				    EXC_HV, SOFTEN_TEST_HV)
 
 #define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
-	.globl label##_hv;						\
-label##_hv:								\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
 #define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);		\
-	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, h);
-#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
+	EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)
+
+#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)
 
 #define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label)		\
-	. = loc;							\
-	.globl label##_relon_pSeries;					\
-label##_relon_pSeries:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_STD, SOFTEN_NOTEST_PR)
 
 #define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label)			\
-	. = loc;							\
-	.globl label##_relon_hv;					\
-label##_relon_hv:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_HV, SOFTEN_NOTEST_HV)
 
 #define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
-	.globl label##_relon_hv;					\
-label##_relon_hv:							\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
 /*
  * Our exception common code can be passed various "additions"
@@ -515,9 +489,6 @@ BEGIN_FTR_SECTION				\
 END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
 
 #define EXCEPTION_COMMON(trap, label, hdlr, ret, additions)	\
-	.align	7;						\
-	.globl label##_common;					\
-label##_common:							\
 	EXCEPTION_PROLOG_COMMON(trap, PACA_EXGEN);		\
 	/* Volatile regs are potentially clobbered here */	\
 	additions;						\
diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
new file mode 100644
index 0000000..bc06848
--- /dev/null
+++ b/arch/powerpc/include/asm/head-64.h
@@ -0,0 +1,180 @@
+#ifndef _ASM_POWERPC_HEAD_64_H
+#define _ASM_POWERPC_HEAD_64_H
+
+
+#define VECTOR_HANDLER_REAL_BEGIN(name, start, end)			\
+	. = start ;							\
+	.global exc_##start##_##name ;					\
+exc_##start##_##name:
+
+#define VECTOR_HANDLER_REAL_END(name, start, end)
+
+#define VECTOR_HANDLER_VIRT_BEGIN(name, start, end)			\
+	. = start ;							\
+	.global exc_##start##_##name ;					\
+exc_##start##_##name:
+
+#define VECTOR_HANDLER_VIRT_END(name, start, end)
+
+#define COMMON_HANDLER_BEGIN(name)					\
+	.global name;							\
+name:
+
+#define COMMON_HANDLER_END(name)
+
+#define TRAMP_HANDLER_BEGIN(name)					\
+	.global name ;							\
+name:
+
+#define TRAMP_HANDLER_END(name)	
+
+#define TRAMP_KVM_BEGIN(name)						\
+	TRAMP_HANDLER_BEGIN(name)
+
+#define TRAMP_KVM_END(name)						\
+	TRAMP_HANDLER_END(name)
+
+#define VECTOR_HANDLER_REAL_NONE(start, end)
+
+#define VECTOR_HANDLER_VIRT_NONE(start, end)
+
+
+#define VECTOR_HANDLER_REAL(name, start, end)				\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	STD_EXCEPTION_PSERIES(start, name##_common);			\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define VECTOR_HANDLER_VIRT(name, start, end, realvec)			\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	STD_RELON_EXCEPTION_PSERIES(start, realvec, name##_common);	\
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define VECTOR_HANDLER_REAL_MASKABLE(name, start, end)			\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	MASKABLE_EXCEPTION_PSERIES(start, start, name##_common);	\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define VECTOR_HANDLER_VIRT_MASKABLE(name, start, end, realvec)		\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	MASKABLE_RELON_EXCEPTION_PSERIES(start, realvec, name##_common); \
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define VECTOR_HANDLER_REAL_HV(name, start, end)			\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	STD_EXCEPTION_HV(start, start + 0x2, name##_common);		\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define VECTOR_HANDLER_VIRT_HV(name, start, end, realvec)		\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	STD_RELON_EXCEPTION_HV(start, realvec + 0x2, name##_common);	\
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define __VECTOR_HANDLER_REAL_OOL(name, start, end)			\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	__OOL_EXCEPTION(start, label, tramp_real_##name);		\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define __TRAMP_HANDLER_REAL_OOL(name, vec)				\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	STD_EXCEPTION_PSERIES_OOL(vec, name##_common);			\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_REAL_OOL_MASKABLE(name, vec)			\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common);		\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(name, start, end, handler)	\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	__OOL_EXCEPTION(start, label, handler);				\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define __VECTOR_HANDLER_REAL_OOL_HV(name, start, end)			\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);
+	
+#define __TRAMP_HANDLER_REAL_OOL_HV(name, vec)				\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	STD_EXCEPTION_HV_OOL(vec + 0x2, name##_common);			\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(name, vec)			\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	MASKABLE_EXCEPTION_HV_OOL(vec, name##_common);			\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL(name, start, end)			\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	__OOL_EXCEPTION(start, label, tramp_virt_##name);		\
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL(name, realvec)				\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	STD_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end)		\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL_MASKABLE(name, realvec)		\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL_HV(name, start, end)			\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL_HV(name, realvec)			\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	STD_RELON_EXCEPTION_HV_OOL(realvec, name##_common);		\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end)		\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(name, realvec)		\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common);	\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define TRAMP_KVM(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_##n);					\
+	KVM_HANDLER(area, EXC_STD, n);					\
+	TRAMP_KVM_END(do_kvm_##n)
+
+#define TRAMP_KVM_SKIP(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_##n);					\
+	KVM_HANDLER_SKIP(area, EXC_STD, n);				\
+	TRAMP_KVM_END(do_kvm_##n)
+
+#define TRAMP_KVM_HV(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_H##n);					\
+	KVM_HANDLER(area, EXC_HV, n + 0x2);				\
+	TRAMP_KVM_END(do_kvm_H##n)
+
+#define TRAMP_KVM_HV_SKIP(area, n)					\
+	TRAMP_KVM_BEGIN(do_kvm_H##n);					\
+	KVM_HANDLER_SKIP(area, EXC_HV, n + 0x2);			\
+	TRAMP_KVM_END(do_kvm_H##n)
+
+#define COMMON_HANDLER(name, realvec, hdlr)				\
+	COMMON_HANDLER_BEGIN(name);					\
+	STD_EXCEPTION_COMMON(realvec, name, hdlr);			\
+	COMMON_HANDLER_END(name);
+
+#define COMMON_HANDLER_ASYNC(name, realvec, hdlr)			\
+	COMMON_HANDLER_BEGIN(name);					\
+	STD_EXCEPTION_COMMON_ASYNC(realvec, name, hdlr);		\
+	COMMON_HANDLER_END(name);
+
+#define COMMON_HANDLER_HV(name, realvec, hdlr)				\
+	COMMON_HANDLER_BEGIN(name);					\
+	STD_EXCEPTION_COMMON(realvec + 0x2, name, hdlr);		\
+	COMMON_HANDLER_END(name);
+
+#endif	/* _ASM_POWERPC_HEAD_64_H */
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index e567da6..82fb261 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -94,8 +94,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 	.globl __start_interrupts
 __start_interrupts:
 
-	.globl system_reset_pSeries;
-system_reset_pSeries:
+VECTOR_HANDLER_REAL_BEGIN(system_reset, 0x100, 0x200)
 	SET_SCRATCH0(r13)
 #ifdef CONFIG_PPC_P7_NAP
 BEGIN_FTR_SECTION
@@ -156,9 +155,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 #endif /* CONFIG_PPC_P7_NAP */
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
 				 NOTEST, 0x100)
+VECTOR_HANDLER_REAL_END(system_reset, 0x100, 0x200)
 
-	. = 0x200
-machine_check_pSeries_1:
+VECTOR_HANDLER_REAL_BEGIN(machine_check, 0x200, 0x300)
 	/* This is moved out of line as it can be patched by FW, but
 	 * some code path might still want to branch into the original
 	 * vector
@@ -193,20 +192,14 @@ BEGIN_FTR_SECTION
 FTR_SECTION_ELSE
 	b	machine_check_pSeries_0
 ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
+VECTOR_HANDLER_REAL_END(machine_check, 0x200, 0x300)
 
-	. = 0x300
-	.globl data_access_pSeries
-data_access_pSeries:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, data_access_common, EXC_STD,
-				 KVMTEST, 0x300)
+VECTOR_HANDLER_REAL(data_access, 0x300, 0x380)
 
-	. = 0x380
-	.globl data_access_slb_pSeries
-data_access_slb_pSeries:
+VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
 	std	r3,PACA_EXSLB+EX_R3(r13)
 	mfspr	r3,SPRN_DAR
 	mfspr	r12,SPRN_SRR1
@@ -224,15 +217,14 @@ data_access_slb_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
 
-	STD_EXCEPTION_PSERIES(0x400, instruction_access)
+VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
 
-	. = 0x480
-	.globl instruction_access_slb_pSeries
-instruction_access_slb_pSeries:
+VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x480)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
 	std	r3,PACA_EXSLB+EX_R3(r13)
 	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
 	mfspr	r12,SPRN_SRR1
@@ -245,57 +237,55 @@ instruction_access_slb_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
 
 	/* We open code these as we can't have a ". = x" (even with
 	 * x = "." within a feature section
 	 */
-	. = 0x500;
-	.globl hardware_interrupt_pSeries;
+VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
 	.globl hardware_interrupt_hv;
-hardware_interrupt_pSeries:
 hardware_interrupt_hv:
 	BEGIN_FTR_SECTION
-		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
+		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
 					    EXC_HV, SOFTEN_TEST_HV)
 	FTR_SECTION_ELSE
-		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
+		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
 					    EXC_STD, SOFTEN_TEST_PR)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+VECTOR_HANDLER_REAL_END(hardware_interrupt, 0x500, 0x600)
 
-	/*
-	 * Relon code jumps to these KVM handlers too so can't put them
-	 * in the feature sections.
-	 */
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
+/*
+ * Relon code jumps to these KVM handlers too so can't put them
+ * in the feature sections.
+ */
+TRAMP_KVM_HV(PACA_EXGEN, 0x500)
+TRAMP_KVM(PACA_EXGEN, 0x500)
 
-	STD_EXCEPTION_PSERIES(0x600, alignment)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x600)
+VECTOR_HANDLER_REAL(alignment, 0x600, 0x700)
 
-	STD_EXCEPTION_PSERIES(0x700, program_check)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x700)
+TRAMP_KVM(PACA_EXGEN, 0x600)
 
-	STD_EXCEPTION_PSERIES(0x800, fp_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x800)
+VECTOR_HANDLER_REAL(program_check, 0x700, 0x800)
 
-	. = 0x900
-	.globl decrementer_pseries_trampoline
-decrementer_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	decrementer_pSeries
+TRAMP_KVM(PACA_EXGEN, 0x700)
 
-	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
+VECTOR_HANDLER_REAL(fp_unavailable, 0x800, 0x900)
 
-	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xa00)
+TRAMP_KVM(PACA_EXGEN, 0x800)
 
-	STD_EXCEPTION_PSERIES(0xb00, trap_0b)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xb00)
+__VECTOR_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)
 
-	. = 0xc00
-	.globl	system_call_pSeries
-system_call_pSeries:
+VECTOR_HANDLER_REAL_HV(hdecrementer, 0x980, 0xa00)
+
+VECTOR_HANDLER_REAL_MASKABLE(doorbell_super, 0xa00, 0xb00)
+
+TRAMP_KVM(PACA_EXGEN, 0xa00)
+
+VECTOR_HANDLER_REAL(trap_0b, 0xb00, 0xc00)
+
+TRAMP_KVM(PACA_EXGEN, 0xb00)
+
+VECTOR_HANDLER_REAL_BEGIN(system_call, 0xc00, 0xd00)
 	 /*
 	  * If CONFIG_KVM_BOOK3S_64_HANDLER is set, save the PPR (on systems
 	  * that support it) before changing to HMT_MEDIUM. That allows the KVM
@@ -312,7 +302,7 @@ system_call_pSeries:
 	std	r10,PACA_EXGEN+EX_R10(r13)
 	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
 	mfcr	r9
-	KVMTEST(0xc00)
+	KVMTEST_PR(0xc00)
 	GET_SCRATCH0(r13)
 #else
 	HMT_MEDIUM;
@@ -320,90 +310,58 @@ system_call_pSeries:
 	SYSCALL_PSERIES_1
 	SYSCALL_PSERIES_2_RFID
 	SYSCALL_PSERIES_3
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xc00)
+VECTOR_HANDLER_REAL_END(system_call, 0xc00, 0xd00)
+
+TRAMP_KVM(PACA_EXGEN, 0xc00)
+
+VECTOR_HANDLER_REAL(single_step, 0xd00, 0xe00)
+
+TRAMP_KVM(PACA_EXGEN, 0xd00)
 
-	STD_EXCEPTION_PSERIES(0xd00, single_step)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xd00)
 
 	/* At 0xe??? we have a bunch of hypervisor exceptions, we branch
 	 * out of line to handle them
 	 */
-	. = 0xe00
-hv_data_storage_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_data_storage_hv
+__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
 
-	. = 0xe20
-hv_instr_storage_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_instr_storage_hv
+__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
 
-	. = 0xe40
-emulation_assist_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	emulation_assist_hv
+__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
 
-	. = 0xe60
-hv_exception_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	hmi_exception_early
+__VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
 
-	. = 0xe80
-hv_doorbell_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_doorbell_hv
+__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
 
-	/* We need to deal with the Altivec unavailable exception
-	 * here which is at 0xf20, thus in the middle of the
-	 * prolog code of the PerformanceMonitor one. A little
-	 * trickery is thus necessary
-	 */
-	. = 0xf00
-performance_monitor_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	performance_monitor_pSeries
+VECTOR_HANDLER_REAL_NONE(0xea0, 0xf00)
 
-	. = 0xf20
-altivec_unavailable_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	altivec_unavailable_pSeries
+__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
 
-	. = 0xf40
-vsx_unavailable_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	vsx_unavailable_pSeries
+__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
 
-	. = 0xf60
-facility_unavailable_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	facility_unavailable_pSeries
+__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
 
-	. = 0xf80
-hv_facility_unavailable_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	facility_unavailable_hv
+__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
+
+__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
+
+VECTOR_HANDLER_REAL_NONE(0xfa0, 0x1200)
+
+	
 
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_HV(0x1200, 0x1202, cbe_system_error)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1202)
-#endif /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_HV(cbe_system_error, 0x1200, 0x1300)
+
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1200)
+
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
+#endif
 
-	STD_EXCEPTION_PSERIES(0x1300, instruction_breakpoint)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x1300)
+VECTOR_HANDLER_REAL(instruction_breakpoint, 0x1300, 0x1400)
 
-	. = 0x1500
-	.global denorm_exception_hv
-denorm_exception_hv:
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1300)
+
+VECTOR_HANDLER_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x1600)
 	mtspr	SPRN_SPRG_HSCRATCH0,r13
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
@@ -416,31 +374,41 @@ denorm_exception_hv:
 	bne+	denorm_assist
 #endif
 
-	KVMTEST(0x1500)
+	KVMTEST_PR(0x1500)
 	EXCEPTION_PROLOG_PSERIES_1(denorm_common, EXC_HV)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x1500)
+VECTOR_HANDLER_REAL_END(denorm_exception_hv, 0x1500, 0x1600)
+
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1500)
 
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_HV(0x1600, 0x1602, cbe_maintenance)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1602)
-#endif /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_HV(cbe_maintenance, 0x1600, 0x1700)
+
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1600)
+
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
+#endif
 
-	STD_EXCEPTION_PSERIES(0x1700, altivec_assist)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x1700)
+VECTOR_HANDLER_REAL(altivec_assist, 0x1700, 0x1800)
+
+TRAMP_KVM(PACA_EXGEN, 0x1700)
 
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_HV(0x1800, 0x1802, cbe_thermal)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1802)
-#else
+VECTOR_HANDLER_REAL_HV(cbe_thermal, 0x1800, 0x1900)
+
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
+
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
 	. = 0x1800
-#endif /* CONFIG_CBE_RAS */
+#endif
 
 
 /*** Out of line interrupts support ***/
 
-	.align	7
 	/* moved from 0x200 */
-machine_check_powernv_early:
+	.align 7;
+TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
 BEGIN_FTR_SECTION
 	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
 	/*
@@ -513,25 +481,28 @@ BEGIN_FTR_SECTION
 	b	1b
 	b	.	/* prevent speculative execution */
 END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+TRAMP_HANDLER_END(machine_check_powernv_early)
 
-machine_check_pSeries:
+TRAMP_HANDLER_BEGIN(machine_check_pSeries)
 	.globl machine_check_fwnmi
 machine_check_fwnmi:
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_0(PACA_EXMC)
 machine_check_pSeries_0:
-	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST, 0x200)
+	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
 	EXCEPTION_PROLOG_PSERIES_1(machine_check_common, EXC_STD)
-	KVM_HANDLER_SKIP(PACA_EXMC, EXC_STD, 0x200)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x300)
-	KVM_HANDLER_SKIP(PACA_EXSLB, EXC_STD, 0x380)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x400)
-	KVM_HANDLER(PACA_EXSLB, EXC_STD, 0x480)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x900)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x982)
+TRAMP_HANDLER_END(machine_check_pSeries)
+
+TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
+TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
+TRAMP_KVM(PACA_EXGEN, 0x400)
+TRAMP_KVM(PACA_EXSLB, 0x480)
+TRAMP_KVM(PACA_EXGEN, 0x900)
+TRAMP_KVM_HV(PACA_EXGEN, 0x980)
 
 #ifdef CONFIG_PPC_DENORMALISATION
-denorm_assist:
+COMMON_HANDLER_BEGIN(denorm_assist)
 BEGIN_FTR_SECTION
 /*
  * To denormalise we need to move a copy of the register to itself.
@@ -593,35 +564,43 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	HRFID
 	b	.
 #endif
+COMMON_HANDLER_END(denorm_assist)
 
 	.align	7
 	/* moved from 0x900 */
-	MASKABLE_EXCEPTION_PSERIES_OOL(0x900, decrementer)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900)
 
 	/* moved from 0xe00 */
-	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
-	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe22)
-	STD_EXCEPTION_HV_OOL(0xe42, emulation_assist)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe42)
-	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe62)
-
-	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe82)
+__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
+
+__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
+
+__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
+
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
+
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
 
 	/* moved from 0xf00 */
-	STD_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf00)
-	STD_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf20)
-	STD_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf40)
-	STD_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf60)
-	STD_EXCEPTION_HV_OOL(0xf82, hv_facility_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf82)
+__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
+TRAMP_KVM(PACA_EXGEN, 0xf00)
+
+__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
+TRAMP_KVM(PACA_EXGEN, 0xf20)
+
+__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
+TRAMP_KVM(PACA_EXGEN, 0xf40)
+
+__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
+TRAMP_KVM(PACA_EXGEN, 0xf60)
+
+__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
 
 /*
  * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
@@ -674,7 +653,7 @@ masked_##_H##interrupt:					\
  * in the generated frame has EE set to 1 or the exception
  * handler will not properly re-enable them.
  */
-_GLOBAL(__replay_interrupt)
+COMMON_HANDLER_BEGIN(__replay_interrupt)
 	/* We are going to jump to the exception common code which
 	 * will retrieve various register values from the PACA which
 	 * we don't give a damn about, so we don't bother storing them.
@@ -695,22 +674,23 @@ FTR_SECTION_ELSE
 	beq	doorbell_super_common
 ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	blr
+COMMON_HANDLER_END(__replay_interrupt)
 
 #ifdef CONFIG_PPC_PSERIES
 /*
  * Vectors for the FWNMI option.  Share common code.
  */
-	.globl system_reset_fwnmi
       .align 7
-system_reset_fwnmi:
+TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
 				 NOTEST, 0x100)
+TRAMP_HANDLER_END(system_reset_fwnmi)
 
 #endif /* CONFIG_PPC_PSERIES */
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-kvmppc_skip_interrupt:
+TRAMP_HANDLER_BEGIN(kvmppc_skip_interrupt)
 	/*
 	 * Here all GPRs are unchanged from when the interrupt happened
 	 * except for r13, which is saved in SPRG_SCRATCH0.
@@ -721,8 +701,9 @@ kvmppc_skip_interrupt:
 	GET_SCRATCH0(r13)
 	rfid
 	b	.
+TRAMP_HANDLER_END(kvmppc_skip_interrupt)
 
-kvmppc_skip_Hinterrupt:
+TRAMP_HANDLER_BEGIN(kvmppc_skip_Hinterrupt)
 	/*
 	 * Here all GPRs are unchanged from when the interrupt happened
 	 * except for r13, which is saved in SPRG_SCRATCH0.
@@ -733,6 +714,7 @@ kvmppc_skip_Hinterrupt:
 	GET_SCRATCH0(r13)
 	hrfid
 	b	.
+TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
 #endif
 
 /*
@@ -745,38 +727,56 @@ kvmppc_skip_Hinterrupt:
 
 /*** Common interrupt handlers ***/
 
-	STD_EXCEPTION_COMMON(0x100, system_reset, system_reset_exception)
+	.align 7;
+COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
+	.align 7;
+COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
+	.align 7;
+COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
+	.align 7;
+COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-	STD_EXCEPTION_COMMON_ASYNC(0x500, hardware_interrupt, do_IRQ)
-	STD_EXCEPTION_COMMON_ASYNC(0x900, decrementer, timer_interrupt)
-	STD_EXCEPTION_COMMON(0x980, hdecrementer, hdec_interrupt)
+	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
-	STD_EXCEPTION_COMMON_ASYNC(0xa00, doorbell_super, doorbell_exception)
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
 #else
-	STD_EXCEPTION_COMMON_ASYNC(0xa00, doorbell_super, unknown_exception)
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
 #endif
-	STD_EXCEPTION_COMMON(0xb00, trap_0b, unknown_exception)
-	STD_EXCEPTION_COMMON(0xd00, single_step, single_step_exception)
-	STD_EXCEPTION_COMMON(0xe00, trap_0e, unknown_exception)
-	STD_EXCEPTION_COMMON(0xe40, emulation_assist, emulation_assist_interrupt)
-	STD_EXCEPTION_COMMON_ASYNC(0xe60, hmi_exception, handle_hmi_exception)
+	.align 7;
+COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
+	.align 7;
+COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
+	.align 7;
+COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
+	.align 7;
+COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
+	.align 7;
+COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
+	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
-	STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, doorbell_exception)
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
 #else
-	STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, unknown_exception)
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
-	STD_EXCEPTION_COMMON_ASYNC(0xf00, performance_monitor, performance_monitor_exception)
-	STD_EXCEPTION_COMMON(0x1300, instruction_breakpoint, instruction_breakpoint_exception)
-	STD_EXCEPTION_COMMON(0x1502, denorm, unknown_exception)
+	.align 7;
+COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
+	.align 7;
+COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
+	.align 7;
+COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
+	.align 7;
 #ifdef CONFIG_ALTIVEC
-	STD_EXCEPTION_COMMON(0x1700, altivec_assist, altivec_assist_exception)
+COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
 #else
-	STD_EXCEPTION_COMMON(0x1700, altivec_assist, unknown_exception)
+COMMON_HANDLER(altivec_assist_common, 0x1700, unknown_exception)
 #endif
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_COMMON(0x1200, cbe_system_error, cbe_system_error_exception)
-	STD_EXCEPTION_COMMON(0x1600, cbe_maintenance, cbe_maintenance_exception)
-	STD_EXCEPTION_COMMON(0x1800, cbe_thermal, cbe_thermal_exception)
+	.align 7;
+COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
+	.align 7;
+COMMON_HANDLER(cbe_maintenance, 0x1600, cbe_maintenance_exception)
+	.align 7;
+COMMON_HANDLER(cbe_thermal, 0x1800, cbe_thermal_exception)
 #endif /* CONFIG_CBE_RAS */
 
 	/*
@@ -794,10 +794,12 @@ kvmppc_skip_Hinterrupt:
 	 * only has extra guff for STAB-based processors -- which never
 	 * come here.
 	 */
-	STD_RELON_EXCEPTION_PSERIES(0x4300, 0x300, data_access)
-	. = 0x4380
-	.globl data_access_slb_relon_pSeries
-data_access_slb_relon_pSeries:
+VECTOR_HANDLER_VIRT_NONE(0x4100, 0x4200)
+VECTOR_HANDLER_VIRT_NONE(0x4200, 0x4300)
+
+VECTOR_HANDLER_VIRT(data_access, 0x4300, 0x4380, 0x300)
+
+VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
 	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x380)
@@ -818,11 +820,11 @@ data_access_slb_relon_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_VIRT_END(data_access_slb, 0x4380, 0x4400)
+
+VECTOR_HANDLER_VIRT(instruction_access, 0x4400, 0x4480, 0x400)
 
-	STD_RELON_EXCEPTION_PSERIES(0x4400, 0x400, instruction_access)
-	. = 0x4480
-	.globl instruction_access_slb_relon_pSeries
-instruction_access_slb_relon_pSeries:
+VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
 	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x480)
@@ -838,97 +840,88 @@ instruction_access_slb_relon_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_VIRT_END(instruction_access_slb, 0x4480, 0x4500)
 
-	. = 0x4500
-	.globl hardware_interrupt_relon_pSeries;
+VECTOR_HANDLER_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x4600)
 	.globl hardware_interrupt_relon_hv;
-hardware_interrupt_relon_pSeries:
 hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x502, hardware_interrupt, EXC_HV, SOFTEN_TEST_HV)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_HV, SOFTEN_TEST_HV)
 	FTR_SECTION_ELSE
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt, EXC_STD, SOFTEN_TEST_PR)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_STD, SOFTEN_TEST_PR)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
-	STD_RELON_EXCEPTION_PSERIES(0x4600, 0x600, alignment)
-	STD_RELON_EXCEPTION_PSERIES(0x4700, 0x700, program_check)
-	STD_RELON_EXCEPTION_PSERIES(0x4800, 0x800, fp_unavailable)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer)
-	STD_RELON_EXCEPTION_HV(0x4980, 0x982, hdecrementer)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super)
-	STD_RELON_EXCEPTION_PSERIES(0x4b00, 0xb00, trap_0b)
-
-	. = 0x4c00
-	.globl system_call_relon_pSeries
-system_call_relon_pSeries:
+VECTOR_HANDLER_VIRT_END(hardware_interrupt, 0x4500, 0x4600)
+
+VECTOR_HANDLER_VIRT(alignment, 0x4600, 0x4700, 0x600)
+VECTOR_HANDLER_VIRT(program_check, 0x4700, 0x4800, 0x700)
+VECTOR_HANDLER_VIRT(fp_unavailable, 0x4800, 0x4900, 0x800)
+VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
+VECTOR_HANDLER_VIRT_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
+VECTOR_HANDLER_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x4b00, 0xa00)
+VECTOR_HANDLER_VIRT(trap_0b, 0x4b00, 0x4c00, 0xb00)
+
+VECTOR_HANDLER_VIRT_BEGIN(system_call, 0x4c00, 0x4d00)
 	HMT_MEDIUM
 	SYSCALL_PSERIES_1
 	SYSCALL_PSERIES_2_DIRECT
 	SYSCALL_PSERIES_3
+VECTOR_HANDLER_VIRT_END(system_call, 0x4c00, 0x4d00)
 
-	STD_RELON_EXCEPTION_PSERIES(0x4d00, 0xd00, single_step)
+VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
 
-	. = 0x4e00
-	b	.	/* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
 
-	. = 0x4e20
-	b	.	/* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
 
-	. = 0x4e40
-emulation_assist_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	emulation_assist_relon_hv
+__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
 
-	. = 0x4e60
-	b	.	/* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
 
-	. = 0x4e80
-h_doorbell_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_doorbell_relon_hv
+__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
 
-	. = 0x4f00
-performance_monitor_relon_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	performance_monitor_relon_pSeries
+VECTOR_HANDLER_VIRT_NONE(0x4ea0, 0x4f00)
 
-	. = 0x4f20
-altivec_unavailable_relon_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	altivec_unavailable_relon_pSeries
+__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
 
-	. = 0x4f40
-vsx_unavailable_relon_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	vsx_unavailable_relon_pSeries
+__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
 
-	. = 0x4f60
-facility_unavailable_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	facility_unavailable_relon_pSeries
+__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
 
-	. = 0x4f80
-hv_facility_unavailable_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	hv_facility_unavailable_relon_hv
+__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
+
+__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
+
+VECTOR_HANDLER_VIRT_NONE(0x4fa0, 0x5200)
+
+VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
+
+VECTOR_HANDLER_VIRT(instruction_breakpoint, 0x5300, 0x5400, 0x1300)
 
-	STD_RELON_EXCEPTION_PSERIES(0x5300, 0x1300, instruction_breakpoint)
 #ifdef CONFIG_PPC_DENORMALISATION
-	. = 0x5500
-	b	denorm_exception_hv
+VECTOR_HANDLER_VIRT_BEGIN(denorm_exception, 0x5500, 0x5600)
+	b	exc_0x1500_denorm_exception_hv
+VECTOR_HANDLER_VIRT_END(denorm_exception, 0x5500, 0x5600)
+#else
+VECTOR_HANDLER_VIRT_NONE(0x5500, 0x5600)
 #endif
-	STD_RELON_EXCEPTION_PSERIES(0x5700, 0x1700, altivec_assist)
 
-	.align	7
+VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
+
+VECTOR_HANDLER_VIRT(altivec_assist, 0x5700, 0x5800, 0x1700)
+
+VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
 
-ppc64_runlatch_on_trampoline:
+
+	.align	7
+TRAMP_HANDLER_BEGIN(ppc64_runlatch_on_trampoline)
 	b	__ppc64_runlatch_on
+TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
 
 /*
  * Here r13 points to the paca, r9 contains the saved CR,
@@ -936,8 +929,7 @@ ppc64_runlatch_on_trampoline:
  * r9 - r13 are saved in paca->exgen.
  */
 	.align	7
-	.globl data_access_common
-data_access_common:
+COMMON_HANDLER_BEGIN(data_access_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr	r10,SPRN_DSISR
@@ -955,10 +947,10 @@ BEGIN_MMU_FTR_SECTION
 MMU_FTR_SECTION_ELSE
 	b	handle_page_fault
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
+COMMON_HANDLER_END(data_access_common)
 
 	.align  7
-	.globl  h_data_storage_common
-h_data_storage_common:
+COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
 	std     r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr   r10,SPRN_HDSISR
@@ -969,10 +961,10 @@ h_data_storage_common:
 	addi    r3,r1,STACK_FRAME_OVERHEAD
 	bl      unknown_exception
 	b       ret_from_except
+COMMON_HANDLER_END(h_data_storage_common)
 
 	.align	7
-	.globl instruction_access_common
-instruction_access_common:
+COMMON_HANDLER_BEGIN(instruction_access_common)
 	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
 	RECONCILE_IRQ_STATE(r10, r11)
 	ld	r12,_MSR(r1)
@@ -986,17 +978,17 @@ BEGIN_MMU_FTR_SECTION
 MMU_FTR_SECTION_ELSE
 	b	handle_page_fault
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
+COMMON_HANDLER_END(instruction_access_common)
 
-	STD_EXCEPTION_COMMON(0xe20, h_instr_storage, unknown_exception)
+	.align 7
+COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
 	/*
 	 * Machine check is different because we use a different
 	 * save area: PACA_EXMC instead of PACA_EXGEN.
 	 */
 	.align	7
-	.globl machine_check_common
-machine_check_common:
-
+COMMON_HANDLER_BEGIN(machine_check_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr	r10,SPRN_DSISR
@@ -1012,10 +1004,10 @@ machine_check_common:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	machine_check_exception
 	b	ret_from_except
+COMMON_HANDLER_END(machine_check_common)
 
 	.align	7
-	.globl alignment_common
-alignment_common:
+COMMON_HANDLER_BEGIN(alignment_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr	r10,SPRN_DSISR
@@ -1030,20 +1022,20 @@ alignment_common:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	alignment_exception
 	b	ret_from_except
+COMMON_HANDLER_END(alignment_common)
 
 	.align	7
-	.globl program_check_common
-program_check_common:
+COMMON_HANDLER_BEGIN(program_check_common)
 	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
 	bl	save_nvgprs
 	RECONCILE_IRQ_STATE(r10, r11)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	program_check_exception
 	b	ret_from_except
+COMMON_HANDLER_END(program_check_common)
 
 	.align	7
-	.globl fp_unavailable_common
-fp_unavailable_common:
+COMMON_HANDLER_BEGIN(fp_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
 	bne	1f			/* if from user, just load it up */
 	bl	save_nvgprs
@@ -1071,9 +1063,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 	bl	fp_unavailable_tm
 	b	ret_from_except
 #endif
+COMMON_HANDLER_END(fp_unavailable_common)
+
 	.align	7
-	.globl altivec_unavailable_common
-altivec_unavailable_common:
+COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
@@ -1105,10 +1098,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	altivec_unavailable_exception
 	b	ret_from_except
+COMMON_HANDLER_END(altivec_unavailable_common)
 
 	.align	7
-	.globl vsx_unavailable_common
-vsx_unavailable_common:
+COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
 #ifdef CONFIG_VSX
 BEGIN_FTR_SECTION
@@ -1139,19 +1132,21 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	vsx_unavailable_exception
 	b	ret_from_except
+COMMON_HANDLER_END(vsx_unavailable_common)
 
-	STD_EXCEPTION_COMMON(0xf60, facility_unavailable, facility_unavailable_exception)
-	STD_EXCEPTION_COMMON(0xf80, hv_facility_unavailable, facility_unavailable_exception)
+	.align 7;
+COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
+	.align 7;
+COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
 	/* Equivalents to the above handlers for relocation-on interrupt vectors */
-	STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
-	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
-
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
-	STD_RELON_EXCEPTION_HV_OOL(0xf80, hv_facility_unavailable)
+__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
+__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
+__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
+__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
+__TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
+__TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
 
 	/*
 	 * The __end_interrupts marker must be past the out-of-line (OOL)
@@ -1179,8 +1174,8 @@ fwnmi_data_area:
 	. = 0x8000
 #endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
 
-	.globl hmi_exception_early
-hmi_exception_early:
+	.align 7;
+COMMON_HANDLER_BEGIN(hmi_exception_early)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0xe60)
 	mr	r10,r1			/* Save r1			*/
 	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
@@ -1227,7 +1222,8 @@ hmi_exception_early:
 hmi_exception_after_realmode:
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	hmi_exception_hv
+	b	tramp_real_hmi_exception
+COMMON_HANDLER_END(hmi_exception_early)
 
 
 #define MACHINE_CHECK_HANDLER_WINDUP			\
@@ -1266,8 +1262,7 @@ hmi_exception_after_realmode:
 	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
 	 */
 	.align	7
-	.globl machine_check_handle_early
-machine_check_handle_early:
+COMMON_HANDLER_BEGIN(machine_check_handle_early)
 	std	r0,GPR0(r1)	/* Save r0 */
 	EXCEPTION_PROLOG_COMMON_3(0x200)
 	bl	save_nvgprs
@@ -1379,6 +1374,8 @@ unrecover_mce:
 1:	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unrecoverable_exception
 	b	1b
+COMMON_HANDLER_END(machine_check_handle_early)
+
 /*
  * r13 points to the PACA, r9 contains the saved CR,
  * r12 contain the saved SRR1, SRR0 is still ready for return
@@ -1387,7 +1384,7 @@ unrecover_mce:
  * r3 is saved in paca->slb_r3
  * We assume we aren't going to take any exceptions during this procedure.
  */
-slb_miss_realmode:
+COMMON_HANDLER_BEGIN(slb_miss_realmode)
 	mflr	r10
 #ifdef CONFIG_RELOCATABLE
 	mtctr	r11
@@ -1445,15 +1442,17 @@ unrecov_slb:
 1:	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unrecoverable_exception
 	b	1b
+COMMON_HANDLER_END(slb_miss_realmode)
 
 
 #ifdef CONFIG_PPC_970_NAP
-power4_fixup_nap:
+TRAMP_HANDLER_BEGIN(power4_fixup_nap)
 	andc	r9,r9,r10
 	std	r9,TI_LOCAL_FLAGS(r11)
 	ld	r10,_LINK(r1)		/* make idle task do the */
 	std	r10,_NIP(r1)		/* equivalent of a blr */
 	blr
+TRAMP_HANDLER_END(power4_fixup_nap)
 #endif
 
 /*
-- 
2.8.1


* [PATCH 08/14] powerpc/pseries: consolidate exception handler alignment
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (6 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 07/14] powerpc/pseries: improved exception vector macros Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 09/14] powerpc/64: use gas sections for arranging exception vectors Nicholas Piggin
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Move the exception handler alignment directives into the head-64.h macros,
because they will no longer work in-place after the next patch. This
slightly changes which functions have alignment applied, and therefore the
generated code, which is why it was not done initially (see the earlier
patch).
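
As a rough before/after sketch (taken from the system_reset case in this
patch; illustrative only):

Before, the alignment sits at each use site:

	.align 7;
COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)

After, the .align 7 is emitted by COMMON_HANDLER_BEGIN itself and the
use site is just:

COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)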

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/head-64.h   |  5 +++++
 arch/powerpc/kernel/exceptions-64s.S | 38 +-----------------------------------
 2 files changed, 6 insertions(+), 37 deletions(-)

diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
index bc06848..98cd36b 100644
--- a/arch/powerpc/include/asm/head-64.h
+++ b/arch/powerpc/include/asm/head-64.h
@@ -1,9 +1,11 @@
 #ifndef _ASM_POWERPC_HEAD_64_H
 #define _ASM_POWERPC_HEAD_64_H
 
+#include <asm/cache.h>
 
 #define VECTOR_HANDLER_REAL_BEGIN(name, start, end)			\
 	. = start ;							\
+	.align 7;							\
 	.global exc_##start##_##name ;					\
 exc_##start##_##name:
 
@@ -11,18 +13,21 @@ exc_##start##_##name:
 
 #define VECTOR_HANDLER_VIRT_BEGIN(name, start, end)			\
 	. = start ;							\
+	.align 7;							\
 	.global exc_##start##_##name ;					\
 exc_##start##_##name:
 
 #define VECTOR_HANDLER_VIRT_END(name, start, end)
 
 #define COMMON_HANDLER_BEGIN(name)					\
+	.align 7;							\
 	.global name;							\
 name:
 
 #define COMMON_HANDLER_END(name)
 
 #define TRAMP_HANDLER_BEGIN(name)					\
+	.align 7;							\
 	.global name ;							\
 name:
 
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 82fb261..db13569 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -407,7 +407,6 @@ VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
 /*** Out of line interrupts support ***/
 
 	/* moved from 0x200 */
-	.align 7;
 TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
 BEGIN_FTR_SECTION
 	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
@@ -566,7 +565,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 #endif
 COMMON_HANDLER_END(denorm_assist)
 
-	.align	7
 	/* moved from 0x900 */
 __TRAMP_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900)
 
@@ -680,7 +678,6 @@ COMMON_HANDLER_END(__replay_interrupt)
 /*
  * Vectors for the FWNMI option.  Share common code.
  */
-      .align 7
 TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
@@ -727,55 +724,37 @@ TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
 
 /*** Common interrupt handlers ***/
 
-	.align 7;
 COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
-	.align 7;
 COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
-	.align 7;
 COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
-	.align 7;
 COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
 COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
 #else
 COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
 #endif
-	.align 7;
 COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
-	.align 7;
 COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
-	.align 7;
 COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
-	.align 7;
 COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
-	.align 7;
 COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
-	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
 COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
 #else
 COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
-	.align 7;
 COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
-	.align 7;
 COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
-	.align 7;
 COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
-	.align 7;
 #ifdef CONFIG_ALTIVEC
 COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
 #else
 COMMON_HANDLER(altivec_assist_common, 0x1700, unknown_exception)
 #endif
 #ifdef CONFIG_CBE_RAS
-	.align 7;
 COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
-	.align 7;
 COMMON_HANDLER(cbe_maintenance, 0x1600, cbe_maintenance_exception)
-	.align 7;
 COMMON_HANDLER(cbe_thermal, 0x1800, cbe_thermal_exception)
 #endif /* CONFIG_CBE_RAS */
 
@@ -918,7 +897,6 @@ VECTOR_HANDLER_VIRT(altivec_assist, 0x5700, 0x5800, 0x1700)
 VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
 
 
-	.align	7
 TRAMP_HANDLER_BEGIN(ppc64_runlatch_on_trampoline)
 	b	__ppc64_runlatch_on
 TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
@@ -928,7 +906,6 @@ TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
  * SRR0 and SRR1 are saved in r11 and r12,
  * r9 - r13 are saved in paca->exgen.
  */
-	.align	7
 COMMON_HANDLER_BEGIN(data_access_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
@@ -949,7 +926,6 @@ MMU_FTR_SECTION_ELSE
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
 COMMON_HANDLER_END(data_access_common)
 
-	.align  7
 COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
 	std     r10,PACA_EXGEN+EX_DAR(r13)
@@ -963,7 +939,6 @@ COMMON_HANDLER_BEGIN(h_data_storage_common)
 	b       ret_from_except
 COMMON_HANDLER_END(h_data_storage_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(instruction_access_common)
 	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
 	RECONCILE_IRQ_STATE(r10, r11)
@@ -980,14 +955,12 @@ MMU_FTR_SECTION_ELSE
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
 COMMON_HANDLER_END(instruction_access_common)
 
-	.align 7
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
 	/*
 	 * Machine check is different because we use a different
 	 * save area: PACA_EXMC instead of PACA_EXGEN.
 	 */
-	.align	7
 COMMON_HANDLER_BEGIN(machine_check_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
@@ -1006,7 +979,6 @@ COMMON_HANDLER_BEGIN(machine_check_common)
 	b	ret_from_except
 COMMON_HANDLER_END(machine_check_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(alignment_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
@@ -1024,7 +996,6 @@ COMMON_HANDLER_BEGIN(alignment_common)
 	b	ret_from_except
 COMMON_HANDLER_END(alignment_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(program_check_common)
 	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
 	bl	save_nvgprs
@@ -1034,7 +1005,6 @@ COMMON_HANDLER_BEGIN(program_check_common)
 	b	ret_from_except
 COMMON_HANDLER_END(program_check_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(fp_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
 	bne	1f			/* if from user, just load it up */
@@ -1065,7 +1035,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
 COMMON_HANDLER_END(fp_unavailable_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
 #ifdef CONFIG_ALTIVEC
@@ -1100,7 +1069,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	b	ret_from_except
 COMMON_HANDLER_END(altivec_unavailable_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
 #ifdef CONFIG_VSX
@@ -1134,9 +1102,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	b	ret_from_except
 COMMON_HANDLER_END(vsx_unavailable_common)
 
-	.align 7;
 COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
-	.align 7;
 COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
 	/* Equivalents to the above handlers for relocation-on interrupt vectors */
@@ -1174,7 +1140,6 @@ fwnmi_data_area:
 	. = 0x8000
 #endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
 
-	.align 7;
 COMMON_HANDLER_BEGIN(hmi_exception_early)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0xe60)
 	mr	r10,r1			/* Save r1			*/
@@ -1261,7 +1226,6 @@ COMMON_HANDLER_END(hmi_exception_early)
 	 * Handle machine check early in real mode. We come here with
 	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
 	 */
-	.align	7
 COMMON_HANDLER_BEGIN(machine_check_handle_early)
 	std	r0,GPR0(r1)	/* Save r0 */
 	EXCEPTION_PROLOG_COMMON_3(0x200)
@@ -1458,7 +1422,7 @@ TRAMP_HANDLER_END(power4_fixup_nap)
 /*
  * Hash table stuff
  */
-	.align	7
+	.align 7
 do_hash_page:
 #ifdef CONFIG_PPC_STD_MMU_64
 	andis.	r0,r4,0xa410		/* weird error? */
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 09/14] powerpc/64: use gas sections for arranging exception vectors
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (7 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 08/14] powerpc/pseries: consolidate exception handler alignment Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 10/14] powerpc/pseries: move related exception code together Nicholas Piggin
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Use assembler sections of fixed size and location to arrange the pseries
exception vector code (64e also uses it in head_64.S for 0x0..0x100).
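
Very roughly (with made-up section and symbol names, not the series'
actual macros or linker script), the .S file only emits code into a
named head section, and the linker script is what pins that section to
its fixed offset:

  /* foo.S, sketch only: */
  	.section ".head.text.demo_vectors","ax",@progbits
  	.global demo_vector_0x100
  demo_vector_0x100:
  	b	.		/* vector body */

  /* matching vmlinux.lds.S fragment, also a sketch:
   *	. = 0x100;
   *	*(.head.text.demo_vectors)
   */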

This allows better flexibility in arranging exception code and hiding
unimportant details behind macros.

Gas sections can be a bit painful to use this way, mainly because the
assembler does not know where they will finally be linked. Taking
absolute addresses, for example, requires a bit of trickery, but it can
mostly be hidden behind macros.
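
For instance (a sketch under the assumption that the section will be
linked at a known fixed address, 0x2000 here; all names are made up),
an "absolute" address has to be rebuilt from the label's offset within
its section plus the address the linker script will assign to that
section:

  	.section ".head.text.demo_tramp","ax",@progbits
  demo_section_start:
  	nop
  demo_label:
  	nop
  	/* absolute address of demo_label once the linker places the
  	 * section at the assumed fixed link address of 0x2000 */
  	.long (demo_label - demo_section_start) + 0x2000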

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/exception-64s.h |   8 +-
 arch/powerpc/include/asm/head-64.h       | 252 +++++++++++++++++++++++++------
 arch/powerpc/include/asm/ppc_asm.h       |  29 ++--
 arch/powerpc/kernel/exceptions-64s.S     |  90 ++++++++---
 arch/powerpc/kernel/head_64.S            |  84 ++++++-----
 arch/powerpc/kernel/vmlinux.lds.S        |  22 ++-
 6 files changed, 367 insertions(+), 118 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 01fd163..06e2247 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -93,11 +93,11 @@
  * reg must contain kbase, and kbase must be 64K aligned.
  */
 #define LOAD_HANDLER_64K(reg, label)					\
-	ori	reg,reg,(label)-_stext ;
+	ori	reg,reg,ABS_ADDR(label);
 
 #define LOAD_HANDLER_4G(reg, label)					\
-	ori	reg,reg,((label)-_stext)@l ;				\
-	addis	reg,reg,((label)-_stext)@h ;
+	ori	reg,reg,ABS_ADDR(label)@l ;				\
+	addis	reg,reg,ABS_ADDR(label)@h
 
 /* Exception register prefixes */
 #define EXC_HV	H
@@ -186,7 +186,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	ld	r12,PACAKBASE(r13);	/* get high part of &label */	\
 	ld	r10,PACAKMSR(r13);	/* get MSR value for kernel */	\
 	mfspr	r11,SPRN_##h##SRR0;	/* save SRR0 */			\
-	LOAD_HANDLER_4G(r12,label)					\
+	LOAD_HANDLER_4G(r12,label);					\
 	mtspr	SPRN_##h##SRR0,r12;					\
 	mfspr	r12,SPRN_##h##SRR1;	/* and SRR1 */			\
 	mtspr	SPRN_##h##SRR1,r10;					\
diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
index 98cd36b..5adb48d 100644
--- a/arch/powerpc/include/asm/head-64.h
+++ b/arch/powerpc/include/asm/head-64.h
@@ -1,37 +1,167 @@
 #ifndef _ASM_POWERPC_HEAD_64_H
 #define _ASM_POWERPC_HEAD_64_H
-
+/*
+ * Stuff to help the fixed layout head code, and S exception vectors.
+ */
+#include <asm/ppc_asm.h>
 #include <asm/cache.h>
 
+/*
+ * We can't do CPP stringification and concatenation directly into the section
+ * name for some reason, so these macros can do it for us.
+ */
+.macro define_ftsec name
+	.section ".head.text.\name\()","ax",@progbits
+.endm
+.macro use_ftsec name
+	.section ".head.text.\name\()"
+.endm
+
+#define OPEN_FIXED_SECTION(sname, start, end)			\
+	sname##_start = (start);				\
+	sname##_end = (end);					\
+	sname##_len = (end) - (start);				\
+	define_ftsec sname;					\
+	. = 0x0;						\
+	.global start_##sname;					\
+start_##sname:
+
+#define OPEN_TEXT_SECTION(start)				\
+	text_start = (start);					\
+	.section ".text","ax",@progbits;			\
+	. = 0x0;						\
+	.global start_text;					\
+start_text:
+
+#define USE_FIXED_SECTION(sname)				\
+	fs_label = start_##sname;				\
+	fs_start = sname##_start;				\
+	use_ftsec sname;
+
+#define USE_TEXT_SECTION()					\
+	fs_label = start_text;					\
+	fs_start = text_start;					\
+	.text
+
+#define UNUSE_FIXED_SECTION(sname)				\
+	.previous;
+
+#define CLOSE_FIXED_SECTION(sname)				\
+	USE_FIXED_SECTION(sname);				\
+	. = sname##_len;					\
+end_##sname:
+
+#define CLOSE_FIXED_SECTION_LAST(sname)				\
+	USE_FIXED_SECTION(sname);				\
+end_##sname:
+
+
+#define __FIXED_SECTION_ENTRY_BEGIN(sname, name, __align)	\
+	USE_FIXED_SECTION(sname);				\
+	.align __align;						\
+	.global name;						\
+name:
+
+#define FIXED_SECTION_ENTRY_BEGIN(sname, name)			\
+	__FIXED_SECTION_ENTRY_BEGIN(sname, name, 0)
+
+#define FIXED_SECTION_ENTRY_S_BEGIN(sname, name, start)		\
+	USE_FIXED_SECTION(sname);				\
+	name##_start = (start);					\
+	.if (start) < sname##_start;				\
+	.error "Fixed section underflow";			\
+	.abort;							\
+	.endif;							\
+	. = (start) - sname##_start;				\
+	.global name;						\
+name:
+
+#define FIXED_SECTION_ENTRY_END(sname, name)			\
+end_##name:							\
+	UNUSE_FIXED_SECTION(sname);
+
+#define FIXED_SECTION_ENTRY_E_END(sname, name, end)		\
+	.if (end) > sname##_end;				\
+	.error "Fixed section overflow";			\
+	.abort;							\
+	.endif;							\
+end_##name:							\
+	.if (end_##name - name > end - name##_start);		\
+	.error "Fixed entry overflow";				\
+	.abort;							\
+	.endif;							\
+	/* Pad out the end with traps */			\
+	.rept (((end) - name##_start) - (. - name)) / 4;	\
+	trap;							\
+	.endr;							\
+	. = ((end) - sname##_start);				\
+	UNUSE_FIXED_SECTION(sname);
+
+#define FIXED_SECTION_ENTRY_S(sname, name, start, entry)	\
+	FIXED_SECTION_ENTRY_S_BEGIN(sname, name, start);	\
+	entry;							\
+	FIXED_SECTION_ENTRY_END(sname, name);			\
+
+#define FIXED_SECTION_ENTRY(sname, name, start, end, entry)	\
+	FIXED_SECTION_ENTRY_S_BEGIN(sname, name, start);	\
+	entry;							\
+	FIXED_SECTION_ENTRY_E_END(sname, name, end);
+
+#define FIXED_SECTION_ENTRY_ZERO(sname, start, end)		\
+	FIXED_SECTION_ENTRY_S_BEGIN(sname, sname##_##zero, start); \
+	.zero (end) - (start);					\
+	FIXED_SECTION_ENTRY_E_END(sname, sname##_##zero, end);
+
+#define ABS_ADDR(label) (label - fs_label + fs_start)
+
+/*
+ * These macros are used to change symbols in other fixed sections to be
+ * absolute or related to our current fixed section.
+ *
+ * GAS makes things as painful as it possibly can.
+ */
+#define FIXED_SECTION_ABS_ADDR(sname, target)				\
+	(target - start_##sname + sname##_start)
+
+#define FIXED_SECTION_REL_ADDR(sname, target)				\
+	(FIXED_SECTION_ABS_ADDR(sname, target) + fs_label - fs_start)
+
+#define FTR_SECTION_FIXED_SECTION_RELADDR(label, sname, target)		\
+	FTR_SECTION_EXT_RELADDR(label, FIXED_SECTION_REL_ADDR(sname, target))
+
+
 #define VECTOR_HANDLER_REAL_BEGIN(name, start, end)			\
-	. = start ;							\
-	.align 7;							\
-	.global exc_##start##_##name ;					\
-exc_##start##_##name:
+	FIXED_SECTION_ENTRY_S_BEGIN(real_vectors, exc_##start##_##name, start)
 
-#define VECTOR_HANDLER_REAL_END(name, start, end)
+#define VECTOR_HANDLER_REAL_END(name, start, end)			\
+	FIXED_SECTION_ENTRY_E_END(real_vectors, exc_##start##_##name, end)
 
 #define VECTOR_HANDLER_VIRT_BEGIN(name, start, end)			\
-	. = start ;							\
-	.align 7;							\
-	.global exc_##start##_##name ;					\
-exc_##start##_##name:
+	FIXED_SECTION_ENTRY_S_BEGIN(virt_vectors, exc_##start##_##name, start)
 
-#define VECTOR_HANDLER_VIRT_END(name, start, end)
+#define VECTOR_HANDLER_VIRT_END(name, start, end)			\
+	FIXED_SECTION_ENTRY_E_END(virt_vectors, exc_##start##_##name, end)
 
 #define COMMON_HANDLER_BEGIN(name)					\
+	USE_TEXT_SECTION();						\
 	.align 7;							\
 	.global name;							\
 name:
 
-#define COMMON_HANDLER_END(name)
+#define COMMON_HANDLER_END(name)					\
+	.previous
 
 #define TRAMP_HANDLER_BEGIN(name)					\
-	.align 7;							\
-	.global name ;							\
-name:
+	__FIXED_SECTION_ENTRY_BEGIN(real_trampolines, name, 7)
+
+#define TRAMP_HANDLER_END(name)						\
+	FIXED_SECTION_ENTRY_END(real_trampolines, name)
 
-#define TRAMP_HANDLER_END(name)	
+#define VTRAMP_HANDLER_BEGIN(name)					\
+	__FIXED_SECTION_ENTRY_BEGIN(virt_trampolines, name, 7)
+
+#define VTRAMP_HANDLER_END(name)					\
+	FIXED_SECTION_ENTRY_END(virt_trampolines, name)
 
 #define TRAMP_KVM_BEGIN(name)						\
 	TRAMP_HANDLER_BEGIN(name)
@@ -39,63 +169,75 @@ name:
 #define TRAMP_KVM_END(name)						\
 	TRAMP_HANDLER_END(name)
 
-#define VECTOR_HANDLER_REAL_NONE(start, end)
+#define VECTOR_HANDLER_REAL_NONE(start, end)				\
+	FIXED_SECTION_ENTRY_S_BEGIN(real_vectors, exc_##start##_##unused, start); \
+	FIXED_SECTION_ENTRY_E_END(real_vectors, exc_##start##_##unused, end)
 
-#define VECTOR_HANDLER_VIRT_NONE(start, end)
+#define VECTOR_HANDLER_VIRT_NONE(start, end)				\
+	FIXED_SECTION_ENTRY_S_BEGIN(virt_vectors, exc_##start##_##unused, start); \
+	FIXED_SECTION_ENTRY_E_END(virt_vectors, exc_##start##_##unused, end);
 
 
 #define VECTOR_HANDLER_REAL(name, start, end)				\
 	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
 	STD_EXCEPTION_PSERIES(start, name##_common);			\
-	VECTOR_HANDLER_REAL_END(name, start, end);
+	VECTOR_HANDLER_REAL_END(name, start, end)
 
 #define VECTOR_HANDLER_VIRT(name, start, end, realvec)			\
 	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
 	STD_RELON_EXCEPTION_PSERIES(start, realvec, name##_common);	\
-	VECTOR_HANDLER_VIRT_END(name, start, end);
+	VECTOR_HANDLER_VIRT_END(name, start, end)
 
 #define VECTOR_HANDLER_REAL_MASKABLE(name, start, end)			\
 	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
 	MASKABLE_EXCEPTION_PSERIES(start, start, name##_common);	\
-	VECTOR_HANDLER_REAL_END(name, start, end);
+	VECTOR_HANDLER_REAL_END(name, start, end)
 
 #define VECTOR_HANDLER_VIRT_MASKABLE(name, start, end, realvec)		\
 	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
 	MASKABLE_RELON_EXCEPTION_PSERIES(start, realvec, name##_common); \
-	VECTOR_HANDLER_VIRT_END(name, start, end);
+	VECTOR_HANDLER_VIRT_END(name, start, end)
 
 #define VECTOR_HANDLER_REAL_HV(name, start, end)			\
 	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
 	STD_EXCEPTION_HV(start, start + 0x2, name##_common);		\
-	VECTOR_HANDLER_REAL_END(name, start, end);
+	VECTOR_HANDLER_REAL_END(name, start, end)
 
 #define VECTOR_HANDLER_VIRT_HV(name, start, end, realvec)		\
 	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
 	STD_RELON_EXCEPTION_HV(start, realvec + 0x2, name##_common);	\
-	VECTOR_HANDLER_VIRT_END(name, start, end);
+	VECTOR_HANDLER_VIRT_END(name, start, end)
 
 #define __VECTOR_HANDLER_REAL_OOL(name, start, end)			\
 	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
 	__OOL_EXCEPTION(start, label, tramp_real_##name);		\
-	VECTOR_HANDLER_REAL_END(name, start, end);
+	VECTOR_HANDLER_REAL_END(name, start, end)
 
 #define __TRAMP_HANDLER_REAL_OOL(name, vec)				\
 	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
 	STD_EXCEPTION_PSERIES_OOL(vec, name##_common);			\
-	TRAMP_HANDLER_END(tramp_real_##name);
+	TRAMP_HANDLER_END(tramp_real_##name)
+
+#define VECTOR_HANDLER_REAL_OOL(name, start, end)			\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);			\
+	__TRAMP_HANDLER_REAL_OOL(name, start)
 
 #define __VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end)		\
-	__VECTOR_HANDLER_REAL_OOL(name, start, end);
+	__VECTOR_HANDLER_REAL_OOL(name, start, end)
 
 #define __TRAMP_HANDLER_REAL_OOL_MASKABLE(name, vec)			\
 	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
 	MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common);		\
-	TRAMP_HANDLER_END(tramp_real_##name);
+	TRAMP_HANDLER_END(tramp_real_##name)
+
+#define VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end);		\
+	__TRAMP_HANDLER_REAL_OOL_MASKABLE(name, start)
 
 #define __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(name, start, end, handler)	\
 	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
 	__OOL_EXCEPTION(start, label, handler);				\
-	VECTOR_HANDLER_REAL_END(name, start, end);
+	VECTOR_HANDLER_REAL_END(name, start, end)
 
 #define __VECTOR_HANDLER_REAL_OOL_HV(name, start, end)			\
 	__VECTOR_HANDLER_REAL_OOL(name, start, end);
@@ -103,7 +245,11 @@ name:
 #define __TRAMP_HANDLER_REAL_OOL_HV(name, vec)				\
 	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
 	STD_EXCEPTION_HV_OOL(vec + 0x2, name##_common);			\
-	TRAMP_HANDLER_END(tramp_real_##name);
+	TRAMP_HANDLER_END(tramp_real_##name)
+
+#define VECTOR_HANDLER_REAL_OOL_HV(name, start, end)			\
+	__VECTOR_HANDLER_REAL_OOL_HV(name, start, end);			\
+	__TRAMP_HANDLER_REAL_OOL_HV(name, start)
 
 #define __VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end)		\
 	__VECTOR_HANDLER_REAL_OOL(name, start, end);
@@ -111,41 +257,61 @@ name:
 #define __TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(name, vec)			\
 	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
 	MASKABLE_EXCEPTION_HV_OOL(vec, name##_common);			\
-	TRAMP_HANDLER_END(tramp_real_##name);
+	TRAMP_HANDLER_END(tramp_real_##name)
+
+#define VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end);	\
+	__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(name, start)
 
 #define __VECTOR_HANDLER_VIRT_OOL(name, start, end)			\
 	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
 	__OOL_EXCEPTION(start, label, tramp_virt_##name);		\
-	VECTOR_HANDLER_VIRT_END(name, start, end);
+	VECTOR_HANDLER_VIRT_END(name, start, end)
 
 #define __TRAMP_HANDLER_VIRT_OOL(name, realvec)				\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	STD_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name)
+
+#define VECTOR_HANDLER_VIRT_OOL(name, start, end, realvec)		\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);			\
+	__TRAMP_HANDLER_VIRT_OOL(name, realvec)
 
 #define __VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end)		\
-	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end)
 
 #define __TRAMP_HANDLER_VIRT_OOL_MASKABLE(name, realvec)		\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name)
+
+#define VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end, realvec)	\
+	__VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end);		\
+	__TRAMP_HANDLER_VIRT_OOL_MASKABLE(name, realvec)
 
 #define __VECTOR_HANDLER_VIRT_OOL_HV(name, start, end)			\
-	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end)
 
 #define __TRAMP_HANDLER_VIRT_OOL_HV(name, realvec)			\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	STD_RELON_EXCEPTION_HV_OOL(realvec, name##_common);		\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name)
+
+#define VECTOR_HANDLER_VIRT_OOL_HV(name, start, end, realvec)		\
+	__VECTOR_HANDLER_VIRT_OOL_HV(name, start, end);			\
+	__TRAMP_HANDLER_VIRT_OOL_HV(name, realvec)
 
 #define __VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end)		\
-	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end)
 
 #define __TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(name, realvec)		\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common);	\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name)
+
+#define VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end, realvec)	\
+	__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end);	\
+	__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(name, realvec)
 
 #define TRAMP_KVM(area, n)						\
 	TRAMP_KVM_BEGIN(do_kvm_##n);					\
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 2b31632..18e04ac 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -200,29 +200,26 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 
 #if defined(_CALL_ELF) && _CALL_ELF == 2
 
-#define _GLOBAL(name) \
-	.section ".text"; \
+#define ____GLOBAL(name) \
 	.align 2 ; \
 	.type name,@function; \
 	.globl name; \
 name:
 
+#define _GLOBAL(name) \
+	.section ".text"; \
+	____GLOBAL(name)
+
 #define _GLOBAL_TOC(name) \
 	.section ".text"; \
-	.align 2 ; \
-	.type name,@function; \
-	.globl name; \
-name: \
+	____GLOBAL(name) \
 0:	addis r2,r12,(.TOC.-0b)@ha; \
 	addi r2,r2,(.TOC.-0b)@l; \
 	.localentry name,.-name
 
 #define _KPROBE(name) \
 	.section ".kprobes.text","a"; \
-	.align 2 ; \
-	.type name,@function; \
-	.globl name; \
-name:
+	____GLOBAL(name) \
 
 #define DOTSYM(a)	a
 
@@ -231,6 +228,18 @@ name:
 #define XGLUE(a,b) a##b
 #define GLUE(a,b) XGLUE(a,b)
 
+#define ____GLOBAL(name) \
+	.align 2 ; \
+	.globl name; \
+	.globl GLUE(.,name); \
+	.section ".opd","aw"; \
+name: \
+	.quad GLUE(.,name); \
+	.quad .TOC.@tocbase; \
+	.quad 0; \
+	.type GLUE(.,name),@function; \
+GLUE(.,name):
+
 #define _GLOBAL(name) \
 	.section ".text"; \
 	.align 2 ; \
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index db13569..9093521 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -18,16 +18,60 @@
 #include <asm/cpuidle.h>
 
 /*
+ * There are a few constraints to be concerned with.
+ * - Exception vectors must be placed according to specification.
+ * - Real mode code and data must be located at their physical location.
+ * - Virtual mode exceptions must be located at 0xc000... virtual address.
+ * - LOAD_HANDLER_64K and conditional branch targets must be within 64K/32K.
+ *
+ *
+ * "Virtual exceptions" run with relocation on (MSR_IR=1, MSR_DR=1), and
+ * therefore don't have to run in physically located code or rfid to
+ * virtual mode kernel code. However on relocatable kernels they do have
+ * to branch to KERNELBASE offset because the rest of the kernel (outside
+ * the exception vectors) may be located elsewhere.
+ *
+ * Virtual exceptions correspond with physical, except their entry points
+ * are offset by 0xc000000000000000 and also tend to get an added 0x4000
+ * offset applied. Virtual exceptions are enabled with the Alternate
+ * Interrupt Location (AIL) bit set in the LPCR. However this does not
+ * guarantee they will be delivered virtually. Some conditions (see the ISA)
+ * cause exceptions to be delivered in real mode.
+ *
+ * It's impossible to receive interrupts below 0x300 via AIL.
+ *
+ * KVM: None of these traps are from the guest; anything that escalated
+ * to HV=1 from HV=0 is delivered via real mode handlers.
+ *
+ *
  * We layout physical memory as follows:
  * 0x0000 - 0x00ff : Secondary processor spin code
- * 0x0100 - 0x17ff : pSeries Interrupt prologs
- * 0x1800 - 0x4000 : interrupt support common interrupt prologs
- * 0x4000 - 0x5fff : pSeries interrupts with IR=1,DR=1
- * 0x6000 - 0x6fff : more interrupt support including for IR=1,DR=1
+ * 0x0100 - 0x17ff : Real mode pSeries interrupt vectors
+ * 0x1800 - 0x1fff : Reserved for vectors
+ * 0x2000 - 0x3fff : Real mode trampolines
+ * 0x4000 - 0x5fff : Relon (IR=1,DR=1) mode pSeries interrupt vectors
+ * 0x6000 - 0x6fff : Relon mode trampolines
  * 0x7000 - 0x7fff : FWNMI data area
- * 0x8000 - 0x8fff : Initial (CPU0) segment table
- * 0x9000 -        : Early init and support code
+ * 0x8000 -   .... : Common interrupt handlers, remaining early
+ *                   setup code, rest of kernel.
+ *
+ * 0x0000 - 0x3000 runs in real mode, other kernel code runs virtual.
+ * 0x0000 - 0x6fff is mapped to PAGE_OFFSET, other kernel code is relocatable.
  */
+OPEN_FIXED_SECTION(real_vectors,        0x0100, 0x2000)
+OPEN_FIXED_SECTION(real_trampolines,    0x2000, 0x4000)
+OPEN_FIXED_SECTION(virt_vectors,        0x4000, 0x6000)
+OPEN_FIXED_SECTION(virt_trampolines,    0x6000, 0x7000)
+#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
+OPEN_FIXED_SECTION(fwnmi_page,          0x7000, 0x8000)
+OPEN_TEXT_SECTION(0x8000)
+#else
+OPEN_TEXT_SECTION(0x7000)
+#endif
+
+USE_FIXED_SECTION(real_vectors)
+
+
 	/* Syscall routine is used twice, in reloc-off and reloc-on paths */
 #define SYSCALL_PSERIES_1 					\
 BEGIN_FTR_SECTION						\
@@ -90,7 +134,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
  * Therefore any relative branches in this section must only
  * branch to labels in this section.
  */
-	. = 0x100
 	.globl __start_interrupts
 __start_interrupts:
 
@@ -239,9 +282,6 @@ VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
 #endif
 VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
 
-	/* We open code these as we can't have a ". = x" (even with
-	 * x = "." within a feature section
-	 */
 VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
 	.globl hardware_interrupt_hv;
 hardware_interrupt_hv:
@@ -249,7 +289,7 @@ hardware_interrupt_hv:
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
 					    EXC_HV, SOFTEN_TEST_HV)
 	FTR_SECTION_ELSE
-		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
+		_MASKABLE_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common),
 					    EXC_STD, SOFTEN_TEST_PR)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 VECTOR_HANDLER_REAL_END(hardware_interrupt, 0x500, 0x600)
@@ -400,7 +440,6 @@ TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
 
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
-	. = 0x1800
 #endif
 
 
@@ -637,9 +676,11 @@ masked_##_H##interrupt:					\
 	GET_SCRATCH0(r13);				\
 	##_H##rfid;					\
 	b	.
-	
+
+USE_FIXED_SECTION(real_trampolines)
 	MASKED_INTERRUPT()
 	MASKED_INTERRUPT(H)
+UNUSE_FIXED_SECTION(real_trampolines)
 
 /*
  * Called from arch_local_irq_enable when an interrupt needs
@@ -827,7 +868,7 @@ hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_HV, SOFTEN_TEST_HV)
 	FTR_SECTION_ELSE
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_STD, SOFTEN_TEST_PR)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common), EXC_STD, SOFTEN_TEST_PR)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 VECTOR_HANDLER_VIRT_END(hardware_interrupt, 0x4500, 0x4600)
 
@@ -1114,6 +1155,7 @@ __TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
 __TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
 __TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
 
+USE_FIXED_SECTION(virt_trampolines)
 	/*
 	 * The __end_interrupts marker must be past the out-of-line (OOL)
 	 * handlers, so that they are copied to real address 0x100 when running
@@ -1124,20 +1166,16 @@ __TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
 	.align	7
 	.globl	__end_interrupts
 __end_interrupts:
+UNUSE_FIXED_SECTION(virt_trampolines)
 
 #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
 /*
  * Data area reserved for FWNMI option.
  * This address (0x7000) is fixed by the RPA.
+ * pseries and powernv need to keep the whole page from
+ * 0x7000 to 0x8000 free for use by the firmware
  */
-	.= 0x7000
-	.globl fwnmi_data_area
-fwnmi_data_area:
-
-	/* pseries and powernv need to keep the whole page from
-	 * 0x7000 to 0x8000 free for use by the firmware
-	 */
-	. = 0x8000
+FIXED_SECTION_ENTRY_ZERO(fwnmi_page, 0x7000, 0x8000)
 #endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
 
 COMMON_HANDLER_BEGIN(hmi_exception_early)
@@ -1419,6 +1457,14 @@ TRAMP_HANDLER_BEGIN(power4_fixup_nap)
 TRAMP_HANDLER_END(power4_fixup_nap)
 #endif
 
+CLOSE_FIXED_SECTION(real_vectors);
+CLOSE_FIXED_SECTION(real_trampolines);
+CLOSE_FIXED_SECTION(virt_vectors);
+CLOSE_FIXED_SECTION(virt_trampolines);
+CLOSE_FIXED_SECTION(fwnmi_page);
+
+USE_TEXT_SECTION()
+
 /*
  * Hash table stuff
  */
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 2d14774..9006e51 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -28,6 +28,7 @@
 #include <asm/page.h>
 #include <asm/mmu.h>
 #include <asm/ppc_asm.h>
+#include <asm/head-64.h>
 #include <asm/asm-offsets.h>
 #include <asm/bug.h>
 #include <asm/cputable.h>
@@ -65,10 +66,10 @@
  *   2. The kernel is entered at __start
  */
 
-	.text
-	.globl  _stext
-_stext:
-_GLOBAL(__start)
+OPEN_FIXED_SECTION(first_256B, 0x0, 0x100)
+
+USE_FIXED_SECTION(first_256B)
+FIXED_SECTION_ENTRY_S_BEGIN(first_256B, __start, 0x0)
 	/* NOP this out unconditionally */
 BEGIN_FTR_SECTION
 	FIXUP_ENDIAN
@@ -77,6 +78,7 @@ END_FTR_SECTION(0, 1)
 
 	/* Catch branch to 0 in real mode */
 	trap
+FIXED_SECTION_ENTRY_END(first_256B, __start)
 
 	/* Secondary processors spin on this value until it becomes non-zero.
 	 * When non-zero, it contains the real address of the function the cpu
@@ -101,24 +103,22 @@ __secondary_hold_acknowledge:
 	 * observing the alignment requirement.
 	 */
 	/* Do not move this variable as kexec-tools knows about it. */
-	. = 0x5c
-	.globl	__run_at_load
-__run_at_load:
+FIXED_SECTION_ENTRY_S_BEGIN(first_256B, __run_at_load, 0x5c)
 	.long	0x72756e30	/* "run0" -- relocate to 0 by default */
+FIXED_SECTION_ENTRY_END(first_256B, __run_at_load)
+
 #endif
 
-	. = 0x60
+FIXED_SECTION_ENTRY_S_BEGIN(first_256B, __secondary_hold, 0x60)
 /*
  * The following code is used to hold secondary processors
  * in a spin loop after they have entered the kernel, but
  * before the bulk of the kernel has been relocated.  This code
  * is relocated to physical address 0x60 before prom_init is run.
  * All of it must fit below the first exception vector at 0x100.
- * Use .globl here not _GLOBAL because we want __secondary_hold
+ * Use .globl here not ____GLOBAL because we want __secondary_hold
  * to be the actual text address, not a descriptor.
  */
-	.globl	__secondary_hold
-__secondary_hold:
 	FIXUP_ENDIAN
 #ifndef CONFIG_PPC_BOOK3E
 	mfmsr	r24
@@ -133,7 +133,7 @@ __secondary_hold:
 	/* Tell the master cpu we're here */
 	/* Relocation is off & we are located at an address less */
 	/* than 0x100, so only need to grab low order offset.    */
-	std	r24,__secondary_hold_acknowledge-_stext(0)
+	std	r24,ABS_ADDR(__secondary_hold_acknowledge)(0)
 	sync
 
 	li	r26,0
@@ -141,7 +141,7 @@ __secondary_hold:
 	tovirt(r26,r26)
 #endif
 	/* All secondary cpus wait here until told to start. */
-100:	ld	r12,__secondary_hold_spinloop-_stext(r26)
+100:	ld	r12,ABS_ADDR(__secondary_hold_spinloop)(r26)
 	cmpdi	0,r12,0
 	beq	100b
 
@@ -166,12 +166,15 @@ __secondary_hold:
 #else
 	BUG_OPCODE
 #endif
+FIXED_SECTION_ENTRY_END(first_256B, __secondary_hold)
+
+CLOSE_FIXED_SECTION(first_256B)
 
 /* This value is used to mark exception frames on the stack. */
 	.section ".toc","aw"
 exception_marker:
 	.tc	ID_72656773_68657265[TC],0x7265677368657265
-	.text
+	.previous
 
 /*
  * On server, we include the exception vectors code here as it
@@ -180,8 +183,12 @@ exception_marker:
  */
 #ifdef CONFIG_PPC_BOOK3S
 #include "exceptions-64s.S"
+#else
+OPEN_TEXT_SECTION(0x100)
 #endif
 
+USE_TEXT_SECTION()
+
 #ifdef CONFIG_PPC_BOOK3E
 /*
  * The booting_thread_hwid holds the thread id we want to boot in cpu
@@ -199,7 +206,7 @@ booting_thread_hwid:
  * r3 = the thread physical id
  * r4 = the entry point where thread starts
  */
-_GLOBAL(book3e_start_thread)
+____GLOBAL(book3e_start_thread)
 	LOAD_REG_IMMEDIATE(r5, MSR_KERNEL)
 	cmpi	0, r3, 0
 	beq	10f
@@ -227,7 +234,7 @@ _GLOBAL(book3e_start_thread)
  * input parameter:
  * r3 = the thread physical id
  */
-_GLOBAL(book3e_stop_thread)
+____GLOBAL(book3e_stop_thread)
 	cmpi	0, r3, 0
 	beq	10f
 	cmpi	0, r3, 1
@@ -241,7 +248,7 @@ _GLOBAL(book3e_stop_thread)
 13:
 	blr
 
-_GLOBAL(fsl_secondary_thread_init)
+____GLOBAL(fsl_secondary_thread_init)
 	mfspr	r4,SPRN_BUCSR
 
 	/* Enable branch prediction */
@@ -278,7 +285,7 @@ _GLOBAL(fsl_secondary_thread_init)
 1:
 #endif
 
-_GLOBAL(generic_secondary_thread_init)
+____GLOBAL(generic_secondary_thread_init)
 	mr	r24,r3
 
 	/* turn on 64-bit mode */
@@ -304,7 +311,7 @@ _GLOBAL(generic_secondary_thread_init)
  * this core already exists (setup via some other mechanism such
  * as SCOM before entry).
  */
-_GLOBAL(generic_secondary_smp_init)
+____GLOBAL(generic_secondary_smp_init)
 	FIXUP_ENDIAN
 	mr	r24,r3
 	mr	r25,r4
@@ -558,7 +565,7 @@ __after_prom_start:
 #if defined(CONFIG_PPC_BOOK3E)
 	tovirt(r26,r26)		/* on booke, we already run at PAGE_OFFSET */
 #endif
-	lwz	r7,__run_at_load-_stext(r26)
+	lwz	r7,ABS_ADDR(__run_at_load)(r26)
 #if defined(CONFIG_PPC_BOOK3E)
 	tophys(r26,r26)
 #endif
@@ -601,7 +608,7 @@ __after_prom_start:
 #if defined(CONFIG_PPC_BOOK3E)
 	tovirt(r26,r26)		/* on booke, we already run at PAGE_OFFSET */
 #endif
-	lwz	r7,__run_at_load-_stext(r26)
+	lwz	r7,ABS_ADDR(__run_at_load)(r26)
 	cmplwi	cr0,r7,1
 	bne	3f
 
@@ -611,28 +618,31 @@ __after_prom_start:
 	sub	r5,r5,r11
 #else
 	/* just copy interrupts */
-	LOAD_REG_IMMEDIATE(r5, __end_interrupts - _stext)
+	LOAD_REG_IMMEDIATE(r5, FIXED_SECTION_ABS_ADDR(virt_trampolines, __end_interrupts))
 #endif
 	b	5f
 3:
 #endif
-	lis	r5,(copy_to_here - _stext)@ha
-	addi	r5,r5,(copy_to_here - _stext)@l /* # bytes of memory to copy */
+	/* # bytes of memory to copy */
+	lis	r5,ABS_ADDR(copy_to_here)@ha
+	addi	r5,r5,ABS_ADDR(copy_to_here)@l
 
 	bl	copy_and_flush		/* copy the first n bytes	 */
 					/* this includes the code being	 */
 					/* executed here.		 */
-	addis	r8,r3,(4f - _stext)@ha	/* Jump to the copy of this code */
-	addi	r12,r8,(4f - _stext)@l	/* that we just made */
+	/* Jump to the copy of this code that we just made */
+	addis	r8,r3, ABS_ADDR(4f)@ha
+	addi	r12,r8, ABS_ADDR(4f)@l
 	mtctr	r12
 	bctr
 
-.balign 8
-p_end:	.llong	_end - _stext
+p_end: .llong _end - copy_to_here
 
 4:	/* Now copy the rest of the kernel up to _end */
-	addis	r5,r26,(p_end - _stext)@ha
-	ld	r5,(p_end - _stext)@l(r5)	/* get _end */
+	addis   r8,r26,ABS_ADDR(p_end)@ha
+	/* load p_end */
+	ld      r8,ABS_ADDR(p_end)@l(r8)
+	add	r5,r5,r8
 5:	bl	copy_and_flush		/* copy the rest */
 
 9:	b	start_here_multiplatform
@@ -645,7 +655,7 @@ p_end:	.llong	_end - _stext
  *
  * Note: this routine *only* clobbers r0, r6 and lr
  */
-_GLOBAL(copy_and_flush)
+____GLOBAL(copy_and_flush)
 	addi	r5,r5,-8
 	addi	r6,r6,-8
 4:	li	r0,8			/* Use the smallest common	*/
@@ -676,15 +686,15 @@ _GLOBAL(copy_and_flush)
 .align 8
 copy_to_here:
 
+	.text
+
 #ifdef CONFIG_SMP
 #ifdef CONFIG_PPC_PMAC
 /*
  * On PowerMac, secondary processors starts from the reset vector, which
  * is temporarily turned into a call to one of the functions below.
  */
-	.section ".text";
 	.align 2 ;
-
 	.globl	__secondary_start_pmac_0
 __secondary_start_pmac_0:
 	/* NB the entries for cpus 0, 1, 2 must each occupy 8 bytes. */
@@ -697,7 +707,7 @@ __secondary_start_pmac_0:
 	li	r24,3
 1:
 	
-_GLOBAL(pmac_secondary_start)
+____GLOBAL(pmac_secondary_start)
 	/* turn on 64-bit mode */
 	bl	enable_64b_mode
 
@@ -758,7 +768,6 @@ _GLOBAL(pmac_secondary_start)
  *   r13       = paca virtual address
  *   SPRG_PACA = paca virtual address
  */
-	.section ".text";
 	.align 2 ;
 
 	.globl	__secondary_start
@@ -818,7 +827,7 @@ start_secondary_prolog:
  * to continue with online operation when woken up
  * from cede in cpu offline.
  */
-_GLOBAL(start_secondary_resume)
+____GLOBAL(start_secondary_resume)
 	ld	r1,PACAKSAVE(r13)	/* Reload kernel stack pointer */
 	li	r3,0
 	std	r3,0(r1)		/* Zero the stack frame pointer	*/
@@ -855,7 +864,7 @@ enable_64b_mode:
  * accessed later with the MMU on. We use tovirt() at the call
  * sites to handle this.
  */
-_GLOBAL(relative_toc)
+____GLOBAL(relative_toc)
 	mflr	r0
 	bcl	20,31,$+4
 0:	mflr	r11
@@ -864,6 +873,7 @@ _GLOBAL(relative_toc)
 	mtlr	r0
 	blr
 
+
 .balign 8
 p_toc:	.llong	__toc_start + 0x8000 - 0b
 
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 552dcbc..6d5d551 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -36,6 +36,7 @@ jiffies = jiffies_64;
 OUTPUT_ARCH(powerpc:common)
 jiffies = jiffies_64 + 4;
 #endif
+
 SECTIONS
 {
 	. = KERNELBASE;
@@ -47,9 +48,25 @@ SECTIONS
 	/* Text and gots */
 	.text : AT(ADDR(.text) - LOAD_OFFSET) {
 		ALIGN_FUNCTION();
-		HEAD_TEXT
 		_text = .;
-		*(.text .fixup .ref.text)
+		_stext = .;
+		*(.head.text.first_256B);
+		*(.head.text.real_vectors);
+		*(.head.text.real_trampolines);
+		*(.head.text.virt_vectors);
+		*(.head.text.virt_trampolines);
+		/*
+		 * If the build dies here, it's normally due to the linker
+		 * placing branch stubs inside a fixed section but before the
+		 * fixed section start label. Must use branches that can
+		 * directly reach their target.
+		 */
+		. = 0x7000;
+#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
+		*(.head.text.fwnmi_page);
+		. = 0x8000;
+#endif
+		*(.text .fixup .ref.text);
 		SCHED_TEXT
 		LOCK_TEXT
 		KPROBES_TEXT
@@ -276,3 +293,4 @@ SECTIONS
 	/* Sections to be discarded. */
 	DISCARDS
 }
+
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 10/14] powerpc/pseries: move related exception code together
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (8 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 09/14] powerpc/64: use gas sections for arranging exception vectors Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 11/14] powerpc/pseries: use single macro for both parts of OOL exception Nicholas Piggin
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

This is mostly juggling. Generated code should remain the same,
except for placement and offsets. We can do this now that code
gets moved into the correct section according to its type.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 2005 ++++++++++++++++------------------
 1 file changed, 967 insertions(+), 1038 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 9093521..7893af7 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -64,6 +64,13 @@ OPEN_FIXED_SECTION(virt_vectors,        0x4000, 0x6000)
 OPEN_FIXED_SECTION(virt_trampolines,    0x6000, 0x7000)
 #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
 OPEN_FIXED_SECTION(fwnmi_page,          0x7000, 0x8000)
+/*
+ * Data area reserved for FWNMI option.
+ * This address (0x7000) is fixed by the RPA.
+ * pseries and powernv need to keep the whole page from
+ * 0x7000 to 0x8000 free for use by the firmware
+ */
+FIXED_SECTION_ENTRY_ZERO(fwnmi_page, 0x7000, 0x8000)
 OPEN_TEXT_SECTION(0x8000)
 #else
 OPEN_TEXT_SECTION(0x7000)
@@ -72,60 +79,6 @@ OPEN_TEXT_SECTION(0x7000)
 USE_FIXED_SECTION(real_vectors)
 
 
-	/* Syscall routine is used twice, in reloc-off and reloc-on paths */
-#define SYSCALL_PSERIES_1 					\
-BEGIN_FTR_SECTION						\
-	cmpdi	r0,0x1ebe ; 					\
-	beq-	1f ;						\
-END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
-	mr	r9,r13 ;					\
-	GET_PACA(r13) ;						\
-	mfspr	r11,SPRN_SRR0 ;					\
-0:
-
-#define SYSCALL_PSERIES_2_RFID 					\
-	mfspr	r12,SPRN_SRR1 ;					\
-	ld	r10,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER_4G(r10, system_call_common) ; 		\
-	mtspr	SPRN_SRR0,r10 ; 				\
-	ld	r10,PACAKMSR(r13) ;				\
-	mtspr	SPRN_SRR1,r10 ; 				\
-	rfid ; 							\
-	b	. ;	/* prevent speculative execution */
-
-#define SYSCALL_PSERIES_3					\
-	/* Fast LE/BE switch system call */			\
-1:	mfspr	r12,SPRN_SRR1 ;					\
-	xori	r12,r12,MSR_LE ;				\
-	mtspr	SPRN_SRR1,r12 ;					\
-	rfid ;		/* return to userspace */		\
-	b	. ;	/* prevent speculative execution */
-
-#if defined(CONFIG_RELOCATABLE)
-	/*
-	 * We can't branch directly so we do it via the CTR which
-	 * is volatile across system calls.
-	 */
-#define SYSCALL_PSERIES_2_DIRECT				\
-	mflr	r10 ;						\
-	ld	r12,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER_4G(r12, system_call_common) ;		\
-	mtctr	r12 ;						\
-	mfspr	r12,SPRN_SRR1 ;					\
-	/* Re-use of r13... No spare regs to do this */	\
-	li	r13,MSR_RI ;					\
-	mtmsrd 	r13,1 ;						\
-	GET_PACA(r13) ;	/* get r13 back */			\
-	bctr ;
-#else
-	/* We can branch directly */
-#define SYSCALL_PSERIES_2_DIRECT				\
-	mfspr	r12,SPRN_SRR1 ;					\
-	li	r10,MSR_RI ;					\
-	mtmsrd 	r10,1 ;			/* Set RI (EE=0) */	\
-	b	system_call_common ;
-#endif
-
 /*
  * This is the start of the interrupt handlers for pSeries
  * This code runs with relocation off.
@@ -199,6 +152,20 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
 				 NOTEST, 0x100)
 VECTOR_HANDLER_REAL_END(system_reset, 0x100, 0x200)
+VECTOR_HANDLER_VIRT_NONE(0x4100, 0x4200)
+COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
+
+#ifdef CONFIG_PPC_PSERIES
+/*
+ * Vectors for the FWNMI option.  Share common code.
+ */
+TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
+	SET_SCRATCH0(r13)		/* save r13 */
+	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
+				 NOTEST, 0x100)
+TRAMP_HANDLER_END(system_reset_fwnmi)
+#endif /* CONFIG_PPC_PSERIES */
+
 
 VECTOR_HANDLER_REAL_BEGIN(machine_check, 0x200, 0x300)
 	/* This is moved out of line as it can be patched by FW, but
@@ -236,588 +203,320 @@ FTR_SECTION_ELSE
 	b	machine_check_pSeries_0
 ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 VECTOR_HANDLER_REAL_END(machine_check, 0x200, 0x300)
+VECTOR_HANDLER_VIRT_NONE(0x4200, 0x4300)
 
-VECTOR_HANDLER_REAL(data_access, 0x300, 0x380)
-
-VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_DAR
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
+TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
+BEGIN_FTR_SECTION
+	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
 	/*
-	 * We can't just use a direct branch to slb_miss_realmode
-	 * because the distance from here to there depends on where
-	 * the kernel ends up being put.
+	 * Register contents:
+	 * R13		= PACA
+	 * R9		= CR
+	 * Original R9 to R13 is saved on PACA_EXMC
+	 *
+	 * Switch to mc_emergency stack and handle re-entrancy (we limit
+	 * the nested MCE upto level 4 to avoid stack overflow).
+	 * Save MCE registers srr1, srr0, dar and dsisr and then set ME=1
+	 *
+	 * We use paca->in_mce to check whether this is the first entry or
+	 * nested machine check. We increment paca->in_mce to track nested
+	 * machine checks.
+	 *
+	 * If this is the first entry then set stack pointer to
+	 * paca->mc_emergency_sp, otherwise r1 is already pointing to
+	 * stack frame on mc_emergency stack.
+	 *
+	 * NOTE: We are here with MSR_ME=0 (off), which means we risk a
+	 * checkstop if we get another machine check exception before we do
+	 * rfid with MSR_ME=1.
 	 */
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
-
-VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
-
-VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
-
-VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
-	.globl hardware_interrupt_hv;
-hardware_interrupt_hv:
-	BEGIN_FTR_SECTION
-		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
-					    EXC_HV, SOFTEN_TEST_HV)
-	FTR_SECTION_ELSE
-		_MASKABLE_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common),
-					    EXC_STD, SOFTEN_TEST_PR)
-	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-VECTOR_HANDLER_REAL_END(hardware_interrupt, 0x500, 0x600)
-
-/*
- * Relon code jumps to these KVM handlers too so can't put them
- * in the feature sections.
- */
-TRAMP_KVM_HV(PACA_EXGEN, 0x500)
-TRAMP_KVM(PACA_EXGEN, 0x500)
-
-VECTOR_HANDLER_REAL(alignment, 0x600, 0x700)
-
-TRAMP_KVM(PACA_EXGEN, 0x600)
-
-VECTOR_HANDLER_REAL(program_check, 0x700, 0x800)
-
-TRAMP_KVM(PACA_EXGEN, 0x700)
-
-VECTOR_HANDLER_REAL(fp_unavailable, 0x800, 0x900)
-
-TRAMP_KVM(PACA_EXGEN, 0x800)
-
-__VECTOR_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)
-
-VECTOR_HANDLER_REAL_HV(hdecrementer, 0x980, 0xa00)
+	mr	r11,r1			/* Save r1 */
+	lhz	r10,PACA_IN_MCE(r13)
+	cmpwi	r10,0			/* Are we in nested machine check */
+	bne	0f			/* Yes, we are. */
+	/* First machine check entry */
+	ld	r1,PACAMCEMERGSP(r13)	/* Use MC emergency stack */
+0:	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame */
+	addi	r10,r10,1		/* increment paca->in_mce */
+	sth	r10,PACA_IN_MCE(r13)
+	/* Limit nested MCE to level 4 to avoid stack overflow */
+	cmpwi	r10,4
+	bgt	2f			/* Check if we hit limit of 4 */
+	std	r11,GPR1(r1)		/* Save r1 on the stack. */
+	std	r11,0(r1)		/* make stack chain pointer */
+	mfspr	r11,SPRN_SRR0		/* Save SRR0 */
+	std	r11,_NIP(r1)
+	mfspr	r11,SPRN_SRR1		/* Save SRR1 */
+	std	r11,_MSR(r1)
+	mfspr	r11,SPRN_DAR		/* Save DAR */
+	std	r11,_DAR(r1)
+	mfspr	r11,SPRN_DSISR		/* Save DSISR */
+	std	r11,_DSISR(r1)
+	std	r9,_CCR(r1)		/* Save CR in stackframe */
+	/* Save r9 through r13 from EXMC save area to stack frame. */
+	EXCEPTION_PROLOG_COMMON_2(PACA_EXMC)
+	mfmsr	r11			/* get MSR value */
+	ori	r11,r11,MSR_ME		/* turn on ME bit */
+	ori	r11,r11,MSR_RI		/* turn on RI bit */
+	ld	r12,PACAKBASE(r13)	/* get high part of &label */
+	LOAD_HANDLER_4G(r12, machine_check_handle_early)
+1:	mtspr	SPRN_SRR0,r12
+	mtspr	SPRN_SRR1,r11
+	rfid
+	b	.	/* prevent speculative execution */
+2:
+	/* Stack overflow. Stay on emergency stack and panic.
+	 * Keep the ME bit off while panic-ing, so that if we hit
+	 * another machine check we checkstop.
+	 */
+	addi	r1,r1,INT_FRAME_SIZE	/* go back to previous stack frame */
+	ld	r11,PACAKMSR(r13)
+	ld	r12,PACAKBASE(r13)
+	LOAD_HANDLER_4G(r12, unrecover_mce)
+	li	r10,MSR_ME
+	andc	r11,r11,r10		/* Turn off MSR_ME */
+	b	1b
+	b	.	/* prevent speculative execution */
+END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+TRAMP_HANDLER_END(machine_check_powernv_early)
 
-VECTOR_HANDLER_REAL_MASKABLE(doorbell_super, 0xa00, 0xb00)
+TRAMP_HANDLER_BEGIN(machine_check_pSeries)
+	.globl machine_check_fwnmi
+machine_check_fwnmi:
+	SET_SCRATCH0(r13)		/* save r13 */
+	EXCEPTION_PROLOG_0(PACA_EXMC)
+machine_check_pSeries_0:
+	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
+	EXCEPTION_PROLOG_PSERIES_1(machine_check_common, EXC_STD)
+TRAMP_HANDLER_END(machine_check_pSeries)
 
-TRAMP_KVM(PACA_EXGEN, 0xa00)
+TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
 
-VECTOR_HANDLER_REAL(trap_0b, 0xb00, 0xc00)
+COMMON_HANDLER_BEGIN(machine_check_common)
+	/*
+	 * Machine check is different because we use a different
+	 * save area: PACA_EXMC instead of PACA_EXGEN.
+	 */
+	mfspr	r10,SPRN_DAR
+	std	r10,PACA_EXGEN+EX_DAR(r13)
+	mfspr	r10,SPRN_DSISR
+	stw	r10,PACA_EXGEN+EX_DSISR(r13)
+	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
+	FINISH_NAP
+	RECONCILE_IRQ_STATE(r10, r11)
+	ld	r3,PACA_EXGEN+EX_DAR(r13)
+	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+	bl	save_nvgprs
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	machine_check_exception
+	b	ret_from_except
+COMMON_HANDLER_END(machine_check_common)
 
-TRAMP_KVM(PACA_EXGEN, 0xb00)
-
-VECTOR_HANDLER_REAL_BEGIN(system_call, 0xc00, 0xd00)
-	 /*
-	  * If CONFIG_KVM_BOOK3S_64_HANDLER is set, save the PPR (on systems
-	  * that support it) before changing to HMT_MEDIUM. That allows the KVM
-	  * code to save that value into the guest state (it is the guest's PPR
-	  * value). Otherwise just change to HMT_MEDIUM as userspace has
-	  * already saved the PPR.
-	  */
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-	SET_SCRATCH0(r13)
-	GET_PACA(r13)
-	std	r9,PACA_EXGEN+EX_R9(r13)
-	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
-	HMT_MEDIUM;
-	std	r10,PACA_EXGEN+EX_R10(r13)
-	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
-	mfcr	r9
-	KVMTEST_PR(0xc00)
-	GET_SCRATCH0(r13)
-#else
-	HMT_MEDIUM;
-#endif
-	SYSCALL_PSERIES_1
-	SYSCALL_PSERIES_2_RFID
-	SYSCALL_PSERIES_3
-VECTOR_HANDLER_REAL_END(system_call, 0xc00, 0xd00)
-
-TRAMP_KVM(PACA_EXGEN, 0xc00)
-
-VECTOR_HANDLER_REAL(single_step, 0xd00, 0xe00)
-
-TRAMP_KVM(PACA_EXGEN, 0xd00)
-
-
-	/* At 0xe??? we have a bunch of hypervisor exceptions, we branch
-	 * out of line to handle them
-	 */
-__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
-
-__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
-
-__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
-
-__VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
-
-__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
-
-VECTOR_HANDLER_REAL_NONE(0xea0, 0xf00)
-
-__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
-
-__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
-
-__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
-
-__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
-
-__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
-
-VECTOR_HANDLER_REAL_NONE(0xfa0, 0x1200)
-
-	
-
-#ifdef CONFIG_CBE_RAS
-VECTOR_HANDLER_REAL_HV(cbe_system_error, 0x1200, 0x1300)
-
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1200)
-
-#else /* CONFIG_CBE_RAS */
-VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
-#endif
-
-VECTOR_HANDLER_REAL(instruction_breakpoint, 0x1300, 0x1400)
-
-TRAMP_KVM_SKIP(PACA_EXGEN, 0x1300)
-
-VECTOR_HANDLER_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x1600)
-	mtspr	SPRN_SPRG_HSCRATCH0,r13
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
-
-#ifdef CONFIG_PPC_DENORMALISATION
-	mfspr	r10,SPRN_HSRR1
-	mfspr	r11,SPRN_HSRR0		/* save HSRR0 */
-	andis.	r10,r10,(HSRR1_DENORM)@h /* denorm? */
-	addi	r11,r11,-4		/* HSRR0 is next instruction */
-	bne+	denorm_assist
-#endif
-
-	KVMTEST_PR(0x1500)
-	EXCEPTION_PROLOG_PSERIES_1(denorm_common, EXC_HV)
-VECTOR_HANDLER_REAL_END(denorm_exception_hv, 0x1500, 0x1600)
-
-TRAMP_KVM_SKIP(PACA_EXGEN, 0x1500)
-
-#ifdef CONFIG_CBE_RAS
-VECTOR_HANDLER_REAL_HV(cbe_maintenance, 0x1600, 0x1700)
-
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1600)
-
-#else /* CONFIG_CBE_RAS */
-VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
-#endif
-
-VECTOR_HANDLER_REAL(altivec_assist, 0x1700, 0x1800)
-
-TRAMP_KVM(PACA_EXGEN, 0x1700)
-
-#ifdef CONFIG_CBE_RAS
-VECTOR_HANDLER_REAL_HV(cbe_thermal, 0x1800, 0x1900)
-
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
-
-#else /* CONFIG_CBE_RAS */
-VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
-#endif
-
-
-/*** Out of line interrupts support ***/
-
-	/* moved from 0x200 */
-TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
-BEGIN_FTR_SECTION
-	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
-	/*
-	 * Register contents:
-	 * R13		= PACA
-	 * R9		= CR
-	 * Original R9 to R13 is saved on PACA_EXMC
-	 *
-	 * Switch to mc_emergency stack and handle re-entrancy (we limit
-	 * the nested MCE upto level 4 to avoid stack overflow).
-	 * Save MCE registers srr1, srr0, dar and dsisr and then set ME=1
-	 *
-	 * We use paca->in_mce to check whether this is the first entry or
-	 * nested machine check. We increment paca->in_mce to track nested
-	 * machine checks.
-	 *
-	 * If this is the first entry then set stack pointer to
-	 * paca->mc_emergency_sp, otherwise r1 is already pointing to
-	 * stack frame on mc_emergency stack.
-	 *
-	 * NOTE: We are here with MSR_ME=0 (off), which means we risk a
-	 * checkstop if we get another machine check exception before we do
-	 * rfid with MSR_ME=1.
-	 */
-	mr	r11,r1			/* Save r1 */
-	lhz	r10,PACA_IN_MCE(r13)
-	cmpwi	r10,0			/* Are we in nested machine check */
-	bne	0f			/* Yes, we are. */
-	/* First machine check entry */
-	ld	r1,PACAMCEMERGSP(r13)	/* Use MC emergency stack */
-0:	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame */
-	addi	r10,r10,1		/* increment paca->in_mce */
-	sth	r10,PACA_IN_MCE(r13)
-	/* Limit nested MCE to level 4 to avoid stack overflow */
-	cmpwi	r10,4
-	bgt	2f			/* Check if we hit limit of 4 */
-	std	r11,GPR1(r1)		/* Save r1 on the stack. */
-	std	r11,0(r1)		/* make stack chain pointer */
-	mfspr	r11,SPRN_SRR0		/* Save SRR0 */
-	std	r11,_NIP(r1)
-	mfspr	r11,SPRN_SRR1		/* Save SRR1 */
-	std	r11,_MSR(r1)
-	mfspr	r11,SPRN_DAR		/* Save DAR */
-	std	r11,_DAR(r1)
-	mfspr	r11,SPRN_DSISR		/* Save DSISR */
-	std	r11,_DSISR(r1)
-	std	r9,_CCR(r1)		/* Save CR in stackframe */
-	/* Save r9 through r13 from EXMC save area to stack frame. */
-	EXCEPTION_PROLOG_COMMON_2(PACA_EXMC)
-	mfmsr	r11			/* get MSR value */
-	ori	r11,r11,MSR_ME		/* turn on ME bit */
-	ori	r11,r11,MSR_RI		/* turn on RI bit */
-	ld	r12,PACAKBASE(r13)	/* get high part of &label */
-	LOAD_HANDLER_4G(r12, machine_check_handle_early)
-1:	mtspr	SPRN_SRR0,r12
-	mtspr	SPRN_SRR1,r11
-	rfid
-	b	.	/* prevent speculative execution */
-2:
-	/* Stack overflow. Stay on emergency stack and panic.
-	 * Keep the ME bit off while panic-ing, so that if we hit
-	 * another machine check we checkstop.
-	 */
-	addi	r1,r1,INT_FRAME_SIZE	/* go back to previous stack frame */
-	ld	r11,PACAKMSR(r13)
-	ld	r12,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r12, unrecover_mce)
-	li	r10,MSR_ME
-	andc	r11,r11,r10		/* Turn off MSR_ME */
-	b	1b
-	b	.	/* prevent speculative execution */
-END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
-TRAMP_HANDLER_END(machine_check_powernv_early)
-
-TRAMP_HANDLER_BEGIN(machine_check_pSeries)
-	.globl machine_check_fwnmi
-machine_check_fwnmi:
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXMC)
-machine_check_pSeries_0:
-	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
-	EXCEPTION_PROLOG_PSERIES_1(machine_check_common, EXC_STD)
-TRAMP_HANDLER_END(machine_check_pSeries)
-
-TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
-TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
-TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
-TRAMP_KVM(PACA_EXGEN, 0x400)
-TRAMP_KVM(PACA_EXSLB, 0x480)
-TRAMP_KVM(PACA_EXGEN, 0x900)
-TRAMP_KVM_HV(PACA_EXGEN, 0x980)
-
-#ifdef CONFIG_PPC_DENORMALISATION
-COMMON_HANDLER_BEGIN(denorm_assist)
-BEGIN_FTR_SECTION
-/*
- * To denormalise we need to move a copy of the register to itself.
- * For POWER6 do that here for all FP regs.
- */
-	mfmsr	r10
-	ori	r10,r10,(MSR_FP|MSR_FE0|MSR_FE1)
-	xori	r10,r10,(MSR_FE0|MSR_FE1)
-	mtmsrd	r10
-	sync
-
-#define FMR2(n)  fmr (n), (n) ; fmr n+1, n+1
-#define FMR4(n)  FMR2(n) ; FMR2(n+2)
-#define FMR8(n)  FMR4(n) ; FMR4(n+4)
-#define FMR16(n) FMR8(n) ; FMR8(n+8)
-#define FMR32(n) FMR16(n) ; FMR16(n+16)
-	FMR32(0)
-
-FTR_SECTION_ELSE
-/*
- * To denormalise we need to move a copy of the register to itself.
- * For POWER7 do that here for the first 32 VSX registers only.
- */
-	mfmsr	r10
-	oris	r10,r10,MSR_VSX@h
-	mtmsrd	r10
-	sync
-
-#define XVCPSGNDP2(n) XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1)
-#define XVCPSGNDP4(n) XVCPSGNDP2(n) ; XVCPSGNDP2(n+2)
-#define XVCPSGNDP8(n) XVCPSGNDP4(n) ; XVCPSGNDP4(n+4)
-#define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8)
-#define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16)
-	XVCPSGNDP32(0)
-
-ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206)
-
-BEGIN_FTR_SECTION
-	b	denorm_done
-END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
-/*
- * To denormalise we need to move a copy of the register to itself.
- * For POWER8 we need to do that for all 64 VSX registers
- */
-	XVCPSGNDP32(32)
-denorm_done:
-	mtspr	SPRN_HSRR0,r11
-	mtcrf	0x80,r9
-	ld	r9,PACA_EXGEN+EX_R9(r13)
-	RESTORE_PPR_PACA(PACA_EXGEN, r10)
-BEGIN_FTR_SECTION
-	ld	r10,PACA_EXGEN+EX_CFAR(r13)
-	mtspr	SPRN_CFAR,r10
-END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
-	ld	r10,PACA_EXGEN+EX_R10(r13)
-	ld	r11,PACA_EXGEN+EX_R11(r13)
-	ld	r12,PACA_EXGEN+EX_R12(r13)
-	ld	r13,PACA_EXGEN+EX_R13(r13)
-	HRFID
-	b	.
-#endif
-COMMON_HANDLER_END(denorm_assist)
-
-	/* moved from 0x900 */
-__TRAMP_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900)
-
-	/* moved from 0xe00 */
-__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
-
-__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
-
-__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
-
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
-
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
-
-	/* moved from 0xf00 */
-__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
-TRAMP_KVM(PACA_EXGEN, 0xf00)
-
-__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
-TRAMP_KVM(PACA_EXGEN, 0xf20)
-
-__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
-TRAMP_KVM(PACA_EXGEN, 0xf40)
-
-__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
-TRAMP_KVM(PACA_EXGEN, 0xf60)
-
-__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
-TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
-
-/*
- * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
- * - If it was a decrementer interrupt, we bump the dec to max and return.
- * - If it was a doorbell we return immediately since doorbells are edge
- *   triggered and won't automatically refire.
- * - If it was a HMI we return immediately since we handled it in realmode
- *   and it won't refire.
- * - else we hard disable and return.
- * This is called with r10 containing the value to OR to the paca field.
- */
-#define MASKED_INTERRUPT(_H)				\
-masked_##_H##interrupt:					\
-	std	r11,PACA_EXGEN+EX_R11(r13);		\
-	lbz	r11,PACAIRQHAPPENED(r13);		\
-	or	r11,r11,r10;				\
-	stb	r11,PACAIRQHAPPENED(r13);		\
-	cmpwi	r10,PACA_IRQ_DEC;			\
-	bne	1f;					\
-	lis	r10,0x7fff;				\
-	ori	r10,r10,0xffff;				\
-	mtspr	SPRN_DEC,r10;				\
-	b	2f;					\
-1:	cmpwi	r10,PACA_IRQ_DBELL;			\
-	beq	2f;					\
-	cmpwi	r10,PACA_IRQ_HMI;			\
-	beq	2f;					\
-	mfspr	r10,SPRN_##_H##SRR1;			\
-	rldicl	r10,r10,48,1; /* clear MSR_EE */	\
-	rotldi	r10,r10,16;				\
-	mtspr	SPRN_##_H##SRR1,r10;			\
-2:	mtcrf	0x80,r9;				\
-	ld	r9,PACA_EXGEN+EX_R9(r13);		\
-	ld	r10,PACA_EXGEN+EX_R10(r13);		\
-	ld	r11,PACA_EXGEN+EX_R11(r13);		\
-	GET_SCRATCH0(r13);				\
-	##_H##rfid;					\
-	b	.
-
-USE_FIXED_SECTION(real_trampolines)
-	MASKED_INTERRUPT()
-	MASKED_INTERRUPT(H)
-UNUSE_FIXED_SECTION(real_trampolines)
-
-/*
- * Called from arch_local_irq_enable when an interrupt needs
- * to be resent. r3 contains 0x500, 0x900, 0xa00 or 0xe80 to indicate
- * which kind of interrupt. MSR:EE is already off. We generate a
- * stackframe like if a real interrupt had happened.
- *
- * Note: While MSR:EE is off, we need to make sure that _MSR
- * in the generated frame has EE set to 1 or the exception
- * handler will not properly re-enable them.
- */
-COMMON_HANDLER_BEGIN(__replay_interrupt)
-	/* We are going to jump to the exception common code which
-	 * will retrieve various register values from the PACA which
-	 * we don't give a damn about, so we don't bother storing them.
-	 */
-	mfmsr	r12
-	mflr	r11
-	mfcr	r9
-	ori	r12,r12,MSR_EE
-	cmpwi	r3,0x900
-	beq	decrementer_common
-	cmpwi	r3,0x500
-	beq	hardware_interrupt_common
-BEGIN_FTR_SECTION
-	cmpwi	r3,0xe80
-	beq	h_doorbell_common
-FTR_SECTION_ELSE
-	cmpwi	r3,0xa00
-	beq	doorbell_super_common
-ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
-	blr
-COMMON_HANDLER_END(__replay_interrupt)
-
-#ifdef CONFIG_PPC_PSERIES
-/*
- * Vectors for the FWNMI option.  Share common code.
- */
-TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
-				 NOTEST, 0x100)
-TRAMP_HANDLER_END(system_reset_fwnmi)
+#define MACHINE_CHECK_HANDLER_WINDUP			\
+	/* Clear MSR_RI before setting SRR0 and SRR1. */\
+	li	r0,MSR_RI;				\
+	mfmsr	r9;		/* get MSR value */	\
+	andc	r9,r9,r0;				\
+	mtmsrd	r9,1;		/* Clear MSR_RI */	\
+	/* Move original SRR0 and SRR1 into the respective regs */	\
+	ld	r9,_MSR(r1);				\
+	mtspr	SPRN_SRR1,r9;				\
+	ld	r3,_NIP(r1);				\
+	mtspr	SPRN_SRR0,r3;				\
+	ld	r9,_CTR(r1);				\
+	mtctr	r9;					\
+	ld	r9,_XER(r1);				\
+	mtxer	r9;					\
+	ld	r9,_LINK(r1);				\
+	mtlr	r9;					\
+	REST_GPR(0, r1);				\
+	REST_8GPRS(2, r1);				\
+	REST_GPR(10, r1);				\
+	ld	r11,_CCR(r1);				\
+	mtcr	r11;					\
+	/* Decrement paca->in_mce. */			\
+	lhz	r12,PACA_IN_MCE(r13);			\
+	subi	r12,r12,1;				\
+	sth	r12,PACA_IN_MCE(r13);			\
+	REST_GPR(11, r1);				\
+	REST_2GPRS(12, r1);				\
+	/* restore original r1. */			\
+	ld	r1,GPR1(r1)
 
-#endif /* CONFIG_PPC_PSERIES */
+	/*
+	 * Handle machine check early in real mode. We come here with
+	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
+	 */
+COMMON_HANDLER_BEGIN(machine_check_handle_early)
+	std	r0,GPR0(r1)	/* Save r0 */
+	EXCEPTION_PROLOG_COMMON_3(0x200)
+	bl	save_nvgprs
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	machine_check_early
+	std	r3,RESULT(r1)	/* Save result */
+	ld	r12,_MSR(r1)
+#ifdef	CONFIG_PPC_P7_NAP
+	/*
+	 * Check if thread was in power saving mode. We come here when any
+	 * of the following is true:
+	 * a. thread wasn't in power saving mode
+	 * b. thread was in power saving mode with no state loss or
+	 *    supervisor state loss
+	 *
+	 * Go back to nap again if (b) is true.
+	 */
+	rlwinm.	r11,r12,47-31,30,31	/* Was it in power saving mode? */
+	beq	4f			/* No, it wasn't */
+	/* Thread was in power saving mode. Go back to nap again. */
+	cmpwi	r11,2
+	bne	3f
+	/* Supervisor state loss */
+	li	r0,1
+	stb	r0,PACA_NAPSTATELOST(r13)
+3:	bl	machine_check_queue_event
+	MACHINE_CHECK_HANDLER_WINDUP
+	GET_PACA(r13)
+	ld	r1,PACAR1(r13)
+	li	r3,PNV_THREAD_NAP
+	b	power7_enter_nap_mode
+4:
+#endif
+	/*
+	 * Check if we are coming from hypervisor userspace. If yes then we
+	 * continue in host kernel in V mode to deliver the MC event.
+	 */
+	rldicl.	r11,r12,4,63		/* See if MC hit while in HV mode. */
+	beq	5f
+	andi.	r11,r12,MSR_PR		/* See if coming from user. */
+	bne	9f			/* continue in V mode if we are. */
 
+5:
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-TRAMP_HANDLER_BEGIN(kvmppc_skip_interrupt)
 	/*
-	 * Here all GPRs are unchanged from when the interrupt happened
-	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 * We are coming from kernel context. Check if we are coming from
+	 * guest. if yes, then we can continue. We will fall through
+	 * do_kvm_200->kvmppc_interrupt to deliver the MC event to guest.
 	 */
-	mfspr	r13, SPRN_SRR0
-	addi	r13, r13, 4
-	mtspr	SPRN_SRR0, r13
-	GET_SCRATCH0(r13)
+	lbz	r11,HSTATE_IN_GUEST(r13)
+	cmpwi	r11,0			/* Check if coming from guest */
+	bne	9f			/* continue if we are. */
+#endif
+	/*
+	 * At this point we are not sure about what context we come from.
+	 * Queue up the MCE event and return from the interrupt.
+	 * But before that, check if this is an un-recoverable exception.
+	 * If yes, then stay on emergency stack and panic.
+	 */
+	andi.	r11,r12,MSR_RI
+	bne	2f
+1:	mfspr	r11,SPRN_SRR0
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER_4G(r10,unrecover_mce)
+	mtspr	SPRN_SRR0,r10
+	ld	r10,PACAKMSR(r13)
+	/*
+	 * We are going down. But there are chances that we might get hit by
+	 * another MCE during panic path and we may run into unstable state
+	 * with no way out. Hence, turn ME bit off while going down, so that
+	 * when another MCE is hit during panic path, system will checkstop
+	 * and hypervisor will get restarted cleanly by SP.
+	 */
+	li	r3,MSR_ME
+	andc	r10,r10,r3		/* Turn off MSR_ME */
+	mtspr	SPRN_SRR1,r10
 	rfid
 	b	.
-TRAMP_HANDLER_END(kvmppc_skip_interrupt)
-
-TRAMP_HANDLER_BEGIN(kvmppc_skip_Hinterrupt)
+2:
 	/*
-	 * Here all GPRs are unchanged from when the interrupt happened
-	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 * Check if we have successfully handled/recovered from error, if not
+	 * then stay on emergency stack and panic.
 	 */
-	mfspr	r13, SPRN_HSRR0
-	addi	r13, r13, 4
-	mtspr	SPRN_HSRR0, r13
-	GET_SCRATCH0(r13)
-	hrfid
-	b	.
-TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
-#endif
+	ld	r3,RESULT(r1)	/* Load result */
+	cmpdi	r3,0		/* see if we handled MCE successfully */
 
-/*
- * Ensure that any handlers that get invoked from the exception prologs
- * above are below the first 64KB (0x10000) of the kernel image because
- * the prologs assemble the addresses of these handlers using the
- * LOAD_HANDLER_4G macro, which uses an ori instruction. Care must also
- * be taken because relative branches can only address 32K in each direction.
- */
+	beq	1b		/* if !handled then panic */
+	/*
+	 * Return from MC interrupt.
+	 * Queue up the MCE event so that we can log it later, while
+	 * returning from kernel or opal call.
+	 */
+	bl	machine_check_queue_event
+	MACHINE_CHECK_HANDLER_WINDUP
+	rfid
+9:
+	/* Deliver the machine check to host kernel in V mode. */
+	MACHINE_CHECK_HANDLER_WINDUP
+	b	machine_check_pSeries
+COMMON_HANDLER_END(machine_check_handle_early)
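
The branch-heavy routing after the call to machine_check_early above boils down to a small decision tree. A hedged C model of it follows; the flag and enum names are made up for illustration, and the nap-state-lost bookkeeping is omitted.

#include <stdbool.h>
#include <stdio.h>

enum mce_action {
	MCE_BACK_TO_NAP,	/* thread was napping: queue the event, nap again */
	MCE_DELIVER_VIRT,	/* wind up and re-enter machine_check_pSeries */
	MCE_PANIC,		/* unrecoverable/unhandled: panic with ME off */
	MCE_RETURN,		/* recovered: queue the event and rfid back */
};

struct mce_state {
	bool was_napping;	/* SRR1 wake-state bits were non-zero */
	bool msr_hv;		/* interrupted context had MSR_HV set */
	bool msr_pr;		/* interrupted context was userspace */
	bool in_guest;		/* KVM: HSTATE_IN_GUEST was non-zero */
	bool msr_ri;		/* interrupt is recoverable (MSR_RI set) */
	bool handled;		/* machine_check_early() reported success */
};

static enum mce_action mce_route(const struct mce_state *s)
{
	if (s->was_napping)
		return MCE_BACK_TO_NAP;
	if (s->msr_hv && s->msr_pr)
		return MCE_DELIVER_VIRT;	/* hypervisor userspace: host kernel, V mode */
	if (s->in_guest)
		return MCE_DELIVER_VIRT;	/* let the KVM path deliver it to the guest */
	if (!s->msr_ri || !s->handled)
		return MCE_PANIC;		/* stay on the emergency stack */
	return MCE_RETURN;
}

int main(void)
{
	struct mce_state s = { .msr_ri = true, .handled = true };

	printf("action %d\n", (int)mce_route(&s));
	return 0;
}
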
 
-/*** Common interrupt handlers ***/
+COMMON_HANDLER_BEGIN(unrecover_mce)
+	/* Invoke machine_check_exception to print MCE event and panic. */
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	machine_check_exception
+	/*
+	 * We will not reach here. Even if we did, there is no way out. Call
+	 * unrecoverable_exception and die.
+	 */
+1:	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	unrecoverable_exception
+	b	1b
+COMMON_HANDLER_END(unrecover_mce)
 
-COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
-COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
-COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
-COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-#ifdef CONFIG_PPC_DOORBELL
-COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
-#else
-COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
-#endif
-COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
-COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
-COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
-COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
-COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
-#ifdef CONFIG_PPC_DOORBELL
-COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
-#else
-COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
-#endif
-COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
-COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
-COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
-#ifdef CONFIG_ALTIVEC
-COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
-#else
-COMMON_HANDLER(altivec_assist_common, 0x1700, unknown_exception)
-#endif
-#ifdef CONFIG_CBE_RAS
-COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
-COMMON_HANDLER(cbe_maintenance, 0x1600, cbe_maintenance_exception)
-COMMON_HANDLER(cbe_thermal, 0x1800, cbe_thermal_exception)
-#endif /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL(data_access, 0x300, 0x380)
+VECTOR_HANDLER_VIRT(data_access, 0x4300, 0x4380, 0x300)
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
 
+COMMON_HANDLER_BEGIN(data_access_common)
 	/*
-	 * Relocation-on interrupts: A subset of the interrupts can be delivered
-	 * with IR=1/DR=1, if AIL==2 and MSR.HV won't be changed by delivering
-	 * it.  Addresses are the same as the original interrupt addresses, but
-	 * offset by 0xc000000000004000.
-	 * It's impossible to receive interrupts below 0x300 via this mechanism.
-	 * KVM: None of these traps are from the guest ; anything that escalated
-	 * to HV=1 from HV=0 is delivered via real mode handlers.
+	 * Here r13 points to the paca, r9 contains the saved CR,
+	 * SRR0 and SRR1 are saved in r11 and r12,
+	 * r9 - r13 are saved in paca->exgen.
 	 */
+	mfspr	r10,SPRN_DAR
+	std	r10,PACA_EXGEN+EX_DAR(r13)
+	mfspr	r10,SPRN_DSISR
+	stw	r10,PACA_EXGEN+EX_DSISR(r13)
+	EXCEPTION_PROLOG_COMMON(0x300, PACA_EXGEN)
+	RECONCILE_IRQ_STATE(r10, r11)
+	ld	r12,_MSR(r1)
+	ld	r3,PACA_EXGEN+EX_DAR(r13)
+	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
+	li	r5,0x300
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+BEGIN_MMU_FTR_SECTION
+	b	do_hash_page		/* Try to handle as hpte fault */
+MMU_FTR_SECTION_ELSE
+	b	handle_page_fault
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
+COMMON_HANDLER_END(data_access_common)
 
+
+VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXSLB)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
+	std	r3,PACA_EXSLB+EX_R3(r13)
+	mfspr	r3,SPRN_DAR
+	mfspr	r12,SPRN_SRR1
+#ifndef CONFIG_RELOCATABLE
+	b	slb_miss_realmode
+#else
 	/*
-	 * This uses the standard macro, since the original 0x300 vector
-	 * only has extra guff for STAB-based processors -- which never
-	 * come here.
+	 * We can't just use a direct branch to slb_miss_realmode
+	 * because the distance from here to there depends on where
+	 * the kernel ends up being put.
 	 */
-VECTOR_HANDLER_VIRT_NONE(0x4100, 0x4200)
-VECTOR_HANDLER_VIRT_NONE(0x4200, 0x4300)
-
-VECTOR_HANDLER_VIRT(data_access, 0x4300, 0x4380, 0x300)
+	mfctr	r11
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER_4G(r10, slb_miss_realmode)
+	mtctr	r10
+	bctr
+#endif
+VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
 
 VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
 	SET_SCRATCH0(r13)
@@ -841,9 +540,46 @@ VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
 	bctr
 #endif
 VECTOR_HANDLER_VIRT_END(data_access_slb, 0x4380, 0x4400)
+TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
 
+
+VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
 VECTOR_HANDLER_VIRT(instruction_access, 0x4400, 0x4480, 0x400)
+TRAMP_KVM(PACA_EXGEN, 0x400)
+COMMON_HANDLER_BEGIN(instruction_access_common)
+	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
+	RECONCILE_IRQ_STATE(r10, r11)
+	ld	r12,_MSR(r1)
+	ld	r3,_NIP(r1)
+	andis.	r4,r12,0x5820
+	li	r5,0x400
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+BEGIN_MMU_FTR_SECTION
+	b	do_hash_page		/* Try to handle as hpte fault */
+MMU_FTR_SECTION_ELSE
+	b	handle_page_fault
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
+COMMON_HANDLER_END(instruction_access_common)
 
+
+VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXSLB)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
+	std	r3,PACA_EXSLB+EX_R3(r13)
+	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
+	mfspr	r12,SPRN_SRR1
+#ifndef CONFIG_RELOCATABLE
+	b	slb_miss_realmode
+#else
+	mfctr	r11
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER_4G(r10, slb_miss_realmode)
+	mtctr	r10
+	bctr
+#endif
+VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
 VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
@@ -861,6 +597,91 @@ VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
 	bctr
 #endif
 VECTOR_HANDLER_VIRT_END(instruction_access_slb, 0x4480, 0x4500)
+TRAMP_KVM(PACA_EXSLB, 0x480)
+
+
+TRAMP_HANDLER_BEGIN(slb_miss_realmode)
+	/*
+	 * r13 points to the PACA, r9 contains the saved CR,
+	 * r12 contain the saved SRR1, SRR0 is still ready for return
+	 * r3 has the faulting address
+	 * r9 - r13 are saved in paca->exslb.
+	 * r3 is saved in paca->slb_r3
+	 * We assume we aren't going to take any exceptions during this
+	 * procedure.
+	 */
+	mflr	r10
+#ifdef CONFIG_RELOCATABLE
+	mtctr	r11
+#endif
+
+	stw	r9,PACA_EXSLB+EX_CCR(r13)	/* save CR in exc. frame */
+	std	r10,PACA_EXSLB+EX_LR(r13)	/* save LR */
+
+#ifdef CONFIG_PPC_STD_MMU_64
+BEGIN_MMU_FTR_SECTION
+	bl	slb_allocate_realmode
+END_MMU_FTR_SECTION_IFCLR(MMU_FTR_RADIX)
+#endif
+	/* All done -- return from exception. */
+
+	ld	r10,PACA_EXSLB+EX_LR(r13)
+	ld	r3,PACA_EXSLB+EX_R3(r13)
+	lwz	r9,PACA_EXSLB+EX_CCR(r13)	/* get saved CR */
+
+	mtlr	r10
+BEGIN_MMU_FTR_SECTION
+	b	2f
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
+	andi.	r10,r12,MSR_RI	/* check for unrecoverable exception */
+	beq-	2f
+
+.machine	push
+.machine	"power4"
+	mtcrf	0x80,r9
+	mtcrf	0x01,r9		/* slb_allocate uses cr0 and cr7 */
+.machine	pop
+
+	RESTORE_PPR_PACA(PACA_EXSLB, r9)
+	ld	r9,PACA_EXSLB+EX_R9(r13)
+	ld	r10,PACA_EXSLB+EX_R10(r13)
+	ld	r11,PACA_EXSLB+EX_R11(r13)
+	ld	r12,PACA_EXSLB+EX_R12(r13)
+	ld	r13,PACA_EXSLB+EX_R13(r13)
+	rfid
+	b	.	/* prevent speculative execution */
+
+2:	mfspr	r11,SPRN_SRR0
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER_4G(r10,unrecov_slb)
+	mtspr	SPRN_SRR0,r10
+	ld	r10,PACAKMSR(r13)
+	mtspr	SPRN_SRR1,r10
+	rfid
+	b	.
+TRAMP_HANDLER_END(slb_miss_realmode)
+
+COMMON_HANDLER_BEGIN(unrecov_slb)
+	EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
+	RECONCILE_IRQ_STATE(r10, r11)
+	bl	save_nvgprs
+1:	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	unrecoverable_exception
+	b	1b
+COMMON_HANDLER_END(unrecov_slb)
+
+
+VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
+	.globl hardware_interrupt_hv;
+hardware_interrupt_hv:
+	BEGIN_FTR_SECTION
+		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
+					    EXC_HV, SOFTEN_TEST_HV)
+	FTR_SECTION_ELSE
+		_MASKABLE_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common),
+					    EXC_STD, SOFTEN_TEST_PR)
+	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+VECTOR_HANDLER_REAL_END(hardware_interrupt, 0x500, 0x600)
 
 VECTOR_HANDLER_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x4600)
 	.globl hardware_interrupt_relon_hv;
@@ -872,101 +693,210 @@ hardware_interrupt_relon_hv:
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 VECTOR_HANDLER_VIRT_END(hardware_interrupt, 0x4500, 0x4600)
 
+TRAMP_KVM_HV(PACA_EXGEN, 0x500)
+TRAMP_KVM(PACA_EXGEN, 0x500)
+COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
+
+
+VECTOR_HANDLER_REAL(alignment, 0x600, 0x700)
 VECTOR_HANDLER_VIRT(alignment, 0x4600, 0x4700, 0x600)
-VECTOR_HANDLER_VIRT(program_check, 0x4700, 0x4800, 0x700)
-VECTOR_HANDLER_VIRT(fp_unavailable, 0x4800, 0x4900, 0x800)
-VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
-VECTOR_HANDLER_VIRT_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
-VECTOR_HANDLER_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x4b00, 0xa00)
-VECTOR_HANDLER_VIRT(trap_0b, 0x4b00, 0x4c00, 0xb00)
+TRAMP_KVM(PACA_EXGEN, 0x600)
+COMMON_HANDLER_BEGIN(alignment_common)
+	mfspr	r10,SPRN_DAR
+	std	r10,PACA_EXGEN+EX_DAR(r13)
+	mfspr	r10,SPRN_DSISR
+	stw	r10,PACA_EXGEN+EX_DSISR(r13)
+	EXCEPTION_PROLOG_COMMON(0x600, PACA_EXGEN)
+	ld	r3,PACA_EXGEN+EX_DAR(r13)
+	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	alignment_exception
+	b	ret_from_except
+COMMON_HANDLER_END(alignment_common)
 
-VECTOR_HANDLER_VIRT_BEGIN(system_call, 0x4c00, 0x4d00)
-	HMT_MEDIUM
-	SYSCALL_PSERIES_1
-	SYSCALL_PSERIES_2_DIRECT
-	SYSCALL_PSERIES_3
-VECTOR_HANDLER_VIRT_END(system_call, 0x4c00, 0x4d00)
 
-VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
+VECTOR_HANDLER_REAL(program_check, 0x700, 0x800)
+VECTOR_HANDLER_VIRT(program_check, 0x4700, 0x4800, 0x700)
+TRAMP_KVM(PACA_EXGEN, 0x700)
+COMMON_HANDLER_BEGIN(program_check_common)
+	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	program_check_exception
+	b	ret_from_except
+COMMON_HANDLER_END(program_check_common)
 
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
 
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
+VECTOR_HANDLER_REAL(fp_unavailable, 0x800, 0x900)
+VECTOR_HANDLER_VIRT(fp_unavailable, 0x4800, 0x4900, 0x800)
+TRAMP_KVM(PACA_EXGEN, 0x800)
+COMMON_HANDLER_BEGIN(fp_unavailable_common)
+	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
+	bne	1f			/* if from user, just load it up */
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	kernel_fp_unavailable_exception
+	BUG_OPCODE
+1:
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+BEGIN_FTR_SECTION
+	/* Test if 2 TM state bits are zero.  If non-zero (ie. userspace was in
+	 * transaction), go do TM stuff
+	 */
+	rldicl.	r0, r12, (64-MSR_TS_LG), (64-2)
+	bne-	2f
+END_FTR_SECTION_IFSET(CPU_FTR_TM)
+#endif
+	bl	load_up_fpu
+	b	fast_exception_return
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+2:	/* User process was in a transaction */
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	fp_unavailable_tm
+	b	ret_from_except
+#endif
+COMMON_HANDLER_END(fp_unavailable_common)
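
The rldicl in the TM test above is just a shift-and-mask of the two MSR[TS] bits. A minimal C equivalent, with the MSR_TS_LG value assumed here purely for illustration:

#include <stdint.h>
#include <stdio.h>

#define MSR_TS_LG	33	/* assumed bit position of the low TS bit */

/* rldicl rA,rS,(64-MSR_TS_LG),(64-2) is equivalent to (msr >> MSR_TS_LG) & 0x3 */
static int msr_tm_active(uint64_t msr)
{
	return ((msr >> MSR_TS_LG) & 0x3) != 0;
}

int main(void)
{
	uint64_t msr = (uint64_t)0x2 << MSR_TS_LG;	/* TS = transactional */

	printf("TM active: %d\n", msr_tm_active(msr));
	return 0;
}
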
 
-__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
 
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
+__VECTOR_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900)
+VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
+TRAMP_KVM(PACA_EXGEN, 0x900)
+COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
 
-__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
 
-VECTOR_HANDLER_VIRT_NONE(0x4ea0, 0x4f00)
+VECTOR_HANDLER_REAL_HV(hdecrementer, 0x980, 0xa00)
+VECTOR_HANDLER_VIRT_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
+TRAMP_KVM_HV(PACA_EXGEN, 0x980)
+COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
 
-__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
+VECTOR_HANDLER_REAL_MASKABLE(doorbell_super, 0xa00, 0xb00)
+VECTOR_HANDLER_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x4b00, 0xa00)
+TRAMP_KVM(PACA_EXGEN, 0xa00)
+#ifdef CONFIG_PPC_DOORBELL
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
+#else
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
+#endif
 
-__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
 
-__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
+VECTOR_HANDLER_REAL(trap_0b, 0xb00, 0xc00)
+VECTOR_HANDLER_VIRT(trap_0b, 0x4b00, 0x4c00, 0xb00)
+COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
+TRAMP_KVM(PACA_EXGEN, 0xb00)
 
-__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
 
-VECTOR_HANDLER_VIRT_NONE(0x4fa0, 0x5200)
+/* Syscall routine is used twice, in reloc-off and reloc-on paths */
+#define SYSCALL_PSERIES_1 					\
+BEGIN_FTR_SECTION						\
+	cmpdi	r0,0x1ebe ; 					\
+	beq-	1f ;						\
+END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
+	mr	r9,r13 ;					\
+	GET_PACA(r13) ;						\
+	mfspr	r11,SPRN_SRR0 ;					\
+0:
 
-VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
+#define SYSCALL_PSERIES_2_RFID 					\
+	mfspr	r12,SPRN_SRR1 ;					\
+	ld	r10,PACAKBASE(r13) ; 				\
+	LOAD_HANDLER_4G(r10, system_call_common) ; 		\
+	mtspr	SPRN_SRR0,r10 ; 				\
+	ld	r10,PACAKMSR(r13) ;				\
+	mtspr	SPRN_SRR1,r10 ; 				\
+	rfid ; 							\
+	b	. ;	/* prevent speculative execution */
 
-VECTOR_HANDLER_VIRT(instruction_breakpoint, 0x5300, 0x5400, 0x1300)
+#define SYSCALL_PSERIES_3					\
+	/* Fast LE/BE switch system call */			\
+1:	mfspr	r12,SPRN_SRR1 ;					\
+	xori	r12,r12,MSR_LE ;				\
+	mtspr	SPRN_SRR1,r12 ;					\
+	rfid ;		/* return to userspace */		\
+	b	. ;	/* prevent speculative execution */
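
The 0x1ebe path spelled out by SYSCALL_PSERIES_1 and SYSCALL_PSERIES_3 above can be summarised in a few lines of C: if the syscall number is the magic value, flip MSR_LE in the saved SRR1 and rfid straight back to userspace without ever entering system_call_common. A rough sketch, with the MSR_LE bit position assumed:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MSR_LE	(1ULL << 0)	/* little-endian mode bit (assumed position) */

static bool fast_endian_switch(uint64_t r0_syscall_nr, uint64_t *srr1)
{
	if (r0_syscall_nr != 0x1ebe)
		return false;	/* not the switch call: take the normal syscall path */

	*srr1 ^= MSR_LE;	/* toggle the endianness we return to userspace in */
	return true;
}

int main(void)
{
	uint64_t srr1 = 0;

	if (fast_endian_switch(0x1ebe, &srr1))
		printf("returning with MSR_LE=%llu\n",
		       (unsigned long long)(srr1 & MSR_LE));
	return 0;
}
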
 
-#ifdef CONFIG_PPC_DENORMALISATION
-VECTOR_HANDLER_VIRT_BEGIN(denorm_exception, 0x5500, 0x5600)
-	b	exc_0x1500_denorm_exception_hv
-VECTOR_HANDLER_VIRT_END(denorm_exception, 0x5500, 0x5600)
+#if defined(CONFIG_RELOCATABLE)
+	/*
+	 * We can't branch directly so we do it via the CTR which
+	 * is volatile across system calls.
+	 */
+#define SYSCALL_PSERIES_2_DIRECT				\
+	mflr	r10 ;						\
+	ld	r12,PACAKBASE(r13) ; 				\
+	LOAD_HANDLER_4G(r12, system_call_common) ;		\
+	mtctr	r12 ;						\
+	mfspr	r12,SPRN_SRR1 ;					\
+	/* Re-use of r13... No spare regs to do this */	\
+	li	r13,MSR_RI ;					\
+	mtmsrd 	r13,1 ;						\
+	GET_PACA(r13) ;	/* get r13 back */			\
+	bctr ;
+#else
+	/* We can branch directly */
+#define SYSCALL_PSERIES_2_DIRECT				\
+	mfspr	r12,SPRN_SRR1 ;					\
+	li	r10,MSR_RI ;					\
+	mtmsrd 	r10,1 ;			/* Set RI (EE=0) */	\
+	b	system_call_common ;
+#endif
+
+VECTOR_HANDLER_REAL_BEGIN(system_call, 0xc00, 0xd00)
+	 /*
+	  * If CONFIG_KVM_BOOK3S_64_HANDLER is set, save the PPR (on systems
+	  * that support it) before changing to HMT_MEDIUM. That allows the KVM
+	  * code to save that value into the guest state (it is the guest's PPR
+	  * value). Otherwise just change to HMT_MEDIUM as userspace has
+	  * already saved the PPR.
+	  */
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+	SET_SCRATCH0(r13)
+	GET_PACA(r13)
+	std	r9,PACA_EXGEN+EX_R9(r13)
+	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
+	HMT_MEDIUM;
+	std	r10,PACA_EXGEN+EX_R10(r13)
+	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
+	mfcr	r9
+	KVMTEST_PR(0xc00)
+	GET_SCRATCH0(r13)
 #else
-VECTOR_HANDLER_VIRT_NONE(0x5500, 0x5600)
+	HMT_MEDIUM;
 #endif
+	SYSCALL_PSERIES_1
+	SYSCALL_PSERIES_2_RFID
+	SYSCALL_PSERIES_3
+VECTOR_HANDLER_REAL_END(system_call, 0xc00, 0xd00)
 
-VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
-
-VECTOR_HANDLER_VIRT(altivec_assist, 0x5700, 0x5800, 0x1700)
-
-VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
+VECTOR_HANDLER_VIRT_BEGIN(system_call, 0x4c00, 0x4d00)
+	HMT_MEDIUM
+	SYSCALL_PSERIES_1
+	SYSCALL_PSERIES_2_DIRECT
+	SYSCALL_PSERIES_3
+VECTOR_HANDLER_VIRT_END(system_call, 0x4c00, 0x4d00)
 
+TRAMP_KVM(PACA_EXGEN, 0xc00)
 
-TRAMP_HANDLER_BEGIN(ppc64_runlatch_on_trampoline)
-	b	__ppc64_runlatch_on
-TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
 
-/*
- * Here r13 points to the paca, r9 contains the saved CR,
- * SRR0 and SRR1 are saved in r11 and r12,
- * r9 - r13 are saved in paca->exgen.
- */
-COMMON_HANDLER_BEGIN(data_access_common)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0x300, PACA_EXGEN)
-	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r12,_MSR(r1)
-	ld	r3,PACA_EXGEN+EX_DAR(r13)
-	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
-	li	r5,0x300
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-BEGIN_MMU_FTR_SECTION
-	b	do_hash_page		/* Try to handle as hpte fault */
-MMU_FTR_SECTION_ELSE
-	b	handle_page_fault
-ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
-COMMON_HANDLER_END(data_access_common)
+VECTOR_HANDLER_REAL(single_step, 0xd00, 0xe00)
+VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
+TRAMP_KVM(PACA_EXGEN, 0xd00)
+COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
 
+__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
+__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
 COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
 	std     r10,PACA_EXGEN+EX_DAR(r13)
@@ -979,103 +909,113 @@ COMMON_HANDLER_BEGIN(h_data_storage_common)
 	bl      unknown_exception
 	b       ret_from_except
 COMMON_HANDLER_END(h_data_storage_common)
+COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
 
-COMMON_HANDLER_BEGIN(instruction_access_common)
-	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
-	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r12,_MSR(r1)
-	ld	r3,_NIP(r1)
-	andis.	r4,r12,0x5820
-	li	r5,0x400
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-BEGIN_MMU_FTR_SECTION
-	b	do_hash_page		/* Try to handle as hpte fault */
-MMU_FTR_SECTION_ELSE
-	b	handle_page_fault
-ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
-COMMON_HANDLER_END(instruction_access_common)
 
+__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
+__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
-	/*
-	 * Machine check is different because we use a different
-	 * save area: PACA_EXMC instead of PACA_EXGEN.
-	 */
-COMMON_HANDLER_BEGIN(machine_check_common)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
-	FINISH_NAP
-	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r3,PACA_EXGEN+EX_DAR(r13)
-	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-	bl	save_nvgprs
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	machine_check_exception
-	b	ret_from_except
-COMMON_HANDLER_END(machine_check_common)
 
-COMMON_HANDLER_BEGIN(alignment_common)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0x600, PACA_EXGEN)
-	ld	r3,PACA_EXGEN+EX_DAR(r13)
-	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	alignment_exception
-	b	ret_from_except
-COMMON_HANDLER_END(alignment_common)
+__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
+__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
+__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
+__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
+COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
 
-COMMON_HANDLER_BEGIN(program_check_common)
-	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	program_check_exception
-	b	ret_from_except
-COMMON_HANDLER_END(program_check_common)
 
-COMMON_HANDLER_BEGIN(fp_unavailable_common)
-	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
-	bne	1f			/* if from user, just load it up */
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
+__VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
+COMMON_HANDLER_BEGIN(hmi_exception_early)
+	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0xe60)
+	mr	r10,r1			/* Save r1			*/
+	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
+	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
+	std	r9,_CCR(r1)		/* save CR in stackframe	*/
+	mfspr	r11,SPRN_HSRR0		/* Save HSRR0 */
+	std	r11,_NIP(r1)		/* save HSRR0 in stackframe	*/
+	mfspr	r12,SPRN_HSRR1		/* Save SRR1 */
+	std	r12,_MSR(r1)		/* save SRR1 in stackframe	*/
+	std	r10,0(r1)		/* make stack chain pointer	*/
+	std	r0,GPR0(r1)		/* save r0 in stackframe	*/
+	std	r10,GPR1(r1)		/* save r1 in stackframe	*/
+	EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN)
+	EXCEPTION_PROLOG_COMMON_3(0xe60)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	kernel_fp_unavailable_exception
-	BUG_OPCODE
-1:
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-BEGIN_FTR_SECTION
-	/* Test if 2 TM state bits are zero.  If non-zero (ie. userspace was in
-	 * transaction), go do TM stuff
+	bl	hmi_exception_realmode
+	/* Windup the stack. */
+	/* Move original HSRR0 and HSRR1 into the respective regs */
+	ld	r9,_MSR(r1)
+	mtspr	SPRN_HSRR1,r9
+	ld	r3,_NIP(r1)
+	mtspr	SPRN_HSRR0,r3
+	ld	r9,_CTR(r1)
+	mtctr	r9
+	ld	r9,_XER(r1)
+	mtxer	r9
+	ld	r9,_LINK(r1)
+	mtlr	r9
+	REST_GPR(0, r1)
+	REST_8GPRS(2, r1)
+	REST_GPR(10, r1)
+	ld	r11,_CCR(r1)
+	mtcr	r11
+	REST_GPR(11, r1)
+	REST_2GPRS(12, r1)
+	/* restore original r1. */
+	ld	r1,GPR1(r1)
+
+	/*
+	 * Go to virtual mode and pull the HMI event information from
+	 * firmware.
 	 */
-	rldicl.	r0, r12, (64-MSR_TS_LG), (64-2)
-	bne-	2f
-END_FTR_SECTION_IFSET(CPU_FTR_TM)
-#endif
-	bl	load_up_fpu
-	b	fast_exception_return
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-2:	/* User process was in a transaction */
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	fp_unavailable_tm
-	b	ret_from_except
+	.globl hmi_exception_after_realmode
+hmi_exception_after_realmode:
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	tramp_real_hmi_exception
+COMMON_HANDLER_END(hmi_exception_early)
+COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
+
+
+__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
+__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
+#ifdef CONFIG_PPC_DOORBELL
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
+#else
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
-COMMON_HANDLER_END(fp_unavailable_common)
 
+
+VECTOR_HANDLER_REAL_NONE(0xea0, 0xf00)
+VECTOR_HANDLER_VIRT_NONE(0x4ea0, 0x4f00)
+
+
+__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
+__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
+__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
+__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
+TRAMP_KVM(PACA_EXGEN, 0xf00)
+COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
+
+
+__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
+__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
+__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
+__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
+TRAMP_KVM(PACA_EXGEN, 0xf20)
 COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
 #ifdef CONFIG_ALTIVEC
@@ -1110,6 +1050,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	b	ret_from_except
 COMMON_HANDLER_END(altivec_unavailable_common)
 
+
+
+__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
+__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
+__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
+__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
+TRAMP_KVM(PACA_EXGEN, 0xf40)
 COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
 #ifdef CONFIG_VSX
@@ -1143,309 +1090,291 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	b	ret_from_except
 COMMON_HANDLER_END(vsx_unavailable_common)
 
-COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
-COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
-	/* Equivalents to the above handlers for relocation-on interrupt vectors */
-__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
-__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
-__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
-__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
-__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
+__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
+__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
+__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
 __TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
+TRAMP_KVM(PACA_EXGEN, 0xf60)
+COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
+
+
+__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
+__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
+__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
 __TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
+COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
-USE_FIXED_SECTION(virt_trampolines)
-	/*
-	 * The __end_interrupts marker must be past the out-of-line (OOL)
-	 * handlers, so that they are copied to real address 0x100 when running
-	 * a relocatable kernel. This ensures they can be reached from the short
-	 * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
-	 * directly, without using LOAD_HANDLER_4G().
-	 */
-	.align	7
-	.globl	__end_interrupts
-__end_interrupts:
-UNUSE_FIXED_SECTION(virt_trampolines)
 
-#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
-/*
- * Data area reserved for FWNMI option.
- * This address (0x7000) is fixed by the RPA.
- * pseries and powernv need to keep the whole page from
- * 0x7000 to 0x8000 free for use by the firmware
- */
-FIXED_SECTION_ENTRY_ZERO(fwnmi_page, 0x7000, 0x8000)
-#endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
+VECTOR_HANDLER_REAL_NONE(0xfa0, 0x1200)
+VECTOR_HANDLER_VIRT_NONE(0x4fa0, 0x5200)
 
-COMMON_HANDLER_BEGIN(hmi_exception_early)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0xe60)
-	mr	r10,r1			/* Save r1			*/
-	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
-	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
-	std	r9,_CCR(r1)		/* save CR in stackframe	*/
-	mfspr	r11,SPRN_HSRR0		/* Save HSRR0 */
-	std	r11,_NIP(r1)		/* save HSRR0 in stackframe	*/
-	mfspr	r12,SPRN_HSRR1		/* Save SRR1 */
-	std	r12,_MSR(r1)		/* save SRR1 in stackframe	*/
-	std	r10,0(r1)		/* make stack chain pointer	*/
-	std	r0,GPR0(r1)		/* save r0 in stackframe	*/
-	std	r10,GPR1(r1)		/* save r1 in stackframe	*/
-	EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN)
-	EXCEPTION_PROLOG_COMMON_3(0xe60)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	hmi_exception_realmode
-	/* Windup the stack. */
-	/* Move original HSRR0 and HSRR1 into the respective regs */
-	ld	r9,_MSR(r1)
-	mtspr	SPRN_HSRR1,r9
-	ld	r3,_NIP(r1)
-	mtspr	SPRN_HSRR0,r3
-	ld	r9,_CTR(r1)
-	mtctr	r9
-	ld	r9,_XER(r1)
-	mtxer	r9
-	ld	r9,_LINK(r1)
-	mtlr	r9
-	REST_GPR(0, r1)
-	REST_8GPRS(2, r1)
-	REST_GPR(10, r1)
-	ld	r11,_CCR(r1)
-	mtcr	r11
-	REST_GPR(11, r1)
-	REST_2GPRS(12, r1)
-	/* restore original r1. */
-	ld	r1,GPR1(r1)
+
+#ifdef CONFIG_CBE_RAS
+VECTOR_HANDLER_REAL_HV(cbe_system_error, 0x1200, 0x1300)
+VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1200)
+COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
 
-	/*
-	 * Go to virtual mode and pull the HMI event information from
-	 * firmware.
-	 */
-	.globl hmi_exception_after_realmode
-hmi_exception_after_realmode:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	tramp_real_hmi_exception
-COMMON_HANDLER_END(hmi_exception_early)
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
+#endif
 
 
-#define MACHINE_CHECK_HANDLER_WINDUP			\
-	/* Clear MSR_RI before setting SRR0 and SRR1. */\
-	li	r0,MSR_RI;				\
-	mfmsr	r9;		/* get MSR value */	\
-	andc	r9,r9,r0;				\
-	mtmsrd	r9,1;		/* Clear MSR_RI */	\
-	/* Move original SRR0 and SRR1 into the respective regs */	\
-	ld	r9,_MSR(r1);				\
-	mtspr	SPRN_SRR1,r9;				\
-	ld	r3,_NIP(r1);				\
-	mtspr	SPRN_SRR0,r3;				\
-	ld	r9,_CTR(r1);				\
-	mtctr	r9;					\
-	ld	r9,_XER(r1);				\
-	mtxer	r9;					\
-	ld	r9,_LINK(r1);				\
-	mtlr	r9;					\
-	REST_GPR(0, r1);				\
-	REST_8GPRS(2, r1);				\
-	REST_GPR(10, r1);				\
-	ld	r11,_CCR(r1);				\
-	mtcr	r11;					\
-	/* Decrement paca->in_mce. */			\
-	lhz	r12,PACA_IN_MCE(r13);			\
-	subi	r12,r12,1;				\
-	sth	r12,PACA_IN_MCE(r13);			\
-	REST_GPR(11, r1);				\
-	REST_2GPRS(12, r1);				\
-	/* restore original r1. */			\
-	ld	r1,GPR1(r1)
+VECTOR_HANDLER_REAL(instruction_breakpoint, 0x1300, 0x1400)
+VECTOR_HANDLER_VIRT(instruction_breakpoint, 0x5300, 0x5400, 0x1300)
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1300)
+COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
 
-	/*
-	 * Handle machine check early in real mode. We come here with
-	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
-	 */
-COMMON_HANDLER_BEGIN(machine_check_handle_early)
-	std	r0,GPR0(r1)	/* Save r0 */
-	EXCEPTION_PROLOG_COMMON_3(0x200)
-	bl	save_nvgprs
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	machine_check_early
-	std	r3,RESULT(r1)	/* Save result */
-	ld	r12,_MSR(r1)
-#ifdef	CONFIG_PPC_P7_NAP
-	/*
-	 * Check if thread was in power saving mode. We come here when any
-	 * of the following is true:
-	 * a. thread wasn't in power saving mode
-	 * b. thread was in power saving mode with no state loss or
-	 *    supervisor state loss
-	 *
-	 * Go back to nap again if (b) is true.
-	 */
-	rlwinm.	r11,r12,47-31,30,31	/* Was it in power saving mode? */
-	beq	4f			/* No, it wasn't */
-	/* Thread was in power saving mode. Go back to nap again. */
-	cmpwi	r11,2
-	bne	3f
-	/* Supervisor state loss */
-	li	r0,1
-	stb	r0,PACA_NAPSTATELOST(r13)
-3:	bl	machine_check_queue_event
-	MACHINE_CHECK_HANDLER_WINDUP
-	GET_PACA(r13)
-	ld	r1,PACAR1(r13)
-	li	r3,PNV_THREAD_NAP
-	b	power7_enter_nap_mode
-4:
+
+VECTOR_HANDLER_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x1600)
+	mtspr	SPRN_SPRG_HSCRATCH0,r13
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
+
+#ifdef CONFIG_PPC_DENORMALISATION
+	mfspr	r10,SPRN_HSRR1
+	mfspr	r11,SPRN_HSRR0		/* save HSRR0 */
+	andis.	r10,r10,(HSRR1_DENORM)@h /* denorm? */
+	addi	r11,r11,-4		/* HSRR0 is next instruction */
+	bne+	denorm_assist
 #endif
-	/*
-	 * Check if we are coming from hypervisor userspace. If yes then we
-	 * continue in host kernel in V mode to deliver the MC event.
-	 */
-	rldicl.	r11,r12,4,63		/* See if MC hit while in HV mode. */
-	beq	5f
-	andi.	r11,r12,MSR_PR		/* See if coming from user. */
-	bne	9f			/* continue in V mode if we are. */
 
-5:
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-	/*
-	 * We are coming from kernel context. Check if we are coming from
-	 * guest. if yes, then we can continue. We will fall through
-	 * do_kvm_200->kvmppc_interrupt to deliver the MC event to guest.
-	 */
-	lbz	r11,HSTATE_IN_GUEST(r13)
-	cmpwi	r11,0			/* Check if coming from guest */
-	bne	9f			/* continue if we are. */
+	KVMTEST_PR(0x1500)
+	EXCEPTION_PROLOG_PSERIES_1(denorm_common, EXC_HV)
+VECTOR_HANDLER_REAL_END(denorm_exception_hv, 0x1500, 0x1600)
+
+#ifdef CONFIG_PPC_DENORMALISATION
+VECTOR_HANDLER_VIRT_BEGIN(denorm_exception, 0x5500, 0x5600)
+	b	exc_0x1500_denorm_exception_hv
+VECTOR_HANDLER_VIRT_END(denorm_exception, 0x5500, 0x5600)
+#else
+VECTOR_HANDLER_VIRT_NONE(0x5500, 0x5600)
 #endif
-	/*
-	 * At this point we are not sure about what context we come from.
-	 * Queue up the MCE event and return from the interrupt.
-	 * But before that, check if this is an un-recoverable exception.
-	 * If yes, then stay on emergency stack and panic.
-	 */
-	andi.	r11,r12,MSR_RI
-	bne	2f
-1:	mfspr	r11,SPRN_SRR0
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10,unrecover_mce)
-	mtspr	SPRN_SRR0,r10
-	ld	r10,PACAKMSR(r13)
-	/*
-	 * We are going down. But there are chances that we might get hit by
-	 * another MCE during panic path and we may run into unstable state
-	 * with no way out. Hence, turn ME bit off while going down, so that
-	 * when another MCE is hit during panic path, system will checkstop
-	 * and hypervisor will get restarted cleanly by SP.
-	 */
-	li	r3,MSR_ME
-	andc	r10,r10,r3		/* Turn off MSR_ME */
-	mtspr	SPRN_SRR1,r10
-	rfid
-	b	.
-2:
-	/*
-	 * Check if we have successfully handled/recovered from error, if not
-	 * then stay on emergency stack and panic.
-	 */
-	ld	r3,RESULT(r1)	/* Load result */
-	cmpdi	r3,0		/* see if we handled MCE successfully */
 
-	beq	1b		/* if !handled then panic */
-	/*
-	 * Return from MC interrupt.
-	 * Queue up the MCE event so that we can log it later, while
-	 * returning from kernel or opal call.
-	 */
-	bl	machine_check_queue_event
-	MACHINE_CHECK_HANDLER_WINDUP
-	rfid
-9:
-	/* Deliver the machine check to host kernel in V mode. */
-	MACHINE_CHECK_HANDLER_WINDUP
-	b	machine_check_pSeries
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1500)
+
+#ifdef CONFIG_PPC_DENORMALISATION
+TRAMP_HANDLER_BEGIN(denorm_assist)
+BEGIN_FTR_SECTION
+/*
+ * To denormalise we need to move a copy of the register to itself.
+ * For POWER6 do that here for all FP regs.
+ */
+	mfmsr	r10
+	ori	r10,r10,(MSR_FP|MSR_FE0|MSR_FE1)
+	xori	r10,r10,(MSR_FE0|MSR_FE1)
+	mtmsrd	r10
+	sync
+
+#define FMR2(n)  fmr (n), (n) ; fmr n+1, n+1
+#define FMR4(n)  FMR2(n) ; FMR2(n+2)
+#define FMR8(n)  FMR4(n) ; FMR4(n+4)
+#define FMR16(n) FMR8(n) ; FMR8(n+8)
+#define FMR32(n) FMR16(n) ; FMR16(n+16)
+	FMR32(0)
+
+FTR_SECTION_ELSE
+/*
+ * To denormalise we need to move a copy of the register to itself.
+ * For POWER7 do that here for the first 32 VSX registers only.
+ */
+	mfmsr	r10
+	oris	r10,r10,MSR_VSX@h
+	mtmsrd	r10
+	sync
 
-unrecover_mce:
-	/* Invoke machine_check_exception to print MCE event and panic. */
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	machine_check_exception
-	/*
-	 * We will not reach here. Even if we did, there is no way out. Call
-	 * unrecoverable_exception and die.
-	 */
-1:	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	unrecoverable_exception
-	b	1b
-COMMON_HANDLER_END(machine_check_handle_early)
+#define XVCPSGNDP2(n) XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1)
+#define XVCPSGNDP4(n) XVCPSGNDP2(n) ; XVCPSGNDP2(n+2)
+#define XVCPSGNDP8(n) XVCPSGNDP4(n) ; XVCPSGNDP4(n+4)
+#define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8)
+#define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16)
+	XVCPSGNDP32(0)
+
+ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206)
 
+BEGIN_FTR_SECTION
+	b	denorm_done
+END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 /*
- * r13 points to the PACA, r9 contains the saved CR,
- * r12 contain the saved SRR1, SRR0 is still ready for return
- * r3 has the faulting address
- * r9 - r13 are saved in paca->exslb.
- * r3 is saved in paca->slb_r3
- * We assume we aren't going to take any exceptions during this procedure.
+ * To denormalise we need to move a copy of the register to itself.
+ * For POWER8 we need to do that for all 64 VSX registers
  */
-COMMON_HANDLER_BEGIN(slb_miss_realmode)
-	mflr	r10
-#ifdef CONFIG_RELOCATABLE
-	mtctr	r11
+	XVCPSGNDP32(32)
+denorm_done:
+	mtspr	SPRN_HSRR0,r11
+	mtcrf	0x80,r9
+	ld	r9,PACA_EXGEN+EX_R9(r13)
+	RESTORE_PPR_PACA(PACA_EXGEN, r10)
+BEGIN_FTR_SECTION
+	ld	r10,PACA_EXGEN+EX_CFAR(r13)
+	mtspr	SPRN_CFAR,r10
+END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+	ld	r10,PACA_EXGEN+EX_R10(r13)
+	ld	r11,PACA_EXGEN+EX_R11(r13)
+	ld	r12,PACA_EXGEN+EX_R12(r13)
+	ld	r13,PACA_EXGEN+EX_R13(r13)
+	HRFID
+	b	.
 #endif
+TRAMP_HANDLER_END(denorm_assist)
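
The FMR2..FMR32 and XVCPSGNDP2..XVCPSGNDP32 ladders above use the usual doubling-macro trick to unroll one operation across 32 registers at assembly time. The same pattern in generic C preprocessor form, where STEP() is just a stand-in operation:

#include <stdio.h>

#define STEP(n)		printf("touch reg %d\n", (n));
#define STEP2(n)	STEP(n)    STEP((n)+1)
#define STEP4(n)	STEP2(n)   STEP2((n)+2)
#define STEP8(n)	STEP4(n)   STEP4((n)+4)
#define STEP16(n)	STEP8(n)   STEP8((n)+8)
#define STEP32(n)	STEP16(n)  STEP16((n)+16)

int main(void)
{
	STEP32(0)	/* expands to 32 consecutive STEP() calls, like FMR32(0) */
	return 0;
}
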
 
-	stw	r9,PACA_EXSLB+EX_CCR(r13)	/* save CR in exc. frame */
-	std	r10,PACA_EXSLB+EX_LR(r13)	/* save LR */
+COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
 
-#ifdef CONFIG_PPC_STD_MMU_64
-BEGIN_MMU_FTR_SECTION
-	bl	slb_allocate_realmode
-END_MMU_FTR_SECTION_IFCLR(MMU_FTR_RADIX)
+
+#ifdef CONFIG_CBE_RAS
+VECTOR_HANDLER_REAL_HV(cbe_maintenance, 0x1600, 0x1700)
+VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1600)
+COMMON_HANDLER(cbe_maintenance, 0x1600, cbe_maintenance_exception)
+
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
 #endif
-	/* All done -- return from exception. */
 
-	ld	r10,PACA_EXSLB+EX_LR(r13)
-	ld	r3,PACA_EXSLB+EX_R3(r13)
-	lwz	r9,PACA_EXSLB+EX_CCR(r13)	/* get saved CR */
 
-	mtlr	r10
-BEGIN_MMU_FTR_SECTION
-	b	2f
-END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
-	andi.	r10,r12,MSR_RI	/* check for unrecoverable exception */
-	beq-	2f
+VECTOR_HANDLER_REAL(altivec_assist, 0x1700, 0x1800)
+VECTOR_HANDLER_VIRT(altivec_assist, 0x5700, 0x5800, 0x1700)
+TRAMP_KVM(PACA_EXGEN, 0x1700)
+#ifdef CONFIG_ALTIVEC
+COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
+#else
+COMMON_HANDLER(altivec_assist_common, 0x1700, unknown_exception)
+#endif
 
-.machine	push
-.machine	"power4"
-	mtcrf	0x80,r9
-	mtcrf	0x01,r9		/* slb_allocate uses cr0 and cr7 */
-.machine	pop
 
-	RESTORE_PPR_PACA(PACA_EXSLB, r9)
-	ld	r9,PACA_EXSLB+EX_R9(r13)
-	ld	r10,PACA_EXSLB+EX_R10(r13)
-	ld	r11,PACA_EXSLB+EX_R11(r13)
-	ld	r12,PACA_EXSLB+EX_R12(r13)
-	ld	r13,PACA_EXSLB+EX_R13(r13)
-	rfid
-	b	.	/* prevent speculative execution */
+#ifdef CONFIG_CBE_RAS
+VECTOR_HANDLER_REAL_HV(cbe_thermal, 0x1800, 0x1900)
+VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
+COMMON_HANDLER(cbe_thermal, 0x1800, cbe_thermal_exception)
 
-2:	mfspr	r11,SPRN_SRR0
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10,unrecov_slb)
-	mtspr	SPRN_SRR0,r10
-	ld	r10,PACAKMSR(r13)
-	mtspr	SPRN_SRR1,r10
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
+#endif
+
+
+/*
+ * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
+ * - If it was a decrementer interrupt, we bump the dec to max and return.
+ * - If it was a doorbell we return immediately since doorbells are edge
+ *   triggered and won't automatically refire.
+ * - If it was a HMI we return immediately since we handled it in realmode
+ *   and it won't refire.
+ * - else we hard disable and return.
+ * This is called with r10 containing the value to OR to the paca field.
+ */
+#define MASKED_INTERRUPT(_H)				\
+masked_##_H##interrupt:					\
+	std	r11,PACA_EXGEN+EX_R11(r13);		\
+	lbz	r11,PACAIRQHAPPENED(r13);		\
+	or	r11,r11,r10;				\
+	stb	r11,PACAIRQHAPPENED(r13);		\
+	cmpwi	r10,PACA_IRQ_DEC;			\
+	bne	1f;					\
+	lis	r10,0x7fff;				\
+	ori	r10,r10,0xffff;				\
+	mtspr	SPRN_DEC,r10;				\
+	b	2f;					\
+1:	cmpwi	r10,PACA_IRQ_DBELL;			\
+	beq	2f;					\
+	cmpwi	r10,PACA_IRQ_HMI;			\
+	beq	2f;					\
+	mfspr	r10,SPRN_##_H##SRR1;			\
+	rldicl	r10,r10,48,1; /* clear MSR_EE */	\
+	rotldi	r10,r10,16;				\
+	mtspr	SPRN_##_H##SRR1,r10;			\
+2:	mtcrf	0x80,r9;				\
+	ld	r9,PACA_EXGEN+EX_R9(r13);		\
+	ld	r10,PACA_EXGEN+EX_R10(r13);		\
+	ld	r11,PACA_EXGEN+EX_R11(r13);		\
+	GET_SCRATCH0(r13);				\
+	##_H##rfid;					\
+	b	.
+
+USE_FIXED_SECTION(real_trampolines)
+	MASKED_INTERRUPT()
+	MASKED_INTERRUPT(H)
+UNUSE_FIXED_SECTION(real_trampolines)
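
A C-level model of the decision MASKED_INTERRUPT() makes is easier to follow than the register juggling. The PACA_IRQ_* values and field names below are stand-ins; only the control flow is the point:

#include <stdint.h>
#include <stdio.h>

#define PACA_IRQ_DBELL	0x02	/* illustrative values */
#define PACA_IRQ_DEC	0x08
#define PACA_IRQ_HMI	0x20
#define MSR_EE		(1ULL << 15)

struct cpu_state {
	uint8_t  irq_happened;	/* paca->irq_happened */
	uint32_t decrementer;	/* SPRN_DEC */
	uint64_t srr1;		/* MSR we will rfid back with */
};

static void masked_interrupt(struct cpu_state *s, uint8_t reason)
{
	s->irq_happened |= reason;		/* remember it for later replay */

	if (reason == PACA_IRQ_DEC) {
		s->decrementer = 0x7fffffff;	/* push the next tick far away */
		return;
	}
	if (reason == PACA_IRQ_DBELL || reason == PACA_IRQ_HMI)
		return;				/* edge-triggered / already handled */

	s->srr1 &= ~MSR_EE;			/* hard disable: return with EE clear */
}

int main(void)
{
	struct cpu_state s = { 0, 0, MSR_EE };

	masked_interrupt(&s, PACA_IRQ_DEC);
	printf("happened %#x, dec %#x\n",
	       (unsigned)s.irq_happened, (unsigned)s.decrementer);
	return 0;
}
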
+
+/*
+ * Called from arch_local_irq_enable when an interrupt needs
+ * to be resent. r3 contains 0x500, 0x900, 0xa00 or 0xe80 to indicate
+ * which kind of interrupt. MSR:EE is already off. We generate a
+ * stackframe like if a real interrupt had happened.
+ *
+ * Note: While MSR:EE is off, we need to make sure that _MSR
+ * in the generated frame has EE set to 1 or the exception
+ * handler will not properly re-enable them.
+ */
+COMMON_HANDLER_BEGIN(__replay_interrupt)
+	/* We are going to jump to the exception common code which
+	 * will retrieve various register values from the PACA which
+	 * we don't give a damn about, so we don't bother storing them.
+	 */
+	mfmsr	r12
+	mflr	r11
+	mfcr	r9
+	ori	r12,r12,MSR_EE
+	cmpwi	r3,0x900
+	beq	decrementer_common
+	cmpwi	r3,0x500
+	beq	hardware_interrupt_common
+BEGIN_FTR_SECTION
+	cmpwi	r3,0xe80
+	beq	h_doorbell_common
+FTR_SECTION_ELSE
+	cmpwi	r3,0xa00
+	beq	doorbell_super_common
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
+	blr
+COMMON_HANDLER_END(__replay_interrupt)
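
The dispatch in __replay_interrupt above is a four-way compare on the vector number, with the 0xe80/0xa00 choice selected by the CPU_FTR_HVMODE feature section. The same logic as a C sketch, where the handler functions are placeholders for the *_common entry points:

#include <stdbool.h>
#include <stdio.h>

static void decrementer_common(void)		{ puts("replay: decrementer"); }
static void hardware_interrupt_common(void)	{ puts("replay: external"); }
static void h_doorbell_common(void)		{ puts("replay: hv doorbell"); }
static void doorbell_super_common(void)		{ puts("replay: doorbell"); }

static void replay_interrupt(unsigned int vector, bool hv_mode)
{
	switch (vector) {
	case 0x900: decrementer_common(); return;
	case 0x500: hardware_interrupt_common(); return;
	case 0xe80: if (hv_mode) h_doorbell_common(); return;
	case 0xa00: if (!hv_mode) doorbell_super_common(); return;
	default:    return;	/* unknown vector: just return, like the final blr */
	}
}

int main(void)
{
	replay_interrupt(0x900, true);
	return 0;
}
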
+
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+TRAMP_HANDLER_BEGIN(kvmppc_skip_interrupt)
+	/*
+	 * Here all GPRs are unchanged from when the interrupt happened
+	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 */
+	mfspr	r13, SPRN_SRR0
+	addi	r13, r13, 4
+	mtspr	SPRN_SRR0, r13
+	GET_SCRATCH0(r13)
 	rfid
 	b	.
+TRAMP_HANDLER_END(kvmppc_skip_interrupt)
 
-unrecov_slb:
-	EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
-	RECONCILE_IRQ_STATE(r10, r11)
-	bl	save_nvgprs
-1:	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	unrecoverable_exception
-	b	1b
-COMMON_HANDLER_END(slb_miss_realmode)
+TRAMP_HANDLER_BEGIN(kvmppc_skip_Hinterrupt)
+	/*
+	 * Here all GPRs are unchanged from when the interrupt happened
+	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 */
+	mfspr	r13, SPRN_HSRR0
+	addi	r13, r13, 4
+	mtspr	SPRN_HSRR0, r13
+	GET_SCRATCH0(r13)
+	hrfid
+	b	.
+TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
+#endif
 
+TRAMP_HANDLER_BEGIN(ppc64_runlatch_on_trampoline)
+	b	__ppc64_runlatch_on
+TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
+
+USE_FIXED_SECTION(virt_trampolines)
+	/*
+	 * The __end_interrupts marker must be past the out-of-line (OOL)
+	 * handlers, so that they are copied to real address 0x100 when running
+	 * a relocatable kernel. This ensures they can be reached from the short
+	 * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
+	 * directly, without using LOAD_HANDLER_4G().
+	 */
+	.align	7
+	.globl	__end_interrupts
+__end_interrupts:
+UNUSE_FIXED_SECTION(virt_trampolines)
 
 #ifdef CONFIG_PPC_970_NAP
 TRAMP_HANDLER_BEGIN(power4_fixup_nap)
-- 
2.8.1


* [PATCH 11/14] powerpc/pseries: use single macro for both parts of OOL exception
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (9 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 10/14] powerpc/pseries: move related exception code together Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 12/14] powerpc/pseries: remove unused exception code, small cleanups Nicholas Piggin
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

Simple substitution: fold each __VECTOR_HANDLER_*_OOL / __TRAMP_HANDLER_*_OOL
pair into the single VECTOR_HANDLER_*_OOL macro, which emits both the vector
stub and the out-of-line trampoline.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 53 ++++++++++++------------------------
 1 file changed, 17 insertions(+), 36 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 7893af7..9832765 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -766,8 +766,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 COMMON_HANDLER_END(fp_unavailable_common)
 
 
-__VECTOR_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)
-__TRAMP_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900)
+VECTOR_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)
 VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
 TRAMP_KVM(PACA_EXGEN, 0x900)
 COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
@@ -891,8 +890,7 @@ VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
 TRAMP_KVM(PACA_EXGEN, 0xd00)
 COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
 
-__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
-__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
+VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
 VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
 	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
 VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
@@ -912,8 +910,7 @@ COMMON_HANDLER_END(h_data_storage_common)
 COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
-__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
+VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
 VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
 	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
 VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
@@ -921,16 +918,13 @@ TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
-__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
-__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
-__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
+VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
+VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60, 0xe40)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
 COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
 
 
 __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
 VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
 	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
 VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
@@ -987,10 +981,8 @@ COMMON_HANDLER_END(hmi_exception_early)
 COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
-__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
-__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
+VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0, 0xe80)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
 #ifdef CONFIG_PPC_DOORBELL
 COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
@@ -1003,18 +995,14 @@ VECTOR_HANDLER_REAL_NONE(0xea0, 0xf00)
 VECTOR_HANDLER_VIRT_NONE(0x4ea0, 0x4f00)
 
 
-__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
-__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
-__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
-__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
+VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
+VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20, 0xf00)
 TRAMP_KVM(PACA_EXGEN, 0xf00)
 COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
-__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
-__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
-__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
+VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
+VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40, 0xf20)
 TRAMP_KVM(PACA_EXGEN, 0xf20)
 COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
@@ -1051,11 +1039,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 COMMON_HANDLER_END(altivec_unavailable_common)
 
 
-
-__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
-__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
-__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
-__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
+VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
+VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60, 0xf40)
 TRAMP_KVM(PACA_EXGEN, 0xf40)
 COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
@@ -1091,18 +1076,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 COMMON_HANDLER_END(vsx_unavailable_common)
 
 
-__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
-__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
-__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
-__TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
+VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
+VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80, 0xf60)
 TRAMP_KVM(PACA_EXGEN, 0xf60)
 COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
-__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
-__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
-__TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
+VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
+VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0, 0xf80)
 TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
 COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 12/14] powerpc/pseries: remove unused exception code, small cleanups
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (10 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 11/14] powerpc/pseries: use single macro for both parts of OOL exception Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 13/14] powerpc/pseries: consolidate slb exceptions Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 14/14] powerpc/pseries: exceptions use short handler load again Nicholas Piggin
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

These cleanups were not done before the big patches because I only
noticed them afterwards. It has now become much easier to see which
handlers are branched to from which exception vectors, and exactly
what vector space is being used for what.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 9832765..b7a8a66 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -90,6 +90,9 @@ USE_FIXED_SECTION(real_vectors)
 	.globl __start_interrupts
 __start_interrupts:
 
+/* No virt vectors corresponding with 0x0..0x100 */
+VECTOR_HANDLER_VIRT_NONE(0x4000, 0x4100)
+
 VECTOR_HANDLER_REAL_BEGIN(system_reset, 0x100, 0x200)
 	SET_SCRATCH0(r13)
 #ifdef CONFIG_PPC_P7_NAP
@@ -891,9 +894,7 @@ TRAMP_KVM(PACA_EXGEN, 0xd00)
 COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
 
 VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
+VECTOR_HANDLER_VIRT_NONE(0x4e00, 0x4e20)
 TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
 COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
@@ -907,13 +908,10 @@ COMMON_HANDLER_BEGIN(h_data_storage_common)
 	bl      unknown_exception
 	b       ret_from_except
 COMMON_HANDLER_END(h_data_storage_common)
-COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
 
 
 VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
+VECTOR_HANDLER_VIRT_NONE(0x4e20, 0x4e40)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
@@ -925,9 +923,7 @@ COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
 
 
 __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
+VECTOR_HANDLER_VIRT_NONE(0x4e60, 0x4e80)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
 COMMON_HANDLER_BEGIN(hmi_exception_early)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0xe60)
@@ -1100,6 +1096,7 @@ COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
 
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
+VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
 #endif
 
 
@@ -1212,6 +1209,7 @@ COMMON_HANDLER(cbe_maintenance, 0x1600, cbe_maintenance_exception)
 
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
+VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
 #endif
 
 
@@ -1233,8 +1231,11 @@ COMMON_HANDLER(cbe_thermal, 0x1800, cbe_thermal_exception)
 
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
+VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
 #endif
 
+VECTOR_HANDLER_REAL_NONE(0x1900, 0x2000)
+VECTOR_HANDLER_VIRT_NONE(0x5900, 0x6000)
 
 /*
  * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 13/14] powerpc/pseries: consolidate slb exceptions
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (11 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 12/14] powerpc/pseries: remove unused exception code, small cleanups Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  2016-07-21  6:44 ` [PATCH 14/14] powerpc/pseries: exceptions use short handler load again Nicholas Piggin
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

The SLB exceptions all follow the same pattern, so put that pattern
in a macro. The generated code should be the same.

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 172 ++++++++++++++---------------------
 1 file changed, 69 insertions(+), 103 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index b7a8a66..c317faf 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -497,111 +497,32 @@ MMU_FTR_SECTION_ELSE
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
 COMMON_HANDLER_END(data_access_common)
 
-
-VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_DAR
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
-	/*
-	 * We can't just use a direct branch to slb_miss_realmode
-	 * because the distance from here to there depends on where
-	 * the kernel ends up being put.
-	 */
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
-
-VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x380)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_DAR
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
-	/*
-	 * We can't just use a direct branch to slb_miss_realmode
-	 * because the distance from here to there depends on where
-	 * the kernel ends up being put.
-	 */
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_VIRT_END(data_access_slb, 0x4380, 0x4400)
-TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
-
-
-VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
-VECTOR_HANDLER_VIRT(instruction_access, 0x4400, 0x4480, 0x400)
-TRAMP_KVM(PACA_EXGEN, 0x400)
-COMMON_HANDLER_BEGIN(instruction_access_common)
-	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
-	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r12,_MSR(r1)
-	ld	r3,_NIP(r1)
-	andis.	r4,r12,0x5820
-	li	r5,0x400
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-BEGIN_MMU_FTR_SECTION
-	b	do_hash_page		/* Try to handle as hpte fault */
-MMU_FTR_SECTION_ELSE
-	b	handle_page_fault
-ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
-COMMON_HANDLER_END(instruction_access_common)
-
-
-VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
-VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x480)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
+/*
+ * SLB miss macro takes care of all 4 cases (I/D, real/virt)
+ */
+#if defined(CONFIG_RELOCATABLE)
+#define SLB_ACCESS_EXCEPTION(addr_spr, vec, TEST)		\
+	SET_SCRATCH0(r13);					\
+	EXCEPTION_PROLOG_0(PACA_EXSLB);				\
+	EXCEPTION_PROLOG_1(PACA_EXSLB, TEST, vec);		\
+	std	r3,PACA_EXSLB+EX_R3(r13);			\
+	mfspr	r3,addr_spr;					\
+	mfspr	r12,SPRN_SRR1;					\
+	mfctr	r11;						\
+	ld	r10,PACAKBASE(r13);				\
+	LOAD_HANDLER_4G(r10, slb_miss_realmode);		\
+	mtctr	r10;						\
+	bctr;
 #else
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
+#define SLB_ACCESS_EXCEPTION(addr_spr, vec, TEST)		\
+	SET_SCRATCH0(r13);					\
+	EXCEPTION_PROLOG_0(PACA_EXSLB);				\
+	EXCEPTION_PROLOG_1(PACA_EXSLB, TEST, vec);		\
+	std	r3,PACA_EXSLB+EX_R3(r13);			\
+	mfspr	r3,addr_spr;					\
+	mfspr	r12,SPRN_SRR1;					\
+	b	slb_miss_realmode;
 #endif
-VECTOR_HANDLER_VIRT_END(instruction_access_slb, 0x4480, 0x4500)
-TRAMP_KVM(PACA_EXSLB, 0x480)
-
 
 TRAMP_HANDLER_BEGIN(slb_miss_realmode)
 	/*
@@ -674,6 +595,45 @@ COMMON_HANDLER_BEGIN(unrecov_slb)
 COMMON_HANDLER_END(unrecov_slb)
 
 
+VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
+	SLB_ACCESS_EXCEPTION(SPRN_DAR, 0x380, KVMTEST_PR)
+VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
+
+VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
+	SLB_ACCESS_EXCEPTION(SPRN_DAR, 0x380, NOTEST)
+VECTOR_HANDLER_VIRT_END(data_access_slb, 0x4380, 0x4400)
+TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
+
+
+VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
+VECTOR_HANDLER_VIRT(instruction_access, 0x4400, 0x4480, 0x400)
+TRAMP_KVM(PACA_EXGEN, 0x400)
+COMMON_HANDLER_BEGIN(instruction_access_common)
+	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
+	RECONCILE_IRQ_STATE(r10, r11)
+	ld	r12,_MSR(r1)
+	ld	r3,_NIP(r1)
+	andis.	r4,r12,0x5820
+	li	r5,0x400
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+BEGIN_MMU_FTR_SECTION
+	b	do_hash_page		/* Try to handle as hpte fault */
+MMU_FTR_SECTION_ELSE
+	b	handle_page_fault
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
+COMMON_HANDLER_END(instruction_access_common)
+
+
+VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
+	SLB_ACCESS_EXCEPTION(SPRN_SRR0, 0x480, KVMTEST_PR)
+VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
+VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
+	SLB_ACCESS_EXCEPTION(SPRN_SRR0, 0x480, NOTEST)
+VECTOR_HANDLER_VIRT_END(instruction_access_slb, 0x4480, 0x4500)
+TRAMP_KVM(PACA_EXSLB, 0x480)
+
+
 VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
 	.globl hardware_interrupt_hv;
 hardware_interrupt_hv:
@@ -922,7 +882,13 @@ TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
 COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
 
 
+/*
+ * hmi_exception trampoline is a special case. It jumps to hmi_exception_early
+ * first, and then eventually from there to the trampoline to get into virtual
+ * mode.
+ */
 __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
 VECTOR_HANDLER_VIRT_NONE(0x4e60, 0x4e80)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
 COMMON_HANDLER_BEGIN(hmi_exception_early)
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [PATCH 14/14] powerpc/pseries: exceptions use short handler load again
  2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
                   ` (12 preceding siblings ...)
  2016-07-21  6:44 ` [PATCH 13/14] powerpc/pseries: consolidate slb exceptions Nicholas Piggin
@ 2016-07-21  6:44 ` Nicholas Piggin
  13 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21  6:44 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin, Benjamin Herrenschmidt, Michael Ellerman

The addis generated by LOAD_HANDLER_4G is always 0, so it is safe to
use the 64K handler load again. Move the decrementer exception back
inline.
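
For illustration, here is a rough sketch of the two load forms. These
are hypothetical definitions, not the exact macros from earlier in the
series; the offset-from-_stext form and the assumption that reg already
holds the 64K-aligned kernel base from PACAKBASE(r13) are inferred from
how the callers in exceptions-64s.S use them.

#define LOAD_HANDLER_64K(reg, label)				\
	/* offset of label from the kernel base fits in 16 bits */ \
	ori	reg,reg,((label)-_stext)@l

#define LOAD_HANDLER_4G(reg, label)				\
	ori	reg,reg,((label)-_stext)@l;	/* low 16 bits */  \
	addis	reg,reg,((label)-_stext)@h	/* high 16 bits */

Because every handler reached this way sits within the first 64KB of
the kernel image, the @h part is always 0 and the addis is a no-op, so
dropping it should save an instruction in each prolog without changing
behaviour.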

Signed-off-by: Nick Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/exception-64s.h |  4 ++--
 arch/powerpc/kernel/exceptions-64s.S     | 21 ++++++++++++---------
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 06e2247..eaad38f 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -55,7 +55,7 @@
 #define __EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)			\
 	ld	r12,PACAKBASE(r13);	/* get high part of &label */	\
 	mfspr	r11,SPRN_##h##SRR0;	/* save SRR0 */			\
-	LOAD_HANDLER_4G(r12,label);					\
+	LOAD_HANDLER_64K(r12,label);					\
 	mtctr	r12;							\
 	mfspr	r12,SPRN_##h##SRR1;	/* and SRR1 */			\
 	li	r10,MSR_RI;						\
@@ -186,7 +186,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	ld	r12,PACAKBASE(r13);	/* get high part of &label */	\
 	ld	r10,PACAKMSR(r13);	/* get MSR value for kernel */	\
 	mfspr	r11,SPRN_##h##SRR0;	/* save SRR0 */			\
-	LOAD_HANDLER_4G(r12,label);					\
+	LOAD_HANDLER_64K(r12,label);					\
 	mtspr	SPRN_##h##SRR0,r12;					\
 	mfspr	r12,SPRN_##h##SRR1;	/* and SRR1 */			\
 	mtspr	SPRN_##h##SRR1,r10;					\
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index c317faf..462bf67 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -262,7 +262,7 @@ BEGIN_FTR_SECTION
 	ori	r11,r11,MSR_ME		/* turn on ME bit */
 	ori	r11,r11,MSR_RI		/* turn on RI bit */
 	ld	r12,PACAKBASE(r13)	/* get high part of &label */
-	LOAD_HANDLER_4G(r12, machine_check_handle_early)
+	LOAD_HANDLER_64K(r12, machine_check_handle_early)
 1:	mtspr	SPRN_SRR0,r12
 	mtspr	SPRN_SRR1,r11
 	rfid
@@ -275,7 +275,7 @@ BEGIN_FTR_SECTION
 	addi	r1,r1,INT_FRAME_SIZE	/* go back to previous stack frame */
 	ld	r11,PACAKMSR(r13)
 	ld	r12,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r12, unrecover_mce)
+	LOAD_HANDLER_64K(r12, unrecover_mce)
 	li	r10,MSR_ME
 	andc	r11,r11,r10		/* Turn off MSR_ME */
 	b	1b
@@ -416,7 +416,7 @@ COMMON_HANDLER_BEGIN(machine_check_handle_early)
 	bne	2f
 1:	mfspr	r11,SPRN_SRR0
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10,unrecover_mce)
+	LOAD_HANDLER_64K(r10,unrecover_mce)
 	mtspr	SPRN_SRR0,r10
 	ld	r10,PACAKMSR(r13)
 	/*
@@ -510,7 +510,7 @@ COMMON_HANDLER_END(data_access_common)
 	mfspr	r12,SPRN_SRR1;					\
 	mfctr	r11;						\
 	ld	r10,PACAKBASE(r13);				\
-	LOAD_HANDLER_4G(r10, slb_miss_realmode);		\
+	LOAD_HANDLER_64K(r10, slb_miss_realmode);		\
 	mtctr	r10;						\
 	bctr;
 #else
@@ -577,7 +577,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
 
 2:	mfspr	r11,SPRN_SRR0
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER_4G(r10,unrecov_slb)
+	LOAD_HANDLER_64K(r10,unrecov_slb)
 	mtspr	SPRN_SRR0,r10
 	ld	r10,PACAKMSR(r13)
 	mtspr	SPRN_SRR1,r10
@@ -729,7 +729,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 COMMON_HANDLER_END(fp_unavailable_common)
 
 
-VECTOR_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)
+VECTOR_HANDLER_REAL_MASKABLE(decrementer, 0x900, 0x980)
 VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
 TRAMP_KVM(PACA_EXGEN, 0x900)
 COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
@@ -771,7 +771,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 #define SYSCALL_PSERIES_2_RFID 					\
 	mfspr	r12,SPRN_SRR1 ;					\
 	ld	r10,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER_4G(r10, system_call_common) ; 		\
+	LOAD_HANDLER_64K(r10, system_call_common) ; 		\
 	mtspr	SPRN_SRR0,r10 ; 				\
 	ld	r10,PACAKMSR(r13) ;				\
 	mtspr	SPRN_SRR1,r10 ; 				\
@@ -794,7 +794,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 #define SYSCALL_PSERIES_2_DIRECT				\
 	mflr	r10 ;						\
 	ld	r12,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER_4G(r12, system_call_common) ;		\
+	LOAD_HANDLER_64K(r12, system_call_common) ;		\
 	mtctr	r12 ;						\
 	mfspr	r12,SPRN_SRR1 ;					\
 	/* Re-use of r13... No spare regs to do this */	\
@@ -1317,7 +1317,10 @@ USE_FIXED_SECTION(virt_trampolines)
 	 * handlers, so that they are copied to real address 0x100 when running
 	 * a relocatable kernel. This ensures they can be reached from the short
 	 * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
-	 * directly, without using LOAD_HANDLER_4G().
+	 * directly, without using LOAD_HANDLER_*().
+	 *
+	 * This needs to be aligned according to copy_and_flush, which copies
+	 * a cacheline at a time.
 	 */
 	.align	7
 	.globl	__end_interrupts
-- 
2.8.1

^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH 03/14] powerpc: build-time fixup alternate feature relative addresses
  2016-07-21  6:44 ` [PATCH 03/14] powerpc: build-time fixup alternate feature relative addresses Nicholas Piggin
@ 2016-07-21 13:39   ` Nicholas Piggin
  0 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-21 13:39 UTC (permalink / raw)
  To: linuxppc-dev

On Thu, 21 Jul 2016 16:44:02 +1000
Nicholas Piggin <nicholas.piggin@gmail.com> wrote:

> fixup_entry data could be stripped at build time after this change.

Actually that's crazy, no it couldn't! Fortunately I didn't attempt
it. Laziness pays off yet again.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH 05/14] powerpc/pseries: 4GB exception handler offsets
  2016-07-21  6:44 ` [PATCH 05/14] powerpc/pseries: 4GB exception handler offsets Nicholas Piggin
@ 2016-07-21 14:34   ` David Laight
  2016-07-22  7:52     ` Nicholas Piggin
  0 siblings, 1 reply; 18+ messages in thread
From: David Laight @ 2016-07-21 14:34 UTC (permalink / raw)
  To: 'Nicholas Piggin', linuxppc-dev@lists.ozlabs.org; +Cc: Nicholas Piggin

From: Nicholas Piggin
> Sent: 21 July 2016 07:44
...
> @@ -739,7 +739,8 @@ kvmppc_skip_Hinterrupt:
>   * Ensure that any handlers that get invoked from the exception prologs
>   * above are below the first 64KB (0x10000) of the kernel image because
>   * the prologs assemble the addresses of these handlers using the
> - * LOAD_HANDLER macro, which uses an ori instruction.
> + * LOAD_HANDLER_4G macro, which uses an ori instruction. Care must also
> + * be taken because relative branches can only address 32K in each direction.
>   */

That comment now looks wrong.

	David

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH 05/14] powerpc/pseries: 4GB exception handler offsets
  2016-07-21 14:34   ` David Laight
@ 2016-07-22  7:52     ` Nicholas Piggin
  0 siblings, 0 replies; 18+ messages in thread
From: Nicholas Piggin @ 2016-07-22  7:52 UTC (permalink / raw)
  To: David Laight; +Cc: linuxppc-dev@lists.ozlabs.org, Nicholas Piggin

On Thu, 21 Jul 2016 14:34:10 +0000
David Laight <David.Laight@ACULAB.COM> wrote:

> From: Nicholas Piggin
> > Sent: 21 July 2016 07:44  
> ...
> > @@ -739,7 +739,8 @@ kvmppc_skip_Hinterrupt:
> >   * Ensure that any handlers that get invoked from the exception
> > prologs
> >   * above are below the first 64KB (0x10000) of the kernel image
> > because
> >   * the prologs assemble the addresses of these handlers using the
> > - * LOAD_HANDLER macro, which uses an ori instruction.
> > + * LOAD_HANDLER_4G macro, which uses an ori instruction. Care must
> > also
> > + * be taken because relative branches can only address 32K in each
> > direction. */  
> 
> That comment now looks wrong.

You're right, I'll correct it.

Thanks,
Nick

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2016-07-22  7:52 UTC | newest]

Thread overview: 18+ messages
2016-07-21  6:43 [RFC][PATCH 00/14] pseries exception cleanups Nicholas Piggin
2016-07-21  6:44 ` [PATCH 01/14] powerpc: add arch/powerpc/tools directory Nicholas Piggin
2016-07-21  6:44 ` [PATCH 02/14] powerpc/pseries: remove cross-fixup branches in exception code Nicholas Piggin
2016-07-21  6:44 ` [PATCH 03/14] powerpc: build-time fixup alternate feature relative addresses Nicholas Piggin
2016-07-21 13:39   ` Nicholas Piggin
2016-07-21  6:44 ` [PATCH 04/14] powerpc/pseries: move decrementer exception vector out of line Nicholas Piggin
2016-07-21  6:44 ` [PATCH 05/14] powerpc/pseries: 4GB exception handler offsets Nicholas Piggin
2016-07-21 14:34   ` David Laight
2016-07-22  7:52     ` Nicholas Piggin
2016-07-21  6:44 ` [PATCH 06/14] powerpc/pseries: h_facility_unavailable realmode exception location Nicholas Piggin
2016-07-21  6:44 ` [PATCH 07/14] powerpc/pseries: improved exception vector macros Nicholas Piggin
2016-07-21  6:44 ` [PATCH 08/14] powerpc/pseries: consolidate exception handler alignment Nicholas Piggin
2016-07-21  6:44 ` [PATCH 09/14] powerpc/64: use gas sections for arranging exception vectors Nicholas Piggin
2016-07-21  6:44 ` [PATCH 10/14] powerpc/pseries: move related exception code together Nicholas Piggin
2016-07-21  6:44 ` [PATCH 11/14] powerpc/pseries: use single macro for both parts of OOL exception Nicholas Piggin
2016-07-21  6:44 ` [PATCH 12/14] powerpc/pseries: remove unused exception code, small cleanups Nicholas Piggin
2016-07-21  6:44 ` [PATCH 13/14] powerpc/pseries: consolidate slb exceptions Nicholas Piggin
2016-07-21  6:44 ` [PATCH 14/14] powerpc/pseries: exceptions use short handler load again Nicholas Piggin
