From: Will Deacon <will@kernel.org>
To: David Brazdil <dbrazdil@google.com>
Cc: Marc Zyngier <maz@kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Dennis Zhou <dennis@kernel.org>, Tejun Heo <tj@kernel.org>,
Christoph Lameter <cl@linux.com>, Arnd Bergmann <arnd@arndb.de>,
James Morse <james.morse@arm.com>,
Julien Thierry <julien.thierry.kdev@gmail.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
linux-arch@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v2 00/10] Independent per-CPU data section for nVHE
Date: Wed, 16 Sep 2020 13:39:14 +0100 [thread overview]
Message-ID: <20200916123913.GA28056@willie-the-truck> (raw)
In-Reply-To: <20200916122412.elxfxbdygvmdgrj5@google.com>
On Wed, Sep 16, 2020 at 01:24:12PM +0100, David Brazdil wrote:
> > I was also wondering about another approach - using the PERCPU_SECTION macro
> > unchanged in the hyp linker script. It would lay out a single .data..percpu and
> > we would then prefix it with .hyp and the symbols with __kvm_nvhe_ as with
> > everything else. WDYT? Haven't tried that yet, could be a naive idea.
>
> Seems to work. Can't use PERCPU_SECTION directly because then we couldn't
> rename it in the same linker script, but if we just unwrap that one layer
> we can use PERCPU_INPUT. No global macro changes needed.
>
> Let me know what you think.
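(For context, the "one layer" being unwrapped here: PERCPU_SECTION is roughly a thin wrapper around PERCPU_INPUT in include/asm-generic/vmlinux.lds.h — sketch only, details vary by kernel version:)

```
#define PERCPU_SECTION(cacheline)					\
	. = ALIGN(PAGE_SIZE);						\
	.data..percpu : AT(ADDR(.data..percpu) - LOAD_OFFSET) {		\
		__per_cpu_load = .;					\
		PERCPU_INPUT(cacheline)					\
	}
```

Using PERCPU_INPUT directly lets the hyp linker script pick its own output section name while reusing the canonical input-section layout.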
>
> ------8<------
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index 5904a4de9f40..9e6bf21268f1 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -195,11 +195,9 @@ SECTIONS
> PERCPU_SECTION(L1_CACHE_BYTES)
>
> /* KVM nVHE per-cpu section */
> - #undef PERCPU_SECTION_NAME
> - #undef PERCPU_SYMBOL_NAME
> - #define PERCPU_SECTION_NAME(suffix) CONCAT3(.hyp, PERCPU_SECTION_BASE_NAME, suffix)
> - #define PERCPU_SYMBOL_NAME(name) __kvm_nvhe_ ## name
> - PERCPU_SECTION(L1_CACHE_BYTES)
> + . = ALIGN(PAGE_SIZE);
> + .hyp.data..percpu : { *(.hyp.data..percpu) }
> + . = ALIGN(PAGE_SIZE);
>
> .rela.dyn : ALIGN(8) {
> *(.rela .rela*)
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp.lds.S b/arch/arm64/kvm/hyp/nvhe/hyp.lds.S
> index 7d8c3fa004f4..1d8e4f7edc29 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp.lds.S
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp.lds.S
> @@ -4,6 +4,10 @@
> * Written by David Brazdil <dbrazdil@google.com>
> */
>
> +#include <asm-generic/vmlinux.lds.h>
> +#include <asm/cache.h>
> +#include <asm/memory.h>
> +
> /*
> * Defines an ELF hyp section from input section @NAME and its subsections.
> */
> @@ -11,9 +15,9 @@
>
> SECTIONS {
> HYP_SECTION(.text)
> - HYP_SECTION(.data..percpu)
> - HYP_SECTION(.data..percpu..first)
> - HYP_SECTION(.data..percpu..page_aligned)
> - HYP_SECTION(.data..percpu..read_mostly)
> - HYP_SECTION(.data..percpu..shared_aligned)
> +
> + .hyp..data..percpu : {
Too many '.'s here?
> + __per_cpu_load = .;
I don't think we need this symbol.
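(For what it's worth, with both comments addressed — single-dot .hyp prefix, no __per_cpu_load — the hyp section could end up looking something like this; a sketch only, names as in the diff above:)

```
SECTIONS {
	HYP_SECTION(.text)

	. = ALIGN(PAGE_SIZE);
	.hyp.data..percpu : {
		PERCPU_INPUT(L1_CACHE_BYTES)
	}
	. = ALIGN(PAGE_SIZE);
}
```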
Otherwise, the idea looks good to me. Can you respin it like this, but also
incorporate some of the cleanup from the diff I posted, please?
Will
Thread overview: 23+ messages
2020-09-03 9:17 [PATCH v2 00/10] Independent per-CPU data section for nVHE David Brazdil
2020-09-03 9:17 ` [PATCH v2 01/10] Macros to override naming of percpu symbols and sections David Brazdil
2020-09-03 9:17 ` [PATCH v2 02/10] kvm: arm64: Partially link nVHE hyp code, simplify HYPCOPY David Brazdil
2020-09-10 9:54 ` Andrew Scull
2020-09-14 13:09 ` Will Deacon
2020-09-03 9:17 ` [PATCH v2 03/10] kvm: arm64: Remove __hyp_this_cpu_read David Brazdil
2020-09-10 11:12 ` Andrew Scull
2020-09-17 8:34 ` David Brazdil
2020-09-03 9:17 ` [PATCH v2 04/10] kvm: arm64: Remove hyp_adr/ldr_this_cpu David Brazdil
2020-09-10 12:34 ` Andrew Scull
2020-09-03 9:17 ` [PATCH v2 05/10] kvm: arm64: Add helpers for accessing nVHE hyp per-cpu vars David Brazdil
2020-09-03 9:17 ` [PATCH v2 06/10] kvm: arm64: Duplicate arm64_ssbd_callback_required for nVHE hyp David Brazdil
2020-09-03 9:17 ` [PATCH v2 07/10] kvm: arm64: Create separate instances of kvm_host_data for VHE/nVHE David Brazdil
2020-09-03 9:17 ` [PATCH v2 08/10] kvm: arm64: Mark hyp stack pages reserved David Brazdil
2020-09-03 9:17 ` [PATCH v2 09/10] kvm: arm64: Set up hyp percpu data for nVHE David Brazdil
2020-09-03 9:17 ` [PATCH v2 10/10] kvm: arm64: Remove unnecessary hyp mappings David Brazdil
2020-09-10 14:07 ` Andrew Scull
2020-09-16 13:35 ` David Brazdil
2020-09-14 17:40 ` [PATCH v2 00/10] Independent per-CPU data section for nVHE Will Deacon
2020-09-16 11:54 ` David Brazdil
2020-09-16 12:24 ` David Brazdil
2020-09-16 12:39 ` Will Deacon [this message]
2020-09-16 12:40 ` David Brazdil