From: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
To: Catalin Marinas <catalin.marinas@arm.com>
Cc: <sohil.mehta@intel.com>, <baohua@kernel.org>, <david@redhat.com>,
<kbingham@kernel.org>, <weixugc@google.com>,
<Liam.Howlett@oracle.com>, <alexandre.chartre@oracle.com>,
<kas@kernel.org>, <mark.rutland@arm.com>,
<trintaeoitogc@gmail.com>, <axelrasmussen@google.com>,
<yuanchu@google.com>, <joey.gouly@arm.com>,
<samitolvanen@google.com>, <joel.granados@kernel.org>,
<graf@amazon.com>, <vincenzo.frascino@arm.com>, <kees@kernel.org>,
<ardb@kernel.org>, <thiago.bauermann@linaro.org>,
<glider@google.com>, <thuth@redhat.com>,
<kuan-ying.lee@canonical.com>, <pasha.tatashin@soleen.com>,
<nick.desaulniers+lkml@gmail.com>, <vbabka@suse.cz>,
<kaleshsingh@google.com>, <justinstitt@google.com>,
<alexander.shishkin@linux.intel.com>, <samuel.holland@sifive.com>,
<dave.hansen@linux.intel.com>, <corbet@lwn.net>, <xin@zytor.com>,
<dvyukov@google.com>, <tglx@linutronix.de>,
<scott@os.amperecomputing.com>, <jason.andryuk@amd.com>,
<morbo@google.com>, <nathan@kernel.org>,
<lorenzo.stoakes@oracle.com>, <mingo@redhat.com>,
<brgerst@gmail.com>, <kristina.martsenko@arm.com>,
<bigeasy@linutronix.de>, <luto@kernel.org>, <jgross@suse.com>,
<jpoimboe@kernel.org>, <urezki@gmail.com>, <mhocko@suse.com>,
<ada.coupriediaz@arm.com>, <hpa@zytor.com>, <leitao@debian.org>,
<peterz@infradead.org>, <wangkefeng.wang@huawei.com>,
<surenb@google.com>, <ziy@nvidia.com>, <smostafa@google.com>,
<ryabinin.a.a@gmail.com>, <ubizjak@gmail.com>, <jbohac@suse.cz>,
<broonie@kernel.org>, <akpm@linux-foundation.org>,
<guoweikang.kernel@gmail.com>, <rppt@kernel.org>,
<pcc@google.com>, <jan.kiszka@siemens.com>,
<nicolas.schier@linux.dev>, <will@kernel.org>,
<andreyknvl@gmail.com>, <jhubbard@nvidia.com>, <bp@alien8.de>,
<x86@kernel.org>, <linux-doc@vger.kernel.org>,
<linux-mm@kvack.org>, <llvm@lists.linux.dev>,
<linux-kbuild@vger.kernel.org>, <kasan-dev@googlegroups.com>,
<linux-kernel@vger.kernel.org>,
<linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v5 01/19] kasan: sw_tags: Use arithmetic shift for shadow computation
Date: Wed, 27 Aug 2025 08:26:57 +0200
Message-ID: <cwxjbxch5mu6ji7dhus2kfygys2kky2agu4gqrusnz2autk22t@k2cq7qgqmmvm>
In-Reply-To: <aK4MlVgsaUv-u7mS@arm.com>
On 2025-08-26 at 20:35:49 +0100, Catalin Marinas wrote:
>On Mon, Aug 25, 2025 at 10:24:26PM +0200, Maciej Wieczor-Retman wrote:
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index e9bbfacc35a6..82cbfc7d1233 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -431,11 +431,11 @@ config KASAN_SHADOW_OFFSET
>> default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
>> default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
>> default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
>> - default 0xefff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
>> - default 0xefffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
>> - default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
>> - default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
>> - default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>> + default 0xffff800000000000 if (ARM64_VA_BITS_48 || (ARM64_VA_BITS_52 && !ARM64_16K_PAGES)) && KASAN_SW_TAGS
>> + default 0xffffc00000000000 if (ARM64_VA_BITS_47 || ARM64_VA_BITS_52) && ARM64_16K_PAGES && KASAN_SW_TAGS
>> + default 0xfffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
>> + default 0xffffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
>> + default 0xfffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
>> default 0xffffffffffffffff
>>
>> config UNWIND_TABLES
>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>> index 5213248e081b..277d56ceeb01 100644
>> --- a/arch/arm64/include/asm/memory.h
>> +++ b/arch/arm64/include/asm/memory.h
>> @@ -89,7 +89,15 @@
>> *
>> * KASAN_SHADOW_END is defined first as the shadow address that corresponds to
>> * the upper bound of possible virtual kernel memory addresses UL(1) << 64
>> - * according to the mapping formula.
>> + * according to the mapping formula. For Generic KASAN, the address in the
>> + * mapping formula is treated as unsigned (part of the compiler's ABI), so the
>> + * end of the shadow memory region is at a large positive offset from
>> + * KASAN_SHADOW_OFFSET. For Software Tag-Based KASAN, the address in the
>> + * formula is treated as signed. Since all kernel addresses are negative, they
>> + * map to shadow memory below KASAN_SHADOW_OFFSET, making KASAN_SHADOW_OFFSET
>> + * itself the end of the shadow memory region. (User pointers are positive and
>> + * would map to shadow memory above KASAN_SHADOW_OFFSET, but shadow memory is
>> + * not allocated for them.)
>> *
>> * KASAN_SHADOW_START is defined second based on KASAN_SHADOW_END. The shadow
>> * memory start must map to the lowest possible kernel virtual memory address
>> @@ -100,7 +108,11 @@
>> */
>> #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
>> #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>> +#ifdef CONFIG_KASAN_GENERIC
>> #define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) + KASAN_SHADOW_OFFSET)
>> +#else
>> +#define KASAN_SHADOW_END KASAN_SHADOW_OFFSET
>> +#endif
>> #define _KASAN_SHADOW_START(va) (KASAN_SHADOW_END - (UL(1) << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
>> #define KASAN_SHADOW_START _KASAN_SHADOW_START(vabits_actual)
>> #define PAGE_END KASAN_SHADOW_START
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index d541ce45daeb..dc2de12c4f26 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -198,8 +198,11 @@ static bool __init root_level_aligned(u64 addr)
>> /* The early shadow maps everything to a single page of zeroes */
>> asmlinkage void __init kasan_early_init(void)
>> {
>> - BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
>> - KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>> + if (IS_ENABLED(CONFIG_KASAN_GENERIC))
>> + BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
>> + KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
>> + else
>> + BUILD_BUG_ON(KASAN_SHADOW_OFFSET != KASAN_SHADOW_END);
>> BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), SHADOW_ALIGN));
>> BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), SHADOW_ALIGN));
>> BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, SHADOW_ALIGN));
>
>For the arm64 parts:
>
>Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Thanks :)
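
For anyone skimming the thread, the mapping difference the updated comment
describes boils down to roughly the below (not the real kernel helpers, just
a sketch with made-up names):

	/*
	 * Generic KASAN: the address is treated as unsigned, so the
	 * shadow for kernel addresses sits at a large positive offset
	 * above KASAN_SHADOW_OFFSET.
	 */
	static inline void *generic_mem_to_shadow(unsigned long addr)
	{
		return (void *)((addr >> KASAN_SHADOW_SCALE_SHIFT) +
				KASAN_SHADOW_OFFSET);
	}

	/*
	 * Software tag-based KASAN: the address is treated as signed,
	 * so (negative) kernel addresses map below KASAN_SHADOW_OFFSET,
	 * which then doubles as KASAN_SHADOW_END.
	 */
	static inline void *sw_tags_mem_to_shadow(unsigned long addr)
	{
		return (void *)(((long)addr >> KASAN_SHADOW_SCALE_SHIFT) +
				KASAN_SHADOW_OFFSET);
	}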
>
>I wonder whether it's worth keeping the generic KASAN mode for arm64.
>We've had the hardware TBI from the start, so the architecture version
>is not an issue. The compiler support may differ though.
>
>Anyway, that would be more suitable for a separate cleanup patch.
>
>--
>Catalin
I want to test it at some point, but I was always under the impression that,
at least in theory, the different modes catch slightly different classes of
errors. Not a big set, but one example is an access through a wrong pointer
that still lands in allocated memory - Generic KASAN accepts it, since the
shadow only records whether and how much of the target is allocated, while
sw-tags reports it because the randomized tags would mismatch. I can't think
of an example the other way around right now, but I assume there are a few.
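
A rough sketch of what I mean (hypothetical sizes and slab layout, nothing
from the series - it assumes the stray access skips past any redzone into a
neighbouring live object):

	u8 *a = kmalloc(32, GFP_KERNEL);
	u8 *b = kmalloc(32, GFP_KERNEL);

	/*
	 * Access through 'a' that lands in memory which is itself still
	 * allocated (say, inside 'b'). Generic KASAN only checks that
	 * the target bytes are addressable, so it stays silent; sw-tags
	 * compares the pointer tag against the memory tag and, since the
	 * two objects get different random tags, reports a mismatch.
	 */
	a[128] = 0xff;

	kfree(b);
	kfree(a);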
--
Kind regards
Maciej Wieczór-Retman