* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
[not found] <20170601162338.23540-1-aryabinin@virtuozzo.com>
@ 2017-06-01 16:23 ` Andrey Ryabinin
2017-06-01 16:34 ` Mark Rutland
From: Andrey Ryabinin @ 2017-06-01 16:23 UTC (permalink / raw)
To: linux-arm-kernel
We used to read several bytes of the shadow memory in advance.
Therefore, additional shadow memory was mapped to prevent a crash if a
speculative load happened near the end of the mapped shadow memory.
Now that we don't have such speculative loads, we no longer need to map
additional shadow memory.
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/kasan_init.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 687a358a3733..81f03959a4ab 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -191,14 +191,8 @@ void __init kasan_init(void)
if (start >= end)
break;
- /*
- * end + 1 here is intentional. We check several shadow bytes in
- * advance to slightly speed up fastpath. In some rare cases
- * we could cross boundary of mapped shadow, so we just map
- * some more here.
- */
vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
- (unsigned long)kasan_mem_to_shadow(end) + 1,
+ (unsigned long)kasan_mem_to_shadow(end),
pfn_to_nid(virt_to_pfn(start)));
}
--
2.13.0
* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
2017-06-01 16:23 ` [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory Andrey Ryabinin
@ 2017-06-01 16:34 ` Mark Rutland
2017-06-01 16:45 ` Dmitry Vyukov
From: Mark Rutland @ 2017-06-01 16:34 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
> We used to read several bytes of the shadow memory in advance.
> Therefore additional shadow memory mapped to prevent crash if
> speculative load would happen near the end of the mapped shadow memory.
>
> Now we don't have such speculative loads, so we no longer need to map
> additional shadow memory.
I see that patch 1 fixed up the Linux helpers for outline
instrumentation.
Just to check, is it also true that the inline instrumentation never
performs unaligned accesses to the shadow memory?
If so, this looks good to me; it also avoids a potential fencepost issue
when memory exists right at the end of the linear map. Assuming that
holds:
Acked-by: Mark Rutland <mark.rutland@arm.com>
Thanks,
Mark.
>
> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: linux-arm-kernel at lists.infradead.org
> ---
> arch/arm64/mm/kasan_init.c | 8 +-------
> 1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 687a358a3733..81f03959a4ab 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -191,14 +191,8 @@ void __init kasan_init(void)
> if (start >= end)
> break;
>
> - /*
> - * end + 1 here is intentional. We check several shadow bytes in
> - * advance to slightly speed up fastpath. In some rare cases
> - * we could cross boundary of mapped shadow, so we just map
> - * some more here.
> - */
> vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
> - (unsigned long)kasan_mem_to_shadow(end) + 1,
> + (unsigned long)kasan_mem_to_shadow(end),
> pfn_to_nid(virt_to_pfn(start)));
> }
>
> --
> 2.13.0
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel at lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
2017-06-01 16:34 ` Mark Rutland
@ 2017-06-01 16:45 ` Dmitry Vyukov
2017-06-01 16:52 ` Mark Rutland
From: Dmitry Vyukov @ 2017-06-01 16:45 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>> We used to read several bytes of the shadow memory in advance.
>> Therefore additional shadow memory mapped to prevent crash if
>> speculative load would happen near the end of the mapped shadow memory.
>>
>> Now we don't have such speculative loads, so we no longer need to map
>> additional shadow memory.
>
> I see that patch 1 fixed up the Linux helpers for outline
> instrumentation.
>
> Just to check, is it also true that the inline instrumentation never
> performs unaligned accesses to the shadow memory?
Inline instrumentation generally accesses only a single byte.
> If so, this looks good to me; it also avoids a potential fencepost issue
> when memory exists right at the end of the linear map. Assuming that
> holds:
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> Thanks,
> Mark.
>
>>
>> Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: linux-arm-kernel at lists.infradead.org
>> ---
>> arch/arm64/mm/kasan_init.c | 8 +-------
>> 1 file changed, 1 insertion(+), 7 deletions(-)
>>
>> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
>> index 687a358a3733..81f03959a4ab 100644
>> --- a/arch/arm64/mm/kasan_init.c
>> +++ b/arch/arm64/mm/kasan_init.c
>> @@ -191,14 +191,8 @@ void __init kasan_init(void)
>> if (start >= end)
>> break;
>>
>> - /*
>> - * end + 1 here is intentional. We check several shadow bytes in
>> - * advance to slightly speed up fastpath. In some rare cases
>> - * we could cross boundary of mapped shadow, so we just map
>> - * some more here.
>> - */
>> vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
>> - (unsigned long)kasan_mem_to_shadow(end) + 1,
>> + (unsigned long)kasan_mem_to_shadow(end),
>> pfn_to_nid(virt_to_pfn(start)));
>> }
>>
>> --
>> 2.13.0
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel at lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
2017-06-01 16:45 ` Dmitry Vyukov
@ 2017-06-01 16:52 ` Mark Rutland
2017-06-01 16:59 ` Andrey Ryabinin
From: Mark Rutland @ 2017-06-01 16:52 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
> >> We used to read several bytes of the shadow memory in advance.
> >> Therefore additional shadow memory mapped to prevent crash if
> >> speculative load would happen near the end of the mapped shadow memory.
> >>
> >> Now we don't have such speculative loads, so we no longer need to map
> >> additional shadow memory.
> >
> > I see that patch 1 fixed up the Linux helpers for outline
> > instrumentation.
> >
> > Just to check, is it also true that the inline instrumentation never
> > performs unaligned accesses to the shadow memory?
>
> Inline instrumentation generally accesses only a single byte.
Sorry to be a little pedantic, but does that mean we'll never access the
additional shadow, or does that mean it's very unlikely that we will?
I'm guessing/hoping it's the former!
Thanks,
Mark.
* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
2017-06-01 16:52 ` Mark Rutland
@ 2017-06-01 16:59 ` Andrey Ryabinin
2017-06-01 17:00 ` Andrey Ryabinin
From: Andrey Ryabinin @ 2017-06-01 16:59 UTC (permalink / raw)
To: linux-arm-kernel
On 06/01/2017 07:52 PM, Mark Rutland wrote:
> On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
>> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>>> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>>>> We used to read several bytes of the shadow memory in advance.
>>>> Therefore additional shadow memory mapped to prevent crash if
>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>
>>>> Now we don't have such speculative loads, so we no longer need to map
>>>> additional shadow memory.
>>>
>>> I see that patch 1 fixed up the Linux helpers for outline
>>> instrumentation.
>>>
>>> Just to check, is it also true that the inline instrumentation never
>>> performs unaligned accesses to the shadow memory?
>>
Correct, inline instrumentation assumes that all accesses are properly
aligned, as required by the C standard. I knew that the kernel violates
this rule in many places, so I decided to add checks for unaligned
accesses in the outline case.
>> Inline instrumentation generally accesses only a single byte.
>
> Sorry to be a little pedantic, but does that mean we'll never access the
> additional shadow, or does that mean it's very unlikely that we will?
>
> I'm guessing/hoping it's the former!
>
Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses
> Thanks,
> Mark.
>
* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
2017-06-01 16:59 ` Andrey Ryabinin
@ 2017-06-01 17:00 ` Andrey Ryabinin
2017-06-01 17:05 ` Dmitry Vyukov
From: Andrey Ryabinin @ 2017-06-01 17:00 UTC (permalink / raw)
To: linux-arm-kernel
On 06/01/2017 07:59 PM, Andrey Ryabinin wrote:
>
>
> On 06/01/2017 07:52 PM, Mark Rutland wrote:
>> On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
>>> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>>>> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>>>>> We used to read several bytes of the shadow memory in advance.
>>>>> Therefore additional shadow memory mapped to prevent crash if
>>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>>
>>>>> Now we don't have such speculative loads, so we no longer need to map
>>>>> additional shadow memory.
>>>>
>>>> I see that patch 1 fixed up the Linux helpers for outline
>>>> instrumentation.
>>>>
>>>> Just to check, is it also true that the inline instrumentation never
>>>> performs unaligned accesses to the shadow memory?
>>>
>
> Correct, inline instrumentation assumes that all accesses are properly aligned as it
> required by C standard. I knew that the kernel violates this rule in many places,
> therefore I decided to add checks for unaligned accesses in outline case.
>
>
>>> Inline instrumentation generally accesses only a single byte.
>>
>> Sorry to be a little pedantic, but does that mean we'll never access the
>> additional shadow, or does that mean it's very unlikely that we will?
>>
>> I'm guessing/hoping it's the former!
>>
>
> Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses
s/Outline/inline of course.
>
>> Thanks,
>> Mark.
>>
* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
2017-06-01 17:00 ` Andrey Ryabinin
@ 2017-06-01 17:05 ` Dmitry Vyukov
2017-06-01 17:38 ` Dmitry Vyukov
From: Dmitry Vyukov @ 2017-06-01 17:05 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 1, 2017 at 7:00 PM, Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>
>
> On 06/01/2017 07:59 PM, Andrey Ryabinin wrote:
>>
>>
>> On 06/01/2017 07:52 PM, Mark Rutland wrote:
>>> On Thu, Jun 01, 2017 at 06:45:32PM +0200, Dmitry Vyukov wrote:
>>>> On Thu, Jun 1, 2017 at 6:34 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>>>>> On Thu, Jun 01, 2017 at 07:23:37PM +0300, Andrey Ryabinin wrote:
>>>>>> We used to read several bytes of the shadow memory in advance.
>>>>>> Therefore additional shadow memory mapped to prevent crash if
>>>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>>>
>>>>>> Now we don't have such speculative loads, so we no longer need to map
>>>>>> additional shadow memory.
>>>>>
>>>>> I see that patch 1 fixed up the Linux helpers for outline
>>>>> instrumentation.
>>>>>
>>>>> Just to check, is it also true that the inline instrumentation never
>>>>> performs unaligned accesses to the shadow memory?
>>>>
>>
>> Correct, inline instrumentation assumes that all accesses are properly aligned as it
>> required by C standard. I knew that the kernel violates this rule in many places,
>> therefore I decided to add checks for unaligned accesses in outline case.
>>
>>
>>>> Inline instrumentation generally accesses only a single byte.
>>>
>>> Sorry to be a little pedantic, but does that mean we'll never access the
>>> additional shadow, or does that mean it's very unlikely that we will?
>>>
>>> I'm guessing/hoping it's the former!
>>>
>>
>> Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses
>
> s/Outline/inline of course.
I suspect that actual implementations have diverged from that
description. Trying to follow asan_expand_check_ifn in:
https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/asan.c?revision=246703&view=markup
but it's not trivial.
+Yuri, maybe you know off the top of your head if asan instrumentation
in gcc ever accesses off-by-one shadow byte (i.e. 1 byte after actual
object end)?
* [PATCH 3/4] arm64/kasan: don't allocate extra shadow memory
2017-06-01 17:05 ` Dmitry Vyukov
@ 2017-06-01 17:38 ` Dmitry Vyukov
From: Dmitry Vyukov @ 2017-06-01 17:38 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Jun 1, 2017 at 7:05 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>>>>>>> We used to read several bytes of the shadow memory in advance.
>>>>>>> Therefore additional shadow memory mapped to prevent crash if
>>>>>>> speculative load would happen near the end of the mapped shadow memory.
>>>>>>>
>>>>>>> Now we don't have such speculative loads, so we no longer need to map
>>>>>>> additional shadow memory.
>>>>>>
>>>>>> I see that patch 1 fixed up the Linux helpers for outline
>>>>>> instrumentation.
>>>>>>
>>>>>> Just to check, is it also true that the inline instrumentation never
>>>>>> performs unaligned accesses to the shadow memory?
>>>>>
>>>
>>> Correct, inline instrumentation assumes that all accesses are properly aligned as it
>>> required by C standard. I knew that the kernel violates this rule in many places,
>>> therefore I decided to add checks for unaligned accesses in outline case.
>>>
>>>
>>>>> Inline instrumentation generally accesses only a single byte.
>>>>
>>>> Sorry to be a little pedantic, but does that mean we'll never access the
>>>> additional shadow, or does that mean it's very unlikely that we will?
>>>>
>>>> I'm guessing/hoping it's the former!
>>>>
>>>
>>> Outline will never access additional shadow byte: https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#unaligned-accesses
>>
>> s/Outline/inline of course.
>
>
> I suspect that actual implementations have diverged from that
> description. Trying to follow asan_expand_check_ifn in:
> https://gcc.gnu.org/viewcvs/gcc/trunk/gcc/asan.c?revision=246703&view=markup
> but it's not trivial.
>
> +Yuri, maybe you know off the top of your head if asan instrumentation
> in gcc ever accesses off-by-one shadow byte (i.e. 1 byte after actual
> object end)?
Thinking about this more: there is at least one case in user-space asan
where an off-by-one shadow access would lead to similar crashes. For
mmap-ed regions we don't have redzones and map shadow only for the
region itself, so any off-by-one access there would crash. So I guess
we are safe here; or at least any such crash would be a gcc bug.