From: David Hildenbrand <david@redhat.com>
To: Vlastimil Babka <vbabka@suse.cz>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Suren Baghdasaryan <surenb@google.com>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Matthew Wilcox <willy@infradead.org>,
"Paul E . McKenney" <paulmck@kernel.org>,
Jann Horn <jannh@google.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Muchun Song <muchun.song@linux.dev>,
Richard Henderson <richard.henderson@linaro.org>,
Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
Matt Turner <mattst88@gmail.com>,
Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
"James E . J . Bottomley" <James.Bottomley@HansenPartnership.com>,
Helge Deller <deller@gmx.de>, Chris Zankel <chris@zankel.net>,
Max Filippov <jcmvbkbc@gmail.com>, Arnd Bergmann <arnd@arndb.de>,
linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
linux-parisc@vger.kernel.org, linux-arch@vger.kernel.org,
Shuah Khan <shuah@kernel.org>,
Christian Brauner <brauner@kernel.org>,
linux-kselftest@vger.kernel.org,
Sidhartha Kumar <sidhartha.kumar@oracle.com>,
Jeff Xu <jeffxu@chromium.org>,
Christoph Hellwig <hch@infradead.org>,
linux-api@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>
Subject: Re: [PATCH v2 3/5] mm: madvise: implement lightweight guard page mechanism
Date: Mon, 21 Oct 2024 22:17:32 +0200 [thread overview]
Message-ID: <6c282299-506f-45c9-9ddc-9ef4de582394@redhat.com> (raw)
In-Reply-To: <c37ada68-5bf5-4ca5-9de8-c0838160c443@suse.cz>
On 21.10.24 22:11, Vlastimil Babka wrote:
> On 10/20/24 18:20, Lorenzo Stoakes wrote:
>> Implement a new lightweight guard page feature: regions of userland
>> virtual memory that, when accessed, cause a fatal signal to be raised.
>>
>> Currently users must establish PROT_NONE ranges to achieve this.
>>
>> However, this is very costly memory-wise - we need a VMA for each and
>> every one of these regions AND they become unmergeable with surrounding
>> VMAs.
>>
>> In addition, repeated mmap() calls require repeated kernel context switches
>> and contention of the mmap lock to install these ranges, potentially also
>> having to unmap memory if installed over existing ranges.
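
For concreteness, the status quo being replaced looks roughly like the
sketch below (a hypothetical helper for illustration, not code from this
series):

	#include <sys/mman.h>

	/* One guard region costs one extra VMA, which cannot merge with
	 * its anonymous read/write neighbours. MAP_FIXED also unmaps
	 * anything already present in the range. */
	static void *install_guard_prot_none(void *addr, size_t len)
	{
		return mmap(addr, len, PROT_NONE,
			    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
	}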
>>
>> The lightweight guard approach eliminates the VMA cost altogether - rather
>> than establishing a PROT_NONE VMA, it operates at the level of page table
>> entries - poisoning PTEs such that accesses to them cause a fault followed
>> by a SIGSEGV signal being raised.
>>
>> This is achieved through the PTE marker mechanism, which a previous
>> commit in this series extended to permit this, with the markers
>> installed via the generic page walking logic, which a prior commit
>> also extended for this purpose.
>>
>> These poison ranges are established with MADV_GUARD_POISON, and if the
>> range in which they are installed contains any existing mappings, they will
>> be zapped, i.e. free the range and unmap memory (thus mimicking the
>> behaviour of MADV_DONTNEED in this respect).
>>
>> Any existing poison entries will be left untouched. There is no nesting of
>> poisoned pages.
>>
>> Poisoned ranges are NOT cleared by MADV_DONTNEED, as this would be rather
>> unexpected behaviour, but are cleared on process teardown or unmapping of
>> memory ranges.
>>
>> Ranges can have the poison property removed by MADV_GUARD_UNPOISON -
>> 'remedying' the poisoning. The ranges over which this is applied, should
>> they contain non-poison entries, will be untouched; only poison entries
>> will be cleared.
>>
>> We permit this operation on anonymous memory only, and only VMAs which are
>> non-special, non-huge and not mlock()'d (if we permitted this, we'd have
>> to drop locked pages, which would be rather counterintuitive).
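
In userspace terms, the intended allocator debug-mode usage would be
roughly the sketch below (it assumes a uapi header providing the
MADV_GUARD_* values added elsewhere in this series):

	#include <stdio.h>
	#include <sys/mman.h>

	/* A freed slot becomes a guard region; any existing mappings in
	 * the range are zapped, mimicking MADV_DONTNEED. */
	static void guard_freed_slot(void *slot, size_t len)
	{
		if (madvise(slot, len, MADV_GUARD_POISON))
			perror("MADV_GUARD_POISON");
	}

	/* On reuse, clear only the poison markers; non-poison entries
	 * in the range are left untouched. */
	static void unguard_slot(void *slot, size_t len)
	{
		if (madvise(slot, len, MADV_GUARD_UNPOISON))
			perror("MADV_GUARD_UNPOISON");
	}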
>>
>> Suggested-by: Vlastimil Babka <vbabka@suse.cz>
>> Suggested-by: Jann Horn <jannh@google.com>
>> Suggested-by: David Hildenbrand <david@redhat.com>
>> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> <snip>
>
>> +static long madvise_guard_poison(struct vm_area_struct *vma,
>> + struct vm_area_struct **prev,
>> + unsigned long start, unsigned long end)
>> +{
>> + long err;
>> +
>> + *prev = vma;
>> + if (!is_valid_guard_vma(vma, /* allow_locked = */false))
>> + return -EINVAL;
>> +
>> + /*
>> + * If we install poison markers, then the range is no longer
>> + * empty from a page table perspective and therefore it's
>> + * appropriate to have an anon_vma.
>> + *
>> + * This ensures that on fork, we copy page tables correctly.
>> + */
>> + err = anon_vma_prepare(vma);
>> + if (err)
>> + return err;
>> +
>> + /*
>> + * Optimistically try to install the guard poison pages first. If any
>> + * non-guard pages are encountered, give up and zap the range before
>> + * trying again.
>> + */
>
> Should the page walker become powerful enough to handle this in one go? :)
> But sure, if it's too big a task to teach it to zap ptes with all the tlb
> flushing etc. (I assume that's something page walkers don't do today), it
> makes sense to do it this way.
> Or we could require userspace to zap first (MADV_DONTNEED), but that would
> unnecessarily mean extra syscalls for the use case of an allocator debug
> mode that wants to turn freed memory into guards to catch use-after-free.
> So this seems like a good compromise...
Yes please, KIS (keep it simple). We can always implement support for that
later if really required (leave the behavior open when documenting).
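
For readers following along: the retry loop elided after the quoted
comment has roughly the shape below. This is a reconstruction based on
the discussion, not a verbatim quote of the patch; walk_page_range_mm()
and guard_poison_walk_ops are assumed to be the names this series
introduces in its page-walker patch.

	while (true) {
		/* Install guard markers; returns > 0 if a non-guard
		 * entry was encountered, <= 0 on success or error. */
		err = walk_page_range_mm(vma->vm_mm, start, end,
					 &guard_poison_walk_ops, NULL);
		if (err <= 0)
			return err;

		/* Somebody mapped pages in the range: zap them (any
		 * existing poison markers are left in place) and retry. */
		zap_page_range_single(vma, start, end - start, NULL);

		if (fatal_signal_pending(current))
			return -EINTR;
		cond_resched();
	}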
--
Cheers,
David / dhildenb