From: David Hildenbrand <david@redhat.com>
To: Asahi Lina <lina@asahilina.net>, Zi Yan <ziy@nvidia.com>
Cc: "Miguel Ojeda" <ojeda@kernel.org>,
"Alex Gaynor" <alex.gaynor@gmail.com>,
"Boqun Feng" <boqun.feng@gmail.com>,
"Gary Guo" <gary@garyguo.net>,
"Björn Roy Baron" <bjorn3_gh@protonmail.com>,
"Benno Lossin" <benno.lossin@proton.me>,
"Andreas Hindborg" <a.hindborg@kernel.org>,
"Alice Ryhl" <aliceryhl@google.com>,
"Trevor Gross" <tmgross@umich.edu>,
"Jann Horn" <jannh@google.com>,
"Matthew Wilcox" <willy@infradead.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"Wedson Almeida Filho" <wedsonaf@gmail.com>,
"Valentin Obst" <kernel@valentinobst.de>,
"Andrew Morton" <akpm@linux-foundation.org>,
linux-mm@kvack.org, airlied@redhat.com,
"Abdiel Janulgue" <abdiel.janulgue@gmail.com>,
rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org,
asahi@lists.linux.dev, "Oscar Salvador" <osalvador@suse.de>,
"Muchun Song" <muchun.song@linux.dev>
Subject: Re: [PATCH 0/6] rust: page: Support borrowing `struct page` and physaddr conversion
Date: Wed, 12 Feb 2025 20:06:08 +0100
Message-ID: <d809a46d-0bb2-4e78-8810-24e374131dcd@redhat.com>
In-Reply-To: <f042dcf3-10b9-4b58-9c98-5b83910ab188@asahilina.net>
On 06.02.25 20:27, Asahi Lina wrote:
>
>
> On 2/7/25 4:18 AM, Asahi Lina wrote:
>>
>>
>> On 2/7/25 2:58 AM, David Hildenbrand wrote:
>>> On 04.02.25 22:06, Asahi Lina wrote:
>>>>
>>>>
>>>> On 2/5/25 5:10 AM, David Hildenbrand wrote:
>>>>> On 04.02.25 18:59, Asahi Lina wrote:
>>>>>> On 2/4/25 11:38 PM, David Hildenbrand wrote:
>>>>>>>>>> If the answer is "no" then that's fine. It's still an unsafe
>>>>>>>>>> function and we need to document in the safety section that
>>>>>>>>>> it should only be used for memory that is either known to be
>>>>>>>>>> allocated and pinned and will not be freed while the `struct
>>>>>>>>>> page` is borrowed, or memory that is reserved and not owned
>>>>>>>>>> by the buddy allocator, so in practice correct use would not
>>>>>>>>>> be racy with memory hot-remove anyway.
>>>>>>>>>>
>>>>>>>>>> This is already the case for the drm/asahi use case, where the pfns
>>>>>>>>>> looked up will only ever be one of:
>>>>>>>>>>
>>>>>>>>>> - GEM objects that are mapped to the GPU and whose physical
>>>>>>>>>>   pages are therefore pinned (and the VM is locked while this
>>>>>>>>>>   happens so the objects cannot become unpinned out from under
>>>>>>>>>>   the running code),
>>>>>>>>>
>>>>>>>>> How exactly are these pages pinned/obtained?
>>>>>>>>
>>>>>>>> Under the hood it's shmem. For pinning, it winds up at
>>>>>>>> `drm_gem_get_pages()`, which I think does a
>>>>>>>> `shmem_read_folio_gfp()` on a mapping set as unevictable.
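
(For reference, the pinning path being described looks roughly like the
sketch below. This is illustrative only -- pin_backing_pages() is a
made-up helper name, not the driver's actual code.)

#include <drm/drm_gem.h>
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/pagemap.h>

/* Illustrative sketch only, not the driver's actual code. */
static int pin_backing_pages(struct drm_gem_object *obj, struct page ***out)
{
        struct page **pages;

        /* The shmem mapping is marked unevictable so reclaim leaves it alone. */
        mapping_set_unevictable(file_inode(obj->filp)->i_mapping);

        /*
         * Populates the object from its shmem backing (ultimately via
         * shmem_read_folio_gfp()) and takes a reference on each page,
         * which keeps the pages resident while the object stays pinned.
         */
        pages = drm_gem_get_pages(obj);
        if (IS_ERR(pages))
                return PTR_ERR(pages);

        *out = pages;
        return 0;
}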
>>>>>>>
>>>>>>> Thanks. So we grab another folio reference via
>>>>>>> shmem_read_folio_gfp()->shmem_get_folio_gfp().
>>>>>>>
>>>>>>> Hm, I wonder if we might end up holding folios residing in
>>>>>>> ZONE_MOVABLE/MIGRATE_CMA longer than we should.
>>>>>>>
>>>>>>> Compared to memfd_pin_folios(), which simulates FOLL_LONGTERM and
>>>>>>> makes sure to migrate pages out of ZONE_MOVABLE/MIGRATE_CMA.
>>>>>>>
>>>>>>> But that's a different discussion, just pointing it out, maybe I'm
>>>>>>> missing something :)
>>>>>>
>>>>>> I think this is a little over my head. Though I only just
>>>>>> realized that we seem to be keeping the GEM objects pinned
>>>>>> forever, even after unmap, in the drm-shmem core API (I see no
>>>>>> drm-shmem entry point that would allow the sgt to be freed and
>>>>>> its corresponding pages ref to be dropped, other than a purge of
>>>>>> purgeable objects or final destruction of the object). I'll poke
>>>>>> around since this feels wrong, I thought we were supposed to be
>>>>>> able to have shrinker support for swapping out whole GPU VMs in
>>>>>> the modern GPU MM model, but I guess there's no implementation of
>>>>>> that for gem-shmem drivers yet...?
>>>>>
>>>>> I recall that shrinker as well, ... or at least a discussion around it.
>>>>>
>>>>> [...]
>>>>>
>>>>>>>
>>>>>>> If it's only for crash dumps etc. that might even be opt-in, it
>>>>>>> makes the whole thing a lot less scary. Maybe this could be
>>>>>>> opt-in somewhere, to "unlock" this interface? Just an idea.
>>>>>>
>>>>>> Just to make sure we're on the same page, I don't think there's
>>>>>> anything to unlock in the Rust abstraction side (this series). At
>>>>>> the end of the day, if nothing else, the unchecked interface
>>>>>> (which the regular non-crash page table management code uses for
>>>>>> performance) will let you use any pfn you want, it's up to
>>>>>> documentation and human review to specify how it should be used
>>>>>> by drivers. What Rust gives us here is the mandatory `unsafe {}`,
>>>>>> so any attempts to use this API will necessarily stick out during
>>>>>> review as potentially dangerous code that needs extra scrutiny.
>>>>>>
>>>>>> For the client driver itself, I could gate the devcoredump stuff behind
>>>>>> a module parameter or something... but I don't think it's really worth
>>>>>> it. We don't have a way to reboot the firmware or recover from this
>>>>>> condition (platform limitations), so end users are stuck rebooting to
>>>>>> get back a usable machine anyway. If something goes wrong in the
>>>>>> crashdump code and the machine oopses or locks up worse... it doesn't
>>>>>> really make much of a difference for normal end users. I don't think
>>>>>> this will ever really happen given the constraints I described, but if
>>>>>> somehow it does (some other bug somewhere?), well... the machine was
>>>>>> already in an unrecoverable state anyway.
>>>>>>
>>>>>> It would be nice to have userspace tooling deployed by default that
>>>>>> saves off the devcoredump somewhere, so we can have a chance at
>>>>>> debugging hard-to-hit firmware crashes... if it's opt-in, it would only
>>>>>> really be useful for developers and CI machines.
>>>>>
>>>>> Is this something that possibly kdump can save or analyze? Because that
>>>>> is our default "oops, kernel crashed, let's dump the old content so we
>>>>> can dump it" mechanism on production systems.
>>>>
>>>> kdump does not work on Apple ARM systems because kexec is broken and
>>>> cannot be fully fixed, due to multiple platform/firmware limitations. A
>>>> very limited version of kexec might work well enough for kdump, but I
>>>> don't think anyone has looked into making that work yet...
>>>>
>>>>> but ... I am not familiar with devcoredump. So I don't know when/how it
>>>>> runs, and if the source system is still alive (and remains alive -- in
>>>>> contrast to a kernel crash).
>>>>
>>>> Devcoredump just makes the dump available via /sys so it can be
>>>> collected by the user. The system is still alive, the GPU is just dead
>>>> and all future GPU job submissions fail. You can still SSH in or (at
>>>> least in theory, if enough moving parts are graceful about it) VT-switch
>>>> to a TTY. The display controller is not part of the GPU, it is separate
>>>> hardware.
>>>
>>>
>>> Thanks for all the details (and sorry for the delay, I'm on PTO until
>>> Monday ... :)
>>>
>>> (regarding the other mail) Adding that stuff to rust just so we have a
>>> devcoredump that ideally wouldn't exist is a bit unfortunate.
>>>
>>> So I'm curious: we do have /proc/kcore, where we do all of the required
>>> filtering, only allowing for reading memory that is online, not
>>> hwpoisoned etc.
>>>
>>> makedumpfile already supports /proc/kcore.
>>>
>>> Would it be possible to avoid Devcoredump completely either by
>>> dumping /proc/kcore directly or by having a user-space script that
>>> walks the page tables to dump the content purely based on /proc/kcore?
>>>
>>> If relevant memory ranges are inaccessible from /proc/kcore, we could
>>> look into exposing them.
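
(For context: /proc/kcore is an ELF core image of the live kernel, so the
memory it exposes shows up as PT_LOAD segments that makedumpfile -- or a
small user-space program like the hypothetical sketch below -- can
enumerate and read. Illustrative only.)

#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* List the memory ranges /proc/kcore exposes as ELF LOAD segments. */
int main(void)
{
        Elf64_Ehdr eh;
        int fd = open("/proc/kcore", O_RDONLY);

        if (fd < 0 || pread(fd, &eh, sizeof(eh), 0) != sizeof(eh))
                return 1;

        for (unsigned int i = 0; i < eh.e_phnum; i++) {
                Elf64_Phdr ph;

                if (pread(fd, &ph, sizeof(ph),
                          eh.e_phoff + (off_t)i * eh.e_phentsize) != sizeof(ph))
                        break;
                if (ph.p_type == PT_LOAD)
                        printf("vaddr 0x%llx size 0x%llx at file offset 0x%llx\n",
                               (unsigned long long)ph.p_vaddr,
                               (unsigned long long)ph.p_memsz,
                               (unsigned long long)ph.p_offset);
        }
        close(fd);
        return 0;
}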
>>
>> I'm not sure that's a good idea... the dump code runs when the GPU
>> crashes, and makes copies of all the memory pages into newly allocated
>> pages (this is around 16MB for a typical dump, and if allocation fails
>> we just bail and clean up). Then userspace can read the coredump at its
>> leisure. AIUI, this is exactly the intended use case of devcoredump. It
>> also means that anyone can grab a core dump with just a `cp`, without
>> needing any bespoke tools.
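
(For readers not familiar with devcoredump, the flow described above maps
roughly onto the sketch below; gpu_take_crashdump() is a made-up name and
the buffer handling is simplified, this is not the driver's actual code.)

#include <linux/device.h>
#include <linux/devcoredump.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Illustrative sketch only: snapshot state at crash time, then let
 * devcoredump expose it under /sys/class/devcoredump/ until userspace
 * reads it (e.g. with a plain cp) or it times out. */
static void gpu_take_crashdump(struct device *dev, const void *state, size_t len)
{
        void *buf = vmalloc(len);

        if (!buf)
                return;         /* allocation failed: bail and clean up */

        memcpy(buf, state, len);

        /* devcoredump takes ownership of buf and vfree()s it when done. */
        dev_coredumpv(dev, buf, len, GFP_KERNEL);
}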
>>
>> After the snapshot is taken, the kernel will complete (fail) all GPU
>> jobs, which means much of the shared memory will be freed and some
>> structures will change contents. If we defer the coredump to userspace,
>> then it would not be able to capture the state of all relevant memory
>> exactly at the crash time, which could be very confusing.
>>
>> In theory I could change the allocators to not free or touch anything
>> after a crash, and add guards to any mutations in the driver to avoid
>> any changes after a crash... but that feels a lot more brittle and
>> error-prone than just taking the core dump at the right time.
>>
>
> If the arbitrary page lookups are that big a problem, I think I would
> rather just memremap all the bootloader-mapped firmware areas, hook
> into all the allocators to provide a backdoor into the backing objects,
> and just piece everything together by mapping page addresses to those.
> It would be a bunch of extra code and scaffolding in the driver, and
> require device tree and bootloader changes to link up the GPU node to
> its firmware nodes, but it's still better than trying to do it all from
> userspace IMO...

Yes. Ideally, we'd not open up the can of worms of arbitrary pfn -> page
conversions (including the pfn_to_online_page() etc. nastiness) if it can
be avoided in Rust. Once there is an interface to do it, it's likely that
new users will pop up that are not just "create a simple dump, I know
what I am doing and only want sanity checks".
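
To make that "nastiness" concrete, a checked pfn -> page conversion has to
do something like the sketch below today -- and even that is simplified
(it does not fully close races with memory hot-remove, for example).
checked_pfn_to_page() is just an illustrative name:

#include <linux/memory_hotplug.h>
#include <linux/mm.h>

/* Illustrative sketch of the checks a pfn walker needs; simplified. */
static struct page *checked_pfn_to_page(unsigned long pfn)
{
        struct page *page;

        if (!pfn_valid(pfn))
                return NULL;

        /* Rejects offline or uninitialized sections under memory hotplug. */
        page = pfn_to_online_page(pfn);
        if (!page)
                return NULL;

        /* Don't take a reference on a page that is free in the buddy. */
        if (!get_page_unless_zero(page))
                return NULL;

        return page;    /* caller must put_page() when done */
}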

So it would be good if we could prevent new pfn walkers in Rust somehow;
they are already a pain to maintain+fix in C (and the upcoming folio/memdesc
work will only make that worse).

But if it's too hard to avoid, then it also doesn't make sense to
overcomplicate things to work around it.
--
Cheers,
David / dhildenb