From: Matthew Wilcox <willy@infradead.org>
To: Kees Cook <keescook@chromium.org>
Cc: Uladzislau Rezki <urezki@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Yu Zhao <yuzhao@google.com>,
dev@der-flo.net, linux-mm@kvack.org,
linux-hardening@vger.kernel.org,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
linux-kernel@vger.kernel.org, x86@kernel.org,
linux-perf-users@vger.kernel.org, linux-arch@vger.kernel.org
Subject: Re: [PATCH 3/3] usercopy: Add find_vmap_area_try() to avoid deadlocks
Date: Fri, 16 Sep 2022 20:15:24 +0100
Message-ID: <YyTLTBM4OC6/RnjG@casper.infradead.org>
In-Reply-To: <202209160805.CA47B2D673@keescook>

On Fri, Sep 16, 2022 at 08:09:16AM -0700, Kees Cook wrote:
> On Fri, Sep 16, 2022 at 03:46:07PM +0100, Matthew Wilcox wrote:
> > On Fri, Sep 16, 2022 at 06:59:57AM -0700, Kees Cook wrote:
> > > The check_object_size() checks under CONFIG_HARDENED_USERCOPY need to be
> > > more defensive against running from interrupt context. Use a best-effort
> > > check for VMAP areas when running in interrupt context
> >
> > I had something more like this in mind:
>
> Yeah, I like -EAGAIN. I'd like to keep the interrupt test to choose lock
> vs trylock, otherwise it's trivial to bypass the hardening test by having
> all the other CPUs beating on the spinlock.

I was thinking about this:
+++ b/mm/vmalloc.c
@@ -1844,12 +1844,19 @@
 {
 	struct vmap_area *va;
 
-	if (!spin_lock(&vmap_area_lock))
-		return ERR_PTR(-EAGAIN);
+	/*
+	 * It's safe to walk the rbtree under the RCU lock, but we may
+	 * incorrectly find no vmap_area if the tree is being modified.
+	 */
+	rcu_read_lock();
 	va = __find_vmap_area(addr, &vmap_area_root);
-	spin_unlock(&vmap_area_lock);
+	if (!va && in_interrupt())
+		va = ERR_PTR(-EAGAIN);
+	rcu_read_unlock();
 
-	return va;
+	if (va)
+		return va;
+	return find_vmap_area(addr);
 }
 
 /*** Per cpu kva allocator ***/

... but I don't think that works since vmap_areas aren't freed by RCU,
and I think they're reused without going through an RCU cycle.
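
(For completeness, making that RCU walk safe would need the vmap_area to be
freed through a grace period, roughly like the untested sketch below.  It
assumes an rcu_head is added to struct vmap_area, which doesn't exist today,
and the helper names are made up.)

/* Hypothetical: defer the free until an RCU grace period has elapsed */
static void vmap_area_free_rcu(struct rcu_head *head)
{
	struct vmap_area *va = container_of(head, struct vmap_area, rcu);

	kmem_cache_free(vmap_area_cachep, va);
}

static void free_vmap_area_deferred(struct vmap_area *va)
{
	/* rcu_read_lock() walkers may still be looking at va */
	call_rcu(&va->rcu, vmap_area_free_rcu);
}

(Even then a walker racing a tree modification can miss its vmap_area, so
the non-interrupt path would still want to fall back to taking the lock.)
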
So here's attempt #4, which actually compiles, and is, I think, what you
had in mind.

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 096d48aa3437..2b7c52e76856 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -215,7 +215,7 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 void free_vm_area(struct vm_struct *area);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
-struct vmap_area *find_vmap_area(unsigned long addr);
+struct vmap_area *find_vmap_area_try(unsigned long addr);
 
 static inline bool is_vm_area_hugepages(const void *addr)
 {
diff --git a/mm/usercopy.c b/mm/usercopy.c
index c1ee15a98633..e0fb605c1b38 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -173,7 +173,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	}
 
 	if (is_vmalloc_addr(ptr)) {
-		struct vmap_area *area = find_vmap_area(addr);
+		struct vmap_area *area = find_vmap_area_try(addr);
+
+		/* We may be in NMI context */
+		if (area == ERR_PTR(-EAGAIN))
+			return;
 
 		if (!area)
 			usercopy_abort("vmalloc", "no area", to_user, 0, n);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index dd6cdb201195..c47b3b5d1c2d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1829,7 +1829,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	free_vmap_area_noflush(va);
 }
 
-struct vmap_area *find_vmap_area(unsigned long addr)
+static struct vmap_area *find_vmap_area(unsigned long addr)
 {
 	struct vmap_area *va;
 
@@ -1840,6 +1840,26 @@ struct vmap_area *find_vmap_area(unsigned long addr)
 	return va;
 }
 
+/*
+ * The vmap_area_lock is not interrupt-safe, and we can end up here from
+ * NMI context, so it's not worth even trying to make it IRQ-safe.
+ */
+struct vmap_area *find_vmap_area_try(unsigned long addr)
+{
+	struct vmap_area *va;
+
+	if (in_interrupt()) {
+		if (!spin_trylock(&vmap_area_lock))
+			return ERR_PTR(-EAGAIN);
+	} else {
+		spin_lock(&vmap_area_lock);
+	}
+	va = __find_vmap_area(addr, &vmap_area_root);
+	spin_unlock(&vmap_area_lock);
+
+	return va;
+}
+
 /*** Per cpu kva allocator ***/
 
 /*