From: Kees Cook <keescook@chromium.org>
To: Eric Biederman <ebiederm@xmission.com>
Cc: "Kees Cook" <keescook@chromium.org>,
"Alexander Viro" <viro@zeniv.linux.org.uk>,
"Christian Brauner" <brauner@kernel.org>,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
"Sebastian Ott" <sebott@redhat.com>,
"Thomas Weißschuh" <linux@weissschuh.net>,
"Pedro Falcato" <pedro.falcato@gmail.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
Subject: [PATCH v4 5/6] binfmt_elf: Only report padzero() errors when PROT_WRITE
Date: Thu, 28 Sep 2023 20:24:33 -0700
Message-ID: <20230929032435.2391507-5-keescook@chromium.org>
In-Reply-To: <20230929031716.it.155-kees@kernel.org>
Errors from padzero() should be reported unless we are expecting a
pathological (non-writable) segment, in which case the zeroing is
expected to fail. Return -EFAULT only when PROT_WRITE is present.

Additionally add more documentation to padzero(), elf_map(), and
elf_load().
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-mm@kvack.org
Suggested-by: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
fs/binfmt_elf.c | 32 +++++++++++++++++++++++---------
1 file changed, 23 insertions(+), 9 deletions(-)
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index f8b4747f87ed..22027b0a5923 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -110,19 +110,19 @@ static struct linux_binfmt elf_format = {
#define BAD_ADDR(x) (unlikely((unsigned long)(x) >= TASK_SIZE))
-/* We need to explicitly zero any fractional pages
- after the data section (i.e. bss). This would
- contain the junk from the file that should not
- be in memory
+/*
+ * We need to explicitly zero any trailing portion of the page that follows
+ * p_filesz when it ends before the page ends (e.g. bss), otherwise this
+ * memory will contain junk from the file that should not be present.
*/
-static int padzero(unsigned long elf_bss)
+static int padzero(unsigned long address)
{
unsigned long nbyte;
- nbyte = ELF_PAGEOFFSET(elf_bss);
+ nbyte = ELF_PAGEOFFSET(address);
if (nbyte) {
nbyte = ELF_MIN_ALIGN - nbyte;
- if (clear_user((void __user *) elf_bss, nbyte))
+ if (clear_user((void __user *)address, nbyte))
return -EFAULT;
}
return 0;
@@ -348,6 +348,11 @@ create_elf_tables(struct linux_binprm *bprm, const struct elfhdr *exec,
return 0;
}
+/*
+ * Map "eppnt->p_filesz" bytes from "filep" offset "eppnt->p_offset"
+ * into memory at "addr". (Note that p_filesz is rounded up to the
+ * next page, so any extra bytes from the file must be wiped.)
+ */
static unsigned long elf_map(struct file *filep, unsigned long addr,
const struct elf_phdr *eppnt, int prot, int type,
unsigned long total_size)
@@ -387,6 +392,11 @@ static unsigned long elf_map(struct file *filep, unsigned long addr,
return(map_addr);
}
+/*
+ * Map "eppnt->p_filesz" bytes from "filep" offset "eppnt->p_offset"
+ * into memory at "addr". Memory from "p_filesz" through "p_memsz"
+ * rounded up to the next page is zeroed.
+ */
static unsigned long elf_load(struct file *filep, unsigned long addr,
const struct elf_phdr *eppnt, int prot, int type,
unsigned long total_size)
@@ -404,8 +414,12 @@ static unsigned long elf_load(struct file *filep, unsigned long addr,
zero_end = map_addr + ELF_PAGEOFFSET(eppnt->p_vaddr) +
eppnt->p_memsz;
- /* Zero the end of the last mapped page */
- padzero(zero_start);
+ /*
+ * Zero the end of the last mapped page but ignore
+ * any errors if the segment isn't writable.
+ */
+ if (padzero(zero_start) && (prot & PROT_WRITE))
+ return -EFAULT;
}
} else {
map_addr = zero_start = ELF_PAGESTART(addr);
--
2.34.1