From: Janani Venkataraman <jananive@in.ibm.com>
To: linux-kernel@vger.kernel.org
Cc: amwang@redhat.com, rdunlap@xenotime.net, andi@firstfloor.org,
aravinda@linux.vnet.ibm.com, hch@lst.de, mhiramat@redhat.com,
jeremy.fitzhardinge@citrix.com, xemul@parallels.com,
suzuki@linux.vnet.ibm.com, kosaki.motohiro@jp.fujitsu.com,
adobriyan@gmail.com, tarundsk@linux.vnet.ibm.com,
vapier@gentoo.org, roland@hack.frob.com, tj@kernel.org,
ananth@linux.vnet.ibm.com, gorcunov@openvz.org,
avagin@openvz.org, oleg@redhat.com, eparis@redhat.com,
d.hatayama@jp.fujitsu.com, james.hogan@imgtec.com,
akpm@linux-foundation.org, torvalds@linux-foundation.org
Subject: [PATCH 16/19] Generate the data sections for ELF Core
Date: Fri, 04 Oct 2013 16:02:39 +0530 [thread overview]
Message-ID: <20131004103239.1612.31930.stgit@f19-x64> (raw)
In-Reply-To: <20131004102532.1612.24185.stgit@f19-x64>
From: Suzuki K. Poulose <suzuki@in.ibm.com>
Generate the "data" for the memory regions. Also write out the section header
when the number of phdrs exceeds PN_XNUM.
The vma areas are read page by page using access_process_vm(), without holding
mmap_sem. If there are active threads, we may miss a vma that is removed while
we are reading.
Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Ananth N. Mavinakayanahalli <ananth@in.ibm.com>
---
fs/proc/gencore-elf.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 88 insertions(+), 2 deletions(-)
diff --git a/fs/proc/gencore-elf.c b/fs/proc/gencore-elf.c
index efd1722..244e5bb 100644
--- a/fs/proc/gencore-elf.c
+++ b/fs/proc/gencore-elf.c
@@ -314,10 +314,29 @@ static int create_elf_header(struct core_proc *cp)
return 0;
}
+/*
+ * Verify if the fpos asked for in read is valid.
+ * Returns the phdr corresponding to offset, else NULL.
+ */
+static struct elf_phdr *get_pos_elfphdr(struct core_proc *cp, loff_t pos)
+{
+ struct elfhdr *elf_hdr = (struct elfhdr *)cp->elf_buf;
+ struct elf_phdr *phdr = (struct elf_phdr *)(cp->elf_buf + elf_hdr->e_phoff);
+ int i;
+
+ for (i = 0; i < cp->nphdrs; i++, phdr++) {
+ unsigned long end = phdr->p_offset + phdr->p_filesz;
+ if ((pos >= phdr->p_offset) && (pos < end) && phdr->p_filesz)
+ return phdr;
+ }
+ return NULL;
+}
+
ssize_t elf_read_gencore(struct core_proc *cp, char __user *buffer,
size_t buflen, loff_t *fpos)
{
- ssize_t ret = 0;
+ ssize_t ret = 0, acc = 0;
+ struct elfhdr *elf_hdr = (struct elfhdr *)cp->elf_buf;
if (!cp->notes_size) {
if (!collect_notes(cp)) {
@@ -349,13 +368,80 @@ ssize_t elf_read_gencore(struct core_proc *cp, char __user *buffer,
goto out;
} else {
ret = bcp;
+ acc = bcp;
*fpos += bcp;
buflen -= bcp;
buffer += bcp;
}
}
+
if (*fpos > cp->size)
- goto out;
+ goto done;
+
+ /*
+ * Read from the vma segments
+ * a. verify if the *fpos is within a phdr
+ * b. Use access_process_vm() to get data page by page
+ * c. copy_to_user into user buffer
+ */
+
+ while (buflen) {
+ size_t bufsz, offset, bytes;
+ char *readbuf;
+ struct elf_phdr *phdr = get_pos_elfphdr(cp, *fpos);
+
+ if (!phdr)
+ break;
+
+ bufsz = (buflen > PAGE_SIZE) ? PAGE_SIZE : buflen;
+ readbuf = kmalloc(bufsz, GFP_KERNEL);
+ if (!readbuf) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ offset = *fpos - phdr->p_offset;
+ bytes = access_process_vm(cp->task, (phdr->p_vaddr + offset),
+ readbuf, bufsz, 0);
+ if (!bytes) {
+ kfree(readbuf);
+ ret = -EIO;
+ goto out;
+ }
+ if (copy_to_user(buffer, readbuf, bytes)) {
+ ret = -EFAULT;
+ kfree(readbuf);
+ goto out;
+ }
+ acc += bytes;
+
+ kfree(readbuf);
+ buflen -= bytes;
+ buffer += bytes;
+ *fpos += bytes;
+ }
+
+ /* Fill extnum section header if present */
+ if (buflen &&
+ elf_hdr->e_shoff &&
+ (*fpos >= elf_hdr->e_shoff) &&
+ (*fpos < (elf_hdr->e_shoff + sizeof(struct elf_shdr)))) {
+
+ loff_t offset = *fpos - elf_hdr->e_shoff;
+ size_t shdrsz = sizeof(struct elf_shdr) - offset;
+
+ shdrsz = (buflen < shdrsz) ? buflen : shdrsz;
+ if (copy_to_user(buffer, ((char *)cp->shdr) + offset, shdrsz)) {
+ ret = -EFAULT;
+ goto out;
+ } else {
+ acc += shdrsz;
+ buflen -= shdrsz;
+ buffer += shdrsz;
+ }
+ }
+
+done:
+ ret = acc;
out:
return ret;
Thread overview: 31+ messages
2013-10-04 10:30 [RFC] [PATCH 00/19] Non disruptive application core dump infrastructure using task_work_add() Janani Venkataraman
2013-10-04 10:30 ` [PATCH 01/19] Create elfcore-common.c for ELF class independent core generation helpers Janani Venkataraman
2013-10-04 10:30 ` [PATCH 02/19] Make vma_dump_size() generic Janani Venkataraman
2013-10-08 0:23 ` Ryan Mallon
2013-10-08 3:52 ` Janani Venkataraman1
2013-10-04 10:31 ` [PATCH 03/19] Make fill_psinfo generic Janani Venkataraman
2013-10-04 10:31 ` [PATCH 04/19] Rename compat versions of the reusable core generation routines Janani Venkataraman
2013-10-04 10:31 ` [PATCH 05/19] Export the reusable ELF " Janani Venkataraman
2013-10-04 10:31 ` [PATCH 06/19] Define API for reading arch specif Program Headers for Core Janani Venkataraman
2013-10-04 10:31 ` [PATCH 07/19] ia64 impelementation for elf_core_copy_extra_phdrs() Janani Venkataraman
2013-10-04 10:31 ` [PATCH 08/19] elf_core_copy_extra_phdrs() for UML Janani Venkataraman
2013-10-04 10:31 ` [PATCH 09/19] Create /proc/pid/core entry Janani Venkataraman
2013-10-04 10:31 ` [PATCH 10/19] Track the core generation requests Janani Venkataraman
2013-10-04 10:31 ` [PATCH 11/19] Check if the process is an ELF executable Janani Venkataraman
2013-10-04 10:32 ` [PATCH 12/19] Hold the threads using task_work_add Janani Venkataraman
2013-10-04 10:32 ` [PATCH 13/19] Create ELF Header Janani Venkataraman
2013-10-04 10:32 ` [PATCH 14/19] Create ELF Core notes Data Janani Venkataraman
2013-10-04 10:32 ` [PATCH 15/19] Calculate the size of the core file Janani Venkataraman
2013-10-04 10:32 ` Janani Venkataraman [this message]
2013-10-04 10:32 ` [PATCH 17/19] Identify the ELF class of the process Janani Venkataraman
2013-10-04 10:33 ` [PATCH 18/19] Adding support for compat ELF class data structures Janani Venkataraman
2013-10-04 10:33 ` [PATCH 19/19] Compat ELF class core generation support Janani Venkataraman
2013-10-04 10:38 ` [RFC] [PATCH 00/19] Non disruptive application core dump infrastructure using task_work_add() Pavel Emelyanov
2013-10-07 18:57 ` Tejun Heo
2013-10-08 10:14 ` Janani Venkataraman1
2013-10-08 10:12 ` Janani Venkataraman1
2013-10-09 8:57 ` Pavel Emelyanov
2013-10-04 13:44 ` Andi Kleen
2013-10-07 6:07 ` Suzuki K. Poulose
2013-10-07 13:58 ` Oleg Nesterov
2013-10-07 18:10 ` Andi Kleen