* [PATCH 0/3] Convert vmcore to use an iov_iter
@ 2021-12-13 0:06 Matthew Wilcox (Oracle)
2021-12-13 0:06 ` [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take " Matthew Wilcox (Oracle)
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-13 0:06 UTC (permalink / raw)
To: Baoquan He, Vivek Goyal, Dave Young, kexec
Cc: Matthew Wilcox (Oracle), Tiezhu Yang, linux-kernel,
Amit Daniel Kachhap, Christoph Hellwig, linux-s390, linux-fsdevel
For some reason several people have been sending bad patches to fix
compiler warnings in vmcore recently. Here's how it should be done.
Compile-tested only on x86. As noted in the first patch, s390 should
take this conversion a bit further, but I'm not inclined to do that
work myself.
Matthew Wilcox (Oracle) (3):
vmcore: Convert copy_oldmem_page() to take an iov_iter
vmcore: Convert __read_vmcore to use an iov_iter
vmcore: Convert read_from_oldmem() to take an iov_iter
arch/arm/kernel/crash_dump.c | 14 +---
arch/arm64/kernel/crash_dump.c | 14 +---
arch/ia64/kernel/crash_dump.c | 12 +--
arch/mips/kernel/crash_dump.c | 13 +---
arch/powerpc/kernel/crash_dump.c | 20 +----
arch/riscv/kernel/crash_dump.c | 13 +---
arch/s390/kernel/crash_dump.c | 12 +--
arch/sh/kernel/crash_dump.c | 15 +---
arch/x86/kernel/crash_dump_32.c | 13 +---
arch/x86/kernel/crash_dump_64.c | 31 ++++----
fs/proc/vmcore.c | 130 +++++++++++++------------------
include/linux/crash_dump.h | 19 ++---
12 files changed, 112 insertions(+), 194 deletions(-)
--
2.33.0
* [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take an iov_iter
2021-12-13 0:06 [PATCH 0/3] Convert vmcore to use an iov_iter Matthew Wilcox (Oracle)
@ 2021-12-13 0:06 ` Matthew Wilcox (Oracle)
2021-12-13 1:34 ` Matthew Wilcox
2021-12-13 7:57 ` Christoph Hellwig
2021-12-13 0:06 ` [PATCH 2/3] vmcore: Convert __read_vmcore to use " Matthew Wilcox (Oracle)
2021-12-13 0:06 ` [PATCH 3/3] vmcore: Convert read_from_oldmem() to take " Matthew Wilcox (Oracle)
2 siblings, 2 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-13 0:06 UTC (permalink / raw)
To: Baoquan He, Vivek Goyal, Dave Young, kexec
Cc: Matthew Wilcox (Oracle), Tiezhu Yang, linux-kernel,
Amit Daniel Kachhap, Christoph Hellwig, linux-s390, linux-fsdevel
Instead of passing in a 'buf' and 'userbuf' argument, pass in an iov_iter.
s390 needs more work to pass the iov_iter down further, or refactor,
but I'd be more comfortable if someone who can test on s390 did that work.
It's more convenient to convert the whole of read_from_oldmem() to
take an iov_iter at the same time, so rename it to read_from_oldmem_iter()
and add a temporary read_from_oldmem() wrapper that creates an iov_iter.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
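A minimal sketch, not part of this patch, of the pattern the series relies on: the caller describes the destination buffer once, and copy_to_iter() handles the user/kernel split that the old 'userbuf' flag expressed by hand. The example_copy_out() helper below is hypothetical; the real conversion is in the diff that follows, and the temporary read_from_oldmem() wrapper uses exactly this construction.

/* Hypothetical helper, shown only to illustrate the iov_iter pattern. */
static ssize_t example_copy_out(void *src, char *buf, size_t count,
				int userbuf)
{
	struct iov_iter iter;
	struct iovec iov;
	struct kvec kvec;

	if (userbuf) {
		/* Destination is a user-space buffer. */
		iov.iov_base = (__force void __user *)buf;
		iov.iov_len = count;
		iov_iter_init(&iter, READ, &iov, 1, count);
	} else {
		/* Destination is a kernel buffer. */
		kvec.iov_base = buf;
		kvec.iov_len = count;
		iov_iter_kvec(&iter, READ, &kvec, 1, count);
	}

	/*
	 * copy_to_iter() picks copy_to_user() or memcpy() as appropriate
	 * and returns the number of bytes copied.
	 */
	return copy_to_iter(src, count, &iter);
}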
arch/arm/kernel/crash_dump.c | 14 ++-------
arch/arm64/kernel/crash_dump.c | 14 ++-------
arch/ia64/kernel/crash_dump.c | 12 ++------
arch/mips/kernel/crash_dump.c | 13 ++------
arch/powerpc/kernel/crash_dump.c | 20 +++---------
arch/riscv/kernel/crash_dump.c | 13 ++------
arch/s390/kernel/crash_dump.c | 12 +++++---
arch/sh/kernel/crash_dump.c | 15 +++------
arch/x86/kernel/crash_dump_32.c | 13 ++------
arch/x86/kernel/crash_dump_64.c | 24 ++++++---------
fs/proc/vmcore.c | 53 ++++++++++++++++++++------------
include/linux/crash_dump.h | 9 +++---
12 files changed, 79 insertions(+), 133 deletions(-)
diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c
index 53cb92435392..15d7db782f24 100644
--- a/arch/arm/kernel/crash_dump.c
+++ b/arch/arm/kernel/crash_dump.c
@@ -27,9 +27,8 @@
* @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
* copied or negative error in case of failure.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -40,14 +39,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user(buf, vaddr + offset, csize)) {
- iounmap(vaddr);
- return -EFAULT;
- }
- } else {
- memcpy(buf, vaddr + offset, csize);
- }
+ csize = copy_to_iter(vaddr + offset, csize, iter);
iounmap(vaddr);
return csize;
diff --git a/arch/arm64/kernel/crash_dump.c b/arch/arm64/kernel/crash_dump.c
index 58303a9ec32c..991d30aac439 100644
--- a/arch/arm64/kernel/crash_dump.c
+++ b/arch/arm64/kernel/crash_dump.c
@@ -25,9 +25,8 @@
* @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
* copied or negative error in case of failure.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -38,14 +37,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
- memunmap(vaddr);
- return -EFAULT;
- }
- } else {
- memcpy(buf, vaddr + offset, csize);
- }
+ csize = copy_to_iter(vaddr + offset, csize, iter);
memunmap(vaddr);
diff --git a/arch/ia64/kernel/crash_dump.c b/arch/ia64/kernel/crash_dump.c
index 0ed3c3dee4cd..bc331272bbd6 100644
--- a/arch/ia64/kernel/crash_dump.c
+++ b/arch/ia64/kernel/crash_dump.c
@@ -31,21 +31,15 @@
* copying the data to a pre-allocated kernel page and then copying to user
* space in non-atomic context.
*/
-ssize_t
-copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
void *vaddr;
if (!csize)
return 0;
vaddr = __va(pfn<<PAGE_SHIFT);
- if (userbuf) {
- if (copy_to_user(buf, (vaddr + offset), csize)) {
- return -EFAULT;
- }
- } else
- memcpy(buf, (vaddr + offset), csize);
+ csize = copy_to_iter(vaddr + offset, csize, iter);
return csize;
}
diff --git a/arch/mips/kernel/crash_dump.c b/arch/mips/kernel/crash_dump.c
index 2e50f55185a6..926665c798b6 100644
--- a/arch/mips/kernel/crash_dump.c
+++ b/arch/mips/kernel/crash_dump.c
@@ -15,8 +15,8 @@
* Copy a page from "oldmem". For this page, there is no pte mapped
* in the current kernel.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -24,14 +24,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
return 0;
vaddr = kmap_local_pfn(pfn);
-
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
- } else {
- if (copy_to_user(buf, vaddr + offset, csize))
- csize = -EFAULT;
- }
-
+ csize = copy_to_iter(vaddr + offset, csize, iter);
kunmap_local(vaddr);
return csize;
diff --git a/arch/powerpc/kernel/crash_dump.c b/arch/powerpc/kernel/crash_dump.c
index 5693e1c67c2b..541bcc01490c 100644
--- a/arch/powerpc/kernel/crash_dump.c
+++ b/arch/powerpc/kernel/crash_dump.c
@@ -68,18 +68,6 @@ void __init setup_kdump_trampoline(void)
}
#endif /* CONFIG_NONSTATIC_KERNEL */
-static size_t copy_oldmem_vaddr(void *vaddr, char *buf, size_t csize,
- unsigned long offset, int userbuf)
-{
- if (userbuf) {
- if (copy_to_user((char __user *)buf, (vaddr + offset), csize))
- return -EFAULT;
- } else
- memcpy(buf, (vaddr + offset), csize);
-
- return csize;
-}
-
/**
* copy_oldmem_page - copy one page from "oldmem"
* @pfn: page frame number to be copied
@@ -93,8 +81,8 @@ static size_t copy_oldmem_vaddr(void *vaddr, char *buf, size_t csize,
* Copy a page from "oldmem". For this page, there is no pte mapped
* in the current kernel. We stitch up a pte, similar to kmap_atomic.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
void *vaddr;
phys_addr_t paddr;
@@ -107,10 +95,10 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (memblock_is_region_memory(paddr, csize)) {
vaddr = __va(paddr);
- csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
+ csize = copy_to_iter(vaddr + offset, csize, iter);
} else {
vaddr = ioremap_cache(paddr, PAGE_SIZE);
- csize = copy_oldmem_vaddr(vaddr, buf, csize, offset, userbuf);
+ csize = copy_to_iter(vaddr + offset, csize, iter);
iounmap(vaddr);
}
diff --git a/arch/riscv/kernel/crash_dump.c b/arch/riscv/kernel/crash_dump.c
index 86cc0ada5752..f4dffb8c7eb7 100644
--- a/arch/riscv/kernel/crash_dump.c
+++ b/arch/riscv/kernel/crash_dump.c
@@ -20,9 +20,8 @@
* @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
* copied or negative error in case of failure.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
void *vaddr;
@@ -33,13 +32,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((char __user *)buf, vaddr + offset, csize)) {
- memunmap(vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, vaddr + offset, csize);
+ csize = copy_to_iter(vaddr + offset, csize, iter);
memunmap(vaddr);
return csize;
diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
index 785d54c9350c..09bf559f2c5b 100644
--- a/arch/s390/kernel/crash_dump.c
+++ b/arch/s390/kernel/crash_dump.c
@@ -214,8 +214,8 @@ static int copy_oldmem_user(void __user *dst, void *src, size_t count)
/*
* Copy one page from "oldmem"
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn, size_t csize,
+ unsigned long offset)
{
void *src;
int rc;
@@ -223,10 +223,12 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
if (!csize)
return 0;
src = (void *) (pfn << PAGE_SHIFT) + offset;
- if (userbuf)
- rc = copy_oldmem_user((void __force __user *) buf, src, csize);
+
+ /* XXX: pass the iov_iter down to a common function */
+ if (iter_is_iovec(iter))
+ rc = copy_oldmem_user(iter->iov->iov_base, src, csize);
else
- rc = copy_oldmem_kernel((void *) buf, src, csize);
+ rc = copy_oldmem_kernel(iter->kvec->iov_base, src, csize);
return rc;
}
diff --git a/arch/sh/kernel/crash_dump.c b/arch/sh/kernel/crash_dump.c
index 5b41b59698c1..26081861cb62 100644
--- a/arch/sh/kernel/crash_dump.c
+++ b/arch/sh/kernel/crash_dump.c
@@ -23,8 +23,8 @@
* Copy a page from "oldmem". For this page, there is no pte mapped
* in the current kernel. We stitch up a pte, similar to kmap_atomic.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
void __iomem *vaddr;
@@ -32,15 +32,8 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
return 0;
vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
-
- if (userbuf) {
- if (copy_to_user((void __user *)buf, (vaddr + offset), csize)) {
- iounmap(vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, (vaddr + offset), csize);
-
+ csize = copy_to_iter(vaddr + offset, csize, iter);
iounmap(vaddr);
+
return csize;
}
diff --git a/arch/x86/kernel/crash_dump_32.c b/arch/x86/kernel/crash_dump_32.c
index 5fcac46aaf6b..23d38ecf5de2 100644
--- a/arch/x86/kernel/crash_dump_32.c
+++ b/arch/x86/kernel/crash_dump_32.c
@@ -42,8 +42,8 @@ static inline bool is_crashed_pfn_valid(unsigned long pfn)
* Copy a page from "oldmem". For this page, there might be no pte mapped
* in the current kernel.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn, size_t csize,
+ unsigned long offset)
{
void *vaddr;
@@ -54,14 +54,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
return -EFAULT;
vaddr = kmap_local_pfn(pfn);
-
- if (!userbuf) {
- memcpy(buf, vaddr + offset, csize);
- } else {
- if (copy_to_user(buf, vaddr + offset, csize))
- csize = -EFAULT;
- }
-
+ csize = copy_to_iter(vaddr + offset, csize, iter);
kunmap_local(vaddr);
return csize;
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index a7f617a3981d..8d91e5f2e14d 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -12,8 +12,8 @@
#include <linux/io.h>
#include <linux/cc_platform.h>
-static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf,
+static ssize_t __copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset,
bool encrypted)
{
void *vaddr;
@@ -29,13 +29,7 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
if (!vaddr)
return -ENOMEM;
- if (userbuf) {
- if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
- iounmap((void __iomem *)vaddr);
- return -EFAULT;
- }
- } else
- memcpy(buf, vaddr + offset, csize);
+ csize = copy_to_iter(vaddr + offset, csize, iter);
set_iounmap_nonlazy();
iounmap((void __iomem *)vaddr);
@@ -55,10 +49,10 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
* Copy a page from the old kernel's memory. For this page, there is no pte
* mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
*/
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn, size_t csize,
+ unsigned long offset)
{
- return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
+ return __copy_oldmem_page(iter, pfn, csize, offset, false);
}
/**
@@ -66,10 +60,10 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
* memory with the encryption mask set to accommodate kdump on SME-enabled
* machines.
*/
-ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_encrypted(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset)
{
- return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
+ return __copy_oldmem_page(iter, pfn, csize, offset, true);
}
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 509f85148fee..049bafe9c007 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -132,9 +132,8 @@ static int open_vmcore(struct inode *inode, struct file *file)
}
/* Reads a page from the oldmem device from given offset. */
-ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted)
+ssize_t read_from_oldmem_iter(struct iov_iter *iter, size_t count,
+ u64 *ppos, bool encrypted)
{
unsigned long pfn, offset;
size_t nr_bytes;
@@ -155,29 +154,23 @@ ssize_t read_from_oldmem(char *buf, size_t count,
/* If pfn is not ram, return zeros for sparse dump files */
if (!pfn_is_ram(pfn)) {
- tmp = 0;
- if (!userbuf)
- memset(buf, 0, nr_bytes);
- else if (clear_user(buf, nr_bytes))
- tmp = -EFAULT;
+ tmp = iov_iter_zero(nr_bytes, iter);
} else {
if (encrypted)
- tmp = copy_oldmem_page_encrypted(pfn, buf,
+ tmp = copy_oldmem_page_encrypted(iter, pfn,
nr_bytes,
- offset,
- userbuf);
+ offset);
else
- tmp = copy_oldmem_page(pfn, buf, nr_bytes,
- offset, userbuf);
+ tmp = copy_oldmem_page(iter, pfn, nr_bytes,
+ offset);
}
- if (tmp < 0) {
+ if (tmp < nr_bytes) {
up_read(&vmcore_cb_rwsem);
- return tmp;
+ return -EFAULT;
}
*ppos += nr_bytes;
count -= nr_bytes;
- buf += nr_bytes;
read += nr_bytes;
++pfn;
offset = 0;
@@ -187,6 +180,27 @@ ssize_t read_from_oldmem(char *buf, size_t count,
return read;
}
+ssize_t read_from_oldmem(char *buf, size_t count,
+ u64 *ppos, int userbuf,
+ bool encrypted)
+{
+ struct iov_iter iter;
+ struct iovec iov;
+ struct kvec kvec;
+
+ if (userbuf) {
+ iov.iov_base = (__force void __user *)buf;
+ iov.iov_len = count;
+ iov_iter_init(&iter, READ, &iov, 1, count);
+ } else {
+ kvec.iov_base = buf;
+ kvec.iov_len = count;
+ iov_iter_kvec(&iter, READ, &kvec, 1, count);
+ }
+
+ return read_from_oldmem_iter(&iter, count, ppos, encrypted);
+}
+
/*
* Architectures may override this function to allocate ELF header in 2nd kernel
*/
@@ -231,11 +245,10 @@ int __weak remap_oldmem_pfn_range(struct vm_area_struct *vma,
/*
* Architectures which support memory encryption override this.
*/
-ssize_t __weak
-copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
- unsigned long offset, int userbuf)
+ssize_t __weak copy_oldmem_page_encrypted(struct iov_iter *iter,
+ unsigned long pfn, size_t csize, unsigned long offset)
{
- return copy_oldmem_page(pfn, buf, csize, offset, userbuf);
+ return copy_oldmem_page(iter, pfn, csize, offset);
}
/*
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index 620821549b23..a1cf7d5c03c7 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -24,11 +24,10 @@ extern int remap_oldmem_pfn_range(struct vm_area_struct *vma,
unsigned long from, unsigned long pfn,
unsigned long size, pgprot_t prot);
-extern ssize_t copy_oldmem_page(unsigned long, char *, size_t,
- unsigned long, int);
-extern ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
- size_t csize, unsigned long offset,
- int userbuf);
+ssize_t copy_oldmem_page(struct iov_iter *i, unsigned long pfn, size_t csize,
+ unsigned long offset);
+ssize_t copy_oldmem_page_encrypted(struct iov_iter *iter, unsigned long pfn,
+ size_t csize, unsigned long offset);
void vmcore_cleanup(void);
--
2.33.0
* [PATCH 2/3] vmcore: Convert __read_vmcore to use an iov_iter
2021-12-13 0:06 [PATCH 0/3] Convert vmcore to use an iov_iter Matthew Wilcox (Oracle)
2021-12-13 0:06 ` [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take " Matthew Wilcox (Oracle)
@ 2021-12-13 0:06 ` Matthew Wilcox (Oracle)
2021-12-13 8:00 ` Christoph Hellwig
2021-12-13 0:06 ` [PATCH 3/3] vmcore: Convert read_from_oldmem() to take " Matthew Wilcox (Oracle)
2 siblings, 1 reply; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-13 0:06 UTC (permalink / raw)
To: Baoquan He, Vivek Goyal, Dave Young, kexec
Cc: Matthew Wilcox (Oracle), Tiezhu Yang, linux-kernel,
Amit Daniel Kachhap, Christoph Hellwig, linux-s390, linux-fsdevel
This gets rid of copy_to() and lets us use proc_read_iter() instead
of proc_read().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/proc/vmcore.c | 83 ++++++++++++++++++------------------------------
1 file changed, 31 insertions(+), 52 deletions(-)
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 049bafe9c007..443cbaf16ec8 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -251,22 +251,8 @@ ssize_t __weak copy_oldmem_page_encrypted(struct iov_iter *iter,
return copy_oldmem_page(iter, pfn, csize, offset);
}
-/*
- * Copy to either kernel or user space
- */
-static int copy_to(void *target, void *src, size_t size, int userbuf)
-{
- if (userbuf) {
- if (copy_to_user((char __user *) target, src, size))
- return -EFAULT;
- } else {
- memcpy(target, src, size);
- }
- return 0;
-}
-
#ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
-static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
+static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size)
{
struct vmcoredd_node *dump;
u64 offset = 0;
@@ -279,14 +265,13 @@ static int vmcoredd_copy_dumps(void *dst, u64 start, size_t size, int userbuf)
if (start < offset + dump->size) {
tsz = min(offset + (u64)dump->size - start, (u64)size);
buf = dump->buf + start - offset;
- if (copy_to(dst, buf, tsz, userbuf)) {
+ if (copy_to_iter(buf, tsz, iter) < tsz) {
ret = -EFAULT;
goto out_unlock;
}
size -= tsz;
start += tsz;
- dst += tsz;
/* Leave now if buffer filled already */
if (!size)
@@ -342,33 +327,30 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
/* Read from the ELF header and then the crash dump. On error, negative value is
* returned otherwise number of bytes read are returned.
*/
-static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
- int userbuf)
+static ssize_t __read_vmcore(struct iov_iter *iter, loff_t *fpos)
{
ssize_t acc = 0, tmp;
size_t tsz;
u64 start;
struct vmcore *m = NULL;
- if (buflen == 0 || *fpos >= vmcore_size)
+ if (iter->count == 0 || *fpos >= vmcore_size)
return 0;
- /* trim buflen to not go beyond EOF */
- if (buflen > vmcore_size - *fpos)
- buflen = vmcore_size - *fpos;
+ /* trim iter to not go beyond EOF */
+ if (iter->count > vmcore_size - *fpos)
+ iter->count = vmcore_size - *fpos;
/* Read ELF core header */
if (*fpos < elfcorebuf_sz) {
- tsz = min(elfcorebuf_sz - (size_t)*fpos, buflen);
- if (copy_to(buffer, elfcorebuf + *fpos, tsz, userbuf))
+ tsz = min(elfcorebuf_sz - (size_t)*fpos, iter->count);
+ if (copy_to_iter(elfcorebuf + *fpos, tsz, iter) < tsz)
return -EFAULT;
- buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
acc += tsz;
/* leave now if filled buffer already */
- if (buflen == 0)
+ if (iter->count == 0)
return acc;
}
@@ -389,35 +371,31 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
/* Read device dumps */
if (*fpos < elfcorebuf_sz + vmcoredd_orig_sz) {
tsz = min(elfcorebuf_sz + vmcoredd_orig_sz -
- (size_t)*fpos, buflen);
+ (size_t)*fpos, iter->count);
start = *fpos - elfcorebuf_sz;
- if (vmcoredd_copy_dumps(buffer, start, tsz, userbuf))
+ if (vmcoredd_copy_dumps(iter, start, tsz))
return -EFAULT;
- buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
acc += tsz;
/* leave now if filled buffer already */
- if (!buflen)
+ if (!iter->count)
return acc;
}
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
/* Read remaining elf notes */
- tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos, buflen);
+ tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)*fpos, iter->count);
kaddr = elfnotes_buf + *fpos - elfcorebuf_sz - vmcoredd_orig_sz;
- if (copy_to(buffer, kaddr, tsz, userbuf))
+ if (copy_to_iter(kaddr, tsz, iter) < tsz)
return -EFAULT;
- buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
acc += tsz;
/* leave now if filled buffer already */
- if (buflen == 0)
+ if (iter->count == 0)
return acc;
}
@@ -425,19 +403,17 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
if (*fpos < m->offset + m->size) {
tsz = (size_t)min_t(unsigned long long,
m->offset + m->size - *fpos,
- buflen);
+ iter->count);
start = m->paddr + *fpos - m->offset;
- tmp = read_from_oldmem(buffer, tsz, &start,
- userbuf, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
+ tmp = read_from_oldmem_iter(iter, tsz, &start,
+ cc_platform_has(CC_ATTR_MEM_ENCRYPT));
if (tmp < 0)
return tmp;
- buflen -= tsz;
*fpos += tsz;
- buffer += tsz;
acc += tsz;
/* leave now if filled buffer already */
- if (buflen == 0)
+ if (iter->count == 0)
return acc;
}
}
@@ -445,15 +421,14 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
return acc;
}
-static ssize_t read_vmcore(struct file *file, char __user *buffer,
- size_t buflen, loff_t *fpos)
+static ssize_t read_vmcore(struct kiocb *iocb, struct iov_iter *iter)
{
- return __read_vmcore((__force char *) buffer, buflen, fpos, 1);
+ return __read_vmcore(iter, &iocb->ki_pos);
}
/*
* The vmcore fault handler uses the page cache and fills data using the
- * standard __vmcore_read() function.
+ * standard __read_vmcore() function.
*
* On s390 the fault handler is used for memory regions that can't be mapped
* directly with remap_pfn_range().
@@ -463,9 +438,10 @@ static vm_fault_t mmap_vmcore_fault(struct vm_fault *vmf)
#ifdef CONFIG_S390
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
pgoff_t index = vmf->pgoff;
+ struct iov_iter iter;
+ struct kvec kvec;
struct page *page;
loff_t offset;
- char *buf;
int rc;
page = find_or_create_page(mapping, index, GFP_KERNEL);
@@ -473,8 +449,11 @@ static vm_fault_t mmap_vmcore_fault(struct vm_fault *vmf)
return VM_FAULT_OOM;
if (!PageUptodate(page)) {
offset = (loff_t) index << PAGE_SHIFT;
- buf = __va((page_to_pfn(page) << PAGE_SHIFT));
- rc = __read_vmcore(buf, PAGE_SIZE, &offset, 0);
+ kvec.iov_base = page_address(page);
+ kvec.iov_len = PAGE_SIZE;
+ iov_iter_kvec(&iter, READ, &kvec, 1, PAGE_SIZE);
+
+ rc = __read_vmcore(&iter, &offset);
if (rc < 0) {
unlock_page(page);
put_page(page);
@@ -724,7 +703,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
static const struct proc_ops vmcore_proc_ops = {
.proc_open = open_vmcore,
- .proc_read = read_vmcore,
+ .proc_read_iter = read_vmcore,
.proc_lseek = default_llseek,
.proc_mmap = mmap_vmcore,
};
--
2.33.0
* [PATCH 3/3] vmcore: Convert read_from_oldmem() to take an iov_iter
2021-12-13 0:06 [PATCH 0/3] Convert vmcore to use an iov_iter Matthew Wilcox (Oracle)
2021-12-13 0:06 ` [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take " Matthew Wilcox (Oracle)
2021-12-13 0:06 ` [PATCH 2/3] vmcore: Convert __read_vmcore to use " Matthew Wilcox (Oracle)
@ 2021-12-13 0:06 ` Matthew Wilcox (Oracle)
2021-12-13 8:02 ` Christoph Hellwig
2 siblings, 1 reply; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-13 0:06 UTC (permalink / raw)
To: Baoquan He, Vivek Goyal, Dave Young, kexec
Cc: Matthew Wilcox (Oracle), Tiezhu Yang, linux-kernel,
Amit Daniel Kachhap, Christoph Hellwig, linux-s390, linux-fsdevel
Remove the read_from_oldmem() wrapper introduced earlier and convert
all the remaining callers to pass an iov_iter.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
arch/x86/kernel/crash_dump_64.c | 7 +++++-
fs/proc/vmcore.c | 40 +++++++++++++--------------------
include/linux/crash_dump.h | 10 ++++-----
3 files changed, 25 insertions(+), 32 deletions(-)
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 8d91e5f2e14d..991e09b80852 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -68,6 +68,11 @@ ssize_t copy_oldmem_page_encrypted(struct iov_iter *iter, unsigned long pfn,
ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0,
+ struct kvec kvec = { .iov_base = buf, .iov_len = count };
+ struct iov_iter iter;
+
+ iov_iter_kvec(&iter, READ, &kvec, 1, count);
+
+ return read_from_oldmem(&iter, count, ppos,
cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 443cbaf16ec8..986474f02a02 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -132,7 +132,7 @@ static int open_vmcore(struct inode *inode, struct file *file)
}
/* Reads a page from the oldmem device from given offset. */
-ssize_t read_from_oldmem_iter(struct iov_iter *iter, size_t count,
+ssize_t read_from_oldmem(struct iov_iter *iter, size_t count,
u64 *ppos, bool encrypted)
{
unsigned long pfn, offset;
@@ -180,27 +180,6 @@ ssize_t read_from_oldmem_iter(struct iov_iter *iter, size_t count,
return read;
}
-ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted)
-{
- struct iov_iter iter;
- struct iovec iov;
- struct kvec kvec;
-
- if (userbuf) {
- iov.iov_base = (__force void __user *)buf;
- iov.iov_len = count;
- iov_iter_init(&iter, READ, &iov, 1, count);
- } else {
- kvec.iov_base = buf;
- kvec.iov_len = count;
- iov_iter_kvec(&iter, READ, &kvec, 1, count);
- }
-
- return read_from_oldmem_iter(&iter, count, ppos, encrypted);
-}
-
/*
* Architectures may override this function to allocate ELF header in 2nd kernel
*/
@@ -220,7 +199,12 @@ void __weak elfcorehdr_free(unsigned long long addr)
*/
ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, false);
+ struct kvec kvec = { .iov_base = buf, .iov_len = count };
+ struct iov_iter iter;
+
+ iov_iter_kvec(&iter, READ, &kvec, 1, count);
+
+ return read_from_oldmem(&iter, count, ppos, false);
}
/*
@@ -228,7 +212,13 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
*/
ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
+ struct kvec kvec = { .iov_base = buf, .iov_len = count };
+ struct iov_iter iter;
+
+ iov_iter_kvec(&iter, READ, &kvec, 1, count);
+
+ return read_from_oldmem(&iter, count, ppos,
+ cc_platform_has(CC_ATTR_MEM_ENCRYPT));
}
/*
@@ -405,7 +395,7 @@ static ssize_t __read_vmcore(struct iov_iter *iter, loff_t *fpos)
m->offset + m->size - *fpos,
iter->count);
start = m->paddr + *fpos - m->offset;
- tmp = read_from_oldmem_iter(iter, tsz, &start,
+ tmp = read_from_oldmem(iter, tsz, &start,
cc_platform_has(CC_ATTR_MEM_ENCRYPT));
if (tmp < 0)
return tmp;
diff --git a/include/linux/crash_dump.h b/include/linux/crash_dump.h
index a1cf7d5c03c7..0f3a656293b0 100644
--- a/include/linux/crash_dump.h
+++ b/include/linux/crash_dump.h
@@ -134,13 +134,11 @@ static inline int vmcore_add_device_dump(struct vmcoredd_data *data)
#endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
#ifdef CONFIG_PROC_VMCORE
-ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted);
+ssize_t read_from_oldmem(struct iov_iter *iter, size_t count,
+ u64 *ppos, bool encrypted);
#else
-static inline ssize_t read_from_oldmem(char *buf, size_t count,
- u64 *ppos, int userbuf,
- bool encrypted)
+static inline ssize_t read_from_oldmem(struct iov_iter *iter, size_t count,
+ u64 *ppos, bool encrypted)
{
return -EOPNOTSUPP;
}
--
2.33.0
* Re: [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take an iov_iter
2021-12-13 0:06 ` [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take " Matthew Wilcox (Oracle)
@ 2021-12-13 1:34 ` Matthew Wilcox
2021-12-13 7:57 ` Christoph Hellwig
1 sibling, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2021-12-13 1:34 UTC (permalink / raw)
To: Baoquan He, Vivek Goyal, Dave Young, kexec
Cc: Tiezhu Yang, linux-kernel, Amit Daniel Kachhap, Christoph Hellwig,
linux-s390, linux-fsdevel
On Mon, Dec 13, 2021 at 12:06:34AM +0000, Matthew Wilcox (Oracle) wrote:
> +++ b/arch/arm/kernel/crash_dump.c
> @@ -27,9 +27,8 @@
> * @buf. If @buf is in userspace, set @userbuf to %1. Returns number of bytes
> * copied or negative error in case of failure.
> */
> -ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
> - size_t csize, unsigned long offset,
> - int userbuf)
> +ssize_t copy_oldmem_page(struct iov_iter *iter, unsigned long pfn,
> + size_t csize, unsigned long offset)
> {
> void *vaddr;
>
I forgot to mention that I didn't adjust the kernel-doc. I'm tempted to
remove it entirely, to be honest. It's not included in the rst files,
and having the documentation repeated ten times across arch directories
isn't useful.
* Re: [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take an iov_iter
2021-12-13 0:06 ` [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take " Matthew Wilcox (Oracle)
2021-12-13 1:34 ` Matthew Wilcox
@ 2021-12-13 7:57 ` Christoph Hellwig
2021-12-16 9:28 ` Heiko Carstens
1 sibling, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2021-12-13 7:57 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Baoquan He, Vivek Goyal, Dave Young, kexec, Tiezhu Yang,
linux-kernel, Amit Daniel Kachhap, Christoph Hellwig, linux-s390,
linux-fsdevel
On Mon, Dec 13, 2021 at 12:06:34AM +0000, Matthew Wilcox (Oracle) wrote:
> Instead of passing in a 'buf' and 'userbuf' argument, pass in an iov_iter.
> s390 needs more work to pass the iov_iter down further, or refactor,
> but I'd be more comfortable if someone who can test on s390 did that work.
>
> It's more convenient to convert the whole of read_from_oldmem() to
> take an iov_iter at the same time, so rename it to read_from_oldmem_iter()
> and add a temporary read_from_oldmem() wrapper that creates an iov_iter.
This looks pretty reasonable. s390 could use some love from people that
know the code, and yes, the kerneldoc comments should go away.
* Re: [PATCH 2/3] vmcore: Convert __read_vmcore to use an iov_iter
2021-12-13 0:06 ` [PATCH 2/3] vmcore: Convert __read_vmcore to use " Matthew Wilcox (Oracle)
@ 2021-12-13 8:00 ` Christoph Hellwig
0 siblings, 0 replies; 11+ messages in thread
From: Christoph Hellwig @ 2021-12-13 8:00 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Baoquan He, Vivek Goyal, Dave Young, kexec, Tiezhu Yang,
linux-kernel, Amit Daniel Kachhap, Christoph Hellwig, linux-s390,
linux-fsdevel
On Mon, Dec 13, 2021 at 12:06:35AM +0000, Matthew Wilcox (Oracle) wrote:
> + /* trim iter to not go beyond EOF */
> + if (iter->count > vmcore_size - *fpos)
> + iter->count = vmcore_size - *fpos;
Nit: iov_iter_truncate()
Otherwise this looks good from a cursory view.
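For reference, the nit would amount to replacing the open-coded trim in __read_vmcore() with the existing helper, along these lines (a sketch only):

	/* trim iter to not go beyond EOF */
	iov_iter_truncate(iter, vmcore_size - *fpos);

iov_iter_truncate() only ever shrinks iter->count, so the behaviour matches the manual comparison in the patch.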
* Re: [PATCH 3/3] vmcore: Convert read_from_oldmem() to take an iov_iter
2021-12-13 0:06 ` [PATCH 3/3] vmcore: Convert read_from_oldmem() to take " Matthew Wilcox (Oracle)
@ 2021-12-13 8:02 ` Christoph Hellwig
2021-12-13 9:29 ` Baoquan He
2021-12-13 13:42 ` Matthew Wilcox
0 siblings, 2 replies; 11+ messages in thread
From: Christoph Hellwig @ 2021-12-13 8:02 UTC (permalink / raw)
To: Matthew Wilcox (Oracle)
Cc: Baoquan He, Vivek Goyal, Dave Young, kexec, Tiezhu Yang,
linux-kernel, Amit Daniel Kachhap, Christoph Hellwig, linux-s390,
linux-fsdevel
>
> ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> {
> - return read_from_oldmem(buf, count, ppos, 0,
> + struct kvec kvec = { .iov_base = buf, .iov_len = count };
> + struct iov_iter iter;
> +
> + iov_iter_kvec(&iter, READ, &kvec, 1, count);
> +
> + return read_from_oldmem(&iter, count, ppos,
> cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
> }
elfcorehdr_read should probably also take an iov_iter while we're at it.
I also don't quite understand why we even need the arch overrides for it,
but that would require some digging into the history of this interface.
* Re: [PATCH 3/3] vmcore: Convert read_from_oldmem() to take an iov_iter
2021-12-13 8:02 ` Christoph Hellwig
@ 2021-12-13 9:29 ` Baoquan He
2021-12-13 13:42 ` Matthew Wilcox
1 sibling, 0 replies; 11+ messages in thread
From: Baoquan He @ 2021-12-13 9:29 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Matthew Wilcox (Oracle), Vivek Goyal, Dave Young, kexec,
Tiezhu Yang, linux-kernel, Amit Daniel Kachhap, linux-s390,
linux-fsdevel
On 12/13/21 at 09:02am, Christoph Hellwig wrote:
> >
> > ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> > {
> > - return read_from_oldmem(buf, count, ppos, 0,
> > + struct kvec kvec = { .iov_base = buf, .iov_len = count };
> > + struct iov_iter iter;
> > +
> > + iov_iter_kvec(&iter, READ, &kvec, 1, count);
> > +
> > + return read_from_oldmem(&iter, count, ppos,
> > cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
> > }
>
> elfcorehdr_read should probably also take an iov_iter while we're at it.
>
> I also don't quite understand why we even need the arch overrides for it,
> but that would require some digging into the history of this interface.
>
The patchset below, which removed sev_active() from generic code, added this
arch override on x86_64. Before that, s390 and arm64 already had arch overrides,
and arm64's commit e62aaeac42 ("arm64: kdump: provide /proc/vmcore file")
notes that "elfcorehdr_read() is simple as the region is always mapped."
[v3,0/6] Remove x86-specific code from generic headers
https://patchwork.kernel.org/project/linux-fsdevel/cover/20190718032858.28744-1-bauerman@linux.ibm.com/
5cbdaeefb655 s390/mm: Remove sev_active() function
ae7eb82a92fa fs/core/vmcore: Move sev_active() reference to x86 arch code
284e21fab2cf x86, s390/mm: Move sme_active() and sme_me_mask to x86-specific header
e740815a97e2 dma-mapping: Remove dma_check_mask()
47e5d8f9ed34 swiotlb: Remove call to sme_active()
0c9c1d563975 x86, s390: Move ARCH_HAS_MEM_ENCRYPT definition to arch/Kconfig
* Re: [PATCH 3/3] vmcore: Convert read_from_oldmem() to take an iov_iter
2021-12-13 8:02 ` Christoph Hellwig
2021-12-13 9:29 ` Baoquan He
@ 2021-12-13 13:42 ` Matthew Wilcox
1 sibling, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2021-12-13 13:42 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Baoquan He, Vivek Goyal, Dave Young, kexec, Tiezhu Yang,
linux-kernel, Amit Daniel Kachhap, linux-s390, linux-fsdevel
On Mon, Dec 13, 2021 at 09:02:57AM +0100, Christoph Hellwig wrote:
> >
> > ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
> > {
> > - return read_from_oldmem(buf, count, ppos, 0,
> > + struct kvec kvec = { .iov_base = buf, .iov_len = count };
> > + struct iov_iter iter;
> > +
> > + iov_iter_kvec(&iter, READ, &kvec, 1, count);
> > +
> > + return read_from_oldmem(&iter, count, ppos,
> > cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
> > }
>
> elfcorehdr_read should probably also take an iov_iter while we're at it.
I don't like how that looks. For example:
@@ -1246,11 +1247,14 @@ static int __init parse_crash_elf64_headers(void)
int rc=0;
Elf64_Ehdr ehdr;
u64 addr;
+ struct iov_iter iter;
+ struct kvec kvec = { .iov_base = &ehdr, .iov_len = sizeof(ehdr) };
addr = elfcorehdr_addr;
/* Read Elf header */
- rc = elfcorehdr_read((char *)&ehdr, sizeof(Elf64_Ehdr), &addr);
+ iov_iter_kvec(&iter, READ, &kvec, 1, sizeof(ehdr));
+ rc = elfcorehdr_read(&iter, &addr);
if (rc < 0)
return rc;
@@ -1277,7 +1281,10 @@ static int __init parse_crash_elf64_headers(void)
if (!elfcorebuf)
return -ENOMEM;
addr = elfcorehdr_addr;
- rc = elfcorehdr_read(elfcorebuf, elfcorebuf_sz_orig, &addr);
+ kvec.iov_base = elfcorebuf;
+ kvec.iov_len = elfcorebuf_sz_orig;
+ iov_iter_kvec(&iter, READ, &kvec, 1, elfcorebuf_sz_orig);
+ rc = elfcorehdr_read(&iter, &addr);
if (rc < 0)
goto fail;
I don't think this makes the code clearer, and it's already pretty ugly
code. I'd rather leave constructing the iov_iter to elfcorehdr_read().
(Same remarks apply to elfcorehdr_read_notes())
> I also don't quite understand why we even need the arch overrides for it,
> but that would require some digging into the history of this interface.
I decided I didn't want to know ;-)
* Re: [PATCH 1/3] vmcore: Convert copy_oldmem_page() to take an iov_iter
2021-12-13 7:57 ` Christoph Hellwig
@ 2021-12-16 9:28 ` Heiko Carstens
0 siblings, 0 replies; 11+ messages in thread
From: Heiko Carstens @ 2021-12-16 9:28 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Matthew Wilcox (Oracle), Baoquan He, Vivek Goyal, Dave Young,
kexec, Tiezhu Yang, linux-kernel, Amit Daniel Kachhap, linux-s390,
linux-fsdevel, Vasily Gorbik, Alexander Gordeev
On Mon, Dec 13, 2021 at 08:57:25AM +0100, Christoph Hellwig wrote:
> On Mon, Dec 13, 2021 at 12:06:34AM +0000, Matthew Wilcox (Oracle) wrote:
> > Instead of passing in a 'buf' and 'userbuf' argument, pass in an iov_iter.
> > s390 needs more work to pass the iov_iter down further, or refactor,
> > but I'd be more comfortable if someone who can test on s390 did that work.
> >
> > It's more convenient to convert the whole of read_from_oldmem() to
> > take an iov_iter at the same time, so rename it to read_from_oldmem_iter()
> > and add a temporary read_from_oldmem() wrapper that creates an iov_iter.
>
> This looks pretty reasonable. s390 could use some love from people that
> know the code, and yes, the kerneldoc comments should go away.
Sure, we will take care of this. Added to ToDo list.