* [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations
@ 2011-04-02 0:44 Grant Erickson
2011-04-04 7:27 ` Artem Bityutskiy
2011-04-04 18:19 ` [PATCH v2] " Grant Erickson
0 siblings, 2 replies; 11+ messages in thread
From: Grant Erickson @ 2011-04-02 0:44 UTC (permalink / raw)
To: linux-mtd
When handling user space read or write requests via mtd_{read,write},
exponentially back off on the size of the requested kernel transfer
buffer until it succeeds or until the requested transfer buffer size
falls below the page size.
This helps ensure the operation can succeed under low-memory,
highly-fragmented situations albeit somewhat more slowly.
Signed-off-by: Grant Erickson <marathon96@gmail.com>
---
 drivers/mtd/mtdchar.c |   30 ++++++++++++++----------------
 1 files changed, 14 insertions(+), 16 deletions(-)

diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 145b3d0d..d887b91 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -179,6 +179,7 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
 	size_t total_retlen=0;
 	int ret=0;
 	int len;
+	size_t size;
 	char *kbuf;
 
 	DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n");
@@ -192,20 +193,18 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
 	/* FIXME: Use kiovec in 2.5 to lock down the user's buffers
 	   and pass them directly to the MTD functions */
 
-	if (count > MAX_KMALLOC_SIZE)
-		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
-	else
-		kbuf=kmalloc(count, GFP_KERNEL);
+	size = min_t(size_t, count, MAX_KMALLOC_SIZE);
+
+	do {
+		kbuf=kmalloc(size, GFP_KERNEL);
+	} while (!kbuf && ((size >>= 1) >= PAGE_SIZE));
 
 	if (!kbuf)
 		return -ENOMEM;
 
 	while (count) {
 
-		if (count > MAX_KMALLOC_SIZE)
-			len = MAX_KMALLOC_SIZE;
-		else
-			len = count;
+		len = min_t(size_t, count, size);
 
 		switch (mfi->mode) {
 		case MTD_MODE_OTP_FACTORY:
@@ -268,6 +267,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 {
 	struct mtd_file_info *mfi = file->private_data;
 	struct mtd_info *mtd = mfi->mtd;
+	size_t size;
 	char *kbuf;
 	size_t retlen;
 	size_t total_retlen=0;
@@ -285,20 +285,18 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 	if (!count)
 		return 0;
 
-	if (count > MAX_KMALLOC_SIZE)
-		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
-	else
-		kbuf=kmalloc(count, GFP_KERNEL);
+	size = min_t(size_t, count, MAX_KMALLOC_SIZE);
+
+	do {
+		kbuf=kmalloc(size, GFP_KERNEL);
+	} while (!kbuf && ((size >>= 1) >= PAGE_SIZE));
 
 	if (!kbuf)
 		return -ENOMEM;
 
 	while (count) {
 
-		if (count > MAX_KMALLOC_SIZE)
-			len = MAX_KMALLOC_SIZE;
-		else
-			len = count;
+		len = min_t(size_t, count, size);
 
 		if (copy_from_user(kbuf, buf, len)) {
 			kfree(kbuf);
--
1.7.4.2
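The retry loop in the patch above can be exercised outside the kernel. The following is a user-space sketch, not kernel code: malloc() stands in for kmalloc(), and the SIM_* constants and the function name are ours, mirroring the patch's back-off policy.

```c
#include <assert.h>
#include <stdlib.h>

#define SIM_MAX_KMALLOC_SIZE 0x20000 /* 128 KiB cap, as in mtdchar.c */
#define SIM_PAGE_SIZE 4096           /* stand-in for PAGE_SIZE */

/* Halve the requested size until the allocation succeeds or the
 * request would fall below a page, mirroring the patch's do/while. */
static void *backoff_alloc(size_t count, size_t *size)
{
	size_t try = count < SIM_MAX_KMALLOC_SIZE ? count : SIM_MAX_KMALLOC_SIZE;
	void *kbuf;

	do {
		kbuf = malloc(try);
	} while (!kbuf && ((try >>= 1) >= SIM_PAGE_SIZE));

	if (kbuf)
		*size = try; /* report how large a buffer was actually obtained */
	return kbuf;
}
```

The caller then bounds each transfer chunk by the obtained size, which is what the patch's `len = min_t(size_t, count, size)` does inside the copy loops.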
^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-02  0:44 [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations Grant Erickson
@ 2011-04-04  7:27 ` Artem Bityutskiy
  2011-04-04  7:41   ` Artem Bityutskiy
  2011-04-04 16:05   ` Grant Erickson
  2011-04-04 18:19 ` [PATCH v2] " Grant Erickson
  1 sibling, 2 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2011-04-04  7:27 UTC (permalink / raw)
  To: Grant Erickson; +Cc: linux-mtd, linux-kernel

[CCing LKML in a hope to get good suggestions]
[The patch: http://lists.infradead.org/pipermail/linux-mtd/2011-April/034645.html]

Hi Grant,

Just in case, Jarkko was trying to address the same issue recently:

http://lists.infradead.org/pipermail/linux-mtd/2011-March/034416.html

On Fri, 2011-04-01 at 17:44 -0700, Grant Erickson wrote:
> When handling user space read or write requests via mtd_{read,write},
> exponentially back off on the size of the requested kernel transfer
> buffer until it succeeds or until the requested transfer buffer size
> falls below the page size.
>
> This helps ensure the operation can succeed under low-memory,
> highly-fragmented situations albeit somewhat more slowly.
>
> Signed-off-by: Grant Erickson <marathon96@gmail.com>
> ---
>  drivers/mtd/mtdchar.c |   30 ++++++++++++++----------------
>  1 files changed, 14 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
> index 145b3d0d..d887b91 100644
> --- a/drivers/mtd/mtdchar.c
> +++ b/drivers/mtd/mtdchar.c
> @@ -179,6 +179,7 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
>  	size_t total_retlen=0;
>  	int ret=0;
>  	int len;
> +	size_t size;
>  	char *kbuf;
>
>  	DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n");
> @@ -192,20 +193,18 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
>  	/* FIXME: Use kiovec in 2.5 to lock down the user's buffers
>  	   and pass them directly to the MTD functions */
>
> -	if (count > MAX_KMALLOC_SIZE)
> -		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
> -	else
> -		kbuf=kmalloc(count, GFP_KERNEL);
> +	size = min_t(size_t, count, MAX_KMALLOC_SIZE);
> +
> +	do {
> +		kbuf=kmalloc(size, GFP_KERNEL);
> +	} while (!kbuf && ((size >>= 1) >= PAGE_SIZE));

This should be a bit more complex I think. First of all, I think it is
better to make this a separate function. Second, you should make sure
the system does not print scary warnings when the allocation fails - use
__GFP_NOWARN flag, just like Jarkko did.

An third, as I wrote in my answer to Jarkko, allocating large contiguous
buffers is bad for performance: if the system memory is fragmented and
there is no such large contiguous areas, the kernel will start writing
back dirty FS data, killing FS caches, shrinking caches and buggers,
probably even swapping out applications. We do not want MTD to cause
this at all.

Probably we can mitigate this with kmalloc flags. Now, I'm not sure what
flags are the optimal, but I'd do:

__GFP_NOWARN | __GFP_WAIT | __GFP_NORETRY

May be even __GFP_WAIT flag could be kicked out.

>
>  	if (!kbuf)
>  		return -ENOMEM;
>
>  	while (count) {
>
> -		if (count > MAX_KMALLOC_SIZE)
> -			len = MAX_KMALLOC_SIZE;
> -		else
> -			len = count;
> +		len = min_t(size_t, count, size);
>
>  		switch (mfi->mode) {
>  		case MTD_MODE_OTP_FACTORY:
> @@ -268,6 +267,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
>  {
>  	struct mtd_file_info *mfi = file->private_data;
>  	struct mtd_info *mtd = mfi->mtd;
> +	size_t size;
>  	char *kbuf;
>  	size_t retlen;
>  	size_t total_retlen=0;
> @@ -285,20 +285,18 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
>  	if (!count)
>  		return 0;
>
> -	if (count > MAX_KMALLOC_SIZE)
> -		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
> -	else
> -		kbuf=kmalloc(count, GFP_KERNEL);
> +	size = min_t(size_t, count, MAX_KMALLOC_SIZE);
> +
> +	do {
> +		kbuf=kmalloc(size, GFP_KERNEL);
> +	} while (!kbuf && ((size >>= 1) >= PAGE_SIZE));
>
> 	if (!kbuf)
>  		return -ENOMEM;
>
>  	while (count) {
>
> -		if (count > MAX_KMALLOC_SIZE)
> -			len = MAX_KMALLOC_SIZE;
> -		else
> -			len = count;
> +		len = min_t(size_t, count, size);
>
> 		if (copy_from_user(kbuf, buf, len)) {
> 			kfree(kbuf);

-- 
Best Regards,
Artem Bityutskiy (Артём Битюцкий)
* Re: [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-04  7:27 ` Artem Bityutskiy
@ 2011-04-04  7:41   ` Artem Bityutskiy
  2011-04-04 16:05   ` Grant Erickson
  1 sibling, 0 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2011-04-04  7:41 UTC (permalink / raw)
  To: Grant Erickson; +Cc: linux-mtd, linux-kernel

On Mon, 2011-04-04 at 10:27 +0300, Artem Bityutskiy wrote:
> An third, as I wrote in my answer to Jarkko, allocating large contiguous
> buffers is bad for performance: if the system memory is fragmented and
> there is no such large contiguous areas, the kernel will start writing
> back dirty FS data, killing FS caches, shrinking caches and buggers,
> probably even swapping out applications. We do not want MTD to cause
> this at all.

s/buggers/buffers/

> Probably we can mitigate this with kmalloc flags. Now, I'm not sure what
> flags are the optimal, but I'd do:
>
> __GFP_NOWARN | __GFP_WAIT | __GFP_NORETRY

Of course I meant you should use special flags as long as you are
allocating more than 1 contiguous page. But the last PAGE_SIZE
allocation should be done with standard GFP_KERNEL flag.

-- 
Best Regards,
Artem Bityutskiy (Артём Битюцкий)
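Artem's two-tier suggestion - opportunistic flags while the request is larger than a page, standard GFP_KERNEL semantics only for the final PAGE_SIZE attempt - can be sketched in user space. This is our illustration, not code from the thread: fake_kmalloc() and sim_avail simulate an allocator that fails above a fragmentation threshold, and the `opportunistic` argument stands in for the __GFP_NOWARN | __GFP_NORETRY attempts.

```c
#include <assert.h>
#include <stdlib.h>

#define SIM_MAX_KMALLOC_SIZE 0x20000
#define SIM_PAGE_SIZE 4096

size_t sim_avail = 8192; /* largest contiguous chunk "available" */

/* Fails above sim_avail; 'opportunistic' marks attempts that would use
 * __GFP_NOWARN | __GFP_NORETRY in the kernel and so may fail quietly. */
static void *fake_kmalloc(size_t size, int opportunistic)
{
	(void)opportunistic;
	return size <= sim_avail ? malloc(size) : NULL;
}

static void *alloc_with_fallback(size_t count, size_t *size)
{
	size_t try = count < SIM_MAX_KMALLOC_SIZE ? count : SIM_MAX_KMALLOC_SIZE;
	void *buf = NULL;

	while (try > SIM_PAGE_SIZE) {   /* quiet, non-retrying attempts */
		buf = fake_kmalloc(try, 1);
		if (buf)
			break;
		try >>= 1;
	}
	if (!buf) {                     /* last resort: one page, GFP_KERNEL */
		try = SIM_PAGE_SIZE;
		buf = fake_kmalloc(try, 0);
	}
	if (buf)
		*size = try;
	return buf;
}
```

The design point being illustrated: only the final page-sized attempt is allowed to warn and to try hard, so intermediate failures stay cheap and silent.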
* Re: [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-04  7:27 ` Artem Bityutskiy
  2011-04-04  7:41   ` Artem Bityutskiy
@ 2011-04-04 16:05   ` Grant Erickson
  2011-04-05  4:39     ` Artem Bityutskiy
  1 sibling, 1 reply; 11+ messages in thread
From: Grant Erickson @ 2011-04-04 16:05 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: Jarkko Lavinen, linux-mtd, linux-kernel

On 4/4/11 12:27 AM, Artem Bityutskiy wrote:
> [CCing LKML in a hope to get good suggestions]
> [The patch:
> http://lists.infradead.org/pipermail/linux-mtd/2011-April/034645.html]
>
> Hi Grant,
>
> Just in case, Jarkko was trying to address the same issue recently:
>
> http://lists.infradead.org/pipermail/linux-mtd/2011-March/034416.html
>
> This should be a bit more complex I think. First of all, I think it is
> better to make this a separate function. Second, you should make sure
> the system does not print scary warnings when the allocation fails - use
> __GFP_NOWARN flag, just like Jarkko did.
>
> An third, as I wrote in my answer to Jarkko, allocating large contiguous
> buffers is bad for performance: if the system memory is fragmented and
> there is no such large contiguous areas, the kernel will start writing
> back dirty FS data, killing FS caches, shrinking caches and buggers,
> probably even swapping out applications. We do not want MTD to cause
> this at all.
>
> Probably we can mitigate this with kmalloc flags. Now, I'm not sure what
> flags are the optimal, but I'd do:
>
> __GFP_NOWARN | __GFP_WAIT | __GFP_NORETRY
>
> May be even __GFP_WAIT flag could be kicked out.

Artem:

Thanks for the feedback and the link to Jarkko's very similar patch. Your
suggestions will be incorporated into a subsequent patch.

For reference, I pursued a second uses-less-memory-but-is-more-complex
approach that does get_user_pages and builds up a series of iovecs for the
page extents. This worked well for all read cases I could test; however, for
the write case, the approach required yet more refinement and overhead since
the head and tail of the transfer need to be deblocked with
read-modify-write due to the NOTALIGNED checks in
nand_base.c:nand_do_write_ops. I am happy to share the work-in-progress with
the list if anyone is interested.

I propose a two-stage approach. This issue has been in the kernel for about
six years. Can we take a modified version of Jarkko's or my simpler fixes
for the first pass and then iterate toward the get_user_pages
scatter/gather approach later?

Best,

Grant

^ permalink raw reply	[flat|nested] 11+ messages in thread
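The head/tail deblocking the message refers to can be made concrete. The sketch below is ours (the struct and function names do not appear in the thread): it computes how a write of `len` bytes at flash offset `ofs` would have to be split against a NAND write size so that only the unaligned head and tail need read-modify-write handling.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper: split a transfer into an unaligned head, a run of
 * full writesize-aligned pages, and an unaligned tail. nand_do_write_ops
 * rejects (NOTALIGNED) writes whose head/tail are not handled this way. */
struct rmw_split {
	size_t head; /* bytes needing read-modify-write at the front */
	size_t body; /* bytes writable directly, page-aligned */
	size_t tail; /* bytes needing read-modify-write at the end */
};

static struct rmw_split split_transfer(unsigned long ofs, size_t len,
				       size_t writesize)
{
	struct rmw_split s;
	size_t in_page = ofs % writesize;
	size_t room = writesize - in_page;

	s.head = in_page ? (room < len ? room : len) : 0;
	s.body = ((len - s.head) / writesize) * writesize;
	s.tail = len - s.head - s.body;
	return s;
}
```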
* Re: [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-04 16:05   ` Grant Erickson
@ 2011-04-05  4:39     ` Artem Bityutskiy
  0 siblings, 0 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2011-04-05  4:39 UTC (permalink / raw)
  To: Grant Erickson; +Cc: Jarkko Lavinen, linux-mtd, linux-kernel

On Mon, 2011-04-04 at 09:05 -0700, Grant Erickson wrote:
> I propose a two-stage approach. This issue has been in the kernel for about
> six years.
>
> Can we take a modified version of Jarkko's or my simpler fixes for the first
> pass and then iterate toward the get_user_pages scatter/gather approach

Sure, I've just commented on your patch.

-- 
Best Regards,
Artem Bityutskiy (Битюцкий Артём)

^ permalink raw reply	[flat|nested] 11+ messages in thread
* [PATCH v2] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-02  0:44 [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations Grant Erickson
  2011-04-04  7:27 ` Artem Bityutskiy
@ 2011-04-04 18:19 ` Grant Erickson
  2011-04-05  4:39   ` Artem Bityutskiy
  2011-04-05  4:48   ` Artem Bityutskiy
  1 sibling, 2 replies; 11+ messages in thread
From: Grant Erickson @ 2011-04-04 18:19 UTC (permalink / raw)
  To: linux-mtd

When handling user space read or write requests via mtd_{read,write},
exponentially back off on the size of the requested kernel transfer
buffer until it succeeds or until the requested transfer buffer size
falls below the page size.

This helps ensure the operation can succeed under low-memory,
highly-fragmented situations albeit somewhat more slowly.

v2: Added __GFP_NOWARN flag and made common retry loop a function
    as recommended by Artem.

Signed-off-by: Grant Erickson <marathon96@gmail.com>
---
 drivers/mtd/mtdchar.c |   66 +++++++++++++++++++++++++++++++++---------------
 1 files changed, 45 insertions(+), 21 deletions(-)

diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 145b3d0d..df9be51 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -166,11 +166,44 @@ static int mtd_close(struct inode *inode, struct file *file)
 	return 0;
 } /* mtd_close */
 
-/* FIXME: This _really_ needs to die. In 2.5, we should lock the
-   userspace buffer down and use it directly with readv/writev.
-*/
+/* Back in April 2005, Linus wrote:
+ *
+ *   FIXME: This _really_ needs to die. In 2.5, we should lock the
+ *   userspace buffer down and use it directly with readv/writev.
+ *
+ * The implementation below, using mtd_try_alloc, mitigates allocation
+ * failures when the sytem is under low-memory situations or if memory
+ * is highly fragmented at the cost of reducing the performance of the
+ * requested transfer due to a smaller buffer size.
+ *
+ * A more complex but more memory-efficient implementation based on
+ * get_user_pages and iovecs to cover extents of those pages is a
+ * longer-term goal, as intimated by Linus above. However, for the
+ * write case, this requires yet more complex head and tail transfer
+ * handling when those head and tail offsets and sizes are such that
+ * alignment requirements are not met in the NAND subdriver.
+ */
 #define MAX_KMALLOC_SIZE 0x20000
 
+static void *mtd_try_alloc(size_t *size)
+{
+	const gfp_t flags = (GFP_KERNEL | __GFP_NOWARN);
+	size_t try;
+	void *kbuf;
+
+	try = min_t(size_t, *size, MAX_KMALLOC_SIZE);
+
+	do {
+		kbuf = kmalloc(try, flags);
+	} while (!kbuf && ((try >>= 1) >= PAGE_SIZE));
+
+	if (kbuf) {
+		*size = try;
+	}
+
+	return kbuf;
+}
+
 static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t *ppos)
 {
 	struct mtd_file_info *mfi = file->private_data;
@@ -179,6 +212,7 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
 	size_t total_retlen=0;
 	int ret=0;
 	int len;
+	size_t size;
 	char *kbuf;
 
 	DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n");
@@ -189,23 +223,16 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
 	if (!count)
 		return 0;
 
-	/* FIXME: Use kiovec in 2.5 to lock down the user's buffers
-	   and pass them directly to the MTD functions */
+	size = count;
 
-	if (count > MAX_KMALLOC_SIZE)
-		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
-	else
-		kbuf=kmalloc(count, GFP_KERNEL);
+	kbuf = mtd_try_alloc(&size);
 
 	if (!kbuf)
 		return -ENOMEM;
 
 	while (count) {
 
-		if (count > MAX_KMALLOC_SIZE)
-			len = MAX_KMALLOC_SIZE;
-		else
-			len = count;
+		len = min_t(size_t, count, size);
 
 		switch (mfi->mode) {
 		case MTD_MODE_OTP_FACTORY:
@@ -268,6 +295,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 {
 	struct mtd_file_info *mfi = file->private_data;
 	struct mtd_info *mtd = mfi->mtd;
+	size_t size;
 	char *kbuf;
 	size_t retlen;
 	size_t total_retlen=0;
@@ -285,21 +313,16 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 	if (!count)
 		return 0;
 
-	if (count > MAX_KMALLOC_SIZE)
-		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
-	else
-		kbuf=kmalloc(count, GFP_KERNEL);
+	size = count;
+
+	kbuf = mtd_try_alloc(&size);
 
 	if (!kbuf)
 		return -ENOMEM;
 
 	while (count) {
 
-		if (count > MAX_KMALLOC_SIZE)
-			len = MAX_KMALLOC_SIZE;
-		else
-			len = count;
+		len = min_t(size_t, count, size);
 
 		if (copy_from_user(kbuf, buf, len)) {
 			kfree(kbuf);
-- 
1.7.4.2

^ permalink raw reply related	[flat|nested] 11+ messages in thread
* Re: [PATCH v2] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-04 18:19 ` [PATCH v2] " Grant Erickson
@ 2011-04-05  4:39   ` Artem Bityutskiy
  2011-04-05 15:54     ` Grant Erickson
  2011-04-05  4:48   ` Artem Bityutskiy
  1 sibling, 1 reply; 11+ messages in thread
From: Artem Bityutskiy @ 2011-04-05  4:39 UTC (permalink / raw)
  To: Grant Erickson; +Cc: linux-mtd

Hi,

On Mon, 2011-04-04 at 11:19 -0700, Grant Erickson wrote:
> When handling user space read or write requests via mtd_{read,write},
> exponentially back off on the size of the requested kernel transfer
> buffer until it succeeds or until the requested transfer buffer size
> falls below the page size.
>
> This helps ensure the operation can succeed under low-memory,
> highly-fragmented situations albeit somewhat more slowly.
>
> v2: Added __GFP_NOWARN flag and made common retry loop a function
>     as recommended by Artem.
>
> Signed-off-by: Grant Erickson <marathon96@gmail.com>
> ---
>  drivers/mtd/mtdchar.c |   66 +++++++++++++++++++++++++++++++++---------------
>  1 files changed, 45 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
> index 145b3d0d..df9be51 100644
> --- a/drivers/mtd/mtdchar.c
> +++ b/drivers/mtd/mtdchar.c
> @@ -166,11 +166,44 @@ static int mtd_close(struct inode *inode, struct file *file)
>  	return 0;
>  } /* mtd_close */
>
> -/* FIXME: This _really_ needs to die. In 2.5, we should lock the
> -   userspace buffer down and use it directly with readv/writev.
> -*/
> +/* Back in April 2005, Linus wrote:
> + *
> + *   FIXME: This _really_ needs to die. In 2.5, we should lock the
> + *   userspace buffer down and use it directly with readv/writev.
> + *
> + * The implementation below, using mtd_try_alloc, mitigates allocation
> + * failures when the sytem is under low-memory situations or if memory

s/sytem/system/

> + * is highly fragmented at the cost of reducing the performance of the
> + * requested transfer due to a smaller buffer size.
> + *
> + * A more complex but more memory-efficient implementation based on
> + * get_user_pages and iovecs to cover extents of those pages is a
> + * longer-term goal, as intimated by Linus above. However, for the
> + * write case, this requires yet more complex head and tail transfer
> + * handling when those head and tail offsets and sizes are such that
> + * alignment requirements are not met in the NAND subdriver.
> + */
>  #define MAX_KMALLOC_SIZE 0x20000
>
> +static void *mtd_try_alloc(size_t *size)
> +{
> +	const gfp_t flags = (GFP_KERNEL | __GFP_NOWARN);

I still think you'll damage the performance when you try to do

kmalloc(128KiB, flags)

because as I wrote in my previous e-mail your system will start doing
the following to free memory for you:

1. write-back dirty FS data = overall slowdown = e.g., background mp3
   playback glitches
2. drop FS caches = slow down later because the system will have to
   re-read the dropped data from the media later.
3. not really sure, needs checking if this is the case, but I think
   the kernel may start swapping out apps.

This is why I suggested to use the following flags here:

gfp_t flags = __GFP_NOWARN | __GFP_WAIT | __GFP_NORETRY;

> +	size_t try;
> +	void *kbuf;
> +
> +	try = min_t(size_t, *size, MAX_KMALLOC_SIZE);
> +
> +	do {
> +		kbuf = kmalloc(try, flags);
> +	} while (!kbuf && ((try >>= 1) >= PAGE_SIZE));

So, you try 128KiB, 64KiB, 32KiB, 16KiB, 8KiB and fail, it is OK. But
4KiB is the last resort allocation. If it fails, you do want to see
scary kmalloc warning, so you should not use __GFP_NOWARN for this last
allocation. Also, you do want kmalloc to try hard, so for this last
PAGE_SIZE allocation you want to use GFP_KERNEL flags.

> +
> +	if (kbuf) {
> +		*size = try;
> +	}

Braces are not necessary here. But actually the whole if is not needed -
just make the function interface so that if it returns NULL then *size
is undefined and the user of this function should not look at it. I
think it is the case in your code. I mean, just

	*size = try;
	return kbuf;

> +
> +	return kbuf;
> +}
> +
>  static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t *ppos)
>  {
>  	struct mtd_file_info *mfi = file->private_data;
> @@ -179,6 +212,7 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
>  	size_t total_retlen=0;
>  	int ret=0;
>  	int len;
> +	size_t size;
>  	char *kbuf;
>
>  	DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n");
> @@ -189,23 +223,16 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
>  	if (!count)
>  		return 0;
>
> -	/* FIXME: Use kiovec in 2.5 to lock down the user's buffers
> -	   and pass them directly to the MTD functions */
> +	size = count;

I think you can do this assignment when you declare 'size';

>
> -	if (count > MAX_KMALLOC_SIZE)
> -		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
> -	else
> -		kbuf=kmalloc(count, GFP_KERNEL);
> +	kbuf = mtd_try_alloc(&size);
>
>  	if (!kbuf)
>  		return -ENOMEM;

No need to put extra new lines, too many of them make the code less
readable. I think allocating and checking should have not space in
between.

>
>  	while (count) {
>
> -		if (count > MAX_KMALLOC_SIZE)
> -			len = MAX_KMALLOC_SIZE;
> -		else
> -			len = count;

Please, kill the extra white-space after "while" as well.

> +		len = min_t(size_t, count, size);
>
>  		switch (mfi->mode) {
>  		case MTD_MODE_OTP_FACTORY:
> @@ -268,6 +295,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
>  {
>  	struct mtd_file_info *mfi = file->private_data;
>  	struct mtd_info *mtd = mfi->mtd;
> +	size_t size;
>  	char *kbuf;
>  	size_t retlen;
>  	size_t total_retlen=0;
> @@ -285,21 +313,16 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
>  	if (!count)
>  		return 0;
>
> -	if (count > MAX_KMALLOC_SIZE)
> -		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
> -	else
> -		kbuf=kmalloc(count, GFP_KERNEL);
> +	size = count;
> +
> +	kbuf = mtd_try_alloc(&size);
>
>  	if (!kbuf)
>  		return -ENOMEM;
>
>  	while (count) {
>
> -		if (count > MAX_KMALLOC_SIZE)
> -			len = MAX_KMALLOC_SIZE;
> -		else
> -			len = count;
> +		len = min_t(size_t, count, size);
>
>  		if (copy_from_user(kbuf, buf, len)) {
>  			kfree(kbuf);

Similar requests for this "symmetric" piece of code.

-- 
Best Regards,
Artem Bityutskiy (Битюцкий Артём)

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCH v2] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-05  4:39   ` Artem Bityutskiy
@ 2011-04-05 15:54     ` Grant Erickson
  2011-04-05 16:54       ` Artem Bityutskiy
  0 siblings, 1 reply; 11+ messages in thread
From: Grant Erickson @ 2011-04-05 15:54 UTC (permalink / raw)
  To: Artem Bityutskiy; +Cc: linux-mtd

Artem:

Thanks for the quick turnaround in feedback. Please see inline below.

On 4/4/11 9:39 PM, Artem Bityutskiy wrote:
> On Mon, 2011-04-04 at 11:19 -0700, Grant Erickson wrote:
>> + * is highly fragmented at the cost of reducing the performance of the
>> + * requested transfer due to a smaller buffer size.
>> + *
>> + * A more complex but more memory-efficient implementation based on
>> + * get_user_pages and iovecs to cover extents of those pages is a
>> + * longer-term goal, as intimated by Linus above. However, for the
>> + * write case, this requires yet more complex head and tail transfer
>> + * handling when those head and tail offsets and sizes are such that
>> + * alignment requirements are not met in the NAND subdriver.
>> + */
>>  #define MAX_KMALLOC_SIZE 0x20000
>>
>> +static void *mtd_try_alloc(size_t *size)
>> +{
>> +	const gfp_t flags = (GFP_KERNEL | __GFP_NOWARN);
>
> I still think you'll damage the performance when you try to do
>
> kmalloc(128KiB, flags)
>
> because as I wrote in my previous e-mail your system will start doing
> the following to free memory for you:
>
> 1. write-back dirty FS data = overall slowdown = e.g., background mp3
>    playback glitches
> 2. drop FS caches = slow down later because the system will have to
>    re-read the dropped data from the media later.
> 3. not really sure, needs checking if this is the case, but I think
>    the kernel may start swapping out apps.
>
> This is why I suggested to use the following flags here:
>
> gfp_t flags = __GFP_NOWARN | __GFP_WAIT | __GFP_NORETRY;

On my system (64 MiB RAM, 256 MiB Flash), there is no swap and under these
allocation conditions for jffs2_scan_medium, mtd_read or mtd_write, I don't
see the kernel doing (1), (2) or (3).

My impression is that the above behaviors are only activated when a swap
store exists and, in general, most systems using JFFS2 and MTD do not have
swap.

Regardless, adding the additional flags should not be detrimental for
systems with no swap and, it sounds like, helpful for systems with it.

Regarding the suggestion of mtd_alloc_upto() or mtd_alloc_as_much(), are you
OK exporting these from mtdchar.c or would you rather they be moved to and
exported from mtdcore.c?

Stay tuned for v3.

-Grant

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCH v2] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-05 15:54     ` Grant Erickson
@ 2011-04-05 16:54       ` Artem Bityutskiy
  0 siblings, 0 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2011-04-05 16:54 UTC (permalink / raw)
  To: Grant Erickson; +Cc: linux-mtd

On Tue, 2011-04-05 at 08:54 -0700, Grant Erickson wrote:
> Artem:
>
> Thanks for the quick turnaround in feedback. Please see inline below.
>
> On 4/4/11 9:39 PM, Artem Bityutskiy wrote:
> > On Mon, 2011-04-04 at 11:19 -0700, Grant Erickson wrote:
> >> + * is highly fragmented at the cost of reducing the performance of the
> >> + * requested transfer due to a smaller buffer size.
> >> + *
> >> + * A more complex but more memory-efficient implementation based on
> >> + * get_user_pages and iovecs to cover extents of those pages is a
> >> + * longer-term goal, as intimated by Linus above. However, for the
> >> + * write case, this requires yet more complex head and tail transfer
> >> + * handling when those head and tail offsets and sizes are such that
> >> + * alignment requirements are not met in the NAND subdriver.
> >> + */
> >>  #define MAX_KMALLOC_SIZE 0x20000
> >>
> >> +static void *mtd_try_alloc(size_t *size)
> >> +{
> >> +	const gfp_t flags = (GFP_KERNEL | __GFP_NOWARN);
> >
> > I still think you'll damage the performance when you try to do
> >
> > kmalloc(128KiB, flags)
> >
> > because as I wrote in my previous e-mail your system will start doing
> > the following to free memory for you:
> >
> > 1. write-back dirty FS data = overall slowdown = e.g., background mp3
> >    playback glitches
> > 2. drop FS caches = slow down later because the system will have to
> >    re-read the dropped data from the media later.
> > 3. not really sure, needs checking if this is the case, but I think
> >    the kernel may start swapping out apps.
> >
> > This is why I suggested to use the following flags here:
> >
> > gfp_t flags = __GFP_NOWARN | __GFP_WAIT | __GFP_NORETRY;
>
> On my system (64 MiB RAM, 256 MiB Flash), there is no swap and under these
> allocation conditions for jffs2_scan_medium, mtd_read or mtd_write, I don't
> see the kernel doing (1), (2) or (3).

Well, the code is complex and not easy to follow if you do not know it.
But I navigated it to the 'do_try_to_free_pages()' function. This
function can be called by kmalloc(), and it does some of the things I
described.

And yes, kmalloc() may cause kswapd to wake up and start swapping, I can
see it in '__alloc_pages_slowpath()'. To prevent this we need
__GFP_NO_KSWAPD flag which I suggest you to also add.

> My impression is that the above behaviors are only activated when a swap
> store exists and, in general, most systems using JFFS2 and MTD do not have
> swap.

Well, most but not all, I worked with one with swap (N900 phone).

> Regardless, adding the additional flags should not be detrimental for
> systems with no swap and, it sounds like, helpful for systems with it.

OK. But things 1 and 2 which I described are relevant for non-swap
systems anyway. Dunno why you did not observe them, probably you did not
have high enough memory pressure and your flusher threads and other
things kept the memory within limits (there are watermarks which cause
background processes to start and free RAM if you cross them).

> Regarding the suggestion of mtd_alloc_upto() or mtd_alloc_as_much(), are you
> OK exporting these from mtdchar.c or would you rather they be moved to and
> exported from mtdcore.c?

I guess mtdcore is better place.

-- 
Best Regards,
Artem Bityutskiy (Артём Битюцкий)

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCH v2] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-04 18:19 ` [PATCH v2] " Grant Erickson
  2011-04-05  4:39   ` Artem Bityutskiy
@ 2011-04-05  4:48   ` Artem Bityutskiy
  1 sibling, 0 replies; 11+ messages in thread
From: Artem Bityutskiy @ 2011-04-05  4:48 UTC (permalink / raw)
  To: Grant Erickson; +Cc: linux-mtd

On Mon, 2011-04-04 at 11:19 -0700, Grant Erickson wrote:
> When handling user space read or write requests via mtd_{read,write},
> exponentially back off on the size of the requested kernel transfer
> buffer until it succeeds or until the requested transfer buffer size
> falls below the page size.
>
> This helps ensure the operation can succeed under low-memory,
> highly-fragmented situations albeit somewhat more slowly.
>
> v2: Added __GFP_NOWARN flag and made common retry loop a function
>     as recommended by Artem.
>
> Signed-off-by: Grant Erickson <marathon96@gmail.com>
> ---
>  drivers/mtd/mtdchar.c |   66 +++++++++++++++++++++++++++++++++---------------
>  1 files changed, 45 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
> index 145b3d0d..df9be51 100644
> --- a/drivers/mtd/mtdchar.c
> +++ b/drivers/mtd/mtdchar.c
> @@ -166,11 +166,44 @@ static int mtd_close(struct inode *inode, struct file *file)
>  	return 0;
>  } /* mtd_close */
>
> -/* FIXME: This _really_ needs to die. In 2.5, we should lock the
> -   userspace buffer down and use it directly with readv/writev.
> -*/
> +/* Back in April 2005, Linus wrote:
> + *
> + *   FIXME: This _really_ needs to die. In 2.5, we should lock the
> + *   userspace buffer down and use it directly with readv/writev.
> + *
> + * The implementation below, using mtd_try_alloc, mitigates allocation
> + * failures when the sytem is under low-memory situations or if memory
> + * is highly fragmented at the cost of reducing the performance of the
> + * requested transfer due to a smaller buffer size.
> + *
> + * A more complex but more memory-efficient implementation based on
> + * get_user_pages and iovecs to cover extents of those pages is a
> + * longer-term goal, as intimated by Linus above. However, for the
> + * write case, this requires yet more complex head and tail transfer
> + * handling when those head and tail offsets and sizes are such that
> + * alignment requirements are not met in the NAND subdriver.
> + */
>  #define MAX_KMALLOC_SIZE 0x20000
>
> +static void *mtd_try_alloc(size_t *size)

Also, if you do the changes I request and make this function allow scary
kmalloc warnings on the last resort PAGE_SIZE allocation, the "try" in
the function name becomes not very appropriate, because in the kernel
APIs it is usually used for something like "try, if did not succeed, no
worry, just return". E.g., mutex_try_lock() or something.

I think it is better to name it mtd_alloc or something like this, but
without "try". And probably you want to reuse this function in JFFS2, so
we should give it some name which is good for exported API function.

May be mtd_alloc_upto() ? Or mtd_alloc_as_much() ? Or better ideas?

-- 
Best Regards,
Artem Bityutskiy (Битюцкий Артём)

^ permalink raw reply	[flat|nested] 11+ messages in thread
* [PATCH] JFFS2: Retry Medium Scan Buffer Allocations
@ 2011-04-02  1:29 Grant Erickson
  2011-04-04 18:39 ` [PATCH] MTD: Retry Read/Write Transfer " Grant Erickson
  0 siblings, 1 reply; 11+ messages in thread
From: Grant Erickson @ 2011-04-02  1:29 UTC (permalink / raw)
  To: linux-mtd

When handling a JFFS2 medium scan request exponentially back off on
the size of the requested scan buffer until it succeeds or until the
requested scan buffer size falls below a page.

This helps ensure the allocation and subsequent scan operation can
succeed under low-memory, highly-fragmented situations albeit somewhat
more slowly.

Signed-off-by: Grant Erickson <marathon96@gmail.com>
---
 fs/jffs2/scan.c |   11 +++++++----
 1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
index b632ddd..4d8746d 100644
--- a/fs/jffs2/scan.c
+++ b/fs/jffs2/scan.c
@@ -118,11 +118,14 @@ int jffs2_scan_medium(struct jffs2_sb_info *c)
 			buf_size = PAGE_SIZE;
 
 		/* Respect kmalloc limitations */
-		if (buf_size > 128*1024)
-			buf_size = 128*1024;
+		buf_size = min_t(uint32_t, buf_size, 128*1024);
+
+		do {
+			D1(printk(KERN_DEBUG "Trying to allocate readbuf of %d "
+				  "bytes\n", buf_size));
+			flashbuf = kmalloc(buf_size, GFP_KERNEL);
+		} while (!flashbuf && ((buf_size >>= 1) >= PAGE_SIZE));
 
-		D1(printk(KERN_DEBUG "Allocating readbuf of %d bytes\n", buf_size));
-		flashbuf = kmalloc(buf_size, GFP_KERNEL);
 		if (!flashbuf)
 			return -ENOMEM;
 	}
-- 
1.7.4.2

^ permalink raw reply related	[flat|nested] 11+ messages in thread
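The sequence of buffer sizes the loop above will request is easy to enumerate. Below is a user-space sketch of ours (SIM_PAGE_SIZE and the function name are not from the patch) that records the sizes jffs2_scan_medium would pass to kmalloc(), largest first:

```c
#include <assert.h>
#include <stdint.h>

#define SIM_PAGE_SIZE 4096u /* stand-in for PAGE_SIZE */

/* Record each size the back-off loop would request, from the clamped
 * starting size down to the final page-sized attempt. */
static int backoff_sizes(uint32_t buf_size, uint32_t *out, int max)
{
	int n = 0;

	/* Respect kmalloc limitations, as in jffs2_scan_medium */
	if (buf_size > 128 * 1024)
		buf_size = 128 * 1024;

	do {
		if (n < max)
			out[n] = buf_size;
		n++;
	} while ((buf_size >>= 1) >= SIM_PAGE_SIZE);

	return n;
}
```

For a 256 KiB erase block this yields six attempts, 128 KiB down to 4 KiB, which bounds the worst-case number of failed allocations before the scan gives up with -ENOMEM.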
* RE: [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations
  2011-04-02  1:29 [PATCH] JFFS2: Retry Medium Scan " Grant Erickson
@ 2011-04-04 18:39 ` Grant Erickson
  0 siblings, 0 replies; 11+ messages in thread
From: Grant Erickson @ 2011-04-04 18:39 UTC (permalink / raw)
To: linux-mtd; +Cc: Jarkko Lavinen, Artem Bityutskiy

As promised, this is the alternate, work-in-progress implementation of
this patch using the more complex get_user_pages and iovecs to perform
a user-requested read or write operation, reusing the buffer passed
from user space.

As implemented below, reads work 100% of the time for the few test
cases I've thrown at it (fw_printenv and nanddump). However, writes
will not work in this implementation unless the size and offset meet
the alignment requirements stipulated by nand_do_write_ops. Addressing
this will require deblocking the head and tail of the transfer.

I see this as a follow-on, iterative approach to the simpler,
short-term patch submitted by Jarkko and myself. Hopefully, we can
upstream that patch now and follow up with this later when complete.

Comments and tweaks welcomed.

Best,

Grant

---
 drivers/mtd/mtdchar.c |  342 ++++++++++++++++++++++++++++++++++++++-----------
 1 files changed, 265 insertions(+), 77 deletions(-)

diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 145b3d0d..cd33128 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -51,6 +51,18 @@ struct mtd_file_info {
 	enum mtd_file_modes mode;
 };

+/*
+ * Data structure to handle scatter/gather transfer pages and
+ * page extents for user-space read and write requests.
+ */
+struct mtd_xio {
+	bool write;
+	struct page **pages;
+	unsigned long npages;
+	struct iovec *iovecs;
+	unsigned long niovecs;
+};
+
 static loff_t mtd_lseek (struct file *file, loff_t offset, int orig)
 {
 	struct mtd_file_info *mfi = file->private_data;
@@ -166,60 +178,170 @@ static int mtd_close(struct inode *inode, struct file *file)
 	return 0;
 } /* mtd_close */

-/* FIXME: This _really_ needs to die. In 2.5, we should lock the
-   userspace buffer down and use it directly with readv/writev.
-*/
-#define MAX_KMALLOC_SIZE 0x20000
+static inline bool pages_are_adjacent(struct page *first, struct page *second)
+{
+	return ((page_address(first) + PAGE_SIZE) == page_address(second));
+}

-static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t *ppos)
+static void pages_to_iovecs(size_t len, unsigned long off, struct page **pages, unsigned long npages, struct iovec *iovecs, unsigned long *niovecsp)
 {
-	struct mtd_file_info *mfi = file->private_data;
-	struct mtd_info *mtd = mfi->mtd;
-	size_t retlen=0;
-	size_t total_retlen=0;
-	int ret=0;
-	int len;
-	char *kbuf;
+	unsigned long iovec_len = 0;
+	size_t head_len, tail_len;
+	struct page *last_page, *curr_page;
+	void *address;
+	unsigned long p, v;

-	DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n");
+	/* Handle the head. */

-	if (*ppos + count > mtd->size)
-		count = mtd->size - *ppos;
+	head_len = min_t(size_t, len, PAGE_SIZE - off);
+	tail_len = (len - head_len) % PAGE_SIZE;

-	if (!count)
-		return 0;
+	iovecs[0].iov_base = page_address(pages[0]) + off;
+	iovecs[0].iov_len = head_len;

-	/* FIXME: Use kiovec in 2.5 to lock down the user's buffers
-	   and pass them directly to the MTD functions */
+	/* Handle the body + tail. */

-	if (count > MAX_KMALLOC_SIZE)
-		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
-	else
-		kbuf=kmalloc(count, GFP_KERNEL);
+	for (v = 0, p = 1; p < npages; p++) {
+		last_page = pages[p - 1];
+		curr_page = pages[p];
+		address = page_address(curr_page);

-	if (!kbuf)
-		return -ENOMEM;
+		if (!pages_are_adjacent(last_page, curr_page)) {
+			iovecs[++v].iov_base = address;
+		}
+
+		iovecs[v].iov_len += PAGE_SIZE;
+	}
+
+	/* Fix up the tail length. */
+
+	iovecs[v].iov_len -= (PAGE_SIZE - tail_len);
+
+	*niovecsp = v + 1;
+
+	for (v = 0; v < *niovecsp; v++) {
+		iovec_len += iovecs[v].iov_len;
+	}
+}
+
+static void put_pages(struct page **pages, unsigned long npages, bool write)
+{
+	unsigned long p;
+	struct page *page;
+
+	for (p = 0; p < npages; p++) {
+		page = pages[p];
+		if (write)
+			set_page_dirty_lock(page);
+		put_page(page);
+	}
+}
+
+static int mtd_get_user_pages(struct mtd_xio *mxio, void __user *base, size_t len, bool write)
+{
+	unsigned long ustart, off, npages, uend, niovecs;
+	struct page **pages;
+	struct iovec *iovecs;
+	int status;
+
+	if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ, base, len))) {
+		status = -EFAULT;
+		goto done;
+	}
+
+	ustart = (unsigned long)base & PAGE_MASK;
+	off = (unsigned long)base & ~PAGE_MASK;
+	npages = (off + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	uend = (unsigned long)(base + len);
+
+	niovecs = npages;
+
+	pages = kmalloc(npages * sizeof (struct page *), GFP_KERNEL);
+	if (!pages) {
+		status = -ENOMEM;
+		goto done;
+	}
+
+	iovecs = kzalloc(niovecs * sizeof (struct iovec), GFP_KERNEL);
+	if (!iovecs) {
+		status = -ENOMEM;
+		goto done_free_pages;
+	}
+
+	status = get_user_pages_fast(ustart, npages, write, pages);
+	if (status < 0)
+		goto done_free_iovecs;
+
+	if (status != npages) {
+		npages = status;
+		status = -EFAULT;
+		goto done_pages;
+	}
+
+	pages_to_iovecs(len, off, pages, npages, iovecs, &niovecs);
+
+	mxio->write = write;
+	mxio->pages = pages;
+	mxio->npages = npages;
+	mxio->iovecs = iovecs;
+	mxio->niovecs = niovecs;
+
+	return status;
+
+ done_pages:
+	put_pages(pages, npages, write);
+
+ done_free_iovecs:
+	kfree(iovecs);
+
+ done_free_pages:
+	kfree(pages);
+
+ done:
+	return status;
+}
+
+static void mtd_put_user_pages(struct mtd_xio *mxio)
+{
+	if (!mxio)
+		return;
+
+	if (mxio->npages && mxio->pages) {
+		put_pages(mxio->pages, mxio->npages, mxio->write);
+		kfree(mxio->pages);
+	}
+
+	if (mxio->niovecs && mxio->iovecs) {
+		kfree(mxio->iovecs);
+	}
+}
+
+static ssize_t mtd_do_read(struct file *file, char __kernel *buf, size_t count, loff_t *ppos)
+{
+	struct mtd_file_info *mfi = file->private_data;
+	struct mtd_info *mtd = mfi->mtd;
+	size_t retlen=0;
+	size_t total_retlen=0;
+	int ret=0;
+	int len;

 	while (count) {
-		if (count > MAX_KMALLOC_SIZE)
-			len = MAX_KMALLOC_SIZE;
-		else
-			len = count;
+		len = count;

 		switch (mfi->mode) {
 		case MTD_MODE_OTP_FACTORY:
-			ret = mtd->read_fact_prot_reg(mtd, *ppos, len, &retlen, kbuf);
+			ret = mtd->read_fact_prot_reg(mtd, *ppos, len, &retlen, buf);
 			break;
 		case MTD_MODE_OTP_USER:
-			ret = mtd->read_user_prot_reg(mtd, *ppos, len, &retlen, kbuf);
+			ret = mtd->read_user_prot_reg(mtd, *ppos, len, &retlen, buf);
 			break;
 		case MTD_MODE_RAW:
 		{
 			struct mtd_oob_ops ops;

 			ops.mode = MTD_OOB_RAW;
-			ops.datbuf = kbuf;
+			ops.datbuf = buf;
 			ops.oobbuf = NULL;
 			ops.len = len;

@@ -228,7 +350,7 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
 			break;
 		}
 		default:
-			ret = mtd->read(mtd, *ppos, len, &retlen, kbuf);
+			ret = mtd->read(mtd, *ppos, len, &retlen, buf);
 		}
 		/* Nand returns -EBADMSG on ecc errors, but it returns
 		 * the data. For our userspace tools it is important
@@ -241,12 +363,7 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
 		 */
 		if (!ret || (ret == -EUCLEAN) || (ret == -EBADMSG)) {
 			*ppos += retlen;
-			if (copy_to_user(buf, kbuf, retlen)) {
-				kfree(kbuf);
-				return -EFAULT;
-			}
-			else
-				total_retlen += retlen;
+			total_retlen += retlen;

 			count -= retlen;
 			buf += retlen;
@@ -254,56 +371,26 @@ static ssize_t mtd_read(struct file *file, char __user *buf, size_t count,loff_t
 				count = 0;
 		}
 		else {
-			kfree(kbuf);
 			return ret;
 		}
 	}

-	kfree(kbuf);
 	return total_retlen;
-} /* mtd_read */
+}

-static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count,loff_t *ppos)
+static ssize_t mtd_do_write(struct file *file, const char __kernel *buf, size_t count, loff_t *ppos)
 {
 	struct mtd_file_info *mfi = file->private_data;
 	struct mtd_info *mtd = mfi->mtd;
-	char *kbuf;
 	size_t retlen;
 	size_t total_retlen=0;
 	int ret=0;
 	int len;

-	DEBUG(MTD_DEBUG_LEVEL0,"MTD_write\n");
-
-	if (*ppos == mtd->size)
-		return -ENOSPC;
-
-	if (*ppos + count > mtd->size)
-		count = mtd->size - *ppos;
-
-	if (!count)
-		return 0;
-
-	if (count > MAX_KMALLOC_SIZE)
-		kbuf=kmalloc(MAX_KMALLOC_SIZE, GFP_KERNEL);
-	else
-		kbuf=kmalloc(count, GFP_KERNEL);
-
-	if (!kbuf)
-		return -ENOMEM;
-
 	while (count) {
-		if (count > MAX_KMALLOC_SIZE)
-			len = MAX_KMALLOC_SIZE;
-		else
-			len = count;
-
-		if (copy_from_user(kbuf, buf, len)) {
-			kfree(kbuf);
-			return -EFAULT;
-		}
+		len = count;

 		switch (mfi->mode) {
 		case MTD_MODE_OTP_FACTORY:
@@ -314,7 +401,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 				ret = -EOPNOTSUPP;
 				break;
 			}
-			ret = mtd->write_user_prot_reg(mtd, *ppos, len, &retlen, kbuf);
+			ret = mtd->write_user_prot_reg(mtd, *ppos, len, &retlen, (char __kernel *)buf);
 			break;

 		case MTD_MODE_RAW:
@@ -322,7 +409,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 			struct mtd_oob_ops ops;

 			ops.mode = MTD_OOB_RAW;
-			ops.datbuf = kbuf;
+			ops.datbuf = (char __kernel *)buf;
 			ops.oobbuf = NULL;
 			ops.len = len;

@@ -332,7 +419,7 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 		}

 		default:
-			ret = (*(mtd->write))(mtd, *ppos, len, &retlen, kbuf);
+			ret = (*(mtd->write))(mtd, *ppos, len, &retlen, buf);
 		}
 		if (!ret) {
 			*ppos += retlen;
@@ -341,13 +428,114 @@ static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count
 			buf += retlen;
 		}
 		else {
-			kfree(kbuf);
 			return ret;
 		}
 	}

-	kfree(kbuf);
 	return total_retlen;
+}
+
+typedef ssize_t (*mtd_io_fn_t)(struct file *, char *, size_t, loff_t *);
+
+static ssize_t mtd_do_loop_readv_writev(struct file *file, const struct iovec *iov, unsigned long iovcnt, loff_t *ppos, mtd_io_fn_t fn)
+{
+	const struct iovec *vector = iov;
+	ssize_t ret = 0;
+
+	while (iovcnt > 0) {
+		void *base;
+		size_t len;
+		ssize_t nr;
+
+		base = vector->iov_base;
+		len = vector->iov_len;
+		vector++;
+		iovcnt--;
+
+		nr = fn(file, base, len, ppos);
+
+		if (nr < 0) {
+			if (!ret)
+				ret = nr;
+			break;
+		}
+		ret += nr;
+		if (nr != len)
+			break;
+	}
+
+	return ret;
+}
+
+static ssize_t mtd_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+{
+	struct mtd_file_info *mfi = file->private_data;
+	struct mtd_info *mtd = mfi->mtd;
+	ssize_t ret;
+	size_t total_retlen=0;
+	struct mtd_xio mxio;
+
+	DEBUG(MTD_DEBUG_LEVEL0,"MTD_read\n");
+
+	if (*ppos + count > mtd->size)
+		count = mtd->size - *ppos;
+
+	if (!count)
+		return 0;
+
+	ret = mtd_get_user_pages(&mxio, buf, count, false);
+	if (ret < 0)
+		goto done_put_pages;
+
+	total_retlen = mtd_do_loop_readv_writev(file,
+						mxio.iovecs,
+						mxio.niovecs,
+						ppos,
+						mtd_do_read);
+
+	ret = total_retlen;
+
+ done_put_pages:
+	mtd_put_user_pages(&mxio);
+
+	return ret;
+} /* mtd_read */
+
+static ssize_t mtd_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
+{
+	struct mtd_file_info *mfi = file->private_data;
+	struct mtd_info *mtd = mfi->mtd;
+	ssize_t ret;
+	size_t total_retlen=0;
+	struct mtd_xio mxio;
+
+	DEBUG(MTD_DEBUG_LEVEL0,"MTD_write\n");
+
+	if (*ppos == mtd->size)
+		return -ENOSPC;
+
+	if (*ppos + count > mtd->size)
+		count = mtd->size - *ppos;
+
+	if (!count)
+		return 0;
+
+	ret = mtd_get_user_pages(&mxio, (void __user *)buf, count, true);
+	if (ret < 0)
+		goto done_put_pages;
+
+	total_retlen = mtd_do_loop_readv_writev(file,
+						mxio.iovecs,
+						mxio.niovecs,
+						ppos,
+						(mtd_io_fn_t)mtd_do_write);
+
+	ret = total_retlen;
+
+ done_put_pages:
+	mtd_put_user_pages(&mxio);
+
+	return ret;
 } /* mtd_write */

 /*======================================================================

^ permalink raw reply related	[flat|nested] 11+ messages in thread
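The trickiest step in the work-in-progress patch above is pages_to_iovecs(), which coalesces runs of virtually adjacent mapped pages into single iovecs while trimming the first iovec to the transfer's head and the last to its tail. A userspace model of that coalescing step follows; the signature is this sketch's own (page addresses passed as plain `char *`), and the tail fix-up is computed from the transfer's final byte offset rather than the patch's modulo expression so that transfers ending exactly on a page boundary come out right:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include <sys/uio.h>

#define PAGE_SIZE 4096UL

/* Model of the patch's pages_to_iovecs(): pages[] holds the mapped
 * address of each page backing the transfer, off is the offset of the
 * transfer's start within the first page, and len is its total length.
 * Runs of adjacent pages collapse into a single iovec; the first and
 * last iovecs are trimmed to the head and tail of the transfer.
 * Returns the number of iovecs produced. */
static unsigned long pages_to_iovecs(char **pages, unsigned long npages,
				     unsigned long off, size_t len,
				     struct iovec *iov)
{
	size_t head = (len < PAGE_SIZE - off) ? len : PAGE_SIZE - off;
	unsigned long p, v = 0;

	iov[0].iov_base = pages[0] + off;
	iov[0].iov_len = head;

	for (p = 1; p < npages; p++) {
		if (pages[p - 1] + PAGE_SIZE != pages[p])
			iov[++v].iov_base = pages[p];	/* start a new extent */
		iov[v].iov_len += PAGE_SIZE;
	}

	/* The loop charged the last page in full; trim it back to the
	 * bytes the transfer actually occupies within that page. */
	if (npages > 1)
		iov[v].iov_len -= PAGE_SIZE - (((off + len - 1) % PAGE_SIZE) + 1);

	return v + 1;
}
```

With contiguous pages the whole transfer collapses to one iovec; a gap between pages starts a new extent, which is exactly what lets the read/write loop issue one MTD operation per physically contiguous run.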
end of thread, other threads:[~2011-04-05 16:57 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2011-04-02  0:44 [PATCH] MTD: Retry Read/Write Transfer Buffer Allocations Grant Erickson
2011-04-04  7:27 ` Artem Bityutskiy
2011-04-04  7:41   ` Artem Bityutskiy
2011-04-04 16:05     ` Grant Erickson
2011-04-05  4:39       ` Artem Bityutskiy
2011-04-04 18:19 ` [PATCH v2] " Grant Erickson
2011-04-05  4:39   ` Artem Bityutskiy
2011-04-05 15:54     ` Grant Erickson
2011-04-05 16:54       ` Artem Bityutskiy
2011-04-05  4:48   ` Artem Bityutskiy

-- strict thread matches above, loose matches on Subject: below --
2011-04-02  1:29 [PATCH] JFFS2: Retry Medium Scan " Grant Erickson
2011-04-04 18:39 ` [PATCH] MTD: Retry Read/Write Transfer " Grant Erickson