* extremely long blockages when doing random writes to SSD
@ 2015-06-24 21:54 Luigi Semenzato
From: Luigi Semenzato @ 2015-06-24 21:54 UTC (permalink / raw)
To: Linux Memory Management List
Greetings,
we have an app that writes 4k blocks to an SSD partition with more or
less random seeks. (For the curious: it's called "update engine" and
it's used to install a new Chrome OS version in the background.) The
total size of the writes can be a few hundred megabytes. During this
time, we see that other apps, such as the browser, block for seconds,
or tens of seconds.
I have reproduced this behavior with a small program that writes 2GB
worth of 4k blocks randomly to the SSD partition. I can get apps to
block for over 2 minutes, at which point our hang detector triggers
and panics the kernel.
CPU: Intel Haswell i7
RAM: 4GB
SSD: 16GB SanDisk
kernel: 3.8
From /proc/meminfo I see that the "Buffers:" entry easily gets over
1GB. The problem goes away completely, as expected, if I use O_SYNC
when doing the random writes, but then the average size of the I/O
requests goes down a lot, also as expected.
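(A quick shell approximation of that comparison, for anyone reproducing it; the target path below is a stand-in for illustration, not the real partition:)

```shell
# oflag=sync forces each 4k write to reach the target before the next
# one starts -- roughly the O_SYNC trade-off described above.
# /tmp/sync-demo.bin is an illustrative target, not the real partition.
dd if=/dev/zero of=/tmp/sync-demo.bin bs=4k count=16 oflag=sync
stat -c '%s' /tmp/sync-demo.bin
```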
First of all, it seems that there may be some kind of resource
management bug. Maybe it has been fixed in later kernels? But, if
not, is there any way of encouraging some in-between behavior? That
is, limit the allocation of I/O buffers to a smaller amount, which
still gives the system a chance to do some coalescing, but perhaps
avoids the extreme badness that we are seeing?
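(For what it's worth, the "in-between behavior" asked about here maps to the writeback thresholds under /proc/sys/vm; a sketch, with purely illustrative values and requiring root:)

```shell
# Cap the amount of dirty page-cache data before writeback throttles
# the writer; values here are illustrative only, not a recommendation.
echo $((64 * 1024 * 1024)) > /proc/sys/vm/dirty_bytes            # hard limit: 64MB
echo $((16 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes # start async writeback at 16MB
```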
Thank you for any insight!
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-24 22:25 ` Andrew Morton
From: Andrew Morton @ 2015-06-24 22:25 UTC (permalink / raw)
To: Luigi Semenzato; +Cc: Linux Memory Management List
On Wed, 24 Jun 2015 14:54:09 -0700 Luigi Semenzato <semenzato@google.com> wrote:
> Greetings,
>
> we have an app that writes 4k blocks to an SSD partition with more or
> less random seeks. (For the curious: it's called "update engine" and
> it's used to install a new Chrome OS version in the background.) The
> total size of the writes can be a few hundred megabytes. During this
> time, we see that other apps, such as the browser, block for seconds,
> or tens of seconds.
>
> I have reproduced this behavior with a small program that writes 2GB
> worth of 4k blocks randomly to the SSD partition. I can get apps to
> block for over 2 minutes, at which point our hang detector triggers
> and panics the kernel.
>
> CPU: Intel Haswell i7
> RAM: 4GB
> SSD: 16GB SanDisk
> kernel: 3.8
>
> From /proc/meminfo I see that the "Buffers:" entry easily gets over
> 1GB. The problem goes away completely, as expected, if I use O_SYNC
> when doing the random writes, but then the average size of the I/O
> requests goes down a lot, also as expected.
>
> First of all, it seems that there may be some kind of resource
> management bug. Maybe it has been fixed in later kernels? But, if
> not, is there any way of encouraging some in-between behavior? That
> is, limit the allocation of I/O buffers to a smaller amount, which
> still gives the system a chance to do some coalescing, but perhaps
> avoids the extreme badness that we are seeing?
>
What kernel version?
Are you able to share that little test app with us?
Which filesystem is being used and with what mount options etc?
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-24 23:43 ` Luigi Semenzato
From: Luigi Semenzato @ 2015-06-24 23:43 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linux Memory Management List
Kernel version is 3.8.
I am not using a file system; I am writing directly into a partition.
Here's the little test app. I call it "random-write" but you're
welcome to call it whatever you wish.
My apologies for the copyright notice.
/* Copyright 2015 The Chromium OS Authors. All rights reserved.
* Use of this source code is governed by a BSD-style license that can be
* found in the LICENSE file.
*/
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/types.h>
#define PAGE_SIZE 4096
#define GIGA (1024 * 1024 * 1024)

typedef u_int8_t u8;
typedef u_int64_t u64;

typedef char bool;
const bool true = 1;
const bool false = 0;

/* Fisher-Yates shuffle of the offset array. */
void permute_randomly(off_t *offsets, int offset_count) {
  int i;
  for (i = 0; i < offset_count; i++) {
    int r = random() % (offset_count - i) + i;
    off_t t = offsets[r];
    offsets[r] = offsets[i];
    offsets[i] = t;
  }
}

u8 page[PAGE_SIZE];
off_t offsets[2 * (GIGA / PAGE_SIZE)];  /* 2GB worth of 4k block offsets */

int main(int ac, char **av) {
  u64 i;
  int out;

  /* Make "page" slightly non-empty, why not. */
  page[4] = 1;
  page[34] = 1;
  page[234] = 1;
  page[1234] = 1;

  for (i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
    offsets[i] = i * PAGE_SIZE;
  }
  permute_randomly(offsets, sizeof(offsets) / sizeof(offsets[0]));

  if (ac < 2) {
    fprintf(stderr, "usage: %s <device>\n", av[0]);
    exit(1);
  }
  out = open(av[1], O_WRONLY);
  if (out < 0) {
    perror(av[1]);
    exit(1);
  }

  for (i = 0; i < sizeof(offsets) / sizeof(offsets[0]); i++) {
    int rc;
    if (lseek(out, offsets[i], SEEK_SET) < 0) {
      perror("lseek");
      exit(1);
    }
    rc = write(out, page, sizeof(page));
    if (rc < 0) {
      perror("write");
      exit(1);
    } else if (rc != sizeof(page)) {
      fprintf(stderr, "wrote %d bytes, expected %zu\n", rc, sizeof(page));
      exit(1);
    }
  }
  return 0;
}
On Wed, Jun 24, 2015 at 3:25 PM, Andrew Morton
<akpm@linux-foundation.org> wrote:
> On Wed, 24 Jun 2015 14:54:09 -0700 Luigi Semenzato <semenzato@google.com> wrote:
>> [ ... original report snipped; quoted in full above ... ]
> What kernel version?
>
> Are you able to share that little test app with us?
>
> Which filesystem is being used and with what mount options etc?
>
>
>
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-25 18:24 ` Luigi Semenzato
From: Luigi Semenzato @ 2015-06-25 18:24 UTC (permalink / raw)
To: Andrew Morton; +Cc: Linux Memory Management List
I looked at this some more and I am not sure that there is any bug, or
that further tuning would help.
While the random-write process runs, iostat -x -k 1 reports these numbers:
average queue size: around 300
average write wait: typically 200 to 400 ms, but can be over 1000 ms
average read wait: typically 50 to 100 ms
(more info at crbug.com/414709)
The read latency may be enough to explain the jank. In addition, the
browser can do fsyncs, and I think that those will block for a long
time.
Ionice doesn't seem to make a difference. I suspect that once the
blocks are in the output queue, it's first-come/first-serve. Is this
correct or am I confused?
We can fix this on the application side but only partially. The OS
version updater can use O_SYNC. The problem is that this can happen in
a number of situations, such as when simply downloading a large file,
and in other code that we don't control.
On Wed, Jun 24, 2015 at 4:43 PM, Luigi Semenzato <semenzato@google.com> wrote:
> Kernel version is 3.8.
>
> I am not using a file system, I am writing directly into a partition.
>
> [ ... test program and earlier quoted messages snipped ... ]
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-26  0:58 ` Sergey Senozhatsky
From: Sergey Senozhatsky @ 2015-06-26 0:58 UTC (permalink / raw)
To: Luigi Semenzato; +Cc: Andrew Morton, Linux Memory Management List
Hello,
On (06/25/15 11:24), Luigi Semenzato wrote:
> I looked at this some more and I am not sure that there is any bug, or
> other possible tuning.
>
> While the random-write process runs, iostat -x -k 1 reports these numbers:
>
> average queue size: around 300
> average write wait: typically 200 to 400 ms, but can be over 1000 ms
> average read wait: typically 50 to 100 ms
>
> (more info at crbug.com/414709)
>
> The read latency may be enough to explain the jank. In addition, the
> browser can do fsyncs, and I think that those will block for a long
> time.
>
> Ionice doesn't seem to make a difference. I suspect that once the
> blocks are in the output queue, it's first-come/first-serve. Is this
> correct or am I confused?
>
> We can fix this on the application side but only partially. The OS
> version updater can use O_SYNC. The problem is that this can happen in
> a number of situations, such as when simply downloading a large file,
> and in other code we don't control.
>
do you use CONFIG_IOSCHED_DEADLINE or CONFIG_IOSCHED_CFQ?
-ss
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-26  1:31 ` Luigi Semenzato
From: Luigi Semenzato @ 2015-06-26 1:31 UTC (permalink / raw)
To: Sergey Senozhatsky; +Cc: Andrew Morton, Linux Memory Management List
We're using CFQ.
CONFIG_DEFAULT_IOSCHED="cfq"
...
CONFIG_IOSCHED_CFQ=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_NOOP=y
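(As a side note, with all three schedulers compiled in as above, the active one can be switched per-device at runtime through sysfs, so experimenting needs no rebuild; the device name below is illustrative:)

```shell
# Current scheduler; the active one is shown in brackets,
# e.g. "noop deadline [cfq]".  (sda is an illustrative device name.)
cat /sys/block/sda/queue/scheduler
# Switch this queue to deadline at runtime.
echo deadline > /sys/block/sda/queue/scheduler
# While here: 0 means the kernel already treats the device as non-rotational.
cat /sys/block/sda/queue/rotational
```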
On Thu, Jun 25, 2015 at 5:58 PM, Sergey Senozhatsky
<sergey.senozhatsky.work@gmail.com> wrote:
> Hello,
>
> On (06/25/15 11:24), Luigi Semenzato wrote:
>> I looked at this some more and I am not sure that there is any bug, or
>> other possible tuning.
>>
>> While the random-write process runs, iostat -x -k 1 reports these numbers:
>>
>> average queue size: around 300
>> average write wait: typically 200 to 400 ms, but can be over 1000 ms
>> average read wait: typically 50 to 100 ms
>>
>> (more info at crbug.com/414709)
>>
>> The read latency may be enough to explain the jank. In addition, the
>> browser can do fsyncs, and I think that those will block for a long
>> time.
>>
>> Ionice doesn't seem to make a difference. I suspect that once the
>> blocks are in the output queue, it's first-come/first-serve. Is this
>> correct or am I confused?
>>
>> We can fix this on the application side but only partially. The OS
>> version updater can use O_SYNC. The problem is that this can happen in
>> a number of situations, such as when simply downloading a large file,
>> and in other code we don't control.
>>
>
> do you use CONFIG_IOSCHED_DEADLINE or CONFIG_IOSCHED_CFQ?
>
> -ss
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-26  1:42 ` Sergey Senozhatsky
From: Sergey Senozhatsky @ 2015-06-26 1:42 UTC (permalink / raw)
To: Luigi Semenzato
Cc: Sergey Senozhatsky, Andrew Morton, Linux Memory Management List
On (06/25/15 18:31), Luigi Semenzato wrote:
> We're using CFQ.
>
> CONFIG_DEFAULT_IOSCHED="cfq"
> ...
> CONFIG_IOSCHED_CFQ=y
> CONFIG_IOSCHED_DEADLINE=y
> CONFIG_IOSCHED_NOOP=y
>
any chance to try out DEADLINE?
CFQ, as far as I understand, doesn't make too much sense for SSDs.
-ss
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-26  1:43 ` Luigi Semenzato
From: Luigi Semenzato @ 2015-06-26 1:43 UTC (permalink / raw)
To: Sergey Senozhatsky; +Cc: Andrew Morton, Linux Memory Management List
I will try and report, thanks.
On Thu, Jun 25, 2015 at 6:42 PM, Sergey Senozhatsky
<sergey.senozhatsky.work@gmail.com> wrote:
> On (06/25/15 18:31), Luigi Semenzato wrote:
>> We're using CFQ.
>>
>> CONFIG_DEFAULT_IOSCHED="cfq"
>> ...
>> CONFIG_IOSCHED_CFQ=y
>> CONFIG_IOSCHED_DEADLINE=y
>> CONFIG_IOSCHED_NOOP=y
>>
>
> any chance to try out DEADLINE?
> CFQ, as far as I understand, doesn't make too much sense for SSDs.
>
> -ss
* Re: extremely long blockages when doing random writes to SSD
@ 2015-06-26 18:24 ` Luigi Semenzato
From: Luigi Semenzato @ 2015-06-26 18:24 UTC (permalink / raw)
To: Sergey Senozhatsky; +Cc: Andrew Morton, Linux Memory Management List
I've tried both deadline and noop schedulers. They both seem to
improve the behavior somewhat, in the sense that I no longer see the
panic-inducing two-minute hung tasks. But the interactive response
can remain poor, with the UI freezing for many seconds, including the
mouse cursor. The write bandwidth also goes down, from 8-10 MB/s to
2-4 MB/s, but I am not sure that's a concern because of the nature of
the test.
Interestingly, some of the very long blockages with the CFQ scheduler
happen on page faults from reading an mmapped file, as below.
In any case I appreciate all the help. This request was mostly to
make sure that I am not missing some major change to the I/O scheduler
("oh yes, there was this nasty bug, but it's fixed in 4.x..."). Maybe
this is not the right group though?
Thanks!
[215549.914848] INFO: task update_engine:1249 blocked for more than 120 seconds.
[215549.914858] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[215549.914865] update_engine D ffff88017425ddb8 0 1249 1 0x00000000
[215549.914875] ffff88015558d710 0000000000000082 ffff8801780c5280
ffff88015558dfd8
[215549.914887] ffff88015558dfd8 0000000000011cc0 ffff88017425da00
ffff88017ca91cc0
[215549.914898] ffff88017cdc3b18 ffff88015558d7b8 ffffffff84d12ca5
0000000000000002
[215549.914909] Call Trace:
[215549.914920] [<ffffffff84d12ca5>] ? generic_block_bmap+0x65/0x65
[215549.914929] [<ffffffff850bf38b>] schedule+0x64/0x66
[215549.914935] [<ffffffff850bf509>] io_schedule+0x57/0x71
[215549.914942] [<ffffffff84d12cb3>] sleep_on_buffer+0xe/0x12
[215549.914951] [<ffffffff850bd8d3>] __wait_on_bit+0x46/0x76
[215549.914958] [<ffffffff850bd984>] out_of_line_wait_on_bit+0x81/0xa0
[215549.914966] [<ffffffff84d12ca5>] ? generic_block_bmap+0x65/0x65
[215549.914974] [<ffffffff84c51999>] ? autoremove_wake_function+0x34/0x34
[215549.914981] [<ffffffff84d13777>] __wait_on_buffer+0x26/0x28
[215549.914988] [<ffffffff84d13872>] wait_on_buffer+0x1e/0x20
[215549.914994] [<ffffffff84d1460a>] bh_submit_read+0x49/0x5b
[215549.915004] [<ffffffff84d7ea00>] ext4_get_branch+0x94/0x117
[215549.915011] [<ffffffff84d7eb72>] ext4_ind_map_blocks+0xef/0x513
[215549.915019] [<ffffffff84d4b7bb>] ext4_map_blocks+0x68/0x22a
[215549.915026] [<ffffffff84d4d724>] _ext4_get_block+0xd6/0x171
[215549.915034] [<ffffffff84d4d7d5>] ext4_get_block+0x16/0x18
[215549.915041] [<ffffffff84d1ba95>] do_mpage_readpage+0x1b1/0x50c
[215549.915048] [<ffffffff84d4d7bf>] ? _ext4_get_block+0x171/0x171
[215549.915057] [<ffffffff84cbfeef>] ? __lru_cache_add+0x39/0x75
[215549.915064] [<ffffffff84d4d7bf>] ? _ext4_get_block+0x171/0x171
[215549.915071] [<ffffffff84d1bee2>] mpage_readpages+0xf2/0x149
[215549.915078] [<ffffffff84d4d7bf>] ? _ext4_get_block+0x171/0x171
[215549.915085] [<ffffffff84d49ed5>] ext4_readpages+0x3c/0x43
[215549.915092] [<ffffffff84cbee01>] __do_page_cache_readahead+0x14d/0x203
[215549.915100] [<ffffffff84cbf0d1>] ra_submit+0x21/0x25
[215549.915107] [<ffffffff84cb7745>] filemap_fault+0x197/0x381
[215549.915115] [<ffffffff84cd14d0>] __do_fault+0xb0/0x34a
[215549.915122] [<ffffffff84cf9c30>] ? poll_select_copy_remaining+0x11d/0x11d
[215549.915130] [<ffffffff84cd3320>] handle_pte_fault+0x124/0x4f9
[215549.915137] [<ffffffff84cd446e>] handle_mm_fault+0x97/0xbb
[215549.915145] [<ffffffff84c297cd>] __do_page_fault+0x1d4/0x38c
[215549.915152] [<ffffffff84d23059>] ? eventfd_ctx_read+0x184/0x1aa
[215549.915159] [<ffffffff84c5f37a>] ? wake_up_state+0x12/0x12
[215549.915168] [<ffffffff84c39318>] ? timespec_add_safe+0x38/0x7b
[215549.915174] [<ffffffff84c299b7>] do_page_fault+0xe/0x10
[215549.915182] [<ffffffff850c05b2>] page_fault+0x22/0x30
On Thu, Jun 25, 2015 at 6:43 PM, Luigi Semenzato <semenzato@google.com> wrote:
> I will try and report, thanks.
>
> [ ... earlier messages snipped ... ]
Thread overview: 9+ messages
2015-06-24 21:54 extremely long blockages when doing random writes to SSD Luigi Semenzato
2015-06-24 22:25 ` Andrew Morton
2015-06-24 23:43 ` Luigi Semenzato
2015-06-25 18:24 ` Luigi Semenzato
2015-06-26 0:58 ` Sergey Senozhatsky
2015-06-26 1:31 ` Luigi Semenzato
2015-06-26 1:42 ` Sergey Senozhatsky
2015-06-26 1:43 ` Luigi Semenzato
2015-06-26 18:24 ` Luigi Semenzato