public inbox for linux-kernel@vger.kernel.org
* I/O and pdflush
@ 2009-07-11 17:27 Fernando Silveira
  2009-07-12  8:04 ` Wu Fengguang
  0 siblings, 1 reply; 16+ messages in thread
From: Fernando Silveira @ 2009-07-11 17:27 UTC (permalink / raw)
  To: linux-kernel

Hi.

I'm having a hard time with an application that writes sequentially
250GB of non-stop data directly to a solid state disk (OCZ SSD CORE
v2) device and I hope you can help me. The command "dd if=/dev/zero
of=/dev/sdc bs=4M" reproduces the same symptoms I'm having and writes
exactly as that application does.

The problem is that after writing data at 70MB/s for a while, the rate
eventually drops to about 25MB/s and does not recover for a long time
(anywhere from 1 to 30 minutes). This happens much more often with the
default "vm.dirty_*" settings (30 secs to expire, 5 secs for writeback,
10% and 40% for the background and normal ratios); when I set them to 1
second or even 0, the problem happens much less often and the periods
stuck at 25MB/s are much shorter.
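For reference, the settings mentioned above correspond to the standard
/proc/sys/vm sysctls (a sketch; the values shown are the defaults
described, in centiseconds and percent):

```shell
# Defaults corresponding to the description above:
#   vm.dirty_expire_centisecs    = 3000  (dirty pages expire after 30 s)
#   vm.dirty_writeback_centisecs = 500   (pdflush wakes every 5 s)
#   vm.dirty_background_ratio    = 10    (background writeback threshold)
#   vm.dirty_ratio               = 40    (hard throttling threshold)

# Inspect the current values:
for f in dirty_expire_centisecs dirty_writeback_centisecs \
         dirty_background_ratio dirty_ratio; do
    printf '%-30s %s\n' "vm.$f" "$(cat /proc/sys/vm/$f)"
done

# Lowering them (as root) forces earlier, smaller writeback bursts:
# sysctl -w vm.dirty_expire_centisecs=100
# sysctl -w vm.dirty_writeback_centisecs=100
```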

In one of my experiments, I could see that writing some blocks of data
(approx. 48 blocks of 4MB each time) at random positions on the "disk"
increases the chances of the write rate dropping to 25MB/s.
You can see at this graph[1] that after the 7th random big write (at
66 GB) it falls down to 25MB/s. The writes happened at the following
positions (in GB): 10, 20, 30, 39, 48, 57, 66, 73, 80, 90, 100, 109,
118, 128, 137, 147, and 156 GB.

As I don't know much about kernel internals, my guess is that the SSD
might be "hiccuping" and some kind of kernel I/O scheduler or pdflush
behavior decreases the rate to avoid write errors, but I don't know.

Could somebody tell me how I could debug the kernel and any of its
modules to understand exactly why writing behaves this way? Maybe I
could do it just by logging write errors or something, I don't know.
Telling me which part I should start analyzing would be a huge help,
seriously.
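One simple starting point (a sketch, not a full tracing setup) is to
sample the kernel's dirty/writeback page counters from /proc/meminfo
while the dd run is in progress, to see whether the slowdowns line up
with large amounts of dirty memory being flushed:

```shell
# Run in a second terminal while "dd if=/dev/zero of=/dev/sdc bs=4M"
# is writing; prints the dirty and under-writeback page counters
# once per second (three samples shown here):
for i in 1 2 3; do
    grep -E '^(Dirty|Writeback):' /proc/meminfo
    sleep 1
done
```

If Dirty stays pegged near the vm.dirty_ratio threshold whenever the
rate drops, the throttling is coming from writeback; if it stays low,
the device itself is more likely stalling.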

Thanks.

1. http://rootshell.be/~swrh/ssd-tests/ssd-no_dirty_buffer_with_random_192mb_writes.png

PS: This setup is used with two A/D converters which provide 25MB/s of
data each, so my writing software needs a sustained sequential write
rate of at least 50MB/s.

-- 
Fernando Silveira <fsilveira@gmail.com>

[parent not found: <cWOyL-3Ys-15@gated-at.bofh.it>]

end of thread, other threads:[~2009-09-04  2:34 UTC | newest]

Thread overview: 16+ messages
2009-07-11 17:27 I/O and pdflush Fernando Silveira
2009-07-12  8:04 ` Wu Fengguang
2009-08-28 21:48   ` Fernando Silveira
2009-08-29 10:12     ` Wu Fengguang
2009-08-29 10:21       ` Wu Fengguang
2009-08-31 13:24         ` Fernando Silveira
2009-08-31 14:00           ` Wu Fengguang
2009-08-31 14:01             ` Wu Fengguang
2009-08-31 14:07               ` Wu Fengguang
2009-08-31 14:33                 ` Fernando Silveira
2009-09-01  8:14                   ` Wu Fengguang
     [not found]                     ` <6afc6d4a0909010710l2cf77fbbmb1ab192ed12a7efc@mail.gmail.com>
2009-09-02  3:05                       ` Wu Fengguang
     [not found]                         ` <6afc6d4a0909020429l2bfecee9xd00527fcaa323751@mail.gmail.com>
     [not found]                           ` <20090902125057.GA7982@localhost>
     [not found]                             ` <6afc6d4a0909031346qda0b17coe4c60250fcac827f@mail.gmail.com>
2009-09-04  2:21                               ` Wu Fengguang
2009-09-04  2:34                                 ` Wu Fengguang
     [not found] <cWOyL-3Ys-15@gated-at.bofh.it>
2009-08-31 21:57 ` Daniel J Blueman
2009-09-01 14:33   ` Fernando Silveira
