From: "Randy.Dunlap" <rddunlap@osdlab.org>
To: Martin Wilck <Martin.Wilck@fujitsu-siemens.com>
Cc: Linux Kernel mailing list <linux-kernel@vger.kernel.org>
Subject: Re: Problem: Large file I/O waits (almost) forever
Date: Mon, 23 Jul 2001 09:00:10 -0700
Message-ID: <3B5C4A0A.BC9E32FA@osdlab.org>
In-Reply-To: <Pine.LNX.4.30.0107231043520.24403-100000@biker.pdb.fsc.net>
Hi-
This sounds like something that was noticed at Intel a couple
of months ago. I believe that Linus made a patch for it
in 2.4.7. I'll be trying that out today or tomorrow with
some large files.
The problem observed at Intel was in linux/fs/buffer.c.
The dirty-buffer list is rescanned (under a lock) for each
dirty buffer written back, so the total work grows roughly as
Buffers^2 (or something like that; I'm not sure of the
details) -- the larger the file and the more dirty buffers,
the longer each write took.
~Randy
Martin Wilck wrote:
>
> Hi,
>
> I just came across the following phenomenon and would like to inquire
> whether it's a feature or a bug, and what to do about it:
>
> I have run our "copy-compare" test during the weekend to test I/O
> stability on an IA64 server running 2.4.5. The test works by generating
> a collection of binary files with specified lengths, copying them between
> different directories, and checking the result a) by checking the
> predefined binary patterns and b) by comparing source and destination with cmp.
>
> In order to avoid testing only the buffer cache (system has 8GB memory), I
> used exponentially growing file sizes from 1 kB up to 2 GB. The
> copy/test/compare cycle for each file size was run 1024 times, tests
> for different file sizes were run simultaneously.
>
> As expected, tests for smaller files run faster than tests for larger
> ones. However, I observed that the very big files (1 GB and 2GB) hardly
> proceeded at all while the others were running. When I came back
> after the weekend, all tests except these two had completed (1024 times
> each), the 1GB test had completed ~500 times, and the 2GB test had not
> even completed once. After I killed the 1GB job, 2GB started to run again
> (i.e. it wasn't dead, just sleeping deeply). Apparently the block requests
> scheduled for I/O by the processes operating on the large file had to
> "yield" to other processes over and over again before being submitted.
>
> I think this is a dangerous behaviour, because the test situation is
> similar to a real-world scenario where applications (e.g. a
> database) are doing a lot of small-file I/O while a background
> process is trying to do a large backup. If the system behaved
> similarly there, the backup would take (almost) forever unless
> database activity ceased.
>
> Any comments/suggestions?
>
> Regards,
> Martin
>
> PS: I am not subscribed to LK, but I'm reading the archives regularly.
Thread overview: 4+ messages
2001-07-23 9:05 Problem: Large file I/O waits (almost) forever Martin Wilck
2001-07-23 9:13 ` Andreas Jaeger
2001-07-23 9:34 ` Martin Wilck
2001-07-23 16:00 ` Randy.Dunlap [this message]