* Possible to use multiple disks to bypass I/O wait?
From: Emmanuel Noobadmin @ 2011-06-09 9:24 UTC
To: CentOS mailing list, linux-raid
I'm trying to resolve an I/O problem on a CentOS 5.6 server. The
process basically scans through Maildirs, checking for space usage and
quota. Because there are a hundred-odd user folders and several tens of
thousands of small files, this sends the I/O wait percentage way up. The
server hits a very high load and stops responding to other requests
until the crawl is done.
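For reference, the wait can be confirmed per device with iostat from the
sysstat package, or with vmstat; the 5-second interval below is only an
example:

# iostat -x 5
# vmstat 5

iostat's await and %util columns, and vmstat's 'wa' column, show where the
time is going.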
I am wondering: if I add another disk and symlink the sub-directories
to it, would that free up the server to respond to other requests
despite the wait on that disk?

Alternatively, if I mirror the existing disk with mdraid, would md be
smart enough to read from the other disk while the first is tied up
with the scanning process?
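To make the second option concrete, a minimal sketch of building the mirror
with mdadm, starting degraded on the new disk (device names are examples
only, and this assumes the existing data is backed up first):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# mkfs.ext3 /dev/md0

then copy the Maildirs onto the new array and add the original disk's
partition so it resyncs into a full two-way mirror:

# mdadm --add /dev/md0 /dev/sda1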
* Re: Possible to use multiple disks to bypass I/O wait?
From: Mathias Burén @ 2011-06-09 10:19 UTC
To: Emmanuel Noobadmin; +Cc: CentOS mailing list, linux-raid
On 9 June 2011 10:24, Emmanuel Noobadmin <centos.admin@gmail.com> wrote:
> I'm trying to resolve an I/O problem on a CentOS 5.6 server. The
> process basically scans through Maildirs, checking for space usage and
> quota. Because there are a hundred-odd user folders and several tens of
> thousands of small files, this sends the I/O wait percentage way up. The
> server hits a very high load and stops responding to other requests
> until the crawl is done.
>
> I am wondering: if I add another disk and symlink the sub-directories
> to it, would that free up the server to respond to other requests
> despite the wait on that disk?
>
> Alternatively, if I mirror the existing disk with mdraid, would md be
> smart enough to read from the other disk while the first is tied up
> with the scanning process?
The first thing that comes to my mind: Have you tried another IO scheduler?
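For reference, the scheduler can be checked and switched per block device at
runtime through sysfs; sda here is only an example, and the first command
typically prints something like "noop anticipatory deadline [cfq]" with the
active scheduler in brackets:

# cat /sys/block/sda/queue/scheduler
# echo deadline > /sys/block/sda/queue/scheduler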
/M
* Re: Possible to use multiple disks to bypass I/O wait?
From: Nagilum @ 2011-06-09 12:06 UTC
To: Emmanuel Noobadmin; +Cc: CentOS mailing list, linux-raid
----- Message from centos.admin@gmail.com ---------
Date: Thu, 9 Jun 2011 17:24:23 +0800
From: Emmanuel Noobadmin <centos.admin@gmail.com>
Subject: Possible to use multiple disks to bypass I/O wait?
To: CentOS mailing list <centos@centos.org>, linux-raid
<linux-raid@vger.kernel.org>
> I'm trying to resolve an I/O problem on a CentOS 5.6 server. The
> process basically scans through Maildirs, checking for space usage and
> quota. Because there are a hundred-odd user folders and several tens of
> thousands of small files, this sends the I/O wait percentage way up. The
> server hits a very high load and stops responding to other requests
> until the crawl is done.
>
> I am wondering: if I add another disk and symlink the sub-directories
> to it, would that free up the server to respond to other requests
> despite the wait on that disk?
----- End message from centos.admin@gmail.com -----

Have you tried using ionice -c 3 on the process?
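For reference, ionice can set the scheduling class either for an existing
process by PID or for a command it launches; class 3 is "idle", which only
gets disk time when nothing else is asking for it, and is generally only
honoured by the CFQ scheduler. The PID and script path below are made up:

# ionice -c3 -p 1234
# ionice -c3 /usr/local/bin/maildir-quota-scan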
* Re: Possible to use multiple disks to bypass I/O wait?
From: Emmanuel Noobadmin @ 2011-06-09 16:15 UTC
To: Mathias Burén; +Cc: CentOS mailing list, linux-raid
On 6/9/11, Mathias Burén <mathias.buren@gmail.com> wrote:
> The first thing that comes to my mind: Have you tried another IO scheduler?
and the first thing that came to this noob's mind was: Wait, you mean
there's actually more than one? AND I get to choose?
I'll probably experiment with deadline and anticipatory, since the I/O
wait seems to be due to the disk seeking back and forth trying to serve
the file scan as well as legitimate read requests, so having that small
wait to batch reads in the same area sounds like it would help.
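For reference, besides the per-device sysfs switch, the default scheduler
for all devices can be set at boot with the elevator= kernel parameter,
e.g. appended to the kernel line in /boot/grub/grub.conf on CentOS 5 (the
kernel line below is only illustrative):

kernel /vmlinuz-2.6.18-238.el5 ro root=LABEL=/ elevator=deadline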
* Re: Possible to use multiple disks to bypass I/O wait?
From: Steve Thompson @ 2011-06-09 19:06 UTC
To: CentOS mailing list; +Cc: linux-raid
On Thu, 9 Jun 2011, Emmanuel Noobadmin wrote:
> I'm trying to resolve an I/O problem on a CentOS 5.6 server. The
> process basically scans through Maildirs, checking for space usage and
> quota. Because there are a hundred-odd user folders and several tens of
> thousands of small files, this sends the I/O wait percentage way up. The
> server hits a very high load and stops responding to other requests
> until the crawl is done.
If the server is reduced to a crawl, it's possible that you are hitting
the dirty_ratio limit due to writes and the server has entered synchronous
I/O mode. As others have mentioned, setting noatime could have a
significant effect, especially if there are many files and the server
doesn't have much memory. You can try increasing dirty_ratio to see if it
has an effect, e.g.:
# sysctl vm.dirty_ratio
# sysctl -w vm.dirty_ratio=50
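To make that persistent across reboots, and to try noatime, something along
these lines (the mount point is only an example; use wherever the Maildirs
live, and add noatime to the matching /etc/fstab entry as well):

# echo 'vm.dirty_ratio = 50' >> /etc/sysctl.conf
# mount -o remount,noatime /home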
Steve