public inbox for kvm@vger.kernel.org
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Javier Guerra <javier@guerrag.com>
Cc: "Alberto Treviño" <alberto@byu.edu>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: Avoiding I/O bottlenecks between VM's
Date: Fri, 19 Sep 2008 16:53:25 -0300	[thread overview]
Message-ID: <20080919195325.GA15908@dmt.cnet> (raw)
In-Reply-To: <90eb1dc70809191214k70f2377cr55bea42be9cbe02e@mail.gmail.com>

On Fri, Sep 19, 2008 at 02:14:32PM -0500, Javier Guerra wrote:
> On Fri, Sep 19, 2008 at 1:53 PM, Alberto Treviño <alberto@byu.edu> wrote:
> > On Friday 19 September 2008 12:41:46 pm you wrote:
> >> Are you using filesystem backed storage for the guest images or direct
> >> block device storage? I assume there's heavy write activity on the
> >> guests when these hangs happen?
> >
> > Yes, they happen when one VM is doing heavy writes.  I'm actually using a
> > whole stack of things:
> >
> > OCFS2 on DRBD (Primary-Primary) on LVM Volume (contiguous) on LUKS-encrypted
> > partition.  Fun debugging that, heh?

Heh. Lots of variables there.

> a not-so-wild guess might be the inter-node locking needed by any
> cluster FS.  you'd do much better using just CLVM or EVMS-Ha
> 
> if it's a single box, it would be interesting to compare with ext3
> 
> > So, any ideas on how to solve the bottleneck?  Isn't the CFQ scheduler
> > supposed to grant every process the same amount of I/O?

Yes, but if the filesystem on top is at fault, the I/O scheduler can't
help (ext3 in ordered data mode is one example: fsync latency there
could reach hundreds of seconds, last time I checked).
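For what it's worth, CFQ fairness can be inspected and tuned from userspace. A sketch, assuming a 2.6-era kernel with sysfs mounted and util-linux's ionice(1) available (the device name and PID below are illustrative, not from this thread):

```shell
# Show which elevator the device backing the images is using;
# the active one is bracketed, e.g. "noop anticipatory deadline [cfq]"
cat /sys/block/sda/queue/scheduler

# Switch that device to CFQ if it is not already using it
echo cfq > /sys/block/sda/queue/scheduler

# Lower the I/O priority of a heavy-writer guest's qemu process
# (-c2 = best-effort class, -n7 = lowest priority within it; PID is made up)
ionice -c2 -n7 -p 12345
```

But note this only redistributes I/O at the block layer; it cannot undo serialization imposed above it by the filesystem, which is the point here.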

> > Is there a way to
> > change something in proc to avoid this situation?
> 
> i don't think CFQ can do much to alleviate the heavy lock-dependency
> of a cluster FS

Perhaps isolate the problem by putting the guest images directly on
partitions first (or on ext3 in writeback data mode).
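As a concrete sketch of that isolation test (the device paths, mount point, and qemu-kvm invocation below are illustrative assumptions, not taken from the thread):

```shell
# Variant 1: keep file-backed images, but take ordered-mode journalling
# out of the picture by remounting the host ext3 volume with writeback data
mount -o remount,data=writeback /var/lib/vm-images

# Variant 2: bypass the OCFS2/DRBD stack entirely and hand the guest a
# raw LVM logical volume as its disk, with host page cache disabled
qemu-kvm -drive file=/dev/vg0/guest1,if=ide,cache=none -m 512
```

If the stalls disappear in either configuration, the cluster-filesystem stack, rather than the I/O scheduler, is the bottleneck.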

Thread overview: 5+ messages
2008-09-19 17:26 Avoiding I/O bottlenecks between VM's Alberto Treviño
2008-09-19 18:41 ` Marcelo Tosatti
2008-09-19 18:53   ` Alberto Treviño
2008-09-19 19:14     ` Javier Guerra
2008-09-19 19:53       ` Marcelo Tosatti [this message]
