From: Ryan Harper <ryanh@us.ibm.com>
To: Laurent Vivier <laurent@lvivier.info>
Cc: Chris Wright <chrisw@redhat.com>,
	Mark McLoughlin <markmc@redhat.com>,
	Laurent Vivier <Laurent.Vivier@bull.net>,
	qemu-devel@nongnu.org, Ryan Harper <ryanh@us.ibm.com>
Subject: Re: [Qemu-devel] Re: [RFC] Disk integrity in QEMU
Date: Mon, 13 Oct 2008 16:05:09 -0500	[thread overview]
Message-ID: <20081013210509.GL21410@us.ibm.com> (raw)
In-Reply-To: <148FE536-F397-4F51-AE3F-C94E4F1F5D4E@lvivier.info>

* Laurent Vivier <laurent@lvivier.info> [2008-10-13 15:39]:
> >>
> >>as "cache=on" implies a resource (memory) shared by the whole system,
> >>you must take into account the size of the host memory and run some
> >>applications (several guests?) to pollute the host cache. For
> >>instance, you can run 4 guests and run the benchmark in each of them
> >>concurrently, and you could reasonably limit the size of the host
> >>memory to 5x the size of the guest memory
> >>(for instance, 4 guests with 128 MB on a host with 768 MB).
> >
> >I'm not following you here; the only assumption I see is that we
> >have 1 GB of host memory free for caching the write.
> 
> Is this a realistic use case?

Optimistic, maybe, but I don't think it is unrealistic.  It is hard to
know what hardware and use cases any end user may have at their
disposal.

> >>
> >>as O_DSYNC implies a journal commit, you should run a benchmark on
> >>the ext3 host file system concurrently with the benchmark in a guest
> >>to see the impact of the commits on each.
> >
> >I understand the goal here, but what sort of host ext3 journaling load
> >is appropriate?  Additionally, when we're exporting block devices, I
> >don't believe the ext3 journal is an issue.
> 
> Yes, it's a comment for the last test case.
> I think you can run the same benchmark as you do in the guest.

I'm not sure where to go with this.  If it turns out that scaling out on
top of ext3 stinks, then the deployment needs to change to deal with
that limitation in ext3: use a proper block device, something like LVM.

> >>Given the semantics, I don't understand how O_DSYNC can be
> >>better than cache=off in this case...
> >
> >I don't have a good answer either, but O_DIRECT and O_DSYNC are
> >different paths through the kernel.  This deserves a better reply, but
> >I don't have one off the top of my head.
> 
> The O_DIRECT kernel path should be more "direct" than the O_DSYNC one.
> Perhaps an oprofile run could help us understand?
> What is also strange is the CPU usage with cache=off: it should be
> lower than the others.  Perhaps an alignment issue, due to the LVM?

All possible; I don't have an oprofile of it.
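
For reference, here is a minimal sketch (not QEMU code; the image name
and sizes are placeholders) of the two open(2) modes being compared.
cache=off maps to O_DIRECT, which bypasses the host page cache and
requires the buffer, file offset, and length to be block-aligned, so an
unaligned request has to be copied into an aligned bounce buffer first;
that is one plausible source of the extra CPU time.  The patch maps to
O_DSYNC, which goes through the page cache but does not return from
write() until the data has reached stable storage.

/* Hypothetical standalone example, not taken from QEMU. */
#define _GNU_SOURCE           /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* O_DSYNC: writes go through the page cache but do not return
     * until the data (and the metadata needed to retrieve it) is on
     * stable storage.  No special buffer alignment is required. */
    int fd_dsync = open("disk.img", O_WRONLY | O_DSYNC);

    /* O_DIRECT: bypasses the page cache entirely.  Buffer, file
     * offset, and length must be aligned (typically to the logical
     * block size), so an unaligned guest request forces a copy into
     * an aligned bounce buffer first. */
    int fd_direct = open("disk.img", O_WRONLY | O_DIRECT);

    void *buf = NULL;
    if (posix_memalign(&buf, 512, 4096) != 0)
        return 1;
    memset(buf, 0, 4096);

    if (fd_dsync >= 0)
        write(fd_dsync, buf, 4096);   /* synchronous, cached write */
    if (fd_direct >= 0)
        write(fd_direct, buf, 4096);  /* uncached, must stay aligned */

    free(buf);
    if (fd_dsync >= 0)
        close(fd_dsync);
    if (fd_direct >= 0)
        close(fd_direct);
    return 0;
}

Whether a bounce-buffer copy (or something else in the LVM stack)
actually explains the numbers is exactly what an oprofile run should
show.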

> >>
> >>OK, but in this case the size of the cache for "cache=off" is the
> >>size of the guest cache, whereas in the other cases it is the size
> >>of the guest cache plus the size of the host cache; this is not
> >>fair...
> >
> >It isn't supposed to be fair.  cache=off is O_DIRECT, so we're reading
> >from the device; we *want* to be able to lean on the host cache to
> >read the data: pay once and benefit in other guests if possible.
> 
> OK, but if you want to go down this path, I think you must run several
> guests concurrently to see how the host cache helps each of them.
> If you want, I can try this tomorrow.  Is the O_DSYNC patch the one
> posted to the mailing list?

The patch used is the same as what is on the list; feel free to try it.

> 
> Moreover, you should run an endurance test to see how the cache
> evolves.

I'm not sure how interesting this is.  Either the data was in the cache
or it wasn't; depending on the workload you can devolve to a case where
nothing is in the cache or one where everything is.  The point is that
by using the cache where we can, we get the benefit.  If you use
cache=off, you'll never get that boost when it would otherwise have
been available.
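
As a rough illustration of that "pay once" effect (a standalone sketch,
not from the thread; the file name and size are placeholders), reading
the same region twice shows the cold-versus-warm host page cache
difference that cache=off gives up:

/* Hypothetical example: time a cold read and then a warm read. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static double timed_read(const char *path, char *buf, size_t len)
{
    struct timespec t0, t1;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1.0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t done = 0;
    ssize_t n;
    while (done < len && (n = read(fd, buf + done, len - done)) > 0)
        done += (size_t)n;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    size_t len = 64 * 1024 * 1024;   /* read the first 64 MB */
    char *buf = malloc(len);
    if (!buf)
        return 1;

    /* First pass hits the disk; the second should be served from the
     * host page cache, unless something evicted it in between. */
    printf("cold read: %.3f s\n", timed_read("disk.img", buf, len));
    printf("warm read: %.3f s\n", timed_read("disk.img", buf, len));

    free(buf);
    return 0;
}

With cache=off (O_DIRECT), both passes pay the full device cost; that
is the boost being given up.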


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@us.ibm.com
