From: Kevin Wolf <kwolf@redhat.com>
To: Anthony Liguori <anthony@codemonkey.ws>
Cc: uril@redhat.com, qemu-devel@nongnu.org,
Luiz Capitulino <lcapitulino@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 0/3]: BLOCK_WATERMARK QMP event
Date: Thu, 11 Mar 2010 09:34:31 +0100 [thread overview]
Message-ID: <4B98AB17.8080506@redhat.com> (raw)
In-Reply-To: <4B980D26.8070000@codemonkey.ws>
On 10.03.2010 22:20, Anthony Liguori wrote:
> On 03/10/2010 02:40 AM, Kevin Wolf wrote:
>>> On 10.03.2010 00:08, Anthony Liguori wrote:
>>
>>> On 03/09/2010 04:53 PM, Luiz Capitulino wrote:
>>>
>>>> Hi,
>>>>
>>>> This series is based on a previous series submitted by Uri Lublin:
>>>>
>>>> http://lists.gnu.org/archive/html/qemu-devel/2009-03/msg00864.html
>>>>
>>>> Details are in the patches, apart from this question: does it make sense
>>>> to have a 'low' watermark for block devices?
>>>>
>>>> I think it doesn't; in that case the event (and the accompanying monitor
>>>> command) should be called BLOCK_HIGH_WATERMARK. But that makes the event
>>>> very inflexible, so I have called it BLOCK_WATERMARK and added a parameter
>>>> for the high/low watermark type.
>>>>
>>>>
>>> The alternative way to implement this is for a management tool to just
>>> poll the allocated disk size periodically.
>>>
>> Then we need to provide that information using the monitor. As far as I
>> know, we don't do that yet.
>
> Okay, but that's certainly a reasonable thing to add though, no?
Well, if you're aware of the semantics of this value, it might be. They
are not exactly intuitive, but currently that is hidden inside qemu.
What the high watermark says (in this implementation) is the highest
offset into the image file of a cluster that was allocated during this
qemu run. If you restart qemu, it starts at 0 again.
I think there once was a version that tried to calculate the absolute
highest value when the image was opened, but it was reverted because it
just took too long. For the same reason I think a low watermark is
unrealistic, even if we get shrinking images at some point. It's just
not doable efficiently, at least not in an easy way.
I'm not sure whether these semantics make it a good public interface.
Other than that, I'm not overly concerned about doing it the way you
suggest.
>>> It's no more/less safe than generating an event on a "watermark" because
>>> the event is still racy with respect to a guest that's writing very
>>> quickly to the disk.
>>>
>> Being racy isn't a problem, a management tool doing this kind of things
>> needs to use werror=ENOSPC (at least) anyway. The watermark thing, as I
>> understand it, is only a mechanism to make it less likely that the VM
>> has to be stopped.
>>
>
> Correct. A management tool could poll every 5 minutes to make the same
> determination.
Yes. At the cost of polling too rarely during installation and too
often during regular operation. Of course, you could add heuristics to
make the interval dynamic...
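Such a heuristic might look like the following hypothetical Python sketch (the function, bounds, and back-off factors are invented for illustration, not part of any existing management tool):

```python
def next_poll_interval(prev_alloc, cur_alloc, interval,
                       min_s=5.0, max_s=300.0):
    """Shrink the polling interval while allocation is growing
    (e.g. during a guest installation) and back off exponentially
    while the disk is idle, clamped to [min_s, max_s] seconds."""
    if cur_alloc > prev_alloc:
        return max(min_s, interval / 2)  # activity: poll more often
    return min(max_s, interval * 2)      # idle: poll less often

interval = 60.0
interval = next_poll_interval(0, 1 << 20, interval)        # growth: halve
assert interval == 30.0
interval = next_poll_interval(1 << 20, 1 << 20, interval)  # idle: double
assert interval == 60.0
```

Even a heuristic like this only reduces the window; it cannot eliminate the race that werror=ENOSPC handling covers.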
But honestly, while I do understand your point, this feels like a hack
to work around shortcomings of an interface. So what we need to decide
is which criterion outweighs the other in practice.
> This approach seems superior to me because it's considerably more
> flexible. In this particular model, you have one disk on an LVM volume
> and you need to grow that single disk image when you hit a high water mark.
>
> However, an alternative and equally valid model would be an LVM volume
> containing a file system with multiple disk images for a single guest.
> In this case, there is no high water mark for an individual disk, but
> rather, there's a high water mark for the combination of all the disks.
If your file system supports sparse files, it's the wrong number. What
you need then is the allocated space, not the highest offset. I really
can't see much use for this watermark outside the original model.
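The gap between the two numbers is easy to demonstrate with a short Python sketch (this assumes a filesystem with sparse-file support, such as ext4 or tmpfs):

```python
import os
import tempfile

# Create a sparse file: seek 1 GiB in and write a single byte.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 1 << 30, os.SEEK_SET)
    os.write(fd, b"x")
    st = os.stat(path)
    apparent = st.st_size            # highest offset: ~1 GiB
    allocated = st.st_blocks * 512   # space actually allocated on disk
    print(apparent, allocated)       # allocated is far smaller on a
                                     # sparse-capable filesystem
finally:
    os.close(fd)
    os.unlink(path)
```

A highest-offset watermark would report roughly 1 GiB here, while the space the host actually needs is a few KiB, which is why only the allocated-space number is useful in the multi-image-on-a-filesystem model.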
Kevin