From: Michael Roth <mdroth@linux.vnet.ibm.com>
To: Jes Sorensen <Jes.Sorensen@redhat.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>,
qemu-devel@nongnu.org, agl@us.ibm.com
Subject: Re: [Qemu-devel] [PATCH 1/2] Add virtagent file system freeze/thaw
Date: Fri, 04 Feb 2011 10:51:07 -0600
Message-ID: <4D4C2E7B.6060407@linux.vnet.ibm.com>
In-Reply-To: <4D4BDD12.9060202@redhat.com>
On 02/04/2011 05:03 AM, Jes Sorensen wrote:
> On 02/03/11 18:41, Michael Roth wrote:
>> On 02/02/2011 02:48 AM, Jes Sorensen wrote:
>>> Yes very much so[1] - one reason why it would be nice to have virtagent
>>> use threads to execute the actual commands. We should probably add a
>>> flag to agent commands indicating whether they issue disk I/O or not, so
>>> we can block attempts to execute commands that do so, while the guest is
>>> frozen.
>>
>> **Warning, epic response**
>>
>> For things like logging and i/o on a frozen system...I agree we'd need
>> some flag for these kinds of situations. Maybe a disable_logging()
>> flag....i really don't like this though... I'd imagine even syslogd()
>> could block virtagent in this type of situation, so that would need to
>> be disabled as well.
>
> One way to resolve this would be to have the logging handled in its own
> thread, which uses non-blocking calls to do the actual logging.
> Obviously we'd have to use a non file system based communication method
> between the main thread and the logging thread :)
>
I suspect syslogd already buffers to some extent. But the problem
there, as well as the problem with having our own buffered logging
implementation, is that we can't rely on being able to log an arbitrary
number of messages. At some point that interface would need to either
block or start dropping log messages.

If it blocks, we're deadlocked again. If it drops messages, it's trivial
for someone to flood it with messages after the fsfreeze() to get back
into a state where they can execute RPCs without any accounting.
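To make the trade-off concrete, here is a hedged sketch of what an
in-memory (non-filesystem) logging channel between the RPC threads and a
logging thread might look like. The names and sizes are purely
illustrative, not virtagent code; the point is that a bounded buffer
must either block when full or drop with an explicit indication:

```c
/* Illustrative sketch only: a fixed-size ring buffer that RPC handlers
 * append to without ever blocking, and that a logging thread drains.
 * On overflow we count the drop instead of waiting, so the producer
 * gets an explicit indication that the message was lost. */
#include <pthread.h>
#include <string.h>

#define LOG_RING_SIZE 64
#define LOG_MSG_MAX   128

typedef struct {
    char msgs[LOG_RING_SIZE][LOG_MSG_MAX];
    int head, tail;          /* head: next read, tail: next write */
    int dropped;             /* messages lost because the ring was full */
    pthread_mutex_t lock;
} log_ring;

void log_ring_init(log_ring *r)
{
    memset(r, 0, sizeof(*r));
    pthread_mutex_init(&r->lock, NULL);
}

/* Never blocks: returns 1 on success, 0 if the message was dropped. */
int log_ring_put(log_ring *r, const char *msg)
{
    int ok = 1;
    pthread_mutex_lock(&r->lock);
    int next = (r->tail + 1) % LOG_RING_SIZE;
    if (next == r->head) {
        r->dropped++;        /* full: caller sees the failure */
        ok = 0;
    } else {
        strncpy(r->msgs[r->tail], msg, LOG_MSG_MAX - 1);
        r->msgs[r->tail][LOG_MSG_MAX - 1] = '\0';
        r->tail = next;
    }
    pthread_mutex_unlock(&r->lock);
    return ok;
}

/* Drain one message into out; returns 0 when the ring is empty. */
int log_ring_get(log_ring *r, char *out)
{
    int ok = 0;
    pthread_mutex_lock(&r->lock);
    if (r->head != r->tail) {
        strcpy(out, r->msgs[r->head]);
        r->head = (r->head + 1) % LOG_RING_SIZE;
        ok = 1;
    }
    pthread_mutex_unlock(&r->lock);
    return ok;
}
```

The drop counter is the crux: once the ring fills, the choice between
blocking and dropping is exactly the dilemma described above.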
So a separate logging thread doesn't buy us much, and whatever we come
up with really comes down to syslogd's behavior:

1) If syslogd blocks, we must disable logging after fsfreeze().
Buffering, on syslogd's side or our own, only buys us time. If we
disable logging, we must only allow absolutely required/benign RPCs.
fsstatus/fsthaw are required; things like va.ping are benign, and
potentially useful, in this particular situation.
copyfile/shutdown/exec/etc. should be considered "high-risk"...we want
to make sure these are logged.
2) If syslogd doesn't block, it either drops messages at some point
with no indication to us, or it drops them and provides some
indication. If there's no indication, we must treat this the same way
we treat 1), since we must assume that logging is effectively disabled.
So only required or benign RPCs.

If we get some indication when a call to syslog() fails to log/buffer,
we can allow all RPCs, but treat a failure to log as a cue to
immediately clean up and exit the RPC. fsfreeze() under this
circumstance will need to make sure it only logs after it does the
unfreeze, or else we may never be able to unfreeze from that point
forward.
So the solution depends on syslogd's behavior. I'll have to do some
testing to confirm, but I think the above covers the possibilities.
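The "required or benign RPCs only" policy could be sketched roughly as
follows. The command table, flag names, and freeze-state variable are
hypothetical stand-ins, not virtagent's actual dispatch code; the
command names just mirror the ones discussed in this thread:

```c
/* Illustrative sketch: while the filesystem is frozen, only commands
 * flagged as benign (no disk I/O, safe to run unlogged) may execute.
 * High-risk commands are rejected until fsthaw, since we cannot log
 * them reliably in the frozen state. */
#include <string.h>

enum { RPC_BENIGN = 1, RPC_HIGH_RISK = 2 };

typedef struct {
    const char *name;
    int flags;
} rpc_cmd;

static const rpc_cmd rpc_table[] = {
    { "va.fsstatus", RPC_BENIGN },
    { "va.fsthaw",   RPC_BENIGN },
    { "va.ping",     RPC_BENIGN },
    { "va.copyfile", RPC_HIGH_RISK },
    { "va.shutdown", RPC_HIGH_RISK },
    { "va.exec",     RPC_HIGH_RISK },
};

static int fs_frozen;  /* set by fsfreeze, cleared by fsthaw */

/* Returns 1 if the named command may run in the current freeze state. */
int rpc_allowed(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof(rpc_table) / sizeof(rpc_table[0]); i++) {
        if (strcmp(rpc_table[i].name, name) == 0) {
            return !fs_frozen || (rpc_table[i].flags & RPC_BENIGN);
        }
    }
    return 0;  /* unknown commands are rejected outright */
}
```

A per-command flag like this is also what the earlier suggestion of
marking commands that issue disk I/O would amount to.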
>> But doing so completely subverts our attempts at providing proper
>> accounting of what the agent is doing to the user. A user can freeze the
>> filesystem, knowing that logging would be disabled, then prod at
>> whatever he wants. So the handling should be something specific to
>> fsfreeze, with stricter requirements:
>>
>> If a user calls fsfreeze(), we disable logging, but also disable the
>> ability to do anything other than fsthaw() or fsstatus(). This actually
>> solves the potential deadlocking problem for other RPCs as well...since
>> they can't be executed in the first place.
>
> I disagree that we should block all calls, except for fsfreeze/fsstatus/
> fsthaw in this case. There are other calls that could be valid in these
> situations, so I think it needs to be evaluated on a case-by-case basis.
>
>> So a solution for these situations is still needed, and I'm starting to
>> agree that threads are needed, but I don't think we should do RPCs
>> concurrently (not sure if that's what is being suggested or not). At
>> least, there's no pressing reason for it as things currently stand
>> (there aren't currently any RPCs where fast response times are all that
>> important, so it's okay to serialize them behind previous RPCs, and
>> HMP/QMP are command-at-a-time), and it's something that I'm fairly
>> confident can be added if the need arises in the future.
>
> Eventually I think we will need to be able to support concurrent RPC
> calls. There can be situations where an operation takes a long time
> while it is valid to be able to ping the guest agent to verify that it
> is still alive etc.
Currently we're limited to one RPC at a time by the monitor/HMP/QMP
(except for some niche cases where virtagent has multiple requests in
flight). Ideally this will remain the interface we expose to users, but
there could be situations where this isn't the case...for instance,
other bits of qemu making direct calls into virtagent. I think we can
add support for concurrent RPCs incrementally though...I've thought over
the implementation details a bit and it seems fairly straightforward. I
think we need to get the basic use cases down first though.
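For what it's worth, the current serialize-behind-previous-RPCs model
boils down to something like the following. This is purely an
illustration of the serialization point, not the actual monitor or
virtagent dispatch code:

```c
/* Illustrative sketch: RPCs execute one at a time under a single lock,
 * so a second request simply waits behind the first rather than
 * running concurrently. */
#include <pthread.h>

static pthread_mutex_t rpc_lock = PTHREAD_MUTEX_INITIALIZER;
static int rpcs_completed;

/* Execute one RPC handler; concurrent callers serialize on rpc_lock. */
void rpc_execute(void (*handler)(void *), void *arg)
{
    pthread_mutex_lock(&rpc_lock);
    handler(arg);
    rpcs_completed++;
    pthread_mutex_unlock(&rpc_lock);
}

/* Trivial stand-in for a real RPC handler. */
static void sample_handler(void *arg)
{
    (void)arg;
}
```

Moving to concurrent RPCs would mean replacing the single lock with
per-command dispatch, which is why it can be added incrementally later.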
>
> Cheers,
> Jes
>