From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [PATCH v1 12/12] libxl: add device backend listener in order to launch backends
Date: Wed, 6 Nov 2013 10:41:55 +0100
Message-ID: <527A0EE3.1040603@citrix.com>
In-Reply-To: <21111.55145.501074.162529@mariner.uk.xensource.com>
On 04/11/13 18:20, Ian Jackson wrote:
> Roger Pau Monné writes ("Re: [PATCH v1 12/12] libxl: add device backend listener in order to launch backends"):
>> So if I got it right, this new libxl__nested_ao_create will return a new
>> ao (with a new gc), that I could use in conjunction with the
>> long-running ao that I use in the main xs_watch loop, right?
>
> Yes. It would give you a new pseudo-ao, which you can use for
> per-event memory allocation. It's a pseudo-ao in the sense that you
> mustn't call libxl__ao_abort or libxl__ao_complete on it, but it would
> have the right type and in particular you could stuff it in
> sub-operations' ao fields, call STATE_AO_GC on it and so on. I could
> make it possible to call libxl__ao_inprogress and have that reflected
> to the underlying real ao.
>
>> That sounds like a good solution to my problem, I wouldn't mind if you
>> write that :)
>
> OK, watch this space.
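For reference, this is roughly how I'd expect to use it from the watch
callback. This is only a sketch: libxl__nested_ao_create doesn't exist
yet, so its name, its parent-ao argument and how the nested gc
eventually gets freed are my assumptions; the libxl__ddomain container
and its ao field are as in my patch (give or take).

static void backend_watch_callback(libxl__egc *egc, libxl__ev_xswatch *watch,
                                   const char *watch_path,
                                   const char *event_path)
{
    libxl__ddomain *ddomain = CONTAINER_OF(watch, *ddomain, watch);
    /* One pseudo-ao per event, so scratch allocations are released when
     * the event has been handled instead of piling up on the
     * long-running ao used by the main xs_watch loop. */
    libxl__ao *nested = libxl__nested_ao_create(ddomain->ao);
    STATE_AO_GC(nested);

    char *path = libxl__strdup(gc, event_path);
    /* ... parse path, find or create the libxl__ddomain_guest, and use
     * "nested" in the sub-operations' ao fields ... */

    /* Per your description: never call libxl__ao_complete() or
     * libxl__ao_abort() on "nested". */
}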
>
>> I'm wondering if there are also other memory problems, even when using
>> this approach, for example I register a xswatch callback, and the
>> callback gets called with a watch_path and an event_path arguments, does
>> the internal libxl event handler machinery reuse those (or allocate and
>> free them after each loop)?
>
> The event machinery gets those from a different gc which is
> per-system-event, so that's not a problem. (Otherwise waiting for a
> particular thing in xenstore would involve memory growing endlessly
> with calls to read from xenstore, etc.)
>
>>>> + case LIBXL__DEVICE_KIND_VBD:
>>>> + case LIBXL__DEVICE_KIND_VIF:
>>>> + if (dev->backend_kind == LIBXL__DEVICE_KIND_VBD) dguest->num_vbds--;
>>>> + if (dev->backend_kind == LIBXL__DEVICE_KIND_VIF) dguest->num_vifs--;
>>>
>>> Is it really safe to decrement these already ? What if something else
>>> comes along in the meantime and makes num_devs 0 (below) and removes
>>> everything while this operation is still running and liable to be
>>> reentered on completion ?
>>
>> That's the point of decrementing it here, so that we get to 0 (if this
>> is the last device), and remove the libxl__ddomain_guest and
>> libxl__ddomain_device. Then, when the remove AO finishes, the AO
>> callback will take care of removing the associated libxl__device.
>>
>> I thought backend_watch_callback could not be called concurrently, but
>> maybe that's not true? (and if that's the case ignore everything above
>> because it's completely wrong)
>
> While you are _actually in this function_, you hold the Big Lock. So
> nothing else can come along find the wrong value of num_*.
>
> But what you actually do is call initiate_device_remove and then
> return - ie, you return to the event loop. That gives up the lock,
> obviously. So while the device removal is proceeding, other events
> can occur.
>
> If backend_watch_callback happens then, I think you may find that it
> sees num_* == 0 and decides to tear down the state for that domain.
The cleanup for the domain has already happened by then: after we
decrement num_* we return to the main backend_watch_callback (all of
this with the Big Lock held), and it proceeds with removing the
libxl__ddomain_guest if sum(num_*) == 0.
What happens in backend_watch_callback (with the Big Lock held) is
basically:
- Decrement num_*
- Check if sum(num_*) == 0 -> if so, clean up all data for the domain
- END
If another device is added to the domain after the domain has been
removed from the list (because sum(num_*) == 0), a new
libxl__ddomain_guest will be created, just like when a device for a new
domain is added.
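Spelled out as code (simplified; ddev->aodev and remove_guest are names
I'm making up here for illustration, the rest matches the patch), the
whole sequence runs without returning to the event loop:

/* Inside backend_watch_callback, Big Lock held throughout. */
if (dev->backend_kind == LIBXL__DEVICE_KIND_VBD) dguest->num_vbds--;
if (dev->backend_kind == LIBXL__DEVICE_KIND_VIF) dguest->num_vifs--;

/* Start the asynchronous removal; it finishes later in device_complete,
 * which doesn't touch dguest or ddev at all. */
libxl__initiate_device_remove(egc, &ddev->aodev);

if (dguest->num_vbds + dguest->num_vifs == 0) {
    /* Last device: drop the per-domain bookkeeping now, still under the
     * lock. If a device shows up for this domain later, a fresh
     * libxl__ddomain_guest is created as for any new domain. */
    remove_guest(ddomain, dguest); /* made-up helper name */
}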
> That would at the very least involve messing about in xenstore with
> the device which is still being removed.
Tearing down the domain just involves removing its associated data
structures; nothing is written to xenstore.
> Then, later, the device removal will complete and device_complete will
> be called.
device_complete doesn't make use of either libxl__ddomain_device or
libxl__ddomain_guest; during normal program flow device_complete is
called with both of those data structures already freed.
> I think you need to do the decrement in device_complete, and that
> means you need a kind of "perhaps tidy up domain" function which you
> can call from both there and backend_watch_callback. And you probably
> need to provide some more useful pointers to device_complete.
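If I understand the suggestion, it would be roughly the following
(hypothetical sketch, every name here is invented and nothing like this
exists in the patch as posted):

/* "Perhaps tidy up domain" helper, callable from both device_complete
 * and backend_watch_callback. */
static void maybe_cleanup_guest(libxl__ddomain *ddomain,
                                libxl__ddomain_guest *dguest)
{
    /* With the decrement moved to device_complete, this only fires once
     * the asynchronous removal has really finished. */
    if (dguest->num_vbds + dguest->num_vifs == 0) {
        /* unlink dguest from ddomain's list and free it */
    }
}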
It's quite possible that I'm completely wrong, but I don't see a race
with the current program flow.