From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ian Jackson
Subject: [PATCH v3 00/18] libxl: domain save/restore: run in a separate process
Date: Fri, 8 Jun 2012 18:34:11 +0100
Message-ID: <1339176870-32652-1-git-send-email-ian.jackson@eu.citrix.com>
To: xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

This is v3 of my series to asyncify save/restore, rebased to the
current tip, retested, and with all comments addressed.

In the list below, "A" indicates a patch which has been acked
sufficiently to go in (assuming its dependencies were to go in too).
"*" indicates a new patch in v3.

Preparatory work:
   01/19 libxc: xc_domain_restore, make toolstack_restore const-correct
   02/19 libxl: domain save: rename variables etc.
   03/19 libxl: domain restore: reshuffle, preparing for ao
   04/19 libxl: domain save: API changes for asynchrony

The meat:
   05/19 libxl: domain save/restore: run in a separate process

Some fixups:
A  06/19 libxl: rename libxl_dom:save_helper to physmap_path
   07/19 libxl: provide libxl__xs_*_checked and libxl__xs_transaction_*
   08/19 libxl: wait for qemu to acknowledge logdirty command

Asyncify writing of the qemu save file, too:
   09/19 libxl: datacopier: provide "prefix data" facility
   10/19 libxl: prepare for asynchronous writing of qemu save file
   11/19 libxl: Make libxl__domain_save_device_model asynchronous

Fix gc_opt handling:
*  12/19 libxl: Add a gc to libxl_get_cpu_topology
*  13/19 libxl: Do not pass NULL as gc_opt; introduce NOGC
*  14/19 libxl: Get compiler to warn about gc_opt==NULL

Work on essentially-unrelated bugs:
A  15/19 xl: Handle return value from libxl_domain_suspend correctly
A  16/19 libxl: do not leak dms->saved_state
   17/19 libxl: do not leak spawned middle children
A  18/19 libxl: do not leak an event struct on ignored ao progress
*  19/19 libxl: DO NOT APPLY enforce prohibition on internal

Thanks,
Ian.