xen-devel.lists.xenproject.org archive mirror
From: Wen Congyang <wency@cn.fujitsu.com>
To: xen devel <xen-devel@lists.xen.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Wen Congyang <wency@cn.fujitsu.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Dong Eddie <eddie.dong@intel.com>,
	Yang Hongyang <yanghy@cn.fujitsu.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>
Subject: [RFC Patch v2 25/45] colo: dynamically allocate aio_requests to avoid -EBUSY errors
Date: Fri, 8 Aug 2014 15:01:24 +0800
Message-ID: <1407481305-19808-26-git-send-email-wency@cn.fujitsu.com>
In-Reply-To: <1407481305-19808-1-git-send-email-wency@cn.fujitsu.com>

From: Lai Jiangshan <laijs@cn.fujitsu.com>

In the normal case there are at most TAPDISK_DATA_REQUESTS requests
in flight at the same time. But in remus mode, write requests are
forwarded from the master side and cached in block-remus. All of the
cached requests are forwarded to the aio driver when the PVM and SVM
are synced, so the number of outstanding requests can exceed
TAPDISK_DATA_REQUESTS. The aio driver cannot handle that many
requests at once, which causes tapdisk2 to exit.

We don't know in advance how many requests will need to be handled,
so allocate aio_requests dynamically to avoid this error (a
standalone sketch of the approach follows the '---' marker below).

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Jiang Yunhong <yunhong.jiang@intel.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
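For review convenience, a minimal standalone sketch of the
grow-on-demand free list, since the idea is easier to see outside the
diff. Everything here is illustrative: MAX_AIO_REQS is pinned to 8
(the real value comes from the blktap2 headers), struct aio_request is
reduced to a stub, and file-scope globals stand in for struct
tdaio_state; only refill() mirrors the tdaio_refill() added below.

/* Standalone model of the grow-on-demand free list (illustrative). */
#include <stdio.h>
#include <stdlib.h>

#define MAX_AIO_REQS 8

struct aio_request { int id; };

static struct aio_request **free_list;
static int free_count, max_count;

/* Grow the pointer array by one block's worth of slots, then carve a
 * freshly allocated block of requests into the new slots.  The old
 * blocks must stay live: pointers handed out earlier may still be in
 * flight.  If realloc() fails the original list is untouched. */
static int refill(void)
{
	struct aio_request **resized, *block;
	int i, max = max_count + MAX_AIO_REQS;

	resized = realloc(free_list, max * sizeof(*free_list));
	if (!resized)
		return -1;
	free_list = resized;

	block = calloc(MAX_AIO_REQS, sizeof(*block));
	if (!block)
		return -1;

	max_count = max;
	free_count = MAX_AIO_REQS;
	for (i = 0; i < MAX_AIO_REQS; i++)
		free_list[i] = &block[i];

	return 0;
}

int main(void)
{
	int i;

	if (refill() < 0)	/* initial pool, as tdaio_open() does */
		return 1;

	/* Take one more request than the pool holds: the extra
	 * allocation triggers a refill instead of failing (-EBUSY). */
	for (i = 0; i <= MAX_AIO_REQS; i++) {
		if (free_count == 0 && refill() < 0)
			return 1;
		free_list[--free_count]->id = i;
	}

	printf("served %d requests, pool grew to %d slots\n", i, max_count);
	return 0;
}

Draining the pool past its original size grows it instead of failing,
which is exactly the -EBUSY case this patch removes.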
 tools/blktap2/drivers/block-aio.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/block-aio.c b/tools/blktap2/drivers/block-aio.c
index f398da2..10ab20b 100644
--- a/tools/blktap2/drivers/block-aio.c
+++ b/tools/blktap2/drivers/block-aio.c
@@ -55,9 +55,10 @@ struct tdaio_state {
 	int                  fd;
 	td_driver_t         *driver;
 
+	int                  aio_max_count;
 	int                  aio_free_count;	
 	struct aio_request   aio_requests[MAX_AIO_REQS];
-	struct aio_request  *aio_free_list[MAX_AIO_REQS];
+	struct aio_request   **aio_free_list;
 };
 
 /*Get Image size, secsize*/
@@ -122,6 +123,11 @@ int tdaio_open(td_driver_t *driver, const char *name, td_flag_t flags)
 
 	memset(prv, 0, sizeof(struct tdaio_state));
 
+	prv->aio_free_list = malloc(MAX_AIO_REQS * sizeof(*prv->aio_free_list));
+	if (!prv->aio_free_list)
+		return -ENOMEM;
+
+	prv->aio_max_count = MAX_AIO_REQS;
 	prv->aio_free_count = MAX_AIO_REQS;
 	for (i = 0; i < MAX_AIO_REQS; i++)
 		prv->aio_free_list[i] = &prv->aio_requests[i];
@@ -159,6 +165,28 @@ done:
 	return ret;	
 }
 
+static int tdaio_refill(struct tdaio_state *prv)
+{
+	struct aio_request **new, *new_req;
+	int i, max = prv->aio_max_count + MAX_AIO_REQS;
+
+	new = realloc(prv->aio_free_list, max * sizeof(*prv->aio_free_list));
+	if (!new)
+		return -1;
+	prv->aio_free_list = new;
+
+	new_req = calloc(MAX_AIO_REQS, sizeof(*new_req));
+	if (!new_req)
+		return -1;
+
+	prv->aio_max_count = max;
+	prv->aio_free_count = MAX_AIO_REQS;
+	for (i = 0; i < MAX_AIO_REQS; i++)
+		prv->aio_free_list[i] = &new_req[i];
+
+	return 0;
+}
+
 void tdaio_complete(void *arg, struct tiocb *tiocb, int err)
 {
 	struct aio_request *aio = (struct aio_request *)arg;
@@ -207,8 +235,10 @@ void tdaio_queue_write(td_driver_t *driver, td_request_t treq)
 	size    = treq.secs * driver->info.sector_size;
 	offset  = treq.sec  * (uint64_t)driver->info.sector_size;
 
-	if (prv->aio_free_count == 0)
-		goto fail;
+	if (prv->aio_free_count == 0) {
+		if (tdaio_refill(prv) < 0)
+			goto fail;
+	}
 
 	aio        = prv->aio_free_list[--prv->aio_free_count];
 	aio->treq  = treq;
-- 
1.9.3
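One design point worth noting (my reading, not stated in the patch):
the blocks allocated by calloc() in tdaio_refill() are deliberately
left alive, because requests handed out from earlier blocks may still
be in flight when the list grows, and the realloc() failure path is
likewise safe since the original free list is preserved. The
trade-off is that neither the extra blocks nor prv->aio_free_list
appear to be freed on close, so the memory persists for the life of
tapdisk2.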

Thread overview: 64+ messages
2014-08-08  7:00 [RFC Patch v2 00/45] COarse-grain LOck-stepping Virtual Machines for Non-stop Service Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 01/45] copy the correct page to memory Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 02/45] csum the correct page Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 03/45] don't zero out ioreq page Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 04/45] Refactor domain_suspend_callback_common() Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 05/45] Update libxl__domain_resume() for colo Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 06/45] Update libxl__domain_suspend_common_switch_qemu_logdirty() " Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 07/45] Introduce a new internal API libxl__domain_unpause() Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 08/45] Update libxl__domain_unpause() to support qemu-xen Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 09/45] support to resume uncooperative HVM guests Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 10/45] update datacopier to support sending data only Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 11/45] introduce a new API to async read data from fd Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 12/45] move remus related codes to libxl_remus.c Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 13/45] rename remus device to checkpoint device Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 14/45] adjust the indentation Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 15/45] don't touch remus in checkpoint_device Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 16/45] Update libxl_save_msgs_gen.pl to support return data from xl to xc Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 17/45] Allow slave sends data to master Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 18/45] secondary vm suspend/resume/checkpoint code Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 19/45] primary vm suspend/get_dirty_pfn/resume/checkpoint code Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 20/45] xc_domain_save: flush cache before calling callbacks->postcopy() in colo mode Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 21/45] COLO: xc related codes Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 22/45] send store mfn and console mfn to xl before resuming secondary vm Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 23/45] implement the cmdline for COLO Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 24/45] HACK: do checkpoint per 20ms Wen Congyang
2014-08-08  7:01 ` Wen Congyang [this message]
2014-08-08  7:01 ` [RFC Patch v2 26/45] fix memory leak in block-remus Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 27/45] pass uuid to the callback td_open Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 28/45] return the correct dev path Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 29/45] blktap2: use correct way to get remus_image Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 30/45] don't call client_flush() when switching to unprotected mode Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 31/45] remus: fix bug in tdremus_close() Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 32/45] blktap2: use correct way to get free event id Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 33/45] blktap2: don't return negative " Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 34/45] blktap2: use correct way to define array Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 35/45] blktap2: connect to backup asynchronously Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 36/45] switch to unprotected mode before closing Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 37/45] blktap2: move async connect related codes to block-replication.c Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 38/45] blktap2: move ramdisk " Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 39/45] block-colo: implement colo disk replication Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 40/45] pass correct file to qemu if we use blktap2 Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 41/45] support blktap remus in xl Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 42/45] support blktap colo in xl: Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 43/45] update libxl__device_disk_from_xs_be() to support blktap device Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 44/45] libxl/colo: setup and control disk replication for blktap2 backends Wen Congyang
2014-08-08  7:01 ` [RFC Patch v2 45/45] x86/hvm: Always set pending event injection when loading VMC[BS] state Wen Congyang
2014-08-08  7:24   ` Jan Beulich
2014-08-08  7:29     ` Wen Congyang
2014-08-26 16:02   ` Jan Beulich
2014-08-27  0:46     ` Wen Congyang
2014-08-27 14:58       ` Aravind Gopalakrishnan
2014-08-28  1:04         ` Wen Congyang
2014-08-28  8:54           ` Andrew Cooper
2014-08-28 11:17             ` Wen Congyang
2014-08-28 11:31               ` Paul Durrant
2014-08-29  5:59                 ` Wen Congyang
2014-08-28  9:53         ` Tim Deegan
2014-08-27 23:24     ` Tian, Kevin
2014-08-27 15:02   ` Andrew Cooper
2014-08-08  7:01 ` [RFC Patch v2 46/45] Introduce "xen-load-devices-state" Wen Congyang
2014-08-08  7:19 ` [RFC Patch v2 00/45] COarse-grain LOck-stepping Virtual Machines for Non-stop Service Jan Beulich
2014-08-08  7:39   ` Wen Congyang
2014-08-08  8:21   ` Wen Congyang
2014-08-08  9:02     ` Jan Beulich
