xen-devel.lists.xenproject.org archive mirror
* Re: !!!!!help!I wouldn't be able to meet the deadline!(qcow format image file read operation in qemu-img-xen)[updated]
@ 2012-12-19 16:04 hxkhust
  2012-12-19 16:23 ` Mats Petersson
  0 siblings, 1 reply; 4+ messages in thread
From: hxkhust @ 2012-12-19 16:04 UTC (permalink / raw)
  To: mats.petersson; +Cc: xen-devel@lists.xensource.com



>Date: Wed, 19 Dec 2012 15:32:41 +0000
>From: Mats Petersson <mats.petersson@citrix.com>
>To: <xen-devel@lists.xen.org>
>Subject: Re: [Xen-devel] !!!!!help!I wouldn't be able to meet the
>	deadline!(qcow format image file read operation in
>	qemu-img-xen)[updated]
>Message-ID: <50D1DE19.1080709@citrix.com>
>Content-Type: text/plain; charset="GB2312"
>
>On 19/12/12 15:23, hxkhust wrote:
>> Or could you tell me how to cache the data which is read from the
>> backingfile when a qcow image is regarded as a virtual disk in a
>> running HVM?
>I take it the above single question is the effect of my previous reply?
>Why did you have to "hide" that little extra question in the whole
>previous e-mail?
Yes, exactly. I pulled that one question out because a narrow, concrete question is easier for readers to answer and takes less typing, I guess; that way I am more likely to get an answer I can actually act on.
>Sorry, don't know the answer to your question [I'm guessing, in general,
>that the Dom0 will do that for you, subject to available space], just
>pointing out that there is a minor difference between your previous and
>current mail.
Yeah, the difference is minor. I am short on time and worried about this problem. I have noticed that many questions on this list get more than one Re: reply; how do people manage that?
>By caching, do you mean "load the entire file into RAM", or "if a read
>is requested for the same piece of 'disk' multiple times, I want the
>previous result to be stored and returned".
I mean the latter. Could you give me some suggestions? A rough sketch of what I have in mind is below.
>--
>Mats
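
To make concrete what I mean by "the latter", something along these lines is what I am imagining: a minimal sketch with made-up names and a simple direct-mapped layout, not the qemu-xen block layer. The lookup would sit in front of the bdrv_aio_read of the backing file, and the store would run in the read completion callback once the buffer is valid.

/*
 * Minimal sketch of a per-sector read cache (made-up names, not the
 * qemu-xen block layer): direct-mapped, keyed by sector number, and
 * assuming non-negative sector numbers.  A real version would also need
 * an eviction policy, locking and invalidation on writes to the backing
 * file.
 */
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE 512
#define CACHE_SLOTS 1024              /* 512 KiB of cached sectors */

struct cached_sector {
    int64_t sector;                   /* -1 means the slot is empty */
    uint8_t data[SECTOR_SIZE];
};

static struct cached_sector cache[CACHE_SLOTS];

static void cache_init(void)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        cache[i].sector = -1;
}

/* Returns 1 on a hit and copies the cached sector into buf. */
static int cache_lookup(int64_t sector, uint8_t *buf)
{
    struct cached_sector *e = &cache[sector % CACHE_SLOTS];
    if (e->sector != sector)
        return 0;
    memcpy(buf, e->data, SECTOR_SIZE);
    return 1;
}

/* To be called from the read completion callback, once buf is valid. */
static void cache_store(int64_t sector, const uint8_t *buf)
{
    struct cached_sector *e = &cache[sector % CACHE_SLOTS];
    e->sector = sector;
    memcpy(e->data, buf, SECTOR_SIZE);
}

On a hit one could skip the backing-file read entirely and complete the request straight from the cached copy. Does something like this already exist in the qemu-xen block layer, or would I have to add it myself?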


* !!!!!help!I wouldn't be able to meet the deadline!(qcow format image file read operation in qemu-img-xen)[updated]
@ 2012-12-19 15:23 hxkhust
  2012-12-19 15:32 ` Mats Petersson
  0 siblings, 1 reply; 4+ messages in thread
From: hxkhust @ 2012-12-19 15:23 UTC (permalink / raw)
  To: xen-devel



Hi guys,
While an HVM guest that uses a qcow format image file as its virtual disk is running, the qcow image is read constantly. When that qcow image is based on a raw format image, the backing file, i.e. that raw image, is read whenever necessary. My goal is to cache the data that is read from the backing file while the HVM is running.
What I am looking at is the following code (from /xen-4.1.2/tools/ioemu-qemu-xen/block-qcow.c):
static void qcow_aio_read_cb(void *opaque, int ret)
{
    ........
    if (!acb->cluster_offset) {
        if (bs->backing_hd) {
            /* read from the base image */
            acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,                  /* <-- the call I am asking about */
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb); /* <-- */
            /* I read what acb->buf points to here, but the read has not finished yet. */
            if (acb->hd_aiocb == NULL)
                goto fail;
        } else {
            /* Note: in this case, no need to wait */
            memset(acb->buf, 0, 512 * acb->n);
            goto redo;
        }
    } else if (acb->cluster_offset & QCOW_OFLAG_COMPRESSED) {
        /* add AIO support for compressed blocks ? */
        if (decompress_cluster(s, acb->cluster_offset) < 0)
            goto fail;
        memcpy(acb->buf,
               s->cluster_cache + index_in_cluster * 512, 512 * acb->n);
        goto redo;
    .........
When the call

acb->hd_aiocb = bdrv_aio_read(bs->backing_hd,
                acb->sector_num, acb->buf, acb->n, qcow_aio_read_cb, acb);

returns, the memory that acb->buf points to has not yet been filled, because this is an asynchronous read. Could someone explain the principle or the flow of this asynchronous read, ideally in terms of the Xen/qemu code itself? I need to know at which point the data has actually been copied into the memory that acb->buf points to; this is important to me and, as the subject says, I have to solve it as soon as possible. Or could you tell me how to cache the data that is read from the backing file when a qcow image is used as the virtual disk of a running HVM?
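
From what I can tell so far, the pattern seems to be roughly the following (a minimal sketch with made-up names, not the actual qemu-xen AIO layer): the submit call only queues the request, and the buffer is only valid once the completion callback has been invoked with a non-negative result.

/*
 * Rough sketch of the callback pattern as I understand it (hypothetical
 * names, not the real qemu-xen AIO code): submission returns at once,
 * the data becomes visible only when the completion callback runs.
 */
#include <stdio.h>
#include <string.h>

typedef void aio_cb_fn(void *opaque, int ret);

struct fake_aio_req {
    unsigned char *buf;
    size_t         len;
    aio_cb_fn     *cb;
    void          *opaque;
};

/* Submission: in real code this only queues the I/O and returns at once. */
static void submit_async_read(struct fake_aio_req *req)
{
    (void)req;                        /* req->buf is NOT filled in here */
}

/* Completion: the AIO/event loop calls this once the data has landed. */
static void complete_async_read(struct fake_aio_req *req)
{
    memset(req->buf, 0xAB, req->len); /* data arrives in the buffer */
    req->cb(req->opaque, 0);          /* only now is buf safe to read */
}

static void my_read_done(void *opaque, int ret)
{
    unsigned char *buf = opaque;
    if (ret < 0)
        return;
    printf("first byte after completion: 0x%02x\n", buf[0]);
}

int main(void)
{
    unsigned char buf[512] = { 0 };
    struct fake_aio_req req = { buf, sizeof(buf), my_read_done, buf };

    submit_async_read(&req);
    /* reading buf here would show stale data, like in qcow_aio_read_cb */
    complete_async_read(&req);        /* stands in for the event loop */
    return 0;
}

If that is right, then for the backing-file read above the data is guaranteed to be in acb->buf only at the next invocation of qcow_aio_read_cb, since it is passed as its own completion callback, once qemu's AIO/event handling has finished the I/O. Please correct me if I have misunderstood.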


A newbie







