qemu-devel.nongnu.org archive mirror
From: Anthony Liguori <anthony@codemonkey.ws>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] xen_disk: support cache backend option
Date: Wed, 26 Jun 2013 17:10:20 -0500	[thread overview]
Message-ID: <87wqpg1q2b.fsf@codemonkey.ws> (raw)
In-Reply-To: <alpine.DEB.2.02.1306262242270.4782@kaball.uk.xensource.com>

Stefano Stabellini <stefano.stabellini@eu.citrix.com> writes:

> On Wed, 26 Jun 2013, Anthony Liguori wrote:
>> Stefano Stabellini <stefano.stabellini@eu.citrix.com> writes:
>> 
>> > Support a backend option "cache" that specifies the cache mode that
>> > should be used to open the disk file or device.
>> >
>> > See: http://marc.info/?l=xen-devel&m=137226872905057
>> >
>> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>> 
>> Is the guest setting this or a management tool?  I thought we were
>> moving to having the Xen management tools use QMP and the command line
>> instead of putting this stuff in XenStore...
>
> And we are; in fact, we have just introduced QMP-based CPU hotplug in
> libxl for HVM guests, using the existing cpu-add command.
>
> However, this option would be part of the existing block protocol,
> which, like all the other PV protocols, is entirely xenstore based.
> I think it makes sense to introduce the cache configuration via xenstore
> and make it part of the block interface, so that other block backends
> (like blkback and blktap) might support it too in the future. Otherwise
> it would be a QEMU-only thing, and moreover it would be the only block
> related configuration for the PV backend to go via QMP when everything
> else comes from xenstore. Pretty ugly.
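
For concreteness, a cache key in the backend's xenstore directory would
look something like this (the backend domid, frontend domid, and virtual
device number below are made up for illustration):

```shell
# Hypothetical example: the toolstack writes the new "cache" key next to
# the existing qdisk backend keys. Paths/IDs here are illustrative only;
# a real toolstack computes them per guest and per device.
xenstore-write /local/domain/0/backend/qdisk/5/51712/cache writeback
xenstore-read  /local/domain/0/backend/qdisk/5/51712/cache
```

This requires a running xenstored and the usual backend permissions, so it
is a sketch of the key layout rather than something to run verbatim.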

Bleck.  I really wish we didn't have this logic in QEMU in the first
place.  I guess since it's already here though, extending it can't hurt.

But please try to remove the whole set-things-up-via-Xenstore approach in
the future.  It will become a problem at some point, as it's a pretty
significant layering violation.  Maybe the thing to do is to move the
logic out of the device and into a Xen-specific module that sets up the
device model based on Xenstore...

Anyway, I guess:

Acked-by: Anthony Liguori <aliguori@us.ibm.com>

I would not use bdrv_parse_cache_flags(), since we may add more cache
modes down the road.

Regards,

Anthony Liguori

>
>
>
>> Regards,
>> 
>> Anthony Liguori
>> 
>> >
>> > diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
>> > index f484404..092aa6b 100644
>> > --- a/hw/block/xen_disk.c
>> > +++ b/hw/block/xen_disk.c
>> > @@ -94,6 +94,7 @@ struct XenBlkDev {
>> >      char                *type;
>> >      char                *dev;
>> >      char                *devtype;
>> > +    char                *cache;
>> >      const char          *fileproto;
>> >      const char          *filename;
>> >      int                 ring_ref;
>> > @@ -734,6 +735,12 @@ static int blk_init(struct XenDevice *xendev)
>> >      if (blkdev->devtype == NULL) {
>> >          blkdev->devtype = xenstore_read_be_str(&blkdev->xendev, "device-type");
>> >      }
>> > +    if (blkdev->cache == NULL) {
>> > +        blkdev->cache = xenstore_read_be_str(&blkdev->xendev, "cache");
>> > +    }
>> > +    if (blkdev->cache == NULL) {
>> > +        blkdev->cache = g_strdup("writeback");
>> > +    }
>> >  
>> >      /* do we have all we need? */
>> >      if (blkdev->params == NULL ||
>> > @@ -774,6 +781,8 @@ out_error:
>> >      blkdev->dev = NULL;
>> >      g_free(blkdev->devtype);
>> >      blkdev->devtype = NULL;
>> > +    g_free(blkdev->cache);
>> > +    blkdev->cache = NULL;
>> >      return -1;
>> >  }
>> >  
>> > @@ -782,8 +791,14 @@ static int blk_connect(struct XenDevice *xendev)
>> >      struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
>> >      int pers, index, qflags;
>> >  
>> > -    /* read-only ? */
>> > -    qflags = BDRV_O_CACHE_WB | BDRV_O_NATIVE_AIO;
>> > +    if (!strcmp(blkdev->cache, "none")) {
>> > +        qflags = BDRV_O_NATIVE_AIO | BDRV_O_NOCACHE;
>> > +    } else if (!strcmp(blkdev->cache, "writethrough")) {
>> > +        qflags = 0;
>> > +    } else {
>> > +        /* default to writeback */
>> > +        qflags = BDRV_O_NATIVE_AIO | BDRV_O_CACHE_WB;
>> > +    }
>> >      if (strcmp(blkdev->mode, "w") == 0) {
>> >          qflags |= BDRV_O_RDWR;
>> >      }
>> > @@ -950,6 +965,7 @@ static int blk_free(struct XenDevice *xendev)
>> >      g_free(blkdev->type);
>> >      g_free(blkdev->dev);
>> >      g_free(blkdev->devtype);
>> > +    g_free(blkdev->cache);
>> >      qemu_bh_delete(blkdev->bh);
>> >      return 0;
>> >  }
>> 

Thread overview: 6+ messages
2013-06-26 17:48 [Qemu-devel] [PATCH] xen_disk: support cache backend option Stefano Stabellini
2013-06-26 20:53 ` Paolo Bonzini
2013-06-26 21:42   ` Stefano Stabellini
2013-06-26 20:58 ` Anthony Liguori
2013-06-26 21:48   ` Stefano Stabellini
2013-06-26 22:10     ` Anthony Liguori [this message]