From: Prasanna Kumar Kalever <pkalever@redhat.com>
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, pkrempa@redhat.com, stefanha@gmail.com,
	deepakcs@redhat.com, bharata@linux.vnet.ibm.com,
	rtalur@redhat.com
Subject: Re: [Qemu-devel] [PATCH 3/3] block/gluster: add support for multiple gluster servers
Date: Thu, 5 Nov 2015 07:45:50 -0500 (EST)
Message-ID: <1677110309.4230186.1446727550535.JavaMail.zimbra@redhat.com>
In-Reply-To: <1446727026-18094-2-git-send-email-prasanna.kalever@redhat.com>

On Thursday, November 5, 2015 6:07:06 PM, Prasanna Kumar Kalever wrote:
> This patch adds a way to specify multiple volfile servers to the gluster
> block backend of QEMU with tcp|rdma transport types and their port numbers.
> 
> Problem:
> 
> Currently VM Image on gluster volume is specified like this:

[...]

>  static void qemu_gluster_complete_aio(void *opaque)
>  {
>      GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
> @@ -309,13 +641,13 @@ static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
>      }
>  }
>  
> -static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
> +static int qemu_gluster_open(BlockDriverState *bs,  QDict *options,
>                               int bdrv_flags, Error **errp)
>  {
>      BDRVGlusterState *s = bs->opaque;
>      int open_flags = 0;
>      int ret = 0;
> -    GlusterConf *gconf = g_new0(GlusterConf, 1);
> +    BlockdevOptionsGluster *gconf = NULL;
>      QemuOpts *opts;
>      Error *local_err = NULL;
>      const char *filename;
> @@ -329,8 +661,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>      }
>  
>      filename = qemu_opt_get(opts, "filename");
> -
> -    s->glfs = qemu_gluster_init(gconf, filename, errp);
> +    s->glfs = qemu_gluster_init(&gconf, filename, options, errp);
>      if (!s->glfs) {
>          ret = -errno;
>          goto out;
> @@ -345,7 +676,7 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
>  
>  out:
>      qemu_opts_del(opts);
> -    qemu_gluster_gconf_free(gconf);
> +    qapi_free_BlockdevOptionsGluster(gconf);

Can someone help me, please?
This leads to a crash in the second iteration, i.e. while freeing "gconf->servers->next->value".
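
For reference, here is a minimal sketch of how I understand the ownership has to
work for qapi_free_BlockdevOptionsGluster() to be safe. This is not code from the
patch; the GlusterServer/GlusterServerList names and the g_new0() calls are only my
assumption of what the QAPI generator produces for the schema in this series:

    /* Hypothetical helper: every list node and every value must be a separate
     * heap allocation owned only by gconf, because the generated free function
     * walks gconf->servers and frees each node and each node->value in turn. */
    static void add_server(BlockdevOptionsGluster *gconf, GlusterServer *server)
    {
        GlusterServerList *node = g_new0(GlusterServerList, 1);

        node->value = server;          /* list takes ownership of 'server' */
        node->next = gconf->servers;   /* prepend; order does not matter here */
        gconf->servers = node;
    }

If the same GlusterServer pointer ends up in two nodes, or a node's value is also
freed somewhere else (or lives on the stack), then the second step of that walk,
which is exactly gconf->servers->next->value, touches already-freed memory; that is
where I would suspect this crash comes from.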

-prasanna 

>      if (!ret) {
>          return ret;
>      }

[...]

> --
> 2.1.0
> 
> 


Thread overview: 6+ messages
2015-11-05 12:37 [Qemu-devel] [PATCH RFC 0/3] block/gluster: add support for multiple gluster servers Prasanna Kumar Kalever
2015-11-05 12:37 ` [Qemu-devel] [PATCH 3/3] " Prasanna Kumar Kalever
2015-11-05 12:45   ` Prasanna Kumar Kalever [this message]
2015-11-09  7:04     ` Peter Krempa
2015-11-09  9:40       ` Prasanna Kumar Kalever
2015-11-09 21:11         ` Eric Blake
