From: Paul Durrant <Paul.Durrant@citrix.com>
To: Juergen Gross <jgross@suse.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
"sstabellini@kernel.org" <sstabellini@kernel.org>,
"kraxel@redhat.com" <kraxel@redhat.com>
Subject: Re: [Qemu-devel] [Xen-devel] [PATCH] xen: fix qdisk BLKIF_OP_DISCARD for 32/64 word size mix
Date: Thu, 16 Jun 2016 11:24:38 +0000 [thread overview]
Message-ID: <481f9b77db71494d81ea43dada59abf0@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <1466071320-10964-1-git-send-email-jgross@suse.com>
> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
> Juergen Gross
> Sent: 16 June 2016 11:02
> To: qemu-devel@nongnu.org; xen-devel@lists.xensource.com
> Cc: Anthony Perard; Juergen Gross; sstabellini@kernel.org;
> kraxel@redhat.com
> Subject: [Xen-devel] [PATCH] xen: fix qdisk BLKIF_OP_DISCARD for 32/64
> word size mix
>
> In case the word size of the domU and the qemu process running the qdisk
> backend differ, BLKIF_OP_DISCARD will not work reliably, as the request
> structure in the ring has different layouts for different word sizes.
>
> Correct this by copying the request structure element by element in the
> BLKIF_OP_DISCARD case, too, when the word sizes differ.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
Would it not be better to re-import the canonical blkif header as a whole rather than cherry-picking like this? You'd need to post-process to maintain style and possibly change some names for compatibility etc., but probably nothing beyond what indent and a simple [s]ed script can do.
I did broadly the same thing to re-import the netif header into Linux recently.
Paul
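
For context, the layout mismatch described in the commit message can be shown with a minimal user-space sketch (not part of the patch; the struct and field names below only mirror the blkif discard request and are otherwise hypothetical): the i386 ABI aligns uint64_t to 4 bytes, so the 64-bit id field of a discard request built by a 32-bit guest sits at offset 4, while natural x86_64 alignment puts it at offset 8. Reading a 32-bit guest's request through the native 64-bit struct therefore picks up the wrong words, which is why the blkif_get_x86_32_req()/blkif_get_x86_64_req() helpers copy the fields individually.

/* Sketch only: compare field offsets of a discard-style request under
 * i386-ABI packing (uint64_t aligned to 4 bytes) and under the natural
 * x86_64 alignment (uint64_t aligned to 8 bytes). */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 4)                  /* emulate the i386 guest layout */
struct discard_32 {
    uint8_t  operation;                /* BLKIF_OP_DISCARD */
    uint8_t  flag;
    uint16_t handle;                   /* blkif_vdev_t */
    uint64_t id;
    uint64_t sector_number;            /* blkif_sector_t */
    uint64_t nr_sectors;
};
#pragma pack(pop)

struct discard_64 {                    /* natural x86_64 layout */
    uint8_t  operation;
    uint8_t  flag;
    uint16_t handle;
    uint64_t id;
    uint64_t sector_number;
    uint64_t nr_sectors;
};

int main(void)
{
    printf("32-bit: id@%zu sector_number@%zu nr_sectors@%zu size=%zu\n",
           offsetof(struct discard_32, id),
           offsetof(struct discard_32, sector_number),
           offsetof(struct discard_32, nr_sectors),
           sizeof(struct discard_32));
    printf("64-bit: id@%zu sector_number@%zu nr_sectors@%zu size=%zu\n",
           offsetof(struct discard_64, id),
           offsetof(struct discard_64, sector_number),
           offsetof(struct discard_64, nr_sectors),
           sizeof(struct discard_64));
    return 0;
}

Compiled for x86_64 this prints id at offset 4 / nr_sectors at 20 for the packed layout versus 8 / 24 for the native one, so casting a 32-bit guest's request to the native struct (as the pre-patch code did in the discard path) misreads id, sector_number and nr_sectors.
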
> ---
> hw/block/xen_blkif.h | 20 ++++++++++++++++++--
> 1 file changed, 18 insertions(+), 2 deletions(-)
>
> diff --git a/hw/block/xen_blkif.h b/hw/block/xen_blkif.h
> index e3b133b..969112f 100644
> --- a/hw/block/xen_blkif.h
> +++ b/hw/block/xen_blkif.h
> @@ -26,6 +26,14 @@ struct blkif_x86_32_request {
> blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
> struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
> };
> +struct blkif_x86_32_request_discard {
> + uint8_t operation; /* BLKIF_OP_DISCARD */
> + uint8_t flag; /* nr_segments in request struct */
> + blkif_vdev_t handle; /* only for read/write requests */
> + uint64_t id; /* private guest value, echoed in resp */
> + blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
> + uint64_t nr_sectors; /* # of contiguous sectors to discard */
> +};
> struct blkif_x86_32_response {
> uint64_t id; /* copied from request */
> uint8_t operation; /* copied from request */
> @@ -44,6 +52,14 @@ struct blkif_x86_64_request {
> blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
> struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
> };
> +struct blkif_x86_64_request_discard {
> + uint8_t operation; /* BLKIF_OP_DISCARD */
> + uint8_t flag; /* nr_segments in request struct */
> + blkif_vdev_t handle; /* only for read/write requests */
> + uint64_t __attribute__((__aligned__(8))) id;
> + blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
> + uint64_t nr_sectors; /* # of contiguous sectors to discard */
> +};
> struct blkif_x86_64_response {
> uint64_t __attribute__((__aligned__(8))) id;
> uint8_t operation; /* copied from request */
> @@ -82,7 +98,7 @@ static inline void blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_reque
> /* Prevent the compiler from using src->... instead. */
> barrier();
> if (dst->operation == BLKIF_OP_DISCARD) {
> - struct blkif_request_discard *s = (void *)src;
> + struct blkif_x86_32_request_discard *s = (void *)src;
> struct blkif_request_discard *d = (void *)dst;
> d->nr_sectors = s->nr_sectors;
> return;
> @@ -105,7 +121,7 @@ static inline void blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_reque
> /* Prevent the compiler from using src->... instead. */
> barrier();
> if (dst->operation == BLKIF_OP_DISCARD) {
> - struct blkif_request_discard *s = (void *)src;
> + struct blkif_x86_64_request_discard *s = (void *)src;
> struct blkif_request_discard *d = (void *)dst;
> d->nr_sectors = s->nr_sectors;
> return;
> --
> 2.6.6
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
Thread overview: 10+ messages
2016-06-16 10:02 [Qemu-devel] [PATCH] xen: fix qdisk BLKIF_OP_DISCARD for 32/64 word size mix Juergen Gross
2016-06-16 10:54 ` [Qemu-devel] [Xen-devel] " Jan Beulich
[not found] ` <5762A17B02000078000F59F3@suse.com>
2016-06-16 11:04 ` Juergen Gross
2016-06-16 11:17 ` Jan Beulich
2016-06-16 13:07 ` Stefano Stabellini
2016-06-16 13:49 ` Juergen Gross
2016-06-17 16:07 ` Stefano Stabellini
2016-06-16 11:24 ` Paul Durrant [this message]
2016-06-16 12:18 ` Juergen Gross
2016-06-16 12:23 ` Paul Durrant