From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v9 01/11] x86/hvm/ioreq: maintain an array of ioreq servers rather than a list
Date: Mon, 9 Oct 2017 12:45:16 +0000
Message-ID: <1f72769be23745b98f369be5bd9f751f@AMSPEX02CL03.citrite.net>
In-Reply-To: <59DB8A560200007800183EA7@prv-mh.provo.novell.com>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 09 October 2017 13:40
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH v9 01/11] x86/hvm/ioreq: maintain an array of ioreq
> servers rather than a list
>
> >>> On 06.10.17 at 14:25, <paul.durrant@citrix.com> wrote:
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -33,6 +33,44 @@
> >
> > #include <public/hvm/ioreq.h>
> >
> > +static void set_ioreq_server(struct domain *d, unsigned int id,
> > + struct hvm_ioreq_server *s)
> > +{
> > + ASSERT(id < MAX_NR_IOREQ_SERVERS);
> > + ASSERT(!s || !d->arch.hvm_domain.ioreq_server.server[id]);
> > +
> > + d->arch.hvm_domain.ioreq_server.server[id] = s;
> > +}
> > +
> > +#define GET_IOREQ_SERVER(d, id) \
> > + (d)->arch.hvm_domain.ioreq_server.server[id]
> > +
> > +static struct hvm_ioreq_server *get_ioreq_server(const struct domain
> *d,
> > + unsigned int id)
> > +{
> > + if ( id >= MAX_NR_IOREQ_SERVERS )
> > + return NULL;
> > +
> > + return GET_IOREQ_SERVER(d, id);
> > +}
> > +
> > +#define IS_DEFAULT(s) \
> > + ((s) == get_ioreq_server((s)->domain, DEFAULT_IOSERVID))
>
> While at this point it looks like all users of this macro either
> explicitly check s to be non-NULL before invoking the macro or
> are being called with s guaranteed non-NULL, going forward it
> may easily be that NULL might be handed here. Therefore I
> think it would be better to either add an ASSERT() or do
>
> #define IS_DEFAULT(s) \
> ((s) && (s) == get_ioreq_server((s)->domain, DEFAULT_IOSERVID))
>
Ok, I'll code it as you suggest.
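For the record, the ASSERT() route would presumably need a statement
expression to still yield a value; a rough, untested sketch:

#define IS_DEFAULT(s) ({ \
    ASSERT(s); \
    (s) == get_ioreq_server((s)->domain, DEFAULT_IOSERVID); \
})

so the NULL-tolerant form you spell out above seems the simpler option anyway.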
>
> > +/*
> > + * Iterate over all possible ioreq servers. The use of inline function
> > + * get_ioreq_server() in the increment is deliberate as use of the
> > + * GET_IOREQ_SERVER() macro will cause gcc to complain about an array
> > + * overflow.
>
> I think you shouldn't accuse gcc of complaining, but simply state
> that there _will be_ an array overflow otherwise.
>
> > + */
> > +#define FOR_EACH_IOREQ_SERVER(d, id, s) \
> > + for ( (id) = 0, (s) = GET_IOREQ_SERVER(d, 0); \
> > + (id) < MAX_NR_IOREQ_SERVERS; \
> > + (s) = get_ioreq_server(d, ++(id)) ) \
> > + if ( !s ) \
>
> Alternatively, how about folding both macro and inline function
> invocation be putting them in the if() here?
I guess that, now the if is there, I could do that... it will probably look neater.
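Something like the following, I assume (untested sketch):

#define FOR_EACH_IOREQ_SERVER(d, id, s) \
    for ( (id) = 0; (id) < MAX_NR_IOREQ_SERVERS; (id)++ ) \
        if ( !((s) = GET_IOREQ_SERVER(d, id)) ) \
            continue; \
        else

That would also let the comment about the array overflow go away, since the
loop condition now bounds the index before GET_IOREQ_SERVER() is evaluated.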
>
> > @@ -685,52 +688,64 @@ int hvm_create_ioreq_server(struct domain *d,
> domid_t domid,
> > ioservid_t *id)
> > {
> > struct hvm_ioreq_server *s;
> > + unsigned int i;
> > int rc;
> >
> > if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
> > return -EINVAL;
> >
> > - rc = -ENOMEM;
> > s = xzalloc(struct hvm_ioreq_server);
> > if ( !s )
> > - goto fail1;
> > + return -ENOMEM;
> >
> > domain_pause(d);
> > spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> >
> > - rc = -EEXIST;
> > - if ( is_default && d->arch.hvm_domain.default_ioreq_server != NULL )
> > - goto fail2;
> > -
> > - rc = hvm_ioreq_server_init(s, d, domid, is_default, bufioreq_handling,
> > - next_ioservid(d));
> > - if ( rc )
> > - goto fail3;
> > -
> > - list_add(&s->list_entry,
> > - &d->arch.hvm_domain.ioreq_server.list);
> > -
> > if ( is_default )
> > {
> > - d->arch.hvm_domain.default_ioreq_server = s;
> > - hvm_ioreq_server_enable(s, true);
> > + i = DEFAULT_IOSERVID;
> > +
> > + rc = -EEXIST;
> > + if ( GET_IOREQ_SERVER(d, i) )
> > + goto fail;
> > }
> > + else
> > + {
> > + for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
> > + {
> > + if ( i != DEFAULT_IOSERVID && !GET_IOREQ_SERVER(d, i) )
> > + break;
> > + }
>
> Strictly speaking the braces here are pointless. But you're the
> maintainer, so you know what you like.
Yes, I prefer to keep them.
>
> Everything else looks fine to me now.
>
Cool. Thanks,
Paul
> Jan