From: Julien Grall <julien.grall@arm.com>
To: Andre Przywara <andre.przywara@arm.com>,
Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH 09/10] ARM: vGIC: introduce vgic_get/put_pending_irq
Date: Thu, 4 May 2017 18:36:56 +0100
Message-ID: <0b17acaa-a58d-ccb7-b49d-8462258e8e66@arm.com>
In-Reply-To: <20170504153123.1204-10-andre.przywara@arm.com>
Hi Andre,
On 04/05/17 16:31, Andre Przywara wrote:
> So far there is always a statically allocated struct pending_irq for
> each interrupt that we deal with.
> To prepare for dynamically allocated LPIs, introduce a put/get wrapper
> to get hold of a pending_irq pointer.
> So far get() just returns the same pointer and put() is empty, but this
> change allows to introduce ref-counting very easily, to prevent
> use-after-free usage of struct pending_irq's once LPIs get unmapped from
> a domain.
> For convenience reasons we introduce a put_unlock() version, which also
> drops the pending_irq lock before calling the actual put() function.
Please explain where get/put should be used in both the commit message
and the code.
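For instance, something along the lines of the comment below is what I
have in mind (just a sketch of the kind of documentation, the exact
wording and placement are up to you):

    /*
     * vgic_get_pending_irq() looks up the struct pending_irq for a
     * virtual IRQ and takes a reference on it. Every successful get
     * must be paired with a vgic_put_pending_irq() (or
     * vgic_put_pending_irq_unlock() when the pending_irq lock is held)
     * once the caller is done with the pointer, so that the structure
     * can be freed safely when an LPI gets unmapped from the domain.
     */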
>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> ---
> xen/arch/arm/gic.c | 24 +++++++++++++++------
> xen/arch/arm/vgic-v2.c | 4 ++--
> xen/arch/arm/vgic-v3.c | 4 ++--
> xen/arch/arm/vgic.c | 52 +++++++++++++++++++++++++++++++--------------
> xen/include/asm-arm/event.h | 20 +++++++++++------
> xen/include/asm-arm/vgic.h | 7 +++++-
> 6 files changed, 77 insertions(+), 34 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 737da6b..7147b6c 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -136,9 +136,7 @@ void gic_route_irq_to_xen(struct irq_desc *desc, unsigned int priority)
> int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
> struct irq_desc *desc, unsigned int priority)
> {
> - /* Use vcpu0 to retrieve the pending_irq struct. Given that we only
> - * route SPIs to guests, it doesn't make any difference. */
> - struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
> + struct pending_irq *p = vgic_get_pending_irq(d, NULL, virq);
This vgic_get_pending_irq would benefit from an explanation of where
the associated put is.
>
> ASSERT(spin_is_locked(&desc->lock));
> /* Caller has already checked that the IRQ is an SPI */
> @@ -148,7 +146,10 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
> if ( p->desc ||
> /* The VIRQ should not be already enabled by the guest */
> test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
> + {
> + vgic_put_pending_irq(d, p);
> return -EBUSY;
> + }
>
> desc->handler = gic_hw_ops->gic_guest_irq_type;
> set_bit(_IRQ_GUEST, &desc->status);
> @@ -159,6 +160,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
>
> p->desc = desc;
>
> + vgic_put_pending_irq(d, p);
Missing blank line here (between the put and the return).
> return 0;
> }
>
> @@ -166,7 +168,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned int virq,
> int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
> struct irq_desc *desc)
> {
> - struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
> + struct pending_irq *p = vgic_get_pending_irq(d, NULL, virq);
>
> ASSERT(spin_is_locked(&desc->lock));
> ASSERT(test_bit(_IRQ_GUEST, &desc->status));
> @@ -189,7 +191,10 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
> */
> if ( test_bit(_IRQ_INPROGRESS, &desc->status) ||
> !test_bit(_IRQ_DISABLED, &desc->status) )
> + {
> + vgic_put_pending_irq(d, p);
> return -EBUSY;
> + }
> }
>
> clear_bit(_IRQ_GUEST, &desc->status);
> @@ -197,6 +202,8 @@ int gic_remove_irq_from_guest(struct domain *d, unsigned int virq,
>
> p->desc = NULL;
>
> + vgic_put_pending_irq(d, p);
> +
> return 0;
> }
>
> @@ -383,13 +390,14 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, struct pending_irq *n)
>
> void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
> {
> - struct pending_irq *p = irq_to_pending(v, virtual_irq);
> + struct pending_irq *p = vgic_get_pending_irq(v->domain, v, virtual_irq);
> unsigned long flags;
>
> spin_lock_irqsave(&v->arch.vgic.lock, flags);
> if ( !list_empty(&p->lr_queue) )
> list_del_init(&p->lr_queue);
> spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> + vgic_put_pending_irq(v->domain, p);
> }
>
> void gic_raise_inflight_irq(struct vcpu *v, struct pending_irq *n)
> @@ -406,6 +414,7 @@ void gic_raise_inflight_irq(struct vcpu *v, struct pending_irq *n)
> gdprintk(XENLOG_DEBUG, "trying to inject irq=%u into d%dv%d, when it is still lr_pending\n",
> n->irq, v->domain->domain_id, v->vcpu_id);
> #endif
> + vgic_put_pending_irq(v->domain, n);
Why this one?
> }
>
> void gic_raise_guest_irq(struct vcpu *v, struct pending_irq *p)
> @@ -440,8 +449,9 @@ static void gic_update_one_lr(struct vcpu *v, int i)
>
> gic_hw_ops->read_lr(i, &lr_val);
> irq = lr_val.virq;
> - p = irq_to_pending(v, irq);
> + p = vgic_get_pending_irq(v->domain, v, irq);
> spin_lock(&p->lock);
It sounds to me like you want to introduce a
vgic_get_pending_irq_lock(...).
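Something along these lines (an untested sketch, reusing the helpers
introduced in this patch):

    static inline struct pending_irq *
    vgic_get_pending_irq_lock(struct domain *d, struct vcpu *v,
                              unsigned int irq)
    {
        /* Take a reference first, then the per-IRQ lock. */
        struct pending_irq *p = vgic_get_pending_irq(d, v, irq);

        spin_lock(&p->lock);

        return p;
    }

so the get + lock always pairs up naturally with the existing
vgic_put_pending_irq_unlock().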
> +
Spurious change.
> if ( lr_val.state & GICH_LR_ACTIVE )
> {
> set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
> @@ -499,7 +509,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
> }
> }
> }
> - spin_unlock(&p->lock);
> + vgic_put_pending_irq_unlock(v->domain, p);
> }
>
> void gic_clear_lrs(struct vcpu *v)
> diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
> index bf755ae..36ed04f 100644
> --- a/xen/arch/arm/vgic-v2.c
> +++ b/xen/arch/arm/vgic-v2.c
> @@ -75,11 +75,11 @@ static uint32_t vgic_fetch_itargetsr(struct vcpu *v, unsigned int offset)
>
> for ( i = 0; i < NR_TARGETS_PER_ITARGETSR; i++, offset++ )
> {
> - struct pending_irq *p = irq_to_pending(v, offset);
> + struct pending_irq *p = vgic_get_pending_irq(v->domain, v, offset);
>
> spin_lock(&p->lock);
> reg |= (1 << p->vcpu_id) << (i * NR_BITS_PER_TARGET);
> - spin_unlock(&p->lock);
> + vgic_put_pending_irq_unlock(v->domain, p);
> }
>
> return reg;
> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> index 15a512a..fff518e 100644
> --- a/xen/arch/arm/vgic-v3.c
> +++ b/xen/arch/arm/vgic-v3.c
> @@ -105,10 +105,10 @@ static uint64_t vgic_fetch_irouter(struct vcpu *v, unsigned int offset)
> /* There is exactly 1 vIRQ per IROUTER */
> offset /= NR_BYTES_PER_IROUTER;
>
> - p = irq_to_pending(v, offset);
> + p = vgic_get_pending_irq(v->domain, v, offset);
> spin_lock(&p->lock);
> aff = vcpuid_to_vaffinity(p->vcpu_id);
> - spin_unlock(&p->lock);
> + vgic_put_pending_irq_unlock(v->domain, p);
>
> return aff;
> }
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 530ac55..c7d645e 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -245,17 +245,16 @@ static void set_config(struct pending_irq *p, unsigned int config)
> set_bit(GIC_IRQ_GUEST_EDGE, &p->status);
> }
>
> -
This newline should not have been introduced in the first place.
> #define DEFINE_GATHER_IRQ_INFO(name, get_val, shift) \
> uint32_t gather_irq_info_##name(struct vcpu *v, unsigned int irq) \
> { \
> uint32_t ret = 0, i; \
> for ( i = 0; i < (32 / shift); i++ ) \
> { \
> - struct pending_irq *p = irq_to_pending(v, irq + i); \
> + struct pending_irq *p = vgic_get_pending_irq(v->domain, v, irq + i); \
> spin_lock(&p->lock); \
> ret |= get_val(p) << (shift * i); \
> - spin_unlock(&p->lock); \
> + vgic_put_pending_irq_unlock(v->domain, p); \
> } \
> return ret; \
> }
> @@ -267,10 +266,10 @@ void scatter_irq_info_##name(struct vcpu *v, unsigned int irq, \
> unsigned int i; \
> for ( i = 0; i < (32 / shift); i++ ) \
> { \
> - struct pending_irq *p = irq_to_pending(v, irq + i); \
> + struct pending_irq *p = vgic_get_pending_irq(v->domain, v, irq + i); \
> spin_lock(&p->lock); \
> set_val(p, (value >> (shift * i)) & ((1 << shift) - 1)); \
> - spin_unlock(&p->lock); \
> + vgic_put_pending_irq_unlock(v->domain, p); \
> } \
> }
>
> @@ -332,13 +331,13 @@ void arch_move_irqs(struct vcpu *v)
>
> for ( i = 32; i < vgic_num_irqs(d); i++ )
> {
> - p = irq_to_pending(d->vcpu[0], i);
> + p = vgic_get_pending_irq(d, NULL, i);
> spin_lock(&p->lock);
> v_target = vgic_get_target_vcpu(d, p);
>
> if ( v_target == v && !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
> irq_set_affinity(p->desc, cpu_mask);
> - spin_unlock(&p->lock);
> + vgic_put_pending_irq_unlock(d, p);
> }
> }
>
> @@ -351,7 +350,7 @@ void vgic_disable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
> struct vcpu *v_target;
>
> while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> - p = irq_to_pending(v, irq + i);
> + p = vgic_get_pending_irq(v->domain, v, irq + i);
> v_target = vgic_get_target_vcpu(v->domain, p);
> clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> gic_remove_from_queues(v_target, irq + i);
> @@ -361,6 +360,7 @@ void vgic_disable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
> p->desc->handler->disable(p->desc);
> spin_unlock_irqrestore(&p->desc->lock, flags);
> }
> + vgic_put_pending_irq(v->domain, p);
> i++;
> }
> }
> @@ -376,7 +376,7 @@ void vgic_enable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
> struct domain *d = v->domain;
>
> while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
> - p = irq_to_pending(v, irq + i);
> + p = vgic_get_pending_irq(d, v, irq + i);
> v_target = vgic_get_target_vcpu(d, p);
>
> spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
> @@ -404,6 +404,7 @@ void vgic_enable_irqs(struct vcpu *v, unsigned int irq, uint32_t r)
> p->desc->handler->enable(p->desc);
> spin_unlock_irqrestore(&p->desc->lock, flags);
> }
> + vgic_put_pending_irq(v->domain, p);
> i++;
> }
> }
> @@ -461,23 +462,39 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
> return true;
> }
>
> -struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq)
> +struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq)
> +{
> + ASSERT(irq >= NR_LOCAL_IRQS);
> +
> + return &d->arch.vgic.pending_irqs[irq - 32];
> +}
This re-ordering should have been done in a separate patch, and so
should the change to take a domain.
But I don't understand why you keep it around.
> +
> +struct pending_irq *vgic_get_pending_irq(struct domain *d, struct vcpu *v,
> + unsigned int irq)
> {
> struct pending_irq *n;
> +
> /* Pending irqs allocation strategy: the first vgic.nr_spis irqs
> * are used for SPIs; the rests are used for per cpu irqs */
> if ( irq < 32 )
> + {
> + ASSERT(v);
> n = &v->arch.vgic.pending_irqs[irq];
> + }
> else
> - n = &v->domain->arch.vgic.pending_irqs[irq - 32];
> + n = spi_to_pending(d, irq);
> +
This change does not belong in this patch.
> return n;
> }
>
> -struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq)
> +void vgic_put_pending_irq(struct domain *d, struct pending_irq *p)
> {
> - ASSERT(irq >= NR_LOCAL_IRQS);
> +}
>
> - return &d->arch.vgic.pending_irqs[irq - 32];
> +void vgic_put_pending_irq_unlock(struct domain *d, struct pending_irq *p)
> +{
> + spin_unlock(&p->lock);
> + vgic_put_pending_irq(d, p);
> }
>
> void vgic_clear_pending_irqs(struct vcpu *v)
> @@ -494,7 +511,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
>
> void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
> {
> - struct pending_irq *iter, *n = irq_to_pending(v, virq);
> + struct pending_irq *iter, *n = vgic_get_pending_irq(v->domain, v, virq);
> unsigned long flags;
> bool running;
>
> @@ -504,6 +521,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
> if ( test_bit(_VPF_down, &v->pause_flags) )
> {
> spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> + vgic_put_pending_irq(v->domain, n);
> return;
> }
>
> @@ -536,6 +554,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
> spin_unlock(&n->lock);
>
> out:
> + vgic_put_pending_irq(v->domain, n);
> spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> /* we have a new higher priority irq, inject it into the guest */
> running = v->is_running;
> @@ -550,12 +569,13 @@ out:
> void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq)
> {
> struct vcpu *v;
> - struct pending_irq *p = irq_to_pending(d->vcpu[0], virq);
> + struct pending_irq *p = vgic_get_pending_irq(d, NULL, virq);
>
> /* the IRQ needs to be an SPI */
> ASSERT(virq >= 32 && virq <= vgic_num_irqs(d));
>
> v = vgic_get_target_vcpu(d, p);
> + vgic_put_pending_irq(d, p);
> vgic_vcpu_inject_irq(v, virq);
> }
>
> diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> index 5330dfe..df672e2 100644
> --- a/xen/include/asm-arm/event.h
> +++ b/xen/include/asm-arm/event.h
> @@ -16,8 +16,7 @@ static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)
>
> static inline int local_events_need_delivery_nomask(void)
> {
> - struct pending_irq *p = irq_to_pending(current,
> - current->domain->arch.evtchn_irq);
Limiting the scope of pending_irq should be done in a separate patch.
> + int ret = 0;
>
> /* XXX: if the first interrupt has already been delivered, we should
> * check whether any other interrupts with priority higher than the
> @@ -28,11 +27,20 @@ static inline int local_events_need_delivery_nomask(void)
> * case.
> */
> if ( gic_events_need_delivery() )
> - return 1;
> + {
> + ret = 1;
> + }
> + else
> + {
> + struct pending_irq *p;
>
> - if ( vcpu_info(current, evtchn_upcall_pending) &&
> - list_empty(&p->inflight) )
> - return 1;
> + p = vgic_get_pending_irq(current->domain, current,
> + current->domain->arch.evtchn_irq);
> + if ( vcpu_info(current, evtchn_upcall_pending) &&
> + list_empty(&p->inflight) )
> + ret = 1;
> + vgic_put_pending_irq(current->domain, p);
Looking at this code, I think there is a race condition, because
nothing protects list_empty(&p->inflight) (it could be modified by
another physical vCPU at the same time).
Although I don't know whether this is really an issue. Stefano, do you
have an opinion?
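If we decide it matters, I would expect something like taking the
pending_irq lock around the check, e.g. (untested sketch, leaving aside
whether the irqsave variant would be needed here):

        p = vgic_get_pending_irq(current->domain, current,
                                 current->domain->arch.evtchn_irq);
        /* Protect the inflight list check against concurrent updates. */
        spin_lock(&p->lock);
        if ( vcpu_info(current, evtchn_upcall_pending) &&
             list_empty(&p->inflight) )
            ret = 1;
        vgic_put_pending_irq_unlock(current->domain, p);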
> + }
>
> return 0;
> }
> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
> index 186e6df..36e4de2 100644
> --- a/xen/include/asm-arm/vgic.h
> +++ b/xen/include/asm-arm/vgic.h
> @@ -299,7 +299,12 @@ extern struct vcpu *vgic_get_target_vcpu(struct domain *d,
> extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq);
> extern void vgic_vcpu_inject_spi(struct domain *d, unsigned int virq);
> extern void vgic_clear_pending_irqs(struct vcpu *v);
> -extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
> +extern struct pending_irq *vgic_get_pending_irq(struct domain *d,
> + struct vcpu *v,
> + unsigned int irq);
> +extern void vgic_put_pending_irq(struct domain *d, struct pending_irq *p);
> +extern void vgic_put_pending_irq_unlock(struct domain *d,
> + struct pending_irq *p);
> extern struct pending_irq *spi_to_pending(struct domain *d, unsigned int irq);
> extern struct vgic_irq_rank *vgic_rank_offset(struct vcpu *v, int b, int n, int s);
> extern struct vgic_irq_rank *vgic_rank_irq(struct vcpu *v, unsigned int irq);
>
Cheers,
--
Julien Grall