* [RFC PATCH post-8.1] hw/xen: Clean up event channel 'type_val' handling to use union
From: David Woodhouse @ 2023-08-03 15:28 UTC
To: qemu-devel
Cc: Paul Durrant, Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Michael S. Tsirkin, Marcel Apfelbaum
From: David Woodhouse <dwmw@amazon.co.uk>
A previous implementation of this stuff used a 64-bit field for all of
the port information (vcpu/type/type_val) and did atomic exchanges on
them. When I implemented that in QEMU I regretted my life choices and
just kept it simple with locking instead.
So there's no need for the XenEvtchnPort to be so simplistic. We can
use a union for the pirq/virq/interdomain information, which lets us
keep a separate bit for the 'remote domain' in interdomain ports. A
single bit is enough since the only possible targets are loopback or
qemu itself.
So now we can ditch PORT_INFO_TYPEVAL_REMOTE_QEMU and the horrid
manual masking, although the in-memory representation is identical
so there's no change in the saved state ABI.
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
---
Thought this would be a nice cleanup to avoid abusing `type_val` for
various different purposes, and especially the top bit of it for
interdomain ports. But having done it I find myself fairly ambivalent
about it. Does anyone feel strongly either way?
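For reference, a minimal sketch of why the saved state ABI is unaffected.
This assumes the usual bitfield allocation from the low bit, so 'port:15'
lands in bits 0-14 and the flag in bit 15 -- the same bit the old code
set manually (1 << 15, hence the constant values below):

    #include <assert.h>
    #include <stdint.h>

    #define PORT_INFO_TYPEVAL_REMOTE_QEMU       0x8000
    #define PORT_INFO_TYPEVAL_REMOTE_PORT_MASK  0x7FFF

    union port_info {
        uint16_t type_val;
        struct {
            uint16_t port:15;
            uint16_t to_qemu:1;
        } interdomain;
    };

    int main(void)
    {
        union port_info u = { .type_val = 0 };

        u.interdomain.port = 42;
        u.interdomain.to_qemu = 1;

        /* Same bits the old code built by hand with OR/mask */
        assert(u.type_val == (PORT_INFO_TYPEVAL_REMOTE_QEMU | 42));
        assert((u.type_val & PORT_INFO_TYPEVAL_REMOTE_PORT_MASK) == 42);
        return 0;
    }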
hw/i386/kvm/xen_evtchn.c | 124 ++++++++++++++++++++-------------------
1 file changed, 64 insertions(+), 60 deletions(-)
diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
index a731738411..446ae46022 100644
--- a/hw/i386/kvm/xen_evtchn.c
+++ b/hw/i386/kvm/xen_evtchn.c
@@ -58,7 +58,15 @@ OBJECT_DECLARE_SIMPLE_TYPE(XenEvtchnState, XEN_EVTCHN)
typedef struct XenEvtchnPort {
uint32_t vcpu; /* Xen/ACPI vcpu_id */
uint16_t type; /* EVTCHNSTAT_xxxx */
- uint16_t type_val; /* pirq# / virq# / remote port according to type */
+ union {
+ uint16_t type_val; /* pirq# / virq# / remote port according to type */
+ uint16_t pirq;
+ uint16_t virq;
+ struct {
+ uint16_t port:15;
+ uint16_t to_qemu:1; /* Only two targets; qemu or loopback */
+ } interdomain;
+ } u;
} XenEvtchnPort;
/* 32-bit compatibility definitions, also used natively in 32-bit build */
@@ -210,16 +218,16 @@ static int xen_evtchn_post_load(void *opaque, int version_id)
XenEvtchnPort *p = &s->port_table[i];
if (p->type == EVTCHNSTAT_pirq) {
- assert(p->type_val);
- assert(p->type_val < s->nr_pirqs);
+ assert(p->u.pirq);
+ assert(p->u.pirq < s->nr_pirqs);
/*
* Set the gsi to IRQ_UNBOUND; it may be changed to an actual
* GSI# below, or to IRQ_MSI_EMU when the MSI table snooping
* catches up with it.
*/
- s->pirq[p->type_val].gsi = IRQ_UNBOUND;
- s->pirq[p->type_val].port = i;
+ s->pirq[p->u.pirq].gsi = IRQ_UNBOUND;
+ s->pirq[p->u.pirq].port = i;
}
}
/* Rebuild s->pirq[].gsi mapping */
@@ -243,7 +251,7 @@ static const VMStateDescription xen_evtchn_port_vmstate = {
.fields = (VMStateField[]) {
VMSTATE_UINT32(vcpu, XenEvtchnPort),
VMSTATE_UINT16(type, XenEvtchnPort),
- VMSTATE_UINT16(type_val, XenEvtchnPort),
+ VMSTATE_UINT16(u.type_val, XenEvtchnPort),
VMSTATE_END_OF_LIST()
}
};
@@ -599,14 +607,13 @@ static void unbind_backend_ports(XenEvtchnState *s)
for (i = 1; i < s->nr_ports; i++) {
p = &s->port_table[i];
- if (p->type == EVTCHNSTAT_interdomain &&
- (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU)) {
- evtchn_port_t be_port = p->type_val & PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
+ if (p->type == EVTCHNSTAT_interdomain && p->u.interdomain.to_qemu) {
+ evtchn_port_t be_port = p->u.interdomain.port;
if (s->be_handles[be_port]) {
/* This part will be overwritten on the load anyway. */
p->type = EVTCHNSTAT_unbound;
- p->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
+ p->u.interdomain.port = 0;
/* Leave the backend port open and unbound too. */
if (kvm_xen_has_cap(EVTCHN_SEND)) {
@@ -644,7 +651,7 @@ int xen_evtchn_status_op(struct evtchn_status *status)
switch (p->type) {
case EVTCHNSTAT_unbound:
- if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
+ if (p->u.interdomain.to_qemu) {
status->u.unbound.dom = DOMID_QEMU;
} else {
status->u.unbound.dom = xen_domid;
@@ -652,22 +659,21 @@ int xen_evtchn_status_op(struct evtchn_status *status)
break;
case EVTCHNSTAT_interdomain:
- if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
+ if (p->u.interdomain.to_qemu) {
status->u.interdomain.dom = DOMID_QEMU;
} else {
status->u.interdomain.dom = xen_domid;
}
- status->u.interdomain.port = p->type_val &
- PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
+ status->u.interdomain.port = p->u.interdomain.port;
break;
case EVTCHNSTAT_pirq:
- status->u.pirq = p->type_val;
+ status->u.pirq = p->u.pirq;
break;
case EVTCHNSTAT_virq:
- status->u.virq = p->type_val;
+ status->u.virq = p->u.virq;
break;
}
@@ -983,7 +989,7 @@ static int clear_port_pending(XenEvtchnState *s, evtchn_port_t port)
static void free_port(XenEvtchnState *s, evtchn_port_t port)
{
s->port_table[port].type = EVTCHNSTAT_closed;
- s->port_table[port].type_val = 0;
+ s->port_table[port].u.type_val = 0;
s->port_table[port].vcpu = 0;
if (s->nr_ports == port + 1) {
@@ -1006,7 +1012,7 @@ static int allocate_port(XenEvtchnState *s, uint32_t vcpu, uint16_t type,
if (s->port_table[p].type == EVTCHNSTAT_closed) {
s->port_table[p].vcpu = vcpu;
s->port_table[p].type = type;
- s->port_table[p].type_val = val;
+ s->port_table[p].u.type_val = val;
*port = p;
@@ -1047,15 +1053,15 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
return -ENOENT;
case EVTCHNSTAT_pirq:
- s->pirq[p->type_val].port = 0;
- if (s->pirq[p->type_val].is_translated) {
+ s->pirq[p->u.pirq].port = 0;
+ if (s->pirq[p->u.pirq].is_translated) {
*flush_kvm_routes = true;
}
break;
case EVTCHNSTAT_virq:
- kvm_xen_set_vcpu_virq(virq_is_global(p->type_val) ? 0 : p->vcpu,
- p->type_val, 0);
+ kvm_xen_set_vcpu_virq(virq_is_global(p->u.virq) ? 0 : p->vcpu,
+ p->u.virq, 0);
break;
case EVTCHNSTAT_ipi:
@@ -1065,8 +1071,8 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
break;
case EVTCHNSTAT_interdomain:
- if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
- uint16_t be_port = p->type_val & ~PORT_INFO_TYPEVAL_REMOTE_QEMU;
+ if (p->u.interdomain.to_qemu) {
+ uint16_t be_port = p->u.interdomain.port;
struct xenevtchn_handle *xc = s->be_handles[be_port];
if (xc) {
if (kvm_xen_has_cap(EVTCHN_SEND)) {
@@ -1076,14 +1082,15 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
}
} else {
/* Loopback interdomain */
- XenEvtchnPort *rp = &s->port_table[p->type_val];
- if (!valid_port(p->type_val) || rp->type_val != port ||
+ XenEvtchnPort *rp = &s->port_table[p->u.interdomain.port];
+ if (!valid_port(p->u.interdomain.port) ||
+ rp->u.interdomain.port != port ||
rp->type != EVTCHNSTAT_interdomain) {
error_report("Inconsistent state for interdomain unbind");
} else {
/* Set the other end back to unbound */
rp->type = EVTCHNSTAT_unbound;
- rp->type_val = 0;
+ rp->u.interdomain.port = 0;
}
}
break;
@@ -1207,7 +1214,7 @@ int xen_evtchn_bind_vcpu_op(struct evtchn_bind_vcpu *vcpu)
if (p->type == EVTCHNSTAT_interdomain ||
p->type == EVTCHNSTAT_unbound ||
p->type == EVTCHNSTAT_pirq ||
- (p->type == EVTCHNSTAT_virq && virq_is_global(p->type_val))) {
+ (p->type == EVTCHNSTAT_virq && virq_is_global(p->u.virq))) {
/*
* unmask_port() with do_unmask==false will just raise the event
* on the new vCPU if the port was already pending.
@@ -1352,19 +1359,15 @@ int xen_evtchn_bind_ipi_op(struct evtchn_bind_ipi *ipi)
int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
{
XenEvtchnState *s = xen_evtchn_singleton;
- uint16_t type_val;
int ret;
if (!s) {
return -ENOTSUP;
}
- if (interdomain->remote_dom == DOMID_QEMU) {
- type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
- } else if (interdomain->remote_dom == DOMID_SELF ||
- interdomain->remote_dom == xen_domid) {
- type_val = 0;
- } else {
+ if (interdomain->remote_dom != DOMID_QEMU &&
+ interdomain->remote_dom != DOMID_SELF &&
+ interdomain->remote_dom != xen_domid) {
return -ESRCH;
}
@@ -1375,8 +1378,8 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
qemu_mutex_lock(&s->port_lock);
/* The newly allocated port starts out as unbound */
- ret = allocate_port(s, 0, EVTCHNSTAT_unbound, type_val,
- &interdomain->local_port);
+ ret = allocate_port(s, 0, EVTCHNSTAT_unbound, 0, &interdomain->local_port);
+
if (ret) {
goto out;
}
@@ -1401,7 +1404,8 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
assign_kernel_eventfd(lp->type, xc->guest_port, xc->fd);
}
lp->type = EVTCHNSTAT_interdomain;
- lp->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU | interdomain->remote_port;
+ lp->u.interdomain.to_qemu = 1;
+ lp->u.interdomain.port = interdomain->remote_port;
ret = 0;
} else {
/* Loopback */
@@ -1415,13 +1419,13 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
* the port that was just allocated for the local end.
*/
if (interdomain->local_port != interdomain->remote_port &&
- rp->type == EVTCHNSTAT_unbound && rp->type_val == 0) {
+ rp->type == EVTCHNSTAT_unbound && !rp->u.interdomain.to_qemu) {
rp->type = EVTCHNSTAT_interdomain;
- rp->type_val = interdomain->local_port;
+ rp->u.interdomain.port = interdomain->local_port;
lp->type = EVTCHNSTAT_interdomain;
- lp->type_val = interdomain->remote_port;
+ lp->u.interdomain.port = interdomain->remote_port;
} else {
ret = -EINVAL;
}
@@ -1440,7 +1444,6 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
int xen_evtchn_alloc_unbound_op(struct evtchn_alloc_unbound *alloc)
{
XenEvtchnState *s = xen_evtchn_singleton;
- uint16_t type_val;
int ret;
if (!s) {
@@ -1451,18 +1454,19 @@ int xen_evtchn_alloc_unbound_op(struct evtchn_alloc_unbound *alloc)
return -ESRCH;
}
- if (alloc->remote_dom == DOMID_QEMU) {
- type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
- } else if (alloc->remote_dom == DOMID_SELF ||
- alloc->remote_dom == xen_domid) {
- type_val = 0;
- } else {
+ if (alloc->remote_dom != DOMID_QEMU && alloc->remote_dom != DOMID_SELF &&
+ alloc->remote_dom != xen_domid) {
return -EPERM;
}
qemu_mutex_lock(&s->port_lock);
- ret = allocate_port(s, 0, EVTCHNSTAT_unbound, type_val, &alloc->port);
+ ret = allocate_port(s, 0, EVTCHNSTAT_unbound, 0, &alloc->port);
+
+ if (!ret && alloc->remote_dom == DOMID_QEMU) {
+ XenEvtchnPort *p = &s->port_table[alloc->port];
+ p->u.interdomain.to_qemu = 1;
+ }
qemu_mutex_unlock(&s->port_lock);
@@ -1489,12 +1493,12 @@ int xen_evtchn_send_op(struct evtchn_send *send)
switch (p->type) {
case EVTCHNSTAT_interdomain:
- if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
+ if (p->u.interdomain.to_qemu) {
/*
* This is an event from the guest to qemu itself, which is
* serving as the driver domain.
*/
- uint16_t be_port = p->type_val & ~PORT_INFO_TYPEVAL_REMOTE_QEMU;
+ uint16_t be_port = p->u.interdomain.port;
struct xenevtchn_handle *xc = s->be_handles[be_port];
if (xc) {
eventfd_write(xc->fd, 1);
@@ -1504,7 +1508,7 @@ int xen_evtchn_send_op(struct evtchn_send *send)
}
} else {
/* Loopback interdomain ports; just a complex IPI */
- set_port_pending(s, p->type_val);
+ set_port_pending(s, p->u.interdomain.port);
}
break;
@@ -1546,8 +1550,7 @@ int xen_evtchn_set_port(uint16_t port)
/* QEMU has no business sending to anything but these */
if (p->type == EVTCHNSTAT_virq ||
- (p->type == EVTCHNSTAT_interdomain &&
- (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU))) {
+ (p->type == EVTCHNSTAT_interdomain && p->u.interdomain.to_qemu)) {
set_port_pending(s, port);
ret = 0;
}
@@ -2057,7 +2060,7 @@ int xen_be_evtchn_bind_interdomain(struct xenevtchn_handle *xc, uint32_t domid,
switch (gp->type) {
case EVTCHNSTAT_interdomain:
/* Allow rebinding after migration, preserve port # if possible */
- be_port = gp->type_val & ~PORT_INFO_TYPEVAL_REMOTE_QEMU;
+ be_port = gp->u.interdomain.port;
assert(be_port != 0);
if (!s->be_handles[be_port]) {
s->be_handles[be_port] = xc;
@@ -2078,7 +2081,8 @@ int xen_be_evtchn_bind_interdomain(struct xenevtchn_handle *xc, uint32_t domid,
}
gp->type = EVTCHNSTAT_interdomain;
- gp->type_val = be_port | PORT_INFO_TYPEVAL_REMOTE_QEMU;
+ gp->u.interdomain.to_qemu = 1;
+ gp->u.interdomain.port = be_port;
xc->guest_port = guest_port;
if (kvm_xen_has_cap(EVTCHN_SEND)) {
assign_kernel_eventfd(gp->type, guest_port, xc->fd);
@@ -2123,7 +2127,7 @@ int xen_be_evtchn_unbind(struct xenevtchn_handle *xc, evtchn_port_t port)
/* This should never *not* be true */
if (gp->type == EVTCHNSTAT_interdomain) {
gp->type = EVTCHNSTAT_unbound;
- gp->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
+ gp->u.interdomain.port = 0;
}
if (kvm_xen_has_cap(EVTCHN_SEND)) {
@@ -2277,11 +2281,11 @@ EvtchnInfoList *qmp_xen_event_list(Error **errp)
info->type = p->type;
if (p->type == EVTCHNSTAT_interdomain) {
- info->remote_domain = g_strdup((p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) ?
+ info->remote_domain = g_strdup(p->u.interdomain.to_qemu ?
"qemu" : "loopback");
- info->target = p->type_val & PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
+ info->target = p->u.interdomain.port;
} else {
- info->target = p->type_val;
+ info->target = p->u.type_val; /* pirq# or virq# */
}
info->vcpu = p->vcpu;
info->pending = test_bit(i, pending);
--
2.34.1
* Re: [RFC PATCH post-8.1] hw/xen: Clean up event channel 'type_val' handling to use union
From: Paul Durrant @ 2023-08-14 12:31 UTC
To: David Woodhouse, qemu-devel
Cc: Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Michael S. Tsirkin, Marcel Apfelbaum
On 03/08/2023 16:28, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
>
> A previous implementation of this stuff used a 64-bit field for all of
> the port information (vcpu/type/type_val) and did atomic exchanges on
> them. When I implemented that in QEMU I regretted my life choices and
> just kept it simple with locking instead.
>
> So there's no need for the XenEvtchnPort to be so simplistic. We can
> use a union for the pirq/virq/interdomain information, which lets us
> keep a separate bit for the 'remote domain' in interdomain ports. A
> single bit is enough since the only possible targets are loopback or
> qemu itself.
>
> So now we can ditch PORT_INFO_TYPEVAL_REMOTE_QEMU and the horrid
> manual masking, although the in-memory representation is identical
> so there's no change in the saved state ABI.
>
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> ---
> Thought this would be a nice cleanup to avoid abusing `type_val` for
> various different purposes, and especially the top bit of it for
> interdomain ports. But having done it I find myself fairly ambivalent
> about it. Does anyone feel strongly either way?
>
> hw/i386/kvm/xen_evtchn.c | 124 ++++++++++++++++++++-------------------
> 1 file changed, 64 insertions(+), 60 deletions(-)
>
I don't feel that strongly, but using the union+bitfield approach is a
little nicer to read and only makes the code 4 lines longer.
> diff --git a/hw/i386/kvm/xen_evtchn.c b/hw/i386/kvm/xen_evtchn.c
> index a731738411..446ae46022 100644
> --- a/hw/i386/kvm/xen_evtchn.c
> +++ b/hw/i386/kvm/xen_evtchn.c
> @@ -58,7 +58,15 @@ OBJECT_DECLARE_SIMPLE_TYPE(XenEvtchnState, XEN_EVTCHN)
> typedef struct XenEvtchnPort {
> uint32_t vcpu; /* Xen/ACPI vcpu_id */
> uint16_t type; /* EVTCHNSTAT_xxxx */
> - uint16_t type_val; /* pirq# / virq# / remote port according to type */
> + union {
> + uint16_t type_val; /* pirq# / virq# / remote port according to type */
Not sure the comment is that valuable any more... and maybe just 'val'
now rather than 'type_val'?
> + uint16_t pirq;
> + uint16_t virq;
> + struct {
> + uint16_t port:15;
> + uint16_t to_qemu:1; /* Only two targets; qemu or loopback */
I'd have switched the sense and called this 'loopback'... since it's the
more unlikely case.
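i.e. something like this (a sketch of the suggested flip; same bit
position, inverted meaning):

    struct {
        uint16_t port:15;
        uint16_t loopback:1;    /* set only for the rarer loopback case */
    } interdomain;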
> + } interdomain;
> + } u;
> } XenEvtchnPort;
>
> /* 32-bit compatibility definitions, also used natively in 32-bit build */
> @@ -210,16 +218,16 @@ static int xen_evtchn_post_load(void *opaque, int version_id)
> XenEvtchnPort *p = &s->port_table[i];
>
> if (p->type == EVTCHNSTAT_pirq) {
> - assert(p->type_val);
> - assert(p->type_val < s->nr_pirqs);
> + assert(p->u.pirq);
> + assert(p->u.pirq < s->nr_pirqs);
>
> /*
> * Set the gsi to IRQ_UNBOUND; it may be changed to an actual
> * GSI# below, or to IRQ_MSI_EMU when the MSI table snooping
> * catches up with it.
> */
> - s->pirq[p->type_val].gsi = IRQ_UNBOUND;
> - s->pirq[p->type_val].port = i;
> + s->pirq[p->u.pirq].gsi = IRQ_UNBOUND;
> + s->pirq[p->u.pirq].port = i;
> }
> }
> /* Rebuild s->pirq[].gsi mapping */
> @@ -243,7 +251,7 @@ static const VMStateDescription xen_evtchn_port_vmstate = {
> .fields = (VMStateField[]) {
> VMSTATE_UINT32(vcpu, XenEvtchnPort),
> VMSTATE_UINT16(type, XenEvtchnPort),
> - VMSTATE_UINT16(type_val, XenEvtchnPort),
> + VMSTATE_UINT16(u.type_val, XenEvtchnPort),
> VMSTATE_END_OF_LIST()
> }
> };
> @@ -599,14 +607,13 @@ static void unbind_backend_ports(XenEvtchnState *s)
>
> for (i = 1; i < s->nr_ports; i++) {
> p = &s->port_table[i];
> - if (p->type == EVTCHNSTAT_interdomain &&
> - (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU)) {
> - evtchn_port_t be_port = p->type_val & PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
> + if (p->type == EVTCHNSTAT_interdomain && p->u.interdomain.to_qemu) {
> + evtchn_port_t be_port = p->u.interdomain.port;
>
> if (s->be_handles[be_port]) {
> /* This part will be overwritten on the load anyway. */
> p->type = EVTCHNSTAT_unbound;
> - p->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
> + p->u.interdomain.port = 0;
>
> /* Leave the backend port open and unbound too. */
> if (kvm_xen_has_cap(EVTCHN_SEND)) {
> @@ -644,7 +651,7 @@ int xen_evtchn_status_op(struct evtchn_status *status)
>
> switch (p->type) {
> case EVTCHNSTAT_unbound:
> - if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
> + if (p->u.interdomain.to_qemu) {
> status->u.unbound.dom = DOMID_QEMU;
> } else {
> status->u.unbound.dom = xen_domid;
> @@ -652,22 +659,21 @@ int xen_evtchn_status_op(struct evtchn_status *status)
> break;
>
> case EVTCHNSTAT_interdomain:
> - if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
> + if (p->u.interdomain.to_qemu) {
> status->u.interdomain.dom = DOMID_QEMU;
> } else {
> status->u.interdomain.dom = xen_domid;
> }
Possibly neater as a ternary now that you're switching on a simple boolean.
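e.g. (illustrative only):

    status->u.interdomain.dom = p->u.interdomain.to_qemu ? DOMID_QEMU
                                                         : xen_domid;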
>
> - status->u.interdomain.port = p->type_val &
> - PORT_INFO_TYPEVAL_REMOTE_PORT_MASK;
> + status->u.interdomain.port = p->u.interdomain.port;
> break;
>
> case EVTCHNSTAT_pirq:
> - status->u.pirq = p->type_val;
> + status->u.pirq = p->u.pirq;
> break;
>
> case EVTCHNSTAT_virq:
> - status->u.virq = p->type_val;
> + status->u.virq = p->u.virq;
> break;
> }
>
> @@ -983,7 +989,7 @@ static int clear_port_pending(XenEvtchnState *s, evtchn_port_t port)
> static void free_port(XenEvtchnState *s, evtchn_port_t port)
> {
> s->port_table[port].type = EVTCHNSTAT_closed;
> - s->port_table[port].type_val = 0;
> + s->port_table[port].u.type_val = 0;
> s->port_table[port].vcpu = 0;
>
> if (s->nr_ports == port + 1) {
> @@ -1006,7 +1012,7 @@ static int allocate_port(XenEvtchnState *s, uint32_t vcpu, uint16_t type,
> if (s->port_table[p].type == EVTCHNSTAT_closed) {
> s->port_table[p].vcpu = vcpu;
> s->port_table[p].type = type;
> - s->port_table[p].type_val = val;
> + s->port_table[p].u.type_val = val;
>
> *port = p;
>
> @@ -1047,15 +1053,15 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
> return -ENOENT;
>
> case EVTCHNSTAT_pirq:
> - s->pirq[p->type_val].port = 0;
> - if (s->pirq[p->type_val].is_translated) {
> + s->pirq[p->u.pirq].port = 0;
> + if (s->pirq[p->u.pirq].is_translated) {
> *flush_kvm_routes = true;
> }
> break;
>
> case EVTCHNSTAT_virq:
> - kvm_xen_set_vcpu_virq(virq_is_global(p->type_val) ? 0 : p->vcpu,
> - p->type_val, 0);
> + kvm_xen_set_vcpu_virq(virq_is_global(p->u.virq) ? 0 : p->vcpu,
> + p->u.virq, 0);
> break;
>
> case EVTCHNSTAT_ipi:
> @@ -1065,8 +1071,8 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
> break;
>
> case EVTCHNSTAT_interdomain:
> - if (p->type_val & PORT_INFO_TYPEVAL_REMOTE_QEMU) {
> - uint16_t be_port = p->type_val & ~PORT_INFO_TYPEVAL_REMOTE_QEMU;
> + if (p->u.interdomain.to_qemu) {
> + uint16_t be_port = p->u.interdomain.port;
> struct xenevtchn_handle *xc = s->be_handles[be_port];
> if (xc) {
> if (kvm_xen_has_cap(EVTCHN_SEND)) {
> @@ -1076,14 +1082,15 @@ static int close_port(XenEvtchnState *s, evtchn_port_t port,
> }
> } else {
> /* Loopback interdomain */
> - XenEvtchnPort *rp = &s->port_table[p->type_val];
> - if (!valid_port(p->type_val) || rp->type_val != port ||
> + XenEvtchnPort *rp = &s->port_table[p->u.interdomain.port];
> + if (!valid_port(p->u.interdomain.port) ||
> + rp->u.interdomain.port != port ||
> rp->type != EVTCHNSTAT_interdomain) {
> error_report("Inconsistent state for interdomain unbind");
> } else {
> /* Set the other end back to unbound */
> rp->type = EVTCHNSTAT_unbound;
> - rp->type_val = 0;
> + rp->u.interdomain.port = 0;
> }
> }
> break;
> @@ -1207,7 +1214,7 @@ int xen_evtchn_bind_vcpu_op(struct evtchn_bind_vcpu *vcpu)
> if (p->type == EVTCHNSTAT_interdomain ||
> p->type == EVTCHNSTAT_unbound ||
> p->type == EVTCHNSTAT_pirq ||
> - (p->type == EVTCHNSTAT_virq && virq_is_global(p->type_val))) {
> + (p->type == EVTCHNSTAT_virq && virq_is_global(p->u.virq))) {
> /*
> * unmask_port() with do_unmask==false will just raise the event
> * on the new vCPU if the port was already pending.
> @@ -1352,19 +1359,15 @@ int xen_evtchn_bind_ipi_op(struct evtchn_bind_ipi *ipi)
> int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
> {
> XenEvtchnState *s = xen_evtchn_singleton;
> - uint16_t type_val;
> int ret;
>
> if (!s) {
> return -ENOTSUP;
> }
>
> - if (interdomain->remote_dom == DOMID_QEMU) {
> - type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
> - } else if (interdomain->remote_dom == DOMID_SELF ||
> - interdomain->remote_dom == xen_domid) {
> - type_val = 0;
> - } else {
> + if (interdomain->remote_dom != DOMID_QEMU &&
> + interdomain->remote_dom != DOMID_SELF &&
> + interdomain->remote_dom != xen_domid) {
> return -ESRCH;
> }
>
> @@ -1375,8 +1378,8 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
> qemu_mutex_lock(&s->port_lock);
>
> /* The newly allocated port starts out as unbound */
> - ret = allocate_port(s, 0, EVTCHNSTAT_unbound, type_val,
> - &interdomain->local_port);
> + ret = allocate_port(s, 0, EVTCHNSTAT_unbound, 0, &interdomain->local_port);
> +
> if (ret) {
> goto out;
> }
> @@ -1401,7 +1404,8 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
> assign_kernel_eventfd(lp->type, xc->guest_port, xc->fd);
> }
> lp->type = EVTCHNSTAT_interdomain;
> - lp->type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU | interdomain->remote_port;
> + lp->u.interdomain.to_qemu = 1;
> + lp->u.interdomain.port = interdomain->remote_port;
> ret = 0;
> } else {
> /* Loopback */
> @@ -1415,13 +1419,13 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
> * the port that was just allocated for the local end.
> */
> if (interdomain->local_port != interdomain->remote_port &&
> - rp->type == EVTCHNSTAT_unbound && rp->type_val == 0) {
> + rp->type == EVTCHNSTAT_unbound && !rp->u.interdomain.to_qemu) {
>
> rp->type = EVTCHNSTAT_interdomain;
> - rp->type_val = interdomain->local_port;
> + rp->u.interdomain.port = interdomain->local_port;
>
> lp->type = EVTCHNSTAT_interdomain;
> - lp->type_val = interdomain->remote_port;
> + lp->u.interdomain.port = interdomain->remote_port;
> } else {
> ret = -EINVAL;
> }
> @@ -1440,7 +1444,6 @@ int xen_evtchn_bind_interdomain_op(struct evtchn_bind_interdomain *interdomain)
> int xen_evtchn_alloc_unbound_op(struct evtchn_alloc_unbound *alloc)
> {
> XenEvtchnState *s = xen_evtchn_singleton;
> - uint16_t type_val;
> int ret;
>
> if (!s) {
> @@ -1451,18 +1454,19 @@ int xen_evtchn_alloc_unbound_op(struct evtchn_alloc_unbound *alloc)
> return -ESRCH;
> }
>
> - if (alloc->remote_dom == DOMID_QEMU) {
> - type_val = PORT_INFO_TYPEVAL_REMOTE_QEMU;
> - } else if (alloc->remote_dom == DOMID_SELF ||
> - alloc->remote_dom == xen_domid) {
> - type_val = 0;
> - } else {
> + if (alloc->remote_dom != DOMID_QEMU && alloc->remote_dom != DOMID_SELF &&
> + alloc->remote_dom != xen_domid) {
Maybe vertically align the clauses here as in the 'if' a couple of hunks
back?
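i.e.:

    if (alloc->remote_dom != DOMID_QEMU &&
        alloc->remote_dom != DOMID_SELF &&
        alloc->remote_dom != xen_domid) {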
> return -EPERM;
> }
>
Since all the above are nits...
Reviewed-by: Paul Durrant <paul@xen.org>
* Re: [RFC PATCH post-8.1] hw/xen: Clean up event channel 'type_val' handling to use union
From: David Woodhouse @ 2023-08-23 12:09 UTC
To: paul, qemu-devel
Cc: Paolo Bonzini, Richard Henderson, Eduardo Habkost,
Michael S. Tsirkin, Marcel Apfelbaum
On Mon, 2023-08-14 at 13:31 +0100, Paul Durrant wrote:
>
> > + uint16_t pirq;
> > + uint16_t virq;
> > + struct {
> > + uint16_t port:15;
> > + uint16_t to_qemu:1; /* Only two targets; qemu or loopback */
>
> I'd have switch the sense and called this 'loopback'... since it's the
> more unlikely case.
Meh, got halfway through doing that and remembered why I didn't do it
the first time. It's not binary-compatible with serialized state from
before, which used the flag in 1<<15 manually.
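A minimal sketch of the incompatibility (assuming the union layout from
the patch, with the flag in bit 15 as the old code serialized it):

    union {
        uint16_t type_val;
        struct {
            uint16_t port:15;
            uint16_t loopback:1;    /* flipped sense, as suggested */
        } interdomain;
    } u = { .type_val = (1 << 15) | 42 };  /* old saved state: to-qemu port 42 */

    /* u.interdomain.loopback now reads as 1, so a port that was bound
     * to qemu before migration would come back as loopback. */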