[PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
From: mark.syms--- via @ 2023-04-20 11:02 UTC
To: qemu-devel
Cc: Mark Syms, Stefano Stabellini, Anthony Perard, Paul Durrant, xen-devel

From: Mark Syms <mark.syms@citrix.com>

Ensure the PV ring is drained on disconnect. Also ensure all pending
AIO is complete, otherwise AIO tries to complete into a mapping of the
ring which has been torn down.

Signed-off-by: Mark Syms <mark.syms@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Anthony Perard <anthony.perard@citrix.com>
CC: Paul Durrant <paul@xen.org>
CC: xen-devel@lists.xenproject.org

v2:
 * Ensure all inflight requests are completed before teardown
 * RESEND to fix formatting
---
 hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 734da42ea7..d9da4090bf 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
     dataplane->more_work = 0;
 
+    if (dataplane->sring == 0) {
+        return done_something;
+    }
+
     rc = dataplane->rings.common.req_cons;
     rp = dataplane->rings.common.sring->req_prod;
     xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
@@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
 void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
 {
     XenDevice *xendev;
+    XenBlockRequest *request, *next;
 
     if (!dataplane) {
         return;
     }
 
+    /* We're about to drain the ring. We can cancel the scheduling of any
+     * bottom half now */
+    qemu_bh_cancel(dataplane->bh);
+
+    /* Ensure we have drained the ring */
+    aio_context_acquire(dataplane->ctx);
+    do {
+        xen_block_handle_requests(dataplane);
+    } while (dataplane->more_work);
+    aio_context_release(dataplane->ctx);
+
+    /* Now ensure that all inflight requests are complete */
+    while (!QLIST_EMPTY(&dataplane->inflight)) {
+        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
+            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
+                          request);
+        }
+    }
+
     xendev = dataplane->xendev;
 
     aio_context_acquire(dataplane->ctx);
+
     if (dataplane->event_channel) {
         /* Only reason for failure is a NULL channel */
         xen_device_set_event_channel_context(xendev, dataplane->event_channel,
@@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
     blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
     aio_context_release(dataplane->ctx);
 
-    /*
-     * Now that the context has been moved onto the main thread, cancel
-     * further processing.
-     */
-    qemu_bh_cancel(dataplane->bh);
-
     if (dataplane->event_channel) {
         Error *local_err = NULL;
-- 
2.40.0

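The drain loop added above follows the usual PV ring consumer pattern: read
the producer index, issue a read barrier, then consume entries up to it,
repeating while more work is flagged. Below is a minimal standalone sketch of
just that pattern; the types and names (demo_ring, ring_drain) are invented
for illustration and are not the Xen ring macros or QEMU code.

/*
 * Standalone illustration of the consumer-side drain pattern; simplified
 * types, not the Xen shared-ring layout.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 32

struct demo_ring {
    uint32_t req_prod;          /* written by the frontend (producer) */
    uint32_t req_cons;          /* written by the backend (consumer) */
    int      req[RING_SIZE];    /* request slots */
};

/* Consume everything queued so far; returns true if any work was done. */
static bool ring_drain(struct demo_ring *ring)
{
    uint32_t rp = ring->req_prod;
    __atomic_thread_fence(__ATOMIC_ACQUIRE);  /* plays the role of xen_rmb() */

    bool done_something = false;
    while (ring->req_cons != rp) {
        int req = ring->req[ring->req_cons % RING_SIZE];
        printf("handled request %d\n", req);
        ring->req_cons++;
        done_something = true;
    }
    return done_something;
}

int main(void)
{
    struct demo_ring ring = { 0 };
    for (int i = 0; i < 5; i++) {
        ring.req[ring.req_prod % RING_SIZE] = i;
        __atomic_thread_fence(__ATOMIC_RELEASE);  /* publish before req_prod */
        ring.req_prod++;
    }
    ring_drain(&ring);   /* analogous to looping until !more_work */
    return 0;
}
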
Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
From: Paul Durrant @ 2023-04-24 10:32 UTC
To: mark.syms, qemu-devel
Cc: Stefano Stabellini, Anthony Perard, xen-devel

On 20/04/2023 12:02, mark.syms@citrix.com wrote:
> From: Mark Syms <mark.syms@citrix.com>
> 
> Ensure the PV ring is drained on disconnect. Also ensure all pending
> AIO is complete, otherwise AIO tries to complete into a mapping of the
> ring which has been torn down.
> 
> Signed-off-by: Mark Syms <mark.syms@citrix.com>
> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Anthony Perard <anthony.perard@citrix.com>
> CC: Paul Durrant <paul@xen.org>
> CC: xen-devel@lists.xenproject.org
> 
> v2:
>  * Ensure all inflight requests are completed before teardown
>  * RESEND to fix formatting
> ---
>  hw/block/dataplane/xen-block.c | 31 +++++++++++++++++++++++++------
>  1 file changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
> index 734da42ea7..d9da4090bf 100644
> --- a/hw/block/dataplane/xen-block.c
> +++ b/hw/block/dataplane/xen-block.c
> @@ -523,6 +523,10 @@ static bool xen_block_handle_requests(XenBlockDataPlane *dataplane)
> 
>      dataplane->more_work = 0;
> 
> +    if (dataplane->sring == 0) {
> +        return done_something;
> +    }
> +

I think you could just return false here... Nothing is ever going to be
done if there's no ring :-)

>      rc = dataplane->rings.common.req_cons;
>      rp = dataplane->rings.common.sring->req_prod;
>      xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
> @@ -666,14 +670,35 @@ void xen_block_dataplane_destroy(XenBlockDataPlane *dataplane)
>  void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>  {
>      XenDevice *xendev;
> +    XenBlockRequest *request, *next;
> 
>      if (!dataplane) {
>          return;
>      }
> 
> +    /* We're about to drain the ring. We can cancel the scheduling of any
> +     * bottom half now */
> +    qemu_bh_cancel(dataplane->bh);
> +
> +    /* Ensure we have drained the ring */
> +    aio_context_acquire(dataplane->ctx);
> +    do {
> +        xen_block_handle_requests(dataplane);
> +    } while (dataplane->more_work);
> +    aio_context_release(dataplane->ctx);
> +

I don't think we want to be taking new requests, do we?

> +    /* Now ensure that all inflight requests are complete */
> +    while (!QLIST_EMPTY(&dataplane->inflight)) {
> +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
> +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
> +                          request);
> +        }
> +    }
> +

I think this could possibly be simplified by doing the drain after the
call to blk_set_aio_context(), as long as we set dataplane->ctx to
qemu_get_aio_context(). Also, as long as more_work is not set then it
should still be safe to cancel the bh before the drain AFAICT.

  Paul

>      xendev = dataplane->xendev;
> 
>      aio_context_acquire(dataplane->ctx);
> +
>      if (dataplane->event_channel) {
>          /* Only reason for failure is a NULL channel */
>          xen_device_set_event_channel_context(xendev, dataplane->event_channel,
> @@ -684,12 +709,6 @@ void xen_block_dataplane_stop(XenBlockDataPlane *dataplane)
>      blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
>      aio_context_release(dataplane->ctx);
> 
> -    /*
> -     * Now that the context has been moved onto the main thread, cancel
> -     * further processing.
> -     */
> -    qemu_bh_cancel(dataplane->bh);
> -
>      if (dataplane->event_channel) {
>          Error *local_err = NULL;
> 

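A rough, untested sketch of the ordering suggested above, not a patch: the
field and function names are the ones visible in the hunks quoted in this
thread, while the reassignment of dataplane->ctx and the placement of the
event-channel move are assumptions.

/*
 * Sketch only: cancel the bh, move the block backend to the main loop
 * context, then flush whatever is still in flight on that context.
 */
void xen_block_dataplane_stop_sketch(XenBlockDataPlane *dataplane)
{
    XenBlockRequest *request, *next;

    if (!dataplane) {
        return;
    }

    /* No more ring processing: cancel the bh first (assumed safe while
     * more_work is not set). */
    qemu_bh_cancel(dataplane->bh);

    aio_context_acquire(dataplane->ctx);
    /* ... move the event channel off the IOThread, as the patch already
     * does ... */
    blk_set_aio_context(dataplane->blk, qemu_get_aio_context(), &error_abort);
    aio_context_release(dataplane->ctx);

    /* Everything from here on runs against the main loop context. */
    dataplane->ctx = qemu_get_aio_context();

    /* Flush remaining inflight requests; loop body unchanged from the
     * patch above. */
    while (!QLIST_EMPTY(&dataplane->inflight)) {
        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
                          request);
        }
    }
}

Either ordering still has to respect the constraint discussed later in the
thread: the completions queued by blk_aio_flush() must run on a context that
outlives the ring mapping.
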
Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
From: Mark Syms @ 2023-04-24 12:07 UTC
To: paul
Cc: mark.syms, qemu-devel, Stefano Stabellini, Anthony Perard, xen-devel, tim.smith

Copying in Tim who did the final phase of the changes.

On Mon, 24 Apr 2023 at 11:32, Paul Durrant <xadimgnik@gmail.com> wrote:
> [snip]

Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
From: Tim Smith @ 2023-04-24 13:17 UTC
To: Mark Syms
Cc: paul, mark.syms, qemu-devel, Stefano Stabellini, Anthony Perard, xen-devel, tim.smith

On Mon, Apr 24, 2023 at 1:08 PM Mark Syms <mark.syms@cloud.com> wrote:
>
> Copying in Tim who did the final phase of the changes.
>
> On Mon, 24 Apr 2023 at 11:32, Paul Durrant <xadimgnik@gmail.com> wrote:
> >
> > On 20/04/2023 12:02, mark.syms@citrix.com wrote:
> > > [snip]
> > > +    /* Ensure we have drained the ring */
> > > +    aio_context_acquire(dataplane->ctx);
> > > +    do {
> > > +        xen_block_handle_requests(dataplane);
> > > +    } while (dataplane->more_work);
> > > +    aio_context_release(dataplane->ctx);
> > > +
> >
> > I don't think we want to be taking new requests, do we?
>

If we're in this situation and the guest has put something on the ring,
I think we should do our best with it. We cannot just rely on the guest
to be well-behaved, because they're not :-( We're about to throw the
ring away, so whatever is there would otherwise be lost. This bit is
here to try to handle guests which are less than diligent about their
shutdown. We *should* always be past this fast enough when the
disconnect()/connect() of XenbusStateConnected happens that all remains
well (if not, we were in a worse situation before).

> > > +    /* Now ensure that all inflight requests are complete */
> > > +    while (!QLIST_EMPTY(&dataplane->inflight)) {
> > > +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
> > > +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
> > > +                          request);
> > > +        }
> > > +    }
> > > +
> >
> > I think this could possibly be simplified by doing the drain after the
> > call to blk_set_aio_context(), as long as we set dataplane->ctx to
> > qemu_get_aio_context(). Also, as long as more_work is not set then it
> > should still be safe to cancel the bh before the drain AFAICT.

I'm not sure what you mean by simpler? Possibly I'm not getting something.

We have to make sure that any "aio_bh_schedule_oneshot_full()" which
happens as a result of "blk_aio_flush()" has finished before any change
of AIO context, because it tries to use the one which was current at
the time of being called (I have the SEGVs to prove it :-)). Whether
that happens before or after
"blk_set_aio_context(qemu_get_aio_context())" doesn't seem to be a
change in complexity to me.

Motivation was to get as much as possible to happen in the way it
"normally" would, so that future changes are less likely to regress,
but as mentioned maybe I'm missing something.

The BH needs to be prevented from firing ASAP, otherwise the
disconnect()/connect() which happens when XenbusStateConnected can have
the bh fire from what the guest does next right in the middle of
juggling contexts for the disconnect() (I have the SEGVs from that
too...).

> > Paul
> > [snip]

Tim (hoping GMail behaves itself with this message...)

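The constraint described above, that a completion is queued on whichever
context is current when it is scheduled, so that context must not be switched
away (or its ring mapping torn down) until its queue has run, can be seen in
the toy model below. The names and structure are invented; this is not QEMU
code, only an illustration of why the inflight drain has to finish before the
context switch.

/* Toy model: a one-shot callback latches the "context" current at
 * scheduling time, like aio_bh_schedule_oneshot_full() does. */
#include <stdio.h>

#define MAX_CBS 8

typedef void (*cb_fn)(void *opaque);

typedef struct {
    cb_fn cbs[MAX_CBS];
    void *opaques[MAX_CBS];
    int n;
} ToyContext;

static ToyContext *current_ctx;

/* Queues the callback on whichever context is current *now*. */
static void schedule_oneshot(cb_fn cb, void *opaque)
{
    ToyContext *ctx = current_ctx;
    ctx->cbs[ctx->n] = cb;
    ctx->opaques[ctx->n] = opaque;
    ctx->n++;
}

static void run_pending(ToyContext *ctx)
{
    for (int i = 0; i < ctx->n; i++) {
        ctx->cbs[i](ctx->opaques[i]);
    }
    ctx->n = 0;
}

static void complete_request(void *opaque)
{
    printf("completion touches ring mapping at %p\n", opaque);
}

int main(void)
{
    ToyContext iothread = { .n = 0 };
    ToyContext mainloop = { .n = 0 };
    int ring_mapping = 42;                    /* stand-in for the sring */

    current_ctx = &iothread;
    schedule_oneshot(complete_request, (void *)&ring_mapping); /* "flush" */

    /* Wrong order: switch first and the completion is stranded on the
     * iothread queue, to fire later against torn-down state. */
    current_ctx = &mainloop;
    run_pending(&mainloop);                   /* completion does not run */

    /* Right order: drain the old context's pending completions before
     * switching and tearing down the mapping. */
    run_pending(&iothread);
    return 0;
}
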
Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
From: Paul Durrant @ 2023-04-24 13:51 UTC
To: Tim Smith, Mark Syms
Cc: mark.syms, qemu-devel, Stefano Stabellini, Anthony Perard, xen-devel, tim.smith

On 24/04/2023 14:17, Tim Smith wrote:
> On Mon, Apr 24, 2023 at 1:08 PM Mark Syms <mark.syms@cloud.com> wrote:
>>
>> Copying in Tim who did the final phase of the changes.
>>
>> On Mon, 24 Apr 2023 at 11:32, Paul Durrant <xadimgnik@gmail.com> wrote:
>>>
>>> On 20/04/2023 12:02, mark.syms@citrix.com wrote:
>>>> [snip]
>>>> +    /* Ensure we have drained the ring */
>>>> +    aio_context_acquire(dataplane->ctx);
>>>> +    do {
>>>> +        xen_block_handle_requests(dataplane);
>>>> +    } while (dataplane->more_work);
>>>> +    aio_context_release(dataplane->ctx);
>>>> +
>>>
>>> I don't think we want to be taking new requests, do we?
>>>
>
> If we're in this situation and the guest has put something on the
> ring, I think we should do our best with it.
> We cannot just rely on the guest to be well-behaved, because they're
> not :-( We're about to throw the
> ring away, so whatever is there would otherwise be lost.

We only throw away our mapping. The memory belongs to the guest and it
should ensure it does not submit requests after the state has left
'connected'.

> This bit is
> here to try to handle guests which are
> less than diligent about their shutdown. We *should* always be past
> this fast enough when the disconnect()/connect()
> of XenbusStateConnected happens that all remains well (if not, we were
> in a worse situation before).

What about a malicious guest that keeps piling requests into the ring?
It could keep us in the loop forever, couldn't it?

>>>> +    /* Now ensure that all inflight requests are complete */
>>>> +    while (!QLIST_EMPTY(&dataplane->inflight)) {
>>>> +        QLIST_FOREACH_SAFE(request, &dataplane->inflight, list, next) {
>>>> +            blk_aio_flush(request->dataplane->blk, xen_block_complete_aio,
>>>> +                          request);
>>>> +        }
>>>> +    }
>>>> +
>>>
>>> I think this could possibly be simplified by doing the drain after the
>>> call to blk_set_aio_context(), as long as we set dataplane->ctx to
>>> qemu_get_aio_context(). Also, as long as more_work is not set then it
>>> should still be safe to cancel the bh before the drain AFAICT.
>
> I'm not sure what you mean by simpler? Possibly I'm not getting something.
>

Sorry, I was referring to the need to do aio_context_acquire() calls
but they are only around the disputed xen_block_handle_requests() call
anyway, so there's no simplification in this bit.

> We have to make sure that any "aio_bh_schedule_oneshot_full()" which
> happens as a result of
> "blk_aio_flush()" has finished before any change of AIO context,
> because it tries to use the one which
> was current at the time of being called (I have the SEGVs to prove it
> :-)).

Ok, I had assumed that the issue was the context being picked up inside
the xen_block_complete_aio() call.

> Whether that happens before or after
> "blk_set_aio_context(qemu_get_aio_context())" doesn't seem to be a
> change in complexity to me.
>
> Motivation was to get as much as possible to happen in the way it
> "normally" would, so that future changes
> are less likely to regress, but as mentioned maybe I'm missing something.
>
> The BH needs to be prevented from firing ASAP, otherwise the
> disconnect()/connect() which happens when
> XenbusStateConnected can have the bh fire from what the guest does
> next right in the middle of juggling
> contexts for the disconnect() (I have the SEGVs from that too...).
>

So if you drop the ring drain then this patch should still stop the
SEGVs, right?

  Paul

Re: [PATCH v2 RESEND] xen: Fix SEGV on domain disconnect
From: Tim Smith @ 2023-04-26 8:32 UTC
To: paul
Cc: Mark Syms, mark.syms, qemu-devel, Stefano Stabellini, Anthony Perard, xen-devel, tim.smith

On Mon, Apr 24, 2023 at 2:51 PM Paul Durrant <xadimgnik@gmail.com> wrote:
>
> So if you drop the ring drain then this patch should still stop the
> SEGVs, right?
>

I think that's worth a few test runs. I recall some coredumps in that
condition when I was investigating early on, but I don't have them in
my collection so maybe I'm misremembering.

Tim