* BUG due to "xen-netback: protect resource cleaning on XenBus disconnect"
@ 2017-03-02 11:56 Juergen Gross
2017-03-02 12:06 ` Wei Liu
2017-03-02 14:25 ` Boris Ostrovsky
0 siblings, 2 replies; 6+ messages in thread
From: Juergen Gross @ 2017-03-02 11:56 UTC
To: igor.druzhinin, xen-devel, Linux Kernel Mailing List,
netdev@vger.kernel.org
Cc: Boris Ostrovsky, Paul Durrant, Wei Liu, David Miller
With commits f16f1df65 and 9a6cdf52b we get in our Xen testing:
[ 174.512861] switch: port 2(vif3.0) entered disabled state
[ 174.522735] BUG: sleeping function called from invalid context at
/home/build/linux-linus/mm/vmalloc.c:1441
[ 174.523451] in_atomic(): 1, irqs_disabled(): 0, pid: 28, name: xenwatch
[ 174.524131] CPU: 1 PID: 28 Comm: xenwatch Tainted: G W
4.10.0upstream-11073-g4977ab6-dirty #1
[ 174.524819] Hardware name: MSI MS-7680/H61M-P23 (MS-7680), BIOS V17.0
03/14/2011
[ 174.525517] Call Trace:
[ 174.526217] show_stack+0x23/0x60
[ 174.526899] dump_stack+0x5b/0x88
[ 174.527562] ___might_sleep+0xde/0x130
[ 174.528208] __might_sleep+0x35/0xa0
[ 174.528840] ? _raw_spin_unlock_irqrestore+0x13/0x20
[ 174.529463] ? __wake_up+0x40/0x50
[ 174.530089] remove_vm_area+0x20/0x90
[ 174.530724] __vunmap+0x1d/0xc0
[ 174.531346] ? delete_object_full+0x13/0x20
[ 174.531973] vfree+0x40/0x80
[ 174.532594] set_backend_state+0x18a/0xa90
[ 174.533221] ? dwc_scan_descriptors+0x24d/0x430
[ 174.533850] ? kfree+0x5b/0xc0
[ 174.534476] ? xenbus_read+0x3d/0x50
[ 174.535101] ? xenbus_read+0x3d/0x50
[ 174.535718] ? xenbus_gather+0x31/0x90
[ 174.536332] ? ___might_sleep+0xf6/0x130
[ 174.536945] frontend_changed+0x6b/0xd0
[ 174.537565] xenbus_otherend_changed+0x7d/0x80
[ 174.538185] frontend_changed+0x12/0x20
[ 174.538803] xenwatch_thread+0x74/0x110
[ 174.539417] ? woken_wake_function+0x20/0x20
[ 174.540049] kthread+0xe5/0x120
[ 174.540663] ? xenbus_printf+0x50/0x50
[ 174.541278] ? __kthread_init_worker+0x40/0x40
[ 174.541898] ret_from_fork+0x21/0x2c
[ 174.548635] switch: port 2(vif3.0) entered disabled state
I believe calling vfree() when holding a spin_lock isn't a good idea.
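For reference, the pattern the trace points at looks roughly like this (a
simplified sketch with made-up names, not the actual xen-netback code).
vfree() may sleep - the might_sleep() that fires here sits in
remove_vm_area() - so calling it with a spinlock held is illegal:

    /* needs <linux/vmalloc.h>, <linux/spinlock.h> */
    spin_lock(&be->lock);          /* atomic context from here on */
    ...
    vfree(vif->queues);            /* may sleep -> BUG            */
    vif->queues = NULL;
    ...
    spin_unlock(&be->lock);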
Boris, this is the dumpdata failure:
FAILURE 4.10.0upstream-11073-g4977ab6-dirty(x86_64)
4.10.0upstream-11073-g4977ab6-dirty(i386): 2017-03-02 (tst007)
Juergen
* Re: BUG due to "xen-netback: protect resource cleaning on XenBus disconnect"
2017-03-02 11:56 BUG due to "xen-netback: protect resource cleaning on XenBus disconnect" Juergen Gross
@ 2017-03-02 12:06 ` Wei Liu
2017-03-02 12:12 ` Juergen Gross
2017-03-02 14:25 ` Boris Ostrovsky
1 sibling, 1 reply; 6+ messages in thread
From: Wei Liu @ 2017-03-02 12:06 UTC
To: Juergen Gross
Cc: igor.druzhinin, Wei Liu, netdev@vger.kernel.org,
Linux Kernel Mailing List, Paul Durrant, xen-devel,
Boris Ostrovsky, David Miller
On Thu, Mar 02, 2017 at 12:56:20PM +0100, Juergen Gross wrote:
> With commits f16f1df65 and 9a6cdf52b we get in our Xen testing:
>
> [ 174.512861] switch: port 2(vif3.0) entered disabled state
> [ 174.522735] BUG: sleeping function called from invalid context at
> /home/build/linux-linus/mm/vmalloc.c:1441
> [ 174.523451] in_atomic(): 1, irqs_disabled(): 0, pid: 28, name: xenwatch
> [ 174.524131] CPU: 1 PID: 28 Comm: xenwatch Tainted: G W
> 4.10.0upstream-11073-g4977ab6-dirty #1
> [ 174.524819] Hardware name: MSI MS-7680/H61M-P23 (MS-7680), BIOS V17.0
> 03/14/2011
> [ 174.525517] Call Trace:
> [ 174.526217] show_stack+0x23/0x60
> [ 174.526899] dump_stack+0x5b/0x88
> [ 174.527562] ___might_sleep+0xde/0x130
> [ 174.528208] __might_sleep+0x35/0xa0
> [ 174.528840] ? _raw_spin_unlock_irqrestore+0x13/0x20
> [ 174.529463] ? __wake_up+0x40/0x50
> [ 174.530089] remove_vm_area+0x20/0x90
> [ 174.530724] __vunmap+0x1d/0xc0
> [ 174.531346] ? delete_object_full+0x13/0x20
> [ 174.531973] vfree+0x40/0x80
> [ 174.532594] set_backend_state+0x18a/0xa90
> [ 174.533221] ? dwc_scan_descriptors+0x24d/0x430
> [ 174.533850] ? kfree+0x5b/0xc0
> [ 174.534476] ? xenbus_read+0x3d/0x50
> [ 174.535101] ? xenbus_read+0x3d/0x50
> [ 174.535718] ? xenbus_gather+0x31/0x90
> [ 174.536332] ? ___might_sleep+0xf6/0x130
> [ 174.536945] frontend_changed+0x6b/0xd0
> [ 174.537565] xenbus_otherend_changed+0x7d/0x80
> [ 174.538185] frontend_changed+0x12/0x20
> [ 174.538803] xenwatch_thread+0x74/0x110
> [ 174.539417] ? woken_wake_function+0x20/0x20
> [ 174.540049] kthread+0xe5/0x120
> [ 174.540663] ? xenbus_printf+0x50/0x50
> [ 174.541278] ? __kthread_init_worker+0x40/0x40
> [ 174.541898] ret_from_fork+0x21/0x2c
> [ 174.548635] switch: port 2(vif3.0) entered disabled state
>
> I believe calling vfree() when holding a spin_lock isn't a good idea.
>
Use vfree_atomic instead?
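Something along these lines, maybe (untested sketch; the lock and field
names are placeholders rather than the real driver code). vfree_atomic()
defers the actual unmapping to a workqueue, so it may be called from
atomic context:

    /* needs <linux/vmalloc.h> */
    spin_lock_irqsave(&vif->lock, flags);
    ...
    vfree_atomic(vif->queues);     /* deferred free, safe under the lock */
    vif->queues = NULL;
    ...
    spin_unlock_irqrestore(&vif->lock, flags);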
> Boris, this is the dumpdata failure:
> FAILURE 4.10.0upstream-11073-g4977ab6-dirty(x86_64)
> 4.10.0upstream-11073-g4977ab6-dirty(i386): 2017-03-02 (tst007)
>
>
> Juergen
* Re: BUG due to "xen-netback: protect resource cleaning on XenBus disconnect"
2017-03-02 12:06 ` Wei Liu
@ 2017-03-02 12:12 ` Juergen Gross
2017-03-02 12:19 ` Paul Durrant
0 siblings, 1 reply; 6+ messages in thread
From: Juergen Gross @ 2017-03-02 12:12 UTC
To: Wei Liu
Cc: igor.druzhinin, netdev@vger.kernel.org, Linux Kernel Mailing List,
Paul Durrant, xen-devel, Boris Ostrovsky, David Miller
On 02/03/17 13:06, Wei Liu wrote:
> On Thu, Mar 02, 2017 at 12:56:20PM +0100, Juergen Gross wrote:
>> With commits f16f1df65 and 9a6cdf52b we get in our Xen testing:
>>
>> [ 174.512861] switch: port 2(vif3.0) entered disabled state
>> [ 174.522735] BUG: sleeping function called from invalid context at
>> /home/build/linux-linus/mm/vmalloc.c:1441
>> [ 174.523451] in_atomic(): 1, irqs_disabled(): 0, pid: 28, name: xenwatch
>> [ 174.524131] CPU: 1 PID: 28 Comm: xenwatch Tainted: G W
>> 4.10.0upstream-11073-g4977ab6-dirty #1
>> [ 174.524819] Hardware name: MSI MS-7680/H61M-P23 (MS-7680), BIOS V17.0
>> 03/14/2011
>> [ 174.525517] Call Trace:
>> [ 174.526217] show_stack+0x23/0x60
>> [ 174.526899] dump_stack+0x5b/0x88
>> [ 174.527562] ___might_sleep+0xde/0x130
>> [ 174.528208] __might_sleep+0x35/0xa0
>> [ 174.528840] ? _raw_spin_unlock_irqrestore+0x13/0x20
>> [ 174.529463] ? __wake_up+0x40/0x50
>> [ 174.530089] remove_vm_area+0x20/0x90
>> [ 174.530724] __vunmap+0x1d/0xc0
>> [ 174.531346] ? delete_object_full+0x13/0x20
>> [ 174.531973] vfree+0x40/0x80
>> [ 174.532594] set_backend_state+0x18a/0xa90
>> [ 174.533221] ? dwc_scan_descriptors+0x24d/0x430
>> [ 174.533850] ? kfree+0x5b/0xc0
>> [ 174.534476] ? xenbus_read+0x3d/0x50
>> [ 174.535101] ? xenbus_read+0x3d/0x50
>> [ 174.535718] ? xenbus_gather+0x31/0x90
>> [ 174.536332] ? ___might_sleep+0xf6/0x130
>> [ 174.536945] frontend_changed+0x6b/0xd0
>> [ 174.537565] xenbus_otherend_changed+0x7d/0x80
>> [ 174.538185] frontend_changed+0x12/0x20
>> [ 174.538803] xenwatch_thread+0x74/0x110
>> [ 174.539417] ? woken_wake_function+0x20/0x20
>> [ 174.540049] kthread+0xe5/0x120
>> [ 174.540663] ? xenbus_printf+0x50/0x50
>> [ 174.541278] ? __kthread_init_worker+0x40/0x40
>> [ 174.541898] ret_from_fork+0x21/0x2c
>> [ 174.548635] switch: port 2(vif3.0) entered disabled state
>>
>> I believe calling vfree() when holding a spin_lock isn't a good idea.
>>
>
> Use vfree_atomic instead?
Hmm, isn't this overkill here?
You can just set a local variable with the address and do vfree() after
releasing the lock.
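Something like this (untested sketch, names are only placeholders for
whatever the real structures are):

    struct xenvif_queue *queues;

    spin_lock(&be->lock);
    queues = vif->queues;
    vif->queues = NULL;
    spin_unlock(&be->lock);

    vfree(queues);                 /* may sleep, lock no longer held */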
Juergen
* Re: BUG due to "xen-netback: protect resource cleaning on XenBus disconnect"
2017-03-02 12:12 ` Juergen Gross
@ 2017-03-02 12:19 ` Paul Durrant
2017-03-02 14:55 ` Igor Druzhinin
0 siblings, 1 reply; 6+ messages in thread
From: Paul Durrant @ 2017-03-02 12:19 UTC
To: 'Juergen Gross', Wei Liu
Cc: Igor Druzhinin, netdev@vger.kernel.org, Linux Kernel Mailing List,
xen-devel, Boris Ostrovsky, David Miller
> -----Original Message-----
> From: Juergen Gross [mailto:jgross@suse.com]
> Sent: 02 March 2017 12:13
> To: Wei Liu <wei.liu2@citrix.com>
> Cc: Igor Druzhinin <igor.druzhinin@citrix.com>; xen-devel <xen-
> devel@lists.xenproject.org>; Linux Kernel Mailing List <linux-
> kernel@vger.kernel.org>; netdev@vger.kernel.org; Boris Ostrovsky
> <boris.ostrovsky@oracle.com>; David Miller <davem@davemloft.net>; Paul
> Durrant <Paul.Durrant@citrix.com>
> Subject: Re: BUG due to "xen-netback: protect resource cleaning on XenBus
> disconnect"
>
> On 02/03/17 13:06, Wei Liu wrote:
> > On Thu, Mar 02, 2017 at 12:56:20PM +0100, Juergen Gross wrote:
> >> With commits f16f1df65 and 9a6cdf52b we get in our Xen testing:
> >>
> >> [ 174.512861] switch: port 2(vif3.0) entered disabled state
> >> [ 174.522735] BUG: sleeping function called from invalid context at
> >> /home/build/linux-linus/mm/vmalloc.c:1441
> >> [ 174.523451] in_atomic(): 1, irqs_disabled(): 0, pid: 28, name: xenwatch
> >> [ 174.524131] CPU: 1 PID: 28 Comm: xenwatch Tainted: G W
> >> 4.10.0upstream-11073-g4977ab6-dirty #1
> >> [ 174.524819] Hardware name: MSI MS-7680/H61M-P23 (MS-7680), BIOS
> V17.0
> >> 03/14/2011
> >> [ 174.525517] Call Trace:
> >> [ 174.526217] show_stack+0x23/0x60
> >> [ 174.526899] dump_stack+0x5b/0x88
> >> [ 174.527562] ___might_sleep+0xde/0x130
> >> [ 174.528208] __might_sleep+0x35/0xa0
> >> [ 174.528840] ? _raw_spin_unlock_irqrestore+0x13/0x20
> >> [ 174.529463] ? __wake_up+0x40/0x50
> >> [ 174.530089] remove_vm_area+0x20/0x90
> >> [ 174.530724] __vunmap+0x1d/0xc0
> >> [ 174.531346] ? delete_object_full+0x13/0x20
> >> [ 174.531973] vfree+0x40/0x80
> >> [ 174.532594] set_backend_state+0x18a/0xa90
> >> [ 174.533221] ? dwc_scan_descriptors+0x24d/0x430
> >> [ 174.533850] ? kfree+0x5b/0xc0
> >> [ 174.534476] ? xenbus_read+0x3d/0x50
> >> [ 174.535101] ? xenbus_read+0x3d/0x50
> >> [ 174.535718] ? xenbus_gather+0x31/0x90
> >> [ 174.536332] ? ___might_sleep+0xf6/0x130
> >> [ 174.536945] frontend_changed+0x6b/0xd0
> >> [ 174.537565] xenbus_otherend_changed+0x7d/0x80
> >> [ 174.538185] frontend_changed+0x12/0x20
> >> [ 174.538803] xenwatch_thread+0x74/0x110
> >> [ 174.539417] ? woken_wake_function+0x20/0x20
> >> [ 174.540049] kthread+0xe5/0x120
> >> [ 174.540663] ? xenbus_printf+0x50/0x50
> >> [ 174.541278] ? __kthread_init_worker+0x40/0x40
> >> [ 174.541898] ret_from_fork+0x21/0x2c
> >> [ 174.548635] switch: port 2(vif3.0) entered disabled state
> >>
> >> I believe calling vfree() when holding a spin_lock isn't a good idea.
> >>
> >
> > Use vfree_atomic instead?
>
> Hmm, isn't this overkill here?
>
> You can just set a local variable with the address and do vfree() after
> releasing the lock.
>
Yep, that's what I was thinking. Patch coming shortly.
Paul
>
> Juergen
* Re: BUG due to "xen-netback: protect resource cleaning on XenBus disconnect"
2017-03-02 11:56 BUG due to "xen-netback: protect resource cleaning on XenBus disconnect" Juergen Gross
2017-03-02 12:06 ` Wei Liu
@ 2017-03-02 14:25 ` Boris Ostrovsky
1 sibling, 0 replies; 6+ messages in thread
From: Boris Ostrovsky @ 2017-03-02 14:25 UTC
To: Juergen Gross, igor.druzhinin, xen-devel,
Linux Kernel Mailing List, netdev@vger.kernel.org
Cc: Paul Durrant, Wei Liu, David Miller
On 03/02/2017 06:56 AM, Juergen Gross wrote:
> With commits f16f1df65 and 9a6cdf52b we get in our Xen testing:
>
> [ 174.512861] switch: port 2(vif3.0) entered disabled state
> [ 174.522735] BUG: sleeping function called from invalid context at
> /home/build/linux-linus/mm/vmalloc.c:1441
> [ 174.523451] in_atomic(): 1, irqs_disabled(): 0, pid: 28, name: xenwatch
> [ 174.524131] CPU: 1 PID: 28 Comm: xenwatch Tainted: G W
> 4.10.0upstream-11073-g4977ab6-dirty #1
> [ 174.524819] Hardware name: MSI MS-7680/H61M-P23 (MS-7680), BIOS V17.0
> 03/14/2011
> [ 174.525517] Call Trace:
> [ 174.526217] show_stack+0x23/0x60
> [ 174.526899] dump_stack+0x5b/0x88
> [ 174.527562] ___might_sleep+0xde/0x130
> [ 174.528208] __might_sleep+0x35/0xa0
> [ 174.528840] ? _raw_spin_unlock_irqrestore+0x13/0x20
> [ 174.529463] ? __wake_up+0x40/0x50
> [ 174.530089] remove_vm_area+0x20/0x90
> [ 174.530724] __vunmap+0x1d/0xc0
> [ 174.531346] ? delete_object_full+0x13/0x20
> [ 174.531973] vfree+0x40/0x80
> [ 174.532594] set_backend_state+0x18a/0xa90
> [ 174.533221] ? dwc_scan_descriptors+0x24d/0x430
> [ 174.533850] ? kfree+0x5b/0xc0
> [ 174.534476] ? xenbus_read+0x3d/0x50
> [ 174.535101] ? xenbus_read+0x3d/0x50
> [ 174.535718] ? xenbus_gather+0x31/0x90
> [ 174.536332] ? ___might_sleep+0xf6/0x130
> [ 174.536945] frontend_changed+0x6b/0xd0
> [ 174.537565] xenbus_otherend_changed+0x7d/0x80
> [ 174.538185] frontend_changed+0x12/0x20
> [ 174.538803] xenwatch_thread+0x74/0x110
> [ 174.539417] ? woken_wake_function+0x20/0x20
> [ 174.540049] kthread+0xe5/0x120
> [ 174.540663] ? xenbus_printf+0x50/0x50
> [ 174.541278] ? __kthread_init_worker+0x40/0x40
> [ 174.541898] ret_from_fork+0x21/0x2c
> [ 174.548635] switch: port 2(vif3.0) entered disabled state
>
> I believe calling vfree() when holding a spin_lock isn't a good idea.
>
> Boris, this is the dumpdata failure:
> FAILURE 4.10.0upstream-11073-g4977ab6-dirty(x86_64)
> 4.10.0upstream-11073-g4977ab6-dirty(i386): 2017-03-02 (tst007)
That's not the cause of the test failure though --- it's "just" a warning.
The problem here was that 64- and 32-bit build trees got out of sync
(which is my fault, I switched the former to staging but forgot to do
the same for the latter). We have in the log:
libxl: error: libxl_create.c:564:libxl__domain_make: domain creation
fail: Operation not supported
libxl: error: libxl_create.c:931:initiate_domain_create: cannot make
domain: -3
I have now switched both trees to staging.
-boris
* Re: BUG due to "xen-netback: protect resource cleaning on XenBus disconnect"
2017-03-02 12:19 ` Paul Durrant
@ 2017-03-02 14:55 ` Igor Druzhinin
0 siblings, 0 replies; 6+ messages in thread
From: Igor Druzhinin @ 2017-03-02 14:55 UTC
To: Paul Durrant, 'Juergen Gross', Wei Liu
Cc: xen-devel, Boris Ostrovsky, David Miller,
Linux Kernel Mailing List, netdev@vger.kernel.org
On 02/03/17 12:19, Paul Durrant wrote:
>> -----Original Message-----
>> From: Juergen Gross [mailto:jgross@suse.com]
>> Sent: 02 March 2017 12:13
>> To: Wei Liu <wei.liu2@citrix.com>
>> Cc: Igor Druzhinin <igor.druzhinin@citrix.com>; xen-devel <xen-
>> devel@lists.xenproject.org>; Linux Kernel Mailing List <linux-
>> kernel@vger.kernel.org>; netdev@vger.kernel.org; Boris Ostrovsky
>> <boris.ostrovsky@oracle.com>; David Miller <davem@davemloft.net>; Paul
>> Durrant <Paul.Durrant@citrix.com>
>> Subject: Re: BUG due to "xen-netback: protect resource cleaning on XenBus
>> disconnect"
>>
>> On 02/03/17 13:06, Wei Liu wrote:
>>> On Thu, Mar 02, 2017 at 12:56:20PM +0100, Juergen Gross wrote:
>>>> With commits f16f1df65 and 9a6cdf52b we get in our Xen testing:
>>>>
>>>> [ 174.512861] switch: port 2(vif3.0) entered disabled state
>>>> [ 174.522735] BUG: sleeping function called from invalid context at
>>>> /home/build/linux-linus/mm/vmalloc.c:1441
>>>> [ 174.523451] in_atomic(): 1, irqs_disabled(): 0, pid: 28, name: xenwatch
>>>> [ 174.524131] CPU: 1 PID: 28 Comm: xenwatch Tainted: G W
>>>> 4.10.0upstream-11073-g4977ab6-dirty #1
>>>> [ 174.524819] Hardware name: MSI MS-7680/H61M-P23 (MS-7680), BIOS
>> V17.0
>>>> 03/14/2011
>>>> [ 174.525517] Call Trace:
>>>> [ 174.526217] show_stack+0x23/0x60
>>>> [ 174.526899] dump_stack+0x5b/0x88
>>>> [ 174.527562] ___might_sleep+0xde/0x130
>>>> [ 174.528208] __might_sleep+0x35/0xa0
>>>> [ 174.528840] ? _raw_spin_unlock_irqrestore+0x13/0x20
>>>> [ 174.529463] ? __wake_up+0x40/0x50
>>>> [ 174.530089] remove_vm_area+0x20/0x90
>>>> [ 174.530724] __vunmap+0x1d/0xc0
>>>> [ 174.531346] ? delete_object_full+0x13/0x20
>>>> [ 174.531973] vfree+0x40/0x80
>>>> [ 174.532594] set_backend_state+0x18a/0xa90
>>>> [ 174.533221] ? dwc_scan_descriptors+0x24d/0x430
>>>> [ 174.533850] ? kfree+0x5b/0xc0
>>>> [ 174.534476] ? xenbus_read+0x3d/0x50
>>>> [ 174.535101] ? xenbus_read+0x3d/0x50
>>>> [ 174.535718] ? xenbus_gather+0x31/0x90
>>>> [ 174.536332] ? ___might_sleep+0xf6/0x130
>>>> [ 174.536945] frontend_changed+0x6b/0xd0
>>>> [ 174.537565] xenbus_otherend_changed+0x7d/0x80
>>>> [ 174.538185] frontend_changed+0x12/0x20
>>>> [ 174.538803] xenwatch_thread+0x74/0x110
>>>> [ 174.539417] ? woken_wake_function+0x20/0x20
>>>> [ 174.540049] kthread+0xe5/0x120
>>>> [ 174.540663] ? xenbus_printf+0x50/0x50
>>>> [ 174.541278] ? __kthread_init_worker+0x40/0x40
>>>> [ 174.541898] ret_from_fork+0x21/0x2c
>>>> [ 174.548635] switch: port 2(vif3.0) entered disabled state
>>>>
>>>> I believe calling vfree() when holding a spin_lock isn't a good idea.
>>>>
>>>
>>> Use vfree_atomic instead?
>>
>> Hmm, isn't this overkill here?
>>
>> You can just set a local variable with the address and do vfree() after
>> releasing the lock.
>>
>
> Yep, that's what I was thinking. Patch coming shortly.
>
> Paul
We have an internal patch, tested just recently, that avoids the spinlock
altogether. As our testing revealed, calling vfree() inside the spinlocked
section is not the worst thing that can happen here. I switched to RCU to
protect against the memory being released while it is still in use. I'll
share the patch today.
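Roughly the idea (a simplified sketch, not the actual patch; names and
annotations are illustrative):

    /* needs <linux/rcupdate.h>, <linux/vmalloc.h> */

    /* reader side, e.g. the rx/interrupt path */
    rcu_read_lock();
    queue = rcu_dereference(vif->queues);
    if (queue) {
            /* ... touch the queue ... */
    }
    rcu_read_unlock();

    /* teardown side */
    queues = vif->queues;
    RCU_INIT_POINTER(vif->queues, NULL);
    synchronize_rcu();             /* wait for all readers to drain  */
    vfree(queues);                 /* no lock held, sleeping is fine */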
Igor
>
>>
>> Juergen