* [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop
From: Rusty Russell @ 2011-12-29 10:42 UTC
To: netdev; +Cc: Michael S. Tsirkin

Michael S. Tsirkin noticed that we could run the refill work after
ndo_close, which can re-enable napi - we don't disable it until
virtnet_remove.  This is clearly wrong, so move the workqueue control
to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).

One subtle point: virtnet_probe() could simply fail if it couldn't
allocate a receive buffer, but that's less polite in virtnet_open() so
we schedule a refill as we do in the normal receive path if we run out
of memory.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
---
 drivers/net/virtio_net.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -439,7 +439,13 @@ static int add_recvbuf_mergeable(struct
 	return err;
 }
 
-/* Returns false if we couldn't fill entirely (OOM). */
+/*
+ * Returns false if we couldn't fill entirely (OOM).
+ *
+ * Normally run in the receive path, but can also be run from ndo_open
+ * before we're receiving packets, or from refill_work which is
+ * careful to disable receiving (using napi_disable).
+ */
 static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
 {
 	int err;
@@ -719,6 +725,10 @@ static int virtnet_open(struct net_devic
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
+	/* Make sure we have some buffers: if oom use wq. */
+	if (!try_fill_recv(vi, GFP_KERNEL))
+		schedule_delayed_work(&vi->refill, 0);
+
 	virtnet_napi_enable(vi);
 	return 0;
 }
@@ -772,6 +782,8 @@ static int virtnet_close(struct net_devi
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
+	/* Make sure refill_work doesn't re-enable napi! */
+	cancel_delayed_work_sync(&vi->refill);
 	napi_disable(&vi->napi);
 
 	return 0;
@@ -1082,7 +1094,6 @@ static int virtnet_probe(struct virtio_d
 
 unregister:
 	unregister_netdev(dev);
-	cancel_delayed_work_sync(&vi->refill);
 free_vqs:
 	vdev->config->del_vqs(vdev);
 free_stats:
@@ -1121,9 +1132,7 @@ static void __devexit virtnet_remove(str
 	/* Stop all the virtqueues. */
 	vdev->config->reset(vdev);
 
-
 	unregister_netdev(vi->dev);
-	cancel_delayed_work_sync(&vi->refill);
 
 	/* Free unused buffers in both send and recv, if any. */
 	free_unused_bufs(vi);
* Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop
From: David Miller @ 2011-12-29 19:38 UTC
To: rusty; +Cc: netdev, mst

Michael will you integrate these patches from Rusty and submit them
to me along with other stuff you might have?

Or would you like me to apply them to net-next directly?

Thanks.
* Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop
From: Michael S. Tsirkin @ 2011-12-29 20:31 UTC
To: David Miller; +Cc: rusty, netdev

On Thu, Dec 29, 2011 at 02:38:06PM -0500, David Miller wrote:
> Michael will you integrate these patches from Rusty and submit them
> to me along with other stuff you might have?
>
> Or would you like me to apply them to net-next directly?
>
> Thanks.

For personal reasons, my availability in this merge cycle is limited, so
net-next directly's better.

Thanks,

-- 
MST
* Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop
From: David Miller @ 2011-12-29 21:44 UTC
To: mst; +Cc: rusty, netdev

From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Thu, 29 Dec 2011 22:31:50 +0200

> On Thu, Dec 29, 2011 at 02:38:06PM -0500, David Miller wrote:
>> Michael will you integrate these patches from Rusty and submit them
>> to me along with other stuff you might have?
>>
>> Or would you like me to apply them to net-next directly?
>>
>> Thanks.
>
> For personal reasons, my availability in this merge cycle is limited, so
> net-next directly's better.

Ok.
* question about napi_disable (was Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop)
From: Michael S. Tsirkin @ 2012-04-04 9:32 UTC
To: Rusty Russell; +Cc: kvm, netdev, virtualization, Amit Shah, David Miller

On Thu, Dec 29, 2011 at 09:12:38PM +1030, Rusty Russell wrote:
> Michael S. Tsirkin noticed that we could run the refill work after
> ndo_close, which can re-enable napi - we don't disable it until
> virtnet_remove.  This is clearly wrong, so move the workqueue control
> to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
>
> One subtle point: virtnet_probe() could simply fail if it couldn't
> allocate a receive buffer, but that's less polite in virtnet_open() so
> we schedule a refill as we do in the normal receive path if we run out
> of memory.
>
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Doh.
napi_disable does not prevent the following
napi_schedule, does it?

Can someone confirm that I am not seeing things please?

And this means this hack does not work:
try_fill_recv can still run in parallel with
napi, corrupting the vq.

I suspect we need to resurrect a patch that used a
dedicated flag to avoid this race.

Comments?

> ---
>  drivers/net/virtio_net.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -439,7 +439,13 @@ static int add_recvbuf_mergeable(struct
>  	return err;
>  }
>
> -/* Returns false if we couldn't fill entirely (OOM). */
> +/*
> + * Returns false if we couldn't fill entirely (OOM).
> + *
> + * Normally run in the receive path, but can also be run from ndo_open
> + * before we're receiving packets, or from refill_work which is
> + * careful to disable receiving (using napi_disable).
> + */
>  static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
>  {
>  	int err;
> @@ -719,6 +725,10 @@ static int virtnet_open(struct net_devic
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
>
> +	/* Make sure we have some buffers: if oom use wq. */
> +	if (!try_fill_recv(vi, GFP_KERNEL))
> +		schedule_delayed_work(&vi->refill, 0);
> +
>  	virtnet_napi_enable(vi);
>  	return 0;
>  }
> @@ -772,6 +782,8 @@ static int virtnet_close(struct net_devi
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
>
> +	/* Make sure refill_work doesn't re-enable napi! */
> +	cancel_delayed_work_sync(&vi->refill);
>  	napi_disable(&vi->napi);
>
>  	return 0;
> @@ -1082,7 +1094,6 @@ static int virtnet_probe(struct virtio_d
>
>  unregister:
>  	unregister_netdev(dev);
> -	cancel_delayed_work_sync(&vi->refill);
>  free_vqs:
>  	vdev->config->del_vqs(vdev);
>  free_stats:
> @@ -1121,9 +1132,7 @@ static void __devexit virtnet_remove(str
>  	/* Stop all the virtqueues. */
>  	vdev->config->reset(vdev);
>
> -
>  	unregister_netdev(vi->dev);
> -	cancel_delayed_work_sync(&vi->refill);
>
>  	/* Free unused buffers in both send and recv, if any. */
>  	free_unused_bufs(vi);
* Re: question about napi_disable (was Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop)
From: Michael S. Tsirkin @ 2012-04-04 9:47 UTC
To: Rusty Russell; +Cc: kvm, netdev, virtualization, Amit Shah, David Miller

On Wed, Apr 04, 2012 at 12:32:29PM +0300, Michael S. Tsirkin wrote:
> On Thu, Dec 29, 2011 at 09:12:38PM +1030, Rusty Russell wrote:
> > Michael S. Tsirkin noticed that we could run the refill work after
> > ndo_close, which can re-enable napi - we don't disable it until
> > virtnet_remove.  This is clearly wrong, so move the workqueue control
> > to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
> >
> > One subtle point: virtnet_probe() could simply fail if it couldn't
> > allocate a receive buffer, but that's less polite in virtnet_open() so
> > we schedule a refill as we do in the normal receive path if we run out
> > of memory.
> >
> > Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
>
> Doh.
> napi_disable does not prevent the following
> napi_schedule, does it?
>
> Can someone confirm that I am not seeing things please?

Yes, I *was* seeing things.  After napi_disable, NAPI_STATE_SCHED is
set so napi_schedule does nothing.  Sorry about the noise.

> And this means this hack does not work:
> try_fill_recv can still run in parallel with
> napi, corrupting the vq.
>
> I suspect we need to resurrect a patch that used a
> dedicated flag to avoid this race.
>
> Comments?

> > ---
> >  drivers/net/virtio_net.c | 17 +++++++++++++----
> >  1 file changed, 13 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -439,7 +439,13 @@ static int add_recvbuf_mergeable(struct
> >  	return err;
> >  }
> >
> > -/* Returns false if we couldn't fill entirely (OOM). */
> > +/*
> > + * Returns false if we couldn't fill entirely (OOM).
> > + *
> > + * Normally run in the receive path, but can also be run from ndo_open
> > + * before we're receiving packets, or from refill_work which is
> > + * careful to disable receiving (using napi_disable).
> > + */
> >  static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
> >  {
> >  	int err;
> > @@ -719,6 +725,10 @@ static int virtnet_open(struct net_devic
> >  {
> >  	struct virtnet_info *vi = netdev_priv(dev);
> >
> > +	/* Make sure we have some buffers: if oom use wq. */
> > +	if (!try_fill_recv(vi, GFP_KERNEL))
> > +		schedule_delayed_work(&vi->refill, 0);
> > +
> >  	virtnet_napi_enable(vi);
> >  	return 0;
> >  }
> > @@ -772,6 +782,8 @@ static int virtnet_close(struct net_devi
> >  {
> >  	struct virtnet_info *vi = netdev_priv(dev);
> >
> > +	/* Make sure refill_work doesn't re-enable napi! */
> > +	cancel_delayed_work_sync(&vi->refill);
> >  	napi_disable(&vi->napi);
> >
> >  	return 0;
> > @@ -1082,7 +1094,6 @@ static int virtnet_probe(struct virtio_d
> >
> >  unregister:
> >  	unregister_netdev(dev);
> > -	cancel_delayed_work_sync(&vi->refill);
> >  free_vqs:
> >  	vdev->config->del_vqs(vdev);
> >  free_stats:
> > @@ -1121,9 +1132,7 @@ static void __devexit virtnet_remove(str
> >  	/* Stop all the virtqueues. */
> >  	vdev->config->reset(vdev);
> >
> > -
> >  	unregister_netdev(vi->dev);
> > -	cancel_delayed_work_sync(&vi->refill);
> >
> >  	/* Free unused buffers in both send and recv, if any. */
> >  	free_unused_bufs(vi);
* Re: question about napi_disable (was Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop)
From: Jason Wang @ 2012-04-05 6:32 UTC
To: Michael S. Tsirkin; +Cc: kvm, netdev, virtualization, Amit Shah, David Miller

On 04/04/2012 05:32 PM, Michael S. Tsirkin wrote:
> On Thu, Dec 29, 2011 at 09:12:38PM +1030, Rusty Russell wrote:
>> Michael S. Tsirkin noticed that we could run the refill work after
>> ndo_close, which can re-enable napi - we don't disable it until
>> virtnet_remove.  This is clearly wrong, so move the workqueue control
>> to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
>>
>> One subtle point: virtnet_probe() could simply fail if it couldn't
>> allocate a receive buffer, but that's less polite in virtnet_open() so
>> we schedule a refill as we do in the normal receive path if we run out
>> of memory.
>>
>> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
>
> Doh.
> napi_disable does not prevent the following
> napi_schedule, does it?
>
> Can someone confirm that I am not seeing things please?

Looks like napi_disable() does prevent the following scheduling, as
napi_schedule_prep() returns true only when there's a 0 -> 1 transition
of the NAPI_STATE_SCHED bit.
* Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop
From: Michael S. Tsirkin @ 2011-12-29 20:33 UTC
To: Rusty Russell; +Cc: netdev

On Thu, Dec 29, 2011 at 01:53:52PM +1030, Rusty Russell wrote:
> Michael S. Tsirkin noticed that we could run the refill work after
> ndo_close, which can re-enable napi - we don't disable it until
> virtnet_remove.  This is clearly wrong, so move the workqueue control
> to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
>
> One subtle point: virtnet_probe() could simply fail if it couldn't
> allocate a receive buffer, but that's less polite in virtnet_open() so
> we schedule a refill as we do in the normal receive path if we run out
> of memory.
>
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  drivers/net/virtio_net.c | 17 +++++++++++++----
>  1 file changed, 13 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -439,7 +439,13 @@ static int add_recvbuf_mergeable(struct
>  	return err;
>  }
>
> -/* Returns false if we couldn't fill entirely (OOM). */
> +/*
> + * Returns false if we couldn't fill entirely (OOM).
> + *
> + * Normally run in the receive path, but can also be run from ndo_open
> + * before we're receiving packets, or from refill_work which is
> + * careful to disable receiving (using napi_disable).
> + */
>  static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
>  {
>  	int err;
> @@ -719,6 +725,10 @@ static int virtnet_open(struct net_devic
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
>
> +	/* Make sure we have some buffers: if oom use wq. */
> +	if (!try_fill_recv(vi, GFP_KERNEL))
> +		schedule_delayed_work(&vi->refill, 0);
> +
>  	virtnet_napi_enable(vi);
>  	return 0;
>  }
> @@ -772,6 +782,8 @@ static int virtnet_close(struct net_devi
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
>
> +	/* Make sure refill_work doesn't re-enable napi! */
> +	cancel_delayed_work_sync(&vi->refill);
>  	napi_disable(&vi->napi);
>
>  	return 0;
> @@ -1082,7 +1094,6 @@ static int virtnet_probe(struct virtio_d
>
>  unregister:
>  	unregister_netdev(dev);
> -	cancel_delayed_work_sync(&vi->refill);
>  free_vqs:
>  	vdev->config->del_vqs(vdev);
>  free_stats:
> @@ -1121,9 +1132,7 @@ static void __devexit virtnet_remove(str
>  	/* Stop all the virtqueues. */
>  	vdev->config->reset(vdev);
>
> -
>  	unregister_netdev(vi->dev);
> -	cancel_delayed_work_sync(&vi->refill);
>
>  	/* Free unused buffers in both send and recv, if any. */
>  	free_unused_bufs(vi);
* Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop
From: David Miller @ 2011-12-29 21:44 UTC
To: mst; +Cc: rusty, netdev

From: "Michael S. Tsirkin" <mst@redhat.com>
Date: Thu, 29 Dec 2011 22:33:02 +0200

> On Thu, Dec 29, 2011 at 01:53:52PM +1030, Rusty Russell wrote:
>> Michael S. Tsirkin noticed that we could run the refill work after
>> ndo_close, which can re-enable napi - we don't disable it until
>> virtnet_remove.  This is clearly wrong, so move the workqueue control
>> to ndo_open and ndo_stop (aka. virtnet_open and virtnet_close).
>>
>> One subtle point: virtnet_probe() could simply fail if it couldn't
>> allocate a receive buffer, but that's less polite in virtnet_open() so
>> we schedule a refill as we do in the normal receive path if we run out
>> of memory.
>>
>> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>

Applied.