From: Wanlong Gao <gaowanlong@cn.fujitsu.com>
To: Jason Wang <jasowang@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Eric Dumazet <erdnetdev@gmail.com>
Subject: Re: [PATCH V2 2/2] virtio-net: reset virtqueue affinity when doing cpu hotplug
Date: Mon, 07 Jan 2013 16:46:31 +0800
Message-ID: <50EA8B67.4040000@cn.fujitsu.com>
In-Reply-To: <50EA7F5D.7050907@redhat.com>

On 01/07/2013 03:55 PM, Jason Wang wrote:
> On 01/07/2013 03:48 PM, Wanlong Gao wrote:
>> On 01/07/2013 03:28 PM, Jason Wang wrote:
>>> On 01/07/2013 03:15 PM, Wanlong Gao wrote:
>>>> Add a cpu notifier to virtio-net, so that we can reset the
>>>> virtqueue affinity when cpu hotplug happens. This improves
>>>> performance by enabling or disabling the virtqueue affinity
>>>> after cpu hotplug.
>>>> Putting the notifier block into virtnet_info was suggested by
>>>> Jason, thank you.
>>>>
>>>> Cc: Rusty Russell <rusty@rustcorp.com.au>
>>>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>>>> Cc: Jason Wang <jasowang@redhat.com>
>>>> Cc: Eric Dumazet <erdnetdev@gmail.com>
>>>> Cc: virtualization@lists.linux-foundation.org
>>>> Cc: netdev@vger.kernel.org
>>>> Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
>>>> ---
>>>>  drivers/net/virtio_net.c | 30 ++++++++++++++++++++++++++++++
>>>>  1 file changed, 30 insertions(+)
>>>>
>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>> index b483fb5..9547b4c 100644
>>>> --- a/drivers/net/virtio_net.c
>>>> +++ b/drivers/net/virtio_net.c
>>>> @@ -26,6 +26,7 @@
>>>>  #include <linux/scatterlist.h>
>>>>  #include <linux/if_vlan.h>
>>>>  #include <linux/slab.h>
>>>> +#include <linux/cpu.h>
>>>>  
>>>>  static int napi_weight = 128;
>>>>  module_param(napi_weight, int, 0444);
>>>> @@ -123,6 +124,9 @@ struct virtnet_info {
>>>>  
>>>>  	/* Is the affinity hint set for the virtqueues? */
>>>>  	bool affinity_hint_set;
>>>> +
>>>> +	/* CPU hot plug notifier */
>>>> +	struct notifier_block nb;
>>>>  };
>>>>  
>>>>  struct skb_vnet_hdr {
>>>> @@ -1051,6 +1055,23 @@ static void virtnet_set_affinity(struct virtnet_info *vi, bool set)
>>>>  	}
>>>>  }
>>>>  
>>>> +static int virtnet_cpu_callback(struct notifier_block *nfb,
>>>> +			        unsigned long action, void *hcpu)
>>>> +{
>>>> +	struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);
>>>> +	switch(action) {
>>>> +	case CPU_ONLINE:
>>>> +	case CPU_ONLINE_FROZEN:
>>>> +	case CPU_DEAD:
>>>> +	case CPU_DEAD_FROZEN:
>>>> +		virtnet_set_affinity(vi, true);
>>>> +		break;
>>>> +	default:
>>>> +		break;
>>>> +	}
>>>> +	return NOTIFY_OK;
>>>> +}
>>>> +
>>> I think you'd better fix .ndo_select_queue() as well (as Michael
>>> said in your V1), since it currently uses the smp processor id, which
>>> may not work very well in this case either.
>> The bug is that we can't get the right txq if the CPU IDs are not
>> consecutive, right? Do you have any good ideas about how to fix this?
>>
>> Thanks,
>> Wanlong Gao
> 
> The point is to make each virtqueue private to a specific cpu when the
> number of queue pairs is equal to the number of cpus. So after you bind
> the vq affinity to a specific cpu, you'd better use the reverse mapping
> of that affinity in .ndo_select_queue(). One possible idea, as
> Michael suggested, is a per-cpu structure that records the preferred
> virtqueue, and to do both .ndo_select_queue() and the affinity hint
> setting based on it.

Yeah, I think I got it now; I will address it in V3. Thank you. ;)
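
Just to make sure I understood the idea, here is a rough sketch of what I
have in mind for V3 (the vq_index field and the other details below are
only placeholders for illustration, not the final code):

/* New per-cpu field in struct virtnet_info, allocated with
 * alloc_percpu(int) in virtnet_probe() and initialised to -1 for every
 * possible cpu:
 *
 *	int __percpu *vq_index;
 */

static void virtnet_set_affinity(struct virtnet_info *vi, bool set)
{
	int i, cpu = -1;

	/* Clearing the hints and the "#queue pairs == #online cpus"
	 * check are left out here for brevity. */
	if (!set)
		return;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		cpu = cpumask_next(cpu, cpu_online_mask);
		virtqueue_set_affinity(vi->rq[i].vq, cpu);
		virtqueue_set_affinity(vi->sq[i].vq, cpu);
		/* record the reverse mapping for .ndo_select_queue() */
		*per_cpu_ptr(vi->vq_index, cpu) = i;
	}
}

static u16 virtnet_select_queue(struct net_device *dev, struct sk_buff *skb)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int txq = *this_cpu_ptr(vi->vq_index);

	/* fall back to txq 0 if this cpu has no private queue pair */
	if (txq < 0 || txq >= dev->real_num_tx_queues)
		txq = 0;

	return txq;
}

This way .ndo_select_queue() no longer relies on the raw smp processor id
being a valid queue index, and the cpu notifier only has to refresh the
per-cpu map when a cpu comes or goes.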

Regards,
Wanlong Gao

>>
>>> Thanks
>>>>  static void virtnet_get_ringparam(struct net_device *dev,
>>>>  				struct ethtool_ringparam *ring)
>>>>  {
>>>> @@ -1509,6 +1530,13 @@ static int virtnet_probe(struct virtio_device *vdev)
>>>>  		}
>>>>  	}
>>>>  
>>>> +	vi->nb.notifier_call = &virtnet_cpu_callback;
>>>> +	err = register_hotcpu_notifier(&vi->nb);
>>>> +	if (err) {
>>>> +		pr_debug("virtio_net: registering cpu notifier failed\n");
>>>> +		goto free_recv_bufs;
>>>> +	}
>>>> +
>>>>  	/* Assume link up if device can't report link status,
>>>>  	   otherwise get link status from config. */
>>>>  	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
>>>> @@ -1553,6 +1581,8 @@ static void virtnet_remove(struct virtio_device *vdev)
>>>>  {
>>>>  	struct virtnet_info *vi = vdev->priv;
>>>>  
>>>> +	unregister_hotcpu_notifier(&vi->nb);
>>>> +
>>>>  	/* Prevent config work handler from accessing the device. */
>>>>  	mutex_lock(&vi->config_lock);
>>>>  	vi->config_enable = false;
>>>
> 
> 
