From: "Michael S. Tsirkin"
Subject: Re: [PATCHv2 RFC 3/4] virtio_net: limit xmit polling
Date: Tue, 7 Jun 2011 18:59:15 +0300
Message-ID: <20110607155915.GA17581@redhat.com>
To: linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Cc: Krishna Kumar, Carsten Otte, lguest-uLR06cmDAlY/bJ5BZ2RsiQ@public.gmane.org, Shirley Ma, kvm-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, linux-s390-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, netdev-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, habanero-23VcF4HTsmIX0ybBhKVfKdBPR1lH4CV8@public.gmane.org, Heiko Carstens, virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org, steved-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org, Christian Borntraeger, Tom Lendacky, Martin Schwidefsky, linux390-tA70FqPdS9bQT0dZR+AlfA@public.gmane.org

On Thu, Jun 02, 2011 at 06:43:17PM +0300, Michael S. Tsirkin wrote:
> Current code might introduce a lot of latency variation
> if there are many pending bufs at the time we
> attempt to transmit a new one. This is bad for
> real-time applications and can't be good for TCP either.
>
> Free up just enough to both clean up all buffers
> eventually and to be able to xmit the next packet.
>
> Signed-off-by: Michael S. Tsirkin

I've been testing this patch and it seems to work fine so far.
The following fixups are needed to make it build, though:

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b25db1c..77cdf34 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -529,11 +529,8 @@ static bool free_old_xmit_skb(struct virtnet_info *vi)
  * virtqueue_add_buf will succeed. */
 static bool free_xmit_capacity(struct virtnet_info *vi)
 {
-	struct sk_buff *skb;
-	unsigned int len;
-
 	while (virtqueue_min_capacity(vi->svq) < MAX_SKB_FRAGS + 2)
-		if (unlikely(!free_old_xmit_skb))
+		if (unlikely(!free_old_xmit_skb(vi)))
 			return false;
 	return true;
 }
@@ -628,7 +625,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * Doing this after kick means there's a chance we'll free
 	 * the skb we have just sent, which is hot in cache. */
 	for (i = 0; i < 2; i++)
-		free_old_xmit_skb(v);
+		free_old_xmit_skb(vi);
 
 	if (likely(free_xmit_capacity(vi)))
 		return NETDEV_TX_OK;
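
Just to spell out how I read the result, in case it helps review: with
the fixups applied, the tx path frees at most two completed skbs right
after the kick, and then keeps freeing only while there is not yet room
for a worst-case (MAX_SKB_FRAGS + 2) packet. Below is a rough sketch of
the tail of start_xmit() as I understand it -- the add-buf/kick part is
elided, and the netif_stop_queue() fallback is my assumption about the
rest of the RFC, not something taken from the hunks above:

/* Sketch only: reconstructed from the diff context, not the full patch. */
static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int i;

	/* ... add skb to vi->svq and kick the host (elided) ... */

	/* Doing this after kick means there's a chance we'll free
	 * the skb we have just sent, which is hot in cache. */
	for (i = 0; i < 2; i++)
		free_old_xmit_skb(vi);

	/* Is there room for the next worst-case packet? */
	if (likely(free_xmit_capacity(vi)))
		return NETDEV_TX_OK;

	/* Assumed fallback: stop the queue until the host consumes
	 * enough buffers to make room again. */
	netif_stop_queue(dev);
	return NETDEV_TX_OK;
}

The nice property is that the amount of cleanup work done per xmit is
bounded, so a long backlog of pending buffers can no longer add a large
latency spike to a single transmit.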