From: Thomas Lendacky <tahm@linux.vnet.ibm.com>
To: Rusty Russell <rusty@rustcorp.com.au>
Cc: kvm@vger.kernel.org, "Michael S. Tsirkin" <mst@redhat.com>,
	linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, avi@redhat.com,
	Sasha Levin <levinsasha928@gmail.com>
Subject: Re: [PATCH v2 2/2] virtio-ring: Allocate indirect buffers from cache when possible
Date: Mon, 10 Sep 2012 10:47:15 -0500
Message-ID: <14037987.hiN6MSGY6z@tomlt1.ibmoffice.com>
In-Reply-To: <87txvahfv3.fsf@rustcorp.com.au>


On Friday, September 07, 2012 09:19:04 AM Rusty Russell wrote:
> "Michael S. Tsirkin" <mst@redhat.com> writes:
> > On Thu, Sep 06, 2012 at 05:27:23PM +0930, Rusty Russell wrote:
> >> "Michael S. Tsirkin" <mst@redhat.com> writes:
> >> > Yes without checksum net core always linearizes packets, so yes it is
> >> > screwed.
> >> > For -net, skb always allocates space for 17 frags + linear part so
> >> > it seems sane to do same in virtio core, and allocate, for -net,
> >> > up to max_frags + 1 from cache.
> >> > We can adjust it: no _SG -> 2 otherwise 18.
> >> 
> >> But I thought it used individual buffers these days?
> > 
> > Yes for receive, no for transmit. That's probably why
> > we should have the threshold per vq, not per device, BTW.
> 
> Can someone actually run with my histogram patch and see what the real
> numbers are?
> 
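
(As a point of reference for the "no _SG -> 2, otherwise 18" rule quoted
above, a minimal, illustrative sketch of a per-TX-queue sizing helper --
not the actual patch under discussion -- could look like the following,
using VIRTIO_NET_F_CSUM to stand for the "no checksum offload, so the
stack linearizes" case:

	/* Illustrative sketch only; needs linux/virtio_config.h,
	 * linux/virtio_net.h and linux/skbuff.h.  Pick how many indirect
	 * descriptors to pre-allocate per TX virtqueue: without checksum
	 * offload the stack linearizes skbs, so a packet needs at most 2
	 * descriptors (vnet header + data); with it, up to MAX_SKB_FRAGS
	 * fragments plus the linear part can show up, i.e. 18. */
	static unsigned int tx_indirect_cache_entries(struct virtio_device *vdev)
	{
		if (!virtio_has_feature(vdev, VIRTIO_NET_F_CSUM))
			return 2;
		return MAX_SKB_FRAGS + 1;
	}
)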

I ran some TCP_RR and TCP_STREAM sessions, both host-to-guest and
guest-to-host, with a form of the histogram patch applied against a
RHEL6.3 kernel. The histogram values were reset after each test.
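
The kind of instrumentation involved is roughly the following -- a
minimal sketch, not the actual histogram patch: a per-virtqueue array of
bucket counters, bumped once per add_buf call with the number of
descriptors in the request, and dumped as the "Size distribution" lines
shown below.

	/* Illustrative sketch only: one histogram per virtqueue, indexed
	 * by the number of scatterlist entries (out + in) that a single
	 * add_buf request submits. */
	#define VQ_HIST_BUCKETS 32

	struct vq_hist {
		unsigned long bucket[VQ_HIST_BUCKETS];
	};

	static void vq_hist_record(struct vq_hist *h, unsigned int out,
				   unsigned int in)
	{
		unsigned int n = out + in;

		if (n >= VQ_HIST_BUCKETS)
			n = VQ_HIST_BUCKETS - 1;
		h->bucket[n]++;
	}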

Here are the results:

60-session TCP_RR from host to guest with 256-byte requests and 256-byte
responses for 60 seconds:

Queue histogram for virtio1:
Size distribution for input (max=7818456):
 1: 7818456 ################################################################
Size distribution for output (max=7816698):
 2: 149     
 3: 7816698 ################################################################
 4: 2       
 5: 1       
Size distribution for control (max=1):
 0: 0


4-session TCP_STREAM from host to guest with a 4K message size for 60 seconds:

Queue histogram for virtio1:
Size distribution for input (max=16050941):
 1: 16050941 ################################################################
Size distribution for output (max=1877796):
 2: 1877796 ################################################################
 3: 5       
Size distribution for control (max=1):
 0: 0

4-session TCP_STREAM from host to guest with a 16K message size for 60 seconds:

Queue histogram for virtio1:
Size distribution for input (max=16831151):
 1: 16831151 ################################################################
Size distribution for output (max=1923965):
 2: 1923965 ################################################################
 3: 5       
Size distribution for control (max=1):
 0: 0

4-session TCP_STREAM from guest to host with a 4K message size for 60 seconds:

Queue histogram for virtio1:
Size distribution for input (max=1316069):
 1: 1316069 ################################################################
Size distribution for output (max=879213):
 2: 24      
 3: 24097   #
 4: 23176   #
 5: 3412    
 6: 4446    
 7: 4663    
 8: 4195    
 9: 3772    
10: 3388    
11: 3666    
12: 2885    
13: 2759    
14: 2997    
15: 3060    
16: 2651    
17: 2235    
18: 92721   ######  
19: 879213  ################################################################
Size distribution for control (max=1):
 0: 0

4-session TCP_STREAM from guest to host with a 16K message size for 60 seconds:

Queue histogram for virtio1:
Size distribution for input (max=1428590):
 1: 1428590 ################################################################
Size distribution for output (max=957774):
 2: 20
 3: 54955   ###
 4: 34281   ##
 5: 2967
 6: 3394
 7: 9400
 8: 3061
 9: 3397
10: 3258
11: 3275
12: 3147
13: 2876
14: 2747
15: 2832
16: 2013
17: 1670
18: 100369  ######
19: 957774  ################################################################
Size distribution for control (max=1):
 0: 0

Thanks,
Tom

> I'm not convinced that the ideal 17-buffer case actually happens as much
> as we think.  And if it's not happening with this netperf test, we're
> testing the wrong thing.
> 
> Thanks,
> Rusty.
> 
