public inbox for kvm@vger.kernel.org
* [PATCH] kvm: Increase NR_IOBUS_DEVS limit to 200
@ 2010-03-30 23:48 Sridhar Samudrala
  2010-03-31  9:51 ` Michael S. Tsirkin
  2010-04-01  8:52 ` Avi Kivity
  0 siblings, 2 replies; 4+ messages in thread
From: Sridhar Samudrala @ 2010-03-30 23:48 UTC (permalink / raw)
  To: Avi Kivity, Michael S. Tsirkin; +Cc: kvm@vger.kernel.org

This patch increases the current hardcoded limit of NR_IOBUS_DEVS
from 6 to 200. We are hitting this limit when creating a guest with more
than one virtio-net device using the vhost-net backend. Each virtio-net
device requires two such devices to service notifications from its rx/tx queues.

Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>


diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a3fd0f9..7fb48d3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -54,7 +54,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
  */
 struct kvm_io_bus {
 	int                   dev_count;
-#define NR_IOBUS_DEVS 6
+#define NR_IOBUS_DEVS 200 
 	struct kvm_io_device *devs[NR_IOBUS_DEVS];
 };
 




^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH] kvm: Increase NR_IOBUS_DEVS limit to 200
  2010-03-30 23:48 [PATCH] kvm: Increase NR_IOBUS_DEVS limit to 200 Sridhar Samudrala
@ 2010-03-31  9:51 ` Michael S. Tsirkin
  2010-03-31 20:04   ` Sridhar Samudrala
  2010-04-01  8:52 ` Avi Kivity
  1 sibling, 1 reply; 4+ messages in thread
From: Michael S. Tsirkin @ 2010-03-31  9:51 UTC (permalink / raw)
  To: Sridhar Samudrala; +Cc: Avi Kivity, kvm@vger.kernel.org

On Tue, Mar 30, 2010 at 04:48:25PM -0700, Sridhar Samudrala wrote:
> This patch increases the current hardcoded limit of NR_IOBUS_DEVS
> from 6 to 200. We are hitting this limit when creating a guest with more
> than 1 virtio-net device using vhost-net backend. Each virtio-net
> device requires 2 such devices to service notifications from rx/tx queues.
> 
> Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
> 

I tried this, but observed a measurable performance
degradation with vhost-net and this patch. I have not
investigated yet. Do you see this as well?

> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index a3fd0f9..7fb48d3 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -54,7 +54,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
>   */
>  struct kvm_io_bus {
>  	int                   dev_count;
> -#define NR_IOBUS_DEVS 6
> +#define NR_IOBUS_DEVS 200 
>  	struct kvm_io_device *devs[NR_IOBUS_DEVS];
>  };
>  
> 
> 


* Re: [PATCH] kvm: Increase NR_IOBUS_DEVS limit to 200
  2010-03-31  9:51 ` Michael S. Tsirkin
@ 2010-03-31 20:04   ` Sridhar Samudrala
  0 siblings, 0 replies; 4+ messages in thread
From: Sridhar Samudrala @ 2010-03-31 20:04 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Avi Kivity, kvm@vger.kernel.org

On Wed, 2010-03-31 at 12:51 +0300, Michael S. Tsirkin wrote:
> On Tue, Mar 30, 2010 at 04:48:25PM -0700, Sridhar Samudrala wrote:
> > This patch increases the current hardcoded limit of NR_IOBUS_DEVS
> > from 6 to 200. We are hitting this limit when creating a guest with more
> > than 1 virtio-net device using vhost-net backend. Each virtio-net
> > device requires 2 such devices to service notifications from rx/tx queues.
> > 
> > Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
> > 
> 
> I tried this, but observed a measurable performance
> degradation with vhost-net and this patch. I have not
> investigated yet. Do you see this as well?

No. I am not seeing any degradation beyond the normal run-to-run
variation of kvm networking. For example, on a two-socket, 8-core
Nehalem system, running netperf TCP_STREAM with 64K messages, I am
getting 12-13Gb/s guest to host and 11-12Gb/s host to guest both with
and without this patch.
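A rough sketch of that kind of netperf run (not taken from this mail; it assumes netperf is installed in the guest, netserver runs on the host, and the host IP 192.168.122.1 is a placeholder):

```shell
# On the host: start the netperf server.
netserver

# In the guest: 30-second TCP_STREAM run with 64K sends, guest -> host.
netperf -H 192.168.122.1 -t TCP_STREAM -l 30 -- -m 65536

# Host -> guest is the same command run on the host against the guest's IP.
```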

Thanks
Sridhar


> 
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index a3fd0f9..7fb48d3 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -54,7 +54,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
> >   */
> >  struct kvm_io_bus {
> >  	int                   dev_count;
> > -#define NR_IOBUS_DEVS 6
> > +#define NR_IOBUS_DEVS 200 
> >  	struct kvm_io_device *devs[NR_IOBUS_DEVS];
> >  };
> >  
> > 
> > 



* Re: [PATCH] kvm: Increase NR_IOBUS_DEVS limit to 200
  2010-03-30 23:48 [PATCH] kvm: Increase NR_IOBUS_DEVS limit to 200 Sridhar Samudrala
  2010-03-31  9:51 ` Michael S. Tsirkin
@ 2010-04-01  8:52 ` Avi Kivity
  1 sibling, 0 replies; 4+ messages in thread
From: Avi Kivity @ 2010-04-01  8:52 UTC (permalink / raw)
  To: Sridhar Samudrala; +Cc: Michael S. Tsirkin, kvm@vger.kernel.org

On 03/31/2010 02:48 AM, Sridhar Samudrala wrote:
> This patch increases the current hardcoded limit of NR_IOBUS_DEVS
> from 6 to 200. We are hitting this limit when creating a guest with more
> than 1 virtio-net device using vhost-net backend. Each virtio-net
> device requires 2 such devices to service notifications from rx/tx queues.
>    

Applied, thanks.

-- 
error compiling committee.c: too many arguments to function


