From: "Venkateswararao Jujjuri (JV)" <jvrao@linux.vnet.ibm.com>
To: Ryan Harper <ryanh@us.ibm.com>
Cc: Stefan Hajnoczi <stefanha@gmail.com>,
	Qemu-development List <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] QEMU throughput is down with SMP
Date: Fri, 01 Oct 2010 08:04:40 -0700	[thread overview]
Message-ID: <4CA5F888.7010906@linux.vnet.ibm.com> (raw)
In-Reply-To: <20101001133855.GA30086@us.ibm.com>

On 10/1/2010 6:38 AM, Ryan Harper wrote:
> * Stefan Hajnoczi <stefanha@gmail.com> [2010-10-01 03:48]:
>> On Thu, Sep 30, 2010 at 8:19 PM, Venkateswararao Jujjuri (JV)
>> <jvrao@linux.vnet.ibm.com> wrote:
>>> On 9/30/2010 2:13 AM, Stefan Hajnoczi wrote:
>>>>
>>>> On Thu, Sep 30, 2010 at 1:50 AM, Venkateswararao Jujjuri (JV)
>>>> <jvrao@linux.vnet.ibm.com> wrote:
>>>>>
>>>>> Code: Mainline QEMU (git://git.qemu.org/qemu.git)
>>>>> Machine: LS21 blade.
>>>>> Disk: Local disk through VirtIO.
>>>>> Did not select any cache option. Defaulting to writethrough.
>>>>>
>>>>> Command tested:
>>>>> 3 parallel instances of: dd if=/dev/zero of=/pmnt/my_pw bs=4k count=100000
>>>>>
>>>>> QEMU with smp=1
>>>>> 19.3 MB/s + 19.2 MB/s + 18.6 MB/s = 57.1 MB/s
>>>>>
>>>>> QEMU with smp=4
>>>>> 15.3 MB/s + 14.1 MB/s + 13.6 MB/s = 43.0 MB/s
>>>>>
>>>>> Is this expected?
>>>>
>>>> Did you configure with --enable-io-thread?
>>>
>>> Yes I did.
>>>>
>>>> Also, try using dd oflag=direct to eliminate effects introduced by the
>>>> guest page cache and really hit the disk.
>>>
>>> With oflag=direct I see no difference, and the throughput is so slow that
>>> I would not expect to see any difference.
>>> It is 225 KB/s for each instance, with smp=1 or with smp=4.
>>
>> If I understand correctly you are getting:
>>
>> QEMU oflag=direct with smp=1
>> 225 KB/s + 225 KB/s + 225 KB/s = 675 KB/s
>>
>> QEMU oflag=direct with smp=4
>> 225 KB/s + 225 KB/s + 225 KB/s = 675 KB/s
>>
>> This suggests the degradation for smp=4 is guest kernel page cache or
>> buffered I/O related.  Perhaps lockholder preemption?
>
> or just a single spindle maxed out because the blade hard drive doesn't
> have writecache enabled (it's disabled by default).

Yes, I am sure we are hitting the throughput limit of the blade's local disk.
The question is why smp=4 degraded performance in the cached mode.
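
To confirm Ryan's point about the write cache, the drive's setting can be
checked from the host with hdparm. A minimal sketch, assuming the blade's
disk shows up as /dev/sda (the device name is an assumption, adjust as
needed):

  # print the current write-caching state (0 = off, 1 = on)
  hdparm -W /dev/sda
  # enable the drive write cache for a comparison run
  hdparm -W1 /dev/sda

With the cache off, every 4k write has to reach the platter, which would be
consistent with the ~225 KB/s per-instance ceiling seen under oflag=direct.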

I am running the latest upstream kernel (2.6.36-rc5) in the guest and using
block IO.
Do we have any known issues there that could explain the performance degradation?
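
For anyone reproducing this, these are the two invocations being compared:
the buffered one from the original report and the oflag=direct variant
Stefan suggested. The $i suffix is added here (an assumption; the original
command wrote a single path) so the three instances write distinct files:

  # buffered writes through the guest page cache, as in the original numbers
  for i in 1 2 3; do dd if=/dev/zero of=/pmnt/my_pw$i bs=4k count=100000 & done; wait
  # direct I/O, bypassing the guest page cache
  for i in 1 2 3; do dd if=/dev/zero of=/pmnt/my_pw$i bs=4k count=100000 oflag=direct & done; wait

Running both under smp=1 and smp=4 separates the page-cache effects from the
raw disk ceiling.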

I am trying to put together a test which shows that QEMU SMP improves/scales.
I would like to use it to validate our new VirtFS threading code (yet to hit
the mailing list).
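
As a starting point, a CPU-bound check that avoids the disk entirely might
be a cleaner way to show vCPU scaling; this is only a sketch, not an
established benchmark:

  # inside the guest: one CPU-bound dd per vCPU, no disk involved
  NCPU=$(grep -c ^processor /proc/cpuinfo)
  time sh -c "for i in \$(seq $NCPU); do
      dd if=/dev/zero of=/dev/null bs=1M count=20000 &
  done; wait"

If the elapsed time with smp=4 and four workers stays close to the smp=1
time with one worker, the vCPUs are scaling; if it balloons, something like
the lockholder preemption Stefan mentioned may be at work.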

Thanks,
JV
